LGALS1 regulates cell adhesion to promote the progression of ovarian cancer The present study aimed to explore the significance and molecular mechanisms of galectin-1 (LGALS1) in ovarian cancer (OC). Using the Gene Expression Omnibus database and The Cancer Genome Atlas database, the results of the present study demonstrated that LGALS1 mRNA expression was markedly increased in OC and associated with advanced tumor stage, lymphatic metastasis and residual lesions. In Kaplan-Meier analysis, patients with high LGALS1 expression had a poor prognosis. Furthermore, using The Cancer Genome Atlas database, differentially expressed genes that are potentially regulated by LGALS1 in OC were determined. Gene Ontology, Kyoto Encyclopedia of Genes and Genomes, and Gene Set Enrichment Analysis were used to build a biological network of upregulated differentially expressed genes. The results of the enrichment analysis revealed that the upregulated differentially expressed genes were primarily associated with 'ECM-receptor interaction', 'cell-matrix adhesion' and 'focal adhesion', which are closely associated with the metastasis of cancer cells. Subsequently, cell adhesion was selected for further analysis. The results demonstrated that LGALS1 was co-expressed with the candidate genes. Subsequently, the elevated expression levels of the candidate genes were verified in OC tissues, and survival analysis indicated that high expression of the candidate genes was associated with shortened overall survival of patients with OC. In the present study, OC samples were also collected to verify the high protein expression levels of LGALS1 and fibronectin 1. The results of the present study highlighted that LGALS1 may regulate cell adhesion and participate in the development of OC. Therefore, LGALS1 exhibits potential as a therapeutic target in OC. Introduction Ovarian cancer (OC) is a common malignancy, with approximately 200,000 cancer-related deaths in 2020, making it the eighth most fatal female malignant tumor worldwide. In addition, OC possesses the worst prognosis and highest mortality rate among all gynecological cancers (1)(2)(3). As OC cells are extremely invasive and spread rapidly from the primary site to form extensive metastases, the majority of patients with OC have already progressed to an advanced stage at the time of initial diagnosis. Following the completion of initial treatment for OC, including cytoreductive surgery and platinum-based chemotherapy, patients with BRCA1/BRCA2 mutations or homologous recombination deficiency (HRD) often receive PARP inhibitor maintenance therapy as a subsequent therapeutic option (4,5); notably, this is the most well-established treatment strategy. The five-year overall survival (OS) rate of patients with OC remains at <50% (3,6,7), despite the addition of multiple molecularly targeted therapies to treatment regimens. In addition, the adverse clinical outcomes of widespread metastasis and recurrence remain. Galectin-1, encoded by LGALS1, was the first member of the galectin family to be identified; it carries a carbohydrate recognition domain and forms a 14.5-kDa homodimer (8). When secreted, galectin-1 interacts with extracellular matrix (ECM) glycoproteins, such as laminin or fibronectin, to play a role in cell division, migration, adhesion, invasion, immune response and other activities that promote the metastasis of tumor cells (8)(9)(10)(11).
Results of previous studies demonstrated that LGALS1 is overexpressed in carcinoma-associated fibroblasts (CAFs), and is positively correlated with the expression of epithelial-mesenchymal transition (EMT) mesenchymal markers (12), supporting the invasion and metastasis of tumors (13). In clinical practice, elevated expression of LGALS1 has been detected in lung cancer (14), liver cancer (15), colorectal cancer (16), OC (17) and other diseases, demonstrating the potential role of LGALS1 as a marker for disease monitoring and therapy. However, further research into the transcriptional network and functional mechanisms of LGALS1 in OC is required. In recent years, the rapid processing of millions of library clones has become a reliable laboratory process, due to the development of high-throughput sequencing techniques. The detection of genes that play roles in key biological processes provides novel insights into therapeutic targets and mechanisms of cancer development. In the present study, a total of five datasets were downloaded from the Gene Expression Omnibus (GEO) database, and LGALS1 was found to be highly expressed in OC tissues. Kaplan-Meier analysis demonstrated that high expression of LGALS1 was associated with a poor prognosis in patients with OC. Using The Cancer Genome Atlas (TCGA) database, differentially expressed genes (DEGs) were identified according to the median expression of LGALS1. Subsequently, Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were performed using the DEGs. Gene Set Enrichment Analysis (GSEA) was used to explore genome-wide molecular mechanisms. Thus, genes involved in the biological processes of cell-matrix adhesion were selected, and their impact on the survival of patients with OC was analyzed. The results were further validated using an OC dataset. After analyzing the association between these cell adhesion molecules (CAMs) and the expression of LGALS1 using TCGA database, the expression levels of LGALS1 and fibronectin 1 (FN1) in clinical samples were determined, and the characteristics of clinical cases were further investigated. Results of the present study support the hypothesis that LGALS1, as a gene impacting OC development, modulates gene expression levels and may exhibit potential as a therapeutic target. Materials and methods Database resource. A total of five datasets that met the inclusion criteria were downloaded from the GEO database (http://www.ncbi.nlm.nih.gov/geo). The inclusion criteria were as follows: i) available tissue samples were derived from human epithelial OC tissues, healthy ovarian epithelium or fallopian tube epithelium; and ii) each dataset contained at least 10 samples. A total of 83 control samples and 271 OC samples were included in the present study (Table I). The original matrix data were normalized using the RMA algorithm in R software (version 4.0.0). Differences in LGALS1 mRNA expression levels were compared between the two groups using unpaired or paired Student's t-tests, as appropriate, and P<0.05 was considered to indicate a statistically significant difference. The gene expression information and the corresponding clinical data of 354 OC samples were downloaded from the University of California Santa Cruz database (https://xenabrowser.net/datapages/). Transcripts per million (TPM) were used to normalize the RNA-seq data.
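As a rough illustration of the GEO comparison described above, the following R sketch pulls one series, extracts LGALS1 probes and runs an unpaired Student's t-test between groups. It assumes the series matrix is already RMA-normalized, that the platform annotation exposes a 'Gene Symbol' column and that group labels can be derived from the sample titles; the accession and column names are illustrative rather than a record of the actual pipeline.

library(GEOquery)
library(Biobase)

# Download one of the series matrices (illustrative accession); getGEO returns a list
# of ExpressionSet objects, one per platform.
gse  <- getGEO("GSE26712", GSEMatrix = TRUE)[[1]]
expr <- exprs(gse)    # probes x samples, assumed already on the log2/RMA scale
ann  <- fData(gse)    # platform annotation table

# Locate probes annotated to LGALS1 (the symbol column name depends on the platform)
lgals1_probes <- rownames(ann)[grepl("\\bLGALS1\\b", ann[["Gene Symbol"]])]
lgals1 <- colMeans(expr[lgals1_probes, , drop = FALSE])

# Hypothetical grouping derived from the sample titles; the real label field varies by series
group <- ifelse(grepl("normal|surface epithelium", pData(gse)$title, ignore.case = TRUE),
                "control", "tumor")

# Unpaired Student's t-test, as applied to the non-paired datasets
t.test(lgals1 ~ group, var.equal = TRUE)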
The specimen belonged to the primary tumor, and the fresh tissue was preserved at -80˚C. LGALS1 expression is presented as the mean ± standard deviation and analyzed in combination with the clinical characteristics. Unpaired Student's t-test was used to compare the differences of two groups. Acquisition of DEGs. According to the median expression of LGALS1 in TCGA database, LGALS1 expression was divided into a high expression group (> median) and low expression group (< median). The R bioconductor package DESeq2 (18) was utilized to search for DEGs. The following criteria were applied to find DEGs: A false discovery rate <0.05, |Log2 FC| >1.8 and P<0.05. A heatmap was subsequently created for the DEGs in each sample. Annotation of the biological functions of DEGs. The cluster-Profiler and Goplot packages in the R software were used to analyze and visualize GO terms and KEGG pathway enrichment results of DEGs. P<0.05 and q<0.05 were considered to indicate a statistically significant difference. GSEA. GSEA (version 4.2.2) software was used to perform GSEA analysis on all genes of TCGA OC dataset. The following conditions were set: i) The grouping method was the same as for screening DEGs; ii) 1,000 genomic permutations were performed per analysis; iii) P<0.05 was considered to indicate a statistically significant difference; and iv) FDR(False Discovery Rates) <25%, and normalized enrichment score (NES) >1.0 demonstrated that the enrichment to gene set was significant. Identification of the expression of CAMs. Using r-GGStatsplot package in R software, the association between LGALS1 and the expression of CAMs in TCGA database was calculated. The correlation between expression levels was investigated using Pearson's correlation analysis, and significance was assessed using a Student's t-test. The mRNA expression levels of CAMs in OC tissue were verified using the dataset GSE66957 from platform GPL15048, containing 12 ovarian samples from healthy controls (HC) and 57 OC samples. Clinical samples. Tissues of patients who received surgical treatment in the Department of Gynecology, The Second Hospital of Jilin University from July 2020 to December 2020, were collected. Samples were obtained from 43 patients with OC and 29 patients with benign gynecological diseases that required surgical removal of the ovaries. The ovaries of HC were confirmed to be healthy by an independent pathologist, and the relevant clinical characteristics of patients with OC were recorded. The surgical treatment of OC was comprehensive staging laparotomy and cytoreductive surgery. Patients who had received neoadjuvant radiotherapy, chemotherapy and other specific therapies prior to surgery were excluded. An independent pathologist confirmed that the tissue was epithelial OC. The age of onset of OC ranged from 34 to 79 years old, with a median age of 55 years. A total of 21 patients were younger than 55 years old, and 22 patients were older than 55 years old. The Ethics Committee of The Second Hospital of Jilin University approved tissue collection (ethics approval no. 2020069). All patients provided written informed consent prior to inclusion in the study. Immunohistochemistry. Immunohistochemistry was used to examine the expression levels of LGALS1 and FN1 in the tissues of 43 patients with OC and 29 HC. The resected OC and benign ovarian epithelial tumor tissues were fixed with formalin, and 4-µm-thick tissue sections were cut and heated in EDTA repair solution (PH, 9.0) for antigen repair. 
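The DEG identification and enrichment steps described above could be sketched as follows in R, assuming a TCGA-OV raw count matrix (counts, genes x samples) and a matching TPM matrix (tpm) have already been assembled; these object names and the median split are placeholders for the upstream preparation, not the authors' exact code.

library(DESeq2)
library(clusterProfiler)
library(org.Hs.eg.db)

# Split samples into LGALS1-high and LGALS1-low groups at the median TPM
group <- factor(ifelse(tpm["LGALS1", ] > median(tpm["LGALS1", ]), "high", "low"),
                levels = c("low", "high"))

dds <- DESeqDataSetFromMatrix(countData = round(counts),
                              colData   = data.frame(row.names = colnames(counts),
                                                     group = group),
                              design    = ~ group)
dds <- DESeq(dds)
res <- results(dds, contrast = c("group", "high", "low"))

# Filtering criteria stated in the text: FDR < 0.05, |log2FC| > 1.8, P < 0.05
degs <- subset(as.data.frame(res), padj < 0.05 & pvalue < 0.05 & abs(log2FoldChange) > 1.8)
up   <- rownames(degs)[degs$log2FoldChange > 0]

# GO (biological process) and KEGG enrichment of the upregulated DEGs
entrez <- bitr(up, fromType = "SYMBOL", toType = "ENTREZID", OrgDb = org.Hs.eg.db)
ego <- enrichGO(entrez$ENTREZID, OrgDb = org.Hs.eg.db, ont = "BP",
                pvalueCutoff = 0.05, qvalueCutoff = 0.05)
ekk <- enrichKEGG(entrez$ENTREZID, organism = "hsa",
                  pvalueCutoff = 0.05, qvalueCutoff = 0.05)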
Tissues were incubated with primary antibodies against LGALS1 (1:300; cat. no. 11858-1-AP; ProteinTech Group, Inc.) and FN (1:200; cat. no. WL00712a; WANLEIBIO) overnight at 4˚C. Following primary incubation, tissues were incubated with the conjugate secondary antibody (1:500; cat. no. 115-035-003; Jackson ImmunoResearch Laboratories, Inc.) for 50 min at room temperature. Tissues were washed with PBS three times for 5 min each time. Intensity was scored as follows: Colorless, 0; light yellow, 1; yellowish brown, 2; and brown, 3. The percentage of the total cell population that was positive within the visual field was scored as follows: <10%, 0 scores; 11-24%, 1 score; 25-49%, 2 scores; and >50%, 3 scores. When two scores were multiplied, a result ≤2 was considered to indicate a negative expression, and a result >2 was considered to indicate a positive expression. Statistical analysis. Statistical analysis was performed using R Software 4.0.0 (R Development Core Team) and GraphPad Prism 9.0 (GraphPad Software). All experiments were repeated three times. The count data were represented by the number of cases (percentage), and the other data were shown as the mean ± standard deviation. Two groups were compared with the paired and unpaired Student's t-test. A paired T test was used to compare the difference in LGALS1 expression between the two groups (Data sets: GSE69428 and GSE30587). Unpaired T test was used to compare the difference of LGALS1 expression (Data sets: GSE26712, GSE10971, GSE12171), and to examine the relationship between LGALS1 expression and clinicopathological characteristics in OC. The differences in CAM mRNA expression between OC and HC were also analyzed using an unpaired t-test. Kaplan-Meier curve was used to evaluate the effects of LGALS1 and CAMs on OS of OC patients. The significance of the survival differences between the groups was assessed using a log-rank test. Immunohistochemical difference between the two groups was Chi-square test. Pearson's correlation was used to analyze the correlation between LGALS1 and CAMs expression. P<0.05 was considered to be statistically significant. Results High expression of LGALS1 in OC is associated with prognosis. To understand the expression of LGALS1 mRNA in OC, the GEO database was used for analysis. Each dataset was normalized (Fig. 1A). Results of previous studies demonstrated that serous OC arises from the tubal epithelium and is secondary to the ovary; whereas epithelial OC was initially considered to originate from the epithelium on the ovary surface (19,20). Therefore, healthy tubal tissue was included in the control group. Compared with healthy ovarian epithelial tissue, healthy fallopian tube epithelium and serous ovarian low malignant potential tumor, results of the present study demonstrated that LGALS1 mRNA was highly expressed in OC (Fig. 1B). Further analysis demonstrated that LGALS1 mRNA expression in omental metastasis lesions was higher than in primary OC. Among 354 patients with OC in the TCGA database, the median age of diagnosis was 59, 88.4% patients had high-grade histological type, 94.3% patients had advanced stage (stage III-IV) at the first diagnosis, 69.9% of the patients with lymph node dissection had lymphatic metastasis, 81.6% OC patients could not reach R0 resection at the first operation (Table II). LGALS1 mRNA expression was markedly higher in the advanced stage (stage III-IV), lymphatic metastasis and residual lesion groups. 
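The composite immunohistochemistry score and the group comparison described in the Materials and methods above can be written compactly in R. The scoring function follows the stated rule (intensity 0-3 multiplied by the percentage score 0-3, positive if the product is >2); the 2x2 table is back-calculated from the reported FN1 positive rates (8/29 in HC, 27/43 in OC) purely for illustration.

# Composite IHC score: staining intensity (0-3) x percentage-of-positive-cells score (0-3)
ihc_positive <- function(intensity, pct) {
  pct_score <- cut(pct, breaks = c(-Inf, 10, 24, 49, Inf), labels = 0:3)
  score <- intensity * as.integer(as.character(pct_score))
  score > 2   # a product > 2 is counted as positive expression
}

# Chi-square comparison of FN1 positive rates between groups
# (counts reconstructed from the reported percentages, for illustration only)
tab <- matrix(c(21, 8,     # HC: negative, positive (8/29 = 27.59%)
                16, 27),   # OC: negative, positive (27/43 = 62.79%)
              nrow = 2, byrow = TRUE,
              dimnames = list(c("HC", "OC"), c("negative", "positive")))
chisq.test(tab)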
However, high expression of LGALS1 was not significantly associated with other clinical features (Fig. 1C). The association between LGALS1 expression and patient survival was investigated using Kaplan-Meier analysis. LGALS1 mRNA expression was negatively correlated with the OS of patients with serous OC. Moreover, results of the subgroup analysis demonstrated that the association remained statistically significant in the subgroups of Grade 1 and 2 disease, stage III and IV disease, and treatment with platinum or Taxol (Fig. 1D). Identification and functional annotation of DEGs. A total of 208 DEGs were obtained using the filtering criteria. In total, 83 genes exhibited significant upregulation and 125 genes exhibited significant downregulation. A volcano plot was used to demonstrate DEGs associated with LGALS1 expression (Fig. 2A). The significant DEGs were merged to create a heatmap, according to the levels of expression (Fig. 2B). As LGALS1 may act as an oncogene in OC, clusterProfiler and Goplot were used for GO and KEGG annotation and visualization, using the upregulated DEGs. Enrichment analysis of biological processes demonstrated that DEGs were involved in ECM organization, collagen fibril organization, cell-substrate adhesion, cell-matrix adhesion, collagen metabolic processes and ECM disassembly. Moreover, analysis of KEGG pathways demonstrated that DEGs were involved in protein digestion and absorption, ECM-receptor interaction, the PI3K-Akt signaling pathway and focal adhesion (Fig. 2C-E). CAM genes involved in cell-matrix adhesion were selected to analyze the association with LGALS1 expression. Results of the present study demonstrated that LGALS1 was positively co-expressed with FN1, ITGA11, GREM1, COL1A1, COL3A1 and POSTN (Fig. 2F). LGALS1 activation pathway in OC. GSEA of TCGA genes demonstrated that the significantly enriched gene sets were mainly concentrated in the LGALS1 high expression group. These gene sets included the CAM, ECM-receptor interaction and focal adhesion gene sets (Fig. 3A-C). Validation of CAMs. Kaplan-Meier Plotter was used to analyze how the mRNA expression of CAMs affected the OS of patients with OC (Fig. 4A-F). Results of the present study demonstrated that the expression levels of FN1, ITGA11, GREM1, COL1A1, COL3A1 and POSTN were significantly correlated with poor OS rates. Gene expression levels of CAMs in HC and OC were compared using the dataset GSE66957 (Fig. 4G). The results demonstrated that the expression levels of the aforementioned CAM genes were markedly increased in OC, except for POSTN. Clinical sample validation. Compared with HC, the protein expression levels of LGALS1 and FN1 were significantly upregulated in OC. The positive rate of LGALS1 expression in FIGO (Fédération Internationale de Gynécologie et d'Obstétrique) stage III/IV epithelial OC (73.08%) was significantly higher than that in FIGO stage I/II epithelial OC (29.41%; Fig. 5A and B). In addition, the positive rate of LGALS1 expression in patients with high-grade OC (67.86%) was significantly higher than that in patients with low-grade OC (33.33%; Table III). Notably, there was no significant difference in LGALS1 expression between patients of different ages or between patients with and without lymph node metastases. When comparing the clinical samples obtained during the present study with data obtained from TCGA, the trend of LGALS1 expression across stages was consistent. The positive rate of FN1 protein expression in HC was 27.59%, and that in OC tissue was 62.79%. Moreover, when compared with HC, the expression of FN1 protein in patients with OC was significantly upregulated (Fig. 5C-E).
Discussion OC is a highly metastatic disease with a poor prognosis, and the underlying molecular mechanisms remain to be fully elucidated. Using the GEO database, results of the present study demonstrated that LGALS1 mRNA was highly expressed in OC tissues, compared with healthy ovarian tissue, healthy oviduct tissue and serous ovarian low malignant potential tumors. Moreover, increased expression levels of LGALS1 were associated with lymph node metastases and residual tumor lesions. Results of previous studies demonstrated that serum detection of LGALS1 exhibits potential for the diagnosis of OC (21,22), and that LGALS1 levels decrease following tumor excision and chemotherapy (21). Results of the present study also demonstrated that elevated LGALS1 expression was associated with a poor prognosis, particularly in patients with advanced OC, stage III and IV disease, Grade 1 and 2 disease, satisfactory cytoreductive surgery, and treatment with platinum or paclitaxel. Moreover, similar findings have been observed in thyroid cancer, breast cancer and pancreatic cancer (21,23,24). The development of inhibitors that target LGALS1 exhibits potential for future anti-cancer therapies (25,26). Collectively, results of the present study demonstrated that LGALS1 may exhibit potential as a target for the treatment of OC. Enrichment analysis of upregulated DEGs was carried out to further understand the oncogenic mechanisms of LGALS1 in OC. Findings of the GO annotation revealed that ECM genes, which encode collagen fibers, fibronectin and metalloproteinases, made up a large proportion of the upregulated DEGs. These genes were mostly engaged in cell-matrix adhesion, ECM-receptor interaction, ECM degradation and collagen fiber decomposition. According to results of the KEGG analysis, DEGs were involved in protein digestion and absorption, ECM-receptor interaction, the PI3K-Akt signaling pathway and focal adhesion. All of these are involved in ECM alteration, and are closely associated with the proliferation and metastasis of tumor cells. Uncontrolled adhesion interactions alter the molecular characteristics of local ECM components, such as their morphology and stiffness, and promote cancer progression (27). Results of previous studies demonstrated that OC originates in the epithelium, and moves to the abdominal cavity as single cells or multicellular aggregates. These cells attach to peritoneal mesothelial cells, implant in the basement membrane and degrade ECM components in order to disseminate (27,28). Further investigations into the mechanisms underlying OC metastasis are required. The use of GSEA in the present study demonstrated the significant enrichment of genes involved in focal adhesion, ECM-receptor interaction and CAMs. These results further indicated that LGALS1 may impact the mechanisms involved in cell adhesion. As a molecular glue, LGALS1 heterotypically recognizes glycoproteins, providing diversity in ECM junctions and intercellular adhesion, and strengthening the physical forces required for the directed invasion of tumor cells (11,29). To further understand the role of LGALS1 in cell-matrix adhesion events, a total of six CAMs from the aforementioned pathways were selected for subsequent analysis. As a structural scaffold, FN1 regulates cell adhesion, growth and migration, and plays a vital role in embryonic development and wound healing (30).
FN1 is both a mesenchymal marker and a promoter of EMT (31), and its abnormal expression is associated with a poor prognosis in patients with cancer (32,33) and with platinum resistance (34). Moreover, members of the integrin family regulate cell-cell interactions and cell-cell adhesion. As a specific collagen receptor, ITGA11 initiates the reorganization and alteration of the stiffness of the collagen matrix (35), which significantly facilitates the migration and invasion of cancer cells (36,37). COL1A1 and COL3A1 belong to the collagen family, and are the main components of the ECM. Levental et al (38) demonstrated that type I collagen cross-linking is associated with ECM stiffness, which stimulates focal adhesion and PI3K signal transduction, to enhance breast cancer cell growth and invasion. Gao et al (39) reported that the silencing of COL1A1 expression inhibited the EMT process and cell motility, and reduced tumor aggressiveness. COL3A1 is also an important protein in the development and progression of bladder cancer, glioma, and head and neck cancer (39)(40)(41). Following knockdown of COL3A1 using small interfering RNA, cell growth was slowed, migration ability was weakened and colony formation was inhibited (39). Moreover, GREM1 is a highly conserved glycoprotein that promotes the intravasation and extravasation of MDA-MB-231 cells in zebrafish, through activation of CAFs. Results of a previous study demonstrated that GREM1 expression was markedly increased at the infiltrating edge (42), highlighting the function of the protein in cancer metastasis. In addition, POSTN is expressed across numerous carcinomas, and is associated with metastasis, recurrence and poor prognosis (43). Results of a previous study demonstrated that POSTN from CAFs of OC acts on the integrin αvβ3 receptor, to activate the downstream PI3K-Akt signaling cascade (44). This initiates EMT and increases the malignancy of cancer. Results of this previous study demonstrated that the candidate genes were positively correlated with LGALS1 expression, and exerted adverse effects on the clinical outcomes of patients with OC (44). Results of the present study demonstrated that LGALS1 is involved in cell adhesion. Collectively, results of the present study revealed that the LGALS1 protein was highly expressed in OC, and that the expression of LGALS1 was significantly associated with increasing pathological grade and clinical stage. These results indicated that LGALS1 may promote the development of OC. Notably, results of a previous study demonstrated that the addition of recombinant LGALS1 promoted the adhesion of OC cell lines to FN in a dose-dependent manner, whereas free-floating cells exhibited no response to FN under the same conditions. These results highlighted that FN altered cell spatial localization and promoted cell-ECM association under the action of LGALS1; however, this was cell state-dependent (45). Results of the present study demonstrated that FN1 protein expression was markedly upregulated in OC, compared with HC. Thus, we hypothesized that the two proteins may interact to accelerate the progression of OC; however, further investigations are required. In conclusion, abnormalities in cell adhesion cause the ECM to decompose and reorganize, increasing the activity and aggressiveness of tumor cells. Notably, these are crucial processes for OC cell shedding and implantation. Results of the present study demonstrated that high expression levels of LGALS1 were associated with a poor prognosis in patients with OC.
Moreover, results of the present study identified six candidate genes that may regulate cell adhesion pathways to participate in OC progression. However, further studies into the specific mechanisms are required. Transcriptional changes regulated by LGALS1 further the understanding of signal networks, and may lead to the development of novel targeted therapies for OC.
The genetic basis for survivorship in coronary artery disease Survivorship is a trait characterized by endurance and virility in the face of hardship. It is largely considered a psychosocial attribute developed during fatal conditions, rather than a biological trait for robustness in the context of complex, age-dependent diseases like coronary artery disease (CAD). The purpose of this paper is to present the novel phenotype, survivorship in CAD as an observed survival advantage concurrent with clinically significant CAD. We present a model for characterizing survivorship in CAD and its relationships with overlapping time- and clinically-related phenotypes. We offer an optimal measurement interval for investigating survivorship in CAD. We hypothesize genetic contributions to this construct and review the literature for evidence of genetic contribution to overlapping phenotypes in support of our hypothesis. We also present preliminary evidence of genetic effects on survival in people with clinically significant CAD from a primary case-control study of symptomatic coronary disease. Identifying gene variants that confer improved survival in the context of clinically appreciable CAD may improve our understanding of cardioprotective mechanisms acting at the gene level and potentially impact patients clinically in the future. Further, characterizing other survival-variant genetic effects may improve signal-to-noise ratio in detecting gene associations for CAD. INTRODUCTION Survivorship is a unique clinical construct that can be characterized by the intersection of temporal factors related to lifespan, disease-related burden and treatment, and mortality. The term survivorship connotes traits or conditions of maintaining survival, whereas survival characterizes the state of living. Coronary disease continues to be a leading cause of death in the U. S. and a significant source of rising disease burden (Roger et al., 2012), despite a decline in cardiovascular disease-related mortality in past decades as a result of improved knowledge of risk factors and biomarkers, and advances in pharmacotherapeutics and coronary interventions (McGovern et al., 2001). Moreover, clinical prediction of survival likelihood in the setting of coronary artery disease (CAD) is inaccurate and difficult. Given recent advances in the identification of cardioprotective gene variants, biological markers may provide critical insight into the conditions that support survival in the context of CAD, or, what we term, survivorship in CAD. The purpose of this paper is to introduce and define a novel phenotype, "survivorship in CAD," to provide a review of the literature, to present hypotheses for genetic contributions, and to present preliminary evidence of survival-variant genes unique to CAD. SURVIVAL vs. SURVIVORSHIP The concept of survivorship has many related terms and definitions, depending upon the context and the field. "Survival" is most commonly used as an epidemiological construct denoting a period of time between an event and a "failure." The medical community considers survival to be the length of time from medical intervention (e.g., coronary artery bypass surgery, stent placement, initiation of aspirin) to an event such as death (i.e., failure). The primary medical goal is to evaluate effectiveness in the prevention of mortality, and this model implies that the disease state is known or diagnosed. Mullan (1985) states that people become survivors at the time they are diagnosed with a life-threatening disease. 
In a New England Journal of Medicine essay, he explained, "Survival . . . begins at the point of diagnosis because that is the time when cancer patients are forced to confront their own mortality and begin to make adjustments that will be part of their immediate, and to some extent, longterm future" (Mullan, 1985, 271). Survivor-"ship" implies that a trait or attribute is possessed related to survival, such as in the case of cancer survivors exhibiting strength and perseverance throughout their treatment and in the confrontation of death (Zebrack, 2000). While the term "survivorship" most often has a psychosocial connotation of sustaining life during hardship, we hypothesize that biologic robustness can be a trait favoring survival even in the context of complex diseases such as coronary disease. The gerontology literature refers to survivorship as lifespan longevity irrespective of health indices (Murabito et al., 2012); and, many researchers in aging consider longevity to be healthy survival to old age. Centenarians are exemplars of longevity, but some data suggest that nearly one-third of centenarians have had age-related morbidities for 15 or more years (Terry et al., 2008), making at least some of those survivors apparently robust to the effects of pathophysiologic insults-perhaps due to some biological advantages. Some researchers have begun to explore the hypothesis that centenarians are genetically predisposed to physiological states that are protective against heart disease (Grimaldi et al., 2006) and other conditions, perhaps via "buffered disease genes" that promote longevity by "buffering" genetically determined age-related diseases (Bergman et al., 2007). However, these people are different from those with known coronary disease who survive despite their disease, who may potentially embody distinct genetic characteristics. These caveats lead us to make an important distinction about defining the construct of survivorship in CAD. Timing and context are everything. We present a model (Figure 1) that accounts for temporal, clinical, and genetic interrelationships that characterize survivorship in CAD. In our goal to determine an optimal definition for survivorship in CAD and determine the plausibility of genetic contribution to this phenotype, we considered the interplay of these factors. We present a theory-based definition for survivorship in CAD as a survival advantage concurrent with clinically significant CAD and propose an optimal measurement of this phenotype as the time at initial biological atherosclerotic disease onset to time of coronary-related death. We discuss the caveats of observing and measuring the phenotype as recommended and offer alternatives for approximating the construct of survivorship in CAD through the use of overlapping phenotypes. We hypothesize that genes may contribute to the survivorship in CAD phenotype. THE HYPOTHESIS OF GENETIC CONTRIBUTION TO SURVIVORSHIP IN CAD The natural history of CAD makes elucidating genetic contributions to "survivorship" a complex challenge. Our model (Figure 1) helps to illustrate the interplay of these factors in generating our hypothesis. Survivorship in CAD is only one of a number of related and overlapping CAD phenotypes. Genetic contributions to these overlapping phenotypes suggest that there may be shared genetic markers for survivorship in CAD. 
Moreover, there could be different but correlated genetic effects at each stage, reflected by genetic heterogeneity and/or differential magnitudes of genetic effect across the time-disease continuum. Recent findings identify genetic associations with risk variants in cardiovascular-related mortality and sudden cardiac death (see extensive review by Arking and Sotoodehnia, 2012). However, the survivorship in CAD phenotype hinges on the distinction between determining risk for CAD-related mortality events vs. characterizing the propensity to survive beyond clinical expectation with significant CAD (as in, cardioprotection). The key is being able to parse out the proportion of overlapping genetic variance that contributes specifically to survivorship in the context of CAD. To lend additional support for hypothesized genetic involvement in the survivorship in CAD phenotype, we look to genetic variants associated with known cardioprotection phenotypes-both in-vitro and limited in-vivo evidence, reviewed later in the paper. Genetic heterogeneity likely drives marked inter-individual variation in disease initiation, progression, and response to treatment, thus, is likely to affect survival likelihood. Naghavi and SHAPE task force members (Naghavi et al., 2006) provide insight into the variability therein: "At every level of risk factor exposure, the amount of established atherosclerosis and the vulnerability of actual events varies greatly, probably because of genetic variability in an individual's susceptibility to atherosclerosis and propensity to arterial thrombosis ("vulnerable blood") and ventricular arrhythmias ("vulnerable myocardium")", (Naghavi et al., 2006, 4H). Coupled with great variance in modifiable risk factor profiles, the task of finding biological variants and/or biomarkers that pinpoint the initiation of CAD is a daunting one. The following are necessary for accurately defining the origin of CAD in the study of this novel phenotype: (1) a consensus for what constitutes "the" biological/biochemical initiation of CAD; (2) a sensitive and reliable biomarker for that initial biological disease state; and, (3) evidence that the particular biomarker(s) independently predict development of clinically appreciable CAD. In estimating the earliest known point of origin for disease initiation (i.e., implications 1 and 2), the epidemiological tenets of what is "detectable" in the preclinical phase of disease are currently being challenged by scientific advances, which could make future research for their prediction of the development of CAD (and survivorship in CAD) more viable. We next present a review of the evidence for genetic involvement in each of the presented phases in our model and reflect on the impact of temporal contexts of the CAD continuum to the study of the genetics of survivorship in CAD. THE TEMPORAL STAGES OF BIOLOGY AND DISEASE: IDENTIFYING THE SURVIVORSHIP IN CAD POINT OF ORIGIN Our lifespans are a complex continuum of biological and environmental interactions that result in various stages of wellness and decline (and/or disease). In Figure 1, the lifespan stages are depicted by the top horizontal line progressing from birth to death. Disease progression is also by nature a temporal process with multiple clinical stages, represented here by the middle horizontal lines. 
Specific to CAD, the major clinical stages can be considered: development of risk factors/preclinical disease, symptomatic stage with/without diagnosis and/or coronary event(s); treatment/maintenance, and death. Inter-individual variance in disease presentation and the heterogeneous nature of CAD affect the onset and length of these stages, and, inherently the way that CAD phenotypes are defined and studied. Given aforementioned definitions, "survivorship in CAD" could simply be defined as survival time from point of diagnosis to death. While this definition suits investigations of medical treatment, genetic investigations seeking to identify markers associated with survival as a function of biologic protection from CAD-related mortality should consider an alternate definition, specifically, beginning with a better and earlier point of origin. The time point at which we define the origin of the survival curve is critical. In survival analyses, the origin refers to the natural point in time in which the person becomes "at risk." If CAD diagnosis were used as the origin for the survivorship in CAD phenotype, this would be an inaccurate assessment because the pre-clinical phase of CAD also corresponds to a period where risk for death from CAD is known. Specifically, it is estimated that the initial presentation of CAD is sudden cardiac death (SCD) for at least 20-30% of cases (Myerburg and Junttila, 2012;Roger et al., 2012). SCD cases demonstrate risk of death from asymptomatic CAD during the pre-clinical/pre-diagnosis phase and there is evidence of genetic variants associated with the risk for CAD-associated sudden death (Westaway et al., 2011). Therefore, the optimal origin for defining the survivorship in CAD phenotype should be the initial biological disease state, which we know occurs before the diagnosis of CAD. Methodologically, this is currently problematic. We cannot "carbon-date" the exact onset of disease, given the potentially long period of pre-clinical, asymptomatic phase associated with CAD. Current proxies are traditional biomarkers, such as coronary artery calcium (CAC) scores, C-reactive protein (CRP), and carotid intima media thickness (CIMT), discussed later. DEVELOPMENT OF RISK FACTORS/PRE-CLINICAL DISEASE Development of risk factors and pre-clinical disease overlaps with our defined phase of survivorship in CAD and likely shares genetic contributions to this phenotype. Here, we consider both traditional (age, diabetes, smoking, hypertension, dyslipidemia, obesity, and family history of early-onset CAD) and non-traditional risk factors. The multitude of non-traditional risk factors include, for example, CRP, homocysteine, fibrinogen, lipoprotein a, calcium score, metabolic syndrome, renal disease, and microproteinuria. Evidence of genetic contribution to these well-studied risk factors has been established and summarized extensively in the literature to date. For any disease, the pre-clinical phase is characterized by three periods: the initiation of biological insult, the un-detectable pre-clinical state, and the detectable pre-clinical state (Herman et al., 2002). Pre-clinical coronary disease is characterized by the development of risk factors and the physiologic initiation of inflammation and atheroma development. 
An ideal marker for approximating the initial biological disease state for survivorship in CAD would be one that had high sensitivity and specificity for detecting initial endothelial dysfunction (oxidation of low-density lipoprotein in the arterial wall and/or inflammation of endothelial cells). Genetic contribution to these broad physiologic phases warrants a separate review. Briefly, some strong subclinical CAD biomarker candidates exist for the earliest known phases of atherosclerotic development (Table 1). Accurate detection of the earliest initiation of the biophysiologic process of disease development appears to be on the horizon. The "traditional" subclinical markers of vascular disease (CRP, CAC, and CIMT) have evidence of moderate heritability. Heritability estimates for CRP levels is estimated between 26 and 45%, depending on the population studied (MacGregor et al., 1999;Pankow et al., 2001;Lange et al., 2006;Fox et al., 2008). The heritability of quantity of CAC within carotid vessels of asymptomatic white individuals has been estimated between 38 and 42% (Cassidy-Bushrow et al., 2007;Rampersaud et al., 2008) after adjustment for various risk factors. Cassidy-Bushrow et al. (2007) further demonstrated heritability of progression of CAC across 7 years, with a post-adjustment estimate of 40%, wherein genetic factors explained 14% of the variation in CAC progression. The heritability of CIMT was first demonstrated in the Framingham Offspring cohort, with age-and sex-adjusted estimates of 37-44% (Fox et al., 2003). Estimates of twin cohorts (Zhao et al., 2008;Lee et al., 2012) are higher with adjusted h 2 of 38-59%, and as much as 65% (adjusted h 2 ) in Caribbean Hispanic populations (Sacco et al., 2009). Pre-clinical disease can begin as early as childhood, especially where traditional risk factors are present in early ages. The presence of raised fibrous plaques has been documented in children as young as 8 years of age who had type 1 diabetes and who died from accidental causes in the Bogalusa Heart Study (Berenson et al., 1998). Other epidemiologic studies of children and young adults (Pathobiological Determinants of Youth Study, Cardiovascular Risk in Young Finns; Coronary Artery Risk Development in Young Adults) have documented the presence of coronary and aortic fatty streaks in these populations and correlations between detectable streaks and some traditional risk factors (McMahan et al., 2006;Loria et al., 2007;Hartiala et al., 2012). The fact that the biological origin of CAD may begin so early in life constitutes another layer of complexity in defining the point of origin for survivorship in CAD; yet, advances may be leading us to more accurate detection of the initial biophysiologic insults leading to CAD, as early as childhood. Given the potential for emerging biomarkers to characterize the earliest detectable initiation of atherosclerotic disease (earlier than CRP, CAC, and CIMT), we assert that the construct of survivorship in CAD remains best defined as survival from time at initial biological atherosclerotic disease onset to time of coronary-related death. We believe that biomarker advances will allow us to optimally define earlier points of origin in the near future. If such biomarkers (CRC, CAC, CIMT, or novel) are measured in an at-risk, pre-clinical population, the survivorship in CAD origin could be the time at biomarker capture (for those with biomarker levels corresponding to increased risk for CAD). 
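To make the point about the origin concrete, the sketch below shows one way the proposed origin (biomarker capture in an at-risk, pre-clinical cohort) could be encoded with the R survival package, including a delayed-entry form if subjects only come under observation at diagnosis while the clock starts at biomarker capture. The data frame 'cohort' and all of its column names are hypothetical placeholders; this is an illustration of the construct, not an analysis drawn from the study data.

library(survival)

# Follow-up measured from biomarker capture (the proposed point of origin)
cohort$t_biomarker <- as.numeric(cohort$last_followup - cohort$biomarker_date)

# Coarser proxy: follow-up measured from clinical CAD diagnosis
cohort$t_diagnosis <- as.numeric(cohort$last_followup - cohort$dx_date)

# Delayed entry (left truncation): subjects enter observation at diagnosis,
# but time is counted from biomarker capture
fit <- coxph(Surv(time  = as.numeric(cohort$dx_date - cohort$biomarker_date),
                  time2 = cohort$t_biomarker,
                  event = cohort$coronary_death) ~ genotype + age + sex,
             data = cohort)
summary(fit)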
Participants could be followed from pre-clinical phase to point at conversion to CAD diagnosis, then until coronary-related death. Capture of genetic data from this type of cohort would be ideal for investigating the genetic contribution to survivorship in CAD. Furthermore, one could test the genetic contributions to each phase or point of survival along the time-disease-treatment trajectory. This approach could discriminate whether identified genes are equally important at each phase or whether there are different genes corresponding to different effects across this continuum. Until earlier biomarkers are validated, or until an at-risk cohort (with sufficient data and power) is prospectively followed as previously described, proxies for the origin point, such as time of clinical CAD diagnosis, will be necessary. SYMPTOMATIC STAGE WITH/WITHOUT CAD DIAGNOSIS CAD diagnosis is driven by symptomatology. For those whose initial symptoms are less severe or who survive their initial atherosclerotic vascular event, numerous candidate genes have been associated with the presence and diagnosis of CAD. Recently, Yoo et al. (2012) explored the role of polymorphisms in the Rho-associated kinase 2 (ROCK-2) gene in vasospastic angina and found that a 5-marker haplotype conferred protection from coronary vasospasm in 106 Korean adults undergoing coronary angiography (p = 0.007). Genetic investigation into CAD symptoms is new territory in the literature but the demonstration of genetic association with vasospasm identifies the potential for genetic variants to be involved in the predisposition to CAD symptoms. Age of onset of CAD diagnosis has strong evidence of genetic contributions to development of CAD, with earlier age of onset suggesting a stronger genetic effect (Arking et al., 2003;Hauser et al., 2004;Connelly et al., 2006;Shah et al., 2006Shah et al., , 2009. Estimates of over 2000 known or suspected candidate genes involved in CAD are present in the literature (IBC 50K CAD Consortium, 2011). The 9p21 region has the most evidence established to date for CAD, as the most highly replicated variants come from the 9p21 region for significant associations with CAD and myocardial infarction (Palomaki et al., 2010), and sudden and arrhythmic cardiac death (Newton-Cheh et al., 2009). TREATMENT/MAINTENANCE The treatment and maintenance phase of the CAD continuum primarily includes interventional cardiology procedures and/or pharmacological management. Genetic contribution to treatment response in CAD is also well-established, particularly as the pharmacogenomics revolution has been the fastest pipeline for translation of genetic screening in the context of complex, multifactorial diseases. A recent review by Voora and Ginsburg (2012) summarizes the evidence of genetic associations with the most common cardiovascular drug classes used in the prevention and treatment of CAD (antiplatelet agents, warfarin, statins, beta-blockers, diuretics, and antiarrhythmic drugs). Significant pharmacogenetic associations have been reported for mortality. This is key to our construct for two reasons: first, cardiovascular treatment effects and the survivorship in CAD phenotype could share genetic variation-further supporting our hypothesis; and, second, there is a likelihood that in situations where non-shared genetic effects are present for survivorship in CAD, cardiovascular treatment effects (involving genetic risk or not) may independently bias survival models. 
Similar caveats persist related to treatment effects from interventional procedures for CAD, such as coronary artery bypass grafting (CABG), percutaneous coronary angioplasty (PTCA), and stent placement, for which patients may also have varied prognoses and outcomes based on certain genetic markers (Muehlschlegel et al., 2010; Cayla et al., 2011; Lobato et al., 2011). DEATH A landmark analysis of clinician vs. computational prognoses of survival by Kong and colleagues (Kong et al., 1989) indicated that, clinically, we are poor at predicting survival in CAD and that accurate estimation of survival outcomes is difficult. Over 20 years later, strong evidence has been found implicating the following as independent predictors of death in CAD: age, race (Thomas et al., 2010), presence of comorbidities (diabetes/metabolic syndrome), renal disease, hypertension (Emerging Risk Factors Collaboration et al., 2010), depression (Whang et al., 2010), and reduced ejection fraction (Movahed and Sattur, 2010; Kuhl et al., 2011). In those at risk of or diagnosed with CAD, the use of aspirin, beta-blockers, ACE-inhibitors, and statins can significantly reduce the risk of death by 15-50%; early reperfusion can reduce mortality by 25-30%. Genetic factors may provide further insight into mortality risk. Evidence for genetic involvement in mortality phenotypes such as sudden cardiac death (Arking and Sotoodehnia, 2012), acute coronary syndrome (Xu et al., 2013), and all-cause mortality post-acute coronary syndrome (Morgan et al., 2008, 2011) has been modestly demonstrated. Aouizerat et al. (2011) conducted a GWAS of sudden cardiac death in patients with CAD and reported 11 significant gene associations for six novel candidates and validated eight known variants for this phenotype. Johnson and colleagues (Johnson et al., 2012) have identified multiple fatty acid gene variants associated with survival to admission and survival to discharge in out-of-hospital sudden cardiac death. The relationship between SCD and survivorship is testable: if there is a risk allele for SCD, we could expect the frequency of that allele to be lower in survivors of CAD. CAVEAT: CARDIOPROTECTIVE GENES Given our distinction regarding survivorship in CAD as the propensity to survive beyond clinical expectation with significant CAD, it is logical that the focus of finding unique genetic variants for survivorship could be on cardioprotective genetic variants rather than cardiovascular risk markers. It is also important to consider the potential for heterozygous advantage for survival conferred by some risk variants for these overlapping constructs, as in the example of malaria and sickle cell disease. Some promising data exist on cardioprotective genes, but the majority of the science in this area is limited to animal models and in-vitro studies of the survival of cardiac cells and/or prevention of apoptosis (Weng et al., 2010). Innate cardioprotective phenotypes include ischemic preconditioning, hypoxic preconditioning, and heat shock preconditioning. Work in this area has uncovered some promising candidate genes, most prominently the nuclear factor kappa-B (NFKB) (Wilhide et al., 2011) and related heat shock protein (HSP) genes, necessary mediators of cardioprotective mechanisms following ischemic preconditioning (Tranter et al., 2010).
Micro-RNAs are hypothesized to be involved in cardioprotection due to their ability to regulate processes involved in cardiac injury and protection (see review by Kukreja et al., 2011). Population-based studies have identified some candidate genes significantly associated with protection against incident CAD, but have failed to produce candidate genes for survival in CAD. Briefly, a single nucleotide polymorphism (rs3217989) corresponding to cyclin-dependent kinase inhibitor-2B (CDKN2B) in the 9p21 region was protective against incident CAD in a sample of 548 African Americans (OR = 0.19, 95% CI = 0.07-0.50, p = 0.0008), a finding that was further replicated in a larger combined sample of 990 African Americans (Kral et al., 2011). The ADA*2 allele in the adenosine deaminase (ADA) gene was hypothesized by Safranow et al. (2007) to modulate cardioprotection via its indirect effects on levels of adenosine, a potent cardioprotective agent; they reported lower frequencies of the ADA*2 variant in CAD-diagnosed individuals in a sample of 371 Poles (Safranow et al., 2007). ALTERNATIVE HYPOTHESES We hypothesize that genes could play a unique role in the ability to survive in the context of clinically significant coronary disease. Genetic principles that offer support for survival traits in the context of unfavored phenotypes are heterozygous advantage and antagonistic pleiotropy. For example, evaluation of the Framingham Heart Study revealed that APOE gene variants were associated with survival-related pleiotropic effects in cancer and age of onset of CAD (Kulminski et al., 2011). An alternative hypothesis is that there is no genetic involvement in coronary disease-related survival. As we have presented earlier, up to one-third of centenarians have age-related morbidities (such as heart disease) for 15 or more years (Terry et al., 2008). Furthermore, we have presented evidence of genetic contribution to phenotypes that overlap with survivorship in CAD and hypothesize shared genetic variance among these phenotypes. Other non-genetic factors (such as treatment effects) could contribute more significantly to survival and may mask or override any genetic effects contributing to survivorship in CAD. Observations of greater survival in the context of CAD could also be due to a population effect in which a select group (or family) has a tendency for longer survival in spite of the presence of CAD (theoretically, as in family-based centenarians from Mediterranean areas). PILOT EVIDENCE OF SURVIVAL-VARIANT GENES IN CAD USING A PROXY DEFINITION FOR SURVIVORSHIP IN CAD As we are unaware of a cohort in which pre-clinical biomarkers and genetic data were captured for at-risk, asymptomatic people prospectively followed through the development of CAD with adequately powered mortality rates, we set out to generate at least preliminary evidence of genetic variation in survival likelihood for symptomatic, CAD-diagnosed people. We performed a secondary analysis of 1885 subjects from a primary case-control genetics study of symptomatic patients undergoing cardiac catheterization (Catheterization Genetics Study; CATHGEN), described elsewhere (Sutton et al., 2008). Briefly, the primary study recruited patients presenting to the cardiac catheterization lab (regardless of disease status) for a cardiovascular genetics study and biorepository in which medical history, clinical data, and biological samples were collected.
Biological data were stored in the Center for Human Genetics facility; all other data were stored in the Duke Databank for Cardiovascular Disease and maintained at the Duke Clinical Research Institute (Fortin et al., 1995). Both the primary and secondary studies obtained approval from the Duke University Medical Center Institutional Review Board. All participants provided informed consent for participation. Our specific aim for the secondary analysis was to evaluate whether the likelihood of survival was significantly different among candidate genes for cases with angiographically-defined, clinically significant CAD. For this case-only analysis, we evaluated 34 previously genotyped single nucleotide polymorphisms (SNPs; Table 2), representing five a priori CAD candidate genes of interest from the primary CATHGEN study. The candidate genes for our secondary analyses were either previously associated with age-related CAD phenotypes or were highly suspect of having survival effects, based on published findings from our group [ALOX5AP Crosslin et al., 2009), FAM5C (Connelly et al., 2006), KALRN (Wang et al., 2007), LSAMP , PLA2G7 (Sutton et al., 2008)]. Genomic DNA for CATHGEN was extracted from whole blood using the Puregene system (Gentra Systems, Minneapolis, MN, USA). Genotyping was performed at the Duke Center for Human Genetics genotyping laboratory (Durham, NC, USA). The primary study originally selected SNPs in 2004 based on reported literature for CAD and age-related CAD candidate genes, and from a list of age-related CAD candidate SNPs in the GENECARD family study of early-onset CAD (Hauser et al., 2004). Additional follow-up SNPs in these candidate genes were later selected based on the aims of ancillary studies and/or availability of the SNPs for these candidate genes on selected genotyping platforms. Three different genotyping platforms were used in the primary study: TaqMan 7900 HT (Applied Biosystems, New York, USA); Illumina HumanOmni1-Quad_v1-0_C chip; and, custom Illumina GoldenGate Bead Arrays (San Diego, CA, USA). The 384-well plates included a total of 20 quality control samples (eight CEPH (Center d'Étude du Polymorphisme Humain) pedigree individuals, eight study sample duplicates, and four no-template controls). SNP mismatches were reviewed by an independent genotyping supervisor for potential genotyping errors. Each SNP had a call frequency across all individuals of at least 95%; each individual had a call rate across all SNPs of at least 95%. For the TaqMan platform, blinded duplicate samples were used to determine the error rates, which were <0.2% among SNPs that passed genotyping quality control. For the secondary analysis, we selected SNPs for these candidate genes that met our quality control metrics and that were also genotyped at that time in the Framingham Heart Study (as we were anticipating replication analyses in that dataset), leaving the 34 SNPs in Table 2 for this secondary pilot analysis. All SNPs met our established criteria Zhang et al., 2010) for quality control (QC) Hardy-Weinberg equilibrium (HWE) and linkage disequilibrium (LD). Statistical analyses were performed using the R program's Survival package (R Development Core Team, 2008). Means and frequencies were calculated for demographic variables, diagnosis, and events. The survivorship in CAD phenotype was defined in this analysis as time from angiographically-defined CAD case status in symptomatic individuals to time at all-cause mortality. 
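The call-rate and Hardy-Weinberg criteria mentioned above can be applied directly to a genotype matrix; the sketch below is a generic stand-in for the genotyping-laboratory pipeline, with 'geno' a hypothetical samples x SNPs matrix coded 0/1/2 (NA for failed calls) and an illustrative HWE p-value threshold, since the study's exact cutoff is not stated here.

# Per-SNP and per-sample call-rate filters (>= 95%, as described)
snp_call_rate    <- colMeans(!is.na(geno))
sample_call_rate <- rowMeans(!is.na(geno))
geno_qc <- geno[sample_call_rate >= 0.95, snp_call_rate >= 0.95]

# Chi-square test of Hardy-Weinberg equilibrium for each remaining SNP
hwe_p <- apply(geno_qc, 2, function(g) {
  g <- g[!is.na(g)]
  n <- length(g)
  p <- mean(g) / 2                                   # frequency of the counted allele
  obs  <- c(sum(g == 0), sum(g == 1), sum(g == 2))
  expd <- n * c((1 - p)^2, 2 * p * (1 - p), p^2)
  if (any(expd == 0)) return(NA_real_)
  pchisq(sum((obs - expd)^2 / expd), df = 1, lower.tail = FALSE)
})
keep <- !is.na(hwe_p) & hwe_p > 1e-6                 # illustrative threshold only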
The primary study did not recruit asymptomatic patients with CRP, CAC, or CIMT data to serve as an optimal point of origin for evaluation of survivorship in CAD. We considered the alternative analytical approach in which we would identify a subset of control subjects that converted to CAD diagnosis (to use as a proxy for the pre-symptomatic point of origin); however, only two participants would have met the criteria for the analyses. More importantly, by our recommendation, CRP, CAC, and CIMT levels would be ideal for verifying the increased risk for CAD in these "converters," but these labs are not routinely ordered on patients that meet "control" status after cardiac catheterization. An excess of missing data for the cause of death variable precluded our use of the ideal phenotype endpoint of coronary-related death. Therefore, we used time from cardiac catheterization to all-cause mortality as a proxy definition. In order to determine if there were significant genetic effects on survival in the context of CAD but not in controls, we performed analyses stratified by CAD case status. Thus, Cox proportional hazards models estimated instantaneous risk [hazard ratio and 95% confidence interval (CI)] of all-cause mortality by genotype groups separately in CAD cases and controls censored on number of days from study enrollment (time at coronary catheterization) to all-cause death or last follow-up. Survival curves for long-term survival were illustrated using Kaplan-Meier curves. Cases were defined as having at least one major epicardial vessel having at least 75% stenosis on coronary angiography (Duke CAD index > 23) (Sutton et al., 2008). Controls were defined as having no appreciable CAD (Duke CAD index < 23), corresponding to angiographic data indicating no more than one major epicardial vessel having less than or equal to 75% occlusion as demonstrated by coronary angiography, and no documented history of cerebrovascular or peripheral vascular disease, myocardial infarction, transplant, or interventional or surgical coronary revascularization procedures (Sutton et al., 2008). An additive inheritance model was assumed, assigning wild-type genotypes a value of 0, heterozygous genotypes a value of 1, and risk homozygous genotypes a value of 2. All SNPs were analyzed separately with a basic model evaluating only the main effect of genotype and a separate covariate model controlling for age, sex, body mass index (BMI), and histories of smoking, type 2 diabetes, hyperlipidemia and hypertension. As a post-hoc analysis for the case-only group, we added CAD index as a covariate in the full clinical covariate model in order to determine if the significant survival effects were impacted by disease severity. In addition, we evaluated the impact of sex on the survival results using a stratified Cox proportional hazards analyses in CAD cases, controlling for age, body mass index (BMI), and histories of smoking, type 2 diabetes, hyperlipidemia and hypertension. For this pilot analysis, 1885 subjects with genetic data meeting standard quality control metrics were analyzed. Demographic and cohort characteristics are presented in Table 3. Four hundred subjects (21.2%) were deceased on follow-up at the time of analysis; 1155 subjects met our criteria for CAD diagnosis with an event rate of 21.9% (n = 253) and 730 were considered controls, with an event rate of 20.1% (n = 147). Vital events were confirmed through the National Death Index, as previously described . 
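A minimal sketch of the case-only Cox model described above, using the R survival package with the additive genotype coding (0 = wild-type, 1 = heterozygous, 2 = risk homozygous) and the listed clinical covariates; 'cases' and its column names are placeholders rather than the CATHGEN variable names.

library(survival)

# Basic model: genotype main effect only
basic <- coxph(Surv(days_to_event, all_cause_death) ~ rs1462845, data = cases)

# Covariate model: age, sex, BMI, and histories of smoking, diabetes,
# hyperlipidemia and hypertension
adjusted <- coxph(Surv(days_to_event, all_cause_death) ~ rs1462845 +
                    age + sex + bmi + smoking + diabetes + hyperlipidemia + hypertension,
                  data = cases)

summary(adjusted)$conf.int   # hazard ratio and 95% CI per copy of the minor allele
cox.zph(adjusted)            # check of the proportional hazards assumption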
As expected, CAD cases who experienced events tended to be older, had a greater burden of disease (higher CAD index), lower BMI, and a higher frequency of hypertension. Deceased CAD cases also had a lower frequency of hyperlipidemia, demonstrating a paradoxical phenomenon between mortality and hyperlipidemia, previously reported by our group and likely due to treatment effects or confounders (Wang et al., 2009; Shah et al., 2012). Deceased participants with CAD also had a reduced frequency of smoking compared to living CAD cases, which may be explained by the self-report measurement of history of smoking and/or the option to report having ever smoked or currently smoking as positive history vs. not currently smoking as negative history. Racial representation varied little by event and diagnosis groups. Based on a follow-up of 2738 days (about 7 years, 5 months), three SNPs (rs1462845, rs1915585, and rs6788787) in the LSAMP gene showed significant genotype effects on the hazard of death in CAD cases (Table 4, uncorrected p-values), but differed in their gene dosing patterns and directions of effect. Data for these three SNPs met the assumption of proportional hazards (data not shown). Minor allele frequencies are presented in Table 4 and genotype frequencies by sample and case status are presented in Table 5. For the rs1462845 SNP, the risk of death for CAD cases was 1.24 times greater for each addition of the minor (risk) allele compared to the wild-type genotype (p = 0.044). For rs1915585 and rs6788787, the hazard ratios are less than one (with significant p-values; p = 0.044 and 0.037, respectively), suggesting that the minor alleles may have a significant protective effect against risk of death in CAD. The genotype effects on survival remained significant when controlling for known CAD risk factors (Table 4). In order to determine whether the survival effects for genotype were specifically driven by CAD, we then performed the previously described survival analyses in control subjects. None of the SNPs showed significant effects on survival among control subjects in either model, suggesting the genotype effects on survival may be unique to CAD. Evaluating the Kaplan-Meier (K-M) survival curves for CAD cases revealed an expected gene dosing pattern for rs1462845 (Figure 2). In an additive genetic model of survival, we would expect to see a gene-dosing phenomenon, whereby having two copies of the risk allele (risk homozygous genotype) confers the worst survival, having no copies of the risk allele (wild-type homozygous genotype) confers the best survival, and having one copy of each (heterozygous genotype) falls between the two. For rs1915585 (Figure 3) and rs6788787 (Figure 4), K-M curves revealed that subjects with the heterozygous genotype (blue line) had better survival than those with the wild-type genotype, which explains the hazard ratios of less than 1 for these SNPs. These results could be due to the low frequency of risk homozygous carriers for the rs1915585 and rs6788787 variants in our sample, the result of insufficient power or rare homozygosity (see Table 5). Another possible explanation for observing this phenomenon is heterozygous advantage.
Heterozygous advantage (or hybrid vigor) can be the result of dominance or overdominance in the population, which acts to selectively advantage the heterozygous individual in spite of the presence of a single copy of the risk allele. These SNPs do not have documented evidence of hybrid vigor at this time. Other possible explanations for this inappropriate gene-dosing could be antagonistic pleiotropy or survival bias. Others have simulated erosion of genetic associations for highly lethal diseases (such as myocardial infarction) due to culling of risk variants from the population (Anderson et al., 2011). As aforementioned, longitudinal data would allow for evaluation of whether the survivorship in CAD effects are due to genetic factors or are observed as a result of survival bias. It is important to note that traditional survival bias would be considered a loss of genetic information due to mortality events in the data; yet, we are interested in characterizing a genetically-driven net survival advantage in the context of CAD. While our pilot data are not designed to show exact proof of the latter concept, our results do lend preliminary support for genetic differences in survival unique to symptomatic CAD, adding to the theoretical basis for genetic involvement in the survivorship in CAD phenotype.
FIGURE 2 | Kaplan-Meier survival curves for CAD cases vs. controls in additive (genotype) model for LSAMP SNP rs1462845. X-axis displays the number of days from index catheterization to death (all-cause mortality). Y-axis displays the Kaplan-Meier survival probability by genotype. G is the minor allele; AA, wild-type genotype (reference; black curve); AG, heterozygous genotype (blue curve); and GG, risk homozygous genotype (red curve).
FIGURE 3 | Kaplan-Meier survival curves for CAD cases vs. controls in additive (genotype) model for LSAMP SNP rs1915585. X-axis displays the number of days from index catheterization to death (all-cause mortality). Y-axis displays the Kaplan-Meier survival probability by genotype. T is the minor allele; GG, wild-type genotype (reference; black curve); GT, heterozygous genotype (blue curve); and TT, risk homozygous genotype (red curve).
FIGURE 4 | Kaplan-Meier survival curves for CAD cases vs. controls in additive (genotype) model for LSAMP SNP rs6788787. X-axis displays the number of days from catheterization to death (all-cause mortality). Y-axis displays the Kaplan-Meier survival probability by genotype. A is the minor allele; GG, wild-type genotype (reference; black curve); GA, heterozygous genotype; and AA, risk homozygous genotype (red curve).
FIGURE 5 | Kaplan-Meier survival curves for CAD cases vs. controls in dominant (allele) model for LSAMP SNP rs1462845. X-axis displays the number of days from index catheterization to death (all-cause mortality). Y-axis displays the Kaplan-Meier survival probability by genotype. G is the minor allele; AA, wild-type genotype (reference; black curve), and red curve indicates AG and GG genotypes combined.
Furthermore, when examining the dominant model (a common approach to improve power to detect an association when the minor allele frequency is low), the K-M curve for rs1462845 showed expected curves, wherein having any copy of the minor (risk) allele confers worse survival compared to the wild-type genotype (Figure 5).
However, combining risk genotypes in the dominant model for the rs6788787 SNP (Figure 6, red line) results in a survival curve that suggests the minor (risk) allele confers better survival in the presence of CAD. Theoretically, this could drive unexpected or spurious results in an association between the LSAMP SNP and CAD if survival effects were not considered. The SNP rs1915585 (Figure 7) showed a trend similar to rs6788787 for both additive and dominant models. To determine if the significant survival effects by genotype were due to the severity of CAD, we then performed the case-only Cox model, adding CAD index as a term in the full covariate model. The genotype effect remained significant for all three SNPs when controlling for disease severity (p = 0.03 for rs1462845, p = 0.01 for rs1915585, and p = 0.01 for rs6788787). CAD index itself was only marginally significant as a predictor of survival in this model (p = 0.07 for rs1462845 and rs1915585 and p = 0.04 for rs6788787). As sex differences are established in CAD and CAD-related mortality, we also evaluated Kaplan-Meier curves and Cox models for CAD cases stratified by sex, controlling for age, body mass index (BMI), and histories of smoking, type 2 diabetes, hyperlipidemia, and hypertension. All three SNPs had significant genotype effects in male CAD-diagnosed subjects (n = 850) but not female CAD-diagnosed subjects (n = 305; Supplemental Table 1). The rs1462845 survival curves (Supplemental Figure 1) demonstrated genotype-specific effects in males comparable to the full dataset and no appreciable genotype-specific pattern on survival in females. For the rs1915585 and rs6788787 SNPs (Supplemental Figures 2, 3), genotype-specific survival patterns for males and females were consistent with the full sample model (e.g., consistent heterozygous advantage effects), but conclusions about these effects cannot be drawn due to the reduced power of these sex-based analyses and reduced homozygosity, similar to the full CAD case models (Supplemental Table 1). These results suggest sex-specific genotype effects on survival in symptomatic, CAD-diagnosed individuals for the rs1462845 SNP and possible sex differences for other LSAMP SNPs; however, a larger sample size is necessary to confirm this relationship. In summary, three LSAMP SNPs show significant differences in survival by genotype in CAD cases but not controls, even after adjusting for age, sex, body mass index (BMI), and histories of smoking, type 2 diabetes, hyperlipidemia, hypertension, and disease severity. Sex-stratified analyses revealed that these effects were unique to males. This work provides preliminary evidence of gene-related survival effects unique to CAD.
FIGURE 6 | Kaplan-Meier survival curves for CAD cases vs. controls in dominant (allele) model for LSAMP SNP rs6788787. X-axis displays the number of days from catheterization to death (all-cause mortality). Y-axis displays the Kaplan-Meier survival probability by genotype. A is the minor allele; GG, wild-type genotype (reference; black curve); and red curve indicates GA and AA genotypes combined.
FIGURE 7 | Kaplan-Meier survival curves for CAD cases vs. controls in dominant (allele) model for LSAMP SNP rs1915585. X-axis displays the number of days from index catheterization to death (all-cause mortality). Y-axis displays the Kaplan-Meier survival probability by genotype. T is the minor allele; GG, wild-type genotype (reference; black curve); and red curve indicates GT and TT genotypes combined.
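Survival curves like those in Figures 2-7 can be reproduced in outline with standard survival-analysis tooling. The sketch below, written in Python with lifelines rather than the authors' R workflow, groups subjects by genotype dose under either the additive (three-curve) or dominant (two-curve) coding; all column names are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of Kaplan-Meier curves like those in
# Figures 2-7, comparing genotype groups under the additive or dominant coding.
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

def km_by_genotype(df, dose_col="rs1462845_dose", dominant=False):
    """Plot survival curves by genotype dose (0/1/2), optionally collapsing 1 and 2."""
    labels = {0: "wild-type", 1: "heterozygous", 2: "risk homozygous"}
    if dominant:
        df = df.assign(**{dose_col: (df[dose_col] > 0).astype(int)})
        labels = {0: "wild-type", 1: "any risk allele"}
    fig, ax = plt.subplots()
    kmf = KaplanMeierFitter()
    for dose, label in labels.items():
        mask = df[dose_col] == dose
        if mask.any():
            kmf.fit(df.loc[mask, "days_to_event"],
                    event_observed=df.loc[mask, "died"], label=label)
            kmf.plot_survival_function(ax=ax)
    ax.set_xlabel("Days from index catheterization")
    ax.set_ylabel("Kaplan-Meier survival probability")
    return ax

# Example: km_by_genotype(cad_cases, "rs6788787_dose", dominant=True)
```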
The limbic system-associated membrane protein (LSAMP) gene is located on the long arm of chromosome 3 and encodes a 64-68 kD neuronal surface glycoprotein [Online Mendelian Inheritance in Man (OMIM), 2011]; it has also been described as a tumor suppressor gene that may be fundamental to brain development. Our group has previously found multiple LSAMP SNPs to be significantly associated with late age of onset CAD (diagnosis in males ≥51 years of age, females ≥56 years of age), some of which had stronger genetic effects in individuals with severe CAD, i.e., the presence of left main coronary disease, further supporting LSAMP's candidacy for CAD-specific survival effects. The rs6788787 and rs1915585 SNPs were previously reported to be in strong linkage disequilibrium in CATHGEN [pairwise r-square = 0.74]; however, our subset sample data do not support LD between these two markers (all pairwise correlations less than 60%).
CONCLUSION
Every minute, someone dies of coronary disease in America (Roger et al., 2011). The ratio of deaths per year to incident cases of stable angina is roughly equal (∼400,000:500,000) (Roger et al., 2011). Investigating genetic contributions to improved survivorship with CAD could provide unique insights for better health promotion and prognosis. Identifying genetic variants associated with improved survivorship in CAD could lead to improved clinical prediction and insights into biological mechanisms critical to the complex disease-survival interface. Knowledge of such variants could also support a shift in focus from preventing death to promoting optimal conditions for survival in concert with known genetic makeup. Measurement of the optimal phenotype interval, consideration of shared and non-shared genetic effects with overlapping constructs, and interplay with treatment effects in this process will be critical components of such investigations. We hypothesize a genetic contribution to this phenotype based on our model depicting shared genetic variation with related and overlapping constructs. We further support the concept of a genetic basis for survivorship in CAD with preliminary evidence of genetic differences in survival in CAD for certain LSAMP SNPs. This model can help generate hypotheses about the genetic architecture of survivorship in CAD and support appropriate measurement. Future work can address important questions about genetic and other factors involved in CAD-specific mortality and survival, which may help improve clinical prediction of survival and mortality in people with CAD. This could lead to further insight into the biological and/or functional mechanisms of survivorship in CAD. Finally, refining the genetic factors for survivorship in CAD may lead to improvement in the signal-to-noise ratio in genetic associations with CAD as well as for replications.
An MRI Based Ischemic Stroke Classification – A Mechanism Oriented Approach
Oxfordshire Community Stroke Project and Trial of Org 10172 in acute stroke treatment are the commonly used ischemic stroke classification systems at present. However, they underutilize the newer imaging technologies. Diffusion-weighted magnetic resonance imaging (DW-MRI) of the brain can detect the site and extent of infarcts accurately. From the MRI patterns, the mechanisms of ischemic stroke can be inferred. We propose to classify ischemic infarcts into the following types based on their DW-MRI appearance: cortical territorial infarcts, striatocapsular infarcts, superficial perforator infarcts, cortical and deep watershed infarcts, lacunar infarcts, long insular artery (LIA) infarcts, branch atheromatous disease (BAD) infarcts, corpus callosal infarcts, infratentorial infarcts, and unclassifiable infarcts. This DW-MRI-based classification of ischemic stroke is easy, fast, and mechanism oriented. A review of the literature reveals that cortical territorial, striatocapsular, and corpus callosal infarcts are associated with embolic sources and large artery intracranial atherosclerosis. Superficial perforator and LIA infarcts are also probably embolic. Watershed infarcts are frequently associated with severe carotid disease with microembolism or hemodynamic failure. Mechanisms of BAD infarcts include microatheroma, junctional plaque or a plaque within a parent artery blocking the orifice of a large, deep penetrating, or circumferential artery. Small lacunar infarcts are due to the lipohyalinosis of penetrating arteries. Types and mechanisms of infratentorial infarcts are similar to supratentorial infarcts. Such a classification system is useful for prognosticating acute stroke, arranging specific investigations, and planning strategies for secondary prevention and research. It has been found that the initial subtyping of stroke by TOAST matched the final diagnosis in only 62% of cases. In OCSP and TOAST classifications, anatomical localization of the infarct, identification of the vascular territory, and understanding the mechanism of the stroke by clinical history, physical examination, and basic investigations are skilled tasks, and their reliability depends on the experience of the physicians. These factors point to the need for alternative methods of stroke identification and classification based on more advanced investigations. DW-MRI can detect infarcts within minutes of stroke onset, and the extent and site of infarcts can be accurately assessed. MRI and DWI are now available in almost all stroke care centers and the time taken for a DW-MRI is <3 min. If a magnetic resonance angiogram (MRA) is added to the protocol, the diagnostic accuracy increases further. In one study, it was found that pre-MRI TOAST matched the final diagnosis in 48%, which improved to 83% after DWI and 94% after DW-MRI with MRA. Pre-MRI OCSP diagnosis matched the final diagnosis in 67%, improving to 100% after DW-MRI. [7] Rovira et al. [8] have suggested a stroke classification based on DW-MRI. However, it does not define the subtypes clearly. Even though the cortical infarcts are correctly depicted, the subcortical infarcts are grouped together without clear demarcation. In this review, we attempt to subclassify ischemic strokes based on their DW-MRI patterns. The definitions are adopted from the standard definitions used in clinical studies and trials.
The emphasis of the classification is to identify the stroke type by DW-MRI and to reach a logical conclusion of the stroke mechanism from the type of infarct. We propose to classify ischemic stroke into the following types: Cortical territorial infarcts Cortical territorial infarcts are ischemic lesions involving the cerebral cortex and subcortical white matter in the territory of the major cerebral arteries, i.e., anterior cerebral arteries (ACA), middle cerebral arteries (MCA), and posterior cerebral arteries (PCA). The terminal branches of the main cerebral arteries form a pial plexus. In humans, the cerebral cortex and the subcortical U-fibers are supplied by short arterioles of less than 50-µm diameter from the brain surface, whereas the centrum semiovale is supplied by 2 to 5 cm long medullary end arteries arising from the pial plexus. [9] Occlusion of the major cerebral vessels cause ischemic lesions of the cortex and subcortical areas. These ischemic lesions can also be called pial territory infarcts. The templates depicting the major arterial territories of the brain, introduced by Damasio or the topographic brain atlas introduced by Kim et al., can be used for the identification of territorial infarcts. [10,11] The twelve templates of Damasio are based on diagrams of consecutive CT scan sections of the brain approximately 8 mm apart. The topographic brain atlases of Kim are digital maps of supratentorial infarcts, generated using DW-MRI. According to these templates and maps, the anterior part of the medial cortex and subcortex up to the parieto-occipital sulcus, the medial orbital gyri on the orbitofrontal surface and the superior frontal gyrus constitute the cortical territories of ACA. The lateral cortices of frontal, temporal, and parietal lobes except for a small anterior and posterior area, constitute the cortical territories of MCA. Anteriorly this extends to the superior frontal sulcus, and posteriorly to the middle occipital gyrus. The lateral orbital gyri of the inferior frontal surface are also supplied by MCA. The retrosplenial medial cortex, occipital pole and the adjacent lateral surface, and the inferomedial temporal lobe constitute the cortical territories of PCA. Cortical territorial infarcts are seen in MRI as signal alterations confined to the areas supplied by the MCA, ACA, and PCA [ Figure 1, panels a-c]. The infarcts may be restricted to the cortex or may involve the subcortex also. Any infarct which affects the cortical ribbon and is 10 mm or more in size with or without subcortical infarcts can be taken as a cortical territorial infarct. However, cortical lesions <10 mm may represent the cortical spotty lesions of superficial perforator infarct (described later). MCA cortical territorial infarcts may be associated with lenticulostriate territory infarcts to produce a complete territorial infarct [ Figure 1, panel d]. Even though prognostically different, all these infarcts can be categorized as territorial infarcts because the mechanism of stroke in these conditions is the same. Cortical territorial infarcts may sometimes become fragmented and appear as several small, disseminated lesions in cortical and subcortical areas [ Figure 1, panel e]. [8] This may be due to the breaking up of emboli with reperfusion or due to multiple emboli. The wide variability in the territorial supply of large arteries may also create some confusion in delineating the territory of infarct in DW-MRI. 
However, as the mechanism of stroke in all the major cortical territorial infarctions is the same, this disparity is not important from a classification point of view. Cortical territorial infarcts are usually caused by embolic stroke, either cardiac or artery-to-artery embolism from extracranial large vessels. Cardiac embolism is more common than carotid artery disease. [12,13] However, in Asian and black populations, intracranial atherosclerosis is a common etiology for cortical territorial infarcts. [14,15] Patients with cardioembolic infarcts are likely to have total anterior circulation infarcts, with a higher baseline National Institutes of Health Stroke Scale (NIHSS) score. On the other hand, patients with intracranial atherosclerosis are likely to have partial anterior circulation infarcts with a lower baseline NIHSS score and milder neurological deficits. [16] Striatocapsular (deep territorial) infarcts Striatocapsular infarcts (SCI) are a distinct form of subcortical infarcts in the striatocapsular area caused by simultaneous occlusion of more than one adjacent lenticulostriate artery. Lenticulostriate arteries are deep perforators that arise from the proximal part of the MCA. These deep perforating branches supply the head and body of the caudate nucleus (superior part), the lateral segment of the globus pallidus, the putamen, the dorsal half of the internal capsule and the lateral part of the anterior commissure. The size, shape, site, pathogenesis and clinical features distinguish SCI from other subcortical infarcts. The first systematic description of SCI was given by Bladin et al. in 1984. [1] They found 11 cases of SCI in 1600 patients admitted for stroke in the Austin Hospital Stroke Unit. The definition of SCI is based on its radiological characteristics. [17] On axial CT/MRI, these infarcts are lentiform, triangular or 'comma'-shaped [Figure 1, panel f]. The size of the infarct is 3-4.5 cm, with a width of 1-2 cm and a depth of 2-4 cm. The infarct involves the head of the caudate nucleus, the putamen and the anterior limb of the internal capsule. The globus pallidus, genu and posterior limb of the internal capsule are usually spared. The overlying cortex is also spared. The typical comma-shaped lesion has a head formed by the caudate nucleus along with the anterior limb of the internal capsule and a tail formed by the lentiform nucleus. Some studies have also included comma-shaped, lenticular or triangular lesions which involve at least two components in the striatocapsular area (head of caudate plus internal capsule, or putamen plus internal capsule) as SCI. [18] There are also some controversies regarding the size of SCI. Even though 3 cm is the most commonly accepted size, some studies used a lower limit of 2 cm. [18] The most common clinical manifestation of SCI is weakness. [17,19] Hemiplegia is due to the involvement of the corticospinal tract in the posterosuperior segment of the lenticulostriate artery (LSA) territory. [20] The LSA territory can be divided into a superior and an inferior subsegment in the coronal plane. This can be further divided into an anterior and a posterior subsegment in the axial plane. MR tractography studies [20] have demonstrated that the corticospinal tract crosses the LSA territory only at the posterosuperior subsegment, and corticospinal tract involvement in this subsegment has a significant correlation with stroke severity. As the corticospinal tract descends, it quickly exits the LSA territory and enters the area supplied by the anterior choroidal artery.
This may be the reason for the radiological sparing of the posterior limb of internal capsule in many SCI, even though hemiplegia is the commonest clinical manifestation. Another important feature of striatocapsular infarct is the presence of cortical signs such as dysphasia, dyspraxia, hemineglect, and eye deviation. [19] Identification of SCI is important because the mechanism of stroke in SCI is different from lacunar infarcts. Cardiac abnormalities and severe carotid artery disease sufficient enough to produce an embolism to the M1 segment of MCA and intracranial atherosclerotic disease affecting the MCA predisposing to LSA ostial thrombosis are the important cause of SCI. [19,21] Superficial perforator infarcts Acute infarctions confined to the territory of the superficial perforator arteries [white matter medullary arteries] are called superficial perforator infarcts (SPI). Superficial perforator arteries originate from the pial branches of anterior, middle, and posterior cerebral arteries. They are 2--5-cm long end arteries that descend toward the upper part of lateral ventricle and supply the white matter of centrum semiovale. [9] An SPI (white-matter medullary infarct) can be defined as an infarct located in the territory of the perforating medullary artery. They are superficially located, oval or circular lesions scattered in the centrum semiovale and may be associated with spotty cortical lesions [ Figure 1, panel g]. Radiologically, the outermost limit of SPI is taken as the cortical ribbon, while the innermost limit is taken as the corona radiata at the level of the deep perforating artery. [22] Spotty cortical lesions are defined as small hyperintense signals in the cortex <10 mm in size detected by DW-MRI that are smaller than lesions of white matter medullary infarcts. [23,24] Spotty cortical lesions detected by DW-MRI are associated with microembolic signals in transcranial doppler studies and are related to small infarcts due to microemboli originating from the heart or large arteries occluding the small cortical arteriols. [24] In DW-MRI, it may be difficult to differentiate between internal watershed infarcts and SPI. Hence, some studies have lumped them together and categorized them as subcortical white matter infarcts. Lee et al. [22] in a study of 54 patients with SPI and 29 patients with internal watershed infarcts, found that SPI were superficially located, oval or circular lesions and were widely scattered, whereas internal watershed infarcts showed a tendency to localize on paraventricular regions where it appeared as a chain-like or sausage-like lesions. The diameter of the internal watershed infarct was significantly larger than SPI. They also found that SPI are frequently associated with cortical spotty lesions. SPI usually has a lower NIHSS score and a favorable outcome. The pathogenesis and etiology of SPI are not definite. However, the most common pathology considered is embolic. Lammie et al. [25] in a postmortem study of 12 cases of small centrum semiovale infarcts found that 10 out of 12 cases had probable embolic etiology. Yonemura et al. [26] also found that small centrum ovale lesions were associated with large-vessel and heart diseases. Boiten et al. [27] in a sub-analysis of the European Carotid Surgery Trial found that small white matter medullary infarcts were associated with carotid large artery disease in 66% of cases. Lee et al. 
[23] in a study of 103 patients with medullary infarcts found that 65 patients (63%) had large artery disease and 12 (11.7%) had cardiac embolic sources. More than 80% of them had cortical spotty lesions, indicating an embolic mechanism. Watershed infarcts Watersheds are areas that lie at the junction of two different drainage areas. Watershed infarcts (WSI) are ischemic lesions that occur in characteristic locations at the junction of two non-anastomosing arterial territories. There are two supratentorial watershed areas. WSI are best demonstrated by DW-MRI in the acute phase. An infarct is considered to be in a watershed area when the border between two main arterial territories divides the infarct into two parts such that the smaller part is at least one-third of the total infarct. [28] This criterion has to be fulfilled in all radiological slices in which infarcts are seen. The territories of cerebrovascular supply can be demarcated using the templates of the topographic brain atlas of Kim. [11] Radiologically, anterior WSI appear as a fronto-parasagittal wedge extending from the anterior horn of the lateral ventricle to the frontal cortex [Figure 1, panel h] or as a linear strip in the paramedian white matter slightly lateral to the interhemispheric fissure. Posterior WSI appear as a parieto-temporo-occipital wedge extending from the occipital horn of the lateral ventricle to the parieto-occipital cortex [Figure 1, panel i]. [29] Radiologically, internal WSI can be of two types: confluent and partial. [30] Confluent infarcts are larger and cigar-shaped [Figure 1, panel j]. Partial infarcts have a 'rosary-like' (chain-like) appearance [Figure 1, panel k]. Both are seen alongside the lateral ventricle and in the centrum semiovale. [31] Rosary-like internal WSI may be defined as three or more lesions 3 mm or greater in diameter arranged in a linear pattern parallel to the lateral ventricle in the centrum semiovale or corona radiata. [32] There are several studies focusing on the pathophysiology and mechanisms of WSI. These infarcts are associated with large artery atherosclerosis and cardioembolic sources. Microembolism and hemodynamic failure due to hypotension are the main mechanisms postulated. Microemboli are small emboli of 50-300 µm in size, mostly composed of cholesterol crystals. [33] They arise from unstable carotid plaques or the stump of an occluded internal carotid artery. Small thrombi travel preferentially to watershed areas because of their smaller size. Internal WSI, especially the partial type, are more associated with hemodynamic mechanisms in patients with severe carotid disease, whereas cortical WSI are more associated with cardiac disease and embolism. [29,31,32,34] Long insular artery infarcts The long insular artery (LIA), which arises from the insular segment of the MCA (M2), has been anatomically recognized as a subtype of the white matter medullary artery. It supplies the insular cortex, extreme capsule, claustrum, and external capsule, often extending to the corona radiata. [35] LIA infarcts are frequently mistaken for lenticulostriate infarcts (lacunar infarcts) because the sizes and shapes of both infarcts are similar on axial MR imaging. LIA infarcts are best identified on coronal MRI. The subcortical white matter and basal ganglia on coronal MRI images can be divided into three vascular territories: the white matter medullary arteries (WMMA) territory, the long insular arteries (LIA) territory, and the lenticulostriate arteries territory [Figure 2].
A virtual line from the tip of the anterior horn of the lateral ventricle to the top of the superior limb of the insular cleft corresponds closely to the vascular territory of the LIA, and an infarct along this line can be considered an LIA infarct [Figure 2]. [36] Those infarcts situated under this line and extending vertically (craniocaudally) can be considered lenticulostriate artery infarcts. Patients with LIA infarctions demonstrate classic lacunar syndromes. The prevalence of embolic high-risk and moderate-risk sources is significantly higher in the long insular artery group than in the lenticulostriate artery group. [2] Branch atheromatous disease Branch atheromatous disease (BAD) is a pathological finding of stenosis or occlusion at the origin of a large-size penetrating artery, due to a microatheroma or a large parent artery plaque. The term BAD was introduced by Caplan to explain an alternative mechanism other than lipohyalinosis for small subcortical infarcts. [1] BAD affects arteries of larger caliber, such as large proximal lenticulostriate arteries, basilar artery branches, Heubner's artery, anterior choroidal arteries and thalamogeniculate arteries. [37][38][39] There are three mechanisms of stroke in BAD: a plaque within a parent artery blocking the branch orifice, a plaque extending into the branch from the parent artery (junctional plaque), and a microatheroma originating in the orifice of a branch [Figure 3, panels a-c]. [37] Current imaging modalities such as MR angiography, CT angiography, and DSA are of limited utility in the diagnosis of BAD. They depict the morphology of arteries and are unable to show inner vessel wall changes. High-resolution 3-Tesla magnetic resonance imaging has recently been used for visualization of the inner wall of the MCA and basilar artery and is useful in the diagnosis of BAD. [38,39] A recent review of the available literature showed a lack of clear-cut definitions of BAD strokes. [40] Even though the term BAD implies an arterial pathology, most studies of BAD are based on the vascular territory, size, and shape of the infarcts. Yamamoto et al. [41] defined BAD in the lenticulostriate artery territory as infarcts more than 10 mm in diameter and visible on three or more 7-mm axial slices, and BAD of the anterior pontine arteries as unilateral infarcts extending to the basal surface of the pons on MRI. The STRIVE classification of "small subcortical infarct" with a size less than 20 mm does not differentiate between BAD and lacunar infarcts. [42] Nakase et al. [43] defined a BAD infarct as a subcortical lesion ≥15 mm in diameter seen on more than 3 slices at 5 mm [Figure 3]. As the BAD vascular lesions are located proximally along the perforator arteries, BAD infarcts are larger and have a poorer prognosis than lacunar infarcts. Compared to lacunar strokes, the duration of hospitalization and the residual disability of patients are also significantly greater, as is early neurological deterioration. Early neurological deterioration, defined as an increase of more than 2 points in the National Institutes of Health Stroke Scale within 48 h of stroke onset, is a well-known phenomenon in BAD stroke. [43] It has been found that BAD is associated with intracranial atherosclerotic disease of the MCA and basilar artery. [44,45] Lacunar Infarcts Lacunar infarcts are small infarcts in the basal ganglia, thalamus, brainstem (especially pons), internal capsule, and deep cerebral white matter resulting from the occlusion of a single small perforating artery.
Acute lacunar infarcts are seen in DW-MRI as small hyperintense signal lesions of <15-mm size in the classical sites of these infarcts [ Figure 4, panel a]. The terms lacune, lacunar stroke and lacunar infarct are not the same. A lacune is a small fluid-filled cavity that is considered the healed stage of a small deep brain infarct. Lacunar stroke is a clinical stroke syndrome with the symptoms and signs of a small subcortical or brainstem lesion. Lacunar infarct is a clinical stroke syndrome of the lacunar type where the underlying lesion on brain imaging is an infarct. The concept of lacunar stroke, the characteristics of the infarcts, the clinical syndromes, and the arterial pathology causing lacunar infarcts were described best by Miller Fisher. [46] These infarcts are due to the occlusion of a single vessel of the perforator arterial system, i.e., lenticulostriate, thalamoperforator and paramedian arterioles of the brainstem. The most common pathology of the blood vessels producing lacunar infarcts is lipohyalinosis. [46] Lipohyalinosis is destructive segmental microangiopathy of small vessels, histologically characterized by loss of arterial architecture, vessel wall thickening, focal arteriolar dilatation, and extravasation of blood components through the wall. In acute cases, evidence of fibrinoid vessel wall necrosis is also seen. Such vascular lesions involve small arteries 40-200 μm in diameter. [46] The TOAST classification and National Institute of Neurological Disorders and Stroke define lacunar infarcts as brain infarctions <15 mm in diameter and accompanied by a lacunar syndrome. [47] The classical lacunar syndromes are pure motor stroke, pure sensory stroke, ataxic hemiparesis, dysarthria-clumsy hand syndrome, and mixed sensorimotor syndrome. The Standards for Reporting Vascular changes on Neuroimaging (STRIVE) Criteria define lacune as a round or ovoid, fluid-filled cavity between 3 mm and 15 mm in diameter in the territory of one perforating arteriole. [42] On FLAIR-MRI images, lacunes have a central CSF-like hypointensity with a surrounding rim of hyperintensity. A typical lacune evolves from a "recent small subcortical infarct" in the territory of a deep perforating arteriole. [42] Even though lipohyalinosis is the proposed mechanism of lacunar infarcts, other mechanisms may also be contributing to the subcortical infarcts of <15 mm in size on DW-MRI. [48,49] In pathological studies, Fisher described not only lipohyalinosis but also microatheroma and embolism as the mechanisms of lacunar infarcts. It was found that lipohyalinosis affects vessels 40-200 µm in diameter and produces lacune of 2 -5 mm in size. Microatheroma and emboli affect vessels 200-850 µm in diameter and produce lacune of >5 mm in size. Subsequent studies using CT/MRI have suggested the possibility of two subtypes of lacunar infarcts; one associated with white matter hyperintensities (WMH), previous lacunes, and hypertension and the other not associated with these features. It is considered that lacunar infarcts associated with WMH and asymptomatic lacunes are lipohyalinotic in origin, whereas isolated lacunar infarcts without these changes may be due to microatheroma and embolism. [48,49] Lacunae have to be differentiated from enlarged perivascular space which has a signal intensity of CSF. The latter is smaller than 3 mm in size and appears ovoid or round when imaged perpendicular to the course of a vessel, or linear when imaged parallel to the vessel. 
[42] Enlarged perivascular spaces are common in the anterior perforated substance and the lower part of the basal ganglia and putamen. Key diagnostic characteristics and etiologies of the various types of infarcts are summarized in Table 1. Corpus Callosal Infarcts Corpus callosal infarcts are rare. This is because of the peculiarities of the vascular supply of the corpus callosum. The corpus callosum derives its blood supply from three main arterial systems: the anterior communicating artery (subcallosal artery and median callosal artery) supplying the genu and rostrum of the corpus callosum, the pericallosal artery supplying the body, and the splenial artery from the PCA supplying the splenium. These arteries form a callosal pial plexus from which short arterioles, of less than 100 µm diameter and 8 mm length, penetrate the white matter. [9] Because of these peculiarities of blood supply, vascular changes with aging, hypertension, Binswanger disease and lacunar infarcts are rare in the corpus callosum. However, cerebral vasculitis is one of the rare but important causes of corpus callosal infarction. The part of the corpus callosum most commonly involved in infarction is the splenium, followed by the body and genu. Rostral infarcts are rare. The main etiologies for corpus callosal stroke are cardioembolism and large artery atherosclerosis. [50,51] Thus, apart from vasculitic infarcts, the mechanisms of corpus callosal infarcts and cortical territorial infarcts are similar. Infratentorial Infarcts The brainstem is supplied by long circumferential, short circumferential, and small perforating arteries arising from the vertebral, basilar, and posterior cerebral arteries. The long circumferential arteries, which include the posterior inferior cerebellar arteries (PICA), the anterior inferior cerebellar arteries (AICA), and the superior cerebellar arteries, supply the cerebellum as well. Most posterior circulation strokes are characterized by the concomitant involvement of the brainstem, cerebellum, thalamus, and occipital lobe, and the etiology is usually embolic. [52] However, isolated small brainstem infarcts are mostly caused by BAD or lacunar infarcts. BAD infarcts are more common than lacunar infarcts. The BAD infarcts and lacunar infarcts of the anterior circulation are defined by their size, whereas in the brainstem they are identified by their site, size, and shape. Large artery atherosclerosis is the most common cause of isolated medullary infarcts, and these infarcts occur in the lateral and posterior medulla. [53] Lacunar infarcts are rare in the medulla because most of the medulla is supplied by circumferential arteries rather than by perforating arteries from the vertebral arteries. They usually involve the medial and anterior parts of the medulla. [53] Isolated pontine infarcts constitute ~15% of posterior circulation strokes. Wedge-shaped paramedian infarcts that extend to the surface of the pons are attributed to basilar branch atherosclerosis [Figure 4, panel b], whereas small well-circumscribed 'deep infarcts' are thought to be due to lipohyalinotic small vessel disease. [54] Isolated midbrain infarcts are rare.
Small deep infarcts are caused by occlusion of penetrating branches from the basilar artery and are possibly lacunar in pathology. Infarcts extending to the midbrain surface are caused by atherosclerosis of the PCA. [55] Bilateral isolated infarctions of the medial medulla and pons are rare. The characteristic brain MRI finding of these infarcts has been described as a "heart appearance" on diffusion-weighted imaging [Figure 4, panel c]. This appearance is due to bilateral involvement of the anteromedial and the anterolateral arterial territories, sparing the lateral territories. The anteromedial and the anterolateral medulla and pons are supplied by the paramedian and short circumferential arteries. The lateral medulla and pons are supplied by the long circumferential branches, namely the posterior inferior cerebellar artery, the anterior inferior cerebellar artery, and the superior cerebellar artery. Large-artery atherosclerosis and branch disease are the most common stroke mechanisms in such infarcts. [56,57] Isolated cerebellar infarcts can be territorial or non-territorial. Territorial infarcts are large infarcts in the territory of the long circumferential arteries. The mechanisms of these infarcts are embolism or atherosclerotic in situ thrombosis of the basilar or long circumferential arteries. [58,59] Cerebellar infarcts <2 cm are considered borderzone infarcts (non-territorial). [60] However, the mechanism of stroke in non-territorial and territorial cerebellar infarcts is the same. Unclassifiable infarcts It may not be possible to include all cerebral infarcts in the above categories, the reasons being delay in obtaining a DW-MRI, forme fruste infarcts, ambiguity in the definitions of some subcortical infarcts, and inexperience of the physicians. Such infarcts can be included under unclassifiable infarcts. This classification system has certain limitations. Since this is an MRI-based classification, its utility may be limited in small centers and community-based stroke programs where MRI is unavailable. Reperfusion therapy may alter the MRI pattern of an infarct. For example, a large striatocapsular infarct may become smaller in size after thrombolysis and may mimic a BAD infarct or a lacunar infarct. This may result in over- or under-representation of stroke types and may lead to misunderstanding of stroke mechanisms. Difficulty may be encountered in classifying subcortical infarcts correctly based on their size, due to the lack of uniform definitions. Lastly, further studies are needed to identify the MRI patterns of the 'infarcts of other etiologies' and 'infarcts of unknown etiology' described in the TOAST classification.
Conclusions
To conclude, DW-MRI, which is available in most stroke centres, can be used for classifying strokes more scientifically and objectively. DW-MRI patterns can predict the possible stroke etiologies and mechanisms accurately. Further target-oriented investigations, such as angiograms, high-resolution magnetic resonance imaging, and cardiac evaluation, can bring out the stroke mechanisms more accurately. This will make the stroke workup easier and more precise. Moreover, classifying clinical phenotypes according to these patterns can influence treatment protocols and better predict prognosis. Above all, this classification system opens a new horizon for clinical trials related to acute ischemic stroke. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
Hunting Faint Dwarf Galaxies in the Field Using Integrated Light Surveys
We discuss the approach of searching for low mass dwarf galaxies, $\lesssim10^6\textrm{ M}_{\odot}$, in the general field, using integrated light surveys. By exploring the limiting surface brightness-spatial resolution ($\mu_{\textrm{eff,lim}}-\theta$) parameter space, we suggest that faint field dwarfs in the Local Volume, between $3$ and $10 \textrm{ Mpc}$, are expected to be detected effectively and in large numbers using integrated light photometric surveys, complementary to the classical star counts method. We use a sample of Local Group dwarf galaxies to construct relations between their photometric and structural parameters, $\textrm{M}_{*}$-$\mu_{\textrm{eff,V}}$ and $\textrm{M}_{*}$-$\textrm{R}_{\textrm{eff}}$. We use these relations, along with assumed functional forms for the halo mass function and the stellar mass-halo mass relation, to calculate the lowest detectable stellar masses in the Local Volume and the expected number of galaxies as a function of the limiting surface brightness and spatial resolution. The number of detected galaxies depends mostly on the limiting surface brightness for distances $>3\textrm{ Mpc}$, while spatial resolution starts to play a role at distances $>8\textrm{ Mpc}$. Surveys with $\mu_{\textrm{eff,lim}}\sim30\textrm{ mag arcsec}^{-2}$ should be able to detect galaxies with stellar masses down to $\sim10^4 \textrm{ M}_{\odot}$ in the Local Volume. Depending on the assumed stellar mass-halo mass relation, the expected number of galaxies between $3$ and $10\textrm{ Mpc}$ is $0.04-0.35\textrm{ deg}^{-2}$, assuming a limiting surface brightness of $\sim29-30\textrm{ mag arcsec}^{-2}$ and a spatial resolution $<4''$. We currently look for field dwarf galaxies by performing a blank wide-field survey with the Dragonfly Telephoto Array, optimized for the detection of ultra-low surface brightness structures. INTRODUCTION The number of low mass dwarf galaxies in the Local Volume provides strong constraints on modern theories of galaxy formation (Klypin et al. 2015). There are currently no strong constraints on the lower mass cutoff of the luminosity function of galaxies. Of particular interest is whether the luminosity function extends all the way down to the Ultra Faint Dwarf (UFD) regime (e.g. Geha et al. 2009) or whether there is a cutoff at higher masses. A related, important and extensively discussed uncertainty regards the shape of the stellar mass-halo mass (SMHM) relation at the low mass end. There is no observational data for testing the abundance-matching-derived SMHM relation at low stellar masses (e.g. Guo et al. 2010; Behroozi et al. 2013; Moster et al. 2013; Karukes & Salucci 2017) and, furthermore, recent studies have shown a possibly large scatter in this relation at low masses (Garrison-Kimmel et al. 2017; Munshi et al. 2017). Therefore, performing a systematic deep, wide-field search for faint objects in the general field is of great importance. This is also critical for our understanding of the physical processes involved in low mass galaxy formation in the field. Geha et al. 2012 showed that dwarf galaxies ($10^7 < \textrm{M}_* < 10^9\textrm{ M}_{\odot}$) with no active star formation are extremely rare (<0.06%) in the field. It is interesting to examine this finding in the Local Volume, including even lower mass dwarfs. In the Local Group, dwarf galaxies have very low surface brightness, $\sim28\textrm{ mag arcsec}^{-2}$ (McConnachie 2012; Klypin et al. 2015).
Many of the dwarf galaxies known to us today were first detected using direct star counts. In this approach, galaxies are detected by identifying a stellar overdensity through individual star counts (compared to the density at larger scales, in order to confirm they are not part of a galactic background or foreground). Star count surveys have proven to be very successful, with the detection of dozens of dwarf galaxies and star clusters in the Local Group and even slightly beyond (e.g. Irwin 1994; Ibata et al. 2007; Koposov et al. 2008; Walsh et al. 2009; Belokurov et al. 2010; Richardson et al. 2011; Martin et al. 2013; Koposov et al. 2015; Bechtol et al. 2015; Torrealba et al. 2016a; Torrealba et al. 2016b). Studies based on star counts are able to reach effective surface brightness levels of 30 mag arcsec$^{-2}$ or fainter but suffer from another limiting factor. The brightness of stars, observed as point sources, decreases with the square of the distance, and hence star count surveys are only efficient at identifying dwarf galaxies in the local universe. Beyond 5 Mpc, galaxies are more easily detected as integrated light objects, as surface brightness is independent of distance (for low redshift objects that do not suffer from cosmological surface brightness dimming of the form $(1+z)^{-4}$). In fact, many of the faint galaxies discovered in the Local Volume in recent years were detected as integrated light objects and have remarkably expanded the census of faint and ultra faint dwarf candidates beyond the Local Group (Karachentsev et al. 2013; Karachentsev et al. 2014; Karachentsev et al. 2015; Merritt et al. 2014; Romanowsky et al. 2016; Javanmardi et al. 2016; Henkel et al. 2017; Müller et al. 2017). Furthermore, many recent surveys find faint, unresolved low surface brightness dwarf galaxies in nearby groups and clusters, demonstrating a possible gold mine for finding faint galaxies in the field (Merritt et al. 2014; Ferrarese et al. 2016; Müller et al. 2017; Geha et al. 2017; Greco et al. 2017). Integrated light surveys offer complementary benefits and drawbacks compared to star count surveys: they are able to efficiently probe large volumes, but require extreme surface brightness sensitivity. Moreover, follow-up observations are required in order to measure distances and determine whether a galaxy is associated with a group of galaxies or is in its foreground or background (Merritt et al. 2016b; Danieli et al. 2017). Recent advances allow imaging large areas of the sky down to ultra-low surface brightness levels and provide an excellent platform for hunting dwarf galaxies in the field. The tremendous progress in improving the surface brightness limit in the last few years was achievable by using a new innovative design that minimizes systematic errors that often limit the accuracy of background estimation and flat-fielding. The Dragonfly Telephoto Array (hereafter Dragonfly) is an example of such an imaging system. It was designed to overcome the systematic limitations that prevent conventional telescopes from being able to image down to low surface brightness levels.
FIG. 1. A dwarf galaxy with $M_* = 10^5\textrm{ M}_{\odot}$ as a function of increasing distance, artificially created using the ArtPop code. From a galaxy well resolved into stars at 500 kpc, it gradually becomes less resolved and turns into an integrated flux object at 4 Mpc. The surface brightness remains constant while the angular size of the galaxy decreases with distance.
It is comprised of 48 high-end commercial telephoto lenses that feature nanofabricated coatings with sub-wavelength structures to yield a factor of ten improvement in wide-angle scattered light relative to conventional astronomical telescopes. Its performance is equivalent to that of a 1 meter aperture refractor with an f/0.39 focal ratio and a wide field of view of six square degrees. Dragonfly is specialized to efficiently observe extended objects down to hitherto unprecedentedly low surface brightness levels, and is therefore ideally suited to detect possible dwarf candidates in the field. Dragonfly has already proven successful in identifying dozens of low surface brightness (26-28 mag arcsec$^{-2}$) objects in various fields (Merritt et al. 2014; Cohen et al. in prep; Danieli et al. in prep). In this paper, we demonstrate the importance of surface brightness as a key parameter in such systematic searches. We also discuss the trade-off between surface brightness and resolution and what roles they both play in our ability to detect faint objects. We build on known cosmological models and on the census of dwarf galaxies in the Local Group to estimate the expected number of dwarf galaxies in the field. This paper is organized as follows: we start by presenting a new tool, ArtPop, for simulating the appearance of galaxies in various photometric systems, using artificial stellar populations, in Section 2. We use ArtPop to demonstrate the variation in visibility of dwarf galaxies at different distances. Next, we explore the detectability of the lowest mass galaxies in the Local Volume (out to 10 Mpc) using integrated light, in Section 3. We present a model for the expected number of field dwarfs in Section 3.1 and present our results in Section 3.2. We then discuss the advantages of integrated light surveys compared to (ground-based) star count surveys in certain regimes in Section 4. We conclude and discuss a systematic search for field dwarf galaxies, over a wide field in the Local Volume, using the Dragonfly Telephoto Array in Section 5. SIMULATED IMAGES OF GALAXIES WITH ARTPOP In order to demonstrate how an ultra faint dwarf galaxy would be observed at different distances, we introduce a new tool, ArtPop, for simulating the appearance of galaxies in different photometric systems, using Artificial stellar Populations. ArtPop requires three sets of parameters as input: (1) the galaxy stellar population parameters: the initial mass function (IMF), the stellar mass or number of stars ($M_*$ or $N_{\rm stars}$), age and chemical composition (currently parameterized by [Fe/H]); (2) the galaxy structural parameters, such as the Sérsic profile parameters and the distance to the galaxy; and (3) the parameters of the imaging system, such as the photometric band, pixel scale and PSF. ArtPop creates the artificial galaxy from stars in the following way: $N_{\rm stars}$ stars are sampled according to the IMF, using MIST isochrones (Dotter 2016; Choi et al. 2016). The selected stars are shifted to the desired distance and distributed spatially according to a Sérsic profile that serves as a probability distribution for their positions. Then, each point source is convolved with the PSF of the imaging system. In Figure 1 we show an ArtPop galaxy as observed with the Dragonfly Telephoto Array photometric system: SDSS g-band, 2.8'' pixel$^{-1}$, and the Dragonfly PSF (Merritt et al. 2014). The galaxy was constructed using $5\times10^5$ stars, corresponding to a stellar mass of $M_* = 10^5\textrm{ M}_{\odot}$ for a Salpeter (1955) IMF, and has an effective radius of 400 pc. At a distance of 500 kpc (top left) the galaxy is easily resolved into stars and different stellar populations can be detected and quantified.
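The sampling procedure just described can be illustrated with a highly simplified stand-in for ArtPop. The sketch below (plain Python with numpy/scipy, not the ArtPop code itself) samples stellar positions from an exponential, i.e. Sérsic n = 1, profile, assigns every star equal flux instead of drawing luminosities from isochrones, and convolves the result with a Gaussian PSF; all numerical choices are illustrative assumptions.

```python
# A simplified, illustrative version of the procedure described above (not the
# actual ArtPop implementation). All numbers here are assumptions for demonstration.
import numpy as np
from scipy.ndimage import gaussian_filter

def mock_dwarf_image(n_stars=50_000, r_eff_pc=400.0, distance_mpc=4.0,
                     pixel_scale_arcsec=2.8, psf_fwhm_arcsec=6.0, npix=101):
    rng = np.random.default_rng(42)
    # For a Sersic n=1 profile the radial PDF is p(r) ~ r*exp(-r/h),
    # i.e. a Gamma(2, h) distribution, with R_eff ~= 1.678 h.
    h_pc = r_eff_pc / 1.678
    r = rng.gamma(shape=2.0, scale=h_pc, size=n_stars)       # radii in pc
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_stars)
    x_pc, y_pc = r * np.cos(phi), r * np.sin(phi)

    # Convert physical offsets to arcsec, then to pixels.
    pc_per_arcsec = distance_mpc * 1e6 * np.pi / (180.0 * 3600.0)
    x_pix = x_pc / pc_per_arcsec / pixel_scale_arcsec + npix // 2
    y_pix = y_pc / pc_per_arcsec / pixel_scale_arcsec + npix // 2

    # Deposit equal-flux stars on a grid (a real simulation would draw
    # luminosities from isochrones) and convolve with the PSF.
    img, _, _ = np.histogram2d(y_pix, x_pix, bins=npix, range=[[0, npix]] * 2)
    img /= distance_mpc ** 2                                  # inverse-square dimming
    sigma_pix = psf_fwhm_arcsec / 2.355 / pixel_scale_arcsec
    return gaussian_filter(img, sigma_pix)

image = mock_dwarf_image(distance_mpc=4.0)
```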
However, at farther distances, the resolved galaxy transforms to appear as a low surface brightness "blob"; fewer and fewer individual stars can be identified and the galaxy turns into a smooth object. Once its angular size is smaller than the spatial resolution of the imaging system, it will look like a point source and the brightness of the central pixel will decrease as $\sim D^{-2}$. As can be seen from this example, a very low mass galaxy can be identified as a dwarf galaxy candidate at larger distances than those achieved in star count surveys. In the next section we present a model for the expected number of field dwarf galaxies and a calculation of the observational capabilities required for testing these predictions. INTEGRATED LIGHT IMAGING In this section we present a model (§ 3.1) and results (§ 3.2) for calculating the abundances of dwarf galaxies in the field, between 3 and 10 Mpc, using integrated light imaging. Methodology We start by compiling all known dwarf galaxies in the Local Group along with their photometric and structural parameters (McConnachie 2012; Bechtol et al. 2015; Drlica-Wagner et al. 2015; Torrealba et al. 2016a; Torrealba et al. 2016b; Homma et al. 2016; Simon et al. 2017; Homma et al. 2017). We use this observational data set to get an estimate for the number of dwarf galaxies in the field. An important assumption in our model is that dwarf galaxies in the field have similar statistical properties to dwarf satellite galaxies in the Local Group. Of course, field dwarfs might have different properties than dwarf galaxies in the Local Group due to cosmic variance and environmental effects that can impact their formation mechanisms (see, e.g., Geha et al. 2009). However, this is by far the most complete sample of dwarf galaxies down to the lowest masses available to us. We use the effective radii, V-band magnitudes and ellipticities to calculate the mean V-band surface brightness within the effective radius, $\mu_{\rm eff,V}$. We assume a V-band mass-to-light ratio of $M/L_V = 2.0$, appropriate for old (10 Gyr) metal-poor ([Z/H] < −1) populations (Conroy et al. 2009). The assumption of an old, metal-poor stellar population is conservative, as field dwarfs might have younger stellar populations, possibly even with active star formation (Geha et al. 2012). We use this observational data set to obtain a relation between the stellar mass and the effective surface brightness and between the effective radius and the stellar mass. The best fit to the observational data is given by $\log M_* = -0.51\,\mu_{\rm eff,V} + 19.23$ (equation 1), with a 2σ scatter of 0.92 dex, and $\log R_{\rm eff} = 0.23\,\log M_* - 1.93$ (equation 2), with a 2σ scatter of 0.29 dex. The compiled data along with the best-fit relations are presented in Figure 2. Another key ingredient in our calculation is the stellar mass-halo mass relation for galaxies, $M_*-M_h$. A powerful and widely used technique to derive this relation is the abundance matching ansatz. In its simplest implementation, observed galaxies are matched in a one-to-one fashion with dark matter halos from a dark matter-only simulation while assuming a monotonic relation between the stellar mass, $M_*$, and the dark matter halo mass, $M_h$, such that the cumulative number density of dark matter halos matches the cumulative number density of galaxies (e.g. Frenk et al. 1988; Yang et al. 2003; Kravtsov et al. 2004; Conroy et al. 2006; Vale & Ostriker 2006; Guo et al. 2010; Behroozi et al. 2013; Moster et al. 2013; Brook et al. 2014; Garrison-Kimmel et al. 2014; Sawala et al. 2015; Rodriguez-Puebla et al. 2017; Munshi et al. 2017; Read et al.
2017; Moster et al. 2017). Although abundance matching studies are in good agreement for halos of masses ∼10¹¹ M⊙, at lower masses there is a large uncertainty in the stellar mass-halo mass relation. Different studies present various slopes for the relation below stellar masses of a few ×10⁷ M⊙, presumably due to incompleteness at the low mass end and due to variations in the halo mass function assumed in various simulations. The derived slopes of the low-mass M* - M_h relation, α, where M* ∝ M_h^α, span a wide range of 1.6-3.1. Moreover, while the scatter at the high mass end is consistently measured to be relatively small, ∼0.2 dex or less, low luminosity galaxies have more stochastic star formation, resulting in a large scatter. Recent studies have explored the significance of the scatter in the M* - M_h relation and quantified the scatter in this relation for low mass galaxies (Garrison-Kimmel et al. 2017; Munshi et al. 2017; Jethwa et al. 2018). The uncertainty in the M* - M_h relation at low masses can have a large effect on our predictions. In this calculation we adopt two relations: the relation from Rodriguez-Puebla et al. 2017 as a lower limit and the relation from Behroozi et al. 2013 as an upper limit. In Figure 3 we show these two relations along with other recently derived stellar mass-halo mass relations.

We calculate the cumulative halo number density as a function of halo mass assuming a dark matter halo mass function from Tinker et al. 2010, obtained using the HMFcalc code (Murray et al. 2013). We adopt cosmological parameters consistent with the 7-year WMAP results (Komatsu et al. 2011). The upper x-axis of Figure 3 shows the values of the cumulative number density as a function of halo mass.

Given the model ingredients described above, we can calculate the expected number of dwarf galaxies with a particular distance, size, and surface brightness. The number of detected galaxies also depends on the imaging capabilities, in particular the limiting surface brightness, µ_eff,lim, and the spatial resolution, θ. For a given limiting surface brightness, µ_eff,lim, we use the linear relation shown in equation 1 to get the estimated value for the limiting stellar mass, M*,lim. In order to keep our calculation conservative we consider the value of the stellar mass after adding a 2σ scatter, i.e., we use log M*,lim = -0.51 · µ_eff,lim + 19.23 + 2σ_log M*, where σ is the standard deviation. We then use the relation shown in equation 2 to estimate the effective radius of galaxies with such stellar masses, considering the smallest detectable objects to be log R_eff,lim = 0.23 · log M* - 1.93 - 2σ_log R_eff, i.e., within the 2σ range of the average effective radius. In the next step we consider the spatial resolution in arcseconds, θ, which determines the limiting physical size of detectable objects and thus the visible horizon for the lowest detectable stellar masses. Given a spatial resolution, θ, objects with limiting effective radius R_eff,lim can be identified as galaxies out to a distance of roughly D_max = R_eff,lim / θ (with θ expressed in radians), with the effective radius calculated as described above, where σ_log M* = 0.92 and σ_log R_eff = 0.29.

The described model ingredients are combined to calculate the predicted cumulative number of galaxies, as a function of stellar mass, limiting surface brightness and resolution, in the Local Volume (D_LV = 10 Mpc), in the following way: the differential stellar mass function, dn/dM*, converted from the differential halo mass function using the M* - M_h relation, is integrated over the volume within which each stellar mass is detectable. The lower limit of the integration is given by M*,lim, and the upper limit is the lowest stellar mass that can be detected out to the edge of the Local Volume, M*(θ, D = 10). n(> M*(θ, D = 10)) is the cumulative number density of galaxies with stellar mass larger than M*(θ, D = 10), again converted from the cumulative number density of halos using the M* - M_h relation; these galaxies are detectable throughout the full volume.
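The limiting-mass, limiting-distance and number-count steps above can be put together in a compact numerical sketch. The coefficients are the ones quoted in the text; whether the 0.92 and 0.29 dex values represent σ or 2σ is stated inconsistently in the text, so the sketch simply applies them as the full conservative offsets. The stellar mass function is left as a user-supplied callable (a toy power law is used in the example) standing in for the Tinker et al. (2010) halo mass function mapped through an M*-M_h relation, so the numbers it returns are purely illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Local Group scaling relations quoted in the text (equations 1 and 2)
A_MU, B_MU, DLOGM = -0.51, 19.23, 0.92    # log M*   = A_MU * mu_eff,V + B_MU
A_R, B_R, DLOGR = 0.23, -1.93, 0.29       # log R_eff[kpc] = A_R * log M* + B_R
ARCSEC_PER_RAD = 206265.0

def limiting_log_mass(mu_lim):
    """Conservative limiting stellar mass for a surface-brightness limit."""
    return A_MU * mu_lim + B_MU + DLOGM

def limiting_log_radius_kpc(log_mstar):
    """Conservative (small) effective radius at a given stellar mass."""
    return A_R * log_mstar + B_R - DLOGR

def max_distance_mpc(log_mstar, theta_arcsec, d_edge=10.0):
    """Distance out to which R_eff,lim still subtends more than theta."""
    r_kpc = 10.0 ** limiting_log_radius_kpc(log_mstar)
    d = (r_kpc / 1e3) * ARCSEC_PER_RAD / theta_arcsec   # Mpc
    return min(d, d_edge)

def counts_per_sq_deg(mu_lim, theta_arcsec, dn_dlogM, d_min=3.0, d_edge=10.0):
    """Predicted number of detectable field dwarfs per square degree,
    integrating dn_dlogM [Mpc^-3 dex^-1] over the visible volume."""
    def integrand(logM):
        d = max_distance_mpc(logM, theta_arcsec, d_edge)
        vol_per_sr = max(d**3 - d_min**3, 0.0) / 3.0    # Mpc^3 per steradian
        return dn_dlogM(logM) * vol_per_sr
    n_per_sr, _ = quad(integrand, limiting_log_mass(mu_lim), 12.0)
    return n_per_sr * (np.pi / 180.0) ** 2

# Toy power-law mass function as a placeholder for the abundance-matched one
toy_mf = lambda logM: 1e-2 * 10.0 ** (-0.4 * (logM - 7.0))
n_deg2 = counts_per_sq_deg(mu_lim=29.5, theta_arcsec=3.5, dn_dlogM=toy_mf)
```

Because the limiting distance is capped at the edge of the Local Volume, galaxies above the mass limit at 10 Mpc automatically pick up the full 3-10 Mpc shell, which reproduces the two-term structure described above.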
In the next section we present the results we obtain using the model just described.

Detection Limits

As described in Section 3.1, assuming a limiting surface brightness and spatial resolution, we use the linear relations µ_eff,V - log M* and log M* - log R_eff, constructed from the empirically measured properties of dwarf galaxies in the complete Local Group sample, to infer the minimal detectable stellar mass for a set of (µ_eff, θ) at various distances. In Figure 4 we show the minimal detectable stellar mass as a function of limiting surface brightness and spatial resolution, for different distances. For all distances, the minimal detectable stellar mass depends strongly on the limiting surface brightness, which is a crucial parameter when carrying out integrated light surveys for the purpose of detecting new low mass galaxies. The limiting stellar mass changes quite dramatically, ranging from stellar masses of ∼10⁸ M⊙ for surveys with a limiting surface brightness of ∼24 mag arcsec⁻² to stellar masses as low as ∼10⁴ M⊙ for a limiting surface brightness of ∼30 mag arcsec⁻². While the surface brightness limit plays such an important role over the entire range of distances examined, spatial resolution starts to have an impact around distances of 8 Mpc. Detections at the outskirts of the Local Group, at 3 Mpc, are entirely independent of spatial resolution over the range that we probed. Limited to the Local Volume, even with a low spatial resolution of θ ∼ 5 arcsec, extremely low mass galaxies are potentially detectable. In Figure 4 we show mass limits for 95% completeness. In the following we include all galaxies that fall within the (θ, µ) limits. Clearly, the minimal detectable stellar mass shown here affects the number of predicted field galaxies in the Local Volume, presented in the next section.

Detection Rates of Field Dwarf Galaxies in the Local Volume

The resulting model-predicted detection rates of dwarf galaxies in the field for two volumes, 3-10 Mpc and 3-5 Mpc, are shown in Figure 5. We show results for two stellar mass-halo mass relations, Rodriguez-Puebla et al. (2017, hereafter RP17) in the right panel and Behroozi et al. (2013, hereafter B13) in the left panel, in order to get lower and upper limits for the estimated values as well as to highlight the sensitivity of the adopted model and the importance of constraining this relation observationally at low masses. The panels show the expected number of galaxies per square degree depending on limiting surface brightness and spatial resolution. The expected number of detected field galaxies varies significantly with the limiting surface brightness and the spatial resolution, as well as with the different stellar mass-halo mass relations, ranging between 0.002 and 0.35 galaxies per square degree in the Local Volume (Figure 5). The largest number of galaxies is obtained when we adopt the B13 stellar mass-halo mass relation (left panels).
There, the number of predicted detected galaxies is as high as 0.35 galaxies per square degree, for limiting surface brightness levels fainter than µ_eff,lim ∼ 29.5 mag arcsec⁻² and a spatial resolution better than θ ∼ 3.5". However, adopting the RP17 relation reduces the number of detected field dwarfs for the same observational limits to ∼0.05 galaxies per square degree. Consistent with the results presented in Section 3.2.1, the limiting surface brightness plays an important role in the predicted number of galaxies, while the spatial resolution only starts to be important at surface brightness limits of 26.5 mag arcsec⁻². The number of predicted galaxies per square degree increases as the limiting surface brightness gets fainter and the spatial resolution improves, as expected. For the smaller volume, 3-5 Mpc, the predicted number is obviously much smaller, ranging between 0 and 0.04 galaxies per square degree, depending on the surface brightness limit and the spatial resolution. Considering galaxies in this volume, the spatial resolution seems to be an almost insignificant parameter in the context of simply detecting the galaxies. Of course, better spatial resolution is crucial for resolving the galaxies into their different stellar populations. Similar to the results for the larger volume, assuming the two stellar mass-halo mass relations, B13 and RP17, results in significantly different values for the predicted cumulative number. Comparing the two panels in each figure, it is easy to notice that at fixed number density there is almost two orders of magnitude difference in brightness between RP17 and B13. These remarkable differences when adopting two different stellar mass-halo mass relations emphasize the necessity of detecting these dwarfs and placing strong constraints on the stellar mass-halo mass relation at low masses.

FIG. 5.-The cumulative number of predicted field galaxies per square degree to be detected in the Local Volume, between 3 and 10 Mpc (upper panels) and between 3 and 5 Mpc (lower panels), using integrated light imaging, assuming a limiting effective surface brightness in the V-band, µ_eff,lim, and a spatial resolution, θ. The left and right panels were calculated assuming the Behroozi et al. (2013) and the Rodriguez-Puebla et al. (2017) stellar mass-halo mass relations, respectively.

COMPARISON TO STAR COUNTS

After quantifying the detectability of dwarf galaxies using integrated light surveys, we now turn to compare our results to expectations from star count surveys. For an integrated light survey we adopt the values from the Dragonfly Nearby Galaxies Survey (Merritt et al. 2014; Merritt et al. 2016a; Merritt et al. 2016b; Danieli et al. 2017). The surface brightness limit in this survey is ∼29.5 mag arcsec⁻² on scales of ∼10 arcsec. For the purpose of comparing to star count surveys we adopt the parameters of two surveys: the Dark Energy Survey (DES), carried out with DECam, for which a g-band limiting magnitude of 24.6 mag is assumed, and the Hyper Suprime-Cam survey (Aihara et al. 2017), where a g-band limiting magnitude of 26.5 mag is assumed.

FIG. 6.-Limiting distances as a function of stellar mass for integrated light imaging with the Dragonfly Telephoto Array (µ_eff,lim = 29.5 mag arcsec⁻²) and using the star counts method in surveys limited by magnitudes of 24.6 and 26.5 mag, corresponding to the DECam and Hyper Suprime-Cam surveys. The blue shades show the limiting distances for galaxies that are detectable in 20, 50 and 80% of a large sample of simulated galaxies. The limiting distances for the Dragonfly 20% and 50% completeness levels are mostly dominated by the spatial resolution; the 80% limiting distance drops quickly at stellar masses of log(M*/M⊙) ∼ 5.5 due to the limiting surface brightness.
For the two galaxy detection methods, integrated light and star counts, we calculate the limiting distance for detection as a function of stellar mass, i.e., the farthest distance at which a galaxy of a particular stellar mass is likely to be detected. For the integrated light detections, the following steps were taken: for each stellar mass, 10⁶ galaxies were simulated, where each galaxy was assigned a surface brightness and an effective radius (in kpc) drawn from normal distributions with the means and variances calculated in Section 3.1, µ_eff,V ∼ N(µ_µeff,V, σ²_µeff,V) and R_eff ∼ N(µ_Reff, σ²_Reff). The galaxies were then placed at random distances (D = 0-20 Mpc) and their corresponding angular sizes in arcseconds were calculated. The integrated surface brightness is independent of distance at these distances. Galaxies with surface brightness brighter than 29.5 mag arcsec⁻² and with angular sizes larger than 10 arcsec were flagged as 'detected'. Then, the limiting distance for detection was calculated for 20, 50 and 80% of the total number of simulated galaxies.

Detections using the star count method were determined in the following way: first, we calculate a MIST model isochrone for a single stellar population of age 10.0 Gyr and metallicity of [Fe/H] = -2, and obtain synthetic photometry in the DECam bands. Given the galaxy stellar mass, the number of stars is obtained by integrating the IMF weights for those stars with magnitudes brighter than g-band magnitude limits of 24.6 mag and 26.5 mag. The limiting distance, D_lim, for the two star count surveys is the distance at which a galaxy with stellar mass M* has a minimal number of detectable stars with an apparent magnitude brighter than 24.6 mag and 26.5 mag, respectively. We assume that a minimum of 20±10 stars is required for a significant detection. This is a conservative lower limit, based on inspection of color-magnitude diagrams in the discovery papers of Milky Way ultra faint dwarfs (Belokurov et al. 2014; Torrealba et al. 2016b). Also, we assume that all stars brighter than the survey limit can be used in the analysis. In practice, brighter limits are usually employed; as an example, Koposov et al. (2015) used r < 23 mag rather than r < 24.6 mag due to uncertainties in the star/galaxy classification at fainter magnitudes in DES.

The resulting limiting distances as a function of stellar mass for integrated light and star count detections are shown in Figure 6. The blue shaded regions show detectable distances for 20, 50 and 80% of the galaxies in the simulated Dragonfly sample, while the grey curves show the detectable distances for star count surveys with magnitude limits of 24.6 mag and 26.5 mag in the g-band (shaded grey regions show the effect of varying the minimum number of detected stars between 10 and 30). An integrated light survey using the Dragonfly Telephoto Array is forecasted to reach greater limiting distances than star count surveys with the listed magnitude limits, for a large fraction of the simulated galaxies.
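The integrated light half of this Monte Carlo comparison is simple to sketch. The snippet below follows the procedure described above for a single stellar mass; the Gaussian means and scatters for the surface brightness and log R_eff are inputs (in the real calculation they come from the Section 3.1 relations, so the example values are placeholders), and defining the limiting distance as the farthest distance bin that still meets the requested detection fraction is an interpretation of the text, so its output should be treated as illustrative.

```python
import numpy as np

ARCSEC_PER_RAD = 206265.0

def dragonfly_limiting_distance(mu_mean, mu_sig, logr_mean, logr_sig,
                                mu_lim=29.5, theta_lim=10.0,
                                n_gal=100_000, completeness=0.5, seed=1):
    """Monte Carlo estimate of the farthest distance at which the requested
    fraction of simulated galaxies of one stellar mass is still detected in
    integrated light (surface brightness brighter than mu_lim and angular
    size larger than theta_lim)."""
    rng = np.random.default_rng(seed)
    mu = rng.normal(mu_mean, mu_sig, n_gal)              # mag/arcsec^2, distance independent
    r_eff_kpc = 10.0 ** rng.normal(logr_mean, logr_sig, n_gal)
    d_mpc = rng.uniform(0.0, 20.0, n_gal)                # random distances, 0-20 Mpc
    d_kpc = np.maximum(d_mpc * 1e3, 1.0)                 # guard against D = 0
    theta = r_eff_kpc / d_kpc * ARCSEC_PER_RAD           # angular size in arcsec
    detected = (mu < mu_lim) & (theta > theta_lim)
    # fraction of detected galaxies in narrow distance bins; the limiting
    # distance is the farthest bin that still meets the completeness level
    bins = np.linspace(0.0, 20.0, 81)
    which = np.digitize(d_mpc, bins)
    frac = np.array([detected[which == i].mean() if np.any(which == i) else 0.0
                     for i in range(1, len(bins))])
    ok = np.where(frac >= completeness)[0]
    return bins[ok[-1] + 1] if ok.size else 0.0

# Example call with placeholder distribution parameters for one stellar mass
d50 = dragonfly_limiting_distance(mu_mean=27.0, mu_sig=0.46,
                                  logr_mean=-0.7, logr_sig=0.145,
                                  completeness=0.5)
```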
The mass cutoff in the 80% detection curve is due to Dragonfly's limiting surface brightness, while the limiting distance for all of the Dragonfly curves is determined by the angular resolution. The shape of the star count detection limit curves matches the general shape of the isochrones: we require 20 stars to be detected, and at low masses the galaxies do not have this many giants. As a result, the brightest stars are subgiants, and the limiting distance plummets for these faint stars. At the high mass end, star count surveys are restricted by the limiting magnitude of the instrument, as the brightness of stars, and thus the number of detectable tracer stars, decreases with the square of the distance. In contrast, integrated surface brightness is conserved with distance, and thus integrated light surveys are mostly restricted by their limiting surface brightness and resolution, allowing dwarfs to be detected beyond the Local Group.

The power of these complementary approaches can also be seen in Figure 7. We show the cumulative number of expected dwarf galaxies in a Dragonfly integrated light survey and in star count surveys assuming two magnitude limits. Similar to Section 3.2.2, we assume a dark matter halo mass function from Tinker et al. (2010) and repeat the calculation for the two stellar mass-halo mass relations: B13 and RP17. As demonstrated before, the expected detection rate increases by an order of magnitude when adopting the B13 stellar mass-halo mass relation compared to RP17. The two imaging techniques essentially cover complementary phase space: star count surveys are better at detecting the plentiful lower mass nearby galaxies, whereas integrated light surveys help to increase the number of extended, low and high mass galaxy candidates farther away, depending mostly on their limiting surface brightness and spatial resolution.

SUMMARY AND DISCUSSION

Recent developments in integrated light techniques are sensitive enough to allow dwarfs to be detected beyond the Local Group (Karachentsev et al. 2014; Merritt et al. 2014) and to allow a statistical probe of the low mass dwarf population. In this paper we study the prospects of integrated light imaging in the context of constructing a complete census of dwarf galaxies in the general field of the Local Volume (between 3 and 10 Mpc), down to very low masses. We present a model for calculating the predicted detection rates of dwarf galaxies using integrated light surveys, depending on their limiting effective surface brightness and spatial resolution. Two assumptions are made and should be noted: 1. We partially base our model on properties that were measured for the dwarf galaxy population in the Local Group, which can ultimately be very different from the statistical properties of dwarf galaxies in the field. This assumption can be revisited when the first samples of very low mass field galaxies are available. 2. We assume a one-to-one stellar mass-halo mass (M* - M_h) relation, whereas the scatter at the low mass end may be much larger than the 0.2 dex scatter measured at the high mass end.

The principal result of this paper is presented in Figure 5. Assuming two M* - M_h relations, Behroozi et al. (2013) and Rodriguez-Puebla et al. (2017), we present the predicted abundances of field dwarfs in the Local Volume, in integrated light surveys, over a range of values for the limiting surface brightness, µ_eff,lim, and spatial resolution, θ.
Assuming the B13 relation, low mass dwarf galaxies should be detected in large numbers, ∼0.3-0.4 deg⁻², when carrying out an integrated light survey with a limiting surface brightness fainter than ∼29 mag arcsec⁻² and a spatial resolution better than ∼5 arcsec. The result decreases by an order of magnitude when we adopt the RP17 relation. This drastic change when adopting two different stellar mass-halo mass relations illustrates the necessity of performing a systematic search for such objects in the field. Proving the existence, or alternatively the lack, of a large population of faint and ultra-faint galaxies will provide an important constraint on the M* - M_h relation at low masses. We compare our results to those that can be achieved using the 'standard' star counts method in Figures 6 and 7. We demonstrate that integrated light imaging is complementary to the star counts method and has different strengths and weaknesses. While the star counts technique is dominated by the inverse square law, imaging the integrated light of extended galaxies takes advantage of the conservation of surface brightness in the Local Universe. Motivated by the results of this study we are in the first stages of conducting the 'Dragonfly Blank Wide Field Survey'. This is a deep photometric survey of a wide blank area to be carried out with the Dragonfly Telephoto Array. Its main goal is to detect a large set of galaxy candidates, predicted to exist in the field, and to study their properties. We hope to be able to shed light on theories of isolated galaxy formation, which currently remain unconstrained since, so far, no low mass dwarfs have been detected in the field. In order to study these galaxy candidates further, follow-up deep high resolution observations will need to be obtained.

FIG. 8.-The minimal detectable stellar mass in integrated light surveys, assuming a limiting effective surface brightness in the V-band and a spatial resolution. Black contour lines indicate constant minimal stellar mass for different values of limiting effective surface brightness and spatial resolution. A V-band mass-to-light ratio of M/L_V = 0.3 is assumed, appropriate for, e.g., a 1 Gyr old, [Z/H] = -1 stellar population.

better than θ ∼ 3.5", compared to 0.35 for a mass-to-light ratio of 2.0, when adopting the B13 stellar mass-halo mass relation (left panels). Similarly to the results presented in Section 3.2, adopting the RP17 relation reduces the number of detected field dwarfs for the same observational limits.

FIG. 9.-The cumulative number of predicted field galaxies per square degree to be detected in the Local Volume, between 3 and 10 Mpc (upper panels) and between 3 and 5 Mpc (lower panels), using integrated light imaging, assuming a limiting effective surface brightness in the V-band, µ_eff,lim, and a spatial resolution, θ. The left and right panels were calculated assuming the Behroozi et al. (2013) and the Rodriguez-Puebla et al. (2017) stellar mass-halo mass relations, respectively. A V-band mass-to-light ratio of M/L_V = 0.3 is assumed, appropriate for, e.g., a 1 Gyr old, [Z/H] = -1 stellar population.
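The appendix figures (FIG. 8 and FIG. 9) repeat the calculation with a V-band mass-to-light ratio of 0.3 instead of 2.0. One simple way to gauge the size of that change, under the assumption that only the mass-to-light ratio entering equation (1) is altered, is to shift the inferred limiting stellar mass by the logarithm of the ratio of the two M/L values; the snippet below does exactly that and nothing more, so it is an approximation rather than the full recalculation behind the figures.

```python
import numpy as np

A_MU, B_MU, DLOGM = -0.51, 19.23, 0.92   # equation (1) coefficients quoted in the text

def limiting_log_mass(mu_lim, ml_v=2.0, ml_ref=2.0):
    """Limiting log10 stellar mass for a surface-brightness limit, shifted
    for a different assumed mass-to-light ratio: at fixed light, M* scales
    linearly with M/L_V."""
    return A_MU * mu_lim + B_MU + DLOGM + np.log10(ml_v / ml_ref)

mu_lim = 29.0
print(limiting_log_mass(mu_lim, ml_v=2.0))   # old, metal-poor population
print(limiting_log_mass(mu_lim, ml_v=0.3))   # ~1 Gyr old population, ~0.8 dex lower
```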
Three Heads are Better than Two: HBcrAg as a New Predictor of HBV-related HCC Patients with chronic hepatitis B virus (HBV) infection are at risk of developing hepatocellular carcinoma (HCC), and serum markers reflecting viral replication are potential predictors for HCC development. Besides the levels of serum HBV DNA and hepatitis B surface antigen (HBsAg), hepatitis B core-related antigen (HBcrAg) quantification is an emerging serological marker for viral replication. Unlike HBV DNA and HBsAg, HBcrAg is a covalently closed circular DNA-derived protein marker, consisting of hepatitis B e antigen (HBeAg), p22cr, and hepatitis B core antigen. In treatment-naïve HBV patients, higher HBcrAg levels are shown to be associated with an increased risk of HCC in several studies. More importantly, HBcrAg may complement HBV DNA level to predict HCC development. For example, an Asian treatment-naïve cohort study’s data showed that HBcrAg level of 4 log U/mL was effective to stratify HCC risk in HBeAg-negative patients with intermediate viral loads, who may not need antiviral therapy because of the low to moderate risk of HCC. In patients receiving prolonged nucleos(t)ide analogue with profound viral suppression, most data indicated that HBV DNA and HBsAg levels no longer serve as HCC predictors. However, several studies suggested on-treatment HBcrAg levels may remain as an HCC predictor. In summary, HBcrAg level can be a useful biomarker for treatment-naïve patients, but its value in on-treatment patients needs validation. The next challenge is how to combine HBcrAg with the other viral markers to construct a better HCC prediction model, optimizing the management of HBV patients. Introduction Chronic hepatitis B virus (HBV) infection continues to be a major public health issue worldwide, although safe and effective vaccines are available for more than 3 decades. Recent data estimated that more than 257 million individuals worldwide are positive for hepatitis B surface antigen (HBsAg). 1 These individuals with chronic hepatitis B infection are at an increased risk of developing liver cirrhosis, hepatic decompensation, and hepatocellular carcinoma (HCC); 15% to 40% of these individuals will develop these serious sequelae during their lifetime. Nucleos(t)ide analogues (NAs) are the most adopted antiviral treatment for patients with chronic hepatitis B (CHB). NAs effectively suppress HBV replication to undetectable levels through the inhibition of viral reverse transcriptase. 2 It not only stops the progression of liver fibrosis but also reduces the risk of HCC. However, rebound of viremia frequently occurs after the discontinuation of NA, primarily because of the persistence of the active transcriptional template of HBV covalently closed circular DNA (cccDNA). Therefore, prolonged antiviral therapy is usually necessary until the clearance of HBsAg. Identifying CHB patients with high HCC risk for early antiviral therapy is an urgent need because more than half of these patients will suffer from serious liver sequelae during their lifetime. So far, there are several serum quantitative markers for viral replication, including HBV DNA, HBsAg, and hepatitis B core-related antigen (HBcrAg). Previous studies showed and validated the role of HBV DNA and HBsAg in predicting HCC development in treatment-naïve patient. [3][4][5] Recently, HBcrAg level has been increasingly recognized as an emerging predictor for HCC development. 
[6][7][8] In this review, the evolutionary roles of these 3 biomarkers in predicting HCC development among CHB patients will be summarized and discussed. What is HBcrAg? HBcrAg consists of 3 precore/core protein products sharing an identical 149 amino acid sequence: hepatitis B core antigen (HBcAg), hepatitis B e antigen (HBeAg), and a 22-kDa precore protein(p22cr) (Figure 1). 9 HBcAg is the nucleocapsid that encloses the viral DNA. HBeAg is a circulating protein derived from the core gene, then modified and secreted from liver cells. It usually serves as a marker of active viral replication. 10 The p22cr is the dominant precore/core protein in HBV DNA-negative particles. For both HBeAg-positive and -negative patients, HBcAg only accounts for 3.1-37.4% (median 10.5%) of HBcrAg in HBsAg-positive particles. 11 There are two start codons in the HBV's precore/core open reading frame ( Figure 2A). The first ATG encodes the entire amino acid sequence of hepatitis B core protein (HBcAg) plus a 29 amino acids precore sequence at the N-terminal end. 12 There are 2 major products derived from post-translational modifications of precore protein. HBeAg is generated after the first cleavage of the first 19 amino acids, a signal sequence that allows translocation into the lumen of endoplasmic reticulum (ER), and second cleavage of up to 34 amino acids from the arginine-rich C-terminal end. HBeAg is then secreted through the ER and Golgi apparatus. 12 Another product is 22-kDa precore protein (p22cr), containing the uncleaved signal peptide and lacking the arginine-rich domain involved in binding the RNA pregenome or the DNA genome. In contrast to the first start codon, the second ATG specifically encodes HBcAg only. The viral mutations of the precore region highly affect the ratio between HBeAg/p22cr and HBcAg. The precore stop codon mutation (G1896A) is a single-base substitution of G-to-A at nucleotide position 1896 located in the precore gene, which creates a stop codon that prevents the translation of the precore open reading frame and terminates the production of HBeAg/p22cr, but not HBcAg. Therefore, emergence of the mutation lowers the expression level of HBeAg, which has been shown to be associated with HBeAg-negative disease and is expected to cause a lower HBcrAg level in serum. [13][14][15][16][17] This variant may explain the different HBcrAg levels between HBeAg-negative and -positive patients. 18,19 A sensitive enzyme immunoassay (EIA) specific for HBcAg and HBeAg was first introduced in 2002, and Kimura et al. 20 designated these proteins translated from the precore/core gene as HBcrAg. With the pretreatment by detergents, this assay can detect HBcAg and HBeAg even in anti-HBc or anti-HBe positive specimens. Not only HBcAg and HBeAg but also p22cr can be measured by the serological testing. So far, there is only one commercial HBcrAg assay available (Lumipulse G HBcrAg assay from Fujirebio). Its linear range for quantification spans from 3 to 7 logU/ml and its lowest sensitivity limit could reach 2 logU/ml. HBcrAg levels well correlate with HBV DNA levels. 20 Yoshida et al. reported good performance of the HBcrAg assay in identifying high viremic individuals among the treatment-naïve CHB patients using systematic review and meta-analysis. 21 A positive correlation between HBcrAg and HBsAg levels was also observed in Asian and European cohorts. 
18,19 A Taiwanese cohort study including 2,666 patients showed a high correlation between HBcrAg and HBV DNA levels (r = 0.83; P < .001), while a moderate correlation between HBcrAg and HBsAg levels (r = 0.59; P < .001). 8 It is plausible that HBcrAg is better correlated with HBV DNA than HBsAg because both HBcrAg and HBV DNA are only derived from cccDNA, whereas HBsAg may also be translated from the integrated viral genome. 22 In addition, serum HBcrAg levels have been shown to correlate well with cccDNA level and its transcription activity. 19,[23][24][25] Suzuki et al. first reported the correlation between HBcrAg and other viral markers, including HBeAg, HBsAg, serum HBV DNA, and intrahepatic cccDNA. 23 They enrolled a total of 57 patients with chronic hepatitis B, and HBcrAg was observed to correlate positively with cccDNA (r =0.692, P < 0.001). This correlation was also observed in another study of hepatitis B virus (HBV) re-infection after liver transplantation. 24 Conducted by Wong et al., a study enrolling 138 patients with chronic hepatitis B reported that HBcrAg correlated positively with cccDNA not only in overall patients (r =0.70, P < 0.001) but also in patients achieving undetectable HBV DNA level after antiviral therapy (r =0.42, P < 0.001). 19 Viral replication markers to predict HCC risk in treatment-naïve patients To optimally investigate the causal relationship of dynamic factors, cohort studies are preferred over cross-sectional studies. Therefore, the role of viral factors in predicting HCC is only reviewed in cohort studies ( Table 1). The positive relationship between serum HBV DNA level and HCC risk was first shown by a prospective community-based cohort study known as REVEAL-HBV (Risk Evaluation of Viral Load Elevation and Associated Liver Disease/Cancer-Hepatitis B Virus), which followed 3,653 adult Taiwanese HBsAg seropositive patients over a mean follow-up period of 11.4 years. 3 In particular, the HCC risk started to increase when serum viral load ≧2,000 IU/mL but increased dramatically when viral load ≧20,000 IU/mL. The HBV DNA cutoff levels of 2000 IU/mL and 20,000 IU/mL are thus recommended to categorize the patients into low, intermediate, and high-risk groups. The role of HBsAg level was first shown by a hospital-based cohort study with the acronym of ERADICATE-B (Elucidation of Risk fActors for DIsease Control or Advancement in Taiwanese hEpatitis B carriers), which enrolled 2,688 Taiwanese HBV carriers who did not have evidence of cirrhosis at baseline and remained treatment-free during the follow-up period. 4 The mean follow-up period was 14.7 years. The data showed that HBsAg level could complement HBV DNA level in predicting HCC risk. In patients with viral load <2000 IU/mL, serum HBsAg level of 1000 IU/mL further stratified the risk, which is also validated by the REVEAL-HBV cohort. 5 The association between serum HBcrAg level and the development of HCC was first reported by Kumada et al. 26 They selected 117 NA-treated and 117 treatment-naïve patients using the propensity score matching, which was designed to explore whether the NA treatment lowers the HCC risk. The multivariable analysis showed that serum HBcrAg level greater than 3.0 log U/mL was associated with HCC development. The predictive value of serum HBcrAg level for the development of HCC in treatment-naïve patients was first reported by Tada et al. in a Japanese cohort. 
6 The hospital-based retrospective cohort recruited a total of 711 treatment-naïve patients with available HBcrAg levels. After a median follow-up period of 10.7 years, patients with an HBcrAg level greater than 2.9 log U/ml were associated with more than 5-fold increase in the risk of HCC development than those below the level. Another retrospective cohort study, enrolling 207 spontaneous HBeAg seroconverters from Hong Kong, was conducted to explore the relationship between HBcrAg level and HCC development. 7 The median follow-up period was 13.1 years. The data showed that a higher baseline serum HBcrAg level at the time of HBeAg seroconversion was associated with increased risk of HCC development. The adjusted hazard ratio (HR) was 1.75 when stratifying the patients by baseline HBcrAg level of 5.21 log U/mL. To be noted, it is not a treatment-free cohort as nearly half of the study patients received NAs at a median of 5.5 years after the spontaneous HBeAg seroconversion. A shared limitation of both studies is relatively small patient numbers, which make it difficult to explore rare events, such as HCC development. The ERADICATE-B cohort study from Taiwan actually overcomes these challenges. 8 It showed a linear relationship between serum log 10 HBcrAg level and HCC risk. The multivariable analysis revealed that, compared to the patients with the HBcrAg less than 4 log U/mL, those with the HBcrAg between 4 and 5 log U/mL, 5 and 6 log U/mL, and greater or equal to 6 log U/mL had higher HCC risk with adjusted HRs of 2.83 (95% Viral replication markers to predict HCC development in on-treatment patients Oral antiviral therapy is effective in inhibiting reverse transcription of pregenomic RNA, thus serum HBV DNA could be lowered to an undetectable level in most of the on-treatment patients. In contrast, HBsAg is translated from the viral mRNA directly, thus it remained high and stable. However, the HBcrAg kinetic was different from both as it consists of the protein products translated from the precore/core mRNA. A decline in serum HBcrAg level in CHB patients after NA therapy was first reported by a study from Hong Kong. 27 A small patient number is a common limitation for both studies. In a recent Japanese treatment cohort on a large number of patients, Hosaka et al. 28 found a positive correlation between on-treatment HBcrAg level and HCC risk. A total of 1,268 CHB patients were enrolled and about half of them received potent NA therapy. The median follow-up duration was 8.9 years. The data showed that a higher HBcrAg level at 1 year after antiviral therapy was associated with higher HCC risk and the cutoff values were 4.9 log U/mL and 4.4 log U/mL for the HBeAg-positive and HBeAg-negative patients, respectively. The multivariable analysis revealed that, In addition to the Japanese cohort studies, a recent Hong Kong study enrolling 1,400 NA-treated CHB patients was conducted to explore the relationship between HBcrAg level and HCC development. 31 All the patients received potent NA therapy. During a median follow-up duration of 45 months, 85 patients developed HCC. High serum HBcrAg levels were defined as >2.9 log U/mL in the HBeAg-negative patients and >4.9 log U/mL in the HBeAg-positive patients (cutoffs adopted from Tada's and Hosaka's reports, respectively). 6,28 They found that high on-treatment HBcrAg levels were associated with an increased risk of HCC development in the overall cohort and HBeAg-negative patients (n=1,042), but not in HBeAg-positive patients (n=358). 
In addition, serum HBsAg levels were not associated with HCC risk. However, 92.6% of the patients had received NA before the enrollment with various duration (median: 44 months, interquartile range: 18-71 months), and around 70% had already achieved undetectable HBV DNA when HBcrAg was determined. All these factors might influence the interpretation of the findings and thus should be cautious. In short, on-treatment levels and kinetic of HBcrAg are potential predictors for HCC development in NA-treated patients. Most data are coming from Asian countries, especially from Japan (Table 2). We need more data from other countries for validation. Challenges ahead and the way forward Current data have suggested that HBcrAg level serves as an HCC predictor in treatment-naïve patients and on-treatment level or kinetic of HBcrAg may stratify HCC risk in NA-treated patients. However, several fundamental and essential questions remain to be answered. First, HBcrAg assay detects HBeAg, p22cr and HBcAg, and HBcAg represents around 10.5% of the precore/ core proteins in HBsAg-positive particles only. 11 As HBeAg and p22cr share the first start codon of the precore/core region, production of both proteins could be highly affected by the precore stop codon (G1896A) mutation ( Figure 2B). In other words, serum HBcrAg levels will be reduced when precore stop codon mutation (G1896A) emerges. This may raise two issues. 1. Although HBcrAg is now considered as a surrogate marker of intrahepatic cccDNA level, the proportion of precore stop codon mutation (G1896A) should be included as an adjusting variable in the prediction formula. 2. Although the REVEAL-HBV cohort study has shown that the precore stop codon mutation (A1896) was associated with a lower HCC risk compared to the precore wild-type (G1896), 32 the relationship between the viral variant and HCC has not been well validated. If the relationship is confirmed, the viral variants may confound the relationship between HBcrAg level and HCC as it also affects the production of HBcrAg. In addition, basal core promoter (BCP) mutations are another potential confounder as they are HCC-associated variants that may influence HBcrAg production. [33][34][35][36] More studies are needed to clarify whether HBcrAg levels remain an independent risk factor after adjusting these viral variants (Figure 3). Second, the underlying mechanism of how HBcrAg levels affect HCC development is unclear. As HBcrAg is a viral protein translated from cccDNA level, a common hypothesis is that higher HBcrAg levels indicate more replication competent viral templates and less controlled viral replication, which could lead to more liver damage followed by higher risk of cirrhosis and HCC development. The data from ERADICATE-B study has supported the hypothesis by showing the positive relationship between HBcrAg and the risk of developing pre-HCC adverse events in treatment-naïve patients. 37 A Japanese cohort study also had a similar finding. These data confirmed the hypothesis in treatment-naïve patients. 38 In contrast, once patients receive NA treatment, it prevents not only the HBV-related liver necroinflammation but also the progression of liver fibrosis. That is why most of the data suggest that HBV DNA and HBsAg levels are no more HCC predictors after NA treatment. 39,40 It is still unknown whether HBcrAg level plays a different role from HBV DNA and HBsAg levels in predicting the residual HCC risk in patients after prolonged NA therapy. 
More clinical data and mechanistic studies are awaited to address this issue. The last challenge is how to apply the HBcrAg quantification in clinical practice. Current lines of evidence show that it could not replace HBV DNA level to predict HCC development or to serve as the criteria for initiating antiviral treatment, 8 but whether HBcrAg levels may serve as a biomarker to optimize the management of HBeAg-negative patients at grey zone, who have either high HBV DNA levels or mildly elevated ALT levels, deserves further investigation. 41 Conflicting results about the HCC risk in the grey zone patients existed in recent studies, suggesting this is a heterogeneous population. [42][43][44] As HBcrAg level is another biomarker for viral replication, it may play a given role in stratifying the risk of HCC development in this special clinical setting, which may help physicians optimize the management of the so-called "grey zone" patients. Conclusion In summary, the viral markers representative of HBV replication are useful predictors for HBV-related HCC in treatment-naïve patients. The next key question is how to combine all of them to achieve a more accurate HCC prediction in treatment-naïve patients. In those receiving prolonged antiviral therapy, more conclusive studies are needed to clarify the relationship between HBcrAg level and the HCC risk. Author contribution Conceptualization: Jer-Wei Wu and Tai
Effect of Foliar Application with Potassium Silicate and Seaweed Extract on Plant Growth, Productivity, Quality Attributes and Storability of Potato

INTRODUCTION

Potato (Solanum tuberosum L.) is considered the fourth most important global vegetable crop for local consumption and export (Muthoni and Nyamongo, 2009). Improving vegetative growth is an important route to increasing the productivity of potato tubers of high quality and good storability. This is a goal that may be achieved by spraying plants with SWE and potassium silicate treatments. SWE contains various microelements (i.e., Cu, Zn, Mo, B, Co), and also contains auxins, gibberellins, cytokinins and large amounts of polyamines (Papenfus et al., 2012), abscisic acid and brassinosteroids (Stirk et al., 2014), several osmoprotectants (betaine, proline, mannitol) (González et al., 2013) and other components of great biological importance that encourage nutrient absorption and translocation in plants (Craigie, 2011).

Spraying SWE on plants increases root growth, stem thickness, overall growth and crop yield (Thirumaran et al., 2009), and extends shelf life (Khan et al., 2009). Miceli et al. (2021) found that extract of Ecklonia maxima, a brown alga, promotes plant growth and improves crop yield, plant morphology and physiology (total biomass accumulation, leaf expansion, stomatal conductance, improved water use efficiency, nitrogen utilization, etc.).

SWE acts as a biostimulant for plants and is an effective and economical way to improve plant growth and tuber quality by increasing the efficiency of using water and available minerals (Ronga et al., 2019); it also increases the absorption of many nutrients, enhances growth, and increases plant resistance to frost, fungal diseases and various stress conditions. Added to this is its effectiveness in improving quality and increasing the shelf life of fruits (Zodape, 2001); it also has a clear effect on plant growth, which leads to an improvement in the overall potato yield, both qualitatively and quantitatively (Sarhan, 2011).

The application of SWE to potato plants resulted in a notable improvement in various aspects of vegetative growth, such as plant height, leaf count, wet and dry weight, average leaf area, total carbohydrates and chlorophyll levels in the leaves (Rizk et al., 2018). All physical properties of potato tubers (diameter, length, volume and specific gravity), tubers per hill, tuber yield and nutritional values were also improved. Zaki et al. (2021) found that treatment with SWE at a concentration of 1%, applied three consecutive times, gave the lowest weight loss rate, the highest dry matter percentage, and the highest starch and protein percentages during a 4-month storage period under room temperature conditions.
Potassium silicate is an excellent source of silicon and potassium and is highly soluble. Recently, it has been used in agricultural production systems as a silica amendment that also supplies potassium, which helps increase the translocation of sugars from leaves to tubers. It is useful, especially when plants are under stress. In addition, silicon enhances soil fertility, mineral absorption, resistance to diseases and pests, and plant growth, thus increasing yields (Crusciol et al., 2009); it improves tuber quality (Dkhil et al., 2011), improves plant structure, regulates the transpiration process, and increases plant tolerance to toxic elements (Hou et al., 2006). Talebi et al. (2015) found that spraying potassium silicate increases the content of soluble carbohydrates and protein in the leaves of potato plants, thus increasing the yield. Abd El-Gawad et al. (2017) found that spraying potassium silicate improved the performance of plants by stimulating more than one mechanism to relieve stress on the plant, as it gave higher values of leaf area, total chlorophyll, leaf dry matter, total soluble carbohydrates, starch, protein and amino acids in leaves compared with the control.

Potassium silicate treatment improves plant growth of potato (cv. Lady Rosetta) and the nutrient content (NPK) of the plants, in addition to increasing productivity and quality compared with untreated plants. It also reduces weight loss, reduces tuber decomposition, and increases shelf life during storage (Abou-El-Hassan et al., 2020).

Spraying potassium silicate improved the overall indicators of vegetative growth, as well as the mineral and starch content of plants and tubers (Pilon et al., 2014 and Salim et al., 2014), increased dry matter accumulation in tubers (Vulavala et al., 2016) and increased the starch content (Wadas, 2023).

Pre-harvest spraying of potassium silicate is a promising strategic treatment for fruit quality management and effective control of post-harvest losses for horticultural crops (Mohamed et al., 2017).

Consequently, we carried out a study to investigate how foliar spraying with varying concentrations of SWE and potassium silicate affects the growth, yield and its components, physical and chemical properties, quality, and storability of potatoes.

Field experiment

The experiment was carried out during the summer seasons of 2022 and 2023 at the Faculty of Agriculture, Cairo University, in cooperation with the Post-Harvest Vegetables Research Department, Horticultural Research Institute (HRI), Agricultural Research Center (ARC), to determine the effect of spraying potassium silicate and seaweed extract (SWE) on the vegetative growth, yield components, tuber quality and storability of potato (Solanum tuberosum L.) cv. Spunta.
The physical and chemical properties of the clay soil (Table 1) were analyzed at the Soil and Water Research Institute (ARC). The tubers were planted during the first week of February in both study seasons. Land preparation was carried out according to the recommendations of the Ministry of Agriculture; the field was then divided into plots with an area of 12.6 m² (4.5 m long and 2.8 m wide), each plot containing 4 ridges (70 cm wide). Tubers were planted at a depth of 15 cm, spaced 25 cm apart on one side of the ridges, and each experimental plot contained 72 plants. Concentrations of 1% and 2% of each treatment, potassium silicate and SWE, were applied as foliar sprays, in addition to untreated plants (sprayed with water only) as a control. The treatments were applied three times during the growth period of the potato plants, starting 30 days after planting and every 15 days thereafter. The treatments were arranged in a completely randomized block design with three replications.

Vegetative growth

Five plants from each experimental plot were randomly taken 80 days after planting to measure the growth characteristics. Plant height, leaf area, number of leaves per plant, and fresh and dry weight per plant were measured. The chlorophyll reading of the third upper leaf was measured in SPAD units, where SPAD = 10 mg chlorophyll/g fresh weight, using a Minolta SPAD-502 meter. Si and K were determined by ICP atomic emission spectrometry (Stefánsson et al., 2007).

Yield components and tuber properties

The total yield per plot and tuber weight per plant were determined 105 days after planting. Ten potato tubers were taken randomly for determination of average weight, tuber length and diameter. Tuber firmness, dry matter, total carbohydrates, starch content, silicon and potassium were determined in the tubers.

Storage experiment

Tubers were collected at the appropriate harvest time from each experimental plot, transported to the laboratory, and sorted. They were packaged in plastic net bags (2 kg each) and stored at 10°C and 85% relative humidity. Each replicate consisted of 3 packages weighing 6 kg in total, and each 2 kg bag represented an experimental unit (EU). Fifteen EUs were prepared for each treatment. Samples were pooled from three replicates of each EU and randomly selected in a completely random distribution. Measurements were performed immediately after harvesting and after 1, 2, 3 and 4 months of storage for the following parameters:
1. Weight loss (%): calculated as [(initial weight of tubers - weight of tubers at sampling date) / (initial weight of tubers)] × 100 (illustrated in the short example following this list).
2. General appearance: tubers with a general appearance rating of 5 or lower were deemed unmarketable. The scale used to judge general appearance was 9 to 1, with 9 denoting excellent, 7 good, 5 fair, 3 poor, and 1 unsalable.
3. Decay percentage: all tubers that were shriveled, broken, or ruined as a result of infection with microorganisms were counted and recorded visually, and this was then expressed relative to the total initial weight of the stored tubers (Cheour et al., 1990).
4. Firmness: a 1.5 mm diameter firmness (pressure) tester was used to measure firmness at the same two locations on every tuber.
5. Dry matter (%): calculated as (dry weight / fresh weight) × 100.
6. Total carbohydrates: determined using a spectrophotometer at a wavelength of 420 nm according to AOAC (1990).
7. Starch content: determined in tubers on a dry matter basis according to AOAC (1990).
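As a small worked illustration of the percentage formulas listed above, the following sketch computes weight loss, dry matter and decay for one hypothetical 2 kg experimental unit; the numbers are made up for demonstration and do not come from the study.

```python
def weight_loss_pct(initial_weight_g, weight_at_sampling_g):
    """Weight loss (%) = [(initial - current) / initial] * 100."""
    return (initial_weight_g - weight_at_sampling_g) / initial_weight_g * 100.0

def dry_matter_pct(dry_weight_g, fresh_weight_g):
    """Dry matter (%) = (dry weight / fresh weight) * 100."""
    return dry_weight_g / fresh_weight_g * 100.0

def decay_pct(decayed_weight_g, initial_stored_weight_g):
    """Decay (%) relative to the total initial weight of stored tubers."""
    return decayed_weight_g / initial_stored_weight_g * 100.0

# Hypothetical experimental-unit record (2 kg bag) after 2 months at 10 C
bag = {"initial_g": 2000.0, "current_g": 1924.0,
       "decayed_g": 0.0, "sample_fresh_g": 50.0, "sample_dry_g": 10.9}

print(f"weight loss: {weight_loss_pct(bag['initial_g'], bag['current_g']):.1f}%")
print(f"dry matter:  {dry_matter_pct(bag['sample_dry_g'], bag['sample_fresh_g']):.1f}%")
print(f"decay:       {decay_pct(bag['decayed_g'], bag['initial_g']):.1f}%")
```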
Statistical analysis For field experiment: Every season's data was subjected to statistical analysis, and when the errors were uniform, a pooled analysis was done.Levene (1960) test was used to determine whether the variances for the two seasons were homogeneous.The study's two seasons' worth of combined data was examined.For storage experiment: Snedecor and Cochran's (1980) analysis of variance was used to statistically evaluate the data.Waller and Duncan (1969) said that the Duncan multiple range test method was used rather than a mean comparison. Vegetative growth The data shown in Table (2) show that, when compared to untreated (control) potato plants, plants treated with 1% or 2% SWE and potassium silicate significantly increased their chlorophyll reading and other parameters of vegetative growth, such as plant height, leaf area, number of leaves per plant, fresh and dry weight of the plant.Vegetative growth is enhanced by high concentrations of both treatments over lower concentrations.But when it came to enhancing vegetative growth indices, 2% SWE worked best.In this regard, untreated (control) had the lowest results.These outcomes agreed with the findings of Abou- El-Hassan et al. (2020), Rizk et al. (2018), and Abd El-Gawad et al. (2017). The fact that SWE contains essential elements for growth, such as macro-and micronutrients, amino acids, vitamins, betaine, betaine-like compounds, gibberellins, cytokinins, and auxins, may account for its beneficial effect on plant growth.It might raise levels of phytohormones like indole acetic acid (IAA), gibberellins (GA3), and active cytokinins, which encourage cell division and elongation.(Awad et al., 2006); improves and stimulates the primary biosynthesis of chlorophyll (Garbaye and Churin, 1996); increases plant resistance to disease due to its antimicrobial, antibacterial, anti-yeast, and anti-mold activity; and stimulates the uptake of N, P, K, Mg, Ca, Zn, Fe, and Cu, which lessens the inhibitory effect of toxic sodium and restores growth (Selvan and Sivakumar, 2013). Silicon strengthens the cell walls of plants.This results from silicon being deposited as amorphous silica in the cell wall, which is a positive effect (Khan et al., 2017).Furthermore, when micronutrients like iron, zinc, copper, and manganese are present in high concentrations, silicon lessens their toxicity, which promotes plant growth through interactions with nutrient absorption.(Wang et al., 2013).Potassium silicate also enhancing tissue elasticity and the volume of interconnected water that is associated with all expansion and growth (Shi et al., 2016). Spraying with potassium silicate improves vegetative growth indicators of plants.This may be due to the vital role of potassium in nutrition and enhancing the transport of substances and protein synthesis (Abd El-Gawad et al., 2017).In this regard, silicon has a role in the photosynthesis process chain, as it improves it and prevents the deterioration of chlorophyll.Silica bodies act as healthy windows that allow light to be transmitted to the mesophyll area, which in turn increases the content of chlorophyll (Pilon et al., 2014). Potassium clearly has a beneficial effect on potatoes by lessening the occurrence or intensity of early blight.It plays a significant role in numerous essential metabolic processes as well because it enhances growth indicators by acting as an activator or catalyst for numerous enzymes (Abou-El-Hassan et al., 2020). 
Silicon and potassium contents in leaves The data in Table (3) show that leaf silicon and potassium content varies significantly between different concentrations of SWE and potassium silicate treatments.Regarding silicon content, potassium silicate with 2% or 1% provided the highest silicon content values in the leaves with significant differences between them, while the lowest values were obtained with the 1% or 2% SWE treatments and the untreated control with no significant differences between them.Regarding the potassium content in the leaves, 2% potassium silicate gave the highest values, followed by 1% with significant differences between them, while 1% or 2% SWE were less effective in this regard.Untreated plants (control) gave the lowest values.Since potassium silicate contains potassium as a primary element, this facilitates the absorption of more of it as well as silicon as a secondary element, resulting in more potassium and silicon accumulating in the leaf and thus being transferred to the tubers very easily (Shehata et al., 2018 andAbdel-Latif et al., 2019).Kanto et al. (2004) proved that spraying strawberry plants with potassium silicate leads to an increase in their silicon content by approximately 2 to 24 times compared to the control.They reported that in leaves containing more than 1.5% silicates, diseases were significantly suppressed.Also, Shehata et al. (2018) found that spraying cucumbers with potassium silicate results in an increase in silicon and potassium content in the leaves and fruits compared to the control. Yield and its components The data in Table 4 show that preharvest treated plants resulted in a significant increase in average tuber weight, tuber weight per plant, and total yield per plot compared to unsprayed plants.SWE at 2% and potassium silicate at 2% were the best treatments in terms of yield increase and organic ingredients, but the lower concentration of these treatments was less effective in this regard.The increase in yield is due to the increase in the weight of the tubers.The lowest value in this regard was found in untreated plants (control).This result is consistent with the one The main effect of SWE application on potato plants increases the yield and its components, which is probably due to an increase in growth indicators, resulting in an increase in tuber number and tuber/plant weight, which is reflected in the total yield (Sarhan, 2011), the role of SWE as a tuber growth stimulator may be associated with increasing the availability of various nutrients and facilitating the availability of macronutrients, as well as its ability to meet some micronutrient requirements of the crop (Helaly, 2021).Also, presence of auxins in seaweed extracts will increase production of vitamins and hormones.In addition, naturally contain the hormones GA3, GA7, cytokinins, vitamins, and macro-and microelements present in chelated forms, which are easily absorbed by plants and thus enhance the efficiency of the plant photosynthesis, which increases the yield (Helaly, 2016).Cytokinins have a role in dividing nutrients in the vegetative organs, while in the reproductive organs; High levels of cytokinin may be related to nutrients.The response of the plant treated with SWE suggests that it is involved in stimulating the transfer of cytokinin from the roots of the plant to its reproductive organs, or most likely in stimulating the amount or synthesis of endogenous cytokinin (Arthur et al., 2003).Increased available cytokinin will in turn lead to increased 
cytokinin supply to ripening fruits (Abd El-Moniem and Abd-Allah, 2008). It is possible that the stimulating effect of potassium silicate treatment on the crop is a result of increased plant absorption of nitrogen, phosphorus and potassium, which improves the characteristics of vegetative growth and increases the yield.This led to increased stimulation of photosynthesis and metabolism of various organic compounds that are transmitted through the leaves of plants to the tubers, with increasing in the weight and qualitative characteristics of the tubers (Abou-El-Hassan et al., 2020), also Salim et al. (2014) and Talebi et al. (2015) explained that potassium silicate helps plants grow, become more resistant to yeast, fungal and bacterial diseases, increase tolerance to environmental stresses and increase plant productivity.Fertilization with silicon in turn leads to an impressive result in producing potato tubers with a greater fresh weight and thus an increase in the total yield (Soltani et al., 2018). Tuber quality characteristics The data presented in Tables 5 and 6 revealed that all pre-harvest applications resulted in a significant moral increase in tuber physical properties: tuber length and dry matter as well as their chemical properties (carbohydrates and starch contents) compared to untreated plants.In this regard, SWE at 2% and potassium silicate at 2% were the best in tuber quality, with significant differences between them in these characteristics, these substances are follow by 1%.The lowest values for these traits were found in untreated plants.There were no significant differences in tuber diameter between all treatments and control.However, the highest values of tuber firmness were obtained with potassium silicate at 2% or 1% with significant differences between them, followed by SWE at 2% or 1% with significant differences between them.These results were in agreement with those obtained by Abd El-Gawad et al.The effect of SWE concentrations could be due to increased absorption of various nutrients and increased photosynthesis, which led to increased accumulation of metabolites in the reproductive organs, which in turn ultimately led to improved tuber quality (Haider, 2012). Potassium silicate increased the firmness of the tubers.This occurs due to the deposition of silicon in the walls of plant cells, which can increase the firmness and rigidity of their walls (Khan et al., 2017).Artyszak (2018) who explained that silicon would help ensure food safety under climatic changes.Marschner (2012) found that potassium silicate has a positive role on the dry matter content due to the presence of potassium in it.This plays an important role in improving the products of the photosynthesis process and their easy transfer from leaves to tubers. 
Silicon and potassium content in tubers The data presented in Table (7) show that the silicon and potassium content in the tubers differed significantly between treatments. As for silicon, potassium silicate at 2% and 1% gave significantly higher silicon content in the tubers, with significant differences between the two concentrations. The minimum values of silicon content were observed with SWE at 1% or 2% and in untreated plants (control), with no significant differences between them. Regarding potassium content, potassium silicate at 2% and 1% gave the highest values, with significant differences between the two concentrations, while SWE at 1% or 2% was less effective in this regard and the two concentrations did not differ significantly. The lowest value was recorded in untreated plants (control). Shehata et al. (2018) reported a similar result.

Weight loss percentage The data presented in Table 8 show that the percentage weight loss of potato tubers increased significantly and continuously during the storage period. These results are consistent with those of Zaki et al. (2021) for potato tubers. The increase in physiological weight loss may be due to damage and sprouting, but also to tuber moisture loss through transpiration and nutrient consumption during respiration, both of which increase with the length of storage (Kazami et al., 2001). All preharvest treatments resulted in a significant decrease in percentage weight loss compared to the untreated control; 2% SWE minimized the percentage weight loss, followed by 2% potassium silicate, with a significant difference between them. The lower concentrations of these substances had the smallest effect on reducing weight loss. In contrast, the untreated control gave the highest percentage weight loss in both seasons, and these values were consistent with those reported by Shehata et al. (2018) and Shehata et al. (2019). Our results show a beneficial effect of SWE on the chemical properties and vegetative growth of potato plants, which maintained the physiological metabolic balance of the tubers after harvest and reduced tuber drying. This is consistent with Abd El-Basir (2013). The decrease in weight loss also reflects the role of SWE in reducing susceptibility to fungal and bacterial diseases and lowering the respiration rate, which greatly improves the storability of the tubers (Kolodziejczyk, 2016). One of the notable effects of silicates on reducing the rate of weight loss during storage is that silicon covers the stomata of fruits with a layer that lowers their respiration rate and thereby reduces weight loss (Hammash and El Assi, 2007). In addition, silicon reduces the permeability of cell membranes (Laing et al., 1993) and increases membrane stability and integrity (Agarie et al., 1998). Following Si application, modification of cell membranes occurs, resulting in reduced surface water loss and thus reduced weight loss (Epstein, 2001). Shehata et al. (2018) showed that silicon also helps fruits maintain their quality because it inhibits respiration and thus reduces the physiological loss in fruit weight.
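The weight-loss percentages discussed above follow from the tuber weights recorded at the start of storage and at each monthly inspection. As a minimal illustration, and assuming the usual definition of physiological weight loss relative to the initial weight (the paper does not spell the formula out), the calculation is:

# Minimal sketch of the physiological weight-loss calculation; the weights
# below are hypothetical and the formula (loss relative to the weight at the
# start of storage) is the standard definition, assumed rather than quoted.
initial_weight_g = 250.0
monthly_weights_g = [247.1, 243.8, 240.2, 236.0]      # after 1-4 months at 10 C

for month, weight in enumerate(monthly_weights_g, start=1):
    loss_pct = (initial_weight_g - weight) / initial_weight_g * 100.0
    print(f"month {month}: weight loss = {loss_pct:.2f} %")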
The interaction between storage times and preharvest treatments had a significant impact on the percentage of weight loss.After 4 months of storage, the 2% SWE treatment showed the lowest percentage of weight loss, while the untreated control showed the highest percentage of weight loss.These results were for both seasons. General appearance (GA) The data presented in Table (9) showed that the general appearance decreases with the length of storage time of the tubers at 10 °C.Similar results were reported by Kassem et al. (2014) reported about potato tubers.The decrease in GA during storage could be due to wilting, decay, shrinkage, color change and germination (Banaras et al., 2005).All treatments had the highest GA value compared to the untreated control.However, SWE treatment at 2% and potassium silicate treatment at 2% were the most effective in maintaining GA, with no significant difference between them.A lower concentration of these substances was less effective.The worst GA was recorded for the untreated control.These results were obtained across two seasonal studies and were consistent with those of Shehata et al. (2018) andShehata et al. (2019). The enhanced effect of the two seasons can be attributed to the fact that SWE contains nutrients, organic compounds, and macro-and microelements (Khan et al., 2009) and is rich in organic acids, enzymes and several mineral substances (Gad EL-Hak et al., 2012), these minerals (K, Ca, Fe, Mg and Mn) reduce the percentage weight loss and preserve color during different storage times (Shehata et al., 2015).Kaluwa et al. (2010) proved that the general effect of silicon is a suppression of respiratory rate and an increased accumulation of antioxidants and total phenols, which increases the fruit's ability to relieve stress and tolerate long-term storage. The interaction between storage periods and treatments was significant.However, 2% SWE and potassium silicate showed good appearance after 4 months of storage, while 1% SWE gave good appearance after 3 months, while the untreated control showed unacceptable appearance at the end of storage (4 months) in both seasons. Decay percentage The data presented in Table (10) clearly show that the degree of decay of potato tubers increased steadily and consistently with increasing storage time.These results are consistent with Zaki et al. (2021) on potato tubers and may be due to the increased water loss and use of complex molecules in the respiration process, which affects the shine and shine of the tubers, reduces the strength of the tubers and makes them more susceptible to fungal infections (Kolodziejczyk, 2016). All treatments reduced the rate of decay and prolonged the storage period of tubers compared to the control treatment.However, seaweed extract at 2% and potassium silicate at 2% treatments did not show any decay during 4 months.However, SWE at 1% showed no decay up to 3 months of storage.It was also observed that decay began in the control tubers after two months of storage, then it increased during 4 months in both seasons, and it was consistent with Afifi (2016) and Zaki et al. (2021).This decrease in the decay of SWE treatment may be attributed to its role as an anti-disease, reducing susceptibility to disease, and reducing the respiration rate, which greatly affects the increase in the tubers' ability to store (Kolodziejczyk, 2016).Tarabih et al. 
(2014) proved that potassium silicate plays a role in increasing the concentration of antifungal compounds, as well as enzymes such as the PAL enzyme, to have the ability to increase concentration of antioxidant compounds in the cells, which reduces the occurrence of decay in fruits, and this is a very good one effect. Tuber firmness The firmness of the potato tuber is an important factor in consumer acceptance of the product (Kassem et al., 2014).The data presented in Table (11) showed that the strength of potato tubers decreased significantly during storage in the two seasons.Kuyu et al. (2019) came to the same results for potato tubers.Tigist et al. (2013) found that the decrease in firmness is due to the loss of moisture, which causes wilting and wrinkling on the surface of the tuber.Therefore, loss of firmness occurs as a result of weight loss or is a sign of it.An obvious decrease in hardness is expected and this may be due to increased metabolic process and activity of enzymes responsible for starch hydrolysis and degradation (Page et al., 2008). All treatments had a significant effect on increased tuber firmness compared to the control.However, potassium silicate at 2% or 1% were the most effective in reducing firmness loss, with significant differences between them, followed by SWE at 2% or 1%, while the highest firmness losses of tubers were found in the control.According to Shehata et al. (2018) andShehata et al. (2019) these result achieved in two seasons. The results may be due to an increase SWE in the amount of K and Ca available in the ground and facilitated for plants (Abou El-Yazied et al., 2012) These elements, in turn, increase in the fruit, increasing osmotic capacity and water absorption and reducing water loss, which affects the hardness of the fruit during storage (Afifi, 2016). Potassium silicate had a positive effect on maintaining the firmenss of potato tubers by depositing silicon between the cell wall and membranes, which maintained the soluble barrier against leakage during storage ( Tesfay et al., 2011).Silicon increases the activity of many cellular enzymes, particularly chitinase, peroxidase and polyphenol oxidase, and increases the deposition of intracellular callose formation and hydrogen peroxidase (Shetty et al., 2012), this improves firmness and fabric firmness and extends durability (Liang et al., 1993). The interaction between treatments and storage times was significant during the two seasons.Potassium silicate at 2% showed the highest firmness values for the tubers in all storage periods, while the lowest values were recorded for the untreated tubers. Dry matter percentage The data obtained in Table (12) show that the dry matter content of potato tubers decreases significantly over the storage period.Similar results were obtained by Zaki et al. (2021) about potato.Fruit respiration is an important chemical process for all living plant tissues, as starches and sugars (dry matter) are oxidized to carbon dioxide and water vapor with the release of heat (Atala et al., 2019).The treatments effectively maintain dry matter content compared to the control treatment.The 2% SWE and potassium silicate treatments were the most effective dry matter maintenance treatments, with significant differences, followed by the 1% SWE treatment, and the lowest values resulted in control.These results were obtained in two seasons and were reported by Zaki et al. (2021). 
As for the interaction between treatments and storage times, it was significant after 4 months of storage.The 2% SWE treatment resulted in an increase in dry matter percentage, followed by the 2% potassium silicate treatment, with significant differences, while the control treatment gave the lowest percentage in the same period. Total carbohydrates percentage The total carbohydrate content data in Table (13) showed that tuber content decreases with the length of storage time, and these results were obtained in two consecutive seasons.The decrease in total carbohydrate concentration may be due to the greater loss of sugar through respiration compared to the loss of water through transpiration (Wills et al., 1998). The treatments had a significantly higher value of total carbohydrate content compared to the control.However, treatment with 2% SWE and 2% potassium silicate was most effective in maintaining total carbohydrate content, with significant differences.A lower concentration of these substances was less effective.The lowest value for total carbohydrate content was measured in tubers of untreated plants.These results were obtained in both seasons of the study and are consistent with Gad El-Rab (2018); This may be due to the enhanced effect of SWE on leaf area (photosynthetic surfaces), total chlorophyll and the content of some important minerals, as shown by a study by Hamed (2012), and the maintenance of carbohydrate content (Mohamed, 2014). As for the interaction between storage time and treatment, it was significant during both study seasons.Starch hydrolysis takes place mainly inside the tubers, where phosphorus is broken down and phosphorylase is activated.This is related to the accumulation of sugar in the tubers (Claassen et al., 1993).The treated tubers had a significantly higher starch content compared to the control.The most effective treatment was 2% SWE and potassium silicate as it maintained the starch content with significant differences, while the lowest starch content value was recorded in the control.These results were achieved over two seasons and were consistent with Zaki et al. (2021).In general, after 4 months of storage, the interaction between treatments and storage duration was significant.The treatment with 2% SWE and potassium silicate maintained the starch content without significant differences in two seasons, while the untreated control treatment gave the lowest percentages in the same storage period. CONCLUSION It might be concluded that potato plant cv.Spunta treated with SWE at 2% and potassium silicate at 2% enhanced vegetative growth indicators of plant, total yield and its components and tuber quality.Also, tubers obtained from these treatments may help in extending postharvest life of the potato tubers.This treatment plays an important role in lowering weight loss, suppressing pathogens infections and delaying softening and maintaining quality of tubers during storage, which all lead to enhancing keeping quality of the tubers as well as retaining its nutritional value for longer periods at 10ºC and 85% relative humidity (RH). Table 2 . Effect of foliar application with potassium silicate and seaweed extract on vegetative growth characters of potato in 2022 and 2023 seasons (combined analysis). Means in the same column having the same letter are not significantly different at 0.05 level by Duncan's multiple rang test. Table 3 . 
Effect of foliar application with potassium silicate and seaweed extract on silicon and potassium content in leaves of potato plants in 2022 and 2023 seasons (combined analysis). Means in the same column having the same letter are not significantly different at 0.05 level by Duncan's multiple rang test. Table 4 . Effect of foliar application of potassium silicate and seaweed extract on total tuber yield and its components of potato in 2022 and 2023 seasons (combined analysis). Means in the same column having the same letter are not significantly different at 0.05 level by Duncan's multiple rang test. Table 5 . Effect of foliar application of potassium silicate and seaweed extract on physical properties of potato tubers in 2022 and 2023 seasons (combined analysis). Treatments Tuber length (cm) Tuber diameter (cm) Firmness (kg/cm 2 ) Dry matter (%) 1% potassium silicate Means in the same column having the same letter are not significantly different at 0.05 level by Duncan's multiple rang test. Table 7 . Effect of foliar application of potassium silicate and seaweed extract on silicon and potassium content in tubers of potato in 2022 and 2023 seasons (combined analysis). Means in the same column having the same letter are not significantly different at 0.05 level by Duncan's multiple rang test. Table 8 . Effect of foliar application of potassium silicate and seaweed extract on weight loss (%) of potato tubers during cold storage at 10 °C in 2022 and 2023 seasons. Means in the same column having the same letter are not significantly different at 0.05 level by Duncan's multiple rang test. Table 9 . Effect of foliar application of potassium silicate and seaweed extract on general appearance (score) of potato tubers during cold storage at 10 °C in 2022 and 2023 Means in the same column having the same letter are not significantly different at 0.05 level by Duncan's multiple rang test. Table 10 . Effect of foliar application of potassium silicate and seaweed extract on decay (%) of potato tubers during cold storage at 10 °C in 2022 and 2023 seasons. Means in the same column having the same letter are not significantly different at 0.05 level by Duncan's multiple rang test. Table 11 . Effect of foliar application of potassium silicate and seaweed extract on firmness (kg/cm 2 ) of potato tubers during cold storage at 10 °C in 2022 and 2023 seasons. Means in the same column having the same letter are not significantly different at 0.05 level by Duncan's multiple rang test. Table 13 . Effect of foliar application of potassium silicate and seaweed extract on total carbohydrates (%) of potato tubers during cold storage at 10 °C in 2022 and 2023 The data inTable (14)clearly show that the percentage of starch in potato tubers decreases steadily and continuously with the length of the storage period; the results were achieved in both seasons and were consistent with Zaki et al. (2021) on potato. Table 14 . Effect of foliar application of potassium silicate and seaweed extract on starch (%) of potato tubers during cold storage at 10 °C in 2022 and 2023 seasons. Means in the same column having the same letter are not significantly different at 0.05 level by Duncan's multiple rang test. SA, Said ZA, Attia MM and Rageh MA (2015). authors thank Prof. Dr. 
Said Abdullah Shehata for his support and valuable advice, and thank the project "Development of Export Crops - Extending the Shelf Life of Fruits and Reducing Losses" for its support. quality and storability of sweet pepper. Annals of Agric. Sci., Moshtohor, 57(1): 77-88. Shehata, Effect of foliar application of micronutrients, magnesium and wrapping films on yield, quality and storability of green bean pods. Fayoum J. Agric. Res. Zheng Q, Shen Q and Guo S (2013). Postharvest: An introduction to the physiology and handling of fruit, vegetables and ornamentals. Ed.: CAB International, Wallingford, UK. Zagazig J. Agric. Res.
2024-06-28T15:16:40.897Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "ce46d3553546c81e3f5d07bbe3d26900cb37b48d", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.21608/sjas.2024.293878.1432", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e57da9cd7179acb0159e0a1fccaac1f73101094c", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
247244878
pes2o/s2orc
v3-fos-license
Momentum-Resolved Exciton Coupling and Valley Polarization Dynamics in Monolayer WS$_2$ Coupling between exciton states across the Brillouin zone in monolayer transition metal dichalcogenides can lead to ultrafast valley depolarization. Using time- and angle-resolved photoemission, we present momentum- and energy-resolved measurements of exciton coupling in monolayer WS$_2$. By comparing full 4D ($k_x, k_y, E, t$) data sets after both linearly and circularly polarized excitation, we are able to disentangle intervalley and intravalley exciton coupling dynamics. Recording in the exciton binding energy basis instead of excitation energy, we observe strong mixing between the B$_{1s}$ exciton and A$_{n>1}$ states. The photoelectron energy and momentum distributions observed from excitons populated via intervalley coupling (e.g. K$^-$ $\rightarrow$ K$^+$) indicate that the dominant valley depolarization mechanism conserves the exciton binding energy and center-of-mass momentum, consistent with intervalley Coulomb exchange. On longer timescales, exciton relaxation is accompanied by contraction of the momentum space distribution. In TMD monolayers, the same strong Coulomb forces that give exciton binding energies on the order of ∼0.5 eV [23,24] can also give rise to substantial interactions between exciton states, both within the same valley (intra-valley coupling) and between different valleys (intervalley coupling). In particular, the Coulomb exchange interaction couples bright excitons of opposite spin character, coupling A and B excitons within the same valley (A + ← → B + ) or degenerate excitons in opposite valleys (A + ← → A − , B + ← → B − ), as illustrated in Fig. 1a) [25][26][27][28]. Due to this strong coupling, the exciton eigenstates are, in general, a combination of exciton states with mixed spin and valley characters [26,27,[29][30][31][32]. Optical excitation addresses only the bright states, in which the electron and hole occupy the same valley with small total momentum Q = k e − k h and have net spin zero. Photoexcitation thus creates a superposition of eigenstates * thomas.allison@stonybrook.edu which then rapidly evolves in time, leading effectively to relaxation of the initial excitation and valley depolarization. The strength of the eigenstate splitting due to Coulomb exchange, and thus its contribution to valley depolarization, is disputed among theoretical models [26,27,30,[33][34][35]. The additional role of exciton-phonon interactions in both intervalley and intravalley exciton dynamics is also non-negligible [36][37][38][39]. Recently, technological advancements in time-and angle-resolved photoemission spectroscopy (tr-ARPES) have enabled the technique to be applied to small monolayer TMD samples [62][63][64][65][66], providing direct momentumspace visualization of exciton wavefunctions as well as previously inaccessible dark states. In this article, we present comprehensive tr-ARPES measurements of the exciton dynamics in monolayer WS 2 following excitation at 2.4 eV, the nominal B exciton resonance [61]. We . c) In our tr-ARPES spectra, the exciton signals are separated by exciton binding energy. Thus, exciton states with lower binding energies appear at higher energies above the VBM. E BG denotes the electronic band gap. Excitation and binding energies are derived from [23,24,61]. d) Linearly or circularly polarized pump pulses excite the sample, and a time-delayed XUV probe pulse photoejects electrons that are extracted into the momentum microscope column. 
e) Cut along the WS2 K−Γ−K valence band structure, collected with hν probe = 27.6 eV. f ) Raw exciton signal at 210 fs delay (hν probe = 25.2 eV). The K + , K − , Σ, and Γ valley locations are indicated. measure full 4D (k x , k y , E, t) photoelectron distributions after both linearly polarized and circularly polarized photoexcitation. Resolving exciton binding energy (Fig. 1c)) instead of excitation energy, we observe previously unseen strong mixing between A n>1 and B 1s excitons in the initial photoexcited spectrum. With parallel momentum detection across the full Brillouin zone, we provide the first reported momentum-space visualizations of circular dichroism and ultrafast valley depolarization in the monolayer TMDs. We also observe that the exciton relaxation is accompanied by significant contraction of the initial exciton distribution in momentum space. These measurements report on the time-, energy-, and momentum-dependence of intervalley and intravalley exciton coupling, providing new insights on exciton formation in TMDs and the many-exciton coupled wave function. Our measurement scheme is shown in Fig. 1d). Linearly and circularly polarized pump pulses (hν pump = 2.4 eV) and p-polarized extreme ultraviolet (XUV) probe pulses (hν probe = 20−30 eV) with variable delay illuminate the sample and photoelectrons are collected by a custom time-of-flight momentum microscope [67,68]. High data rates are enabled by conducting the experiment at 61 MHz repetition rate with XUV probe pulses produced via cavity-enhanced high-harmonic generation (CE-HHG). The laser system and HHG beamline have been previously described in detail in [69][70][71]. The sample is an exfoliated monolayer of WS 2 stacked on an exfoliated buffer layer of hexagonal boron nitride on a silicon substrate. We use the spatial imaging capabilities of the momentum microscope [72] to isolate the photoelectron signal from the ∼10×10 µm 2 monolayer region of interest of the sample. The valence band structure of the sample for a cut along the K-Γ-K axis of WS 2 is shown in Fig. 1e)). The measured band structure shows that the valence band maximum (VBM) is located at the K + and K − valleys at the edges of the WS 2 Brillouin zone, as expected for a monolayer sample. The energy resolution is broadened to approximately 160 meV due to sample inhomogeneity [62], but the spin-orbit splitting of the valence bands at the K valley is still clearly resolved. Additional sample characterization and experimental details can be found in the Supplemental Material [73]. All measurements are done at room temperature unless stated otherwise. The 2.4 eV pump pulses produce photoexcited signals at the K + and K − valleys (Fig. 1f)). In contrast to previous studies on monolayer WSe 2 /hBN and strongly pumped WS 2 on bare silicon [62,64], the signals we ob- serve at the Σ valleys are centered ∼100 meV higher than the K valley signal, and are much weaker in intensity than previously reported in WSe 2 [62]. We find that the Σ/K intensity ratio depends strongly on the probe photon energy, but is always more than 2.5× smaller than that found in similar measurements of bulk WS 2 [73,74], where the Σ valleys are lower in energy than the K valleys but the photoemission matrix elements are similar. Thus, we believe there is only minor involvement of excitons with electrons at Σ and focus here on the K valley excitons. 
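Each measurement yields a full 4D (kx, ky, E, t) photoelectron distribution, and the valley-resolved quantities discussed below are obtained by integrating such data over momentum regions around K+ or K−. The sketch below illustrates this reduction for a plain NumPy array; the array contents, axis ranges, valley position and ROI radius are placeholders, not values taken from the experiment.

# Minimal sketch: reduce a 4D (kx, ky, E, t) photoelectron data set to an
# energy-vs-delay map integrated over a momentum region around K+.
# Array contents, axes, the valley position and the ROI radius are hypothetical.
import numpy as np

nkx, nky, nE, nt = 128, 128, 200, 40
data = np.random.poisson(1.0, size=(nkx, nky, nE, nt)).astype(float)

kx = np.linspace(-2.0, 2.0, nkx)                  # 1/Angstrom
ky = np.linspace(-2.0, 2.0, nky)
k_plus = (1.3, 0.0)                               # assumed K+ position
roi_radius = 0.25                                 # 1/Angstrom

KX, KY = np.meshgrid(kx, ky, indexing="ij")
roi = (KX - k_plus[0])**2 + (KY - k_plus[1])**2 < roi_radius**2

# Sum over the momentum ROI -> array of shape (nE, nt): EDCs vs. pump-probe delay
edc_vs_delay = data[roi].sum(axis=0)
print(edc_vs_delay.shape)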
By varying the excitation fluence between 1.3 µJ/cm 2 and 29 µJ/cm 2 , we find the tr-ARPES signals to be fluence independent below 5 µJ/cm 2 [73]. Thus, all measurements reported here are conducted at 5 µJ/cm 2 excitation fluence, corresponding to an excitation density of approximately 7 x 10 11 carriers/cm 2 at our pump energy [61]. The cross-correlation of the pump and probe pulses yields a Gaussian instrument response function with 200 ± 20 fs FWHM. Photoexcitation with linearly polarized light populates the K + and K − valleys equally and both valleys show the same dynamics. The time-resolved photoelectron spectrum recorded with s-polarized excitation is shown in Fig. 2a). No intensity is ever observed in the conduction band at E VBM + hν pump = 2.4 eV, indicating the direct formation of bound excitons. Exciton signals appear below the conduction band due to the exciton binding energy [62,75,76], as illustrated in Fig. 1c) and the leftmost scales in Fig. 2. The most prominent feature at early pump-probe delays is the large intensity at energies between 2.05−2.3 eV above the VBM in the K valley. This corresponds to exciton binding energies compatible with excited A excitons (A n>1 ) [23,24]. At longer delays, a lower energy feature centered at approximately 1.93 eV grows in and persists beyond the longest pump-probe delays recorded (25 ps). This lower energy feature appears at binding energies compatible with those expected for both the A 1s and B 1s excitons, which are expected to have similar binding energy [24,77,78]. Similar results are obtained with p-polarized excitation, indicating that excitation of spin-forbidden intravalley excitons by the out-of-plane component of the electric field has a negligible effect on the observed signals, as expected due to the much smaller transition dipole for these excitations [73,79,80]. The spectrum of Fig. 2a) consists of multiple overlapping components. To deconvolve the overlapping spectral and temporal components of the experimental data, we have applied global analysis (GA) [81][82][83][84][85], which reduces the signal to a few principal spectral components S i (E), each with simple exponential time dynamics f i (t) convolved with the instrument response, viz.

I(E, t) = Σ i S i (E) [f i ∗ IRF](t).

We find an excellent fit with only N = 2 components as shown in Fig. 2b). Component 1 corresponds to the initially excited population and is peaked at E − E VBM = 2.15 eV but also shows a long tail to lower photoelectron energies (larger binding energies) covering the region of the B 1s exciton. We assign this to an initially excited mixture of A n>1 and B 1s excitons. Despite initial photoexcitation of the B exciton resonance, we clearly observe strong weighting towards lower binding energies consistent with population of the A n>1 excited states. This is seen both in the GA results and in the raw data, both of which are weighted much more towards the A n>1 states relative to the B exciton than would be expected from the optical absorption spectrum [61]. This indicates very strong mixing of the B 1s states with A n>1 states, such that photoexcitation of what is nominally the B exciton resonance promptly populates A n>1 exciton states as well. Such A/B mixing due to intravalley Coulomb exchange has been discussed before [32], although the degree of mixing we observe here is much larger than suggested by this previous work. Component 1 decays with a time constant of 378 ± 40 fs, giving rise to component 2, shown in Fig. 2b).
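To make the global analysis concrete, the sketch below implements the two-component sequential kinetics used here (exponential populations convolved with the Gaussian instrument response, following the form quoted in the Supplemental Material) for a single momentum-integrated transient. In the actual GA the two lifetimes are shared across all photoelectron energies; the time axis, amplitudes and starting values below are hypothetical.

# Minimal sketch of the two-component sequential kinetics used in the global
# analysis: component 1 decays with tau1 and feeds component 2 (lifetime tau2),
# both convolved with a Gaussian IRF. Values below are hypothetical; the full
# GA shares tau1, tau2 across all photoelectron energies.
import numpy as np
from scipy.optimize import curve_fit

def sequential_model(t, c1, c2, tau1, tau2, t0=0.0, irf_fwhm=0.200):
    """c1*f1 + c2*f2 on a uniformly spaced delay axis t (ps)."""
    dt = t[1] - t[0]
    tp = np.clip(t - t0, 0.0, None)                       # time after excitation
    step = np.heaviside(t - t0, 0.5)
    f1 = step * np.exp(-tp / tau1)
    f2 = step * (np.exp(-tp / tau2) - np.exp(-tp / tau1))
    sigma = irf_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    tk = np.arange(-4.0 * sigma, 4.0 * sigma + dt, dt)    # IRF kernel, centred on 0
    kernel = np.exp(-0.5 * (tk / sigma) ** 2)
    kernel /= kernel.sum()
    return (c1 * np.convolve(f1, kernel, mode="same")
            + c2 * np.convolve(f2, kernel, mode="same"))

t = np.linspace(-1.0, 5.0, 241)                           # delay axis (ps)
trace = sequential_model(t, 1.0, 0.6, 0.38, 50.0) + 0.02 * np.random.randn(t.size)
popt, _ = curve_fit(lambda tt, c1, c2, tau1: sequential_model(tt, c1, c2, tau1, 50.0),
                    t, trace, p0=[1.0, 0.5, 0.4])
print(f"fitted tau1 = {popt[2]*1e3:.0f} fs")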
Component 2 is centered at the energy of the long-delay photoelectron spectrum and has a GA lifetime longer than 50 ps. We assign component 2 to a mixture of relaxed bright and dark 1s excitons with binding energies of approximately 0.35 eV. We find adding additional components beyond N = 2 does not improve the quality of the global fit or offer additional physical insight. More details of the GA can be found in the Supplemental Material [73]. The dynamics observed under linearly polarized excitation can be due to a mixture of both intervalley and intravalley relaxation mechanisms. To disentangle their relative contributions, we use circularly polarized pump pulses to prepare valley-polarized excitons. We excite the sample with both σ + and σ − polarizations, which preferentially excite the K + and K − valleys, respectively. Figs. 3a) and 3b) show the integrated K + and K − valley signals under σ + and σ − polarizations, respectively. Fig. 3c) shows the valley asymmetry, ρ(t), defined by

ρ(t) = (I K+ − I K−)/(I K+ + I K−),

where I K+ and I K− denote the integrated intensity in the K + and K − valleys, respectively. The valley asymmetry decays in approximately 250 fs, limited by the instrument response. We observe similar time scales for the decay of ρ(t) for low-temperature data recorded at 126 K [73], suggesting exciton-phonon coupling is not a main driver of the dynamics. For comparison, we also show the s-polarized data in Fig. 3c), which shows no valley asymmetry. The K + and K − valley signals following s-polarized photoexcitation can be found in the Supplemental Material [73]. Figs. 4a) and 4b) show the time-resolved photoelectron spectra and the S 1 (E) GA spectral components for the K − and K + valleys after σ − excitation. Strikingly, the spectrum in the unpumped K + valley does not show any appreciable difference from that of the initially pumped K − valley, except for an approximately 50 fs delay between the population of the two valleys. We quantify this by applying the same GA described above to the K + and K − valleys independently in the circularly polarized data. For the unpumped valley, we allow for a shift, ∆t, in the onset of the time dynamics f i (t) → f i (t − ∆t). We find the spectral components and exponential rates in the K + and K − valleys to be similar to one another and also to those found under s-polarized excitation. The delayed onset captured by ∆t was found to be the single notable difference between the dynamics in the two valleys. From the GA fitting, we find ∆t = 43 ± 4 fs for σ − excitation and ∆t = 53 ± 6 fs for σ + . These 50 fs shifts are also apparent in the integrated signals of Figs. 3a) and 3b). As a control, we analyzed the s-polarized data in the same way and find ∆t = 6 ± 5 fs [73]. The small 50 fs shift, indicating very rapid valley depolarization, is consistent with the ∼250 fs time scale on which ρ(t) becomes zero when the instrument response is considered. The integrated GA model results are also shown as the lines in Fig. 3. FIG. 4. Excitons formed by σ − excitation. a) Comparison of the K − and K + valleys following σ − excitation shows that the two valleys present nearly identical dynamics, the difference being a 43 ± 4 fs delay in the appearance of signal in the unpumped K + valley. hν probe = 25.2 eV. b) The GA spectral components S1(E) for the K − and K + valley signals show that the population transfer from the pumped valleys to the unpumped valleys does not involve significant changes in the energy distribution.
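Given the integrated K+ and K− intensities (after the matrix-element scaling described in the Supplemental Material), the valley asymmetry plotted in Fig. 3c) is a one-line computation; a minimal sketch with hypothetical intensities:

# Valley asymmetry rho(t) = (I_K+ - I_K-) / (I_K+ + I_K-), evaluated per delay.
# The intensity arrays below are hypothetical placeholders.
import numpy as np

def valley_asymmetry(i_k_plus, i_k_minus):
    i_k_plus = np.asarray(i_k_plus, dtype=float)
    i_k_minus = np.asarray(i_k_minus, dtype=float)
    total = i_k_plus + i_k_minus
    return np.divide(i_k_plus - i_k_minus, total,
                     out=np.zeros_like(total), where=total > 0)

rho = valley_asymmetry([120.0, 150.0, 140.0, 130.0],    # I_K+ vs. delay
                       [ 20.0,  60.0, 110.0, 128.0])    # I_K- vs. delay
print(rho)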
Importantly, the prompt valley depolarization we observe in the tr-ARPES signal is not accompanied by energy relaxation. This is evident from both the data of Fig. 4a) as well as the GA analysis in Fig. 4b), with S 1,K + (E) closely resembling S 1,K − (E). This is consistent with valley depolarization driven by intervalley Coulomb exchange, which couples energetically degenerate bright exciton states, A ± ← → A ∓ , B ± ← → B ∓ [8,26,28,30,33], but is in contrast to other recently proposed non-degenerate intervalley depolarization mechanisms that couple A ± ← → B ∓ , B ± ← → A ∓ [45,60,[86][87][88]. The observed timescale is also consistent with calculations of intervalley exchange matrix elements. For large ∼0.1Å −1 center-of-mass momentum, valley depolarization via the exchange interaction is expected to be extremely efficient, with eigenstate energy splittings of 10s of meV [28] and corresponding valley depolarization predicted in several 10s of fs [33]. In Fig. 5, we additionally examine the momentum distributions of the photoelectrons. The data shown are recorded after σ + excitation with 30 eV probe energy. A representative image of the initial momentum distribution of the K + valley signal is shown in Fig. 5a). At 5 ps, the distribution has relaxed to the narrower one in Fig. 5b). We quantify the extent of the photoelectron momentum distributions in the K + and K − valleys as a function of time by fitting the energy-integrated K valley signal with a Gaussian, exp[−(1/2)|k − K| 2 /(∆k) 2 ], and report the standard deviation, ∆k, in Fig. 5. We observe that the initial photoelectron momentum distribution encompasses nearly twice the extent of the relaxed photoelectron population at approximately 5 ps delay time. The final distribution width of ∆k ∼ 0.07Å −1 is commensurate with the recent experimental measurement of relaxed exciton states of WSe 2 at 90 K [63]. Remarkably, no large differences are observed in the momentum distributions in the K + and K − valleys. For example, the initial K + valley distribution with ∆k = 0.12Å −1 arrives at the K − valley 50 fs later with the same width. The Coulomb exchange interaction conserves the total exciton momentum Q = k e − k h . While we do not measure Q directly in this experiment, we conjecture that the width of the distribution in Q is correlated with the width of our photoelectron distributions. Thus, the conservation of the photoelectron momentum distribution after intervalley coupling suggests conservation of the exciton momentum, consistent with the intervalley exchange coupling mechanism of valley depolarization. While energy conservation and momentum conservation during valley depolarization are both consistent with intervalley Coulomb exchange coupling, the similarity of the energy and momentum distributions between the pumped and unpumped valleys suggests that rate of transfer does not appear to depend strongly on the exciton binding energy or exciton momentum. The strength of the exchange interaction is expected to scale as |Q| and the square of the electron-hole wavefunction overlap [28,33,89]. This would indicate faster transfer for excitons with larger momentum or tighter electron-hole binding. However, within our experimental resolution, we do not observe such Q-or E-dependence in the population transfer. In this work, we have used time-of-flight momentum microscopy combined with ultrashort XUV pulses at 61 MHz repetition rate to image the exciton dynamics in monolayer WS 2 . 
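The momentum-space widths ∆k discussed above come from fitting the energy-integrated K-valley signal with the isotropic Gaussian exp[−(1/2)|k − K|²/(∆k)²]. A minimal sketch of such a fit on a synthetic image is given below; the axes, noise level and starting values are hypothetical.

# Minimal sketch: extract Delta-k by fitting an isotropic Gaussian,
# exp[-(1/2)|k - K|^2 / dk^2], to an energy-integrated K-valley image.
# The synthetic image and axis ranges are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def iso_gaussian(k_sq, amplitude, dk, offset):
    return amplitude * np.exp(-0.5 * k_sq / dk**2) + offset

kx = np.linspace(-0.5, 0.5, 101)                   # 1/Angstrom, relative to K
ky = np.linspace(-0.5, 0.5, 101)
KX, KY = np.meshgrid(kx, ky, indexing="ij")
true_dk = 0.12
image = np.exp(-0.5 * (KX**2 + KY**2) / true_dk**2) + 0.02 * np.random.rand(*KX.shape)

k_sq = (KX**2 + KY**2).ravel()                     # squared distance from K
popt, _ = curve_fit(iso_gaussian, k_sq, image.ravel(), p0=[1.0, 0.1, 0.0])
print(f"fitted dk = {popt[1]:.3f} 1/Angstrom")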
Our measurements record the dynamics in the natural momentum-space basis in which theory and calculations are formulated, and shed new light on the ultrafast intervalley and intravalley coupling dynamics in monolayer TMDs. While these dynamics have been the subject of extensive optical spectroscopy, to our knowledge these are the first reported momentum-space measurements of valley depolarization in the monolayer TMDs. Future work with higher resolution can address the energy-and momentum-dependence of exciton coupling in further detail and also study these phenomena in 2D heterostructures. S1. Sample fabrication and characterization To assemble the monolayer WS 2 /hBN/Si heterostructure, WS 2 (HQ Graphene, n-type) and hexagonal boron nitride (hBN) flakes are first exfoliated onto separate SiO 2 (300 nm)/Si substrates (Fig. S1a) and b)). Raman spectroscopy (Fig. S1c)) and photoluminescence (PL) measurements (Fig. S1d)) are used to distinguish monolayer WS 2 flakes. The spectra are measured at room temperature with 514 nm (2.41 eV) excitation in a backscattering configuration using a Renishaw Raman microscope. The power of the excitation beam is ∼100 µW, and a 100× objective lens focused the beam to a spot size of ∼1 µm on the target flake. The collected signal is dispersed by a grating with a groove density of 1800/mm. The integration time is set to 120 s for Raman measurements and 5 s for PL measurements. The strong PL signal and the obvious longitudinal acoustic mode (∼350 cm −1 ) in the Raman spectrum show that the target WS 2 transferred to the sample stack is a monolayer [1]. Next, a dry transfer method is used to stack the WS 2 /hBN heterostructure. A polydimethylsiloxane (PDMS) hemisphere is first made on a clean glass slide and then covered by a thin film of polycarbonate (PC). This PDMS/PC stamp is then used to pick up the monolayer WS 2 flake from the SiO 2 /Si substrate (Fig. S1e)). The pick-up procedure is to lower the PDMS/PC stamp and heat the sample stage to 70 • C, and when the target flake is fully covered by PC film, shut down the heating and slowly detach the PDMS/PC from the sample stage; the WS 2 flake is picked up by the PDMS/PC stamp after separation. Then, the PDMS/PC/WS 2 is used to further pick up the bottom hBN flake (∼10-20 nm thickness) by the same procedure (Fig. S1f)). The PDMS/PC/WS 2 /hBN is then transferred onto a pre-patterned gold-grid-marked Si substrate with good alignment by heating the sample stage to 130 • C and slowly lifting up the PDMS stamp; the PC/WS 2 /hBN remains on the Si substrate. The PC film is then dissolved in chloroform. Afterwards, the WS 2 /hBN heterostructure (Fig. S1g)) is annealed at 300 • C in ultra-high vacuum (UHV) for 1 hour to clean up any polymer residue. For ARPES measurements, the finished sample was annealed to 150 • C in UHV daily for 30-60 minutes and allowed to cool completely. The sample can be clearly identified using the real-space imaging mode of the momentum microscope (Fig. S1h)). S2. tr-ARPES experimental apparatus and data analysis The experiments presented here are driven by a home-built Yb:fiber frequency comb laser [4] producing 1 µJ, 185 fs pulses centered at 1035 nm, at 61.3 MHz repetition rate. The laser pulses are resonantly enhanced in an optical cavity to produce ∼10-10.5 kW of intracavity average power. At the cavity focus, the laser reaches a peak intensity of ∼10 14 W/cm 2 and high-order harmonics are generated in a jet of argon gas. 
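As a rough consistency check of the quoted cavity parameters, the pulse energy, peak power, and the focal spot area implied by the ~10^14 W/cm^2 peak intensity can be estimated as below. Only the intracavity power, repetition rate, pulse duration and peak intensity are taken from the text; the spot size is an inference and pulse-shape factors are neglected.

# Back-of-envelope check of the intracavity parameters quoted above.
# Only the ~10.25 kW power, 61.3 MHz repetition rate, 185 fs duration and
# ~1e14 W/cm^2 peak intensity come from the text; the implied spot size
# is an inference, and Gaussian pulse/beam shape factors are neglected.
avg_power_W = 10.25e3
rep_rate_Hz = 61.3e6
pulse_fwhm_s = 185e-15
peak_intensity_W_cm2 = 1e14

pulse_energy_J = avg_power_W / rep_rate_Hz                  # ~167 uJ per pulse
peak_power_W = pulse_energy_J / pulse_fwhm_s                # ~0.9 GW
spot_area_cm2 = peak_power_W / peak_intensity_W_cm2         # ~1e-5 cm^2
spot_radius_um = (spot_area_cm2 / 3.141592653589793) ** 0.5 * 1e4

print(f"pulse energy ~ {pulse_energy_J*1e6:.0f} uJ, peak power ~ {peak_power_W/1e9:.2f} GW")
print(f"implied focal radius ~ {spot_radius_um:.0f} um")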
Harmonics from 10 to 40 eV are separated by a time-preserving grating monochromator, and the desired, isolated harmonic is refocused onto the sample [5,6]. For tr-ARPES measurements, we employ a custom time-of-flight momentum microscope to measure the real-and momentum-space images of the sample [7,8]. In the real-space imaging mode, we use a broadband Xe-Hg lamp to uniformly illuminate the full sample. The work function contrast between monolayer WS 2 , hBN, and silicon allows S2 FIG. S1. Sample characterization. Optical microscope images of a) the exfoliated monolayer WS2 flake and b) the hBN. c) Raman spectrum of the exfoliated WS2, hν excitation = 2.41 eV. The intensity ratio of the WS2 2LA mode at the M point (∼350 cm −1 ) and the A1g mode (∼415 cm −1 ) of >2 is indicative of a monolayer sample [1,2]. d) Photoluminescence emission of the exfoliated WS2 shows strong emission intensity arising from the monolayer. hν excitation = 2.41 eV. e) Optical microscope image of the WS2 picked up on the PDMS/PC stamp and f ) the PDMS/PC/WS2 with the picked up bottom hBN flake. g) Image of the WS2/hBN on the target Si substrate. The blue and green dashed lines indicate the outline of the monolayer WS2 and hBN, respectively. The inclusion of the hBN buffer layer is essential to preserve the electronic structure of the monolayer TMDs [3]. h) Image of the sample in the real-space imaging mode of the momentum microscope, taken with hν probe = 4.75 eV. The WS2 flake extends onto the Si substrate to prevent sample charging during ARPES measurements. S3 the WS 2 /hBN overlapping region of interest to be clearly identified for the tr-ARPES experiments (Fig. S1h)). For momentum-space measurements, we implement an electron high-pass filter using two grids in front of the detector to pass only a ∼4.4 eV wide energy region of the photoemission distribution to the detector. This allows us to detect only the desired photoemission signal near the Fermi level and mitigate saturation of the detector by suppressing the strong photoemission signal from fully occupied states below the relevant regions of the valence band. The energy cut-off is apparent at the bottom of the band structure at −3 eV in Fig. 1e) of the main text. The high-pass filter is tunable and is adjusted to pass approximately 1−1.5 eV below the valence band maximum (VBM) for pump-probe experiments. All measurements are performed in vacuum better than 4 x 10 −10 Torr. The measured tr-ARPES 4D (k x , k y , E, t) photoelectron distributions are normalized at each individual pump-probe delay to the maximum intensity in the image at that delay. The VBM energy is determined by fitting the upper and lower spin-orbit split band intensities in the K valleys to a double Gaussian and extracting the center of the upper fit band. At our low excitation fluence, we do not observe any time-resolved shifts of the VBM in this work. We also do not observe any band gap renormalization or any laser-assisted photoelectric effect (LAPE) signal. The presented background-subtracted signals are produced by averaging the 3D (k x , k y , E) images of the five most negative pumpprobe delays and subtracting this average from the 4D (k x , k y , E, t) distribution. To produce the valley asymmetry ratios presented in Fig. 3 of the main text and Fig. S5, the intensities of the K + and K − valleys are scaled to the intensity at the longest pump-probe delay for the corresponding valleys in the linearly polarized excitation data. 
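The data treatment just described (per-delay normalization, subtraction of the averaged negative-delay frames, and scaling of the K± intensities to their long-delay values from the linearly polarized data) amounts to a few array operations; a minimal sketch is given below, assuming the data are held in a NumPy array ordered (kx, ky, E, t) with the five most negative delays first. Array shapes, scale factors, and the ROI bookkeeping are placeholders.

# Minimal sketch of the data treatment described above, assuming a 4D NumPy
# array ordered (kx, ky, E, t) whose first five delay entries are the most
# negative pump-probe delays. Shapes and scale factors are placeholders.
import numpy as np

def preprocess(data, k_plus_scale=1.0, k_minus_scale=1.0, roi_plus=None, roi_minus=None):
    data = data.astype(float)
    # 1) normalise each delay frame to its maximum intensity
    peak = data.max(axis=(0, 1, 2), keepdims=True)
    data = data / np.where(peak > 0, peak, 1.0)
    # 2) subtract the average of the five most negative delays as a static background
    background = data[..., :5].mean(axis=-1, keepdims=True)
    data = data - background
    # 3) scale the K+ / K- momentum regions so that matrix-element differences cancel
    #    (scale factors taken from the long-delay, linearly polarized data)
    if roi_plus is not None:
        data[roi_plus] *= k_plus_scale
    if roi_minus is not None:
        data[roi_minus] *= k_minus_scale
    return data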
This allows for normalization of ARPES matrix element effects that cause unequal intensities in the different valleys across the momentum space image, in particular due to the direction of the probe electric field. With our tunable XUV probe, we observe strong dependence of the Σ valley photoemission intensity on the probe photon energy. Above approximately 23 eV, the intensity of the photoemission signal in the Σ valleys is extremely weak and nearly undetectable. With 22.8 eV and 20.4 eV probe energies, we observe the ratio K/Σ for the maximum total integrated valley intensities is approximately 1.73 and 0.96, respectively. An example of the exciton signals observed with hν probe = 20.4 eV is shown in Fig. S2a). Upon photoexcitation, we observe that the appearance of signal in the Σ valleys rises at the same time as that of the K valleys (Fig. S2b)). This Σ valley signal appears centered near 2.1 eV above the VBM, approximately 100 meV higher in energy than the K valley signal, as evidenced by the energy distribution curves presented in Fig. S2c). This is in contrast to recent tr-ARPES measurements at higher excitation densities for monolayer WSe 2 /hBN and monolayer WS 2 on bare silicon [9,10], where the Σ valley was observed to be roughly isoenergetic with the K valley. The prompt appearance of signal in the Σ valleys at early times could result from the pump excitation of the B exciton resonance which may be situated energetically near the electronic band gap. Photoexcitation above the band gap has previously shown prompt and strong appearance of excitons with electrons in the Σ valleys of monolayer WSe 2 /hBN [9]. Interestingly, calculations for monolayer MoS 2 have shown that the B 1s exciton, which includes a mixture of the A 1s exciton due to intravalley Coulomb exchange coupling, may show small, but nonzero, amplitude for the wavefunction in the interior of the Brillouin zone towards the Σ valley [11]. The integrated K valley signal intensities for various pump pulse fluences are shown in Fig. S3. We observe that, with pump fluences ≤ 5 µJ/cm 2 , the time dynamics are invariant to the fluence. Based on these results, all of the data presented in this work was taken with an incident pump fluence of 5 µJ/cm 2 . Accounting for the 5.5% absorption of monolayer WS 2 at 2.4 eV [12], we estimate that this fluence corresponds to an excited carrier density of approximately 7 x 10 11 carriers/cm 2 . This excitation density is well below the ∼3 x 10 12 carriers/cm 2 limit of the Mott transition [13]. In this pump fluence regime, the sample maintains a temperature of 302-303 K during the experiment. {x ,ŷ ,ẑ } as the basis of the incident light such that −ẑ is the propagation direction (Fig. S4). The incident circularly polarized electric field in this basis, E ± , can be expressed as: where E 0 denotes the amplitude, ω is the angular frequency, andσ ± denotes the right-and left-circular polarization states. In the sample basis {x,ŷ,ẑ},x =x andŷ = cos θŷ + sin θẑ. Thus, the electric field in the sample basis is given by: E ± = E 0 (cos ωtx ± sin ωt(cos θŷ + sin θẑ)) (S3) = E 0 (cos ωtx ± sin ωt cos θŷ ± sin ωt sin θẑ). (S4) In the sample basis, the circularly polarized light can be parameterized as: and thex andŷ terms can be reexpressed as: The electric field can then be written as: In our experimental geometry, θ ∼ = 48 • . 
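For the geometry defined above (θ ≅ 48°), the in-plane field is elliptical with semi-axes E0 and E0·cos θ. A short numerical check of the resulting helicity contrast is given below; the circular decomposition of an elliptical field into amplitudes (a ± b)/2 is the standard one and is stated here as an assumption rather than taken from the text.

# Numerical check of the in-plane helicity contrast for theta ~= 48 deg,
# assuming the standard circular decomposition of an elliptical field with
# semi-axes E0 and E0*cos(theta): circular amplitudes (1 +/- cos(theta))*E0/2.
import numpy as np

theta = np.deg2rad(48.0)
a, b = 1.0, np.cos(theta)                 # in-plane semi-axes (units of E0)

dominant = (a + b) / 2.0                  # co-rotating (intended) helicity
minority = (a - b) / 2.0                  # counter-rotating helicity
print(f"amplitude ratio ~ {dominant / minority:.1f} : 1")          # ~5 : 1
print(f"intensity ratio ~ {(dominant / minority) ** 2:.0f} : 1")   # ~25 : 1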
For photoexcitation of the sample by incident left circularly polarized light (σ − ), the electric field in the sample plane is given by: indicating that in the plane of sample, the amplitude ratio between theσ − component and theσ + component is approximately 5:1 and the intensity ratio is 25:1. This ratio is reversed in the case of right circularly polarized photoexcitation. Thus, our circularly polarized photoexcitation of the sample is predominantly of the desired helicity, but contains a small contribution from the opposite helicity as well as an out-of-plane component. In WS 2 , excitation by the out-of-planeẑ component of the electric field has a very small transition dipole moment [14], particularly for the B exciton resonance [15], and thus we do not expect any appreciable contribution from thisẑ component. This is confirmed experimentally by comparing the results of s-and p-polarized excitation, with no differences observed within our measurement uncertainty. S6. Global analysis The global analysis (GA) algorithm employed here is similar to that previously applied to a variety of time-resolved spectroscopies [16,17]. GA is widely used to decompose conjested time-resolved spectra into individual spectral components described by exponential time dynamics [18][19][20]. This approach assumes that the spectral and temporal components of a time-resolved spectrum I(E, t) can be separated (i.e., that the spectral components do not shift in time). Here, the momentum-integrated signal of the desired valleys, I(E, t), is decomposed into two principal spectral components, S 1 (E) and S 2 (E), each described by exponential time dynamics f 1 (t) and f 2 (t) convolved with the Gaussian IRF: The two exponential decay lifetimes are denoted by τ 1 and τ 2 , and c 1 and c 2 are amplitude constants. An intensity offset factor, y 0 , is included as a fit parameter as the initial, constant value of S 1 (E) and S 2 (E). The time dynamics for the spectral component S 2 (E) in Eqn. S12 arise from the assumption that component 2 is formed by the decaying population of component 1 rather than by direct excitation by the pump pulse. The global fit is performed by minimizing χ 2 = (I exp. − I model ) 2 /σ 2 exp. . We find reduced χ 2 values of 1.2−1.4. For the GA of the separated K + and K − circular excitation data, we employ the same method but we allow the onset time for the time dynamics, t 0 , to be a fit parameter: I(E, t) = S 1 (E)[c 1 (e −(t−t0)/τ1 * IRF )] + S 2 (E)[c 2 (−e −(t−t0)/τ1 * IRF + e −(t−t0)/τ2 * IRF )]. (S13) The K + and K − valleys are fit separately, and the shift in the onset of the time dynamics between the two valleys, ∆t, is given by: where t 0,K pumped and t 0,K unpumped are the t 0 parameters for the pumped and unpumped K valleys, respectively. The fitted lifetimes τ 1 and the time shifts ∆t for the K valley signals for s-, σ − , and σ + excitation are presented in Table S1. The dominant source of the error in τ 1 is the systematic uncertainty in the IRF width, which is estimated by repeating the fit over the IRF confidence interval and adding the spread in the fit results in quadrature with the statistical error. TABLE S1. Global analysis fit results. In all fits, the value of τ2 was found to be substantially longer than the longest recorded pump-probe delay of the dataset. The FWHM of the IRF is fixed at 200 fs. All fitted experimental data presented here was taken with hν probe = 25.2 eV. S7. 
Temperature dependence To examine the possible role of exciton-phonon coupling in the ultrafast valley depolarization, we performed additional experiments with the sample held at 126 K. The valley asymmetry, given by

ρ(t) = (I K+ − I K−)/(I K+ + I K−),

for σ + photoexcitation at room temperature and 126 K is shown in Fig. S5. I K+ and I K− refer to the integrated intensity in the K + and K − valleys, respectively. The strong similarity between the observed timescales for the loss of valley asymmetry at each temperature indicates that exciton-phonon interactions do not play a significant role in the valley depolarization mechanism. S8. Comparison of K + and K − valleys after s-polarized excitation Here, we include the integrated intensities following s-polarized photoexcitation for the K + and K − valleys separately (Fig. S6). In contrast to the data recorded after circularly polarized excitation, we do not observe any notable differences between the K + and K − valleys under linearly polarized photoexcitation, as expected.
2022-03-07T06:47:22.595Z
2022-03-04T00:00:00.000
{ "year": 2023, "sha1": "ac3ce50b3f2fe3d79e1eca4b1fa9eaf80eede0d0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9af8b0da15563a13f586c679b300b75e6694d187", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
21360777
pes2o/s2orc
v3-fos-license
Increased Nanog Expression Promotes Tumor Development and Cisplatin Resistance in Human Esophageal Cancer Cells Background/Aims: Nanog plays a key role in stem cell self-renewal and pluripotency differentiation in embryonic stem cells ( ESCs). Recently, some studies reported that abnormal expression of Nanog could be detected in several tumors, indicating that Nanog might be related to tumor development. However, studies on the correlation between Nanog expression and esophageal cancer are sparse. Methods: In this study, we established two esophageal cancer cell lines 9706-Nanog and 9706-shNanog which stably expressed Nanog and Nanog-short-hairpin RNA (shRNA) genes. Results: We found that Nanog expression could promote the proliferation and invasiveness of the cancer cells, and inhibit the apoptosis. We also treated 9706-Nanog, EC9706 and 9706-shNanog cell lines with cisplatin and evaluated the drug sensitivity of the three cell lines. We found that the sensitivity of cisplatin was decreased with increased expression of Nanog. The expression of MDR-1 was also increased in 9706Nanog cells. Conclusions: Nanog may play an important role in human esophageal cancer development, and could be used as a therapeutic target in esophageal cancer treatment. Introduction Esophageal cancer is one of the most common malignant tumors with high mortality rates in Hebei, Henan and east coast of China [1]. The treatments for esophageal cancer include surgery, chemotherapy and radiotherapy but the ef�icacy of these therapies is unsatisfactory [2,3]. Thus it's important to develop new therapeutic strategies for esophageal cancer. Previous studies found that a core regulatory network including three stemness factors Sox2, Oct3/4, and Nanog coordinately determines embryonic stem cells (ESCs) selfrenewal and differentiation, and further studies have demonstrated that Nanog is expressed at high levels in ESCs and that the expression levels decrease after differentiation of ESCs [4][5][6][7]. There were many similar biological properties between cancer cells and embryonic stem (ES) cells, such as the characteristics of continuous proliferation and uncontrollable differentiation, which indicating these ESC self-renewal molecules may conceptually also contribute to tumorigenesis and development. Some studies have suggested that these factors could regulate self-renewal and pluripotency differentiation of cancer stem cell [8] and may play a role in human malignancy [9]. Previous studies have found that Sox2 might promote cell proliferation and tumorigenesis of breast cancer [10], and Oct3/4 might associated with the early stage of pancreatic cancer carcinogenesis [11,12], and even correlated with lymph node metastasis in colorectal cancer cells [13]. High expression of OCT-4 has been observed in esophageal cancer [14], suggesting association between OCT-4 and proliferation of esophageal cancer. Nanog is a member of ANTP class NK family genes and plays a key role in stem cell self-renewal and pluripotency differentiation [4,5]. In addition to self-renewal regulation of embryonic development, the abnormal expression of Nanog gene is found in malignant germ cell tumors, such as embryonic carcinoma and seminoma [8]. The abnormal expression of Nanog is also detected in solid tumors, such as pulmonary [15], breast [16], cervix [17], oral cavity [18], ovary [19], gastrointestinal [20] and kidney [21] cancer. Jeter CR et al. 
[9] evaluated the expression, origin, and functions of NANOG in different tumor cells, and found that multiple tumor cells in vitro and in vivo express NANOG (NANOG mRNA is derived from a Transcribed pseudogene, NANOGP8), and down regulation of NANOG inhibits tumor cells development associated with an inhibition of cell proliferation, clonal expansion, and clonogenic growth of tumor cells. This systematical investigation demonstrated that NANOG expression in human cancer cells is biologically functional in regulating tumor development. In addition, researchers also found that Nanog overexpression may induce chemo-resistance in oral squamous cell carcinoma [22] and prostate cancer [23], promote the tumor recurrence to resist cisplatin. Comprehensive and systematic studies of NANOG expression in human tumor cells have been proceeded, but research of the correlation between Nanog expression and esophageal cancer cells development is lacking. In our pilot study, the supression effect of shRNA target Nanog gene was demonstrated in vitro. The suppression effect was compared with the off-target effect of control shRNA, and eukaryotic expression vectors pcDNA3.1-Nanog and pSUPER-EGFP-shNanog, which could respectively express and knock-down Nanog gene in human esophageal cancer cell line EC9706, were also constructed. In this study, we detected the expression of Nanog in EC9706 cells, and used pcDNA3.1-Nanog and pSUPER-EGFP-shNanog to transfect into EC9706 cells, and established two esophageal cancer cell lines 9706-Nanog and 9706-shNanog, which could express Nanog and Nanog-shRNA gene stably. Using these cell lines, the impact of Nanog expression on the tumor development including proliferation, apoptosis and invasion behavior of esophageal cancer was evaluated. In addition, drug resistance of cisplatin, a widely used chemotherapeutic agent for esophageal cancer treatment, was also investigated in these cell lines. Establishment of esophageal cancer cell lines EC9706 cells were transfected with pcDNA3.1-Nanog and pSUPER-EGFP-shNanog respectively using liposome 2000, and screened by G418 reagent, and the cells that have stably incorporated pcDNA3.1-Nanog and pSUPER-EGFP-Nanog-shRNA could not be killed by G418. Initially these cells were treated with 400 µg/mL of G418 then the dose was increased to 800µg/mL after 2 weeks. EC9706 cells transfected with normal medium were regarded as control. Cells were cultured until cell clusters derived from replication of cell clones were observed. Then, one of the cell clusters was picked by trypsin �ilter paper and expanded for further experiments. Finally, Real-time PCR and western blot was carried out for identify the Nanog expression of cell lines. Real-time PCR and Western blot Real-time PCR and Western blot analysis were used to evaluate mRNA and protein expression of target gene. Total RNA and protein was isolated from esophageal cancer cells. cDNA was synthesized by random priming and real-time PCR was performed used SYBR green mixture. The sequences of the primers used for real-time PCR are as follows: Nanog-F(5'-ACCTGGTGCACCCAATCCTGG -3'); Nanog-R(5'-CCCCAGCAGCTTCCAAGGCAG -3'). MDR1-F(5'-CCGAGCACACCTGGGCATCG -3'); MDR-1-R(5'-GGCCTCCTTTGCTGCCCTCAC -3'); β-actin-R (5'-GTCGTCGACAACGGCTCCGG -3'); β-actin-F(5'-TGGGCCTCGTCGCCCACATA -3'). Following ampli�ication, compare CT values of samples (normalized to β-actin) in order to assess fold differences in mRNA levels of the target genes. 2-Delta Delta CT Method was used in relative gene expression data analysis. 
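The relative expression analysis named above (the 2^-ΔΔCt, or Livak, method) normalizes each target Ct value to β-actin and then to the control sample. A minimal sketch, with hypothetical Ct values rather than measured ones:

# Minimal sketch of the 2^-(delta delta Ct) calculation used for relative
# Nanog and MDR-1 expression; all Ct values below are hypothetical.
def relative_expression(ct_target, ct_actin, ct_target_control, ct_actin_control):
    delta_ct_sample = ct_target - ct_actin                 # normalise to beta-actin
    delta_ct_control = ct_target_control - ct_actin_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)                        # fold change vs. control

# Example: Nanog in 9706-Nanog cells relative to parental EC9706
fold_change = relative_expression(ct_target=22.1, ct_actin=16.0,
                                  ct_target_control=25.3, ct_actin_control=16.1)
print(f"fold change = {fold_change:.2f}")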
Total protein was subjected to SDS-PAGE and transferred to PVDF membranes, which were probed with the indicated antibodies. Primary antibodies were applied and incubated overnight at 4°C as follows: anti-Nanog and anti-MDR-1 (Santa Cruz Biotechnology) at 1:200; β-actin (Santa Cruz Biotechnology) at 1:500. Immunoreactive bands were visualized by the ECL chemiluminescence method and quantified with the Gelpro32 image processing and analysis program.

Colony formation assay

Colony formation was analyzed by a plating clone assay. First, 1000 cells/well in logarithmic growth were seeded onto 6-well plates and incubated for 10 days. After incubation, cells were fixed with 4% paraformaldehyde for 20 min and stained with haematoxylin for 5 min. Finally, the staining solution was washed away and colony formation was recorded by microphotography. The number of colonies was counted by viewing multiple fields under a microscope.

FACS analysis

EC9706 cells were collected and washed twice with cold PBS, centrifuged at 1000 rpm for 5 min and re-suspended in 500 µL PBS. The suspension was supplemented with 5 µL Annexin V-FITC and 2.5 µL PI and incubated for 15 min in the dark before analysis by flow cytometry. Cells were classified as follows: intact cells (FITC-/PI-), apoptotic cells (FITC+/PI-) and necrotic cells (FITC+/PI+). The rate of apoptotic cells was calculated.

Transwell filter invasion assay

The Transwell filter invasion assay is one of the most frequently used methods to analyze cell migration in vitro. This assay involves a two-compartment system in which cells may be induced to migrate from an upper compartment through a porous membrane into a lower compartment. Invasion assays were performed in 24-well Transwell chambers (Corning, Lowell, MA, USA). Six Transwell chambers were set up for each group, each chamber was coated with 100 µg Matrigel, and 1 × 10^5 cells were seeded into each chamber. The lower compartments were filled with 500 µL of RPMI 1640 medium containing 10% FBS, and the chambers were incubated in a 5% CO2 humidified incubator at 37°C for 48 h. To determine the extent of migration, the filter inserts were removed from the culture wells, the cells that had not migrated were cleared from the top surface of the filter, and the remaining cells that had migrated to the underside of the filter were fixed with 95% alcohol for 10 min and stained with hematoxylin-eosin for 10 min. Finally, the invasion behavior of the cells was quantitated by viewing multiple fields under a microscope; cells in an area corresponding to 20% of the filter were counted.

MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay

The MTT assay and trypan blue staining were used to determine the cisplatin sensitivity of the EC9706 cell lines. Briefly, cells were harvested and seeded in 96-well plates. After 24 h, fresh RPMI 1640 medium containing cisplatin (3 µg/mL) was added. After incubation for different time intervals (12, 24, 36, 48, 60, 72 h), 10 µL of MTT (5 mg/mL) was added to each well and the cells were further incubated at 37°C for 4 h. The supernatant was then removed and 100 µL DMSO was added to each well. The absorbance at 450 nm was measured using a microplate reader. Six replicate wells were used for each group.

Statistical analysis

Statistical analysis was performed with one-way ANOVA and Student's t-test. Statistical significance was set at P < 0.05.
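As a worked illustration of how cisplatin sensitivity can be read out of the MTT measurements above, the sketch below computes a growth-inhibition rate from absorbance values using the conventional formula 1 - (OD_treated - OD_blank)/(OD_control - OD_blank); the OD numbers are hypothetical, not data from this study.

```python
def inhibition_rate(od_treated, od_control, od_blank=0.0):
    """Growth inhibition (%) from MTT absorbance readings."""
    return (1.0 - (od_treated - od_blank) / (od_control - od_blank)) * 100.0

# Hypothetical mean OD450 readings after 48 h of cisplatin exposure
# (untreated control OD = 1.05, blank OD = 0.05):
for name, od in [("9706-Nanog", 0.82), ("EC9706", 0.55), ("9706-shNanog", 0.21)]:
    print(f"{name}: {inhibition_rate(od, 1.05, 0.05):.0f}% inhibition")
```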
Esophageal cancer cell lines expressing Nanog and shNanog

After 30 days of culture under G418 screening, stable clones were selected and expanded, and two esophageal cancer cell lines were established, named 9706-Nanog and 9706-shNanog (Fig. 1). Observation of morphology showed that 9706-Nanog grew faster than 9706-shNanog. Real-time PCR and Western blot were performed to verify the expression of Nanog in these cell lines. The results showed that Nanog expression in 9706-Nanog was up-regulated and higher than in normal EC9706 cells at both the mRNA and protein levels (P < 0.05). The expression of Nanog in 9706-shNanog was lower than in normal EC9706 cells because of the down-regulation of Nanog by shRNA (P < 0.05, Fig. 2).

Nanog promotes clonogenicity and proliferation of esophageal cancer cells

To determine the proliferation and replication abilities of the 9706-Nanog and 9706-shNanog cell lines, a colony formation assay was performed. The results showed that the clonogenicity of 9706-Nanog cells was enhanced, as judged by the size and number of cell colonies. Colonies of 9706-Nanog were larger than those of EC9706 and 9706-shNanog, and their number was also greater than in the other groups. By contrast, colonies of 9706-shNanog were smaller and fewer. The colony formation rates of 9706-Nanog, EC9706 and 9706-shNanog were 34.55±4.66%, 21.11±3.56% and 12.8±3.33%, respectively. The clonogenicity and proliferation abilities of 9706-Nanog cells were stronger than those of the EC9706 and 9706-shNanog cells (P < 0.05, Fig. 3).

Apoptosis of esophageal cancer cells

Flow cytometry was applied to assess apoptosis in the 9706-Nanog, EC9706 and 9706-shNanog cell lines. Little apoptosis was observed in 9706-Nanog and EC9706 cells; the apoptotic rates of 9706-Nanog and EC9706 cells were 1.63±1.08% and 1.54±0.98%, respectively (P > 0.05, Fig. 4). The apoptotic rate of 9706-shNanog cells was 19.97±2.08%, which was higher than in the 9706-Nanog and EC9706 cells (P < 0.05, Fig. 4).

Nanog and cisplatin sensitivity of esophageal cancer cells

Cisplatin sensitivity was evaluated by the MTT method. The results showed that cisplatin inhibited the proliferation of EC9706 cells, and the inhibition rate increased in a time-dependent manner. Compared to EC9706, 9706-Nanog cells exhibited weaker proliferation inhibition, and most 9706-Nanog cells remained in good growth condition even 48 h after treatment with cisplatin (Fig. 6). By contrast, the growth of 9706-shNanog cells was strongly inhibited and most cells were dead 48 h after treatment with cisplatin. Compared to EC9706, the growth inhibition rate induced by cisplatin was increased in 9706-shNanog and decreased in 9706-Nanog (P < 0.05); these data indicate that Nanog can influence the cisplatin sensitivity of EC9706. The changes in apoptosis upon treatment with cisplatin were also evaluated by flow cytometry. There was obvious apoptosis of EC9706 cells after treatment with cisplatin (16.44±2.48%); fewer apoptotic cells were detected in 9706-Nanog (7.9±1.52%), while more apoptotic cells were detected in 9706-shNanog (23.59±1.78%) (P < 0.05, Fig. 7). To more precisely determine the effect of Nanog on the cisplatin sensitivity of EC9706, we also detected the expression of MDR-1, which is regarded as an important factor in drug resistance and the sensitivity of esophageal cancer to chemotherapy. Real-time PCR and Western blot assays showed that MDR-1 expression was closely related to Nanog expression.
Compared to EC9706, the expression of MDR-1 was increased in 9706-Nanog cells and decreased in 9706-shNanog cells at both the mRNA and protein levels, with significant differences (P < 0.05) (Fig. 8).

Discussion

Nanog is a novel transcription factor that maintains the self-renewal and pluripotency of stem cells [4,5]. As a key self-renewal molecule, Nanog is detected not only in embryonic stem cells (ESCs), but also in germ cell tumors [8]. Ezeh et al. [16] showed that the Nanog protein is expressed in breast cancer tissues. Zhang et al. [24] reported that several tumor cell lines express NANOGP8, a processed pseudogene of Nanog. Jeter et al. [9] previously completed a systematic investigation of the expression, origin, and functions of NANOG in different tumor cells, found that multiple tumor cells in vitro and in vivo express NANOG, and demonstrated that NANOG expression in human cancer cells might be related to tumor development. These results suggest that Nanog may play an important role in tumor development. In the present study, we used two eukaryotic expression vectors, pcDNA3.1-Nanog and pSUPER-EGFP-shNanog, which were transfected into the human esophageal cancer cell line EC9706, and established two cell lines, 9706-Nanog and 9706-shNanog, by screening cells with G418 reagent. Real-time PCR and Western blot assays were carried out to verify the expression of Nanog in the 9706-Nanog and 9706-shNanog cell lines. Based on these cell lines, we evaluated the relationship between Nanog and the biological characteristics of human esophageal cancer cells, including the clonogenicity, proliferation, invasion and apoptosis of cell lines with Nanog overexpression and loss of function. We showed that the colony formation rate of 9706-Nanog cells was higher than that of the EC9706 and 9706-shNanog cell lines, and that there were more invading cells in 9706-Nanog than in the other cell lines. These results indicate that expression of Nanog promotes the clonogenicity, proliferation and invasion abilities of the human esophageal cancer cell line EC9706. Our data also indicate that Nanog inhibits apoptosis of esophageal cancer cells, as increased numbers of apoptotic cells were observed by FACS in 9706-shNanog cells. Taken together, Nanog is not only expressed in the human esophageal cancer cell line EC9706, but is also closely related to the malignant characteristics that usually give rise to tumorigenesis and progression of esophageal cancer. Furthermore, inhibition of Nanog expression may be an effective treatment for patients with esophageal cancer. Multidrug resistance (MDR) enables cancer cells to resist anticancer drugs of widely varying structure and function [25,26], reducing drug sensitivity in cancer cells [27,28]. Cisplatin is a commonly used chemotherapeutic drug for human esophageal cancer, but drug resistance has become a major issue. Expression of Nanog is increased when cancer cells enter an MDR state [22], suggesting a possible relationship between Nanog and MDR in cancer cells [18,29]. In our study, we treated the 9706-Nanog, EC9706 and 9706-shNanog cell lines with cisplatin and evaluated their drug sensitivity. We found that the expression of MDR-1 was increased in 9706-Nanog cells but inhibited in 9706-shNanog cells. Sensitivity to cisplatin decreased as Nanog expression levels increased. These data indicate that Nanog is related to the expression of the MDR-1 gene and thereby changes the sensitivity of human esophageal cancer to cisplatin.
Therefore, Nanog may be used as a novel target in the study of drug resistance in esophageal cancer. In this study, we were unable to discriminate between the Nanog and NanogP8 genes, so we could not determine the expression levels of these two genes or the biological differences between Nanog and its pseudogenes in esophageal cancer. Future studies are required to assay Nanog and NanogP8 mRNA separately, and to perform sequence analysis of the 3'UTR region to discriminate among the Nanog alleles and NanogP8. In conclusion, this study showed that Nanog is expressed in the human esophageal cancer cell line EC9706 and promotes tumor development. Overexpression or loss of function of Nanog was associated with the clonogenicity, proliferation, invasion and apoptosis of EC9706 cells. In addition, expression of Nanog was related to the cisplatin resistance of EC9706. However, the biological function of NANOG expressed in human esophageal cancer cells remains unclear, and more importantly, the function of the multiple pseudogenes of the Nanog gene in regulating tumorigenesis and tumor development is still controversial.
2018-04-03T05:45:53.294Z
2012-09-01T00:00:00.000
{ "year": 2012, "sha1": "a62592c452d7e6f43ddf7b0a9814cc0a7830173a", "oa_license": null, "oa_url": "https://www.karger.com/Article/Pdf/341471", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "52879ccaf90fbd0e1496a3c8357b09a5c5e01fd0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119321539
pes2o/s2orc
v3-fos-license
Extension of Newton-Steffenssen method by Gejji-Jafari decomposition technique for solving nonlinear equations

In this paper we extend the Newton-Steffenssen method for solving nonlinear equations, introduced by Sharma [J.R. Sharma, A composite third order Newton-Steffensen method for solving nonlinear equations, Appl. Math. Comput. 169 (2005), 242-246], by using the Gejji-Jafari decomposition technique. Several numerical examples are given to illustrate the efficiency and performance of this new method.

Introduction

Solving nonlinear equations is one of the most important problems in numerical analysis. To solve nonlinear equations, iterative methods such as Newton's method are usually used. Throughout this paper we consider iterative methods to find a simple root α of a nonlinear equation f(x) = 0, where f : I ⊂ R → R for an open interval I. Many variants of Newton's method have been suggested in the literature using different techniques. One of them is the Adomian decomposition method, used in [6] and elsewhere. To implement the Adomian decomposition method, one has to calculate the Adomian polynomials, which is itself a difficult task. Other techniques also have their limitations. To overcome these difficulties, a new decomposition technique was introduced by Gejji and Jafari in [1]. In this paper we use this technique to extend the Newton-Steffenssen method introduced by Sharma [2].

Gejji-Jafari decomposition method

Consider the nonlinear equation

f(x) = 0. (2.1)

Throughout the paper we assume that f(x) has a simple root at α and that γ is an initial guess close to α. Let us transform the nonlinear equation (2.1) into the following canonical form:

x = c + N(x), (2.2)

where N(x) is a nonlinear operator and c is a constant. The main idea of this technique is to look for a solution having the series form x = Σ_{i=0}^∞ x_i. The nonlinear operator N can be decomposed as

N(Σ_{i=0}^∞ x_i) = N(x_0) + Σ_{i=1}^∞ [ N(Σ_{j=0}^i x_j) − N(Σ_{j=0}^{i−1} x_j) ].

Thus we have the recurrence relation x_0 = c, x_1 = N(x_0), and

x_{m+1} = N(x_0 + x_1 + ··· + x_m) − N(x_0 + x_1 + ··· + x_{m−1}), m = 1, 2, ....

In [1] it is proved that the series Σ_{i=0}^∞ x_i converges absolutely and uniformly to a unique solution of (2.2).

Extension of Newton-Steffenssen method

Consider a coupled system (3.1)-(3.2) equivalent to f(x) = 0. Equation (3.1) of the system can be rewritten in the canonical form (2.2). Note that x is approximated by X_m = Σ_{i=0}^m x_i, where lim_{m→∞} X_m = x. For m = 0 and m = 1, evaluating the recurrence relation and substituting into (3.8) yields the famous Newton's method with second-order convergence,

x_{n+1} = x_n − f(x_n)/f'(x_n).

For m = 2, N(x_0 + x_1) has to be calculated. From (3.5), (3.2) and (3.10) we obtain the well-known Newton-Steffenssen method [2] with third-order convergence,

y_n = x_n − f(x_n)/f'(x_n),
x_{n+1} = x_n − f(x_n)^2 / ( f'(x_n) [ f(x_n) − f(y_n) ] ).

For the next step, N(x_0 + x_1 + x_2) has to be calculated. From (3.5), (3.2) and (3.14) this suggests a three-step iterative method, denoted (3.19). Similarly we can obtain higher-order iterative methods; for general n it can be shown that the resulting (n − 1)-step iterative method, denoted (3.20), is obtained in the same fashion. We now prove that the order of convergence of the iterative method (3.19) is four, as shown by the following theorem.

Theorem. The three-step iterative method (3.19) has fourth-order convergence.

Proof. By applying the Taylor series expansion and taking into account that f(α) = 0, we can write

f(x_n) = e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + c_6 e_n^6 + c_7 e_n^7 + c_8 e_n^8 + O(e_n^9), (3.21)

where c_k = f^(k)(α)/k!, k = 1, 2, ..., and e_n = x_n − α is the error in x_n after n iterations. Similarly, it can be shown that the iterative method (3.20) has nth-order convergence.
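To make the preceding recursion concrete, the sketch below implements the two building blocks recovered at the first truncation levels: the Newton step and the third-order Newton-Steffenssen step, the latter in the closed form given by Sharma [2] as reconstructed above. The fourth-order three-step method (3.19) is not reproduced here because its display is not recoverable from the text, and the sample polynomial is our own illustrative choice rather than one of the paper's test functions.

```python
def newton_step(f, df, x):
    """One Newton iteration (second order): the m = 1 truncation."""
    return x - f(x) / df(x)

def newton_steffenssen_step(f, df, x):
    """One Newton-Steffenssen iteration (third order), after Sharma [2]:
    x_{n+1} = x_n - f(x_n)^2 / (f'(x_n) * (f(x_n) - f(y_n))),
    where y_n is the ordinary Newton iterate."""
    fx = f(x)
    y = x - fx / df(x)
    return x - fx * fx / (df(x) * (fx - f(y)))

def solve(f, df, x0, eps=1e-12, max_iter=50):
    """Iterate until |f(x_n)| <= eps, mirroring the paper's stopping rule."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) <= eps:
            break
        x = newton_steffenssen_step(f, df, x)
    return x

# Illustrative equation (not from the paper's test set): x^3 + 4x^2 - 10 = 0
f = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x
print(solve(f, df, 1.5))  # converges to ~1.3652300134140969
```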
Numerical testing

Here we consider eight test functions to illustrate the accuracy of the new iterative method; some are taken from [7] and some from [8]. The root of each nonlinear test function is also listed. All the computations reported here were performed using Mathematica 8. Scientific computing in many branches of science and technology demands a very high degree of numerical precision. We set the working precision to 10000 digits of floating point (SetAccuracy = 10000) using the SetAccuracy command. In the examples considered in this article, the stopping criterion is |f(x_n)| ≤ ε, where ε = 10^-10000. The nonlinear test functions are listed in Table 1. We compare the performance of our new method (3.19) with the methods of Yun (YN) [3], Chun (CN) [6] and Noor (NR) [5]. The results of the comparison for the test functions are provided in Table 2. It can be seen that the resulting methods from our class are accurate and efficient in terms of the number of accurate decimal places obtained in the roots after a few iterations.

Table 1. Test functions and their roots.
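As a usage note, the high-precision setup can be approximated outside Mathematica as well. The sketch below is a hedged illustration using Python's mpmath package: it applies the same |f(x_n)| ≤ ε stopping criterion to the Newton-Steffenssen step at 200 decimal digits (a reduced stand-in for the paper's 10000-digit setting); the test equation cos(x) = x is our own choice, not one of the eight functions in Table 1.

```python
from mpmath import mp, mpf, cos, sin, nstr

mp.dps = 200  # working precision in decimal digits (the paper uses 10000)

def ns_step(f, df, x):
    """One Newton-Steffenssen step, evaluated at arbitrary precision."""
    fx = f(x)
    y = x - fx / df(x)
    return x - fx * fx / (df(x) * (fx - f(y)))

f = lambda x: cos(x) - x        # illustrative equation, root ~0.7390851332...
df = lambda x: -sin(x) - 1

x, eps = mpf("0.5"), mpf("1e-180")
for n in range(50):             # safety cap on the iteration count
    if abs(f(x)) <= eps:        # the paper's stopping criterion
        break
    x = ns_step(f, df, x)
print(n, nstr(x, 30))           # a handful of iterations suffice (cubic order)
```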
2013-04-25T10:55:43.000Z
2013-04-25T00:00:00.000
{ "year": 2013, "sha1": "4931fdeca6dd9f25bd4a788a03efec8ed3c60887", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4931fdeca6dd9f25bd4a788a03efec8ed3c60887", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
251562688
pes2o/s2orc
v3-fos-license
Study on microstructure evolution and mechanical properties of high-strength low-alloy steel welds realized by flash butt welding thermomechanical simulation

Defects can occur in the weld joints of wheel rims during post-flash-butt-welding (FBW) processing owing to poor plasticity, which deteriorates the quality and lifecycle of the finished products. Therefore, the FBW process of 440CL high-strength low-alloy (HSLA) steel was physically simulated and the influence of flash parameters on FBW joints was systematically evaluated in this study. The results showed that the width of the heat-affected zone increased with accumulated flash allowance (δf) while it declined with accelerated flash speed (vf). The recrystallization level was intensified with increased δf. Meanwhile, acceleration in vf populated the WZ with a more homogeneous microstructure, a higher recrystallization degree and a lower dislocation density. The hardness in the WZ decreased slightly (202 → 195 HV) with increased δf but dropped markedly (192 → 177 HV) with increased vf. All tensile samples fractured at the BM location, and the tensile properties of the FBW joints exhibited a good match with those of the BM, with a slight increase in strength (UTS: 468-493 MPa; YS: 370-403 MPa) but a mild decrease in plasticity (EL: 39-44%; RA: 74-79%). Furthermore, both the joint strength and ductility showed a downward tendency with increasing δf, whereas the strength slightly decreased while the ductility increased with increasing vf. These findings provide a valuable reference for the real FBW of HSLA steels with optimized microstructure and mechanical performance.

Introduction

As an important part of resistance welding technology, the flash butt welding (FBW) process is extensively applied in the transportation and oil pipeline industries, including the joining of railway tracks, automotive wheel rims, vessel mooring chains and line pipes [1-4]. During the FBW process, the contact surfaces of the workpieces are rapidly heated to melting and joined through the resistance heat generated by a heavy transient current, while one side is fixed and the other side is tightened by a movable clamp for the subsequent upset action. Once the metal behind the contact surface has been sufficiently heated to guarantee adequate plasticity, the flash current is stopped and the movable clamp applies a greater force to butt the contact surfaces together so that the molten oxides and impurities are squeezed out of the joint [2,5-7]. Theoretically speaking, FBW is a combination of melting and forging processes that produces welded joints with mechanical performance comparable to the base metal; it also offers high welding efficiency and sound weld formability, and it requires no additional filler wire [8-10]. In the past few decades, with the continuous advancement of production technology and the overall enhancement of safety and environmental awareness, automobile lightweighting has become the focus of the entire automotive industry [11]. Consequently, improving material performance, energy efficiency and cost-effectiveness has become the overall goal of the automotive industry.
The application of high-strength low-alloy (HSLA) steel is extensive and indispensable in the automotive lightweighting industry, especially for truck wheel rims, benefiting from performance characteristics including high strength and toughness and superior resistance to brittle fracture and corrosion [12-16]. As an efficient and commonly used joining operation, FBW technology exerts a vital influence on forming quality and service life in the manufacture of automobile rims. Among the process factors, the variation of key welding parameters is the crucial factor directly affecting the microstructural evolution and mechanical performance of FBW joints. Ziemian et al. [17] evaluated the significance of flash and upset sequences on the microstructure, inclusions and mechanical characteristics of ASTM A529-Grade 50 steel FBW weld joints; the strength and defects of the welds were highlighted to be sensitive to the parameters of flash duration, upset allowance and current. Shi et al. [18] comprehensively studied the influence of upset allowance on the quantity of inclusions with the flash welding duration fixed and revealed that excessively short and long upset allowances exert a detrimental effect on the breaking force. Siddiqui et al. [5] developed a computational fluid dynamics-based model to analyse alumina inclusion behaviour during AC flash welding and found that the whole inclusion motion is significantly affected by the upsetting parameters. Wang et al. [19] investigated the effects of the electrode feeding mode on the heating uniformity of the end face through a combination of numerical simulation and experiment and demonstrated that the skin effect is prominent once the AC passes through the low-temperature variation zone. Xi et al. [9,20] systematically investigated the influence of flash and upset allowances on the characteristics of RS590CL welds and recommended the optimal range for obtaining high-quality FBW joints. Lu et al. [21] assessed the effect of FBW parameters on the microstructures, mechanical properties and post-formability of B590CL welded joints and the fracture mechanisms in the practical production of B590CL wheel rims. Shajan et al. [22,23] established the correlation between upset pressure, texture evolution and toughness of micro-alloyed flash butt joints, providing a new perspective on how the upset pressure impacts the final performance. Shajan et al. [24] also investigated the effects of post-weld heat treatment on the microstructure and toughness of FBW HSLA steel and found that recrystallization could be effectively induced by applying 1000 °C for a duration of 5 s, contributing to an improvement in weld zone toughness. On the whole, the above investigations imply that the influence of parameters on the performance of FBW joints is mainly realized by affecting the thermodynamic cycles. During wheel rim production, cracks or thinning can occur in the weld joints during post-weld processes, including bulging and flaring, owing to poor plasticity, which deteriorates the quality and lifecycle of the finished products. This issue becomes more pronounced with increasing steel strength grade. In this investigation, a physical simulation method was utilized to study the microstructural and mechanical evolution of flash butt-welded HSLA steel, because it reduces work time and raw material consumption and does not disturb on-site production.
To visually present the differences in thermodynamic cycles with the variation of parameters, FBW processes of HSLA steels with different flash parameters were conducted on a Gleeble 3500 thermomechanical simulator. The purpose of the present investigation is to evaluate the influence of flash allowance and flash speed on the thermodynamic cycles, microstructure and mechanical performance of 440CL welds by controlling variables.

Raw material

In this investigation, commercial hot-rolled 440CL strips (material standards: JT020-2015 and YB/T 4151-2006) with a thickness of 6 mm were utilized as raw material. The microstructure and phase diagram of this material are shown in Fig. 1a, b, respectively. The as-received microstructure consists of major F (ferrite) and minor P (pearlite). A banded P structure formed along the rolling direction (RD), mainly derived from dendritic segregation and carbon enrichment in the molten state, and was then stretched along the RD during the rolling process. From the phase diagram calculated with Thermo-Calc software, the Ac1 and Ac3 temperatures were estimated to be 672 °C and 852 °C, respectively. The chemical composition is listed in Table 1.

FBW physical simulation process

The FBW experiments were conducted on the Gleeble 3500 thermomechanical simulator, as shown in Fig. 2a. Figure 2b shows how a pair of specimens with dimensions of 70 mm × 10 mm × 6 mm (length × width × thickness) were assembled with clamps in this simulator. To accurately simulate the real welding process, the thermal cycle was controlled through thermocouples spot-welded 10 mm away from the contact surface. During the FBW process, sample pairs were first heated to 1250 °C at a heating rate of 250 K/s and then went through the flash and upset steps in sequence, followed by air cooling. These parameters derive from on-site data collected by the flash butt welding company and provided by the HBIS group. To separately evaluate the effects of flash allowance (δf) and flash speed (vf) on the thermomechanical cycle, weld formability, microstructure evolution and mechanical properties of the welded joints, 4 sets of pairs were welded using different δf values ranging from 6 to 12 mm, and 5 sets of pairs were welded with vf values varied from 1 to 9 mm/s, while the other key parameters were kept constant with reference to the actual FBW situation. Table 2 summarizes the parameter details of each experimental set. Note that each set of tests was repeated four times to obtain four FBW joints: one for microstructural and hardness examination, and the others for tensile tests.

Characterization strategies

To clearly observe the transition of microstructure and hardness from the weld interface zone (WZ) to the base metal (BM), cross-sectional samples were extracted perpendicular to the welding direction from the welding joints, hot-mounted, ground with SiC paper down to 4000 grit, polished to a 1-µm finish using red diamond suspension and then etched with 4 vol% Nital solution. The cross-sectional morphologies and the microstructure at different zones of the joints were captured by a Leica M205A stereo microscope and a Leica MMRM light optical microscope (LOM), respectively. In-depth microscopic analysis was conducted using a JEOL JSM-7001F field emission gun scanning electron microscope (SEM) equipped with an electron backscatter diffraction (EBSD) detector, at a magnification of ×250 and a step size of 0.9 µm.
According to the obtained data (Fig. 3a, b), the high-temperature duration was prolonged from 6.217 to 12.217 s as δf increased from 6 to 12 mm, while it narrowed from 10.217 to 1.328 s as vf accelerated from 1 to 9 mm/s. As the distance between the measuring point and the contact surface is continuously shortened as the flash stage advances, and the temperature is controlled through the measuring point in this experiment, the actual temperature of the contact surface continues to rise as the flash time increases, indicating that a larger δf or a slower vf may generate more heat input during the welding process. Comparison of the force-time profiles for the different δf and vf conditions shows that samples with larger δf bear the axial pressure for longer (Fig. 3c), while samples with higher vf are subject to a higher axial pressure of shorter duration (Fig. 3d). An increase in δf contributes to a broadening of the plastic zone, which also implies intensified heat input, while an acceleration in vf shrinks the heating zone and increases the difficulty of plastic deformation [25]. Figure 4 shows representative macrographs of samples after FBW under the various δf and vf conditions. Distinctly extruded metal with burrs is carried on the contact face. Comparing samples 1#-4#, it is clear that the overall length of the welded specimens generally decreased with increasing δf. Generally speaking, provided that decent weld formability and performance are guaranteed, the smaller the raw material loss, the smaller the deviation between the theoretical and actual dimensions of the processed wheel rim. Therefore, a smaller δf is preferred when the difference in welding performance is not obvious. According to the evolution of microstructural characteristics across the weld joint, three typical zones can be distinguished: WZ, coarse-grain heat-affected zone (CGHAZ) and fine-grain heat-affected zone (FGHAZ), as shown in Fig. 5a. No macro defects, including cracks or pores, could be detected across the weld joints, implying that the welding conditions are reliable [26]. Here, the WZ is too narrow to be accurately measured; therefore, the influence of the flash parameters on the macrostructure was compared only via the width variation of the HAZ. The extremely narrow WZ is formed by the retained liquid metal, most of the liquid having splashed out of the contact surface during the flash stage. To make the comparison more convincing, three positions, including the upper surface, centre and lower surface along the welding direction, were checked. By measurement, the widths of the CGHAZ and FGHAZ generally increased, regardless of position, with increased δf, which is mainly related to larger heat input and grain coarsening (Fig. 5b); however, they declined as vf accelerated, due to the narrowed heating zone (Fig. 5c). δf represents the shortened length of the workpiece during the flash stage, which is positively correlated with the heat input. At the end of the flash stage, a liquid layer of a certain depth will form at the joint interface if an optimum δf is applied, possessing enough capacity to bear plastic deformation and facilitate the following upset stage [20]. Moreover, vf has an inverse correlation with the high-temperature duration and heat input, but a positive correlation with plastic deformation; a simple duration model consistent with the measured profiles is sketched below.
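As flagged above, the quoted durations admit a simple reading: the flash stage consumes an allowance δf at speed vf, so the high-temperature duration scales as δf/vf plus a constant offset of about 0.217 s (plausibly the fixed heating portion of the cycle). Both the linear form and the offset are inferences from the numbers reported here, not relations stated in the paper.

```python
def high_temp_duration(delta_f_mm, v_f_mm_per_s, offset_s=0.217):
    """Inferred duration model: flash allowance / flash speed + fixed offset."""
    return delta_f_mm / v_f_mm_per_s + offset_s

# Reproduces the reported values (delta_f varied at v_f = 1 mm/s;
# v_f varied at delta_f = 10 mm):
print(high_temp_duration(6, 1))    # 6.217 s
print(high_temp_duration(12, 1))   # 12.217 s
print(high_temp_duration(10, 1))   # 10.217 s
print(high_temp_duration(10, 9))   # ~1.328 s
```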
Therefore, only if appropriate δf and vf values are applied will the joints undergo moderate plastic deformation without significant grain coarsening, resulting in the minimum HAZ width.

Microstructural evolution of FBW joints

To show the microstructural evolution across the weld joint more intuitively, light optical micrographs (LOM), grain boundary misorientation angle distribution maps (GBMA) and geometrically necessary dislocation maps (GNDs) at the WZ, CGHAZ and FGHAZ of specimen 1# are displayed in Fig. 6a-i. Low-angle grain boundaries (LAGBs), with a misorientation angle of 2° ≤ θ < 15°, consist of arrays of dislocations. Boundary misorientations lower than 2° are not considered because their identification is unreliable, the EBSD technique suffering from limited angular resolution [27]. High-angle grain boundaries (HAGBs), with a misorientation angle of θ ≥ 15°, can be viewed as an indicator of the recrystallization degree [28]. The formation of GNDs originates from the stored dislocations related to non-uniform deformation, which creates a shear gradient giving rise to lattice rotation and a net Burgers vector for sets of dislocations [29]. The mean GND density can be calculated from the local average misorientation to further evaluate the stored energy. The corresponding grain size distribution, misorientation angle distribution and GND density distribution are further compared in Fig. 6j-l. The microstructure of the WZ shown in Fig. 6a exhibits a combination of various ferritic morphologies including acicular ferrite (AF), bainitic ferrite (BF) and primary ferrite (PF), with Widmanstätten ferrite (WF) also detectable. The local peak temperature in the WZ was well above Ac3, so the BM was completely austenitized and transformed into PF via reconstructive transformation and into the other ferrites via shear transformation. Compared to PF, with its typical equiaxed shape, AF is characterized by fine acicular grains with multiple orientations, while BF usually possesses a lath-like morphology. As for WF, it normally stems from grain boundary ferrite when the movement of the planar growth front slows down [30]. The GBMA map taken at the WZ, shown in Fig. 6b, reveals that the LAGBs, indicated by blue solid lines, occupy a major proportion compared with the HAGBs, indicated by black solid lines, and are inclined to distribute in the ferrite grains other than PF. Similarly, a higher GND density was found to spread over the AF, BF and WF compared with the PF (Fig. 6c). This similarity can be attributed to the fact that LAGBs are regarded as sequences of dislocations with certain orientations. The comparatively low GND density within PF grains is mainly because they had experienced a fully recrystallized phase transformation at high temperature with a sluggish cooling rate [31,32]. However, regions within PF grains at the neighbouring boundaries of the other ferrite grains revealed slight GND densities. Such dislocations were generated and accumulated during the formation of AF, BF and WF, with shape changes in the transformed zones [33]. Corresponding plastic deformation is therefore induced in the PF to accommodate these geometrical changes, which causes the dislocation accumulation [34]. The microstructure of the CGHAZ is quite similar to that of the WZ but with more PF and a finer overall grain size of 8.2 µm, compared with 10.4 µm in the WZ (Fig. 6d, j). Besides, the LAGB proportion in the CGHAZ was calculated to be 61.4%, which is 10.9% smaller than that in the WZ (Fig. 6e, k).
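To illustrate the statement above that the mean GND density can be calculated from the local average misorientation, a common estimator in the EBSD literature is ρ_GND ≈ 2θ/(u·b), with θ the kernel average misorientation (in radians), u the scan step size and b the Burgers vector. The sketch below applies this estimator with the 0.9 µm step used in this study and a Burgers vector typical of BCC iron; the specific estimator and constants used by the authors are not stated, so this is an assumption-laden illustration rather than their actual post-processing.

```python
import math

STEP = 0.9e-6        # EBSD step size in metres, as used in this study
BURGERS = 2.48e-10   # Burgers vector of BCC iron in metres (assumed value)

def gnd_density(kam_deg, step=STEP, b=BURGERS):
    """GND density (m^-2) from kernel average misorientation (degrees),
    using the strain-gradient estimate rho ~ 2*theta / (step * b)."""
    theta = math.radians(kam_deg)
    return 2.0 * theta / (step * b)

# A mean KAM of ~1.6 degrees gives ~2.5e14 m^-2, the order of magnitude
# reported below for the WZ and CGHAZ.
print(f"{gnd_density(1.6):.2e} m^-2")
```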
Accordingly, the average GND density in the CGHAZ was determined to be 2.43 × 10^14 m^-2, slightly lower than the 2.51 × 10^14 m^-2 in the WZ (Fig. 6f, l). This can be explained by the higher local peak temperature and more severe plastic deformation taking place in the WZ during the FBW process, leading to more dynamic recrystallization and dislocation accumulation in this area than in the CGHAZ [9]. Also, during solidification, the molten material in the WZ is constrained between contiguous solid counterparts that can hardly contract or expand, inducing the accumulation of thermal stresses and, further, the pile-up of dislocations [30]. By contrast, the FGHAZ shows a considerable difference in microstructure, consisting of PF and a small quantity of AF and P with a further refined average grain size of 7.9 µm (Fig. 6g, j). In the FGHAZ, the local peak temperature lies between Ac1 and Ac3, so that only part of the PF and P was austenitized and transformed to AF on cooling during FBW. Naturally, the relatively low local peak temperature in the FGHAZ contributes to the refinement of the grain size. It is clear from Fig. 6h, k and i, l that the LAGB proportion and the average GND density both became smaller in the FGHAZ compared to their counterparts in the WZ and CGHAZ, because the FGHAZ experienced only minor plastic deformation during the FBW process [22]. To evaluate the effects of the flash parameters on the microstructural evolution, the weld microstructures in the WZ, CGHAZ and FGHAZ for different δf and vf values are compared in Fig. 7. Overall, the variation in flash parameters exerts little influence on the microstructural constitution: the WZ mainly consists of a typical coarse solidification structure including AF, BF, PF and WF; the CGHAZ possesses a similar microstructural composition to the WZ but much finer; and the FGHAZ is composed of refined PF, AF and P. Instead, the influence of the flash parameters is mainly reflected in the proportions of the different ferrite types and the grain sizes. With the increase of δf, the area fractions of PF and WF increased while those of AF and BF decreased in the WZ, as shown in Fig. 7a1-d1. The same phenomenon occurs in the CGHAZ with the accumulation of δf, as shown in Fig. 7a2-d2. Regarding the impact of flash speed, the content of PF increased while the fractions of AF, BF and WF decreased in the WZ and CGHAZ with accelerated vf, as shown in Fig. 7c1, e1-h1 and c2, e2-h2, respectively. These microstructural evolutions in the WZ and CGHAZ are highly influenced by the heat input and plastic deformation during the FBW process. The welding heat input is positively correlated with δf and negatively correlated with vf. As δf increases, the consequently longer high-temperature duration enables the prior austenite (PA) grains to grow and coarsen. When the FBW samples cool below Ac3, proeutectoid ferrite nucleates at the PA grain boundaries and grows into reticular allotriomorphic ferrite, since the growth rate of ferrite along the PA grain boundary is much faster than inwards; with a longer high-temperature duration, the proeutectoid ferrite gradually increases and widens, and then transforms into massive PF within the PA grains. Although a larger vf results in a smaller heat input, more severe plastic deformation arises at high temperature, intensifying dynamic recrystallization and thus promoting the nucleation and growth of PF [31].
Additionally, the transformation from PA to WF is principally a displacive transformation with a carbon-diffusion-controlled growth rate [35]. The higher heat input encouraged by a larger δf or lower vf in the welding process is more beneficial to carbon diffusion; thus WF is more likely to nucleate and grow from the resulting coarse PA [36]. Furthermore, the AF proportion in the FGHAZ mildly increased with increased δf (Fig. 7a3-d3), while it gradually decreased with increased vf and nearly disappeared once vf exceeded 7 mm/s (Fig. 7c3, e3-h3). This is because the high heat input induced by a large δf or low vf increases the local peak temperature, leading to a larger proportion of the PF and P from the BM being austenitized and then transformed into AF. It is therefore likely that, once vf is faster than 7 mm/s, the local peak temperature in the FGHAZ is too close to Ac3 to enable the nucleation of AF from the parent microstructure. To further investigate the effects of the flash parameters on grain morphology and the distributions of misorientation angle and dislocation density, EBSD was performed on the WZ and CGHAZ of the welded samples processed with varied δf and vf values, as shown in Figs. 8 and 9. (Fig. 7: LOM images in the WZ, CGHAZ and FGHAZ of samples processed with different flash parameters: a1-a3 sample 1#; b1-b3 sample 2#; c1-c3 sample 3#; d1-d3 sample 4#; e1-e3 sample 5#; f1-f3 sample 6#; g1-g3 sample 7#; h1-h3 sample 8#.) Note that all EBSD maps taken at the CGHAZ were scanned at the same distance from the weld line. According to the image analysis of these EBSD maps, comparisons of grain size, misorientation angle transition and GND density for samples with different parameters are presented in Fig. 10. By comparing the IPF and GB maps taken at the WZ of welded samples 1#, 3# and 4# (Fig. 8a, b, d, e, g, h), it can be inferred that the irregular polygonal grains without dense transgranular LAGBs represent PF, and that they became coarser, with a smaller aspect ratio, as δf increased. Besides, the grain clusters with densely distributed interior LAGBs correspond to AF, BF or WF, and their proportion decreased slightly with the increase of δf. Likewise, the GND density was found to be higher within these grain clusters and became smaller with increasing δf, as implied in Fig. 8c, f, i. According to the variation in grain size quantitatively plotted in Fig. 10a, the mean grain size in the WZ underwent a mild down-and-up trend with the accumulation of δf, showing a negative correlation with the proportion of fine grains (< 10 µm). Furthermore, the overall LAGB fraction and GND density in the WZ slightly decreased with the increment of δf, as statistically indicated by Fig. 10c, e, respectively. It is known that the co-presence of high temperature and severe strain during FBW compels the weld joints to experience dynamic recrystallization [37]. The recrystallization level is intensified when a higher heat input is applied, as caused by increased δf, leading to an increase in the fraction of HAGBs and a decrease in dislocation density. The comparatively smaller average grain size detected in the WZ of sample 3# (δf = 10 mm) can be credited to a smaller fraction of extra-coarse grains stemming from the more homogeneous growth of the recrystallized grains.
Meanwhile, the acceleration in vf populated the WZ with more homogeneous grains of smaller aspect ratio, more HAGBs and a lower dislocation density, as indicated by samples 3#, 6# and 8# shown in Fig. 8d-f, j-l and m-o, respectively. Especially when vf speeds up to 9 mm/s (sample 8#), the majority of the grains are nearly equiaxed and the LAGBs are clearly no longer dominant. According to the grain statistics shown in Fig. 10a, the average grain size showed an upward trend while the small-grain proportion dropped greatly, by around 8%, when vf was increased to 9 mm/s. The LAGB fraction sharply decreased from 72 to 58% and the GND density declined from 2.42 × 10^14 m^-2 to 1.85 × 10^14 m^-2 as vf increased from 1 to 9 mm/s, as shown in Fig. 10c, e, respectively. It has been reported that a decrease in PA grain size induced by lower heat input may shift the CCT curve towards shorter cooling durations and higher temperatures, favouring the transformation of PF [31]. Therefore, the larger amount of PF transformed at the WZ during solidification at faster vf causes these microstructural trends. Since the CGHAZ is immediately adjacent to the WZ, its comparatively lower local peak temperature and slower cooling rate still produced various ferrites similar to those in the WZ but of finer size, as shown in Fig. 9. The mean grain size exhibited an upward trend, while the small-grain proportion showed a downward tendency, with the advancement of δf, as reflected by Fig. 9a, d, g and the corresponding statistics in Fig. 10b. This is because the higher heat input arising at larger δf favours grain coarsening. Likewise, a similar trend in the average grain size and fine-grain fraction with the acceleration of vf was observed, as shown in Figs. 9d, j, m and 10b. Generally, a smaller heat input is generated at a faster vf, which would be expected to refine the grain size. However, the microstructure acquired at higher vf showed a greater presence of PF, confirming that the shrunken heating zone overrode the other factors in determining the final microstructure and average size. The fraction of LAGBs experienced a continuous reduction with the increase of both δf and vf, as shown in Figs. 9b, e, h, k, n and 10d. Furthermore, the variation of the GND density showed a similar tendency to the LAGB proportion with the increment of δf and vf, as indicated in Figs. 9c, f, i, l, o and 10f. These tendencies can be explained by the increased volume fraction of the fully recrystallized PF phase, which has quite a low dislocation density. Generally speaking, the heat input increases with an increase in δf and a decrease in vf. An appropriate δf should guarantee that a molten metal layer has formed on the end face of the whole workpiece at the end of the flash stage and that the plastic deformation temperature is achieved to a certain depth. An excessively small δf cannot meet these requirements, and the welding quality will therefore suffer. An exaggerated δf will waste raw material and reduce productivity. An optimum vf should be fast enough to ensure the intensity and stability of the flash. However, if vf is too large, the heating zone will be too narrow, which increases the difficulty of plastic deformation. Meanwhile, the welding current will be increased in that circumstance, leading to an increase in the depth of the craters left after flashing and a deterioration of the weld joint quality.
Hence, an optimum heat input can achieve sufficient metal flow and adequate plastic deformation, consequently contributing to a defect-free weld joint. Moreover, the optimum heat input can produce a fine recrystallized grain structure accompanied by sufficient dislocation density to ensure an optimal match of strength and plasticity [38].

Microhardness transition across FBW joints

The variation of the microhardness transition at the upper, central and bottom positions of the plate thickness across the weld line for the different flash parameters is plotted in Fig. 11a-h. From the hardness transition profiles, it can be observed that the microhardness trendline is roughly symmetric and similar for the different parameters: the microhardness value reaches its maximum at or near the WZ and then decreases toward the base metal (~150 HV). The hardness is higher in the HAZ than in the BM, except for a slight dip observed at the transition zone to the BM in sample 8#, with a vf of 9 mm/s. Otherwise, no softening phenomenon was detected in the HAZ of the samples processed with the other parameters. According to the comparison of the microhardness transitions over δf shown in Fig. 11i, the hardness value of the WZ is around 202 HV at δf = 6 mm and slightly reduced to 195 HV when δf reached 12 mm, which is 1.3-1.35 times the hardness of the BM. The hardness in the WZ slightly decreased as vf increased from 1 to 5 mm/s. Thereafter, a steep drop was seen in the hardness value of the WZ, from 192 to 177 HV, as vf sped up from 5 to 9 mm/s, as can be seen in Fig. 11j. Several factors play an essential role in the hardening of the WZ, including strain localization, substructure and phase transformation [30,39]. The strain localization induced by residual stresses and microstructural heterogeneity generated during the FBW process is conducive to an increase in hardness. The accumulation of substructure boundaries also contributes to obstructing the movement of dislocations and is therefore attributable to the improvement in hardness [33]. Furthermore, the ferrite transformation, including the development of AF and BF, is highly correlated with the hardness variation. Compared to PF, which is normally viewed as a fully recrystallized phase, AF mostly nucleates at deformed PA and is known to be an ideal structure with both high strength and good toughness [40]. The BF that transforms at lower temperatures and faster cooling rates is known to have a higher dislocation density and hardness than PF as well. Therefore, an increase in δf provides a longer high-temperature duration that promotes dynamic recrystallization, which reduces the AF and BF fractions in the fusion region and further results in the hardness decline of the WZ. Meanwhile, an over-acceleration of vf narrows the heating zone and increases the difficulty of plastic deformation, thus restricting the nucleation of AF and BF and producing a decrease in hardness.

Tensile properties of the FBW joints

Figure 12a, d display the stress-strain curves of the welded joints processed with different δf and vf, obtained from ambient uniaxial monotonic tensile tests. In the elastic stage, the slope of the stress-strain curves showed little variation, indicating no obvious difference in stiffness as the flash parameters change. The yield point is noticeable for all welding conditions due to nitrogen and carbon interstitial atoms pinning/unpinning dislocations [41].
Besides, in the plastic stage, obvious changes in stress-strain behaviour could be observed: specimens started yielding at lower strains with the increase of δf and vf. To better reveal the influence of the flash parameters on the variation of plasticity and strength, the plasticity, comprising elongation (EL) and reduction of area (RA), and the strength, comprising yield strength (YS) and ultimate tensile strength (UTS), for the different FBW conditions were obtained and compared in Fig. 12b, c, e, f. For better comparison, the tensile properties of the material in the as-received state were also measured at the same strain rate, and the EL, RA, YS and UTS were determined to be 45%, 79%, 362 MPa and 466 MPa, respectively. The tensile properties of the FBW joints exhibit a good match with those of the BM, with a slight increase in strength (UTS: 468-493 MPa; YS: 370-403 MPa) but a mild decrease in plasticity (EL: 39-44%; RA: 74-79%). The reduction in ductility can be explained by a local compromise resulting from strain localization due to the microstructural heterogeneity and residual stress induced by FBW. Furthermore, both the joint strength and ductility showed a downward tendency with the increment of δf, which can ultimately be attributed to the higher heat input. Increasing quantities of WF nucleate while the transformation of fine AF and BF is further inhibited when the heat input is excessive, resulting in a mild degradation of strength and plasticity simultaneously. Furthermore, according to the grain-refinement strengthening and dislocation strengthening mechanisms, the changes in average grain size and dislocation density brought about by increased δf also contribute to the variation in strength [42]. The strength, however, slightly decreased while the plasticity increased with the advancement of vf. This is because, at higher vf, less AF/BF with high dislocation density is generated but more PF nucleates owing to the narrowed heating region during FBW, causing a decrease in strength but an increase in plasticity. Even though the mean grain diameter was refined with the acceleration of vf, the recrystallization softening and the decrease in dislocation density were fostered, leading to a tendency for improved plasticity at a slight sacrifice of strength.

After the tensile tests, the microstructure at three sites (A, B, C) outward from the fracture zone of all specimens was examined, as shown in Fig. 13a-h. The microstructure at site A for all samples was determined to be composed of PF, AF and P, which is the typical microstructure of the FGHAZ. The microstructure at site B mainly consists of PF and P with insignificant deformation, belonging to the BM. However, the PF and P grains near the fracture (site C) are severely stretched in the tension direction, where cracks initiated and propagated along the austenite grain boundaries with an obvious directionality under the axial stress. Even though a great extent of necking took place near the HAZ, all tensile samples were still found to have fractured at the BM location. This is reasonable and can be explained from two aspects. On the one hand, the basic microstructures of the BM are PF and P while the welding zone mainly consists of BF and AF, and the strength of BF and AF is higher than that of PF and pearlite, strengthening the joints so that failure occurred preferentially in the BM.
On the other hand, the welding zone formed under severe plastic deformation and a subsequent recrystallization process; therefore, its deformation ability is superior to that of the BM. The fracture surface morphologies of the tensile specimens processed with different flash parameters are displayed in Fig. 14a-h. It can be found that parabolic and equiaxed dimples of different sizes and depths were the main feature under all welding conditions, which can be characterized as a typical ductile fracture. (Fig. 12: room-temperature engineering stress-strain curves (a), corresponding ductility (b) and strength results (c) of the FBW joints processed with different δf; ambient engineering stress-strain curves (d), corresponding ductility (e) and strength results (f) of the FBW joints processed with different vf.) Some tear marks could also be seen on the boundaries of the dimples, indicating the occurrence of considerable plastic deformation during the FBW process [21]. Besides, several spherical inclusions could be observed at the base of some large and deep dimples. The EDS map and point analysis displayed in Fig. 14i show that these inclusions are rich in Mn, Ca, S, Al and O, from which they can be inferred to be MnS, CaS and Al2O3. The fracture mechanism is highly correlated with the different expansion modes of the microvoids existing in the welded joints during the tension process. Due to local plastic deformation, micro-cracks initiated and propagated at the interfaces of the second-phase particles, namely the inclusions. Thereafter, an internal shrinkage neck is generated in the local micro-region between the inclusion and the matrix. When the neck reaches a certain extent, it suffers tear or shear fracture, which forms the dimple fracture with small plastic deformation. (Fig. 14i shows Fig. 14c at a magnification of ×500, with EDS point results corresponding to locations A, B and C.) Nevertheless, the shear and tear stresses compel the micropores to bear uneven stress during nucleation and growth, resulting in the formation of parabolic dimples and increased plastic deformation. As can be seen from Fig. 14, the tensile fractures of the welded joints under the different flash parameters contain parabolic dimples, equiaxed dimples and tearing edges, indicating that the fracture mechanism is a mixed ductile fracture combining normal fracture and shear fracture.

Conclusions

In this investigation, the FBW process of 440CL HSLA steels was physically simulated and the influence of the flash parameters δf and vf on the microstructure evolution and mechanical performance of the FBW joints was systematically analysed. The following conclusions can be summarised.

1. The CGHAZ showed a ferritic morphology similar to that of the WZ but with more PF and a finer average size. The FGHAZ showed a considerably different microstructure, consisting of further refined PF with minor AF and P. The highest LAGB proportion and GND density were found in the WZ in comparison with the HAZs, resulting from the higher local peak temperature and more severe plastic deformation taking place in the WZ.

2. The widths of the CGHAZ and FGHAZ generally increased with increased δf due to the higher heat input and declined with accelerated vf due to the narrowed heating zone. The fractions of PF and WF increased while those of AF and BF decreased in the WZ and CGHAZ as δf increased. With the acceleration of vf, the PF content increased while the fractions of AF, BF and WF decreased in the WZ and CGHAZ.
The mean grain size in the CGHAZ exhibited an upward trend, while the LAGB fraction and GND density experienced a continuous reduction, with the advancement of δf and vf.

3. The microhardness value reached its maximum at or near the WZ and then decreased toward the base metal (~150 HV). The hardness in the WZ decreased from 202 to 195 HV as δf increased from 6 to 12 mm. A steep drop was seen in the hardness value of the WZ, from 192 to 177 HV, as vf sped up from 5 to 9 mm/s.

4. All tensile samples fractured at the BM location, and the tensile properties of the FBW joints exhibited a good match with those of the BM, with a slight increase in strength (UTS: 468-493 MPa; YS: 370-403 MPa) but a mild decrease in plasticity (EL: 39-44%; RA: 74-79%). Furthermore, both the joint strength and ductility showed a downward tendency with the increment of δf, whereas the UTS and YS slightly decreased while the EL and RA increased with the advancement of vf. The fracture mechanism is a mixed ductile fracture combining normal fracture and shear fracture.

These findings are of great guiding significance in providing an appropriate process window for real production FBW of HSLA steels with optimized microstructure and mechanical performance.
2022-08-15T15:06:40.011Z
2022-08-13T00:00:00.000
{ "year": 2022, "sha1": "0465fc33722a76e3d02fba62e38894f750a92e60", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00170-022-09859-w.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "3be4199c6c30942c334c7ef621d9d302290e980b", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
245727259
pes2o/s2orc
v3-fos-license
Multi-Layered Energy Efficiency in LoRa-WAN Networks: A Tutorial

Emerging Internet-of-Things (IoT) applications are driving increasing demand for advanced services in wireless networks, prompting the development of new technologies to address the associated challenges. The energy efficiency of IoT standards is a key feature targeted by research efforts and industrial activities, leading to an extensive and growing number of innovative solutions. Low Power Wide Area Networks (LPWANs) define a class of wireless communication technologies seen as highly relevant for future IoT development given their long communication range, low-cost devices and interesting energy management. Long Range Wide Area Network (LoRaWAN) is acknowledged to be the dominant IoT communication technology. It has allowed broad deployment and unlocked new IoT applications such as smart cities, asset tracking, etc. This article provides a comprehensive tutorial on the LoRa standard, and surveys existing solutions, hot topics and future insights for building energy-efficient IoT infrastructures and IoT devices. Indeed, energy efficiency is one of the key factors for successful and sustainable deployment of IoT applications. More precisely, this article discusses how to achieve LoRa/LoRaWAN energy efficiency across the physical layer, the medium access control layer and the network layer. Subsequently, extensive pioneering solutions from the related literature are compared and assessed. Finally, insightful conclusions are drawn, and open problems are listed at the end of this article.

I. INTRODUCTION

A. MOTIVATION AND TRENDS

The Internet of Things (IoT) is an emerging ecosystem enabling a connected, sustainable and smart environment. Its growth is staggering, with billions of objects being deployed to sense, exchange and share relevant information about their environment. This emerging digital fabric acts as a bridge between physical reality and the Internet. It is expected, according to CISCO (https://www.cisco.com/c/en/us/products/collateral/se/internet-ofthings/at-a-glance-c45-731471.pdf), that the number of active connected devices will hit 500 billion by 2030, empowered by the Fifth Generation (5G) of cellular wireless ecosystems. Besides, 5G is leveraged for enhanced Mobile Broadband communications (eMBB), enhanced Machine-Type Communications (eMTC) and Ultra-Reliable Low-Latency Communications (URLLC). In fact, 5G is designed to support Machine-to-Machine (M2M), Device-to-Device (D2D) and Device-to-Everything (D2E) communication, the IoT and the Internet of Vehicles (IoV). Unlike previous cellular generations, there are specific provisions in 5G for applications where low power, small form factor and large numbers of connected devices are the norm. In fact, it was a specific intent of the standardization effort leading to 5G to incorporate IoT-friendly features in cellular standards (e.g., LTE-M, Narrowband-IoT, etc.) [1]. IoT development rests upon two broad classes of network access technologies. The first class comprises high-throughput wireless networks, including Wireless Local Area Networks (WLANs) such as WiFi (802.11) and cellular networks. These are designed mainly to deliver high data rates with efficient spectrum usage. The second class comprises a wide array of ad hoc wireless standards designed for wireless sensor networks, i.e.
low-cost, ultra low-power devices, and characterized by low data rates and low duty cycle. The focus in this case is on energy efficiency and very simple transceivers. While IoT technology is widely developed, substantial research effort remains to be invested to deal with massive connectivity and energy efficiency [2]. The IoT-5G marriage sustains implementing sensor-based IoT capabilities into robots, actuators and UAVs, to ensure shared coordination and reliable task execution with low latency [3]. Indeed, the incorporation of 5G [4] within IoT systems results in increasing the transmission rate, reducing transmission delay and guaranteeing network reliability and information security. IoT enables low-cost objects to access and exchange information globally over the Internet [5]. An overview of typical IoT application domains is listed in [6], including smart cities, healthcare, smart buildings, smart grid and smart industry (industry 4.0). For instance, IoT-enabled home automation is transforming our daily life, by automatically adjusting the temperature according to human presence, and the garage door opens automatically upon arrival. In the context of smart cities, cohesive communication between autonomous or semi-autonomous vehicles and infrastructure (e.g., traffic lights) leads to improved road safety. Thus, the IoT can both enhance quality of life and support global economic activity [7]. In concert with this vision, a number of questions must be addressed in the IoT's continuing evolution [8]: • Can IoT ecosystems ensure connectivity anytime, anywhere and anyhow? • How to ensure continuity of service and interoperability between different devices from various manufacturers? • How can energy consumption be reduced to increase the network lifetime? • How can a high level of security be provided with lowcost devices? • How can the rapid evolution of the IoT concept be effectively managed within massive environments? Low power wide area networks provide many solutions offering low cost technology, low power consumption and satisfactory connectivity to IoT devices. Contrary to 4G/5G or WiFi networks, LPWANs in general do not primarily focus on providing high data rates for IoT devices and/or low latency. Rather, the most desired features are energy efficiency, connectivity and low cost [9]. By design, 5G mobile networks are expected to overcome the limits of previous cellular standards while providing key technologies to support future IoT deployments. Meanwhile, LPWAN solutions are being used both to sustain and enable new requirements for IoT-critical service cases, notably low cost, low data rate, high scalability and energy efficiency [1]. B. RELATED SURVEYS AND OUR MAIN CONTRIBUTIONS The academic and industrial communities are becoming increasingly interested in leveraging LoRaWAN's energy efficiency. When searching for published papers that mention LoRaWAN energy optimization, academic databases such as IEEE Xplore, MDPI, and other digital libraries show significant surge patterns. Recent state-of-the-art surveys on LoRaWAN [10]- [13], to the best of our knowledge, address the challenges and open issues regarding the physical, MAC layer, and architectural aspects, setting the stage for LoRaWAN development. A comprehensive review of the energy efficiency of deployed approaches for efficient network operation, beyond the aspects studied in the existing literature, is, however, sorely lacking. 
A systematic review of the major search engines was performed for this paper in order to accurately represent the state of the art on LoRaWAN technology and efficient energy consumption in IoT devices. Table 1 briefly summarizes recent research on LoRaWAN networks. In this paper, LoRa networks are examined through an extensive review of the relevant literature to address questions regarding their behavior and specifically their ability to efficiently optimize energy consumption. The main contributions of this work are summarized as follows: • We provide a comprehensive tutorial covering fundamentals of LoRaWAN networks, and survey the literature dealing with energy efficiency across multiple network layers. • We review existing solutions and discuss hot topics related to energy efficiency at the physical (PHY), Medium Access Control (MAC), and network (NET) layers of the LoRa standard. • We discuss some of the challenges, open issues, and potential future research directions in optimizing LoRaWAN technology for efficient energy consumption. C. PAPER ORGANIZATION The remainder of this paper is organized as follows. Section II deals with the LPWAN concept, compares it to the cellular concept and provides an overview of the most widely known LPWAN technologies. We survey the main techniques and solutions for energy efficiency at LoRa's physical, MAC, and network layers in Sections III, IV and V, respectively. Section VI provides insightful discussions and presents some hot topics and research opportunities. Important concluding remarks are drawn in Section VII. A. LoRaWAN STANDARD The LPWAN technologies constitute a valuable complement and potential alternative to traditional cellular and short-range wireless technologies for a variety of emerging smart city and machine-to-machine applications [21]. Indeed, the three key ingredients of LPWANs (high energy efficiency, scalability, and coverage [22]) are well suited to the development of applications and services for the IoT. Many LPWAN technologies are currently available which have gained some traction in the market, including SigFox, Weightless, Thread, NB-IoT, and LoRa, most of them designed to operate in sub-GHz Industrial Scientific Medical (ISM) bands [23]. Figure 1 graphically compares LPWAN technologies with other wireless technologies. In particular, the figure highlights the complementarity of LPWAN versus 3G/4G cellular given that it features extensive range and coverage at a low cost, with very low power consumption and a simple ad-hoc infrastructure. However, these systems are limited in throughput and latency. These factors make LPWAN a very promising candidate for remote monitoring [24]. FIGURE 1. Comparison between LPWAN and cellular technologies [24]. In some cases, IoT usage may require highly specific functionalities, for which the features of LPWANs are required [25]. The main properties of LPWAN technologies required for the successful development of IoT systems are shown in Table 2 [26]. Among LPWAN standards, there are those operating in unlicensed bands (e.g., LoRa, SigFox, etc.), and those designed for licensed frequency bands as a component of cellular systems (e.g., NB-IoT, EC-GSM-IoT, LTE-M-IoT, 5G IoT, etc.) [27].
The suitability of an LPWAN technology for a specific application depends on requirements in terms of range, data rate, coverage, power budget, network cost and scalability, Figure 2 compares the relative strengths of the most popular LPWAN technologies, thus providing guidance in selecting the best one for a particular scenario. For example, Long Range (LoRa) provides very long communication range, but no support for real-time data flow [28]. -Narrowband IoT (NB-IoT): Refers to an LPWAN radio technology standard developed by the Third Generation Partnership Project (3GPP) to support a wide range of cellular devices and IoT services. Thus, NB-IoT devices with an average of 200 bytes per day can achieve battery lifetimes of 10 years while covering a distance of 1 km in urban areas and 20 km in rural areas. NB-IoT, unlike LoRaWAN, can support both low latency and high data rates [7]. -Sigfox: It is primarily used to establish IoT networks in situations where the volume of data transmitted is low. According to Sigfox 2 an uplink message might have a payload of up to 12 bytes and takes about 2 seconds to reach the base stations over the air, the range is long, up to tens of kilometers, and the current consumption is very low, averaging around 1-10 mA per transmission. Sigfox leverages the D-BPSK (Differential Binary Phase-Shift Keying), a lowcomplexity, narrowband modulation type. Its most important properties are its high spectral efficiency and its ease of implementation which, combined with a low bit rate, leads to inexpensive transceiver implementations [30], [31]. -LoRaWAN: In 2015, a group of approximately 500 companies formed the LoRa Alliance, resulting in the creation of a new LPWAN technology standard known as LoRaWAN, which defines the network and MAC stacks that allow network nodes to transmit messages to gateways. LoRaWAN guarantees secure and reliable communication, as well as extended battery lifetime and cost savings [32]. The primary goal of this paper is to present an efficient technology in terms of energy consumption, thus selecting and focusing on LoRaWAN technology. Authors in [33] define it as follows: ''LoRaWAN is an energy efficient LPWAN technology designed to address power consumption and coverage issues in IoT applications''. [42]. The NS forwards the messages to the Application Server and Join Server. • The Application Server (AS): The AS forwards all packets received from the network server to the specific associated application. Alternatively, an incoming message from an application is forwarded by the application server to the network server. • The Join Server (JS): The JS is responsible for the authentication process of the end devices, both for the generation and distribution of the authentication keys. Two network entry methods are allowed in LoRaWAN: Activation By Personalization (ABP) and Over-The-Air Activation (OTAA). These are described in some detail in Subsection E.1. C. LoRaWAN PROTOCOLS STACK The LoRaWAN protocol stack defines one mandatory and two optional classes at the MAC layer and corresponding to the various possible use cases. LoRa technology covers a series of ISM bands, which vary depending on the application case and the region in which the LoRaWAN nodes are deployed [43]. In [44], it is stated that these bands are unlicensed and require conditions in terms of maximum transmission power, duty cycle, and bandwidth. 
Additionally, in [45], they identify maximum duty cycle as a major challenge for networks using unlicensed bands, defining it as ''the maximum percentage of time during which an end device can occupy a channel.'' As a result, the end device's emission is limited to these parameters. LoRaWAN's duty cycle, depending on the frequency band used, could be 0.1%, 1%, or 10%, with a recommended duty cycle of less than 1%. For a value of 1%, the device must wait for 100 times the duration of the previous frame for a new transmission in the same channel [46]. In the current LoRaWAN protocol stack implementation, the application layer sits on top of the MAC layer, as shown in Figure 4. The LoRaWAN application layer is not defined at all in the specification. The LoRaWAN application layer (for example, running on Loriot, IBM Bluemix, or Amazon AWS, to name a few platforms) includes useful functions for measurement services such as data analysis, long-term storage, and simple visualization. Cloud services and effective (customer-oriented) dashboards may be the most effective ways to maximize the impact of measurement results on the end user [47]. [48]. The UNB modulation technique provides significant bandwidth efficiency. Furthermore, it allows signal transmission at a very low bandwidth, best suited for small uplink traffic. The SS technique, one of the oldest communication techniques, is used in military applications to provide secure communications, expanding the original signal over a large frequency band while maintaining the same signal power [26]. Another feature is the absence of an apparent peak in the spectrum, which merges with noise, making interference and interception difficult [21]. Its advantages include interference rejection, multipath suppression, multiple access code division, and high-resolution ranging. The SS transmission refers to a method in which the signal occupies more bandwidth than is required to send the information. Band spreading uses data-independent code and synchronized reception at the receiver side for despreading and subsequent data recovery [49]. LoRa, which as mentioned is the physical layer in LoRaWAN, achieves low power consumption and long-range communications via chirp spread spectrum modulation (CSS), a SS technique used to modulate the signal using chirp pulses (variable frequency sinusoidal pulses) [50]. Indeed, the CSS modulation used in LoRa converts each data symbol into a chirp, defined as a signal whose frequency increases or decreases linearly over time. A chirp also refers to a sweep signal, and each CSS symbol sweeps the bandwidth (BW) once [51]. 2) LR-FHSS Long Range-Frequency Hopping Spread Spectrum (LR-FHSS) is a new physical layer to address extremely long-distance and large-scale communication scenarios, such as satellite IoT. Semtech has announced LR-FHSS, a LoRa physical layer extension. Emerging use cases with increasingly larger and denser network deployments, such as satellite-scale LoRaWAN networks, are driving the exten- VOLUME 10, 2022 sion. Its primary aim is to increase network capacity and robustness by using the Frequency Hopping Spread Spectrum (FHSS) modulation technique while maintaining the same communication range as LoRa. The LR-FHSS is a fast FHSS modulation used for uplink communication only; downlink communication is achieved with current LoRa because the same radios can switch between modulations [52]- [54]. 3) SPREADING FACTOR The spreading factor (SF) is the most relevant variable in the CSS system. 
The CSS order of modulation is given by M = 2^SF, indicating that each CSS symbol carries SF bits [55]. SF refers to the number of chirps required to encode a bit [56], providing a flexible trade-off with the data rate [50]. In addition, it improves the spectral efficiency and capacity of the network. LoRa modulation employs six orthogonal SFs that provide different data rates. Because these factors are orthogonal, different spread signals can be transmitted simultaneously on the same frequency channel while maintaining communication performance and trading Time on Air (ToA) for communication range [57]. The SF is defined as SF = log2(Rc / Rs), where Rc is the chip rate and Rs is the symbol rate. The SF can take six different values, ranging from SF = 7 to SF = 12 [40]. There is thus a compromise between the SF and the coverage range: the higher the SF, the larger the communication range. As stated above, the SFs are orthogonal, which means the gateway can receive multiple transmissions simultaneously on different SFs [58]. The authors of [59] investigate the effects of imperfect orthogonality between multiple LoRa SF transmissions. They claim that a LoRa transmission can be interfered with by a transmission on another SF if the power of the interfering signal is significantly greater than the power of the reference signal. Indeed, the results show that this power difference threshold is about 16 dB. Such a power difference may occur when an interfering signal is close to a receiver or several interfering signals are received simultaneously. Another research study addresses the effect of interference on a LoRa network due to transmissions running concurrently while using the same SF. The results show that transmitting using different SFs may significantly impact LoRaWANs with high density [60]. Another study proposes a detailed analysis of the available uplink throughput on a LoRa network, considering the effects of co- and inter-SF interference. Two types of interference are involved in this scenario. The first type stems from the assumption that SFs are not perfectly orthogonal, leading to inter-SF interference. It follows that the transmission of packets using different SFs is subject to collisions. The second type of collision derives from multiple nodes utilizing the same SF on the same channel, thus leading to co-SF interference. There are two different types of SF allocations: the SF-random type, in which the SFs are uniformly distributed, and the SF-distance type, in which the SFs are distributed by distance. The authors compute the maximum possible throughput and the probability of successful transmission, using expressions for perfect and imperfect SF orthogonality [61]. The data rate of LoRaWAN varies from 0.3 kb/s to 27 kb/s depending on the SF used [50]. According to the study [43], by sending a packet with a payload of 51 bytes every 10 minutes and assuming a capacity of 2400 mAh, the battery lifetime for SF7-SF10 can reach one year. Comparatively, the battery life for SF12 and SF11 is lower, ranging between 0.5 and 0.8 years. Furthermore, it is highlighted in [62] that the ToA, which refers to the duration of the packet transmission, depends on the value of the SF in use; if the SF increases, the ToA also increases, implying higher energy consumption. In [63], the authors estimate the ToA for a standard message with a payload of 51 bytes, as illustrated in Table 4.
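To make these relationships concrete, the short sketch below computes the symbol period, the equivalent bit rate, the time on air, and the transmit pause implied by a 1% duty cycle for each SF, using the widely documented LoRa airtime formulas. The 125 kHz bandwidth, 4/5 coding rate, 8-symbol preamble, explicit header, CRC and 51-byte payload are illustrative assumptions rather than values taken from the cited studies.

```python
# Illustrative sketch: how the spreading factor drives symbol time, bit rate,
# time on air (ToA) and the pause imposed by a 1% duty cycle.
# Assumptions (not from the cited studies): 125 kHz bandwidth, coding rate 4/5,
# 8-symbol preamble, explicit header, CRC enabled, 51-byte payload.
import math

BW = 125_000          # bandwidth in Hz
CR = 1                # coding-rate index: 1 -> 4/5
PAYLOAD = 51          # payload size in bytes
PREAMBLE_SYMBOLS = 8  # programmed preamble length
DUTY_CYCLE = 0.01     # 1% duty-cycle limit

def time_on_air(sf: int) -> float:
    """Time on air in seconds for one uplink (standard LoRa airtime formula)."""
    t_sym = (2 ** sf) / BW                          # symbol period T_s
    de = 1 if sf >= 11 else 0                       # low-data-rate optimization
    n_payload = 8 + max(
        math.ceil((8 * PAYLOAD - 4 * sf + 28 + 16) / (4 * (sf - 2 * de))) * (CR + 4),
        0,
    )
    t_preamble = (PREAMBLE_SYMBOLS + 4.25) * t_sym  # T_preamble
    return t_preamble + n_payload * t_sym           # ToA = T_preamble + T_payload

for sf in range(7, 13):
    toa = time_on_air(sf)
    bit_rate = sf * BW / (2 ** sf) * 4 / (4 + CR)   # equivalent bit rate in bit/s
    t_off = toa * (1 / DUTY_CYCLE - 1)              # minimum silence after one frame
    print(f"SF{sf}: ToA = {toa * 1000:7.1f} ms, "
          f"bit rate = {bit_rate / 1000:5.2f} kb/s, "
          f"1% duty-cycle pause = {t_off:6.1f} s")
```

Under these assumptions the airtime grows from roughly 0.1 s at SF7 to about 2.5 s at SF12, which matches the qualitative trend reported above: higher SF means longer range but longer transmissions, lower bit rate and longer enforced silence periods.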
The actual ToA for a packet in a LoRa network can be defined as follows [64]: ToA = T_preamble + T_payload, where T_preamble is the preamble duration and T_payload is the payload duration. T_preamble can be calculated as T_preamble = (L_preamble + 4.25) x T_s, where L_preamble is the preamble length and T_s represents the symbol period, with T_s = 2^SF / BW. Based on the above formulas, we can conclude that the ToA of the LoRa packet is directly related to the SF. Hence, the choice of SF is a decisive factor to ensure proper network management. 4) LORA PHYSICAL LAYER FRAME Figure 5 depicts the physical, MAC, and application layer LoRa frame structure [65]. The following fields are included in the LoRa packet [64]: • Preamble Field: Serving for synchronization purposes and comprising eight successive reference chirps which indicate the packet modulation scheme, the preamble field is modulated with the same spreading factor as the rest of the packet. • Header Field: Two operating modes are available. In the default explicit operating mode, the header field indicates the FEC code rate, the length of the payload, and the presence of a CRC in the frame. In the second, implicit mode of operation, the code rate and payload length of a frame are understood to be fixed. The header field is not included in the frame when using this mode, which helps to reduce transmission time. A 2-byte CRC field is also present, allowing the receiver to reject packets with invalid headers. The header field, including the CRC field, is 4 bytes long and has a coding rate of 1/2. However, the coding rate for the rest of the frame resides in the PHY header. Note that the length of the payload is determined by the first byte of the header field. • Payload Field: The payload's size ranges from 2 to 255 bytes. This field also includes the following elements: MAC header (specifies the frame type, protocol version, and direction); MAC payload (including actual data); and MIC (corresponding to the digital signature of the payload). • CRC Field: It is optional and contains Cyclic Redundancy Check (CRC) bytes to protect the payload from errors (2 bytes). In [27], the authors highlight the single difference between the structure of uplink and downlink messages: the downlink messages do not contain the CRC field, as messages should be as short as possible to minimize the effect of any duty cycle limitations. E. LoRaWAN MAC LAYER As mentioned in the previous section, the LoRaWAN MAC (medium access control) layer allows communication between multiple end devices and their gateway(s). The gateway serves many devices and relays messages to a central server [66]. LoRaWAN defines the network protocol for LoRa-based devices [67]. Figure 6 depicts the use of two distinct keys in message transmission: -The network session key: It is used to encrypt the frame and ensure that the message gets sent correctly and in its entirety for better communication. This key is shared between the end device and the network server. -The application session key: It is used to encrypt and decrypt the payload in the frame (application data messages) and to ensure security. This key is shared between the end device and the application server [68]. 1) MAC MESSAGES Different message types are present in the LoRaWAN MAC Layer, and two joining procedures, OTAA and ABP, are included. One of these procedures must be executed by the end devices.
In [27], it is stated that in order to join the LoRaWAN network and exchange data, the end device must perform an Over-The-Air Activation (OTAA) procedure, composed of two MAC messages exchanged between the end device and the network server (Join Request and Join Accept), and if the end device loses connection, it must repeat this procedure. Figure 6 shows the two MAC messages exchanged during an OTAA procedure. In [67], the authors present the second type of joining procedure, Activation by Personalization (ABP), a procedure that allows end devices to connect directly to the network without the need for a request-accept exchange. However, the first procedure used to join the network (OTAA) is qualified as the most appropriate and secure way for an end device to join the network. Once the connection is established, the end device starts sending and receiving data messages. The messages received by the gateway can be the subject of an ACK (acknowledgment) by setting the ACK bit if requested or not, which means that in the first case, the VOLUME 10, 2022 gateway sends an ACK to confirm the reception of the data messages, while in the second case, no confirmation is required [69]. Furthermore, in [70], it is indicated that for the first use case, two ACKs are received. The first ACK is sent on the same channel where the data was transmitted and T1 after the reception of the frame, the second ACK is sent on the downlink channel after a timeout of T2 = T1 + 1s, and retransmission is performed if the end devices do not receive an ACK. In some cases, gateways are required to send an ACK as confirmation of the connection between them and the nodes. If multiple gateways receive the same message, one of them must respond by sending an ACK to the node in question [71]. 2) MAC COMMANDS MAC commands consist in control information sent by the gateway to the nodes. Each MAC command should trigger the appropriate response from the recipient nodes. Table 4 lists the MAC commands supported by LoRaWAN [72]- [74]. 3) MAC PROTOCOLS It is clearly stated in [73] that energy-efficient use is primarily influenced by MAC protocols, designed to manage uplink and downlink messages, node mobility, and determining network scalability. LoRaWAN is based on Pure-ALOHA (P-ALOHA), which allows node senders to select a SF and a transmission power for their messages. While the Listen before Talk (LBT) option is not excluded in LoRaWAN, the basic P-ALOHA scheme allows blind transmission at any time. Even though it is generally an inefficient protocol, pure ALOHA is still widely used because it offers multiple significant advantages, such as variable packet size, flexibility in starting transmission, and no requirement for time synchronization. As a result, most (if not all) LPWAN technologies use pure ALOHA as their primary MAC protocol [75], [76]. This protocol poses a challenge when the number of nodes is growing considerably. Without a listen before talk strategy, after each message collision, these messages are retransmitted later. The presence of collisions and the necessity to retransmit lost packets will decrease channel capacity rapidly [77]. According to the authors of [78], the throughput of P-ALOHA channels can be used up to 18% of the total channel capacity. To overcome the challenges of P-ALOHA, the Slotted-ALOHA (S-ALOHA) protocol was introduced. 
The latter is a widely used MAC protocol in local wireless communications, where the channel time is divided into slots and end devices can only send packets at the beginning of a slot. If two or more nodes send their packets at the same time, a collision occurs, indicating that the data is not being sent correctly. Otherwise, the data is correctly transmitted and no collision occurs. It was mentioned that the throughput of S-ALOHA channels can use up to 37% of the total channel capacity [78]. In the context of extremely rich and valuable studies that employ the S-ALOHA protocol, the authors of [79] explore the distributed choice of retransmission probabilities in Slotted Aloha from a game theoretical perspective. Using a Markov chain analysis, they were able to obtain optimal and equilibrium retransmission probabilities and throughput, before evaluating the impact of adding retransmission costs. In parallel, the authors of [80] analyze the performance of the Slotted-Aloha-based uplink of a cellular system according to several power differentiation schemes and at another level, they deduce the expected throughput and delay in order to optimize and provide a stability analysis that will serve as an alternative study. Authors of [81] focus on the retransmission probabilities in LoRaWAN networks using the Slotted Aloha protocol with the goal of maximizing throughput while taking into account that the SFs are perfectly orthogonal. They have demonstrated that it is possible to achieve both satisfactory throughput and limited delays by fine-tuning the retransmission probabilities and correctly setting the MAC parameters. The authors of [82] consider a model for increasing the average system packet success probability (PSP) under Pure Aloha. To achieve this, they proposed an optimization model aiming to maximize the average PSP of the system, via a sub-optimal SF allocation method that takes into account the effects of interference using the same or different SFs. F. LoRaWAN NET LAYER The LoRaWAN employs a single-hop routing model, in which gateways in the center communicate and transmit messages from terminal devices to the network server. The gateways are linked to the Internet network via WiFi, 4G, or Ethernet, allowing data to be sent to the network server via IP. It is possible to bring together different end-devices and gateways in the same geographical area by isolating frequency communications as well as virtual channels within the same frequency channel, keeping in mind that transmissions with different SFs are orthogonal due to the spread of the spectrum [83]. The single-hop model has several limitations, including the fact that gateways are both complex and expensive to maintain, particularly for a wide range of communication types [84]. Indeed: • Several gateways must be installed throughout the area. • Existing GWs require purchasing and installation of communication modules to connect to the Internet. • A monthly fee is incurred if the gateway connects to the Internet via a private telecommunications network. They are also required to listen to all channels at the same time and to be constantly connected to the Internet [85]. G. LoRaWAN DEVICES AND APPLICATIONS FIELDS LoRa networks have been widely deployed for a variety of applications and research systems. LoRa's openness makes it an extremely suitable choice for various IoT deployments. Common IoT applications include smart buildings, smart cities, smart agriculture, smart metering and water quality measurement [12], [20]. 
Furthermore, paper [86] claims that the acquisition costs of LoRa-enabled devices, as well as mobile devices and gateways, are low. In [38], a performance study and analysis of the capabilities of a currently available LoRa transceiver is performed, followed by a description of the transceiver's characteristics and a demonstration of its efficient use in an extended application scenario. Certain features, such as simultaneous non-destructive transmissions and carrier detection, have been demonstrated to be useful. The demonstration clearly shows that six LoRa nodes can form a network covering 1.5 ha in a built-up environment, with a potential lifetime of two years on two AA batteries, providing data in five seconds and with 80% reliability. H. LoRaWAN SERVICES In the LoRaWAN network, each device has different classes that define its capabilities. In fact, three classes are defined by the LoRaWAN Alliance and create a trade-off between network downlink communication latency and battery life. Figure 7 depicts the three LoRaWAN network classes. The three classes can be used together in the same network, and end nodes can switch between them [87]. The three classes are as follows: • Class A ''All'': This is the class supported by all devices in the LoRaWAN network and the most energy-efficient. It can optimize battery life to last for years. Alohatype access to the channels is used. Furthermore, two downlink windows (RX1 and RX2) are available after every uplink communication. Otherwise, the end devices are in sleep mode. The device can only receive a message from the gateway during these two windows. • Class B ''Beacon'': Similarly to Class A, while end devices receive messages in time-synchronized scheduled receive slots, the gateway transmits a beacon at periodic intervals to synchronize all end devices on the network. When a terminal device receives a beacon, it can open a short reception window called a ''Ping Slot'' in a previewed form during a regular time slot. In terms of latency and power consumption, Class B offers a balanced solution [88]. • Class C ''Continuous'': For this class, the receive window is always open, it is only closed when the end device is transmitting. It is used for real-time applications, so the energy consumption is higher. Among the three classes, Class C has the lowest latency for the ED [89]. I. LoRaWAN CHALLENGES The authors of [20] discuss the research challenges of LoRa networking and divide them into five major components: • Energy Consumption: The most significant feature of LPWAN is their high energy efficiency. This becomes an important parameter in extending the life of end devices. LoRa networks are expected to operate for 5-10 years with minimal maintenance. As a result, power consumption becomes a significant challenge for LoRa networking. • Communication Range: A long communication range is also an essential component of LoRa technology. Current LoRa technology is based on chirp spread spectrum, which is less susceptible to interference. LoRa networking is and will be used in a variety of settings, including homes, hospitals, schools, and forests. End devices will be placed in locations that are open to the air. With such diverse deployment conditions, signal attenuation, propagation losses, and fading must be mitigated in order to improve signal penetration and thus the coverage of LoRa networks [90], [91]. 
• Multiple Access: The goal of LoRa networking is to connect thousands of end devices to the network while communicating over a limited region and spectrum. Depending on the application, the possibilities for these end devices transmitting data concurrently vary. Multiple access issues involve two distinct aspects: link coordination and resource allocation. • Error Correction: LoRa technology is used to transmit data over long distances. While the message is being transmitted over the air, the data may become corrupted or lost due to channel effects, environmental conditions, or collisions. Current solutions fall into two categories: channel coding and interference cancellation. • Security: Security is a major concern in all computer communications. The eavesdropping, selective forwarding, and node impersonation are all examples of security attacks. Figure 8 summarizes the challenges associated each layer of LoRaWAN networks. As previously stated, energy efficiency is the most important feature of LPWAN technologies. It follows that evaluating the energy efficiency of LoRaWAN is paramount. Unlike previous surveys on LoRa technology, this work focuses on the evaluation of energy consumption at the different layers of the OSI model. III. ENERGY EFFICIENCY AT PHYSICAL LAYER LoRa is the physical layer used in LoRaWAN to achieve low energy consumption and long-range communication, due to its very appealing advantages. The LoRa PHY layer is gaining popularity among researchers. In this section, we examine the energy consumption of the most relevant research works from both the network (configurable radio parameters) and the device (resource allocation) perspectives. A. NETWORK SIDE According to the authors of [92], [93], five parameters at the physical layer affect energy consumption and range: • Bandwith (BW): It is the amplitude value related to the frequency domain for the channel used; a higher BW results in a higher data rate and, as a result, a shorter transmission time. A higher BW, on the other hand, means lower sensitivity since more noise is included. Most LoRa networks use a channel with a bandwidth of 125 kHz, 250 kHz, or 500 kHz. • Central Frequency (CF): It is spread over different frequency channels by exploiting the implementation of pseudo-random channel hopping. CF values depend on local frequency regulations, while LoRaWAN gateways typically support eight channels, whereas IoT devices usually support at least 16 channels. Specifically, CF can be configured in the range of 137 to 1020 MHz, depending on the legislation in the geographic region [94]. • Spreading Factor (SF): LoRa uses six different programmable SFs, ranging from 7 to 12. This allows up to six nodes to transmit on the same channel at the same time. In [95], it is noted that increasing the SF value without changing the transmission power causes the energy consumption to increase more rapidly. Higher SF values result in a lower bit rate, but higher sensitivity. Overall data rates in use are determined by regional specifications. Table 5 shows the possible data rate (DR) for the EU 863-780 MHz ISM Band [65]. in accordance with ETSI regulations. The following is a collection of recent research on the abovementioned parameters: In [94], the impact of parameter selection on power consumption and communication reliability is detailed, and an algorithm capable of rapidly identifying a good transmission parameter setting is proposed. 
The challenge here was to determine the setting that minimizes the cost of transmission energy while also satisfying the required communication performance, thus creating a balance between network performance and energy consumption. This work is a first step toward developing an automated mechanism for selecting LoRa transmission parameters. The authors of [96] create an energy model to estimate and optimize wireless sensor energy consumption. Based on LoRaWAN Class A, the proposed model integrates the modeling of sensor node units, specifically the processing and sensor units, using a true IoT application. Furthermore, many different LoRaWAN transmission modes are investigated in order to select the best mode that can optimize energy consumption. Moreover, a comprehensive optimization study of LoRaWAN parameters such as SF, CR, BW, communication range, and TP is discussed with the goal of maximizing sensor lifetime. In [56], an analysis of the performance of the LoRaWAN network is presented, including discussion of the impact of various spreading factors and the power used in transmission frames. The experiment was carried out by sending packets with the same payload at various spreading factors and power settings. The results show that using higher spreading factors and increasing transmission power reduces observed packet loss. In [97], the authors assess the energy consumption of the sensor nodes, taking into account three factors that influence energy consumption: transmission distance, data transfer/collection, and stability. The proposed simulation system considers three nodes with a LoRa base station, and the duration of the communication test is approximately seven days. Three communication parameters, SF, CR, and distance, are then varied to study their effect in isolation, i.e. when one parameter is varied, the others are kept constant. The results showed that when the SF is high, the power consumption is also high, whereas when the CR is low, the power consumption is low, when performing a power consumption and node transmission data measurement according to the LoRa protocol. LoRa Backscatter is a CSS modulated system with data transmission distances of up to 2.8 km and a power consumption of only 9.25 watts at a data rate of 37.5 kbps. When compared to standard LoRa technology, this system reduces power consumption by nearly 1000 times. The solar panels mounted on the passive RF chips can be used to power them [98]. The authors of [99] present a PLoRa (Passive Long-Range) communication technology that provides long-range connectivity for IoT devices based on ambient excitation, removing the need for any dedicated excitation source. The PLoRa tag is battery-free and generates energy from radio signals and ambient light, allowing it to communicate with active LoRa nodes and gateways over long distances in three distinct modes. The experimental results show that the prototype PLoRa PCB tag is able to backscatter an ambient LoRa transmission impinging from a closely situated LoRa node (20 cm) to a gateway as far as 1.1 km away, delivering 284 bytes of data every 24 minutes indoors, or every 17 minutes outdoors. However, the detection range of PLoRa packets is limited to 50 m, so there remains considerable room for further improvement. Monitoring these physical layer parameters is an important factor in ensuring an efficient and controlled use of energy in a LoRaWAN network [100]. 
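As a rough illustration of how these physical-layer parameters translate into device lifetime, the sketch below combines the time on air of a single uplink with an assumed radio current draw per transmit-power level, a sleep current, and a 10-minute reporting period. All numeric values are assumptions made for the example (they are not taken from the surveyed papers), and effects such as receive windows and battery self-discharge are ignored.

```python
# Illustrative back-of-the-envelope model: how the PHY parameters discussed above
# (SF, through its time on air, and transmit power, through the radio's current draw)
# translate into battery lifetime. All numbers are assumptions for the example,
# not values reported by the surveyed papers; receive windows and battery
# self-discharge are deliberately ignored.

I_TX_MA = {2: 24.0, 8: 35.0, 14: 44.0}  # assumed TX current (mA) per TX power (dBm)
I_SLEEP_MA = 0.0015                      # assumed sleep current (mA)
BATTERY_MAH = 2400                       # battery capacity (mAh)
PERIOD_S = 600                           # one uplink every 10 minutes

def lifetime_years(toa_s: float, tx_power_dbm: int) -> float:
    """Estimated battery lifetime when the node only transmits and sleeps."""
    i_tx = I_TX_MA[tx_power_dbm]
    # Average current over one reporting period (mA)
    i_avg = (i_tx * toa_s + I_SLEEP_MA * (PERIOD_S - toa_s)) / PERIOD_S
    hours = BATTERY_MAH / i_avg
    return hours / (24 * 365)

# Example ToA values (s) for a 51-byte payload at 125 kHz, computed with the
# airtime sketch shown earlier.
for sf, toa in [(7, 0.103), (10, 0.616), (12, 2.466)]:
    for p in (2, 14):
        print(f"SF{sf} @ {p:2d} dBm: ~{lifetime_years(toa, p):5.1f} years")
```

With these purely illustrative numbers, moving from SF7 at minimum power to SF12 at maximum power shortens the estimated lifetime by more than an order of magnitude, which mirrors the qualitative dependence on SF and transmission power discussed above.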
We present a comprehensive analysis of the most significant results of the state-of-the-art on energy efficiency of allocation mechanisms. We classify existing works into two types of approaches: (a) Single parameter allocation and (b) Multiple parameter allocation, as described below. a: SINGLE PARAMETER ALLOCATION In a LoRaWAN network, a node cannot predict the distance between it and a gateway at first, but it can estimate the distance by observing the power signal received from a downlink transmission. If the received signal power of a downlink transmission is very high, it can reduce the SF of its next transmission to save power. This SF allocation scheme is known as the lowest possible SF allocation scheme [101]. It is important to note that ECC techniques are used in the LoRa physical layer to ensure noise and interference resistance and to increase receiver sensitivity [66]. On this basis, the LoRaWAN specification includes an Adaptive Data Rate (ADR) mechanism that allows a network server to select both the data rate and the transmission power provided to each node. Furthermore, the authors of [102] state that the ADR mechanism serves two critical purposes: • Increase the global network capacity, • Extend the battery life of the nodes. The transmission power is dynamically assigned to a node based on its distance from the gateway, thus increasing battery life. The authors of [103] state that the ADR mechanism reduces energy consumption per payload byte by a factor of 5. The ADR is mentioned by authors of [104] as being used for both energy savings and communication range extension. The ADR is accomplished by employing orthogonal different SFs and varying transmission power. These orthogonal SFs allow multiple LoRaWAN end devices to operate on the same frequency channel at the same time. When the transmission power is consistently higher than the sensitivity, the ADR algorithm has the ability to increase the SF whose SNR margin is too low, or even decrease it. This algorithm is required for ED movement during the NS SF selection process. The ADR scheme [100] operates asynchronously at the LoRa node and the network server. Most of the complexity of the ADR mechanism is assigned to the network server, with the goal of keeping the nodes as simple as possible. The node-side ADR algorithm is specified by the LoRa Alliance, while the corresponding algorithm on the network server is determined by the network operator. The ADR algorithm on the network server can decrease the SF and change the TP, while the node can only increase the SF. The part of the ADR performed at the node (ADR-NODE) is primarily intended to increase the SF (and thus reduce the data rate) if the uplink transmissions cannot reach the gateway. Thus, if a downlink frame is not received within a configurable number of frames, the node increases the SF of the next uplink frame. This increases the transmission range and therefore the probability of reaching a gateway. Further and improved alternative SF allocation techniques were reported as detailed below. In [72], the authors propose two algorithms, EXPLoRa-SF and EXPLoRa-AT, to outperform the basic ADR approach. The first algorithm has the advantage of allowing users to choose SF based on the total number of connected devices. 
The second algorithm aims to use an innovative ''ordered water filling'' approach that allows the spreading factors to be distributed in order to balance the ToA of the packets transmitted by the system's end devices based on groups of spreading factors. The results show that using this last technique is very efficient, especially under high load conditions when the system supports thousands of nodes or high message rates. But there is no evaluation of the energy consumption of the proposed algorithms. The authors of [105] propose a new method for an efficient allocation of the SF in a LoRaWAN network designed specifically to improve network scalability. Compared to the conventional method, the new method increases network scalability and improves data delivery probability while increasing average power consumption by 1 to 8%. In [101], an open source discrete event simulator is presented to analyze the performance of the LoRaWAN network and to investigate different schemes for SF allocation. Two machine learning solutions, the intelligent Decision Tree Classifier (DTC) and the intelligent Support Vector Machine (SVM), are described. The results show that the proposed systems can improve the overall performance of LoRaWAN networks, but the nodes use their maximum transmission power, which is an area for future improvement of the proposed schemes in order to reduce the nodes' energy consumption. The authors of [106] suggested a lightweight learning approach suited to the communication parameters of IoT devices and reaching energy efficiency and reliability goals. In this regard, this approach assigns SFs based on the distance between the IoT device and the gateway. A new proposed algorithm to improve the energy efficiency of the ADR scheme is presented in [107], where a comparison with the baseline ADR algorithm is performed, showing that the new proposed algorithm is superior in terms of energy consumption, with many cases showing more than 100% efficiency improvement. Paper [109] improved the ADR mechanism by proposing a new allocation scheme called Enhanced-ADR (E-ADR) that can perform dynamic allocation procedures, thus optimizing network transmission time, reducing energy consumption, and decreasing overall packet loss. The performance of E-ADR was evaluated with Waspmote-SX1272 devices and gateways using several mobility models in a smart farm scenario. The results demonstrate that E-ADR can reduce, and in some situations, eliminate packet loss and support mobility procedures. In addition, the gain on energy consumption is increased by approximately 60.23%. The same authors in [110] propose an expanded version of E-ADR to deal with unknown mobility patterns. This E-ADR extension, known as VHMM-based E-ADR, predicts VOLUME 10, 2022 the node trajectory using a Variable-order Hidden Markov Model (VHMM). It was built on the Waspmote SX1272 hardware platform. The experimental results show that it is very efficient in terms of packet loss rate (PLR) and energy consumption. Table 7 lists some of the SF allocation strategies that have been used to improve the performance of LoRaWAN networks and their energy consumption compared with the standard solution using a single configurable parameter. b: MULTIPLE PARAMETER ALLOCATION The ADR algorithm implemented at the network server, designated ADR-NET, allows the network server to modify the TP and SF for the end nodes' uplink data transmissions. 
It should be noted that the network server does not increase the SF (it does not reduce the data rate), as this is done by the LoRa node through ADR-NODE [100]. A major study on the introduction of new algorithms for the allocation of SFs and TPs in LoRaWAN is reported in [93], where two algorithms are presented, the first being the SensitivitySF allocation and the second being the AssignmentSF allocation, with the goal of maximizing the throughput of each SF level and thus improving the overall network performance. The simulation results show that the proposed algorithms can significantly improve network performance when compared to the basic ADR strategy, with the AssignmentSF algorithm, in particular, systematically ensuring a high success rate for any traffic load incurred. The authors of [111] propose a system for allocating SFs and TPs in LoRaWAN networks. The goal of this system was to improve the packet error rate (PER) for users far from the base station and to make these networks more equitable. The main idea behind this algorithm is to assign different SFs and control the power at different nodes to ensure that signals do not interfere with one another. The PER for the overall network is reduced by 42% in simulation using this technique in NS-3, and the PER of end devices remote from the gateway is reduced by 50%. The authors of [112] defined a novel resource allocation mechanism to dynamically adjust the LoRaWAN CF and SF parameters for reducing collisions while increasing the PDR. Correspondingly, this work provided a heuristic to find the optimal CF and SF parameters by investigating the RSSI and the distance between the gateway and the IoT device. By this means, the gateway could receive the transmitted packet and with sufficient power in the selected SF value. Furthermore, it assigns more IoT devices to the lower SF values so as to reduce the interference. In terms of energy consumption, the proposed scheme consumes 20% more than the basic ADR. The authors of [113] presented a resource allocation scheme for raising the PDR by fine-tuning the LoRaWAN radio parameters. Specifically, a Mixed Integer Linear Programming (MILP) formulation was implemented to achieve ideal values of SF and CF, taking into account the network traffic specifications. The authors explored the network traffic specifications to increase the Data Extraction Rate (DER) along with reducing the packet collision rate and energy consumption in LoRaWAN. Another interesting study is presented in [114] where a resource allocation mechanism is presented based on a joint TP and CF assignment approach, and is called low-complexity Matching Channel Assignment Algorithm (MCAA). It aims to ensure throughput fairness among IoT devices, especially where multiple devices are connected. To formulate channel assignment, they treated IoT devices and channels as two sets of selfish players seeking to maximize their utilities. A channel assignment algorithm was proposed based on distributing the channel access decision site to users. Consequently, the LoRaWAN gateways achieved the optimal TP for IoT devices when sharing the same CF with the users in the same channel. The simulation results indicated that the resource allocation mechanism obtained 80% better performance than the baseline method, while being much simpler complex. The authors of [115] improved the basic ADR by dynamically designating the radio parameters, SF and TP, and using the OWA operator. 
This work focuses on increasing the network noise resilience and PDR in dense IoT scenarios through recognition of the nature of OWA decision making and PLR metric, reaching low energy consumption for all channel conditions. The Fair Adaptive Data Rate (FADR) [117] algorithm manages the allocation of SFs and TPs of the nodes, with the goal of ensuring a fair data extraction rate between all nodes while limiting excessive TPs. It optimally combines SFs and TP levels while also ensuring node longevity by limiting excessively high TP levels. Furthermore, simulations show that when applied to highly congested cells, the FADR is 300% fairer than the minimum airtime allocation method, while consuming nearly 22% less power. In [119], the authors examined a novel resource allocation mechanism to adjust the allocation of SFs and TPs targeting the effects of co-SF and inter-SF interference. The authors optimized the SF and TP allocations to maximize the average data rates. In addition, the joint SF intractability and TP assignment problem was addressed by dividing them into two sub-problems: (i) SF assignment with fixed TP and (ii) TP assignment with fixed SF. From the simulation results, it is shown that the proposed mechanism improves the fairness, data rates, and throughput performance when compared to the baseline algorithms. In [116], a resource allocation algorithm for LoRaWAN is introduced, which involves energy harvesting. The authors also provide a model to optimize SF allocation, energy harvesting duration and power devices for IoT transmission. The paper presents two SF allocation algorithms using fairness or unfairness of IoT devices. The simulation results showed that the unfair SF allocation algorithm maximized the minimum rate. Furthermore, the imperfect SF orthogonality did not affect the minimum performance rate. Finally, the authors came to the conclusion that throughput performance is strongly affected by co-SF interference and not by the energy deficiency. The authors in [118] focus on the LoRaWAN protocol since its modulation technology can use different SFs to achieve more flexible communications, and they propose a new resource allocation scheme based on Spatial Time Division Multiple Access that uses multi-layer virtual cells (STDMA), allowing for more efficient resource allocation. A numerical analysis of the optimal power consumption to achieve the highest data rate is presented, allowing for the adjustment of the cellular radius depending on the communication distance. The results show that the proposed model outperforms the conventional system in terms of data rate and power optimization. Paper [120] proposes an Adaptive Priority-based Resource Allocation (APRA) mechanism to enhance LoRaWAN scalability and energy consumption in a dense IoT scenario, where simulation results show that APRA successfully improved power consumption by 95% and increased the battery discharge time of the end device by up to 5 years, while ensuring high packet delivery and low delay for high priority applications. Table 8 lists some of the SFs allocation strategies that have been proposed to improve the performance of LoRaWAN networks and their energy consumption compared with the standard solution using multiple configurable parameters. The authors of [121] propose a model for estimating the lifetime of LoRa monitoring nodes. They also evaluated the cost of battery replacement and damage penalty costs. 
For longer sensing intervals, the damage penalty is more significant than the cost of battery replacement. In addition to an analysis of the use of energy from renewable energy sources available in the industrial environment, a cost-benefit study of harvesting energy in terms of battery life and replacement costs is included. IV. ENERGY EFFICIENCY AT MAC LAYER The MAC layer is one of the most extensively investigated research areas in the IoT network field. Based on our review of the literature, several mechanisms have been proposed to improve the performance of MAC protocols in order to optimize them. This section investigates the energy efficiency of the LoRaWAN MAC layer on both the network and node sides. A. NETWORK SIDE 1) MAC PROTOCOLS As previously stated, the choice of MAC protocol affects energy consumption. LoRa MAC layer protocols are classified into two groups: centralized-synchronous protocols and contention-based protocols. It has been demonstrated that the energy efficiency of synchronous protocols is 3 to 4 times greater than that of contention-based protocols [122]. A collection of frequently-cited works in the literature that address the energy consumption of MAC protocols forms the basis of the overview below. In order to achieve a correct balance between energy consumption and network performance, the authors of [123] propose an Adaptive Duty Cycle Medium Access Control (ADC-MAC) protocol. In fact, the LoRaWAN network uses the ALOHA protocol to allow arbitrary access to the channel without regard to the duty cycle, node energy or traffic load. The proposed protocol's main idea is to dynamically set the node's duty cycle by specifying three factors: node load, node energy, and channel busy rate. The authors also make it clear that the wireless transceiver module dominates the energy consumption at the sensor node. In this regard, a wide range of duty cycle mechanisms in MAC protocols have been developed as promising solutions for controlling the wireless transceiver and, as a result, lowering energy consumption. Such efforts include: synchronous duty cycle protocols (S-MAC [124], T-MAC [125], EX-MAC [126]), asynchronous duty cycle protocols (Wise-MAC [127], PW-MAC [128]). The authors of [40] propose an additional MAC protocol to solve the problem of decoding superimposed LoRa signals in the case of orthogonal SFs with the same receiving power. Previous research [61], [129], [130] has shown that signals are not completely orthogonal when using different SFs. While using the same SF and channel, the stronger signal may be captured due to the difference in receiving power; otherwise, under similar receiving power conditions, a collision is generated and all signals are considered lost. The proposed beacon-based MAC protocol is the Collision Resolving-MAC (CR-MAC) protocol, in which the collision resolution technique is used to decode the superposed Lora signals. Two algorithms have been developed, the first for decoding two slightly desynchronized superimposed LoRa signals, and the second for three or more, and the CRC field is used to improve the collision resolution technique. The simulation results show that the CR-MAC protocol achieves significant improvements in terms of energy efficiency and latency levels. 
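To illustrate why contention-based access tends to cost more energy than synchronized access, the toy model below estimates the expected number of transmissions needed to deliver one frame under pure ALOHA, assuming a per-frame success probability of e^(-2G) at offered load G and treating every retransmission as a full extra radio cost. This is a deliberately simplified sketch of the argument, not a model taken from the protocols surveyed above; it ignores the capture effect as well as the listening and synchronization overheads that scheduled schemes incur.

```python
# Toy model (not taken from the surveyed papers): expected transmissions per
# delivered frame under pure ALOHA versus one collision-free scheduled slot.
# Retransmissions are the dominant extra energy cost of contention at high load.
import math

def expected_tx_pure_aloha(offered_load_g: float) -> float:
    """Expected transmissions per delivered frame; success probability is e^(-2G)."""
    p_success = math.exp(-2 * offered_load_g)
    return 1 / p_success

for g in (0.05, 0.1, 0.25, 0.5):
    n_tx = expected_tx_pure_aloha(g)
    # Energy scales roughly with the number of transmissions of the same frame,
    # ignoring the overheads that synchronized schemes add on their side.
    print(f"offered load G = {g:4.2f}: ~{n_tx:4.2f} transmissions per frame "
          f"(~{n_tx:4.2f}x the radio energy of one scheduled transmission)")
```

Even this crude model shows the retransmission penalty growing quickly with load, which is the intuition behind the duty-cycle-aware and scheduled MAC designs surveyed in this section.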
The authors of [131] propose a Carrier Sense Multiple Access (CSMA) protocol for a LoRa network as an alternative to the random-access protocol ALOHA, with the goal of minimizing collisions in LoRa transmissions for short and long messages, with extremely interesting results in terms of both power consumption and delay. A significant amount of effort was expended in [73] to propose a new MAC protocol called RS-LoRa, with the goal of improving the reliability and scalability of the LoRaWAN network. This RS-LoRa MAC protocol is divided into two steps: the first is handled by the gateways, in which a gateway schedules the nodes in its cell by dynamically specifying the authorized received signal strength and SFs for each channel, and the second is handled by the nodes, in which the nodes decide on the transmissions based on the scheduling information provided by the gateway. The transmission power, SF, channel, and timing of data transmission are all determined by the nodes themselves. The nodes are divided into different groups using the proposed light scheduling, and each group uses a common transmission power to reduce the capture effect. This MAC protocol proposal significantly reduces the number of packet collisions in the network by improving reliability and scalability. In terms of energy consumption, it is clear that RS-LoRa introduces supplemental power consumption measures. However, it should be noted that in RS-LoRa, authorized SFs are determined by the gateway based on network preferences; the gateway can choose between reliability and energy consumption. The authors of [132] proposed including the distributed queueing (DQ) algorithm in LoRa, called the DQ-based MAC protocol, to improve the scalability and operability of LoRa networks. In order to evaluate the performance of the new MAC protocol, a comparison between DQ-LoRa, P-ALOHA, and CSMA was done, and the results showed remarkable performance in terms of throughput, average delay, and average power consumption. DQ-LoRa is more efficient than P-ALOHA in terms of power consumption, and can achieve power savings of up to 48% when the number of packets transmitted is high. In [133], the authors introduce, develop and evaluate FCA-LoRa, a new MAC protocol designed to improve reliability and collision avoidance in LoRa networks. It is based on the diffusion of beacon frames through the network gateway aiming to synchronize communication with terminal devices and to optimize channel utilization using different SFs. The simulation was performed using OMNeT++, showing that FCA-LoRa increases the performance of the traditional LoRaWAN scheduling method regarding throughput and collision avoidance. However, since LoRa end devices listen almost continuously to the available frequency channels, FCA-LoRa could raise some serious issues regarding power consumption. In another study, the authors proposed a real-time protocol called RT-LoRa for industrial monitoring and control applications. The protocol uses a Multiple Listening Before Talk (mLBT) mechanism that allows the detection of channels several times in a time slot [134]. In [135], the authors present EF-LoRa, a LoRa networking solution that can provide equitable power consumption between end devices based on smart allocation of various network resources such as frequency channels, SFs, and TP, ensuring balanced power consumption between LoRa network end devices and extending the network's life. 
According to simulation results, the proposed EF-LoRa solution can improve the energy fairness of existing LoRa networks by 177.8% and achieve higher energy efficiency fairness than existing LoRa and RS-LoRa networks. The authors of [136] propose a new MAC protocol that dynamically adapts the LoRaWAN MAC layer to changes in traffic load. This protocol, known as the Traffic-aware Energy efficient MAC protocol (TREMA), can switch between asynchronous and synchronous schemes based on changes in probed traffic. The TREMA protocol thus expands the maximum capacity of LoRa deployments while also ensuring that the most energy-efficient access scheme is always chosen. A new MAC protocol, called Deterministic Group Acknowledgment Transmissions in LoRa networks (DG-LoRa), designed to improve the scalability of LoRaWAN networks through deterministic GACK transmissions, is introduced in [139]. The authors evaluate the performance of DG-LoRa using a Monte Carlo simulation and then compare it to existing LoRaWANs in terms of data drop rate and number of retransmissions. Their numerical results show that DG-LoRa supports about five times more connections to the LoRa network while achieving a data drop rate of 5%. In addition, DG-LoRa provides low overhead by reducing the number of data frame retransmissions. In [137], the authors propose Time-slotted LoRaWAN (TS-LoRa) as a new approach to time slot communication over LoRaWAN. TS-LoRa allows nodes to self-organize the scheduling of time slots within frames. Experimental results with 25 nodes show that TS-LoRa can achieve a packet delivery rate above 99%, even for the most remote nodes. Furthermore, simulations with a higher number of nodes indicated that TS-LoRa has lower energy consumption than the confirmable version of LoRaWAN, without compromising the packet delivery rate. The authors of [138] present a new network architecture and an on-demand Time Division Multiple Access (on-demand TDMA) MAC protocol exploiting short-range wake-up radios and a LoRa physical layer. On-demand TDMA provides an efficient broadcast and unicast service for data transmission and collection, thereby improving the performance of LoRa networks and achieving a 100% packet delivery rate by eliminating the possibility of packet collisions. Table 9 summarizes and evaluates the energy consumption of different MAC protocols used in LoRaWAN networks. B. SENSOR/IoT DEVICE SIDE The energy consumption of the end device is a decisive factor for the correct and efficient performance of the network [103]. A rich body of work has addressed energy efficiency at the MAC layer from the device side, as detailed below. 1) SCHEDULING OF LoRa-BASED TRANSMISSIONS The authors of [66] propose a transmission scheduling algorithm at a central node, which defines when a given IoT device is allowed to transmit. They reduce message size by implementing a probabilistic structure using Bloom filters, which encodes the assigned time slots in order to decrease the packet length needed for synchronization and to send more information to the IoT nodes. The time slots are assigned according to the traffic requirements of the IoT nodes and contextual information, such as periodicity, synchronization or clock drift. Using the central node, they synchronized the uplink transmissions of the IoT devices.
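To make the Bloom-filter idea of [66] more concrete, the sketch below implements a generic Bloom filter over granted slot indices: the central node inserts the slots it grants, transmits only a small bit array, and a device tests each upcoming slot for membership (false positives are possible, false negatives are not). The sizes, hash construction, and usage are assumptions for illustration; the actual encoding of [66] is not reproduced.

```python
import hashlib

class SlotBloomFilter:
    """Minimal Bloom filter over time-slot indices (illustrative only)."""

    def __init__(self, size_bits=128, n_hashes=3):
        self.size = size_bits
        self.n_hashes = n_hashes
        self.bits = 0

    def _positions(self, slot):
        # Derive n_hashes bit positions from a salted hash of the slot index.
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{slot}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, slot):
        for pos in self._positions(slot):
            self.bits |= 1 << pos

    def might_contain(self, slot):
        return all((self.bits >> pos) & 1 for pos in self._positions(slot))


if __name__ == "__main__":
    bf = SlotBloomFilter()
    for granted in (3, 17, 42):
        bf.add(granted)
    # Contains the granted slots plus, possibly, a few false positives.
    print([s for s in range(60) if bf.might_contain(s)])
```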
In terms of power consumption, the results show that the synchronization process will use less than 3 mAh of additional battery per end node over a period of one year, for synchronization periods longer than three days. This is less than the battery capacity used to transmit packets that would be lost in an unsynchronized network due to collisions. The authors of [140] present a scheme for scheduling node communications in slots of different sizes based on the SF. This approach requires transmissions with the same SF to be scheduled in different slots, while transmissions with different SFs can be processed in parallel, thus preventing collisions. The first algorithm, called "Global Algorithm," calculates the schedule for all communications. The second algorithm, called "Light," schedules only the first transmission for each individual node and replicates it in subsequent frames. Under this setting, the IoT device maintains the same SF in successive transmissions, with a preference for the shortest schedule. The simulation results show an improvement of up to 250% in terms of energy consumption, associated with a packet delivery rate of nearly 100%. In [141], a transmission scheduling mechanism based on temporal mappings is presented. The authors use the joining process of a new device to provide information about the periodicity of its transmissions. This enables gateways to schedule transmissions while avoiding collisions. The results obtained from NS-3 simulations show that, unlike with LoRaWAN and CSMA, the collision rate decreases while the packet delivery rate increases, at the cost of slightly increased energy consumption. 2) ALLOCATION SCHEME In [142], a new ADR algorithm called CA-ADR is proposed for the LoRaWAN network to assign data rates to EDs by taking the collision probability at the MAC layer into account. The new algorithm was compared to two benchmark solutions using simulation and experimental approaches under different performance metrics. Their findings show that CA-ADR outperforms the standard solution in networks that are not severely constrained by connectivity issues. Paper [143] proposes CARA (Collision Avoidance Resource Allocation), a new algorithm designed to increase the capacity of LoRaWAN networks while decreasing the number of collisions. The CARA algorithm divides the wireless medium's capacity into resource blocks that correspond to a channel and an SF. Transmissions in different resource blocks will not collide due to the orthogonality of the SFs in LoRa. Furthermore, CARA benefits from the existing joining procedure for parameter exchange and synchronization, which eliminates any subsequent communication between the end devices and the network. A comparison was also made between the ADR and the proposed SF assignment algorithm. When compared to the ADR algorithm, CARA provides a significant increase in throughput. Another significant result is that, while the proposed solution slightly increases overall transmission time, resulting in a slight increase in power consumption compared to the ADR proposed in the LoRaWAN specification, it ensures more equitable resource sharing. The authors of [144] explore a different approach to data collection in LoRaWAN and propose a Fine-grained scheduling approach to ensure Reliable and Energy-Efficient (FREE) data collection in LoRaWAN networks.
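The resource-block view used by CARA can be illustrated in a few lines: each node is pinned to one (channel, SF) block, so only nodes sharing the same block can collide with each other. The round-robin assignment and the parameter values below are illustrative assumptions, not the actual CARA algorithm of [143].

```python
from itertools import cycle, product

def allocate_resource_blocks(node_ids, channels, sfs):
    """Toy CARA-style allocation: one (channel, SF) resource block per node.

    Transmissions in different blocks do not collide thanks to the
    quasi-orthogonality of SFs; nodes sharing a block must still be separated
    in time by the scheduler.
    """
    blocks = cycle(product(channels, sfs))
    return {node: next(blocks) for node in node_ids}

if __name__ == "__main__":
    plan = allocate_resource_blocks(range(10),
                                    channels=[868.1, 868.3, 868.5],
                                    sfs=[7, 8, 9, 10])
    for node, (ch, sf) in plan.items():
        print(f"node {node}: {ch} MHz, SF{sf}")
```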
This approach is based on buffering data at the terminal devices and collecting it in scheduled mass transmissions at appropriate times. Instead of transmitting the data directly, this system assigns SFs, TPs, time slots, and frequency channels. Finally, they evaluate the performance of the proposed system. The numerical results show that the lifetime of the devices is estimated to be more than ten years, regardless of the type of traffic and the size of the network. 3) DATA MANAGEMENT The frequency and payload size of data transmission affect energy consumption. The authors of [145] add that for a given day, the number of transmissions is linked first and foremost to the device's energy consumption. As the payload size grows, so does the number of symbols per packet, resulting in an additional energy cost per transmitted packet. It is also reported in [103] that the energy consumption in the case of unacknowledged uplink traffic is greater than when the acknowledgement is transmitted in the first reception window. In the primary use case, the end node without receiving ACK must keep both reception windows open as well as the transition phase between the two, compared to the second case, where the reception of the ACK packet is a one-time event. The latter allows the node to enter sleep mode immediately after receiving an ACK, and the transition phase between the first and second reception windows is avoided. Such a strategy will aid in the reduction of energy consumption. It is very clearly marked in [89] that each synchronization procedure directly affects energy consumption through the excessive load of synchronization messages and their transmission frequency, whereas LoRaWAN end devices are frequently asked to share a common time base. This work consists of determining a trade-off between the expected synchronization uncertainty and the energy available in the LoRaWAN. On this basis, an algorithm was developed to generate comparative curves between energy efficiency and uncertainty, while taking into account a synchronization mechanism that occurs on demand following (a posteriori) synchronization. The trade-off proposal is applied to two scenarios related to industrial applications: the Time Division Multiple Access (TDMA) system and the predictive maintenance framework. The authors of [147] present Charm, a new system that aims to improve not only the battery life of end devices but also the coverage of LPWAN networks. Charm is a solution designed to ensure that multiple LoRaWAN gateways pool their received signals in the cloud on a consistent basis for the purpose of detecting weak signals that cannot be decoded by any individual gateway. Indeed, a new gateway hardware and software design has been developed, consisting of precisely detecting the specific sections of the received signal that must be sent to the cloud. The obtained results provide intriguing benefits in terms of range and end node battery life. In [145], the authors propose an adaptive data aggregation and retransmission algorithm for transferring traffic data from sensors, and the results were analyzed to provide an estimated lifetime of sensor device radios. The proposed algorithm works by aggregating data from successive periods and transmitting it in a single LoRa packet. Because the data must be transmitted only after the aggregation is complete, such aggregation will result in higher latency in data transmission. 
This higher latency is accompanied by a reduced overall energy cost due to the reduced number of transmissions. It has been demonstrated that even with a data rate of one transmission per time interval, a 1000 mAh battery can guarantee a lifetime of more than five years with time intervals of 6 minutes. The authors of [146] present AggACK, a frame aggregation method for ACKs running on LPWAN networks. The LoRaWAN network server employs the proposed method by sending ACKs that contain cumulative ACK information for multiple data frames and multiple users. User nodes open their reception windows synchronously via the network server, allowing the network server to broadcast the cumulative ACK information to the user nodes within the same ACK. Unlike standard LoRaWAN, it ensures reliable data transfer with a very small number of ACKs. Table 10 lists the aggregation techniques used in LoRaWAN networks. The LoRaWAN network in its basic configuration is unable to maintain reliable communication, and a loss of transmitted frames is possible due to channel effects as well as terminal device mobility. The authors of [148] performed a detailed measurement of a new LoRaWAN network to identify the spatial and temporal properties of the LoRaWAN channel. According to the collected data, frame losses are very high and occur when end devices move away from a gateway. When the end device is about 6 km from the nearest gateway, the frame loss can be as high as 53%. In fact, the loss of a frame results in the loss of data. Given that an IoT application is typically data-driven, the resulting data loss must be minimal, and thus data recovery for IoT applications is required. To accomplish this, they created and implemented the DaRe application-layer coding scheme. The need for such a system is further motivated by the ALOHA media access technique used in LoRaWAN, which will undoubtedly result in numerous collisions resulting in frame loss. DaRe is a system that incorporates new techniques to improve data recovery while lowering overhead costs. DaRe does not recover lost frames, but it does allow data recovery from lost frames at the application level using FEC. The results show that with a coding rate of 1/2 and a frame loss of up to 40%, 99% of the data is recoverable. In comparison to repetition coding, DaRe provides 21% more data recovery and can reduce power consumption during transmission by up to 42% for 10-byte data units. Furthermore, DaRe provides greater resistance to burst frame loss. 4) UNMANNED AERIAL VEHICLES (UAVs) Unmanned Aerial Vehicles (UAVs), also known as drones, are now being used in a wide range of novel applications, particularly in the telecommunications domain. Paper [149] discusses some promising benefits of UAVs in wireless environments, such as UAV-assisted wireless charging, in which a UAV could be (re)charged on the fly via wireless power transfer while flying near a charging station. A UAV, on the other hand, could wirelessly transfer energy to depleted ground IoT devices. At another level, the authors of [150] present a novel use of a UAV with an energy harvesting module to extend the network's lifetime. As a result, the UAV can be used as an energy source for depleted IoT devices. On the one hand, the UAV charges the depleted ground IoT devices, beginning with those with a battery level below a certain threshold.
The UAV station, on the other hand, collects data from IoT devices that have enough energy to transmit their packets, and during the same phase, the UAV extracts and harvests energy from the RF signals transmitted by IoT devices. Numerous studies, including [151], use UAVs on the LoRaWAN network to collect data from LoRa sensors. The authors of [28] investigate the UAV-based gateway (GW) that can improve the reliability of LoRaWAN communication in urban scenarios. However, neither study focuses on the power consumption and lifetime of the LoRaWAN terminal device. The authors of [152] highlight the energy efficiency of a LoRaWAN using UAV technology to collect periodic sensor data reports. The simulation results show that a UAV-based GW can reduce the average power consumption for network communications by up to 59%, depending on the trajectory or speed of the UAV. 5) LINEAR WIRELESS SENSOR NETWORK The authors of [153] investigate the energy consumption of a specific type of WSN known as Linear WSN. According to [154], this type of WSN differs significantly from a standard WSN in that the sensor nodes are fixed in one dimension. Such WSNs are used for monitoring a wide range of applications, including transmission lines and pipelines. Linear WSN requires a unique network architecture, and its power consumption varies. The authors of [153] also proposed an energy consumption model based on a network architecture tailored for a linear wireless sensor network, as well as a comparison of two widely used wireless communication protocols, LoRaWAN and ZigBee. Based on this model, the LoRaWAN network outperforms ZigBee network in terms of energy efficiency. V. ENERGY EFFICIENCY AT NET LAYER The network layer's primary function is to transmit data packets from a source to a given destination. This section provides a literature review on LoRaWAN energy efficiency at the network layer. Numerous authors proposed multi-hop LoRaWAN solutions where several devices act as relays for other devices. The routing mechanism is a crucial factor for a multi-hop LoRaWAN since it has the potential to affect network performance in terms of throughput, reliability, latency, and energy consumption. While some works propose mechanisms that use intermediate nodes, such as a simple relay that only uses the LoRa physical layer, others propose routing protocols, resulting in more complex mesh networks. This section will go over these approaches in-depth. A. NETWORK SIDE 1) ROUTING MODELS As previously stated, the energy required to transmit a packet decreases as the distance between devices decreases. The following is a collection of works that are more closely related to the routing model used for the LoRaWAN network. The energy efficiency of the star and mesh topologies was investigated by the authors of [95]. They also propose a method for determining the best associations between spreading factor, transmission power, distance, and bandwidth. In terms of energy consumption, they conclude that the best choice between direct and multi-hop transmission is determined by the sender-to-gateway distance. The authors of [155] present an energy consumption model for single-hop and multi-hop LoRaWAN networks. In the multihop approach, the authors consider a network formed by several rings around the gateway. The simulations are performed in MatLab, and the authors conclude that in the multihop scenario, nodes near the gateway consume more energy than nodes farther away. 
In the single-hop scenario, however, nodes near the gateway have higher energy efficiency. One more theoretical work has been introduced in [156] for various multi-hop configurations for LoRa networks with three hops to the gateway. When compared to traditional singlehop LoRaWAN, the results showed that some topologies could improve packet delivery ratio and energy consumption. The authors of [157] consider a standard LPWAN with a TDMA MAC layer. They suggest a Distance-Ring Exponential Stations Generator (DRESG) framework for evaluating performance and establishing optimal routing connections in the uplink for multi-hop communications. Their results indicate that multi-hop may improve network lifetime and balance energy consumption across all nodes in the network. The same researchers suggest a protocol stack for LPWANs called HARE in [158], which allows for single-hop and multihop connections. It is formed of many techniques at various communication layers, such as network synchronization, adaptive transmission power level, TDMA channel access, network association process, and energy-aware routing protocol. The protocol was tested on real hardware platforms and demonstrated high reliability and low energy consumption. To improve the range and quality of LoRaWAN communications, the authors of [159] propose the implementation of a forwarding device that allows LoRaWAN to be used for multi-hop communications. To assess the impact of an additional communication hop, tests were performed with Class A and Class C devices. It has been demonstrated that the addition of a forwarder improves signal strength while significantly lowering the power consumption of the final device, resulting in a longer battery life. In [160], authors propose using a simple relay device to expand the LoRaWAN coverage area in rural areas. The authors recommend deploying relay nodes in areas that are not covered by the gateway and propose a simple and direct message forwarder as well as a synchronization mechanism. They demonstrate that the addition of the relay node to deliver packets reduces energy consumption. They also notice an improvement in coverage and reliability. The authors of [161] compare and contrast two architectures for multi-hop LoRaWAN in smart cities. An end device acts as a relay node in the first architecture to extend the coverage area. In the second architecture, known as star-of-stars, a group of remote devices forms a cluster and sends data to a cluster gateway, which acts as a relay and sends the data to a central gateway. The authors implement a prototype of both architectures, but no new forward mechanism is presented. The results show that two or three hops consume less energy than single-hop communication. Furthermore, this paper discusses some challenges as well as future work for multi-hop LoRaWAN networks. In [84], a study on the implementation of a multi-hop network for LoRaWAN communication that provides low cost and high coverage is proposed. The proposed method involves sending packets to the Internet via multiple gateways in order to overcome the problem of the gateway using a private telecommunication network as a way to access the Internet, which requires a monthly fee. The network configuration includes: gateways, which are classified as hopping gateways (HGW), main gateways (MGW), and the network server. 
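As a back-of-the-envelope illustration of why the sender-to-gateway distance drives the single-hop versus multi-hop choice, the sketch below compares one long hop at SF12 with two short hops at SF7 using the simple model energy = radio power x time on air. The airtime and power figures are assumptions (20-byte payload at 125 kHz, roughly 14 dBm output plus transceiver overhead), and the relay's receive and idle-listening costs, which can erode the advantage in practice, are ignored.

```python
# Assumed per-packet times on air for a 20-byte payload at BW 125 kHz, CR 4/5.
AIRTIME_S = {7: 0.057, 12: 1.32}
RADIO_MW = 25 + 100   # assumed PA output (~14 dBm) plus transceiver overhead

def hop_energy_mJ(sf):
    # mW * s = mJ for a single transmission at the given spreading factor.
    return RADIO_MW * AIRTIME_S[sf]

single_hop = hop_energy_mJ(12)     # distant node reaches the gateway directly
two_hops = 2 * hop_energy_mJ(7)    # node -> relay -> gateway, both at short range
print(f"single hop @SF12: {single_hop:.0f} mJ, two hops @SF7: {two_hops:.0f} mJ")
```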
In terms of energy efficiency, we can conclude from the preceding works that the most energy-efficient routing model is determined by the distance between the sender and the gateway. Table 11 provides a comparison between single-hop and multi-hop routing models. 2) ROUTING PROTOCOLS Routing is the process of looking for and selecting a path to send messages from a source to a destination. In the presence of multiple routes, an algorithm manages path selection, taking into account one or more performance metrics to select the links and nodes that will form the path to the destination. Sensors are typically powered by small-capacity batteries that operate autonomously over a limited lifetime, which can be extended using energy recovery techniques. As a result, one of the primary challenges of WSN routing protocols is to create energy-efficient paths between sources and sinks [85]. Numerous studies focusing on routing protocols have been conducted to improve the LoRaWAN network, as detailed below. In [38], an IoT protocol for LoRa transceivers called LoRaBlink is proposed. This protocol addresses many issues that are required for the deployment of IoT applications but are not addressed by existing LoRa protocols, such as reliable and energy-efficient multi-hop communications. It is also intended for low-latency and bidirectional communication. It combines MAC and routing via beacons to enable time synchronization and distance reporting based on the number of hops to the gateway or sink. The authors of [162] propose using the Concurrent Transmission (CT) protocol in LoRaWAN networks, resulting in the CT-LoRa protocol. CT is a flood routing protocol that has been successfully implemented in IEEE 802.15.4 networks. The protocol does not require the use of a routing table, and the flooding mechanism ensures node synchronization. The results of the proposed indoor scenario demonstrated that the protocol improves LoRa coverage and achieves a reliable packet delivery rate. While the authors do not present an energy consumption analysis of the protocol, it is indicated that numerous works have confirmed that, whichever of the energy consumption, reliability, or latency viewpoints is taken, CT provides better or comparable performance compared to state-of-the-art multi-hop protocols. The authors of [85] provide a detailed analysis of the definition and evaluation of a multi-hop routing protocol based on the Destination-Sequenced Distance Vector (DSDV) routing protocol, which is designed to increase the coverage of LoRaWAN installations and enable full interoperability with standard LoRaWAN gateways. The proposed routing protocol makes use of the LoRaWAN beacon to deliver LoRaWAN packets from end nodes to gateways with the fewest hops possible. The proposed system was tested in linear topologies and bottlenecks and evaluated using packet delivery rate (PDR) and throughput as performance metrics. Overall, the results indicate the viability of such an extended LoRaWAN multi-hop system. The throughput values obtained are generally adequate for most IoT applications. The authors of [165] describe a new version of the routing protocol for LoRa mesh networking, which provides a multi-point networking connection between gateways to achieve greater coverage. The proposed protocol is tailored to the needs of LoRa networks and devices, and is based on HWMP (Hybrid Wireless Mesh Protocol) and AODV (Ad hoc On-Demand Distance Vector routing). The protocol is evaluated only from the perspective of route construction time.
The authors of [164] propose an energy-efficient multi-hop communication solution (e2McH) in which routes are built based on energy consumption, residual battery life, and traffic rate. The simulation results demonstrate a 15% energy consumption gain when compared to single-hop LoRaWAN. In [166], the authors control the ancient underground water distribution systems in Siena, Italy, using multi-hop linear communication over LoRa. The authors use a simple routing protocol and synchronization mechanism, with end devices using a wake-up time transmission scheme to save energy. They proved that the proposed solution is dependable, and that the synchronization mechanism cuts energy consumption by half when compared to a non-optimal wake-up time. In some applications, such as mines and pipelines, a linear topology may be the only viable option. In [163], the first proposal for a routing protocol in a LoRa network using a standard IP stack was introduced. The authors developed a novel MAC layer to work with the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) and concluded that enabling RPL over LoRa is a step in the right direction that needs to be tested further. It is demonstrated that by selecting the path with the shortest ToA, power consumption can be reduced, thereby increasing network lifetime. For WSNs, a multitude of routing protocols are available. A hierarchical protocol is intended to achieve the best possible balance of scalability and performance. When sensor nodes are associated with multi-hop communication, the power consumption of the sensor nodes in hierarchical routing is extremely low [167]. Table 12 examines a selection of protocols that are adapted to the LoRaWAN network and are referenced in the literature. 3) NETWORK SLICING Network slicing has emerged as one of the most important 5G innovations, gaining traction in both academia and industry due to its ability to allow different IoT devices and applications to coexist on top of a shared physical network infrastructure. Network slicing, in general, makes use of virtualization and softwarization substrate technologies to dynamically orchestrate physical networking resources (bandwidth, pathways, virtual network service chaining, placement). Network slicing helps to create end-to-end virtual network instances (network slices) that are isolated on top of a shared physical network system [168], [169]. In this context, the literature includes several works for improving LoRa energy consumption through network slicing, as detailed below. The authors of [170] study the factors associated with dense IoT deployment scenarios, especially when faced with the difficult task of using network slicing for a LoRaWAN gateway, as this results in performance degradation caused by the physical limitations of these gateways. For the efficient deployment and isolation of network slices in LoRaWAN physical gateways, the authors used a software-defined networking (SDN)-based architecture tailored to network slicing. They also used a slice-based optimization method to improve the scalability and configuration of the LoRaWAN. It employs the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) and Gaussian Mixture Modelling (GMM) algorithms to determine the best network slice configuration strategy for maximizing QoS benefits while minimizing energy consumption and reliability costs.
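For readers unfamiliar with TOPSIS, its generic ranking step can be sketched in a few lines: each alternative is scored on several criteria, and the one closest to the ideal point (and farthest from the anti-ideal point) wins. The criteria, weights, and slice configurations below are made-up examples; the tuned procedure of [170], including the GMM step, is not reproduced.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS ranking (generic method, not the tuned version of [170]).

    matrix  -- alternatives x criteria scores
    weights -- criterion weights
    benefit -- True where larger is better (e.g. QoS), False where smaller is
               better (e.g. energy).  Returns a closeness coefficient per
               alternative; higher is better.
    """
    m = np.asarray(matrix, dtype=float)
    v = (m / np.linalg.norm(m, axis=0)) * np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

if __name__ == "__main__":
    # Three hypothetical slice configurations scored on (QoS, energy mJ, loss %).
    scores = topsis([[0.9, 120, 2.0],
                     [0.7,  60, 4.0],
                     [0.8,  80, 3.0]],
                    weights=[0.5, 0.3, 0.2],
                    benefit=[True, False, False])
    print("closeness coefficients:", scores.round(3))
```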
The numerical results demonstrate the efficiency of the method used to enable optimal decision-making in realistic LoRaWAN scenarios for slice configuration with respect to other decision techniques (static, dynamic-adaptive, and dynamic-random). The authors of [171] investigate network slicing over various SF configurations in order to assess system performance and determine which one best serves LoRa devices in each slice. At a higher level, a dynamic inter-slicing technique is designed in which the bandwidth is similarly reserved on all LoRa gateways based on a maximumlikelihood estimation (MLE) [172], and this is then improved and extended with an adaptive dynamic method that evaluates each LoRa gateway independently and reserves its bandwidth after applying MLE to the devices in its range. Both dynamic slicing suggestions will be compared to a simple fixed slicing strategy in which the GW's bandwidth is reserved equally between slices. Finally, an energy model for LoRaWAN is integrated into NS3 based on LoRa energy specifications to evaluate the energy consumed in each slice, and an intra-slicing algorithm that reaches the QoS requirements of each slice in isolation is proposed. The results demonstrate that as the number of nodes increases, so does the total energy consumed across all simulated slices. 4) MULTI-RATs INTEGRATION Gateways that allow multiple RATs (also known as multihoming) have the potential to provide alternative communication opportunities via various wireless networking technologies, allowing IoT devices to send data. As a result, multihoming gateways must include control functions capable of selecting the best RAT interface for targeting IoT data based on a variety of factors such as current network conditions and specifications [173]. The authors of [174] propose a novel distributed approach to solving the RAT selection problem, based on matching game theory, which can achieve a stable connected devices-RATs association. The proposed matching game assignment enables IoT devices with limited energy budgets to improve their energy efficiency and reduce data transmission costs over the serving RAT while incurring very little signaling overhead. Around this perspective, we present below the most intriguing works that make use of a Multi-RATs system with a LoRaWAN network and have demonstrated efficient energy consumption. Authors of [173] explore the viability of developing a multi-RAT LPWAN device capable of meeting transmission requirements in an NB-IoT co-existence scenario. The authors built a device prototype that includes an NB-IoT chipset that handles communication events and implements the LoRaWAN module. The dual-RAT LPWAN device was tested in a commercial NB-IoT and a private LoRaWAN environment at Brno University of Technology in the Czech Republic. According to the results, the NB-IoT chipset consumed more than 200 mW on average to keep the network synchronized. The LoRaWAN transceiver, on the other hand, consumed less than 60 mW due to the lower transmission power, despite the longer transmission time. The authors of [175] created an analytical model for assessing the performance of license-free IoT-based networks, especially in scenarios where the same radio spectrum is used by one or more concurrent radio access technologies (e.g., LoRaWAN and Sigfox). This allowed the authors to perform analytical modeling of transmission success probability and key performance indicators (KPIs) in terms of delay and battery lifetime. 
It was possible to evaluate system performance when subjected to interference by measuring the interference of a scenario in which different communication technologies coexist. The authors used simulations with analytical models to evaluate the performance of a reference license-free technology. Given its exposure to interference caused by another transmission device that uses the same frequency, the scenario was simulated using a simplified form of LoRaWAN. The simulation results show that the presence of competing technologies significantly degraded the system's performance. The authors solved this problem by demonstrating that it is possible to mitigate the interference caused by the coexistence of transmission devices through joint reception. The latter can significantly extend battery life, especially over short distances. VOLUME 10, 2022 In [176], the authors propose a game theory-based model in which end nodes implement multiple radio transceivers (e.g., LoRa, Wi-Fi, and BLE) to allow data processing at the network edge and connect multiple networks using different radio technologies. They integrate it into LoRaBox using auction-based techniques described in a previous work [177]. The system is composed of a test solution that supports LoRaWAN, BLE, and Wi-Fi. Depending on the signal strength, radio, and application requirements, the system can switch between multiple radio technologies adaptively. B. SENSOR/IoT DEVICE SIDE Network Coding (NC) is a promising technique for increasing throughput and decreasing latency in LoRaWAN networks. Its main advantage is that it reduces the number of repeated transmissions by minimizing packet loss. The authors of [178] go into great detail about the methods and techniques used for Instantly Decodable Network Coding (IDNC), which is applied to LoRaWAN. It is possible to state that by simulating and implementing the proposed system, a significant reduction in joint delay, delivery time, and power consumption is observed. In paper [179], two practical alternative methods for increasing the probability of decoding a message in the presence of interference are used. The first method employs directional antennas to improve signal strength at the receiver level without requiring additional transmission power. The second method employs multiple base stations to improve the probability that at least one of them decodes a received message successfully (diversity reception). Both methods have advantages and disadvantages in terms of cost, deployment method, and maintainability. The simulation results show that, as expected, using each method has the potential to improve the performance of the LoRa network under interference conditions. However, it has been demonstrated that the use of multiple base stations outperforms the use of directional antennas. For example, in an environment with 600 nodes interspersed through four other LoRa networks of 600 nodes each, it is discovered that using three base stations raises the Data Extraction Rate (DER) from 0.24 to 0.56, while using directional antennas raises the value to 0.32. Some IoT applications, for example, may still require certain reliability guarantees. The authors of paper [180] approach this problem through a low complexity encoding/decoding technique based on fountain codes. The main advantage of using encoding techniques when using LoRa technology is that no changes to the fundamental PHY technology are required. 
The encoded message is transmitted by the end devices via the radio access network to the application servers responsible for decoding the message. Each packet is assigned a sequence number, which allows the application server to determine which packets have been lost. These packets can then be recovered after a sufficient number of subsequent packets have been delivered successfully, so that the established redundancy can be used for decoding. Static Context Header Compression (SCHC) is a header compression scheme that allows for fragmentation and is particularly suited to LPWAN technologies [181]. The authors of [182] discuss the impact of SCHC compression and fragmentation on the LoRaWAN network, evaluating several compression and fragmentation configurations aimed at ensuring efficient packet transmission. The results demonstrate the benefits of SCHC, including a significant improvement in reliability for LPWAN links operating at lower data rates. However, fragmentation can result in a loss of efficiency in terms of data and power. VI. HOT TOPICS AND NEW INSIGHTS It is clear from the preceding sections that considerable work on improving energy efficiency within the LoRaWAN network has already been performed. However, many issues remain unresolved to enable efficient energy use in the LoRaWAN network, leading us down an interesting research path. Table 13 summarizes all axes addressed by layer and section, as well as the associated open research areas. A. DENSE/ULTRA-DENSE NETWORKS Energy consumption control for the nodes is a primary issue faced in a dense/ultra-dense network. The latter requires a higher number of gateways and certainly a denser deployment of end devices, hence leading to more interference and consequently more collisions and delays. These collisions have a significant impact on battery life. B. SERVICE/DEVICE HETEROGENEITY Given the various types of devices used in a LoRaWAN network supporting a variety of services, managing and controlling the power consumption of heterogeneous devices from different vendors would be necessary, posing a challenge in the dynamic integration of heterogeneous devices and services, where interoperability needs to be maintained. C. CROSS-LAYER DESIGN Cross-layer design is the process of designing protocols that leverage the dependencies between protocol layers to achieve better performance. This approach contrasts with the traditional layered approach, where the protocols in each layer are designed independently of each other [183]. The idea in this context is to explore and create solid and efficient cross-layer communication to improve network performance, particularly as it relates to node energy consumption, by fully utilizing the interactions between the layers. D. MULTI-RATs ACCESS The coexistence of heterogeneous technologies distributed over various types of hardware is a research area that has emerged due to the explosive growth of IoT devices. Multiple radio access technologies (RATs) are highly future-oriented wireless network technologies requiring sophisticated coordination and collaboration between nodes and between RATs to achieve an integrated architecture where the multiple RATs operate as a single cohesive virtual radio access network. According to the reviewed literature, several researchers have examined strategies to enable the coexistence of LoRaWAN with different wireless technologies to satisfy the requirements of next-generation IoT applications that use LPWAN technologies.
As a result, more significant gains occur while preserving the benefits derived from independent use of the network. By combining LoRaWAN with other wireless network technologies, multiple-RAT devices can solve transmission bandwidth and rate-limiting issues. Multi-RAT can improve reliability by providing critical data transmission over multiple media at the same time. However, the addition of multi-RAT devices holds the possibility of introducing a significant new challenge: how do we deal with the interference caused by the coexistence of different technologies that use the same wireless spectrum? Co-channel interference leads to information loss or packet retransmissions, which affect various IoT applications, in addition to wasting energy, increasing latency, and even reducing the effective bit rate. Several studies have highlighted the advantages of allowing LoRaWAN to coexist with other LPWAN technologies, but they have failed to address the issue of interference. Alternatively, further research exploring the interference effects on large-scale and heterogeneous LPWAN deployments is needed [13]. E. TRANSMISSION PARAMETERS SELECTION To begin with, the distribution of SF resources among nodes is becoming a critical issue. Increasing the node's SF should be carefully calculated because higher SF means longer ToA, and longer ToA increases the probability of collision with other high-frequency transmissions. The challenge is to propose a single SF assignment rule for each possible LoRaWAN topology, given that each network is unique and requires different methods to optimize the SFs of its nodes. Furthermore, the studies conducted to solve this problem use extremely high transmission power, resulting in higher energy consumption. It is necessary to propose an allocation scheme capable of optimizing SFs in an energy-efficient model. Secondly, the literature assumes that SFs are perfectly orthogonal, with few studies examining the impact of imperfect orthogonality. It is therefore of interest to conduct research that considers the non-orthogonality of SFs. Finally, the LoRaWAN network provides a high flexibility level: the NS can select the SF to be used by the various nodes, the transmission/reception channels, and the duration of the reception windows. It is thus possible to support reliable communications while meeting the requirements for communication reliability, delay, energy efficiency, and system capacity with the proper configuration of these parameters. The task at hand is to implement a system that allows us to automate the selection of LoRa transmission parameters based on the operating mode of each network to optimize and improve performance. F. PACKETS RETRANSMISSIONS The ALOHA access scheme is used in the LoRaWAN network, allowing any end device to send packets at any time by simply making a link request. Given that terminal devices are deployed in a non-scheduled mode and that the LoRaWAN network uses a device-controlled communication mode, simultaneous communications could potentially interfere with each other, resulting in packet transmission failure without a suitable access scheme. Failure to communicate results in a decrease in energy efficiency. Investigating the probabilities of retransmissions over the LoRaWAN network using various SF allocation approaches based on the MAC protocol used to provide an analysis of the overall system performance is an important direction to pursue.
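The SF-versus-airtime trade-off that underlies both the parameter-selection and the retransmission issues above can be made concrete with the time-on-air expression given in Semtech's SX127x documentation, sketched below; for a fixed transmit power, the radio energy per packet grows in direct proportion to this airtime. The defaults (bandwidth, coding rate, preamble length) are typical LoRaWAN settings and should be adjusted to the actual radio configuration.

```python
import math

def lora_time_on_air(payload_bytes, sf, bw_hz=125_000, cr=1,
                     preamble_symbols=8, explicit_header=True, crc=True):
    """LoRa time on air following the Semtech SX127x datasheet expression.

    cr = 1..4 maps to coding rates 4/5..4/8.  Low-data-rate optimization is
    assumed on for SF11/SF12 at 125 kHz, as LoRaWAN mandates.
    """
    t_sym = (2 ** sf) / bw_hz
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_symbols + 4.25) * t_sym
    return t_preamble + n_payload * t_sym

if __name__ == "__main__":
    for sf in range(7, 13):
        toa = lora_time_on_air(20, sf)
        print(f"SF{sf}: ToA = {toa * 1000:.0f} ms")  # energy per packet scales with this
```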
G. ENERGY/PERFORMANCE TRADE-OFF Most works targeting the performance improvement of LoRaWAN networks focus on finding a good trade-off between network performance and energy consumption. In several studies, the authors have to choose between a specific feature to be improved (scalability, reliability, security, etc.) and energy consumption. It is necessary to find a model that maintains a good equilibrium. VII. CONCLUSION This paper provides a comprehensive tutorial on LoRaWAN networks. Namely, we provide a detailed background on the LoRa standard, its architecture, its communication protocol stack, its end devices, and its services. Then, we survey the main research works on energy efficiency at the physical, MAC, and network layers of LoRa systems. Specifically, we highlight the most important works on the power consumption of LoRaWAN networks, while focusing on the network-specific characteristics that are attributed to these specific layers. This tutorial aims to review pioneering research works and draw insights on how to optimize LoRa capabilities both from the network and the device perspectives. We finally present some research opportunities and open problems. MOHAMED SADIK (Senior Member, IEEE) received the Ph.D. degree in electrical engineering from the National Polytechnic Institute of Lorraine (INPL), Nancy, France, in 1992. He has been a Full Professor with the Department of Electrical Engineering, National School of Electrical and Mechanical Engineering (ENSEM), since 2008. He was the Chair of the Electrical Engineering Department and the Chair of the Research and Cooperation Department, ENSEM, from 2013 to 2019. He founded and has led the Networks, Embedded Systems and Telecom (NEST) research group since 2011. He is currently the Chair of the Laboratory of Research in Engineering (LRI Lab.). His previous research activities are part of the development of autonomous biomedical systems. He is interested in codesign, modeling and synthesis of embedded systems, and autonomous/intelligent systems with applications to smart farming, smart health, smart environment, smart and green energy, and smart grid management. His current research interests include protocol design, ad hoc networking, network games, pricing and network neutrality, image processing, machine learning and deep learning, and security in the cloud. He has published several peer-reviewed papers in reputed international journals and conferences and some book chapters. He served on the technical program committees for many international conferences (and as a reviewer of several journals). He has co-founded the International Symposium on Ubiquitous Networking (UNet).
Physical Modeling for Large-Scale Landslide with Chair-Shaped Bedrock Surfaces under Precipitation and Reservoir Water Fluctuation Conditions The deformation and failure mechanisms of historical landslides, characterized by different types of bedrock surface shapes and known to have been induced by rainfall and reservoir water fluctuations, are an important issue currently being addressed by many researchers. The Zhaoshuling Landslide of the Three Gorges Reservoir Region, which is characterized by a chair-shaped bedrock surface under rainfall and reservoir water fluctuation conditions, was selected as an example in this study's physical modeling process. The results of different parameters, including the displacements, pore water pressure, and total soil pressure during the landslide event, revealed that the Zhaoshuling Landslide with a chair-shaped bedrock surface had been extremely sensitive to heavy rainfall coupled with the rapid lowering of the water levels. Then, based on the data analysis results of the monitoring of the rainfall and groundwater levels, as well as the reservoir water levels, a conceptual model was put forward to explain the failure mechanisms. It was believed that the chair-shaped bedrock at the toe of the slope had been subjected to a localized zone of high transient pore water pressure, which had significantly adverse effects on the mechanisms of the slope stability. Introduction China's Three Gorges Region covers 193 km of the middle reaches of the Yangtze River between Fengjie in Chongqing City and Yichang in Hubei Province (as shown in Figure 1). Due to the steep valley-side slopes and long periods of river incision, a large number of landslides have been formed in the Three Gorges Reservoir area, of which more than 90% are reactivated ancient landslides [1,2]. The Qianjiangping Landslide, which is a famous historical landslide, was reactivated at first by tentative impoundment combined with heavy rainfall and collapsed on July 14th of 2003. This disaster event resulted in major losses of lives and property [3][4][5]. The Liangshuijing Landslide in Yunyang, which is another ancient landslide, had also displayed intensified deformations in April of 2009, threatening many households located on the sliding body, as well as the shipping processes of the Yangtze River [6][7][8]. In order to prevent such tragedies and danger risks in the future, examinations of the potential revivals of ancient landslides in reservoir areas have received increasing attention in the field of engineering geology and geotechnical mechanics. At the present time, the revival factors of historical landslides in the Three Gorges Reservoir area have mainly focused on the external inducing factors and the internal controlling factors. The rise and fall of reservoir water and rainfall levels are considered to be the most important inducing factors for the reactivations of ancient landslides in the reservoir area.
The groundwater seepage and dynamic changes of groundwater levels resulting from the rise and fall of reservoir water levels are of particular concern. The fluctuations in water levels tend to change the physical and mechanical properties of a region, as well as the stress state and stability of the slope material [9][10][11][12][13]. In addition, such internal controlling factors as topography, lithology, permeability, and material composition also play vital roles in the stability of an area prone to landslide events [5,[14][15][16]. However, the majority of the previous studies have focused on the inducing factors, which may only provide limited information regarding the complicacy of the phenomena, even though the internal controlling factors, particularly the characteristics of the bedrock surfaces, are known to be the most important factors impacting the stability of landslide-prone areas. This study found that the available technical reports backed up the theories that the different positions and shapes of the bedrock surfaces will lead to different deformation mechanisms of the landslides, along with the magnitude and distribution of pore pressure and stress in sliding bodies [13,[17][18][19][20]. Therefore, considering the various types of reservoir landslides, integrated models of rainfall and water level variation conditions, bedrock surface shapes, internal action mechanisms, and the stability levels of landslide deformations with reservoir water level changes over time should be further investigated. Various physical model experiments have been conducted under laboratory conditions, and the experimental results have been extensively applied to explore the features, stability, and evolution of landslides. Such investigations have provided improved insight into the failure modes and mechanisms related to the changes of different factors [21][22][23][24]. Subsequently, the similarities between the laboratory results and field observations were investigated using the law of similitude, which has been employed extensively to investigate the fundamental principles of landslide movements [25][26][27]. In the present study, physical modeling was performed in order to examine the effects of chair-shaped bedrock surfaces on the reactivation of an ancient landslide under rainfall and reservoir water fluctuation conditions. The Zhaoshuling Landslide of Badong County was selected as the target of interest. The pore water pressure, total soil pressure, and the landslide processes were obtained using experimental processes by analyzing the monitoring data of multiple systems.
The obtained results improved the current understanding regarding the deformation characteristics and failure mechanisms of ancient landslides with chair-shaped bedrock surfaces under the conditions of rainfall and reservoir water fluctuations. The findings of this study provide an important basis for the prevention and mitigation of landslides in the reservoir areas. Engineering Geology of the Zhaoshuling Landslide Badong Town is a new residential area for immigrants in the Three Gorges Reservoir Region. It is located on the southern side of the Yangtze River. Many giant ancient landslide sites are located in this area, including the Zhaoshuling Landslide investigated in this study (as shown in Figure 1). The Zhaoshuling Landslide was characterized by a long strip shape and occurred on the right bank of the Yangtze River, approximately 6 km west of Badong County (as shown in Figure 2). Several deformation monitoring and stability analyses have been performed during the last several years for the purpose of studying the development of landslide deformations [28][29][30][31][32][33][34][35]. It was determined, based on the engineering geological survey results, that the landslide's front has an elevation of 60 m and is submerged under the Yangtze River. In addition, the landslide's rear area has an elevation of approximately 460 m, and the two lateral sides are seated on a gully and a local fault, respectively. The landslide's length is 1260 m, and its width was determined to be 570 m. The total planar area measures 61.2 × 10⁴ m². Furthermore, based on the buried depths of the slide area revealed by drilled boreholes, along with the planar distribution of the landslide, it has been confirmed that the Zhaoshuling Landslide's volume was approximately 3600 × 10⁴ m³. Two platforms, at elevations of up to 425 m and of 475 to 500 m, respectively, can be observed along the longitudinal direction. Figure 1 provides a landscape photograph of the Zhaoshuling Landslide area, as well as the locations of exploratory boreholes. The highest platform is located at the rear of the landslide area with a slope angle of 10° to 15°. The lower platform is located in the front of the landslide with a slope angle of 10° to 20°. Figures 2 and 3 present the plan and sectional views of the Zhaoshuling Landslide engineering geological conditions, respectively. The structural features in the study area were found to be characterized by an E-W trending of multiple folds, as well as a series of reverse faults.
The major feature among the fold structures was the Guandukou Syncline, in which the fold axis was observed to strike in a nearly E-W direction and the axial trace extended along the southern bank of the Yangtze River. The syncline had manifested as an iso-thick symmetrical fold in the section. In addition, many asymmetrical secondary interlayer folds existed on the two flanks of the syncline, which were found to be mainly developed in the soft formations of the T₂b³ strata. In regard to the E-W trending reversed faults, the bedding faults or bedding shear zones were observed to be developed following a pattern of E-W folds due to extensive deformational forces. Additionally, along with the above-mentioned fault features, there were also well-developed conjugate joint systems, fracture zones, and various joints observed, as well as many other structural indications, such as gravitational creep-slippage and so on. The Zhaoshuling Landslide moves along the interfaces of the T₂b² and T₂b³ strata. The material of the sliding mass mainly consists of T₂b³ strata, with small amounts from the T₂b² strata, and can be divided into two layers. The surface layer is mainly a khaki-brown soil-rock mixture, and the subsurface mainly consists of quaternary landslide accumulation cataclastic rock masses, most of which have retained the sequences of the original rock with layered structures. The lithologic characteristics of the bedrock have been determined to consist mainly of T₂b² strata with purplish-red interbedded fine-grained argillaceous siltstone and silty mudstone. The sliding surface has been found to be composed of khaki disintegrated rock and gravelly clay, generally with thicknesses ranging between 0.3 and 0.5 m. The surface can be observed to be chair-shaped and is basically consistent with the relief of the local terrain. As indicated by the results of previous studies, the slope composed of T₂b³ had easily become deformed as a result of the valley cutting and softening processes of the groundwater and reservoir water. Those progressive deformations generally developed in multiple stages. The field investigation results indicated that no displacements had occurred when the reservoir's water levels were in the elevation range of 145 to 175 m each year. However, the deep sliding mass was prone to failure if the slide masses with T₂b³ strata were destroyed due to the lowering of reservoir water levels. Therefore, the interfaces of the T₂b² and T₂b³ strata are considered to have been the major components of the Zhaoshuling Landslide, and their failure mechanism and processes were explored in this study. In order to simplify this study's model, the deep T₂b² strata were treated as a slide bed. Details of the Adopted Apparatus and Instrumentation In the present study, a large gravity model test system was constructed, as shown in Figure 4. The test system was manufactured and operated by the Key Laboratory of Geological Hazards on Three Gorges Reservoir Area of the Ministry of Education, China Three Gorges University, Yichang, China. The test system consisted of a hydraulic control lifting system; reservoir water level control system; fixed-head water supply and drainage system; artificial rainfall system; and an observation and data acquisition system, as detailed in Figure 5. The main purposes of this study's model tests were to solve the following questions: 1.
How the Zhaoshuling Landslide event could have been triggered, with deformations and failure occurring, as a result of rainfall and the rise and fall of reservoir water levels; and 2. Whether or not the deformations and failure of the upper sliding body had been affected by the undulating state of the lower sliding surface. It was believed that the answers to the aforementioned questions could be achieved using the monitoring and observational data of the displacements and pore water pressure levels at various locations in this study's model during different time periods. Law of Similitude In the present research investigation, considering the size limitations of the model box, the physical experiments were generally scaled down. The physical parameters at the prototype scale could be correlated with those at the model scale using the similitude ratios C_q = q_p/q_m, where C_q represents the similitude ratio; q indicates the corresponding parameter; and the subscripts p and m denote the prototype and the model, respectively. During the modeling processes, the parameters involved the dimension l; density ρ; acceleration of gravity g; cohesion c; internal friction angle φ; Young's Modulus E; Poisson's Ratio µ; stress σ; strain ε; displacement u; permeability coefficient k; time t; velocity ν; suction s; moisture content θ; rainfall intensity q; and lateral pressure p. Therefore, following the π-Theorem, all of the aforementioned parameters that correlated with the landslide must satisfy a dimensionally homogeneous relation, Equation (1), among l, ρ, g, c, φ, E, µ, σ, ε, u, k, t, ν, s, θ, q, and p, where φ, µ, ε, and θ are the non-dimensional parameters. Because the equation is dimensionally complete, its solution can be expressed in terms of a chosen number of independent scaling products (π-terms). Therefore, by choosing l, g, and ρ as the independent parameters, Equation (1) could be rewritten in terms of π-terms formed from those three parameters. Then, if the similitude ratios of C_φ, C_ρ, and C_g are set equal to unity, and the similitude ratios of C_l, C_c, C_σ, and C_p are assigned as n, the other scaling parameters can be derived accordingly (the derived relations are sketched below). In the current study, the Zhaoshuling Landslide was simplified into a 2D model for the model testing process. The size of the model was scaled down to 1/400 of the full scale of the Zhaoshuling Landslide due to the limitations of the model box. Physical Parameters of the Model's Similar Materials The model testing processes were required to reproduce the characteristics of the prototype slope, particularly for such progressive failures as the overall shear failure. The main issue was to simulate the deformation mechanism with similar materials. In this study, normal sand, barite powder, iron powder, glycerin, bentonite, and ordinary Portland cement were used for the mixing of the similar materials. Among those, the barite powder and iron powder were used to improve the apparent density. Then, glycerol and bentonite were added as binders. Ordinary Portland cement was used to improve the water resistance of the similar materials. The slide bed was simulated using barite powder, cement, and gypsum, along with water. The slip soil was composed of polyvinyl chloride film material. The physical and mechanical parameters of the prototype slope were obtained from physical and mechanical tests. The parameters of the model slip mass were obtained from the similarity theory, and the materials were then created by means of the mix proportion tests.
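The derived scaling relations referred to above did not survive the extraction of the original equations. The following is a minimal reconstruction, assuming the standard single-gravity similitude with C_ρ = C_g = 1 and a geometric ratio C_l = n; the exact π-terms chosen by the authors may differ.

```latex
% Sketch of the scaling relations implied by the text (assumption: 1g
% similitude with C_rho = C_g = 1 and geometric ratio C_l = n = 400;
% the authors' exact pi-terms are not recoverable from the extracted text).
\[
  C_q = \frac{q_p}{q_m}, \qquad
  C_\varphi = C_\mu = C_\varepsilon = C_\theta = 1, \qquad
  C_\rho = C_g = 1, \qquad C_l = n,
\]
\[
  C_\sigma = C_c = C_E = C_s = C_p = C_\rho\, C_g\, C_l = n, \qquad
  C_u = C_l = n,
\]
\[
  C_t = \sqrt{\frac{C_l}{C_g}} = \sqrt{n}, \qquad
  C_\nu = C_k = \frac{C_l}{C_t} = \sqrt{n}
  \quad\text{(rainfall intensity scales like a velocity).}
\]
```

With n = 400, these ratios are consistent with the rates quoted in the test plan: a prototype drawdown of 2.0 m/d corresponds to roughly 4.2 mm/h in the model, a rainfall intensity of 200 mm/d to roughly 0.42 mm/h, and a 3-day rainstorm to 3.6 h of model time.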
Direct shear tests between the slide bed and the geomembrane were also conducted in the laboratory to test the ability of these interfaces to simulate the slip soil. The results show that the test parameters obtained under the final mix ratio can meet the requirements of the actual landslide model test. The physical and mechanical properties of the similar materials used in the model tests are presented in Table 1. Test Plan for the Large-Scaled Model Testing Processes The slope profile and interface locations were determined prior to the backfilling by drawing the slope model contours on the glass model box. Then, prior to placing the soil sample into the simulation box, the glass walls were lubricated in order to reduce the frictional effects of the sidewalls. After laying and fixing the PVC film on the sliding bed, the mixed materials were placed in the simulation box and compacted for every 2 cm of lift in order to achieve the target density. Then, after the compaction process was completed, the slope was cut according to the contour line. Throughout this study's testing procedures, displacement meters, pore pressure sensors, and soil pressure sensors were set into the large-scaled models, as presented in Figure 6. The details of the instrumentation are summarized in Table 2. All of the sensors were placed uniformly in the toe, middle, and rear zones. However, due to the small thickness of the sliding body in this study's model tests, the sensors were all arranged on the sliding surface, and the sensors in the same section were arranged in the same row. In addition, in order to record the effects of the sliding surface geometry on the propagation of the landslide, one group of sensors (No. 2) was positioned on the concave part of the chair-shaped surface. Each instrument was calibrated prior to installation. All of the instruments inside the slope model were installed during the placement of the mixed materials. Furthermore, all of the electrical instruments were connected to a data-logger for the convenience of automatic data recording, and each of the testing processes was video-taped. Figure 7 illustrates the construction of the physical simulation models. In view of the set goals of this study's experiments, two model tests were conducted using different experimental processes. The testing procedures are listed in Table 3. In accordance with the Three Gorges Reservoir operating program, the reservoir water levels were known to fluctuate between 145 m and 175 m. Additionally, considering the similarity relationship, the rise and fall rate of the water level in the model was determined to be 4.2 mm/h, which corresponded to a real speed of 2.0 m/d. Therefore, it was believed that the experimental processes could accurately and realistically reflect the actual conditions. (1) A flow control device was used to control the flow into the reservoir from the initial level at an elevation of 36.25 cm (corresponding to 145 m of real elevation) to the normal level at an elevation of 43.75 cm (corresponding to 175 m of real elevation) during an 18 h time period. The reservoir water level was maintained at 175 m for 1 h. (2) The water level control valve was opened in order to discharge water from the reservoir. Then, the water level outside the slope model was decreased from a 43.75 cm elevation to a 36.25 cm elevation within an 18 h period. Finally, the water level of the reservoir at that elevation was maintained.
(2) When the water level control valve was opened, a sprinkler device began operation at the same time, which simulated a rainstorm with a rainfall intensity of 0.42 mm/h (corresponding to 200 mm/d of real intensity). The control duration was set as 3.6 h (corresponding to 3 days of real time). The water level drop rate was the same as that of Model 1, and finally the reservoir water level was maintained at 145 m. Pore Water Pressure and Soil Pressure Responses to Rises in Water Levels The pore water pressure levels measured in this study by the six piezometers are shown in Figure 8. The piezometers (labelled P-1, P-2, P-3, P-4, P-5, and P-6) were installed at different elevations along the sliding surface. At the beginning of the experimental process, it was observed that all of the pore water pressure sensors had displayed no responses to the rising water levels when the valve was switched on, allowing the water flow into the slope. However, when the reservoir water level had risen to 38.13 cm (corresponding to 152.5 m of real elevation), the P-1 piezometer had displayed a response. Following that, the P-2 piezometer located at an elevation of 30 cm was observed to respond to the reservoir water level changes at approximately five hours after the start of the experiment. However, there were no changes during the rising of the reservoir water levels according to the measurement results of the P-3, P-4, P-5, and P-6 piezometers located in the middle and rear areas of the landslide site. The pore water pressure levels of the P-1 and P-2 piezometers were confirmed to have increased linearly with the rising of the reservoir water levels. Figure 9 shows the total stress values measured by the six soil pressure sensors installed in the slope model. The changes in total soil pressure levels were in response to the rising water levels. In addition, the slope seemed to have become more uniform as the water levels on the slope rose. Moreover, the total soil pressure seems to have been influenced by the water pressure levels, since the total soil pressure also increased with the rising water levels.
The soil pressure sensors EP-1 and EP-2 presented a similar general trend during the increase in water levels. For example, the total stress recorded by the EP-1 sensor increased from an initial value of 0.32 kPa to a final value of 1.94 kPa. Additionally, there was an incremental change of 1.62 kPa observed within 18 h following the commencement of this study's experiment. The total stress recorded by the EP-2 sensor increased from an initial value of 0.62 kPa to a final value of 1.18 kPa, with an incremental change of 0.56 kPa observed as the water level rose within the aforementioned 18 h period. However, the total stress recorded by the EP-3, EP-4, EP-5, and EP-6 sensors, which were located in the middle and rear zones of the slope model, was not found to change. Pore Water and Soil Pressure Level Changes in Response to the Lowering of the Water Levels Figure 10 details the pore water pressure responses with the water level changes recorded by the piezometers at different elevations. It can be seen in the figure that the pore water pressure levels of the P-1 and P-2 piezometers at the elevations of 0.2 m (80 m) and 0.3 m (120 m), respectively, showed a similar general trend during the lowering of the water levels. Furthermore, there was a small delay observed in the pore water pressure relative to the lowered water levels of the model slope. When the water levels of the slope were lowered by 0.075 m, the pore water pressure recorded by the P-1 and P-2 piezometers had decreased by 1.77 kPa and 1.18 kPa, respectively. However, it was found that the pore water pressure levels recorded by the other sensors located at the middle and rear zones of the slopes had displayed little change. Figure 11 presents the results of the soil pressure levels measured by the six soil pressure sensors in response to the lowering water levels.
The total stress of the soil pressure of the EP-1 sensor, which was located at 0.2 m in the slope, decreased from an initial value of 1.94 kPa to 0.17 kPa. In regard to the EP-2 soil pressure sensor, which was located higher than the EP-1 sensor, it was observed that the pressure had decreased from an initial value of 1.18 kPa to 0.07 kPa. Therefore, the decreasing trend of the total stress was found to be the same as that of the pore water pressure. In summary, by referring to the measurement results of the different displacement sensors, no deformations were observed to have occurred during the aforementioned processes of this study's model, and no cracks had been observed on the slide surface, as shown in Figure 12. Visual Observations during the Processes In order to simulate the coupling effects of the water level fluctuations and rainfall events on the investigated landslide, another identical model was constructed. Similar to the first model, the same rising and lowering of the reservoir water levels were conducted in the second model during the reservoir impounding and water level lowering processes. When the water level lowering rate was approximately 4.2 mm/h, a rainfall simulation device was activated. The rainfall occurred with an intensity of 0.42 mm/h, which was similar to the rainfall intensity (200 mm/d) of the prototype. The rainfall duration time was controlled as 3.6 h, which corresponded to three days of real time. A video camera was mounted in front of the slope in order to record the failure initiation and subsequent movements during the water level lowering processes. The slope profiles associated with the displacements before and after failure were also recorded.
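As a quick check on the model rates just quoted, the following minimal Python sketch converts the prototype quantities to model scale using the assumed 1/400 geometric ratio and the square-root-of-n time ratio from the reconstruction above; the helper names are illustrative and do not come from the original paper.

```python
import math

# Assumed similitude ratios (see the reconstructed relations above):
# lengths scale by n, times by sqrt(n), so velocities and rainfall
# intensities (length/time) also scale by sqrt(n).
N_GEOMETRIC = 400.0
N_TIME = math.sqrt(N_GEOMETRIC)      # = 20
N_VELOCITY = N_GEOMETRIC / N_TIME    # = 20

def to_model_rate(prototype_mm_per_day: float) -> float:
    """Convert a prototype rate in mm/d to a model rate in mm/h."""
    return prototype_mm_per_day / N_VELOCITY / 24.0

def to_model_hours(prototype_days: float) -> float:
    """Convert a prototype duration in days to model hours."""
    return prototype_days * 24.0 / N_TIME

# Drawdown of 2.0 m/d (2000 mm/d) -> about 4.2 mm/h in the model
print(round(to_model_rate(2000.0), 2))   # 4.17
# Rainfall of 200 mm/d -> about 0.42 mm/h in the model
print(round(to_model_rate(200.0), 2))    # 0.42
# A 3-day rainstorm -> 3.6 h of model time
print(round(to_model_hours(3.0), 1))     # 3.6
```

The agreement with the 4.2 mm/h, 0.42 mm/h, and 3.6 h figures in the text suggests that the tests were indeed run with velocity and time ratios of sqrt(400) = 20.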
The failure process of the slope model is systematically shown in Figure 13a-c. The failure mechanism observed and documented in this study's large-scaled slope model experiment was apparently complex, and the following phenomena were observed during the water level lowering and rainfall simulation processes. The lowering water level, combined with heavy rainfall, was observed to initiate the formation of a transverse tension crack in the middle of the slope model. The crack occurred in the middle of the slide mass at approximately the 30 min point following the commencement of the water lowering and rainfall simulations, as shown in Figure 13a. The length and breadth of the tension crack increased with the lowering water levels combined with the rainfall simulations. The length of the tension crack increased as the water levels reached the lower parts of the slopes, as illustrated in Figure 13b,c. Then, obvious sliding deformations were found to have immediately occurred in the middle and toe sections of the slope model. The entire deformation zone of the slope model was bounded at an elevation of approximately 300 m. However, the slope material behind the deformation zone appeared to remain stable. As can be seen in the figures, the obvious deformation region during the model tests was located in the lower zone at an elevation of approximately 300 m.
The displacement data shown in Figure 14 are expressed in millimeters. It should be mentioned that, due to sensor failure, readings were not collected by the D-1 and D-6 sensors, and no results for them are shown in Figure 14. However, based on the monitoring data recorded by the other four displacement sensors, the deformations were found to be small in the areas of the D-3, D-4, and D-5 sensors, which were located in the middle and rear sections of the model, during the entire experimental process, whereas the deformations at the toe had increased sharply to 14 mm following the rainfall simulations, as shown in Figure 14. Figure 15 presents the pore water pressure levels measured by the six pore pressure gauges mounted in the slope model. The changes in pore water pressure were in response to the lowering of the water levels combined with the rainfall simulations. The variations in the pore water pressure levels were divided into three types. The pore water pressure levels recorded by the P-1, P-4, and P-5 piezometers were observed to have gradually increased from initial pore water pressure levels of 1.94 kPa, 0.07 kPa, and 0.16 kPa to 3.44 kPa, 1.47 kPa, and 1.33 kPa, respectively. The incremental changes of 1.50 kPa, 1.40 kPa, and 1.17 kPa occurred within an 18 h timeframe after the initiation of the experiment. The pore water pressure levels recorded by the P-3 and P-6 piezometers were found to have changed little during the entire experimental process, with increases from initial pore water pressure levels of 0.20 kPa and 0.36 kPa to 0.78 kPa and 0.63 kPa, respectively. The incremental changes of 0.58 kPa and 0.27 kPa had occurred within 18 h following the initiation of the experiment. However, the pore water pressure levels recorded by the P-2 piezometer were found to have increased sharply from an initial pore water pressure of 1.18 kPa to 4.27 kPa, an incremental change of 3.09 kPa.
Figure 16 details the soil pressure levels measured by the six soil pressure gauges installed in the slope model. The changes in soil pressure were also in response to the lowering water levels combined with the rainfall simulations. The soil pressure levels measured by the P-1 and P-5 piezometers showed similar results within 18 h following the commencement of the experiment, with gradual increases observed from the initial pressure levels of 1.94 kPa and 0.16 kPa to 3.40 kPa and 1.14 kPa, respectively. The soil pressure levels measured by the P-3 and P-6 piezometers were found to display only minimal changes during the entire experimental process, with increases from initial soil pressure levels of 0.20 kPa and 0.36 kPa to 0.75 kPa and 0.63 kPa, respectively. The data recorded by the P-4 piezometer fluctuated during the final 18 h, with an incremental change of 0.80 kPa. However, the soil pressure levels recorded by the P-2 piezometer were observed to increase sharply from an initial soil pressure of 1.18 kPa to 3.67 kPa; therefore, an incremental change of 2.48 kPa had occurred. Comparison of the Two Models This study's comparison of Model-1 and Model-2 revealed the following: (1) The rise and fall of reservoir water levels had little effect on the middle and rear sections of the landslide site. (2) The landslide mass was stable during the reservoir impoundment and discharge processes. (3) The displacements increased, and finally failure occurred, when the water levels rapidly decreased, combined with the effects of rainfall. It was observed that, differing from Model-1, the soil and pore pressure levels in all parts of Model-2 displayed a tendency to increase at first and then decrease following the simulated rainfall, as shown in Figures 15 and 16. It was determined that this was due to the fact that the rain had infiltrated into the slope through the pores and cracks of the landslide model, which resulted in increases in the pore water and soil pressure levels, as well as increased weight of the rock and soil. Then, after the rainfall had ceased, the groundwater inside the slope was discharged into the reservoir, resulting in decreased groundwater levels and gradual decreases in the pore water pressure. The rises in the soil pressure levels during the early part of the process also indicated stress concentrations and accumulations of strain energy in the slope.
The maximum soil pressure was the result of the coupling of the reservoir water levels and the effects of rainfall. The continuous decreases in the soil pressure during the latter part of the process were determined to be due to the release of strain energy inside the slope and the redistribution of the stress following the deformations of the rock and soil masses within the landslide. It can be seen in Figure 15 that, during the lowering of the water level combined with the rainfall process, the pore water pressure recorded by the P-2 piezometer (located at the toe of the landslide) had increased rapidly. In addition, the rising rate of the pore water pressure was higher than that recorded by the sensors in other parts of the landslide site. The piezometers located above the bedrock and near the middle and rear zones of the slope showed pressure rises of between approximately 0.27 and 1.50 kPa following the rainfall simulations. However, the pore water pressure recorded by the piezometer installed at the toe of the slope showed an increase of 3.09 kPa in response to the same rainfall simulations. Influences of the Chair-Shaped Bedrock Surface on the Groundwater Levels as per the Monitoring Data Pore phreatic water is the main form of groundwater in the Zhaoshuling Landslide area. Its recharge occurs at the upper part of the landslide as a result of bedrock fissures and atmospheric precipitation, and it eventually discharges into the Yangtze River following infiltration through the landslide body. This was determined through the data of the borehole monitoring of the groundwater levels which had been performed in the main part of the landslide site. In order to study the influencing effects of the chair-shaped surface on groundwater levels under the conditions of rainfall and reservoir water level changes in the Zhaoshuling Landslide area, the data from boreholes Zhao-1 and Zhao-2, located in different parts of the landslide site, were selected for further analysis in this study. The borehole locations are also shown in Figure 2. The groundwater level monitoring processes at the Zhao-1 and Zhao-2 boreholes have been conducted since May of 2006, and the monitoring period ranged from May of 2006 to November of 2012. Figure 17 shows the effects of the rainfall and reservoir water fluctuations on the changes in groundwater levels. Borehole Zhao-1 was located in the middle part of the landslide site and had a higher elevation and groundwater level than borehole Zhao-2, which was located at the toe of the landslide site. Therefore, the groundwater at the Zhao-1 borehole was less affected by the reservoir water levels than the Zhao-2 borehole. The changes in groundwater levels in this borehole were dominated by the rainfall effects. However, the amount of surface runoff was larger than that of the infiltration during rainfall events in that area, and the water level fluctuations at borehole Zhao-1 were observed to be small. Borehole Zhao-2 was located at the toe of the landslide site. During the reservoir water fluctuations and the rainy seasons of the study period (May of 2006 to November of 2012), the changes in groundwater levels at the Zhao-2 borehole corresponded to the variations in the rainfall and reservoir water levels. Figure 17 shows that the fluctuations in the groundwater levels were consistent with those of the reservoir water levels and had also lagged behind the reservoir water levels.
Figure 18 presents the effects of the rainfall on the changes in the rates of the groundwater levels. It was found that, under the same precipitation conditions, the change rates of the groundwater levels in the Zhao-2 borehole were greater than those of the Zhao-1 borehole during rainy seasons. These results indicated that the changes in the groundwater levels in the Zhao-2 borehole were more easily affected by rainfall than those of the Zhao-1 borehole. The test results were found to be in accordance with the monitoring data. The higher groundwater level changes in the Zhao-2 borehole, which was in the same position as P-2 in this study's model, indicated that poor drainage conditions existed. Conceptual Model of a Slope Failure with a Chair-Shaped Bedrock Surface In the current research investigation, a striking feature was observed after examining the cross-sections of the landslide site. That is to say, the rockhead profile was found to be chair shaped. The E-W-striking Guandukou Syncline and Badong Fault were found to control the tectonic framework and geomorphologic characteristics of the Zhaoshuling Landslide. The Guandukou Syncline was determined to be composed of multiple secondary folds, which are mainly asymmetric, and box folds forming a chair-shaped bedrock surface. The chair-shaped bedrock surface was observed to be generally parallel to the slope surface. However, at the toe of the slope, the interfaces become rather flat or slightly depressed, or even upside down. It was found that the chair-shaped bedrock formation at the toe of the slope had significant adverse effects on the mechanisms of the slope stability, as detailed in Figure 19.
The significant pressure build-up and the rate of build-up were dependent on the rain intensity, the elevation of the recharge zone, the different properties of the upper and lower strata, and the rate at which the groundwater could escape from the toe of the slope. With the same conditions of rain intensity and recharge zone elevation, the properties of the strata and the groundwater escape conditions were very important to the pore water pressure build-up. For example, if the rainfall on the exposed slope surface occurred at a rate less than the permeability of the slope materials, and the conditions at the toe of the slope were favorable for the groundwater to escape, then the water may percolate vertically downwards into the slope without causing large positive pore water pressure changes. It was found that the properties of the upper and lower strata were quite different at the Zhaoshuling Landslide site. The bedrock was T2b2 strata and had a permeability much lower than that of the slope material. The infiltrating water may have formed a seepage layer on the bedrock surface, which potentially followed the contours of the bedrock. Therefore, any significant change in the inclination of the bedrock may have resulted in localized changes in the hydraulic gradient. However, the bedrock of the Zhaoshuling Landslide was chair-shaped, with a particularly significant change in gradient in the toe region. Therefore, since the bedrock surface was slightly depressed and even upside down at the toe, the seepage flow was not smooth. A localized zone of high transient pore water pressure may have been created within the slope material, which could have potentially reduced the effective stress of the soil body. This was arguably the most critical region of the slope from a stress perspective. Therefore, it was believed that the investigated landslide event may have begun with a local slip under rainfall infiltration and reservoir water level change conditions. Conclusions It has been determined that rainfall and reservoir water fluctuations were significant factors inducing the failure of the Zhaoshuling Landslide. The shape of the bedrock surface, and the pore water and total soil pressure levels in the landslide area, were found to be sensitive to the presence of the water during the water lowering process and rainfall, which directly enhanced the displacements and may have even initiated the landslide failure. The following conclusions were drawn in the present research investigation: 1. The results obtained from this study's physical model tests indicated that the Zhaoshuling Landslide was stable when the reservoir water levels were fluctuating between 145 m and 175 m. However, rainstorm events combined with the quick decrease in the reservoir water levels may have caused the toe of the landslide mass to fail, and the middle of the landslide mass to suffer large displacements.
2022-03-25T15:22:47.620Z
2022-03-21T00:00:00.000
{ "year": 2022, "sha1": "98d5c429d755c3391dd6e2bc50cf0082df862304", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4441/14/6/984/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e175aaefbf7576b0a1e0fccdad4df89b4c9f9349", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [] }
118651568
pes2o/s2orc
v3-fos-license
Is the Universe a Quantum System? In order to relate the probabilistic predictions of quantum theory uniquely to measurement results, one has to conceive of an ensemble of identically prepared copies of the quantum system under study. Since the universe is the total domain of physical experience, it cannot be copied, not even in a thought experiment. Therefore, a quantum state of the whole universe can never be made accessible to empirical test. Hence the existence of such a state is only a metaphysical idea. Despite prominent claims to the contrary, recent developments in the quantum-interpretation debate do not invalidate this conclusion. I. INTRODUCTION A hundred years after Planck's quantum hypothesis, quantum theory seems to be universally valid. While it has its roots in the atomic and subatomic domain, the stability of matter, diamagnetism and superconductivity are examples of quantum effects in the macroscopic domain. A fundamental limitation on the applicability of quantum theory has not been accepted so far. The formalism of general quantum theory allows one to incorporate into a single quantum description additional degrees of freedom as a subsystem, for example a heat bath or a detector in a laboratory. The formalism contains no fundamental obstacle to the description of an arbitrary compound of physical microsystems. Thus it is tempting to think even of the universe as a whole in terms of quantum theory [1]. Of course, nobody can explicitly specify a quantum state of the universe [2]. Nevertheless, assuming its existence in principle is enough to construct theoretical models of a quantum universe and to study their predictions, at least at a formal level. Demanding research programs, such as quantum gravity and quantum cosmology [3][4][5], partially rest on this idea. In spite of these efforts, in this article we will demonstrate that the very concept of a "quantum state of the universe" is doomed to failure. While this conclusion is not entirely new [6], we feel that it has not received the attention it deserves. Indeed, publications employing a quantum state of the universe, in whatever specification, continue to appear. Therefore, our goal here is to reconsider this concept and clearly present arguments decisive for its rejection. In this way we also declare our position in the recently revived debate on the meaning of quantum theory. The logical structure of our reasoning is as follows. Taking (U), (F) and (MI) for granted, it follows that (QU) is excluded. Here (U) stands for a definition of the concept of "universe", (F) for the principle that the interpretation of any physical theory has to rely on facts, (MI) for a minimal interpretation of quantum theory, and (QU) for the claim that there is a quantum state of the universe. Those who tend to escape our conclusion have to decide which of our assumptions they regard as closest to dispensable.
The article is organized as follows. In section II we specify explicitly (U), (F) and (MI). Section III contains our reasoning against (QU). In the last section, we discuss and refute possible objections in an interplay of questions and answers. II. PREREQUISITES We start with the specification of (U), which defines the most extended object of physical description. (U) Definition of the Universe. The (physical) universe is the union of all objects and phenomena which are empirically accessible in principle. Two explanations are in order. 1. An object or phenomenon is empirically accessible only if it is intersubjectively perceptible and communicable. Here it is irrelevant whether a living being is actually observing the object or phenomenon. All that matters is that an observation is possible at any time. Empirical accessibility may well require sophisticated experimental manipulations or technical means for observation. 2. "In principle" means "supposing perfect measurement apparatuses". What is to be considered as empirically accessible in principle does not depend on the current state of measurement-device technique. In this sense, cosmic background radiation was "empirically accessible in principle" already in antiquity. In contrast, if quantum theory is true, simultaneous values of mutually incompatible observables are not empirically accessible in principle. To give some motivation for the principle (F), we first recall the purpose of interpretations of physical theories in general. The goal of an interpretation is a unique relation between the mathematical formalism of the theory and the objects and phenomena which are to be described. Roughly speaking, an interpretation of a physical theory is a set of mapping principles relating certain elements of the mathematical formalism to certain elements of physical reality. If one knows the objects and phenomena to be described, the interpretation shows how to apply the formalism. Vice versa, if one knows the formalism, the interpretation shows which objects and phenomena the theory is able to describe. (F) Principle of Relation to Facts. Every interpretation of a physical theory has to relate certain elements of the mathematical formalism of the theory to conceivable facts. Some explanations are in order. 1. Here a fact is only what is empirically accessible in principle in a single measurement on an individual system. 2. Conceivable facts are facts that may but need not exist in reality, supposing the theory is exactly valid. They are the kind of facts considered in thought experiments. We note that in some situations it is empirically clear that they do not exist in reality, as is the case in counterfactual reasoning [7]. 3. The purpose of the concept of conceivable facts is not to test the empirical adequacy of the theory (for that the real facts are decisive), but only to specify the testable statements of the theory. Strictly speaking, the theory can only be tested empirically when its interpretation has been specified by means of conceivable facts. In this sense, interpretation is a precondition of testability. 4. Principle (F) excludes an understanding of "interpretation" in the broad sense of attributing "meaning" to a theoretical concept by means of free human imagination regardless of any empirical relevance. Speculative imagination in physics, useful as it is for the invention of new hypotheses, has to pay tribute to the methodological basis of theoretical concepts, which is epitomized in principle (F).
Examples of conceivable facts are the values of all observables in classical mechanics. A probability density on phase space is not a conceivable fact, since it incorporates some ignorance about the classical system under consideration. However, the point in phase space that describes the "real" state of the system represents a conceivable fact, even if it is not precisely known (which is the typical situation in classical statistical mechanics). In quantum theory, all internal parameters that characterize a quantum system (such as mass, spin, charge etc.) stand for conceivable facts. In contrast, a value of a quantum observable (such as position, energy, orbital angular momentum etc.) is never assigned as a fact to a system in all of its states that show quantum uncertainty for this observable, namely in its non-eigenstates. Also, it is not a conceivable fact that a certain Schrödinger wave function (in other words, pure state) is given, since this function cannot be tested by a single measurement on an individual system. Hidden variables would be connected with conceivable facts (hence the efforts to introduce them), but they are not part of the quantum formalism. In any case, results of measurements performed on an individual quantum system are always conceivable facts. Our third premise is the set of basic rules of how to apply quantum theory. These rules are almost uncontroversial among the proponents of different quantum interpretations. (MI) Minimal Interpretation of Quantum Theory. Every state of a given quantum system yields probabilistic predictions for all observables that can be measured on this system. More precisely, let the quantum state W be represented by a positive trace-one operator W acting on some complex separable Hilbert space H, and the observable A by a positive-operator-valued measure E_A on a suitable set Ω with measurable subsets X ⊆ Ω. Then the trace Tr[W E_A(X)] is the probability of finding a result in X when A is measured on the system in the state W. Some comments apply. 1. Remarkably, the possible outcomes of measurements in the sense of experimental physics are related to the notions "measure" and "measurable subset" in the sense of mathematical measure theory (see, for example [8]). The real line R or suitable subsets of R are typical examples of the set Ω of the possible values of A. 2. The class of quantum observables represented by positive-operator-valued measures extends the more familiar class of observables represented by self-adjoint operators in a substantial way [9]. In the special case that A is represented by a self-adjoint operator A on H, the operator E_A(X) is simply given by the spectral projection of A associated with X ⊆ Ω ⊆ R, in symbols, E_A(X) = I_X(A). Here I_X is the indicator function of the set X. 3. The essence of (MI) does not depend on the formal frame in which quantum theory is formulated. Observables and states may be represented, as above, by operators on a Hilbert space, or they may, more abstractly, be postulated as elements of a suitable algebra and as positive linear functionals on this algebra, respectively. The choice of a specific mathematical axiomatization is irrelevant to the subsequent reasoning. (MI) does not ascribe a value of A to the quantum system before A was measured, not even in the simple case that A is represented by a self-adjoint operator with a purely discrete spectrum. After each single measurement, however, a measurement result must be assigned to the individual system as a fact.
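To make the trace formula in (MI) concrete, the following minimal Python sketch computes Tr[W E_A(X)] for an illustrative qubit state and a two-outcome observable, and compares it with the relative frequency of simulated single-measurement results; the state, the observable, and all numerical values are assumptions for illustration only and are not taken from the original article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative qubit state W (a mixed, positive, trace-one operator).
W = np.array([[0.7, 0.2],
              [0.2, 0.3]], dtype=complex)

# A two-outcome observable given by a projection-valued measure:
# E_A({+1}) projects onto |+>, E_A({-1}) onto |->.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
E_plus = np.outer(plus, plus.conj())
E_minus = np.eye(2) - E_plus

# Born-rule probability of the result +1: Tr[W E_A({+1})].
p_plus = np.trace(W @ E_plus).real
print(f"Tr[W E_A(+1)] = {p_plus:.3f}")             # 0.700

# An "ensemble of identically prepared copies": each run yields one fact.
n_runs = 100_000
facts = rng.random(n_runs) < p_plus
print(f"relative frequency = {facts.mean():.3f}")  # approaches 0.700
```

Each entry of the facts array plays the role of one single-measurement result; only the histogram of such entries, never a single run, can be compared with the trace formula.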
The relative frequency of the occurrence of such facts in suitable experiments is just what the probability Tr[W E_A(X)] predicts. Indeed, the only way to interpret probabilities in physics is to compare them with relative frequencies, that is, to interpret them statistically. Details are given in the next section. III. REASONING Our reasoning against a quantum state of the universe goes as follows. For the physical interpretation of a quantum state in accordance with (F) and (MI), it has to be conceivable in principle to produce an (infinite) collection of measurement results as facts for every observable, to "read them off" and to determine their relative frequencies. The interpretation of the quantum probability prediction about the observable A in state W then implies equating the probability Tr[W E_A(X)] with the relative frequency of measurement results in X, for all X ⊆ Ω. Actually, the empirical significance of W can be illustrated completely by the histograms of such collections of results for all observables that are conceivably being measured in the state W. Every single measurement of an observable A on any quantum system produces just one fact. For the empirical significance of the quantum state W it is irrelevant whether different quantum systems of the same type are prepared at the same time into the state W, or the same quantum system is repeatedly prepared into the state W at different times. In either case, an ensemble of identically prepared quantum systems leads in the end to a collection of facts with the same histogram. It is this collection of facts given after the measurements that serves to interpret the corresponding probability prediction and, thereby, the quantum state. (MI) obeys (F) exactly in this way. In order to consider the physical universe as a genuine quantum system, one either had to prepare it arbitrarily often into the same state, or one had to prepare arbitrarily many universes of the same type into the same state. In both cases, it is inconceivable in principle to register relative frequencies of facts after measurement, supposing the total information about the universe is encoded in its state. In the first case, the universe cannot remain in the same state as before a measurement and, at the same time, exhibit the result of this measurement. In the second case, "reading off" relative frequencies contradicts (U): A universe consisting, by definition, of all physical objects and phenomena cannot be compared with additional facts from "parallel universes", that is, from "outside". Consequently, a collection of measurement results (in the sense explained above) for the system "universe" cannot consistently be conceived of. Therefore, taking (F) and (MI) for granted, the concept "quantum state of the universe" is lacking a sound physical interpretation. Roughly speaking, any proposal to provide this concept with empirical significance is ruled out by the probabilistic character of quantum theory. It is not enough to refute merely the more bizarre proposals (such as splitting the apparatus [10]). There is no way to appeal to a "quantum state of the universe" within the methodological principles of physics. We state some obvious but far-reaching consequences of this conclusion: 1. There has never been a "quantum state of the universe" in the past. The origin of the physical universe cannot be explained from a quantum state alone, neither by amplitudes to appear from nothing [2] nor by a hypothetical tunnelling phenomenon [11,5].
This conclusion does not depend on whether the universe is open or closed, inflationary or not. There is, in principle, no exclusively quantum-theoretical cosmogenesis. 2. The physical universe as a whole is not subjected to a purely quantum-theoretical dynamics as was proposed in [12]. In this sense, there is no strict quantum cosmology [13]. 3. A "theory of everything" which aims at a description of all physical systems and their interactions [14] cannot rely exclusively upon quantum-theoretical basic concepts. There is no quantum theory of gravity with an interpretation which allows for a "quantum state of the universe". IV. DISCUSSION The reasoning presented in the last section may give rise to a number of interesting questions, which we are going to discuss now. In doing so, we want to anticipate and refute possible objections. Question 1: If the need for empirical accessibility is taken seriously, then some kind of experimental arrangement, called "apparatus" for short in the following, is indispensable. Does not every physical description of the universe necessarily comprise as part of the universe the apparatuses suitable to test this description? Isn't this problem even more fundamental than how to apply quantum theory to the universe as a whole? Isn't a classical state of the universe inconceivable as well? Answer 1: We abstract from all concrete measurement methods. We push idealization even so far as to neglect the material configuration of the apparatus completely. In this vein, one can relate a physical description to something empirically accessible in principle without explicitly paying attention to internal states of the apparatus or to reactions of the apparatus to the system of interest. This stage of idealization is well suited to find out which picture, or better, caricature, of physical reality a theory permits. Thus, a classical pure state of the universe is conceivable (and has indeed been conceived, as is well known, in the 19th century in the guise of the Laplacian demon). Our reasoning against a quantum state of the universe notably holds true for every probability prediction about the physical universe, be it of quantum origin or not. Consequently, there is also no classical mixed state of the universe, whence cosmology cannot rely on probability densities on phase space. Question 2: In contrast to classical physics, in quantum theory state transformations caused by apparatuses play a central role. How can one then justify establishing quantum descriptions without explicitly incorporating preparation and measurement apparatuses? Answer 2: All you need is (F). In order to interpret probability predictions physically, it is indispensable to consider collections of conceivable measurement results as facts. Usually these facts are read off the apparatuses, but every description of apparatuses going beyond the facts themselves may fall victim to our idealization. Question 3: Collections of measurement results can only be thought of as produced by repeated preparation and measurement. Isn't it, in view of such an ensemble interpretation [15], always (and not only for the universe as defined by (U)) impossible to assign a quantum state to an individual system? Answer 3: Whether or not a certain quantum state is given cannot be tested empirically in a single measurement on an individual system.
It is, however, not a priori meaningless to assign a certain quantum state to an individual system, as long as one knows the preparation apparatus (whose state is, notably, not part of the quantum description). If it is a legitimate thought experiment to check at an infinite ensemble into which quantum state a specific apparatus prepares, then it is also legitimate to ascribe this state to each and every individual system prepared by this apparatus. There is no fundamental problem with this for microsystems, but there is one for the universe. Question 4: The idealization relevant for interpretation extends so far as to make irrelevant the material configuration of apparatuses (Answer 1), as well as to legitimize thought experiments with infinite ensembles of quantum systems (Answer 3). Why is it then forbidden to imagine an infinite multitude of identically prepared quantum universes? Why should not different facts exist in different "parallel universes"? Answer 4: Abstraction and idealization in physics lead only to simplified descriptions of what is empirically accessible in principle. Because any view from outside the universe is inconceivable by the definition (U), a multitude of universes or a comparison between different universes remains forbidden, even if idealization is pushed to the extreme. It is legitimate to imagine an infinite ensemble of electrons, only because it is conceivable in principle to prepare many electrons (or one electron repeatedly) into the same state. For the universe, the situation is fundamentally different. Even if the material configuration of apparatuses is completely neglected, there remains a difference in their logical status: An apparatus for the observation of an electron is surely outside the electron, but an apparatus for the observation of the universe is surely not outside the universe. Question 5: The real structure of nature does not depend on definitions. Answer 4, however, seems to do so. Why are cosmological scenarios excluded which involve a multitude of universes, each being part of nature? "Universe" means "all embracing", but why should a physical universe as an object of cosmology be literally everything? Answer 5: One can, of course, give up (U) and use the word "universe" in a less embracing sense. But then, our reasoning and conclusion remain valid for what was originally meant by (U). Question 6: Real apparatuses consist of atoms, and atoms are undisputedly quantum systems. Why can one then rely on facts to interpret quantum states without describing the emergence of these facts within the conceptual frame of quantum theory? Doesn't the whole reasoning rest on an artificial opposition between quantum predictions and classical apparatuses due to over-idealization, and hence lack physical relevance? Answer 6: Indeed, the application of (MI) presupposes that a collection of facts comes out of every sequence of measurements. (MI) gives no hints on how these facts come into existence or on how their emergence could be described theoretically. This notorious "quantum measurement problem" [16] cannot be solved or avoided by explicitly taking into account the apparatus and the environment as quantum systems. In particular, purely quantum-dynamical theories of decoherence [17] do not explain the emergence of facts in single measurements, not even for all practical purposes. The idealization chosen here favours the sudden emergence of a definite fact in a spontaneous quantum event [18] once a measurement is carried out on a quantum system. 
From then on the fact persists. Conventional quantum theory expresses such an individual quantum event as a suitable state collapse. Encouraged by these facts and in the tradition of Niels Bohr, we insist that the classical description of apparatuses is a necessary independent input to every quantum description. By "classical" we do not refer to the laws of classical physics, but only to the applicability of classical logic to the facts presupposed by (MI). The relevance of these facts to interpretation is a direct consequence of (F). Question 7: The unsatisfactory special role of the apparatuses and the desire for a description of the universe as a closed quantum system have been two essential motivations for the development of the formalism of consistent quantum histories [19]. Hasn't the state concept lost its fundamental status in this modification of the quantum formalism, so that the reasoning presented above has become obsolete? Answer 7: Quantum probability goes without histories, mathematically [20] and physically. The formalism of consistent quantum histories is burdened with a fundamental freedom of choice of a consistent family or a framework [21]. Among the various imaginable quantum histories, there is no unique procedure to discriminate in a given physical situation between facts and non-facts. In the histories formalism it is not unambiguously expressible that one observable has an actual value due to the real experimental setup, while another (incompatible) one has not. This is the ultimate reason why the histories approach has been criticized repeatedly [22]. Independently of its applicability to the universe, the quantum-histories approach thus fails to satisfy principle (F). For this reason it lacks a sound physical interpretation. This is fatal to the whole approach, but it is far from being acknowledged by its adherents [23]. Finally, one could ask how to do cosmology at all in the era of quantum theory. We stress that this problem appears to be puzzling only through the dogma of the universal applicability of quantum theory. In accordance with Ludwig [6] and others, we suggest to drop this dogma. In the same way as the description of a quantummechanical microsystem requires classical apparatuses as a fundamental concept, facts could come into play in the description of the early universe and within grand-unification programs, as a fundamental concept apart from quantum uncertainty. We think that this dichotomy is unavoidable. Moreover, it is by no means evident that all physical systems must possess quantum states. In conclusion, we have shown that the universe as a whole cannot be ascribed a quantum state with a sound interpretation, irrespectively of specific cosmological models. Thus, it makes no sense to postulate such a state hypothetically and treat it like a very complicated quantity, about which one doesn't yet know enough. This conclusion should serve as an interpretational boundary condition for working out cosmological theories. Its enforcing character is based on conceptual and methodological rigor. This is a step beyond Occam's razor, which has so often been the main tool of heuristic argumentation against a multitude of universes: While the razor cuts off only what is physically legitimate but redundant, the idea of an ensemble of universes is at best metaphysical.
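As a purely illustrative aside (not part of the original argument), the statistical reading of Tr[W E_A(X)] that the reasoning above relies on can be made concrete for a single qubit: the Born-rule probability is compared with the relative frequency obtained from a finite, repeatedly prepared ensemble. The state, observable and sample size below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixed qubit state W: 70% |0><0| + 30% |+><+|   (an arbitrary illustrative choice)
ket0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
W = 0.7 * np.outer(ket0, ket0) + 0.3 * np.outer(plus, plus)

# Observable A = Pauli-Z; E_A(X) for X = {+1} is the projector onto |0>
E_X = np.outer(ket0, ket0)

# Born-rule prediction for the outcome set X
p_predicted = float(np.real(np.trace(W @ E_X)))

# "Collection of facts": N single measurements on identically prepared systems,
# each yielding exactly one result; the Born rule fixes the chance per measurement
N = 100_000
results_in_X = rng.random(N) < p_predicted
p_observed = results_in_X.mean()

print(f"Tr[W E_A(X)]       = {p_predicted:.4f}")
print(f"relative frequency = {p_observed:.4f}")
```

The comparison only makes sense because an ensemble of identically prepared qubits is conceivable in principle; the argument above is precisely that no such ensemble is conceivable for the universe as defined by (U).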
Potency of an inactivated influenza vaccine prepared from A/duck/Hokkaido/162/2013 (H2N1) against a challenge with A/swine/Missouri/2124514/2006 (H2N3) in mice H2N2 influenza virus caused a pandemic starting in 1957 but has not been detected in humans since 1968. Thus, most people are immunologically naive to viruses of the H2 subtype. In contrast, H2 influenza viruses are continually isolated from wild birds, and H2N3 viruses were isolated from pigs in 2006. H2 influenza viruses could cause a pandemic if re-introduced into humans. In the present study, a vaccine against H2 influenza was prepared as an effective control measure against a future human pandemic. A/duck/Hokkaido/162/2013 (H2N1), which showed broad antigenic cross-reactivity, was selected from the candidate H2 influenza viruses recently isolated from wild birds in Asian countries. Sufficient neutralizing antibodies against homologous and heterologous viruses were induced in mice after two subcutaneous injections of the inactivated whole virus particle vaccine. The inactivated vaccine induced protective immunity sufficient to reduce the impact of challenges with A/swine/Missouri/2124514/2006 (H2N3). This study demonstrates that the inactivated whole virus particle vaccine prepared from an influenza virus library would be useful against a future H2 influenza pandemic. H2N2 influenza virus was the causative agent of an influenza pandemic known as Asian flu, which started in 1957. More than one million deaths were reported worldwide. However, H2 influenza viruses have not been detected in the human population since 1968 in replacement of another pandemic influenza caused by H3N2 influenza viruses. In contrast, H2 avian influenza viruses have been continuously circulating in wild aquatic and sporadically isolated from domestic birds around the world [6,16,17,24,28,29]. In addition, H2N3 influenza viruses were isolated from pigs in 2006 in Missouri, U.S.A. [19]. These facts suggest that avian H2 influenza viruses may occasionally transmit to pigs and can be re-introduced into the human population in the future. Such an event could result in a pandemic because of the lack of acquired immunity against H2 influenza viruses in the current human population [27]. Therefore, vaccines against H2 influenza viruses are needed to prepare for a future human pandemic [20]. The HA genes of H2 influenza viruses are phylogenetically divided into North American and Eurasian lineages [26]. The H2N2 influenza viruses that caused Asian flu belong to the Eurasian lineage and the H2N3 influenza viruses that were isolated from pigs in 2006 belong to the North American lineage. The avian H2 influenza virus A/black duck/New Jersey/1580/1978 (H2N3) antigenically cross-reacts with H2 influenza viruses isolated from humans and birds before 1991 [5,14]. The H2N3 influenza viruses isolated from pigs show antigenic cross-reactivity with North American and Eurasian H2 avian influenza viruses [13]. However, information regarding the H2 influenza viruses recently isolated in Asia, particularly on the antigenicity of such viruses is limited. To prepare in case of H2 influenza virus transmission to the human population from animals, the characterization of genetic and antigenic properties of recent isolates, including viruses recently isolated from wild bird in Asia is greatly needed. 
Since 1996, we have conducted intensive surveillance of avian influenza in wild waterfowl in Hokkaido, Japan and Mongolia to monitor viruses that are maintained in the nesting lakes in Siberia and spread southward along with their migration in autumn. We reported the isolation of influenza viruses of various subtypes including H2 influenza viruses [7,10,29]. All viruses isolated in the surveillance study are stored in our influenza virus library (http://virusdb.czc.hokudai.ac.jp/). doi: 10.1292/jvms. Previous studies demonstrated that cold-adapted live vaccines generated by human and avian H2 influenza viruses induce effective immunity against challenge using parental strains in mouse and ferret models [2,3]. However, studies on the preparation of inactivated vaccine against H2 influenza are still limited. The aim of the present study is to evaluate the efficacy of an inactivated whole virus particle vaccine prepared from viruses recently isolated from wild birds in Asia based on its antigenicity, immunogenicity, and protective effects against challenge with swine H2 influenza virus in mice. (Table 1) were isolated from fecal samples of migratory ducks in our surveillance study [7,10]. All viruses used in the present study were propagated in 10-day-old embryonated chicken eggs at 35°C for 48 hr, and infectious allantoic fluids were stored at −80°C until use. Madin-Darby canine kidney (MDCK) cells were grown in minimum essential medium (MEM) (Nissui Pharmaceutical, Tokyo, Japan) supplemented with 10% inactivated calf serum and antibiotics and were used for titration of viral infectivity. Sequencing and phylogenetic analysis Viral RNA was extracted from the allantoic fluids of embryonated chicken eggs using TRIzol LS Reagent (Life Technologies, Carlsbad, CA, U.S.A.) and reverse-transcribed with the Uni 12 primer (5′-AGCAAAAGCAGG-3′) and M-MLV Reverse Transcriptase (Life Technologies) [8]. The full-length HA gene segment was amplified by polymerase chain reaction (PCR) using Ex-Taq (TaKaRa, Shiga, Japan) and gene-specific primer sets [8]. Direct sequencing of each gene segment was performed using the BigDye Terminator v3.1 Cycle Sequencing Kit (Life Technologies) and an auto-sequencer 3500 Genetic Analyzer (Life Technologies). Sequencing data were analyzed and aligned using Clustal W using GENETYX ® Network version 12 (Genetyx Co., Tokyo, Japan). The nucleotide sequences were phylogenetically analyzed by the maximum-likelihood (ML) method using MEGA 6.0 software (http://www.megasoftware.net/). Sequence data for H2 HA genes were compared with reference sequences selected and obtained from GenBank/EMBL/DDBJ. Antigenic analysis To analyze the antigenic properties of H2 influenza viruses, the hemagglutination inhibition (HI) test was performed using hyperimmunized chicken antisera against 7 representative strains of H2 viruses. Twenty-five microliters of 8 hemagglutination units of the test virus was added to 25 µl of 2-fold dilutions of each antiserum in PBS and incubated at room temperature for 30 min. After the incubation, 50 µl of 0.5% chicken red blood cells in PBS was added and incubated at room temperature for 30 min. HI titers were expressed as the reciprocal of the highest serum dilution showing complete inhibition of hemagglutination. 
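As a minimal illustration of how the HI read-out described above is turned into a titer, the sketch below scans a 2-fold dilution series and reports the reciprocal of the highest dilution that still shows complete inhibition. The starting dilution (1:10) and the example wells are assumptions for illustration, not values from the study.

```python
def hi_titer(complete_inhibition, start_dilution=10):
    """HI titer: reciprocal of the highest serum dilution that still shows
    complete inhibition of hemagglutination in a 2-fold dilution series."""
    titer, dilution = 0, start_dilution
    for inhibited in complete_inhibition:
        if not inhibited:        # first well with hemagglutination ends the series
            break
        titer = dilution
        dilution *= 2
    return titer                 # 0 = no inhibition even at the starting dilution

# Example: complete inhibition at 1:10-1:160, hemagglutination at 1:320 -> titer 160
print(hi_titer([True, True, True, True, True, False, False]))
```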
Vaccine preparation The selected vaccine strain, A/duck/Hokkaido/162/2013 (H2N1), and the challenge strain, A/swine/Missouri/2124514/2006 (H2N3), were inoculated into the allantoic cavities of 10-day-old embryonated chicken eggs and propagated at 35°C for 48 hr. The viruses in the allantoic fluids were purified by differential centrifugation and sedimentation through a sucrose gradient modified from Kida et al [15]. Briefly, allantoic fluids were ultracentrifuged and pellets were layered onto 10 to 50% sucrose density gradient and ultracentrifuged. The fractions containing viruses were collected based on the sucrose concentration, hemagglutination titer, and protein concentration. Whole virus particles were pelleted from the sucrose fractions by ultracentrifugation and suspended in a small volume of PBS. The purified viruses were inactivated by incubation in 0.1% formalin at 4°C for 7 days. Virus inactivation was confirmed by inoculation of the formalin-treated samples into embryonated chicken eggs. The total protein concentration was measured using the BCA Protein Assay Reagent (Thermo Fisher Scientific, Waltham, MA, U.S.A.). Each viral protein in the vaccine was separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and the relative amounts of the hemagglutinin (HA) protein were assumed as a ratio of the HA protein in the total protein using ImageJ (http://rsb. info.nih.gov/ij/index.html). Serum neutralization test Serum neutralizing antibody titers were measured according to the method of Sakabe et al [25]. Briefly, test sera and 100 TCID 50 of A/swine/Missouri/2124514/2006 (H2N3) or vaccine strain virus were mixed and incubated for 1 hr at room temperature. The mixture was inoculated onto MDCK cells and incubated at 35°C for 1 hr. Unbound viruses were removed and the cells were washed with PBS. The cells were subsequently incubated in MEM containing 5 µg/ml acetylated trypsin (Sigma-Aldrich). Cytopathic effects were observed after 72 hr incubation and neutralizing antibody titers were determined as the reciprocal of the serum dilution yielding 50% inhibition of the cytopathic effects. Genetic analysis of H2 influenza viruses Nucleotide sequences of HA genes of the H2 viruses in the influenza virus library were determined and phylogenetically analyzed along with reference sequences available in the public database (Table 1 and Fig. 1). Nucleotide sequences of viruses isolated in Hokkaido in 2013 showed high similarity (99.7-100%) and A/duck/Hokkaido/162/2013 (H2N1) was selected as a representative strain. Based on the results of phylogenetic analysis, the H2 HA genes were classified into Eurasian and North American lineages as the previous study described [26]. The Eurasian linage included viruses isolated in Asia, Europe, and Alaska, while the North American linage included viruses mainly isolated in North America. Viruses belonging to the Eurasian linage were further divided into 4 clusters. Viruses in cluster 1 were avian influenza viruses isolated before the 1980's. Human H2N2 influenza viruses formed a single cluster, cluster 2. This study revealed that avian H2 influenza viruses isolated in Japan in the 1980's doi: 10.1292/jvms.17-0312 (represented by A/pintail/Shimane/1086/1981 (H2N3) in the phylogenetic tree) belonged to cluster 3, along with European isolates around the same period. Recent isolates from avian species in European and Asian countries formed cluster 4. 
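Returning to the vaccine preparation and serum neutralization assays described earlier in this section, two simple quantitative read-outs are involved: the HA content of the whole-virus vaccine (BCA total protein multiplied by the densitometric fraction of the HA band) and the neutralizing antibody titer (the reciprocal of the serum dilution giving 50% inhibition of the cytopathic effect). A hedged sketch of both calculations follows; the numbers and the 2-fold series starting at 1:10 are illustrative assumptions, not data from the study.

```python
def ha_content_ug(total_protein_ug, ha_band_fraction):
    """HA protein per dose: BCA total protein multiplied by the densitometric
    fraction of the HA band (ImageJ-style read-out)."""
    return total_protein_ug * ha_band_fraction

def neutralization_titer(percent_inhibition, start_dilution=10):
    """Reciprocal of the highest serum dilution giving >= 50% inhibition of the
    cytopathic effect, scanned over an assumed 2-fold dilution series."""
    titer, dilution = 0, start_dilution
    for inhibition in percent_inhibition:
        if inhibition < 50:
            break
        titer = dilution
        dilution *= 2
    return titer

print(ha_content_ug(total_protein_ug=90, ha_band_fraction=0.35))   # ~31.5 ug HA
print(neutralization_titer([100, 100, 90, 75, 60, 40, 10]))        # 160
```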
These results clearly demonstrate that H2 influenza viruses recently circulating among birds are genetically distant from human H2N2 viruses. Swine H2N3 viruses belong to the North American lineage, and no avian viruses recently isolated in the East Asian region are genetically close to the swine H2N3 viruses. Antigenic analysis of H2 influenza viruses Seven H2 influenza virus strains, representative of each genetic cluster, were selected and antigenically analyzed by the HI test. Potency test of the vaccine against H2 influenza virus in mice Based on the results of antigenic analysis, all viruses tested in the present study showed cross-reactivity with viruses belonging to the other genetic groups. Thus, the most recent isolate at the beginning of this study, A/duck/Hokkaido/162/2013 (H2N1), was selected as the vaccine candidate and used in the following examinations. Neutralizing antibody titers of sera collected from mice immunized once with either A/duck/Hokkaido/162/2013 (H2N1) or A/swine/Missouri/2124514/2006 (H2N3) were low (Table 3). In contrast, the neutralizing antibody titers of sera from mice injected twice with either vaccine reached up to 1:320 against the homologous virus (Table 4). DISCUSSION Vaccination is the most effective control measure for human pandemic influenza, and the preparation of vaccines for future H2 influenza pandemics is necessary [20]. Our results demonstrated that an inactivated whole virus particle vaccine prepared from a recent avian H2 influenza virus, A/duck/Hokkaido/162/2013 (H2N1), is effective for use in future human pandemics. A/duck/Hokkaido/162/2013 (H2N1) showed broad antigenic cross-reactivity and thus was selected as the vaccine candidate strain in this study. The inactivated vaccine prepared from A/duck/Hokkaido/162/2013 (H2N1) induced neutralizing antibodies against the homologous virus and A/swine/Missouri/2124514/2006 (H2N3) in mice after 2 subcutaneous injections. The inactivated vaccine was also sufficiently protective to reduce the impact of the challenge with A/swine/Missouri/2124514/2006 (H2N3) at a level comparable to that of the vaccine prepared from the homologous strain of the challenge virus. Inactivated whole virus particle influenza vaccines are more effective than split influenza vaccines [1,9,20,23]. Lenny et al. reported that monovalent or multivalent inactivated whole virus particle vaccines generated from A/Singapore/1/1957 (H2N2), A/duck/Hong Kong/319/1978 (H2N2), or A/swine/Missouri/2124514/2006 (H2N3) are effective against a challenge with one of the three viruses in a mouse model [18]. Our findings further support the effectiveness of inactivated whole virus particle vaccines against H2 influenza by showing that a vaccine prepared from an avian H2 virus currently circulating among birds is also effective. Our inactivated vaccine prepared from A/duck/Hokkaido/162/2013 (H2N1) required 2 rounds of vaccination to induce neutralizing antibodies in mice against A/swine/Missouri/2124514/2006 (H2N3); thus, the vaccine dosage and the most effective administration strategy should be considered to improve the efficacy of this vaccine. We have established an influenza virus library storing various influenza viruses for use as vaccine seed strains. Influenza viruses of 144 combinations, including 16 HA and 9 neuraminidase subtypes, isolated from animals or generated in our laboratory have been stored in the library.
Our previous studies revealed that whole virus particle vaccines prepared from this library induce effective immunity against infection with H1, H5, H6, H7 and H9 influenza viruses in mouse and macaque models [4,11,12,[21][22][23]. In the present study, the vaccine candidate strain against H2 influenza selected from the influenza virus library is shown to be potentially useful for a future H2 influenza pandemic. Our annual influenza surveillance in wild birds in Japan and Mongolia effectively monitors virus circulation in wild birds in East Asian countries and also provides a variety of influenza viruses [7,29]. Thus, our library is updated each season, providing specimens from which we might gain novel information about the antigenicity of H2 influenza viruses circulating in wild birds in East Asian countries. In further studies, monitoring the introduction of H2 influenza viruses into pig populations and the emergence of mammalian-adapted H2 influenza viruses will be important for an early response to a human pandemic. In addition, continuous surveillance and antigenic analysis of H2 influenza viruses in both wild birds and poultry are necessary to prepare for a future pandemic and to allow rapid vaccine preparation.
Effects of Salinity on the Biodegradation of Polycyclic Aromatic Hydrocarbons in Oilfield Soils Emphasizing Degradation Genes and Soil Enzymes The biodegradation of organic pollutants is the main pathway for the natural dissipation and anthropogenic remediation of polycyclic aromatic hydrocarbons (PAHs) in the environment. However, in the saline soils, the PAH biodegradation could be influenced by soil salts through altering the structures of microbial communities and physiological metabolism of degradation bacteria. In the worldwide, soils from oilfields are commonly threated by both soil salinity and PAH contamination, while the influence mechanism of soil salinity on PAH biodegradation were still unclear, especially the shifts of degradation genes and soil enzyme activities. In order to explain the responses of soils and bacterial communities, analysis was conducted including soil properties, structures of bacterial community, PAH degradation genes and soil enzyme activities during a biodegradation process of PAHs in oilfield soils. The results showed that, though low soil salinity (1% NaCl, w/w) could slightly increase PAH degradation rate, the biodegradation in high salt condition (3% NaCl, w/w) were restrained significantly. The higher the soil salinity, the lower the bacterial community diversity, copy number of degradation gene and soil enzyme activity, which could be the reason for reductions of degradation rates in saline soils. Analysis of bacterial community structure showed that, the additions of NaCl increase the abundance of salt-tolerant and halophilic genera, especially in high salt treatments where the halophilic genera dominant, such as Acinetobacter and Halomonas. Picrust2 and redundancy analysis (RDA) both revealed suppression of PAH degradation genes by soil salts, which meant the decrease of degradation microbes and should be the primary cause of reduction of PAH removal. The soil enzyme activities could be indicators for microorganisms when they are facing adverse environmental conditions. INTRODUCTION Polycyclic aromatic hydrocarbons (PAHs) are organic molecules consisting of two or more benzene or heterocyclic rings (Patel et al., 2020), which are mainly discharged from the process of thermal decomposition and recombination of organic materials such as coal, petroleum, petroleum gas and wood in nature. Due to the recalcitrance and hydrophobicity of PAHs, a great majority are eventually deposited in soil after transformation and migration (Sun et al., 2018), leading to a serious threat to human health and ecosystem security (Tsibart and Gennadiev, 2013;Sushkova et al., 2017;Zhang et al., 2018). Among the remediation processes of PAH pollution in soils (Rivas, 2006;Ghosal et al., 2016;Kuppusamy et al., 2016Kuppusamy et al., , 2017Li et al., 2020;Zhang et al., 2021), the bio-augment remediation method is considered the most suitable choice because of its low economic cost, high efficiency and sustainability (Haritash and Kaushik, 2009;Ghosal et al., 2016). Soil enzymes are also common representations of soil biochemical characteristics, which are produced by soil microorganisms (Cortés-Lorenzo et al., 2012;Singh, 2015;Azadi and Raiesi, 2021). Soil catalase (S-CAT) can decompose hydrogen peroxide in soil and reduce the damage of excessive accumulation of hydrogen peroxide to soil microorganisms (Sun et al., 2021). Soil polyphenol oxidase (S-PPO) is an oxidoreductase that can oxidize aromatic compounds into quinones (Sullivan, 2014). 
Besides, soil dehydrogenase (S-DHA), reflecting the amount of active microorganisms and their degradation ability of organic matter, can be used to evaluate the degradation performance (Lu et al., 2017). The activities of these enzymes in soils are usually the most sensitive indicators to environmental changes, and their activities are always affected by soil conditions through shifting the synthesis and structure of local microorganisms (Teng and Chen, 2019;Azadi and Raiesi, 2021). Soils in onshore oilfields are commonly suffered by multiple environmental stresses including PAHs contamination and soil salinization (Nie et al., 2009;Cheng et al., 2017). Actually, soil salts are vital factors for microorganisms during their physiological metabolic activities and important substances to maintain cells' osmotic equilibrium (Lozupone and Knight, 2007;Rath and Rousk, 2015;Rath et al., 2019;Yang et al., 2020;Zhao et al., 2020). However, high salinity can result in dehydration or lysis of cells for microbes, then decrease microbial functions in soils (Singh, 2015;Yang et al., 2020). For microbes with salt tolerance, osmotic substances will accumulate in cells and thereby enhance the adaptation of microorganisms to salts (Hagemann, 2011;Asghar et al., 2012). Although former studies have reported effects of the salinity on PAH degradation in soils, it is still unclear how microbial communities relate to changes of degradation genes and soil enzymes with increasing salinity. In this study, a 30-day soil remediation of PAHs under 3 salinity gradients (addition of 0, 1%, and 3% of NaCl, w/w) was conducted. The goals were to provide a better understanding of effect mechanisms of soil salinity on the degradation rate during a bio-augmented remediation of PAHs under salinity changes. The objectives are as follows: (1) to reveal the influence of salinity on composition and diversity of the bacterial community, and (2) to elucidate the response characteristic of functional genes and soil enzymes related to PAH degradation. The results reveal the effect extent of soil salinity on bioremediation of PAH and provide a new perspective for the assessment and remediation of PAHs in extreme environment including but not limited to oilfield soils. Experimental Design In this study, bacteria colonies were isolated and enriched directly from oil-contaminated soil in the Shengli oilfield, China. The bacteria consortium, passed on NCBI database by Yang Li (Qilu University of Technology Shandong Academy of Sciences, Jinan, China), had been proven to have a synergistic biodegradation ability for PAHs in a former experiment. The soils used in this study were collected from the Shengli Oilfield of China. The sampling site was not obviously polluted by crude oil, but had beared long-term oil exploitation since the 1960s. After air dried and ground through a 10-mesh sieve, the soils were spiked with phenanthrene (PHE) and pyrene (PYR) thoroughly to make their concentrations to 200 mg/kg and 50 mg/kg in soils, respectively. Then appropriate sterilized water was added to make the soil moisture to approximately 20%. One portion of the soil was subjected to the measurement of the basic physicochemical properties of the soil, and another was prepared for the PAH degradation experiment. After a month of aging process, the soil was divided into three parts, named LS treatment, S1 treatment and S3 treatment, respectively. 
Approximately 1% sodium chloride (NaCl, w/w) was added to S1 treatment, and 3% NaCl (w/w) was added to S3 treatment. The mixture was placed in a plastic sterilized box. Each box was equipped several 0.22µm filters on the cover, in order to ensure the normal respiration of soil, and prevent the influence of microorganisms from the air. Soil samples were cultured at 25 • C for 30 days. All treatments were set with 3 replicates. And during each sample collection, triplicate samples were collected for chemical and biological analysis. Determinations of Physico-Chemical Properties and Polycyclic Aromatic Hydrocarbons in Soils The pH of the soil and the electrical conductivity (EC) method were used to evaluate soil salinity (Bañón et al., 2021), The percentage of weight loss of organic matter on ignition (W SOI %) method was used to determine the soil organic matter (OM) content (Nakhli et al., 2019). The obtained samples were airdried in the shade and passed through a 60-mesh standard sieve before analysis. Ultrasonic solvent extraction technology was used to extract PAHs from soil (Pan et al., 2013;Liao et al., 2021). A high-performance liquid chromatography (HPLC) system (Agilent, United States) equipped with a fluorescence detector (RF-10AXL) was utilized to analyze PAH concentrations (Geng et al., 2022). The soil enzymes activities of S-CAT, S-PPO and S-DHA were determined as follows: enzymes were extracted from prepared soil samples by enzyme kits and the activities were determined via a microplate reader (iMark, BIO-RAD, United States) (Li et al., 2019a). Analysis of Microbial Community and Degradation Genes Genomic DNA was extracted from the fresh soil samples using the Mag-Bind R Soil DNA Kit M5635-02 (Omega Bio-Tek, United States). A Nanodrop 2000 spectrophotometer (Thermo, United States) was used to check the quality and concentration of the extracted DNA. The two genes (C12O and PAH-RHDα) were amplified in a triplicate and quantified using an MA-6000 real-time fluorescence quantitative PCR instrument. The primers were synthesized following former studies (Muangchinda et al., 2015;Wang et al., 2020). The reaction system was an 8 µl template dilution sample and 8 µl mixture A. The thermal cycle reaction procedure of qPCR was as follows: 5min at 95 • C for stage 1, 15 s at 95 • C and 30 s at 60 • C for stage 2. The whole process was conducted for 40 cycles. Shanghai Personal Biotechnology Co., Ltd was commissioned to accomplish the composition spectrum analysis of microbial community diversity. In brief, the V3-V4 region of the bacterial 16S rRNA genes was amplified with the forward primer 338F (5 -ACTCCTACGGGAGGCAGCA-3 ) and the reverse primer 806R (5 -GGACTACHVGGGTWTCTAAT-3 ) (Xu et al., 2021). Agencourt AMPure Beads (Beckman Coulter, Indianapolis, IN) were used for the purification of PCR amplicons, and the PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, United States) was used for quantitative measurement. After the above stages, amplicons were pooled in equal amounts, and sequencing was performed on the Illumina MiSeq platform with MiSeq Reagent Kit v3. Data Statistical Analysis Before statistical analysis, Kolmogorov-Smirnov and Levene's tests were carried out to test the normality and homogeneity of differences . Excel 2020 (Microsoft, United States) was used for preliminary data statistics and processing. Origin (Version 2020) (Origin Laboratories, Ltd, United States) was mainly used to draw statistical graphs. All data are derived from the mean value in triplicate. 
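As a minimal illustration of the testing workflow just described (normality and homogeneity checks followed by one-way ANOVA or a non-parametric alternative), the sketch below uses invented triplicate removal values; it is a generic SciPy version of the analysis, not the study's actual SPSS procedure.

```python
import numpy as np
from scipy import stats

# Invented PHE removal percentages (triplicates) for the three salinity treatments
ls = np.array([63.1, 64.8, 65.7])
s1 = np.array([80.9, 82.2, 82.5])
s3 = np.array([39.1, 40.4, 40.3])
groups = [ls, s1, s3]

# Normality check (Shapiro-Wilk here as a stand-in for the K-S test named in the text)
normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
# Homogeneity of variances (Levene's test)
equal_var = stats.levene(*groups).pvalue > 0.05

if normal and equal_var:
    stat, p = stats.f_oneway(*groups)      # one-way ANOVA
else:
    stat, p = stats.kruskal(*groups)       # non-parametric alternative
print(f"normal={normal}, equal_var={equal_var}, statistic={stat:.2f}, p={p:.4f}")
```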
SPSS Software (International Business Machines Corp, United States) was used to analyze the differences with one-way analysis of variance (ANOVA) or a non-parametric test. Picrust2 software 1 was used to predict the function of soil bacteria (KEGG). 2 The community structure of bacteria was analyzed via QIIME2 and R language. To comprehensively evaluate the characteristics of microbial community diversity, alpha diversity was utilized. The Chao1 index was used to represent richness, the Shannon and Simpson indices represented diversity, and Pielou's evenness index represented evenness. RESULTS AND DISCUSSION The Removal Percentage of Polycyclic Aromatic Hydrocarbons in Soil Figure 1 demonstrates the percentage removal of PHE and PHY from soil samples at different time points. After 30 days of incubation, significant differences (P < 0.05) in the removal of PAHs were obtained from soils treated with different salinities. On the 7 th day (Figure 1A), there was no significant difference of removal rates of PHE and PYR among the three treatments (P > 0.05), though values of degradation rate were higher in lower salinity soils than in the higher. However, on the 30 th day (Figure 1B), the removal percentages of PHE and PYR in the LS treatment reached 64.52% and 57.83%, respectively, and the S1 treatment had the highest removal percentages of 81.85% and 60.33%, respectively. This indicated that appropriate salinity could probably promote the removal rate of PHE in soils . Compared with LS and S1, the addition of 3% NaCl (w/w) significantly decreased the degradation of PAHs, leading to removal of 39.95% and 35.54% for PHE and PYR, respectively. Many previous studies have revealed a similar result: decreased PAHs removal was caused by salinity stress (Ibekwe et al., 2018;Wang et al., 2019). Changes of Soil Properties Under Salt Stress Soil enzymes, pH, EC and W SOI % were selected to reflect processes of biochemical reactions in the soils. As shown in Supplementary Table 1, pH values remained stable among different treatments of S1, S3 and LS and different sampling times. Soil conductivity and contents of organic matter were significantly influenced by the gradient salinities (P < 0.05). Soil enzymes as catalysts of biochemical conversion and the biodegradation of PAHs have been studied intensively (Lipińska et al., 2015). In this study, the activities of three common soil enzymes (S-CAT, S-PPO and S-DHA) under different soil salinities and sampling times were analyzed to evaluate the Table 2 showed the results of the difference analysis of enzyme activities between samples from the 7 th day and 30 th day. The results showed that the activities of these enzymes significantly decreased with increasing soil salinity (P < 0.05). All of the highest activities were found in the treatment with the lowest salinity (LS treatment). Soil catalase (S-CAT), a common antioxidant enzyme in soil (Sun et al., 2021), can be used as an indicator of soil biomass to some extent, and soils with high biomass usually have higher catalase activity (Chabot et al., 2020). The results in Figure 1 show that the highest S-CAT activity was observed in the LS treatment, indicating that the addition of sodium chloride reduced the S-CAT activity. On the 7 th day, there was no significant difference in S-CAT activity between S1 and S3 treatments, but the difference became more pronounced in the two treatments as the incubation time progressed. 
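Stepping back to the alpha-diversity indices defined in the data-analysis section above (Chao1 for richness, Shannon and Simpson for diversity, Pielou for evenness), a plain-formula sketch of how they are obtained from an OTU count vector is given below; the counts are invented, and real pipelines such as QIIME2 apply rarefaction and other corrections before computing them.

```python
import numpy as np

def alpha_diversity(counts):
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    shannon = -np.sum(p * np.log(p))                   # Shannon diversity (natural log)
    simpson = 1.0 - np.sum(p ** 2)                     # Gini-Simpson diversity
    s_obs = counts.size                                # observed richness
    f1, f2 = np.sum(counts == 1), np.sum(counts == 2)  # singletons, doubletons
    chao1 = s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))     # bias-corrected Chao1 richness
    pielou = shannon / np.log(s_obs)                   # Pielou's evenness
    return {"chao1": chao1, "shannon": shannon, "simpson": simpson, "pielou": pielou}

print(alpha_diversity([120, 80, 40, 10, 5, 2, 1, 1, 1]))
```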
The reduction of catalase activity under high salt conditions led to a lower antioxidant capacity of soil microorganisms, which resulted in higher residual PAHs than in the low salt state. S-PPO and S-DHA are both important enzymes for the breakdown of cyclic organic matter, and their activities represent the bioremediation capacity (Lu et al., 2017). In this study, these two enzyme activities were significantly decreased by the addition of salt (P < 0.05). This reflects a significant inhibitory effect of soil salinity on the microbial degradation of organic matter. Comparing the change in enzyme activity from the 7th day to the 30th day, the LS treatment showed the greatest change in S-DHA activity, with a significant decrease of 67.89%. This may be because S-DHA activity is an indicator of total biological activity, and bacteria without PAH-degrading abilities, or that are less adapted to the environment, undergo apoptosis. Li et al. (2019b) also pointed out that an increase in S-DHA activity was due to an increase in the total number of microorganisms. However, this change was absent in the treatments with relatively high salinity (S1 and S3). The reason was probably that salinity acts as a filter eliminating poorly adapted bacteria, so that the remaining halophilic bacteria were well adapted to their environment. Abundance of Polycyclic Aromatic Hydrocarbon-Degrading Genes in Contaminated Soil The biodegradation of PAHs in soil depends on a variety of functional genes, which are valuable biomarkers for evaluating the potential of PAH degradation (Yang et al., 2015). Real-time quantitative PCR (RT-qPCR) was applied to quantify the absolute abundance of the PAH-RHDα and C12O genes (Figure 3; in the figure, different letters over columns represent significant differences among treatments at the P < 0.05 level of LSD post hoc comparison tests). In general, the salt in soils imposed a prominent stress on bacteria and decreased the total abundance of PAH-degrading genes with increasing salinity. The copy number of degradation genes is an indicator of PAH-degrading microbial abundance, and its decrease signified a decrease of PAH-degrading microorganisms. All gene copy values of PAH-RHDα in the lower salinity treatments were higher than those in the higher salinity soils. The copy numbers of the C12O gene showed an upward trend from S1 to S3 on the 30th day, which suggested that several PAH-degrading bacteria in the S3 treatment were halophilic and thrived under high salinity conditions. Previous studies have also reported the growth and metabolism of Halobacillus (Li et al., 2012), a halophilic microorganism containing the C12O gene, under high salinity (Delgado-García et al., 2018). However, there was no significant difference in the gene copies between S1 and S3 (P = 0.254), which could be explained by the same role played by the mildly halophilic bacteria in both the S1 and S3 treatments. Picrust2 analysis was also conducted to predict the relative abundance of functional genes in each soil treatment. Eight genes associated with PAH degradation (Li et al., 2019b) were selected that showed significant variations among the different treatments (Figure 4). From the 7th day to the 30th day, the abundances of all these functional genes decreased. On the 7th day, the average percentages of all genes showed the lowest values in the S3 treatment and the highest in LS. These results were associated with the bacterial genera carrying PAH degradation genes (Wang et al., 2021a).
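The absolute gene abundances discussed above come from RT-qPCR standard curves. The study's own curve parameters are not given in the excerpt, so the sketch below uses placeholder slope and intercept values purely to illustrate the usual Ct-to-copies conversion and the scaling to copies per gram of soil.

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Gene copies per reaction from a qPCR standard curve
    Ct = slope * log10(copies) + intercept.
    slope and intercept are placeholders (a slope of about -3.32
    corresponds to 100% amplification efficiency)."""
    return 10 ** ((ct - intercept) / slope)

def copies_per_gram(ct, elution_ul, template_ul, soil_g, **curve):
    """Scale copies per reaction to copies per gram of soil (simplified)."""
    return copies_from_ct(ct, **curve) * (elution_ul / template_ul) / soil_g

print(f"{copies_from_ct(24.0):.2e} copies/reaction")
print(f"{copies_per_gram(24.0, elution_ul=100, template_ul=8, soil_g=0.5):.2e} copies/g soil")
```

The elution volume, template volume and soil mass in the example are likewise invented; the point is only the log-linear standard-curve relationship.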
On the 30 th day, the average proportions of genes like k00452, k04101and k04100, increased in S1 treatment, which may be due to the abundance of bacteria containing these genes increased, and they were tolerant to the salt stress extent in the treatment of S1 (Liao et al., 2021). For the other genes, the highest abundances were only found in the LS treatment, which meant most PAH degradation bacteria were not salt-tolerant and leaded to a restrained degradation rate in high salinity soils. Responses of Soil Microbial Community Structure to Salt Stress Bacteria in soils usually dominate microbial communities (Pesce et al., 2018) and play a key role in the dissipation of PAHs in soils (Li et al., 2019b). In order to discuss the effect of salt stress on the microorganisms in soils, 16S rRNA sequence was conducted to analyze the structure and diversity of bacterial communities. The results showed that salt stress caused significant differences in the formation of microbial community structure from the control treatment. Alpha diversity analysis was used to evaluate the bacterial diversity and richness during incubation (Liao et al., 2021). A rarefaction curve (Supplementary Figure 1) was exhibited to show the sequenced quantities of all soil samples could effectively and accurately cover and estimate all microbial communities (Xu et al., 2021). The four commonly used alpha diversity indices were shown in Figure 5, which indicates that all the mean values of alpha diversity indices followed the trend of LS > S1 > S3. That is, the higher the salinity of soils from each treatment, the lower the value of the alpha diversity index, and then the more uneven the distribution of the soil bacterial community. Considering that salinity was the only factor that varied among the treatments, the results of alpha diversity analysis further proved that salinity had an appreciable impact on soil microbial diversity, richness and evenness. Principal coordinates analysis (PCoA) based on Bray-Curtis distances was applied to analyze the overall structural variations of microbial structure (Figure 6). The components of PCoA1 and PCoA2 could explain 69.60% and 11.20% of the variance along their axes, respectively. The loading values of PCo1 were greatly affected by salinity and increased with the soil salinity of the treatment. In the plot, samples from different treatments separated well, which suggested significant differences among different soil salinities (P < 0.05). This result was consistent with the findings of alpha diversity analysis. The statistics of taxon number under different treatments (Supplementary Figure 2) also revealed an increase in species richness over time and a decrease with salinity. The relative abundance and taxonomic analysis of soil microbial communities (Supplementary Table 3) demonstrated that Proteobacteria was the dominant phylum in all treatments (Cycil et al., 2020), accounting for the highest proportion of 89.20%-98.31%. Among the different treatments, the abundance was in accordance with the trend of LS < S1 < S3. The relative abundances of other phyla, including Bacteroidetes (0.26%-4.36%), Firmicutes (0.92%-3.20%), Actinobacteria (0.31%-3.55%), and Chloroflexi (0.01%-0.06%), decreased with the increase of soil salinity (De León-Lorenzana et al., 2018). 
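For reference, the Bray-Curtis dissimilarity underlying the PCoA described above reduces to a simple function of two abundance vectors. The counts in the sketch below are invented, and published pipelines normalize or rarefy samples before computing the distance matrix.

```python
import numpy as np

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.abs(x - y).sum() / (x + y).sum()

ls_sample = [500, 120, 40, 10, 2]   # invented genus counts, low-salinity sample
s3_sample = [900,  20,  5,  0, 0]   # invented counts, 3% NaCl sample
print(round(bray_curtis(ls_sample, s3_sample), 3))
```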
Proteobacteria, Bacteroidetes, Firmicutes, Actinobacteria, and Chloroflexi have been reported to contain many genera associated with the degradation of aromatic hydrocarbons (Muangchinda et al., 2015) and to predominate in PAH-contaminated soils (Ma et al., 2016;Li et al., 2019b). The abundance of Proteobacteria usually increased with soil salinity (Wang et al., 2021b), and dominated the microbe communities under salt stress (Li et al., 2019a). Furthermore, the genus in salt-stress associated with PAH degradation deserve increasing attentions (Xu et al., 2019;Wang et al., 2020;Zhang et al., 2021). Figure 7 shows the bacterial composition at the genus level. The most frequently observed bacterial genus was Acinetobacter, accounting for 36.05%-81.07%, which was reported to be easier to adapt to salinity (Zhang et al., 2021). Halomonas, accounting for 0.28%-18.08%, showed a similar distribution characteristic to Acinetobacter with higher relative abundance in high salt treatment . In addition, the genera Marinobacter, Croceicoccus, Stenotrophomonas, Pseudomonas, and Georgenia were negatively affected by salinity and restrained the relative abundance. The relative abundance of other low-abundance bacteria, such as Salinimicrobium and Clostridiisalibacter, increased over time and decreased with increasing salinity (Figure 7). Among the top 20 bacterial genera, 10 bacterial genera have been previously reported as PAH-degrading bacteria (Fernández-Luqueño et al., 2011;Kappell et al., 2014;Huang et al., 2015;Muangchinda et al., 2015;Ghosal et al., 2016;Sun et al., 2018), including Acinetobacter, Marinobacter, Halomonas, Croceicoccus, Stenotrophomonas, Pseudomonas, Clostridiisalibacter, Ochrobactrum, Methylophaga, and Altererythrobacter. As shown in Figure 8, the addition of salinity significantly decreased the relative abundance of some targeted genera in the treatment such as Marinobacter, Salinimicrobium etc., while others were enriched. Compared with the treatment of S1 and S3. LS treatment showed higher relative abundances of Marinobacter, Salinimicrobium, Croceicoccus, Stenotrophomonas Pseudomonas, Orchrobactrum, Methylophaga, and Altererythrobacter which were reported to be positively correlated with the removal percent of PAHs (Li et al., 2019b;Wang et al., 2020). Caminicella, Sedimentibacter, Caenispirillum, and Gerogenia were enriched only in the low salinity treatments, which may participate in the enhanced degradation of PAHs. In addition, salinity promoted an increase in some genera, including Acinetobacter, Halomonas, and Clostridiisalibacter. Moreover, the highest abundance of Acinetobacter and Halomonas appeared in the S3 treatments Zhang et al., 2021). It suggested that salt application led to a decrease in soil microbial diversity, which was consistent with the results of alpha diversity. Besides, some low abundance genera associated with PAH degradation are also worth of interest and future attention, as biodegradation in complex soils occurs through synergistic interactions between bacteria (Adam et al., 2017). Correlation Analysis of Soil Physical and Chemical Properties, Degradation Genes, Soil Enzyme Activities and Soil Microorganisms Redundancy analysis (RDA) was conducted based on the correlation between pH, EC, W SOI %, degradation genes, soil enzyme activities and the top 10 bacterial genera in relative abundance (Figure 9). 
The results showed that soil physicochemical properties had a significant effect on the composition and function of the microbial community (P = 0.001). Electrical Conductivity value was the most important factor affecting the structure of soil flora and the relative abundance of species, followed by soil enzyme activity and organic matter content. The soil conductivities were positively correlated with the organic matter and some halophilic bacteria, such as Halomonasas and Acinetobacter, while negatively correlated FIGURE 9 | Redundancy analysis (RDA) ordination plot to show the relationships among the soil physicochemical parameters, degradation genes, enzyme activities and the relative abundance of top 10 bacterial genera. The red arrow represents the species, and the length of the arrow represents the variability of species in the sorting space. The blue arrow line represents the influencing factor, and the length represents the influence of the factor on the composition and function of the flora. with soil enzyme activities, PAH degradation, and pH. That is to say, in higher salinity treatments, the PAH degradation rate, soil enzyme and degradation genes will be lower. This is in accordance with other results discussed above in this paper. Halomonasas, Acinetobacter, and Marinobacter are the three largest variants of the different species in the sorting space. The relative abundances of Halomonas and Acinetobacter were positively correlated with the soil salinity, indicating that these genera were important participants in the degradation process of PAHs during a relatively high saline environment (Czarny et al., 2020;Wright et al., 2020). However, there was a significant negative correlation between these two genera and the PAH degradation genes, indicating that these bacteria may not participate in PAH degradation directly. Wang et al. (2020) has proved that Halomonas cannot degrade PHE directly in experiments. The relative abundances of Marinobacter and other genera, including Croceicoccus, Stenotrophomonas, Pseudomonas, and Salinimicrobium, were all negatively correlated with the soil salinity, while positively correlated with pH, PAHs degradation genes and soil enzyme activities. These genera were reported to be the main force of PAH degradation in low salinity treatment . Marinobacter proved to require the cooperation of other bacteria during the biodegradation of PAHs (Cui et al., 2014), which led to a relatively low degradation rate of PAHs in high salinity soils. Soils with lower salinities had higher community diversity and richness, which led to a higher cooperation rate between different bacteria and then a higher PAH removal rate. CONCLUSION This study illuminated the effects of salinity on the PAH removal rate, soil enzyme activities, degradation gene abundance, and the structural changes of the soil bacterial community. (1) The PAH degradation rate increased slightly in low saline soils, while were restrained significantly in high salt conditions. (2) With increasing of soil salinity, not only the bacterial community diversity decreased, but also abundance of degradation gene and soil enzymes. This result could be responsible for the reduction of degradation rate in saline soils. (3) The microbial community was filtered in high salt treatments and dominated by salt-tolerant and halophilic genera, such as Acinetobacter and Halomonas. 
(4) Correlation analysis confirmed that soil salinity was negatively correlated with PAH degradation, the abundance of functional genes and soil enzyme activities, and positively correlated with the abundance of some halophilic genera. DATA AVAILABILITY STATEMENT The original contributions presented in the study are publicly available. These data can be found at: https://www.ncbi.nlm.nih.gov/bioproject/, PRJNA788045. AUTHOR CONTRIBUTIONS YL, XF, and QZ designed the study. YL, WL, and LJ performed the experiment. FS, TL, QL, and YX analyzed the data. YL, XF, and JW wrote the manuscript. All authors contributed to the article and approved the submitted version. FUNDING This study was supported by the National Natural Science Foundation of China (Grant Numbers 41807111 and U1906222) and the Natural Science Foundation of Shandong Province, China (Grant Number ZR2019PD018). ACKNOWLEDGMENTS We thank Xinran Hou for assistance during the soil sampling.
The changes in, and relationship between, plasma nitric oxide and corticotropin‐releasing hormone in patients with major depressive disorder Summary There is strong evidence of roles of the hypothalamus‐pituitary‐adrenal axis and nitric oxide (NO) synthase‐NO system in depression, but the relationship between them is unknown. The aim of this study, therefore, was to elucidate whether there is any correlation between NO and corticotropin‐releasing hormone (CRH) in major depressive disorder (MDD) patients. In 16 outpatients with MDD and 18 healthy controls, the plasma amino acids citrulline (Cit) and arginine (Arg) were determined by high‐performance liquid chromatography, and CRH levels was measured by radioimmunoassay. The Cit/Arg ratio was calculated as an index of NO synthesis. Correlations between NO and CRH were examined with the Spearman test. Before treatment, no significant correlation was observed between the plasma NO level and CRH levels in MDD patients. The plasma NO levels were significantly higher in MDD patients. A significant correlation was found between NO levels and Hamilton Depression Rating Scale (HAMD) scores in MDD patients. The plasma CRH levels were significantly higher in MDD patients than in controls. After monotherapy for 2 months, the NO levels had dramatically declined but were also higher than those in the controls. This study is the first report of the absence of a significant correlation between plasma NO and CRH levels, although both levels are elevated in MDD patients. Furthermore, the strong links between the plasma NO levels and the HAMD scores, as well as the increased NO reduction after remission, suggest that NO plays a key role in depression and may be an indicator of therapeutic success. Summary There is strong evidence of roles of the hypothalamus-pituitary-adrenal axis and nitric oxide (NO) synthase-NO system in depression, but the relationship between them is unknown. The aim of this study, therefore, was to elucidate whether there is any correlation between NO and corticotropin-releasing hormone (CRH) in major depressive disorder (MDD) patients. In 16 outpatients with MDD and 18 healthy controls, the plasma amino acids citrulline (Cit) and arginine (Arg) were determined by high-performance liquid chromatography, and CRH levels was measured by radioimmunoassay. The Cit/Arg ratio was calculated as an index of NO synthesis. Correlations between NO and CRH were examined with the Spearman test. Before treatment, no significant correlation was observed between the plasma NO level and CRH levels in MDD patients. The plasma NO levels were significantly higher in MDD patients. A significant correlation was found between NO levels and Hamilton Depression Rating Scale (HAMD) scores in MDD patients. The plasma CRH levels were significantly higher in MDD patients than in controls. After monotherapy for 2 months, the NO levels had dramatically declined but were also higher than those in the controls. This study is the first report of the absence of a significant correlation between plasma NO and CRH levels, although both levels are elevated in MDD patients. Furthermore, the strong links between the plasma NO levels and the HAMD scores, as well as the increased NO reduction after remission, suggest that NO plays a key role in depression and may be an indicator of therapeutic success. 
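As a sketch of the two simple computations described in the summary, the Cit/Arg ratio as an index of NO synthesis and the Spearman correlation between the NO index and CRH, the example below uses invented plasma values, not patient data.

```python
import numpy as np
from scipy.stats import spearmanr

# Invented plasma values for a few subjects (Cit and Arg in umol/L, CRH in pg/mL)
citrulline = np.array([38.0, 45.0, 30.0, 52.0, 41.0])
arginine   = np.array([26.0, 31.0, 28.0, 33.0, 29.0])
crh        = np.array([19.5, 22.1, 14.8, 25.0, 18.0])

no_index = citrulline / arginine      # Cit/Arg ratio as the index of NO synthesis
rho, p = spearmanr(no_index, crh)     # Spearman rank correlation, as in the study design
print(f"Cit/Arg = {np.round(no_index, 2)}, rho = {rho:.2f}, p = {p:.3f}")
```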
K E Y W O R D S arginine, citrulline, corticotropin-releasing hormone, major depressive disorder, nitric oxide | INTRODUCTION Major depressive disorder (MDD) is a prevalent, often recurrent debilitating illness accompanied by severe functional impairment, high mortality, and a heavy health care burden. 1 Numerous hypotheses and pathways have been suggested to be involved in MDD, but the underlying mechanism remains unclear. Nitric oxide (NO) is a highly diffusible and reactive molecule synthesized and released with the assistance of nitric oxide synthases (NOSs), which convert arginine into citrulline, producing NO in the process. 2 NO has been shown to modulate the functions of different neurotransmitters, including norepinephrine, serotonin, *The copyright line for this article was changed on 6 August 2018 after original online publication. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. glutamate and dopamine, and thus plays an important role in the neurobiology of major depression. 3 Altered NO levels have been found in depression, not only in different brain regions 4,5 and cerebrospinal fluid (CSF) but also in blood 6,7 and exhaled gas. 8 However, there are many inconsistent results in previous studies. 9-12 Our laboratory's previous data have also revealed increased plasma NO levels in patients with first-episode melancholic MDD. 13 Much other work has been done to elucidate the contribution of NO to the pathophysiology of depression. However, there is disagreement regarding its specific function in depression, as has been reviewed by Dhir and Kulkarni. 3 Previous animal experiments have indicated the co-localization of NOS1 with corticotropin-releasing hormone (CRH) in the hypothalamus paraventricular nucleus (PVN), 14 and NO modulates the release of CRH. 15 The hypothalamo-pituitary-adrenal (HPA) axis is the key regulating system for stress responses. 16 In the present study, we aimed to further elucidate the relationship between plasma NO and plasma CRH in clinical MDD patients. To do so, we analyzed the changes in plasma CRH levels, NO levels and NO in pre-and post-treatment MDD patients. | RESULTS No significant differences were found in age, sex, race, years of education, body mass index (BMI) and Hamilton Depression Rating Scale (HAMD) scores between the MDD group and the control group (Table 1). Moreover, these demographic and clinical characteristics were similar between MDD and control patients in both male and female subgroups. After the first blood collection, all patients were treated with the selective serotonin reuptake inhibitor (SSRI) escitalopram at 10-20 mg/day. Thirteen patients had monotherapy, and another three patients underwent combination therapy with alprazolam (0.4-0.8 mg/day) because of insomnia and anxiety. Visits were made once per 2 weeks until 2 months after the first visit. The patients voluntarily joined the follow-up study. At the last visit, the patients' symptoms and HAMD scores were evaluated again, together with their blood samples being taken. Five patients dropped out for various reasons (1 for severe insomnia, 2 for nausea, and 2 for restlessness). The total MDD group showed a significant increase in NO content compared with that in the healthy control group (median 1.45 vs 0.96, Z = −4.28, P = .000, Figure 1). 
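The group comparison just reported (medians with a Z statistic) is the kind of result produced by a rank-based two-sample test such as the Mann-Whitney U test; the excerpt does not name the exact test used, so the following is a generic illustration with invented values and only the group sizes (16 MDD patients, 18 controls) taken from the text.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
mdd_no     = rng.normal(1.45, 0.25, 16)   # invented Cit/Arg values, MDD group (n = 16)
control_no = rng.normal(0.96, 0.20, 18)   # invented values, control group (n = 18)

u, p = mannwhitneyu(mdd_no, control_no, alternative="two-sided")
print(f"median MDD = {np.median(mdd_no):.2f}, median control = {np.median(control_no):.2f}, "
      f"U = {u:.0f}, p = {p:.4g}")
```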
Both male and female subgroup analyses revealed that MDD patients had a significantly higher plasma NO content than did healthy participants (males: median 1.36 vs 1.11, Z = −3.00, P = .003; females: median 1.52 vs 0.95, Z = −2.94, P = .003). A strong association was found between the plasma NO levels and HAMD scores in the MDD group (rho = 0.63, P = .008; Figure 2). After treatment, the NO level markedly decreased compared with pretreatment levels (median 1.33 vs 1.39, Z = −2.05, P = .041; Figure 3). The plasma CRH level also showed a significant increase in the total MDD group compared with the control group (median 19.48 pg/mL vs 10.55 pg/mL, Z = −3.20, P = .001, Figure 4). The change in CRH in female patients was more pronounced than that in male patients (male: median 18.08 pg/mL vs 11.64 pg/mL, Z = −1.77, P = .077; female: median 20.45 pg/mL vs 11.24 pg/mL, Z = −2.37, P = .018). No significant correlation was observed between plasma NO levels and CRH levels (r = .05, P = .91, Figure 5).

DISCUSSION
We found no significant correlation between plasma NO levels and CRH levels, although both were significantly increased in the MDD patients. Herein, we provide the first report of the lack of a significant correlation between plasma NO and CRH levels, in accordance with our previous data showing no co-localization of NOS1 and CRH in the CUS rat PVN; no significant correlations were found between plasma NO and corticosterone levels either in the CUS rat model or in the control group, and there was no significant correlation between plasma NO and cortisol levels in MDD patients. 25 Our results indicated that the NOS-NO system and HPA axis may function independently, contrary to some previous reports. 14,15,[22][23][24] However, most of the previous data have come from various laboratories using different methods. Furthermore, most of the conclusions came not from clinical patients but from postmortem human brain slices or animal studies, and most of those studies did not focus on special subtypes of depression. In the present study, the levels of NO were also significantly higher in the MDD patients and declined after antidepressant treatment. In this study, the ratio of the amino acids citrulline and arginine (Cit/Arg ratio) was calculated as an index of NO synthesis. This method has been reported to be sufficiently accurate and reproducible, 5 and it can be used as an effective index to reflect the NOS-NO system activity. [26][27][28]

FIGURE 1 Plasma concentrations of nitric oxide (citrulline/arginine ratio) in major depressive disorder (MDD) and healthy controls. Both MDD patients and healthy participants were divided into male and female subgroups. In the total analysis, MDD patients showed a significant increase in the plasma NO (Cit/Arg ratio) content compared with that in the healthy control group. Both male and female subgroup analyses showed that MDD patients had significantly higher plasma NO content than that in healthy participants. The data are shown as the median, 25th-75th percentiles, and the range. Cit, citrulline; Arg, arginine; MDD, major depressive disorder. **P < .01, ***P < .001

FIGURE 2 Correlation between plasma nitric oxide levels and Hamilton Depression Rating Scale scores in major depressive disorder. A strong association between plasma NO levels and HAMD scores was observed in the MDD group. NO, nitric oxide; HAMD, Hamilton Depression Rating Scale.
FIGURE 3 Change in the plasma concentrations of nitric oxide (citrulline/arginine ratio) between pre- and post-treatment in major depressive disorder. (○) Male; (•) Female. After treatment, the NO levels markedly decreased compared with pretreatment levels but were still higher than those in the control group. Cit, citrulline; Arg, arginine; MDD, major depressive disorder; treat-MDD, post-treatment in major depressive disorder. *P < .05, **P < .01, ***P < .001

FIGURE 4 Plasma concentrations of corticotropin-releasing hormone in major depressive subjects (MDD) and healthy controls. The plasma CRH level also showed a significant increase in the MDD group compared with the control group. The changes in female patients were more pronounced than those in male patients. CRH, corticotropin-releasing hormone; MDD, major depressive disorder. *P < .05, **P < .01

Furthermore, we selected a special subtype of depression (MDD) for study, using the same SSRI monotherapy. In addition, we collected blood samples before treatment. Our result was consistent with those from other laboratories. 7,29 Our published data revealed increased plasma NO levels both in a male rat model of chronic unpredictable stress 25 and in first-episode MDD patients. 13 Plasma levels of NO metabolites, i.e., nitrite and nitrate, which reflect plasma NO concentrations, have also been reported to increase in depression. 6,30 Of course, there are still some inconsistent findings in previous studies, [9][10][11][12] possibly because of different methods, different subtypes of depression or the different treatments the patients received, and multiple depression comorbidities. Therefore, in this study, we excluded the above factors. A strong association between plasma NO levels and HAMD scores was also revealed, thus indicating that the NO alterations were consistent with the severity of depressive symptoms. After antidepressant treatment, the concentrations of NO declined, as we have previously reported. 13 Therefore, the plasma NO level may be a monitor of depressive symptoms or may forecast the outcome of anti-depression treatment. Interestingly, multiple antidepressants have been reported to change the NO levels in an animal's body, 3,31,32 and clinical studies have also confirmed the NO modulatory activity of various antidepressants, particularly those belonging to the SSRI class. 3 Hence, it is estimated that future antidepressants that act partly on the NO signaling pathway might be helpful for the treatment of drug-resistant depression. Our results indicated that the plasma CRH levels increased in MDD patients, in agreement with previous clinical results 33,34 and postmortem human brain studies, 23,24,35 thus revealing that hyperactivity of hypothalamic CRH neurons is involved in the progression of depression. It has also been shown that administration of CRH-R1 antagonists has clear antidepressant-like effects in depression. 36,37 We observed a more significantly increased CRH level in female patients than in males, thus revealing a closer relationship between the HPA axis and female depression. Several limitations should be noted in the present study. First, we collected only blood samples and not CSF samples of the patients, for ethical reasons. Some researchers have proposed that the plasma levels of these neuroactive amino acids might, to a certain degree, reflect their brain levels. 4
Moreover, some researchers have interpreted this finding as evidence that the plasma CRH measured had a hypothalamic origin. 33,38 Second, our results came from a sample that was relatively small because we had to strictly control many factors. For example, the MDD patients had decreased appetites and decreased food intake, thus leading to a decreased metabolism and decreased synthesis of these amino acids. Consequently, we limited the BMI range in the present study. In conclusion, this is the first report of a lack of a significant correlation between increased plasma NO and CRH levels, thus indicating that these two systems may function independently. Furthermore, the strong links between the plasma NO levels and the HAMD scores suggested that NO might be correlated with the severity of depression. The decreased NO levels after remission suggested that NO may be an indicator of therapeutic success and is useful for monitoring during therapy in MDD patients.

Study design
This was a retrospective case-control study that included three parts: The first part was to analyze the plasma NO levels in MDD patients and the changes in NO levels in pre- and post-treatment patients. The second part was to evaluate the plasma CRH levels in MDD patients. The third part was to elucidate the relationship between NO and CRH in MDD patients. All subjects were evaluated through standard physi-

Subjects
Sixteen Chinese Han untreated outpatients with MDD (7 males and 9 females, mean age of 47 years) and 18 age- and sex-matched controls

Blood sample collection and management
Fasting venous blood samples were collected into anticoagulated tubes (containing EDTA) in the morning (08.00 hours). After being centrifuged (4°C, 12 000 g) for 15 minutes, the plasma samples were divided into aliquots and immediately stored at −80°C in the laboratory for further measurement of NO and CRH levels.

High-performance liquid chromatography (HPLC) with fluorescence detection (FLD) for plasma amino acid analysis
The protocol was the same as that previously published. 13 The linearity of the detector response to standards was in the range of 0.9-158.2 μmol/L (Cit) and 1.8-178.0 μmol/L (Arg). The intraday relative standard deviations for the peak area were as follows: Cit, 0.97% and Arg, 0.77%, respectively. The interday relative standard deviations for the peak area were as follows: Cit, 4.29% and Arg, 3.18%, respectively.

Radioimmunoassay for CRH
Plasma samples were extracted and concentrated to 1 mL and analyzed using a competitive RIA kit (Phoenix Pharmaceutical Company, Harbor Boulevard, Belmont, CA, USA). The anti-CRH serum elicits a 100% cross-reaction with CRH (1-41), but no cross-reaction with prepro-CRH (125-151), cortisol, adrenocorticotropic hormone, vasopressin and brain natriuretic peptide Acros Organics 45. The limit of detection (LOD) was 10 pg/mL. The 50% binding intercept was 145 pg/mL. The CRH recovery from plasma was 62%, and the intra- and inter-batch coefficients of variation were 6.7% and 10.8%, respectively.
The comparison between two related samples was analyzed using the Wilcoxon test. Correlations between binary variables were examined with the Spearman test. Pearson correlation analysis was used to assess the correlation between continuous variables. SPSS version 11.5 software was used for the statistical analyses. Differences were considered significant if the two-sided P value was .05 or less.

DISCLOSURE
All authors declare no conflicts of interest.
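For readers who want to reproduce this kind of analysis, the sketch below illustrates the pipeline described above (Cit/Arg ratio as the NO index, Mann-Whitney U for group comparisons, Wilcoxon for pre- vs post-treatment, and Spearman correlation with HAMD). It is an illustrative example, not the authors' code, and every numerical value in it is a placeholder rather than study data.

```python
# Illustrative analysis sketch (not the authors' code): Cit/Arg ratio as the
# NO index, followed by the nonparametric tests named in the Methods.
# All numbers below are invented placeholders, not data from this study.
import numpy as np
from scipy import stats

# Plasma citrulline and arginine (umol/L) per subject (placeholders).
cit_mdd = np.array([45.0, 38.2, 51.3, 47.8, 40.6])
arg_mdd = np.array([30.1, 28.4, 33.0, 31.2, 29.5])
cit_ctl = np.array([32.5, 29.8, 35.1, 31.0, 33.6])
arg_ctl = np.array([33.2, 30.9, 36.4, 32.1, 34.8])

no_mdd = cit_mdd / arg_mdd          # Cit/Arg ratio as the NO synthesis index
no_ctl = cit_ctl / arg_ctl

# MDD vs controls (independent groups): Mann-Whitney U test.
u_stat, p_group = stats.mannwhitneyu(no_mdd, no_ctl, alternative="two-sided")

# Pre- vs post-treatment in the same patients: Wilcoxon signed-rank test.
no_post = np.array([1.30, 1.22, 1.41, 1.35, 1.28])
w_stat, p_paired = stats.wilcoxon(no_mdd, no_post)

# NO index vs HAMD score: Spearman correlation.
hamd = np.array([24, 19, 28, 22, 25])
rho, p_corr = stats.spearmanr(no_mdd, hamd)

print(f"group P={p_group:.3f}, paired P={p_paired:.3f}, rho={rho:.2f} (P={p_corr:.3f})")
```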
Gac fruit extracts ameliorate proliferation and modulate angiogenic markers of human retinal pigment epithelial cells under high glucose conditions

Objective: To investigate the impact of the extracts of Gac fruit parts (peel, pulp, seed, and aril) on the cell viability and angiogenesis markers of human retinal pigment epithelial (ARPE-19) cells under high glucose conditions. Methods: The effect of the extracts of Gac fruit peel, pulp, seed and aril on the ARPE-19 cells was determined using the MTT viability assay and Trypan blue dye, and morphological changes were observed using light microscopy. An enzyme-linked immunosorbent-based assay was performed to evaluate the effect of Gac fruit parts on reactive oxygen species (ROS), vascular endothelial growth factor (VEGF) and pigment epithelium-derived factor (PEDF) secretions. Results: High glucose (HG) at 30 mmol/L increased ARPE-19 cell viability and ROS and VEGF secretions. In contrast, exposure of ARPE-19 cells under high glucose conditions to Gac fruit extracts inhibited cell viability, induced morphological changes, decreased ROS and VEGF secretions, and increased the PEDF level. Gac pulp, seed, and aril at 1 000 μg/mL showed significant inhibition activities [(7.5 ± 5.1)%, (2.7 ± 0.5)%, (3.2 ± 1.1)%, respectively] against HG-induced ARPE-19 cell viability. The findings also demonstrated that Gac aril at 250 μg/mL significantly decreased ROS and VEGF levels [(40.6 ± 3.3) pg/mL, (107.4 ± 48.3) pg/mL, respectively] compared to ROS [(71.7 ± 2.9) pg/mL] and VEGF [(606.9 ± 81.1) pg/mL] in HG untreated cells. Moreover, 250 μg/mL of Gac peel dramatically increased the PEDF level [(18.2 ± 0.3) ng/mL] compared to that in HG untreated cells [(0.48 ± 0.39) ng/mL]. Conclusions: This study indicates that the extracts of Gac peel, pulp, seed and aril reduced cell viability, minimized ROS generation and showed angiogenic activities. Therefore, our findings open new insights into the potential of Gac fruit against HG-related diabetic retinopathy disease.

Introduction
Diabetic retinopathy (DR) is a microvascular destructive disease and is one of the common consequences of diabetes [1]. Hyperglycaemia is a primary factor in the initiation and progression of vascular complications related to eye diseases in diabetes such as DR [2,3]. However, the mechanisms by which hyperglycaemia leads towards vascular dysfunction still require further investigation.
about (755 ± 185) g. The fruits were then kept separately in sealed plastic bags and stored at −80 °C for a few weeks until they were ready to use.

Fruit extract preparation
The Gac fruit was removed from storage and exposed to room temperature in order to thaw completely, followed by washing with tap water to remove any debris. The fruit was next separated into four parts: peel, pulp, seed, and aril. It was then cut into small slices and pieces, and each fruit part was lyophilised using a freeze dryer at −45 °C for 3 d (BT2K, VirTis, Warminster, USA). The freeze-dried parts were then ground, mixed, extracted with 70% ethanol at a ratio of 1:20 (w/v), shaken vigorously using an orbital shaker (SHO-2D, Daihan Scientific, Seoul, Korea) at 180 rpm for 2 h, and filtered. Next, the obtained extract was evaporated using a rotary evaporator (R-210, Buchi, Flawil, Switzerland). The extract following evaporation (a sticky, dark liquid) was then dried using a freeze dryer, and the resulting dried extract was stored at −20 °C for further use.

MTT (1 mg/mL) was added to each well and incubated for 4 h [31]. The MTT was then removed carefully and 100 µL of DMSO was added. The absorbance was read at 570 nm using a microplate reader, and 630 nm was used as a reference wavelength. The results were expressed as a percentage of three measurements using the following formula:

Cell viability percentage (%) = OD 570-630 of HG groups (treated and non-treated) / OD 570-630 of LG group × 100

where OD = optical density.

Cell morphology examination
Using 6-well plates, the ARPE-19 cells were seeded at a density of 300 000 cells/well in LG media and allowed to attach overnight. Next, the medium of each well was removed, and the cells were washed twice with PBS (1×). After that, the ARPE-19 cells were exposed to 2 mL of HG (30 mmol/L) media containing different concentrations (62.5-1 000 µg/mL) of the Gac fruit extracts (peel, pulp, seed and aril). In this assay, the concentration of Gac extracts started from 62.5 µg/mL because small concentrations might not be effective in inducing morphology changes. Untreated cells with HG and LG media were also included. Following 48 hours of incubation, an inverted light microscope was used to observe morphological changes.

Trypan blue dye assay
For further determination of the effects of Gac fruit extracts on ARPE-19 cell viability, 6-well plates were used, and 300 000 ARPE-19 cells were seeded with LG (5.5 mmol/L).
After 24 h, the media were removed, then ARPE-19 cells were incubated with 2 mL of LG (5.5 mmol/L) and HG (30 mmol/L), and with different concentrations

(Figure legend) Results are expressed as mean ± SD of three measurements. Bars having different letters are significantly different at P < 0.05 using Tukey's test. Groups: LG, Untreated, Peel (62.5 µg/mL), Pulp (250 µg/mL), Seed (500 µg/mL), Aril (125 µg/mL).

Effects of extracts from Gac fruit parts on PEDF level
As seen in Figure 7, there were no significant differences between the level of PEDF among LG and HG cells, even though the level of PEDF

Discussion
Gac fruit has been traditionally used in folk and ancestral medicine and treatments for several conditions. Recently, several biological effects and health benefits of Gac fruit have been revealed, such as antioxidant, anti-proliferative, anti-cancer, and antibacterial activities. In the present study, the effects of extracts from Gac fruit parts (peel, showed that HG induced abnormal activation of RPE cells, which might be associated with PDR development [33]. The results of this study revealed that HG increased ARPE-19 cell viability when compared to LG, which was consistent with a previous study that reported a 2-fold increase in cell viability under HG compared with LG conditions [34]. Amongst the fruit parts, Gac seed at the highest concentration of 1 000 µg/mL had the highest anti-proliferative activity as tested by MTT and Trypan blue dye. This was in line with the results of some studies published earlier, which demonstrated that Gac seed exhibited considerable suppression activity against the proliferation of breast cancer cells ZR-75-30 [28], normal HaCat and melanoma D24 cells [35], and lung cancer cell A549 [36]. In this study, Gac seed and pulp induced noticeable morphological changes, such as cell condensation, floating, detachment, and spherical and rounded shapes, and these characteristics might be due to the cytotoxic effect of high doses. Recently, studies reported that Gac seed water extract also induced morphological changes and cytotoxic effects against normal HaCat, melanoma D24 and C1 cell lines [35]. However, the molecular mechanisms, such as the apoptotic and necrotic pathways underlying Gac extract-induced ARPE-19 morphological changes, need to be further studied. Gac aril in this study also inhibited ARPE-19 cell viability, which was consistent with another study which revealed that a water extract taken from Gac aril significantly reduced MCF-7 and melanoma cells at rates of 60% and 70%, respectively [37]. In addition to the seed and aril parts, this study also showed that Gac peel and pulp reduced ARPE-19 cell viability, but the biological and anti-proliferative potential of Gac pulp and peel has not been studied well. One of the essential initiators of DR development is ROS, which has been noticed to be elevated by chronic hyperglycaemia [38,39]. High concentrations of glucose were found to stimulate ROS production in RPE in vitro [33,40]. This finding was in agreement with the result of this study, which revealed that HG (30 mmol/L) led to an increase in the ROS level produced by ARPE-19 compared to a lower level in LG (5.5 mmol/L). This increment can be naturally avoided by the antioxidant defence system. However, in some conditions such as chronic hyperglycaemia, the balance between ROS and the antioxidant system is disrupted [41]. Thus, it is necessary to support and recover the balance and decrease the level of ROS in the management of PDR.
Furthermore, the results of this study showed the ability of extracts from Gac fruit parts to reduce ROS production. This proved the role of phytochemicals and natural sources as antioxidants and their potential ability in the management of DR [42,43]. Amongst all of the Gac fruit parts, Gac aril exhibited the highest anti-ROS ability which could be due to the rich carotenoids, phenolics and other bioactive compounds found in this part as revealed by previous studies [24,44]. Gac seed and peel were also found to possess anti-oxidant activities which might be due to rich trypsin inhibitors compounds, saponins, and phenolics content [35,45]. Retinal neovascularisation as mentioned earlier is the last stage of DR which is characterized by abnormal proliferation and this process is tightly controlled by inhibitors and stimulators of angiogenic factors [46]. Therefore, one of the main strategies to treat PDR is to modulate angiogenesis process by either suppressing angiogenic stimulators, such as VEGF and/or stimulating angiogenic inhibitors, such as PEDF. Current treatment patterns of this process include anti-VEGF drugs injection in addition to laser photocoagulation and surgery [47,48]. These strategies are extremely expensive and accompanied with undesired results, such as retinal detachments, retinal damage, and vitreous haemorrhage [49,50]. In this study, HGinduced secretions of VEGF were about 2-fold higher than that in LG group which was consistent with the results of previous studies. It was previously shown that VEGF secretion was highly responsive to the change in glucose concentration, which increased when the glucose dose increased [51]. It was also reported that the VEGF level produced by the ARPE-19 cells increased under HG conditions [52,53]. HG-stimulated VEGF secretion in ARPE-19 cells as illustrated in this study was reversed when treated with Gac parts extracts. Amongst all of the Gac fruit parts, the aril extract dramatically decreased VEGF secretions, which might be attributed to the rich content of phytochemicals, especially carotenoids. The role of carotenoids in the modulation of angiogenesis markers has been well established [54]. A previous study confirmed the anti-angiogenic activity of lycopene via decreased VEGF production, inhibited tube formation, and migration in human umbilical vein endothelial cells [55]. During this study, the PEDF level was evaluated by using the extracts of Gac fruit parts. The results revealed that the PEDF level of HG group reduced compared to that of LG group, whereas Gac fruit parts boosted the production of PEDF by the ARPE-19 cells. In PDR, the increase in the level of PEDF is essential in order to balance the high concentration of VEGF that is stimulated by HG thus modulating the angiogenesis process. Although one study showed that the PEDF level was enhanced when treated with xanthatin as a medicinal component from Xanthium [56], limited information has been found regarding the role of phytochemicals in PEDF secretions. To the best of our knowledge, this study is the first to investigate the effects of Gac fruit extracts on HG-induced PDR biomarkers in vitro. The data revealed anti-proliferative, anti-ROS, angiogenesis biomarkers regulating activities of Gac fruit extracts. Therefore, the current findings suggest that Gac fruit could potentially be utilized as a therapeutic agent in the treatment of HG-related eye disease. 
However, additional studies are needed to explore the active compounds and the mechanisms of action underlying the potential of Gac fruit extracts.

Conflict of interest statement
The authors have no conflicts of interest to declare related to this study.
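As a small worked example of the viability formula quoted in the Methods above (background-corrected OD of an HG well divided by that of the LG control, times 100), the following sketch may help; the OD readings are invented placeholders, not measurements from this study.

```python
# Illustrative sketch of the cell-viability formula from the Methods:
# viability (%) = OD(570-630) of an HG well (treated or untreated)
#                 / OD(570-630) of the LG control well x 100.
# The OD readings below are invented placeholders, not data from this study.

def viability_percent(od570_hg, od630_hg, od570_lg, od630_lg):
    """Background-corrected viability of an HG well relative to the LG control."""
    return (od570_hg - od630_hg) / (od570_lg - od630_lg) * 100.0

# Example: one HG well treated with a hypothetical Gac extract concentration,
# compared against the low-glucose (LG) control well.
print(f"{viability_percent(0.42, 0.05, 0.61, 0.06):.1f} %")
```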
Limiting Means for Spherical Slices We show that for a suitable class of functions of finitely-many variables, the limit of integrals along slices of a high dimensional sphere is a Gaussian integral on a corresponding finite-codimension affine subspace in infinite dimensions. Introduction In this paper we generalize a result of [10] showing that the large-N limit of the integral of a function f over affine slices of a high dimensional sphere S N −1 ( √ N ) is Gaussian. In [10] this was proved for bounded f , and here we establish the result for f in a suitable L p space, for any p > 1. for continuous linear T : H → X where X is finite dimensional. See [10] Proposition 4.1 for full detail. We are interested in integrals of a function φ over S AN with respect to the normalized surface area measureσ: We will take φ to be a Borel function that only depends on the first k-coordinates for k < N . We will also need an important disintegration formula for the integral (1.4) and to that end we need the following projections. Let be the projection from l 2 onto the first k-coordinates. Then let L be the restriction of P (k) to ker Q: This is a surjection provided dim (ker(Q)) > dim(X). Further we define L N to be the restriction of P (k) to ker(Q N ): for large enough N , L N is also surjective. (See [10] Proposition 6.2). Next we also want to restrict L and L N to be isomorphisms. We define L 0 to be the restriction of L to ker Q ⊖ ker L which is the orthogonal complement of ker Q ∩ ker L within ker Q: This is an isomorphism. Lastly let L 0,N be the restriction of L N to ker Q N ⊖ker L N L 0,N : ker Q N ⊖ ker L N → X for large N this is an isomorphism (See again [10] Prop. 6.2). Now we state without proof the disintegration formula we will need. See [10] Theorem 3.3 for full detail. Theorem 1.1. Let f be a bounded, or non-negative, Borel function defined on Let z 0 N be the point on Q −1 N (w 0 ) closest to 0. Let L 0,N and L N be defined as above and let x 0 = L N (z 0 N ) ∈ X. Then where a z 0 N = N − |z 0 N | 2 and D N consists of all x ∈ x 0 + L N (ker Q N ) ⊂ X for which the term under the square-root is positive: Taking φ to be a Borel function on R k we let f be the function obtained by extending φ to R N by setting then the disintegration formula (1.7) for this particular f is: (1.9) Let d = N − 1 then the volume of the sphere is: given, for all j, by the formula: (1.11) We can then rewrite (1.10) as So, using the normalized surface measure σ on the sphere where I N (x) is as in (1.13). 1.2. Related literature. This paper is a generalization of work done in [10] where more detailed results are proved for a bounded Borel function φ. The connection between Gaussian measure and the uniform measure on high dimensional spheres appeared originally in the works of Maxwell [7] and Boltzmann [2, pages 549-553]. Later works included Wiener's paper [12] on "differential space", Lévy [6], McKean [8], and Hida [3]. The work of Mehler [9] is one example illustrating the classical interest in functions on high-dimensional spheres. For the theory of Gaussian measures in infinite dimensions we refer to the monographs of Bogachev [1] and Kuo [5]. This paper is the fourth in a series of papers. The first [4] develops the Gaussian Radon transform for Banach spaces, where a support theorem was established. The second [11] establishes the result for hyperplanes and the third [10] proves the result for the case of affine planes. 
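For orientation, the codimension-zero special case alluded to above (the classical Maxwell-Boltzmann observation) can be stated cleanly; the formula below is a standard fact recorded here for context, not a transcription from this paper.

```latex
% Codimension-zero special case, stated for orientation (a standard fact,
% not a formula transcribed from this paper).
If $x=(x_1,\dots,x_k)$ denotes the first $k$ coordinates of a point distributed
according to the normalized surface measure $\bar\sigma$ on $S^{N-1}(\sqrt N)$,
then its density on $\mathbb{R}^k$ is
\[
  f_N(x)=c_{N,k}\Bigl(1-\frac{|x|^2}{N}\Bigr)_{+}^{\frac{N-k-2}{2}},
  \qquad
  c_{N,k}=\frac{\Gamma(N/2)}{(\pi N)^{k/2}\,\Gamma\bigl((N-k)/2\bigr)},
\]
and $f_N(x)\to(2\pi)^{-k/2}e^{-|x|^2/2}$ as $N\to\infty$, so integrals of
functions of finitely many coordinates converge to standard Gaussian integrals.
```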
Limiting Results In this section we review our previous results and prove the main result of this paper. Previous Results. From the previous paper [10] the main result was the following theorem: Theorem 2.1. Let A be a finite-codimension closed affine subspace in l 2 , specified by (1.1). Let k be a positive integer; suppose that the image of A under the coordinate projection where σ is the normalized surface area measure on S AN , and µ is the probability measure on R ∞ specified by the characteristic function where p A is the point on A closest to the origin and P 0 is the orthogonal projection in l 2 onto the subspace A − p A . We give a sketch of the proof here for full detail refer to [10] Theorem 2.1. The pushforward measure π (k) * µ of µ to R k is is the projection on the first k coordinates. Now define µ ∞ on R k by where L 0 is given in (1.7) and z 0 (k) is the first k-coordinates of z 0 , the point on A = Q −1 (w 0 ) closest to the origin. From their respective characteristic functions we can deduce that Now, using this and Theorem 2.2 below, we can conclude that (2.5) For the first equality in (2.5) we need the following theorem (from [10] Theorem 4.1). Theorem 2.2. Let A be an affine subspace of l 2 given by Q −1 (w 0 ), where Q : l 2 → R m is a continuous linear surjection. Suppose that the projection P (k) : l 2 → R k : z → z (k) maps ker Q onto R k . Let S N −1 ( √ N ) be the sphere of radius √ N in the subspace R N ⊕ {0} in l 2 . Let φ be a bounded Borel function on R k and let f be the function obtained by extending φ to l 2 by setting 7) where L 0 is the restriction of the projection P (k) to ker Q ⊖ ker P (k) , and z 0 is the point on Q −1 (w 0 ) closest to the origin. Note that in order to extend Theorem 2.1 for a more general function φ we only need to extend Theorem 2.2. Again we give a sketch of the proof for Theorem 2.2 . Let a = √ N and d = N −1. From the disintegration formula above (1.14) we have where We state here the limits of the constant term outside the integral (in (2.8)), as well as those of the full integrand on the right hand side, including the determinant term without proof, for full detail refer to [10]: . (2.12) Therefore if φ is such that we can apply dominated convergence theorem in (2.8) we have, which is the result in Theorem 2.2. The Main Result. We turn now to the main result of this paper, an extension of the previous result Theorem 2.1 to more general functions. We will show that if φ is a Borel function on R k which is L p , p > 1, with respect to the Gaussian measure with density proportional to then the conclusion of 2.1 still holds. To this end we state and prove a generalization of Theorem 2.2. Theorem 2.3. Let A be an affine subspace of l 2 given by Q −1 (w 0 ), where Q : l 2 → R m is a continuous linear surjection. Suppose that the projection P (k) : Let φ be a Borel function on R k which is in L p with respect to the Gaussian measure with density proportional to for some p > 1, and let f be the function obtained by extending φ to l 2 by setting 18) where L 0 is the restriction of the projection P (k) to ker Q ⊖ ker P (k) , and z 0 is the point on Q −1 (w 0 ) closest to the origin. Proof. Utilizing the proof from Theorem 2.2 we need only show (2.14) still holds, that is, and D N is all x ∈ R k such that the square-root term is positive. 
First we have the following inequality, (2.20) We observe that, for N > k + m + 2, the maximum of the function e y for all y ∈ (0, N ] occurs at y = k +m+2; this is seen by checking that the derivative d/dy is positive for y ∈ [0, k + m + 2) and negative for y ∈ (k + m + 2, N ]. Thus, Taking y = L 0,N −1 (x − z 0,N (k) ) 2 , we have: (2.21) Lemma 2.4 gives the bound: (2.23) Let a N (x) = L −1 0 (x − z 0,N (k) ) and a(x) = L −1 0 (x − z 0 (k) ). (2.24) Then by (1.3), for any ǫ > 0 and large enough N (2.26) This gives us: Now since φ is a Borel function on R k which is in L p with respect to the Gaussian measure with density proportional to e − a(x) 2 /2 dx, for some p > 1. We have then the bound The dominating function is integrable: and q is the conjugate to p as usual: p −1 + q −1 = 1. The integral in c ǫ is finite because, after changing variables to y = a(x), and using the limits (2.10), (2.11), and (2.12) we have established: Lemma 2.4. With notation as above, Proof. Recall the definition of L, it is the projection on k coordinates restricted to the ker Q: and L 0 is the restriction of L to the orthogonal complement of ker L inside ker Q. Since L is surjective L 0 is an isomorphism. Let x ∈ R k and y 0 = L −1 0 (x) then any vector y ∈ L −1 (x) can be written as This means y = y 0 + v v ∈ ker L for all y ∈ L −1 (x) therefore y 0 = L −1 0 (x) is the point in L −1 (x) of smallest norm. By the same argument, provided N is large enough for L N to be a surjection, for x ∈ R k , the point L −1 0,N (x) is the point on L −1 N (x) of smallest norm. Let y ∈ ker(Q N ) then y ∈ R N and Q N y = 0. Taking R N to be contained in l 2 as R N ⊕ {0} then for all y ∈ ker Q N we take y = (y, 0). Now 0 = Q N y = Q(J N y) therefore J N y ∈ ker Q and so (y, 0) ∈ ker Q. Thus ker Q N is contained in ker Q. Now for y ∈ ker Q N we have L(y) = L N (y) since both L and L N are the projection onto the first k coordinates. Since L −1 N (x) is all y ∈ ker Q N such that L N (y) = x it is contained in L −1 (x). Now we have the inequality (2.32). Let us look at an example that shows the necessity of the L p , p > 1, condition and the difficult nature of the limit of Gaussian integrals above. In this context for the function g(x) = e x 2 /2 (1 + x 2 ) −1 for all x ∈ R, we have R g(x)e −x 2 /2 dx < ∞ but
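A short check of the integrability properties of this g, which appear to be the point of the example (g is integrable against the Gaussian weight but lies in no L^p, p > 1), is sketched below; it is a hedged reconstruction rather than a quotation of the paper's own computation.

```latex
% Integrability check for the closing example (assumed to be its point):
% g lies in L^1 of the Gaussian weight but in no L^p with p>1.
For $g(x)=e^{x^2/2}(1+x^2)^{-1}$,
\[
  \int_{\mathbb{R}} g(x)\,e^{-x^2/2}\,dx=\int_{\mathbb{R}}\frac{dx}{1+x^2}=\pi<\infty,
  \qquad
  \int_{\mathbb{R}} g(x)^p\,e^{-x^2/2}\,dx
  =\int_{\mathbb{R}}\frac{e^{(p-1)x^2/2}}{(1+x^2)^{p}}\,dx=\infty
  \quad(p>1),
\]
so $g$ fails the $L^p$, $p>1$, hypothesis of Theorem 2.3.
```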
Forward Production With Large p/\pi Ratio and Without Jet Structure at Any p_T Particle production in the forward region of heavy-ion collisions is shown to be due to parton recombination without shower partons. The regeneration of soft partons due to momentum degradation through the nuclear medium is considered. The degree of degradation is determined by fitting the $\bar p/p$ ratio. The data at $\sqrt s=62.4$ GeV and $\eta=3.2$ from BRAHMS on the $p_T$ distribution of average charged particles are well reproduced. Large proton-to-pion ratio is predicted. The particles produced at any $p_T$ should have no associated particles above background to manifest any jet structure. Introduction In an earlier paper [1] we studied the problem of hadron production in the transfragmentation region (TFR) in heavy-ion collisions. It was stimulated by the data of PHOBOS [2] that show the detection of charged particles at η ′ > 0, where η ′ = η − y beam . We broadly refer to the η ′ > 0 region as TFR. However, since the transverse momenta p T of the particles were not measured, it has not been possible to determine the corresponding values of Feynman x, in terms of which TFR can more precisely be defined as the region with x > 1. More recently, BRAHAMS has analyzed their forward production data at √ s = 62.4 GeV with both η and p T determined [3]. It is then possible to interpret BRAHMS data by applying the formalism developed in [1], which is done entirely in the framework of using momentum fractions instead of η. In this paper we calculate the proton and pion distributions in x and p T and conclude not only that the p/π ratio is large, but also that there should be no jet structure associated with the particles detected at any p T in the forward region. In [1] the x distributions of p and π have been calculated for 0.6 < x < 1.2 in the recombination model [4,5,6], taking into account momentum degradation of particle constituents traversing nuclear matter [7] and the recombination of partons arising from different beam nucleons. However, we have not considered the regeneration of soft partons as a consequence of momentum degradation in the nuclear medium. Since such soft partons significantly increase the antiquark distribution in the mid-x region, it is important to include them in the determination of the pion distribution. Furthermore, no consideration has been given in [1] to transverse momentum, which is the other major concern in this paper. In the following we use forward production to refer specifically to hadrons produced at x > 0.3, with the fragmentation region (FR) being 0.3 < x < 1, and TFR being x > 1. Any hadron produced in the TFR cannot be due to the fragmentation of any parton because of momentum conservation, since no parton can have momentum fraction > 1, if we ignore the minor effect of Fermi motion of the nucleons in a nucleus. In the FR hadrons with any p T that are kinematically allowed can, in principle, arise from the fragmentation of hard partons; however, the momenta of those hard partons must be even higher than the detected hadrons in the FR, and the probability of hard scattering into the region near the kinematical boundary is severely suppressed [8]. Moreover, there is the additional suppression due to the fragmentation function from parton to hadron. Thus the fragmentation of partons at any p T in the FR (despite the nomenclature that has its roots in reference to the fragmentation of the incident hadron) is highly unlikely, though not impossible. 
The issue to focus on is then to examine whether there can be any hadrons produced in the FR with any significant p T . If so, then such hadrons at any p T would not be due to fragmentation and would therefore not have any associated jet structure. In contrast to the double suppression discussed above in connection with fragmentation, recombination benefits from double support from two factors. One is the additivity of the parton momenta in hadronization, thus allowing the contributing partons to be at lower x where the density of partons is higher. The other is that those partons can arise from different forward-going nucleons, thus making possible the sum of their momentum fractions to vary smoothly across x = 1, thereby amalgamating FR with TFR. These are the two attributes of the recombination process that makes it particularly relevant for forward production. Its implementation, however, relies on two extensions of what has been considered in [1], namely, the regeneration of soft partons and the transverse-momentum aspect of the problem, before we can compare our results with BRAHMS data [3]. It is useful to outline the logical connections among the different parts of this work. First of all, the degree of degradation of forward momenta through the nuclear medium is unknown. The degradation parameter κ can be determined phenomenologically if the x distributions of the forward proton and pion are known, but they are not. We have calculated the x distributions for κ = 0.6 and 0.8 as typical values serving as benchmarks. Since the normalization of the p T distribution, which is known from BRAHMS data [3], depends on the x distribution, κ can be determined by fitting the p T distribution. However, what is known about the p T distribution is only for all charged particles, not p or π separately. If there were experimental data on the p/π ratio (which is not yet available for the 62.4 GeV data that have the p T distribution), one could disentangle the species dependence. Fortunately, there exist preliminary data onp/p and K/π ratios at 62.4 GeV. We shall therefore calculate the p andp distributions and adjust κ to render the ratio Rp /p to be in the vicinity of the observed ratio. We shall then show that the p T distribution of all charged particles can be well reproduced in our calculation. We shall assume factorizability in p L and p T dependences. The two are treated in Sec. II and III, respectively. The main difference between Sec. II and the earlier work in [1] is the inclusion of the regeneration of soft partons, a subject we now address. Regeneration of Soft Partons Let us first recall some basic equations from Ref. [1], which we shall refer to as I. Equations I-(16) and I-(32) give the proton and pion distributions in x (with the p T variables integrated out) for AB collisions in the recombination model where the hadronic x is 2p L / √ s and the partonic x i are momentum fractions. The recombination functions R p and R π are given in [1]. The partons are assumed to arise from different nucleons in the projectile nucleus A and thus contribute in factorizable form of F AB , i.e., whereν is the average number of wounded nucleons that a nucleon encounters in traversing the nucleus B at a particular impact parameter, given in Eq. I-(49). The effect of momentum degradation on the parton distributions is contained in the expressions where κ is the average momentum fraction of a valon after each collision and G(y) is the valon distribution in momentum fraction y before collision [9]. 
K(z) and L ′ q (z) are the quark distributions in a valon, with K N S (z) being the valence-quark distribution and L ′ q (z) the saturated sea-quark distribution after gluon conversion. This briefly summarizes the essence of determining the x distributions of protons and pions produced in AB collisions. To describe how the above should be modified in order to take into account the regeneration of soft partons, we need to fill in the steps on how sea-quark distribution L ′ q (z) is derived. In addition to the valence quark in a valon, there are also sea quarks (q), strange quark (s) and gluons (g), whose distributions are denoted by L i (z), i = q, s, g. Their second moments satisfy the sum rule for momentum conservation [1,9] K N S (2) + 2 2L q (2) +L s (2) +L g (2) = 1 . Gluon conversion to qq changes the sea-quark distribution to whose second moment satisfies the modified version of Eq. (9) whereL g (2) is absent, i.e., From these equations we can determine Z 1 , getting . This is what we obtained and used in I to calculate the hadron distributions. The degradation effect is parametrized by κ such that 1 − κ is the fraction of momentum lost by a valon after a collision. After ν collisions, the net momentum fraction lost is 1 − κ ν . That fraction is converted to soft partons so that the new sea-quark distributions L ′′ q,s (z) satisfy a sum rule that differs from Eq. (11) by the addition of extra momentum available for conversion, i.e.,K Assuming that only the normalization is changed, we write which yields, upon using Eqs. (9) and (13), In [1] we have considered the cases: κ = 0.6 and 0.8 for b = 1 fm (0-5%) and 8 fm (30-40%). For any given b, the averageν is known [see Eq. I-(12), (13)]. The dependence of ν onν is Poissonian, as expressed in the last factor in Eq. (7). We now replace L ′ q (z) in Eqs. (6) and (8) by L ′′ q (z, κ, ν) and obtain the new distributions F q ν (x i , κ) and Fq ν (x i , κ) defined in Eqs. (5) and (6), in which the summation over ν inḠ ′ν (y ′ ) is now extended to include the ν dependence of L ′′ q (z, κ, ν). As an illustration of our results on the effects of degradation and regeneration, we show in Fig. 1 the u-quark, F ū ν (x), andū-antiquark, Fū ν (x), distributions before and after regeneration for b = 1 mb and κ = 0.6. Note that with or without regeneration all distributions are highly peaked at x = 0 because momentum degradation pushes all valons to lower momenta by a factor of κν (which forν ∼ 6 is ∼ 1/20). Regeneration increases Fū ν (x) significantly for x < 0.3, as shown by the dashed-dotted line above the dotted line. For F ū ν (x), because of the dominance of the valence quark distribution K N S (z), the increase is minimal, as the dashed line is nearly all covered by the solid line. Similar changes occur for the d andd distributions. In the same way as we have done in [1] we calculate the proton and pion distributions in x for κ = 0.6 and 0.8 and for b = 1 and 8 fm. The results are shown in Figs. 2-5. Since the regenerated soft partons do not affect the hadron distributions for x > 0.8 (remembering that the hadron x is the sum of the parton x i ), we have plotted these figures for the range 0.3 < x < 0.9. We emphasize here that the large x behavior in the TFR is not the central issue in this paper any more, as it was in [1]. In Figs. 
2-5, in addition to our present result with regeneration (solid and dashed lines) we show also our previous result obtained in [1] without regeneration for the case κ = 0.6 for the purpose of seeing the effect of regeneration. Note that the proton distributions in Figs. 2 and 3 are not affected very much by the regeneration effect, but the pion distributions in Figs. 4 and 5 are increased. At x = 0.6 the increase is roughly around a factor of 3. In [1] there was no data to compare with the calculated result on the x distributions. In particular, the degree of momentum degradation was unknown. Now, BRAHMS data show the p T dependence at η = 3.2 [3]. In order to fit the p T distributions of the hadrons produced, we must have the correct normalizations, which in turn depend on the x distributions that we have studied. Transverse Momentum Distribution Having determined the longitudinal part of the hadronic distributions above, modulo the value of κ, we now proceed to the transverse part. We have treated the degradation and regeneration problems on rather general grounds without restricting the x values and with p T integrated so that p T never appears in our consideration of the x distribution. It is then natural to make use of that result in a factorizable form for the inclusive distribution which is, of course, an assumed form that is sensible when there is negligible contribution from hard scattering. For the transverse part, V h (p T ), we follow the same type of consideration as developed in [6], where particle production at intermediate p T is shown to be dominated by the recombination process. Similar work in that respect has also been done in [10,11]. In the absence of hard scattering there are no shower partons. Without shower partons there are only thermal partons to recombine. Thus for pion production we have T T recombination, while for proton we have T T T recombination, where T represents the thermal parton distribution [6] T ( In the above equation p i T is the transverse momentum of ith parton; C i and T are two parameters as yet undetermined for the forward region in Au+Au collisions. In view of the factorization in Eq. (16) we use the term thermal in the sense of local thermal equilibrium of the partons in the co-moving frame of a fluid cell whose velocity in the cm system corresponds to the longitudinal momentum fraction x. However, the value of T can include radial flow effect. Limiting ourselves to only the transverse component, the invariant distributions of produced pion and proton due to thermal-parton recombination are where the proportionality factors that depend on the recombination functions are given in [6]. At midrapidity, thermal and chemical equilibrium led us to assume C q = Cq, and we have been able to obtain p/π ratio in good agreement with the data. Now, in FR (and in TFR) we must abandon chemical equilibrium, sinceq cannot have the same density as q, when x is large. But we do retain thermal equilibrium within each species of partons to justify Eqs. (18) and (19) for the p T dependence. We join the longitudinal and transverse parts of the problem by requiring where F q ν (x i , κ) and Fq ν (x j , κ) are the quark and antiquark distributions in their respective momentum fractions already studied in Sec. II above. The proportionality factors in the two expressions above are the same. Equation (20) connects the parton density from the study of the longitudinal motion to the thermal distribution in the transverse motion. Substituting those relations into Eqs. 
(18) and (19), and letting F q ν (x i , κ), Fq ν (x j , κ) and other multiplicative factors be absorbed in the formulas for H h (x, κ) developed in [1], we obtain for the transverse part of Eq. (16) where c π and c p are two proportionality constants to be determined by the normalization condition i.e., It follows from Eqs. (16) and (23) that we recover the invariant x-distribution without undetermined proportionality factors. The exponential factors in Eqs. (21) and (22) give the characteristic behavior of hadrons produced by the recombination of thermal partons [6,8]. Such exponential behavior is overwhelmed by power-law behavior at intermediate p T due to thermal-shower recombination when x is small and when light quarks contribute to the hadrons produced. However, when x is large, the shower partons are absent due to the suppression of hard scattering, so the exponential p T dependence becomes the prevalent behavior in the forward region. Since the data of BRAHMS [3] exhibit the p T distribution for a narrow range of η around 3.2, we can readily check whether Eqs. (16), (21) and (22) are in accord with the data. We consider only the most central collisions for which H π (x, κ) and H p (x, κ) have been calculated in the previous section for b = 1 fm. The data of the p T distribution in [3] are, however, given for average charged particle (h + + h − )/2. To be able to make comparison with that, we need information on the magnitudes of contributions from K andp. Preliminary data on the K/π ratio is ∼ 0.15 [12], and that on thep/p ratio is ∼ 0.05 [13]. We shall use the former ratio and calculate the latter. The enhancedq distribution enables us to compute Hp(x, κ) exactly as in Eq. (1), except that F q ν (x j ) in Eq. (3) is replaced by Fq ν (x j ). Thep/p ratio is constant in p T , since both p andp have the same p T dependence given in Eq. (22). The value of the ratio, however, depends on Hp(x, κ) and H p (x, κ). The value of x for both p andp is chosen to be 0.55 for reasons to be explained below when we discuss the p T distribution. Since the data onp/p are preliminary and imprecise at this point, we consider two values of κ and obtain κ = 0.76, These results bracket the observed value ofp/p at ∼ 0.05 for √ s = 62.4 GeV and η = 3.2 [13]. Note that with a 5% decrease in κ there is over 80% increase inp/p. This is a direct consequence of soft-parton regeneration, where enhancedq distribution significantly increases thep production. To learn about the effect of regeneration, we have calculated Hp(x, κ) with the soft-parton regeneration turned off, and found that for xp = 0.55 and κ = 0.76 the ratio of the corresponding Hp(x, κ) values with regeneration to that without is about 2000. In other words without regeneration Rp /p would be at the level of 2.5 × 10 −5 , which is hardly measurable. The increase ofp production due to regeneration is much more than the corresponding increase of π (shown in Fig. 4) for a good reason. It is not just a matter ofp consisting of threeq, while π having only oneq. The pion recombination function is broad in the momentum fractions x q and xq, so with x q high it is possible for xq to be low to reach the region with higher density ofq. The proton recombination function is much narrower, since the proton mass is nearly at the threshold of the three constituent quark masses. The xq values are roughly 1/3 of xp, so none of the antiquarks can have very low xq for xp ∼ 0.55, say. 
The effect of soft-parton regeneration can therefore drastically increase thep production. It is of interest to point out that the observedp/p ratio changes significantly with energy. At √ s = 200 GeV and η = 3.2, Rp /p has been found to be 0.22, which is four times larger than at √ s = 62.4 GeV [12]. It implies that the degradation effect depends sensitively on √ s. It also means that what other ratios have been measured at √ s = 200 GeV cannot be used reliably as a guide for our present study at 62.4 GeV. We are now able to relate the average charged multiplicity (h + + h − )/2 in the data to [p +p + 1.15(π + + π − )]/2 that we can calculate. Since the data on the p T distribution are taken within the narrow band bounded by η = 3.2 ± 0.2, we can determine the x value in the range of p T of interest by first identifying η with y and use If we take p T = 1.0 GeV/c, the corresponding values of x for pion is x π = 0.4 and for proton, x p = 0.54, which are well inside the FR. The slope of the p T distribution in the semilog plot is essentially determined by the value of T , as prescribed by Eqs. (21) and (22). We find it to be T = 196 MeV. In our treatment here and before, the value of T incorporates the effect of radial flow and is therefore larger than the value appropriate for local thermal temperature that is considered in other approaches to recombination [10,11]. For the values of κ that can reproduce thep/p ratio we can calculate [p +p + 1.15(π + + π − )]/2, adjusting p T in fine-tuning, and obtain the two lines in Fig. 6. The solid line is for κ = 0.76 and p T = 1.09 GeV/c, while the dashed line is for κ = 0.72 and p T = 1.07 GeV/c. They both fit the data [3] very well. The spectrum being dominated by proton does not depend on κ sensitively; its normalization does depend on the x values, which in turn depend on p T for fixed rapidity. For p T ∼ 1.08 GeV/c the corresponding x p is ∼ 0.55, which is the value we used to calculatep/p. Since the contributions from resonance decays have not been considered, our results for p T < 1 GeV/c are not reliable, and should not be taken seriously. In Fig. 7 we show the p/π ratio for the two cases considered above. Again, there is sensitive dependence on κ, although not as much as inp/p. As κ decreases, more soft partons are generated. The increase ofq enhances π production, and thus suppresses the p/π ratio. The dominance of proton production makes the charged hadron spectrum insensitive to the change in the pion sector. But the ratio manifests the pion yield directly. Currently, the data on the p/π ratio is still unavailable for √ s = 62.4 GeV. Since κ depends sensitively on √ s, the ratio may be quite different from that determined at 200 GeV [14]. To have the ratio exceeding 1 is a definitive signature of recombination at work. The verification of our results will give support to our approach of accounting for hadrons produced up to p T = 2.5 GeV/c at η ≃ 3.2 in the absence of hard scattering. We note that the two lines in Fig. 7 are nearly straight, since both p and π distributions are mainly exponential, as shown in Eqs. (21) and (22), except for the prefactor involving p T for the proton. We therefore expect the measured ratio to be essentially linearly rising in Fig. 7. But more importantly in the first place is whether the ratio exceeds 1 for p T > 1 GeV/c. Our concern should first be whether the regeneration of soft partons and the suppression of hard partons are the major aspects of physics that we have captured in this treatment. 
The precise p T dependence of R p/π , i.e., whether it is linear or not, is of secondary importance at this point. Similarly, we expect the data to show constancy of Rp /p for the range of p T studied. Conclusion We have extended the study of particle production in the FR and TFR to include the regeneration of soft partons due to momentum degradation and to consider also the determination of the p T distributions. We have shown that the data of BRAHMS for forward production can be reproduced for all p T , when suitable values for the degradation parameter are obtained by fitting thep/p ratio. The consequence is that large p/π ratio must follow. The hadronization process is recombination and the p T dependence is exponential, reflecting the thermal origin of the partons. We predict that the exponential behavior will continue to higher p T even beyond the boundary separating FR and TFR. The production of protons is far more efficient than the production of pions. That is not surprising since it is consistent with the result already obtained in [1] due to the scarcity of antiquarks in the FR and TFR. Here, the p T dependence of the proton-to-pion ratio, R p/π , is shown to be linearly rising above p T = 1 GeV/c and can become greater than 2 above p T = 2.5 GeV/c. Any model based on fragmentation, whether the transverse momentum is acquired through initial-state interaction or hard scattering, would necessarily lead to the ratio R p/π ≪ 1, by virtue of the nature of the fragmentation functions. In contrast, Rp /p would be around 1 if gluon fragmentation dominates, and be ≪ 1 if the fragmentation of valence quarks dominates, so Rp /p is not the best discriminator between recombination and fragmentation. No shower partons are involved in the recombination process because hard partons are suppressed in the forward region. That is supported by the absence of power-law behavior in the p T dependence of the data. Without hard partons there are no jets, yet there are high-p T particles, which are produced by the recombination of thermal partons only. Thus there can be no jet structure associated with any hadron at any p T . That is, for a particle (most likely proton) detected at, say, p T = 2.5 GeV/c, and treated as a trigger particle, there should be no associated particles distinguishable from the background. This is a prediction that does not depend on particle identification, and can be checked by appropriate analysis of the data at hand.
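A small numerical sketch may help connect the numbers quoted in the discussion above. The x(η, p_T) relation used here, x = (2 m_T/√s) sinh y with η identified with y, is our assumption (it is not written out explicitly in the text), but it reproduces the quoted x_π ≈ 0.4 and x_p ≈ 0.54 at p_T = 1 GeV/c, and the exponential factor uses the quoted inverse slope T = 196 MeV.

```python
# Rough numerical check (not the authors' code).  The relation used below,
# x = (2 * m_T / sqrt(s)) * sinh(y) with eta identified with y, is an assumed
# reconstruction: it is not spelled out in the text, but it reproduces the
# quoted x_pi ~ 0.4 and x_p ~ 0.54 for p_T = 1 GeV/c at eta = 3.2, sqrt(s) = 62.4 GeV.
import math

SQRT_S = 62.4             # GeV
Y = 3.2                   # rapidity, identified with pseudorapidity eta
T = 0.196                 # GeV, inverse slope quoted in the text
M_PI, M_P = 0.140, 0.938  # GeV

def x_fraction(pt, mass, y=Y, sqrt_s=SQRT_S):
    m_t = math.hypot(mass, pt)            # transverse mass
    return 2.0 * m_t * math.sinh(y) / sqrt_s

print(round(x_fraction(1.0, M_PI), 2), round(x_fraction(1.0, M_P), 2))  # ~0.40  ~0.54

# Purely exponential thermal p_T dependence, exp(-p_T / T): on a semilog plot
# the slope is -1/T ~ -5.1 (GeV/c)^-1.
for pt in (1.0, 1.5, 2.0, 2.5):
    print(pt, f"{math.exp(-pt / T):.3e}")
```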
Isomonodromic deformations in genus zero and one: algebro-geometric solutions and Schlesinger transformations

Here we review some recent developments in the theory of isomonodromic deformations on the Riemann sphere and on an elliptic curve. For both cases we show how to derive Schlesinger transformations together with their action on the tau-function, and construct classes of solutions in terms of multi-dimensional theta-functions.

Introduction

Here we review some recent developments in the theory of isomonodromic deformations on the Riemann sphere and on an elliptic curve. For both cases we show how to derive Schlesinger transformations together with their action on the tau-function, and construct classes of solutions in terms of multi-dimensional theta-functions. The theory of isomonodromic deformations of ordinary matrix differential equations of the type dΨ/dλ = A(λ)Ψ (1.1), where A(λ) is a matrix-valued meromorphic function on C, is a classical area intimately related to the matrix Riemann-Hilbert problem on the Riemann sphere. Over the last 20 years it has become a powerful tool in areas like soliton theory, statistical mechanics, the theory of random matrices, quantum field theory, etc. The main object associated with the isomonodromic deformation equations is the so-called τ-function. After the classical work of Schlesinger [1], important contributions to the development of the subject were made in the papers of Jimbo, Miwa and their collaborators in the early 80's [2,3,4,5]. There are only a few cases where the matrix Riemann-Hilbert problem may be solved explicitly in terms of known special functions. However, as was already discovered by Schlesinger himself, there exists a large class of transformations which allow one to obtain an infinite chain of new solutions starting from known ones. They share the characteristic feature that they shift the eigenvalues of the residues of the connection A(λ) in (1.1) by integer or half-integer values, thus changing the associated monodromies by sign only. These transformations - nowadays called Schlesinger transformations - were systematically studied in [4,5]. In particular, it turns out that, written in terms of the τ-functions, the superposition laws of these transformations provide a large supply of discrete integrable systems. Recently, in the papers [6,7], a class of 2 × 2 Riemann-Hilbert problems with arbitrary off-diagonal monodromy matrices was solved in terms of multidimensional theta-functions. The equations for the τ-function were integrated in the paper [6] to give an explicit expression in terms of theta-functions, where all the objects associated with the auxiliary hyperelliptic curve are defined below in Sect. 2.3. The natural question of generalizing the theory of isomonodromic deformations on the sphere to higher genus surfaces was addressed by several authors; here we mention the contributions of Okamoto [8,9] and Iwasaki [10]. For the case of the torus, two different explicit forms of the equations of isomonodromic deformations were recently proposed. In the work of the author and Samtleben [11], isomonodromic deformations were studied for a non-single-valued meromorphic connection on the torus whose "twists" (which determine the transformation of the connection A(λ) with respect to tracing along the basic cycles of the torus) vary with respect to the deformation parameters. The isomonodromic deformation equations for these connections hence contain a transcendental dependence on the dynamical variables, which makes it difficult to analyse this system in a way analogous to the Schlesinger system on the sphere.
On the other hand, Takasaki [12] considered connections on the torus whose twists remain invariant with respect to the parameters of deformation. In Takasaki's form, the equations of isomonodromic deformations have the same degree of non-linearity as the ordinary Schlesinger system. In the paper [13], transformations of Schlesinger type for elliptic isomonodromic deformations in Takasaki's form were constructed, and the action of these transformations on the elliptic version of the τ-function was derived. Here we review these results and, in addition, present the generalization of the results of the paper [6] to the elliptic case. We show how to solve a certain class of Riemann-Hilbert problems on the torus in terms of Prym theta-functions. In turn, this allows us to construct a class of algebro-geometric solutions of the elliptic Schlesinger system. In Sect. 2 we introduce the Schlesinger system on the Riemann sphere. For the 2 × 2 case we discuss elementary Schlesinger transformations together with their action on the τ-function and, following [6], derive a class of algebro-geometric solutions of the Schlesinger system in terms of theta-functions of an auxiliary hyperelliptic curve. In Sect. 3 we describe the equations of elliptic isomonodromic deformations with constant twists [12] and, following [13], construct the elliptic version of elementary Schlesinger transformations. The new result of this paper - the construction of algebro-geometric solutions of the elliptic Schlesinger system in terms of Prym theta-functions - is also presented in Sect. 3.

Schlesinger system

Consider the ordinary linear differential equation (1.1) for a matrix-valued function Ψ(λ) ∈ SL(2, C), with A(λ) = Σ_{j=1}^{N} A_j/(λ - λ_j), where the residues A_j ∈ sl(2, C) are independent of λ. Regularity at λ = ∞ requires Σ_j A_j = 0 and allows one to further impose the initial condition Ψ(λ = ∞) = I. The matrix Ψ(λ) defined in this way lives on the universal covering X of CP¹ \ {λ_1, . . . , λ_N}. Its asymptotic expansion near the singularities λ_j is given by (2.4), where T_j is a traceless diagonal matrix with eigenvalues ±t_j. The residues A_j of (2.2) are encoded in the leading term of this local expansion. Upon analytical continuation around λ = λ_j, the function Ψ(λ) in CP¹ \ {λ_1, . . . , λ_N} changes by right multiplication with monodromy matrices M_j (2.6). In the sequel we shall consider the generic case in which none of the t_j is integer or half-integer. The assumption that all monodromy matrices M_i are independent of the positions of the singularities λ_j, ∂M_i/∂λ_j = 0, is called the isomonodromy condition; it implies the dependence (2.7) of Ψ(λ) on the λ_j, as follows from (2.4) and the normalization of Ψ(λ) at ∞. Compatibility of (1.1) and (2.7) is then equivalent to the classical Schlesinger system (2.8) [1], describing the dependence of the residues A_j on the λ_i. Obviously, the eigenvalues t_j of the A_j are integrals of motion of the Schlesinger system. The functions G_j have the dependence on λ_j given in (2.9) [3], which obviously implies (2.8). To introduce the notion of the τ-function for the Schlesinger system, one notes that (2.8) is a multi-time Hamiltonian system [2] with respect to the Poisson structure (2.10) on the residues A_j (α, β, γ denoting sl(2) algebra indices with the completely antisymmetric structure constants ε_{αβγ}) and the Hamiltonians H_j; the τ-function is then defined by ∂ log τ/∂λ_j = H_j, where the compatibility of these equations follows from (2.8). This τ-function is closely related to the Fredholm determinant of a certain integral operator associated to the Riemann-Hilbert problem (see [14] for details).
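Several of the displayed formulas referred to in this section were lost in the text extraction. For the reader's convenience, the standard form of these objects in the usual Jimbo-Miwa-Ueno conventions is recalled below; this is a reconstruction, and the correspondence with the original equation numbers (2.4), (2.7), (2.8) and with the definition of τ should be checked against [2,3,6]:

\[
\frac{d\Psi}{d\lambda}=A(\lambda)\,\Psi,\qquad
A(\lambda)=\sum_{j=1}^{N}\frac{A_j}{\lambda-\lambda_j},\qquad
\Psi(\lambda)=\bigl(G_j+O(\lambda-\lambda_j)\bigr)\,(\lambda-\lambda_j)^{T_j}\,C_j\quad(\lambda\to\lambda_j),
\]
\[
\frac{\partial\Psi}{\partial\lambda_j}=-\frac{A_j}{\lambda-\lambda_j}\,\Psi,\qquad
\frac{\partial A_i}{\partial\lambda_j}=\frac{[A_j,A_i]}{\lambda_j-\lambda_i}\ \ (i\neq j),\qquad
\frac{\partial A_j}{\partial\lambda_j}=-\sum_{i\neq j}\frac{[A_i,A_j]}{\lambda_i-\lambda_j},
\]
\[
H_j=\tfrac12\,\mathrm{res}_{\lambda=\lambda_j}\,\mathrm{tr}\,A^{2}(\lambda)
   =\sum_{i\neq j}\frac{\mathrm{tr}\,(A_iA_j)}{\lambda_j-\lambda_i},\qquad
\frac{\partial}{\partial\lambda_j}\log\tau=H_j .
\]

With these conventions the Hamiltonians Poisson-commute with respect to the linear bracket mentioned above, and the closedness of Σ_j H_j dλ_j (so that τ is well defined) follows from the Schlesinger equations.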
Schlesinger transformations on the Riemann sphere Schlesinger transformations are symmetry transformations of the Schlesinger system (2.8) which map a given solution {A j ({λ i })} to another solution A j ({λ i }) with the same number and positions of poles λ j such that the related eigenvalues t j are shifted by integer or half-integer values t j → t j + n j /2 , n j ∈ Z. The monodromy matrices M j hence remain invariant or change sign under this transformation. We shall restrict ourselves to elementary Schlesinger transformations, which change only two t j 's, say, t k and t l for k = l by ±1/2. The transformed variables will be denoted by Ψ, A j , t j , etc. Without loss of generality we consider the case (2.14) Our presentation here mainly follows [15]. For the transformed function Ψ we make the ansatz where the matrices S ± do not depend on λ and are uniquely determined by [15]: By G α j here we denote the α-th column of the matrix G j (α = 1, 2). Combining the columns G 1 k and G 1 l into a 2 × 2 matrix we can deduce from (2.17) the following simple formula for S ± : with projection matrices It is easy to check using the local expansion of Ψ at the singularities λ j (2.4) and the defining relations for S ± (2.19) that the transformed function Ψ at λ j has a local expansion of the form (2.4) with the same matrices C j and the desired transformation (2.14) of the t j . The matrices G j change to new matrices G j . Thus, Ψ satisfies the system where the functions A j ({λ i }) build a new solution of the Schlesinger system (2.8). On the level of the residues A j , the form of the Schlesinger transformation is not very transparent; however, it turns out that the associated τ -function transforms in a rather simple way. Namely, for Ψ we find (2.21) For example, the Hamiltonians H j for j = k, l transform as follows: according to (2.9). Hence the transformed τ -function τ is given by τ = f (λ k , λ l ) det G · τ with some function f (λ k , λ l ) to be determined from the transformation of H k , H l . Taking into account the transformation of Hamiltonians H k and H l following from (2.21) we find the following formula describing the action of elementary Schlesinger transformation (2.14) on the τ -function: Other elementary Schlesinger transformations like may be obtained in a similar way by building the matrix G from G 1 k and G 2 l instead of (2.18), etc.. Moreover, all such transformations with different k and l may be superposed to get the general Schlesinger transformation which simultaneously shifts an arbitrary number of the t j by some integer or half-integer constants. These general transformations were in detail studied in [3,4,5]. Algebro-geometric solutions of Schlesinger system Let us take N = 2g +2 and introduce the hyperelliptic curve L of genus g by the equation with branch cuts [λ 2j+1 , λ 2j+2 ]. Let us choose the canonical basis of cycles (a j , b j ), j = 1, . . . , g such that the cycle a j encircles the branch cut [λ 2j+1 , λ 2j+2 ]. Cycle b j starts from one bank of branch cut [λ 1 , λ 2 ], goes to the second sheet through he branch cut [λ 2j+1 , λ 2j+2 ], and comes back to another bank of the branch cut [λ 1 , λ 2 ]. The dual basis of holomorphic 1-forms on L are given by λ k−1 dλ w , k = 1, . . . , g. Let us introduce two g × g matrices of a-and b-periods of these 1-forms: The holomorphic 1-forms satisfy the normalization conditions a j dU k = δ jk . 
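The period matrices just introduced can easily be explored numerically. The following sketch is illustrative only: the branch points are chosen arbitrarily, the branch of w and the orientation of the cycles are ignored (so only the magnitudes of the a-periods of the unnormalized differentials λ^(k-1) dλ/w are computed), and no attempt is made to reproduce the b-periods, whose contours pass through two cuts.

import numpy as np
from scipy.integrate import quad

# Hyperelliptic curve w^2 = prod_j (lambda - lambda_j) with 2g + 2 real branch points;
# the cycle a_j encircles the cut [lambda_{2j+1}, lambda_{2j+2}] as in the text.
lam = np.array([0.0, 1.0, 2.0, 3.5, 5.0, 6.0])     # N = 6, genus g = 2 (arbitrary choice)
g = (len(lam) - 2) // 2

def a_period_abs(j, k):
    """|oint_{a_j} lambda^(k-1) dlambda / w| for j = 1..g, k = 1..g (phases ignored)."""
    a, b = lam[2 * j], lam[2 * j + 1]              # endpoints of the j-th cut
    rest = np.delete(lam, [2 * j, 2 * j + 1])      # remaining branch points
    def integrand(t):                              # substitution lambda = a + (b - a) sin^2 t
        x = a + (b - a) * np.sin(t) ** 2           # removes the 1/sqrt endpoint singularities
        return 2.0 * x ** (k - 1) / np.sqrt(np.prod(np.abs(x - rest)))
    val, _ = quad(integrand, 0.0, np.pi / 2)
    return 2.0 * val                               # the two banks of the cut contribute equally

for j in range(1, g + 1):
    print("a_%d:" % j, [round(a_period_abs(j, k), 6) for k in range(1, g + 1)])

Normalizing the differentials as in the text, so that the a-periods of the dU_k equal δ_jk, then amounts to inverting this matrix of a-periods.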
The matrices A and B define the symmetric g × g matrix of b-periods of the curve L: Let us cut the curve L along all basic cycles to get the fundamental polygon L. For any meromorphic 1-form dW on L we can define the integral P Q dW , where the integration contour lies inside of L (if dW is meromorphic, the value of this integral might also depend on the choice of integration contour inside of L). The vector of Riemann constants corresponding to our choice of the initial point of this map is given by the formula (see [16] The characteristic with components p ∈ C g /2C g , q ∈ C g /2C g is called half-integer characteristic: the half-integer characteristics are in one-to-one correspondence with the half-periods Bp + q. To any half-integer characteristic we can assign parity which by definition coincides with the parity of the scalar product 4 p, q . The odd characteristics which will be of importance for us in the sequel correspond to any given subset S = {λ i 1 , . . . , λ i g−1 } of g − 1 arbitrary non-coinciding branch points. The odd half-period associated to the subset S is given by where dU = (dU 1 , . . . , dU g ) t . Denote by Ω ⊂ C the neighbourhood of the infinite point λ = ∞, such that Ω does not overlap with projections of all basic cycles on λ-plane. Let the 2 × 2 matrix-valued function Φ(λ) be defined in the domain Ω of the first sheet of L by the following formula, where functions ϕ and ψ are defined in the fundamental polygon L by the formulas: with two arbitrary (possibly {λ j }-dependent) points λ ϕ , λ ψ ∈ L and arbitrary constant complex characteristic p q ; * is the involution on L interchanging the sheets. An odd theta characteristic p S q S corresponds to an arbitrary subset S of g − 1 branch points via Eq. (2.26). Since domain Ω does not overlap with projections of all basic cycles of L on λ-plane, domain Ω * does not overlap with the boundary of L, and functions ϕ(λ * ) and ψ(λ * ) in (2.27) are uniquely defined by (2.28), (2.29) for λ ∈ Ω. Now choose some sheet of the universal covering X, define new function Ψ(λ) in subset Ω of this sheet by the formula and extend on the rest of X by analytical continuation. Function Ψ(λ) (2.30) transforms as follows with respect to the tracing around basic cycles of L (by T a j and T b j we denote corresponding operators of analytical continuation): The following statement proved in the paper [6] claims that function Ψ satisfies condition of isomonodromy, and, therefore, provides a class of solutions of Schlesinger system: Theorem 2.1 Let p, q ∈ C g be an arbitrary set of 2g constants such that characteristic p q is not half-integer. Then: 1. Function Ψ(Q ∈ X) defined by (2.30) is independent of λ ϕ and λ ψ , and satisfies the linear system (2.2) with 31) which in turn solve the Schlesinger system (2.8). 2. Monodromies (2.6) of Ψ(λ) around points λ j are given by where constants m j may be expressed in terms of p and q as follows: 3. The τ -function, corresponding to solution (2.5) of the Schlesinger system, has the following form: (a, b). A (naive) straightforward generalization of the idea of isomonodromic deformations from the complex plane to the torus E runs into difficulties related to the absence of meromorphic functions on the torus with just one simple pole. An independent variation of the simple poles of a meromorphic connection A on the torus preserving the monodromies around the singularities and basic cycles is impossible for the following simple reason. 
Existence of such a deformation would imply a version of (2.7) with the function A j λ−λ j on the r.h.s. being substituted by a meromorphic function with only one simple pole on the torus, which gives rise to the contradiction. Therefore, one of the underlying assumptions has to be relaxed. E.g. one may consider the case where not all the poles of the connection A are varied independently. Another possibility is the assumption that some of the poles of A are of order higher then one [9]. A third alternative which we shall consider here, is to relax the condition of single-valuedness of the connection A on E and assume that A has "twists" with respect to analytical continuation along the basic cycles a and b, i.e. where the matrices Q, R do not depend on λ. By a gauge transformation of the form A → SAS −1 + dSS −1 with S holomorphic but possibly multi-valued, one may bring the connection into a form where Q = I and R = e κσ 3 , where σ α denote the Pauli matrices: The equations of isomonodromic deformations with this choice of the twist were considered in [11] where the multi-valuedness of A had a natural origin in the holomorphic gauge fixing of Chern-Simons theory on the punctured torus. The resulting equations however are rather complicated in comparison with the Schlesinger system on the sphere. This is due to the fact that the twist κ itself becomes a dynamical variable -i.e. changes under isomonodromic deformations -and in generic situation has a highly non-trivial λ jdependence. Therefore, instead of being bilinear with respect to the dynamical variables, this Schlesinger system on the torus becomes highly transcendental. An alternative form of the elliptic Schlesinger system was proposed by Takasaki [12] who considered the restriction Q = σ 3 , R = σ 1 , related to the classical limit of Etingof's elliptic version of the Knizhnik-Zamolodchikov-Bernard system on the torus [17]. This choice of fixing the twists turns out to be compatible with the isomonodromic deformations equations, therefore essentially simplifying the dynamics as compared to [11]. It results into studying isomonodromic deformations of the system 34) with λ ∈ C. Functions w α on the torus are defined in Appendix (see (A.87)). The connection A(λ) obviously has only simple poles on E and the following twist properties, cf. (A.88) Since the residues of all w α at λ = 0 coincide, the residue of A(λ) at λ j is As in the case of the Riemann sphere, the function Ψ has regular singularities at λ = λ j with the same local properties (2.4)-(2.6). The twist properties of Ψ take the form with monodromy matrices M a , M b along the basic cycles of the torus. Moreover, as in the case of Riemann sphere, Ψ(λ) has monodromies M j around the singularities λ j . The isomonodromy condition on the torus requires that all monodromies M j , M a and M b are independent of the positions of singularities λ j and the module µ of the torus. As on the Riemann sphere this implies that the function ∂Ψ/∂λ j Ψ −1 has the only simple pole at λ = λ j with residue −A j . In addition, it has the following twist properties To derive the equation with respect to module µ we observe that ∂Ψ/∂µ Ψ −1 is holomorphic at λ = λ j (but not at λ = λ j +µ ) and has twist properties Taking into account the periodicity properties of the functions Z α (A.90), this hence implies The compatibility conditions of the equations (3.34), (3.37) and (3.38) then yield the λ i and µ dependence of the residues A j . 
The result is summarized in the following Theorem 3.1 [12] Isomonodromic deformations of the system (3.34) are described by the following elliptic version of the Schlesinger system: The corresponding equations for the matrices G j from (2.4) take a form analogous to the equations (2.9) on the Riemann sphere: The system (3.39) admits a multi-time Hamiltonian formulation with respect to the Poisson structure (2.10) on the residues The Hamiltonians describing deformation with respect to the variables λ i and to the module µ of the torus are respectively given by The representation of H µ as contour integral along the basic a-cycle in (3.43) was derived in [13]. All Hamiltonians Poisson-commute as a direct consequence of (3.41). The τ -function of the elliptic Schlesinger system (3.39) is defined as generating function τ ({λ j }, µ) of the Hamiltonians it is uniquely determined up to an arbitrary (µ, {λ j })-independent multiplicative constant. Compatibility of equations (3.44) is a corollary of the elliptic Schlesinger system. Schlesinger transformations for elliptic isomonodromic deformations The natural generalization of the notion of Schlesinger transformations on the Riemann sphere to the elliptic case was given in the paper [13]. Starting from any solution of the elliptic Schlesinger system (3.39) with associated function Ψ satisfying (3.34) and (3.36) we construct a new solution A j , Ψ with eigenvalues t j which differ from the t j by integer or half-integer values. In particular, we will consider the elliptic analog of the elementary Schlesinger transformation (2.14) on the Riemann sphere. The following construction was inspired by the papers [18], [19]. As an elliptic analog of the function F (λ) from (2.16) we shall choose the following ansatz where the functions J α (λ j , µ) depend on G k and G l and will be defined below. The elementary elliptic Schlesinger transformation is described by the following where F (λ) is given by formula (3.45) and λ-independent coefficients J α are defined by as above we denote by G the matrix (2.18) containing the first columns of the matrices G k and G l . Then the function Ψ(λ) satisfies the equations (3.34), (3.37), (3.38) and the twist conditions (3.36) with the transformed functions In turn, the functions A j satisfy the elliptic Schlesinger system (3.39). For the eigenvalues t j we have Proof. The proper local behaviour of function Ψ at singularities λ j is ensured by the relations which in complete analogy to (2.17) describe annihilation of the vectors G 1 k and G 1 l by the matrices f (λ k ) and f (λ l ), respectively. Obviously, equations (3.49) are a consequence of (3.47). Similarly to the case of the sphere, it is then easy to verify that (3.49) provide the required asymptotical expansions (2.4) for the function Ψ with parameters G j , C j and t j . Concerning the global behavior of Ψ we note that the prefactor (det f (λ)) −1/2 in (3.45) provides the condition det Ψ = 1 and kills the simple pole of f (λ) at λ = (λ k + λ l )/2. Therefore, the only singularities of F (λ) on E are the zeros of det f (λ). Since det f (λ) has only one pole -this is the second order pole at λ = (λ k +λ l )/2 -it must have also two zeros on E whose sum according to Abel's theorem equals λ k + λ l . According to (3.49) these are precisely λ k and λ l . It remains to check that Ψ satisfies conditions (3.36) with the same matrices M a and M b . 
This follows from the twist properties which in turn follow from (3.45) and the periodicity properties (A.88) of the functions w j (λ). ✷ As a result of rather long calculations one can prove the elliptic analog of formula (2.22) describing the transformation of the τ -function under the action of elliptic Schlesinger transformations. Theorem 3.3 [13] The τ -function τ corresponding to the Schlesinger-transformed solution A j (3.48) of the elliptic Schlesinger system is related to the τ -function corresponding to the solution A j as follows where G is the matrix (2.18) containing the first columns of the matrices G k , G l , J ≡ The natural open problem arising here is to construct elliptic generalizations of integrable chains associated to ordinary Schlesinger system [4,5]. In the next section we shall present the extension of construction of algebro-geometric solutions of Schlesinger system to the case of elliptic isomonodromic deformations. Algebro-geometric solutions of elliptic Schlesinger system To construct theta-functional solutions of elliptic Schlesinger system (3.39) let us assume that N = 2g and introduce two-sheet covering L of torus E with branch points λ 1 , . . . λ 2g . Genus of L equals g + 1. Denote by * the involution of L interchanging the sheets of the covering. Let us choose the canonical basis of cycles on L in such a way (see figure 6.2 on p.215 of [22]) that The basic holomorphic differentials dU 1 , . . . , dU g+1 on L normalized by a j dU k = δ jk , j, k = 1, . . . , g + 1 transform as follows under the action of involution * : dU 1 (P * ) = −dU g+1 (P ) dU j (P * ) = −dU j (P ) , j = 2, . . . , g (3.51) Let us introduce the following Prym differentials dV j , j = 1, . . . , g: and symmetric g × g matrix of their b-periods: which has positively-defined imaginary part. Remark 3.1 Differentials dV j and matrix Π were first introduced by Bobenko [22] in the studies of classical tops admitting elliptic Lax representation. These objects are related to standard Prym differentials dW j and standard Prym matrix Π P rym as follows: Denote by L the universal covering of curve L. p, q ∈ C are arbitrary constant vectors such that p 1 = 0. Then the function Φ is holomorphic and invertible on L outside of branch points λ j and transforms as follows with respect to analytical continuation along the basic cycles of L: for j = 2, . . . , g. Proof. Taking into account the definition of Prym differentials dV j (3.52) we see that dV + e 1 , j = 2, . . . , g Substituting these expressions into the formulas for ϕ and ψ and taking into account behaviour of 1-forms dU j under the action of involution * (3.51) we derive the following transformation properties of functions ϕ and ψ: and Transformation laws of ψ along cycles a j , b j for j > 1 coincide with transformation laws of ϕ. Combining the above relations into matrix form, we come to the transformation laws (3.56) -(3.59). It remains to verify non-degeneracy of Φ(P ) outside of singularities λ j . We know that det Φ(P ) has at least simple zeros at the points λ j (at these points the columns of Φ(P ) are proportional to each other). To check that det Φ(P ) does not vanish on L outside of λ j let us first observe that T a j [det Φ(P )] = det Φ(P ) , j = 2, . . . , g , (3.70) Now let us calculate the integral Taking into account (3.70) and (3.71) as well as normalization of the basic integrals dU j we see that the first two terms of the r.h.s. of this expression equal 2πi, whereas each term in the sum equals 4πi. 
Altogether, we get 4πig, and therefore det Φ(P) has in L exactly 2g zeros, which coincide with the λ_j. Let us also choose some domain Ω ⊂ E which does not overlap with the projections of all basic cycles on E. Then the domain Ω* does not overlap with the boundary of L, and the functions ϕ*(P) and ψ*(P) are uniquely defined in L by (3.55). Let us now choose some sheet of the universal covering X of the torus E with punctures {λ_1, . . . , λ_2g}, define a new function Ψ(λ) in the subset Ω of this sheet by formula (3.73), and then extend Ψ(λ) to the rest of X by analytical continuation. The following theorem shows that the function Ψ satisfies the conditions of isomonodromy and therefore generates a class of solutions of the elliptic Schlesinger system (3.39): Theorem 3.5 The function Ψ(λ ∈ X) defined by formulas (3.54), (3.55) and (3.73) is holomorphic and invertible on X outside of the points λ_j, j = 1, . . . , 2g. Moreover, it transforms according to (3.74) with respect to analytical continuation along the basic cycles of E, and by the corresponding monodromies around closed cycles surrounding the points λ_j, where T_a, T_b and T_{λ_j} denote the corresponding operators of analytical continuation, for l = 2, . . . , g. Proof. Holomorphy and invertibility of the function Ψ follow from the same statements concerning the function Φ (3.54). Relations (3.74) follow directly from (3.56), (3.57). To calculate m_j let us observe that, according to (3.56)-(3.59), the monodromies of Ψ are expressed in terms of the constants p and q.

Outlook

Let us mention several applications of the mathematical results described above. Recently, a close relationship was established between the Schlesinger system and the Ernst equation of general relativity [23], which allows one to apply to the Ernst equation all the results of Sect. 2. In particular, one can obtain in this way a class of algebro-geometric solutions of the Ernst equation [24], which turns out to coincide with the class of algebro-geometric solutions of the Ernst equation known since 1988 [25]. It is rather satisfying that a certain subclass of genus-2 algebro-geometric solutions of the Ernst equation has recently found a realistic physical application in the problem of describing different kinds of dust discs [26,27]. Another application of the construction of Sect. 2 is the theory of SU(2)-invariant gravitational instantons [28], where it allows one to simplify considerably the results of Hitchin [29]. So far we do not know of physical applications of the elliptic version of the Schlesinger system, and all the results of Sect. 3 have at the moment a purely mathematical significance; however, we strongly believe that such applications will be found in the near future.
Interactive comment on “ Tropospheric ozone variability in the tropics from ENSO to MJO and shorter timescales ” Abstract. Aura OMI and MLS measurements are combined to produce daily maps of tropospheric ozone beginning October 2004. We show that El Nino-Southern Oscillation (ENSO) related inter-annual change in tropospheric ozone in the tropics is small in relation to combined intra-seasonal/Madden–Julian Oscillation (MJO) and shorter timescale variability by a factor of ~ 3–10 (largest in the Atlantic). Outgoing longwave radiation (OLR), taken as a proxy for convection, suggests that convection is a dominant driver of large-scale variability of tropospheric ozone in the Pacific from inter-annual (e.g., ENSO) to weekly periods. We compare tropospheric ozone and OLR satellite observations with two simulations: (1) the Goddard Earth Observing System (GEOS) chemistry-climate model (CCM) that uses observed sea surface temperatures and is otherwise free-running, and (2) the NASA Global Modeling Initiative (GMI) chemical transport model (CTM) that is driven by Modern Era Retrospective-Analysis for Research and Applications (MERRA) analyses. It is shown that the CTM-simulated ozone accurately matches measurements for timescales from ENSO to intra-seasonal/MJO and even 1–2-week periods. The CCM simulation reproduces ENSO variability but not shorter timescales. These analyses suggest that a model used to delineate temporal and/or spatial properties of tropospheric ozone and convection in the tropics must reproduce both ENSO and non-ENSO variability. Introduction The El Niño-Southern Oscillation (ENSO) and its effects on the atmosphere and ocean have been extensively studied and documented.Trenberth (1997) provides several key references with an overview description and historical account of ENSO.The terminology, ENSO, is understood to consist of El Niño (warmer than average ocean temperatures in the tropical eastern Pacific -i.e., warm phase) typically followed by La Niña (cooler than average ocean temperatures in the tropical eastern Pacific -i.e., cool phase).ENSO events have ∼ 2-7-year timescales and produce planetary-scale changes in tropical sea surface temperature (SST), convection, and winds.Peak activity for ENSO occurs generally centered about Northern Hemisphere autumn to mid-winter months (e.g., largely October-January). J. R. Ziemke et al.: Tropospheric ozone variability in the tropics cal convection from ENSO events in the tropical Pacific induces a decrease (increase) in tropospheric column ozone.Although changes in convection are fundamentally dynamical there are also ENSO-related changes in composition that affect ozone precursors, such as increases in emissions from biomass burning over Indonesia due to suppressed rainfall during El Niño.There can also be long-range transport effects on tropospheric ozone at northern mid-latitudes related to ENSO including induced trends over long records.Lin et al. 
(2014) studied the effects of ENSO/Pacific Decadal Oscillation (PDO) on tropospheric ozone at Mauna Loa Observatory (19.5 • N, 156.5 • W, altitude 3.4 km).By combining 40 years of surface ozone measurements with a set of chemistry-climate model simulations they found that the flow of ozone-rich air from Eurasia towards Hawaii during spring weakened in the 2000s as a result of La Niña-like decadal cooling in the tropical Pacific.This circulation-driven ozone decrease offsets the ozone increase that otherwise would have occurred at Mauna Loa in spring due to rising Asian anthropogenic emissions.Ziemke et al. (2010) produced a monthly tropospheric ozone ENSO index (OEI) over a multi-decadal time record (beginning 1979) by differencing satellite-measured column ozone in the tropics between the eastern and western Pacific.They noted that the OEI could be used as a diagnostic test for modeled ozone including tropospheric ozone sensitivity relating to changes in SSTs.Oman et al. (2011) found excellent agreement between the measured OEI with the OEI produced by the Goddard Earth Observing System (GEOS) free-running chemistry-climate model (CCM) with observed SSTs over a 25-year period.This demonstrated an appropriate response of the CCM meteorology to the ENSO signature of the imposed SSTs; the fidelity of the ozone response to the induced circulation and photochemical changes included realistic horizontal and vertical gradients in tropospheric ozone. The tropical atmosphere and ocean exhibits intra-seasonal and shorter timescale variability with periods much shorter than ENSO from days or weeks to several months.The leading source of intra-seasonal variability is related to the Madden-Julian Oscillation (MJO) with characteristic timescales of about 1-2 months (Madden andJulian, 1971, 1994).The MJO is identified as large-scale circulation cells in equatorial latitudes that propagate eastward from the Indian Ocean to at least the central Pacific.In the original discovery of the MJO, Madden and Julian (1971) described the oscillation as a 40-50-day variation in surface pressure, zonal winds and temperature at different levels of the troposphere.Madden and Julian (1994) note that zonal wind anomalies in the upper troposphere associated with the MJO sometimes traverse the entire circumference of the Earth.The strongest variability for the MJO occurs around northern wintertime months when the intensity of ENSO events is largest.The MJO modulates regional monsoon (in particular the Australian and Indian monsoon) which impacts particulate mat-ter (e.g., Ragsdale et al., 2013) and surface ozone (e.g., Barrett, et al., 2012), both of which contribute to poor air quality in the tropics and/or subtropics.The ocean-atmosphere coupling associated with the MJO may also affect the duration and onset of ENSO (e.g., Hoell et al., 2014, and references therein).The MJO alters stratospheric circulation including the strength of the Northern Hemisphere polar vortex and timing of stratospheric sudden warming events (e.g., Garfinkel et al., 2012, 2014, andreferences therein) and also modulates tropical Kelvin waves (Guo et al., 2014).Wheeler and Hendon (2004) quantify the MJO using two leading derived Empirical Orthogonal Functions (EOFs) of combined tropospheric zonal winds and Outgoing Longwave Radiation (OLR) in the tropics.Their method quantifies the MJO into eight separate identifiable temporal phases beginning from onset extending through the full 1-2 month cycle. 
Using a chemical transport model (CTM) and measurements from the Aura Tropospheric Emission Spectrometer (TES), Sun et al. (2014) indicated that the MJO in tropospheric ozone in tropical latitudes may be locally up to 47 % of total variability.Their estimate is comparable to the ∼ 5-10 Dobson units (DU; 1 DU = 2.69 × 10 20 molecules m −2 ) MJO variability (out of ∼ 15-20 DU background ozone) in troposphere ozone in the tropical Pacific by Ziemke and Chandra (2003).One of the results of Sun et al. (2014) was that large-scale advection within the CTM explains most of the simulated changes in ozone relating to the MJO. In addition to ENSO and intra-seasonal/MJO changes, Dunkerton and Crum (1995) showed that there is considerable convective variability in the tropics with shorter timescales of 2-15 days.Dunkerton and Crum (1995) used daily outgoing longwave radiation (OLR) in the tropics to relate 2-15-day disturbances with intra-seasonal oscillations/MJO signals and found distinction between them as well as moderate interaction between them during convectively active phases of the intra-seasonal oscillations.A long existing problem with GCM/CCM simulations is difficult in producing a realistic MJO in the atmosphere (e.g., Lin et al., 2006;Hung et al., 2013;Del Genio et al., 2015;and references therein).Efforts have demonstrated that there is a causal link between how well gross moist stability and vertical advection is treated in models with how well those models reproduce a variation similar to the MJO (e.g., Benedict, et al., 2014, and references therein). The purpose of our study is to characterize the variability of tropical tropospheric ozone for timescales ranging from ENSO to MJO and shorter time periods in relation to tropical convection and atmospheric model simulations of ozone.We compare observed tropospheric ozone with ozone simulated from two NASA Goddard models of atmospheric composition, one being a CCM forced by observed monthly SSTs and the other a CTM driven by meteorological reanalyses.Section 2 discusses data and models for our analysis while Sect. 3 describes the impact of ENSO-vs.non-ENSOrelated changes in tropospheric ozone in relation with con-vective forcing.Section 4 describes derivation of a useful tropospheric ozone diagnostic from OMI/MLS while Sect. 5 shows some of its applications as applied to model ozone and OLR measurements.Finally, Sect 6 provides a summary. Data and models Daily measurements of tropospheric column ozone (TCO) in tropical latitudes are calculated using the OMI/MLS residual method of Ziemke et al. (2006).This method subtracts MLS stratospheric column ozone from OMI total column ozone for near clear-sky scenes (i.e., radiative cloud fractions < 30 %).Ziemke et al. (2014) evaluated three other OMI/MLS TCO products and concluded that the Global Modeling and Assimilation Office (GMAO) data assimilation product was best to use overall when considering all factors including global coverage and ozone profile information.However, Fig. 12 of Ziemke et al. 
(2014) showed that the assimilation product when limited to tropical latitudes had zonal variability ∼ 10-15 DU in stratospheric column ozone which was considerably larger than direct satellite measurements that typically have zonal variability of only a few DU.In addition, this larger zonal variability in stratospheric column ozone coincided with a reduced zonal wave-one pattern of TCO with assimilation, also considered inconsistent with previous TCO measurements.Our main reason for using the product of Ziemke et al. (2006) for the tropics stems from being independent of MERRA/GEOS-5 analyses including winds used by both the assimilation and trajectory ozone products.There are known errors with tropical stratospheric winds in the analyses caused by spurious transport (Tan et al., 2004).Although Tan et al. (2004) diagnosed an older assimilation system, comparisons of MLS ozone and N 2 O with our CTM simulations using MERRA meteorology show that spurious transport in the tropical and subtropical lower stratosphere is still a problem.These errors in stratospheric winds with assimilation produce errors in the derived ozone profiles including TCO. The OMI/MLS residual product combines MLS v3.3 ozone profiles with OMI version 8.5 total ozone measurements.Data quality and description of the MLS v3.3 ozone profiles are discussed by Livesey et al. (2011).Description and access to the OMI data may be obtained from the NASA webpage http://disc.sci.gsfc.nasa.gov/Aura/data-holdings/OMI.Horizontal gridding for TCO is 1 • latitude × 1.25 • longitude.The OMI/MLS residual ozone product uses WMO NCEP 2 K km −1 lapse-rate tropopause pressure to separate tropospheric from stratospheric ozone.Our study also uses OLR daily measurements for 2004-2012 at 2.5 • × 2.5 • horizontal gridding obtained from the National Oceanic and Atmospheric Administration (NOAA) webpage http://www.esrl.noaa.gov/psd/data/gridded/.OLR is the amount of radiative flux (units W m −2 ) re-emitted back to space in the 3.55-3.91µm wavelength band. The Global Modeling Initiative (GMI) CTM hindcast simulation includes a chemical mechanism suitable for the troposphere and stratosphere (Duncan et al., 2008;Strahan et al., 2007).Although the CTM simulation extends from 1979 through 2012 we include a limited period of October 2004 through December 2012 to coincide with the OMI/MLS measurements.The emissions of trace gases and aerosol fields used in the CTM simulations are described by Duncan et al. (2008), however, anthropogenic emissions have been updated and include year-specific scaling factors (van Donkelaar et al., 2008).Anthropogenic and biomass global emissions include surface emissions from industry/fossil fuel, biomass burning, biofuel combustions, and contributions from aircraft.Biomass burning emissions in the CTM are from van der Werf et al. ( 2010) and are extended through year 2010.Observationally-based biomass burning emissions are used in the CTM through year 2010 with the 2010 emissions repeated for 2011-2012.Most of the global emissions such as fossil fuel, biofuel, and biomass burning emissions for the CTM represent monthly means; however, lightning NO x and biogenic emissions (such as isoprene) are calculated online within the model and can vary daily.More detailed description of emissions for this simulation is given by Strode et al. (2015).The CTM meteorological fields are taken from Modern-Era Retrospective Analysis for Research and Applications (MERRA) (Rienecker et al., 2011). The CCM is described by Oman et al. 
(2013).This CCM is forced by monthly SSTs and specified boundary conditions and fluxes of important greenhouse gases including carbon dioxide, methane, and nitrous oxide.The CCM uses observed monthly mean SSTs over the 1960-2012 time record (Rayner et al., 2003).All global emissions for the CCM including biomass burning and non-biomass burning/anthropogenic were chosen to closely match emissions for the CTM.Lightning NO x for the CCM varies daily as with the CTM.We again refer to Strode et al. (2015, and references therein) for quantitative details regarding emissions.Both the CCM and CTM tropopause pressure use the WMO lapse-rate definition. ENSO vs. non-ENSO timescale changes in tropical tropospheric ozone Variability of tropospheric ozone from OMI/MLS and the CTM, shown in Fig. 1 for ENSO, non-ENSO, and intraseasonal oscillation (ISO) timescale changes, is derived from calculated standard deviation (see figure caption).Also plotted in Fig. 1 are corresponding calculations for OLR, scaled by a factor of 0.18 for plotting with ozone.Figure 1 is comprised of two sets of calculations.Figure 1a corresponds to variability calculated using original daily time series while Fig. 1b is the same but with all daily time series filtered to include only extreme ENSO events.For the Niño 3.4 index, ENSO events as defined by NOAA occur when Niño 3.4 is either greater than 0.5 (El Niño) or less than −0.5 (La Niña) for five consecutive months.Extreme ENSO events in Fig. 1b were subjectively chosen here to correspond with Niño 3.4 index being either greater than 1.0 or less than −1.0.ENSO signals (bottom curves) in Fig. 1 were extracted at each grid point from original deseasonalized ozone and OLR time series using the linear regression T (t) = β× Nino34(t) + ε(t), where T is original gridded time series, t is day index, β is a derived constant, Nino34(t) is the Niño 3.4 ENSO index, and ε(t) is the residual (i.e., ε(t) is identically the non-ENSO component of the time series).A daily Nino34(t) time series was generated from the NOAA monthly record using linear extrapolation.All line curves in Fig. 1 In Fig. 1a ENSO contributes a relatively small amount to the total daily variability of tropospheric ozone in the tropics at all longitudes.Figure 1b shows that this is the case even when only extreme ENSO events are considered, although for extreme events the ENSO variability increases in the Pacific relative to shorter timescales.In Fig. 1a and b ENSOrelated change in tropospheric ozone and OLR (bottom curves) is generally smaller than either non-ENSO change (top curves) or ISO timescale changes (middle curves).The CTM reproduces all of the OMI/MLS tropospheric ozone zonal patterns for all three timescale scenarios.Most of the non-ENSO related changes involve the intra-seasonal/MJO and shorter time period changes.These changes are larger than ENSO by a factor of ∼ 3-4 in the Pacific and a factor of 10 or more in the Atlantic in both Fig. 1a and b. The daily ozone dipole index (ODI) We calculate a quantity that we refer to as the ozone dipole index (ODI).This differs from the monthly OEI used by Ziemke et al. 
(2010) in that it is calculated using daily measurements rather than monthly means and does not include the final 3-month running average that is applied to the OEI.We use this ODI as a diagnostic test for evaluating OMI/MLS tropospheric ozone with other atmospheric parameters, including satellite-measured OLR and similar troposphere ozone derived from models.The ODI is the deseasonalized difference of western minus eastern Pacific TCO time series each day over the Aura record.Deseasonalization of time series is explained in Appendix A. The ODI calculation involves first averaging TCO from OMI/MLS each day in the tropics over the broad eastern and western Pacific regions (i.e., 15 • S-15 • N, 110-180 • W and 15 • S-15 • N, 70-140 • E, respectively) followed by computing the difference of western minus eastern Pacific.As with the monthly OEI, this differencing removes measurement offsets or drifts with time that would be common to both Pacific time series.We also calculate a daily dipole index time series for National Oceanic and Atmospheric Administration (NOAA) OLR measurements in the exact same manner as calculation of the ODI for investigating connections between tropospheric ozone and convection in the Pacific. Statistical coherence and phase of coherence are calculated between the measured ODI and the ODIs derived from the CTM and CCM.These statistics are also calculated between the measured ODI and the OLR daily dipole series.Coherence, a normalized statistic with values lying between 0.0 and 1.0, provides evaluation of statistical connection between two time series as an explicit function of frequency.We refer the reader to Appendix A for details regarding these calculations. Comparisons between measured and modeled ODI In Fig. 2a we compare time series of measured ODI (red curve) and CTM ODI (dotted blue curve).The two time series appear remarkably similar for timescales varying from low-frequency ENSO to 1-2-month periods (e.g., MJO) and even shorter.Figure 2b is the same as Fig. 2a but for the CCM instead of CTM.The CCM in Fig. 2b reproduces ENSO variability and appears to produce variability at shorter timescales similar to the CTM; however, the evaluation of the models requires more than just visual inspection of time series. We calculate coherence and coherence phase as functions of frequency to establish a statistical connection between measured and simulated ODIs on varying timescales.The coherence and coherence phase calculated between the OMI/MLS and CTM ODIs are shown in Fig. 3a where square of coherence is shown in the top panel with coherence phase on the bottom.Time periods in days are printed along the horizontal frequency axes for all panels in Fig. 3. If a simulated ODI exactly matched that obtained from OMI/MLS then the squared coherence would be 1.0 and the phase shift would be 0.0 over the entire frequency spectrum.For the CTM in Fig. 3a, statistical significance of squared coherence exceeds the 99 % level for values greater than 0.684.The CTM squared coherence exceeds this value for a broad range of timescales from ENSO (at far left in panel) to the MJO (30-60 days), down to timescales as short as 7-14 days.The excellent agreement in Fig. 3a over broad timescales attests to the realism of the input meteorology and computed photochemistry within the CTM. Figure 3b shows similar calculations for the CCM.The squared coherence in Fig. 3b (top) is statistically significant for ENSO but not shorter timescales.In addition the phase between OMI/MLS and the CCM in Fig. 
3b (bottom) is near zero only for very low-frequency ENSO variability. Convection activity is inferred using OLR flux measured from NOAA polar orbiting satellites (e.g., Chelliah and Arkin, 1992;Liebmann and Smith, 1996).Clouds that are high in the troposphere have cloud-top temperatures colder than cloud tops lying below.The colder cloud tops coincide with reduced OLR and therefore low OLR corresponds to deep convection.Comparison of the OMI/MLS ODI with the OLR dipole series in Fig. 4 indicates that convection is the main driver of the ODI from ENSO to MJO and shorter periods.Aside from convection/advection forcing, the variability of precursors may also affect the variability of tropospheric ozone on different timescales; however, chemical timescale vs. dynamical timescale must be considered.As an example, CO is a precursor of tropospheric ozone with an average lifetime of www.atmos-chem-phys.net/15/8037/2015/Atmos.Chem.Phys., 15, 8037-8049, 2015 ∼ 2 months (e.g., Petrenko et al., 2013).Conversion of CO to ozone will have a relatively long timescale compared to daily or weekly variability, but not when compared to intraseasonal to inter-annual variability.As a test, we repeated our analyses where all emissions for the CTM were held constant in time (figures not shown).We found that the variability of CTM ozone such as that shown in Fig. 1a and the coherence/phase in Fig. 3a were nearly identical for the constantemissions simulation.This suggests that the variability of precursors is not important overall in affecting tropospheric ozone variability on these timescales and on planetary scales.However, the variability of precursors on regional scales can be significant.It was shown by Ziemke et al. (2009) using the CTM and OMI/MLS ozone that biomass burning over Africa, South America, and Indonesia can generate 10-25 % and even greater increases of tropospheric ozone in localized regions within or near the burning.The high coherence calculated between measured ODI and the OLR dipole series from inter-annual (i.e., ENSO) to shorter timescales suggests that convection has a dominant influence in forcing largescale changes in tropospheric ozone in the tropical Pacific. The behavior of OLR with ozone in Figs. 1 and 4 indicates further that convection in the MERRA analyses is being well simulated from ENSO down to weekly timescales.Figure 5 compares calculated spectral amplitudes of the ODI obtained from OMI/MLS data (red curve), CTM output (blue dotted curve), and CCM output (green dashed curve).The spectral amplitudes for OMI/MLS and the CTM in Fig. 5 are everywhere comparable and the variability shown by peaks and valleys as functions of frequency are closely identical for periods even shorter than ∼ 30 days.In comparison, the CCM has considerably smaller amplitudes at all frequencies and the frequency variability of spectral amplitudes is not consistent with the observations.The spectral analysis including the coherence/coherence-phase statistics moves beyond visual inspection of time series to give a quantitative measure of model performance. Power spectra for TCO time series averaged over the Indian Ocean just north of the Equator are shown in Fig. 6 for OMI/MLS ozone and the CTM and CCM simulated ozone.This tropical region is where the 1-2-month MJO signal-tonoise in tropical TCO maximizes for both data and the CTM.MJO variability in Fig. 
5 has well-defined peak amplitudes around 45-50-day period for both the data and the CTM.However the CCM power spectrum does not show any consistent MJO or shorter timescale variability and essentially only generates an ENSO variation at very low frequency. Summary We have studied the variability of tropospheric ozone in the tropics from ENSO to intra-seasonal/MJO and weekly timescales using satellite measurements and two simulation models.Aura OMI and MLS satellite measurements are combined to derive daily maps of tropospheric ozone for October 2004 through 2012.Daily OLR from NOAA for the same time record are included to relate tropospheric ozone variability to changes in convection.The two models that we use are (1) the free-running GEOS Chemistry-Climate Model (CCM) and (2) the Global Modeling Initiative (GMI) chemistry-transport model (CTM) driven by Modern-Era Retrospective Analysis for Research and Applications (MERRA) meteorological analyses. Non-ENSO timescale changes in measured tropospheric ozone and convection in the tropics are found to be larger than ENSO-related changes by a factor of about 3-4 in the Pacific and up to a factor of ∼ 10 in the Atlantic.The non-ENSO variability in tropospheric ozone and convection is comprised mostly of intra-seasonal/MJO to 1-2-week timescale changes.Time series analysis including coherence calculations with OLR satellite data suggests that large-scale variability of tropospheric ozone in the Pacific from ENSO to weekly timescales is driven largely by convection. We developed a tropospheric ozone dipole index (ODI) from OMI/MLS measurements by differencing western mi-nus eastern Pacific tropospheric column ozone time series.The ODI is demonstrated to be a useful diagnostic for testing model ozone variability from ENSO down to weekly timescales.The ODI is derived similarly to the monthlymean Ozone ENSO Index (OEI) of Ziemke et al. (2010), but instead using daily measurements.The ODI was compared with ODI calculated from both the CTM and CCM.It is shown that the ODI obtained from the CTM is highly coherent with the measured ODI for timescales varying from ENSO to 1-2-month MJO and even shorter weekly time periods.The remarkable coherent behavior between the CTM ODI and measured ODI attests to the accuracy of the MERRA analyses and also that the CTM largely combines the effects of dynamics and photochemistry correctly over this broad range of timescales. Our analyses show that the Goddard CTM reproduces ozone observations exceptionally well over timescales from ENSO down to weekly periods whereas the Goddard CCM reproduces only ENSO variability.The inability of the CCM to generate shorter timescales such as an MJO is a known problem with GCMs/CCMs.Using daily instead of monthly SSTs would likely not improve performance of the CCM in light of previous studies.Del Genio et al. (2015) suggest that for these models to generate an MJO they need to have cloud/moisture-radiative interactions and convectionmoisture sensitivity.Understanding the differences in ozone variability between the CCM and CTM can help quantify possible missing or inaccurate feedback processes as future work.An important result we find is that using a model to quantify temporal and spatial properties of tropospheric ozone in the tropics requires that the model properly simulate the non-ENSO variability which includes the MJO and shorter periods. Power spectra with estimated 1-2-month signal-to-noise were calculated in the tropics for OMI/MLS and CTM TCO similar to Ziemke et at. 
(2007).Figure 6 in the main text shows power spectra with estimated signal-to-noise for both background and 1-2 month signal for the Indian Ocean region where the MJO signal for both OMI/MLS and CTM TCO is largest.In Fig. 6, an estimated background noise power spectrum (i.e., denoted BG) for each time series was estimated using a first-order autoregressive model T (t) = α × T (t − 1) + N (t), where α is a derived constant, t is the day index, and N(t) is normally distributed random noise with mean of zero.For power spectra using the seven-point estimator the 95 % critical signal-to-noise ratio level is 1.69. A3 ENSO vs. non-ENSO variability The top panel in Fig. A2 shows OMI/MLS time series for the ENSO component (thick curve), non-ENSO component (thin curve), and annual cycle (dotted curve) in the tropical western Pacific.The bottom panel in Fig. A2 is the same as the top panel but instead for the CTM.The selected region for these time series is 10-20 • S, 115-120 • E which coincides with largest ENSO variability for both OMI/MLS and CTM TCO.The ENSO variability was extracted using linear regression (see figure caption).Figure A2 shows that the CTM closely tracks OMI/MLS measurements for the non-ENSO components.ENSO variability for both the CTM and OMI/MLS is smaller than non-ENSO (comprised mostly of MJO and shorter timescales). Figure 1 . Figure 1.(a) Variability in deseasonalized OMI/MLS daily tropospheric column ozone (solid curves), GMI CTM (dotted curves), and OLR (long dashed curves) for ENSO signal, intra-seasonal oscillation (ISO) signals, and with ENSO signals removed (non-ENSO).ISO curves involved band-pass filtering the time series for 25-65-day periods.OLR (units W m −2 ) was multiplied by a factor of 0.18 for plotting with ozone.The plotted variability was calculated using amplitude of 2σ to estimate peak-to-peak change.The time record is 1 October 2004-31 December 2012 and all original time series were averaged over 20 • S-20 • N. The ENSO signals were extracted using the linear regression T (t) = β× Nino34(t) + ε(t), where T is original time series, t is day index, β is a derived constant, Nino34(t) is the Niño 3.4 ENSO index, and ε(t) is the residual that represents the non-ENSO component of the time series.(b) Same as (a) except that all of the time series were filtered for extreme ENSO events whereby Nino34(t) > 1.0 or Nino34(t) < −1.0. represent 20 • S-20 • N averages as function of longitude.The ISO variability (middle curves) involved band-pass filtering of the original time series for 25-65-day periods (see Appendix A and Fig. 1 caption). Figure 2 . Figure 2. (a) Daily ODI (in DU) for OMI/MLS data (solid red curve) and CTM (dotted blue curve).The beginning labels "O", "J05", "A", and "J" on the horizontal time axis in (a) and (b) denote October, January 2005, April, and July, respectively (similar labels for subsequent years).The monthly-mean Niño 3.4 ENSO index (thick black curve; units K and multiplied by 3 for plotting) is included for comparison with the two ODI time series.The ODI time series is derived by subtracting the eastern Pacific (15 • S-15 • N, 110-180 • W) from western Pacific (15 • S-15 • N, 70-140 • E) deseasonalized tropospheric column ozone.The correlation between the two daily ODI time series printed in the upper right of this figure is 0.857.(b) Same as (a) but with the CCM (dotted green curve) in place of CTM.Calculated standard deviations of the ODI time series from OMI/MLS, CTM, and CCM are 3.7, 3.9, and 2.6, respectively. Figure 3 . 
Figure 3.This figure plots calculated coherence and phase of coherence between OMI/MLS ODI and model (i.e., CTM and CCM) ODI as functions of frequency (periods in days shown).(a) Top panel: coherence-squared between OMI/MLS ODI and CTM ODI.Included are confidence levels for coherence-squared of 95 % (i.e., value of 0.393), 99 % (value of 0.536), and 99.9 % (value of 0.684).Bottom panel: phase of coherence in degrees.Panel (b) is the same as (a) but for the CCM instead of CTM (see Appendix A for details of these calculations). Figure 4 . Figure 4.This figure plots ODI and OLR dipole time series in (a) followed by calculated coherence and phase of coherence between OLR and OMI/MLS ODI in (b).Panel (a) is similar to Fig. 2a except with calculated OLR dipole series (blue dashed curve) replacing the CTM ODI.OLR time series values have been divided by 4 for plotting with ozone.Panel (b) is similar to Fig. 3a except that the calculated coherence and coherence phase is between the OMI/MLS ODI and OLR dipole series. Figure 5 . Figure5.This figure plots calculated spectral amplitudes (in DU) of ODI derived from OMI (solid red curve), CTM (dotted blue curve), and CCM (dashed green curve) as functions of frequency (periods in days shown).Spectral amplitude is defined as the square root of c(ω) 2 + s(ω) 2 , where c and s denote Fourier cosine and sine coefficients, ω is frequency and the over-bar denotes application of a smoothed spectral estimator (see Appendix A for details of these calculations). Figure 6 . Figure 6.All three panels show calculated power spectra (in units of DU 2 ) of daily tropospheric column ozone time series averaged over a broad region of the tropical Indian Ocean (0-10 • N, 70-80 • E)where the MJO signal is statistically significant well above 95 % for OMI/MLS and the CTM.The top, middle, and bottom panels are for OMI/MLS data, CTM output, and CCM output, respectively.The power spectra are plotted vs. frequency with periods in days shown.A power spectrum is defined by [c(ω) 2 + s(ω) 2 ]/2 where c and s denote derived Fourier cosine and sine coefficients, ω is circular frequency and the over-bar denotes application of a smoothed spectral estimator.Estimated background noise is denoted "BG" with 95 % confidence level shown in each panel (see Appendix A for details of these calculations).
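The two processing steps that underpin these figures, deseasonalizing and differencing regional tropospheric column ozone to form the daily ODI, and separating ENSO from non-ENSO variability by regression on the Niño 3.4 index, can be sketched compactly. The following Python fragment is only an illustrative sketch, not the authors' processing code; the inputs `tco` (a daily gridded tropospheric column ozone field) and `nino34` (a Niño 3.4 index interpolated to the same daily time axis) are assumed, hypothetical names, and the region boundaries are taken from the Fig. 2 caption.

```python
import numpy as np
import xarray as xr

def deseasonalize(da):
    """Remove the mean annual cycle (day-of-year climatology)."""
    clim = da.groupby("time.dayofyear").mean("time")
    return da.groupby("time.dayofyear") - clim

def regional_mean(da, lat_bounds, lon_bounds):
    # Assumes ascending latitude/longitude coordinates on a 0-360 degree grid.
    return da.sel(lat=slice(*lat_bounds), lon=slice(*lon_bounds)).mean(["lat", "lon"])

tco_anom = deseasonalize(tco)
west = regional_mean(tco_anom, (-15, 15), (70, 140))    # western Pacific box (70-140 E)
east = regional_mean(tco_anom, (-15, 15), (180, 250))   # eastern Pacific box (110-180 W)
odi = west - east                                        # daily ODI in DU

# ENSO vs. non-ENSO separation, T(t) = beta * Nino34(t) + eps(t):
beta = np.polyfit(nino34.values, odi.values, 1)[0]
enso_component = beta * nino34.values
non_enso_component = odi.values - enso_component
```

Coherence between the measured and modeled ODI, as in Fig. 3, would then be estimated from smoothed cross-spectra of two such series.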
2018-11-21T08:38:15.566Z
2015-07-22T00:00:00.000
{ "year": 2015, "sha1": "099319b1b5faefd6e4544b8bcb3ec52d461eaeca", "oa_license": "CCBY", "oa_url": "https://www.atmos-chem-phys.net/15/8037/2015/acp-15-8037-2015.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d326fa038757c675c506bfa35970e39560673ddb", "s2fieldsofstudy": [ "Environmental Science", "Physics" ], "extfieldsofstudy": [ "Environmental Science" ] }
261076112
pes2o/s2orc
v3-fos-license
Parameter space geometry of the quartic oscillator and the double well potential: Classical and quantum description

I. INTRODUCTION

Exploring the geometry of the quantum parameter space has led to captivating insights into various physical systems [1][2][3][4][5]. Within these investigations, a prominent element is the quantum geometric tensor (QGT), encompassing the quantum metric tensor (QMT) in its real part and the Berry curvature in its imaginary part. This tensor holds significance in quantum information processing applications, such as adiabatic and holonomic quantum computing [6]. In these applications, minimizing errors corresponds to traversing geodesics on the control parameter manifold [7]. The QGT plays a vital role in describing quantum phase transitions (QPT) in the thermodynamic limit and provides valuable information about the precursors to these transitions when dealing with systems comprising a finite number of particles. Furthermore, following the ideas of Ozawa and Goldman [8,9], it has recently become possible to measure the QMT experimentally [10,11], showing that the QGT can be an essential tool for describing quantum phenomena; for example, observations of the quantum geometry enabled an evaluation of the quantum Cramér-Rao bound (QCRB) [12].

In this study, we extensively examine the geometry of the quantum parameter space of the anharmonic oscillator. To our knowledge, a systematic investigation of the geometric properties of the quantum parameter space of the anharmonic oscillator has not been undertaken. Therefore, the primary objective of this article is to fill this gap by conducting a thorough analysis of the quantum geometric tensor of this system.

The anharmonic oscillator is a system that has been analyzed from many points of view. In the original articles by Bender and Wu [13,14], it was shown, using semiclassical WKB methods, that the large-order behavior of the series expansion of the energy levels renders the perturbation expansion divergent. This study was later followed by the perturbative analyses of Lipatov [15] and Brézin et al. [16], showing that the presence of the instanton connecting the two minima of the potential is responsible for the non-Borel summability of the perturbation expansion of the ground state energy. The anharmonic oscillator has served as a test system, useful for extending the results to quantum field theory, and its large-order behavior has been analyzed using instanton contributions to the path integral representation of Green's functions [17][18][19][20]. These studies were extended by considering the contributions of multi-instantons to the ground state in [21] and to excited states in [22].

In Ref. [23], it was shown that a summable perturbation series exists for the double-well potential by employing an effective coupling. Subsequently, in [24] fast convergence in perturbation theory was introduced using a simple uniform approximation of the logarithmic derivative of the ground state eigenfunction. This technique transforms the one-dimensional Schrödinger equation into a Riccati form, using the logarithmic derivative of the wave function [25]. Using this approach, approximate eigenfunctions were obtained in [26] for the quartic anharmonic oscillator, and also for the double-well potential [27,28].
In the present article, we extend these studies of the anharmonic oscillator by employing a different approach: a geometric analysis of the parameter space. To do this, we introduce a notion of distance in this space using the quantum metric tensor developed by Provost et al. [1] and Zanardi et al. [2]. With the help of this metric, we can reconstruct all the geometric information of the parameter space, including the scalar curvature, which shows quite interesting behaviors for both positive and negative oscillator parameters.

We employ three techniques to construct the quantum metric tensor for the ground state of the anharmonic oscillator, which has a double-well potential in the case of negative oscillator parameters. We obtain an exact numerical description by performing a diagonalization in a truncated harmonic oscillator basis [29], including a careful analysis of the convergence of all the relevant observables, along with the perturbative method outlined by Zanardi et al. [2]. These numerical results are compared with a perturbative procedure of nonlinearization that expresses the wave function as a power series in the anharmonic coupling [30], and with a semiclassical procedure employing Fourier series [31]. This particular approach involves constructing a classical analog of the quantum metric tensor [32], offering the advantage of enabling the derivation of the classical equivalent of the matrix element of an operator between the ground state and any excited state.

II. GEOMETRY OF THE PARAMETER SPACE

In this section, we introduce some basic aspects of the parameter space associated with quantum and classical systems, and we fix some notation.

A. Quantum metric tensor

We start by considering a one-dimensional quantum system with a Hamiltonian $\hat{H}(\hat{q},\hat{p};\lambda)$ that depends on a set of real adiabatic parameters denoted by $\lambda=\{\lambda^{i}\}$ ($i=1,\dots,N$). It is assumed that the Hamiltonian has at least one eigenvector $|\Psi_{n}(\lambda)\rangle$ with nondegenerate eigenvalue $E_{n}(\lambda)$. Using this eigenvector, the components of the quantum metric tensor (QMT) defined on the $N$-dimensional parameter space $\mathcal{M}$ of the system are given by [1]

$$g_{ij}(\lambda) := \mathrm{Re}\left[\langle\partial_{i}\Psi_{n}|\partial_{j}\Psi_{n}\rangle-\langle\partial_{i}\Psi_{n}|\Psi_{n}\rangle\langle\Psi_{n}|\partial_{j}\Psi_{n}\rangle\right], \qquad (1)$$

where $\partial_{i} := \partial/\partial\lambda^{i}$. This metric provides the distance between the eigenvectors $|\Psi_{n}(\lambda)\rangle$ and $|\Psi_{n}(\lambda+\delta\lambda)\rangle$ with infinitesimally different parameters, namely $\delta\ell^{2}=g_{ij}(\lambda)\,\delta\lambda^{i}\delta\lambda^{j}$. For the purposes of this work, it is convenient to introduce the perturbative form of this metric [2]

$$g_{ij}(\lambda) = \mathrm{Re}\sum_{m\neq n}\frac{\langle\Psi_{n}|\hat{O}_{i}|\Psi_{m}\rangle\langle\Psi_{m}|\hat{O}_{j}|\Psi_{n}\rangle}{(E_{m}-E_{n})^{2}}, \qquad (2)$$

where the operators $\hat{O}_{i}\equiv\partial_{i}\hat{H}$. Notice that to evaluate the expression (2) all the matrix elements must be known. The advantage of this expression is that it shows that the components of the QMT are singular at the points $\lambda^{*}\in\mathcal{M}$ of the parameter space such that $E_{m}(\lambda^{*})=E_{n}(\lambda^{*})$. This means that at the critical points of the QPT, the components of the QMT are singular. However, in some cases, a more detailed analysis is required [5,33]. For the purposes of this study, we set $n=0$ and write the QMT (2) as

$$g_{ij} = \sum_{m\neq 0} g_{ij}^{(m)}, \qquad (3)$$

with

$$g_{ij}^{(m)} = \frac{\mathrm{Re}\left[T_{i}^{(m)}\bigl(T_{j}^{(m)}\bigr)^{*}\right]}{(E_{m}-E_{0})^{2}}, \qquad (4)$$

where $*$ denotes the complex conjugate and we have defined the transition matrix elements $T_{i}^{(m)} := \langle\Psi_{0}|\partial_{i}\hat{H}|\Psi_{m}\rangle$. To find out whether the resulting singularities are genuine or merely a result of the parameter space's coordinates, we can resort to the scalar curvature $R$, which does not depend on the coordinates used.

In particular, for a two-dimensional space endowed with a metric tensor (the QMT, in this case), the scalar curvature has a simple closed form (5) [34], in which $g=\det[g_{ij}]$ is the determinant of the metric and the quantities A and B are combinations of the metric components and their first and second derivatives with respect to the parameters. B.
Classical metric tensor Let us now consider a one-dimensional classical integrable system described by a Hamiltonian (, ; ) depending the set of real adiabatic parameters = { } ( = 1, . . ., ).The natural coordinates for this type of systems are the actionangle variables {, }, which allow to write the Hamiltonian as (; ) = ((, ; ), (, ; ); ) and can be used to define the torus average of a function (, ; ) as In this setting, the classical analog of the quantum metric tensor (1) is1 [36] where ⟨•⟩ 0 means that the classical average ( 7) is taken over the initial ( = 0) angle variable 0 and O () are timedependent functions defined as This classical metric tensor (CMT) provides a measure of the distance, on the parameter space M, between the points [(), ()] and [( + ), ( + )] with infinitesimally different parameters, i.e., ℓ 2 = (; ) .Since the functions O () are periodic in the angle variable , they can be expressed as a Fourier series as where () = 0 + with = / the angular frequency of the system, "i" is the imaginary unit, and ( ′ ) 𝑖 are the time-independent Fourier coefficients Using the functions ( ′ ) 𝑖 , it can be shown that the components of CMT ( 8) can be equivalently written as [31] This expression can be regarded as the perturbative form of the CMT, and in this sense it is analogous to the expression (2) of the QMT.The appealing feature of ( 12) is that it does not require to solve the initial conditions problem, what makes it suitable for this study.To determine if the resulting singularities of this metric are genuine or not, we can also compute the associate scalar curvature (5) of the two-dimensional case. Notice that the CMT ( 12) can be written in a similar fashion as the QMT (3).In fact, using ( ( ′ ) ) * = (− ′ ) which follows from (11), the CMT can be expressed as with where we have defined ′( ′ ) III. QUARTIC OSCILLATOR AND DOUBLE WELL POTENTIAL The Hamiltonian of the quantum quartic oscillator is where and are the system parameters, which we adopt as our adiabatic parameters.Then, the associated parameter space corresponds to a two-dimensional manifold with coordinates { } = (, ), = 1, 2. Throughout this paper we consider > 0. In the case > 0, the potential has a single minimum, while in the case < 0 it has two minima and is known as the double-well potential. A. Quantum analysis QMT from a numerical calculation The quantum Hamiltonian ( 15) is easily represented with the creation and annihilation operators considering the following equations, where = √︁ /.Consequently the Hamiltonian becomes into, the respective matrix elements of Ĥ can be represented in the Fock basis and are given by, With these matrix elements, a truncated Hamiltonian matrix is built with , ′ ∈ [0, ], where is the finite truncated size on the Fock basis, which allows to obtain conv converged eigenstates.We encode in Mathematica [37] the calculations to find the first conv eigenstates |Ψ ⟩ of the system, preserving a numerical accuracy of seventy digits of precision. 
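As a rough, self-contained illustration of the truncated-basis step just described, the sketch below builds the Hamiltonian in a harmonic-oscillator (Fock) basis and diagonalizes it with standard double-precision linear algebra, rather than the seventy-digit Mathematica computation used in the paper. The explicit form H = p²/2 + (k/2)x² + λx⁴ with ℏ = m = 1 is an assumption made here for concreteness; the paper's exact symbols and scalings may differ.

```python
import numpy as np

def quartic_spectrum(k, lam, n_basis=200, omega0=1.0):
    """Diagonalize H = p^2/2 + (k/2) x^2 + lam x^4 in a truncated Fock basis.

    omega0 is the frequency of the reference harmonic basis, so that
    x = (a + a^dag)/sqrt(2*omega0) and p = i*sqrt(omega0/2)*(a^dag - a).
    """
    n = np.arange(n_basis)
    a = np.diag(np.sqrt(n[1:]), 1)           # annihilation operator in the Fock basis
    ad = a.T                                  # creation operator
    x = (a + ad) / np.sqrt(2.0 * omega0)
    p = 1j * np.sqrt(omega0 / 2.0) * (ad - a)
    x2 = x @ x
    H = 0.5 * (p @ p) + 0.5 * k * x2 + lam * (x2 @ x2)
    evals, evecs = np.linalg.eigh(H)
    return evals.real, evecs

# Convergence check: low-lying eigenvalues should be insensitive to the cutoff.
for nb in (100, 200, 400):
    e, _ = quartic_spectrum(k=-1.0, lam=0.2, n_basis=nb)
    print(nb, np.round(e[:4], 6))
```

The ground-state probability density |Ψ₀(x)|² shown in Fig. 1 can then be reconstructed from the expansion coefficients ⟨Φ_n|Ψ₀⟩ discussed in the following subsection.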
Using (15), for and we get Both quantities are substituted into (2).The cutoff conv guarantees that the transition matrix elements of Ô1 and Ô2 converged.Having obtained numerically the eigenstates, energies, and the transition operators Ô1 and Ô2 we calculate numerically the elements of the QMT point by point in the bidimensional parameter space.Employing the command Interpolation[..., Method->Hermite] defined in Mathematica, is represented as a continuous, differentiable function.In this way we can use the analytical expression (5) and obtain the Ricci scalar . The classical limit of the Hamiltonian (15) has a double well potential if < 0 and a simple well if > 0. To visualize the analogue behaviour in the variable for the quantum system, we employ the ground state |Ψ 0 ⟩ expanded in the quantum harmonic oscillator basis |Φ ⟩, so, where the coefficients 0, = ⟨Φ |Ψ 0 ⟩ are obtained in the numerical diagonalization and The ground state probability distribution |Ψ 0 ()| 2 to find the particle at the position has a maximum at = 0 for > 0. For < 0, it shows an interesting behavior, displayed in Fig. 1.It has one maximum for small values of | | and, for = 0.2, it starts noticing the presence of the two wells at ≈ −0.15 and develops two separated probability regions at ≈ −0.32.The delocalization of the probability distribution over the two wells occurs when the ground state energy is lower than the top of the energy barrier between the two wells.This effect can be best visualized in Fig. 2, where the blue lines represent the position of the minima of the classical potential, the yellow bands the exact quantum ground state probability distribution, with their maxima depicted with red dots. QMT from a perturbative approach Here we consider a perturbative treatment in the parameter since there is no exact solution to the resulting Schrödinger equation.Furthermore, we restrict ourselves to obtain the ground-state wave function and its energy up to the 10th order in .To accomplish this task, we will use the method proposed in Ref. [30], suitable for finding corrections to large powers of .The wave function generated by this approach, shown as an example up to the fourth order in is where Using the wave function up to the tenth order in together with the Provost and Valle formula (1), we get the quantum metric tensor components for > 0 where the coefficients are given in the Table (I).It is evident from the above equations that all the components of the QMT will diverge when → 0. It is also relevant to mention that the quantum perturbative analysis presented in this subsection can only be performed for > 0. B. Classical analysis In this section, we consider the classical counterpart of quantum Hamiltonian (15), which is given by and also take { } = (, ) with = 1, 2 as the set of adiabatic parameters.The classical metric tensor for this system in the case > 0 was obtained in [35], by using a formulation based on generating functions and resorting to the canonical perturbation theory.However, in [35] neither the scalar curvature associated with the geometry of the parameter space nor the more challenging case of < 0 were studied.The aim of the section is to provide a complete analysis of the parameter space of these systems in both cases, > 0 and < 0, in the framework of the Fourier-base formulation (12). 
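Before turning to the classical computation, note that the quantum workflow above (converged eigenstates fed into the sum-over-states metric of Eqs. (2)-(4)) amounts to a few lines of linear algebra. The sketch below reuses `quartic_spectrum()` from the previous block and keeps the same assumed parameterization, so the deformation operators are taken as Ô₁ = ∂H/∂k = x²/2 and Ô₂ = ∂H/∂λ = x⁴; these identifications are assumptions made for illustration, not necessarily the paper's exact expressions.

```python
import numpy as np

def ground_state_qmt(k, lam, n_basis=300, n_conv=80, omega0=1.0):
    """Ground-state QMT: g_ij = sum_{m>0} Re[<0|O_i|m><m|O_j|0>] / (E_m - E_0)^2."""
    evals, evecs = quartic_spectrum(k, lam, n_basis, omega0)   # previous sketch
    n = np.arange(n_basis)
    a = np.diag(np.sqrt(n[1:]), 1)
    x = (a + a.T) / np.sqrt(2.0 * omega0)
    x2 = x @ x
    O = [0.5 * x2, x2 @ x2]                   # assumed O_1 = x^2/2, O_2 = x^4
    psi = evecs[:, :n_conv]                   # keep only well-converged states
    E = evals[:n_conv]
    T = [psi[:, 0].conj() @ Oi @ psi[:, 1:] for Oi in O]       # T_i^(m) = <0|O_i|m>
    g = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            g[i, j] = np.sum((T[i] * T[j].conj()).real / (E[1:] - E[0]) ** 2)
    return g

print(np.round(ground_state_qmt(k=1.0, lam=0.2), 6))
```

Evaluating g_ij on a grid of parameter values then allows the scalar curvature to be obtained by differentiation of the interpolated components, as done with expression (5) in the text.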
To compute the classical metric (12), we begin by setting the functions (9).Using (25), for and we get The next step is to obtain the Fourier coefficients (11), which requires first expressing these deformation functions in terms of the system's action-angle variables {, }.Since finding the action-angle variables for the Hamiltonian ( 25) is not easy, we need to resort to the canonical perturbation theory [38].We carry this out by separately considering the cases > 0 and < 0. It is essential to point out that functions (26a) and (26b) hold for both cases. With the generating function at hand and bearing in mind the equations of the canonical transformation the variables and 0 can be calculated in terms of 0 , , and the parameters .Then, using (29a) together with the resulting action variable 0 ( 0 , ; ), the classical deformation functions (26a) and (26b) can also be expressed in terms of the 0 , , and , i.e.O 1 = O 1 ( 0 , ; ) and O 2 = O 2 ( 0 , ; ) which are given in (A2) and (A3) up to the fourth order in .Because of this, it is convenient to perform the change of variable → 0 in (11), which allows us to write the expression for the Fourier coefficients as (33) From this expression we obtain the coefficients ( ′ ) 1 (; ) and Substituting (A4) and (A5) into (12), the components of the classical metric tensor for > 0 are where the numerical coefficients (11) , (12) , and (22) are given in Table II.As in the quantum case, all the components of the CMT will also diverge when → 0. In this case, the system presents three fixed points corresponding to vanishing phase space velocities ( , ).The points are In Fig. 4, we can see these points.The blue points correspond to 1 and 3 , which are center points as long as < 0. Furthermore, the red point corresponds to 2 and it is a hyperbolic point. Taking into account this, it is convenient to carry out a transformation to a coordinate system centered on 1 or 3 .Then, let us take the point 1 and consider the change of and = .In terms of the new variables the Hamiltonian (25) reads Since the constant term 3 2 2 does not affect the dynamics of the system, we can get rid of it.However, note that by removing this term we are removing the divergence in energy at = 0. Redefining the parameter ′ = √ , this Hamiltonian can be decomposed as Analogously to the previous case, we assume that ′ ≪ 1. In this setting, 0 is just a harmonic oscillator since < 0 and then plays the role of the Hamiltonian of the unperturbed problem with action-angle variables { 0 , 0 } given by Here, 0 = √ −2 is the frequency of the unperturbed system.In addition, the terms 1 and 2 are regarded as first-order and second-order potentials, respectively. Following the same procedure as in the previous case, we first need to obtain the generating function .The functions 1 , 2 , . . .involved in (30) are again obtained from (31), but with functions Φ modified by the presence of 2 .In particular, the first three functions .The resulting functions for = 1, . . ., 10 are given in (A6). IV. CLASSICAL AND QUANTUM METRICS Our goal now is to compare the classical and quantum metrics and their scalar curvatures for both cases > 0 and < 0. 
With this in mind, we first need to establish the value of the action variable and its different powers.One way to do this is by using the semiclassical relation between the quantum metric tensor and the classical metric [36] In particular, for the metrics components ( 24) and (34) of the case > 0 we have the relations cl 12 , and ℏ 2 22 = cl 22 .For each of these relations, we get an identification of the -th power of the action variable as = ( ℏ) ( = 1, 2, . . ., 14) where are the numerical coefficients.However, using (24) and ( 34 In what follows we will use these identifications of the action variables for both cases > 0 and < 0, and we will set ℏ = 1 to perform the comparisons between classical and quantum objects.Before proceeding, it is convenient to compute the scalar curvature of each metric.Using (5), the scalar curvature of the quantum metric tensor (24) is where the coefficients are given in Table IV.Analogously, the scalar curvature of the classical metric tensors (34) for > 0 and (41) for < 0 with the identifications = ( ℏ) are where the coefficients ℎ and are provided in Table IV. In Fig. 6 we plot the numeric QMT and its scalar curvature obtained from the exact numerical diagonalization and compare the results with the analytic CMT and its scalar curvature obtained from the Fourier approach.We see in Figs.6(a)-6(c) that far from the region where = 0 the components of the QMT and the CMT have a very close behavior.Note in Figs.6(d)-6(e) that this similitude between classical and quantum quantities is also exhibited by the determinant and scalar curvature.This shows that the classical metrics for the regions > 0 and < 0 far from = 0 agree well with their quantum counterparts.Nevertheless, we can also see from Figs. 6(a)-6(c) that near the region where = 0, the components of the CMT show a divergent behavior, in contrast to their quantum counterparts, which do not diverge but show a peak.In this regard, it is remarkable that the classical metric for < 0 behaves in a similar way.From Fig. 2, we notice that at = 0, the potential abruptly changes from having two minima to having only one if we go in the − → direction or from having one minima to having two minima in the opposite direction.In the classical and perturbative quantum sense, this is interpreted as an exact quantum transition, while in the exact quantum sense, this transition is moderated by tunneling, which gives rise only to a precursor of a quantum phase transition and hence the maximum near = 0.However, this transition can be confirmed only in the thermodynamic limit.In Fig. 7, the plots of the numeric QMT, the analytic QMT, and the analytic CMT are shown for = 0.2.We see that in fact the (numeric and analytic) quantum and classical metrics present the same behavior for values of far from = 0. Furthermore, it is clear that the analytic classical and quantum metric components diverge at = 0, while the numeric QMT remains finite.This behavior is also characteristic of the determinants of the corresponding metrics.From Fig. 7 we observe that the component 11 of the numeric QMT displays peak at = −0.285,whereas the component 12 of this metric has a peak at = −0.32 and a local minimum at = −0.45.The analytic CMT counterpart does not possess these peaks; however, its 12 component does have a local minimum at = −0.504.Simultaneously, the determinant of the numeric QMT exhibits a peak at = −0.325,while the determinant of the CMT shows a peak at = −0.586.In Fig. 
8, we show the corresponding scalar curvatures for = 0.2.We can see that the scalar curvature of the numeric QMT has a peak at = −0.245and a local minimum at = −0.48. The local maxima and minima in both the components of the metric tensor and the scalar curvature, for negative , reflect the appearance of the delocalization of the probability distribution, i.e., the point where the probability density is spread out over the two wells (see Fig. 2).On the other hand, the scalar curvature of the CMT presents a divergent behavior that signals the appearance of the aforementioned extreme values (maximum or minimum).Our results then reveal that the CMT can be used to predict the occurrence of peaks in the QMT, which indicate the appearance of the delocalization of the probability distribution.Fig 8 also shows that both scalar curvatures take the value of −4 in the region < 0 far form = 0, where the ground state wave function corresponds approximately to two welllocated Gaussians on each of the wells.The negative constant scalar curvature means that the parameter space associated with the ground state has a hyperbolic geometry in that region.In addition, it is worth noting that, in the region, > 0, the scalar curvatures calculated using the analytic QMT and the perturbative CMT show a, analogous divergent behavior for values close to = 0. Finally, for ≫ the system tends to behave like a harmonic oscillator, and for this reason, the components of the QTM and CMT tend to zero; however, in the limit → ∞ the scalar curvature of the exact numerical QMT tends to −28, while the scalar curvature of CMT tends to 21.1866, as it can be verified from the analytic expressions of these curvatures, (43) and (44).In Fig. (9), we show the plots of the numerical QMT and analytic CMT for = −0.5.Remarkably, the results show an excellent agreement between the components of QTM and the components of CMT.This confirms the usefulness of the classical framework as a tool to be adopted in order to have a first glance over the information contained in the parameter space of a quantum system. In Fig. 10, the plots of the scalar curvatures of the numeric QMT and the analytic CMT are shown for = −0.5.Note that for → 0 both scalar curvatures tend to −4, which can also be seen from ( 44) in case of cl with < 0. The fact that the scalar curvature has a finite value at → 0 reveals that the divergence present in the QMT and the CMT can be removed by a change of coordinates in the parameter space.It is worth mentioning that in the case = 0, the system's Hamiltonian reduces to that of an inverted harmonic oscillator, and then the classical and quantum methods used in this work cannot be applied, at least in the conventional form.Furthermore, from Fig. 10 we can see that the scalar curvature of the QMT has a local minimum at = 0.215.The origin of this local minimum may be analogous to the one of Fig. 8, i.e., its appearance corresponds to the separation of the probability distribution into two branches. V. 
COMPARISON BETWEEN QUANTUM AND CLASSICAL APPROACHES FOR THE PARAMETER SPACE METRIC The aim of this section is to carry out a more detailed analysis of the quantum and classical metric tensors, (3) and ( 13), and to strengthen the analogy between them.To do this, we use the same identifications for powers of the action variable that were introduced in the previous section.We begin by comparing both metrics (3) and ( 13), which suggests that exact excitation energies − 0 are mimicked in the classical approximation by the harmonic oscillator energies ′ .In Fig. 11(a) we plot − 0 and ′ for = 1 and = 0.2, finding that for small values of and ′ (, ′ < 10), the quantities − 0 and ′ remain close to each other.However, for large values of and ′ (, ′ > 10), − 0 deviates from the linear behavior of ′ , which is a consequence of quantum corrections. In the case = −1 and = 0.2, the first seven values of the energies are 0 = 0, 1 = 1.04 × 10 −11 , 2 = 1.360866, 3 = 1.360866, 4 = 2.661983, 5 = 2.661984, 6 = 3.893522, and 7 = 3.893550.From this it is clear that ≈ +1 for even , which is a quasi-degeneration that emerges as a consequence of the double well potential.In contrast, in the classical case, all "excitation" energies ′ appear only once.This is because, in the classical perturbation formalism, we have considered only one of the wells, the one associated with the fixed point 1 .We have verified that employing the well involving the fixed point 2 , the resulting CMT is exactly the same.In Fig. 11(b) we plot − 0 and ′ as functions of /2 and ′ , respectively.We can see that both quantities present a similar behavior for small values of (/2 < 20) and ′ ( ′ < 20).The comparison between the metrics (3) and ( 13) also suggests that classical functions ′( ′ ) must play an analogous role to that of the functions () of the quantum case.In Fig. 12 we plot () and | ′( ′ ) | as functions of and ′ , setting = 1 and = 0.2.From this plot we can see that both functions exhibit a very similar behavior, approaching each other for small values of and ′ .Also, we can notice that the classical functions ′( ′ ) tend to zero faster than their quantum counterparts () .Remarkably and reinforcing the analogy, for odd and ′ both functions () and ′( ′ ) are zero.This is the reason why only the values of () and ′( ′ ) for even and ′ appear in the plot.0.01 In the case = −1 and = 0.2, the functions ′( ′ ) 𝑖 are real for even ′ and pure imaginary for odd ′ .However, for the same values of and , the functions () are real for even and vanishes for odd .In Fig. 13 we plot () and | ′( ′ ) | as functions of /2 and ′ , respectively.From this plot we can appreciate that both functions, () and | ′( ′ ) |, exhibit analogous behavior.We have plotted () as a function of /2 because of the quasi-degeneration resulting from the double well potential, which does not have a classical counterpart. To aid in a better understanding of this, in Figs. 14 we show and G ( ′ ) 𝑖 𝑗 to their metrics occur for ≤ 4 and ′ ≤ 4, and that precisely for these values of and ′ there is good agreement between these quantities.For > 4 and ′ > 4, the contributions of () and G ( ′ ) are of the order of 10 −5 or smaller.In the case = −1 and = 0.2, the classical function G ( ′ ) is real for all ′ , since the product ′ ( ′ ) is real for even ′ and pure imaginary for odd ′ .In the quantum case, the function () is nonzero for even and zero for odd .Clearly, this is because () is nonzero for even and zero for odd .In Fig. 
15 we plot () and G ( ′ ) as functions of and ′ for = −1 and = 0.2.Here we can also appreciate that the relevant contributions of () and G ( ′ ) to the quantum and classical metrics, respectively, occur for ≤ 5 and ′ ≤ 5. Certainly, the contributions of and G ( ′ ) are of the order of 10 −5 or smaller for > 5 and ′ > 5. VI. CONCLUSIONS In this paper we have studied the geometry of the space parameter of the single-well anharmonic oscillator ( > 0 ) and the quartic double-well potential ( < 0) from the classical and quantum points of view.In the quantum setting, we computed the exact QMT and its scalar curvature numerically for both the single-well and double-well potentials, using a basis of oscillator states and performing an exact numerical diagonalization.In the quantum setting, we also obtained analytically the QMT and its scalar curvature for the ground-state of the single-well case ( > 0), by following a perturbative treatment in the parameter and employing the method proposed in Ref. [30] to obtain the ground-state wave function up to 10th order in .In the classical framework, we computed analytically the CMT and its scalar curvature for both cases, the single-well and double-well potentials, by employing a formulation of the CMT based on Fourier series and introduced in [31].This approach allowed us to obtain the CMT up to 10th order in for the single-well system and up to 6th order in for the double-well system. We performed a detailed analysis of the exact quantum numerical results and the classical analytical results.This analysis was accomplished by considering identifications for the different powers of the action variable that arise from the semiclassical relation (42) between the analytic classical and quantum metrics of the single-well problem.We found that the QMT and the CMT have a very close behavior except in points near = 0, as shown in Fig. 6.Similarly, we found that there is a very good agreement between the corresponding scalar curvatures of these classical and quantum metrics for that region. For = 0.2, it was shown that components of the numeric QMT (except the component 22 ) and its scalar curvature exhibit peaks indicating the appearance of a delocalization of the probability distribution.The classical metric and its curvature show a divergent behavior at = 0 that could be regarded as a sign of the appearance of such peaks.In addition, for = 0.2 with < 0, far from = 0, both classical and quantum scalar curvatures take the value of −4, indicating that the associated parameter space has a hyperbolic geometry in that region.In contrast, for = 0.2 with > 0 far from = 0 the scalar curvature of the QMT tends to −28, while its classical analog tends to 21.1866.This discrepancy could be because the ground state wave function is strictly not Gaussian for those values of the parameters. The case with = −0.5 was also analyzed.We found a remarkable agreement between the numeric QMT and analytic CMT, which show a divergent behavior for → 0. However, for this limit, the scalar curvatures of both metrics tend to −4, meaning that the singularity at → 0 is apparent and can be removed by performing a change of coordinates (parameters).In this case, it is also worth mentioning that the scalar curvature of the QMT has a local minimum, which is absent in its classical counterpart and could be related to a separation of the probability distribution into two branches. 
We also compared in detail the perturbative expression of QMT (3) with the Fourier-based expression of the CMT (13), and found that the quantum energies − 0 and the classical quantity ′ have similar behavior, as well as that the classical functions ′( ′ ) play an analogous role to that of the functions FIG. 2 . FIG. 2. |Ψ 0 ()| 2 for = 0.2 in function of and , the blue curve is the classical localization of the critical points, and the red curve is the maximal value of |Ψ()| 2 obtained numerically. FIG. 4 . FIG. 4. Energy surfaces for = −1 and = 0.2.Blue points are the stable center points 1 and 3 , whereas the red one is the unstable center point 2 . FIG. 5 . FIG. 5. Coefficients of the identifications as a function . FIG. 6 . FIG.6.Comparison of parameter space metric and obtained from the exact numerical quantum approach (orange) and the classical approach (blue).The agreement is very good except near the region where = 0. FIG. 7 . FIG.7.Comparison of the parameter space metrics for = 0.2.Orange round markers correspond to the exact numerical QMT, red square markers correspond to the perturbative QTM, and the blue line corresponds to the perturbative CTM. 5 RFIG. 8 . FIG. 8. Comparison of the scalar curvatures for = 0.2.Orange round markers correspond to the exact quantum numerical , red square markers correspond the perturbative quantum , and the blue line corresponds to the perturbative classical . FIG. 9 . FIG.9.Comparison of the parameter space metrics for = −0.5.Orange round markers correspond to the exact numerical QMT and the blue line corresponds to the perturbative CTM. FIG. 10 . FIG. 10.Comparison of the scalar curvatures for = −0.5.Orange round markers correspond to the exact quantum numerical and the blue line corresponds to the perturbative classical . Appendix A: Generating functions and Fourier coefficients 1 . Case > 0 TABLE III . Coefficients of the classical metric tensor (41), up to order 5 . TABLE IV . Coefficients of the scalar curvatures.
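For readers who want to reproduce scalar-curvature comparisons of the kind summarized above directly from a gridded metric, the following finite-difference sketch is one possible route. It is not the procedure used in the paper (which interpolates the numeric QMT in Mathematica and applies the analytic two-dimensional expression (5)); it simply evaluates the standard Brioschi formula for the Gaussian curvature K of a two-dimensional metric and uses R = 2K, with the components g₁₁, g₁₂, g₂₂ supplied on a regular parameter grid.

```python
import numpy as np

def scalar_curvature_2d(E, F, G, du, dv):
    """Scalar curvature R = 2K of a 2D metric ds^2 = E du^2 + 2F du dv + G dv^2,
    with E, F, G sampled on a regular grid (axis 0 = u, axis 1 = v) and K given
    by the Brioschi formula evaluated with centered finite differences."""
    d = lambda A, axis: np.gradient(A, du if axis == 0 else dv, axis=axis)
    E_u, E_v = d(E, 0), d(E, 1)
    F_u, F_v = d(F, 0), d(F, 1)
    G_u, G_v = d(G, 0), d(G, 1)
    E_vv, G_uu, F_uv = d(E_v, 1), d(G_u, 0), d(F_u, 1)
    a = -0.5 * E_vv + F_uv - 0.5 * G_uu
    b, c = 0.5 * E_u, F_u - 0.5 * E_v
    r1, r2 = F_v - 0.5 * G_u, 0.5 * G_v
    p, q = 0.5 * E_v, 0.5 * G_u
    det1 = a * (E * G - F * F) - b * (r1 * G - F * r2) + c * (r1 * F - E * r2)
    det2 = -p * p * G + 2.0 * p * q * F - q * q * E
    return 2.0 * (det1 - det2) / (E * G - F * F) ** 2

# Sanity check on a metric of known curvature: the round unit sphere,
# ds^2 = du^2 + sin(u)^2 dv^2, has R = 2 everywhere.
u = np.linspace(0.5, 2.5, 201)
v = np.linspace(0.0, 1.0, 201)
U, _ = np.meshgrid(u, v, indexing="ij")
R = scalar_curvature_2d(np.ones_like(U), np.zeros_like(U), np.sin(U) ** 2,
                        u[1] - u[0], v[1] - v[0])
print(R[100, 100])   # ~2.0
```

Applied to the numerically computed QMT components on a parameter grid, this gives a quick cross-check of limiting values such as R ≈ −4 deep in the double-well region.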
2023-08-24T06:41:07.276Z
2023-08-23T00:00:00.000
{ "year": 2023, "sha1": "6e2b2eea1da8ea4da1cbc9be07cea11f193f914f", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1402-4896/ad1e4a/pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "a6de6abe05255509d2e9b73858dd5bf8e3ad7a3e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
16590886
pes2o/s2orc
v3-fos-license
Physiological and Biochemical Characterization of a Novel Nicotine-Degrading Bacterium Pseudomonas geniculata N1

Management of solid wastes with high nicotine content, such as those accumulated during tobacco manufacturing, poses a major challenge, which can be addressed by using bacteria such as Pseudomonas and Arthrobacter. In this study, a new strain of Pseudomonas geniculata, namely strain N1, which is capable of efficiently degrading nicotine, was isolated and identified. The optimal growth conditions for strain N1 are a temperature of 30°C and a pH of 6.5, at a rotation rate of 120 rpm, with 1 g l−1 nicotine as the sole source of carbon and nitrogen. Myosmine, cotinine, 6-hydroxynicotine, 6-hydroxy-N-methylmyosmine, and 6-hydroxy-pseudooxynicotine were detected as the five intermediates through gas chromatography-mass spectrometry and liquid chromatography-mass spectrometry analyses. The identified metabolites were different from those generated by Pseudomonas putida strains. The analysis also highlighted the bacterial metabolic diversity in relation to nicotine degradation by different Pseudomonas strains.

Introduction

Nicotine, a principal pyridine alkaloid in tobacco plants, is notorious for its significant contribution to tobacco addiction. Nicotine is also very toxic to humans because it is easily absorbed by the body, and its hydrophilic nature contributes to environmental contamination [1]. Moreover, large quantities of tobacco wastes containing high concentrations of nicotine are produced during the tobacco manufacturing process. These wastes have been classified as ''toxic and hazardous wastes'' by European Union regulations [2]. In addition, the American Medical Association has issued a public strategy calling for the mandatory reduction of nicotine levels in tobacco [3]. As an environment-friendly treatment, microbial degradation of nicotine is considered a promising method due to its low cost and high efficiency.

In this study, a novel strain, Pseudomonas geniculata N1, capable of degrading nicotine was isolated. Along with the identification and characterization of this new nicotine-degrading strain, we also determined the optimal conditions for cell growth and nicotine degradation. Compared with other Pseudomonas and Arthrobacter species, strain N1 exhibited a distinct color change during its growth with nicotine as the sole source of carbon and nitrogen. The intermediates of strain N1-mediated nicotine degradation were identified by high-performance liquid chromatography (HPLC), ultraviolet (UV) absorption, gas chromatography-mass spectrometry (GC-MS), and liquid chromatography-mass spectrometry (LC-MS) analyses. The data showed that strain N1 decomposes nicotine via a unique pathway, which is different from those reported for other Pseudomonas strains. This study suggests that this nicotine-degrading bacterium has potential future application in the treatment of the waste generated during tobacco manufacturing. The findings might also help further research into the molecular mechanisms underlying nicotine degradation by strain N1.

Chemicals and media

L-(-)-Nicotine (≥99% purity) was purchased from Fluka Chemie GmbH (Buchs, Switzerland). All other chemicals were of analytical grade. The ''nic medium'' was a minimal medium containing 13.3 g K2HPO4·3H2O, 4 g KH2PO4, 0.2 g MgSO4·7H2O and 0.5 ml of trace elements solution.
L-(-)-Nicotine was added to this minimal medium after filtration sterilization to a final concentration of 1 g l−1.

Strain identification and characterization

After the extraction of genomic DNA with the Wizard Genomic DNA purification kit (Promega Corp., Madison, WI, USA), the 16S rRNA gene was amplified by PCR with the universal primer pair 27F (5′-AGAGTTTGATCCTGGCTCA-3′) and 1492R (5′-GGTTACCTTGTTACGACTT-3′). PCR amplification was carried out with pfu polymerase (Tiangen, Beijing, China) by denaturation at 94°C for 5 min, followed by 30 cycles of 94°C for 30 s, 60°C for 30 s, and 72°C for 3 min. The PCR product was purified for sequence analysis and homology alignment analysis using the BLAST search program (http://www.ncbi.nlm.nih.gov/BLAST.html). A phylogenetic tree was constructed with the neighbor-joining (NJ) method using MEGA 4.1 [19].

A series of experiments was conducted to identify the morphological, physiological, and biochemical characteristics of the strain. The morphology was studied using a transmission electron microscope. The physiological and biochemical characteristics, such as the utilization of different carbon sources and enzymatic properties, were determined by the China Center for Type Culture Collection (CCTCC).

Cell growth and nicotine degradation

Culture temperature, pH, nicotine concentration, and rotation rate were studied in order to identify the optimal conditions for cell growth and nicotine transformation. To determine the optimal temperature for cell growth, strain N1 was incubated in minimal medium containing 1 g l−1 nicotine at 23°C, 26°C, 30°C, 34°C, and 37°C with the initial pH set at 7.0. The optimal pH was determined by culturing N1 at pH values of 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, and 7.5. The pH was adjusted using 100 mM phosphate buffer. Once the optimal temperature and pH were determined, the effect of nicotine concentration (0.5, 1.0, 1.5, and 2.0 g l−1) was investigated under the optimal temperature and pH. To evaluate the influence of the rotation rate of the reciprocal shaker on nicotine degradation, strain N1 was cultivated at shaking rates of 0, 120, 180, and 220 rpm. The optimized conditions were then used for subsequent work. During the incubation period, aliquots of the culture medium were sampled at pre-determined intervals and analyzed at 600 nm using a 2100 spectrophotometer (Unic Company, Shanghai). The samples were also preserved at −20°C for HPLC and UV absorption analyses.

Degradation of nicotine by strain N1

P. geniculata N1 was cultured under optimal conditions in ''nic medium'', Luria-Bertani (LB) medium, and LB medium with 1 g l−1 nicotine, harvested during the late-exponential phase by centrifugation at 6,000× g for 8 min at 4°C, and washed twice with sodium phosphate buffer (100 mM, pH 7.0). The cells were then suspended in deionized water (OD600 nm ∼5) for the reaction (called resting cells). Resting cells were resuspended in PBS buffer (pH 7.0) and adjusted to OD600 nm ∼15, with the addition of 10% glycerol, 1 mM DTT, and 2.5 mM PMSF. After sonication (5 s on, 5 s off, 90 cycles), cell lysates were centrifuged at 12,000 rpm for 20 min, and the supernatant was used for the reaction (called crude cells). The nicotine degradation assay was performed at 30°C on a shaker rotating at 180 rpm.
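The growth and degradation time courses collected under the conditions above are typically summarized by a growth rate and an apparent degradation rate. The sketch below is a hypothetical post-processing example, not code or data from this study: it fits a logistic curve to OD600 readings and a straight line to the early HPLC nicotine measurements, and all numerical values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: K = maximum biomass (OD600), r = specific growth rate, t0 = midpoint."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t_days = np.array([0, 1, 2, 3, 4, 5, 6])                      # sampling times (days)
od600 = np.array([0.05, 0.08, 0.2, 0.55, 0.9, 1.0, 1.0])       # placeholder OD600 readings
nicotine = np.array([1.0, 0.9, 0.6, 0.25, 0.05, 0.0, 0.0])     # placeholder g/l from HPLC

(K, r, t0), _ = curve_fit(logistic, t_days, od600, p0=[1.0, 1.5, 3.0])
print(f"max biomass ~ {K:.2f} OD600, specific growth rate ~ {r:.2f} per day")

# Apparent zero-order degradation rate over the active phase (days 1-4):
mask = (t_days >= 1) & (t_days <= 4)
slope = np.polyfit(t_days[mask], nicotine[mask], 1)[0]
print(f"apparent degradation rate ~ {-slope:.2f} g/l per day")
```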
Identification of metabolites in nicotine degradation After the ''resting cell reaction'', the reaction mixture (1 ml) was evaporated until its dry at 50uC under the reduced pressure, and then dissolved in 200 ml of acetonitrile. The resulting solution was transferred to a vial and dried under nitrogen stream. Samples were analyzed using a GC-MS system (GCD 1800C, Hewlett-Packard) equipped with a flame ionization detector and a 50-mlong J&W DB-5MS column (Folsom, CA, USA) at 140uC. The injection port and detector were set at 260uC and 280uC, respectively. LC-MS analysis was performed by Agilent 1290 (ultra-performance liquid chromatography, UPLC) coupled with an Agilent 6230 electrospray ionization-time-of-flight-mass spectrometry (ESI-TOF-MS) with methanol-0.1% formic acid mixture General analytical techniques The nicotine present in the culture medium was quantified by HPLC (Agilent 1200 series) equipped with an Eclipse XDB-C18 column (column size, 25064.6 mm; particle size, 5 mm; Agilent). A mixture of methanol-1 mmol H 2 SO 4 (5:95 v v 21 ) was used as the mobile phase, at a flow rate of 0.5 ml min 21 . Qualitative analysis of nicotine and metabolites was carried out by UV-2500 spectrophotometer (Shimadzu). Nucleotide sequence accession number The nucleotide sequence reported in the present study has been deposited in the GenBank under the accession number JN607239. Isolation and identification of strain N1 The isolated strain N1, which can utilize nicotine as the sole source of carbon and nitrogen, has been deposited at CCTCC under the accession number M2011183. Strain N1 forms small, circular, and convex colonies with neat edges on nicotine agar ( Figure S1A). Notably, the colonies were yellow, which is rarely observed in case of nicotine-degrading strains. The strain was identified as a non-spore-forming, gram-negative rod (0.561.5 mm) with 2 or 3 flagella at one pole. The image of strain N1 is presented in Figure S1B. Strain N1 could utilize a narrow range of carbon sources such as polychrom, and it grew weakly in glucose, amygdalin, arbutin, and saligenin ( Table 1). The physiological and biochemical characteristics, performed at CCTCC, are shown in Tables 1 and 2. Strain N1 incubated at 30uC showed the following characteristics: growth at 5% sodium chloride; positive for catalase and oxidase; positive for arginine dihydrogenase and lysine decarboxylase; negative for ornithine decarboxylase; utilization of citric acid or polychrome as the sole source of carbon for growth; positive for lipase (C14); negative for H 2 S and indole production; and negative for Voges-Proskauer test ( Table 2). The characteristics of strain N1 were strikingly similar to those of previously reported Pseudomonas geniculata strains [20]. The 16S rRNA sequence exhibits 99% identity with Pseudomonas and Stenotrophomonas species. Phylogenetic tree of 16S rRNA from 35 different strains is constructed using the molecular evolutionary genetics analysis tool (MEGA4.1) by neighbor joining (NJ) method and repeated bootstrapping for 1000 times was performed. Strain N1 is closest to the ortholog from Pseudomonas geniculata strain ATCC 19374T (Figure 1). In conclusion, strain N1 was classified as Pseudomonas geniculata based on the above results. Cell growth and nicotine degradation The effects of temperature on strain N1 is shown in Figure 2A and 2B. Temperature has a dramatic influence on the growth of N1, which showed the maximum rate of growth and nicotine degradation at 30uC. 
The growth rate was much slower with a gradual drop in temperature. Notably, little-to-no growth was observed at 34°C, which indicates a narrow tolerance range for temperature. Figure 2C and 2D show the impact of pH on the growth of strain N1. The data show that strain N1 could grow at pH values ranging from 5.5 to 7.0. Thus, strain N1 prefers a weakly acidic environment, ranging from pH 6.0 to 6.5. The rate of cell growth dropped remarkably when the pH was below 6.0; however, the maximum biomass did not show a major difference. With an increase in the pH value, the cell growth rate dropped slightly in a neutral environment, and no growth was detected in an alkaline environment. It should also be noted that the influence of pH on nicotine degradation was not significant. The degradation rate was rather stable at pH values ranging from 6.0 to 7.5, whereas it was much slower at a pH of 5.5, which was in line with the pattern of cell growth.

Figure 2E and 2F illustrate the nicotine tolerance of strain N1. Strain N1 could grow well when the nicotine concentration was below 2 g l−1. Moreover, with an increase in the concentration of nicotine in the growth medium, the maximum biomass increased proportionally. However, growth was much slower when the initial nicotine concentration was 1.5 g l−1 rather than 1.0 g l−1. The maximum biomass was reached after 4.5 days when the nicotine concentration was 1.0 g l−1, while it took more than 8 days to reach the stationary phase in the presence of 1.5 g l−1 nicotine. As shown in Figure 2G and 2H, the rotation rate of the shaker can impact cell growth by altering the oxygen supply. Growth was extremely slow, with low maximum biomass, when the cultures were kept stationary. This finding confirmed our initial results, which identified strain N1 as an aerobic bacterium. The optimal rotation rate of 120 rpm resulted in the maximum growth rate and maximum biomass production. In conclusion, P. geniculata N1 grows best at 30°C, pH 6.5, and 120 rpm, with a maximum nicotine-tolerating capability of 1.5 g l−1 (Figure 3).

Nicotine degradation by resting and crude cells

Resting cells harvested from nicotine medium (see Materials and Methods) were able to degrade 3 g l−1 nicotine within 3 h. As shown in Figure 4, the decrease in nicotine concentration and the formation of new peaks in the UV absorption spectrum (Figure 4A) or HPLC chromatogram (Figure 4B and Figure S2) suggest the degradation of nicotine and the generation of new metabolites. In contrast, cells cultivated in LB medium did not exhibit the ability to degrade nicotine, illustrating that the enzymes required for nicotine degradation are inducible (Figure S3). The crude cells of strain N1 harvested from nicotine medium were obtained after sonication in phosphate buffer (see Materials and Methods). However, the cell extracts could not degrade nicotine, which is similar to the findings with strain Pseudomonas putida S16 [17] (Figure S4). The phylogenetic tree was constructed using the molecular evolutionary genetics analysis tool (MEGA 4.1) with the neighbor-joining (NJ) method [19], and repeated bootstrapping was performed 1,000 times. doi:10.1371/journal.pone.0084399.g001

Nicotine biotransformation and metabolites identification

The culture broth of strain N1 was yellowish green, and blue color did not develop during nicotine biotransformation. This indicated that the nicotine-degradation pathway used differed from that used by Arthrobacter, Nocardioides and Rhodococcus strains.
The GC-MS chromatogram is shown in Figure 5. The structures of compound A (nicotine, 12.603 min), B (myosmine, 13.581 min), and E (cotinine, 17.102 min) could be identified by comparing their mass spectra with the standard GC-MS spectral library ( Figure 5). Compound C (18.883 min) exhibited the following Figure 6). Discussion The highly toxic alkaloid nicotine, present in tobacco waste, is removed from the environment via mineralization by bacteria. Basic insights into the steps and intermediates of nicotine degradation by Arthrobacter and Pseudomonas species have been proposed and elucidated [4,18,21]. In this study, a novel nicotinedegrading bacterium N1 was isolated from tobacco leaves. The physiological and biochemical data show that the strain N1 belongs to the genus Pseudomonas. Most of the morphological and physiological traits of strain N1 were identical to those of Pseudomonas geniculata [20]. P. geniculata has been poorly reported in previously published literature, and this study is the first report demonstrating the nicotine-degrading ability of P. geniculata. In addition, it is interesting that strain N1 could utilize only a narrow range of carbon sources and efficiently degrade nicotine. Strain N1 may have a powerful membrane transport capacity; it has been reported to possess 28 multidrug efflux pump genes [22]. These fingdings imply a highly efficient nicotine uptake capacity of the strain and an efficient removal of end-products of nicotine catabolism from the cells which may help to explain the nicotine-degrading properties of strain N1. Current understanding of nicotine degradation in bacteria is based on characterization of 6-hydroxynicotine (pyridine pathway) in Arthrobacter species [4] and N-methylmyosmine (pyrrolidine pathway) in Pseudomonas species [22][23][24][25]. Otherwise, Agrobacterium tumefaciens strain S33 could firstly transform nicotine to 6-hydroxy-N-methylmyosmine using pyridine pathway, and then further degrade 6-hydroxy-N-methylmyosmine to 6-hydrxoxy-3-succinoylpyridine and 2,5-dihydroxypyridine using pyrrolidine pathway [12]. In the present study, the formation of blue pigment was not observed during the transformation of nicotine by strain N1. Therefore, it can be proposed that the latter catabolic pathway of nicotine degradation is likely to be different from that of Arthrobacter. The intermediates 6-hydroxynicotine, 6-hydroxy-Nmethymyosime, 6-hydroxy-pseudooxynicotine, and 2,6-dihydroxypseudooxynicotine were identified by GC-MS and LC-MS analyses. The intermediates 6-hydroxy-3-succinoylpridine and 2,5-dihydroxy-pridine in P. putida S16 were not detected in the ''resting cell reactions'' of strain N1. It can be concluded that the upper pathway of nicotine degradation in strain N1 was similar to the pyridine pathway, and the further conversion of 2,6dihydroxypseudooxynicotine might be different from that ob-served in Arthrobacter and Pseudomonas. In addition, the intermediates myosime and cotinine can be detected by GC-MS. Thus, the direct demethylation to form myosime and the hydroxylation of nicotine at position 2 of pyrrolidine ring to form cotinine was proposed, and found to be similar to that of the strain Pseudomonas sp. CS3 [26]. In conclusion, it is proposed that the strain Pseudomonas geniculata N1 can decompose nicotine via a unique nicotine-degrading pathway.
2016-05-12T22:15:10.714Z
2014-01-08T00:00:00.000
{ "year": 2014, "sha1": "cb4119a4459986e583e16f1c0374d9efdfa33b32", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0084399&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cb4119a4459986e583e16f1c0374d9efdfa33b32", "s2fieldsofstudy": [ "Biology", "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
227172435
pes2o/s2orc
v3-fos-license
Ecological Reference Points for Atlantic Menhaden Established Using an Ecosystem Model of Intermediate Complexity Atlantic menhaden (Brevoortia tyrannus) are an important forage fish for many predators, and they also support the largest commercial fishery by weight on the U.S. East Coast. Menhaden management has been working toward ecological reference points (ERPs) that account for menhaden’s role in the ecosystem. The goal of this work was to develop menhaden ERPs using ecosystem models. An existing Ecopath with Ecosim model of the Northwest Atlantic Continental Shelf (NWACS) was reduced in complexity from 61 to 17 species/functional groups. The new NWACS model of intermediate complexity for ecosystems (NWACS-MICE) serves to link the dynamics of menhaden with key managed predators. Striped bass (Morone saxatilis) were determined to be most sensitive to menhaden harvest and therefore served as an indicator of ecosystem impacts. ERPs were based on the tradeoff relationship between the equilibrium biomass of striped bass and menhaden fishing mortality (F). The ERPs were defined as the menhaden F rates that maintain striped bass at their biomass target and threshold when striped bass are fished at their Ftarget, and all other modeled species were fished at status quo levels. These correspond to an ERP Ftarget of 0.19 and an ERP Fthreshold of 0.57, which are lower than the single species reference points by 30–40%, but higher than current (2017) menhaden F. The ERPs were then fed back into the age-structured stock assessment model projections to provide information on total allowable catch. The ERPs developed in this study were adopted by the Atlantic menhaden Management Board, marking a shift toward ecosystem-based fishery management for this economically and ecologically important species. INTRODUCTION Forage fishes are abundant, schooling, mid-trophic level fishes that contribute substantially to the diet of many larger predators and serve central roles in energy transfer within ecosystems, but many forage species are themselves harvested and support some of the largest fisheries in the world. Due to their role in the ecosystem and their environmentally driven fluctuations, forage fish and their management have become a focal issue in the call for ecosystem-based fisheries management (EBFM) approaches (Pikitch et al., 2004;Dickey-Collas et al., 2014;Rice and Duplisea, 2014;National Marine Fisheries Service [NMFS], 2016; Siple et al., 2019). Although several general recommendations have been proposed to guide forage fish harvest rates and management policy (Constable et al., 2000;Cury et al., 2011;Smith et al., 2011;Pikitch et al., 2012), the effect that forage fish harvest has on predator populations remains a subject of debate among scientists (Hilborn et al., 2017;Pikitch et al., 2018). Despite the different viewpoints, there is a consensus that casespecific modeling and research are necessary to address specific ecological considerations and management challenges associated with individual forage fisheries or the systems they reside in (Hilborn et al., 2017;Pikitch et al., 2018). Multi-species and ecosystem models that account for predator-prey dynamics are essential tools for evaluating the ecological impacts of forage fish harvest policies. However, the added complexity and data requirements of these approaches introduce considerable uncertainty into the management advice they provide. 
Overly simple models may not provide a good enough representation of the ecosystem and can lead to poor fits and model bias, and reduced stakeholder buy-in; while full endto-end ecosystem models require an increased understanding of species-environment interactions and have high parameter uncertainty (Collie et al., 2016). Models of intermediate complexity for ecosystem assessment, or MICE models, seek to strike a balance by including only the necessary components to address the main management question(s) (Plagányi et al., 2014). For example, a MICE model of the California Current included three forage species and two predator species and was used to evaluate forage fish management systems and identify key sources of uncertainty (Punt et al., 2016;Kaplan et al., 2019). While MICE models have some clear advantages (speed and ease of use, fewer data requirements, and simpler interpretation), they should be compared to other intermediate and highly complex models of the same system to check against critical model biases (Plagányi et al., 2014;Kaplan et al., 2019). Atlantic menhaden (Brevoortia tyrannus), members of the Clupeidae family, are a planktivorous schooling fish found in Atlantic waters from Nova Scotia to Florida (Ahrenholz, 1991). They are prey for a wide range of other species, including commercially and recreationally important finfish like striped bass [Morone saxatilis (Hartman and Brandt, 1995) and bluefin tuna (Thunnus thynnus) (Butler et al., 2010), piscivorous birds (Mersmann, 1989;Glass and Watts, 2009), and marine mammals (Gannon and Waples, 2004)]. Atlantic menhaden have been the target of commercial fisheries since the 1800s (Ahrenholz et al., 1987). The majority of landings are taken by the purse-seine reduction fishery, which processes the catch into fish meal and fish oil for aquaculture and animal feed, dietary supplements, and other products. Atlantic menhaden are also harvested by mixed gear fisheries in most states for use as bait in other commercial and recreational fisheries (SEDAR, 2020a). Landings peaked in the mid-1950s at about 700,000 mt per year; over the past decade, total landings have averaged approximately 200,000 mt per year with an average annual value of $40.8 million USD, making Atlantic menhaden the largest fishery by weight on the U.S. East Coast for that time period (National Marine Fisheries Service [NMFS], 2019). Because Atlantic menhaden have been primarily caught in state waters, the species is managed by the Atlantic States Marine Fisheries Commission (ASMFC). ASMFC also manages, solely or jointly with the National Oceanic and Atmospheric Administration (NOAA), several predator species that consume Atlantic menhaden, including striped bass, bluefish (Pomatomus saltatrix), weakfish (Cynoscion regalis), and spiny dogfish (Squalus acanthias). The role of Atlantic menhaden as a forage species has long been recognized, particularly for striped bass, which is arguably ASMFC's highest profile predator species. There has been increasing interest from managers and stakeholders in accounting for Atlantic menhaden's ecosystem services when setting regulations and harvest limits. ASMFC convened a workshop with managers, scientists, and stakeholders to identify ecosystem management objectives for Atlantic menhaden. 
The objectives included (1) sustaining menhaden to provide for directed fisheries, (2) sustaining menhaden for consumptive needs of predators, (3) sustaining menhaden to provide stability across all fisheries, and (4) minimizing risk due to a changing environment (ASMFC, 2015b). ASMFC has already implemented precautionary measures to achieve these objectives. In 2006, harvest by the purse seine reduction fishery within the Chesapeake Bay was capped due to concerns about localized depletion of Atlantic menhaden in an important predator nursery area (ASMFC, 2005), and in 2017 the coastwide total allowable catch (TAC) was set at a level lower than the TAC at the singlespecies target F to leave more Atlantic menhaden in the water for predators (ASMFC, 2017). However, these measures were somewhat ad hoc and were not based on quantitative analyses. Developing quantitative ecological reference points (ERPs) that take into account Atlantic menhaden's role as a forage species remained a high priority for ASMFC. With the passage of Amendment 3 to the Menhaden Fishery Management Plan in 2017, the ERP workgroup was tasked with developing ERPs for management that account for menhaden's role as a forage fish. Several models were considered as part of this process. The models ranged from simple to complex and included a time-varying intrinsic growth rate surplus production model (Nesslage and Wilberg, 2019), a two-species surplus production model with predation (Uphoff and Sharov, 2018), a multispecies statistical catch-at-age model (Curti et al., 2013;McNamee, 2018), an Ecopath with Ecosim (EwE) MICE model with a limited number of predator and prey species (described here), and a more holistic EwE model that included many more menhaden predators (Buchheister et al., 2017a,b). Of these, the EwE MICE model was put forward as the recommended tool for developing the ERPs because it included bottom-up effects of menhaden harvest on predators, captured the dynamics of key managed predator species, and can be updated on a timeframe suitable for management (SEDAR, 2020b). This paper describes the development of the final EwE MICE model and its utility in establishing Atlantic menhaden ERPs. The overall goal of this work was to identify tradeoffs associated with Atlantic menhaden harvest and establish management reference points that account for the dietary needs of menhaden predators. Ecopath With Ecosim We developed a model of intermediate complexity for ecosystem assessment for the Northwest Atlantic Continental Shelf, hereafter called NWACS-MICE, using EwE. EwE is a trophic dynamic modeling package that facilitates management of biomass and food web data for whole ecosystems and has been widely used for analysis of aquatic resources (Pauly et al., 2000;Christensen and Walters, 2004a;Colléter et al., 2015). The Ecopath component of EwE is a static, mass-balance view of the ecosystem that allows for representation of age structure and provides the initial state for dynamic modeling. In Ecopath, the production of each modeled species or functional group is allocated among fishing, predation, other mortality, and migration while maintaining mass-balance between groups. In Ecosim, biomass dynamics are modeled on a monthly time step as a series of differential equations, where the change in biomass for each group is predicted as its consumption minus losses to predation, fishing, migration, and other unexplained natural mortality (Walters et al., 1997). 
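For orientation, the Ecosim biomass dynamics just described can be stated compactly in the form given by Walters et al. (1997); the notation below is generic and not tied to the NWACS-MICE parameter files:

$$\frac{dB_i}{dt} = g_i \sum_{j} Q_{ji} - \sum_{j} Q_{ij} + I_i - \left(M0_i + F_i + e_i\right) B_i$$

where B_i is the biomass of group i, g_i its net growth efficiency, Q_{ji} its consumption of prey group j, Q_{ij} its losses to predator j, I_i immigration, e_i the emigration rate, M0_i other (unexplained) natural mortality, and F_i fishing mortality.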
The EwE software package also includes several built-in functions that were utilized in the development of menhaden ERPs. These included the time series fitting routine, equilibrium FMSY analysis , emergent stock-recruit curves, and batch run processing with the multi-sim plugin. For full details on the underlying theory, assumptions, equations, and model mechanics of EwE see the original sources (Walters et al., 1997;Christensen and Walters, 2004a;Christensen et al., 2005). In developing menhaden ERPs, it was critical that the chosen model be able to account for top-down predation effects on menhaden as well as the bottom-up effects that menhaden have on their predators. In Ecosim, this is modeled based on foraging arena theory, which states that predator-prey interactions are restricted to spatial and temporal arenas, and movement of prey into the foraging arena determines how much is consumed by predators (Ahrens et al., 2012). The Ecosim vulnerability exchange rate parameters, V ij , describe the exchange rates of prey i from a not vulnerable state into a vulnerable biomass pool where they can be consumed by predator j. The vulnerability parameters control the amount of prey biomass available to predators and therefore regulate consumption, and in turn, the growth and biomass of the predators. Consumption for a predator is mortality for its prey, and the V ij also serve as limits on predation mortality at high predator biomass. Low V ij values restrict flow into the vulnerable state, which thereby limits consumption and prevents any substantial biomass gains in the predator. Large V ij values result in stronger top-down predation effects because the exchange rate of prey into the vulnerable biomass pool is high, allowing for prey biomass to be quickly exhausted by predators. Other Ecosim parameters that factor into the foraging arena equations include foraging time adjustment (FTA) and prey switching. FTA allows groups to spend less time feeding when their densities are low or when food density increases, which lowers exposure to predation under those conditions (i.e., FTA regulates the tradeoff between growth vs. survival). Additionally, the time spent feeding can be directly responsive to changes in predator abundance (risk-sensitive feeding) and some proportion of unexplained mortality can be allowed to vary with feeding time (stronger density-dependence in natural mortality, M). Prey switching is said to occur when predator diets change disproportionately to the relative abundance of their prey. In Ecosim, this is accomplished by modifying the search rates, a ijt , of predator j in relation to changes in biomass B of prey i over time t using a power function a ijt = a ij · B P j it , where the predator switching power parameter (P j ) can range between zero (constant a ij ) and two (fast response). In our analysis, prey switching allowed us to explore whether impacts of menhaden harvest on predators might be moderated by the ability of predators to quickly switch to other prey resources. Spatial Domain and Functional Groups The spatial domain for the model spans the continental shelf of the Northwest Atlantic Ocean from North Carolina to Maine including the Mid-Atlantic Bight, Southern New England, Georges Bank, and Gulf of Maine subregions. The model implicitly represents major estuaries along the coastline, such as the Chesapeake Bay, Delaware Bay, and Long Island Sound, given that diet and biomass data from estuaries were included in the model parameterization. 
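To make the foraging-arena and prey-switching relations described above concrete, the sketch below implements a commonly cited simplified form of the consumption equation and the power-function search-rate adjustment. It is an illustration only: the parameter values are invented, v here is the exchange-rate form of the vulnerability rather than the ratio entered in the EwE interface, and the biomass normalization in the switching function is our assumption (Ecosim's internal scaling may differ).

```python
def arena_consumption(B_prey, B_pred, a, v):
    """Simplified foraging-arena consumption of one prey by one predator:
    prey enter a vulnerable pool at exchange rate v and are removed there at
    effective search rate a, giving Q = a*v*B_prey*B_pred / (2*v + a*B_pred)."""
    return a * v * B_prey * B_pred / (2.0 * v + a * B_pred)

def switched_search_rate(a_base, B_prey, B_prey_base, power):
    """Prey-switching adjustment of the search rate as a power function of prey
    biomass (power = 0 keeps the rate constant, values near 2 give a fast
    response); the normalization by base biomass is an assumption for
    illustration."""
    return a_base * (B_prey / B_prey_base) ** power

# Low vulnerability limits consumption (bottom-up control); high vulnerability
# allows strong top-down predation pressure.
for v in (1.0, 5.0, 100.0):
    q = arena_consumption(B_prey=1.0, B_pred=0.5, a=2.0, v=v)
    print(f"v = {v:6.1f} -> consumption = {q:.3f}")
```

With these relations, a low exchange rate caps predation mortality regardless of predator biomass, which is the bottom-up limitation discussed above.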
Although the domain does not encompass the entire distributional range of Atlantic menhaden (from Florida to Nova Scotia), it is similar to the range of a multispecies virtual population analysis model developed for Atlantic menhaden (Garrison et al., 2010) and of existing ecosystem models for the region (Link et al., 2008. This domain relies on the natural faunal and oceanographic break in North Carolina (Longhurst, 2010), while also including the bulk of historical Atlantic menhaden fishing effort concentrated in the Chesapeake Bay and the Mid-Atlantic (SEDAR, 2020a). An original EwE model of the NWACS was previously developed to inform Atlantic menhaden management in an ecosystem context (Buchheister et al., 2017a,b). The original NWACS model leveraged previous Ecopath models developed for the region (Link et al., 2008). The model consisted of 48 different functional or species groups, with several important species modeled using age stanzas (for a total of 61 unique groups). The model was calibrated to data from 1982 to 2013 and was partially updated to include data through 2017 for key species as part of the Atlantic menhaden ERP development (SEDAR, 2020b). The original NWACS model served as a basis for developing the NWACS-MICE model, which was restricted to focus on key managed species that are connected through food web interactions. The NWACS-MICE model simulated the dynamics of 17 biomass pools including Atlantic menhaden (ages 0 and 1+), striped bass (ages 0-1, 2-5, and 6+), spiny dogfish, bluefish (ages 0 and 1+), weakfish (ages 0 and 1+), Atlantic herring (Clupea harengus, ages 0 and 1+), anchovies (Anchoa spp.), benthic invertebrates, zooplankton, phytoplankton, and detritus ( Table 1). Striped bass, menhaden, spiny dogfish, bluefish, weakfish, and Atlantic herring are managed, or comanaged, by the ASMFC, and regularly undergo formal stock assessments. Of these, striped bass, spiny dogfish, bluefish, and weakfish were identified as major consumers of Atlantic menhaden based on an analysis of diet data (SEDAR, 2020b) from the NOAA Northeast Fisheries Science Center food habits database 1 . These six species are hereafter referred to collectively as the ERP complex. Multiple age stanzas were used to represent basic trophic ontogeny, fishery selectivity, and age-dependent predation for these key species. Anchovies were also included because they are an important prey item for species in the ERP complex. A separate fishing "fleet" for each species in the ERP complex was included in the NWACS-MICE model, where each fleet only captured a single target species and landings included both harvests and dead discards combined overall gear types and fishing sectors. Ecopath Model Inputs The basic data requirements for Ecopath are biomass (B), production to biomass ratio (PB) or total mortality rate (Z), consumption to biomass ratio (QB), diet composition (DC), and landings for each model group. Biomass accumulation (BA) was included to represent non-equilibrium changes in biomass occurring over the Ecopath base year. The NWACS-MICE Ecopath model base year was 1985, which is the earliest common year in all stock assessments for the ERP complex. Biomass inputs (million metric tons) were obtained either directly from stock assessments or by simply adding the biomass of lower trophic level groups from the original NWACS model. 
For all the assessed species, biomasses were taken directly from the singlespecies stock assessments (ASMFC, 2015a(ASMFC, , 2019bNortheast Fisheries Science Center [NEFSC], 2018SEDAR, 2020a) and summed by age for each Ecopath age stanza. For multi stanza groups, biomass was only input for a single age stanza (usually the oldest) and then calculated by Ecopath for other stanzas based on input growth and mortality parameters. Details for biomass calculations of each group are provided in the Supplementary Materials. Biomass accumulation was input to represent non-stationarity within the Ecopath base year of 1985. BA is a flow term expressed as a rate of change, where a negative value signifies biomass depletion during the base year and a positive value indicates biomass gains. For multi-stanza groups, high BA will shift the calculated age distribution to younger ages, representing a strong year class during the Ecopath base year and leading to initial increases during the first few years of an Ecosim simulation. Biomass accumulation was entered for all species in the ERP 1 http://www.nefsc.noaa.gov/femad/pbio/fwdp/ complex except weakfish and spiny dogfish ( Table 1), which according to time series data, showed little change during the base year. The input BA rates were calculated from the stock assessment model output as (B 1986 /B 1985 )-1, where B was the total biomass of all ages. In Ecopath, PB ratios and total mortality rates are used interchangeably because the two values are equal under the assumption of equilibrium (Allen, 1971). Mortality rates for species in the ERP complex were entered as annual total instantaneous mortality, Z, where Z = F + M. Age-specific M was available from each species' stock assessment. For multi stanza groups, M was taken as the average over all ages in each stanza weighted by the 1985 mean numbers-at-age (Table 1). In the case of Atlantic herring, the 2018 assessment used a constant M, thus, the age-varying M vector was taken from the previous stock assessment conducted in 2015, which used the Lorenzen (1996) estimator (Deroba, 2015;Northeast Fisheries Science Center [NEFSC], 2018). Spiny dogfish and anchovy M were taken directly from the original NWACS model, and the PB of the invertebrate and zooplankton groups were taken as the average PB of the inclusive groups from the original NWACS model, weighted by the biomass of those groups. The PB ratio for phytoplankton was taken directly from the original NWACS model. Fishing mortality, F, for each species in the ERP complex was calculated from stock assessment output as the sum of landings for each stanza divided by the average (or mid-year) biomass of each stanza. These F rates were added to numbersweighted mean M to obtain the input Z values. The Ecopath diet matrix describes the proportion of each prey i in the diet of predator j, DC ij . The diet matrix of the original NWACS model was simplified for the NWACS-MICE model by first summing the DC ij across NWACS-MICE prey groups and then averaging across NWACS-MICE predators, weighted by total consumption of each predator (Supplementary Table S1). Any diet proportions of a prey type included in the original NWACS model but not in the MICE model were assigned to diet import, which represents a constant proportion of consumption that is obtained from outside the modeled system. Consumption rates, QB, were input for all consumer groups and taken directly from the original NWACS model ( Table 1). 
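As a concrete illustration of the stanza-level input calculations described above (biomass accumulation, numbers-weighted natural mortality, stanza fishing mortality, and the resulting total mortality entered as P/B), the sketch below uses entirely hypothetical assessment output; none of the values correspond to the NWACS-MICE inputs in Table 1.

```python
import numpy as np

# Hypothetical single-stanza output from an age-structured assessment.
numbers_1985 = np.array([120.0, 60.0, 25.0])   # mean numbers-at-age (millions)
M_at_age     = np.array([0.80, 0.50, 0.40])    # natural mortality at age
landings     = 0.045                           # 1985 stanza landings (million mt)
B_1985, B_1986 = 0.30, 0.33                    # stanza biomass (million mt)

BA = B_1986 / B_1985 - 1.0                       # biomass accumulation rate
M  = np.average(M_at_age, weights=numbers_1985)  # numbers-weighted mean M
F  = landings / B_1985                           # landings / mean stanza biomass
Z  = F + M                                       # entered in Ecopath as P/B

print(f"BA = {BA:.2f}, M = {M:.2f}, F = {F:.2f}, Z (P/B) = {Z:.2f}")
```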
For multi-stanza species QB was entered for the leading stanza only and calculated for other ages based on input biomass, mortality, and growth parameters. For aggregate groups (inverts and zooplankton) the QB was taken as the weighted average QB for inclusive groups from the original NWACS model weighted by the biomass of each group. Lastly, landings were included for the ERP complex species (Table 1) and derived from stock assessment outputs by summing the 1985 landings-at-age across ages within each stanza. Time Series Data The NWACS-MICE Ecosim model was calibrated to time series of observed abundance and catch from 1985 to 2017 using species and age-specific time series of fishing mortality as forcing functions. A total of 18 indices of abundance and 10 catch time series were used as reference data during model calibration (Supplementary Table S2, SEDAR, 2020b). Relative abundance time series for species in the ERP complex were obtained from fisheries independent surveys and recreational creel surveys as reported in the stock assessments. Given that some species had many such time series, we included no more than two times series that were recommended by the ASMFC's Species Technical Committees as the most representative for each species. Catch time series were assembled from the stock assessment report files as the landings and dead discards in weight, summed over all gears and age classes for each stanza. Fishing mortality was used as a forcing time series in Ecosim for all harvested species except spiny dogfish, which used catch forcing instead because F estimates were not available. Time series weights (one for each reference time series) were derived from the year-specific coefficient of variation (CV) for each survey, calculated as the inverse of the mean CV over all available years (i.e., 1/CV) such that more precise data streams had higher weights and thus more influence on model fit. Ecosim Calibration Procedure Fitting an Ecosim model begins by first identifying the most sensitive V ij parameters and then estimating those parameters to improve the model's goodness-of-fit as assessed by the sum of squared differences (SS) between predicted and observed biomass and catch time series. As a conservative approach, it has been recommended to only estimate K-1 parameters (Heymans et al., 2016), where K is the number of reference time series (i.e., observed biomass and catch) used to tune the model. Ecosim models are prone to local minima in SS, thus requiring repeated vulnerability searches to find model convergence. Therefore, a "repeated search" methodology (described in the Supplementary Materials) was implemented where the sensitivity and estimation routine was repeated until no further improvement in the SS and AIC was obtained. The vulnerabilities were reset to their default value of 2 and the repeated search was initiated after any changes were made to Ecopath inputs, FTA parameters, prey switching parameters, or time series forcing functions. When calibrating Ecosim models, the V ij parameters are often estimated at extremely high values (1 × 10 9 ) during the fitting process, which may result in theoretical predation rates far above the prey's Z when predator biomass is high. While this may improve the SS measure-of-fit over the period of observed data, the high V ij could lead to dynamic instability, exaggerated top-down effects, and groups crashing entirely when projecting extreme fishing or environmental scenarios. 
To correct for this, we set an upper limit to the vulnerability parameters to prevent the theoretical maximum predation mortality from exceeding the natural mortality of the prey (see the Supplementary Materials for details). Additionally, V ij estimated at the lower bound of 1.0 can be problematic in projections scenarios and often causes species to be unresponsive to fishing; therefore a minimum vulnerability of 1.02 was used. Ecosim Base Run Configuration Over 30 different Ecosim configurations were tested during the development of the NWACS-MICE model representing alternative inputs and assumptions for diet composition, foraging time adjustments, prey switching, vulnerability caps, primary production anomalies, seasonal prey availability, and recruitment deviations (Supplementary Table S3). Each model was fit following the repeated search methodology and then adjusted by applying the minimum vulnerability of 1.02 and the upper V ij limit described above and in the Supplementary Materials. We began by fitting Ecosim with recommended default settings which included FTA of 0.5 for the youngest age stanzas only, which allows for compensatory improvements in juvenile survival at low stock sizes due to density-dependent foraging behavior . Next, we fit a series of models that included prey switching to simulate a process where predators may switch to more abundant prey items when menhaden abundance is low, thereby mitigating some of the negative effects that menhaden harvest may have on predator populations. Separate NWACS-MICE models were fit with prey switching power P j values of 0.5, 1, and 1.5 applied to all menhaden predators. Of the values considered, P j = 1 (run 5) resulted in the lowest SS and was the setting used in the base run. To determine whether the estimated V ij 's might cause dynamic instability, we also inspected emergent properties of the model as additional diagnostics to the Ecosim SS, following best practices of Heymans et al. (2016). This included an equilibrium F MSY analysis applied to each species in the ERP complex by running long term simulations over a range of F values (see Supplementary Materials); an evaluation of emergent stock recruit curves in Ecosim (Walters and Martell, 2004); and checking whether Ecosim could generate expected biomass responses when species were fished at their proxy single-species reference points. The final base run (run 8) was fitted with prey switching power P j = 1 and vulnerability limits applied (lower V ij = 1.02 and upper V ij with M2 cap = 1), plus a few manual adjustments to parameters that improved model stability and emergent property diagnostics. The manual changes were arrived at through an iterative process and included: setting the proportion of other mortality (M0) sensitive to foraging time equal to zero and predator effect on foraging time equal to 1 for juvenile striped bass (e.g., risksensitive foraging time and lower density-dependence in M); and raising the minimum V ij limit slightly from 1.02 to 1.3 for the menhaden-zooplankton interactions, to 1.05 for spiny dogfish, to 1.1 for bluefish, and 1.5 for weakfish. These small increases in the minimum V ij were found to improve diagnostics in the single species projection scenarios and equilibrium F MSY analysis. Establishing the ERPs Of the species in the ERP complex, striped bass was the most responsive to changes in Atlantic menhaden F. 
This was supported by analysis from the original NWACS model that evaluated a broader suite of species and found that striped bass and nearshore piscivorous birds were the most sensitive menhaden predators, with both showing similar responses to increases in menhaden F (Buchheister et al., 2017a). Therefore, striped bass was used as an indicator of the impacts of Atlantic menhaden fishing pressure on the ecosystem for the development of ERPs using the NWACS-MICE model. ERPs based on striped bass biomass were assumed to also sustain other species in the ecosystem that were less sensitive to levels of Atlantic menhaden removals. Projections were run with the NWACS-MICE Ecosim model from 2018 to 2057 over a range of Atlantic menhaden and striped bass F. In these simulations, striped bass F ranged from 0 to 2 times F 2017 , and Atlantic menhaden F ranged from 0 to 10 times F 2017 . Bluefish, weakfish, spiny dogfish, and Atlantic herring were held constant at F 2017 in these projections. For striped bass, which has two harvested age stanzas, the F multipliers were applied to each stanza (i.e., a F multiplier of 0.5 would be a 50% reduction in F 2017 for all harvested age stanzas). For each simulation, a biomass ratio for striped bass was calculated as age 6+ biomass in the terminal year of the projection divided by the Ecosim target age 6+ biomass, where the Ecosim biomass target was based on the ratio of B target /B 2017 = 1.58 from the stock assessment (Northeast Fisheries Science Center [NEFSC], 2019a). Biomass of striped bass age 6+ from the NWACS-MICE model was treated as a proxy for spawning stock biomass reference points, since females mature between ages 4 to 8. Similarly, the biomass of bluefish and weakfish were predicted as a function of striped bass and Atlantic menhaden F and expressed as ratios to their single species reference points. For bluefish we used the biomass target (2.06 * B 2017 ) and for weakfish we used the biomass threshold (3.58 * B 2017 ) from their respective stock assessments (ASMFC, 2019b; Northeast Fisheries Science Center [NEFSC], 2019b) as the single species reference points. The menhaden ERPs were based on the relationship of striped bass biomass to menhaden fishing mortality, when striped bass are fished at their single-species F target (0.635 * F 2017 ) and all other species in the ERP complex were held constant at F 2017 . Thus, we defined the ERP F target as the maximum Atlantic menhaden F that maintains striped bass at their biomass target, when striped bass are fished at F target and all other species were fished at 2017 rates. The ERP F threshold was defined as the maximum Atlantic menhaden F that maintains striped bass at their biomass threshold when striped bass are fished at F target . Total Allowable Catch Projections Atlantic menhaden are managed using a coastwide total allowable catch (TAC); as a consequence, the menhaden ERPs must provide decision support for setting the coastwide TAC. Therefore, we used stock assessment model projections to determine the probability that the single species F target and F threshold would exceed the ERP F rates from Ecosim and to estimate the TAC with a 50% probability of exceeding the ERP F target . The single species stock assessment for Atlantic menhaden was conducted using the Beaufort Assessment Model (BAM), which is an age-structured statistical catch-at-age model fitted to landings, age composition, length composition, and index data (SEDAR, 2020a). 
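The mechanics of reading ERPs off the striped bass tradeoff relationship defined above reduce to an interpolation step: run the equilibrium projections over a grid of menhaden F with striped bass at their F target, then find where the resulting biomass ratio crosses the target and threshold levels. The sketch below assumes the projection output is already in hand; the grid and biomass ratios are invented for illustration.

```python
import numpy as np

# Hypothetical equilibrium output: striped bass B/Btarget (fished at Ftarget)
# over an increasing grid of menhaden F values.
menhaden_F    = np.array([0.00, 0.16, 0.32, 0.48, 0.64, 0.80])
sb_biom_ratio = np.array([1.15, 1.02, 0.92, 0.84, 0.76, 0.70])  # declines with F

def erp_from_tradeoff(f_grid, b_ratio, level):
    """Menhaden F at which the (monotonically declining) tradeoff curve
    crosses a given striped bass biomass ratio, by linear interpolation."""
    return float(np.interp(level, b_ratio[::-1], f_grid[::-1]))

erp_target    = erp_from_tradeoff(menhaden_F, sb_biom_ratio, level=1.00)
erp_threshold = erp_from_tradeoff(menhaden_F, sb_biom_ratio, level=0.78)
print(f"ERP Ftarget ~ {erp_target:.2f}, ERP Fthreshold ~ {erp_threshold:.2f}")
```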
Uncertainty in the single species assessment was determined through a Monte Carlo bootstrapping (MCB) procedure, whereby uncertainty in input data and model parameters such as M were bootstrapped to provide distributions around estimated time series such as recruitment, biomass, and estimated parameters. The projections used the base run of the BAM model, as well as the individual runs from the MCB procedure, to forward project abundance at age from the terminal year of the assessment, using total instantaneous mortality, Z. Total instantaneous mortality was the sum of natural mortality, a specified input, and fishing mortality, a value that was solved for during the projection analyses. Recruitment was projected using non-linear time series analysis (Deyle et al., 2018). The projections allowed for determining the risk of exceeding a value of F under specified TAC values. Annual TACs were established using MCB runs from the BAM with a specified probability (usually 50%) of exceeding the single species and ERP F target or F threshold values (SEDAR, 2020a). Projections were run from 2018 to 2022, using actual landings in 2018-2019 and applying the 216,000 mt TAC in 2020 to project the TAC for 2021-2022. Lastly, menhaden stock status is based on reproductive output. Therefore, fecundity-based biological reference points were also generated for the associated F target and F threshold using equilibrium calculations for spawning potential ratio (SPR) where population fecundity was calculated based on a function of mean weight-at-age, spawning frequency, and maturity (Gartland et al., 2019;SEDAR, 2020a). Ecopath Mortality Rates and the Ecotrophic Efficiency of Menhaden Predation mortality (M2) for Atlantic menhaden calculated by the NWACS-MICE Ecopath for 1985 conditions was 0.121 for juveniles and 0.031 for adults age 1+ (Table 1 and Figure 1). Striped bass (all ages combined) and adult bluefish accounted for 36% and 55% of juvenile menhaden M2, respectively, with the other two predators (dogfish and weakfish) accounting for the remaining 9% of menhaden M2 (Figure 1). Predation mortality of adult menhaden was partitioned to 64% adult bluefish and 30% striped bass (Figure 1). The low M2 of Atlantic menhaden in the NWACS-MICE model resulted in ecotrophic efficiencies of 0.08 and 0.15 for juvenile and age 1+ menhaden (Table 1), respectively, meaning that 92% and 85% of the total mortality is unexplained in the model. Bluefish, spiny dogfish, and striped bass accounted for most of the predation mortality overall in the Ecopath model (Figure 1). In fact, bluefish accounted for the largest percentage of M2 on menhaden, juvenile bluefish, and weakfish. Striped bass contributed to at least 20% of the M2 on juvenile striped bass, menhaden, and juvenile weakfish. Predation mortality for the other forage group in the ERP complex, Atlantic herring, was higher than menhaden, with 0.895 for juveniles and 0.377 for adults (Table 1), with most of the mortality coming from spiny dogfish and bluefish (Figure 1). Even though Atlantic herring contribute to a smaller portion of the predator diets compared to menhaden, their M2 rates are higher because biomass is an order of magnitude lower than menhaden ( Table 1). Predation mortality rates were low (<0.002) for the adult age stanzas of predator species in the ERP complex (striped bass, dogfish, and bluefish; Table 1), which is expected for these larger species that have fewer predators, many of which were excluded from the NWACS-MICE model. 
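Returning to the TAC projection step described above, the core operation for each Monte Carlo bootstrap replicate is to solve for the fishing mortality implied by a fixed catch and then tally how often that F exceeds the ERP rates. The sketch below is a heavily simplified, hypothetical version of that idea using a Baranov catch equation for a single aggregated pool; the actual BAM projections are age-structured and use assessment-estimated selectivity, weights-at-age, and recruitment, and all numbers here are invented.

```python
import numpy as np
from scipy.optimize import brentq

def catch_at_F(F, B, M):
    """Baranov catch (biomass units) for an aggregated pool with natural
    mortality M and starting biomass B."""
    Z = F + M
    return (F / Z) * B * (1.0 - np.exp(-Z))

def solve_F_for_TAC(tac, B, M):
    """F that produces the specified TAC, found by root-finding."""
    return brentq(lambda F: catch_at_F(F, B, M) - tac, 1e-6, 5.0)

rng = np.random.default_rng(1)
tac = 0.216                                   # fixed TAC (million mt), illustrative
B_boot = rng.lognormal(mean=np.log(2.0), sigma=0.25, size=2000)  # bootstrap biomass
M = 1.17                                      # aggregated menhaden M, illustrative

F_boot = np.array([solve_F_for_TAC(tac, b, M) for b in B_boot])
print("P(F > ERP Ftarget 0.19)    =", np.mean(F_boot > 0.19))
print("P(F > ERP Fthreshold 0.57) =", np.mean(F_boot > 0.57))
```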
Predation mortality on juvenile stanzas was generally higher than adults, with juvenile bluefish and weakfish having high M2 values of 1.6 and 1.3, respectively (Figure 2). Predation on striped bass juveniles was poorly
TABLE 1 | Basic inputs and estimates from the NWACS-MICE Ecopath model, including biomass (B), biomass accumulation (BA), production to biomass ratio (PB) or total mortality rate (Z), consumption to biomass ratio (QB), trophic level (TL), ecotrophic efficiency (EE), fishing mortality (F), and predation mortality (M2).
Model Fits to Time Series The NWACS-MICE Ecosim model produced reasonably good fits to the observed indices of abundance and catch data (Figures 2, 3). The weighted sum of squares (SS) from all 32 fitted models ranged between 1031 and 1327, with the base run SS = 1186 (Supplementary Table S3). Six of the seven lowest SS were obtained from exploratory scenarios that included annual primary production anomalies or forced annual deviations in juvenile survival that allowed the model to better track interannual variability. Of the models developed for management (runs 1-14), the lowest SS was for run 5 (SS = 1088) with prey switching Pj = 1 and no vulnerability limits applied. However, the equilibrium F MSY output for run 5 (Supplementary Figure S1) demonstrated model instability at high fishing mortality rates on Atlantic menhaden and striped bass, as well as a general lack of sensitivity to fishing for weakfish and bluefish. As previously mentioned, this instability was associated with vulnerability parameters estimated at the lower bound of 1.0. Subsequently, manual adjustments were made to the vulnerabilities and foraging time adjustment parameters in runs 6 and 7 (Supplementary Table S3), eventually leading to the base run 8 that had a higher SS but improved stability at high menhaden F in the equilibrium F MSY analysis. In general, the NWACS-MICE Ecosim model was better at capturing the overall trends in observed abundance data than the interannual variability. The predicted biomass of striped bass followed the general trend in the data, capturing the recovery during the 1990s and gradual decline that has followed since (Figure 2). The high interannual variability in the observed data for both menhaden groups was not captured well by the model, nor was the steep decline in the combined juvenile menhaden index observed in 1985-1990. The spiny dogfish observed index was highly variable without trend, but the model predictions were flatter. Bluefish juveniles and adults fit the data well, whereas weakfish did not fit the observed spike in abundance that occurred in the late 1990s. Lastly, Atlantic herring fit the overall trend but did not predict the high values observed during 1992, 1995, and 2002 or the lows observed during 1998-2000. The Ecosim model was also able to fit the observed catch trends well (Figure 3), although the predicted catch of striped bass was slightly higher than observed catches after 2000. Predator-Prey Surface Plots The analysis of menhaden and striped bass F combinations showed that under current striped bass and menhaden F rates, striped bass will remain below their biomass target and threshold and reach equilibrium at a B/B target ratio of 0.66 (Figure 4). At current striped bass F, the model predicted the striped bass biomass ratio would range between 0.74 (near the striped bass B threshold) when menhaden F = 0 down to 0.42 when menhaden F is 10x the current value.
When striped bass are fished at their F target of 0.2, the model predicted their biomass ratio to range from 1.15 to 0.54 over the range of menhaden F rates considered. Under this scenario, striped bass reached their biomass target at current menhaden F rates, and remained above their biomass threshold for menhaden F rates ranging from zero to approximately four times F 2017 (Figure 4). When both striped bass and Atlantic menhaden were fished at their single species F target rates, the equilibrium striped bass biomass ratio was 0.90, which is above the threshold and below the target. The menhaden and striped bass F combinations explored here resulted in changes to the biomass of other species in the ERP complex, such as bluefish and weakfish that also eat menhaden, are preyed upon by striped bass, or compete with striped bass for food. Bluefish, which was experiencing overfishing in 2017, was predicted to remain below their biomass target across all menhaden and striped bass F combinations (Figure 4). Under current striped bass and menhaden harvest rates, bluefish were predicted to reach equilibrium at a biomass ratio of 0.38. The maximum predicted bluefish biomass ratio was 0.59, which occurred when menhaden F = 0 and striped bass F was 2x F current . Higher F rates on striped bass led to higher biomass of bluefish due to reduced predation and competition (striped bass prey on juvenile bluefish and have diet overlap with bluefish). When striped bass F is reduced, the biomass of bluefish was predicted to decline, with the lowest biomass ratio of 0.19 predicted in scenarios with high menhaden F and low striped bass F (Figure 4). Weakfish biomass was also predicted to remain below their threshold across all striped bass and menhaden F combinations, and would reach equilibrium at a biomass ratio of 0.30 under current F rates (Figure 4). Similar to bluefish, the maximum biomass ratio of 0.33 for weakfish occurred when menhaden F = 0 and striped bass F is 2x the current value. However, when striped bass F was low, weakfish biomass increased slightly under higher menhaden F rates, going from 0.21 at menhaden F = 0 to 0.25 at Fx10. This is because the indirect positive effects (i.e., lower predation and competition) resulting from the impact of menhaden harvest on striped bass and bluefish (Figure 4) outweighed the direct negative effects of menhaden harvest on weakfish. In contrast, when striped bass F is high, weakfish biomass ratios declined with menhaden F, going from 0.33 when menhaden F = 0 to 0.27 at maximum menhaden F (Figure 4). Spiny dogfish and Atlantic herring biomass ratios were highest when menhaden F and striped bass F were both high (Figure 4). Spiny dogfish equilibrium biomass ratio was predicted to be 1.24 under current F rates and remained above their biomass target across nearly all F combinations. Atlantic herring equilibrium biomass ratio under current F was equal to 0.6, and remained below the target over all menhaden and striped bass F rates (Figure 4). Atlantic Menhaden Ecological Reference Points Atlantic menhaden ERPs were estimated based on the relationship between menhaden F and striped bass biomass ratios when striped bass was fished at their biomass target and all other species are fished at their 2017 status quo levels. 
The ERPs are located within the striped bass surface plot where the horizontal dotted line (at striped bass F target = 0.2) intersects the target and threshold B ratio contours (Figure 4), and in Figure 5 where the tradeoff curve crosses the biomass target and threshold. The ERP F target is the menhaden F that maintains striped bass at their biomass target when striped bass are fished at their F target, and it marks the point where the tradeoff curve crosses the target biomass ratio of 1 (Figure 5). The ERP F target was estimated to be 0.19, which was about 20% higher than the current 2017 Atlantic menhaden F of 0.16 and 40% lower than the menhaden single species F target of 0.31 from the stock assessment (Table 2). The ERP F threshold is the menhaden F that maintains striped bass at their biomass threshold (when striped bass are fished at F target), and is the point where the tradeoff curve crosses the threshold biomass ratio of 0.78 (Figure 5). The ERP F threshold was estimated to be 0.57, which is over 260% higher than the current menhaden F, and is about 30% lower than the single species menhaden F threshold of 0.86 from the stock assessment (Table 2). For the projections at the current TAC value, there was 0% probability that the TAC will exceed the ERP F threshold and a moderate (60-66%) chance it will exceed the ERP F target in the short-term. TAC values of 176,800 mt and 187,400 mt for 2021 and 2022, respectively, were associated with a 50% probability of attaining the ERP F target (Table 2). Fecundity-based reference points, in numbers of eggs, associated with the ERP F target and F threshold were 2.00 × 10^15 and 1.49 × 10^15, which were higher than their single species counterparts by just 3% and 2%, respectively, and below the current fecundity of 2.60 × 10^15 eggs (Table 2). Ecological Reference Points Atlantic menhaden ERPs were established using an ecosystem model of intermediate complexity and were based on the tradeoff between menhaden harvest and striped bass biomass. This type of tradeoff relationship is central to any forage fish management system. Recent analyses have focused on understanding these forage fish tradeoffs in both real-world (Koehn et al., 2017) and simulated systems (Essington et al., 2015).
FIGURE 4 | Equilibrium biomass ratios of ERP species as a function of Atlantic menhaden and striped bass F combinations generated by the NWACS-MICE Ecosim model base run. In the striped bass panel, the dashed lines indicate the current F rates, the dotted lines are the target F rates, and the solid black lines indicate the location of the target and threshold biomass ratio contours. All ratios are expressed relative to their single species targets, except for menhaden, which is expressed relative to current (2017) biomass because biomass targets are not defined.
However, our approach is the first to use these tradeoff relationships in actual management of a forage fish. The tradeoff relationship between Atlantic menhaden and striped bass was concave, meaning that small increases in menhaden F resulted in disproportionate drops in striped bass biomass (Walters and Martell, 2004). In addition, the current status of striped bass (B 2017 /B target = 0.6) and menhaden F (F 2017 = 0.16) is suboptimal, i.e., it is below the tradeoff curve, and there is a set of solutions along the tradeoff frontier where both menhaden harvest and striped bass biomass are higher.
By extension, striped bass catch would also be higher at their single-species F target under an optimal configuration. According to the NWACS-MICE model, moving toward an optimal condition first requires a reduction in striped bass F, because biomass was below the threshold across all menhaden F rates. Striped bass was determined to be overfished and experiencing overfishing in the latest stock assessment (Northeast Fisheries Science Center [NEFSC], 2019a) and regulatory changes have already been implemented to reduce F and rebuild the stock (ASMFC, 2019a).
FIGURE 5 | Equilibrium striped bass biomass ratio when fished at F target = 0.2, over a range of menhaden F rates, generated by the NWACS-MICE Ecosim model. The solid black line is the tradeoff curve used to establish the ecological reference points (ERPs). The ERP F target and ERP F threshold are the menhaden F rates where the curve crosses the biomass target and threshold, respectively. Target and threshold F rates from the single-species (SS) stock assessment are included for comparison along with the current menhaden F rate (green line).
The ERPs were developed assuming efforts to reduce striped bass F are successful and would therefore provide enough Atlantic menhaden to support a rebuilt striped bass population. That is, these ERPs do not compromise the performance of striped bass management actions. This satisfied two fundamental ERP objectives previously defined by managers to (1) sustain menhaden for directed fisheries and (2) sustain menhaden for predator species (ASMFC, 2015b). The ERP target and threshold values were found to be 40% and 30% lower than their single species counterparts, respectively. Through a meta-analysis using ecosystem models, Pikitch et al. (2012) recommended that to sustain forage fish populations and their predators, fishing mortality on forage fish should not exceed 50% of F MSY or 50% of natural mortality. In a study that examined collapsed forage fisheries, Patterson (1992) found that sustainability was likely to be achieved when fishing mortality did not exceed 67% of natural mortality. MSY-based reference points were not estimable in the menhaden stock assessment model, but if we assume the single species F reference points are below F MSY, then the ERP F rates could easily be 50% of F MSY or lower. When we compare the ERPs to output from the NWACS-MICE equilibrium analysis (F MSY = 0.81, Supplementary Table S4) or an F MSY proxy from the BAM that achieves an SPR of 40% (F 40%SPR = 1.57), then both ERPs would be well below the 0.5F MSY rule-of-thumb (Pikitch et al., 2012). Compared to a natural mortality rate of 1.17 (Liljestrand et al., 2019), the ERP target and threshold are, respectively, 16% and 49% of M, also below the rules-of-thumb (Patterson, 1992; Pikitch et al., 2012). Therefore, the menhaden ERPs, which were explicitly related to the performance of a single predator, striped bass, were within the range of forage fish harvest rates that have been recommended to enhance forage fish sustainability and provide benefits to the broader ecosystem. Our study uniquely integrated an ecosystem model with an age-structured single species model to provide tactical management advice for a forage species, combining the strengths of both approaches. The NWACS-MICE tool provided strategic advice about the long-term effects of Atlantic menhaden harvest on a limited set of predators and allowed managers to evaluate trade-offs between forage fish harvest and predator biomass.
However, the NWACS-MICE model does not capture the short-term interannual variability in Atlantic menhaden population dynamics, especially with regard to recruitment.
TABLE 2 | Current menhaden fishing mortality (F) and total allowable catch are provided for 2017, the terminal year of the assessment. The ERP F rates were generated by the NWACS-MICE Ecosim model and the associated fecundities and total allowable catch (TAC) projections were estimated by the age-structured assessment model. * These single species F rates for Atlantic menhaden are the full F values representing the maximum fishing mortality rate across ages, whereas the single species reference points used for management are the geometric mean F rates for ages 2-4.
The single species model includes variable recruitment in the projection scenarios and is well suited for providing short-term (3-5 years) tactical advice on TAC levels, but does not provide information on ecosystem responses. This integration of models allows for long-term ecosystem-level planning, while also providing decision support in the form of annual TACs that fit the existing single-species management framework. While the integration of the models may appear straightforward, translation between two models with different levels of complexity, such as different age structures and recruitment assumptions, presented a challenge. Ultimately, a ratio approach was used to convert fishing mortality rates and biomass ratios between the two modeling types. Further propagation of uncertainty from the ecosystem model to the TAC projections will be necessary to fully quantify risk. One possible approach is to apply natural mortality rates from the ecosystem model in the TAC projections to account for any future changes in predation mortality. Uncertainties, Assumptions, and Limitations There remained a substantial amount of unexplained mortality for Atlantic menhaden in the NWACS-MICE model, which was somewhat expected given the limited field of predator species that were included. Similarly, the original NWACS model (Buchheister et al., 2017a) also resulted in high unexplained mortality rates, which was not expected given the inclusion of many more predators. There are several potential explanations for this pattern. First, although many thousand stomach samples were included when creating the Ecopath diet matrices (Buchheister et al., 2017b; SEDAR, 2020b), the dietary contribution of Atlantic menhaden to their predators could be underestimated if, for example, there are intensive spatial-temporal predation events that were not sampled in the diet surveys. Second, the estimated biomass of Atlantic menhaden in the 2019 assessment (SEDAR, 2020a) was more than double the estimate from the previous assessment (SEDAR, 2015), which was due to new, empirical estimates of higher menhaden natural mortality (Liljestrand et al., 2019). If the NWACS-MICE model had used lower biomass estimates and/or mortality rates from previous assessments, menhaden would have a higher EE and lower proportion of unexplained mortality. The uncertainty surrounding menhaden EE and the contribution to predator diets was the basis for a sensitivity run requested by a technical review panel (runs 15-22 in Supplementary Table S3). This configuration resulted in slightly lower ERPs and a steeper tradeoff curve.
It is also possible that the models are correct, and menhaden do have high rates of non-predation natural mortality since they are prone to large fish kills related to hypoxia (Paerl et al., 1998;Smith, 1999) and epizootic infections (Dykstra et al., 1989;Reimschuessel et al., 2003). The truth is likely some combination of these factors, and future work is needed to empirically validate the current estimates of menhaden natural mortality and understand how it is partitioned into sources of fishing, predation, and other causes. The NWACS-MICE model was found to be highly sensitive to the Ecosim vulnerability parameters, which were often estimated at upper and lower bounds. Vulnerability parameters estimated at the bounds may arise due to lack of contrast in the data, omission of key environmental forcing functions, or overly precise optimization criteria. For instance, Ecosim may attempt to explain patterns in the data using predatorprey vulnerabilities that would have been naturally explained by some environmental drivers. Additionally, the vulnerability parameters, along with foraging time adjustment settings, impact the degree of compensation in recruitment, growth, and survival (Christensen and Walters, 2004a), which in turn determine how sensitive a species is to harvest. This is evident in the wide range of F MSY arising from alternative model configurations (Supplementary Table S4 and Supplementary Figure S1). Models with higher F MSY for menhaden would likely result in a flatter tradeoff relationship and vice versa. However, not all parameter settings produced satisfactory fits to the data. Applying minimum and maximum vulnerability caps resulted in slightly worse fits to the data, but drastically improved projection scenarios at high menhaden F leading to more reasonable F MSY estimates, and constrained theoretical maximum predation mortality rates to values that are compatible with natural mortality rates of the prey species. The parameter space in Ecosim models is large and must be evaluated fully to capture the uncertainty in the model (Gaichas et al., 2012). Potential improvements to parameter estimation in Ecosim could apply the vulnerability caps described here as penalized bounds (Bolker et al., 2013;Kinzey et al., 2018) or a constrained minimization approach (Vallino, 2000;Senina et al., 2008) that would prevent the vulnerabilities from being estimated at the upper and lower bounds. Due to the reduced age structure and the combining of fleets in the NWACS-MICE model, asymptotic selectivity was assumed for all species in the ERP complex. In contrast, recent stock assessments of menhaden, bluefish, weakfish, and Atlantic herring assume dome-shaped selectivity for some or all fleets and may allow selectivity to change over time. Flat-topped selectivity generally results in a stronger response to increasing F than dome-shaped selectivity, because older fish are vulnerable to harvest. For the ERPs, we expect that a dome-shaped selectivity for menhaden would lead to higher ERP F rates, i.e., a flatter tradeoff curve. However, to implement dome-shaped selectivity for menhaden, NWACS-MICE would require finer age structure and difficult assumptions about age-specific predation from the diet studies. Nevertheless, improving consistency between NWACS-MICE and the stock assessments has advantages, and the implications of model structure as it relates to selectivity should be explored in future iterations. 
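One way to read the penalized-bounds suggestion made above is sketched below: add barrier terms to the Ecosim sum-of-squares objective that grow without bound as a vulnerability approaches its allowed limits, so the optimizer is discouraged from parking estimates at the bounds. This is a generic illustration of the idea, not code from the EwE package, and the penalty form, limits, and weight are arbitrary choices.

```python
import numpy as np

def bound_penalty(v, lower=1.02, upper=50.0, weight=10.0):
    """Log-barrier terms that go to +infinity as a vulnerability v approaches
    either of its allowed limits; values comfortably inside the bounds add
    little to the objective."""
    v = np.asarray(v, dtype=float)
    return -weight * (np.log(v - lower) + np.log(upper - v))

def penalized_objective(ss, vulnerabilities):
    """Time-series sum of squares plus the barrier terms for all estimated
    vulnerabilities; minimizing this keeps estimates away from the bounds."""
    return ss + float(np.sum(bound_penalty(vulnerabilities)))

print(penalized_objective(ss=1186.0, vulnerabilities=[1.5, 2.0, 10.0]))
```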
The current configuration of the NWACS-MICE model does not include any environmental drivers that might help explain the inter-annual variability in the system. Rather, the model attempts to replicate observed trends in abundance and catch using fishing and trophic processes only. This is a glaring limitation for an ecosystem model centered around a species that is recruitment driven and related to several environmental drivers such as physical transport processes (Checkley et al., 1988;Quinlan et al., 1999) and larger-scale climatic drivers like the Atlantic Multidecal Oscillation (Buchheister et al., 2016). It's also possible that delivery and transport of nutrients from coastal rivers might impact Atlantic menhaden dynamics as it does for Gulf menhaden (Brevoortia patronus) (Govoni, 1997;Leaf, 2017) and Atlantic thread herring (Opisthonema oglinum) (Chagaris et al., 2015), two other members of the family Clupeidae with similar life histories. Stock assessment models also do not explicitly account for environmental drivers either, but they do estimate annual recruitment deviations. Analogous to this, Ecosim has the ability to estimate annual primary production anomalies (runs 23-25), however, those anomalies were not correlated with other information on primary production and were not considered for inclusion. Work is underway to assemble a time series of bottom-up forcing to account for changes in primary productivity and other environmental factors that drive Atlantic menhaden populations. Research and Modeling Recommendations This NWACS-MICE model and the adopted ERPs serve as a step forward in EBFM, but additional research and model development will be beneficial. Expanding the collection of diet and abundance data for the key predators, particularly across seasons and regions, would improve our understanding of the spatiotemporal dynamics of trophic interactions and predatorprey overlap. Accounting for seasonal and spatial migration patterns is also important in this system. For example, we found the ERP tradeoff curve to be sensitive to assumptions about the seasonal availability of Atlantic herring as prey to striped bass. More work is needed to synthesize data to parameterize and validate a spatial-temporal dynamic Ecospace model (Steenbeek et al., 2013). We also recommend improved monitoring of population trends and diet data in non-finfish predators (e.g., birds, marine mammals) and data-poor prey species (e.g., bay anchovies, sand eels, benthic invertebrates, zooplankton, and phytoplankton) to better characterize the importance of Atlantic menhaden and other forage species to the ecosystem dynamics. Future iterations of the NWACS model should explore annual recruitment deviations (from external models), primary production time series, and environmental drivers to better represent interannual variability in the system. An obvious next step in refining the menhaden ERPs is to incorporate additional predators, such as birds, mammals, and other piscivorous fishes that were found to contribute to menhaden mortality in the original NWACS model and/or were sensitive to menhaden harvest. The decision not to include additional predators was primarily a practical one, that aimed to balance model complexity with the added uncertainty that comes with including more species for which we have few data. Due to time constraints, a fair comparison of the ERPs generated by the original NWACS and the NWACS-MICE models was not possible during the development of these ERPs. 
If the full model suggests that the ERPs were severely biased due to model simplification, then that would be grounds to expand the MICE model to include additional species. As Collie et al. (2016) concluded, the "sweet spot" in model complexity strikes a balance between bias and uncertainty, and also depends on the key management questions and the effort required to update and maintain the models for routine operational use. Understanding how model complexity influences the management advice is considered a high priority moving forward. With regards to management advice using the NWACS-MICE model, additional work is needed to characterize uncertainty in model projections and the resulting tradeoff frontier using Monte-Carlo simulations and alternate massbalance parameterizations (Steenbeek et al., 2018). Management strategy evaluation (MSE) presents a more robust technique of incorporating uncertainty and evaluating strategic harvest strategies (Punt et al., 2016;Mackinson et al., 2018;Surma et al., 2018). Any ecosystem-level MSE should be carefully planned and incorporate input from stakeholders and managers of all species considered. Additionally, the optimal solution along the tradeoff frontier can be solved for in Ecosim while considering the socio-economic value of the competing fisheries (Christensen and Walters, 2004b;Heymans et al., 2009;Essington et al., 2015). Given that models of different types or complexities can address slightly different aspects of the tradeoffs in menhaden harvest management, we have also advocated for the continued development of other multispecies models, especially a multispecies statistical catch at age model developed as part of the ERP process (Curti et al., 2013;McNamee, 2018). Moving Toward EBFM The adoption of ERPs developed with the NWACS-MICE tool represents a significant step in incorporating quantitative ecosystem considerations into fisheries management on the US East Coast. ASMFC has sole management authority for Atlantic menhaden, striped bass, and weakfish, and so an ecosystem management approach that considers reference points, management objectives, and trade-offs for all three species together, using a tool like this, is a feasible next step to move EBFM forward in this system. Conceptually, our approach to establishing ERPs is transferable to other predatorprey systems and other multi-species modeling approaches. In fact, ecosystem models already exist in many systems around the world where large forage fisheries are prosecuted. However, the capabilities of this tool to provide broader EBFM advice are somewhat limited by current single-species management frameworks, in which species reference points are set independently, without consideration of ecosystem dynamics or trade-offs with other species. Progress toward true EBFM will require not just strong scientific tools, but also a shift in the management framework to better coordinate stock assessments and ecosystem modeling efforts with management actions. Additionally, stakeholders and managers of all species must come together to define objectives for the ecosystem as a whole and set reference points. A sea change in fisheries management may not be possible overnight, but incremental some steps such as this can be taken to move ecosystem-based management forward. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, upon reasonable request. AUTHOR CONTRIBUTIONS DC, AB, and JB developed the NWACS-MICE model. 
KD provided data inputs and processed model output for management use. MC chaired the ERP workgroup and provided critical strategic and conceptual advice during model development. AS provided data inputs and conducted the single species projections. All authors contributed to model review and writing of the manuscript. FUNDING This work was supported by a grant from the Lenfest Ocean Program (grant number 00032187).
Detection of SARS-CoV-2 virus using an alternative molecular method and evaluation of biochemical, hematological, inflammatory, and oxidative stress in healthcare professionals In early December 2019, an outbreak of coronavirus disease 2019 caused by a new strain of coronavirus (SARS-CoV-2), occurred in the city of Wuhan, Hubei Province, China. On January 30, 2020, the World Health Organization (WHO) declared the outbreak a public health emergency of international concern. Since then, frontline healthcare professionals have been experiencing extremely stressful situations and damage to their physical and mental health. These adverse conditions cause stress and biochemical, hematological, and inflammatory changes, as well as oxidative damage, and could be potentially detrimental to the health of the individual. The study population consisted of frontline health professionals working in BHU in a city in southern Brazil. Among the 45 participants, two were infected with the SARS-CoV-2 virus and were diagnosed using immunochromatographic tests such as salivary RT-LAMP and qRT-PCR. We also evaluated biochemical, hematological, inflammatory, and oxidative stress markers in the participants. The infected professionals (CoV-2-Prof) showed a significant increase in the levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), cholesterol, lactic dehydrogenase, lymphocytes, and monocytes. In this group, the levels of uric acid, triglycerides, leukocytes, neutrophils, hemoglobin, hematocrit, and platelets decreased. In the group of uninfected professionals (NoCoV-2-Prof), significant increase in HDL levels and the percentages of eosinophils and monocytes, was observed. Further, in this group, uric acid, LDH, triglyceride, and cholesterol levels, and the hematocrit count and mean corpuscular volume were significantly reduced. Both groups showed significant inflammatory activity with changes in the levels of C-reactive protein and mucoprotein. The NoCoV-2-Prof group showed significantly elevated plasma cortisol levels. To our kowledge, this study is the first to report the use of the RT-LAMP method with the saliva samples of health professionals, to evalute of SARS-CoV-2. Introduction The emergence of viral diseases poses a serious threat to global public health. Several viral epidemics have emerged in the past few decades. On December 31, 2019, the Chinese Health Authority alerted the World Health Organization (WHO) [1] to several cases of pneumonia of unknown etiology in the city of Wuhan in Hubei province, China [2]. The pathogen causing the new disease was named SARS-CoV-2 coronavirus, and the WHO named the disease coronavirus disease 2019 [3]. On December 14, 2020, 4.24 million confirmed cases and 294,046 deaths worldwide caused by COVID-19 were reported [1]. SARS-CoV-2 was classified as a 2B group beta-coronavirus with high similarity to two bat-derived coronavirus strains, with more than 96% identity. The genome of the new virus comprises a 5ʹ untranslated region (UTR), replicase complex (ORF1ab), Spike (S gene), E, M, and N genes, and 3ʹUTR [4]. Since the outbreak of SARS-CoV-2 infection began, the recommended gold standard assay is polymerase chain reaction with real-time reverse transcription (RT-qPCR). This technique has played an important role in the clinical diagnosis and investigation of suspected cases, with numerous commercially standardized kits and recommendations from the CDC and WHO [5]. 
However, alternative methods have been proposed for the detection of SARS-CoV-2, including loop-mediated isothermal amplification (LAMP), developed by Notomi et al. [6]. RT-LAMP was performed under isothermal conditions of 65 • C for the amplification of genes such as the ORF1ab, and the spike (S), envelope (E), and nucleocapsid genes from SARS-CoV-2, and the results were obtained within 15-40 min [7][8][9][10][11][12][13]. The RT-LAMP technique can detect the virus in both throat and nasopharyngeal swab samples, with a detection limit ranging from 5 to 10 copies of RNA and with a 99%-100% agreement with RT-qPCR [10][11][12]. In addition, the results of RT-LAMP can be assessed by visual colorimetric evaluation, making result acquisition faster and more convenient than that of RT-qPCR [10]. The use of this method for screening SARS-CoV-2 by healthcare professionals has not been reported to date. Health professionals face great demand for work, as well as risks to their physical integrity, and are frequently exposed to contagion by SARS-CoV-2. Therefore, they must be monitored, and the RT-LAMP method emerges as an alternative for the detection of SARS-CoV-2 because its operation is relatively simple and low-cost, and it has a shorter execution time compared to that of RT-qPCR [9]. The physical and psychological impact of stress experienced by health professionals during the pandemic, as well as the fear of becoming infected with SARS-CoV-2 during the performance of professional activities, have been the subject of recent studies on the health conditions of these individuals [14]. Protocols with behavioral tests, as well as the evaluation of laboratory parameters capable of assisting in such investigations, are also important tools for understanding the results of such studies worldwide [12]. Biochemical, hematological, coagulatory, and oxidative damage markers stand out among the investigated laboratory parameters, in addition to numerous inflammatory markers, particularly cytokines, which can be detected in cases of SARS-CoV-2 infection [15][16][17]. We believe that the study of these markers combined with the detection of SARS-CoV-2 by a fast and less expensive technique could be an interesting tool to assist professionals to assess their condition. Thus, the aim of this study was to assess the condition of stress in health professionals in southern Brazil by investigating the presence of SARS-CoV-2 in the saliva of the individuals using the RT-LAMP technique, along with an evaluation of the biochemical, hematological, and inflammatory markers as well as oxidative damage in such professionals and a control group of non-professionals. Study population A prospective, cross-sectional study was conducted, with 90 convenience samples collected from frontline health professionals (45 individuals) in the fight against COVID-19 in Basic Health Units (BHUs) in the municipality of Pinheiro Machado, in the State of Rio Grande do Sul (RS), and samples from the control group (45 samples) composed of healthy individuals who do not work in the health field. A questionnaire was administered to collect socio-demographic data, in addition to closed questions directly related to the work of professionals during the pandemic. The questions also addressed pre-existing diseases and the use of medications. The inclusion criteria were: health professionals who perform their activities in at least one of the BHUs in the town of Pinheiro Machado/RS. 
Exclusion criteria were: health professionals and individuals in the control group who did not sign the informed consent form (ICF) and/or who had already been diagnosed with COVID-19 within three months before the beginning of the study. Samples Saliva samples were collected from individuals in the health professional and nonprofessional control groups to detect SARS-CoV-2 using the RT-LAMP method. All health professionals and individuals in the nonprofessional control group were also tested for SARS-CoV-2 by qRT-PCR of nasopharyngeal swabs at the Laboratório Central do Rio Grande do Sul (LACEN/RS). After an overnight fast, blood samples were collected from all subjects by venous puncture into Vacutainer® tubes (BD Diagnostics, Plymouth, UK) containing the anticoagulants EDTA and sodium citrate. The citrated plasma and serum tubes were centrifuged at 2500×g for 15 min at 4 °C. All collections were performed at the Rita de Cássia Laboratory, Pinheiro Machado/RS, and at the Laboratory of Biochemistry Research and Molecular Biology of Microorganisms (LaPeBBioM), Universidade Federal de Pelotas (UFPel). All collection and transportation of samples used in the study followed the protocols recommended by the Centers for Disease Control and Prevention [18]. This study protocol (4.124.248) was approved by the local research ethics committee, and the volunteers who participated in the study signed a free prior informed consent form. Extraction of viral RNA Viral RNA from the individuals' saliva was extracted using the proteinase K method, as proposed by Chantal et al. [19], with a few modifications: approximately 0.5 mL of the subjects' saliva was collected in sterile Falcon tubes and sent immediately to the laboratory to begin the extraction of the viral RNA. Then, 50 μL of saliva was transferred to a microtube, to which 6.25 μL of proteinase K (20 mg/mL; Ludwig Biotecnologia, Porto Alegre, Brazil) and 50 μL of buffer containing 10× TBE (Ludwig Biotecnologia, Porto Alegre, Brazil) and 1% Tween 20 (Sigma-Aldrich, St. Louis, MO, USA) were added. The tubes were incubated for 1 min at room temperature, and then for a further 4 min in a thermocycler at 55 °C. Subsequently, the tubes were inactivated at 95 °C for 30 min. After this procedure, the material was used to detect SARS-CoV-2 using the RT-LAMP method. Samples from nasopharyngeal swabs, as well as the positive control (inactivated SARS-CoV-2, kindly provided by Dr. Edison Durigon of the University of São Paulo, USP), were extracted using the MagMax™ core nucleic acid purification kit (Applied Biosystems), according to the manufacturer's instructions. After extraction, the RNA was quantified using a NanoDrop® spectrophotometer (Thermo Scientific, Waltham, MA), and approximately 10 ng of RNA was used to perform RT-LAMP and RT-qPCR. RT-LAMP The RT-LAMP reaction was performed on saliva samples from the professional and non-professional groups, as proposed by Park et al. [13], with a few modifications. Briefly, a reaction mixture containing the gene-specific primer set (one primer ending in 5ʹ-…CGAGAAGATGACCCAGATCATGT-3ʹ) was prepared. After the reaction mixtures were prepared, the tubes were incubated in a thermal block at 65 °C for 30 min, and the color change was then read with a colorimeter. 
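Since the study's key methodological comparison is whether saliva RT-LAMP calls agree with nasopharyngeal qRT-PCR, a minimal sketch of how such concordance can be summarized is given below. The sample calls are invented for illustration (mirroring the 2-positive/43-negative split among the 45 professionals) and are not the study's data; percent agreement and Cohen's kappa are generic concordance measures, not statistics reported by the authors.

```python
# Illustrative only: hypothetical positive/negative calls for paired samples.
# Percent agreement and Cohen's kappa between RT-LAMP (saliva) and qRT-PCR (swab).
from collections import Counter

rt_lamp = ["neg"] * 43 + ["pos", "pos"]   # hypothetical calls for 45 professionals
qrt_pcr = ["neg"] * 43 + ["pos", "pos"]   # hypothetical confirmatory results

n = len(rt_lamp)
agree = sum(a == b for a, b in zip(rt_lamp, qrt_pcr))
p_observed = agree / n

# Expected agreement by chance, from the marginal frequencies of each method.
lamp_counts, pcr_counts = Counter(rt_lamp), Counter(qrt_pcr)
p_expected = sum((lamp_counts[c] / n) * (pcr_counts[c] / n) for c in {"pos", "neg"})
kappa = (p_observed - p_expected) / (1 - p_expected) if p_expected < 1 else 1.0

print(f"Percent agreement: {100 * p_observed:.1f}%")
print(f"Cohen's kappa:     {kappa:.2f}")
```

With perfect agreement, as assumed here, kappa evaluates to 1.0; any discordant pairs would lower both measures.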
qRT-PCR RNA samples extracted from the nasopharyngeal swabs of the healthcare professional and control groups were evaluated by qRT-PCR. Biochemical measurements Serum from individuals in the health professional and nonprofessional groups was initially used to assess the presence of anti-SARS-CoV-2 IgG/IgM antibodies using the immunochromatographic method (Wondfo Biotech Co. Ltd., China). Levels of uric acid, albumin, ALT, AST, creatinine, total cholesterol, HDL cholesterol, LDL cholesterol, VLDL cholesterol, glucose, total globulins, lactate dehydrogenase, total proteins, C-reactive protein, and triglycerides were assessed using standard methods on a Cobas MIRA® automated analyzer (Roche Diagnostics, Basel, Switzerland). Mucoprotein was measured colorimetrically using a commercial Labtest kit (Minas Gerais, Brazil), as instructed by the manufacturer. Serum cortisol was determined by enzyme immunoassay (ELISA; Diagnostics Biochem Canada Inc.) and quantified according to the manufacturer's recommendations. Microplate reading was performed on a microplate reader (TP-Reader, Thermo Plate, China) at a wavelength of 450 nm. FRAP, total oxidant status (TOS), and total antioxidant capacity (TAC) were determined on the Cobas MIRA®, as described previously by Erel [21,22]. The ratio of TOS to TAC gave the oxidative stress index (OSI), an indicator of the degree of oxidative stress [23]. Anticoagulant activity Anticoagulant activity was determined based on the activated partial thromboplastin time (aPTT) and the prothrombin time (PT), which reflect the intrinsic and extrinsic coagulation pathways, respectively, in citrated plasma [24]. The patient samples were centrifuged at 800×g for 10 min, and aPTT and PT were determined using a monochannel coagulometer (Clotimer, Brazil) with commercial kits (Biotécnica, Varginha, Minas Gerais, Brazil), following the manufacturer's instructions. Statistical analysis Data are expressed as the mean ± SD of duplicates or triplicates for each experimental point. Data were analyzed using one-way analysis of variance (ANOVA), followed by Dunnett's multiple comparison test when needed, using GraphPad Prism 4.0 software. Results Table 1 shows the results from the socio-demographic questionnaire and the closed questions directly related to the professionals' performance during the COVID-19 pandemic, the existence of pre-existing diseases, and the use of medications. Table 2 shows the results of the immunological and molecular tests performed for the detection of SARS-CoV-2 in the study individuals. RT-LAMP (Fig. 1) detected the presence of SARS-CoV-2 in two saliva samples from the tested health professionals (channels 10 and 11). For this reason, the health professionals were subdivided into NoCoV-2-Prof and CoV-2-Prof during the course of the study. The results were confirmed by qRT-PCR, indicating that RT-LAMP is a reproducible and reliable method. In addition, all negative results for RT-LAMP were also confirmed by qRT-PCR and presented similar results; that is, the presence of SARS-CoV-2 was not detected in the individuals' nasopharyngeal swab samples. The serology results also showed the same pattern, with IgG/IgM class antibodies detected only in the CoV-2-Prof group after 15 days of viral infection. Table 3 shows the results of the biochemical analysis. 
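As a compact illustration of the oxidative stress index and of the group comparison described under Statistical analysis above (one-way ANOVA followed by Dunnett's test against the No-Prof control), a hedged sketch with synthetic values is shown before the detailed results. The TOS and TAC values are made up (only the 45/43/2 group split mirrors the study), OSI is taken as the plain TOS/TAC ratio since unit conventions vary between laboratories, and scipy.stats.dunnett requires SciPy ≥ 1.11.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def osi(tos, tac):
    """Oxidative stress index as the TOS/TAC ratio (one common convention)."""
    return tos / tac

# Hypothetical per-subject values (arbitrary units), group sizes as in the study.
no_prof    = osi(rng.normal(10, 2, 45), rng.normal(1.2, 0.2, 45))   # control
nocov_prof = osi(rng.normal(14, 2, 43), rng.normal(1.1, 0.2, 43))   # uninfected professionals
cov_prof   = osi(rng.normal(15, 2, 2),  rng.normal(1.0, 0.2, 2))    # infected professionals

# One-way ANOVA across the three groups ...
f_stat, p_anova = stats.f_oneway(no_prof, nocov_prof, cov_prof)

# ... followed by Dunnett's test comparing each professional group to the control.
dunnett = stats.dunnett(nocov_prof, cov_prof, control=no_prof)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")
print("Dunnett p-values (NoCoV-2-Prof, CoV-2-Prof vs No-Prof):", dunnett.pvalue)
```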
A significant reduction in the levels of uric acid and triglycerides can be seen in the CoV-2-Prof group when compared to the No-Prof individuals. There was a significant increase in the levels of ALT, AST, HDL, and LDH in the CoV-2-Prof group compared with those in the No-Prof group. A significant increase in HDL levels was observed in the NoCoV-2-Prof group compared to that in the No-Prof group. There was a significant decrease in the levels of uric acid, LDH, triglycerides, and VLDL in the NoCoV-2-Prof group compared to the No-Prof group. When assessing inflammatory activity, a significant increase in mucoprotein levels can be seen in the NoCoV-2-Prof group (Fig. 2A) compared to the No-Prof group, whereas a nonsignificant reduction in mucoprotein was observed in the CoV-2-Prof group compared to the No-Prof control. The levels of C-reactive protein showed a significant increase only in the CoV-2-Prof group compared to the No-Prof group (Fig. 2B). The professionals' stress was assessed through serum cortisol levels. A significant increase in circulating cortisol levels was observed in the NoCoV-2-Prof group compared to the cortisol levels of the individuals in the No-Prof group (Fig. 3). Oxidative stress was assessed using the FRAP, TOS, TAC, and OSI assays. Fig. 4A shows the serum antioxidant levels of the groups evaluated using the FRAP assay; a significant decrease in antioxidant levels can be observed in the NoCoV-2-Prof group compared to the No-Prof group. Fig. 4B shows the TOS results; a significant increase in TOS levels was observed in the NoCoV-2-Prof and CoV-2-Prof groups compared to the No-Prof group. Fig. 4C shows TAC levels; there was a significant increase in TAC levels in the NoCoV-2-Prof group compared to the No-Prof group. Fig. 4D shows the results of the OSI, which increased significantly in both the NoCoV-2-Prof and CoV-2-Prof groups compared to the No-Prof group. Table 4 shows the results of the hemograms. A significant reduction in the values of WBC, neutrophils, hemoglobin, hematocrit, and platelets was observed in the CoV-2-Prof group compared with the No-Prof group. There was a significant increase in lymphocyte and monocyte values in the CoV-2-Prof group compared to those in the No-Prof group. A significant increase in the values of eosinophils and erythrocytes was observed in the NoCoV-2-Prof group compared to the No-Prof group, along with a significant decrease in hematocrit and MCV values in the NoCoV-2-Prof group compared with those in the No-Prof group. Regarding the coagulation proteins, a significant decrease in prothrombin time (PT) activity was observed in the NoCoV-2-Prof group compared to the No-Prof control (Fig. 5A). For the activated partial thromboplastin time (aPTT), no significant differences were observed between the groups (Fig. 5B). Discussion Study participants were divided into three groups based on the results initially observed with the RT-LAMP method: NoCoV-2-Prof, CoV-2-Prof and No-Prof. This result was important, as it is the first report on the use of the RT-LAMP technique to test health professionals using saliva. Although the method was previously reported by Park et al. [13] using commercially available reagents, the RT-LAMP assay presented in this study showed interesting advantages over qRT-PCR. 
Several recent studies using RT-LAMP have aimed at standardizing the method to detect viral RNA for the diagnosis of SARS-CoV-2 in nasopharyngeal samples [13,[25][26][27][28]. Dao et al. [29] tested a two-color RT-LAMP assay protocol to detect SARS-CoV-2 viral RNA using a 300-primer set specific for the N gene. Yan et al. [10] developed an RT-LAMP assay that contained specific primers for the SARS-CoV-2 orf1ab and S genes. SARS-CoV-2 transmission is mainly mediated by saliva droplets [16], and the FDA has approved saliva as a specimen for virus detection. Ben-Assa et al. [29] applied the RT-LAMP method to nasal swab samples and self-collected saliva samples. Tests were performed on 186 samples from suspected patients, and the results, as well as those from our study, were compared with those obtained using qRT-PCR. However, among the samples tested, only three were saliva. Two of them were confirmed as positive for SARS-CoV-2, and the other came from a suspected patient with a negative result; testing of these saliva samples by RT-LAMP and RT-qPCR confirmed two subjects as positive, while the one negative case was also confirmed as negative by qRT-PCR. Our results are consistent with those reported by Ben-Assa et al. [29], showing that RT-LAMP is a fast, low-cost, and safe method for the screening of SARS-CoV-2 in healthcare professionals. A study reported by Nagura-Ikeda et al. [30] evaluated self-collected saliva, with a median collection time of three days after the patients received their first positive qRT-PCR results; self-collected saliva at the initial stage of symptoms proved to be an alternative option for diagnosing SARS-CoV-2. However, no study had yet tested a group of health professionals using the RT-LAMP method to detect SARS-CoV-2 in saliva. Among the 45 health professionals who participated in the study, we observed a diversity of job functions at the BHUs. Depending on their roles, individuals may be more exposed to the risk of infection with SARS-CoV-2. Nurses and nursing technicians were exposed in greater numbers (31.1%), while doctors (8.9%) and community workers (24.4%) had lower exposure. Of the individuals exposed, 45.1% had a pre-existing disease, with a predominance of respiratory disease (21.4%), followed by hypertension (16.7%). Diabetes affected 7.2% of this population, and heart disease and depression were each present in 4.8% of participants. During this study, two professionals (one doctor and one dentist) had a detectable result for SARS-CoV-2 and ended up comprising the CoV-2-Prof group. Johnstone and Turale [31] reported that nurses and nurse technicians could be the professionals most affected by the virus during the pandemic. Our results showed that, although there was a greater number of professionals with this role in the NoCoV-2-Prof group, there were no cases of infection with the SARS-CoV-2 virus in this group during the study. The pandemic caused by the new coronavirus has had several psychological effects worldwide, and previous psychiatric illnesses have been exacerbated. In this study, stress was one of the least reported psychological impacts (4.7%), a result that contrasts with the significantly increased plasma cortisol levels observed in the NoCoV-2-Prof group compared to the No-Prof group. Blackman [32] recently reported on the emotional exhaustion of health professionals during the pandemic, with effects on the behavior of such individuals; for example, 74.4% of professionals stated that they had some change in their professional routine. 
Da Silva and Neto [33] found that the psychological suffering of health professionals can be associated with the uncertainty of a safe workplace. Moore et al. [34] showed that 35% of frontline professionals in the United Kingdom needed support but did not feel able to ask for help; furthermore, 64% reported feeling anxious during the April 2020 peak of the pandemic. Stress is the response of the body to any non-specific demand related to emotional tension and pressure [35]. Stress conditions can activate different metabolic response routes and cause oxidative damage in these individuals. Prolonged emotional pressure in distressing periods, such as the pandemic, or chronic stress can lead to a wide spectrum of physical and psychological illnesses [36]. Such situations can cause physiological changes, increasing the levels of certain biochemical, hematological, coagulation, and inflammatory markers. In this study, a significant reduction in uric acid values was observed in the CoV-2-Prof group, and a significant decrease in the levels of uric acid and triglycerides was observed in the NoCoV-2-Prof group. Qin et al. [37] reported that in severe cases of COVID-19, extremely low levels of uric acid were detected because of the disease. Trinder et al. [38] showed that changes in lipids correlate with the severity of the infection; that is, the more severe the infection, the greater the changes in lipid and lipoprotein levels. Alvarez [39] and Sahin and Yildiz [40] reported that triglyceride levels may be elevated or decreased in cases of viral infection. Feingold et al. [41] also showed that in patients with COVID-19, serum triglyceride levels were variable, both high and low. Wang et al. [42] found that serum triglyceride levels were an important influencing factor for recovery from SARS-CoV-2 infection. However, there are no data available in the literature regarding the measurement of these parameters in health professionals who worked during the pandemic that would allow a concrete comparison of results. A significant increase in ALT, AST and HDL levels in the CoV-2-Prof group was observed in this study. There was also a significant increase in HDL values and a significant decrease in uric acid, triglyceride and VLDL levels in the NoCoV-2-Prof group. Changes in liver function tests have been reported in up to 40% of COVID-19 patients [43][44][45]. The suggested mechanism for liver damage involves the presence of angiotensin-converting enzyme 2 (ACE2) receptors and transmembrane serine protease 2 (TMPRSS2) in cholangiocytes and hepatocytes, suggesting that the damage to liver function is caused by viral cytopathic effects [46,47]. In addition, other mechanisms may be involved, such as autoimmune damage, hypoxic hepatitis, and drug-induced liver damage [48]. There is an association between the severity of COVID-19 and liver damage, and some meta-analyses indicate that alteration of these liver enzymes can be used as a prognostic marker for the severity of COVID-19 [49,50]. Shao et al. [47] conducted a study on 98 mild COVID-19 cases in patients at the Wenzhou Central Hospital in Wenzhou, China, and their results suggest that hepatobiliary complications are prevalent in patients with mild COVID-19. Thus, our results are inconsistent with the data in the literature. Regarding the HDL measurements, the infected professionals had significantly increased values compared to the control group, differing from the results in the literature, which show a decrease in total, LDL and/or HDL cholesterol levels in patients infected with COVID-19. 
In most studies, the greater the decrease in LDL and/or HDL levels, the higher the severity of the disease [41]. Decreased serum HDL cholesterol levels are associated with the severity of COVID-19 infection, and infected patients have shown drastically reduced concentrations of serum total cholesterol, high-density lipoprotein cholesterol, and low-density lipoprotein cholesterol [51]. As the professionals had a satisfactory recovery, HDL levels may have remained higher than those reported in the literature for the most severe cases of COVID-19 in hospitalized patients; in these patients, the levels of total cholesterol, LDL, and HDL may be decreased [32,33]. A study by Scalsky et al. [52] reported that high HDL levels are associated with a reduced risk of testing positive for SARS-CoV-2, a result opposite to the one we obtained, since this parameter was high in the infected individuals. Li et al. [53] reported that a high level of lactate dehydrogenase (LDH) was significantly associated with severe COVID-19 on admission. Patients with risk factors such as older age, hypertension, and high LDH levels required careful observation and early intervention to prevent the worsening of COVID-19. According to Li et al. [50], Zhang et al. [54], and Wu et al. [55], a high level of LDH is a factor for COVID-19 severity, and Wang et al. [42] found that 40% of the infected patients in their study had high LDH. The CoV-2-Prof group showed a significant increase in LDH compared to the control group, but did not develop a severe form of the disease. Erez et al. [56], in a report on 398 individuals, noted that LDH is recognized as a marker of severe prognosis in several diseases, including cancer and infection. Inflammatory markers were also evaluated in this study. A significant increase in mucoprotein (MUC) levels was demonstrated in the NoCoV-2-Prof group, and a non-significant reduction in mucoprotein was observed in the CoV-2-Prof group. There were no data in the current literature with which to compare these results, since this is the first report on the use of mucoproteins to assess the acute inflammatory process in health professionals working during the pandemic. CRP levels increased significantly in the CoV-2-Prof group, and we attribute this increase to the SARS-CoV-2 viral infection in this group of professionals. Guan et al. [44] showed elevated C-reactive protein (CRP) levels in 60.7% of the patients evaluated with COVID-19. Lippi, Plebani, and Henry [57] reaffirm this finding, reporting that an increase in CRP is one of the most frequent laboratory alterations in patients with COVID-19, observed in 75-93% of cases. These data are in agreement with those described in the present study. Li et al. [53] observed that, approximately 7-14 days after the onset of initial symptoms, an increase in the clinical manifestations of the disease begins. This occurs in parallel with a pronounced systemic increase in inflammatory mediators and cytokines, which can be characterized as a "cytokine storm." 
Higher CRP levels are associated with unfavorable aspects of COVID-19, such as the development of acute respiratory distress syndrome (ARDS), higher levels of troponin-T, myocardial injury, and death [55]. Several studies have established that the hyperinflammatory response induced by SARS-CoV-2 is one of the main causes of illness severity and death in infected patients. It is known that chronic stress can stimulate the conserved transcriptional response to adversity through the sympathetic nervous system, leading to the induction of proinflammatory cytokines and the suppression of genes involved in the production of antibodies and interferons, causing vulnerability to viral infections. Proinflammatory cytokines induce chronic inflammation and generate reactive oxygen species (ROS), thus producing an unbalanced oxidative stress response [58]. Oxidative stress results from an imbalance between the generation of oxidizing compounds and the performance of antioxidant defense systems [59]. Several respiratory viruses induce unregulated ROS formation due to the increased recruitment of inflammatory cells at the site of infection [60]. Among the markers of oxidative stress, total antioxidant capacity (TAC), total oxidant status (TOS), and the oxidative stress index (OSI) can act as important tools for investigating oxidative stress in organisms [21,22,61]. In addition, serum FRAP has been used as an indirect method to assess antioxidant capacity. We observed a significant decrease in antioxidant levels in the NoCoV-2-Prof group, a significant increase in TOS levels in the NoCoV-2-Prof and CoV-2-Prof groups, a significant increase in TAC levels in the NoCoV-2-Prof group, and a significant increase in OSI levels in the NoCoV-2-Prof and CoV-2-Prof groups. Current research findings and reports have suggested that oxidative stress plays an important role in SARS-CoV-2 infections. The absence or reduction of oxidative stress would have a significant beneficial effect during the initial stage of viral infection, preventing the binding of viral proteins to host cells [59]. Derouiche [64] reports that oxidative stress affects repair mechanisms and the immune control system, which is one of the main events in the inflammatory response; oxidative stress may therefore be linked to a greater propensity to become infected with COVID-19, in addition to being a factor that increases the severity of the disease, especially in patients with chronic diseases. The markers used here to assess the level of oxidative stress in these groups of individuals have not been reported previously in this context; thus, no data from the literature are available for comparison. Regarding the frequent hematological changes in patients with COVID-19, it is known that the total leukocyte count shows considerable variation, sometimes increased and sometimes decreased, while lymphopenia is evident; a decrease in hemoglobin levels has also been described [57]. The number of leukocytes in the CoV-2-Prof group was reduced. The hemoglobin in this group was also reduced, corroborating the data mentioned above. The neutrophil count in the CoV-2-Prof group was also decreased. Varim et al. [63] reported that individuals with severe COVID-19 had an increase in neutrophils compared to patients who had mild disease, and a higher neutrophil count is related to a poor prognosis. Neutrophils play a vital role in our immune defenses, eliminating invading microorganisms. Common causes of neutropenia include autoimmune diseases, drug reactions, chemotherapy, and hereditary disorders [66]. 
Although further research is required on the underlying etiology, several factors can contribute to COVID-19-associated lymphopenia. Lymphocytes have been shown to express the ACE2 receptor on their surface [65,67], and SARS-CoV-2 can directly infect these cells, ultimately leading to cell lysis. In addition, the cytokine storm is characterized by markedly increased levels of interleukins and tumor necrosis factor alpha (TNFα), which can lead to apoptosis of lymphocytes [68,69]. Substantial cytokine activation may also be associated with atrophy of lymphoid organs [70]. Huang et al. [7] and Wang et al. [42] highlighted an association between lymphopenia and the need for care in the ICU, and Wu et al. [55] showed an association between lymphopenia and the development of acute respiratory distress syndrome (ARDS). Lymphopenia, excessive activation of the inflammatory cascade, and cardiac involvement are characteristics of COVID-19 and have a high prognostic value; however, our understanding of the underlying mechanisms is still limited [71]. Professionals in the CoV-2-Prof group had increased lymphocyte values, contrary to the literature data. He et al. [70] comment that the behavior of lymphocytes in mild cases of COVID-19 (as in the case of the two individuals in the infected group) remains unclear. Monocytes were also increased in the CoV-2-Prof group, and the behavior of monocytes in mild COVID-19 cases is likewise unclear. In line with our study, Zhou et al. [71] found significantly increased circulating proportions of CD14+ CD16+ monocytes in the peripheral blood of 33 patients hospitalized with COVID-19, and this percentage was markedly higher in COVID-19 patients with acute respiratory distress syndrome. The platelet count in the CoV-2-Prof group was reduced. The platelet count is a simple, inexpensive, and easily available biomarker, which is why it was quickly adopted as a potential biomarker for COVID-19 patients [72]. It has been reported that the number of platelets is significantly reduced in patients with COVID-19, and is even lower in non-survivors compared to survivors [72,73]. In the evaluation of the coagulation proteins, a significant decrease in prothrombin time (PT) activity was observed in the NoCoV-2-Prof group, whereas no significant differences were found for the activated partial thromboplastin time (aPTT). However, it is known that coagulation disorders are observed relatively frequently among patients with COVID-19, especially among those with severe disease, with increased clotting times due to deficient production of coagulation factors by the liver [73]. Conclusions The psychological damage that the COVID-19 pandemic has caused to humanity is indisputable. One particularly affected group is that of health professionals who work daily to face the pandemic. RT-LAMP can be an alternative molecular method for screening for SARS-CoV-2 in saliva samples collected from these professionals, and we consider it a fast, safe, and less expensive method than qRT-PCR. We emphasize that our study is the first to report the use of the RT-LAMP method with saliva samples from health professionals. Health care professionals have also experienced chronic stress during the pandemic, a fact confirmed here by the increase in plasma cortisol levels and the alteration of some biochemical, hematological, inflammatory, and oxidative stress markers. 
Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Rodrigo de Almeida Vaucher reports support was provided by the Federal University of Pelotas. Rodrigo de Almeida Vaucher reports a relationship with the Federal University of Pelotas that includes: employment.
2021-05-20T13:16:04.936Z
2021-05-19T00:00:00.000
{ "year": 2021, "sha1": "ed90ce043db240ce3b0a0094f08d3f7702b4d70f", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.micpath.2021.104975", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "7f5dd7d8e597728619353f851365d14ee8aad20f", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
70300463
pes2o/s2orc
v3-fos-license
Design of active power filter for narrow-band power line communications In power line communication (PLC), couplers such as coupling transformers and band-pass matching coupling circuits are usually required for coupling, band-pass filtering, and impedance matching. However, the cost and size of transformers prevent them from being an economic and compact solution for PLC couplers. In addition, passive band-pass matching coupling circuits need accurate impedance matching and may incur power losses. In this paper, a 6th-order multiple feedback (MFB) active power filter with the minimum number of components was designed for narrow-band PLC; it has high input impedance and low output impedance, allowing outstanding performance in mains voltage isolation while suppressing current harmonics and compensating reactive power simultaneously. Finally, simulations were conducted in the range of 95 kHz-125 kHz (CENELEC "B band"), which confirmed that the new filter met the CENELEC requirements for transmission and disturbance levels. The cost and size of transformers prevent them from being an economic and compact solution for PLC couplers. In addition, band-pass matching coupling circuits [7][8][9] are often built from passive components, which need accurate impedance matching and may incur power losses. In this paper, to overcome these problems, an active power filter was designed that suppresses current harmonics and compensates reactive power simultaneously. The active power filter is a Butterworth filter using a multiple feedback topology, which has a fairly flat pass-band characteristic and a relatively sharp attenuation outside the pass-band. It should be noted that although the new active filter design focuses on the CENELEC B band (95 kHz-125 kHz) as an example, the same principles will hold for any of the CENELEC bands. CENELEC band allocation The European PLC standard was approved by CENELEC, which divides the frequency range (3 kHz-148.5 kHz) into four different sub-bands and specifies maximum transmission and disturbance levels for the different bands when transmitting data over the power line. Table 1 lists the PLC frequency bands and their maximum transmission and disturbance levels [10]. The CENELEC-A band is exclusively for utility providers, and the other three bands (CENELEC-B, C, D) are open for end-user applications. In this paper, an active power filter was designed based on the CENELEC B band (95 kHz-125 kHz). Multiple-Feedback filter topology The Multiple-Feedback (MFB) topology is one of the simplest circuits with the minimum number of components, and is often implemented as a 2nd-order response around a single operational amplifier. Figure 1 shows a 2nd-order MFB band-pass filter. The standard form of the transfer function [11] of all 2nd-order band-pass filters is $H(s) = \frac{H_0 (\omega_0 / Q)\, s}{s^2 + (\omega_0 / Q)\, s + \omega_0^2}$ (1), where $Q$ is the quality factor, and $\omega_0$ and $H_0$ are the resonant frequency and resonant gain, respectively. As shown in Figure 1, the node equations can be expressed as Eq. (2) and Eq. (3), and from Eq. (2) and Eq. (3) the transfer function of this MFB band-pass filter is obtained as Eq. (4). By comparing Eq. (4) with Eq. (1), the proper filter characteristics can be obtained when designing an MFB band-pass filter. In order to simplify the calculation, a simplifying equality between components is usually assumed, so that Eq. (5) can be expressed as Eq. (6). Cascade design of MFB filter Each MFB filter stage with one operational amplifier will be of 1st or 2nd order, and stages are cascaded to achieve a higher-order MFB filter. In order to design a 6th-order MFB band-pass filter, three 2nd-order MFB filter stages are cascaded. 
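To make the standard form in Eq. (1) and the effect of cascading concrete, the short sketch below evaluates the magnitude response of a single 2nd-order band-pass section and of three identical sections in cascade. The centre frequency (geometric mean of the 95 kHz and 125 kHz band edges) and the per-stage Q are illustrative assumptions only; a real Butterworth band-pass realization staggers the stage frequencies and Q values according to the coefficient tables discussed next.

```python
import numpy as np

def bandpass_response(f, f0, Q, H0=1.0):
    """Magnitude of the standard 2nd-order band-pass transfer function, Eq. (1):
    H(s) = H0*(w0/Q)*s / (s^2 + (w0/Q)*s + w0^2), evaluated at s = j*2*pi*f."""
    w0 = 2 * np.pi * f0
    s = 1j * 2 * np.pi * f
    return np.abs(H0 * (w0 / Q) * s / (s**2 + (w0 / Q) * s + w0**2))

# Illustrative values only: geometric-mean centre of the CENELEC B band and a
# guessed per-stage Q; the paper derives its stage parameters from coefficient tables.
f_low, f_high = 95e3, 125e3
f0 = np.sqrt(f_low * f_high)          # ~109 kHz
Q_stage = f0 / (f_high - f_low)       # ~3.6, as if one stage set the whole bandwidth

freqs = np.array([50.0, 95e3, f0, 125e3, 1e6])
single = bandpass_response(freqs, f0, Q_stage)
cascade = single**3                   # three identical 2nd-order sections in cascade

for f, h1, h3 in zip(freqs, single, cascade):
    print(f"{f:>10.0f} Hz  |H|_2nd = {20*np.log10(h1):7.1f} dB   |H|_6th = {20*np.log10(h3):7.1f} dB")
```

Even with these rough values, the cascade shows roughly three times the stop-band attenuation (in dB) of a single section at 50 Hz, which is the behavior the filter exploits for mains isolation.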
According to Eq. (8), the design of each 2nd-order stage needs the parameters of quality factor $Q$, resonant frequency $\omega_0$ and resonant gain $H_0$, which can be obtained from cumbersome polynomial equations. Fortunately, there are look-up resources that can be consulted when designing an MFB band-pass filter rather than dealing with cumbersome polynomial equations. Each type of filter (such as Butterworth, Chebyshev, or Bessel) has its own coefficient table based on the desired filter order, and these coefficient tables serve as a quick design reference for arriving at a proper filter instead of performing complex mathematical calculations. The design of the 6th-order MFB band-pass filter The designed MFB band-pass filter is focused on the CENELEC "B band" (95 kHz-125 kHz). Step 1, the calculation of the quality factor: according to the resonant frequency and bandwidth of the designed active band-pass filter, the quality factor is given by $Q = f_0 / BW$. Simulation results and analysis The simulation was conducted in PSPICE. Figure 3 illustrates the 6th-order MFB filter topology, which includes three 2nd-order MFB filter stages. The calculation process for each stage (see Section 3) is complex even with the simplified coefficient table given previously. As shown in Figure 5, the attenuation of the 6th-order MFB filter at 50 Hz is approximately -230 dB. If a 50 Hz, 340 V peak voltage is present at the input side of the MFB filter, the disturbance level is about 1 nV peak, which is much lower than the CENELEC maximum disturbance levels (see Table 1) and demonstrates outstanding performance in mains voltage isolation. Conclusion In this paper, a 6th-order active MFB filter was designed for narrow-band PLC to achieve the CENELEC specifications. Although the new active filter design focuses on the CENELEC B band (95 kHz-125 kHz) using an MFB topology as an example, the same principles are adequate for any of the CENELEC bands. The active filter was analyzed with a series of simulations, which exhibit an excellent flat pass-band and meet the CENELEC requirements for transmission and disturbance levels.
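Two quick checks on the figures quoted above, taking the centre frequency as the geometric mean of the band edges (an assumption; other conventions are possible):

\[ f_0 \approx \sqrt{95\ \mathrm{kHz} \times 125\ \mathrm{kHz}} \approx 109\ \mathrm{kHz}, \qquad Q = \frac{f_0}{BW} \approx \frac{109\ \mathrm{kHz}}{125\ \mathrm{kHz} - 95\ \mathrm{kHz}} \approx 3.6 \]

\[ V_{\mathrm{out}} \approx 340\ \mathrm{V} \times 10^{-230/20} \approx 340 \times 3.2 \times 10^{-12}\ \mathrm{V} \approx 1.1\ \mathrm{nV\ (peak)} \]

The second figure is consistent with the roughly 1 nV peak disturbance level implied by the stated -230 dB attenuation at 50 Hz.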
2019-02-19T14:06:47.525Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "deb7d784c32aba5580224ad5a2720ab0d566a83c", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/48/matecconf_meamt2018_04012.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ffaa133026af2544f1c27cc51bdda8c570e179e7", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
234774440
pes2o/s2orc
v3-fos-license
Association of self-perceived income status with psychological distress and subjective well-being: a cross-sectional study among older adults in India Background As the older population aged 65 and over worldwide, is estimated to increase from 9% in 2019 to 16% in 2050, rapid aging will transform the aspects such as economic security, employment status, and family structure. The effects of lower levels of perceived income and poor socioeconomic status on the mental health of older adults appear to be large and enduring. Therefore, the present study contributes to the literature on understanding the association of socioeconomic conditions and self-perceived income status in particular, with self-assessed mental health outcomes (psychological distress and subjective well-being) among older adults in India. Methods Data for the present study was derived from the Building Knowledge Base on Population Ageing (BKPAI) in India. Bivariate and binary logistic regression analyses were conducted to understand the relationship between socioeconomic status and outcome variables. Results About 43% of older adults had no income whereas 7% had income but perceived as not sufficient to fulfil their basic needs. Nearly, 9% of older adults were retired from regular employment. Almost 70% older adults had received no pension and nearly 18% of older adults had no asset ownership. It is revealed that older adults with income that is partially sufficient to fulfil their basic needs were 2.23 times [OR: 2.23, CI: 1.75–2.84] and 1.96 times [OR: 1.96, CI: 1.55–2.47] significantly more likely to suffer from psychological distress and low subjective well-being than those who had income which was sufficient to fulfil their basic needs. Conclusions By focusing on four target areas such as the income support, education, family oriented initiatives and local or regional policies, the current framework for assessing the mental health among older adults in India can be modified. A move towards a guaranteed pension for eligible older individuals by which they do not have to remain as a financial burden on their children, may reduce their self-perceived economic distress and result in higher levels of wellbeing in older ages. Also, strategies to address socioeconomic disadvantages and gender differentials related to mental health status among older population are urgently needed. Supplementary Information The online version contains supplementary material available at 10.1186/s40359-021-00588-5. Also, India's older adult population is expected to reach 19% of the total population [2], which in turn, poses a serious challenge to the available geriatric services, including those for mental health. Currently, older individuals who are materially advantaged by their income, education, employment and other resources live a happier and dignified life than those with fewer material resources [3]. Further, socioeconomic status (SES) contributes to subjective well-being (SWB) that reflects life satisfaction and happiness through different mechanisms of individual and aggregate level resources [4]. SES also influences the social and psychological state of older adults and thereby indirectly affects the quality of life in older ages [5]. Various indices of economic hardships such as unemployment, financial strain, and work-related stress, are linked with physical and mental health problems [6][7][8]. 
Income and occupational status are independently associated with mental health in most studies in both developed and developing countries, and the effects of low income and poor economic status appear to be large and enduring [9,10]. However, the experiences of mental health problems and low subjective wellbeing and their disparities across different socioeconomic groups were found to be higher among older population in low-and middle-income countries [11][12][13]. The economic insecurity of older adults in those countries results in losing their relevance and significance in their own households and increasing feelings of loneliness [14]. Studies have also shown that higher financial strain was associated with more depressive symptoms in older ages [15][16][17]. Besides, gender, educational level, income, rural residence, and the presence of one or more major medical conditions were associated with increased risk of geriatric depression [18]. In addition, older adults receive support from their multiple social roles such as being a man, being currently married, and having children upon which most interpersonal relationships are based on and the differences in these roles can have a greater impact on their psychological well-being [19]. Furthermore, a growing body of research shows that unlike individual factors affecting the mental health of older adults, some dimensions of SES at the household level have a lasting effect that serves as a resource or buffer against hardships in later years of life [20][21][22]. Moreover, resources such as income and assets accumulate over time; whereas, role-related deprivations could lead older adults to feel unfit in the family and ultimately to their cumulative disadvantages and increased psychological distress [22][23][24]. According to one study [25], well-being declines by increasing age and the situation is particularly worse for the oldest old segment and females tend to have a lesser well-being score as compared with males. Similarly, the lack of inheritance rights and property, and insufficient incomes and earnings expose older widows to deprivation and social isolation [26], resulting in a poor late-life mental health status. Studies found a positive effect of co-residence with children on mental health and physical well-being [27]. Social networks, family dynamics, and both positive and negative aspects of these relationships are central to the well-being and functioning of older men and women [26]. At the same time, the study also reported that living in a multigenerational family without a spouse and having a lower household income were significantly associated with poor mental health in both men and women [28]. Evidence shows a linkage of perceived discrimination among older adults by their family members with their poor health including weak emotional states such as anxiety and depression [29][30][31]. Similarly, financial resources, better health, availability, and quality of social care are very high in urban areas [32]. Hence, differences were found between well-being in rural and urban older populations due to the socio-demographic factors, social resources, and income adequacy [32]. Various studies in India have analysed the risk factors such as lower levels of income, not working, not receiving any pension, not owning any asset, being a woman, and not having an adult child for care and support that are associated with mental health among older adults [33][34][35][36]. 
However, there is less evidence on the association of particular SES indicators with psychological health and SWB, and among the studies that do examine it, very few specifically analyse subjective income status and its association with mental health outcomes. In addition, self-perceived income sufficiency is recommended as a useful question in assessing the health outcomes of vulnerable populations [37]. Since better psychological health and SWB are associated with positive health outcomes and increased longevity [38], the present study contributes to the literature on understanding the association of socioeconomic variables, and self-perceived income status in particular, with mental health outcomes (psychological distress and SWB) among older adults (60 years and above) in India. The study hypothesized that: H1: There is a positive association of the self-perceived income status of older adults with their psychological health and subjective well-being. H2: Poor socioeconomic status among older adults is significantly associated with increased psychological distress and low subjective well-being. Methods Data for this study are derived from the BKPAI (Building Knowledge Base on Population Ageing in India) survey, which was conducted in 2011 [39]. The survey was carried out in seven major states (Himachal Pradesh, Punjab, West Bengal, Odisha, Maharashtra, Kerala and Tamil Nadu) and covered 9852 older adults from 8329 older adults' households in both rural and urban areas [39]. The states selected for the survey had a higher percentage of the 60+ population compared to the national average [39]. The individual questionnaire covers the socio-demographic profile, work history and benefits, income and assets, living arrangements, social activities, the health status of older adults, and social security-related questions [39]. Sampling procedure The BKPAI sample design entails two-stage probability sampling, in which villages were first classified into different strata based on population size and the number of PSUs to be selected was determined in proportion to the population size of each stratum [39]. Using the probability proportional to population size (PPS) technique, the primary sampling units (PSUs) were chosen, and within each selected PSU, older adults' households were selected through systematic sampling. A similar procedure was applied in drawing samples from urban areas [39]. The final sample size for the analysis, after removing missing cases and outliers, was 9231 older adults aged 60 years and above. Outcome variable The General Health Questionnaire, the most common assessment of mental well-being, was used as a measure of the common mental health problems/domains of depression, anxiety, somatic symptoms, and social withdrawal [40]. Further, SWB provides a meaningful and complementary measure of the health of older adults, as it involves subjective appraisals of their life in older age from their own perspective [41,42]. The 12-item version of the General Health Questionnaire (GHQ-12) was used as the measure of mental health in the study. Psychological distress was scored on a scale of 0 to 12 based on the number of stressful symptoms experienced and was recoded as 0 "high" (representing scores of 6 or more) and 1 "low" (representing scores of 5 or less) [43,44] (Cronbach's alpha: 0.90). The 9-item Subjective Well-being Inventory was used to measure low subjective well-being. 
Subjective wellbeing inventory having a scale of 0 to 9 and was categorized as 0 "high" experiencing better experience (representing 6 + scores) and 1 "low" experiencing negative experience (representing score 5 and less) [45]. Twelve questions on psychological distress and nine questions on SWB were asked to assess the outcomes. All the questions on the outcome variables were asked on a Likert scale and were recoded to a dichotomy and used as per the previous literature [46]. The low SWB represents lower levels of subjective well-being among older adults (Cronbach's alpha: 0.93). Explanatory variables Self-perceived income sufficiency was recoded as (no income, has income and fully sufficient, has income and partially sufficient and has income and not sufficient), working status (in last 1 year) was recoded as (never worked, currently working and retired) [47], receiving pension (no and yes), asset ownership was asked regarding homeownership, land ownership, jewellery ownership and other monetary savings and was recoded as (no and yes). Sex (men and women) and place of residence (rural and urban) were considered in the analysis. Coresiding with children was recoded as (no and yes). Age was recoded as '60-69 years, 70-79 years and 80 + years' , educational status was recoded as 'no education, below five years, 6-10 years and 11 + years' and marital status was recoded as 'not in a marital union and currently in union' [33]. Decision-making power was assessed through the question "who usually makes the following decisions: you alone or with your spouse, with your children, or with others?" on the following issues a. marriage of son/ daughter. b. buying and selling of property c. buying other household items d. gifts to daughters, grandchildren, other relatives e. education of children, grandchildren f. arrangement of social and religious events (Cronbach alpha: 0.88). The variable decision making power was thus recoded as (no role, partial role, and absolute role). Community involvement was coded as (no and yes) [48]. Have someone to trust was coded as (no and yes) [34]. Experienced economic violence was recoded as (no and yes) [33,49]. Chronic diseases were coded as (no and yes) [49]. Caste was categorized as Scheduled Castes, Scheduled Tribes, Other Backward Classes and others [50]. Religion was recoded as Hindus, Muslims, Sikhs and others, household wealth index was divided into five quintiles i.e. poorest, poorer, middle, richer and richest. The wealth index drawn based on the BKPAI survey is based on the following 30 assets and housing characteristics: household electrification; drinking water source; type of toilet facility; type of house; cooking fuel; house ownership; ownership of a bank or post-office account; and ownership of a mattress, a pressure cooker, a chair, a cot/ bed, a table, an electric fan, a radio/transistor, a black and white television, a colour television, a sewing machine, a mobile telephone, any landline phone, a computer, internet facility; a refrigerator, a watch or clock, a bicycle, a motorcycle or scooter, an animal-drawn cart, a car, a water pump, a thresher, and a tractor. The range of index was from poorest to the richest i.e. ranging from lowest to the highest [39]. Statistical analysis Descriptive analysis along with bivariate analysis was employed to find the plausible association between psychological distress and low SWB with exposure and potential risk factors using the chi-square test. 
Apart from this, binary logistic regression analysis [51] was conducted to understand the relationship of psychological distress and low SWB with the other risk factors. The software used was Stata 14 [52], and the significance level was set at 5% (p < 0.05). The svyset command was used to adjust the analysis for the complex survey design. Additionally, individual weights were used during the analysis to make the estimates nationally representative. Results Table 1 presents the socio-economic and demographic profile of older adults in India; the sample is representative of the Indian older adult population. About 43% of older adults had no income, whereas 7% had income that was not sufficient to fulfil their basic needs. Nearly 9% of older adults were retired from regular employment. Almost 70% of older adults had no pension, and nearly 18% had no asset ownership. About 53% of older adults were women, and nearly 26% belonged to rural areas. Nearly 30% of older adults did not co-reside with their children. Eleven per cent of older adults belonged to the 80-and-above age group. Nearly 51% of older adults had no education, and only 6% had 11 or more years of education. About 40% of older adults were not in a marital union during the survey period. Nearly 70% of older adults had an absolute role in decision making in the household. About 20.5% and 17.3% of older adults had no community involvement and had no one to trust, respectively. Almost 5% of older adults reported that they had suffered some type of economic abuse after turning age 60. About 35.4% of older adults suffered from chronic diseases. About 24% of older adults belonged to the poorest wealth quintile and 15% to the richest wealth quintile. The percentages of older adults suffering from psychological distress and low SWB in India are presented in Table 2. About 23.4% and 26.7% of older adults had psychological distress and low SWB, respectively. Older adults who had income that was not sufficient for the fulfilment of basic needs had the highest prevalence of psychological distress (35.1%) and low SWB (39.4%). Older adults who had never worked had the highest prevalence of psychological distress (27.1%) and low SWB (30.4%). Older adults who did not have a pension had a higher prevalence of psychological distress (23.9%) and low SWB (27.6%). Older adults who did not own any asset had a higher prevalence of psychological distress and low SWB. Older adults with no education had a higher prevalence of psychological distress (30.6%) and low SWB (35.5%). About 28.5% and 32.9% of older adults who were not in a union had psychological distress and low SWB, respectively. Having no role in household decision making was a risk factor for a higher prevalence of psychological distress (50.3%) and low SWB (55.9%). Older adults who did not have any community involvement and who had no one to trust had a higher prevalence of psychological distress and lower subjective well-being. Older adults who faced economic violence had a higher prevalence of psychological distress (42.0%) and low SWB (45.0%). Figure 1 reveals that older adults who worked for other motives, including involuntary work to meet the needs of their household, had a higher prevalence of psychological distress (45.4%) and low SWB (43.8%). Figure 2 reveals that older adults who experienced mental or physical stress due to work had a higher prevalence of psychological distress (25.8%) and low SWB (31.0%). 
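For readers who want to reproduce the flavor of the model described in the Methods, a rough sketch follows. The original analysis was run in Stata 14 with svyset; the Python approximation below uses a weighted GLM, which recovers the weighted point estimates but not svyset's design-based (stratified, clustered) standard errors. The file name and column names are placeholders, not actual BKPAI variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analysis frame; one row per respondent aged 60+.
df = pd.read_csv("bkpai_older_adults.csv")

# Dichotomize the GHQ-12 score as in the paper: 6+ stressful symptoms = psychological
# distress (coded 1 = distress here purely for modelling convenience).
df["distress"] = (df["ghq12_score"] >= 6).astype(int)

# Weighted logistic regression with "fully sufficient" income as the reference level;
# freq_weights approximates the survey weighting without the full svyset design.
model = smf.glm(
    "distress ~ C(income_sufficiency, Treatment('fully_sufficient'))"
    " + C(sex) + C(residence) + C(age_group) + C(education)",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(df["survey_weight"]),
).fit()

# Odds ratios and 95% confidence intervals, as reported in the paper's tables.
summary = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(summary.round(2))
```

In practice, the same model would be refitted with the low-SWB indicator as the outcome, mirroring the two sets of odds ratios reported in the paper.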
Discussion Even though there were several notable exceptions, most of the indicators of SES were strongly related to two of the mental health outcomes considered in the present study. Financial independence is important for older adults for keeping a quality life in old age. However, a large proportion of older individuals in our study who currently work for their economic need or by compulsion and not by choice had lower levels of mental health outcomes, indicating that the frustration from a job is a major risk factor among older Indian adults. The present study found that about 43% of older adults had no income source. The findings were somewhat relatable with previous studies. In Korea the poverty rates among older population rose from 29% in 1996 to 41% in 2014 [53]. Similarly in China it was found that in 2008 about 15% of older adults consumed at levels below the World Bank's $1.25/day international poverty line [54]. Another study in China revealed that about 33% of older adults were identified as falling in poverty [55]. The present study found that the self-perceived income status is more associated with mental health conditions than any other single indicator of SES. Earlier studies have demonstrated that the financial strain in older adults may act as a stressor that would exacerbate other ongoing deterioration in their health outcomes [56]. Additionally, a recent study found that perceiving a lower income status is associated with lower life satisfaction and lower levels of happiness [57]. Again, own assets and accumulated wealth are the primary sources of support for older persons in Asian countries, whereas, older persons in the West rely heavily on public transfers [1]. Consistently, in our study, those who perceived income as sufficient to fulfil their basic needs and those who had asset ownership reported better mental health outcomes than their counterparts. On the other hand, in developing countries like India, few people look forward to retirement, whereas the majority with their unmet life desires dreads it [58]. And due to the poor social security, older adults continuing to work beyond the retirement age is a norm in India [59]. Besides, working post-retirement is a positive factor in maintaining psychological well-being among older individuals [60]. In concordance with this, our study found that those who retired from work have better mental health and SWB. A striking gender difference in reporting poor mental health and SWB was also found in our study. Men suffered more psychological distress than women. It might be due to the burden of domestic chores and their lack of social networks and support after they lose their job or spouse [61]. Interestingly, it was also found that older adults who are currently married experience better psychological and subjective wellbeing. Furthermore, despite the efforts of government interventions, the coverage of the Indian old-age pension system has remained low due to the discretionary or voluntary nature of the schemes [62]. A recent study observed that the pension receipt directly affects the well-being of retired older adults with low economic status [63]. Further, the pension receipt is found to be associated with increased household expenditure, indicating that most of the income from pension received is used for either improving the health or educational outcomes of other family members [64]. 
A recent study also observed that though the households spent most of the old-age pension income on improving overall family welfare, it reduced the work participation of older adults substantially [65]. In line with this, older adults in our study who received pension reported poor mental health outcomes. The studies on the association of education with health outcomes in later years suggest that older people with higher levels of education may have a better understanding of their ageing process and reap the benefit of quality care services and better health [66]. Similarly, the literates in our study also reported higher levels of mental and SWB. Furthermore, older people in India traditionally have lived with their children or grandchildren. Such living arrangements are found to be mutually beneficial with older parents providing childcare and other forms of support in domestic work and receiving economic support and care in return [67]. The study suggests that in cultures in which intergenerational ties have higher value, co-residing with children is positively associated with the mental health of the older population [28]. As the evidence suggests, the difference between the rich and the poor in any population extends far beyond money alone. For instance, previous studies observed that the quality relationships rather than the number of family ties were associated with feelings of well-being [68,69]. In countries where care and support toward older parents is a social norm, it is found that co-residence was associated with a low prevalence of depressive symptoms [70,71]. When their spouses die, men lose much of the support and care that wives provide, such as emotional support and the maintenance of social contact with children and others [72]. Consistently, our study found that current marital status and living with an adult child were positively associated with better psychological health and higher SWB. The results show that factors that were significantly associated with the outcomes were primarily related to the older adults themselves. In fact, few other factors were also found to be related to psychological and SWB in old age. They include older adults' importance in the decision making role in the household, experiencing economic abuse within the family, household wealth status, etc. On the other hand, old age is seen as a time of major losses of social roles and experiences of deteriorating both the quality and quantity of relationships [73]. Partial or absolute role in household decision making in this study is found positively associated with mental health outcomes among older adults. Further, the increased burden of low social status makes people feel disrespected and older people are subjected to several types of domestic abuse [74]. And economic abuse is found as a negative predictor of overall psychological disorders among older adults [75]. Consistently, the results of the current study show that those older adults who reported economic abuse have low psychological and subjective health outcomes. The positive association of increasing age with poor mental health is in parallel with findings in India showing the increased age as a major predictor of late-life depression [76]. 
Also, considering the poor well-being scores among the male older population, the grim scenario of gender disparity in later life is evident, throwing light on the male disadvantage in mental health and the cultural paradox that persists in India [25]; more investigation is warranted in this regard. Furthermore, our analysis is also consistent with the finding that psychological health may be affected by different factors for older individuals from households in which poverty is more common than for individuals from households with more assets and higher incomes [77]. The results show that older adults from the richest wealth quintile had better mental health status compared with all other quintiles. The case is similar with regard to older adults' place of residence. Although the multivariate analyses showed no significance, the bivariate results indicated poorer mental status among older adults residing in rural areas of the country, pointing to rural location as a risk factor for ill-being in older ages [2,78]. The study has several limitations, such as the shared response biases that can occur when both outcome measures of mental health status and well-being are based on self-reports. Another limitation is that the data are cross-sectional in design, so we are restricted in addressing causality between socioeconomic status and the outcome variables. Moreover, although the study focuses on many aspects of socioeconomic and familial factors that affect the well-being of older adults, the findings are limited by imprecise measures. Conclusions The study found that illiterate individuals, older women, and older individuals with low levels of perceived income sufficiency, those not working, those not owning an asset or home, those not living with their children, those experiencing economic abuse, and those having no role in household decision making were all at increased risk for psychological distress and low SWB. In this regard, the study highlights that the existing social security system and care services in the country are inadequate to meet the multifaceted needs of the growing older population. By focusing on four target areas, namely income support, education, family-oriented initiatives, and local or regional policies, the current framework for assessing mental health among older adults in India can be modified. A move towards a guaranteed pension for eligible older individuals, by which they do not have to remain a financial burden on their children, may reduce their self-perceived economic distress and result in higher levels of well-being in older ages. Also, strategies to address socioeconomic disadvantages and gender differentials related to mental health status among the older population are urgently needed. Abbreviations SRH: Self-rated health; ADL: Activities of daily living; IADL: Instrumental activities of daily living; OR: Odds ratio; CI: Confidence interval; BKPAI: Building a Knowledge Base on Population Aging in India; SWB: Subjective well-being.
2021-05-19T13:25:16.721Z
2021-05-18T00:00:00.000
{ "year": 2021, "sha1": "777a71fbbe10cf19ed5a1c696e31e9c41bbeb45b", "oa_license": "CCBY", "oa_url": "https://bmcpsychology.biomedcentral.com/track/pdf/10.1186/s40359-021-00588-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f61df333bee3ddfd8f08114df02382a8190b57fe", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
252026009
pes2o/s2orc
v3-fos-license
Towards personalized environment-aware outdoor gait analysis using a smartphone Automatic gait analysis in free-living environments using inertial sensors requires individualized approach as local acceleration and velocity profiles vary with the walker and the topological properties of the environment (e.g., walking in the forest vs. walking on sand). Here, we propose a smartphone-based gait assessment architecture which consists of two data processing modules. The first module employs a set of personalized classifiers for automatic recognition of the walking environment. The second module provides accurate step time estimates by selecting the optimal filtering frequency tailored to the predicted environment. The performance of the architecture was evaluated using experimental data collected from 10 participants walking in 10 different conditions typically encountered during daily living. Compared with ground truth data, the architecture successfully recognized the walking environments; the percentage of correctly classified instances was above 92%. It also estimated step time with high accuracy; the mean absolute error was less than 10 ms, outperforming or at the very least matching the performance levels achieved in controlled laboratory trials (indoor flat surface walking). Compared with using one filtering frequency for all environments, using optimal frequency tailored to each environment reduced step time estimation error by more than 39%. To the best of our knowledge, this is the first study which successfully demonstrates that parameter tuning can improve gait characterization in outdoor environments. However, further research using a larger data set (including more participants with varying demo-graphics and degree of impairment) is needed to confirm this result. Our findings highlight the importance of environment-aware gait analysis, and lay the ground-work for a smartphone-based technology that can be used in the community. | INTRODUCTION In recent years, developing cheap and portable technologies for automatic assessment and characterization of how well people walk and maintain balance in free-living environments has gained momentum with potential applications in areas of healthcare, sports science, and surveillance. One promising approach is to use body-worn devices (either custom-built or off-the-shelf including smartwatches and mobile phones) which record movement data using inertial measurement unit (IMU) sensors (e.g., accelerometer and gyroscope) (Tao et al., 2012). Compared with laboratory-based measurements (performed by human experts using expensive 3D motion analysis sensors), body-worn devices will enable large-scale, continuous, and long-term data collection under more natural conditions. These rich data sets can then be used to investigate how walking patterns vary depending on a wide range of factors: physiological (e.g., fatigue and injury), bio-mechanical (e.g., walking with an assisting device or carrying a shopping bag), environmental (e.g., walking on irregular surfaces or climbing uphill), and behavioural (e.g., cleaning and shopping) (Ippersiel et al., 2022;Kowalsky et al., 2021;Luo et al., 2020;Yang et al., 2012). However, analysis and interpretation of outdoor walking data are not straightforward as the local acceleration and velocity profiles obtained from IMU sensors are noisy (e.g., because of undesired motion artefacts), highly variable (due to factors listed above), and not intuitive. 
The successful recovery of meaningful gait parameters hinges on context-aware data processing methods; that is, methods capable of recognizing under which circumstances a recording is made and processing data accordingly to improve parameter estimation accuracy. To take a step forward in this direction, this study focuses on one aspect of context-awareness, which is knowing where walking takes place. We propose a new software architecture to achieve personalized, environment-aware outdoor gait analysis ( Figure 1). The architecture consists of two modules: environment classification module which recognizes the environment a recording is made (e.g., while walking uphill or on pebble beach), and a gait characterization module which uses an adaptive algorithm to estimate temporal gait parameters. The gait characterization module is adaptive because the predictions made by the environment classification module are used to adjust the parameters of the gait characterization module to improve its performance. In particular, we concentrate on estimating one clinically relevant gait parameter, step time. When combined with step length, step time is used to predict walking speed (i.e., step length divided by step time). Within the remit of automatic gait analysis using wearable IMU sensors, the majority of the previous work studied walking in controlled laboratory environments, and the proposed step detection and analysis methods were not adaptive in the sense that they did not consider who was walking and under which conditions walking took place (Avvenuti et al., 2018;Manor et al., 2018;Zhong & Rau, 2020). The proposed software architecture attempts to address both limitations by aiming for adaptive gait analysis in free-living environments. In particular, we study whether it is feasible to recognize outdoor walking environments using classical machine learning methods. We also evaluate whether personalized (obtaining a separate model for each individual as opposed to one generalized model for all), and environment-aware parameter tuning improve the accuracy of the gait characterization module. The remainder of the paper is organized as follows. Section 2 discusses related work on environment classification and gait characterization using smartphones or other devices utilizing IMU sensors. Section 3 provides details about the design and implementation of the environment classification and gait characterization modules, as well as experimental procedures including data collection and analysis. Section 4 presents results summarizing the performance of the two modules. Section 5 presents a short summary of the work and highlights its main findings. It also discusses its current limitations as well as potential avenues for future work. F I G U R E 1 A proposed software architecture for automatic estimation of temporal gait parameters in outdoor environments using a smartphone. The details about the environment classification and gait characterization modules are discussed in Section 3. For this study, phone data were processed offline using a standard laptop. The long-term goal is to run the architecture on the phone for real-time gait analysis 2 | RELATED WORK 2.1 | Environment classification Hu et al. (2021) studied walking patterns of 30 participants (15 females and 15 males, age = 23.5 ± 4.2 years old, height = 169.3 ± 21.5 cm and weight = 70.9 ± 13.9 kg). 
Participants wore six IMU sensors (one on the wrist, one on the lower back, one on each thigh and one on each tibia) and walked 15 m at preferred speed in nine different environments: flat, cobblestone, grass, stairs up, stairs down, uphill, downhill, bank left, and bank right. They evaluated the performance of three complex deep neural networks in recognizing where walking occurred: convolutional neural network, long short-term memory network, and long short-term memory network with global pooling. They showed that the classification accuracy was 84% when only lower back IMU sensor was included in the analysis, and it increased to 92% when all sensors were included in the analysis. This data set is publicly available. We used it in (Bunker et al., 2021) to evaluate the classification performance of seven classical machine learning methods (i.e., fuzzy-rough nearest neighbour, vaguely classified nearest neighbour classifier, random forest, decision tree, naive Bayes, support vector machine, and multi-layer perceptron); 82% classification accuracy was achieved matching the performance of the deep neural networks mentioned above. However, neither of the studies evaluated the generalization performance of the classifiers; that is, whether they could extend to unseen participants successfully. Dixon et al. (2019) studied the running patterns of 29 participants (14 females and 15 males, age = 23.3 ± 3.6 years old, height = 180 ± 10 cm and weight = 63.6 ± 8.5 kg). Participants wore two IMU sensors (one on tibia and one on lower back) and ran on three different surfaces (synthetic track, concrete pavement and wood chip trail). They evaluated the performance of two classifiers (gradient boosting and deep convolutional network) and their variations using 90% training and 10% testing data split (which was repeated five times by randomly reshuffling the training and testing data). Above 90% classification accuracy was reported for all classifiers (the best performance being around 97%). Again, there was no mention on how well the classifiers generalized to unseen participants data. Benson et al. (2020) studied the running patterns of three groups of participants: Group 1-28 participants, 10 females and 18 males, age = 32.2 ± 13.4 years old, height = 174 ± 9 cm and weight = 70.5 ± 10.3 kg, Group 2-25 participants, 13 females and 12 males, age = 36.9 ± 10.1 years old, height = 173 ± 10 cm and weight = 70.2 ± 13 kg and Group 3-16 participants, 8 females and 8 males, age = 31.2 ± 10.3 years old, height = 170 ± 9 cm and weight = 67.1 ± 8.1 kg. All participants wore an IMU sensor on lower back. Group 1 participants ran on a treadmill (duration = 5 min), Group 2 participants ran on a concrete side-walk (length = 600 m), and Group 3 participants ran both environments. A binary support vector machine classifier was trained using data from Groups 1 and 2, and its performance was evaluated using 10-fold cross-validation resulting in 93% accuracy. The authors also evaluated the generalization performance of the model using data from Group 3 (84%). Ahamed et al. (2018) compared the running patterns of six participants (five females, age = 47.5 ± 9.6 years old, height = 169 ± 2 cm and weight = 67.4 ± 11.5 kg, and one male, age = 29 years old, height = 170 cm and weight = 75 kg). Participants wore two IMU sensors (one lower back and one on wrist watch) and ran in two different weather conditions: one in winter (À10 C) and one in spring (6 C). 
The random forest classifier (including 100 decision trees) was trained and evaluated using 70% training and 30% testing data split resulting in 87% accuracy. The authors also tried training a separate model for each participant which increased the accuracy to 95%. | Gait characterization in outdoor environments Within the scope of automatic gait analysis using IMU sensors, previous research has primarily focused on step detection (i.e., whether someone is walking or not) (Avvenuti et al., 2018) and extraction of step-related spatial and temporal gait parameters (such as cadence, step length, step time, swing to stance ratio, double support time, gait asymmetry and variability) during steady walking in indoor environments (Manor et al., 2018;Silsupadol et al., 2019;Zhong & Rau, 2020). Only a handful of studies have looked into gait analysis in free-living environments. Silsupadol et al. (2019) investigated walking patterns of two groups of participants: Group 1-12 young adults, 8 females and 4 males, age = 21.4 ± 1.2 years old, and group (2) 12 older adults, 12 females, age = 72.4 ± 6.1 years old (participants' height and weight information was not reported). All participants carried three smartphones (two on lower back and one in shoulder bag) walked on pedestrian walkways in seven different ways: (1) preferred speed, (2) turn left (3) turn right, (4) decelerate from normal to slow speed, (5) accelerate from slow to normal speed, (6) accelerate from normal to fast speed, and decelerate from fast to normal speed. Step time (and other gait parameters including gait speed, cadence, step length and asymmetry) was estimated from anterior-posterior acceleration channel which was low-pass filtered at 2 Hz cut-off frequency. This channel was utilized to detect heel-strike time points which were then used to estimate the gait parameters. The step time estimation error varied between 0 and 60 ms and it was higher during unsteady walking including acceleration and deceleration. Weiss et al. (2011) studied 22 Parkinson's (7 females and 15 males, age = 65.9 ± 5.9 years old) and 17 control participant (9 females, 8 males, age = 69.9 ± 8.8 years old) (participants' height and weight information was not reported). Participants wore an IMU sensor (lower back) and walked for a minute in a hospital corridor and outside the hospital. In addition, one Parkinson's and one control participant were asked to wear the sensor at home and outside for three consecutive days. The study compared acceleration profiles between groups and between environments but did not attempt to estimate temporal gait parameters from the data. | MATERIALS AND METHODS The study was approved by the Aberystwyth University Ethics Committee Board, and all experiments were conducted in accordance with the Declaration of Helsinki. All participants gave their informed consent before participating in the study. | Participants Ten participants from Aberystwyth town were recruited for data collection (4 females and 6 males, age = 29.0 ± 8.7 years old, height = 173 ± 7.9 cm and weight = 78.2 ± 16.2 kg). All participants were healthy with no apparent neurological or physical impairments that could affect their gait or compromise their safety while walking outdoor. | Data recording mobile app A custom-built Android app, developed by our research group, was used to record motion data from the embedded sensors of Google Pixel 4 smartphone at an average sampling rate of 400 Hz. 
The motion data included time stamps, accelerometer (three-channel) and gyroscope (three-channel) readings. These data were combined with participant information and GPS coordinates of the experimental location, and were transferred to a secure online server for storage and processing. Note that the GPS data were not used in this study. | Data collection Experiments were conducted in the summer of 2021 at eight different outdoor locations in Aberystwyth town, including a grass patch, a running track, a pavement, a sandy beach, a pebble beach, a forest track, a road with a slope (for uphill and downhill walking) and a set of stairs (for up and down walking) (Figure 2). These locations were chosen to cover a wide range of walking patterns seen in real life. In each environment, the participants were instructed to walk back and forth between two landmarks at preferred speed without stopping. The landmarks were 14 m apart. In each session, we ensured recording of at least 1 min of walking data, except for the sessions performed on the road and stairs. In these locations, the duration of the data recording was doubled to collect data for both walking uphill/going up the stairs and walking downhill/going downstairs. All experiments were repeated twice, and performed in daylight and under supervision to ensure the safety of the participants. To create more realistic walking conditions, we deliberately did not control for external factors such as weather conditions (e.g., a windy or rainy day), surface conditions (e.g., beach conditions varied after a tide), or having a crowd in the vicinity (although bystanders were not allowed to cross the experimental path). During experiments, participants carried the data recording phone using a fixation belt with a phone holder fixed on the lower back close to the L3 vertebra. The belt was tied around the waist tightly (without discomforting the participants) to reduce motion artefacts. The phone was placed in the holder horizontally with the z-axis corresponding to the anterior-posterior axis (i.e., the direction of walking). In addition, all experiments were recorded using a high-speed camera (GoPro Hero 10, frame rate of 240 frames per second). These videos were annotated manually to measure actual foot-ground contact times (and subsequently step times) as the gold standard. To synchronize phone and GoPro data, at the beginning of each data recording session the experimenter performed a predefined set of motions in the field of view of the camera (i.e., holding the phone at rest close to the chest for 5 s, lifting it up and bringing it down quickly to the resting position, and holding it there for another 5 s). The relative time offset between phone and GoPro was estimated by aligning the time points of maximum vertical acceleration. | Data preparation In total, we collected 220 min of phone data, including 20 data sets per participant (10 walking environments × 2 repetitions). Each recording was inspected visually to identify intervals of steady straight walking; that is, turnings were excluded from the analysis. These extracted steady walking data were used for training and validating classifiers in the environment classification module, and for estimating step time in the gait characterization module. 3.5 | Part 1: Environment classification 3.5.1 | Feature extraction Each recording was divided into short segments using a 2-s sliding window with 50% overlap. This resulted in at least 25 segments per trial (i.e., data instances).
From each segment, seven time domain features (min, max, mean, SD, skewness, kurtosis and number of zero crossings) and two frequency domain features (dominant frequency and its amplitude) were extracted, leading to 54 features in total: nine features × six channels (three channels from the accelerometer and three from the gyroscope sensors). | Classifier training and testing The training and testing data formed a 2D array consisting of different data segments (rows) and 54 features (columns). The participant ID and environment decision class were also added as the 55th and 56th columns, respectively. Previous studies on human activity recognition have shown that training multiple personalized models (one classifier per person) could lead to better classification performance than training one generalized model for multiple people (Mannini & Intille, 2018). In our data set, we also expected a degree of inter-participant variability. To compare personalized versus generalized models, two distinct classifier-training methods were followed. First, one classifier was obtained for all participants. Nine participants were chosen to train the classifier, and the remaining participant was used to evaluate its true performance (one fold). This process was repeated 10 times by reshuffling the participants in the training and testing data sets (10 folds in total). Second, for each participant, a separate classifier was obtained. In this case, 90% of the personalized data were used to train a classifier, and the remaining 10% were used to evaluate its performance (one fold). Again, this process was repeated 10 times by randomly reshuffling the data in the training and test data sets (10 folds in total). | Classifier evaluation For all classifiers, performance was evaluated using 10-fold cross-validation (including 10 randomizations at each fold) as described above. The classification accuracy was reported as the percentage of correctly classified instances. The stability of each classifier was evaluated based on the coefficient of variation (SD divided by mean). The class confusion matrix of one of the high-performing classifiers was also visualized to investigate the similarity of walking patterns across different environments. We predicted higher confusion (i.e., more incorrectly classified instances) among environments where participants exhibited similar walking patterns (e.g., pavement and track). | Part 2: Gait characterization The goal of the characterization module is to estimate temporal gait parameters, such as step time, its variance and the swing-to-stance ratio, accurately, and the goal of this study is to improve the robustness of the gait characterization module in outdoor conditions by making it more adaptive to the environment where walking takes place. The gait characterization module starts with filtering the data to remove high-frequency noise that is not related to walking (Phase 1). It then detects heel-strike and toe-off time points using a peak detection algorithm (Phase 2). Next, it estimates step times by measuring the time difference between two consecutive heel-strike or toe-off time points (Phase 3). Finally, a post-processing algorithm looks for missed steps (false negatives) or pseudo steps (false positives) by analysing the distribution of estimated step times (Phase 4). If the distribution of the estimated step times does not match the expected distribution, the algorithm changes the peak detection threshold and reanalyses the data (repeats Phases 2, 3 and 4).
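As a rough illustration of Phases 1-3 described above, the following Python sketch filters the forward acceleration, detects candidate toe-off peaks and converts consecutive peak intervals into step times. The filter order, the minimum peak separation and the use of positive (toe-off) peaks only are assumptions made for this sketch rather than the published implementation, and Phase 4 (the distribution-based correction of missed and pseudo steps) is omitted.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_step_times(acc_forward, fs, cutoff_hz=10.0):
    """Sketch of Phases 1-3: acc_forward is the anterior-posterior (z-channel)
    acceleration sampled at fs Hz; returns estimated step times in seconds."""
    # Phase 1: low-pass filter to remove high-frequency noise unrelated to walking.
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")   # 4th-order filter is an assumption
    filtered = filtfilt(b, a, acc_forward)
    # Phase 2: positive peaks are taken as toe-off events (maximum acceleration);
    # the 0.3 s minimum separation between peaks is an assumed threshold.
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs))
    # Phase 3: step time = interval between two consecutive toe-off events.
    return np.diff(peaks) / fs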
More details about the gait characterization module can be found in . The peak detection algorithm (Phase 2) runs under the assumption that the negative and positive peaks prominent in the forward acceleration (recorded by the accelerometer z-channel) align well with heel-strike (i.e., when maximum deceleration occurs) and toe-off time points (i.e., when maximum acceleration occurs), respectively. Previous studies have shown that this alignment assumption holds reasonably true while analysing data recorded in controlled laboratory conditions (Khandelwal & Wickström, 2017); however, it has not been tested on data recorded in outdoor environments. Walking on uneven and granular surfaces or against the wind may change the acceleration profile of a participant, creating additional peaks or shifting existing ones in time. This would negatively impact the accuracy of step time estimation. In addition, these environment-dependent acceleration profiles may vary from person to person, necessitating tailoring of the gait characterization module to each participant. 3.6.1 | Identifying optimal cut-off frequency and step time estimation error One parameter that impacts the performance of the gait characterization module is the cut-off frequency of the low-pass filter (Phase 1). In the original implementation, we proposed a relatively high cut-off frequency (10 Hz) to preserve the walking-related dominant frequency and its harmonics in the acceleration and angular velocity profiles. To evaluate whether lowering the cut-off frequency improves step time estimation, we ran the gait characterization module using different cut-off frequencies (varied between 2 and 10 Hz with 1 Hz increments), and saved the optimal frequency that led to the minimum step time estimation error (for each person and environment). The estimation error was calculated as the mean absolute error (e) between predicted and actual step time measurements (obtained from the camera). The relative change in error, Δe = (ê - e)/ê, was also calculated, where ê corresponds to the estimation error when the cut-off frequency was fixed at 10 Hz. Δe varied between 0 (ê = e) and 100% (e = 0); the higher Δe, the better the optimal cut-off frequency. Figure 3a shows three examples of how the estimation error varied as a function of filtering frequency: participant 1 walking on sand, participant 1 walking downhill and participant 2 walking on sand. The relationship between estimation error and frequency differed between participant 1 and participant 2. In participant 1, frequencies <7 Hz resulted in lower estimation errors, whereas in participant 2 frequencies >6 Hz resulted in lower estimation errors. When data were filtered using the optimal frequency, peaks aligned better with heel-strike and toe-off time points (Figure 3b-d). | Statistical analysis Two statistical tests were performed: one to compare the performance of personalized and generalized classifiers, and one to evaluate whether tuning the cut-off frequency resulted in a lower step time estimation error. In both cases, a one-sample Kolmogorov-Smirnov test was performed to evaluate whether the data came from normal distributions (in each group). The null hypotheses were rejected at the 5% significance level, that is, the distributions were not normal. Hence, non-parametric Wilcoxon rank-sum tests followed by Tukey-Kramer multiple comparison tests were performed to evaluate whether mean accuracy was different at the 5% or lower significance level.
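A corresponding sketch of the cut-off frequency sweep is given below; it reuses the estimate_step_times function from the previous sketch. The one-to-one pairing of estimated and camera-derived step times is a simplification introduced here, and in the actual module Phase 4 would reconcile missed and pseudo steps before errors are computed.

import numpy as np

def relative_error_reduction(e_opt, e_10hz):
    """Relative change in error, Delta-e = (e_hat - e) / e_hat, where e_hat is the
    mean absolute error obtained with the default 10 Hz cut-off."""
    return (e_10hz - e_opt) / e_10hz

def best_cutoff(acc_forward, fs, true_step_times, candidates=range(2, 11)):
    """Sweep cut-off frequencies from 2 to 10 Hz in 1 Hz steps and keep the one
    minimizing the mean absolute error against camera-derived step times."""
    errors = {}
    for f in candidates:
        est = estimate_step_times(acc_forward, fs, cutoff_hz=f)
        n = min(len(est), len(true_step_times))          # naive pairing of steps (assumption)
        errors[f] = np.mean(np.abs(np.asarray(est[:n]) - np.asarray(true_step_times[:n])))
    f_opt = min(errors, key=errors.get)
    return f_opt, errors[f_opt], relative_error_reduction(errors[f_opt], errors[10])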
| RESULTS All data preprocessing and analysis were performed offline using custom-built Matlab and Python scripts on a standard laptop. All results are reported in the format of either mean ± SD of the mean (for continuous variables such as step time) or median ± interquartile range (IQR) (for optimal cut-off frequency). | Environment classification The performance of personalized classifiers was high, with an average accuracy of 92.3 ± 5.3% (Table 1). Inter-participant variation in the classifiers' performance was up to 10%, with participants 1 and 7 having the lowest and highest accuracy, respectively (87.6 ± 5.4% and 98.1 ± 2.8%). Similarly, there was also up to 10% variation in performance across classifiers, with J48 and MLP having the lowest and highest accuracy, respectively (84.5 ± 4.4% and 96.8 ± 2.0%). On rare occasions, classifiers failed to identify the correct environment. The majority of mislabelling occurred between similar environments, including pavement and track or pebble and sand (e.g., see Figure 4). Overall, the classifiers also had high stability. Personalized classifiers performed significantly better than generalized classifiers, which were trained and tested on different participants (nine participants for training and one participant for testing) (p < 0.01). The accuracy of generalized classifiers reached a ceiling around 33%: 28.5 ± 3.2% (FNN), 19.9 ± 1.5% (J48), 31.8 ± 4.0% (JRip), 30.7 ± 2.7% (NN), 30.8 ± 5.6% (NB), 23.6 ± 2.4% (SMO), 27.2 ± 3.4% (MLP). Generalized models were more complex than personalized models. For instance, on average a generalized J48 classifier (a decision tree) had almost 17 times more leaves than a personalized J48 classifier (38 vs. 679), suggesting that there were no overlapping rules among participants, probably due to inter-participant variability. | Step time estimation The optimal cut-off frequency minimizing step time estimation error was different for each participant and environment, and there was no single frequency that could be associated with a participant or environment (Table 2). The majority of cases had either 2 Hz (38%) or 3 Hz (17%) as the optimal frequency. In five walking conditions the median frequency was less than 3 Hz: 2.5 ± 2.5 Hz (downstairs), 2.5 ± 3.5 Hz (forest), 2.5 ± 3.0 Hz (track), 2.5 ± 3.25 Hz (uphill) and 2.5 ± 2.5 Hz (upstairs), and in one condition (sand) it was 3.5 ± 2.75 Hz.
FIGURE 3 Step time estimation error as a function of low-pass filter cut-off frequency. Three examples are shown: participant 1 walking on sand (black), participant 1 walking downhill (cyan), and participant 2 walking on sand (magenta). Circle markers indicate the optimal cut-off frequencies resulting in the lowest estimation error for each data set. Note that the optimal frequency for each data set is distinct. (b) Phone data from participant 1 (sand): low-pass filtered by the optimal frequency of 3 Hz (top) and filtered by 10 Hz as a control (bottom). (c) Phone data from participant 1 (downhill): filtered by the optimal frequency of 6 Hz (top) and filtered by 10 Hz (bottom). (d) Phone data from participant 2 (sand): filtered by the optimal frequency of 7 Hz (top) and filtered by 10 Hz (bottom). Vertical lines correspond to actual toe-off (dashed grey line) and heel-strike (solid grey line) time points obtained from the video camera.
FIGURE 4 Naive Bayes (NB) classifier confusion matrix, calculated as a cumulative sum over the 10-fold cross-validation. Light colours indicate higher numbers.
In the other four conditions (downhill, grass, pavement and pebbles) the optimal frequency was more variable; for instance, 5.5 ± 4.25 Hz in grass. With the optimal cut-off frequency, the mean step time estimation error was 8.6 ± 15.4 ms (Table 4). Again, the amount of error varied among participants and environments. The minimum error was measured for participant 7 (2.9 ± 1.9 ms) and sand (5.1 ± 4.4 ms), whereas the maximum error was measured for participant 10 (28.6 ± 26.2 ms) and downhill (16.2 ± 32.1 ms).
TABLE 2 Optimal cut-off frequency for each participant and environment. Note: Last two rows show the median and interquartile range across participants (same environment). Last two columns show the median and interquartile range across environments (same participant). The overall median and IQR when all data were combined were 3.0 and 4.0, respectively.
TABLE 3 Reduction in step time estimation error (percentage) when the optimum cut-off frequency was used. The overall mean when all data were combined was 39%. Note: Last two rows show the mean reduction (and SD) across participants (same environment). Last two columns show the mean reduction (and SD) across environments (same participant).
| Summary This study presents a novel software architecture which uses an adaptive method to successfully characterize gait in different outdoor environments, and lays the groundwork for a smartphone-based gait assessment technology that can be used in the community. The architecture relies on two data processing modules: an environment classification module, which predicts the environmental conditions under which walking takes place, and a gait characterization module, which uses an environment-specific algorithm to estimate the temporal gait parameters of the walker. The utility of the architecture was demonstrated by analysing real-world data collected from 10 participants walking in 10 different conditions. The main results are summarized below:
1. The accuracy of the environment classification module was above 90%.
2. It was not possible to obtain a single classifier generalized to all participants. Instead, a separate classifier was obtained for each participant.
3. The step time estimation error was less than 10 ms, which is less than (or equal to) error values reported for indoor, flat surface walking.
4. Tailoring the filtering frequency to each person and environment reduced step time estimation error by 39%.
5. The optimal filtering frequency was different for each person and environment.
To the best of our knowledge, this is the first study which successfully demonstrates that parameter tuning can improve gait characterization in outdoor environments. | Analysis of results Why did generalized classifiers have low classification accuracy? The most plausible explanation is that the inter-participant variability (variability of how participants walk) was higher than the inter-environment variability (variability of walking in different environments). In other words, the gait variability between two participants walking in the same environment was higher than the gait variability of the same participant walking in two different environments. This was rather expected, as everyone has a distinct gait depending on a multitude of biomechanical and physiological factors (Winter, 1991), so much so that gait can be used as a human identification tool (Nixon et al., 2010), similar to a fingerprint. Alternatively, the low classification accuracy could be due to the selected feature set, which included only the most basic time and frequency domain features.
It remains to be seen whether adding new features (e.g., see Lubba et al., 2019) or using deeper networks (capable of learning their own features) would improve the classification performance. The personalized classifiers had an average classification accuracy higher than 90%, which was very encouraging. Incorrect classifications typically occurred across potentially similar conditions. The percentage of incorrect classifications was around 10% between track and pavement (both environments had a flat surface), and between sand and pebble beach (both environments had a granular surface, although at a different scale). There was also some confusion between pavement and grass, pavement and pebble beach, pavement and forest, and forest and sand.
TABLE 4 Step time estimation error (in milliseconds) when the optimum cut-off frequency was used.
On some occasions, participants deviated from their path or changed their gait based on external factors; for instance, to avoid a rock (pebble beach), a tree branch (forest) or a pedestrian who was walking nearby. These gait perturbations also changed the acceleration profiles and may have contributed to the incorrect classifications to a certain extent. Overall, the gait characterization module had an outstanding performance in estimating step time. The average step time estimation error was less than 10 ms, which was a marked improvement compared with previous indoor studies reporting error values between 10 and 20 ms (e.g., 13 ms in Del Din et al. (2015) and 19 ms in Kim et al. (2015)). However, it is worth noting that previous studies evaluated performance using larger data sets (including participants with varying demographics and degrees of gait impairment). In our study, there were a few exceptional cases (9 in total) where the estimation error was relatively high (>25 ms, or 5% assuming that the average step time was 500 ms). For instance, it was 100 ms while participant 3 was walking downhill. In this particular case, the participant walked very fast (almost running) and took larger steps, shifting the position of local peaks in the acceleration profile (Figure 5a). Noticeably, five of these nine cases came from participant 10. Further analysis showed that this was due to the fact that the peaks in this participant's acceleration profile did not align with heel-strike and toe-off time points as well as they did in other participants (Figure 5b), violating the key assumption of the gait characterization module. The results highlight the importance of parameter tuning in smartphone-based outdoor gait characterization; that is, tuning the low-pass filter cut-off frequency reduced step time estimation error by as much as 100%. The optimal frequency varied between 2 and 9 Hz, and on average the error reduction varied between 8% and 55% across participants and between 20% and 55% across environments. In general, lowering the cut-off frequency improved prediction performance; the optimal cut-off frequency was equal to or lower than 5 Hz in 70% of the instances, and equal to or less than 3 Hz in 55% of the instances. We speculate that walking outside required more corrective movements (for instance, to maintain balance after stepping on a stone). These movements were typically fast and transient, creating high-frequency noise in the data. Filtering the data with a relatively low cut-off frequency might have enhanced the peaks created by the fundamental stepping frequency, which was typically less than 2 Hz.
| Limitations of the study and future work While the proposed software architecture focused on estimating one gait parameter, step time, it can be easily extended to estimate other temporal parameters (e.g., swing-to-stance ratio and left-right asymmetry). In addition, there is a prospect for estimating spatial gait parameters which are associated with mobility (e.g., step length) and dynamic stability (e.g., step width) (Brach et al., 2005; Sekiya et al., 1997). Several methods have been proposed for estimating step length from IMU data (Klein & Asraf, 2020; Köse et al., 2012; Zijlstra & Hof, 2003). So far, these algorithms have been almost exclusively tested on indoor data, and further studies are needed to evaluate their performance on outdoor data. Apart from the filtering frequency, there are other parameters inside the gait characterization module that can be tuned to improve performance (e.g., the amplitude threshold in the peak detection algorithm). The long-term goal is to include a new optimization module capable of tuning these parameters automatically on the fly without needing a priori training. This new module will make the architecture more adaptive to unseen participants and environmental conditions. We are currently working on improving the gait characterization module to handle fast, unsteady and impaired walking. In these conditions, acceleration profiles change unpredictably. Hence, more reliable and assumption-free step detection and characterization methods are needed. One promising approach is training a deep neural network to predict foot contact times (Kidziński et al., 2019). In particular, long short-term memory networks and transformers, capable of learning temporal dependencies, would be suitable for this task. To expand the project, we have started recording data from more participants with diverse backgrounds, including healthy participants from different age groups and participants with neurological movement disorders (e.g., stroke and Parkinson's disease), and in more dynamic environments (e.g., while walking in the town centre or shopping in a grocery store). The new data set will pose new challenges in recognizing the 'context' (who is walking and where walking takes place), and will provide a rich test bed to evaluate the performance of the improved gait characterization module. Up until now, all data processing was done offline using a standard laptop. The next step is to realize these computations on the phone itself to provide real-time feedback to participants and healthcare professionals. The readout from the phone sensors was kept as high as possible (i.e., 400 Hz) to have high-resolution data. It is desirable, however, to lower the sampling rate in order to reduce data bandwidth and hence improve phone battery life. Preliminary results from an ongoing investigation in our research group suggest that 100 Hz is sufficient to maintain high performance. Similarly, we are investigating how the position and orientation of the phone affect its sensitivity.
FIGURE 5 (a) Phone data from participant 3 and participant 4 during downhill walking. (b) Phone data from participant 10 and participant 4 during climbing stairs. Vertical lines correspond to actual toe-off (dashed grey line) and heel-strike (solid grey line) time points obtained from the video camera.
It is not unreasonable to assume if the phone is placed in one of front pockets (which is more realistic in daily living scenarios), its sensors will be less sensitive to the steps from the contralateral side than the steps from the ipsilateral side. A naive solution to mitigate this problem would be to use separate amplitude thresholds for each side. However, peaks in the accelerometer data generated by the steps from the contralateral side may not be as prominent (or may be distorted), warranting further data processing. This study lays the groundwork for a smartphone-based gait assessment technology that can be used in the community for long-term continuous health monitoring. AUTHOR CONTRIBUTIONS Otar Akanyeti proposed the study. Megan Taylor Bunker developed the data recording mobile app. Megan Taylor Bunker and Arshad Sher collected the data. Otar Akanyeti, Arshad Sher analyzed the data. Otar Akanyeti and Arshad Sher wrote and edited the manuscript.
2022-09-03T15:09:45.644Z
2022-09-01T00:00:00.000
{ "year": 2023, "sha1": "6cc2eea0e7258db7a4bb7028a44633767fae8711", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1111/exsy.13130", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "5eca31cb284a4b9fc9029069710574fd9f5ab17d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
118455585
pes2o/s2orc
v3-fos-license
Late time evolution of the gravitational wave damping in the early Universe An analytical solution for time evolution of the gravitational wave damping in the early Universe due to freely streaming neutrinos is found in the late time regime. The solution is represented by a convergent series of spherical Bessel functions of even order and was possible with the help of a new compact formula for the convolution of spherical Bessel functions of integer order The CMB observations done by Wilkinson Microwave Anisotropy Probe (WMAP) [17] generally support theoretical predictions based on the standard inflationary cosmological model. The detailed analysis of the experimental data gives more and more accurate values [18][19][20] for the most valuable cosmological parameters such as baryon density, total matter density, Hubble constant, and age of the Universe. Independently of WMAP measurements, there is a long quest for a direct observation of cosmological gravitational waves [21]. Specially designed for this task Laser Interferometer Gravitational Wave Observatory (LIGO) puts a major effort in this experimental challenge [22]. A direct observation of cosmological gravitational waves would serve as a decisive test for validity of the Einstein general theory of relativity in the same way as the Michelson-Morley experiment served as a major proof of the Einstein special theory of relativity. As in the experimental case, cosmological tensor fluctuations pose a challenge also from a theoretical side [23]. Following S. Weinberg [23] we argue that "The particles of both the cold dark matter and baryonic plasma move too slowly to contribute any anisotropic inertia. In tensor modes there are no perturbations to densities or streaming velocities, so there are no perturbations to either the cold dark matter or baryonic plasma that need to be followed here." Therefore the only contributions to the anisotropic inertia tensor are due to photons and neutrinos. Further simplification comes from the following argument by S. Weinberg [23][24][25]: "The anisotropic inertia tensor is the sum of the contributions from photons and neutrinos, but photons have a short mean free time before the era of recombination, and make only a small contribution to the total energy density afterwards, so their contribution to the anisotropic inertia is small. This leaves neutrinos (including antineutrinos), which have been traveling essentially without collisions since the temperature dropped below about 10 10 K, and which make up a good fraction of the energy density of the universe until cold dark matter becomes important, at a temperature about 10 4 K. The tensor part of the anisotropic inertia tensor is given by so the gravitational wave equation now becomes an integro-differential equation: The impact of neutrino source on gravitational wave damping has been thoroughly considered [23][24][25][26][27][28][29][30][31][32][33]. In this paper we report an analytical solution for the damping of gravitational waves in the early Universe due to freely streaming neutrinos, eq. (3), in the late time regime, u ≫ Q ≫ 1. The solution is represented by an infinite series of spherical Bessel functions of even order. First we shall explain each term in the introduced equations (1-3). II. NOTATIONS We are interested in the time evolution of h ij (x, t) that is the tensor perturbation to the metric g µν : where a(t) is the time-dependent Robertson-Walker scale factor. The kernel K(u) in eq. 
(1) is represented by the sum of three spherical Bessel functions j n (u), The anisotropic stress tensor π ij (u) is obtained from the solution of the Boltzmann equation for freely streaming neutrinos [24,25] which defines the stress-energy tensor for the Einstein field equations. The functions ρ ν (u) and ρ γ (u) give the unperturbed equilibrium neutrino and photon energy density, correspondingly, which define the ratio: The variable u is the product of the wave number k and the conformal time, The boundary condition to eq. (3) is and it is assumed that we can parameterize the tensor h ij (u) as Introducing the dimensionless quantity where t eq is the time of matter-radiation equality, the general eq. (3) can be written as [24] (1 + y) with the boundary conditions Eq. (11) can be further simplified by the change of variable, into Here Q is defined by the ratio of the wave number to its value at the time of matter-radiation equality, k eq = a eq H eq , . III. TIME EVOLUTION OF THE GRAVITATIONAL WAVE DAMPING In the late time regime, u ≫ Q ≫ 1, the general eq. (14) simplifies into whereĈ ≡ −24f ν (0)(4Q) 2 . Below we present the analytical solution for eq. (16) together with boundary conditions (12). Clearly we need to find a specific function that would "absorb" all derivatives of u and all powers of u on the left hand side of eq. (16). For the differential operator that appears in the left hand side of eq. (16), these conditions can be satisfied with the function Applying the differential operator (17) to the function (18) we obtain a single spherical Bessel function L[f n (u)] = n(n − 2)(n + 1)(n + 3)(2n + 1)j n (u) (19) which is exactly what we are looking for. Therefore we can look for the solution of eq. (16) in terms of the expansion: The left hand side of eq. (16) transforms into ∞ n=0 n(n − 2)(n + 1)(n + 3)(2n + 1)c n j n (u). The regular at the origin solution for the homogeneous part of eq. (16), is the sum of the two spherical Bessel functions, The homogenous part χ 0 (u) of the general solution χ(u) can be already seen as a linear combination of the first two terms in the expansion (20) for n = 0 and n = 2. The right hand side of eq. (16) is represented by the convolution of the kernel (5) with the first derivative of the unknown function χ(u) which we are looking for in terms of a series (20). Clearly one needs a mathematical tool that relates a convolution of spherical Bessel functions to a series of those. In Appendix A we prove a useful formula for the convolution of spherical Bessel functions, that is not presented in the mathematical literature: Here the matrix B k,l is generated by the convolution of the first derivative of the function χ(u), with the kernel (5) by means of eq. (24), and for integer k ∈ [0, 10] and l ∈ [0, 10] is We should notice an unpleasant feature of the matrix of coefficients (27): the first row up to a factor k = −5 is identical to the second row. This is a direct response to the symmetry of the introduced function f n (u), eq. (18). The functions f n (u) for n = 0 and n = 2 are exactly the same up to the factor k = −5, f 2 (u) = 10 3 (j 0 (u) + j 2 (u)). Therefore the rank of the matrix (27) is Rank[B] = N − 1. This is a real obstacle because it leads to inconsistency with the boundary conditions. 
Indeed, the boundary conditions (12) are met if we set On the other hand, the linear dependence of the matrix is equivalent to In order to avoid this unpleasant feature we can start summation in the series (20) from n = 2 instead of n = 0 which is equivalent to setting and thus the boundary conditions (12) are met if we set Absence of the j 2 (u) term in the left hand side of eq. (21) leads to a restriction on the first coefficients in eq. (25): Finally we get the system of linear equations n(n − 2)(n + 1)(n + 3)(2n + 1)c n =Ĉ In the limit Q ≫ 1 and owing toĈ ≡ −24f ν (0)(4Q) 2 we have Q-independent solution: IV. CONCLUSION We have analyzed the problem of gravitational wave damping in the early Universe due to freely streaming neutrinos in the late time regime u ≫ Q ≫ 1. As in the opposite limit u ≪ Q [26], the solution is represented by a convergent series of spherical Bessel functions of even order and is independent of the Q-value. Thus we conclude that the problem gravitational wave damping in the early Universe due to freely streaming neutrinos is completely solved in an analytical way in both early and late time limits. Starting with the integral we prove (24). First, we represent a spherical Bessel function as a Fourier transformation of the Legendre polynomial P n (z), Substitution of (39) into (38) leads to Now we employ the Legendre function of the second kind Q n (z) defined as Performing the integrations over t and s we obtain Further we reincarnate spherical Bessel functions by decomposing plane waves in terms of Legendre polynomials: which leads to The angular momentum coupling simplifies the product of Legendre polynomials, in terms of the Clebsch-Gordan coefficients l, 0, m, 0|L, 0 . Introducing and using the analog of eq. (45), we come to where in the first term we have decomposed the product P l (t)Q m (t), whereas in the second term we decomposed the product P l (t)P m (t). Using the parity identity,
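Several displayed equations of this article were lost in extraction. For orientation, two standard ingredients that the text refers to can be written out; these are well-known identities rather than reconstructions of the source's exact expressions. The kernel described in eq. (5) as a sum of three spherical Bessel functions is, in Weinberg's treatment of neutrino damping, commonly written as

K(u) = \frac{j_2(u)}{u^{2}} = \frac{1}{15}\, j_0(u) + \frac{2}{21}\, j_2(u) + \frac{1}{35}\, j_4(u),

which follows from the recursion j_n(u)/u = [\, j_{n-1}(u) + j_{n+1}(u) \,]/(2n+1). The Fourier-Legendre representation and the plane-wave expansion invoked at the start of the appendix are

j_n(x) = \frac{(-i)^{n}}{2}\int_{-1}^{1} e^{i x t}\, P_n(t)\, dt, \qquad e^{i x t} = \sum_{n=0}^{\infty} (2n+1)\, i^{n}\, j_n(x)\, P_n(t).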
2012-07-31T00:05:52.000Z
2012-04-06T00:00:00.000
{ "year": 2012, "sha1": "990c2f9b47929809af03f9410c210149c5796c41", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "990c2f9b47929809af03f9410c210149c5796c41", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
16983714
pes2o/s2orc
v3-fos-license
Strangeness and Charm Signatures of the Quark-Gluon Plasma Strangeness, charmonium and open charm yields in relativistic nucleus-nucleus collisions are considered within statistical model approach as potential signals of the quark-gluon plasma. I. Introduction The present status of the quark-qluon plasma (QGP) in nucleus-nucleus (A+A) collisions is somewhat uncertain. Experimental data for A+A collisions with truly heavy beams have become available: Au+Au at 11 A·GeV at the BNL AGS and Pb+Pb at 158 A·GeV at the CERN SPS [1]. A systematic analysis of these data could yield clues to whether a short-lived phase with quark and gluon constituents, the QGP, exists during the hot and dense stage of these reactions. This question, whether the QGP is already produced with the currently operating heavy ion accelerators, is right now vigorously debated. There are also convincing hopes to find new reliable evidences of the QGP within a few years. The accelerators of a new generation RHIC BNL and LHC CERN will start soon to operate. Since a long time [2] statistical models are used to describe hadron multiplicities in high energy collisions. Thermal hadron production models have been successfully used to fit the data on particle multiplicities in A+A collisions at the CERN SPS energies (see, e.g. [3,4,5]). Due to the large number of particles a grand canonical formulation is used for the modeling of high energy heavy ion collisions. Recently, an impressive success of the statistical model applied to hadron multiplicities in elementary e + + e − , p + p and p +p interactions at high energy was also reported [6]. However, in the latter case the use of a canonical formulation of the model, which assures the exact conservation of the conserved charges, is necessary (see, e.g., [7] and references therein). The temperature parameter which characterizes the available phase space for the hadron production is found in these interactions to be 160-190 MeV [6]. It does not show any significant dependence on the type of reaction and on the collision energy (at the SPS energies and higher). This fact suggests the possibility to ascribe the observed statistical properties of hadron production systematics in elementary and nuclear collisions at high energies to the statistical nature of the hadronization process [6,4,8]. II. Strangeness Production The enhanced production of strangeness was considered by many authors as a potential signal of QGP formation (see, e.g., Ref. [9]). The expectation was that strangeness production should rapidly increase when the energy transition region is crossed from below. The strangeness to pion ratio is indeed observed to increase in A+A collisions. It seems however that this is not a signal of the QGP. The low level of strangeness production in p+p interactions as compared to the strangeness yield in central A+A collisions, called strangeness enhancement, can be also understood to a large extent as due to the effect of the exact strangeness conservation. The canonical ensemble treatment of the strangeness conservation leads to the additional suppression factors imposed on the strange hadrons production in small systems created in p+p collisions [6,7]. Another important point is that for the chemical freeze-out parameters, temperature T and baryonic chemical potential µ b , found for the SPS energies the strangeness to entropy ratio is larger in the equilibrium HG than in the equilibrium QGP [5]. 
To estimate the strangeness to entropy ratio let us consider the quantity where N s and Ns are the numbers of strange quarks and antiquarks, and S is the total entropy of the system. In the QGP we use the ideal gas approximation of massless u-, d-(anti)quarks and gluons, strange (anti)quarks with m s ∼ = 150 MeV and (anti)charm quarks with m c ∼ = 1500 MeV. For the HG state the values of N s and Ns are calculated as a sum of all s ands inside hadrons, and S is the total HG entropy. The behaviour of R s (1) for the HG and QGP is shown in Fig. 1 as a function of T for µ B = 0. In the wide range of T = 200 ÷ 500 MeV one finds an almost constant value of R s in the QGP which is smaller than the corresponding quantity in the HG. The total entropy as well as the total number of strange quarks and antiquarks are expected to be conserved approximately during the hadronization of QGP. This suggests that the value of R s at the HG chemical freeze-out should be close to that in the equilibrium QGP and smaller than in the HG at chemical equilibrium. Therefore, the strangeness suppression in the HG would become a signal for the formation of QGP at the early stage of A+A collision at the CERN SPS energies. The same conclusion was obtained in Ref. [10]. In the model presented in that paper it was assumed that due to the statistical nature of the creation process the strangeness in the early stage is already in equilibrium and therefore possible secondary processes do not modify its value. As the strangeness to entropy ratio is lower in the QGP than in the confined matter, the suppression of strangeness production is expected to occur when crossing the transition energy range from below. The total strangeness production is usually studied using the experimental ratio which measures the ratio of the mean multiplicities of Λ hyperons and K,K mesons to the multiplicity of pions. This is an experimental analog of the quantity R s (1). In Fig. 2 the experimental data of E s ratio (2) are shown together with its theoretically expected behaviour according to Ref. [10]. The deconfinement phase transition causes a nonmonotonic behaviour of E s with the collision energy. The maximum of E s is in the energy region between the AGS and SPS. III. Charmonium and Open Charm Production Charmonium production in hadronic [11] and nuclear [12] collisions is usually considered to be composed of three stages: the creation of a cc pair, the formation of a bound cc state and the subsequent interaction of this cc bound state with the surrounding matter. The first process is calculated within perturbative QCD, whereas modeling of non-perturbative dynamics is needed to describe the last two stages (see, e.g., [13] and references therein). The interaction of the bound cc state with matter causes suppression of the finally observed charmonium yield relative to the initially created number of bound cc states. This initial number is assumed to be proportional to the number of Drell-Yan lepton pairs, which then allows for the experimental study of the charmonium suppression pattern. It was proposed [14] that the magnitude of the measured suppression in nuclear collisions can be used as a probe of the state of high density matter created at the early stage of the collision. The rapid increase of the suppression (anomalous suppression) observed when going from peripheral to central Pb+Pb collisions [15] is often attributed to the formation of the QGP [16]. 
It was recently found [10,17] that the mean multiplicity of J/ψ mesons increases proportionally to the mean multiplicity of pions when p+p, p+A and A+A collisions at CERN SPS energies are considered. We illustrate this unexpected experimental fact by reproducing in Fig. 3 the ratio J/ψ / h − as a function of the mean number of nucleons participating in the interaction for inelastic nuclear collisions at the CERN SPS. The J/ψ and h − denote here the mean multiplicities of J/ψ mesons and negatively charged hadrons (more than 90% are π − mesons), respectively. In the standard picture of the J/ψ production based on the hard creation of cc pairs and the subsequent suppression of the bound cc states the observed scaling behavior of the J/ψ multiplicity appears to be due to an 'accidental' cancelation of several large effects. A very different picture was proposed in Ref. [18] which explains a scaling property of the J/ψ multiplicity by assuming that a dominant fraction of J/ψ mesons is produced directly at hadronization according to the available hadronic phase space. J/ψ mesons are neutral and unflavored, i.e., all charges conserved in the strong interaction (electric charge, baryon number, strangeness and charm) are equal to zero for this particle. Therefore, its production is not influenced by the conservation laws of quantum numbers. Consequently, the J/ψ production can be calculated in the grand canonical approximation and, therefore, its multiplicity is proportional to the volume, V , of the matter at hadronization. Thus, the statistical yield of J/ψ mesons at hadronization is given by where j = 1 and m ψ ∼ = 3097 MeV are the spin and the mass of the J/ψ meson and T H is the hadronization temperature. The previously mentioned results of the analysis of hadron yield systematics in elementary and nuclear collisions within the statistical approach indicate that the hadronization temperature T H is the same for different colliding systems and collision energies. This reflects the universal feature of the hadronization process. The total entropy of the produced matter is proportional to its volume. As most of the entropy in the final state is carried by pions, the pion multiplicity is also expected to be proportional to the volume of the hadronizing matter. Thus the scaling property (3) follows directly from the hypothesis of statistical production of J/ψ mesons at hadronization and the universality of the parameter T H . Since elements of hadronizing matter move in the overall center of mass system the volume V in Eq. (4) characterizes in fact the sum of the proper volumes of all elements in the collision event. The hypothesis of statistical production of J/ψ mesons at a constant hadronization temperature T H leads to the prediction of a second scaling property of the J/ψ multiplicity, namely: which should be valid for sufficiently large c.m. energies, √ s. This scaling property is illustrated in Fig. 4 which shows the ratio J/ψ / h − as a function of √ s for proton-nucleon interactions. The experimental data on J/ψ yields are taken from a compilation given in [19]. The values of h − are calculated using a parameterization of the experimental results as proposed in [20]. Onwards from the CERN SPS energies, √ s ∼ = 20 GeV, the ratio J/ψ / h − is approximately constant, in line with the expected scaling behavior (5). 
The rapid increase of the ratio with collision energy observed below √ s ∼ = 20 GeV should be attributed to a significantly larger energy threshold for the J/ψ production than for the pion production. In terms of the statistical approach the effect of strict energy-momentum conservation has to be taken into account by use of the microcanonical formulation of the model. The statistical J/ψ multiplicity (4) depends on two parameters, T H and V . However, a simple way to estimate of the crucial temperature parameter in Eq. (4) from the experimental data is possible, provided that we find a second hadron which has the properties of the J/ψ meson, i.e., it is neutral, unflavored and stable with respect to strong decays. The best candidate is the η meson. The multiplicity of η mesons seems to obey also the scaling properties (3) and (5). The independence of the η / π 0 ratio on the collision energy was observed quite a long time ago [21]. Recent data on η production suggest that η / π 0 ratio is also independent of the size of the colliding objects. In central Pb+Pb collisions at 158 A GeV the ratio η / π 0 = 0.081 ± 0.014 is measured [22]. It is consistent with the values of the ratio reported for all inelastic p+p at 400 GeV [23] (0.077±0.005) and S+S at 200 A·GeV [24] (0.12±0.04). From the measured ratios, J/ψ / h − and η / π 0 , we estimate a mean ratio J/ψ / η = (1.3 ± 0.3) · 10 −5 . Here we use the experimental ratio π 0 / h − ∼ = 1 in N+N interactions [25]. Under the hypothesis of the statistical production of J/ψ and η mesons at hadronization the measured ratio can be compared to the ratio calculated using Eq. (4): where m η ∼ = 547 MeV is the mass of the η meson. This leads to an estimate of the hadronization temperature, T H ≈ 176 MeV. A graphical solution of Eq. (6) is shown in Fig. 5 which illustrates the high sensitivity of the estimate of the T H parameter by using the J/ψ / η ratio. This is due to the large difference between mass of the J/ψ and the η mesons as the right hand side of Eq. (4) is approximately proportional to exp[(m η − m ψ )/T H ]. In the proposed model the creation of the J/ψ mesons is due to the straight thermal production at hadronization and not due to the coalescence of cc quarks produced before hadronization. Therefore the yield of J/ψ mesons is independent of the production of open charm, which is carried mainly by the D mesons in the final state. The D mesons multiplicity is determined by the number of cc quark pairs created in the early parton stage before the hadronization. Assuming chemical equilibrium of charm in the QGP stage of A+A collision we can estimate the ratio of the open charm hadrons to pions. The charm to entropy ratio is defined by the quantity similar to Eq. (1) for the ratio of strangeness to entropy. The behaviour of R c (7) for the HG and QGP is shown in Fig. 6 as a function of T for µ B = 0. In Fig. 6 the quantity R s (1) from Fig. 1 is also presented for a comparison. The behaviour of R c is completely different from that of R s . R c in the QGP is strongly increasing with T and its values are much larger than in the HG. The experimental analog of R c is E c = D / π , which measures the ratio of the mean multiplicities of D mesons to the multiplicity of pions. The assumption of the conserved total entropy and the total number of charm quarks and antiquarks during the hadronization of QGP leads then to the picture of chemical non-equilibrium hadron gas at the chemical freeze-out with a strong enhancement of D mesons yield. 
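The hadronization temperature quoted above, T H ≈ 176 MeV, can be reproduced numerically from the measured J/ψ / η ratio. The sketch below assumes the Boltzmann form of the statistical yield at hadronization, N ∝ V (2j+1) m² T H K 2 (m/T H ), which is consistent with the surrounding text but should be read as our reconstruction of Eq. (4), and then solves the resulting version of Eq. (6) for T H .

```python
# Sketch: solve  <J/psi>/<eta> = 3 m_psi^2 K_2(m_psi/T) / (m_eta^2 K_2(m_eta/T))
# for T, given the measured ratio ~1.3e-5.  The Boltzmann form of the thermal
# yield is our assumption; masses are in MeV.
from scipy.special import kn
from scipy.optimize import brentq

M_PSI, M_ETA = 3097.0, 547.0
RATIO_OBS = 1.3e-5

def thermal_ratio(T):
    num = 3.0 * M_PSI**2 * kn(2, M_PSI / T)   # (2j+1) = 3 for the J/psi
    den = 1.0 * M_ETA**2 * kn(2, M_ETA / T)   # (2j+1) = 1 for the eta
    return num / den

T_H = brentq(lambda T: thermal_ratio(T) - RATIO_OBS, 120.0, 250.0)
print(f"T_H ~ {T_H:.0f} MeV")   # close to the ~176 MeV quoted in the text
```

The steepness of thermal_ratio(T) around the solution also illustrates the high sensitivity of this estimate discussed in connection with Fig. 5.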
The ratio of D mesons to pions should strongly increase with the collision energy from the SPS to the RHIC. IV. Summary The statistical production of strangeness and charm discussed in this talk can be summarized as follows: • The transition to the QGP in A+A collisions could be seen as a nonmonotonic dependence on collision energy of the strangeness to pion ratio in the energy region between the AGS and the SPS. • The yield of J/ψ mesons at the CERN SPS can be understood assuming statistical production of J/ψ at hadronization, and it is sensitive to the hadronization temperature. The D meson multiplicity is determined by the number of cc quark pairs created in the early QGP stage. The ratio of D mesons to pions is expected to increase strongly with the collision energy from the SPS to the RHIC.
2014-10-01T00:00:00.000Z
1999-12-07T00:00:00.000
{ "year": 1999, "sha1": "36054fa8d33456deb6fb73b0fc4c15bc290ea71c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "36054fa8d33456deb6fb73b0fc4c15bc290ea71c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
15010581
pes2o/s2orc
v3-fos-license
Explicit determination of the images of the Galois representations attached to abelian surfaces with End(A)=\Z We give an effective version of a result reported by Serre asserting that the images of the Galois representations attached to an abelian surface with $\End(A)= \mathbb{Z}$ are as large as possible for almost every prime. Our algorithm depends on the truth of Serre's conjecture for two dimensional odd irreducible Galois representations. Assuming this conjecture we determine the finite set of primes with exceptional image. Introduction Let A be an abelian surface defined over Q with End(A) := End Q (A) = Z. Let ρ ℓ : G Q → GSp(4, Z ℓ ) be the compatible family of Galois representations given by the Galois action on T ℓ (A) = A[ℓ ∞ ](Q), the Tate modules of the abelian surface (we are assuming that A is principally polarized). Each ρ ℓ is unramified outside ℓN, where N is the product of the primes of bad reduction of A. If we call G ℓ ∞ the image of ρ ℓ then we have the following result of Serre (cf. [Se 86]): Theorem 1.1 If A is an abelian surface over Q with End(A) = Z and principally polarized, then G ℓ ∞ = GSp(4, Z ℓ ) for almost every ℓ. Remark: If G ℓ is the image ofρ ℓ , the Galois representation on ℓ-division points of A(Q) (and the residual mod ℓ representation corresponding to ρ ℓ ), it is enough to show that G ℓ = GSp(4, F ℓ ) for almost every ℓ (cf. [ Se 86]). Serre proposed the problem of giving an effective version of this result: "...partir de courbes de genre 2 explicites, et tcher de dire partir de quand le groupe de Galois correspondant G ℓ devient gal GSp(4, F ℓ ) ". But Serre's proof depends on certain ineffective results of Faltings and therefore does not solve this problem. In this article, we will give an algorithm that computes a finite set F of primes containing all those primes (if any) with image of the corresponding Galois representation exceptional, i.e., different from GSp(4, F ℓ ). The validity of our method depends on the truth of Serre's conjecture for 2-dimensional irreducible odd Galois representations, conjecture (3.2.4 ? ) in [Se 87]. This means that if ℓ is a prime such that G ℓ = GSp(4, F ℓ ) and ℓ / ∈ F , then G ℓ has a 2-dimensional irreducible (odd) component that violates Serre's conjecture. In the examples, we also give infinite sets of primes for which we can prove the result on the images unconditionally, i.e., without assuming Serre's conjecture. Results of this kind were previously obtained by Le Duff only under the extra assumption of semiabelian reduction of the abelian surface at some prime. Our technique has two advantages: it does not have any restriction on the reduction type of the abelian surface, and in the case of semiabelian reduction it allows us to prove the result on the images (unconditionally) for larger sets of primes. I want to thank N. Vila, A. Brumer and J-P. Serre for useful remarks and comments. 
2 Main Tools 2.1 Maximal subgroups of PGSp(4, F ℓ ) In [Mi 14], Mitchell gives the following classification of maximal proper subgroups G of PSp(4, F ℓ ) (ℓ odd), as groups of transformations of the projective space having an invariant linear complex: 1) a group having an invariant point and plane 2) a group having an invariant parabolic congruence 3) a group having an invariant hyperbolic congruence 4) a group having an invariant elliptic congruence 5) a group having an invariant quadric 6) a group having an invariant twisted cubic 7) a group G containing a normal elementary abelian subgroup E of order 16, with: G/E ∼ = A 5 or S 5 8) a group G isomorphic to A 6 , S 6 or A 7 . For the relevant definitions see [Hi 85], see also [Bl 17] and [Os 77] for cases 7) and 8). Remark: This classification is part of a general "philosophy": the subgroups of GL(n, F ℓ ), ℓ large, are essentially subgroups of Lie type, with some exceptions independent of ℓ (see [Se 86]). The action of Inertia From now on we will assume that ℓ is a prime of good reduction for the abelian surface A. Then it follows from results of Raynaud that the restrictionρ ℓ | I ℓ has the following property (cf. [Ra 74], [Se 72]): Theorem 2.1 If V is a Jordan-Hlder quotient of the I ℓ -module A[ℓ](Q) of dimension n over F ℓ , then V admits an F ℓ n -vector space structure of dimension 1 such that the action of I ℓ on V is given by a character φ : I ℓ,t → F * ℓ n (t stands for tame) with: where φ i are the fundamental characters of level n and d i = 0 or 1, for every i = 1, 2, ..., n. All this statement is proved by Serre in [Se 72] except for the bound for the exponents, which is the result of Raynaud mentioned above, later generalized by Fontaine-Messing. We will use the following lemmas repeatedly (cf. [Di 1901]): Lemma 2.2 Let M ∈ Sp(4, F ) be a symplectic transformation over a field F . The roots of the characteristic polynomial of M can be written as α, β, α −1 , β −1 , for some α, β. Remark: A similar result holds in general for the groups Sp(2n, F ). In the case of the Galois representations attached to A, we know that det(ρ ℓ ) = χ 2 , where χ is the mod ℓ cyclotomic character. Therefore, we obtain: The roots of the characteristic polynomial ofρ ℓ (Frob p) ∈ G ℓ can be written as α, β, p/α, p/β (p ∤ ℓN). Remark: Here Frob p denotes the (arithmetic) Frobenius element, defined up to conjugation. The value of the representation in it is well defined precisely because of the fact that the representation is unramified at p. Proof: Use lemma 2.2, G ℓ ⊆ GSp(4, F ℓ ), and the exact sequence: Remark: The same is true for ρ ℓ (Frob p) ∈ G ℓ ∞ . Thus, the characteristic polynomial of ρ ℓ (Frob p) has the form: From formula (2.1), we obtain the following possibilities forρ ℓ | I ℓ (ℓ ∤ N): where ψ i is a fundamental character of level i. Suppose that the representationρ ℓ is reducible with a 1-dimensional sub(or quotient) representation given by a character µ. This character is unramified outside ℓN and takes values inF ℓ , therefore from the description ofρ ℓ | I ℓ given in section 2.2 we have µ = εχ i , with ε unramified outside N and i = 0 or 1. Clearly cond(ε) | c. After semi-simplification, we have: for a 3-dimensional representation π with det(π) = ε −1 χ 2−i . Therefore, cond(ε) 2 | c. Let d be the maximal integer such that d 2 | c. If we take a prime p ≡ 1 (mod d), we have ε(p) = 1 so χ i is a root of the characteristic polynomial ofρ ℓ (Frob p). 
This gives: b p − a p (p + 1) + p 2 + 1 ≡ 0 (mod ℓ), (3.1) both for i = 0 and i = 1 (in agreement with lemma 2.3). By the Riemann hypothesis, the roots of the characteristic polynomial of ρ ℓ (Frob p) have absolute value √ p. This gives automatically bounds for the absolute values of the coefficients a p and b p , and from these bounds we see that for large enough p congruence (3.1) is not an equality. Therefore, only finitely many primes ℓ may verify (3.1). Variant: Instead of taking a prime p ≡ 1 (mod d), we can work in general with p of order f in (Z/dZ) * . Let P ol p (x) be the characteristic polynomial ofρ ℓ (Frob p). Then ε(p)p i is a root of P ol p (x), with i = 0 or 1, and where Res stands for resultant (again, cases i = 0 and 1 agree). This variant is used in the examples to avoid computing P ol p (x) for large p. • Case 1: In this case we have the factorization: As in the previous subsection: cond(ε) | d. Eliminating r from the equation, we obtain: If we take p ≡ 1 (mod d), we obtain: Again, from the bounds for the coefficients we see that for large enough p this is not an equality. Thus, only finitely many ℓ can satisfy (3.3). Alternatively, for computational purposes, we may take p with p f ≡ 1 (mod d). Then we have: • Case 2: This case is quite similar to the previous one. We start with: with cond(ε) | d. From this: In this case, the fact that this holds only for finitely many primes ℓ is nontrivial. It may be thought of as a consequence of theorem 1.1. At this point we invoke Serre's conjecture (3.2.4 ? ) (see [Se 87]) that gives us a control on π 1 and π 2 . Both representations should be modular of weight 2, i.e., there exist two cusp forms f 1 , f 2 with: N 1 N 2 | c (we are assuming π 1 , π 2 to be irreducible; otherwise, they are covered by section 3.1). Both cusp forms have trivial nebentypus. There are finitely many cusp forms in these finitely many spaces. So we have an algorithm to detect the primes ℓ falling in this case by comparing characteristic polynomials mod ℓ, due tō We take all pairs of integers N 1 , N 2 with N 1 N 2 = c and all pairs of cusp forms f 1 ∈ S 2 (N 1 ), f 2 ∈ S 2 (N 2 ) (either newforms or oldforms). If we denote by P ol f i ,p (x) the characteristic polynomial of ρ f i ,ℓ (Frob p) (i = 1, 2), we should have for some such pair f 1 , f 2 : for every p ∤ ℓN. Theorem 1.1 guarantees that this can only happen for finitely many primes. where c p is the eigenvalue of f i corresponding to the Hecke operator T p . These eigenvalues, and a fortiori the characteristic polynomials P ol f,p (x) for any cusp form f , can be computed with an algorithm of W. Stein (cf. [St]). The compatible family of Galois representations constructed by Deligne, in the case of a cusp form f ∈ S 2 (N), shows up in the jacobian J 0 (N) of the modular curve X 0 (N): it agrees with a two-dimensional constituent of the one attached to the abelian variety A f corresponding to f . For computational purposes, we introduce the following variant: observe that either N 1 or N 2 (say N 1 ) satisfy Consider all divisors of c verifying this, maximal (among divisors of c) with this property. Call S the set of such divisors. Then we are supposing that there exists f ∈ S 2 (t) with t ∈ S and With this formula we compute in any given example all primes ℓ falling in this case. 
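As a concrete illustration of how congruence (3.1) singles out finitely many candidate primes, note that any ℓ satisfying it must divide the integer b p − a p (p + 1) + p² + 1, so factoring that integer lists the candidates coming from this case. The sketch below does exactly this; the Frobenius data used in the demonstration are placeholders, not values taken from the paper's tables.

```python
# Sketch: primes l satisfying congruence (3.1),
#     b_p - a_p (p + 1) + p^2 + 1 = 0 (mod l),
# must divide the left-hand side, so its prime factors are the candidates.
def prime_factors(n):
    n, factors, d = abs(n), set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return sorted(factors)

def candidates_from_3_1(p, a_p, b_p):
    value = b_p - a_p * (p + 1) + p * p + 1
    return value, prime_factors(value)

# Placeholder Frobenius data (NOT taken from the paper's tables):
print(candidates_from_3_1(229, a_p=10, b_p=58))
```

The same pattern applies to the analogous congruences of cases 1 and 2, with the appropriate left-hand sides.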
Stabilizer of a hyperbolic or elliptic congruence If G ℓ corresponds to an irreducible subgroup inside (its projective image) some of the maximal subgroups in cases 3) and 4) of Mitchell's classification, there is a normal subgroup of index 2 of G ℓ such that and the subgroup M ℓ is reducible (not necessarily over F ℓ ). In fact, a hyperbolic (elliptic) congruence is composed of all lines meeting two given skew lines in the projective three dimensional space over F ℓ defined over F ℓ (F ℓ 2 , respectively), called the axes of the congruence (see [Hi 85]). The stabilizer of such congruences consists of those transformations that fix or interchange the two axes, and it contains the normal reducible index two subgroup of those transformations that fix both axes. From the description ofρ ℓ | I ℓ given in section 2.2 we see that if ℓ > 3, it is contained in M ℓ . Therefore, if we take the quotient G ℓ /M ℓ we obtain a representation G Q → C 2 whose kernel is a quadratic field unramified outside N. Then, there is a quadratic character φ : (Z/cZ) * → C 2 with: φ(p) = −1 ⇒ρ ℓ (Frob p) is of the form: Therefore, trace(ρ ℓ (Frob p)) = 0 , i.e., Considering all quadratic characters ramifying only at the primes in N we detect the primes ℓ falling in this case. Once again, from theorem 1.1, it follows that this set is finite (of course, this fact strongly depends on the assumption End(A) = Z). Stabilizer of a quadric This case can be treated exactly as the one above: assuming again absolute irreducibility of the image G ℓ , it contains a normal subgroup of index 2, and we obtain a quadratic character unramified outside N verifying formula (3.9). In fact, in this caseρ ℓ is the tensor product of two irreducible 2-dimensional Galois representations (see [Hi 85], page 28), one of them dihedral (this is the necessary and sufficient condition for the tensor product to be symplectic, see [B-R 89], page 51), so the matrices in G ℓ are of the form: depending on the value of the quadratic character φ. Stabilizer of a twisted cubic This case is incompatible with the description ofρ ℓ | I ℓ given in section 2.2. In fact, in this case all upper-triangular matrices are of the form (see [Hi 85], page 233): In no case is the subgroup of G ℓ given byρ ℓ | I ℓ of this form. Exceptional cases The cases already studied cover all possibilities in the classification except the exceptional groups, i.e., cases 7) and 8). In these cases, comparing the exceptional group H ⊆ PGSp(4, F ℓ ) (its order and structure) with the fact that P(G ℓ ) contains the image of P(ρ ℓ | I ℓ ) described in section 2.2, we end up with the only possibilities (ℓ > 3): For these two primes, as for any prime we suspect of satisfying G ℓ = GSp(4, F ℓ ), we compute several characteristic polynomials P ol p (x) mod ℓ. At the end, either we prove that it must be G ℓ = GSp(4, F ℓ ) (because the orders of the roots of the computed polynomials do not give any other option) or we reinforce our suspicion that ℓ is exceptional. Conclusion Having gone through all cases in the classification (the stabilizer of a parabolic congruence is reducible, it has an invariant line of the complex, cf. [Mi 14]) we conclude that for all primes ℓ except those whose image, according to our algorithm, may fall in a proper subgroup (according to theorem 1.1, only finitely many) the image of P(ρ ℓ ) is PGSp(4, F ℓ ). 
From this it easily follows that if ℓ is not one of the finitely many exceptional primes we have G ℓ = GSp(4, F ℓ ) and applying a lemma of [Se 86] (see also [Se 68]) we obtain G ℓ ∞ = GSp(4, Z ℓ ). Recall that at one step we have assumed the veracity of Serre's conjecture (3.2.4 ? ). An example We have applied the algorithm to the example given by the jacobian of the genus 2 curve given by the equation: The algorithm of Q. Liu computes the prime-to-2 part of the conductor. From this computation and the bound of the conductor in terms of the discriminant of an integral equation (cf. [Liu 94]) we obtain: c | 2 12 · 23 · 5. We exclude a priori the primes dividing the conductor: 2, 5 and 23. We sketch some of the computations performed: • Reducible cases with 1-dimensional constituent or two related 2-dimensional constituents: The maximal possible value of the conductor of ε is d = 64. We compute the characteristic polynomials of ρ ℓ (Frob p) for the primes p = 229, 257, 641, 769 and applying the algorithm (equations (3.2), (3.4) and (3.6)) we easily check that no prime ℓ > 3 falls in these cases. Remark: The characteristic polynomials used at this and the remaining steps can be found in section 6. Then we compute, for each t ∈ S and each Hecke eigenform f ∈ S 2 (t), the characteristic polynomial P ol f,p (x) for p = 3, 7, 11, 13, 17, 19 with the algorithm implemented by W. Stein ([St]). Then, comparing these polynomials with the characteristic polynomials of ρ ℓ (Frob p) as in formula (3.8) we see that no prime ℓ > 3 falls in this case. • Cases "governed" by a quadratic character: We have to consider all possible quadratic characters φ unramified outside c (there are 15) and for each of them take a couple of primes p with φ(p) = −1 and a p = 0. Applying the algorithm (formula (3.9)) we see that no prime ℓ > 3 falls in these cases. At this step we have used the values a p for the primes p = 3, 7, 13, 97, 113, 569, 769. • Exceptional cases: We compute the reduction of a few characteristic polynomials modulo 7 and we find elements whose order (in PGSp(4, F 7 )) does not correspond to the structure of any of the exceptional groups. From all the above computations we conclude: Theorem 4.1 Let A be the jacobian of the genus 2 curve: Let G ℓ ∞ be the image of ρ A,ℓ , the Galois representation on A[ℓ ∞ ](Q), whose conductor divides 2 12 · 5 · 23. Then, assuming Serre's conjecture (3.2.4 ? ) it holds: for every ℓ > 5, ℓ = 23. Remark: We are not claiming that the image is not maximal for any of the four excluded primes. The case of semiabelian reduction For certain genus 2 curves one can prove that the image is large for an infinite set of primes by using the following results of Le Duff [LeD 98]: Proposition 5.1 Let A be an abelian surface defined over Q. Suppose that for a prime p of bad reduction of A,à 0 p (the connected component of 0 in the special fiber of the Nron Model of A at p) is an extension of an elliptic curve by a torus. Then, for every prime ℓ = p with ℓ ∤ Φ(p) (number of connected components ofà p ), G ℓ contains a transvection. Recall that a transvection is an element u such that Image(u − 1) has dimension 1. Proposition 5.2 ([LeD 98]) If G ⊂ Sp(4, F ℓ ) is a proper maximal subgroup containing a transvection, all its elements have reducible (over F ℓ ) characteristic polynomial. Therefore, a transvection together with a matrix with irreducible characteristic polynomial generate Sp(4, F ℓ ). 
Remark: We can also find in [Mi 14] the list of maximal subgroups of PSp(4, F ℓ ) containing central elations, and a central elation is the image in PSp(4, F ℓ ) of a transvection in Sp(4, F ℓ ). These groups correspond to cases 1) and 3) in section 2.1 or to a group having an invariant line of the complex, defined over F ℓ . Recall that P ol q (x) denotes the characteristic polynomial of ρ ℓ (Frob q) for any prime q of good reduction for the abelian surface A and ℓ = p. From the two previous results it follows: Theorem 5.3 (Le Duff ) Let p be a bad reduction prime verifying the condition of proposition 5.1 and q a prime with P ol q (x) irreducible, then for every ℓ ∤ 2pqΦ(p) such that P ol q (x) is irreducible modulo ℓ, G ℓ = GSp(4, F ℓ ). If ∆ q is the discriminant of P ol q (x) and ∆ Qq the discriminant of Q q (x) := x 2 − a q x + b q − 2q the irreducibility condition reads: EXAMPLE (Le Duff): Take the genus 2 curve: A 2 = J(C 2 ) has good reduction outside 2, 19, 151. For p = 19, 151 the condition in proposition 5.1 is satisfied with Φ(p) = 1. Take q = 3, P ol 3 (x) is irreducible and theorem 5.3 gives: G ℓ = GSp(4, F ℓ ) for every ℓ > 3 with ( 61 ℓ ) = −1 and ( 5 ℓ ) = −1. Remark: Of course, considering more irreducible characteristic polynomials one can obtain the same result for other primes. In particular, G ℓ = GSp(4, F ℓ ) for ℓ = 19, 151 (cf. [LeD 98]). Remark: The example in the previous section also verifies Le Duff's condition. Let us apply our method to this example. The invariants are: c = cond(A 2 ) | 2 8 · 19 · 151 (computed with Liu's algorithm), then cond(ε) | d = 16; and the set S = {256, 604, 608}. In this example, we only have to worry about those maximal subgroups in Mitchell's classification containing central elations. Therefore, we only have to discard the maximal subgroups considered in sections 3.1, 3.3 and 3.4. • The reducible case with 1-dimensional constituent is easily handled using the characteristic polynomials (see section 6) P ol p (x) for p = 17, 97 and we conclude that no prime ℓ > 2 falls in this case. • Due to the fact that the spaces of modular forms S 2 (t) for t ∈ S are rather large, we decided to save computations and to apply the procedure described in section 3.3, formula (3.8), only to the prime p = 3. After computing all resultants of P ol 3 (x) with all the characteristic polynomials P ol f,3 (x) for f ∈ S 2 (t), t ∈ S, we find the possibly exceptional primes ℓ > 2: 5, 11, 19, 29, 31, 41, 61, 109, 151. Having computed the characteristic polynomials P ol p (x) for p = 11, 41, 79, 101, 199, 211 (see section 6) we checked that for each of the ten possibly exceptional primes ℓ listed above one of these six polynomials is irreducible modulo ℓ. Then, applying theorem 5.3, we conclude that none of these primes is exceptional. Thus, no ℓ > 2 has reducible image. • Cases governed by a quadratic character: we have to consider all possible quadratic characters φ unramified outside c and for each of them take a couple of primes p with φ(p) = −1 and a p = 0. We use the values a p for p = 3, 5, 97, 257 (see section 6) and an application of the algorithm (formula (3.9)) proves that the only possibly exceptional primes ℓ > 2 are: 3, 5, 11, 97, 257. We already mentioned that 3, 5 and 11 are not exceptional. Applying theorem 5.3 again we see that 97 and 257 are also non-exceptional because P ol 11 (x) is irreducible modulo 97 and P ol 281 (x) is irreducible modulo 257. 
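Checking whether a given P ol q (x) remains irreducible modulo ℓ, as required by theorem 5.3, is a finite computation. The brute-force sketch below tests a monic quartic over F ℓ for linear and quadratic factors; it is our own naive check, practical only for moderate ℓ, and the sample polynomial is the corrected P ol 11 for this curve that is quoted a little further on in the text.

```python
# Sketch: brute-force irreducibility test of a monic quartic over F_l.
# A quartic is reducible over F_l iff it has a root in F_l or splits into two
# monic quadratics; we simply try all possibilities (fine for moderate l).
def quartic_irreducible_mod(coeffs, l):
    """coeffs = (c3, c2, c1, c0) for x^4 + c3*x^3 + c2*x^2 + c1*x + c0."""
    c3, c2, c1, c0 = (c % l for c in coeffs)
    value = lambda x: (x**4 + c3 * x**3 + c2 * x**2 + c1 * x + c0) % l
    if any(value(x) == 0 for x in range(l)):         # linear factor
        return False
    for a in range(l):                                # try (x^2+ax+b)(x^2+cx+d)
        for b in range(l):
            c = (c3 - a) % l
            d = (c2 - b - a * c) % l
            if (a * d + b * c) % l == c1 and (b * d) % l == c0:
                return False
    return True

# Corrected Pol_11 for y^2 = x^5 - x + 1 (see the remark below in the text):
pol11 = (7, 31, 77, 121)
print(quartic_irreducible_mod(pol11, 97))   # the text excludes l = 97 via this irreducibility
```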
Then, we conclude: Theorem 5.4 Let A 2 be the jacobian of the genus 2 curve given by the equation y 2 = x 5 − x + 1. Assume Serre's conjecture (3.2.4 ? ) (cf. [Se 87]). Then the images of the Galois representations on the ℓ-division points of A 2 are G ℓ = GSp(4, F ℓ ), for every ℓ > 2. Remark:ρ 2 is also irreducible over F 2 . This irreducibility for all ℓ is equivalent to the fact that A 2 is isolated in its isogeny class in the sense that any abelian variety isogenous to A 2 over Q is isomorphic to A 2 over Q. Unfortunately, this condition of being isolated is not effectively verifiable. Among the subgroups containing central elations, we have used Serre's conjecture only to eliminate the following one: Take q with P ol q (x) irreducible. If ( ∆ Qq ℓ ) = −1 case (*) cannot hold, because the matrices A and B would have their traces in F ℓ 2 F ℓ . This follows from the factorization: Then, again using P ol 3 (x) we prove without using Serre's conjecture the following Theorem 5.5 The images of the Galois representations on the ℓ-division points of A 2 are G ℓ = GSp(4, F ℓ ), for every ℓ > 3 with 5 ℓ = −1. Observe that we have obtained an unconditional result that is stronger than the one in [LeD 98], because it only uses the condition on one of the discriminants (thus, it applies to more primes). We warn the reader that there is a mistake in [LeD 98], pag. 521, the polynomial P ol 11 corresponding to this example is wrongly computed. It should read: x 4 + 7x 3 + 31x 2 + 77x + 121. Unconditional results in the general case We will show now that even in the case that the condition of proposition 5.1 is not verified at any prime, we can obtain similar unconditional results. In an arbitrary example, if we do not use Serre's conjecture, there is another case to consider (in addition to case (*) ): The inclusion of this group in GSp(4, F ℓ ) is given by the map: where Frob is the non-trivial element in Gal(F ℓ 2 /F ℓ ). Two tricks allow us to discard this case: i) Suppose that for a prime q, P ol q (x) decomposes over Q as follows: Then case (**) cannot hold if ℓ ∤ B − A and ℓ = q. EXAMPLE (Smart): The following curve is taken from the list given in [Sm 97] of all genus 2 curves defined over Q with good reduction away from 2: C 3 : y 2 = x(x 4 + 32x 3 + 336x 2 + 1152x − 64), , c | 2 20 (this is the uniform bound for the 2-part of the conductor of abelian surfaces over Q, cf. [B-K 94]). Le Duff's method cannot be applied to this example; the condition of proposition 5.1 is not verified at 2. We eliminate ALL maximal proper subgroups in Mitchell's classification using the characteristic polynomials P ol p (x) for several primes p and cond(ε) | 1024, S = {1024}, with the algorithm described in section 3. To be more precise, the reducible cases treated in section 3.1 and 3.2 are excluded using the polynomials P ol p (x) for p = 3, 17, 19, 31. Assuming Serre's conjecture, the remaining reducible case is excluded using the polynomials P ol p (x) for p = 7, 11, 13. The cases considered in sections 3.4 and 3.5 are excluded using the polynomials P ol p (x) for p = 3, 5. Finally, with the technique described in section 3.7 we check that ℓ = 5, 7 are non-exceptional. All characteristic polynomials used are listed in section 6. After these computations we find no exceptional primes. Then, we conclude Without Serre's conjecture, trick (i) is used to discard case (**). In fact, P ol 5 (x) decomposes as in (i) with A = −2 and B = 0. The same happens to P ol 17 (x). 
To deal with case (*) we check that P ol 3 (x) is irreducible and ∆ Q 3 = 12 (see section 6). We obtain the unconditional result: Theorem 5.7 The images of the Galois representations on the ℓ-division points of A 3 are G ℓ = GSp(4, F ℓ ), for every ℓ > 3 with 3 ℓ = −1. Brumer and Kramer (unpublished) have given examples of jacobians of genus 2 curves with prime conductor. For them our algorithm determines the image with just a few computations. For instance, when applying Serre's conjecture no computation is necessary because we have S = {1} and S 2 (1) = 0. One of these examples is given by the jacobian of the genus two curve: C : y 2 = x(x 2 + 1)(1729x 3 + 45568x 2 + 25088x − 76832). The conductor of J(C) is 709. Remark: All the examples of abelian surfaces considered in this article verify the condition End(A) = Z. This follows in particular from our result on the images of the attached Galois representations (the condition on the endomorphism algebra is also necessary for this result to hold). Computed characteristic polynomials We list all the characteristic polynomials P ol p (x) that have been used in the examples of the abelian surface A in section 4 and the abelian surfaces A 2 and A 3 in section 5. Recall that in any case the polynomial P ol p (x) is of the form x 4 − a p x 3 + b p x 2 − pa p x + p 2 so it is enough to give the values a p , b p .
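Since each P ol p (x) is determined by the pair (a p , b p ), these values can in principle be recovered directly from point counts of the curve over F p and F p² , via #C(F p ) = p + 1 − a p and #C(F p² ) = p² + 1 − a p ² + 2b p . The sketch below does this naively for Le Duff's curve y² = x⁵ − x + 1; the representation of F p² through a chosen non-residue and the brute-force counting are our implementation conveniences, not the method used by the author.

```python
# Sketch: recover (a_p, b_p) in Pol_p(x) = x^4 - a_p x^3 + b_p x^2 - p a_p x + p^2
# from naive point counts of C: y^2 = x^5 - x + 1 over F_p and F_{p^2}, using
# #C(F_p) = p + 1 - a_p  and  #C(F_{p^2}) = p^2 + 1 - a_p^2 + 2 b_p
# (one point at infinity on this odd-degree model).
def affine_points_Fp(p):
    squares = {pow(y, 2, p) for y in range(p)}
    total = 0
    for x in range(p):
        v = (pow(x, 5, p) - x + 1) % p
        total += 1 if v == 0 else (2 if v in squares else 0)
    return total

def affine_points_Fp2(p, nonresidue):
    # F_{p^2} = F_p[t]/(t^2 - nonresidue); elements are pairs (a0, a1) = a0 + a1*t
    q = p * p
    mul = lambda a, b: ((a[0] * b[0] + nonresidue * a[1] * b[1]) % p,
                        (a[0] * b[1] + a[1] * b[0]) % p)
    def power(a, e):
        r = (1, 0)
        while e:
            if e & 1:
                r = mul(r, a)
            a = mul(a, a)
            e >>= 1
        return r
    total = 0
    for x0 in range(p):
        for x1 in range(p):
            v = power((x0, x1), 5)
            v = ((v[0] - x0 + 1) % p, (v[1] - x1) % p)   # value of x^5 - x + 1
            if v == (0, 0):
                total += 1
            elif power(v, (q - 1) // 2) == (1, 0):        # Euler criterion in F_{p^2}
                total += 2
    return total

p, nonresidue = 11, 2                 # 2 is a quadratic non-residue mod 11
N1 = affine_points_Fp(p) + 1
N2 = affine_points_Fp2(p, nonresidue) + 1
a_p = p + 1 - N1
b_p = (a_p * a_p + N2 - p * p - 1) // 2
print(a_p, b_p)   # should reproduce the corrected Pol_11 coefficients, i.e. a_p = -7, b_p = 31
```

For larger p one would of course use better point-counting methods, but this is enough to cross-check the pairs (a p , b p ) for small primes.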
2014-10-01T00:00:00.000Z
2001-10-24T00:00:00.000
{ "year": 2001, "sha1": "06d5e635237963b617d869f915140ed3aeb8a71b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math/0110340", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ac94fc12c547f2693a74664541e1cf96e885827d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
20685530
pes2o/s2orc
v3-fos-license
Laparoscopic excision of a ciliated hepatic foregut cyst in a child: A case report and review of the literature Introduction Ciliated hepatic foregut cysts (CHFC) are rare congenital hepatic lesions derived from the embryonic foregut. Because of potential transformation to squamous cell carcinoma in adulthood, the mainstay of therapy is surgical resection. To our knowledge, we report the first case of CHFC in a child that was successfully excised laparoscopically. Presentation of case We report a case of a 4-year-old boy that was diagnosed with an asymptomatic 5-cm liver cyst. After surveillance for 3 years, the cyst grew to 7 cm at which time it was successfully resected laparoscopically. The pathology was consistent with CHFC. Discussion There have been few previous reports of CHFCs in children, all of which described excision via a laparotomy. This is the first case report of laparoscopic resection of CHFC in a child. Conclusion This case report suggests that laparoscopy may be safe and effective for resection of CHFCs with favorable anatomy such as peripheral location and noninvolvement of key vascular and biliary structures. Introduction Ciliated hepatic foregut cysts (CHFC) are rare congenital hepatic lesions derived from the embryonic foregut. Since their description by Friedrich in 1857, there have been approximately 100 reported cases with only 14 cases reported in the pediatric population in English literature [1e6]. CHFCs are often asymptomatic and are frequently discovered incidentally on imaging or during surgical exploration [2]. Due to risk of transformation to squamous cell carcinoma, surgical excision is indicated. To our knowledge, we report the first case of CHFC in a child that was successfully excised laparoscopically. This work has been reported in line with the CARE criteria for case reports [7]. Presentation of case A 4-year-old boy presented with a liver cyst discovered incidentally on an abdominal ultrasound (US) performed for uncomplicated congenital hydronephrosis. He had no history of hepatic disease, biliary obstruction, infection or pain. At this time, magnetic resonance imaging (MRI) showed a 5.3 Â 4.8 Â 2.2 cm minimally complex partially exophytic right hepatic lobe cyst with minimally enhancing septations. Laboratory studies, including a CBC, AFP, PT, hepatic function panel and Echinococcus antibody titers, were within normal limits. Surveillance US was performed annually to monitor the cyst size and characteristics. After 3 years the cyst was noted to increase in size. At this time, an MRI demonstrated an interval increase in size to 7.4 Â 6.1 Â 3.0 cm (Fig. 1). The patient remained asymptomatic. Due to increasing size, the patient underwent laparoscopic resection of the hepatic cyst. An 11-mm port was placed in the umbilicus to accommodate a 10-mm 30-degree laparoscope. The abdomen was insufflated with carbon dioxide to a pressure of 15 mm Hg. Three additional 5-mm ports were placed, one in the right flank, right lower abdomen and left flank. A 5-mm fan liver retractor was used to reflect the right liver edge. The mass was identified involving the right hemidiaphragm, retroperitoneum and Glisson's capsule overlying segment VII of the liver, compressing and abutting the posterior and lateral aspects of segments VI and VII of the liver without direct involvement of the hepatic parenchyma. Due to the peripheral location of the mass, there was no need for intraoperative ultrasound or vascular control. 
To completely remove the mass, dissection was carried into the retroperitoneum and diaphragm. Using a combination of Harmonic Scalpel (Ethicon Endo-Surgery, Inc., Cincinnati, Ohio) and electrocautery, the mass was circumscribed and safely resected (Fig. 2). The integrity of the cyst wall had been violated during the dissection, at which point the cyst was aspirated with minimal intraperitoneal spillage of its contents. The specimen was extracted through the umbilical port. Estimated blood loss was 10 mL. The patient's postoperative course was uneventful. He was discharged home on postoperative day two. At one-month followup, he remained without complications. Microscopic examination was consistent with CHFC without evidence of malignancy. Discussion CHFCs are rare, typically solitary, unilocular congenital lesions of the liver. They are composed of four layers: ciliated pseudostratified mucin-secreting columnar epithelium, subepithelial loose connective tissue, an incomplete layer of smooth muscle fibers, and a fibrous outer rim [8]. The presence of ciliated columnar cells in a liver lesion is pathognomonic for CHFC. These cysts are histologically similar to esophageal and bronchogenic cysts, suggesting a common derivation from the embryonic foregut. Esophageal and bronchogenic cysts can be distinguished from CHFCs by the presence of two distinct smooth muscle layers or mural cartilage, respectively [1,9]. CHFCs are most often located in the left lobe of the liver, usually in segment IVb [2,10]. This may be explained by the fact that the left lobe constitutes the majority of the liver during the 4th to 8th weeks of development. Until the 8th week of development, two pleuroperitoneal canals are patent, possibly trapping abnormal foregut buds. In contrast to CHFCs, simple liver cysts are more commonly located in the right hepatic lobe [9]. As in this case report, the most common presentation of a CHFC is an asymptomatic lesion found incidentally on radiographic imaging [9]. As a result, it is difficult to ascertain the true incidence of CHFCs. If symptoms are present, they have been reported to include abdominal pain, nausea, and vomiting [9]. Patients may also present with obstructive jaundice, portal hypertension, and malignancy [2,6,11e13]. In neonates, CHFCs may be detected on antenatal imaging [3]. The differential diagnosis for CHFC includes other unilocular hepatic cysts such as a simple hepatic cyst, parasitic cyst, epidermoid cyst, pyogenic abscess, intrahepatic choledochal cyst, mesenchymal hamartoma, hypovascular solid tumor, and hepatobiliary cystadenoma or cystadenocarcinoma. Imaging alone is non-diagnostic as CHFC is a histologic diagnosis [14]. Hence, cases remain undiagnosed until after aspiration, biopsy or surgical excision. CHFCs are typically considered benign processes. However, over the last two decades there have been three reports of malignant transformation to squamous cell carcinoma and extensive squamous metaplasia, resulting in a 4e5% rate of malignancy over that time period [11e13, 15,16]. These malignancies were aggressive with reported survival of 2 and 9 months. The presence of dysplasia associated with squamous cell carcinoma may suggest a stepwise progression from non-dysplastic epithelium to dysplasia to carcinoma. The only identified risk factor for malignant transformation in CHFC is size greater than 12 cm [16]. Laboratory markers for malignancy like CA19-9 levels may be misleading, as elevated levels have been associated with benign CHFC [9]. 
Due to the potential for malignant transformation, most authors agree that surgical resection should be the mainstay of therapy. There has been one case report describing US-guided aspiration followed by 1-year-long event-free observation in a 5-year-old child without long-term follow-up [5]. Suggested indications for surgery include increasing size, size greater than 4 cm, clinical symptoms, unexplained abnormal liver function tests or cyst wall abnormalities on imaging [11,14]. As many CHFCs are not diagnosed until postoperative pathologic evaluation, surgical excision may be diagnostic as well as therapeutic. Laparoscopy has been adopted for a wide variety of procedures in pediatric surgery due to improved visualization, decreased postoperative pain, quicker recovery and improved cosmetic result [17]. Several reports of laparoscopic excision of a CHFC have been reported in the adult literature [9,15,16,18]. In the pediatric population, there have been two cases describing laparoscopic approaches that were converted to laparotomy, but, to our knowledge, none had been completed laparoscopically prior to this case report [4,6]. Successful laparoscopic resection of hepatic cysts, other than CHFC, in the pediatric population has been described [19,20]. This minimally invasive approach may also be ideal for CHFCs: the typical small size and anterior location allow for easy access. In addition, the generally benign nature allows for removal from the hepatic bed without concern for adequate margins, and the thick cyst wall facilitates handling with laparoscopic instruments [9]. Relative contraindications for laparoscopy may include lesions with central or posterior location or involvement of major biliary or vascular structures as these would be more challenging to remove [18]. Due to the slow progression to malignancy, the procedure should be performed electively under optimal conditions. In neonates with a prenatal or antenatal diagnosis of CHFC, surgical excision can be postponed. In this case, laparoscopy proved to be safe and effective with a short postoperative recovery. Conclusions Due to the risk of malignancy, CHFCs should be surgically excised for increasing size, clinical symptoms or unexplained abnormal liver function. With proper patient selection, laparoscopic resection can be an advantageous and safe approach to the management of CHFCs in the pediatric population. Ethical approval Written informed consent was sought from the parents of this patient for publication of this case report, but the parents could not be located with extensive effort. The content of this manuscript and images are completely anonymized. Sources of funding None. Author contribution NB, SA, KS and FS were involved with the literature review and gathering and interpreting the patient's clinical information. NB created the figures. All authors were involved with drafting and revising the manuscript. All authors read and approved the final manuscript.
2018-04-03T01:34:46.700Z
2015-11-02T00:00:00.000
{ "year": 2015, "sha1": "3a42f866d47373186914eb138099045454282ed8", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.amsu.2015.10.017", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3a42f866d47373186914eb138099045454282ed8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16922685
pes2o/s2orc
v3-fos-license
Ultra-High Energy Cosmic Rays, Spiral galaxies and Magnetars We measure the correlation between the arrival directions of the highest energy cosmic rays detected by the Pierre Auger Observatory and the positions of the galaxies in the HI Parkes All Sky Survey (HIPASS) catalogue, weighted for their HI flux and the Auger exposure. The use of this absorption-free catalogue, complete also along the galactic plane, allows us to use all the Auger events. The correlation is significant, being 86.2% for the entire sample of HI galaxies, and becoming 99% when considering the galaxies richest in HI content, or 98% with those lying between 40-55 Mpc. We interpret this result as evidence that spiral galaxies are the hosts of the producers of UHECR, and we briefly discuss classical (i.e. energetic and distant) long Gamma Ray Bursts (GRBs), short GRBs, as well as newly born or late flaring magnetars as possible sources of the Auger events. With the caveat that these events are still very few, and that the theoretical uncertainties are conspicuous, we find that newly born magnetars are the best candidates. If so, they could also be associated with sub-energetic, spectrally soft, nearby, long GRBs. We finally discuss why there is a clustering of Auger events in the direction of the radio-galaxy Cen A and an absence of events in the direction of the radio-galaxy M87. INTRODUCTION The origin of ultra-high energy cosmic rays (UHECR), exceeding 10 EeV (1 EeV = 10 18 eV), has been a mystery for decades, but the recent findings of the large area detectors, such as AGASA (Ohoka et al. 1997), HIRes (Abu-Zayyad et al. 2000), and especially the Pierre Auger Southern Observatory (Abraham et al. 2004), began to disclose crucial clues about the association of the highest energy events with cosmic sources. The Auger collaboration (Abraham et al. 2007) found a positive correlation between the arrival directions of UHECR with energies greater than 57 EeV and nearby AGNs (in the optical catalogue of Veron-Cetty & Veron 2006). Although this result has not been confirmed by HIRes (Abbasi et al. 2008) and it has been criticised by Gorbunov et al. (2008), it received an important confirmation by George et al. (2008), who considered a complete sample of nearby hard X-ray emitting AGNs detected by the BAT instrument onboard Swift. This sample is much less affected by absorption than any optical sample, although, to identify an AGN as such, one relies on optical identification. Moreover, George et al. (2008) found a correlation not simply with the AGN locations, but by weighting them for the X-ray flux and the Auger exposure. This association, if real, is surprising, since the large majority of the correlating AGNs are radio-quiet, a class of objects not showing, in their electromagnetic spectrum, any sign of nonthermal high energy emission: no radio-quiet AGN was detected by the EGRET instrument onboard the Compton Gamma Ray Observatory (Hartman et al. 1999). Therefore they must accelerate particles (protons, nuclei, and presumably their accompanying electrons) to ultra-high energies without any noticeable radiative emission from these very same particles. Radio-loud AGNs, instead, together with Gamma Ray Bursts (of both the long and short category) do show high energy non-thermal emission, and have long been considered better candidates as UHECR sources (Vietri 1995; Waxman 1995; Milgrom & Usov 1995; Wang, Razzaque & Mészáros 2008; Murase et al.
2008; Torres & Anchordoqui 2004 and Dermer 2007 for reviews, and Nagar & Matulich 2008 and Moskalenko et al. 2008 for the possible association of the Auger events with radio-loud AGNs). Note also that some short GRBs could be due to the giant flares of highly magnetised neutron stars ("magnetars", as the 27 Dec 2004 event from 1806-20; Borkowski et al. 2004; Hurley et al. 2005; Terasawa et al. 2005), and that, at birth, a rapidly spinning magnetar can be much more energetic than when, later, it produces giant flares (Arons 2003). The possibility that GRBs and magnetars are the sites of production of UHECR would imply the direct association of these events with (normal) galaxies. In this case the observed association of UHECR with nearby AGNs might then be due to the fact that local AGNs simply trace the distribution of galaxies. The aim of the present paper is to test this possibility by directly correlating the locations of the ultra-high energy Auger events with a well defined, complete, and possibly absorption-free sample of galaxies. For this purpose we use the sample of H I emitting galaxies, compiled using the Parkes 64-m radio telescope (Barnes et al. 2001; Staveley-Smith et al. 1996), which is conveniently located in the southern hemisphere, as is the Auger observatory. The entire sample covers the portion of the sky visible to Auger, making it possible to use, for the correlation analysis, all the 27 UHECR events with energies larger than 57 EeV detected by Auger, without excluding the galactic plane, as is instead necessary when dealing with AGNs or optically selected galaxies. Note that the presence of neutral hydrogen strongly favours spirals (or, more generally, gas-rich galaxies) with respect to elliptical galaxies. UHECR events The Auger Observatory (Abraham et al. 2004, 2008), operating in Argentina since 2004, is located at latitude −35.2 • and has a maximum zenith angle acceptance of 60 • . The relative exposure is independent of the energy of the detected events and is nearly uniform in right ascension. The dependence on declination is given by Sommers (2001). The Observatory can detect Cosmic Rays from sources with declination δ < 24.8 • . The available Auger list of UHECR events (Abraham et al. 2008) comprises 27 events with energies in excess of 5.7×10 19 eV from an integrated exposure of 9000 km 2 sr yr. The event arrival directions are determined with an angular resolution of better than 1 • . However, magnetic fields of unknown strength will deflect charged particles on their trajectories through space. The advantage of studying the highest energy events is that this deflection is minimised, but it can still be up to ∼ 10 • in the Galactic field. HIPASS catalogue We compare the arrival directions of Auger UHECRs with the locations of sources of the H I Parkes All-Sky Survey (HIPASS - Meyer et al. 2004). This is a blind survey of sources in H I covering the full southern sky at δ < 25 • , which is the same sky area accessed by the Auger Observatory. The full catalogue is composed of a list of 4315 sources at δ < 2 • (HICAT - Meyer et al. 2004; Zwaan et al. 2004) and of its extension to the northern sky up to δ = 25 • (NHICAT - Wong et al. 2006), which includes 1002 sources. All sources are shown in Fig. 1 with the 27 UHECRs detected by Auger. The HICAT and NHICAT have different levels of completeness.
To have a catalogue complete in flux at the 95% level, we cut the HICAT at Sint > 7.4 Jy km s −1 and the NHICAT at Sint > 15 Jy km s −1 as discussed in Zwaan et al. (2004) and Wong et al. (2006). Sint represents the total H I line flux. For the purposes of this paper we also considered the H I sources within 100 Mpc which is the maximum distance at which UHECRs of E > 57 EeV can survive the GZK suppression effect (see e.g. Harari et al. 2006). We call this sample 95HIPASS: it contains 2414 sources from the HI-CAT and 290 sources from the NHICAT for a total of 2704 sources and covers the entire sky at δ < 25 • . We will also consider the southern sky sample alone which is more complete and can be cut at 99% completeness for Sint > 9.4 Jy km s −1 (also by considering sources at <100 Mpc). This sample contains 1946 sources and is called 99HICAT. ANALYSIS To quantify the possible correlation between UHECR Auger events and the distribution of H I local galaxies we use the method adopted by George et al. (2008). In order to quantify the probability that two sets of sources are drawn from the same parent population of objects we perform the two-dimensional generalisation of the Kolmogorov-Smirnov (K-S) test (Peacock 1983) proposed by Fasano & Franceschini (1987). In our case the test is used to compare two data samples, i.e. the UHECR and the H I galaxies. This test can then measure either if UHECRs have a galaxy counterpart, and, viceversa, if a concentration of galaxies has an UHECR counterpart. The test relies on the statistic D, also used for the unidimensional K-S test, which repre-sents the maximum difference between the cumulative distributions of the two data samples. For each UHECR data point j we compute a set of four numbers dj,i (i= [1,4]) defined as the difference of the relative fraction of UHECR and H I galaxies found in the four natural quadrants defined around point j. Hence, D = max(dj,i) for all the data points considered. Defining Zn = D √ n, the strength of the correlation between two catalogues is the integral probability distribution P (D √ n > observed), where n = N1N2/(N1 +N2), and N1 and N2 are the number of data points in the two sets. This measurement can be used to determine the similarity of sets of positions on the sky. The probability can be computed analytically for large data sets (n >80 - Fasano & Franceschini 1987). In our case, having only 27 UHECR, we have to rely on Monte Carlo simulations. We generate a large set of random UHECR events according to the relative Auger exposure. For each synthetic UHECR sample we compute Zn by correlating it with the catalogue of H I galaxies. The probability of the observed Zn is given by the number of times we find a value of Zn larger than the observed one. This is the probability that the correlation between the (real) UHECR sample and the H I galaxies is not by chance. Large (low) values of the probability indicate a good (poor) correlation between the Auger UHECRs and the given H I galaxy sample. As noted by George et al. (2008), the two-dimensional K-S test can be performed with the number of data points or with the flux of the sources in the comparison sample. In our case D represents the maximum difference between the number of UHE-CRs and that of the sum of the galaxies weighted for their flux and for the the relative Auger exposure. The advantage of using the weighted flux of the sources is that it accounts for their distance. George et al. 
George et al. (2008) found that the UHECRs are more correlated with the weighted flux of Swift AGN than with their position. In Fig. 2 we show the map of the flux of the HIPASS catalogue weighted for the Auger relative exposure.

RESULTS

We found that with the 95HIPASS catalogue (2704 H I sources complete in flux at 95%) the probability that UHECRs are correlated with H I galaxies is 71.6% when using the weighted flux of the H I sources. Considering the more complete 99HICAT (1946 H I sources complete in flux at 99%) distributed within 100 Mpc and the 25 UHECRs distributed in the same sky region, we find a larger flux-weighted probability of 87.8%. This probability is slightly smaller than found with local AGN by George et al. (2008). However, having a large sample of H I galaxies, we can study whether the correlation probability changes when considering different subsamples of galaxies selected according to their distance or luminosity. We have considered 4 bins of distance with an equal number of sources (∼500) per bin. The correlation probability shows a maximum of 95% (97.8% for the 99HICAT) for sources distributed between 37.8 and 55 Mpc. We show these results in Fig. 3 (open circles and stars in the bottom panel). Similarly, we defined four equally populated luminosity bins, or, equivalently, four bins of H I mass content, since we can use M/M_sun = 2.36 × 10^5 D^2_Mpc S_int to estimate the H I mass (here S_int is measured in Jy km/s). We find that the probability (left panel in Fig. 3) is maximised by the most H I luminous or massive (in H I) sources (98% and 99% for the 95HIPASS and 99HICAT samples, respectively, for M > 1.1 × 10^10 M_sun). Selecting those H I galaxies located within two 20° × 20° boxes centred on the radio-galaxies Cen A and M87 (green boxes in Fig. 1), we can show where they lie in the luminosity-distance plane in Fig. 3 (orange and green dots, respectively). While there is no clustering of points at the distances of Cen A and Virgo, we can see that H I galaxies in the direction of Cen A do cluster at distances of 40-50 Mpc, where the Centaurus cluster is. This could explain why some UHECR events appear to be associated with the radio-galaxy Cen A, and none with M87: beyond Cen A there is the Centaurus cluster, richer in H I emitting spirals than the Virgo cluster. The ratio of the integrated H I fluxes from the two 20° × 20° boxes (Virgo/Cen A) is 5.9. To this, a further factor of ∼3 must be applied to account for the lower Auger exposure in the direction of Virgo. The sample has too few galaxies beyond 100 Mpc to test the GZK effect (which would be revealed by finding no correlation for these galaxies).

DISCUSSION

The 27 Auger events above 57 EeV, with a total exposure of 9000 km^2 sr yr, correspond to an integrated flux in CGS units given by Eq. 1. This flux is smaller than the electromagnetic flux that we receive from nearby radio-quiet AGNs in hard X-rays. We now compare this flux with the expected flux of other candidate sources. We will consider flaring or bursting sources, that is impulsive events; but the spreading of the arrival times of UHECRs from a source located at a distance D, Δt ∼ Dθ^2/2c, due to even tiny magnetic deflections, ensures that we can treat all candidate sources as continuous. We will estimate the predicted flux in two different ways. First, assume that a class of sources is characterised by a pulse of emission of UHECRs, of average total energy ⟨E⟩.
Assume also that these events occur at a rate R per galaxy, per year, and consider those events occurring within the GZK radius D_c. This gives a first estimate of the flux, in which 3.15 × 10^7 is the number of seconds in one year and N_g(D < D_c) is the number of galaxies of L* luminosity within D_c. The average distance of the sources is aD_c (a = 3/4 for sources homogeneously distributed). Setting the mean local galaxy density n_g = N_g/(4πD_c^3/3) = 10^-2 n_g,-2 Mpc^-3, this becomes Eq. 3, where D_c = 100 D_c,100 Mpc. The second estimate of the predicted UHECR flux uses the electromagnetic flux as a proxy. Assume that we detect, for a typical member of a class of sources, an average fluence ⟨F⟩, and that there are N events per year. If a fraction η of these events comes from sources within D_c, we obtain a second flux estimate. This estimate is more appropriate when dealing with sources, such as long and short GRBs, whose fluences and occurrences are known, while Eq. 3 is more appropriate when dealing with possible sources of unknown electromagnetic output, but predicted energetics and rates, such as newborn magnetars (Arons 2003) or giant flares from old magnetars.

Figure 3. Top left panel: the H I mass (in solar masses) of our galaxies as a function of their distance. The H I mass is proportional to the H I luminosity, and is found using M = 2.36 × 10^5 D^2_Mpc S_int, where S_int is the integrated flux in Jy km/s. Black empty circles are those galaxies forming a complete, flux-limited sample. Orange and green filled circles are galaxies in the 20° × 20° boxes centred on the positions of the radio-galaxies Cen A and M87, respectively. The distances of these two radio-galaxies are marked by an arrow. At the distance of Cen A there is almost no H I emitting galaxy, and no concentration is seen at the distance of M87 (and the Virgo cluster). The H I galaxies lying in the same region of the sky as Cen A show a concentration at distances of 40-50 Mpc, where the Centaurus cluster of galaxies is. No concentration in distance is seen for H I galaxies in the direction of Virgo. The bottom left panel shows the significance of the 2D K-S test using galaxies in different bins of distance (circles: South sample HICAT; stars: South+North sample HICAT+NHICAT). The top right panel shows the significance of the correlations for different bins of H I content (or, equivalently, for different luminosity bins).

Let us consider the above classes of sources in turn, starting from short GRBs. In the BATSE catalog (cossc.gsfc.nasa.gov/docs/cgro/batse/BATSE Ctlg/flux.html) we have 490 short GRBs of total fluence 5.5 × 10^-4 erg cm^-2 in 9 years of operation. Tanvir et al. (2005) correlated these short GRBs with local optically selected galaxies, finding that a fraction between 5 and 25% of BATSE short GRBs might be nearby, i.e. at z < 0.025, corresponding to 109 Mpc. Considering that BATSE saw half of the sky and setting η = 0.1, we have an average flux of 3.9 × 10^-13 erg cm^-2 s^-1. Then, if the UHECR flux is similar to the electromagnetic one, short GRBs do not match the required flux. Classical long GRBs (namely, energetic GRBs at z ≳ 1) in the BATSE sample have a total fluence of 0.024 erg cm^-2 (for the listed 1490 long GRBs in the BATSE catalog), corresponding to an average (all sky) flux of 1.7 × 10^-10 erg cm^-2 s^-1, larger than the one given by Eq. 1. However, for long BATSE GRBs, η must be much smaller than 0.1, as directly suggested by the paucity of nearby events, and by the lack of correlation with nearby galaxies and clusters (Ghirlanda et al. 2006).
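The fluence-to-flux conversions quoted in the previous paragraph can be reproduced with a few lines of Python. This is our own check of the stated numbers, not code from the paper; the half-sky correction for BATSE's coverage is our reading of the text rather than an explicit formula given there.

```python
# Reproduce the average-flux numbers quoted above for BATSE short and long GRBs.
SEC_PER_YEAR = 3.15e7   # seconds in one year (value used in the text)
T_BATSE_YR = 9.0        # years of BATSE operation
SKY_FRACTION = 0.5      # BATSE saw about half of the sky

def mean_flux(total_fluence_erg_cm2, eta=1.0):
    """Average all-sky flux from a total fluence accumulated over the BATSE
    lifetime, keeping only the fraction eta of events assumed to be nearby."""
    return eta * total_fluence_erg_cm2 / (T_BATSE_YR * SEC_PER_YEAR) / SKY_FRACTION

# Short GRBs: total fluence 5.5e-4 erg/cm^2, nearby fraction eta = 0.1
print(mean_flux(5.5e-4, eta=0.1))   # ~3.9e-13 erg cm^-2 s^-1
# Long GRBs: total fluence 0.024 erg/cm^2, all events kept (eta = 1)
print(mean_flux(0.024))             # ~1.7e-10 erg cm^-2 s^-1
```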
While we cannot dismiss them as sources of UHECRs, it seems likely that classical BATSE bursts are too distant (but see below). Consider now giant flares from relatively "old" magnetars. The giant flare from SGR 1806-20 of Dec 27, 2004 emitted an energy E ∼ 10^46 erg in less than a second. The radio afterglow convincingly demonstrated the formation of an (at least mildly) relativistic fireball. With the current hard X-ray instruments, such flares can be detected out to ∼30-40 Mpc (Hurley et al. 2005). Eq. 3, with D_c,100 = 0.3 and ⟨E⟩ = 10^46 erg, would require R ∼ 1 event per galaxy per year, while an approximate limit to the rate is R < 1/30 yr^-1 (see e.g. Lazzati et al. 2005). Finally, consider rapidly spinning, newly born magnetars, whose rotational energy can exceed 10^52 erg, with a rate of R = 10^-4 events per galaxy per year (Arons 2003). With the estimated galaxy density (n_g,-2 ∼ 0.7 with L ∼ L*; Blanton et al. 2001) there should be about 1 event per year within 100 Mpc. If each magnetar produces 10^50 erg in UHECRs, then this class of sources can be the progenitor of the Auger events (Eq. 3). This is independent of collimation, since the reduced rate of events pointing at us is compensated by an increase of the apparent energetics. But if an equal amount of energy is released in electromagnetic form, at energies detectable by BATSE, then they should be a significant fraction of all BATSE GRBs. Since the birth of a magnetar should be accompanied by a supernova, these events should be associated with long, rather than short, GRBs, for which no associated supernova has been seen. If the radiative output is isotropic, they will all be nearby, sub-energetic GRBs. The required fluence of these sub-energetic nearby long GRBs, to match the UHECR flux, depends on η and on ε_CR, the ratio of the energy emitted in radiation to that emitted in UHECRs. If η ∼ ε_CR ∼ 1, these events constitute a sizeable fraction of the total fluence of all long BATSE GRBs in one year (which is F ∼ 0.024/9 ∼ 2.7 × 10^-3 erg cm^-2). Since we know that the large majority of long GRBs are not nearby, newly born magnetars should not constitute conspicuous events in hard X-rays. Their fluence must be mostly emitted in another energy range. GRB 060218 (Campana et al. 2006), with an energy of a few ×10^49 erg at a distance of 145 Mpc, could be one of these events, and Soderberg et al. (2006) and Toma et al. (2007) already suggested that this GRB was powered by a newly born magnetar. The spectrum of its prompt emission peaked at ∼5 keV, i.e. its fluence in relatively soft X-rays exceeded the 15-150 keV fluence. It was also very long, slowly rising, and would not have been detected by BATSE. Soderberg et al. (2006) pointed out that these sub-energetic long GRBs should not be strongly beamed (not to exceed the rate of SN Ib,c), and should occur at a rate of 230^{+490}_{-190} Gpc^-3 yr^-1, corresponding to R ≈ 10^-5 events per L* galaxy per year, about ten times larger than for classical long GRBs, whose radiation is collimated into 1% of the sky. According to this rate, Eq. 3 would then demand ⟨E⟩ ∼ 6 × 10^50 erg in UHECRs to match the observed flux.

CONCLUSION

We have correlated the cosmic rays with E > 57 EeV detected by the Auger Observatory with a complete, absorption-free sample of H I selected galaxies. We found a significant correlation when correlating with the H I flux of the galaxies of our sample.
When considering the largest 95HIPASS catalogue and the 27 UHECRs we find a weak correlation (probability of 72%), while a larger significance (87.8%) is reached if we consider the most complete 99HICAT sample of galaxies (though with 25 UHECRs). These probabilities are maximised by cutting the H I sample in distance or luminosity bins: the probability becomes 99% when considering the 500 most luminous (or most H I massive) galaxies (1/4 of the sample), and 98% when considering the 500 galaxies lying between 38 and 54 Mpc, where the Centaurus cluster of galaxies is. Thus there is the possibility that the UHECRs coming from the direction of Cen A are instead coming from the more distant Centaurus cluster. Galaxies of this cluster are richer in H I than Virgo galaxies, explaining why there is no UHECR event from the direction of Virgo. This sample is formed by H I emitting galaxies, and is therefore biased against ellipticals. The correlation found with these galaxies does not, per se, disprove the correlation found with AGNs (Abraham et al. 2007, 2008; George et al. 2008), since AGNs also trace the local distribution of matter, as spiral galaxies do. On the other hand, it opens up the possibility, on an equal footing, that UHECRs are produced by GRBs or newly born magnetars (see also Singh et al. 2004, who used AGASA events). With the caveat that it is premature, with so few events and large theoretical uncertainties, to draw strong conclusions, we have pointed out that although classical (i.e. energetic) long GRBs and short GRBs have difficulties in producing the required UHECR flux, newly born magnetars can. If so, the corresponding events could also appear as a subclass of long GRBs, possibly sub-energetic and relatively nearby, powered by rapidly spinning, newborn magnetars. The future increased statistics of UHECR arrival directions will help to discriminate among the different proposed progenitors, especially if an excess of events close to the radio core and/or lobes of Cen A is (or is not) found.
2008-08-17T13:40:34.000Z
2008-06-14T00:00:00.000
{ "year": 2008, "sha1": "23636b4fdc724c3c1763b063f162ff2d8f085379", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0806.2393", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "23636b4fdc724c3c1763b063f162ff2d8f085379", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
238755347
pes2o/s2orc
v3-fos-license
Impedance Modeling and Stability Analysis of MMC Flexible DC System at the DC Side

Aiming at the stability of the modular multi-level converter (MMC) on the dc side, this paper adopts the harmonic linearization method to establish the impedance model of the MMC dc side. The method ignores the higher-order components of voltage and current while considering the influence of the control loops, so as to guarantee high accuracy. The high-frequency band of the impedance presents inductive characteristics, while the mid- and low-frequency bands are related to the control loops. The parameters of the current loop have a greater influence on the impedance characteristics of the mid-frequency band, while the voltage loop parameters have a greater influence on the low-frequency band. For the back-to-back transmission system based on MMC, the voltage at the dc side shows obvious oscillation when the impedances are mismatched or the stability margin is small. The risk of dc-side voltage instability can be eliminated, and the phase angle margin improved, by optimizing the controller parameters.

Introduction

With the rapid development of power electronics technology, more and more renewable energy and dc equipment are interfaced with the power grid. The concept of the dc grid has increasingly become the preferred solution for power distribution and transmission. As the normal interconnection between the ac grid and the dc grid, the modular multi-level converter (MMC) has the advantages of high power quality, easy expansion, and capability of fault ride-through, which can effectively increase the reliability and flexibility of the power grid [1]. Nowadays, with a higher proportion of renewable energy in the power system, the connection between renewable energy and the ac grid may lead to oscillatory stability problems, such as the resonance problems of MMC-based high-voltage dc transmission in Germany and China [2][3]. For this problem, the impedance-based method has been an effective way to assess the stability of the system, and the impedance of the MMC must be obtained first. For the MMC, as the new generation of voltage source converter, impedance modeling and stability analysis at the ac side are already quite fruitful [4][5]. However, with more MMC-based projects, including multi-terminal dc grids and dc distribution networks, put into operation [6], the dc-side grid connection of the MMC also faces stability problems. Reference [7] built both ac and dc impedance models of the MMC by analyzing its operation principle, but it did not consider closed-loop control. Reference [8] gave the small-signal dc impedance model of the MMC with the consideration of closed-loop control, while the control strategy in [9] is different from the classical control strategy in the dq coordinate system and has no generalization. Reference [10] built the dc model of the MMC also based on the harmonic linearization method, but the stability of MMC interconnection needs further analysis. There are steady-state harmonics at multiples of the fundamental frequency in the bridge arm currents and capacitor voltages during MMC operation, which not only couple with each other but also couple with the perturbation, so the dynamic characteristics are quite complicated. The harmonic linearization method can accurately reflect the relationship between the harmonics, and the obtained frequency-domain variables can be easily converted between the dq rotating coordinate system and the abc three-phase stationary coordinate system [11].
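The dq-abc conversion mentioned above is, in the time domain, the standard Park transformation; in the frequency domain it simply shifts sidebands by ±f1. The sketch below is our own illustration of that time-domain transformation, not part of this paper, and the amplitude-invariant convention is an assumption.

```python
import numpy as np

def abc_to_dq(x_a, x_b, x_c, theta):
    """Amplitude-invariant Park transformation of a three-phase quantity
    into the rotating dq frame; theta is the rotating-frame angle in radians."""
    d = (2.0 / 3.0) * (x_a * np.cos(theta)
                       + x_b * np.cos(theta - 2.0 * np.pi / 3.0)
                       + x_c * np.cos(theta + 2.0 * np.pi / 3.0))
    q = -(2.0 / 3.0) * (x_a * np.sin(theta)
                        + x_b * np.sin(theta - 2.0 * np.pi / 3.0)
                        + x_c * np.sin(theta + 2.0 * np.pi / 3.0))
    return d, q

def dq_to_abc(d, q, theta):
    """Inverse Park transformation back to the stationary abc frame."""
    x_a = d * np.cos(theta) - q * np.sin(theta)
    x_b = d * np.cos(theta - 2.0 * np.pi / 3.0) - q * np.sin(theta - 2.0 * np.pi / 3.0)
    x_c = d * np.cos(theta + 2.0 * np.pi / 3.0) - q * np.sin(theta + 2.0 * np.pi / 3.0)
    return x_a, x_b, x_c
```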
For the stability analysis of the MMC at the dc side, the dc impedance model is built in this paper by using the harmonic linearization method. The model considers the harmonic coupling characteristics of the electrical quantities in the MMC and the effects of different control methods. The time-domain and frequency-domain simulation results show that the dc impedance model is of great accuracy. For the unstable working condition of back-to-back MMC power transmission, the controller parameters are optimized based on the proposed model to eliminate the phenomenon of dc-side oscillation.

Steady-state harmonics analysis

The basic topology of the MMC is shown in Fig. 1; v_ga, v_gb, v_gc are the ac voltages of the MMC and v_dc is the dc voltage of the MMC. Every arm of the MMC contains N series submodules (SM) and an arm inductor. The arm inductor is used to restrict the circulating current of the arm. The forward direction of current is shown in Fig. 1: i_au stands for the current of the upper arm of phase a and i_al stands for the current of the lower arm of phase a. Since the structure of the MMC is highly symmetric among the six arms of the three phases, analyzing the relationships of the electrical quantities of the upper bridge arm of phase a allows the equations of the remaining bridge arms to be deduced. As shown in Fig. 1, the average model of the upper arm in phase a of the MMC can be derived as Eqs. (1) and (2), where v_au is the sum of the capacitor voltages of all SMs of the upper arm in phase a, m_au is the insertion index of the upper arm in phase a, L is the arm inductance and C is the equivalent capacitance of all SMs of the upper arm. By using the Fourier expansion of the state variables in (1) and (2), the steady-state harmonics at the frequencies -kf_1, ..., -f_1, 0, f_1, ..., kf_1 can be obtained, where f_1 is the fundamental frequency and k is a positive integer. Considering that the proportion of high-frequency harmonics is very small, harmonics above the third multiple of the fundamental frequency are ignored in this paper [4]. For simplicity, the subscript of phase a is omitted in the following equations. The frequency-domain average model of the upper arm in phase a can then be written as Eqs. (3) and (4), whose vectors collect the harmonic components of the arm quantities and whose impedance and admittance matrices are built from the arm inductance and the equivalent SM capacitance. It can be seen that equations (3) and (4) contain vector convolution terms. To simplify the equations, the convolution of vectors can be transformed into the multiplication of matrices. Then equations (3) and (4) can be rewritten as (8) and (9), in which the voltage and current matrices of the upper arm are the convolution matrices assembled from the corresponding harmonic components. At the steady-state working point of the MMC, the dc and fundamental-frequency components of i_u and the dc component of v_u are known. According to the properties of the Fourier transform, the quantities at positive and negative frequencies are complex conjugates. Removing the variables at the frequencies 3f_1, -2f_1, -f_1, 0, f_1, equation (8) can be simplified to (12). In the same way, equation (9) can be simplified by removing the variables at the frequencies 3f_1, -2f_1, -f_1, 0. Combining the simplified (9) and (12), equation (13) is obtained. The steady-state arm current and capacitor voltage at the different frequencies can then be obtained by solving (13).

DC Impedance modeling

Assume that there is a small-signal perturbation at the dc side of the MMC, and that its frequency is f_p.
The interaction between the perturbation and the steady-state harmonics will produce small-signal perturbations at the frequencies f_p - kf_1, ..., f_p - f_1, f_p, f_p + f_1, ..., f_p + kf_1. Considering the symmetrical structure of the MMC, the response to a dc perturbation is the same in the three phases. Analyzed with the small-signal model, equations (8) and (9) can be rewritten in perturbed form, and then the perturbed insertion index m̂_u is analyzed in the small-signal model. For the typical control strategy of dc voltage control, power control, and current control, the corresponding control diagrams are shown in Fig. 2. Power control means controlling the current at the ac side, which is equivalent to ac current closed-loop control. As shown in Fig. 2, the inner current loop controls the phase current of the MMC at the ac side. The phase current of the MMC is the difference between the upper and lower arm currents. This means that only differential-mode components need to be considered for the small-signal perturbation analysis of the current control, from which the corresponding matrix Q can be derived. The outer voltage loop controls the dc voltage of the MMC. The amplitude of the perturbation at f_p is halved after the inverse dq transformation, and the frequency changes to f_p - f_1 and f_p + f_1; the perturbation at frequency f_p - f_1 is negative sequence and the perturbation at frequency f_p + f_1 is positive sequence, which yields the matrix Q for the voltage loop. As shown in Fig. 2, the circulating current loop controls the double-frequency circulating current in the MMC arms, so only the common-mode perturbation of the arm current contributes to the insertion index, and the corresponding matrix can be derived in the same way. Combining (14) with these control-loop relations, the dc impedance model of the MMC is obtained, as given in (22) and (23).

Simulation verification of dc impedance model

To validate the accuracy of the proposed dc impedance model, an average simulation model of the MMC has been built in PLECS. The simulation results are shown in Fig. 3(a) and Fig. 3(b). Responses predicted by the small-signal models are plotted as continuous lines, while responses obtained from circuit simulation are presented by circles at discrete frequency points. As shown in Fig. 3, the theoretical impedance is in great agreement with the simulation results. At frequencies above 200 Hz, the dc impedance of the MMC presents inductive characteristics.

Effects of controller parameters and stability analysis of cascaded system

For MMCs put into use in engineering projects, the electrical parameters and topology are relatively fixed, so the effects of the controller parameters and control methods are discussed in this paper. According to the dc impedance model in (22) and (23), the impedance trends can be obtained by changing the controller parameters, as shown in Fig. 4 and Fig. 5. From Fig. 4(a) and Fig. 5(a), it can be seen that changing the Kp of the controller has little effect on the impedance characteristics, and the impedance only changes in the mid-frequency band. From Fig. 4(b) and Fig. 5(b), it can be seen that changing the Ki of the controller has a great influence on the impedance at low and middle frequencies, which is an important way of impedance optimization. The stability of cascaded converters in a dc system can also be assessed by the impedance-based approach. Fig. 6 shows the small-signal analysis diagram of a back-to-back MMC power transmission system. The voltage-controlled MMC can be represented as the series connection of a voltage source V_v and its output dc impedance Z_v. The power-controlled MMC can be represented by its dc input impedance Z_p. V_l is the voltage of the dc line.
Figure 6. Small-signal diagram of the back-to-back MMC power transmission system.

From Fig. 6, the dc voltage V_l can be expressed in terms of V_v, Z_v, and Z_p. By using the Middlebrook criterion [12][13] for the minor loop gain T_m(s), the phase difference at the frequency where the impedance magnitudes overlap should be less than 180 degrees; the greater the phase margin, the more stable the system. Fig. 7(a) shows the dc impedances of the two connected MMCs. The current controller parameter of the power-controlled MMC is (2+4000/s), and the voltage controller parameter of the voltage-controlled MMC is (0.5+300/s). It can be seen that the impedance gains of the two MMCs overlap at 53 Hz, where the phase difference is 191 degrees, which does not satisfy the requirements of the Nyquist criterion. To improve the phase margin, the current controller and voltage controller parameters are optimized to (3+400/s) and (0.5+30/s) respectively, and the corresponding dc impedances of the two MMCs are shown in Fig. 7(b). To validate the stability analysis above, a time-domain simulation has been made, shown in Fig. 8. The dc voltage waveform is shown in Fig. 8(a): the controller parameters were changed for improvement at 2 s, and the oscillation on the dc line diminished. The Fourier analysis of the dc voltage before and after the parameter improvement is shown in Fig. 8(b) and Fig. 8(c), respectively. It can be seen that before improving the controller parameters the dc voltage oscillates at 53 Hz; after the improvement, the oscillation is cleared. The simulation results in Fig. 8 are in great agreement with the impedance analysis in Fig. 7.

Conclusion

To analyze the dc-side stability of the MMC power transmission system, a dc impedance small-signal model of the MMC is proposed. The impedance characteristics and stability analysis are carried out under the back-to-back power condition. The conclusions are as follows. (1) Considering the steady-state harmonics of the MMC and different control methods, the dc impedance of the MMC is established; a voltage source perturbation is added at the dc side of the MMC to validate the correctness of the impedance model. (2) The dc impedance of the MMC has different characteristics in different frequency bands: it shows inductive characteristics in the high-frequency band, and its impedance characteristics at low frequency are affected greatly by the control methods and controller parameters; the current controller has a great influence on the impedance at middle frequencies and the voltage controller has a great influence on the impedance at low frequencies. (3) Under back-to-back power transmission, the dc voltage of the MMC may have a risk of instability due to the mismatch of the two MMCs; through the improvement of the controller parameters, the phase margin can be raised and the stability enhanced.
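The impedance-overlap check used in the stability analysis above can be illustrated numerically. The sketch below is our own illustration with synthetic impedance data, not the paper's model; it assumes the standard source-load relation in which the minor loop gain is T_m = Z_v/Z_p, finds the magnitude crossover, and reports the phase margin there.

```python
import numpy as np

def phase_margin_at_crossover(freq, z_v, z_p):
    """Find the frequency where |Z_v| and |Z_p| intersect and return the phase
    difference of the minor loop gain T_m = Z_v / Z_p there, as a margin in degrees.
    freq: array of frequencies [Hz]; z_v, z_p: complex impedance arrays."""
    gap = np.abs(z_v) - np.abs(z_p)
    idx = np.where(np.diff(np.sign(gap)))[0]   # sign changes = magnitude crossovers
    if idx.size == 0:
        return None, None                      # no overlap -> no interaction problem
    i = idx[0]
    phase_diff = np.degrees(np.angle(z_v[i]) - np.angle(z_p[i]))
    margin = 180.0 - abs(phase_diff)           # negative margin indicates instability risk
    return freq[i], margin

# toy example with assumed (not measured) impedances:
f = np.linspace(1, 500, 5000)
s = 2j * np.pi * f
z_source = 0.05 + 1e-3 * s     # assumed R-L output impedance of the voltage-controlled side
z_load = -2.0 + 1e-4 * s       # assumed negative-resistance-like input impedance of the load side
print(phase_margin_at_crossover(f, z_source, z_load))
```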
2021-10-14T20:11:41.132Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "97c445721a29bf40012f64626b4b5272c2d44c75", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/2022/1/012010", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "97c445721a29bf40012f64626b4b5272c2d44c75", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
180217
pes2o/s2orc
v3-fos-license
A Survey of RFID Authentication Protocols Based on Hash-Chain Method Security and privacy are the inherent problems in RFID communications. There are several protocols have been proposed to overcome those problems. Hash chain is commonly employed by the protocols to improve security and privacy for RFID authentication. Although the protocols able to provide specific solution for RFID security and privacy problems, they fail to provide integrated solution. This article is a survey to closely observe those protocols in terms of its focus and limitations. Introduction Radio frequency identification or RFID was first used during the Second World War in Identification Friend or Foe systems onboard military aircraft. Soon after, Harry Stockman demonstrated a system energized completely by reflected power. Then in 1960s, the first Electronic Article Surveillance antitheft systems were commercialized. Later, in 1970s, the US Department of Energy investigated the technology's potential to safeguard materials at nuclear weapons sites [17]. Recently, radio frequency identification (RFID) has been regarded as the main driver of the future ubiquitous technology. It is also claimed as the core technology to realize internet of things environment where large amount of items are connected seamlessly anytime and anywhere [2]. RFID offers simplicity for people to object (P2O) and object to object (O2O) communications. It is believed that it will play a significant role for future ubiquitous society [11]. Generally, RFID systems consist of Radio Frequency Identification (RFID) tags and RFID readers. While RF tags operate as transponders, RF readers act as transceivers. In case of a more complex application, a database server is required to store information comes from both transponders and receivers sides [12]. It is assumed that communication between RF tags and server is secure. The process of RFID communication can be described as follows, RFID reader request access to the RFID tag and return the reply to the database server. After identification and authentication on server side, then server will return the information of RFID tag to the reader [12] [13]. Automatic identification is the basic characteristic of RFID. In its simplest form, identification can be binary, e.g., paid or nor paid which is useful for alerting. Therefore, alerting is become the next powerful feature of RFID. Also, RFID enable real time monitoring to a large number of in a short time. In addition, RFID has ability to perform on-chip computation, accordingly it support cryptographic protocol for authentication. In general, RFID has four basic capabilities, identification, alerting, monitoring, and authentication [9]. RFID has become a new and exciting area of technological development, and is receiving increasing amounts of attention. There is tremendous potential for applying it even more widely, and increasing numbers of companies have already started up pilot schemes or successfully used it in real-world environments. Based on the various industry areas that are featured in the reviewed literature, RFID technology is widely used in many areas such as -Animal detection. -Aviation. -Transportation. -Enterprise feedback control. -Fabric and clothing. -Health, -Military, etc. Therefore, RFID related business experiences many significant advantages. 
Das [21] confirms market research report from IDTechEx that the increasing sales of RFID tags for the 60 years up to the beginning of 2006 reached 2.4 billion, which was accounted for more than 600 million tags being sold in 2005. Then in 2006, it was expected that approximately 1.3 billion tags and 500 million RFID smart labels would be needed in a range of areas, such as retailing, logistics, animals and farming, library services, and military equipment. In terms of academic point of view, RFID is regarded as an exciting research area due to its relative uniqueness and exploding growth. RFID research has led to the emergence of a new academic research area that builds on existing research in a host of disciplines, such as electronic engineering, information systems, computer science, environmental science, medical and public health and also business strategic management, and there has been a significant increase in the number of papers on RFID in research journals [20]. As noted by Heinrich [22], RFID is likely to be among the most exciting and fastest-growing technologies in terms of scope of application in the next generation of business intelligence which attract many researchers from different field to collaborate in a specific research. However, to continue the advancement of knowledge in this area, it is important to understand the current status of RFID research and to examine contemporary trends in the research domain. It is vital to determine the principal concerns of current RFID research, whether technological, application related, or security related. An academic review of the literature is essential for appropriately shaping future research. One of a growing area in RFID literature is authentication protocol with hash chain model which will be deeply discussed in this paper. The rest of this paper is organized as follows. In the next section, we will briefly review security and privacy issues of RFID. The application of hash chain in RFID authentication is presented in Sections 3. In Section 4, we propose our comparative study of recent RFID authentication protocols based on hash chain model. And some conclusions will be made in the last section. Security and privacy of RFID RFID technology poses exclusive privacy and security concerns since it is promiscuous the tags themselves typically maintain no history of past readings. As a result, security and privacy issues are considered as the fundamental issue the RFID technology [4] [12]. Security Objectives Although generally it is assumed that communication between RFID tag and the readers is secure, yet since it is basically wireless based communication, a number of security and privacy issues could not be avoided. Fundamental information security objectives, such as confidentiality, integrity, availability, authentication, authorization, nonrepudiation and anonymity are often not achieved unless special security mechanisms are integrated into the system [23]. The privacy aspect has gained special attention for RFID systems. Consumers may carry objects with silently communicating transponders without even realising the existence of the tags. Passive tags usually send their identifier without further security verification when they are powered by electromagnetic waves from a reader. The ID information can also be linked to other identity data and to location information. Consumers might employ a personal reader to identify tags in their environment but the large number of different standards may render this difficult. 
Therefore, user privacy is the main consideration of RFID security. A. Confidentiality The communication between reader and tag is unprotected in most cases. Eavesdroppers may thus listen in if they are in immediate vicinity. Furthermore, the tag's memory can be read if access control is not implemented. B. Integrity With the exception of high-end ISO 14443 systems which use message authentication codes (MACs), the integrity of transmitted information cannot be assured. Checksums (CRCs) are often employed on the communication interface but protect only against random failures. Furthermore, the writable tag memory can be manipulated if access control is not implemented. C. Availability Any RFID system can easily be disturbed by frequency jamming. But, denial-of-service attacks are also feasible on higher communication layers. D. Authenticity The authenticity of a tag is at risk since the unique identifier (UID) of a tag can be spoofed or manipulated. The tags are in general not tamper resistant. RFID Security and Privacy Schema To get a clear understanding on how those security and privacy issues exist within RFID technology, a schema introduced in [17] is presented. Garkenfield, et.al. [17] clearly describe the actual existing security and privacy problems within RFID technology as can be seen in figure 1. Physical attacks, denial of service, man in the middle attack, eavesdropping, traffic analysis, counterfeiting, and tag cloning attack are several security related problems commonly addressed to RFID technology [10] [17]. In terms of privacy, RFID is mainly questioned in tracking and inventorying capabilities. Another privacy concern is authentication problem. Authenticating legal communication between RFID tag and reader is the main question intensive studies in this field [10][12] [13]. This paper surveys some papers that introduce RFID authentications with hash chain method. Hash Chain Model and Its applications in RFID authentication Hash chain is basically a cryptography approach for safeguarding against password eavesdropping which is firstly proposed in [7]. Now, it can be found in other applications such as micropayment systems and RFID authentication due to elegant and versatile low-cost associated to this technique. Lamport [7] describes that a hash chain of length N could be constructed by applying a one-way hash function h(.) recursively to an initial seed value s. The last element hN(s) is also called the tip T of the hash chain. By knowing h N (s), h N-1 (s) can not be generated by those who do not know the value s, however given h N-1 (s), its correctness can be verified using h N (s). This property of hash chains has evolved from the property of one-way hash functions [7]. Then, the application of hash chain model into RFID application is firstly introduced by Ohkubo et.al [8]. After reviewing previous protocols for improving privacy of RFID applications, they suggest five points for an approach to RFID scheme design. -keep complete user privacy. -eliminate the need for extraneous rewrites of the tag information. -minimize the tag cost. -eliminate the need for high power of computing units. -provide forward security. Their protocol for secure RFID privacy protection scheme is described as follows [8]. . Figure 2. Hash chain in RFID application. Hash chain technique is employed to renew the secret information contained in the tag from G to H. The following is a brief description how it works. In the beginning, a tag has initial information s 1 . 
In the ith transaction with the reader, the RFID tag will do two things: 1. Send the answer a_i = G(s_i) to the reader; 2. Renew the secret as s_{i+1} = H(s_i), determined from the previous secret s_i, where H and G are hash functions, as in Figure 1. The reader sends a_i to the back-end database. The back-end database maintains a list of pairs (ID, s_1), where s_1 is the initial secret information and is different for each tag. The back-end database that receives the tag output a_i from the reader calculates a'_i = G(H^i(s_1)) for each s_1 in the list, and checks whether a_i = a'_i. When it finds an a'_i with a'_i = a_i, it returns the ID paired with that chain. This is the basic hash chain method implemented in RFID authentication protocols, which was then followed by many researchers proposing new protocols from different perspectives and with different techniques.

Comparative study of hash chain based RFID authentication protocols

This part discusses several RFID authentication protocols that use hash chains as a method for enhancing the security and privacy of RFID. We introduce the hash chain method of each protocol and review its limitations. Ten RFID authentication protocols with hash chains are compared in this section. As mentioned earlier, Ohkubo et al. [8] proposed a hash-based authentication protocol. The aim of the protocol is to provide better protection of user privacy, with the basic concept of refreshing the identifier of the tag each time it is queried by a reader. The protocol changes RFID identities on each read based on hash chains. The hash chain method is used in the two-way communication of the RFID tag. This protocol does not require a random number generator. However, it is confirmed that this protocol is vulnerable to certain replay attacks. The next protocol, a cryptographically controlled access protocol for RFID tags using hash locks, was proposed by Weis et al. [15]. They argue that although the hash value can be read out by any reader, only authorized ones would be able to look up the tag's key in a database of key-hash pairs. The objective of this protocol is to improve RFID tag security and privacy by using an integrated hash function, where the key can be verified by comparing the key hash with the stored hash value. The drawback of this protocol is that the static hash value would still be traceable. In addition, Molnar et al. [4] proposed a hash-tree based authentication protocol for RFID tags. They exposed privacy issues related to RFID in libraries, described current deployments, and suggested novel architectures for library RFID. This protocol utilizes a dynamic amount of computation per tag, which depends on the number of tags available in the hash tree. However, the protocol is confirmed to have a serious problem, in that another security leak will arise if a tag is lost. In this case, anonymity for the rest of the hash-tree group may be compromised by an attacker. Therefore, the protocol does not provide forward anonymity. The following approach is the hash chain based RFID authentication protocol by Henrici [3]. This protocol only requires a hash function in the tag and data management at the back-end. It offers a high degree of location privacy and is resistant to many forms of attack. Further, only a single message exchange is required, the communications channel need not be reliable, the reader/third party need not be trusted, and no long-term secrets need to be stored in tags.
However, the limitation of this protocol is that it does not guarantee full privacy, since the tag is vulnerable to tracing when the attacker interrupts the authentication protocol mid-way. Therefore, this approach also shares this weakness. Then, Avoine [5] introduced a modification of the authentication protocol. The proposed protocol is aimed at solving the replay attack problem of [8]. He argued that privacy issues cannot be solved without looking at each layer separately, where an RFID system has three layers: application, communication and physical. Yet, it does not consider the issue of availability, and the protocol is vulnerable to attacks where the attacker forces an honest tag to fall out of synchronization with the server so that it can no longer authenticate itself successfully. Similarly, an anonymous RFID protocol is also offered by Dimitriou [19]. The objective of this protocol is to protect forward privacy from cloning and privacy attacks. Mutual authentication is performed within the protocol based on the use of secrets which are shared between tag and database, and refreshed to avoid tag tracing. Yet, the limitation of the protocol is a desynchronization problem which occurs on the database side. This is confirmed to be vulnerable to a man-in-the-middle attack. The following protocol is offered by Rhee et al. [6]. It is called hash-based challenge-response and is aimed at providing a protection mechanism against replay and spoofing attacks. The proposed protocol is based on challenge-response using a one-way hash function and a random number, which is claimed to be suitable for a secure database environment. A hash chain function is used in the protocol to protect the secret key in the form of the ID. The tag does not need to update the secret key, which avoids attacks that interrupt the session. However, this solution does not provide forward secrecy, which means that if a tag can be compromised then the attacker will be able to trace the past communications of the same tag. Lee et al. [18] proposed a new RFID authentication protocol with a hash chain. The objective of this effort is to solve the desynchronization problem by maintaining the previous identification number in the database server. However, since the hashed identification number is always identical, an adversary who actively queries the tag without updating its identity is able to trace the RFID tag. Unfortunately, although it is able to resolve the desynchronization problem, this protocol still suffers from a traceability attack, which becomes a serious limitation of the protocol in solving the privacy problem of RFID. Likewise, an RFID authentication scheme with a hash function and synchronized secret information was introduced by Lee et al. [14]. The protocol is aimed at securing user privacy, including against tag cloning attacks, through an additional hash operation. Unfortunately, this protocol suffers from desynchronization attacks that could be conducted by adversaries. This occurs due to the unavailability of a PRNG in the RFID tag, while the server does not know how many times an RFID tag may not yet have updated its secret information. Finally, Han et al. [13] offer a new kind of mutual authentication protocol to solve some problems of the previous protocols. In their protocol, the authentication mechanism is supported by a monitoring component. The component, which exists in the database server, constantly monitors the synchronized secret information between the RFID tag and reader.
This protocol is argued to provide a more secure communication mechanism, since the communication between tag, reader and database is mutually authenticated and constantly monitored. In addition, the protocol also supports the low-cost non-volatile memory of RFID tags. However, it also has a limitation, since it still needs back-end database support. As can be seen in Table 1, all the protocols studied vary in the way they use the hash chain method, but focus on the same objective, namely providing a better privacy mechanism while maintaining anonymity.

Conclusion

The RFID authentication protocols in this study provide privacy and anonymity. The hash chain method is used in these RFID authentication protocols in various ways as a unique solution for the security and privacy problems of RFID technology. As a result, while problems in particular cases can be addressed, other problems arise. Therefore, it can be concluded that recent RFID authentication protocols with hash chains fail to provide an integrated security and privacy solution for RFID. Based on the findings, we recommend that future studies in this area focus on developing a framework for an RFID authentication model. The framework will be useful for researchers as a foundation for collaborative research towards integrated RFID authentication solutions in the future.
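As a concrete illustration of the basic hash-chain identification scheme reviewed in Section 3 (Ohkubo et al. [8]), a minimal sketch is given below. It is our own toy implementation, not code from any of the surveyed protocols: SHA-256 with domain-separation prefixes stands in for the two hash functions G and H, and the class and identifier names are hypothetical.

```python
import hashlib
import os

def G(x: bytes) -> bytes:            # response hash
    return hashlib.sha256(b"G" + x).digest()

def H(x: bytes) -> bytes:            # secret-update hash
    return hashlib.sha256(b"H" + x).digest()

class Tag:
    def __init__(self, s1: bytes):
        self.s = s1                  # current secret s_i
    def respond(self) -> bytes:
        a = G(self.s)                # a_i = G(s_i) sent to the reader
        self.s = H(self.s)           # s_{i+1} = H(s_i) stored on the tag
        return a

class BackEnd:
    def __init__(self, tags: dict, max_chain: int = 1000):
        self.tags = tags             # {tag_id: initial secret s_1}
        self.max_chain = max_chain
    def identify(self, a: bytes):
        # brute-force search along the hash chain of every registered tag
        for tag_id, s in self.tags.items():
            x = s
            for _ in range(self.max_chain):
                if G(x) == a:
                    return tag_id
                x = H(x)
        return None

# toy run
s1 = os.urandom(16)
tag, server = Tag(s1), BackEnd({"tag-42": s1})
print(server.identify(tag.respond()))   # -> 'tag-42'
print(server.identify(tag.respond()))   # -> 'tag-42' (next link of the chain)
```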
2010-08-14T08:07:07.000Z
2008-11-11T00:00:00.000
{ "year": 2008, "sha1": "0c8481b28707b67c23dc87c9df93f9d3f551bc0e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1008.2452", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1be20c6ddbfa7651a5e8d8d1ac6dedaec00af538", "s2fieldsofstudy": [ "Computer Science", "Mathematics", "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
240070675
pes2o/s2orc
v3-fos-license
Action-angle variables of a binary black-hole with arbitrary eccentricity, spins, and masses at 1.5 post-Newtonian order Accurate and efficient modeling of the dynamics of binary black holes (BBHs) is crucial to their detection through gravitational waves (GWs), with LIGO/Virgo/KAGRA, and LISA in the future. Solving the dynamics of a BBH system with arbitrary parameters without simplifications (like orbit- or precession-averaging) in closed form is one of the most challenging problems for the GW community. One potential approach is using canonical perturbation theory which constructs perturbed action-angle variables from the unperturbed ones of an integrable Hamiltonian system. Having action-angle variables of the integrable 1.5 post-Newtonian (PN) BBH system is therefore imperative. In this paper, we continue the work initiated by two of us in arXiv:2012.06586, where we presented four out of five actions of a BBH system with arbitrary eccentricity, masses, and spins, at 1.5PN order. Here we compute the remaining fifth action using a novel method of extending the phase space by introducing unmeasurable phase space coordinates. We detail how to compute all the frequencies, and sketch how to explicitly transform from the action-angle variables to the usual positions and momenta. This analytically solves the dynamics at 1.5PN. This lays the groundwork to analytically solve the conservative dynamics of the BBH system with arbitrary masses, spins, and eccentricity, at higher PN order, by using canonical perturbation theory. I. INTRODUCTION Laser interferometer detectors have made numerous gravitational wave (GW) detections that have originated from compact binaries made up of black holes (BHs) or neutron stars [1][2][3]. Among these detections, the predominant sources of GWs are from binary black holes (BBHs), whose initial eccentricity is believed to be mostly radiated away by the time they enter the frequency band of the ground-based detectors such as LIGO, Virgo, and KA-GRA. Since the upcoming LISA mission [4,5] will target compact binaries earlier in their inspiral phase compared to the ground based detectors, incorporating eccentricity becomes more relevant. Since the observation time for LISA sources will be much longer, it is imperative to find accurate closed-form solutions to the binary dynamics. This brings us to the question of working out closedform solutions of the dynamics of a generic BBH system, with arbitrary eccentricity, masses, and with both BHs spinning, without special alignment. Many such attempts have been made in the literature [6][7][8][9][10][11][12][13][14], but most (if not all) of them give the solution of the conservative sector of the dynamics under some simplifying conditions such as the quasi-circular limit, equal-mass case, only one or none of the BHs spinning, with orbit-averaging, etc. Only recently, one of us provided a method to find the closed-form solution to a BBH system with arbitrary eccentricity, spins, and masses at 1.5 post-Newtonian (PN) order for the first time [15] (with the 1PN part of the Hamiltonian being omitted, as it is not complicated to handle). The next natural question is: how can one construct the solutions at 2PN, or is it even feasible? This line of questioning led two of us to probe the integrability, and therefore the existence of action-angle variables of the BBH system at 2PN in Ref. 
[16], wherein we found that a BBH system is indeed 2PN integrable when we applied the perturbative version of the Liouville-Arnold (LA) theorem, due to the existence of two new 2PN constants of motion that we discovered. Since integrability precludes chaos (which would obstruct finding closed-form solutions), establishing integrability at 2PN instills hope towards finding a closed-form solutions at this order. A straightforward extension of the methods of Ref. [15] from 1.5PN to 2PN appears too difficult to carry out, if not impossible. Our hope is to use non-degenerate canonical perturbation theory [17,18], which when supplied with 1.5PN action-angle variables, can yield 2PN action-angle variables. If this line of work is to be pursued, the 1.5PN action-angle variables are imperative. The calculation cannot start from a lower PN order because the lower order (1PN) system is degenerate in the action-angles context; this is discussed later. We initiated the actionangle calculation in Ref. [16], where we computed four (out of five) actions. In this paper, we compute the last action variable, and sketch how to explicitly transform from the action-angle variables to the usual positions and momenta. This basically comprises the closed-form solution to the 1.5PN spinning BBH dynamics. The history of action-angle variables literature dates back centuries. The Kepler equation presented in 1609 gives the Newtonian angle variable [17], long before Newton proposed his laws of motion and gravitation. Important contributions were made by Delaunay to the action-angle formalism of the Newtonian two-body system [17] in the nineteenth century. More recently, on the post-Newtonian front, Damour and Deruelle gave the 1PN extension of the angle variable when they worked out the quasi-Keplerian solution to the non-spinning eccen-tric BBH system [19]. Damour, Schäfer and Jaranowski worked out action variables at 2PN and 3PN ignoring the spin effects. Such post-Newtonian calculations make use of the work of Sommerfeld for complex contour integration to evaluate the radial action variable [20]. Finally, Damour gave the requisite number (five) of 1.5PN constants of motion in Ref. [21], which is required for integrability as per the LA theorem. This paper is a natural extension to our earlier work [16]. We compute the remaining fifth action variable using a novel method of extending the phase space by the introduction of new, unmeasurable (or fictitious) phase space variables. We then show how to PN expand the lengthy expression of this 1.5PN exact fifth action and retain the much shorter leading-order contribution. Next we discuss how to compute all the frequencies of the system. Then we give a clear roadmap on how to compute all angle variables of the system implicitly, by expressing the standard phase space variables of the system ( ⃗ R, ⃗ P , ⃗ S 1 , ⃗ S 2 ) as explicit functions of the action-angle variables. Thereafter, we proceed towards constructing solution to the BBH system using action-angle variables at 1.5PN. This action-angle-based solution can be extended to higher PN orders via canonical perturbation theory. Finally, in one of the appendices, we point out a loophole in the definition of PN integrability that we presented in Ref. [16] and also provide a simple fix. We mention here that in a companion paper [22], we implemented our action-angle based solution using Mathematica, and compared it with the corresponding numerical solution. The organization of this paper is as follows. In Sec. 
II, we lay the conceptual foundations, introducing the phase space (symplectic manifold) and the Hamiltonian of the system. This includes introducing important definitions like those of integrability and action-angle variables. In Sec. III, we discuss the idea of extending the phase space by introducing new, unmeasurable phase space variables; they make the computation of the fifth action possible. In the next section, we implement these ideas to actually compute the fifth action in explicit form. Then in Sec. V, we show how to PN expand this fifth action and present its shortened form. In Sec. VI, we finally show how to compute the five frequencies, the angle variables, and construct the action-angle-based solution to the system. Finally, we summarize our work and suggest its future extensions in Sec. VII. As far as appendices are concerned, some lengthy calculations have been pushed to Appendix A, which would have otherwise been a part of Sec. IV. In Appendix B, we prove that our fifth action calculated in the extended phase space is also an action in the standard phase space. Appendix C gives some commonly occurring derivatives that occur in the frequency calculations. Lastly, in Appendix D, we fix a loophole in the definition of PN integrability that we presented in Ref. [16].
Figure 1. Schematic of the BBH system in the center-of-mass (CM) frame; the labels in the figure include the masses m_1, m_2, the spins S_1, S_2 (with S_2z and the azimuthal angle ϕ_2 shown relative to the x-axis), the momenta P_1, P_2, the position vectors R_1, R_2, and the relative separation R ≡ R_1 − R_2.

II. THE SETUP

The paper is a continuation of the research initiated in Ref. [16] and uses the same conventions, which we now briefly describe. For an informal and pedagogical introduction to the mathematical machinery employed in this paper and Ref. [16], the reader is referred to the set of lecture notes at [23]. We will study the BBH system in the PN approximation within the Hamiltonian formalism. The system under consideration is schematically displayed in Fig. 1.
We work in the center-of-mass frame with a relative separation vector ⃗R ≡ ⃗r_1 − ⃗r_2 between the two black holes, and conjugate momentum ⃗P ≡ ⃗p_1 = −⃗p_2, where the labels 1 and 2 indicate the two black holes, with masses m_1 and m_2, respectively. In Ref. [16], ⃗R_{1,2} and ⃗P_{1,2} were used to denote the position and momentum vectors of the two BHs; but here we are reserving these symbols for to-be-introduced unmeasurable, fictitious variables (see Sec. III). The BHs possess spin angular momenta ⃗S_1 and ⃗S_2, which contribute to the total angular momentum ⃗J = ⃗L + ⃗S_1 + ⃗S_2, where ⃗L ≡ ⃗R × ⃗P is the orbital angular momentum of the binary. We will frequently use the effective spin ⃗S_eff ≡ σ_1 ⃗S_1 + σ_2 ⃗S_2, where σ_1 and σ_2 are constants depending only on the masses (see Ref. [16]). The magnitude of any vector will be denoted by the same letter used to denote the vector, but without the arrow. Additionally, S_eff · L and R_a · P_a will stand for ⃗S_eff · ⃗L and ⃗R_a · ⃗P_a, respectively. The Einstein summation convention will be assumed unless stated otherwise.

The 1.5PN Hamiltonian that we will primarily be interested in is given by Eqs. (11), (12), (13) and (14) in Ref. [16], and will be denoted by H. Note that H in this paper is found by dropping the 2PN contribution in the H of Ref. [16]. The only non-vanishing Poisson brackets (PBs) between the phase space variables ⃗R, ⃗P, ⃗S_1, and ⃗S_2 are

{R^i, P_j} = δ^i_j ,    {S_a^i, S_b^j} = δ_{ab} ε_{ijk} S_a^k ,    (4)

and those related to them by antisymmetry,

{g, f} = −{f, g} .    (5)

Here the letters a, b label the two black holes (a, b = 1, 2), and i, j, k are spatial vector indices. The PBs are derivations, so obey the chain rule (with ξ_i's standing for all phase-space variables)

{f(ξ), g} = (∂f/∂ξ_i) {ξ_i, g} .    (6)

Eqs. (4), (5), and (6) enable us to compute the PB between any two functions of the phase-space variables. As usual, the evolution of any phase-space function f is given by ḟ = {f, H}. With this, it can be verified that both the spin magnitudes are constant, Ṡ_a = {S_a, H} = 0. This means that we can specify each spin vector using only two variables: the z component S_a^z and the azimuthal angle ϕ_a of the spin vector. This choice is particularly useful because these two variables act like canonical ones. This is so because Eqs. (4), (5), and (6) imply that

{ϕ_a, S_b^z} = δ_{ab} ,    (7)

with all other brackets among these variables vanishing. This means that there are five pairs of canonically conjugate variables, and a total of ten canonical phase space variables.

From a more mathematical point of view, Hamiltonian dynamics takes place on a symplectic manifold B, which is a smooth manifold equipped with a closed, non-degenerate differential 2-form Ω, the symplectic form. The orbital variables R^i, P_j are canonical variables of the cotangent bundle T*R^3 (a symplectic manifold), while each spin vector S_a^i lives on the surface of a two-sphere (also a symplectic manifold, with symplectic form proportional to the area 2-form). The spin vectors ⃗S_a being on the above spherical symplectic manifolds is consistent with the constancy of the spin magnitudes. The symplectic manifold B which is the total phase space of the system is the Cartesian product of the above symplectic manifolds (T*R^3, and the two 2-spheres). The symplectic form on B is the sum of the symplectic forms from the three manifold factors [16]. In terms of canonically conjugate variables, it takes the Darboux form

Ω = dP_i ∧ dR^i + Σ_a dS_a^z ∧ dϕ_a .    (8)

This description of the phase space manifold using a symplectic geometry point of view makes it clear that each spin has only two degrees of freedom (S_a^z and ϕ_a), rather than three (S_a^x, S_a^y, and S_a^z). Although Ω itself is smooth, notice that this coordinate system is singular at the poles of each spin space.
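As a quick consistency check of this bracket structure, the statement that (ϕ_a, S_a^z) behave as a canonical pair can be verified symbolically. The sketch below is a minimal illustration (not taken from the paper's companion code): it implements the single-spin Lie-Poisson bracket {f, g} = ⃗S · (∇_S f × ∇_S g), which reproduces the {S^i, S^j} = ε_{ijk} S^k structure, and checks that the spin magnitude commutes with everything while {ϕ, S^z} = 1.

```python
import sympy as sp

Sx, Sy, Sz = sp.symbols('S_x S_y S_z', real=True)
S = sp.Matrix([Sx, Sy, Sz])

def pb_spin(f, g):
    # Lie-Poisson bracket for a single spin: {f, g} = S . (grad_S f x grad_S g)
    grad_f = sp.Matrix([sp.diff(f, v) for v in (Sx, Sy, Sz)])
    grad_g = sp.Matrix([sp.diff(g, v) for v in (Sx, Sy, Sz)])
    return sp.simplify(S.dot(grad_f.cross(grad_g)))

phi = sp.atan2(Sy, Sx)        # azimuthal angle of the spin
S2 = Sx**2 + Sy**2 + Sz**2    # squared spin magnitude

print(pb_spin(Sx, Sy))        # S_z : the {S^i, S^j} = eps_{ijk} S^k structure
print(pb_spin(S2, Sz))        # 0   : the spin magnitude is a Casimir of this bracket
print(pb_spin(phi, Sz))       # 1   : (phi, S^z) form a canonical pair
```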
Now we define integrable systems and action-angle variables at the same time, re-presenting the definition given in Ref. [24]. Two quantities f and g are called commuting or "in involution" if {f, g} = 0. Consider a system with Hamiltonian H in 2n canonical phase space variables (⃗P, ⃗Q). This system is integrable if there exists a canonical transformation to coordinates (⃗J, ⃗θ) such that all the actions J_i are mutually commuting, H is a function only of the actions, and all the ⃗P and ⃗Q variables are 2π-periodic functions of the angle variables ⃗θ. The Liouville-Arnold (LA) theorem [16,18,24-26] states that, on a 2n-dimensional symplectic manifold, if ∂_t H = 0 and there are n independent, mutually commuting phase-space functions F_i such that the level sets of these functions form compact and connected manifolds, then the system is integrable, and the above level sets are diffeomorphic to an n-torus. H being one of these F_i's implies that all the F_i's are also constants, since Ḟ_i = {F_i, H} = 0. Hence we call these F_i's the n commuting constants. When Ω is exact, there is a globally well-defined potential one-form Θ (such that Ω = dΘ); in canonical variables it reads

Θ = P_i dQ^i ,    (9)

and the action variables can be computed via [24,25]

J_k = (1/2π) ∮_{C_k} Θ ,    (10)

where C_k is any loop in the kth homotopy class on the n-torus defined by the level sets F_i = const. The above integral is insensitive to the choice of loop within a given homotopy class; see Proposition 11.2 of Ref. [24]. However, the 2-sphere (and therefore our symplectic manifold B) does not admit a global Θ, as mentioned earlier. In such cases, the actions are still well defined up to some global constants, but now using integrals over areas instead of loops; see Ref. [16] for details.

Before ending this section, we briefly introduce the concept of Hamiltonian flows and the associated Hamiltonian vector fields [18]. A quantity f(⃗P, ⃗Q) defines a Hamiltonian vector field ⃗X_f via {·, f} = ∂/∂λ, such that it acts on another function g(⃗P, ⃗Q) as ∂g/∂λ = {g, f}. The collection of the integral curves of this vector field is referred to as the Hamiltonian flow of the field.

III. THE EXTENDED PHASE SPACE: A TOOL TO COMPUTE ACTIONS ON SPHERICAL MANIFOLDS

In this section, there are instances where we first explain some subtle concepts informally, before giving a more mathematically precise statement in the next paragraph. The reader may choose to skip the more advanced wording at the expense of some depth.

A. Motivation behind fictitious variables

In Ref. [16], we evaluated four of the five action integrals for the 1.5PN BBH system. The fifth action computation is a more complicated task, and this leads us to invent certain "fictitious", "unmeasurable" variables, thereby extending the usual standard phase space (SPS) to the extended phase space (EPS). We now turn to explain the motivation behind them, which has two facets. Actions are well defined on exact symplectic manifolds; an exact symplectic manifold admits a global potential one-form Θ with Ω = dΘ. While this is the case for the orbital factor T*R^3, the same is not true for the spin spherical symplectic manifolds, thereby making the SPS non-exact; see Problem 2 of Homework 2 in Ref. [27]. Although the SPS is not exact, the EPS will be. The two spaces will also be found to be equivalent (in a certain sense), which justifies the computation of the action in the EPS, which we can then push forward to the SPS, since every EPS point would map to an SPS point by construction.
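As an aside, the loop-integral definition of the action in Eq. (10) can be illustrated with the simplest possible example, the one-dimensional harmonic oscillator, for which J = E/ω. The following minimal numerical sketch (illustrative only; the frequency and energy values are arbitrary) evaluates the loop integral by direct quadrature:

```python
import numpy as np
from scipy.integrate import quad

# 1D harmonic oscillator H = p^2/2 + omega^2 q^2 / 2 at energy E; arbitrary test values.
omega, E = 1.3, 0.7
q_max = np.sqrt(2 * E) / omega

# J = (1/2pi) \oint p dq = (2/2pi) * int_{-q_max}^{+q_max} sqrt(2E - omega^2 q^2) dq
p = lambda q: np.sqrt(np.maximum(2 * E - (omega * q) ** 2, 0.0))
J = 2 * quad(p, -q_max, q_max)[0] / (2 * np.pi)

print(J, E / omega)   # both ~ 0.5385: the loop integral reproduces J = E/omega
```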
The other more practical problem the EPS cures is that of computation of the action integral in closed form. In the SPS (with variables R i , P i , ϕ a , S z a with a = 1, 2), the action integral is broken down into the orbital and spin sector contributions, Now under the flow of S eff ·L, the above orbital sector integral of Eq. (12) is easy to compute. We state beforehand that the result comes out to be where ∆λ S eff ·L is the flow amount under S eff · L (to be determined). See Eqs. (30)-(33) for the intermediate steps. 2 Now although the orbital sector of the action integral under the S eff · L flow is easy to compute, we don't know how to compute the spin sector integral of Eq. (13). We again state beforehand that writing the orbital angular momentum ⃗ L as a cross product of a position ⃗ R and a momentum ⃗ P was critical to easily evaluating J orb under the S eff · L flow. This is something we can't do with the spin angular momenta ⃗ S a because ⃗ S a are considered to be fundamental coordinates, not written as cross products of some positions and momenta. As we will see, the EPS gets rid of all these problems, thus making the action evaluation tractable. B. Introducing fictitious phase-space variables We refer to the phase space with coordinates ( ⃗ R, ⃗ P , ⃗ S 1 , ⃗ S 2 ) as the SPS, the standard phase space. It is denoted by the letter B. We now invent a new 18dimensional extended phase space (EPS) E = (T * R 3 ) 3 with canonical coordinates R i , P i , R i a , P ai with a = 1, 2, with canonical Poisson bracket algebra Here we use the subscript E to distinguish the Poisson brackets in E from those in B. We call the ⃗ R a , ⃗ P a variables the unmeasurable, fictitious variables. For contrast, we will sometimes refer to the SPS coordinates ⃗ R, ⃗ P , ⃗ S 1 , ⃗ S 2 as the observable coordinates. We also demand that for an SPS point ( ⃗ R, ⃗ P , ⃗ S 1 , ⃗ S 2 ), the corresponding EPS point must satisfy Of course, there are an infinity of EPS points which correspond to the same SPS point. In more advanced language, this is a fiber bundle (with non-compact fibers) with projection In coordinates, this projection takes a point in E and sends it to the point in B where its B coordinates are 2 Eqs. (30)- (33) are written for the EPS, but if we neglect the spin sector terms (like J spin and P ai dR i a /dλ with a = 1, 2), then these are also valid for the SPS. This is depicted in Fig. 2. In this pictorial depiction of Fig. 2 In summary, this extended manifold E, the spins are now seen as cross products of fictitious positions and momenta. Also, now E is exact and admits a global 3 A more sophisticated way to think of this projection is as follows. Think of the three-dimensional spin manifold, with coordinates S i a , as so(3) * , the vector space dual to the Lie algebra so(3) of the rotation group SO(3). The dual of a Lie algebra naturally comes equipped with a Lie-Poisson structure (results developed by Kirillov, Kostant, and Souriau [28]). The usual action of SO(3) on R 3 induces a Hamiltonian action on its cotangent bundle T * R 3 (a Poisson manifold), analogous to our fiber coordinates (R i a , P aj ). From here we can build the dual map T * R 3 → so(3) * , which is the momentum map [28]. Our projection π coincides with the momentum map. It is no longer a product of a Cartesian manifold and two-spheres. All three angular momenta ⃗ L, ⃗ S 1 and ⃗ S 2 stand on equal mathematical footing. C. 
Comparing the EPS and SPS pictures We can now sensibly talk about PBs on either the base SPS manifold B, or the extended EPS manifold E, denoted as {, } B and {, } E , respectively. The former is computed using Eqs. (4) and (7), whereas the latter is computed using Eq. (15). Additional rules like Eqs. (5) and (6) apply universally to both {, } B and {, } E . Now that we have rid ourselves of the problematic features of the SPS, the next natural question would be: are the two spaces (SPS and the EPS) equivalent in some sense so as to justify action computation in the EPS, instead of the SPS? It is easy to check that, when acting on any two functions f and g that only depend on the SPS coordinates, the two PBs agree, since Eqs. (15) imply Eqs. (4) and (7). Because of this crucial observation, we conclude that the SPS picture and the EPS picture are equivalent for the evolution of f under the flow of g. In other words, This means that either of the two pictures can be used to evolve the system under the H flow. We can state the above compatibility relation of the PBs in B and E in more advanced language of differential geometry. Given some symplectic form Ω, its associated Poisson bracket {f, g} is found from where Ω −1 is the bivector that is the inverse of Ω, In our setting we have a symplectic form Ω B in the SPS and Ω E in the EPS. Eq. (19), the compatibility condition between the two PBs can be reexpressed as where π ⋆ is the pullback induced by the projection map π, and f, g : B → R. Since the LHS is fiberwise constant, 4 so is the RHS; and so we can also consistently push forward this equality to B. Since f and g are arbitrary, this compatibility and the definition of pushforward implies where π ⋆ is the pushforward. The equivalency needs to be pushed even to the integrability arena: the 1.5PN BBH system being integrable or chaotic must not depend on whether we choose to work in the SPS picture or the EPS one. Fortunately, the two pictures are also equivalent when we investigate the integrability of the system, following the LA theorem. In the base SPS manifold, we have the required 10/2 = 5 mutually commuting constants to establish integrability: In the EPS picture, we also have the requisite 18/2 = 9 commuting constants required for integrability. Those are the five constants already listed above, plus S 2 a and R a · P a for a = 1, 2. These nine constants are to be viewed as functions of the EPS coordinates. Because of the integrable nature of the system, there are five (nine) action variables in the SPS (EPS), and similarly for the angle variables. An interesting question arises. Imagine two points P and Q in the EPS which have the same SPS coordinates ⃗ R, ⃗ P , ⃗ S 1 and ⃗ S 2 , but some different fictitious coordinates (shown in Fig. 3). If we were to flow under , with P and Q as starting points for a fixed amount λ 0 , then are the SPS coordinates of the two final points the same? In other words, is π(P ′ (P, f, λ 0 )) = π(Q ′ (Q, f, λ 0 ))? The primes denote the final point reached at the end of the flow. It is easy to check that the answer to the above question is 'yes', and this is due to the compatibility of the PBs (Eq. (19) or Eq. (22)). In other words, when flowing under f in the EPS, the SPS coordinates of the final point reached by the flow depends only on f , λ 0 and the SPS coordinates (and not the fictitious coordinates) of the starting point. 
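The algebraic core of this SPS-EPS equivalence, namely that the canonical brackets of the fictitious variables reproduce the spin brackets once ⃗S_a = ⃗R_a × ⃗P_a, can be checked symbolically. Below is a minimal sketch for a single spin sector (an illustration under these assumptions, not the paper's companion code):

```python
import sympy as sp

# Fictitious position and momentum of one spin sector
R = sp.Matrix(sp.symbols('R1 R2 R3', real=True))
P = sp.Matrix(sp.symbols('P1 P2 P3', real=True))

def pb_eps(f, g):
    # Canonical EPS bracket in this sector: {f, g} = df/dR . dg/dP - df/dP . dg/dR
    dfR = sp.Matrix([sp.diff(f, v) for v in R]); dfP = sp.Matrix([sp.diff(f, v) for v in P])
    dgR = sp.Matrix([sp.diff(g, v) for v in R]); dgP = sp.Matrix([sp.diff(g, v) for v in P])
    return sp.simplify(dfR.dot(dgP) - dfP.dot(dgR))

S = R.cross(P)   # the spin, seen as a cross product of fictitious variables

print(pb_eps(S[0], S[1]) - S[2])   # 0 : {S^x, S^y}_E = S^z, the SPS spin bracket
print(pb_eps(S.dot(S), S[2]))      # 0 : the spin magnitude still commutes with S^z
```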
This is not just a desirable but a necessary feature because it assures us that among an infinity of EPS configurations (lying within a single fiber) that are compatible with a given SPS configuration, we can choose to work with any one of them. We can state the same result in the language of Hamiltonian vector fields. We denote the Hamiltonian vector field generated by the flow under f (whether in the SPS or in the EPS) with where the Poisson bracket on the EPS {·, ·} E acts on the pullback π ⋆ f of the function f = f ( ⃗ R, ⃗ P , ⃗ S 1 , ⃗ S 2 ). Now the compatibility of the brackets in the SPS and the EPS (Eqs. (19) and (22)) tells us that π EPS(E) i.e., the SPS vector field is the pushforward of the EPS vector field. This is equivalent to the result arrived at in the previous paragraph. D. Strategy to compute the action Since the EPS and SPS are equivalent when acting on SPS functions, we can use either of them for our calculations. As already remarked in Sec. III A, we don't know how to compute the fifth action in the SPS. So we now turn to computing the fifth action in the EPS via which interestingly is tractable. We state in advance the necessary result that the fifth action (Eq. (35)) in the EPS is fiberwise constant (see Footnote 4), meaning it can be written in terms of only the observable coordinates ( ⃗ R, ⃗ P , ⃗ S 1 , ⃗ S 2 ). In other words, the dependence of this action on the unmeasurable variables occurs only through the combinations ⃗ This makes it possible to treat the fifth action as a function of only the SPS coordinates. Another important question arises. Since we are computing the fifth action in the EPS, do we have a legitimate action in the SPS? The answer is 'yes' in a certain sense (as explained below), although due to the SPS-EPS equivalency, we can in principle, totally disregard the SPS and work only in the EPS, through and through. Eq. (10) is the popular loop-integral definition of action. This action has an important property that under its flow by 2π (and not any smaller amount), we get a closed loop. 5 In fact, this is such an important feature that it can also serve as another definition of the action (call it the "loop-flow definition"). We will use the loop-integral definition to compute the action in the EPS, and we show in Appendix B that the pushforward of this action to SPS satisfies the loop-flow definition of action. We make a few closing remarks before we turn to the evaluation of the fifth action integral of Eq. (26). We have numerically verified that flowing by 2π under the fifth action (to-be-computed from Eq. (26)) yields a closed loop (as required by the loop-flow definition), within numerical errors, whether the action is treated as an SPS function or an EPS function. Although, the first four action integrals computed in Ref. [16] were done in the SPS, we could have also computed them in the EPS, and then pushforwarded these integrals to the SPS. The results would be the same as the four action integrals already presented in Ref. [16], except for some irrelevant additive constants. In summary, the equivalence of the two pictures (in terms of integrability, action-angle variables, and most importantly, the evolution under a flow associated with any observable), the global exactness of the symplectic form Ω E , and the ease of evaluation of the action variables, make us prefer the EPS over the SPS for the action computation. IV. COMPUTING THE FIFTH ACTION Four out of the five actions were already presented in Ref. [16]. 
Here we compute the fifth one. For the fifth action, we generate a closed loop on the invariant n-torus by flowing under S eff · L, and other commuting constants. After flowing under S eff · L by a certain amount ∆λ S eff ·L (to be computed), although the mutual angles between ( ⃗ L, ⃗ S 1 , ⃗ S 2 ) return to their original values, these individual vectors have not. So we have not formed a closed loop yet. However, additional flows under J 2 , L 2 , S 2 1 , and S 2 2 will close the loop (shown in Appendix A), and at the same time ensuring that this loop is in a different homotopy class than the four associated to the other actions. We will see that we do not need to flow along H or J z for the fifth action computation. The fifth action integral can be computed piecewise as five integrals, where each part corresponds to the segment generated by flowing under the quantity in the subscript. 5 See the proof of Theorem 11.6 of Ref. [24] to arrive at this conclusion. Focusing on J S eff ·L , we will need the evolution equations under the flow of S eff · L in the EPS, which read and they imply From these evolution equations we have with ∆λ S eff ·L = λ f − λ i being the required flow parameter amount (for the mutual angles of ( ⃗ L, ⃗ S 1 , ⃗ S 2 ) to be restored). We could pull S eff · L out of the integral since it is a constant under the flow of S eff · L. After performing similar calculations, we can also show that (see also Sec. III-A of Ref. [16]) where the quantities ∆λ i 's are the flow amounts required to close the loop under the corresponding commuting constant in the subscript. This finally renders the fifth action to be which means that the fifth action computation has now boiled down to computing the five flow amounts ∆λ i 's. Due to the tedious nature of the computation of these five parameter flow amounts, they have been relegated to Appendix A. Summarizing, the fifth action is given by Eq. (35), where the ∆λ's are presented in Eqs. (A42), (A67), (A77), (A94) and (A95). Because our derivation assumed m 1 > m 2 , this expression of the action is not manifestly symmetric under the exchange 1 ↔ 2 (labels of the two black holes). However, as discussed in the text after Eq. (A28), the symmetry can be restored simply. Note that in Ref. [16], the flows under J 2 , J z , and L 2 individually form closed loops. This implies that the associated actions are functions of just these individual conserved quantities. Meanwhile, to get a closed loop for the fifth action evaluation, we need to flow under all of J, L, S eff · L, S 1 , and S 2 , which makes the fifth action a function of all these five quantities. Fifth action in the equal mass case The above result for the fifth action in Eq. (35) is not manifestly finite in the equal mass limit: there are many factors of (σ 1 − σ 2 ) which vanish in this limit, including some in denominators. We have checked numerically that the equal mass limit of J 5 is finite, but trying to take this limit analytically is cumbersome. There is however a simpler way, and the solvability of the equal-mass case has been independently investigated in the literature, albeit in the orbit-and precession-averaged approach [29]. Working with only the SPS variables, when σ 1 = σ 2 (equal-mass case), it is easy to check that ⃗ S 1 · ⃗ S 2 , along with H, J 2 , L 2 , and J z forms a set of five mutually commuting constants. In fact, S eff · L can then be seen as a function of these five constants, and is therefore no longer an independent constant. 
It can be checked that under the (36) which imply that both the spin vectors rotate around ⃗ S, which itself remains fixed under this flow. ⃗ R and ⃗ P don't move and hence only the spin sectors contribute to the action integral. At this point, we can simply use the result of Eq. (28) of Ref. [16] withn = ⃗ S/S, which gives our fifth action variable for the equal mass case as The reason we used a tilde in the above equation is because J 5(m1=m2) need not be the equal mass limit of J 5 , since action variables of a system are not unique; see Proposition 11.3 of Ref. [24]. Finally, using the equal mass relations in Eq. (38) of Ref. [16], it is possible to arrive at an equation connecting the Hamiltonian with the actions. Performing a PN series inversion thereafter, one can write an explicit expression for the Hamiltonian in terms of the actions, up to 1.5PN. This can be used to explicitly obtain the frequencies of the system via ω i = ∂H/∂J i for the equal-mass case. V. FIFTH ACTION AT THE LEADING PN ORDER The action variable given by Eq. (35) is in exact form with respect to the 1.5PN Hamiltonian H. It is a worthwhile exercise to write the leading order contribution of this action because it is a much shorter expression than the "exact" one. This is in the same spirit as the expression of the fourth action variable as a PN series which was presented in Eq. (38) of Ref. [16]. Another advantage is that we can then write S eff · L in terms of the actions, including the fifth one (discussed below), which when used with Eq. (38) of Ref. [16] can give an expression for Hamiltonian in terms of the actions. Note that out of the five actions: J, L, J z , J 4 , and J 5 (see Ref. [16] for the first four), the first two coincide with each other at 1PN order due to the absence of spins. The next important action variable at 1PN is the 1PN version of J 4 [20]. J z is irrelevant when it comes to computing frequencies since the Hamiltonian is never a function of J z . This explains the presence of only two frequencies (resulting from effectively two actions) at 1PN. Now since J 5 comes into play for the first time only at the 1.5PN order, it makes sense to expand it in a PN series and work with the leading order term only, if we are working at 1.5PN. We now turn our attention to extracting this leading order term. We sketch the plan for how to obtain the leading PN contribution to J 5 . It comprises a couple of steps which were performed in Mathematica. Step 1: To start with, instead of writing the various quantities which make up J 5 in terms of the five commuting constants, write them only in terms of ⃗ L, ⃗ S 1 , ⃗ S 2 , σ 1 and σ 2 with the understanding that ⃗ S 1 and ⃗ S 2 are 0.5PN order higher than ⃗ L; see Ref. [16] for more details on this. Attach a formal PN order counting parameter ϵ to ⃗ S 1 and ⃗ S 2 . This ϵ will be used as a PN perturbative expansion parameter: every power of ϵ stands for an extra 0.5PN order. At the end of the calculation, ϵ will be set equal to 1. Writing various quantities of interest in terms of ⃗ L, ⃗ S 1 , and ⃗ S 2 is imperative since it serves to expose the PN powers explicitly. For example, J 2 − L 2 = O(ϵ 1 ), though both J 2 and L 2 are O(ϵ 0 ). This becomes manifestly clear when J 2 − L 2 is written in the above way. Step 2: Instead of trying to series expand J 5 directly in terms of ϵ in one go, we first series expand various quantities that make up J 5 , and then use these expanded versions to finally build up the series-expanded version of J 5 . 
As a first step, series expand the cubic expression of Eq. (A21), and its roots, keeping terms up to O(ϵ 2 ). Expansion of the roots up to O(ϵ 2 ) is necessary because the turning points f 1 and f 2 coincide at lower orders. Step 3: Series expand various other quantities that make up J 5 , such as k 2 , B 1 , B 2 , D 1 , D 2 , α 1 and α 2 in ϵ such that the resulting expansions have two non-zero post-Newtonian terms. We don't have to worry about series expanding certain other quantities which make up ∆λ 4 and ∆λ 5 , since they don't contribute to the fifth action variable at the leading order. Step 4: Using these series-expanded ingredients, build up J 5 of Eq. (35). The PN orders of the five summands of J 5 (as shown in Eq. (27)) are schematically shown here as where we have indicated that the leading order components of J J 2 and J L 2 cancel each other. Our leading order J 5 is thus the sum of the first three contributions. The last two contributions being at sub-leading orders can be dropped. At this point we can set ϵ = 1. Step 5: At this point the resulting perturbative J 5 is a function of ⃗ L, ⃗ S 1 , ⃗ S 2 , σ 1 , σ 2 and dot products formed out of them. We still want to write this as a function of the commuting constants only, keeping in line with the tradition followed in the action-angle variables formalism. To do so, we eliminate ⃗ L · ⃗ S 1 and ⃗ L · ⃗ S 2 using the following results valid up to the leading PN order which finally yields the leading PN order contribution to J 5 as where we define the combinations We could have chosen to eliminate ⃗ L · ⃗ S 1 and ⃗ L · ⃗ S 2 using slightly modified forms of Eqs. (46) by simply ignoring S 2 1 and S 2 2 terms in the numerator. These modified forms of Eqs. (46) and the resulting modified form of the leading order contribution to the fifth action would still agree with the original results (Eqs. (46) and Eq. (47)) up to the leading PN order. The above expression of linearized fifth action is not manifestly symmetric with respect to the label exchange 1 ↔ 2. This is because from the beginning, we assumed m 1 > m 2 while deriving the 1.5PN exact fifth action; see the text after Eq. (A28). We can easily make this leading PN order version of fifth action symmetric by replacing (σ 1 − σ 2 ) with −|σ 1 − σ 2 | only in the denominator of the RHS of Eq. (47). This is because We note that the expression of the leading PN order contribution to the fifth action in Eq. (47) is much shorter than that of the exact 1.5PN fifth action (when both are expressed in terms of the commuting constants). This could be used in an efficient implementation of the evaluation of the fifth action on a computer. We also note that Eq. (47) can be used to arrive at a quartic equation in S eff · L with other action variables as parameters of this quartic equation. This means it is in principle possible to solve for S eff · L as a function of the actions. By inserting this into Eq. (38) of Ref. [16], we can explicitly find the 1.5PN H( ⃗ J ) as a function of all of the actions (after a PN series inversion). This gives an alternative approach for computing the frequencies ω i = ∂H/∂J i which can be compared with the approach in Sec. VI. We have also numerically verified that J 5 as presented in Eq. (47) above converges to the exact 1.5PN version in the limit of small PN parameter (S 1 , S 2 ≪ L ). A. 
Computing the frequencies Since we have an integrable Hamiltonian system, the Hamiltonian is a function of the actions and not the angles, though it may not be possible to write H explicitly in terms of the actions. In terms of the actions, the equations of motion for the respective angle variables are trivial, As a consequence, the usual phase space variables are all multiply-periodic functions of all of the angle variables. Concretely, this means a Fourier transform of some regular coordinate would consist of a forest of delta function peaks at integer-linear combinations of the fundamental frequencies ω i [30]. Additionally, if we know the frequencies, we can locate resonances -where the ratio of two frequencies is a rational number -which are key to the KAM theorem and the onset of chaos. With ⃗ C standing for the vector of all five mutually commuting constants, H being one of these C i 's, H is automatically a function of ⃗ C. In principle, once can invert ⃗ J ( ⃗ C) (at least locally, via the inverse function theorem) for ⃗ C( ⃗ J ), and thus find an explicit expression for H( ⃗ J ) paving the road for the computation of the frequencies ω i 's. But this is not necessary. Instead, we follow the approach given in Appendix A of Ref. [31] to find the frequencies as functions of the constants of motion, via the Jacobian matrix between the five C i 's and the five J i 's. For the purpose of frequency computations, we take our C i 's to be (in this specific order) ⃗ C = {J, J z , L, H, S eff · L}. As two of us showed in Ref. [16], the first three of these are already action variables. We take the order of the actions to be ⃗ J = {J, J z , L, J 4 , J 5 }. The expression for J 4 was given as an explicit function of (H, L, S eff · L) in Ref. [16]. The Jacobian matrix ∂J i /∂C j can be found explicitly, since we have analytical expressions for ⃗ J ( ⃗ C). This matrix is somewhat sparse, given by (51) Now we use the simple fact that the Jacobian ∂C i /∂J j is the inverse of this matrix (assuming it is full rank), Because of the sparsity of the matrix in Eq. (51), we directly invert and find the only nonvanishing coefficients in the inverse are The frequencies we seek are in the fourth row of this matrix. Matrix inversion yields the following expressions for the frequencies: . The frequency ω 2 = ∂H/∂J z vanishes since H cannot depend on J z , to preserve SO(3) symmetry. The derivatives of J 4 with respect to (H, L, S eff · L) are easy to compute from the explicit expression given in Eq. (38) of Ref. [16]. Taking the derivatives of J 5 in Eqs. (55) involves many intermediate quantities that arise from the chain rule, and are presented in Appendix C. B. The angle variables Canonical perturbation theory [17,18] has the potential to furnish 2PN action-angle variables when supplied with 1.5PN ones. To use this tool, we want to be able to express perturbations to the Hamiltonian (namely, higher PN order terms) as functions of the angle variables which are canonically conjugate to the actions. One of these angles -the mean anomaly, which is conjugate to our J 4 -has been presented previously in the literature, in pieces. We have explicitly checked that the Poisson bracket between J 4 the 1.5PN mean anomaly (combining 1PN and 1.5PN pieces of the results from Refs. [19] and [32]) is 1, up to 1.5PN order. 6 Constructing angle variables We now lay out a roadmap on how to implicitly construct the rest of the angle variables on the invariant tori of constant ⃗ J (or constant ⃗ C). 
To be more precise, we show how to obtain the standard phase-space coordinates ( ⃗ P, ⃗ Q) as explicit functions of action-angle variables ( ⃗ J , ⃗ θ). This is in fact the more useful transformation (rather than ( ⃗ J , ⃗ θ) as explicit functions of ( ⃗ P, ⃗ Q)) for canonical perturbation theory, since we will need to transform the 2PN and higher Hamiltonian (which is given as explicit function of ( ⃗ P, ⃗ Q)) into action-angle variables. The method to assign angle variables on invariant tori is straightforward. Pick a fiducial point P 0 on an invariant torus, and give it angle coordinates ⃗ 0 ≡ (0, . . . , 0). Then every other point on this same torus, with angle coordinates θ i , is reached by integrating a flow from P 0 by amounts θ i under each of the actions J i . This is because the flow parameter is in fact the angle parameter: The Poisson brackets evaluating to Kronecker delta follows because θ i and J j are canonically conjugate coordinates; see Theorem 10.17 of Ref. [24]. Since the actions commute, we are free to flow under these actions in any order. The construction explained above was only on an individual torus. The only requirement for extending these variables to being full phase space variables is that the choice of fiducial point P 0 ( ⃗ J ) is smooth in ⃗ J . Given any choice of angle variables, we can always re-parameterize them by adding a constant that is a smooth function of ⃗ J . That is, if θ i are angle variables, then so arē θ i = θ i + δθ i ( ⃗ J ), with smooth δθ i , which can be verified by taking Poisson brackets: θi , J j = δ i j . Some of these angle variables may be simpler than others, but here we are only interested in finding one such construction. So now, the problem of assigning the angle coordinates on the torus has been transformed into that of flowing 6 The result in Refs. [19] (Eq. (7.1 a) under all the actions, one by one, by amounts equal to the angle coordinates of the point whose angles are desired (assuming that the starting point had ⃗ θ = ⃗ 0). To integrate the equations under the flow associated with any of the five actions, we start with where ξ is any one of the phase space coordinates. This is the same sparse matrix ∂J i /∂C j which appeared in the previous section in Eq. (51). The matrix ∂J i /∂C j is a function of only the ⃗ C's, and thus is constant on each torus and each of the flows we consider. Hence, integrating the above equation boils down to integrating under the flow of the C i 's. We will now briefly explain how to obtain the solution for the flow under each of the C i 's one by one. Solutions to flow under the commuting constants The solution for flow under H has been given in Ref. [22]; it has been termed as the "standard solution" there. It is found by filling in the gaps in the solution provided in Ref. [15]. 7 The solution for the flow under S eff · L is constructed in Appendix A, with minor caveats. Eqs. A39, A66, and A76 in Appendix A collectively give solutions for ⃗ L and ⃗ R, but the appendix does not give explicit solutions for ⃗ P , ⃗ S 1 and ⃗ S 2 . However, this is not a major hurdle for the following reasons. The solution for ⃗ P can be easily found from that for ⃗ R by noting that P and the angular offset between ⃗ R and ⃗ P remain constant under the S eff · L flow. Also, the solutions for ⃗ S 1 can be had using similar calculations as for the solution for ⃗ L. Once we have ⃗ S 1 , ⃗ S 2 can be found from ⃗ S 2 = ⃗ J − ⃗ L − ⃗ S 1 , and the fact that ⃗ J does not change under the S eff · L flow. 
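Returning briefly to the frequency computation of Sec. VI A, the Jacobian-inversion step there can be prototyped generically. The sketch below is only an illustration of that linear-algebra step: J_of_C is a user-supplied callable standing in for the paper's closed-form ⃗J(⃗C), the finite-difference Jacobian replaces the analytical derivatives, and h_index marks which entry of ⃗C is the Hamiltonian.

```python
import numpy as np

def frequencies(J_of_C, C, h_index=3, eps=1e-6):
    """Fundamental frequencies omega_i = dH/dJ_i from a map C -> J(C) between the
    commuting constants and the actions, obtained by inverting the Jacobian dJ_i/dC_j."""
    C = np.asarray(C, dtype=float)
    n = C.size
    dJdC = np.empty((n, n))
    for j in range(n):
        dC = np.zeros(n)
        dC[j] = eps
        dJdC[:, j] = (np.asarray(J_of_C(C + dC)) - np.asarray(J_of_C(C - dC))) / (2 * eps)
    dCdJ = np.linalg.inv(dJdC)     # dC_i/dJ_j
    return dCdJ[h_index]           # the row dH/dJ_j, i.e. the frequencies
```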
It now remains to show how to integrate under the flow of the remaining three C i 's, (J 2 , J z and L 2 ). Sec. III (specifically Eqs. (21)-(23)) of Ref. [16] showed that the equations for a flow under any of these quantities can be concisely written in a generalized form as Here ⃗ U is the constant vector (under the respective flow) 2 ⃗ J,ẑ, or 2 ⃗ L when C i is J 2 , J z , or L 2 , respectively. In the above equation, ⃗ V stands for any of ⃗ R, ⃗ P , ⃗ S 1 , and ⃗ S 2 , with the exception that under the flow of L 2 , spin vectors don't move; so ⃗ V stands for only ⃗ R and ⃗ P in this case. This basically means that ⃗ V rotates around the fixed vector ⃗ U with an angular velocity whose magnitude is simply U . Constructing the solutions to the flows under J and L in terms of Cartesian components is cumbersome, so we will work with the magnitudes and the directions of the vectors instead. This paragraph assumes the reader is familiar with the definitions of the frames (ijk) and (i ′ j ′ k ′ ) which have been introduced with the help of Fig. 5 in Appendix A. Now in light of Eq. (57), it is a simple matter to see that the equations for flow under J and L (or rather Eqs. (21) and (23) of Ref. [16]) imply that • Under the flow of J by an amount ∆λ, the azimuthal angles of ⃗ R, ⃗ P , ⃗ S 1 and ⃗ S 2 in the inertial (ijk) frame increase by ∆λ. The magnitudes of the vectors don't change. • Under the flow of L by an amount ∆λ, the azimuthal angles of ⃗ R, and ⃗ P in the non-inertial (i ′ j ′ k ′ ) frame increase by ∆λ, whereas the spin vectors don't move. The magnitudes of the vectors don't change. The flow under J z can be handled similarly. With all the individual pieces now identified, it is now straightforward, although lengthy to find each standard phase space variable as an explicit function of the angle variables θ i , on any invariant torus. C. Action-angle based solution at 1.5PN and higher PN orders Now there are two approaches to solving the real-time dynamics of the system, i.e. a flow under H. The approach by one us in Ref. [15] was to directly integrate the differential equations, 7 yielding a quasi-Keplerian parameterization. Although this method is direct, it seems quite difficult to extend this to higher PN orders. The second approach is the action-angle based one, the subject of this paper. All the angles have a trivial real time evolution, each one increasing linearly with timeθ i = ω i ( ⃗ J ). After a certain time t, θ i has changed by ω i t, which we can compute. So assuming that ⃗ θ(t = 0) = ⃗ 0, we can compute the angles ⃗ θ(t) at any general time t, with the ⃗ J unchanged. Now the problem has become that of computing ( ⃗ P, ⃗ Q)(t) given ( ⃗ J , ⃗ θ)(t), whose roadmap has been clearly laid out in Sec. VI B. This concludes our brief description of the action-angle based method of computing the solution. This method has the advantage that evaluating the state of the system (or its derivatives, as needed for computing gravitational waveforms) can be trivially parallelized by evaluating each time independently. Both the above solution methods have been implemented by us in a public Mathematica package [22]. Moreover, our action-angle based solution allows for the possibility of using non-degenerate perturbation theory [17,18] to extend our solution to higher PN orders. The procedure of Sec. VI B will yield the standard phasespace variables ( ⃗ P, ⃗ Q) as explicit functions of ( ⃗ J , ⃗ θ). 
This is exactly what is required for computing perturbed actionangle variables at higher PN order with canonical perturbation theory. Higher-PN terms in the Hamiltonian are given in terms of ( ⃗ R, ⃗ P , ⃗ S 1 , ⃗ S 2 ), and one must transform them to (unperturbed) action-angle variables to apply perturbation theory. If successful, our method can be seen as the foundation of closed-form solutions of BBHs with arbitrary masses, eccentricity, and spins to high PN orders under the conservative Hamiltonian (excluding radiation-reaction for now). This is in the same spirit as Damour and Deruelle's quasi-Keplerian solution method for non-spinning BBHs given in Ref. [19], which has been pushed to 4PN order recently [33]. We are also currently working to find the 2PN action-angle based solution via canonical perturbation theory. Note that we could not have applied non-degenerate perturbation theory to a lower PN order (say 1PN) to arrive at 1.5PN or higher PN action-angle variables, because the lower PN systems are degenerate in the full phase space. This is because the spin variables are not dynamical until the 1.5PN order; so at lower orders, there are fewer than four action variables and frequencies. 8 At 1.5PN, the system becomes non-degenerate, and can be used as a starting point for perturbing to higher order. We therefore view our construction of the action-angle variables as significant for finding closed-form solutions of the complicated spin-precession dynamics of BBHs with arbitrary eccentricity, masses, and spin. VII. SUMMARY AND NEXT STEPS In this paper, we continue the integrability and actionangle variables study of the most general BBH system (both components spinning in arbitrary directions, with arbitrary masses and eccentricity) initiated in Ref. [16]. There, two of us presented four (out of five) actions at 1.5PN and showed the integrable nature of the system at 2PN by constructing two new 2PN perturbative constants of motion. Here, we computed the remaining fifth action variable using a novel mathematical method of inventing unmeasurable phase space variables. We derived the leading order PN contribution to the fifth action, which is a much shorter expression than the "exact" one. We showed how to compute the fundamental frequencies of the system without needing to write the Hamiltonian explicitly in terms of the actions. Finally, we presented a recipe for computing the five angle variables implicitly, by finding ( ⃗ R, ⃗ P , ⃗ S 1 , ⃗ S 2 ) as explicit functions of actionangle variables. We leave deriving the full expressions to future work. We also sketched how the 1.5PN action-angle variables can be used to construct solutions to the BBH system at higher PN orders via canonical perturbation theory. Typically, action-angle variables are found by separating the Hamilton-Jacobi (HJ) equation [17], though we were able to work them out without effecting such a separation. Finally from this vantage point, we summarize the major ingredients that went into our action-angle based solution for the PN BBHs: (1) the classic Sommerfeld contour integration method for the Newtonian system, which gave the Newtonian radial action long ago [17]; (2) its PN extension by Damour and Schäfer [20]; (3) the integration techniques worked out in the context of the 1.5PN Hamiltonian flow by one of us in Ref. [15]; and finally, (4) the method of extending the phase space by inventing fictitious phase-space variables introduced in this paper. 
A couple of extensions of the present work are possible in the near future. Currently we are working on presenting our 1.5PN action-angle-based solution in a more concrete and consolidated form, as well as re-presenting the solution given in Ref. [34] with the 1PN terms, ignored in the original work, now included. We have developed a public Mathematica package that implements these two solutions [22], as well as the one from numerical integration. This will prepare a solid base for pushing our action-angle-based solution to 2PN. Since the integrable nature (existence of action-angle variables) has already been shown in Ref. [16], constructing the 2PN action-angle variables (via canonical perturbation theory) and an action-angle based solution should be the next natural line of work. Our group has already initiated efforts in that direction. Since the motivation behind this action-angle study of BBH systems is to obtain closed-form solutions, it would be an interesting challenge to incorporate the radiation-reaction effects at 2.5PN into the to-be-constructed 2PN action-angle based solution. There is also hope that the action-angle variables at 1.5PN can be used to re-present the effective one-body (EOB) approach to the spinning binary of Ref. [21] (via a mapping of action variables between the one-body and the two-body pictures), as was originally done for non-spinning binaries in Ref. [35]. Also, it would be interesting to compare our action-angle and frequency results in the limit of extreme mass ratios with similar work on Kerr extreme mass-ratio inspirals (EMRIs) [36], in some selected EMRI parameter-space region where the PN approximation is also valid. Comparison is also possible with the recently derived solution of EMRIs with spinning secondaries [37,38]. Another line of effort could be the task of building gravitational waveforms using the BBH solution presented in this paper; Ref. [14] may serve as one of the guides. Lastly, there is a possibility of a mathematically oriented study of our novel method of introducing the unmeasurable, fictitious variables to compute the fifth action. A few pertinent questions along this line are: (1) Is there a way to compute the fifth action without introducing the fictitious variables? (2) Are there other situations (with other topologically nontrivial symplectic manifolds) where an otherwise intractable action computation can be made possible using this new method? (3) What is the deeper geometrical reason that makes this method work?

Appendix A

In this appendix, we will rely heavily on the methods of integration first presented in Ref. [15], which integrated the evolution equations for the flow under H, with the 1PN Hamiltonian terms omitted. We first need to set up some vector bases before we can integrate the equations of motion. Fig. 5 displays two sets of bases. Unless stated otherwise, the components of any vector in this paper are written in the inertial triad (ijk). Since derivatives of components of vectors depend on the basis, we mention here that this (ijk) triad is also the frame in which all component derivatives of any general vector will be taken, unless stated otherwise.

1. Evaluating ∆λ_{S_eff·L}

The evaluation of ∆λ_{S_eff·L} is possible only once we can compute the mutual angles between ⃗L, ⃗S_1, and ⃗S_2 as functions of the flow parameter under the flow of S_eff · L.
Therefore, most of Appendix A 1 deals with how to do this calculation and only towards the end we arrive at the expression of ∆λ S eff ·L . Under the flow of S eff · L, a generic quantity g evolves as dg/dλ = {g, S eff · L} which implies the three evolution equations for the dot products between the three angular momenta under the flow of S eff · L, which means that we can easily construct three constants of motion (dependent on the five mutually commuting constants as introduced before). These are the differences between the three quantities whose λ derivatives all agree, the triple product ⃗ L · ( ⃗ S 1 × ⃗ S 2 ). Namely, these constants of motion are Stated differently, all this means that the three mutual angles between ⃗ L, ⃗ S 1 , and ⃗ S 2 satisfy linear relationships. With the understanding that hatted letters denote unit vectors, if we define the mutual angles as cos κ 1 ≡L ·Ŝ 1 , cos κ 2 ≡L ·Ŝ 2 , and cos γ ≡Ŝ 1 ·Ŝ 2 , their relations are where We will integrate the solution for which is the most symmetrical of the three dot products given above. Thus if we have a solution for f (λ), we automatically have solutions for the three dot products, The triple product on the RHS of Eq. (A11) is the signed volume of the parallelepiped with ordered sides ⃗ L, ⃗ S 1 , ⃗ S 2 . In general, for a parallelepiped with sides ⃗ A, ⃗ B, ⃗ C, and dot products a standard result from analytical geometry is that the signed volume of this parallelepiped can be written as where the sign comes from the handedness of the ( ⃗ A, ⃗ B, ⃗ C) triad. The radicand is always non-negative. As above in Eq. (A12)-(A14), we can rewrite all angles in terms of f . We can then use this volume equation to express the evolution for f as where the cubic P (f ) ≥ 0 and is given by This is a general cubic, which we will write as with the coefficients It is important here to note the sign of a 3 , The fact that the cubic becomes undefined when m 1 = m 2 is the reason we treated the equal-mass case separately towards the end of Sec. IV. Now we rewrite the cubic in terms of its roots, where A = a 3 is the leading term, and when all three roots are real, we assume the ordering f 1 < f 2 < f 3 . In other words, we assume the roots to be real and simple. For completeness, we state the roots in the trigonometric form. The cubic can be depressed by defining g ≡ f + a 2 /(3a 3 ) in terms of which P becomes P = a 3 (g 3 + pg + q) with the coefficients When there are three real solutions, p < 0, and the argument to the arccos below will be in [−1, +1]. In terms of these depressed coefficients, the trigonometric solutions for the k = 1, 2, 3 roots are This form yields the desired ordering f 1 < f 2 < f 3 . Whenever any two of the vectors { ⃗ L, ⃗ S 1 , ⃗ S 2 } are collinear, the triple product on the RHS of Eq. (A11) vanishes. A less drastic degeneracy is if two roots coincide. Here we will restrict ourselves to the case of three simple roots. At the end of this subsection, we will argue that the cubic has three real roots for the cases of physical interest. Since P (f ) > 0, we have That is, f will lie between the two roots where P (f ) > 0. Without loss of generality we will take m 1 > m 2 and handle only this case. Since P (f ) is cubic, the ODE df /dλ = ± P (f ) can be integrated analytically in terms of elliptic integrals (and their inverses, elliptic functions). The behavior is typical: f oscillates between the two turning points f 1 , f 2 (when m 1 > m 2 ). 
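The trigonometric root formulas and the oscillation of f between the two turning points can be prototyped numerically. In the sketch below (illustrative only; the coefficients a_3, ..., a_0 are placeholders assumed to multiply descending powers of f, and the quadrature is a numerical cross-check rather than the closed-form elliptic-integral result), the Viete roots are compared against a companion-matrix root finder, and the λ-period of the f oscillation is obtained by direct integration of dλ = df/√P(f):

```python
import numpy as np
from scipy.integrate import quad

def cubic_roots_trig(a3, a2, a1, a0):
    # Real roots of P(f) = a3 f^3 + a2 f^2 + a1 f + a0 (three real roots assumed),
    # via the depressed cubic g^3 + p g + q and Viete's trigonometric formula.
    p = (3 * a3 * a1 - a2**2) / (3 * a3**2)
    q = (2 * a2**3 - 9 * a3 * a2 * a1 + 27 * a3**2 * a0) / (27 * a3**3)
    m = 2 * np.sqrt(-p / 3)
    theta = np.arccos(np.clip(3 * q / (p * m), -1.0, 1.0)) / 3
    g = m * np.cos(theta - 2 * np.pi * np.array([0, 1, 2]) / 3)
    return np.sort(g - a2 / (3 * a3))

def f_oscillation_period(a3, a2, a1, a0):
    # lambda-period of f: Lambda = 2 * int_{f1}^{f2} df / sqrt(P(f)); the substitution
    # f = f1 + (f2 - f1) sin^2(u) removes the integrable endpoint singularities.
    f1, f2, _ = cubic_roots_trig(a3, a2, a1, a0)
    P = lambda f: a3 * f**3 + a2 * f**2 + a1 * f + a0
    g = lambda u: (2 * (f2 - f1) * np.sin(u) * np.cos(u)
                   / np.sqrt(max(P(f1 + (f2 - f1) * np.sin(u)**2), 1e-300)))
    return 2 * quad(g, 0.0, np.pi / 2)[0]

coeffs = (2.0, -1.0, -3.0, 0.5)   # placeholder coefficients with P > 0 between f1 and f2
print(cubic_roots_trig(*coeffs), np.sort(np.roots(coeffs).real))
print(f_oscillation_period(*coeffs))
```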
We can not integrate through the turning point using the first-order form df /dλ (it is not Lipschitz continuous there), but by taking a derivative of Eq. (A19) to find d 2 f /dλ 2 , we can see that the motion is regular at each turning point. At both turning points, the ± sign (the handedness of the triad ( ⃗ L, ⃗ S 1 , ⃗ S 2 )) must flip, so that f oscillates between the two turning points. Continuing further with Eq. (A19), we write Reparameterize this integral via We define ϕ p so it increases monotonically with λ as Now factor out (f 3 − f 1 ) from the radicand in the denominator to give where we have defined the elliptic modulus Note that 0 < k < 1, because of the ordering of the roots. Equation (A33) can be integrated to give where F (ϕ p , k) is the incomplete elliptic integral of the first kind defined as [39][40][41][42] In Eq. (A35), λ 0 is the initial value of the flow parameter and We can now rewrite the parameterization in terms of sn and am, the Jacobi sine and amplitude functions [39], This turns our parameterization into The solution for f is thus given by Eq. (A39) accompanied by Eqs. (A35) and (A37). It now remains to generalize the solution for f when f at λ = λ 0 may be in any arbitrary initial state (such as df /dλ < 0 or > 0) and it can oscillate between f 1 and f 2 any arbitrary number of times during the integration interval. In this most general scenario, the solution is still given by Eq. (A39), accompanied by Eq. (A35) and a variant of (A37), which reads where we use the + sign if (df /dλ)| λ0 > 0, and vice versa. From this solution for f (λ), we recover solutions for the three dot products ⃗ S 1 · ⃗ S 2 , ⃗ L · ⃗ S 1 , and ⃗ L · ⃗ S 2 , by using Eqs. (A12)-(A14). We also immediately get the λ-period of the precession. One precession cycle occurs when ϕ p goes from 0 to π, or when f starts from f 1 , goes to f 2 and then returns back to f 1 (see parameterization in Eq. (A30)). Integrating on this interval via Eq. (A35) gives the equation for the λ-period of precession, which we call Λ, in terms of the complete elliptic integral of the first kind K(k) ≡ F (π/2, k) = F (π, k)/2, Recall that our goal is to close a loop in the EPS by successively flowing under S eff · L, J 2 , L 2 , S 2 1 , and S 2 2 . A necessary condition for the phase-space loop to close is that the mutual angles between ⃗ L, ⃗ S 1 , and ⃗ S 2 recur at the end of the flow. Since the flows under J 2 , L 2 , S 2 1 , and S 2 2 do not change these mutual angles, we choose to flow under S eff · L by exactly the precession period, This flow under S eff · L is pictorially represented by the red P Q curve in Fig. 4. Now we try to address the issue of the nature of roots of the cubic P (f ) of Eq. (A22). It is predicated on the nature of the cubic discriminant D, with D > 0 implying three real roots, D < 0 implying one real and two distinct complex roots, and D = 0 implying repeated roots. The discriminant of the exact cubic P (f ) is too complicated for us to investigate its sign analytically. We rather choose to investigate the sign of its leading order PN contribution. It is in the same spirit as the calculation of the leading PN order contribution of J 5 in Sec. V. We write D in terms of ⃗ L, ⃗ S 1 , and ⃗ S 2 while attaching a formal power counting parameter ϵ to both ⃗ S 1 and ⃗ S 2 , for every factor of ϵ signifies an extra 0.5PN order. Then series expand D in ϵ and keep only the leading order term, which comes out to be and this implies three real roots. 
If both spins are aligned or anti-aligned with ⃗ L, we will have repeated roots, and the spins will remain aligned or anti-aligned with ⃗ L as the system evolves under the flows of S eff · L or H. Aside from this special case, the above discussion suggests that the D < 0 case of only one real root is disallowed. This is also necessary on physical grounds, as there must be two turning points for the mutual angle variable f , otherwise f would be unbounded. Evaluating ∆λ J 2 After flowing under S eff · L by parameter ∆λ S eff ·L , the mutual angles between ⃗ L, ⃗ S 1 , and ⃗ S 2 have recurred, but ⃗ L, ⃗ S 1 , and ⃗ S 2 have not. We now plan to flow under J 2 by ∆λ J 2 so that ⃗ L is restored; this restoration is a necessary condition for closing the phase space loop. To find the required amount of flow under J 2 so that ⃗ L is restored, we need to find the final state of ⃗ L after flowing under S eff · L by ∆λ S eff ·L . Instead of working with Cartesian components, we find it more convenient to work with the polar and azimuthal angles of ⃗ L in a new non-inertial frame that we now introduce. At this point we introduce a non-inertial frame with (i ′ j ′ k ′ ) axes whose basis vectors are unit vectors along ⃗ J × ⃗ L, ⃗ L × ( ⃗ J × ⃗ L), and ⃗ L respectively, as depicted pictorially in Fig. 5. Without loss of generality, we choose the z-axis of the (ijk) frame to point along the ⃗ J vector. Now there are two angles to find: the polar θ JL , where cos θ JL = ⃗ J · ⃗ L/(JL), and an azimuthal ϕ L . Since we have already solved for the angles between ⃗ L, ⃗ S 1 , and ⃗ S 2 in Appendix A 1, we have the angle θ JL from This shows that θ JL has recurred after the S eff · L flow, because all the mutual angles between ⃗ L, ⃗ S 1 , and ⃗ S 2 have. So, what remains to be tackled is the azimuthal angle ϕ L . The inertial (ijk) components of ⃗ L are and therefore it follows that As mentioned in the beginning of Appendix A, all vector derivatives are assumed to be taken in the inertial (ijk) frame, unless stated otherwise. With the aid of the instantaneous azimuthal direction vector given bŷ we can extract dϕ L /dλ via an elementary result involving the dot productφ This leads to Now using d ⃗ L/dλ = −d ⃗ S 1 /dλ − d ⃗ S 2 /dλ, and inserting the precession equations for the two spins, We see that everything on the RHS is given in terms of constants of motion (J, L, ⃗ L · ⃗ S eff ) and the inner products between the three angular momenta (which can be found from f (λ) in the previous section). Put everything in terms of f using Eqs. (A12)-(A14) and separate into partial fractions, where we have defined So we need to be able to perform the two integrals (with i = 1, 2) where the last equality is due to Eq. (A19). With these integrals, we will have The integrals I i are another type of incomplete elliptic integral (defined below). Using the parameterization of Eqs. (A30) and (A31), I i becomes where we have defined Thus we can identify the I i 's in terms of the incomplete elliptic integral of the third kind, which is defined as [39] Π(a, b, c) and we get the solution for ϕ L (ϕ p ) Here ϕ L,0 is an integration constant to be determined by inserting λ = λ 0 and ϕ L = ϕ L (λ 0 ) into the Eq. A65. To close the loop, we need to know the angle ∆ϕ L that ϕ L goes through under one period of the precession cycle (when flowing under ⃗ L · ⃗ S eff ), that is, when ϕ p advances by π. 
This is given in terms of the complete elliptic integral of the third kind, Π(α 2 , k) ≡ Π(α 2 , π/2, k) yielding where we have used the fact that Π(α 2 , π, k) = 2Π(α 2 , k). To negate this angular offset caused by flowing under S eff ·L and thereby closing the loop, we need to flow under J 2 by Note that this flow does not alter the mutual angles between ⃗ L, ⃗ S 1 , and ⃗ S 2 , as is necessary to close the loop in the phase space. Now that the mutual angles within the triad ( ⃗ L, ⃗ S 1 , ⃗ S 2 ) have recurred and the full ⃗ L vector has recurred, the concern is if the spin vectors have recurred or not. The spin vectors are constrained not only by their mutual angles with ⃗ L, but also ⃗ J. Their angles with ⃗ J are algebraically related to the mutual angles that we have previously dealt with, e.g. After the respective flows under S eff · L and J 2 by amounts indicated in Eqs. (A42) and (A67), all of these angle cosines between ⃗ L, ⃗ S 1 , and ⃗ S 2 have recurred, which narrows things down to two solutions: the original configuration for ( ⃗ L, ⃗ S 1 , ⃗ S 2 ), and its reflection across the J-L plane. We can rule out the reflected solution with the following observation. The original configuration and its reflection have opposite signs for the signed volume ⃗ L·( ⃗ S 1 × ⃗ S 2 ), and thus opposite signs for the radical P (f ) in Eq. (A19). Now once we return back to the same point on the f axis after flowing under S eff · L, the handedness of the ( ⃗ L, ⃗ S 1 , ⃗ S 2 ) triad is restored. This is because the handedness must have flipped twice: first when f touched f 1 and second when it touched f 2 . 10 Therefore, after the flows by S eff · L and J 2 by the amounts specified in Eqs. (A42) and (A67), each of the three vectors ( ⃗ L, ⃗ S 1 , ⃗ S 2 ) have recurred. This second flow under J 2 is pictorially represented by the green QR curve in Fig. 4. 10 There is also a complex-analytic interpretation. The function P (f ) is an analytic function on a Riemann surface of two sheets. The different signs of ⃗ L · ( ⃗ S 1 × ⃗ S 2 ) correspond to being on the two different sheets. The solution is periodic after completing a loop around both branch points, ending on the same sheet where we started. Evaluating ∆λ L 2 After flowing under S eff · L and J 2 , all the three angular momenta ⃗ L, ⃗ S 1 , and ⃗ S 2 have recurred, but the orbital vectors ( ⃗ R, ⃗ P ) and fictitious vectors have not. We will now restore ⃗ R and ⃗ P by flowing under L 2 by ∆λ L 2 , to be determined in this section. Now, ⃗ R has to be in the i ′ j ′ plane because ⃗ R ⊥ ⃗ L. Denote by ϕ the angle made by ⃗ R with the i ′ axis. The key point is that after successively flowing under S eff · L by λ S eff ·L , J 2 by λ J 2 , and L 2 by a certain amount λ L 2 (to be calculated), if ϕ is restored, then so are ⃗ R and ⃗ P . This is because under these three flows, R, P , and ⃗ R · ⃗ P do not change. Hence the restoration of ϕ after the above three flows by the stated amounts restores both ⃗ R and ⃗ P . Our strategy is to compute ϕ under the flow of S eff · L. The flow under J 2 does not change the angle ϕ, since J 2 rigidly rotates all vectors together. And in the end, we will undo the change to ϕ (caused by the S eff · L flow) by flowing under L 2 . Under the flow of S eff · L, we havė To write the components of this equation in the (i ′ j ′ k ′ ) frame, we need the components of all the individual vectors involved in the same frame which are given by where ϕ is the azimuthal angle of ⃗ R in the (i ′ j ′ k ′ ) frame. 
Here the letter 'n' beside these columns indicates that the components are in the (i ′ j ′ k ′ ) frame, and ξ i 's are the azimuthal angles of ⃗ S i in this (i ′ j ′ k ′ ) frame. The Euler matrixΛ, which when multiplied with the column consisting of a vector's components in the inertial frame gives its components in the (i ′ j ′ k ′ ) frame is (A70) Now we take the ⃗ R in Eq. (A69), evaluate its components in the inertial frame usingΛ −1 . We then differentiate each of these components with respect to λ (the flow parameter under S eff · L) and transform these components back to the (i ′ j ′ k ′ ) frame usingΛ, thus finally yielding the components (in the non-inertial frame) of the derivative of ⃗ R. The result comes out to be (keeping in mind that dR/dλ = 0) (A71) Plugging Eqs. (A69) and (A71) in Eq. (A68) and using the first two components of the resulting matrix equation gives us Note that what we need for Eq. (A68) is the non-inertialframe components of the frame-independent vector⃗ R; not to be confused with the time derivatives of the noninertial-frame components of ⃗ R. We digress a bit to write ⃗ J = ⃗ L + ⃗ S 1 + ⃗ S 2 in component form in the (i ′ j ′ k ′ ) frame using Eqs. (A69). Only the third component is of interest to us, which reads J cos θ JL = L + S 1 cos κ 1 + S 2 cos κ 2 . (A73) We use this equation for θ JL , and Eqs. (A53) for dϕ L /dλ, to write dϕ/dλ in terms of κ 1 , κ 2 , and γ. Finally using Eqs. (A12)-(A14) to express everything in terms of f , we get This is the equivalent of Eq. (A53) for dϕ L /dλ, and therefore its solution can be found in a totally parallel way to what led us to ϕ L (λ) in Eq. (A65). This gives us where again the integration constant ϕ 0 is determined by inserting λ = λ 0 and ϕ = ϕ(λ 0 ) into this equation. The angle ∆ϕ that ϕ goes through under one period of the precession cycle when flowing under ⃗ L · ⃗ S eff , is given in a similar manner as we arrived at Eq. (A66). We get To negate this angular offset caused by flowing under S eff · L, we need to flow under L 2 by Note that this flow does not change any of the three angular momenta ⃗ L, ⃗ S 1 , or ⃗ S 2 , which is necessary for closing the loop in the phase space. This third flow under L 2 is pictorially represented by the blue RS curve in Fig. 4. Once we have made sure that ⃗ R, ⃗ P , ⃗ S 1 , ⃗ S 2 (and hence also ⃗ L) have been restored by successively flowing under S eff · L, J 2 , and L 2 by ∆λ S eff ·L , ∆λ J 2 , and ∆λ L 2 respectively, now is the time to restore the fictitious vectors ⃗ R 1/2 and ⃗ P 1/2 . The strategy and calculations are analogous to the ones for ⃗ R and ⃗ P , so we won't explicate them in full detail. We will show the basic roadmap and the final results. 6. The second non-inertial (i ′′ j ′′ k ′′ ) triad (centered aroundŜ1 ≡ ⃗ S1/S1) is displayed along with the inertial (ijk) triad (centered aroundĴ ≡ ⃗ J/J). For the purposes of these calculations, the relevant figure is Fig. 6, which shows a second non-inertial frame (i ′′ j ′′ k ′′ ) adapted to ⃗ S 1 . Its axes point along ⃗ J × ⃗ S 1 , ⃗ S 1 × ( ⃗ J × ⃗ S 1 ) and ⃗ S 1 , respectively. We also use this figure to introduce the definitions of the azimuthal angle ϕ S1 and polar angle θ JS1 pictorially. Also, just like ϕ was the angle between ⃗ R and the i ′ axis in Appendix A 3, we define ϕ 1 to be the angle between ⃗ R 1 and the i ′′ axis, with the understanding that ⃗ R 1 lies in the i ′′ j ′′ plane. 
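As an illustrative aside (not taken from the paper), the two adapted frames and the matrix Λ that maps inertial components into frame components can be built directly from the angular-momentum vectors: the basis of the (i′j′k′) frame points along J×L, L×(J×L) and L, and that of the (i″j″k″) frame along J×S1, S1×(J×S1) and S1. The numpy sketch below does this with placeholder values for L, S1 and S2.

```python
import numpy as np

def frame_matrix(J, A):
    """Rows are the orthonormal basis along J x A, A x (J x A), and A.

    Multiplying a column of inertial-frame components by this matrix gives the
    components in the non-inertial frame adapted to A (A = L for the primed
    frame, A = S1 for the double-primed frame). Assumes J and A are not parallel.
    """
    e3 = A / np.linalg.norm(A)
    e1 = np.cross(J, A)
    e1 = e1 / np.linalg.norm(e1)
    e2 = np.cross(e3, e1)            # unit vector along A x (J x A)
    return np.vstack([e1, e2, e3])   # the matrix Lambda for this frame

# Placeholder (non-aligned) angular momenta, inertial components:
L  = np.array([0.0, 0.2, 1.0])
S1 = np.array([0.3, 0.0, 0.4])
S2 = np.array([-0.1, 0.25, 0.3])
J  = L + S1 + S2

Lam_L  = frame_matrix(J, L)     # inertial -> (i'j'k')  components
Lam_S1 = frame_matrix(J, S1)    # inertial -> (i''j''k'') components

# Polar angle of L with respect to J (the z-axis of the inertial frame):
cos_thetaJL = J.dot(L) / (np.linalg.norm(J) * np.linalg.norm(L))
print(cos_thetaJL)

# Check: the third primed component of any vector is its projection onto L-hat.
R = np.array([0.7, -0.2, 0.1])
assert np.isclose((Lam_L @ R)[2], R.dot(L) / np.linalg.norm(L))
```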
As far as the fictitious variables of the first black hole are concerned, just like in Appendices A 2 and A 3, all we have to worry about is to restore the change in ϕ 1 which the S eff · L flow (by λ S eff ·L ) brings about, for doing so would imply that both ⃗ R 1 and ⃗ P 1 have been restored. The justifications are analogous to those presented in Appendices A 2 and A 3 while dealing with the orbital sector. Now we proceed to compute the change in ϕ 1 brought about by the S eff · L flow. We denote components in the (i ′′ j ′′ k ′′ ) frame by using the subscript n2. In this frame we have We also have Here ξ 3 and ξ 4 are the azimuthal angles of ⃗ L and ⃗ S 2 , respectively, in the (i ′′ j ′′ k ′′ ) frame. We now write the k ′′ The derivative of ⃗ S 1 along the flow of S eff · L iṡ The analog of dϕ/dλ given in Eq. (A49) becomes Using Eq. (A81), we can arrive at the analog of dϕ/dλ as a function of f [Eq. (A53)], where we have defined Analogous to matrix equations for ⃗ R and⃗ R in Eqs. (A69) and (A71), we can write ⃗ R 1 in component form as and its derivative as (keeping in mind that dR 1 /dλ = 0 along the flow under S eff · L) Also, along the flow under S eff · L, ⃗ R 1 evolves as⃗ Using Eqs. (A79), (A88), and (A89) to express Eq. (A90) in component form and either the first or the second component of the equation when supplemented with Eqs. (A80) and (A83) to eliminate cos θ JS1 and dϕ S1 /dλ gives uṡ ϕ 1 . We again write the partial fraction form (analogous to Eq. (A74)) We have also used Eqs. (A6), (A7), and (A10) to write the cosines of κ 1 , κ 2 , and γ in terms of f . Finally, in a way very similar to how ∆ϕ in Eq. (A76) was found, we find the angle ∆ϕ 1 that ϕ 1 goes through under one period of the precession cycle when flowing under S eff · L. We get where we have defined To negate this angular offset brought about flowing under S eff · L, we need to flow under S 2 1 by This fourth flow under S 2 1 is pictorially represented by the black ST curve in Fig. 4. And finally, by performing similar calculations as above, we can see that ∆λ S 2 2 (the amount we need to flow under S 2 2 ) is given by the following set of equations B 2S2 = 1 2 S 2 σ 2 (−L 2 − JS 2 − S 2 2 + ∆ 1 σ 2 ) + (J + S 2 ) 2 S 2 σ 1 − (J + 2S 2 )∆ 2 σ 1 σ 2 + (J + S 2 )∆ 2 σ 2 1 , (A98) This final fifth flow under S 2 2 is pictorially represented by the orange T P curve in Fig. 4. Of course, this final set of flows under S 2 1 and S 2 2 do not disturb the already restored configurations of the other variables such as ⃗ R, ⃗ P , ⃗ S 1 , and ⃗ S 2 . We mention that it is not recommended to try to arrive at ∆λ S 2 2 from ∆λ S 2 1 by a mere label exchange 1 ↔ 2 (signifying the exchange of the two black holes) because we have already introduced asymmetry in these labels when we assumed m 1 > m 2 in Appendix A 1. Finally, although not required for the fifth action computation, we mention as an aside that the result of the integration of Eq. (A83) is ϕ S1 (λ) − ϕ S10 = 2 where again the integration constant ϕ S10 is determined by inserting λ = λ 0 and ϕ S1 = ϕ S1 (λ 0 ) into this equation. Appendix B: Proof that π⋆(J5) is an action in the SPS By construction, J 5 is an action variable in the EPS (as per the loop-integral defintion), but we also need to show that its pushforward π ⋆ (J 5 ) is an action (as per the loop-flow definition) in the SPS; see Sec. III for these two definitions. The pushforward can be constructed since J 5 is fiberwise constant. 
To show that π ⋆ (J 5 ) is an action, we need to show that (i) flowing under π ⋆ (J 5 ) forms a closed loop, and (ii) this flow by parameter 2π takes us around the loop exactly once. Condition (i) can be shown to be satisfied automatically. Since the loop-integral definition of action implies the loopflow definition, flowing under J 5 in the EPS forms a loop. Call this loop γ (shown in solid cyan in Fig. 4). The image of this loop π(γ) (shown in dashed cyan in Fig. 4) is a loop in the SPS. Meanwhile, because of the compatibility of the PBs (see Sec. III C), the pushforward of the Hamiltonian vector field π ⋆ ( ⃗ X J5 ) is the Hamiltonian vector field of the pushforward, ⃗ X π⋆(J5) = π ⋆ ( ⃗ X J5 ). Therefore flowing under ⃗ X π⋆(J5) forms a loop, namely the image π(γ). The second part follows from homotopy equivalence. In Fig. 4, let γ 1 be the path P QRS in the EPS, which is not a loop. However, its image π(γ 1 ) (in dashed redgreen-blue) is a loop in the SPS. Recall from Appendix A that we constructed γ 1 using three successive flows (under S eff · L, J 2 and L 2 ) to bring the SPS coordinates back to their starting values, thereby making exactly one loop in the SPS. Let γ 2 be the segment ST P , which is vertical in the EPS (it is contained in a single fiber); its image π(ST P ) is a single point. Their composition is γ 3 = γ 2 ·γ 1 , where · is composition of paths. Composing with the projection, π(γ 3 ) = π(γ 2 · γ 1 ) = π(γ 2 ) · π(γ 1 ) = π(γ 1 ) . (B1) Now, the Liouville-Arnold theorem is constructive, meaning that when we find the action J 5 = γ3 P i dQ i /(2π), it generates a flow (γ) in the same homotopy class as the path we integrated over (γ 3 ). The two loops are homotopic, i.e., [γ] = [γ 3 ], where the notation [γ i ] denotes the homotopy class of a map γ i . Since π is a continuous map, the two images are also homotopic. Therefore we also have the homotopy [π(γ)] = [π(γ 3 )] = [π(γ 1 )] . (B2) Therefore we conclude that π(γ) goes around exactly once, just like π(γ 1 ), being in the same homotopy class. Appendix C: Frequently occurring derivatives in frequency calculations Here we present some common derivatives that arise in the computation of frequencies in Eqs. (55). The most important ones are the derivatives of the roots f i of the cubic P . These roots are implicit functions of the constants of motion, f i = f i ( ⃗ C), and the coefficients of the cubic depend explicitly on the constants, P = P (f ; ⃗ C). Since f i is a root, and this identity is satisfied smoothly in ⃗ C, therefore where we have expanded with the chain rule. We can now easily solve for the derivative of a root with respect to a constant of motion, Here P ′ (f ) = ∂P/∂f is the quadratic where the coefficients are given in Eq. (A23). The denominator P ′ (f i ) only vanishes if f i is a multiple root, c 3 , c 4 , c 5 , the combination S eff · L+c 3 S 2 1 h/c 2 +c 4 S 2 1 h/c 2 + c 5 ⃗ S 1 · ⃗ S 2 /c 2 was in mutual perturbative involution with the other phase space constants. It is important to note that the free terms with coefficients c i are at the same PN order that we are keeping, and that they are not simply built out of other constants of motion. With our previous definition of PN integrability, this seems to suggest far more than n independent functions in mutual perturbative involution. This is in stark contrast with exact integrability scenario where one cannot have more than n independent functions in mutual involution on a 2n dimensional phase space. Clearly, something is wrong. 
Another way to look at this problem is to realize that for 2PN integrability, if we enumerate the required n = 5 commuting constants by including the 2.5PN Hamiltonian, J 2 , J z , L 2 and L 2 + c 1 S 2 1 h/c 2 + c 2 S 2 2 h/c 2 , the latter two quantities will coincide in the PN limit 1/c → 0, thereby leaving us with only four independent quantities in exact mutual involution, whereas the requisite number is 5 (both for PN perturbative and exact integrability). This means that the 1/c → 0 limit of the requisite number n of quantities in PN mutual involution (required for PN integrability) may not be enough for exact integrability (in the 1/c → 0 limit), which is bizarre. The definition of PN integrability clearly needs a fix. To fix the definition, we add one more demand: the n independent phase-space functions (including the (q + 1/2)PN Hamiltonian) must be such that in the limit 1/c → 0, they reduce to n independent phase-space functions in exact mutual involution. As per this new definition, we can't count L 2 and L 2 + c 1 S 2 1 h/c 2 + c 2 S 2 2 h/c 2 simultaneously in our list of independent functions in mutual involution. This remedies the aforementioned problems with the definition of PN integrability. It's easy to see that the BBH system is still 2PN integrable per this revised definition of PN integrability since L 2 and S eff · L reduce to L 2 and S eff · L in the 1/c → 0 limit, which exactly mutually commute and are independent of each other.
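As a purely numerical aside to the appendix computations above, the complete and incomplete elliptic integrals of the third kind that enter Δϕ_L, Δϕ and Δϕ_1 can be evaluated by direct quadrature. The sketch below assumes the Legendre normal form Π(n, φ, k) = ∫₀^φ dθ/[(1 − n sin²θ)√(1 − k² sin²θ)] — the convention of the paper's reference [39] may order the arguments differently — and checks the symmetry Π(n, π, k) = 2Π(n, k) used in the Δϕ_L expression.

```python
import numpy as np
from scipy.integrate import quad

def ellip_pi_inc(n, phi, k):
    """Incomplete elliptic integral of the third kind (assumed Legendre form)."""
    integrand = lambda t: 1.0 / ((1.0 - n * np.sin(t)**2)
                                 * np.sqrt(1.0 - (k * np.sin(t))**2))
    val, _ = quad(integrand, 0.0, phi)
    return val

def ellip_pi(n, k):
    """Complete integral: Pi(n, k) = Pi(n, pi/2, k)."""
    return ellip_pi_inc(n, np.pi / 2.0, k)

# The integrand is symmetric about theta = pi/2, hence Pi(n, pi, k) = 2 Pi(n, k):
n, k = 0.3, 0.6
assert np.isclose(ellip_pi_inc(n, np.pi, k), 2.0 * ellip_pi(n, k), rtol=1e-8)
print(ellip_pi(n, k))
```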
Catalogue of exoplanets accessible in reflected starlight to the Nancy Grace Roman Space Telescope. A population study and prospects for phase-curve measurements Reflected starlight measurements will open a new path in the characterization of directly imaged exoplanets. However, we still lack a population study of known targets amenable to this technique. Here, we investigate which of the about 4300 exoplanets confirmed to date are accessible to the Roman Space Telescope's coronagraph (CGI) in reflected starlight at reference wavelengths $\lambda$=575, 730 and 825 nm. We carry out a population study and also address the prospects for phase-curve measurements. We used the NASA Exoplanet Archive as a reference for planet and star properties, and explored the impact of their uncertainties on the exoplanet's detectability by applying statistical arguments. We define a planet as Roman-accessible on the basis of the instrument inner and outer working angles and its minimum planet-to-star constrast (IWA, OWA, $C_{min}$). We adopt for these technical specifications three plausible configurations labeled as pessimistic, intermediate and optimistic. Our key outputs for each exoplanet are its probability of being Roman-accessible ($P_{access}$), the range of observable phase angles, the evolution of its equilibrium temperature, the number of days per orbit that it is accessible and its transit probability. In the optimistic scenario, we find 26 Roman-accessible exoplanets with $P_{access}$>25% and host stars brighter than $V$=7 mag. This population is biased towards planets more massive than Jupiter but also includes the super-Earths tau Cet e and f which orbit near their star's habitable zone. A total of 13 planets are part of multiplanet systems, 3 of them with known transiting companions, offering opportunities for contemporaneous characterization. The intermediate and pessimistic scenarios yield 10 and 3 Roman-accessible exoplanets, respectively. We find that inclination estimates (e.g. with astrometry) are key for refining the detectability prospects. Introduction The population of more than 4000 exoplanets confirmed to date shows a vast diversity of worlds, many of which have no analogue in the Solar System. Yet, a large number of them, in particular those that orbit far from their host stars, are still not amenable to atmospheric characterization with the available techniques. Upcoming direct imaging space telescopes observing at optical wavelengths will enable the investigation of cold and temperate exoplanets on long-period orbits by measuring the starlight that they reflect. The atmospheres of these planets remain largely unexplored but they may represent a key piece in the exoplanet diversity puzzle, helping trace the planets' history and evolution. The Nancy Grace Roman Space Telescope 1 (Spergel et al. 2013) (hereon, the Roman Telescope) will be the first spaceborne facility designed to directly image exoplanets in reflected e-mail: o.carriongonzalez@astro.physik.tu-berlin.de, oscar.carrion.gonzalez@gmail.com 1 Formerly the Wide Field Infrared Survey Telescope, WFIRST. starlight. Planned for launch in the mid 2020s, it will be equipped with an optical coronagraph and a set of filters for imaging and spectroscopy for technology demonstration (Akeson et al. 2019;Mennesson et al. 2020). This instrument will be able to characterize far-out non-transiting exoplanets, most of them presumably discovered in radial velocity (RV) searches. 
For long-period planets, reflected starlight measurements will provide insight into lower atmospheric layers than the layers probed during transit, which are masked by refraction (García Muñoz et al. 2012;Misra et al. 2014). Probing deep down in the atmosphere will be particularly relevant in the search for biosignatures (Rauer et al. 2011), which is a main goal of future direct-imaging missions targeting Earth-like exoplanets, such as LUVOIR (Bolcar et al. 2016) or HabEx (Mennesson et al. 2016). The question of which exoplanets will be observable by the Roman Telescope and next-generation direct imaging space telescopes is timely. Answering it will provide technical context for future designs, will motivate new and follow-up RV ans astrometric measurements, and will encourage modelers to build tools with which to interpret the prospective spectra. Understanding Article number, page 1 of 34 arXiv:2104.04296v1 [astro-ph.EP] 9 Apr 2021 this population of exoplanets will help plan the observations and select the most interesting targets. Several works have addressed the possible science outcome of direct-imaging missions and discussed potential criteria to define observation strategies (e.g. Traub et al. 2014;Brown 2015;Greco & Burrows 2015;Kane et al. 2018;Lacy & Burrows 2020;Stark et al. 2020). For instance, Traub et al. (2014) studied the detection yield of different coronagraph architectures proposed for the WFIRST-AFTA mission based on a population of over 400 confirmed RV exoplanets, assuming for them circular orbits and skyprojected orbital inclinations i=60 • . Depending on the specific coronagraph architecture, their predictions resulted in detection yields between 0 and 31 exoplanets. Brown (2015) analysed also over 400 RV exoplanets lacking an inclination determination and tried to infer this value from simulated direct-imaging measurements to constrain the planets' true masses. That study concluded that the uncertainties in the orbital parameters may prevent an accurate estimate of i. Kane et al. (2018) computed the maximum angular separation between planet and star (∆θ max ) for a subset of 300 RV exoplanets. That work identified those planets with the largest ∆θ max and estimated their orbital position and uncertainty as of 2025-01-01. For exoplanets with incomplete orbital information, Kane et al. (2018) assumed inclination i = 90 • , eccentricity e = 0 or argument of periastron ω = 90 • when the corresponding parameter was missing. However, they did not consider other factors affecting the detectability such as the planet-to-star contrast ratio (F p /F ). Greco & Burrows (2015) studied how F p /F changes with the orbital configuration of an exoplanet and its position on the orbit, and found that the contrast is indeed a major limitation for the detectability of direct-imaging exoplanets in reflected starlight. Focusing on thermal emission rather than reflected starlight, and with the aim of specifying possible targets for the Roman Telescope, Lacy & Burrows (2020) provided a list of 14 known self-luminous planets and brown-dwarf companions that might be observable in the optical wavelength range. These objects will have larger contrasts than mature planets at the same orbital distance. Although their study discusses the prospects to observe a reflected-light component in the spectra of such objects, their masses, temperatures and orbital distances in practice limit the eventual observations of these targets to primarily thermal emission. 
Our first goal in this work is to determine which of the currently confirmed exoplanets could be observable in reflected starlight by the Roman Telescope. For those planets whose orbital solution is not completely known, we compute the likelihood of the exoplanet to be accessible based on a statistical analysis rather than assuming fixed values for the unconstrained parameters. Our second goal is to understand the main properties of the population of known exoplanets that will be potentially detectable with the Roman Telescope. We compare this subset to the whole population of confirmed exoplanets as well as to those that have been observed in transit. This way we outline how direct-imaging space missions will contribute to completing the big picture of exoplanet diversity. In addition, we explore the possibility of measuring the phase curve of these exoplanets. To that end, we compute the planet-star-observer phase angles (α) that would be observable and the corresponding uncertainties for each planet. Optical phase-curve observations have proven valuable to constrain the atmospheric properties of Solar System planets (e.g. Arking & Potter 1968;Mallama et al. 2006;García Muñoz et al. 2014;Dyudina et al. 2016;Mayorga et al. 2016) and their energy bud-get (e.g. Pollack et al. 1986;Li et al. 2018). Optical phase curves have also been used to investigate the atmospheres of transiting exoplanets and infer their thermal properties and the presence of clouds (e.g. Demory et al. 2013;Angerhausen et al. 2015;Esteves et al. 2015;García Muñoz & Isaak 2015;Hu et al. 2015). According to recent theoretical investigations (Nayak et al. 2017;Damiano et al. 2020), observing at multiple phases will help better characterize directly-imaged exoplanets in reflected starlight. Remarkably, no previous work has addressed the feasibility and limitations of such optical phase-curve measurements for the confirmed exoplanets, which is essential to prioritise the best targets for atmospheric characterization. Finally, we discuss the benefits of constraining the orbital inclination by means of astrometric measurements or dynamical stability studies. We do so by comparing, for a selection of exoplanets that have estimates of i available, the detectability prospects if i is assumed constrained or unconstrained. Future data releases from the Gaia mission (Perryman et al. 2001;Gaia Collaboration 2016) and ensuing enhanced astrometry will strengthen these synergies. The paper is structured as follows. In Sect. 2 we describe the general conditions under which an exoplanet would be accessible. Section 3 contains the definition of the orbital geometry and the parameters determining the position and brightness of an exoplanet. In Sect. 4 we outline the dataset of planet and star properties used in our study and the assumptions that we adopted. We present our results in Sect. 5 and discuss more thoroughly in Sect. 6 the observational prospects for a selection of particularly interesting targets, as well as the implications for their atmospheric characterization. Section 7 contains the summary and conclusions. Direct imaging of exoplanets. Technical requirements The technique of direct imaging applied to exoplanets relies on suppressing the light from their host stars with optical devices such as coronagraphs or starshades. In this way, the faint planetary point source can be distinguished from the stellar glare. As the star is masked, a certain region around it is also masked. 
This region is defined by the inner working angle (IWA), and prevents the detection of planets at smaller star-planet angular separations. Coronagraphs also have an outer working angle (OWA) that sets an outer limit to the observable region. Another factor that affects the detectability of exoplanets is the minimum contrast (C min ) of the instrument. The planet needs to be bright enough to be distinguished from background noise. The usual way to quantify the planet brightness is through the contrast ratio between the flux from the planet and that from the star at a certain wavelength λ and observing condition, given by F_p/F_⋆ = A_g Φ(α) (R_p/r)^2, where R p is the planet radius, r is the star-planet distance at the orbital position being considered and α is the corresponding phase angle. A g is the exoplanet's geometrical albedo and Φ is its normalized scattering phase law. Both A g and Φ depend on the properties of the planetary atmosphere. These properties are discussed in more detail in Sect. 3. From this perspective, the limitations set by the IWA, OWA and C min shape the population of exoplanets that can be directly imaged. For instance, hot and ultra-hot short-period planets orbit too close to their host star and thus inside the IWA of any realistic coronagraph, which means that they are undetectable. In turn, exoplanets on long-period orbits and inclinations close to face-on may fall outside the OWA during their whole orbit, which prevents them from being observed. In addition, the planet-to-star contrast decreases as the planet-star distance increases and hence observing planets in reflected starlight will become progressively difficult for the longer-period ones. This is particularly important for small exoplanets, as the amount of photons reflected by them scales with the object's cross section. In this work, we consider as a basis the mission design of the Roman Telescope as envisioned in Spergel et al. (2015), with a telescope diameter of D=2.4 m. It will be equipped with a Coronagraph Instrument (CGI) including an optical hybrid Lyot coronagraph and a shaped pupil coronagraph (Trauger et al. 2016), as a technology demonstrator for future direct-imaging missions targeting Earth-like planets. The original design aimed at a minimum planet-to-star contrast ratio C min on the order of 10 −9 after post-processing (Spergel et al. 2015; Douglas et al. 2018). More up-to-date expectations according to the Nancy Grace Roman Space Telescope on the IPAC (Roman-IPAC) website 2 aim for C min of about 2-3×10 −9 at a moderate signal-to-noise ratio S/N=5. At the time of writing, only one spectroscopy filter, centred at 730 nm, and two imaging filters, centred at 575 and 825 nm, are planned for full commissioning. However, other filters which are not officially supported will fly with the coronagraph and might be commissioned for science operations if the 3-month technology demonstration phase is successful and a potential science phase is funded (Akeson et al. 2019). The three currently official observing modes according to the Roman-IPAC website are: Imaging Mode N (IWA=3 λ/D, OWA=9.7 λ/D, C min =2.94 × 10 −9 ), Spectroscopy Mode (IWA=3 λ/D, OWA=9.1 λ/D, C min =2.2×10 −9 ) and Imaging Mode W (IWA=5.9 λ/D, OWA=20.1 λ/D, C min =1.95 × 10 −9 ). The latter mode will be mainly devoted to debris disc observations (Akeson et al. 2019).
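To make these working angles concrete, values quoted in units of λ/D can be converted into angles on the sky. The short sketch below (illustrative only) does this for the 575 nm filter and the D=2.4 m aperture mentioned above; the printed numbers are approximate.

```python
import numpy as np

RAD_TO_MAS = 180.0 / np.pi * 3600.0 * 1000.0   # radians -> milliarcseconds

def working_angle_mas(n_lambda_over_d, wavelength_m, diameter_m=2.4):
    """Convert a working angle expressed in lambda/D units to milliarcseconds."""
    return n_lambda_over_d * (wavelength_m / diameter_m) * RAD_TO_MAS

lam = 575e-9  # m
for n in (3.0, 9.0):  # e.g. an IWA of 3 lambda/D and an OWA of 9 lambda/D
    print(f"{n} lambda/D at 575 nm: {working_angle_mas(n, lam):.0f} mas")
# ~148 mas and ~444 mas; at 730 and 825 nm the same lambda/D limits move outwards.
```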
As these figures and the C min requirement will likely evolve as the mission design progresses, in this work we will adopt three possible configurations of IWA, OWA and C min for the exoplanet observing modes (Table 1). We define a pessimistic scenario with: IWA=4 λ/D, OWA=8 λ/D, C min =5 × 10 −9 ; an intermediate scenario with IWA=3.5 λ/D, OWA=8.5 λ/D, C min =3 × 10 −9 ; and an optimistic scenario with: IWA=3 λ/D, OWA=9 λ/D, C min =1 × 10 −9 . These are not officially-bounded scenarios and different performances of the instrument (e.g. worse than our pessimistic scenario) cannot be ruled out. However, the cases proposed herein are representative of a plausible range of performances within the CGI capabilities considered realistic at this point. Table 2 summarizes the available CGI filters and corresponding IWA and OWA for the optimistic scenario. Unless noted otherwise, we assume as a reference in this work the imaging filter centred at 575 nm. We acknowledge that additional factors will limit the detectability of exoplanets by the Roman Telescope. For instance, the most recent update on the Roman-IPAC website (14.01.2021) states a CGI host star requirement of V ≤5 mag but also notes that stars with V = 6 − 7 could potentially be targeted. The performance of the instrument on such fainter stars is still to be determined after the technology demonstration phase. The solar or anti-solar telescope pointing at the time of the observation may also affect any proposed target list (e.g. Brown 2015), although Table 1. Plausible configurations of CGI exoplanet observing modes that will be considered in this work. These scenarios are not officially bounded but are within the range of realistic CGI performances according to current predictions. Scenario C min IWA OWA Pessimistic 5 × 10 −9 4 λ/D 8 λ/D Intermediate 3 × 10 −9 3.5 λ/D 8.5 λ/D Optimistic 1 × 10 −9 3 λ/D 9 λ/D zodiacal light will not be as determinant as in future instruments with 10-100 times more contrast sensitivity. This effect, however, will depend on the final launch date and mission schedule. Exo-zodiacal dust may also prevent the detection of certain targets but this noise source will have to be analysed on a oneby-one basis through follow-up observations of each planetary system and will not be considered here. For the sake of generality, we adopt the IWA, OWA and C min at λ=575 nm as our main detectability criteria. For those exoplanets meeting these criteria, we coin the term Romanaccessible. Given that our current focus is on the geometrical constraints for exoplanet detectability, we leave for future work the computation of the S/N that could be achieved for each Roman-accessible planet or the required integration times. Theoretical setting: planet detectability along the orbit In this section, we lay out the equations for the trajectory of a planet in the three-dimensional space and the evolution of ∆θ, α and F p /F with time. We base the description of the planet orbit on the book chapter by Hatzes (2016). Figure 1 sketches the geometry and main elements of the orbit and is based on Fig. 1.36 in that chapter, with additional information specific to the reference axes. Article number, page 3 of 34 A&A proofs: manuscript no. aanda For a general elliptic orbit, the distance between planet and star at each orbital position is given by: Here, e is the eccentricity, a is the semi-major axis and f is the true anomaly. A more thorough description of the orbital equations and parameters can be found in Appendix A. 
The orbit can be given in a three-dimensional space with the host star at the origin, the X and Y axes defining the plane of the sky and Z oriented away from the observer. The three coordinates of the planet's position vector r p are: where i is the orbital inclination and ω p is the planet's argument of periastron. In this work, the longitude of ascending node is assumed Ω = 0 without loss of generality. Angular separation The sky-projected distance between planet and star is given by:, If the stellar system is located at a distance d from the observer, the apparent angular separation is: Observed phase angles The phase angle α is the planetocentric angle between the directions to the star and to the observer (see Fig. 1). It can be computed at each orbital position from the dot product of the reversed planet's position vector (−r p ) and a unit vector in the direction of the observer (−k, as d r). With the components of r p defined in Eq. (3): Scattering and planet-to-star contrast To compute the brightness of the planet at each orbital position, we substitute the expressions given above for r and α into Eq. (1). We assume for the planet a Lambertian scattering phase law: and a geometrical albedo A g =0.3. Both A g and Φ(α) are assumed to be wavelength-independent and to represent the planet's reflecting properties over the operational spectral range of the Roman Telescope. Our assumed albedo provides a reasonable representation of the outer planets in the Solar System (Karkoschka 1994(Karkoschka , 1998. Other works investigating the prospects for reflected-starlight measurements of exoplanets have also assumed or predicted values of A g between 0.3 and 0.5 for Neptune and Jupiter analogues (e.g Cahoy et al. 2009Cahoy et al. , 2010Traub et al. 2014;Greco & Burrows 2015). Larger values of A g will potentially increase the number of exoplanets exceeding the C min of the instrument, and vice versa. The Lambertian scattering phase law is a simple yet pragmatic approximation to the scattering of planetary atmospheres. It has been frequently applied in studies planning the science outcome of reflected-starlight observations of exoplanets (e.g. Stark et al. 2014;Guimond & Cowan 2018). At small phase angles, the Lambertian function yields brighter values than other models such as isotropic or Rayleigh-like scattering. The results under a Lambertian assumption may differ slightly from those obtained with other phase laws. Nevertheless, in reality the scattering properties of a planet will depend on the specifics of its atmosphere, which will be unknown a priori. Time dependence of the orbital position The relation between the true anomaly and time (t) can be derived from Kepler's equation (Appendix A). For an exoplanet with orbital period P and time of periastron passage t p : This equation combined with Eqs. (2), (5) and (6) yields, respectively, the planet-star distance, angular separation and phase angle at a given time. Leaving aside Ω and t p , which are not important here (Appendix A), the planet's orbit is specified by 5 parameters, namely: a, i, ω p , e and P. Building a complete set of confirmed exoplanets We downloaded the complete set of confirmed exoplanets from the NASA Exoplanet Archive 3 (Akeson et al. 2013), that we used as our main source of known planets and corresponding planet-star properties. As of 16 th of September (2020), it contains 4276 confirmed planets. 
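The relations of this section can be combined into a short numerical sketch, given below for illustration only. It adopts a common sign convention for the orbit orientation (Ω = 0, with cos α = sin(ω_p + f) sin i), the Lambertian phase law and A_g = 0.3 described above, and placeholder planet parameters; the exact axis conventions of Fig. 1 may differ by sign choices that do not affect Δθ or the range of α. The function name orbit_track is hypothetical, not from the paper.

```python
import numpy as np

def orbit_track(a_au, e, inc, omega_p, dist_pc, Rp_Rjup, Ag=0.3, n=360):
    """Star-planet distance, angular separation, phase angle and reflected-light
    contrast along one orbit, sampled evenly in true anomaly f (radians)."""
    f = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = a_au * (1.0 - e**2) / (1.0 + e * np.cos(f))        # star-planet distance [au]
    X = r * np.cos(omega_p + f)                            # sky plane
    Y = r * np.cos(inc) * np.sin(omega_p + f)              # sky plane
    Z = r * np.sin(inc) * np.sin(omega_p + f)              # along the line of sight
    sep_arcsec = np.sqrt(X**2 + Y**2) / dist_pc            # au / pc -> arcsec
    alpha = np.arccos(np.clip(Z / r, -1.0, 1.0))           # phase angle
    lambert = (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi
    Rp_au = Rp_Rjup * 71492e3 / 1.495978707e11             # Jupiter radii -> au
    contrast = Ag * lambert * (Rp_au / r)**2
    return f, sep_arcsec, alpha, contrast

# Placeholder Jupiter-sized planet on an eccentric orbit around a star at 10 pc:
f, sep, alpha, fp_fs = orbit_track(a_au=3.0, e=0.3, inc=np.radians(60.0),
                                   omega_p=np.radians(90.0), dist_pc=10.0,
                                   Rp_Rjup=1.0)
print(f"max separation {sep.max():.3f} arcsec, max contrast {fp_fs.max():.2e}")
```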
For specific targets, complementary information was obtained from the original references, from correspondence with the paper authors or from other resources such as the Extrasolar Planets Encyclopaedia (Schneider et al. 2011). Hereon, we refer to this compilation mainly based on the NASA Exoplanet Archive as the input catalogue, shown in Table 3. Completing missing orbital information Not all of the Keplerian elements are known or listed in the input catalogue for each of the confirmed exoplanets. If any orbital parameter is missing, we need to make additional assumptions in order to compute the orbital solution. For 246 exoplanets, a is missing but P as well as the masses of the star (M ) and the planet (M p ) are available. For 124 of them, P is missing but a, M and M p are available. In such cases, we compute the missing value by means of Kepler's third law. Still, there is a significant number of exoplanets (2513) with no information on M p or M p sin i. For these, we approximate M + M p ≈M , which results in a negligible underestimation of a for planetary-mass objects (Stevens & Gaudi 2013). There are 119 exoplanets with no available information on at least two of the three critical parameters in Kepler's third law (M , P, a), making it impossible to include them in our study. When the values of the orbital inclination or the argument of periastron are not available in the NASA Exoplanet Archive, we assigned them random values assuming that the possible orbital orientations are isotropically distributed with respect to the observer. We therefore assume cos i and ω p to be distributed uniformly over the intervals [−1,1] and [0, 2π], respectively. A note of caution about ω p There is no homogeneous convention in the literature to report the argument of periastron. This has been noted previously (e.g. Brown 2015; Xuan & Wyatt 2020) but stands out as a particularly relevant issue for our work and for the direct imaging of RV planets. In some cases the reported ω refers to the argument of periastron of the planet (ω p ) as it orbits around the system's barycenter, while in others it refers to the argument of periastron of the star (ω ). There is a shift of 180 • between ω p and ω (ω p = ω + 180 • ) (Perryman 2011). In addition, the assumed location of the observer with respect to the +Z axis and the definition of the origin for the argument of periastron may also introduce additional 180 • −shifts in ω. The lack of a homogeneous convention and the fact that it is not always stated how the reported ω is defined potentially complicate a systematic analysis as proposed in this work. We verified that both the NASA Exoplanet Archive and the Extrasolar Planets Encyclopaedia quote, for each exoplanet, the ω given in the original reference without assessing the actual definitions used in them. 4 The value of ω has no impact on the range of angular separations over the orbit (see Eq. 5) but it does affect the position of an exoplanet at a given time (Eq. 3), its phase angle (Eq. 6) and therefore the value of F p /F . ω will also have an impact on the probability that a planet will transit its host star (see Sect. 4.3). As the design of direct-imaging missions and the corresponding target selection progress, it would be desirable to have clearly defined conventions for all the reported orbital parameters. We therefore urge efforts towards a standardisation of the data available in the exoplanet catalogues and towards the compilation of self-consistent catalogues (e.g. Hollis et al. 
2012) which are updated with new discoveries. We discuss in Appendix B how mistakenly using the value of ω instead of ω p affects the detectability of exoplanets and the prospects for measuring their optical phase curves. In this work, we will generally assume that the ω reported by the NASA Exoplanet Archive corresponds to the argument of periastron of the star, which is the prevailing convention for RV (Perryman 2011;Hatzes 2016). For all the exoplanets that we find to be Roman-accessible (see Sect. 5), we checked the corresponding reference papers or contacted the authors to confirm the values of ω as quoted in Table 3. Extending this case-by-case inspection to the 4276 confirmed exoplanets is out of the scope of this paper. Eccentricity distribution For those exoplanets without a measurement of eccentricity, we draw it from a uniform distribution in e ∈ [0, 1). This is a simplification to the reality, which suggests that short-period exoplanets tend to have small eccentricities while long-period ones show broader e distributions (Winn & Fabrycky 2015). However, empirically-derived distributions of e might be affected by observational biases, especially for long-period planets, whose orbits are more challenging to characterize and for which the discovery numbers are relatively low. For reference, uniform distributions of e have been used in previous works that analysed the detection yield of direct-imaging missions (e.g. Stark et al. 2014). We note however that this is not the only approach considered in the literature. For instance, Steffen et al. (2010) used both Rayleigh and exponential probability distributions to describe the eccentricity, and Wang & Ford (2011) used a distribution with both uniform and exponential components. Kipping (2013) described the observed dispersion of e with two Beta probability distributions, for short-and long-period planets (P<382.3 and >382.3 days, respectively). In Sect. 5.1 we compare the e distributions of exoplanets with short and long periods, as described by Kipping (2013), with that of the Roman-accessible exoplanets. Future studies with access to a larger sample of long-period planets will result in refined representations of the e distribution. Planet radius The value of R p can only be measured for transiting exoplanets. It may be estimated from thermal emission measurements, as for instance with young, self-luminous exoplanets, but these estimates are by definition model dependent (e.g. Mawet et al. 2019;Lacy & Burrows 2020). Hence, the population of exoplanets suitable for direct imaging in reflected starlight will generally lack an estimate of R p . To assign a value of R p to the planets in our input catalogue, we use the mass-density relationship from Hatzes & Rauer (2015) for giant planets, defined in term of Jupiter's mass Article number, page 5 of 34 A&A proofs: manuscript no. aanda (M J ) as those with 0.3M J < M p < 65M J : log 10 (ρ) [g cm −3 ] = (1.15 ± 0.03) log 10 (M p /M J ) − (0.11 ± 0.03) Eq. (9) is therefore valid for planets more massive than Saturn, approximately. A priori, we cannot rule out that lower-mass exoplanets will be detectable (Robinson et al. 2016) (see also Sect. 5.1). Thus, for planets less massive than 120 Earth masses (M ⊕ ), we use the mass-radius relationships in Otegi et al. (2020). They distinguish between rocky and volatile-rich exoplanets, and obtain two different mass-radius relationships depending on the planet density (ρ): Although Otegi et al. 
(2020) note that the M p -R p statistics suggest a lower limit of 5M ⊕ for volatile-rich planets, we extend the mass-radius relationship to 3.1M ⊕ in order to achieve a continuous coverage in M p . As a result, some exoplanets with ρ > 3.3 g cm −3 (those with 3.1M ⊕ <M p <5M ⊕ ) are modeled in our case with Eq. (10b). In summary, for planets with M p <3.1M ⊕ we use the rocky M p -R p relationship in Eq. (10a), for 3.1M ⊕ <M p <0.36M J we use the volatile-rich relationship in Eq. (10b) and for M p >0.36M J we use the giant-planet relationship in Eq. (9). In all cases, we account for the quoted uncertainties to estimate R p (see Sect. 4.5). Figure 2 shows these relationships together with all of the confirmed exoplanets with measurements of both M p and ρ in the NASA Exoplanet Archive. For reference, we added the Solar System planets to the diagram. We find an overall good fit to the observed population of both Solar System and extrasolar planets.
Transit requirements
Exoplanets that are suitable for both direct imaging and transit spectroscopy will become prime targets for atmospheric characterization (Carrión-González et al. 2020; Stark et al. 2020). Given their special interest, we computed the transit probability (P tr ) of the Roman-accessible exoplanets (Section 5). Based on Eq. (4), the eventual eclipses (transits and occultations) will take place when the planet-star distance in the sky plane √(X² + Y²) is a local minimum. Following Winn (2010), we consider that transits happen at inferior conjunctions (that is, when X=0 and the planet is in front of the star). With our viewing geometry ( Fig. 1) this means: The impact parameter is defined as the distance between the centres of the planet and the star, projected onto the plane of the sky and normalized to the stellar radius. Substituting Eq. (11) in Eq. (4), the impact parameter b at transit is given by: The condition for a full transit to be observed is therefore b ≤ (R⋆ − R p )/R⋆. We use R⋆ − R p to exclude grazing transits from the analysis because these only provide a lower limit for R p . For those systems without a R⋆ determination in the input catalogue, we extracted its value from the Planetary Systems database in the NASA Exoplanet Archive. Preferentially, we used the value from the source referencing Gaia DR2 (Gaia Collaboration 2018) or, if unavailable, from the one referencing the Revised TESS Input Catalog (Stassun et al. 2019). If R⋆ was not available in any of these sources either, the transit probability could not be computed for that system. The mass of a planet discovered in RV cannot be unlimitedly large, and this sets a limit on the range of physically realistic inclinations for a measured M p sin i. In this respect, Stevens & Gaudi (2013) note that the prior distribution of possible M p affects the prior distribution of i, thereby affecting the calculated transit probabilities. For generality, we will not consider here any prior information on the M p distribution.
Planetary equilibrium temperature
The equilibrium temperature of a planet T eq provides an indication of its possible atmospheric structure and the potential conditions for habitability. For each orbital position r, we computed T eq by assuming a Bond albedo (A B ) of 0.45 and applying T eq = T eff √(R⋆/(2r)) [f(1 − A B )]^(1/4), where T eff and R⋆ are the stellar effective temperature and radius, and the factor f accounts for the heat redistribution of the planet. We assume f = 1, consistent with rapid rotators (Traub & Oppenheimer 2010). T eq bears no impact on the detectability criteria in our methodology.
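A minimal numerical sketch of this equilibrium-temperature estimate is given below. The stellar and orbital numbers are placeholders, and the expression is the standard relation consistent with the description above (f = 1 for rapid rotators, A_B = 0.45).

```python
import numpy as np

R_SUN_AU = 0.00465047  # solar radius in au

def t_eq(r_au, t_eff, r_star_rsun, a_bond=0.45, f=1.0):
    """Equilibrium temperature at star-planet distance r_au, for a star of
    effective temperature t_eff (K) and radius r_star_rsun (solar radii).
    f = 1 corresponds to rapid rotators (full heat redistribution)."""
    return (t_eff * np.sqrt(r_star_rsun * R_SUN_AU / (2.0 * r_au))
            * (f * (1.0 - a_bond))**0.25)

# Placeholder Sun-like star and an eccentric orbit with a = 2 au, e = 0.3:
a, e = 2.0, 0.3
for label, r in (("periastron", a * (1 - e)), ("apastron", a * (1 + e))):
    print(label, round(t_eq(r, t_eff=5772.0, r_star_rsun=1.0), 1), "K")
```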
Given its importance for atmospheric modeling, however, we compute T eq throughout the planet's orbit. In future Article number, page 6 of 34 Ó. Carrión-González et al.: Catalogue of exoplanets accessible in reflected starlight to the Nancy Grace Roman Space Telescope Notes. In our computations we have considered the uncertainties of all these parameters as explained in Sect. 4.5. work, it could be used to investigate the temporal variability of the atmosphere and to estimate the emitted radiation from the planet. Statistical analysis of detectability For a given orbit specified by its Keplerian parameters, we assess if the detectability criteria for IWA, OWA and C min described in Sect. 2 are met at any orbital position. We repeat this for each of the pessimistic, intermediate and optimistic scenarios described in Table 1. To describe the orbit, we divide it into 360 points with a step in the true anomaly ∆ f =1 • , which is related to time through Eq. (8). We checked a posteriori for a few selected cases that the adopted ∆ f step affects negligibly our findings. The planetary and orbital parameters used in this work are summarized in Table 4. For each parameter from Table 4, we considered the upper and lower uncertainties quoted in the NASA Exoplanet Archive. We also considered the uncertainties in the coefficients of the mass-radius relationships in Eqs. (9) and (10). All these uncertainties are taken into account when producing random realizations of the planet orbits and corresponding planet-to-star contrasts. For each planet, we accounted for all the uncertainties simultaneously and computed 10000 independent realizations of both the orbital and non-orbital parameters. When the value of a specific parameter is not available in the NASA Archive but instead must be estimated through e.g. Kepler's third law or the M p -R p relationships of Eqs. (9) and (10), our treatment ensures that the uncertainties are properly propagated. We use this bootstrap-like method to derive statistical conclusions (Press et al. 2007) on properties of interest such as ∆θ, α and F p /F . Some of the parameters in Table 4 are indeed correlated through the specific techniques with which they were originally estimated and hence their uncertainties are not independent. We also note that the uncertainties in the NASA Archive are extracted from references with no homogeneous criteria in the statistical treatment of the data. A re-evaluation of the orbital parameters to obtain their joint confidence intervals is beyond the scope of this paper, and for simplicity we sample each of them independently from uniform probability distributions between the quoted uncertainty limits. We consider an exoplanet to be Roman-accessible if the detectability criteria defined by the IWA, OWA and C min are met over at least one point in the numerically discretised orbit of at least one of the 10000 independent orbital realizations. The probability of a planet to be Roman-accessible (P access ) is given by the number of orbital realizations in which the exoplanet is accessible, compared to the total of 10000 realizations. The transit probability (P tr ) is computed as the fraction of orbital realizations in which the condition in Eq. (13) is met. For a particular orbit, the amount of days that the planet remains observable (t obs ) can be computed with Eq. (8) by time-integration along the orbit. We compute this for each accessible orbital realization to derive a statistical distribution of t obs . 
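This per-orbit bookkeeping can be sketched as follows. The snippet is an illustrative reimplementation, not the authors' code: orbit_track is the hypothetical helper from the Sect. 3 sketch above, the thresholds are the optimistic-scenario values expressed approximately in arcseconds, and the time spent in each true-anomaly bin is obtained through Kepler's equation, as in Eq. (8).

```python
import numpy as np

C_MIN, IWA_AS, OWA_AS = 1e-9, 0.148, 0.444   # optimistic scenario at 575 nm (approx.)

def time_weights(f, e):
    """Fraction of the orbital period spent in each true-anomaly bin, via the
    eccentric anomaly E and the mean anomaly M = E - e sin E."""
    E = 2.0 * np.arctan2(np.sqrt(1.0 - e) * np.sin(f / 2.0),
                         np.sqrt(1.0 + e) * np.cos(f / 2.0)) % (2.0 * np.pi)
    M = E - e * np.sin(E)
    return np.diff(np.append(M, M[0] + 2.0 * np.pi)) / (2.0 * np.pi)

rng = np.random.default_rng(1)
n_real, t_obs, alpha_ranges = 10000, [], []
for _ in range(n_real):
    inc = np.arccos(rng.uniform(-1.0, 1.0))     # isotropic orbit orientations
    omega_p = rng.uniform(0.0, 2.0 * np.pi)     # unknown argument of periastron
    e = rng.uniform(0.0, 1.0)                   # unknown eccentricity
    f, sep, alpha, fp_fs = orbit_track(3.0, e, inc, omega_p,
                                       dist_pc=10.0, Rp_Rjup=1.0)
    ok = (sep >= IWA_AS) & (sep <= OWA_AS) & (fp_fs >= C_MIN)
    if ok.any():
        t_obs.append(time_weights(f, e)[ok].sum() * 1898.0)  # placeholder P [days]
        alpha_ranges.append((np.degrees(alpha[ok]).min(), np.degrees(alpha[ok]).max()))

p_access = 100.0 * len(t_obs) / n_real
print(f"P_access = {p_access:.1f}%, median t_obs = {np.median(t_obs):.0f} d, "
      f"16/84 percentiles = {np.percentile(t_obs, [16, 84])}")
```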
We infer the median value of this distribution and upper and lower uncertainties corresponding to the percentiles 16% and 84%, equivalent to ±1σ for Gaussian errors. In addition, for each accessible orbit we compute the interval of observable phase angles (α obs ) with Eq. (6). We will refer to the minimum and maximum phase angles (α obs(min) , α obs(max) ), together with the corresponding ±1σ uncertainties. We emphasize that the distributions of t obs and α obs are based only on the accessible orbital realizations. This results in intrinsically biased statistics, since the null detections are not accounted for. However, we opted for these definitions to have metrics that describe specifically the accessible orbits given that, for instance, α obs is not defined in a non-accessible orbit. The corresponding P access quantifies to some extent the bias introduced in these metrics. For each planet in the input catalogue, this statistical method produces posterior distributions for each of the sampled parameters in Table 4. With this, we create an output catalogue (Table 5) with the resulting median values of each parameter and their corresponding uncertainties. The above definition of P access is however flawed because there are planets with very small associated values of this metric for which it is difficult to justify a future observational effort. In order to keep our findings useful for target prioritisation, in what follows we will only consider planets that in the optimistic CGI configuration have P access > 25% (Table 1). In addition, we restrict our analysis to targets orbiting stars brighter than V=7 mag, according to the updated CGI possible performances. These additional vetting criteria determine the population of planets studied in Sects. 5 and 6. For reference, the complete list of Roman-accessible exoplanets including those with P access < 25% or V > 7 mag is kept in the input and output catalogues (Tables 3 and 5).
Fig. 3. Semi-transparent dots: confirmed exoplanets for which we know d and can derive a as explained in Sect. 4.1. Solid stars: confirmed exoplanets that we find Roman-accessible in the optimistic CGI configuration, with P access > 25% and orbiting stars brighter than V=7 mag. Colour code indicates the corresponding discovery technique (that by which the planet was first identified), as detailed in the legend. "Others" refers to all other possible discovery techniques considered in the NASA Exoplanet Archive. HD 100546 b appears as the only Roman-accessible planet discovered in Imaging, although its existence is marked as controversial in the NASA Archive.
In the following, we focus on the exoplanets found to be Roman-accessible in the optimistic CGI configuration. We compare their properties to the complete set of confirmed exoplanets, as well as to those that have been observed in transit (Sect. 5.1). Afterwards, we describe their overall detectability conditions (P access , α obs , t obs , P tr ) as well as the main limiting factors (Sect. 5.2) in the different CGI scenarios from Table 1. Finally, we report the equilibrium temperatures computed for these planets and the variation of T eq along their orbit (Sect. 5.3).
Population analysis: the subset of direct-imaging exoplanets
We analysed all confirmed exoplanets as described in Sect. 4.5 and found that 26 of the total 4276 meet the criteria of angular separation and planet-to-star contrast for the optimistic CGI configuration, with the additional vetting criteria P access > 25% and V<7 mag.
The number of planets meeting these criteria in the intermediate and pessimistic scenarios drops to 10 and 3, respectively. Focusing on the optimistic scenario, we study below the main properties, as listed in our input catalogue (Table 3), of this subset of Roman-accessible objects. Figure 3 displays the semi-major axis and distance to the Earth of all confirmed exoplanets, showing how different discovery techniques are sensitive to different ranges of these parameters. The population of Roman-accessible exoplanets is composed of objects discovered in radial-velocity, with the exception of HD 100546 b which was discovered in imaging (Quanz et al. 2015). The existence of this protoplanet with R p = 6.9 +2.7 −2.9 R J is however controversial, as indicated in the NASA Archive. Although the transit method is the most fruitful technique so far in terms of the number of planets discovered (76% of the total), none of these planets is Roman-accessible. New transit missions with long baselines and focusing on nearby stars such as TESS or PLATO (Ricker et al. 2014; Rauer et al. 2014) are expected to yield additional transiting planets amenable to direct imaging (Stark et al. 2020). Other planets may be accessible in thermal emission to the Roman Telescope (Lacy & Burrows 2020). Computing the contribution of thermal emission for each confirmed exoplanet, which depends on the age of the system and the evolutionary models assumed, is out of the scope of this work.
Fig. 4. Eccentricity and orbital period for all confirmed exoplanets (grey dots), those that have been observed in transit (whether discovered by that method or not) (orange dots) and those that are Roman-accessible in the optimistic CGI configuration, with P access > 25% and V<7 mag (green dots). We only consider those planets for which e is known and P can be derived as explained in Sect. 4.1. The black line shows the limit between short- and long-period exoplanets (P=382.3 days) as defined in Kipping (2013). Top panel: the P distribution of all confirmed exoplanets (grey), those observed in transit (orange line) and those that are Roman-accessible (semi-transparent green). Right panel: normalized distribution of e, showing the relative frequency instead of the total count of planets; the same colour code applies. For reference, the eccentricity distributions of the short- and long-period subsets in Kipping (2013) are included (red and blue lines, respectively).
Long-period planets typically have larger eccentricities than short-period ones, and this has an impact on the median eccentricity of the ensemble of Roman-accessible planets. Figure 4 displays the statistics of orbital period and eccentricity (when it is reported in the NASA Archive). The top panel shows the total number of planets in different ranges of orbital periods. Correspondingly, the right panel shows the normalized distributions of e, such that the integral under the histogram is equal to one for the selected bin size. 5 The key information in the normalized distributions is their shape, enabling a more evident comparison of populations with different total counts. We find that the Roman Telescope will be able to detect a relatively large proportion of highly eccentric planets, with the median value of this distribution being e= 0.21 +0.33 −0.16 . In comparison, the total population of confirmed exoplanets with a measurement of e has a median eccentricity of e=0.10 +0.21 −0.10 and the subset of those that have been observed in transit (even if discovered by other methods), e=0.02 +0.17 −0.02 . The observed e distribution for the Roman-accessible exoplanets behaves similarly to the long-period planets defined by Kipping (2013). However, this remains a modest sample and therefore more long-period exoplanets need to be followed up to understand the biases existing in the observed e distributions.
5 … of planets is normalized to one. This implies that, for histogram bin sizes smaller than one, such as in the e histogram of Fig. 4 (with a bin size of 0.05), the value of the normalized distribution may be greater than one (as seen in the figure).
Fig. 6. Distribution of stellar metallicity and semi-major axis of the planet for all confirmed exoplanets (semi-transparent grey), those observed in transit (semi-transparent orange) and those Roman-accessible in the optimistic CGI configuration, with P access > 25% and V<7 mag (green).
Figure 5 shows that the statistics of known exoplanets are dominated by giant planets because they are generally easier to detect. This bias is however particularly noticeable in the Roman-accessible population. Given that most of these exoplanets lack an estimate of i and we only know their minimum mass (Table 3), some of these objects may actually be at the boundary between giant exoplanets and brown dwarfs. Interestingly, we also find that the Roman Telescope may be able to detect tau Cet e and f, both with minimum masses of 3.9 M ⊕ and thus in the super-Earth to mini-Neptune mass regime (see Sect. 5.2). The ongoing efforts to discover low-mass exoplanets around nearby stars (Pepe et al. 2021; Quirrenbach et al. 2016) as well as the future development of direct imaging missions with lower C min and smaller IWA will expectedly reduce this observational bias.
Fig. 7. … Over-plotted semi-transparent green bars with dotted hatch correspond to those exoplanets that we find Roman-accessible in the optimistic CGI configuration, with P access > 25% and V<7 mag. We note that these parameters are not available for all of the confirmed exoplanets in the NASA Exoplanet Archive. The spectral type is available for all of the 24 Roman-accessible-planet host stars; the stellar age, for 13 of them; and the metallicity, for 16. Fourth panel: count of Roman-accessible-planet host stars of different optical magnitudes in each CGI configuration. Green bars with dotted hatch correspond to the optimistic scenario. Semi-transparent yellow bars with '\' hatch correspond to the intermediate scenario. Red stars mark the three stars hosting Roman-accessible planets in the pessimistic scenario.
Host-star properties such as the spectral type or the mass may be of interest to test hypotheses on the formation and evolution of an exoplanet (Laughlin et al. 2004; Boss 2006). The spectral type also determines the chemistry of the star, which has an impact on the plausible structure and composition of its exoplanets (Santos et al. 2017). Furthermore, both the age of the star and its spectral type set constraints on the stellar activity, which affects the eventual exoplanetary atmospheres. Regarding the host stars of the Roman-accessible exoplanets, we find that the median value of their metallicity is [Fe/H] = 0.09 +0.20 −0.11 . This shows a mild but not significant bias towards super-solar metallicities (Fig.
6) compared to the total population of confirmed exoplanets, with Fe/H = 0.02 +0.16 −0.14 . The bias is consistent with the observed trend of giant planet hosts to be more metal-rich than low-mass-planet hosts (Santos & Buchhave 2018). The stars hosting Roman-accessible planets are currently dominated by G-type stars, similar to the total population of confirmed planet hosts (Fig. 7). In turn, this figure shows an underrepresentation of F, K and M stars for the Roman-accessible exoplanets in comparison to the complete population. We find that this lack of K and M stars in the Roman-accessible targets is mainly caused by the V<7 mag threshold. Indeed, if the condition on the stellar magnitude was omitted, we would obtain an overabundance of M-type stars hosting Roman-accessible targets (see Table 5). We also find that the stars hosting Roman-accessible planets show no clear bias to a particular stellar age, whereas in the total set of planet-hosting stars there is a clear bias favouring ages of 3 to 4 Gyr. The Roman-accessible planets in the youngest systems are HD 100546 b (0.005 Gyr), discovered in imaging, eps Eri b (0.5 Gyr) and HD 62509 b (0.980 Gyr), the latter two discovered by radial velocity. Figure 7 (bottom) shows similar M distributions in the direct-imaging subset and in the total population of host stars. The lack of low-mass stars is again due to the V<7 mag threshold that rules out M stars from the target list. However, we do not find any Roman-accessible exoplanet orbiting a star more massive than 2 M . This might be caused partly by the difficulties of searching for RV planets around early-type stars. In future work, we will compare these trends in stellar properties with those from self-consistently computed stellar catalogues such as SWEET-Cat (Santos et al. 2013). The above findings show that the population of Romanaccessible exoplanets does indeed differ from the general population of confirmed exoplanets or from those observed in transit. These differences are partly influenced by the sensitivity of different discovery techniques to reveal amenable targets. Hence, reflected-starlight measurements will enable the atmospheric characterization of exoplanets that are not accessible with other techniques. General detectability conditions Some key findings (P access , α obs , t obs ) on the detectability of the up to 26 Roman-accessible exoplanets with P access > 25% and V<7 mag are listed in Table 6 for all the CGI scenarios. For reference, we also add the corresponding findings at λ=730 and 825 nm, the effective wavelengths of the two other commissioned filters for the coronagraph. At these wavelengths, we assume an albedo of A g =0.3 and account for the modified IWA and OWA. The transit probability of these planets is listed in the output catalogue (Table 5). Figure 8 (left panel of each diagram) shows the tracks of contrast and angular separation of the random orbital realizations in our analysis. It also shows (right panel) the corresponding distributions of α obs for the optimistic CGI scenario, which indicate the observable phase angles that occur more often. As we have discretised the orbits evenly in the true anomaly (rather than in time), these distributions do not translate directly into time spent at any given interval of phase angles. At our reference wavelength λ=575 nm, the number of Roman-accessible exoplanets in the optimistic, intermediate and pessimistic CGI scenarios is 26, 10 and 3, respectively (Table 6). 
HD 219134 h, 47 UMa c and eps Eri b are the only planets that would be accessible in all three scenarios with P access >25%. Generally, P access decreases at longer wavelengths because the IWA increases with λ, masking a larger region around the host star. Particular cases like eps Eri b or HD 219134 h show an increase in P access at longer λ. These are planets that reach large angular separations and, at λ=575 nm, orbit partly outside the OWA of the coronagraph (Fig. 8). Hence, their P access increases at longer wavelengths because both the IWA and OWA move outwards. The transit probability of the Roman-accessible exoplanets is low in all cases (Table 5), with the maximum being P tr =2.29% for HD 62509 b. This super-Jupiter (M p sin i=2.3M J ) orbits the nearby (d=10.34 pc) K0 III giant Pollux. With an orbital period of 589.6 days, HD 62509 b may require observations spanning multiple years to confirm its eventual transits. However, improving the orbital characterization with RV measurements could constrain the time of inferior conjunction and reduce the baseline needed. This star was targeted for 27 days in TESS Sector 20, but its large optical magnitude (V=1.14) poses a problem with photometric saturation. If this planet was found to transit and also imaged (P access =73.84% in the optimistic CGI scenario), it would be a unique opportunity to characterize its atmosphere by combining both techniques. An astrometric determination of its inclination, which should be near 90 • for the planet to transit, would help refine its transit probability. In Fig. 8, those exoplanets with larger uncertainties in their orbital parameters (see Table 3) generally show larger scatter in their F p /F -∆θ tracks. Figure 8 also shows that planets in the sub-giant regime (i.e. those with M p < 0.36M J ) experience large increases of F p /F in a small number of realizations (see e.g. tau Cet e, HD 192310 c, tau Cet f in Fig. 8). This corresponds to orbital configurations with inclinations i ≈ 0 or 180 • that result in large values of M p and in turn R p (Eq. 10b). These unlikely configurations produce the outlying tracks in Fig. 8. Generally, phase angles both before and after quadrature (α<90 • and α>90 • , respectively) can be observed at λ=575 nm in the optimistic CGI configuration (Table 6 and Fig. 8). This will be important to better constrain some of the optical properties of the atmosphere that may be more sensitive to the scattering angles (Carrión-González et al. 2020;Damiano et al. 2020). The minimum value of α obs is in most cases not smaller than about 30 • . The main limitation to measure values of α closer to full phase is the IWA. In this sense, eps Eri b is an outlier that can only be detected at small phase angles in the observing mode that we are considering here (see Sect. 6.3). Correspondingly, the maximum α obs is not larger than 110 • for most of these exoplanets. Typically, at large phase angles, the planet is not bright enough and its contrast drops below the specified C min . Indeed, in the intermediate and pessimistic CGI scenarios, only phase angles smaller than quadrature are generally observed (Table 6). Therefore, both the IWA and C min are major factors limiting the windows of detectability. This is also the reason why, typically, both t obs and the range of α obs decrease at longer wavelengths (Table 6). We define the interval of observable phase angles as ∆α obs = α obs(max) − α obs(min) and compute the corresponding upper and lower uncertainties. 
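As a concrete illustration of this definition, the following minimal Python sketch (illustrative variable names, synthetic input; not code from the paper) computes ∆α obs per orbital realization and summarizes it as a median with 16th/84th-percentile bounds, which is how the asymmetric uncertainties quoted below can be read.

```python
import numpy as np

def delta_alpha_obs(alpha_obs_per_realization):
    """Span of observable phase angles (deg) per accepted orbital realization,
    summarized as median with upper/lower percentile uncertainties."""
    spans = np.array([a.max() - a.min()
                      for a in alpha_obs_per_realization if a.size > 0])
    med = np.median(spans)
    lo, hi = np.percentile(spans, [15.87, 84.13])
    return med, hi - med, med - lo   # e.g. quoted as 94 (+11, -27) deg

# Toy usage with three synthetic realizations of observable phase angles:
rng = np.random.default_rng(1)
fake_realizations = [rng.uniform(30.0, 120.0, size=50) for _ in range(3)]
print(delta_alpha_obs(fake_realizations))
```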
Table 7 shows the planets with the largest ∆α obs at our reference λ=575 nm, which a priori might become prime targets for phase-curve measurements in each CGI scenario. [Figure caption fragment: (Table 1). Regions in green are the windows of detectability in the optimistic CGI configuration at this wavelength and the green histograms in the right panels show the posterior distributions of α obs for this scenario. Panel label: HD 219134.] Figure 9 shows, for the optimistic CGI configuration, the computed ranges of ∆α obs for each exoplanet against the total time they are observable, t obs . This information is potentially relevant to find optimal targets for phase-curve measurements. For instance, HD 219134 h shows a large variation of α in the optimistic configuration (∆α obs =94 +11 −27 ) taking place in a detectability window of 2.5 years (t obs =917 +152 −159 days), the shortest value of t obs among the planets of Table 7. Furthermore, this planet has particularly large intervals of α obs in the intermediate and pessimistic scenarios (∆α obs =41 +7 −22 and 33 +4 −18 deg, respectively). Multiplanetary systems Among the optimistic 26 Roman-accessible exoplanets, 13 of them are part of stellar systems with other confirmed planetary companions. Table 8 lists these multiplanetary systems, with the number of exoplanets that they host as well as the number of them that are Roman-accessible in each CGI scenario. Three of these exoplanets are also among those with a larger ∆α obs in Table 7: HD 219134 h, 47 UMa c and HD 190360 b. We find that, in the optimistic CGI scenario, the systems 47 UMa and tau Cet have more than one Roman-accessible exoplanet. In the case of 47 UMa, planets b and c are accessible with P access =100%. We note that 47 UMa d also has a marginal P access =9.41% in this scenario (Table 5). The system tau Cet stands out because planets e and f (M p sin i ∼ 4M ⊕ ) are Roman-accessible (P access =87.75 and 26.74%, resp.). In Sect. 6.1 we discuss more thoroughly the prospects to observe tau Cet e and f. Table 8 also shows three systems for which a transiting, inner exoplanet is known to exist. This offers the possibility of studying both the outer planet in direct imaging and the inner planet with transmission spectroscopy. Such scenarios are potentially valuable to gain insight into the system as a whole, and the processes that may have led to the final arrangements. In the optimistic scenario, this is the case of 55 Cnc d, with the transiting ultra-short-period planet e, pi Men b, with a transiting super-Earth (planet c) and HD 219134 h, with two transiting super-Earths (b and c). These systems will be discussed in more detail in Sect. 6.2. Equilibrium temperatures of the Roman-accessible planets In order to facilitate future atmospheric modeling of the Roman-accessible exoplanets, we computed their T eq at each orbital position by means of Eq. (14). In our output catalogue (Table 5) we quote the range of T eq , and the corresponding uncertainties, computed for each planet in the 10000 orbital realizations (whether detectable or not). In addition, for some planets we report under T eq(obs) the range of equilibrium temperatures that correspond only to those orbital positions that are Roman-accessible (Table 9). This provides a first estimate of the possible variations that the planetary atmosphere might undergo during the time that it remains accessible.
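Eq. (14) is not written out here; as a rough sketch of how T eq can be tracked along an eccentric orbit, the snippet below uses the standard instantaneous equilibrium-temperature formula with an assumed Bond albedo, which may differ from the exact form and normalization of Eq. (14). Restricting the same evaluation to the orbital positions that are Roman-accessible gives the analogue of T eq(obs).

```python
import numpy as np

def t_eq_along_orbit(T_eff, R_star_au, a_au, e, A_bond=0.3, n=1000):
    """Equilibrium temperature versus true anomaly for an eccentric orbit.
    Standard instantaneous-equilibrium formula (an assumption here); the
    Bond albedo A_bond is a placeholder value."""
    f = np.linspace(0.0, 2.0 * np.pi, n)                 # true anomaly
    r = a_au * (1.0 - e**2) / (1.0 + e * np.cos(f))      # star-planet distance
    return T_eff * np.sqrt(R_star_au / (2.0 * r)) * (1.0 - A_bond) ** 0.25

# Example: a planet on an e = 0.3, a = 2 AU orbit around a Sun-like star.
R_sun_au = 0.00465
T_eq = t_eq_along_orbit(5770.0, R_sun_au, 2.0, 0.3)
print(T_eq.min(), T_eq.max())   # range of T_eq over one full orbit
```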
Figure 10 shows the evolution of T eq with time for the accessible orbits of those planets that have an estimate of e in the NASA Archive (all but HD 100546 b). Planets in eccentric orbits experience large changes of T eq(obs) and therefore are prime targets to search for atmospheric variability. On the other hand, this would complicate an eventual atmospheric characterization by multiple-phase observations. The planets with the largest changes in T eq(obs) (∆T eq(obs) ) for each CGI configuration are listed in Table 9. In the optimistic scenario, ups And d and pi Men b experience changes of about 30 K during the time that they remain accessible, which is about a year in both cases. HD 114613 b remains observable for about two years and we find that it undergoes a ∆T eq(obs) = 53 +9 −28 K. Both psi 1 Dra B b and HD 190360 b have a t obs of about four years and, in this time, they show variations in T eq of about 40 K. Such variations in T eq during the time that they are observable will likely trigger variability in the cloud coverage of their atmospheres (Sánchez-Lavega et al. 2004). These five planets have P access = 100% and hence they appear as suitable targets to search for atmospheric variability with the Roman Telescope. In more conservative CGI scenarios, however, the observable variability of T eq is significantly reduced. In these cases, only HD 190360 b in the intermediate CGI scenario shows a noteworthy ∆T eq(obs) (17 +13 −10 K). The rest of planets in the intermediate or pessimistic scenario have ∆T eq(obs) smaller than 10 K, which is likely unable to trigger atmospheric variability during the time that they are observable. Article number, page 13 of 34 A&A proofs: manuscript no. aanda Table 8. Multiplanetary systems that are Roman-accessible in each CGI configuration. We also quote under the "Technique" column the observing techniques with which the planets have been detected and the number of planets detected with each of these techniques. For each CGI configuration, only those exoplanets with P access >25% are shown. System Planets Techniques Roman-access. Figure 11 shows, for the optimistic CGI scenario, the median value of the computed T eq distributions against the median value of the M p resulting from our statistical exercise (Table 5). It shows that the population of exoplanets probed with the Roman Telescope will be remarkably different from the one that has been explored with previous techniques, with Jupiter and Saturn analogues amenable to characterization. On the other hand, analogues of Uranus and Neptune are still out of reach for the Roman Telescope. We note that, although some planets in this range of T eq and M p can be found in our output catalogue (Table 5), they orbit stars fainter than V=7 mag and are thus excluded from our analysis. Interestingly, we find Roman-accessible planets with T eq comparable to that of the Earth such as the super-Earth tau Cet e, the giant planet gam Cep b or the super-Jupiter bet Pic c. bet Pic c is a young exoplanet in a system of about 18.5 Myr (Miret-Roig et al. 2020) and thus Eq. (14) used here will severely underestimate its effective atmospheric temperatures. On the other hand, both gam Cep b and tau Cet e are mature systems with ages of 6.6 Gyr (Torres 2006) and 5.8 Gyr (Tuomi et al. 2013), respectively. Discussion on selected targets We next elaborate on eight targets that showcase new study cases in exoplanet science. 
The exercise explores possibilities for their characterization in reflected starlight, but also limitations arising from, for example, uncertainties in their orbital solutions or their host stars' brightness. First, we focus on the two super-Earths tau Cet e and f, which orbit near their star's habitable zone (HZ). Then, we study the cases of pi Men b, 55 Cnc d and HD 219134 h, planets in multi-planetary systems whose known innermost companions are accessible to atmospheric characterization through transit spectroscopy. We also analyse the gas giant eps Eri b, whose orbital solution remains somewhat controversial, demonstrating the potential of the Roman Telescope to characterize its orbit. Finally, we discuss the candidate super-Earths Proxima Centauri c (hereon, Proxima c) and Barnard's Star b (hereon, Barnard b) as key targets for the next generation of directly-imaged exoplanets. In addition, estimates or reasonable guesses of the orbital inclination are available for most of these exoplanets. This affects their prospects for direct imaging. For such cases, we compare their detectability against the scenario in which i is unconstrained. This way we show the relevance of multi-technique strategies for exoplanet characterization, an approach that will become more common with upcoming Gaia data releases. Two super-Earths near the habitable zone tau Cet is a nearby G8 V star with an effective temperature T eff =5344 K (Santos et al. 2004). It hosts four super-Earths with minimum masses in the range 1.75−3.93 M ⊕ (Tuomi et al. 2013;Feng et al. 2017). Based on Hipparcos astrometry, Kervella et al. (2019) report an anomaly in the star's tangential velocity attributable to a possible outer giant companion. We find that the two outermost confirmed exoplanets, tau Cet e and f, are Roman-accessible in the optimistic CGI scenario with P access of about 88% and 27%, respectively. In the intermediate and pessimistic CGI scenarios, the probabilities drop below 13% for both planets (Table 5). [Figure caption fragment: orbital realizations computed from the parameters in Table 3; green colour indicates the orbital positions which are accessible in the optimistic CGI scenario; for the sake of clarity, only 1 of each 10 orbital realizations is shown.] … and f (Table 5). This, together with our obtained R p ≈ 1.87 R ⊕ , places them in the super-Earth regime if defined as R p < 2 R ⊕ and M p < 10 M ⊕ . 6 Accounting for the uncertainties in the values of a (Table 3), tau Cet e and f orbit within the optimistic HZ and slightly outside the conservative HZ. We note that, if additional planets in this mass range were found inside the HZ of the system as suggested by e.g. Dietrich & Apai (2021), they would likely fall in the accessible region of the Roman Telescope too. The possibility of characterizing the atmospheres of these planets represents a remarkable step toward a better understanding of habitability beyond the Earth, making these targets quite unique. tau Cet hosts a debris disc with a total mass of about 1 M ⊕ (Greaves et al. 2004) that might potentially hinder the direct imaging of the system's planets. Based on Herschel images, Lawler et al. (2014) find that the disc is inclined by i = 35 • ± 10 • from face-on. They also find that the disc's inner edge is most likely located between 2 and 3 AU (555 and 833 milliarcseconds, respectively) although cannot rule out solutions between 1 and 10 AU. The disc's outer edge is at about 55 AU. MacGregor et al.
(2016) observed this system with the Atacama Large Millimeter/submillimeter Array (ALMA) and estimated an inner edge of the disc at 6.2 +9.8 −4.6 AU. This is consistent with recent findings by Hunziker et al. (2020) based on observations in the 600 − 900 nm range with the SPHERE/ZIMPOL instrument at the Very Large Telescope (VLT). Based on their nondetection of extended sources around tau Cet, Hunziker et al. (2020) concluded that either the disc is too faint or its inner edge is at a distance farther than about 6 AU. Overall, the ALMA and SPHERE/ZIMPOL observations suggest that the debris disc will not interfere with the prospective imaging of the exoplanets but further measurements are needed to confirm it. Indeed, we find that if the disc's inner edge is at 2 AU, it remains outside the optimistic OWA of the Roman Telescope for λ=575 nm but it could be detected at λ=730 nm and 825 nm (see Table 2). The disc is not detectable in any of the exoplanet-devoted CGI filters of any CGI scenario considered here if the inner edge is further out than 2.3 AU. The debris disc may negatively affect the habitability of these planets if they are frequently subject to large impacts. On the other hand, the existence of abundant debris from such impacts may have favoured the formation of exomoons, which could be searched for in direct imaging (Cabrera & Schneider 2007). Furthermore, the disc can be used for a first guess on the planets' inclinations because systems hosting debris discs and multiple planets are frequently coplanar (Watson et al. 2011;Greaves et al. 2014). Figure 12 shows, in black, the orbital realizations for tau Cet e and f following our general methodology for planets without a constraint on inclination and, in red, those configurations with i coincident with the disc's orientation. [Fig. 12 caption: Detectability of tau Cet e and f in each CGI configuration, following the same colour code as in Fig. 8. Left: black lines correspond to orbital realizations without an inclination constraint; solid red lines correspond to orbital configurations with 25 • < i < 45 • , coplanar with the debris disc of the system (Lawler et al. 2014); for the latter case, the inclination is sampled from a uniform distribution within the quoted limits. Right: the histograms show the posterior distributions of α obs .] Table 10 compares the detectability results for tau Cet e and f in all CGI scenarios if no prior knowledge of the inclination is assumed and if the orbits of the planets are assumed coplanar with the disc. In the optimistic CGI configuration, an estimate of i = 35 +10 −10 deg results for both planets in statistically larger values of P access . This corresponds to an increase in t obs , while the ranges of α obs remain similar in both cases. Similar conclusions are found for tau Cet e in the intermediate CGI configuration, whereas in this configuration tau Cet f reduces its small P access to zero when i is constrained (Table 10). In fact tau Cet f remains inaccessible for any CGI configuration out of the best-case, optimistic scenario. This reduction of P access when i = 35 +10 −10 , in comparison to the case of unconstrained i, also happens for both planets in the pessimistic CGI configuration. In the case of tau Cet f, the reason is that if C min increases, only those orbital realizations with i close to 0 or 180 • (and hence very large M p and R p ) would be accessible. If i is constrained within 25 and 45 • , these orbital realizations will not reach the C min threshold. For tau Cet e, the large IWA in the pessimistic CGI scenario is the main limitation for the detectability of the planet. In order to determine the orbital parameters that have a larger impact on the detectability of these planets, we carried out a sensitivity study included in Appendix C. There, we fix all orbital parameters a, e, i and ω p except for one at a time and check how P access and α obs change. For tau Cet e, we find (Fig. C.1) that P access does not change significantly, with the largest effect being due to variations in ω p . In the case of tau Cet f, i and ω p are the main parameters affecting the detectability (Fig. C.2). This sensitivity study shows the relative effects of each orbital parameter on P access and α obs , but the correct values of these parameters are those reported in Table 6, where all uncertainties are accounted for simultaneously. Contamination from exo-zodiacal dust (exozodi) might also limit the detectability of tau Cet e and f. Ertel et al. (2020) found that exozodiacal dust levels in the HZ around nearby early- and solar-type stars are generally about 3 times that of the Solar System. They conclude that these levels are low enough that would not impede the spectral characterization of HZ rocky planets with current direct-imaging mission concepts such as the WFIRST Starshade Rendezvous (Seager et al. 2019), HabEx or LUVOIR. Ertel et al. (2020) did not detect exozodi around tau Cet and set an upper limit of 120 exozodis (that is, 120 times that of the Solar System). Follow-up observations should help determine the actual amount of exozodi, which could also be constrained by the Roman Telescope in its observing mode devoted to disc measurements (Mennesson et al. 2019). Ertel et al. (2020) suggest multi-epoch observations as a path to distinguishing between the signal from the exoplanets and that from exozodi dust clumps, given their different phase functions. In this respect, we find that even in the optimistic CGI scenario, only a modest phase coverage could be achieved for tau Cet e and f (α ∈ [61 +10 −8 ,100 +9 −10 ] and [53 +20 −29 ,74 +26 −28 ], respectively). Outer companions of transiting exoplanets The Roman Telescope will be able to characterize several exoplanets in multi-planetary systems, some of them with inner companions accessible to transmission or occultation spectroscopy. This provides unprecedented possibilities for understanding their bulk atmospheric compositions, histories and the connection between formation, migration and current-time architecture. Here we discuss the cases of pi Men b and 55 Cnc d, as representatives of this type of exoplanets. pi Men b Planetary systems that contain a far-out Jupiter and a close-in super-Earth appear to be relatively common (Bryan et al. 2019). The mechanisms that result in such architectures remain unclear but are potentially important for understanding the origin and evolution of super-Earths. pi Men (V=5.67 mag) is one of such systems. It hosts a far-out Jupiter discovered with RV (Jones et al. 2002) and a close-in transiting super-Earth discovered with photometry and RV (Gandolfi et al. 2018;Huang et al. 2018). The outer planet, pi Men b, has also been detected in joint Hipparcos and Gaia astrometry (Xuan & Wyatt 2020;De Rosa et al. 2020;Damasso et al.
2020b) thereby providing constraints on its sky-projected inclination and the mutual inclination between both planets in the system. Constraints of this kind will become more usual with future releases of Gaia astrometric data. Pi Men b is now known to follow an eccentric orbit that is most likely not coplanar with the orbit of the inner planet. The super-Earth in the system, pi Men c, is amenable to intransit atmospheric characterization . It has been proposed that its atmosphere may not be hydrogen/helium-dominated but rather contains large amounts of heavy gases. Rossiter-McLaughlin measurements during the transit of pi Men c have revealed that its orbital plane is misaligned with the stellar spin axis (Kunovac Hodžić et al. 2021). Interestingly, the eccentricity and inclination of the outer planet and the orbital misalignment of the inner one support a formation scenario in which the super-Earth is formed far from the star and migrated into its current orbit following higheccentricity migration (Kunovac Hodžić et al. 2021). The possibility of obtaining detailed orbital information of both planets and atmospheric information of the inner one make the pi Men system quite unique. Of interest here, pi Men b is amenable to direct imaging with the Roman Telescope. This will help further constrain its orbit, especially if multi-phase measurements are made. It will also enable the spectroscopic investigation of its atmosphere, which should set valuable constraints on its chemical composition (e.g. Lupu et al. 2016;Nayak et al. 2017;Carrión-González et al. 2020). To explore the detectability of pi Men b, we compare the orbital solution given in the NASA Archive (Huang et al. 2018), which has no estimate of i, and the scenario in which i is constrained. We use an inclination of 128.8 +9.8 −14.1 deg that results from translating the inclination angle defined in Xuan & Wyatt (2020) to our own definition in Figure 1. The inclination is such that the angular momentum vector of pi Men b's orbit points toward the observer (Xuan, private communication). Figure 13 compares the F p /F -∆θ diagrams if the inclination is constrained and if it is not. In case i is constrained, t obs = 334 +15 −15 days and α obs =[70 +2 −1 ,95 +1 −1 ] in the optimistic CGI scenario. This does not differ substantially from the results for the analysis with unconstrained inclination (Table 6), in which pi Men b is accessible over 330 +32 −17 days of its 2093-day orbital period and phase angles α ∈ [69 +7 −2 ,95 +1 −1 ]. Similarly, if i is constrained the conclusions for the other CGI scenarios are comparable to those in Table 6, finding that the planet is only marginally accessible in the intermediate scenario and not accessible in the pessimistic one. The sensitivity study in Fig. C.3 shows that the detectability of the planet does not change much if the orbital parameters vary within the uncertainties reported in the input catalogue (Table 3). For comparison, we note that a shift of 180 • in the value of ω p (as if ω was mistaken for ω p ) would yield a significantly larger range of observable phase angles α ∈ [42 +16 −3 ,111 +2 −7 ] (see Appendix B). 55 Cnc d A total of five planets have been confirmed around 55 Cnc (V=5.96 mag) to date (Butler et al. 1997;Marcy et al. 2002;Fischer et al. 2008;Winn et al. 2011). The super-Earth 55 Cnc e is the only one found to transit, which allowed to constrain the inclination of its orbit. Nelson et al. 
(2014) carried out dynamical simulations and determined that the inclination of planets b, c, d and f , assumed coplanar, likely coincides with that of planet e. They also found that the system becomes unstable if the mutual inclination between planet e and the others is between 60 • and 125 • . Baluev (2015) considered this an optimistic estimate and concluded that the inclination of the outer planets could not be below 30 • . The NASA Exoplanet Archive quotes i = 90 • , with no upper or lower uncertainties, for 55 Cnc b, c, d and f . We manually set the inclination of these planets to i = 90 ± 60 • , more in accordance with the conservative scenario in Baluev (2015). Hence, the values of M p quoted in the NASA Archive for these planets become their minimum masses. In our exploration, we determine the planet masses according to the sampling of i in each realization (see Sect. 4.5). We find that the only planet observable by the Roman Telescope in this system is 55 Cnc d, with P access =100% in the optimistic CGI scenario. This is the outermost and a priori most massive planet (M p sin i=3.878 M J ) in the system, which appears to be a frequent architecture in multiplanetary systems (e.g. ups And, pi Men, HD 160691, HD 219134). The detectability window spans over 2117 +125 −318 days, with a range of observable phase angles α ∈ [30 +20 −10 , 84 +2 −2 ]. One of the limitations in the detectability of 55 Cnc d is the IWA, which affects mainly the smaller Article number, page 17 of 34 A&A proofs: manuscript no. aanda phase angles. The value of C min prevents the detection of the planet as it orbits from quadrature to inferior conjunction and α increases, reducing F p /F . In the intermediate and pessimistic CGI scenarios, the planet is below the C min and therefore it is not Roman-accessible. From the sensitivity study for this planet (Fig. C.4) we conclude that the uncertainties in the orbital parameters have no significant effect on P access and that i is the main parameter affecting the range of α obs . Detecting 55 Cnc d in reflected starlight will set constraints on its atmospheric structure and composition. This may help understand the possible evolution of the system and the dynamical processes that have brought 55 Cnc e to its ultra-short-period orbit of P=0.74 days (Winn et al. 2011). HD 219134 h The K3 V star HD 219134 (V=5.570 mag) hosts a multiplanetary system with up to six exoplanets (Motalebi et al. 2015;Vogt et al. 2015;Gillon et al. 2017). The two innermost of them, super-Earths b (M p = 4.74 ± 0.19 M ⊕ ) and c (M p = 4.36 ± 0.22 M ⊕ ), have been observed in transit (Motalebi et al. 2015;Gillon et al. 2017). The system also includes three mini-Neptunes (planets d, f and g) and an outer Saturn-mass planet (h), all discovered in RV. Given the different nomenclatures used in literature, we adopt here that of the NASA Archive. Johnson et al. (2016) proposed that the signal attributed to planet f may be a false positive due to stellar rotation and this planet is indeed marked as controversial in the NASA Archive. HD 219134 h, on the other hand, has been suggested to be real despite its reported orbital period of about half the 12-year stellar activity cycle (Johnson et al. 2016). In this work, we have found that HD 219134 h is one of the only three exoplanets that are Roman-accessible in all the CGI configurations considered. 
In all scenarios, it is also the exoplanet that shows the largest interval of α obs and therefore the most favourable target to perform phase-curve measurements on (Table 7). Phase angles near quadrature are however less likely to be observed because those orbital positions tend to fall outside the OWA (see Fig. 8). An observing mode reaching larger angular separations, such as the CGI mode devoted to disc measurements (Sect. 2), may complement the observations in that region of ∆θ. In the intermediate CGI scenario and even the pessimistic one, HD 219134 h would remain accessible for about 577 and 444 days, respectively. This could facilitate higher S/N observations being obtained. We also find that this planet is suitable to be observed with a broad wavelength coverage. Remarkably, its P access is about 90% or higher for the three CGI filters (575, 730 and 825 nm) in the optimistic, intermediate as well as in the pessimistic CGI scenario (Table 6). There have been recent investigations of the evolution and current composition of HD 219134 b and c (Vidotto et al. 2018;Nikolaou et al. 2019). The broad phase and wavelength coverage achievable for HD 219134 h makes it a promising target for atmospheric characterization (Damiano et al. 2020). Furthermore, it can be considered one of the most reliable targets for the Roman Telescope given its great detectability prospects in all CGI scenarios and wavelengths. The orbital parameters reported in the NASA Archive for this planet correspond to those in the discovery papers, which have not been further updated. Planning for direct-imaging observations will require a refined orbital characterization, for which additional RV campaigns are strongly needed. Such follow-up RV measurements would also help clarify which of the reported signals in the system correspond to actual planets and which are caused by stellar activity. [Figure caption fragment: Yellow lines are specific to the maximum-likelihood orbital configuration provided in the corresponding reference. Middle: posterior distributions of α obs . Right: variation of α with time for each orbital realization; in this panel, green regions correspond to detectability windows for the maximum-likelihood orbit (yellow line); all orbital realizations are shown for reference in the α-t diagram (black lines), but their corresponding detectability windows are omitted.] Prospects to confirm controversial exoplanets: eps Eri b eps Eri b is a giant planet orbiting a young K2 V nearby star (d=3.22 pc) with a period of about 7 years, discovered in RV data by Hatzes et al. (2000). Benedict et al. (2006) combined RV and astrometry, and found an orbital solution with i = 30.1 • ± 3.8 • and e = 0.70 +0.04 −0.04 . It has since been a promising target for direct imaging given its predicted large angular separation of up to 1600 mas (Kane et al. 2018) and the interest in the atmospheric processes that could take place on a planet with such an eccentric orbit (Sánchez-Lavega et al. 2003). However, the orbital solution of this planet has remained controversial (e.g. Hollis et al. 2012) and, furthermore, the existence of the planet has also been questioned (Anglada-Escudé & Butler 2012). Mawet et al. (2019) combined RV data with high-contrast direct imaging observations at 4.67 µm, finding a RV signal consistent with a planet in a 7-year orbit but no thermal emission.
They inferred a minimum age of 800 Myr, an orbital inclination i = 89 • ± 42 • and an eccentricity of e = 0.07 +0.06 −0.05 , an order of magnitude smaller than the previous reference adopted as default in the NASA Exoplanet Archive. They find this solution marginally compatible with the planet being co-planar with the outer debris disc in the system, which has i = 34 ± 2 • (Booth et al. 2017). The NASA Exoplanet Archive updated on 2020-09-03 the information on eps Eri b from that provided by Benedict et al. (2006) to that by Mawet et al. (2019). The scope of our work is not to determine which one of the orbital solutions is more reliable. This said, and as shown here, the update dramatically changes the prospects for detecting the planet, and demonstrates the importance of follow-up measurements, preferably with multiple techniques. Focusing on the optimistic CGI scenario, we compare both solutions in Fig. 14 and find that the one in Benedict et al. (2006) is accessible in all of our realizations (P access =100%) and produces α obs =[60 +3 −3 , 107 +4 ], whereas the solution by Mawet et al. (2019) yields P access =57.99% and α obs =[12 +8 −4 ,24 +1 −1 ]. These obvious differences, which are also observable in the intermediate and pessimistic CGI scenarios, have potential implications on the prospects to characterize the exoplanet's atmosphere. On a more positive note, given that the ranges of α obs do not overlap, reflected-starlight observations of the planet may help determine the actual orbital solution. In both cases, we find that the OWA of the Roman Telescope is a major limitation to observe the planet. Observing modes with larger OWAs or telescope architectures more flexible in this regard (e.g. Seager et al. 2019; LIFE Collaboration 2021) will facilitate the detection of this planet and increase the interval of α obs . In our sensitivity study for the orbital solution given by Mawet et al. (2019), we find that i is the key factor affecting the detectability of this planet (Fig. C.5). Fig. C.5 shows that orbital realizations with i of about 50 or 130 • would remain outside the OWA for the whole orbital period, but those close to edge-on reach smaller ∆θ making the planet accessible. The abundant exo-zodiacal dust in the system (Ertel et al. 2020) might create additional difficulties. However, observing the eps Eri system could finally confirm the existence of the planetary companion and constrain its orbital solution, either by directly imaging it or by studying planet-disc interactions. The fact that this planet remains accessible in all three CGI scenarios makes it a potential example of how high-contrast imaging with the Roman Telescope could help resolve conflicting orbital solutions. The potential of direct-imaging to confirm RV candidates: Barnard b and Proxima c A space-based direct-imaging mission will be useful to confirm the existence of a number of targets that are often considered candidate exoplanets. Due to the expected duration of the eventual science phase of Roman Telescope's CGI, the use of telescope time in such survey-like observations with uncertain payoff will likely not be favoured. Nevertheless, the next generation of direct-imaging space telescopes will have among their goals the search for new exoplanets (Gaudi et al. 2018;The LUVOIR Team 2018). In this context, we analyse the cases of Barnard b (Ribas et al. 2018) and Proxima c (Damasso et al. 2020a), two super-Earth candidates orbiting the closest planet-host stars.
The main properties of these targets, which are not included in the NASA Archive of confirmed exoplanets, and the corresponding references are listed in Table 11. We find (Fig. 15, Table 12) that both planets orbit within the optimistic Roman-accessible region of IWA, OWA and C min if their orbital inclinations are assumed unconstrained. Indeed, Barnard b is accessible in all the orbital realizations (P access =100%) whereas Proxima c, with larger uncertainties in the orbital parameters, has a somewhat lower probability of P access =64.84%. Furthermore, Barnard b remains accessible over about 70% of its orbital period (t obs =167 +39 −49 days) but Proxima c is only accessible over less than a tenth of its orbit (t obs =116 +59 −50 days). The range of α obs is particularly wide for Barnard b (∆α obs ≈ 85 • ), which may eventually help characterize the composition and structure of its atmosphere (Nayak et al. 2017;Damiano et al. 2020). The brightness of their host stars likely prohibits the observation of these planets with the Roman Telescope. However, both stars will be within the operating range of future directimaging missions such as LUVOIR (The LUVOIR Team 2018), being Barnard a more suitable target (V=9.5 mag) than Proxima (V=11.13 mag). In the sensitivity study for these candidates, we find that Barnard b has P access = 100% in all cases, being i and e the parameters with the largest impact on α obs (Fig. C.6). In the case of Proxima c, i is the parameter which affects the most both P access and α obs . Indeed, only the orbits with i ≈ 90 • occur to be accessible (Fig. C.6). Proxima c is indeed amenable to astrometric characterization of its orbit with existing telescopes, which strongly affects the detectability prospects for a direct-imaging mission. Benedict & McArthur (2020) obtained i=133±1 • and e=0.04±0.01 with astrometric data from Hubble Space Telescope and SPHERE instrument at the VLT. Correspondingly, assuming a circular orbit and using Gaia data, Kervella et al. (2020) proposed two solutions: a prograde orbit with i=152±14 • and a retrograde orbit with i=28±14 • . We find that in all these cases Proxima c would not be Roman-accessible because the angular separation is larger than the OWA during the whole orbit (red lines in Fig. 15). There is a growing population of exoplanet candidates, mostly detected with RV. The examples of Barnard b and Proxima c illustrate the potential of direct-imaging missions to confirm, given the appropriate orbital conditions, the existence of these candidates. Conclusions The Nancy Grace Roman Space Telescope will be the first space mission capable of directly imaging exoplanets in reflected starlight. The first measurements of this kind could therefore be available within the decade. Designed as a technology demonstrator, it will pave the way for more ambitious direct imaging missions such as LUVOIR or HabEx. We have shown in this work its potential for several science cases, in particular for phase-curve measurements of exoplanets. We have analysed the complete set of confirmed exoplanets in the NASA Exoplanet Archive and computed which ones would be Roman-accessible at 575 nm in three different scenarios of CGI performance. For that, we have compiled the planetary and stellar parameters needed to compute the evolution of Article number, page 19 of 34 A&A proofs: manuscript no. aanda − Notes. † indicates that the M p value corresponds to M p sin(i). The quoted values for Barnard b are obtained from the discovery paper (Ribas et al. 
2018) and the Extrasolar Planets Encyclopaedia. For Proxima c, the planetary parameters are obtained from the discovery paper (Damasso et al. 2020a) and the Extrasolar Planets Encyclopaedia, while the stellar parameters are obtained from Suárez Mascareño et al. (2020). Additional estimates for the inclination of Proxima c have been proposed by Benedict & McArthur (2020) and Kervella et al. (2020), suggesting also practically zero eccentricity. The implications of these findings are discussed in the text. Notes. For the case of Proxima c, the parameters i, e and ω p are assumed unconstrained and hence sampled as explained in Sect. 4.1. If the values of i and e considered for Proxima c are compatible with the findings of Benedict & McArthur (2020) or Kervella et al. (2020), this planet would not be Roman-accessible (see Fig. 15). the exoplanet's orbital position and brightness (Table 3). To account for the uncertainties in the orbital determination and other non-orbital factors, we followed a statistical approach and computed 10000 random realizations for each exoplanet. In each realization, the values of all parameters were independently drawn from appropriate statistical distributions within their quoted upper and lower uncertainties. For those exoplanets lacking a value of orbital elements such as e, i or ω p , we drew their values from uniform distributions assuming an isotropic distribution of possible orbital orientations. In the cases without a value of the planet radius, we derived it by means of published M p -R p relationships covering a range of masses from less than that of Mercury to 60 M J . From the posterior distribution of ∆θ or F p /F , we derived the overall probability of the planet to be Romanaccessible, its transit probability and the values of t obs , α obs and T eq(obs) . As of September 2020, 26 exoplanets orbiting stars brighter than V=7 mag have P access > 25% in the optimistic CGI configuration. This number is reduced to 10 and 3 in the intermediate and pessimistic scenarios, respectively. Only HD 219134 h, 47 UMa c and eps Eri b are Roman-accessible in all three scenarios. We note that our assumed scenarios do not correspond to officially expected CGI specifications but rather to a range of plausible coronagraph performances according to current predictions. For instance, the best official estimates of the IWA currently match the value in our optimistic scenario, while the official OWA is slightly less restrictive than the one we assume. The best official estimates of C min are more restrictive than the value assumed in our optimistic scenario but somewhat more favourable than the one in our intermediate scenario (see Sect. 2). Additional factors not considered in this work will reduce the number of accessible targets and therefore a high value of P access does not guarantee a detection of the planet, which will be restricted by mission schedule and final instrument performance. For reference, we list in our output catalogue (Table 5) the up to 76 exoplanets that would be accessible in the optimistic scenario if the host-star magnitude was not a limitation. The catalogue presented here is expected to evolve as followup observations are performed, and will be updated in future work as more information about the mission is available. One of the next steps to be performed with our methodology is to simulate an optimized observing schedule for a direct-imaging telescope, including noise sources and restrictions from mission timeline. 
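As a minimal sketch of this statistical detectability criterion, the snippet below estimates P access as the fraction of random orbital realizations with at least one orbital position inside the working angles and above the contrast floor. The IWA, OWA and C min numbers and the toy separation and flux-ratio tracks are placeholders rather than the CGI scenario values; in the actual procedure each realization's track follows from orbital parameters drawn within their quoted uncertainties. An optimized observing schedule, as mentioned above, would add noise sources and mission-timeline constraints on top of this purely geometric and photometric criterion.

```python
import numpy as np

def p_access(sep_mas, contrast, iwa_mas, owa_mas, c_min):
    """P_access: fraction of realizations with at least one orbital position
    inside [IWA, OWA] and with planet-to-star flux ratio above C_min.
    sep_mas and contrast have shape (n_realizations, n_orbital_positions)."""
    detectable = (sep_mas > iwa_mas) & (sep_mas < owa_mas) & (contrast > c_min)
    return detectable.any(axis=1).mean()

# Toy example: 10,000 realizations, 360 orbital positions each, with made-up
# separation (mas) and flux-ratio tracks and placeholder coronagraph limits.
rng = np.random.default_rng(3)
sep = rng.uniform(50.0, 900.0, (10_000, 360))
fp_fs = rng.uniform(1e-10, 1e-7, (10_000, 360))
print(p_access(sep, fp_fs, iwa_mas=150.0, owa_mas=500.0, c_min=1e-9))
```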
A similar approach was discussed in Brown (2015) under the assumptions of no orbital uncertainties except for i, and R p = R J for all considered planets. That work concluded that successful observations of any suitable exoplanet may be restricted to windows of only a few days. Nevertheless, the detectability criteria in that work as well as the resulting target list were shaped by the science requirement of measuring M p with a fractional uncertainty of 0.10. Relaxing this requirement will broaden the list of observable targets and their detectability windows. On the other hand, accounting for all the parameter uncertainties that we consider in our method will surely increase the uncertainties in the planning. The about 3000 exoplanets discovered between the compilation of the input catalogue in Brown (2015) and ours also increase the options to find suitable targets as the launch of the Roman Telescope approaches. A population study was carried out for the set of 26 Romanaccessible exoplanets in the optimistic scenario. We compared their properties with those of the complete population of confirmed exoplanets, and with the exoplanets that have been observed in transit (Sect. 5.1). As expected, we found that the subset of Roman-accessible planets is biased towards massive objects on long-period orbits with high eccentricities. We also noted a lack of F, K and M stars in the hosts of Roman-accessible planets, caused partially by the threshold specified at V=7 mag. Overall, this suggests that the Roman Telescope will probe a population of exoplanets that differs in various ways from those accessible to atmospheric characterization with current techniques. In the optimistic CGI scenario, exoplanets will be accessible mainly near quadrature (α=90 • ) and many of them could reach minimum values of α obs of about 30 • or 40 • . These phases are remarkably brighter than those generally used to estimate planet detectability and S/N, usually α = 90 • or up to 60 • in optimistic works (e.g. Lacy et al. 2019). This may have a favourable impact on the computation of integration times. We found several exoplanets suitable for phase curve measurements in reflected starlight with ranges of observable phases ∆α obs 70 • . The primary limitation to access smaller phase angles is the IWA of the coronagraph, whereas high phases will be mainly limited by the C min of the instrument. This effect also narrows the intervals of ∆α obs in more conservative CGI scenarios. Computing the range of α obs is not only useful to compute more accurate levels of S/N, but also to understand the potential for atmospheric characterization. We have shown that in the op-Article number, page 20 of 34 Ó. Carrión-González et al.: Catalogue of exoplanets accessible in reflected starlight to the Nancy Grace Roman Space Telescope timistic CGI scenario, α obs could range between about 30 • and 120 • for some targets. The atmospheric-modeling community may use these values to study whether the atmospheric retrievals of an exoplanet would benefit from multiple observations at different phases. Analysing the impact of partial wavelength coverage on the atmospheric characterization is also ongoing theoretical work (e.g. Batalha et al. 2018;Damiano et al. 2020). Such studies will benefit from our findings on the detectability at different CGI filters (Table 6). In addition, our statistical method provides both the T eq of each planet along its orbit and the range of observable temperatures T eq(obs) . 
Respectively, T eq and T eq(obs) are relevant parameters to model the structure of (exo)planetary atmospheres (e.g. Hu 2019) and to search for atmospheric variability. Up to 13 of the Roman-accessible exoplanets are part of multiplanetary systems, with the systems 47 UMa and tau Cet hosting two Roman-accessible exoplanets each, in the optimistic scenario. In particular, the detectability of tau Cet e and f is severely reduced in more pessimistic CGI configurations (Table 5). Nevertheless, the possibility of observing two super-Earths inside the optimistic habitable zone of their star motivates follow-up measurements of this system before the Roman Telescope is launched. 55 Cnc d, pi Men b and HD 219134 h are Roman-accessible planets that have a transiting inner companion. These are especially valuable targets because spectroscopic observations of both planets could eventually be performed. There are constraints on the orbital inclination of the planets in some of these systems. For pi Men b, such constraints are based on astrometry, while for for 55 Cnc d they come from dynamical stability analyses. We showed that an estimate of i reduces the dispersion of possible orbital solutions, thereby improving the accuracy of the computed P access . The characterization of these outer planets in reflected starlight will foreseeably set valuable constraints on the possible structure of the systems and their history. For pi Men b, we also discussed how a correct value of the argument of periastron of the exoplanet affects the prospects for phase-curve measurements. The lack of a homogeneous criterion to report ω in the literature has resulted in multiple definitions that may yield inconsistent results. The main exoplanet catalogues list the ω values as reported in the original references, regardless of the definitions actually used there. Shifts in ω by 180 • (the usual outcome of different definitions) do not affect the maximum angular separation. They do however affect the computed phase angles and therefore F p /F (Appendix B). The future prioritization of targets for direct imaging missions will benefit from consistently reported values of ω p , as we do in this work. Finally, we addressed the potential of direct-imaging measurements to confirm the existence of exoplanets that are controversial or remain candidates. We showed that eps Eri b could be accessible in reflected starlight and confirm the measured RV signal. We also found the candidate super-Earths Barnard b and Proxima c to orbit in the accessible ∆θ − F p /F region of the Roman Telescope but will be undetectable due to the faint magnitude of their host stars. However, these examples show the relevance of determining the orbital inclination, such as in the case of Proxima c, and its impact on the detectability prospects. We conclude that in general direct-imaging missions will strongly rely on preliminary observations with other techniques such as RV or astrometry. Although planned as a technology demonstrator, our work here has shown some of the possibilities of the Roman Telescope's coronagraph during an eventual phase of science oper-ations. It would access a population of exoplanets that has not been previously observed, widening our understanding of exoplanet diversity. Moreover, it would be able to perform phasecurve measurements of these planets in reflected starlight, providing insight into exoplanetary atmospheres that cannot be studied with other techniques. 
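The remark that exoplanets are accessible mainly near quadrature and fade at larger phase angles can be quantified with the reflected-starlight contrast F p /F = A g (R p /r)² Φ(α). The Lambertian phase function used in the sketch below is a simplifying assumption on our part, not necessarily the phase law adopted for the catalogue, and the example planet is arbitrary.

```python
import numpy as np

def lambert_phase(alpha_rad):
    """Lambertian phase function, equal to 1 at full phase (alpha = 0)."""
    return (np.sin(alpha_rad) + (np.pi - alpha_rad) * np.cos(alpha_rad)) / np.pi

def contrast(alpha_deg, A_g=0.3, R_p_rj=1.0, r_au=3.3):
    """Planet-to-star flux ratio for geometric albedo A_g, planet radius in
    Jupiter radii and star-planet distance in AU."""
    R_p_au = R_p_rj * 71492.0e3 / 1.495978707e11   # Jupiter radius in AU
    return A_g * (R_p_au / r_au) ** 2 * lambert_phase(np.radians(alpha_deg))

for alpha in (30.0, 60.0, 90.0, 120.0):
    print(alpha, contrast(alpha))   # the contrast drops steeply past quadrature
```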
Appendix A: Equations of motion Assuming an elliptic orbit, we can define a coordinate system with x and y axes co-planar to the orbit. The x axis is in the direction of the ellipse major axis, positive towards the orbital periastron; y axis is perpendicular to x; z is perpendicular to the orbital plane. Expressed in polar coordinates with respect to an arbitrary reference direction which subtends an angle ω p with the x axis: x = r cos f , y = r sin f , z = 0 (A.1), where ω p is referred to as the argument of periastron. The orbit can be represented in three dimensions with a new coordinate system with origin in the star, as shown in Fig. 1. The X, Y and Z axes form a triad such that X lays in the direction of the reference line, Y is in the reference plane and Z is perpendicular to both. We will assume that the direction to the observer is −Z. We note here that our assumption on the observer's direction is consistent with Hatzes (2016) but differs with respect to Murray & Correia (2010) or Winn (2010), who place the observer in +Z. A vector (x,y,z) is expressed in the new axes (X,Y,Z) by applying three rotations (Murray & Correia 2010). Here the angle i corresponds to the orbital inclination and Ω is the longitude of the ascending node. The longitude of the ascending node is the angle between the reference direction and the ascending node (the point at which the orbital plane intersects the reference plane moving towards positive values of Z). Ω determines the position of the orbit in the absolute reference frame of the sky. In this work, we will assume Ω = 0 • without loss of generality, which is equivalent to reorienting the XY axes in the plane of the sky. The orbital position of a planet at a certain time can be computed through Kepler's equation (Murray & Dermott 1999), M = E − e sin E, where e is the eccentricity of the orbit, E is called the eccentric anomaly and M, the mean anomaly. M is defined as M = 2π (t − t p )/P. Here, t is the time for which we are computing the position, t p is the time of periastron passage and P is the orbital period of the planet. E is defined in terms of the true anomaly f , the orbital semimajor axis a, the eccentricity and the planet-star distance given in Eq. (2). From the sketch of the orbit in Fig. 1, cos E = (1 − r/a)/e = (e + cos f )/(1 + e cos f ). With that, sin E can be computed as sin E = √(1 − e²) sin f /(1 + e cos f ), and E can be re-expressed in terms of the true anomaly as tan(E/2) = √((1 − e)/(1 + e)) tan( f /2). Notes. We note that, in those cases where the quoted uncertainties are 0.00, this is a result of insufficient significant figures in the rounding. We have not included upper or lower uncertainties for those cases in which they were not reported in the NASA Archive. In those cases where the quoted uncertainties of e reach nonphysical values (e.g. …
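A small numerical companion to this appendix, keeping the Ω = 0 convention: solve Kepler's equation by Newton iteration and project the orbital position onto the plane of the sky to obtain the star-planet separation (in the same units as a; dividing by the distance d gives the angular separation). The function names are ours and the example orbit is arbitrary; the choice of observer direction does not affect the projected separation.

```python
import numpy as np

def solve_kepler(M, e, tol=1e-10, max_iter=100):
    """Solve Kepler's equation M = E - e*sin(E) for E (Newton-Raphson)."""
    M = np.atleast_1d(M).astype(float)
    E = M.copy() if e < 0.8 else np.full_like(M, np.pi)
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def projected_separation(t, t_p, P, a, e, i, omega_p):
    """Sky-projected star-planet separation (same units as a), with Omega = 0."""
    M = 2.0 * np.pi * (t - t_p) / P
    E = solve_kepler(M, e)
    f = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2), np.sqrt(1 - e) * np.cos(E / 2))
    r = a * (1.0 - e * np.cos(E))
    X = r * np.cos(omega_p + f)                 # along the reference direction
    Y = r * np.sin(omega_p + f) * np.cos(i)     # in the reference plane
    return np.hypot(X, Y)

# Example: separation at five epochs of a 2500-day, e = 0.3, a = 3.3 AU orbit.
print(projected_separation(np.linspace(0, 2500, 5), 0.0, 2500.0,
                           3.3, 0.3, np.radians(60.0), np.radians(80.0)))
```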
2021-04-12T01:15:46.447Z
2021-04-09T00:00:00.000
{ "year": 2021, "sha1": "3c29c1b414e1f40ce31b509977fbe03419d9f141", "oa_license": null, "oa_url": "https://elib.dlr.de/144611/1/2104.04296.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3c29c1b414e1f40ce31b509977fbe03419d9f141", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119587790
pes2o/s2orc
v3-fos-license
A faithful 2-dimensional TQFT It has been shown in this paper that the commutative Frobenius algebra $QZ_5\otimes Z(QS_3)$ provides a complete invariant for two-dimensional cobordisms, i.e., that the corresponding two-dimensional quantum field theory is faithful. The essential role in the proof of this result plays Zsigmondy's Theorem. Introduction It is evident that one aspect of topological quantum field theories (TQFTs) concerns with the corresponding invariants of manifolds. However, the completeness of these invariants is seldom investigated in the literature. There is a result in [5,Section 14] that claims faithfulness of a 1-dimensional TQFT, which is inspired by [4]. The second author, in her recent work [15], has shown that every 1-dimensional TQFT, with respect to the field of characteristic zero, is faithful. This means that every such 1-dimensional TQFT provides a complete invariant for 1-cobordisms. In the current article we show that there is a faithful 2-dimensional TQFT. At first sight, this result has stronger algebraic than topological impact. It says that there is a commutative Frobenius algebra, which satisfies only the equalities in the language: multiplication, unit, comultiplication and counit that are necessary to define this notion. Since this structure is free of additional equations, one could be tempted to call it "free commutative Frobenius algebra". However, since the category of commutative Frobenius algebras is a groupoid-every homomorphism is an isomorphism (cf. [9, Lemma 2.4.5]-there are no freely generated objects in this category. We find that the existence of such a commutative Frobenius algebra justifies the whole notion. On the other hand, we do not know whether there exists a faithful n-dimensional TQFT for n ≥ 3. An important step towards a solution of this problem was given in [8], where the author presents the cobordism category in arbitrary dimension n with generators and relations. Our proof for 2-dimensional case suggests that for n ≥ 3, particular difficulties could be caused by closed manifolds with many connected components. As it was shown in [6], neither Turaev-Viro, [17], nor Reshetikhin-Turaev, [16], 3-dimensional TQFTs are faithful. We are aware of the fact that even a negative answer to this question might be conclusive-it suggests that TQFTs should search for a more "expressive" targets than the category of vector spaces. In order to keep this paper as short as possible, we rely on [9] for basic definitions, and suggest the reader to be acquainted with now classical works [2], [12], [1] and more recent [10]. The category 2Cob and 2TQFTs Let 2Cob be the category whose objects are 0, 1, 2, . . ., where n is the sequence of n circles and whose arrows are the equivalence classes of 2-cobordisms defined as in [9, Section 1.2]. We denote cobordisms by K, L, . . ., and K = L means that K and L belong to the same equivalence class. Let K : n → m be a 2-cobordism whose ingoing and outgoing boundaries are respectively the sequences of circles (Σ Also, we denote by (g i k ) K the genus of the connected component of K containing Σ i k . The category 2Cob is a symmetric monoidal with the tensor product ⊗ given by "putting side by side" and symmetry generated by the transpositions: Let Vect be the category of vector spaces over a fixed field whose symmetric monoidal structure is given by the tensor product and the usual symmetry. 
According to Atiyah's axioms (see [2, Section 2]), a 2-dimensional quantum field theory (2TQFT) is a symmetric, strong monoidal functor (cf. [11, Section XI.2]) from 2Cob to Vect. For m, k, n ≥ 0, let E m,k,n denote the connected 2-cobordism with n ingoing boundaries, m outgoing boundaries and genus k. As a part of a relation between 2TQFTs and commutative Frobenius algebras, which is thoroughly explained in [9, Section 3.3], we have that if F is a 2TQFT, then for , is a commutative Frobenius algebra. Conversely, if (A, µ, η, δ, ε) is a commutative Frobenius algebra, then there is a 2TQFT, which we denote by F A , mapping 1 into A, and E 1,0,2 , E 1,0,0 , E 2,0,1 and E 0,0,1 into µ, η, δ and ε, respectively. For such an F A , we denote F A K by (K) A , and abbreviate The following three lemmata hold since 2TQFT is a monoidal functor. and and if K = A L for K, L : 0 → 1, then we have that and if K = A L for K, L : 0 → 2, then we have that where dim(A) > 1, then for some k 1 ≥ . . . ≥ k n ≥ 0 and l 1 ≥ . . . ≥ l m ≥ 0 such that (k 1 , . . . , k n ) = (l 1 , . . . , l m ), we have that Proof. Since dim(A) > 1, the cobordisms K and L must have the same source and target. Also, K = L entails that either ρ K = ρ L , or ρ K = ρ L and there is (i, k) such that (g i k ) K = (g i k ) L , or ρ K = ρ L and for every (i, k), (g i k ) K = (g i k ) L and K and L differ in their closed components. We start with the last and simplest case. If ρ K = ρ L and for every (i, k) we have that (g i k ) K = (g i k ) L , then by applying Lemma 2.1 for all the boundary components, we arrive at the equality of the form (2.1). If ρ K = ρ L and there is (i, k) such that (g i k ) K = (g i k ) L , then by applying Lemma 2.1 for all the boundary components except the one corresponding to (i, k), and then by applying Lemma 2.2, we arrive at the equality of the form for some n, m, p, q ≥ 0 such that p = q, and k 1 ≥ . . . ≥ k n ≥ 0, l 1 ≥ . . . ≥ l m ≥ 0. If ρ K = ρ L and (i, k)ρ K (j, l), while not (i, k)ρ L (j, l), then by applying Lemma 2.1 for all the boundary components except those corresponding to (i, k) and (j, l) we arrive either directly at the equality of the form for some n, m, p, q, r ≥ 0 and k 1 ≥ . . . ≥ k n ≥ 0, l 1 ≥ . . . ≥ l m ≥ 0, or this equality is obtained by a further application of Lemma 2.3. For a > max{k 1 , l 1 }, put the both sides of the equalities (2.2) and (2.3) in the context E 0,a,1 • • E 1,a,0 in order to obtain the equality of the form (2.1). For Z 5 being the cyclic group of order 5, with the generator a, let QZ 5 be the group algebra and let e, a, a 2 , a 3 , a 4 be its basis. The multiplication µ : The comultiplication δ : QZ 5 → QZ 5 ⊗ QZ 5 is represented by the 25 × 5 matrix The structure (QZ 5 , µ, η, δ, ε) is a commutative Frobenius algebra and it is special in the sense that for every k E 1,k,1 = QZ 5 E 1,0,1 . Faithfulness In this section we denote the tensor product QZ 5 ⊗ Z(QS 3 ) by A. The algebra A is equipped with the commutative Frobenius structure as the tensor product of two such algebras (cf. [9, Section 2.4]). Note that (E 0,k,0 ) A is represented by the rational number The following lemma is crucial for the proof of the faithfulness of the 2TQFT corresponding to A. Proof. Let p and q be such that k p , l q > 0 and k p+1 = 0 = l q+1 (if there are any). Since the last digit in 2 2k−1 + 1 is either 3 or 9, such a factor is not divisible by 5, and we may conclude that n = m. 
Since all the factors but 2^{(∑_{i=1}^{p} k_i) − p} and 2^{(∑_{j=1}^{q} l_j) − q} are odd, we may conclude that (∑_{i=1}^{p} k_i) − p = (∑_{j=1}^{q} l_j) − q and that ∏_{i=1}^{p} (2^{2k_i − 1} + 1) = ∏_{j=1}^{q} (2^{2l_j − 1} + 1).
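The invariants discussed here can be made concrete numerically. The sketch below is illustrative only: it assumes a standard group-algebra Frobenius structure on QZ_5 (counit picking out the coefficient of the identity), which is not necessarily the normalisation used in the paper, whose matrices are not reproduced above. It evaluates the closed genus-k surface E_{0,k,0} as the counit applied to the k-th power of the handle operator µ∘δ, composed with the unit.

```python
import numpy as np
from itertools import product

n = 5                          # Q[Z_5], basis e = a^0, a, a^2, a^3, a^4
dim = n

# Multiplication mu : A (x) A -> A as a (5 x 25) matrix on the basis a^i (x) a^j.
mu = np.zeros((dim, dim * dim))
for i, j in product(range(n), repeat=2):
    mu[(i + j) % n, dim * i + j] = 1.0

# Unit eta : Q -> A, and an illustrative counit eps(a^i) = [i == 0] (an assumption).
eta = np.zeros((dim, 1)); eta[0, 0] = 1.0
eps = np.zeros((1, dim)); eps[0, 0] = 1.0

# Comultiplication induced by the pairing eps(xy): delta(a^k) = sum_i a^(k+i) (x) a^(-i).
delta = np.zeros((dim * dim, dim))
for k, i in product(range(n), repeat=2):
    delta[dim * ((k + i) % n) + ((-i) % n), k] = 1.0

handle = mu @ delta            # operator assigned to the genus-one piece E_{1,1,1}

def closed_surface(k: int) -> float:
    """Scalar assigned to the closed genus-k surface E_{0,k,0}."""
    return (eps @ np.linalg.matrix_power(handle, k) @ eta).item()

print([closed_surface(k) for k in range(4)])   # [1.0, 5.0, 25.0, 125.0] with this normalisation
```

Any invariant of this kind is obtained by composing the matrices of µ, η, δ and ε according to a decomposition of the cobordism into the generators E_{1,0,2}, E_{1,0,0}, E_{2,0,1} and E_{0,0,1}; faithfulness is the question of whether distinct cobordisms can always be separated by such numbers.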
2018-01-11T13:55:10.000Z
2017-11-16T00:00:00.000
{ "year": 2020, "sha1": "d1a1d2d6e4518f2922d8be19b89533f19efc1393", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1711.06044", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d1a1d2d6e4518f2922d8be19b89533f19efc1393", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
220384024
pes2o/s2orc
v3-fos-license
Insights on the Larvicidal Mechanism of Action of Fractions and Compounds from Aerial Parts of Helicteres velutina K. Schum against Aedes aegypti L. Viral diseases transmitted by the female Aedes aegypti L. are considered a major public health problem. The aerial parts of Helicteres velutina K. Schum (Sterculiaceae) have demonstrated potential insecticidal and larvicidal activity against this vector. The objective of this research was to investigate the mechanisms of action involved in the larvicidal activity of this species. The cytotoxicity activity of H. velutina fractions and compounds of crude ethanolic extract of the aerial parts of this species was assessed by using fluorescence microscopy and propidium iodide staining. In addition, the production of nitric oxide (NO) and hemocyte recruitment were checked after different periods of exposure. The fluorescence microscopy revealed an increasing in larvae cell necrosis for the dichloromethane fraction, 7,4′-di-O-methyl-8-O-sulphate flavone and hexane fraction (15.4, 11.0, and 7.0%, respectively). The tiliroside did not show necrotic cells, which showed the same result as that seen in the negative control. The NO concentration in hemolymph after 24 h exposure was significantly greater for the dichloromethane fraction and the 7,4′-di-O-methyl-8-O-sulphate flavone (123.8 and 56.2 µM, respectively) when compared to the hexane fraction and tiliroside (10.8 and 8.3 µM, respectively). The presence of plasmocytes only in the dichloromethane fraction and 7,4′-di-O-methyl-8-O-sulphate flavone treatments suggest that these would be the hemocytes responsible for the highest NO production, acting as a defense agent. Our results showed that the larvicidal activity developed by H. velutina compounds is related to its hemocyte necrotizing activity and alteration in NO production. Introduction Aedes aegypti L. (Diptera: Culicidae) is a major vector for viruses that threaten human health, such as dengue, chikungunya, and Zika. Globally, 2.5 billion people live in high-risk areas, especially in tropical and subtropical regions of the world where temperature and humidity promote their proliferation [1][2][3]. Efficient vaccines against these arboviruses have not yet been developed, making vector control the main form of preventing these diseases. Several natural products are considered promising for use as insecticides, repellents, or larvicides, due their biodegradability, efficiency, and low cost [4,5]. According to Faraldo et al. (2005), the larvae possess an extremely efficient immune system that is an excellent model for studying insect defense mechanisms [6]. In contrast to the complexity of the vertebrate immune system, the relative simplicity of the invertebrate immune system makes it a potentially sensitive and accessible means of monitoring the effects of environmental contaminants and the complex interactions that ultimately affect host resistance [7]. Invertebrate hemocytes have been used as a model to study and measure the impact of chemicals on the immune system, including pesticides and heavy materials [8]. Studies in invertebrates have related nitric oxide (NO) production to cytotoxic effects against pathogens, with the increase of its production in hemocytes being correlated with the immune response against foreign agents [9,10]. In order to continue the ongoing work with H. velutina against A. 
aegypti larvae, this study investigates the possible mechanisms of action involved in the larvicidal activity of fractions and isolated substances [12,13] by analyzing in vitro cytotoxicity and NO production. Larvae (L4) Survival Time after Exposure to Test Substances The mortality profile of the larvae over time was monitored in order to determine the exposure time needed to initiate the larvicidal effect. In this study, using the concentration needed to kill all exposed larvae, the dichloromethane fraction of the crude ethanolic extract of aerial parts of H. velutina (10.0 mg/mL) had the best mortality profile in the first three hours, reaching 33.5%, followed by compound 7,4′-di-O-methyl-8-O-sulphate flavone (1) (1.0 mg/mL), which produced 16.7% mortality, and the hexane fraction (5.0 mg/mL) with 6.7% mortality within three hours. The results for the three tested materials (dichloromethane fraction, hexane fraction, and 7,4′-di-O-methyl-8-O-sulphate flavone) were quite similar over time, reaching 90.0%, 98.34%, and 96.7% after 24 h, respectively (Figure 1). Tiliroside (2) (1.0 mg/mL) showed a later mortality effect, with 6.7% of exposed larvae dead in the first 12 h, reaching 65.0 and 100.0% after 48 and 72 h, respectively. In the negative control, no mortality was recorded (Figure 1). Comparing the curves, the hexane and dichloromethane fractions of the crude ethanolic extract of aerial parts of H. velutina do not differ statistically. The hexane fraction differs from tiliroside and 7,4′-di-O-methyl-8-O-sulphate flavone, and the dichloromethane fraction differs from tiliroside, but does not differ from 7,4′-di-O-methyl-8-O-sulphate flavone. Tiliroside and 7,4′-di-O-methyl-8-O-sulphate flavone differ from each other (Table 1). Considering the doses used, the most active substance would be 7,4′-di-O-methyl-8-O-sulphate flavone, which showed the greatest mortality profile at 1.0 mg/mL. Morphological changes and macroscopic aspects showed that the larvae treated with the hexane fraction were more debilitated, allowing us to conclude that although its larvicidal action is not as pronounced in the first hours as that of the dichloromethane fraction and 7,4′-di-O-methyl-8-O-sulphate flavone, it is more aggressive after 24 h of exposure. The greater effectiveness of the hexane fraction can also be seen in that it is necessary to double the dichloromethane fraction concentration to trigger larvicidal activity in the same time interval. Tiliroside did not produce visible body changes, but a slight change in the larvae's body color was observed (Figure 2). Measurement of Nitric Oxide The concentration of NO in the larvae treated with the hexane fraction was highest after 3 h of exposure (37.5 ± 6.5 µM) and decreased at 6 and 24 h (27.3 ± 7.5 and 10.8 ± 3.0 µM), representing 100%, 72.8%, and 28.8% NO in this sample (Figure 3A-1). The substances that have the best action over time are 7,4′-di-O-methyl-8-O-sulphate flavone and the dichloromethane fraction. Although the sulphated flavonoid showed a tendency to kill a higher percentage over time, statistically its action is similar to that of its original fraction (dichloromethane). Among the tested substances, the one that had the least effect relative to the dose used was tiliroside, which took longer to kill the larvae. Figure 3. NO production in larvae exposed to the LC50 of the substances over time; (*) statistically significant in relation to the negative control; bars with the same letter are not significantly different (Tukey test, 5%).
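The curve comparisons above were done in GraphPad Prism; purely as an illustration of the kind of test involved, a log-rank (Mantel-Cox) comparison of two mortality curves can be sketched with the lifelines package, using made-up per-larva observation times (hours until death, with survivors censored at 72 h) rather than the study's raw data:

```python
# Illustrative only: hypothetical per-larva outcomes, not the data behind Figure 1.
from lifelines.statistics import logrank_test

# Hours at which each larva died, or 72 if it was still alive at the end (censored).
hours_treated = [3, 6, 6, 12, 12, 24, 24, 24, 48, 72]
dead_treated  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]      # 1 = death observed, 0 = censored

hours_control = [72] * 10
dead_control  = [0] * 10                              # no mortality in the control group

result = logrank_test(hours_treated, hours_control,
                      event_observed_A=dead_treated,
                      event_observed_B=dead_control)
print(result.test_statistic, result.p_value)          # curves differ if p < 0.05
```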
Cytotoxic Activity After analyzing the images obtained by fluorescence microscopy, it was possible to observe that within 24 hours of exposure there was no significant cellular necrosis of the hemocytes of the larvae treated with tiliroside when compared with the hemocytes of the control group. Greater propidium iodide (PI) impregnation was observed for cells treated with the dichloromethane fraction, 7,4′-di-O-methyl-8-O-sulphate flavone, and hexane fraction, with percentages of hemocyte necrosis of 15.4%, 11.0% and 7.0%, respectively (Figure 4). The cell types found in each treatment were identified in order to correlate the results obtained with the possible mechanisms of action involved in cell death. Morphological analyses included observation of region geometries and obtaining data of interest [15]. The identification of hemocytes was performed based on comparisons with the literature [16], and the diameter and area were measured using the ImageJ software. Table 2 lists the numbers of total and viable cells counted in the hemocytometer with computational images and ImageJ software. The main uses of the program include optimization, algebraic manipulation, counting, and defining the areas and diameters of the cells under analysis. With the data obtained in Table 2, it was also possible to calculate the number of necrotic cells in each experiment (Figure 4). The dichloromethane fraction presented a higher percentage of necrotic cells (stained) in relation to the total number of cells counted (15.4%). The 7,4′-di-O-methyl-8-O-sulphate flavone (sulphated flavonoid) had a percentage necrosis of 11.0%, followed by the hexane fraction (7.0%), and tiliroside (glucoside flavonoid) did not show necrotic cells, a result very similar to that seen in the control. There was no statistical difference between the groups. In the present study, the hemocytes found in the analyses were oenocytoids, prohemocytes, and plasmatocytes. The prevalence of each hemocyte type for the test substances is shown in Figure 5. The oenocytoids were predominant in the tiliroside treatment and the negative control (51.6 and 100.0%, respectively), while prohemocytes were in the majority in the hexane and dichloromethane fractions and in the substance 7,4′-di-O-methyl-8-O-sulphate flavone (96.0%, 59.9% and 73.8%, respectively). The plasmatocytes were found only in the dichloromethane fraction and 7,4′-di-O-methyl-8-O-sulphate flavone (7.4 and 13.1%, respectively); the presence of this hemocyte suggests that the larvae need to recruit more defense agents to combat these substances, which may reflect their greater toxicity to the larvae, corroborating what was seen previously in the survival tests and NO production in the larvae of A. aegypti exposed to these substances.
Studies on this survival profile are still scarce; however, Nunes (2013) reported that Agave sisalana had a similar larvicidal action over time using the concentration of 6.5 mg/mL. For A. sisalana, the larvae started dying after 12 h, reaching 88.0% larval mortality at 24 h of exposure [18]. During this assay, the macroscopic aspects of the larvae and their morphological changes were also observed after 24 h, allowing us to verify that in the test groups, even when the larvae were not dead, they were weakened, with reduced motility, and showing alteration in color and body aspects. In the negative control group, the larvae had normal external morphology and motility [19]. Larvae treated with the hexane fraction showed greater body deterioration. On the other hand, the dichloromethane induced a contraction of larvae body. The substances induced toxic effects on many regions of the body (including thorax, abdomen, anal gills, loss of external hairs, crumbled epithelial layer of the outer cuticle, and shrinkage of the larvae), results similar to those found in other studies [20,21]. These data also help explain the greater effectiveness of the hexane fraction, which requires half the concentration of the dichloromethane fraction to have the same percentage of mortality within 24 hours of exposure. The larvae exposed to the sulphated flavonoid (7,4 -di-O-methyl-8-O-sulphate flavone) showed greater deterioration and loss of color, being more opaque than those exposed to tiliroside, suggesting that these substances act with different mechanisms of action. NO production was quantified as nitrite ion in the hemolymph of A. aegypti larvae (L4). For tiliroside, the NO was evaluated after 72 h because it developed later mortality. For the other test substances, the 24 h interval was used [13]. The hexane fraction and tiliroside showed a similar NO production profile, where after a maximum peak in the first hours of exposure, the concentration decreased over time. This decrease may be related to the slower action of these compounds on the larvae. This later response could cause a smaller number of hemocytes to be recruited and low production of defense cells, which would result in a lower concentration of NO and a greater action of the aggressor agent. Meanwhile, the dichloromethane fraction and the 7,4 -di-O-methyl-8-O-sulphate flavone developed the opposite result on NO concentration, increasing over time. The result on NO concentration was more significant for the dichloromethane fraction, reaching 123.8 µM. This higher NO production may be related to a greater recruitment of hemocytes to act against the larvicidal agent, since these substances start to trigger mortality in the first hours, being more aggressive. Our results show that all substances tested cause a significant increase in the NO levels of larvae. These levels are many times higher than the basic levels of the controls (64.5, 213.4, 96.4, and 136 times for hexane, dichloromethane, sulphate, and tiliroside, respectively, in 24 h). The excess of NO is potentially toxic, especially with regard to oxidative stress, as demonstrated by Oliveira et al. (2016) [22]. Nunes et al. (2015) have reported that the increase in NO production by larvae hemocytes is correlated with the immune response against foreign agents [9]. Previous studies have reported an increase in NO production by invertebrates over time, showing the involvement of this molecule in the insect's immune defense [6,10,23]. 
Determining the total cell number and cell viability is considered an important measure for studying the mechanisms of action of substances in insects and larvae [18]. There are several ways to measure cell viability, the most common being the detection of membrane integrity. Defective membranes allow the release of intracellular components that may be found in extracellular fluids [15]. In our study, propidium iodide (PI) was used as a marker of cell necrosis: it crosses only necrotic cell membranes, staining the DNA and RNA present in the cytoplasm. PI emits red fluorescence when absorbing UV light [18,24,25]. The results showed greater PI impregnation by cells treated with the dichloromethane fraction, 7,4′-di-O-methyl-8-O-sulphate flavone, and hexane fraction, respectively. No necrosis was detected in the cells of the larvae treated with tiliroside; the slower activity of this substance may cause different cell responses. These data corroborate the finding that the dichloromethane and hexane fractions and the sulphated flavonoid recruit a greater number of defense cells, while tiliroside attracts an insufficient number of cells, which explains the lower NO production. There are few studies that evaluate the percentage of cell necrosis through fluorescence microscopy and PI staining for hemocytes of A. aegypti larvae. In a previous investigation, Nunes et al. (2015) used flow cytometry to analyze the percentage of hemocyte necrosis within 3, 6, 12, and 24 h of exposure to A. sisalana. That study showed 21% necrosis in the first 12 h of exposure of the larvae and 16.5% at 16 h, compared to the control group [9]. The free circulating cells in the hemolymph are called hemocytes and have different forms and functions [18]. The number and types of hemocytes can vary in response to stress, injuries, and infections. After an injury or contact with toxic compounds, the hemocytes migrate to the site where they destroy the invading agents, as part of the insect's defense mechanisms, including recognition, phagocytosis, encapsulation, coagulation, nodule formation, and cytotoxicity [16]. In the present study, the hemocyte types found were oenocytoids, prohemocytes, and plasmatocytes. Oenocytoids measure 7-10 µm in diameter, possess a round shape, and have small, lobulated, and eccentric nuclei. Prohemocytes are the smallest cells found in hemolymph. They are usually found in groups, with spherical, oval, or even elongated shapes, measuring 5-7 µm in diameter. They are very similar to oenocytoids when observed in the Neubauer chamber, differing by size. Plasmatocytes are very polymorphic cells, ranging from rounded to elongated, with 9-40 µm in diameter [16]. In our study, plasmatocytes were observed only in the larvae exposed to the dichloromethane fraction and 7,4′-di-O-methyl-8-O-sulphate flavone, the two treatments that presented the highest percentages of necrosis. Previous studies have shown that granulocytes and plasmatocytes are the main hemocytes actively involved in the cellular defenses of insects [26]. Greater production of prohemocytes was observed for 7,4′-di-O-methyl-8-O-sulphate flavone and the dichloromethane and hexane fractions, and oenocytoids were predominant in larvae treated with tiliroside.
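As a side note on how such morphometric criteria can be applied in practice, the following small helper is only a sketch that uses the diameter ranges quoted in this paper; because the ranges overlap, size alone can only narrow the options, and morphology must settle the final classification.

```python
# Diameter ranges (µm) quoted in the text; the overlaps are deliberate.
SIZE_RANGES_UM = {
    "prohemocyte":   (5, 7),
    "oenocytoid":    (7, 10),
    "granulocyte":   (8, 20),
    "plasmatocyte":  (9, 40),
    "adipohemocyte": (12, 50),
}

def candidate_types(diameter_um):
    """Return the hemocyte types whose quoted size range contains this diameter."""
    return [name for name, (lo, hi) in SIZE_RANGES_UM.items() if lo <= diameter_um <= hi]

print(candidate_types(6.0))    # ['prohemocyte']
print(candidate_types(9.5))    # ['oenocytoid', 'granulocyte', 'plasmatocyte']
```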
The cell viability data led us to suggest that the NO produced by the larvae treated with the hexane fraction and tiliroside would not be enough to generate an effective response against the toxic substances and would decay over time, resulting in larval mortality. The plasmatocytes found when the larvae were treated with the dichloromethane fraction and the 7,4′-di-O-methyl-8-O-sulphate flavone suggest that these compounds were recognized as more aggressive toxic agents. Thus, other defense cells were recruited and started to produce excessive NO to generate an effective response against the toxic larvicidal substances. The greater the number of cells killed by necrosis, the more NO the remaining cells will produce as a defense mechanism [6]. Both flavonoids tested here were isolated from the dichloromethane fraction [12]. According to the results obtained, 7,4′-di-O-methyl-8-O-sulphate flavone has the most promising activity of the fraction, unlike tiliroside, which triggered later activity and resembled the results found in the negative control. Our findings corroborate those previously reported about the importance of the sulphate group (OSO3H) for larvicidal activity [13]. Plant Material The aerial parts of H. velutina were collected in Serra Branca/Bahia/Brazil, in the winter season. The material was identified by Prof. Adilva de Souza Conceição, and a specimen voucher was kept in the Herbarium of the State University of Bahia. The plant material was oven dried at 40 °C, and 1976.0 g of the powder was macerated with 95% ethanol for 72 h. The extract solution was dried under reduced pressure at 40 °C and provided 39.7 g of crude ethanolic extract (CEE) that was submitted to liquid-liquid chromatography using hexane, dichloromethane, ethyl acetate, and n-butanol, resulting in the respective fractions and a hydroalcoholic fraction [12]. The biomonitored phytochemical study of extracts derived from the aerial components of H. velutina against A. aegypti larvae resulted in the discovery of the promising biological activities of the hexane and dichloromethane fractions. From those fractions, chromatographic and spectroscopic methods resulted in the isolation and identification of 17 substances [12,13]. Two flavonoids isolated from the dichloromethane fraction, tiliroside (glucoside flavonoid) and 7,4′-di-O-methyl-8-O-sulphate flavone (sulphated flavonoid) (Figure 6), were submitted to in vitro assays, which demonstrated that these compounds showed larvicidal activities at low concentrations [13]. This study has been registered in the National System of Genetic Resource Management and Associated Traditional Knowledge (SisGen-A568B8A). Larvae Survival Time Exposed to Test Substances The fourth-stage A. aegypti larvae (L4) (Rockefeller strain) were obtained from the Laboratory of Biotechnology Applied to Parasites and Vectors, Biotechnology Center, Federal University of Paraiba. They were kept in a biological oxygen demand (BOD) incubator at 27 ± 2 °C and a relative humidity of 27 ± 5%, with a photoperiod of 12 h light and 12 h dark [13]. The survival time of the larvae exposed to the test substances was evaluated at intervals of 0, 3, 6, 12, 24, 48, and 72 h. The concentrations used for the hexane fraction (5.0 mg/mL), dichloromethane fraction (10.0 mg/mL), and for both isolated compounds, 7,4′-di-O-methyl-8-O-sulphate flavone and tiliroside (1.0 mg/mL), were determined experimentally [13].
To evaluate the insecticidal activity of the different substances over the 72 h period, a comparison of the survival curves was performed using the log-rank (Mantel-Cox) and chi-square tests in the Prism program. Measurement of Nitric Oxide The production of nitric oxide (NO) was determined with the Griess reagent in a pool of hemolymph from 20 larvae (L4) in a final volume of 20 µL in PBS buffer [27]. The larvae were exposed to the LC50 of the hexane fraction, dichloromethane fraction, 7,4′-di-O-methyl-8-O-sulphate flavone, and tiliroside for different periods of time (3, 6, and 24 h). The control groups were exposed only to distilled tap water and 1% DMSO [28] for the same periods of time. The assays were performed in triplicate. To determine NO2− concentrations, an aliquot of each sample was analyzed by spectrophotometry. The absorbance was measured using a microplate reader with a 562 nm filter, and the NO was quantified using a standard curve of NaNO2 as reference. Statistical analyses were performed using GraphPad Prism software for Windows version 5.0 (GraphPad Software, San Diego, CA, USA). Significant differences among groups were analyzed by analysis of variance (ANOVA) followed by Tukey's post hoc test when appropriate (P < 0.05). Cytotoxicity Assay Fluorescence microscopy was performed with a pool of hemolymph from 20 larvae (L4) exposed to the LC50 of the fractions and substances isolated from H. velutina for 24 h. The larvae were washed in PBS buffer and immobilized under refrigeration (1-2 min). Then, they were placed in a petri dish and, with the aid of a magnifying glass and a scalpel blade, they were decapitated. The hemolymph was collected using a glass microcapillary and transferred into a 1.5 mL Eppendorf tube containing 100 µL of PBS buffer. The hemolymph pool was then centrifuged under refrigeration (4 °C) at 1500 rpm for 10 min. The supernatant was discarded and 20 µL of the cell pellet was transferred to another Eppendorf tube containing 160 µL of PBS. Then, 20 µL of propidium iodide (PI) was added to differentiate intact hemocytes from necrotic ones. The sample was then incubated for 15 min in the dark. Using a micropipette, a 10 µL aliquot of the sample was placed in the Neubauer chamber, and cell integrity and viability were analyzed with a fluorescence microscope using the 20× objective [15]. The cells were characterized using morphology and size as parameters. Prohemocytes are spherical, oval, or even elongated, measuring about 5-7 µm in diameter. Adipohemocytes are round or oval cells that measure approximately 12-50 µm in diameter. They have a round nucleus that is centralized or displaced to the periphery of the cell, and their cytoplasm is quite characteristic, with the presence of large lipid vesicles. Granulocytes have a circular shape, with an 8-20 µm diameter. They have an irregular plasma membrane and the cytoplasm exhibits some dense granules. Plasmatocytes are very polymorphic cells, 9-40 µm in diameter. The plasma membrane has an irregular surface showing filopodia and pseudopods, with characteristics of fibroblasts. Oenocytoids measure 7-10 µm in diameter and have a round shape, with a small, lobulated, and eccentric nucleus. The ultrastructure reveals a nucleus without a prominent nucleolus and a homogeneous cytoplasm with few organelles. The ImageJ software was used to measure the diameter and area of the cells in order to characterize them [16].
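As an illustration of the two quantitative steps above (the authors used GraphPad Prism; the snippet below is only a sketch with invented numbers), nitrite can be read off a NaNO2 standard curve by linear regression of absorbance at 562 nm, and group differences can then be tested with one-way ANOVA followed by Tukey's post hoc test:

```python
import numpy as np
from scipy import stats

# Hypothetical NaNO2 standard curve: known concentrations (µM) vs absorbance at 562 nm.
std_conc = np.array([0, 12.5, 25, 50, 100])
std_abs  = np.array([0.02, 0.10, 0.19, 0.37, 0.74])
fit = stats.linregress(std_conc, std_abs)              # absorbance = slope*conc + intercept

def nitrite_uM(absorbance):
    """Invert the standard curve to estimate nitrite concentration (µM)."""
    return (absorbance - fit.intercept) / fit.slope

# Hypothetical triplicate readings per treatment, converted to µM.
groups = {
    "hexane":          [nitrite_uM(a) for a in (0.09, 0.08, 0.10)],
    "dichloromethane": [nitrite_uM(a) for a in (0.88, 0.95, 0.91)],
    "control":         [nitrite_uM(a) for a in (0.02, 0.03, 0.02)],
}

f_stat, p = stats.f_oneway(*groups.values())           # one-way ANOVA across groups
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")
if p < 0.05:
    print(stats.tukey_hsd(*groups.values()))           # pairwise Tukey HSD (SciPy >= 1.8)
```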
The total concentration of cells is given by the sum of the number of viable cells (not stained) and the number of nonviable cells (stained), divided by the number of quadrants counted and multiplied by the dilution factor, according to Equation (4): (n_V + n_D)/n_C × D × 10^4 = cells/mL (4), where n_V corresponds to the total number of viable cells, n_D is the total number of nonviable cells, D is the dilution factor (in our case D = 10), and n_C is the number of quadrants counted on the Neubauer chamber (in our case n_C = 4). Conclusions The present study contributed to the knowledge of the biological action of the species H. velutina and its compounds against A. aegypti larvae. The mechanisms of action responsible for causing the death of mosquito larvae are complex and generally multifactorial. Our results show that all substances tested cause a significant increase in the NO levels of larvae. With regard to cytotoxicity, we found that only tiliroside does not cause cell necrosis. Thus, we can conclude that an increase in NO levels plays a key role in the mechanism of action of the larvicidal activity of H. velutina. This effect is potentiated by the necrotizing action of all substances, except tiliroside.
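A direct transcription of Equation (4) into code makes the bookkeeping explicit; the counts below are invented for illustration and are not taken from Table 2.

```python
def cells_per_ml(n_viable, n_dead, n_quadrants=4, dilution=10):
    """Equation (4): total cell concentration from a Neubauer chamber count."""
    return (n_viable + n_dead) / n_quadrants * dilution * 1e4

# Example with hypothetical counts: 180 unstained and 20 PI-stained cells over 4 quadrants.
total = cells_per_ml(180, 20)
necrotic_pct = 20 / (180 + 20) * 100
print(f"{total:.0f} cells/mL, {necrotic_pct:.1f}% necrotic")   # 5000000 cells/mL, 10.0% necrotic
```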
2020-07-08T13:02:48.720Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "f921a3c89504fec99213ca8b69a912bf0b412577", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/25/13/3015/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "823ac25a49bbb407f23de70ec6d6d2ad3fc3f721", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
13155886
pes2o/s2orc
v3-fos-license
Towards a Unified Taxonomy of Health Indicators: Academic Health Centers and Communities Working Together to Improve Population Health The Clinical and Translational Science Awards (CTSA) program represents a significant public investment. To realize its major goal of improving the public’s health and reducing health disparities, the CTSA Consortium’s Community Engagement Key Function Committee has undertaken the challenge of developing a taxonomy of community health indicators. The objective is to initiate a unified approach for monitoring progress in improving population health outcomes. Such outcomes include, importantly, the interests and priorities of community stakeholders, plus the multiple, overlapping interests of universities and of the public health and health care professions involved in the development and use of local health care indicators. The emerging taxonomy of community health indicators that the authors propose supports alignment of CTSA activities and facilitates comparative effectiveness research across CTSAs, thereby improving the health of communities and reducing health disparities. The proposed taxonomy starts at the broadest level, determinants of health; subsequently moves to more finite categories of community health indicators; and, finally, addresses specific quantifiable measures. To illustrate the taxonomy’s application, the authors have synthesized 21 health indicator projects from the literature and categorized them into international, national, or local/special jurisdictions. They furthered categorized the projects within the taxonomy by ranking indicators with the greatest representation among projects and by ranking the frequency of specific measures. They intend for the taxonomy to provide common metrics for measuring changes to population health and, thus, extend the utility of the CTSA Community Engagement Logic Model. The input of community partners will ultimately improve population health. Launched in 2006, the Clinical and Translational Science Award (CTSA) Program constitutes a significant public investment estimated at $500 million in 2012. 1 Currently, 61 academic health centers (AHCs) in 30 states and the District of Columbia participate in the CTSA Consortium. 1 A central goal for the overall effort is to improve the health of local communities and the nation by "streamlining science, transforming training environments, and improving the conduct, quality, and dissemination of research." 2 With the goal of guiding and evaluating health interventions nationally and within specific communities, the CTSA institutions must collectively adopt an integrated set of community health indicators that reflect both public health* and community-driven priorities. 3,4 (Here we define "community" broadly as any group defined by common geography [e.g., neighborhoods], membership [e.g., ethnicity], or experience [e.g., veterans].) Currently, many international and U.S. policy initiatives have created community health indicators. 5-44 However, collectively, these indicators pose-for those who would adopt them-multiple challenges including substantial overlap, ambiguity, and disagreement. Further, if the data collected do not reflect community priorities, indicators will ultimately lack relevance for the entire range of stakeholders and result in further divergence of metrics. 
The purpose of this article is to provide an overview of the wide range of community health indicators and to propose the use of a systematic, common taxonomy for organizing and discussing them. We then illustrate the taxonomy's application through a review of 21 health indicator projects. Finally, we discuss the intersection of the health needs of communities with the availability of data, plus the related, important need for striking a balance between the data requirements of AHCs and those of local public health departments and community partners. Health Indicators: An Overview The taxonomy of community health indicators we propose will help align CTSA activities and, in turn, allow us to define, measure, compare, and improve the effectiveness of interventions within and across CTSAs in a broad effort to improve the health of communities and, ultimately, the United States. At least four conditions are necessary for such a taxonomy.
The taxonomy of community health indicators must: (1) reflect community input, be relevant to communities, and have utility for both communities and researchers; (2) be capable of identifying a set of measures or metrics that can be used to compare outcomes across multiple community health interventions so as to enable both comparative effectiveness research (CER; e.g., comparing different public health interventions 3 ) and large-scale metaanalyses (e.g., aggregating results from similar community intervention studies for specific disease processes); (3) achieve, or at least work toward, consensus on a shared language for community engagement processes, interventions, and health outcomes among community members, researchers, and public policy makers; and (4) interface directly with the CTSA Community Engagement Logic Model. 4 The Community Engagement Logic Model is a tool the CSTA Consortium has developed that focuses on building infrastructure to support relationships and collaboration between community and academic research partners. The logic model uses evidence-based structures and processes extant within CTSAs to support CTSA institutions' engagement with community partners. 4 The model includes inputs of community engagement activities and results in short-term outcomes (i.e., increased bidirectional trust and communication), intermediate or midterm outcomes (i.e., increased community capacity to engage in research or university capacity to engage with communities), and longterm outcomes (i.e., improved translation of science to new practices, policies, and programs that ultimately improve population health). Why look at community health indicators? The study of community health indicators is key to translating new scientific knowledge to applied health systems and practices consistently and broadly. It also allows investigators to conduct better CER, increases the ability to improve the relevance of publichealth-related science, and makes it easier to competently measure underlying factors that impede adoption of new health findings. Creating shared metrics among health researchers, government agencies, and communities themselves facilitates policy and decision making to improve population outcomes for a wide range of groups. By identifying indicators and best-fit metrics, researchers and communities are better able to more adequately define root causes and address complex issues related to health inequities. 5 Finally, organizing health indicators into a community-engagement-focused taxonomy will allow organizations to leverage data collection for measures already employed for other purposes such as pay-for-performance, accreditation, or quality improvement programs. A unified taxonomy will advance community priorities while also improving the delivery of health care system services. 3,4 Community health indicators and AHCs as data warehouses Another reason for studying community health indicators is related to the current efforts to strengthen large health systems, including AHCs, that are vested in improving population health. The rapid advance of electronic health records, coupled with the development of large provider networks, puts many AHCs in the position of retaining detailed, primary data on the health status and health services utilization of many groups-sometimes most of the local population. 
The ability to build population health reports from primary health status and services data, and the challenges and limitations of this approach, are of key importance in the development of community health indicators. Wide-scale deployment of electronic health records, in conjunction with traditional ongoing public health surveillance methods (e.g., cancer registries), can create a mix of realtime aggregated data, which can be supplemented by surveys targeted to particular communities. In addition, these new data sources permit real-time tracking of measures, and some of the resulting metrics-both biomedical (e.g., HgbA1C) and those related to health systems utilization (e.g., transportation for medical appointments, obesity prevention services, access to social support services, group counseling sessions) 6 -may be of considerable interest to communities. Further, members of the research community can often access these new data sources (e.g., electronic health records, targeted surveys), such that the sources serve as another bridge between the community and academia. Key Literature Informing a Taxonomy of Community Health Indicators As we began to develop our taxonomy, we reviewed relevant literature, including historical and government-related uses of health indicators. Over several decades, various U.S. and international experts have identified their own lists of key health indicators in endeavors to focus health promotion and disease prevention efforts while monitoring changes in outcomes. European Community Health Indicators The taxonomy of standardized health indicators we propose is not without precedence. The European Community Health Indicators (ECHI) Project advanced the following four categories to serve as the conceptual basis for refining community health indicators across Europe: (1) demographic and socioeconomic factors, (2) health status, (3) determinants of health, and (4) health systems. 9 These main categories have commonly been referred to as a basis for defining more specific health indicators. 9, [17][18][19] The ECHI Project indicators were developed to generate national and regional public health reports to shape policy; to create a logical framework for longitudinally monitoring health programs; to identify data gaps for prioritizing data collection and harmonization processes (i.e., step-by-step procedures used to arrive at a set of decisions); and to enable the establishment of a data-sharing infrastructure in the European Union. 9 The EU consortium's model focuses on nonmedical ecological determinants of health-those that emphasize mental health and social-cultural-environmental structures and processes. 18 There are parallels between ECHI goals and several CTSA initiatives. First, ECHI goals and strategies echo those of CTSA Strategic Goal Four: "enhancing the health of our communities and the nation." 3 Like the CTSA initiative, ECHI represents a largescale strategy across diverse populations that supports the improvement and achievement of equity in access, quality, and health care delivery. It is focused on a centralized information exchange to facilitate comparisons, disseminate best practices, and achieve health equity. 
3 The centralized European health care system model and the ECHI infrastructure are accelerating development of a common platform to disseminate health research findings and technology to community users, which will reduce barriers to communication and collaboration, strengthen public health relationships, increase community research capacity, and accelerate policy change. 9 Despite many similarities, the ECHI and CTSA programs have differences, especially in the challenges each program faces. One challenge for ECHI is moving forward with the adoption of the common classification system. Another involves standardizing all health care systems such that each has similar basic care infrastructure components. In contrast, the U.S. system's challenge is to centralize the health care system so that improvements can be rapidly disseminated and uniformly implemented across communities. Over time, the differences between and lessons learned across these two major systems (EU and U.S.) will be mutually beneficial and informative. Towards a Taxonomy of Community Health Indicators As the literature shows, policy makers and leaders at various levels have established systems for monitoring change in health through health indicators. Fundamental is the need to provide common, highquality, reliable, objective data that measure population health in areas where progress can be tracked over time. 11,20 A set of indicators will support the evaluation of community engagement activities outlined in the CTSA Community Engagement Logic Model. 4 Finding common ground between these CTSA community engagement measures and other data-driven evaluations will help achieve some economies of scale through the use of current metrics. Common metrics may also advance the development of electronic information systems and databases that can support community engagement evaluations and grant development. Most important, the development of a detailed taxonomy will serve as a guide for the production of public health reports and foster the dissemination and, when appropriate, the implementation of health research findings to communities. A taxonomy is a particular classification system arranged in a hierarchical structure providing supra-and subtype relationships. 21 Our research uncovered common concepts across the literature that, when compiled, fell into three ordered and nested categories. The community health indicator taxonomy provides a conceptual foundation for the 21 indicator projects we explore in this article. Synthesizing the use of indicators and measurement terms in these projects points to a simple hierarchy that can be expressed in a single sentence: Determinants of health have categories of community health indicators that include specific quantifiable measurements (see Figure 1). The hierarchy for our taxonomy is based on the need to express observations ranging from broad-based determinants of health to highly specific, quantifiable and measureable phenomena. Determinants of health, at the topmost level, include the social, economic, and physical environment, as well as a person's individual characteristics and behaviors. 8, [17][18][19] These determinants also include factors that combine to affect individual and community health, both directly and indirectly. 8, 20 Our review of indicator projects reveals a list of indicator types or classes that are organized within their respective, overarching determinants of health categories. 
In the middle, community health indicators, more specific than determinants of health, but less specific than quantifiable measurements, are particular characteristics of an individual, population, or environment that can be measured and used to describe the health of that individual, population, or environment. 12 Health indicators are considered to be tools 21 with enough information or data to describe and compare (across individuals, populations, or environments) health statuses and health services. 17,22,23 Quantifiable measures, at the lowest point in our hierarchy, are the standard reference points through which other points of information can be evaluated. 24 Data measures can originate from various indicator categories including epidemiological, socioeconomic, geographic, health care utilization, 25 health care quality, 26 social capital, 27 and resource distribution. 28 We acknowledge that others have used these same terms for other purposes and to have other meanings; for example, some uses of the term indicator in programs such as the Baldridge Criteria for Performance Excellence in Health describe comparisons of processes or meta-comparisons, whereas measure indicates a more direct, data-driven evaluation. 23 We hope that one benefit of the taxonomy we propose will be to standardize vocabulary. Figure 1. Hierarchical or nested relationship among community health indicators, based on the need to express observations ranging from broad-based determinants of health to highly specific, quantifiable and measurable phenomena. Summary of Community Health Indicator Projects Using the Proposed Taxonomy Approach To develop a picture of current health indicator efforts, to identify gaps in current understanding and levels of consensus, and to guide future work, we present here a summary of indicator projects based on a review of available literature. Using our taxonomy (Appendix 1) and our hierarchy described above (Figure 1), we identified commonly used broad categories of determinants of health, extensive lists of categories of indicators, and many specific quantifiable measures. We organized health indicator projects into one of three jurisdictions: international (multi-country), 20,29-32 national (country), 7,11,12,24,26,33-35 or local (state, county, and/or special populations). 17,[36][37][38][39][40][41]45 Next, we ordered the types of health indicator projects by frequency at two levels. At one level, we ordered the broad Determinants of Health categories by the relative number of specific quantifiable measures identified for each in the literature; in other words, "Health System Services" had the highest number of specific quantifiable measures, while the more challenging to collect, but critically important "Social Structure" category had the fewest. At the second level, we ranked, within each Determinant of Health category, the indicators with the greatest representation across the various indicator projects; for example, within the Health System Services category, "access to health care provider" measures are discussed in 18 out of 21 projects, whereas "composite" measures are discussed in only one of these projects. Finally, the shaded boxes in the Measures area of Appendix 1 denote at least one specific measure that has been identified within an indicator category.
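One way to make the three-level hierarchy operational is to store it as nested data, with determinants at the top, indicator categories beneath them, and quantifiable measures at the leaves. The sketch below uses category names mentioned in the text but fills in purely hypothetical example measures (they are not taken from Appendix 1); the counting helper mirrors the ranking step described above, ordering determinants by how many specific measures they contain.

```python
# Hypothetical, partial instantiation of the proposed taxonomy.
taxonomy = {
    "Health System Services": {                       # determinant of health
        "access to health care provider": [           # indicator category
            "has a usual source of care",              # quantifiable measures (hypothetical)
            "time to nearest primary care clinic",
        ],
    },
    "Social Structure": {
        "social capital": [
            "self-reported neighborhood trust",        # hypothetical
        ],
    },
}

def measures_per_determinant(tax):
    """Rank determinants by how many specific quantifiable measures they contain."""
    counts = {d: sum(len(m) for m in cats.values()) for d, cats in tax.items()}
    return dict(sorted(counts.items(), key=lambda kv: kv[1], reverse=True))

print(measures_per_determinant(taxonomy))   # {'Health System Services': 2, 'Social Structure': 1}
```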
Among the 21 indicator projects (many of them providing national and regional data) that we reviewed, we observed great variation among the specific quantifiable measures communities have used to describe and track health outcomes. Suggested indicators that lack a clear set of population-based, public health surveillance data measurements include those related to the role of social status and social capital, 27,37 discrimination and stress, 8 and perceptions of role in society (see Disparities, Social Cohesion, and Social Structure in Appendix 1). In 2003, the Task Force on Community Preventive Services provided an extensive list of variables which may measure a more local social environment's impact on health 36 rather than directly addressing either community well-being on a larger scale or even the interconnectivity and support among persons, 43 communitybased organizations, government, 44 local businesses, 46,47 and other built environmental conditions and resources. 48 However, as illustrated by our taxonomy, more work needs to be done in this arena. Working With Community Partners to Identify Local Health Indicators Categories of indicators can become important guides for individuals and communities developing and planning health improvement interventions. However, these indicators clearly need to be streamlined and made relevant to individual communities. The current number of unconnected sets of various indicators are indicative of disagreement among community members and organizations serving communities. And, although some existing sets are reliable and valid, the sheer variety of these sets, across federal and other systems, creates enormous barriers to reaching consensus and often leads to parallel or overlapping indicator projects. Also, trust and confidence in both private and federal systems and in public health often do not exist among all consumers or in many underserved populations. 48 Therefore, to promote community trust and to stimulate interest in the indicators, a next logical step is to involve community stakeholders, as well as health care professionals, researchers, and policy makers, in assessing existing indicator options and in identifying their own priorities. The role of community partners Over the last two decades, a growing number of communities, led by local health departments, clinical care systems, not-for-profit organizations, and local/ county governments, have developed their own community health indicators relevant to their local or state context; one example is the Wisconsin County Health Rankings. 45 Tasked with conducting community needs assessments as one of the three core public health functions, both state and local health departments currently prioritize health indicators, monitor health status, and investigate health problems in their community. These organizations regularly make their data available (e.g., via community health profiles, vital statistics, and health status) to constituents who, in turn, use the data to depict the health challenges and strengths of smaller geographic areas. 45 These local health departments and other groups could come together to integrate indicators from surveillance systems, real-time electronic health records, and local data priorities (gleaned, for example, from targeted health surveys or environmental monitoring). 
Community stakeholders may especially welcome maps generated by Geographic Information System initiatives or visual portrayals of information to better use data as a positive force for strategic planning and health improvement. Showing the impact of community engagement on health through evaluating community-academic partnerships requires evaluating both the process of partnering and the resulting impact of the partnership on the system (e.g., greater capacity, community empowerment, new policies, or clinical practice changes). Community involvement in creating logic models for change enables communities to document benchmarks for progress and to formulate hypotheses about which indicators and which partnership practices may enhance capacity or improve system change measures. 49 This process of hypothesis testing and partnership consolidation can, in turn, contribute to population health changes. 50 Engaging community groups also helps the community identify the health indicators that have the greatest potential to improve local well-being. Additionally, local neighborhoods may assist in conducting assessments of community needs and community strengths or assets so as to provide data for health improvement efforts, identifying areas of strength and leveraging these strengths to address needs or concerns. 28 Some of the community agencies that have conducted their own needs assessments range from social service agencies to faith-based organizations, to hospitals, local funders, and community coalitions. Community groups use a variety of data sources and organizing principles. Local coalitions may obtain independent funding to assess health needs and establish indicators to monitor progress over time. They may use data supplied by state and local authorities and/or collect data on their own using community-organizing principles (e.g., trust building) to develop consensus on indicators. Many organizations demonstrate a tremendous ability to connect with their constituents, many of whom may be underserved or marginalized. A number of strategies for identifying community needs and assets, including Mobilizing for Action through Planning and Partnerships, 51 Protocol for Assessing Community Excellence in Environmental Health, 52 and Assessment Protocol for Excellence in Public Health, 53 have been popularized. Although some universities may be involved as leaders in or organizers of these activities, it would be an advance for academic institutions to work more closely with communities to improve health at this bidirectional level. The role of the CTSAs Today, the CTSAs have a unique opportunity to contribute both to bridging the gap between academe and public/ community health improvement and to documenting that change through an integrated set of health indicators. Our recommendation would be for CTSA institutions to include in their investigations and research protocols indicators representing each of the major Determinants of Health categories; that is, to examine factors related, for example, to health system services and general health status, to personal behavioral and community socioeconomic composition, and to social cohesion and social structure. As CTSAs engage with communities in selecting and measuring indicators, community stakeholders can participate in and add to already-extant community indicator projects. 
54,55 Local efforts to measure indicators, many of which already involve partnerships with universities, would make valuable contributions to the efforts to establish a standard set of indicators for the nation. CTSAs can therefore build on local connections and collaborations. CTSA institutions may provide specialized infrastructure and offer technical assistance, expertise, and resources while both honoring the work that has emerged from within community institutions and addressing areas that are of high priority to community members and leaders. 54,55 Choosing the right combination Our hope for this community health indicator development work is that it will allow individual communities to use indicators more meaningfully. A magnitude of data is available, especially as health systems such as AHCs deploy electronic health records, so determining which of the multiple basic community health indicators are appropriate for a specific community is important. The taxonomy allows leaders at the county or city level to review the indicators and measures available and to select those that are most appropriate for their purposes and constituents. Every community interested in improving the health of local populations must identify a health issue and target population and collect baseline data. For CTSA institutions and other universities, the greatest potential for improving the health of the community and nation will come through partnerships that blend the expertise and resources of universities with established community entities, including state and local departments of public health, as well as community-based organizations and grassroots groups. Benefits to the taxonomy itself In addition to standardized categories or indicators that could be adopted nationally, the CTSA Consortium has, as mentioned, developed an infrastructure logic model of community engagement structures and processes within CTSAs. 4 This logic model posits short-term outcomes, intermediate-term outcomes (increases in community capacity to engage in research and university capacity to engage with communities), and long-term outcomes (i.e., improved translation of science to health practices, policies, and programs that, ultimately, improve population health). Though the logic model takes into account the congruence of community and academic interests and outlines community-based strategies for determining health indicators and desired outcomes, there is still a paucity of specific, quantifiable measures or metrics for systems-capacity or population-health changes. Many of the midterm capacity and long-term, system-wide outcomes that result from CTSA-community partnerships could themselves serve as indicators or benchmarks of progress towards population health changes. 4 Seizing the Opportunity to Be Relevant to Communities The IOM, after evaluating progress of the CTSAs, identified community engagement as one of "three crosscutting domains that … are integral to effectively advancing clinical and translational science." 1 In the same report, the IOM provided recommendations to strengthen the support of community engagement efforts and noted that community support "is critical in all phases of clinical and translational research from basic research to clinical practice and community and public health." 
1 The CTSAs have a unique and timely opportunity through their community engagement programs and activities both to enhance academic-public-community partnerships and, through these partnerships, to support efforts to determine community health indicators at the national, state, and local levels. Armed with awareness of local community health activities, CTSA institutions are poised to be active players in health improvement efforts locally; they can provide infrastructure support, technical assistance, and leadership to the community health indicator development process. The challenge CTSA Consortium members have undertaken is to reconcile the potentially contradictory goals of, on one hand, arriving at standardized health indicators that allow for the monitoring and comparing of progress in improving public health outcomes and, on the other, respecting the interests of communities that want to retain ownership of the process for identifying health indicators that reflect local priorities rather than those imposed from outside. Minimally, the CTSA institutions and consortia need a shared language and a standardized hierarchy of categories of community health indicators. Possibly, they could also develop specific metrics for a core set of community health indicators, quantifiable outcome measures, and monitoring systems that not only allow local community organizations to begin to gather data more readily but also inform policy makers and the public on progress made in improving health. 26 The taxonomy we propose here is explicitly designed to serve the needs of the CTSAs and communities throughout the nation. Further, this effort is designed to situate community health indicators in the CTSA context of translational science, enhance the methodological rigor of community-engaged research, and ultimately improve population health.
2018-04-03T00:37:14.078Z
2014-02-25T00:00:00.000
{ "year": 2014, "sha1": "54250d6c064efff9fab670e8f98e5f20fd126634", "oa_license": null, "oa_url": "https://journals.lww.com/academicmedicine/Fulltext/2014/04000/Towards_a_Unified_Taxonomy_of_Health_Indicators_.18.aspx", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "54250d6c064efff9fab670e8f98e5f20fd126634", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
20636565
pes2o/s2orc
v3-fos-license
The 4-Arylaminocoumarin Derivatives' Log P Values Calculated According to Rekker's Method In QSAR (quantitative structure-activity relationship) and QSPR (quantitative structure-property relationship) studies, the physico-chemical property lipophilicity is used to predict the bioactivity of newly synthesized coumarin compounds. Lipophilicity is a property of a molecule that depends on, and can be changed by, modifications of the molecular structure. The parameter of lipophilicity, the partition coefficient (log P), is commonly used in drug design and is a numeric characteristic of the lipophilicity of the examined substance, a potential drug. The synthesis of 4-arylaminocoumarin derivatives from 4-hydroxycoumarin has been carried out. In this work we describe the fragmental method of calculating the partition coefficient according to Rekker's method, because the best correlation between calculated and experimental log P values was obtained with the Rekker model. 4-Arylaminocoumarin has a negative log P value, but substitution with alkyl or allyl groups at position 3 increases lipophilicity. Introduction of a methyl or ethyl group into position 3 increases lipophilicity, suggesting that the values of log P become higher as the chain length increases. An allyl substituent in position three increases lipophilicity to a similar extent as a methyl group. Aryl substituents decrease lipophilicity, but a general relationship among them could not be established. The results obtained in this study enable the further synthesis of new coumarin derivatives and the prediction of their biological activity and properties. Introduction The search for new drugs is a very difficult and expensive process. The experimental testing of many thousands of compounds for their biological activity and medicinal potential is an enormously expensive and time-consuming process. Today we can employ some kind of modelling by which the most unpromising compounds can be sorted out, so that only 200-300 compounds out of, say, 200,000 remain for experimental work (1). QSAR (quantitative structure-activity relationship) and QSPR (quantitative structure-property relationship) studies provide one such modelling framework for a cheap and fast search for biologically active molecules. The main use of QSAR and QSPR is in the selection of compounds for preparation and for biological, pharmaceutical and medicinal research (2). From this we can conclude that QSAR and QSPR methods can be used to predict the medicinal potential of both synthesized and not-yet-synthesized compounds. Practically, before the synthesis of a compound we can, using QSAR and QSPR, predict the physico-chemical properties and biological activity of the molecule. The fundamental axiom of QSAR and QSPR modelling is that the structure of molecules is reflected in their biological activities and physico-chemical properties (3). In the QSAR approach, molecular structure is described by a number of parameters which can even be calculated. The representation of molecular structures by numbers is a way to encode the structural information in QSAR and QSPR studies. The modelling process reduces to a correlation between two sets of numbers, one set representing the molecular bioactivity or property and the other set representing the molecular structure. This correlation is meaningful only if it is carried out for a larger set of molecules. One of the goals of QSAR and QSPR research is to predict the lipophilicity of a molecule that is a potential drug. The lipophilicity of a drug has a strong
influence on drug disposition, such as passage through cellular membranes, binding to plasma proteins, and elimination. This means that if we know the lipophilicity value of a substance, we can predict its passage through cellular membranes, its binding to receptors, and its elimination; in other words, we can predict whether the substance is promising or unpromising as a drug. Lipophilicity is a property of a molecule that depends on, and can be changed by, modifications of the molecular structure. The parameter of lipophilicity, the partition coefficient (log P), is commonly used in drug design and is a numeric characteristic of the lipophilicity of the examined substance, a potential drug. The values of the partition coefficient (log P) can be positive or negative. A higher log P value means higher lipophilicity of the drug, which then penetrates easily through the cell membrane, while negative log P values indicate hydrophilic characteristics of the drug. The partition coefficient describes the partitioning equilibrium of solute molecules between a lipid organic solvent (octanol) and water: P = Co / Cw, where Co is the concentration of a substance in the octanol phase and Cw is the concentration in the water phase. The partitioning of drugs between n-octanol and water reflects the process by which substances, potential drugs, are distributed between the aqueous biophase (intracellular and extracellular liquid) and the lipophilic biophase (cellular membrane) (4). The correlations of the partition coefficient, log P, with activity have been studied and a significant correlation was obtained (5). The log P value should be of practical use in developing new drugs and in optimizing the therapeutic index or toxicity. To be able to predict lipophilicity, theoretical and experimental methods for the determination of partition coefficient (log P) values have been developed. Methods of determination of the partition coefficient a) Experimental methods The shake-flask method is the most commonly used experimental method for determining partition coefficient (log P) values. The basic procedure for obtaining a partition coefficient is to shake a weighed amount of chemical in a flask containing measured amounts of water-saturated octanol and octanol-saturated water. The n-octanol/water partitioning system seems to mimic the lipid membrane/water systems found in the body. It must be remembered that the n-octanol/water system is only an approximation of the actual environment found at the interface between the cellular membranes and the extracellular/intracellular fluids. Often, the aqueous phase is buffered with a phosphate buffer at pH 7.4 to reflect physiological pH. The determination of partition coefficients is tedious and time-consuming, and some chemicals are too unstable and degrade during the procedure, which can take several hours. Quantification of the amount of the substance in the two phases is performed by an appropriate analytical method. This has led to attempts at approximating the partition coefficient. A second experimental approach, perhaps the most popular, has been high-performance liquid chromatography (HPLC) or thin-layer chromatography (TLC). This model also has limitations (6). In this chromatographic determination of a substance's lipophilicity, the retention times or Rf values in different mobile phases are discriminatory for it. b) Calculation methods for log P Hansch/Fujita's π-system and Rekker's f-system are methods for the calculation of log P.
It is expected that drugs would have the same log P values whether obtained experimentally or by calculation, but in practice this does not happen: comparing experimental lipophilicity values with calculated values does not give identical results. The best correlation between calculated and experimental log P values was obtained with the Rekker model. Rekker's f-system Rekker and co-workers chose the following equation as the basis of a new approach to calculating lipophilicity (7,8): log P = Σ (a_n × f_n), where f_n is the hydrophobic fragmental constant, i.e. the lipophilicity contribution of a constituent part of a structure to the total lipophilicity, and a_n is a numerical factor indicating the incidence of the fragment (f_n) in the structure. Synthesis of derivatives of 4-arylaminocoumarin The synthesis of 4-arylaminocoumarin and its derivatives, their spectral characteristics and elementary analysis were described in our previous investigations (9,10). The calculation of the partition coefficient log P (octanol/water) The log P(o/w) values for the coumarin series are calculated according to the method of Rekker, log P = Σ (a_n × f_n), where f_n is the hydrophobic fragmental constant, the lipophilicity contribution of a constituent part of a structure to the total lipophilicity, and a_n is a numerical factor indicating the incidence of the fragment (f_n) in the structure. Results and Discussion In this work we describe the fragmental method of calculating the partition coefficient according to Rekker's method. This method is based on summing up the appropriate fragmental parameters. 4-Arylaminocoumarin and its derivatives serve as examples for the calculation of log P. Regression analysis and experimentally or mathematically determined lipophilicity have been used to assess the effect of structural modification on these processes. Values of calculated or experimental log P can be positive or negative. A higher log P value means higher lipophilicity of the drug, which penetrates easily through the cell membrane, while negative log P values indicate hydrophilic characteristics of the drug. Position 3 of the coumarin ring is a very important site of molecular modification. The introduction of various substituents at different sites on the coumarin ring can change the lipophilicity, properties and activity of coumarin derivatives. The log P results obtained for the synthesized 4-arylaminocoumarin derivatives show different values of the partition coefficient. 4-Arylaminocoumarin (example 1) has a negative log P value. Our previous investigations showed that some derivatives of 4-arylaminocoumarin with the substituents CH3, OCH3, CH2CH3, OCH3, Cl changed lipophilicity, but all log P values remained negative. Substitution in position three of the coumarin ring plays a role in the modification of lipophilicity. Substitution with alkyl or allyl groups (examples 2-4) increases lipophilicity. Introduction of a methyl or ethyl group into position 3 increases lipophilicity, suggesting that the values of log P become higher as the chain length increases. 3-Ethyl-4-arylaminocoumarin is the only compound that has a positive log P value and is therefore the only one of lipophilic character. An allyl substituent (example 4) in position three increases lipophilicity to a similar extent as a methyl group. The aryl substituent (example 5) decreases lipophilicity, but a general relationship among them could not be established. Conclusion In this work we have investigated the influence of different substituents in position three of the coumarin ring on the lipophilicity of 4-arylaminocoumarin derivatives.
4-Arylaminocoumarin has a negative log P value, but substitution with alkyl or allyl groups increases lipophilicity. Introduction of a methyl or ethyl group into position 3 increases lipophilicity, suggesting that the values of log P become higher as the chain length increases. 3-Ethyl-4-arylaminocoumarin is the only compound that has a positive log P value and is therefore the only one of lipophilic character. An allyl substituent in position three increases lipophilicity to a similar extent as a methyl group. The aryl substituent decreases lipophilicity, but a general relationship among them could not be established. The results obtained in this study enable the further synthesis of new coumarin derivatives and the prediction of their biological activity and properties.
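To make the fragmental summation log P = Σ (a_n × f_n) concrete, the short sketch below adds up fragment contributions for an unsubstituted and a methyl-substituted structure. The fragment names and f values are illustrative placeholders only, not Rekker's tabulated constants, and the structures are not the actual coumarin derivatives studied here; the point is simply that adding a lipophilic fragment such as CH3 raises the calculated log P.

```python
# Minimal sketch of Rekker's fragmental summation, log P = sum(a_n * f_n).
# The fragment constants below are placeholders for illustration only; they are
# NOT Rekker's published f values and should be replaced with tabulated data.

HYPOTHETICAL_FRAGMENT_CONSTANTS = {
    "C6H5 (aromatic ring)": 1.90,    # placeholder value
    "CH3": 0.70,                     # placeholder value
    "NH": -1.10,                     # placeholder value
    "C=O (lactone fragment)": -1.00, # placeholder value
}

def rekker_log_p(fragment_counts):
    """Sum a_n * f_n over the fragments present in a structure.

    fragment_counts maps fragment name -> incidence (a_n) in the molecule.
    """
    return sum(a_n * HYPOTHETICAL_FRAGMENT_CONSTANTS[frag]
               for frag, a_n in fragment_counts.items())

# Fictitious substitution pattern: the same core with and without an extra methyl group.
core = {"C6H5 (aromatic ring)": 2, "NH": 1, "C=O (lactone fragment)": 1}
methylated = dict(core, CH3=1)
print(rekker_log_p(core))        # lower log P
print(rekker_log_p(methylated))  # higher log P after adding CH3
```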
2018-04-03T06:23:16.166Z
2003-11-20T00:00:00.000
{ "year": 2003, "sha1": "f1f753b4fec21f7bc60cd1adf67fc568c20391dc", "oa_license": "CCBY", "oa_url": "https://www.bjbms.org/ojs/index.php/bjbms/article/download/3491/1033", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "f1f753b4fec21f7bc60cd1adf67fc568c20391dc", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
56437153
pes2o/s2orc
v3-fos-license
The Effect of Age on English Professors ’ Integration of the New Technologies in Teaching The integration of computer technologies in teaching has become a vital technique to prepare students to face the challenges of the 21 st century. Indeed, today’s education systems have moved from a focus on information transformation through books, blackboards and chalks to a concentration on information processing via technological instruments, mainly computers, smart phones and tablets. Therefore, teachers are required to adopt these new technologies in their teaching practices. However, it is observed that there are several factors affecting teachers’ actual implementation of computer technology in different educational institutions. This paper aims to examine the impact of the age factor on English professors’ use of Information and Communication Technology (ICT) in Moroccan higher institutions. Descriptive analysis of means, and standard deviations were used to analyse the collected data. Also, inferential statistics, especially the ANOVA test, were employed to determine the impact of age on ICT adoptions. The findings revealed that there are statistically significant differences in the means of professors’ age when integrating ICT in instruction, F(3,159)=20.455,p<0.05. Introduction Information and Communication Technology can be defined as new multimedia technologies, including computer software, CD-ROM, the Internet, mobile phone, television, movie as well as Internet-based Project work, e-mail, chat, blogs, wikis, podcasts, and so on (Andrews, 2000).Lever-Duffy et al. (2005), state that some 'educators may take a narrower view' and predominantly 'confine educational technology (ICTs) primarily to computers, computer peripherals and related software used for teaching and learning ' (p. 4-5). Computer technologies have become a significant characteristic of our daily lives.According to Deaton (1990), " whether or not we touch a computer, it is almost impossible to escape its daily influence on us; from speedy information transmittal, and receipts, to control of lights and temperature of our workplace" (p.1).Thus, if schools, universities and other educational institutions intend to prepare the new generations for employment, computer technologies must be integrated in teaching and learning practices (Soine, 1996).In this respect, Wilmore (2001) reported that educational institutions which make effective use of ICT in their classrooms, will certainly help learners boost their learning and develop the necessary skills to face life after schooling and the upcoming social and educational shifts. The use of ICT in the classroom is very essential for providing chances for learners to function appropriately in an information age.Obviously, with the growth of new technologies, the benefits of computers may have increased step by step as well.The centre of attention however, should not be on the computer as an instrument in education, but as a useful learning tool (Bransford et al., 2000;Romeo, 2006).Bransford et al. 
(2000) state that "what is now known about learning provides important guidelines for uses of technology that can help students and teachers develop the competencies needed for the twenty-first century" (p.206).Another way of expressing this point is that institutions that do not embody the employment of ICT in schools cannot really claim to get their students ready for life in the twentyfirst century.Dawes (2001) confirms that technologies have the capacity to assist education across the curriculum and supply chances for useful communication between learners and educators in ways that have not been possible before.That is to say, ICT in education has the ability to be effective in bringing about changes in ways of teaching.Actually, with the advent of the new technologies, learning has become more exciting for learners regardless of their level of education.A lot of studies have revealed that the implementation of ICT in classrooms have come up with many fruitful consequences for both teachers and learners as well.It has increased their willingness to develop their knowledge through these modern tools.Therefore, universities and other educational institutions have realized the value of including computer technologies in instructional processes.Thus, this current paper intends to explore the actual use of the new technologies in Moroccan higher institutions.It aims to answer the following research question: are there any significant differences between professors' use of ICT in terms of their age? Computer technologies in Education One of the most essential gifts of ICT in the discipline of education is easy access to learning.ICT enhances the flexibility of delivery of education so that students can approach knowledge anytime and from anywhere.It can affect the way learners are taught and how they learn.Indeed, this would get the learners ready for lifelong learning as well as to ameliorate the value of learning.Individuals are recommended to access knowledge by means of ICT to keep pace with the latest advancements (Plomp, Pelgrum& Law, 2007).ICT can be employed to eliminate communication obstacles such as that of space and time (Lim and Chai, 2004).More precisely, teachers and learners no longer have to depend only on printed books for their educational requirements.With the Internet, a plenty of learning materials can now be accessed from anywhere at anytime of the day.Attwell and Battle (1999) investigate the connection between owning a home computer and school performance, their conclusions propose that learners who have access to a computer at home for educational aims, have advanced scores in reading and math.Becker (2000) discovers that ICT magnifies learner engagement, which guides to an addition amount of time learners to expend working outside class. Computer technology has the capability to increase teaching and learning opportunities through providing professors as well as learners with more appropriate knowledge and suitable skills (Ouzts & Palombo, 2004).For this reason, ICT should be incorporated within classrooms.In this context, Miller et al. 
( 2000) noted that " the use of technology in education can facilitate learning by providing more relevant learning opportunities, changing the orientation of the classroom from professor to student-centered, preparing students for employment, increasing flexibility of delivery, increasing access, and potentially satisfying demands for efficiency" (p.231).Apparently, higher education institutions are encountering different challenges due to the influence of technological devices on the field of education.Rice and Miller ( 2001) revealed that Institutions face major challenges in trying to keep pace with technological advances.These challenges include keeping up with the costs of rapidly changing technologies, fostering changes in the learning processes and teaching methods, providing students with the electronic resources they expect, competing with private enterprises investing in distance learning, and training faculty in the use and integration of various technologies.(p.330). It is self evident that ICT has been developing very quickly in recent years and opens new directions in the area of education.In other words, the speedy growth in ICT has brought conspicuous and notable changes in the twenty-first century, and influenced the requirements of modern societies.Bransford et al. (2000) confirm that " what is now known about learning provides important guidelines for uses of technology that can help students and teachers develop the competencies needed for the twenty-first century" (p.206). The employment of ICTs in the classroom could foster 'deep' learning and permit teachers to react better to the various requirements of different students (Lau &Sim, 2008).In other words, ICT is a very significant instrument which, when employed suitably, can cultivate the move to a learner centered environment.Harris (2002) carries out case studies in three primary and three secondary schools, which concentrated on innovative pedagogical practices including ICT.Harries deduces that the advantages of ICT will be obtained "…when confident teachers are willing to explore new opportunities for changing their classroom practices by using ICT".The employment of technology will not only intensify learning conditions but also get next generation ready for coming lives and occupations (Wheeler, 2001). Actually, the use of computer technology in teaching and learning processes can be very advantageous and profitable for both students and professors.As for students, computer technology can help learners boost their motivation and increase their learning achievements.Also, it can help them become more autonomous and selfreliant in the sense that it enables them define their objectives, assess the outcomes of their learning, and use authentic materials effectively.By authentic materials, I mean the materials that were not originally produced for instructional purpose such the use of electronic magazines, electronic newspapers, songs, movies, etc.As far as professors is concerned, the use of ICT is thought to help teachers change their methodologies of teaching.Instead of sticking to old methods which focus on lecturing, teachers can utilize more modern ways that are highly appreciated by their learners.Moreover, these new innovative technologies could provide countless opportunities to develop professionally. 
The Effect of Age on ICT Integration There are conflicting findings in the literature regarding the effect of age on ICT integration in education. For instance, Kendel (1995) concluded that the age factor was statistically significant in the sense that younger professors showed more favourable attitudes with regard to the use of ICT for instructional objectives. However, Chio (1992) found that older teachers tend to have more positive attitudes towards the implementation of the new technologies in teaching practices. Similarly, Spiegel & Shohamy (1989) examined the relationship between age and the use of ICT in classrooms; the findings of their study revealed that there was no significant correlation between teachers' age and their attitudes towards computer technologies. Additionally, Lamboy and Bucker (2003) reported that young professors were more acquainted with computer technological skills than those who were older. Also, Ahadiat (2008) found that younger teachers demonstrated a higher level of comfort with computers than older ones, who found it difficult to employ the new technologies to improve their teaching methodology. Furthermore, Al-Ghonaim (2005) conducted a study at Buraidah College of Technology in Saudi Arabia to investigate professors' integration of ICT in teaching. The results of his study revealed that younger professors had more positive attitudes towards computers than older teachers, who had less favourable attitudes regarding the use of ICT for educational purposes. Dyck & Smither (1994) found that older instructors possessed more positive attitudes towards, and liking for, computer technologies than younger ones. Todman & Lawrenson (1992) conclude and theorize that "The younger [teachers] became familiar with computers at an early age as an everyday part of their home environment. The older [teachers] lacking this gradual and casual introduction to computers at an early age seem more likely to have been confronted abruptly with pressure to achieve prescribed goals in an unfamiliar and seemingly capricious environment and this is unlikely to be an anxiety reducing experience." (p.69) Population In this study, approximately 300 teachers were invited to take part. However, only 195 (65%) full-time and part-time English teachers agreed to respond to the survey. The researcher discarded 32 questionnaires which were incomplete, since they had significant parts of the survey instrument missing. Hence, 163 (54.33%) answered the questionnaire appropriately. The resulting sample size employed in this study was therefore a total of 163 teachers working in various Moroccan higher institutions. Instruments The researcher used a questionnaire entitled "Use of Computer Technology". It was adapted from the instrument designed by O'Dwyer et al. (2004). The instrument was originally developed to assess the integration of computer technology by middle and high school teachers in the U.S.
It contained four major aspects regarding teachers' use of ICT in their teaching. These aspects were: Teachers' Use of ICT tools for Delivering Instruction (TUTDI), Teachers' Use of ICT for Class Preparation (TUTCP), Teachers and Students' Use of ICT to Create Products (TSUTCP), and Teachers' Use of ICT during Class Time (TUTCT). The items included in the instrument were rated on a five-point scale (1 = never, 2 = once or twice a year, 3 = several times a year, 4 = several times a month, and 5 = several times a week). Higher scores on each facet suggest that teachers employ ICT devices more often in their teaching. It is worth stating that the questionnaire was slightly modified by removing and adding statements related to the use of computer technology in classrooms. It was originally composed of twenty items; however, in this paper it consisted of twelve items that were ranked on a five-point Likert scale ranging over never, once or twice a year, a few times a year, a few times a month, and several times a month. O'Dwyer et al. (2004) tested the reliability of the instrument they employed to collect the necessary data. They reported that the coefficient alpha reliabilities were .74 for TUTCP and .85 for TUTCT. As for validity, it was established by having experienced professors thoughtfully scrutinize the material the instrument was to cover. Data Analysis Procedures Both descriptive and inferential statistical analyses were used to answer the research question: are there statistically significant differences in professors' use of computer technology based on age? Inferential statistics, mainly the Analysis of Variance (ANOVA), were used to answer this research question. ANOVA was employed to determine whether there were statistically significant differences among the means of the groups. The dependent variable was professors' use of computer technology and the independent variable was age. Demographic Data of the Participants Responses to the first section of the survey questionnaire provided demographic data about the professors who participated in this study. The data describing the demographic characteristics were computed and analyzed using descriptive statistics such as frequencies and percentages. The examined demographic information included age and university of affiliation. Age of the Participants The first demographic variable examined on the questionnaire was age. The age of the participants in this study ranged from less than 30 to greater than 51. This information is presented in Figure 1. About half of the participants (46.6%, n = 76) were 51 years old or older. Also, almost one third of the respondents (28.2%; n = 46) were within the 41-50 age range, 16.6% (n = 27) were within the 30-40 age range, and only 8.6% (n = 14) were less than 30 years old. University of Affiliation of the Participants As illustrated in Figure 2, the professors participating in this study were from thirteen different Moroccan universities. The highest percentage of the respondents, 21.5% (n = 35), taught English at Moulay Ismail University, followed by Sidi Mohammed Ben Abdellah University, 14.1% (n = 23). Of the 163 participants, 11.7% (n = 19) taught at Mohammed V. The data showed that the representation of Mohammed I was somewhat lower, 4.3% (n = 7), equal with both Ibnou Zohr and Soultane Solimane universities. Furthermore, about 10% of the professors taught at Hassan II University. Only a small proportion of the respondents (1.23%) taught at AL Akhawayne University, which is a private institution.
Findings Related to the Research Questions Below is the presentation of the findings related to the research questions. According to Table 1, the means of the four age groups on professors' use of ICT differ from one another. Participants aged 30 or under obtained the highest mean score (M = 2.64, SD = 0.58), followed by respondents aged between 31 and 40 with a mean score of 2.41 (SD = 0.64). Participants aged between 41 and 50 scored a mean of 1.20 (SD = 0.80), and the oldest participants, aged 51 or over, recorded the lowest mean score (M = 1.20, SD = 1.02). These results are represented in the following figure. The means plot for age and computer technology implementation in teaching revealed that the oldest professors are the least likely to integrate technological instruments in their classrooms. Therefore, there are differences in professors' use of ICT based on age, as the ANOVA result in the following table demonstrates. The findings of the one-way ANOVA test revealed that there are statistically significant differences among the age-group means for professors' integration of ICT in instruction, F(3, 159) = 20.455, p < 0.05. Consequently, since the p value (p = 0.000) was smaller than the significance level set at 0.05 (2-tailed), the null hypothesis indicating that there were no significant differences between the two variables was rejected. The effect size, calculated using eta squared, was 0.27. To interpret the strength of eta squared values, the following guidelines were used: 0.01 = small effect, 0.06 = moderate effect, 0.14 = large effect (Cohen, 1988). The magnitude of the differences in the means was therefore large (eta squared = 0.27). This means that 27% of the variance in professors' use of ICT in the classroom is explained by age; in other words, age is not the only factor influencing professors' integration of computer technology in their teaching practices. Because the one-way ANOVA test showed that differences exist among the various age groups, a Scheffé post hoc test was used to identify which group is significantly different from the other groups. The results are demonstrated in the following table. According to the table, there are only small differences in professors' integration of computer technologies in teaching with respect to the factor of age: only the group of respondents aged 51 or over is statistically different from the remaining groups. This suggests that participants aged 51 or more use ICT less frequently than the other respondents.
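For readers unfamiliar with these statistics, the sketch below shows how a one-way ANOVA F test and the eta-squared effect size described above can be computed. The group sizes mirror the study, but the scores are simulated with made-up group means, so the printed F, p and eta-squared values are illustrative only and will not match the reported results.

```python
# A minimal sketch of a one-way ANOVA and eta-squared calculation on simulated
# ICT-use scores for four age groups (illustrative data, not the study's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "under 30": rng.normal(2.6, 0.6, 14),
    "30-40":    rng.normal(2.4, 0.6, 27),
    "41-50":    rng.normal(1.8, 0.8, 46),
    "51+":      rng.normal(1.2, 1.0, 76),
}

# F statistic and p value for differences among the group means.
f_stat, p_value = stats.f_oneway(*groups.values())

# Eta squared = between-group sum of squares / total sum of squares.
all_scores = np.concatenate(list(groups.values()))
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
ss_total = ((all_scores - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

print(f"F = {f_stat:.3f}, p = {p_value:.4f}, eta squared = {eta_squared:.2f}")
```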
According to the findings of this paper, there are significant differences in professors' integration of the new technologies based on age. In fact, young teachers tend to make more use of ICT for instructional purposes than older professors. Therefore, it is possible to conclude that age might have some impact on professors' decision to incorporate modern technologies in their classrooms. It is worth stating that there are other variables that affect professors' use of Information and Communication Technology in teaching; these factors include teachers' gender and attitudes towards different technological gadgets. Based on the results of this study, the following recommendations can be suggested. Up-to-date training programs should be provided on a continuous basis for professors to help them understand how to make effective use of ICT in their classrooms. Moreover, the training offered to the professors should not only focus on developing professors' computer skills; it should also concentrate on showing them how to integrate these technologies into their teaching performance. Furthermore, appropriate technological infrastructure, sufficient equipment, up-to-date software, access to computers and to the internet, and financial support should be provided in all educational institutions to guarantee the successful adoption of the new technologies in the classrooms. Also, it is strongly recommended that administrators provide programs that aim at changing negative attitudes towards the use of modern technological tools in teaching. Finally, policy makers need to become better informed about the various factors that hinder the effective and successful implementation of Information and Communication Technologies within educational institutions, including universities. Figure 2. Distribution of Participants by University of Affiliation. Figure 3. Means Plot for Age and ICT Use in Teaching.
2018-12-15T06:41:10.751Z
2016-11-06T00:00:00.000
{ "year": 2016, "sha1": "764ef5192065336aa2a294a9b1ab314f2f7a9617", "oa_license": "CCBYSA", "oa_url": "http://ijeltal.org/index.php/ijeltal/article/download/11/10", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "764ef5192065336aa2a294a9b1ab314f2f7a9617", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Engineering" ] }
234865368
pes2o/s2orc
v3-fos-license
Static Vulnerability Analysis of Docker Images Many organizations are renovating their businesses by embracing DevOps, microservices, and container technologies. Docker has emerged as a new technology, proving to be an efficient means to develop and deploy applications. Docker containers are created from images to run an application with all its dependencies, so that it can run isolated from other processes. Security is always a foremost concern, as industry continually strives to improve the reliability and efficiency of new software applications. To secure local Docker containers from the attacks of malicious containers, the perceived threats present in Docker images need to be detected and the risks identified when instances of Docker containers run on the host machine. This paper reviews Docker's existing security mechanisms, vulnerabilities, threats and the related tools required for static security analysis. Introduction Docker containers are already in demand because they allow the user to run multiple applications on the same host operating system. Docker also provides a bundle of operating-system isolation and security features, but a challenge arises when Docker images are found to be insecure, which may lead to threats and other major security issues. Images should be analyzed statically in order to make these container images more stable. This strategy is applied before the runtime of the container, so that threats can be dealt with without being detrimental to the software [1]. From the security aspect, the Docker technology is relatively simple for container images. Priority attention should be given to the two ways in which containers can be analyzed: statically or dynamically. We usually do not need to be as concerned about APIs, overlay networks or composite software-defined storage configurations, because these are not a major part of the analysis, which is done after the container image is built, i.e., before the execution of the container. Preliminaries Many scholarly articles are available on the security of Docker containers, but only a few pertain directly to the static security analysis of Docker containers. Most of the security industry acknowledges that the Docker container needs to be a subject of security attention and that its security guarantees can dissipate. These security concerns are not to be neglected, as they may become a major problem as these attacks emerge. This section gives a brief description of the vital terms that are used throughout the paper. Docker Docker was first released in March 2013, introducing a new approach called containerization that enables virtualization at the OS level. Docker helps to provide a lightweight and quick environment. Docker is sometimes presented as a lightweight Virtual Machine (VM), but that is not the case; it is quite different from a VM. The differences are shown in Table 1. Docker has a complex usage mechanism consisting of both third-party and Docker-managed tools. Docker gives the ability to work with infrastructure in a similar way to how applications are handled, adding the image-building instructions in a "Dockerfile" and maintaining version control of the images. Furthermore, it provides the ability to increase or decrease resources and the scalability to run on different platforms.
Containers are not a new abstraction, but Docker technology makes their implementation much easier to use. Moreover, when working with images it is critical to understand that these images may contain vulnerabilities. Containers Docker containers are created from images to run an application with all its dependencies, so that it can run isolated from other processes. Containers are hosted on a physical or virtual server on top of its operating system (OS). Containerization is a technology that packages the application, system libraries, and related dependencies together. Ultimately, containerization uses the host OS, which serves the relevant libraries and resources. Each container shares the host OS. This makes containers lightweight, allowing them to be a few megabytes in size and to take a few seconds to start. Containers do not carry much management overhead, as they share a common operating system; that system can be patched and fixed if an issue arises [2]. Containers get their name from the shipping industry, where different products are placed into compact shipping containers designed to minimize cost and time [3]. Security The main concern with containers is security. There are major security issues surrounding containers, such as the security of the hosts the containers are run on and the content of images built by unknown users. It is important to know that the content of container images must be correctly configured. Managing security in containerization is different from managing it in traditional applications. The security challenges occur when an application goes to production: its chances of being under threat may increase dramatically, and a multiplied user base implies that it could be open to access from outside the organization [4]. Two types of security checking can be done: static security analysis and dynamic security analysis. Using both security analysis techniques, it can be checked or confirmed whether any of the container images have bugs. According to a survey conducted in January 2015, 53% of enterprises revealed that their principal concern was security. This is because containers rely on the base image, and images might contain vulnerabilities which can spread to all containers [5]. Docker Security Scanning Security scanning has limited features, as it can only compare the software in an image against the Common Vulnerabilities and Exposures database [6]. Docker provides official container images for a limited number of applications/operating systems. These images are of high quality for general use, but there may be images already in use that are not provided officially by Docker, which is a major security issue [7]. In the approach of Docker security scanning, many scanning techniques are already available. Image Security Docker has more performance advantages than traditional virtualization techniques [8]. There are a few areas to consider when reviewing Docker image security, such as which packages are installed in the image. When it comes to security, the best choice for a base image is an empty container, a distroless image (a set of images made by Google), or a UBI image provided by reputed vendors such as Red Hat and IBM, created with the intent to be secure [9]. Docker images are distributed, stored, and managed publicly, or they can be used privately to build containers. Official images available on Docker Hub are used in public or private build processes.
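The sketch below illustrates, in a deliberately simplified form, the core idea behind the static scanning just described: take the package inventory recorded in an image and compare it against entries from a vulnerability feed. The package list and CVE entries are invented placeholders; a real scanner such as Clair or Anchore extracts packages from the image layers and queries live CVE data instead.

```python
# Illustrative sketch of static image scanning: match an image's package inventory
# against a vulnerability feed. All data below are made up for illustration.

# Hypothetical inventory extracted from an image's layers: name -> version
image_packages = {"openssl": "1.1.1f", "bash": "5.0", "libxml2": "2.9.4"}

# Hypothetical CVE feed entries: (package, affected_version, cve_id, severity)
cve_feed = [
    ("openssl", "1.1.1f", "CVE-XXXX-0001", "HIGH"),    # placeholder identifier
    ("libxml2", "2.9.4", "CVE-XXXX-0002", "MEDIUM"),   # placeholder identifier
    ("curl", "7.58.0", "CVE-XXXX-0003", "LOW"),        # placeholder identifier
]

def scan(packages, feed):
    """Return feed entries whose package name and version match the image inventory."""
    return [entry for entry in feed if packages.get(entry[0]) == entry[1]]

for pkg, version, cve_id, severity in scan(image_packages, cve_feed):
    print(f"{severity}: {pkg} {version} is affected by {cve_id}")
```

Because this check runs on the image contents alone, it can be performed before any container is started, which is exactly what distinguishes static analysis from runtime (dynamic) analysis.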
Docker images security is an unavoidable concern while creating an infrastructure [10]. • In Docker containers, there is a pre-definition of what is exactly running inside like the path of the data directories, daemon configurations, mount points, etc. A strong focus on security is a must, in this demanding world where the need is not only to make their system fast and reliable but also to make it secure. In 2018, the organizations that use container technology were facing the same issues over security and the percentage of those industries was 60% which is already a huge number that needs to be noticed and cannot be ignored. • Not only those industries were having the issue of security but with this finding, hundreds of more organizations gave several statistics: They said around 47% of containers are having vulnerabilities and 46% of their developers accept that they had no idea if their containers had vulnerabilities or not. Some threats and attacks that may cause vulnerabilities in container images and may target the container to fetch their confidential data and make the Docker container less secure are discussed in Methodology In the last few years, various analysis techniques have been used for the security of images with the intention of attaining optimized solutions to get rid of these security issues. In this section, we evaluate these analysis techniques. The methodology adopted in this paper is the result of our work combined with the outcomes from the previous analysis techniques. To our knowledge, the most recent analysis of Docker images was performed by Socchi in 2019. They share the information about security measures introduced by Docker Inc., gives the information of verified and certified images that can improve the security of Docker hub. Besides, they discovered the distribution of all the vulnerabilities across repository types. They implement their software to analyze the image's security. Their conclusion says that the security measures do not improve overall Docker hub security [11]. The container would be able to configure the confidential data of other containers that target the integrity of the application and other information. Furthermore, these containers can also contain similar attacks to another semi-honest container which led another container to be targeted [15]. Our goal is to protect from those threats that may target the other containers and create errors in different ways. An appropriate way to protect those images from different threats and attacks would be to statically analyze the images so that no threats can be detected after the execution process. For performing the static analysis of a particular number of images, we used an open-source Docker image scanner tool. In this section, we will compare all the analysis that has been already performed during these previous years to find out how the following analysis can be improved in most possible ways. Our findings can be summarized by the table that contains information about all the tools that analyze image security. According to the surveys that are conducted during previous years, in order to identify all the possible solutions to decrease the vulnerability of images, it must be recognized that the scanning of images is one of the main possible solutions to save our containers from security issues. Discussion This section discusses the analysis of performance of the software that is already in use to scan Docker hub images or registries. 
Results are classified as shown in the table along with their working environment. There are 10 open-source software tools that are already used to analyze trusted or untrusted images, check their vulnerabilities and enhance the security of Docker container images. They are shown in Table 3.
Table 3. Open-source tools for Docker image security analysis:
• Docker Bench for Security: a tool to examine Docker containers against security specifications. It bases its tests on the industry-standard CIS benchmark. The result conveys information, pass logs, and warnings.
• Clair: built by CoreOS, performs static analysis of container vulnerabilities. Using the Clair API, developers can query the database for all the issues related to a specific image.
• Cilium: Cilium is all about securing network connectivity. A developer can apply Cilium security policies without making any changes to the application code or container configuration. Cilium is well supported, with extensive guides and documentation.
• Anchore: a tool for inspecting container security using CVE data and user-defined policies. It provides a list of vulnerabilities, threat levels, CVE identifiers, and other information.
• OpenSCAP Workbench: an environment for developing and maintaining security policies for various platforms. It allows multiple organizations to efficiently develop security content by avoiding redundancy.
• Dagda: a tool for scanning for threats, attacks, vulnerabilities, and malware in Docker containers. Developers can run it remotely or continuously; it reports the number of threats and other details.
• Notary: a framework enhancing container security with a server responsible for cryptographic trust. It verifies the cryptographic integrity of Docker container images.
• Grafeas: an API to help analyze internal security policies. It helps speed up remediation attempts.
• Sysdig Falco: provides behavioral activity monitoring with deep container visibility. Sysdig provides further container troubleshooting materials.
• Banyanops Collector: a framework for static analysis of Docker container images or registries. It offers deep data analysis.
The threats and attacks that may affect container images, together with their explanations and possible solutions, are summarized below:
• Cautious images. Attacks: attacks on the containers and the host operating system. Explanation: cautious images may cause serious threats and harm the container. Solution: to avoid these attacks, trusted images or registries need to be used.
• Vulnerable images. Attacks: images may have inbuilt threats. Explanation: some images have inbuilt threats that corrupt all the data and damage the whole project. Solution: these vulnerable images are required to go through the analysis process before being executed in the container environment.
• Attacks on images. Attacks: images that come from unknown sources. Explanation: these images may be used as threats and attacks on the container, targeting confidential data. Solution: images need to be verified first, to determine whether they are secure enough to use, before a container is built.
Container Image Authenticity There are many Docker images and repositories on the internet and Docker Hub that do all kinds of useful work, but complicated issues arise when these images are pulled without any authenticity mechanism. The questions that relate to the authenticity of images include:
• Where does this image come from?
• Does this image come from a trusted image creator?
• Does cryptographic proof confirm that the author is who they claim to be?
• Is the image that has been pulled secure enough to be used?
In any case, Docker allows pulling and running any image or registry by default.
Even if custom images are used during the process, it must be ensured that nobody inside the organization is able to make changes to an image [4]. Container image authenticity directly affects the security of Docker images, as many containers can be built from the same Docker image obtained from Docker Hub or other repositories. When discussing the authenticity of Docker images, the first and easiest way to check images for vulnerabilities is scanning [12]. The various scanning tools have already been mentioned in Table 3. This scanning process checks for vulnerabilities in local Docker images that run on the engine; those local Dockerfiles and images give users visibility into their security posture. The Common Vulnerabilities and Exposures (CVE) database contains the list of all the vulnerabilities found during the scanning process [13]. Vulnerability of Images In this part, our focus is on the causes that make images more vulnerable, as well as how these issues are addressed by the various analysis tools [14]. Below are the root causes behind the vulnerability of Docker images and repositories: • Insecure generation of images. • Untrusted production of the image cryptographic configuration. • Possibility of errors in image distribution, verification, storage, and decompression. • Vulnerabilities inside the images. • Threats that may be directly linked to Docker or libcontainer. It is important not to blindly trust any of the images or repositories found on the internet. Docker already sponsors a team dedicated to reviewing and publishing images only in official repositories. On Docker Hub, 23% of images are tagged as the latest version and are among the most downloaded images from Docker Hub, yet these images also contain a high number of vulnerabilities.
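One simple way to act on the authenticity questions raised above is to pin the exact image digests an organization trusts and reject anything else before a build proceeds. The sketch below is an illustration of that idea only: the repository names and sha256 digests are made-up placeholders, the digest would in practice be obtained from `docker images --digests` or a registry API, and signature-based mechanisms such as Docker Content Trust / Notary provide stronger guarantees than digest pinning alone.

```python
# Illustrative sketch of digest pinning: accept an image only if its digest matches
# an allow-listed value. Repository names and digests are placeholders, not real images.

PINNED_DIGESTS = {
    "example.org/app-base": "sha256:" + "0" * 64,    # placeholder digest
    "example.org/build-tools": "sha256:" + "1" * 64, # placeholder digest
}

def is_trusted(repository: str, digest: str) -> bool:
    """Accept an image only if its digest matches the pinned value for that repository."""
    return PINNED_DIGESTS.get(repository) == digest

# Example check before allowing a pulled image to be used in a build pipeline.
pulled = ("example.org/app-base", "sha256:" + "0" * 64)
print("trusted" if is_trusted(*pulled) else "rejected: unpinned or altered image")
```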
2021-05-21T16:57:41.952Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "ba6e1b9d68841d850e71848de02abbb60fc14586", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/1131/1/012018", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "88895ff534c3bc3278bc3036fdce5b006280b21d", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
220700071
pes2o/s2orc
v3-fos-license
Prediction of five-year mortality after COPD diagnosis using primary care records Accurate prognosis information after a diagnosis of chronic obstructive pulmonary disease (COPD) would facilitate earlier and better-informed decisions about the use of prevention strategies and advanced care plans. We therefore aimed to develop and validate an accurate prognosis model for incident COPD cases using only information present in general practitioner (GP) records at the point of diagnosis. Incident COPD patients between 2004–2012 over the age of 35 were studied using records from 396 general practices in England. We developed a model to predict all-cause five-year mortality at the point of COPD diagnosis, using 47,964 English patients. Our model uses age, gender, smoking status, body mass index, forced expiratory volume in 1-second (FEV1) % predicted and 16 co-morbidities (the same number as the Charlson Co-morbidity Index). The performance of our chosen model was validated in all countries of the UK (N = 48,304). Our model performed well, and performed consistently in validation data. The validation area under the curves in each country varied between 0.783–0.809 and the calibration slopes between 0.911–1.04. Our model performed better in this context than models based on the Charlson Co-morbidity Index or Cambridge Multimorbidity Score. We have developed and validated a model that outperforms general multimorbidity scores at predicting five-year mortality after COPD diagnosis. Our model includes only data routinely collected before COPD diagnosis, allowing it to be readily translated into clinical practice, and has been made available through an online risk calculator (https://skiddle.shinyapps.io/incidentcopdsurvival/). Introduction Chronic obstructive pulmonary disease (COPD) is the fifth highest cause of death in the United Kingdom (UK) [1]. One of the goals of COPD diagnosis and assessment is to provide information about the risk of future events such as death in order to make informed decisions about the use of primary and secondary prevention strategies, and advanced care plans [2]. However, existing prognosis models focus on prevalent COPD, rather than incident cases, meaning that they depend on variables which are often not recorded in GP records at the time of COPD diagnosis. Additionally, external validation of these models appears to be rare and, when performed, has resulted in inconsistent findings [3][4][5]. A key predictor of mortality is the presence of co-morbidities, as demonstrated by the Charlson co-morbidity index, which takes into account age and the presence of 16 diseases [6]. More recently, Rupert Payne et al. (under review) have developed the Cambridge multimorbidity score. This uses data on the presence or absence of 20 diseases, and performs slightly better than the Charlson co-morbidity index. The deaths of up to two thirds of COPD patients are thought to be due to co-morbidities [7][8][9][10]. However, existing COPD prognosis models that include co-morbidities have been developed either in small cohorts, or in populations unrepresentative of general practice or with small lists of co-morbidities [8,11,12]. An exception to this was developed using data on 59,990 patients from UK general practice, but again this focused on prevalent cases and performed worse in a validation cohort [13].
In this study we sought to develop and validate GP-record-based (i.e. not claims based) models predicting survival for incident COPD patients, focusing on longer-term survival (5-year). We aimed to produce a model that could be implemented in a user-friendly website. Importantly, we sought to make predictions based on data available at or before the point of diagnosis, often before many of the variables used in COPD prognosis models have been logged within GP records, such as dyspnea and FEV1% predicted (required for the BARC model). Our aim was to provide accurate predictions of survival for individuals based on their baseline characteristics. Data source Data from Clinical Practice Research Datalink (CPRD)-GOLD (March 2017 release) were used to develop and validate the prognostic model. CPRD-GOLD, which is based on the Vision GP health record system (i.e. not a claims database), is representative of the UK population [14]. Data on mortality and socioeconomic status were collected through linkage (where available) to Office for National Statistics (last death date 19 th September 2017) and Index of Multiple Deprivation 2010, which were available for approximately 60% of CPRD-GOLD practices, all of which are based in England (linkage set 15). Mortality data for patients who could not be linked were derived from CPRD-GOLD, which has been shown to be approximately correct [15]. However, vital status for CPRD-GOLD patients without linkage to ONS are not known if they have transferred out of practice before end of study. CPRD-GOLD data are available to approved researchers for approved projects (https://www.cprd.com/). The protocol for this study, which is covered by the CPRD Independent Scientific Advisory Committee ethics approval, is provided S1 File. Study population All patients received their first COPD diagnosis between 1 st January 2004 and 19 th September 2012, determined using a previously validated algorithm using diagnostic codes alone. By contacting GPs, we have shown that this algorithm has a positive predictive value of 87% for the identification of COPD patients [16]. To be included in this study, patients were required to be 35 years or older, registered at their GP practice, and belong to a practice with up-to-standard data reporting at the time of their COPD diagnosis. We further divided the patients into two groups, such that the 'linked group' belonged to a practice that allowed linkage to Office for National Statistics and Index of Multiple Deprivation whereas the 'unlinked group' did not (Fig 1). By using only the linked group to develop our model we reduce bias due to unknown death dates for individuals in the unlinked group who transferred out of practice within 5 years. The linked group contained only patients in English practices, whereas the unlinked group contained patients from across the UK. Outcome and prognostic predictors Death was defined as mortality from any cause within five years of COPD diagnosis. Prognostic predictors were divided into two categories: 'basic', and 'co-morbidities'. Basic variables were collected from visits before or on the same day as the first COPD diagnosis, and included age, gender, socioeconomic status (twentiles of Index of Multiple Deprivation), smoking status (most recently recorded: never, ex, current), body mass index (most recently recorded), body mass index not-recorded (1 = TRUE 0 = FALSE), FEV1% predicted in preceding year, and FEV1 not-recorded in preceding year (1 = TRUE, 0 = FALSE). 
MRC Dyspnea score was not included because of its high missingness (90%) before or on the day of COPD diagnosis. Co-morbidities considered in this study were based on the list used in Barnett et al. [17]. To extract these conditions, we used the read and product code based definitions that have been developed by the CPRD @ Cambridge team (Rupert Payne et al., under review; https://www. phpc.cam.ac.uk/pcu/cprd_cam/codelists/v11/). For comparison we also extracted co-morbidities used in the Charlson co-morbidity index using read and product code based on liver disease, metastatic carcinoma, dementia, hemiplegia/paraplegia from the same website, with all other codelists available on request. Asthma was defined using an alternative codelist and approach developed for COPD patients, requiring presence of an asthma code between two-five years before COPD diagnosis, to reduce the presence of misdiagnosed patients. The Barnett co-morbidities were used to calculate the Cambridge multimorbidity score, as detailed in Payne et al., (under review). Co-morbidities were considered as present or absent at the point of COPD diagnosis, with the exception of kidney disease which we modelled using the maximum value of eGFR from the last two measurements before COPD diagnosis. An indicator for not recording eGFR at least twice, irrespective of its value, was used (1 = eGFR tested only once or not at all, 0 = eGFR tested twice or more). An additional co-morbidity was also added-gastro-oesophageal reflux disease recorded in the preceding year. Latest values for blood albumin, platelets and c-reactive protein as well as their corresponding not-recorded indicators were also considered in some models. Continuous variables were median centred. Missing values in variables with a corresponding indicator of test not performed were set to the median observed value. Outliers were removed as follows: FEV1 above 5 litres, body mass index above 70 kg/m 2 , eGFR above 200 mL/min/1.73m 3 , c-reactive protein above 370 mg/L and albumin above 70 g/L. Risk prediction modelling approaches In this study, we considered several modelling methods (logistic, survival, lasso, ridge, random forest) and sets of variables (basic, co-morbidities, co-morbidity interactions), as summarised in S1 Table. The model that we ultimately chose, which we call incident COPD prognosis (iCOPD), was based on logistic regression (glm) without any interaction terms, and only used 16 co-morbidities. Assessment of predictive ability To avoid over-optimism about the predictive performance of any given model in the development stage, patients with linked data were randomly split into a training set of 80% of practices and a held-out test set of 20% of practices. The training set was used to fit the models and to determine which combination of model and variable set (listed in S1 Table) provided the best predictions. This was done using ten-fold cross-validation of the training set, with five replications. The predictive performances of the iCOPD model was evaluated in the held-out test set. We additionally tested iCOPD (and only this model) in the patients without linked data. To do this, we used CPRD-GOLD recorded death dates and excluded the 10% of patients whose vital status after 5-years is unknown. The score we used to assess overall predictive accuracy was the Brier score (rms) which takes a value between zero and one, with lower scores indicating more accurate prediction [18]. 
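As an illustration of the assessment measures used throughout the paper (overall accuracy, discrimination and calibration), the sketch below computes the Brier score, the area under the curve, a calibration slope obtained by refitting the outcome on the logit of the predicted probabilities, and the actual-versus-estimated comparison by risk quintile. It assumes scikit-learn, statsmodels and pandas as stand-ins for the R packages cited in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import brier_score_loss, roc_auc_score

def assess_predictions(y_true, p_hat):
    """Brier score, AUC and calibration slope for binary five-year mortality."""
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(p_hat, dtype=float), 1e-12, 1 - 1e-12)
    brier = brier_score_loss(y, p)
    auc = roc_auc_score(y, p)
    # Calibration slope: coefficient from refitting the outcome on the linear predictor
    lp = np.log(p / (1 - p))
    fit = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
    return {"brier": brier, "auc": auc, "calibration_slope": fit.params[1]}

def observed_vs_expected_by_quintile(y_true, p_hat):
    """Actual versus estimated deaths within quintiles of predicted risk."""
    df = pd.DataFrame({"y": y_true, "p": p_hat})
    df["quintile"] = pd.qcut(df["p"], 5, labels=False)
    return df.groupby("quintile").agg(actual=("y", "sum"),
                                      estimated=("p", "sum"),
                                      n=("y", "size"))
```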
To assess calibration we used the calibration slope (rms) where a slope of 1 indicates perfect calibration [18]. To assess discrimination we used the area under the curve measure (equivalent to c-index) (rms) which takes a value between 0 and 1, with higher scores indicating better discrimination [18]. Finally, we compared actual to predicted risk in each subgroup of the sample defined by quintiles of predicted risk (ResourceSelection). Ethics approval The use of data from the Clinical Practice Research Datalink was approved by the CPRD-Independent Scientific Advisory Committee (16_276). Characteristics of the COPD population From a total of 222,970 COPD patients in CPRD-GOLD, 60,060 patients (all in England) had linked data (from Office for National Statistics, Hospital Episode Statistics and Index of Multiple Deprivation) and 37,218 (from across the UK) did not. Patient flow is depicted in Fig 1. The median age was 68 and 67 years, and the median FEV1% predicted was 65% and 64% for the linked and unlinked patients respectively. The majority-84% in the linked and 86% in the unlinked group-had at least one of the Barnett co-morbidities by COPD diagnosis (i.e. were multimorbid), with the median number being 2. Between a quarter and a fifth-24% in the linked and 21% in the unlinked group-died within five years of their COPD diagnosis. As expected, the presence of co-morbidities was related both to age and to death. The proportion of patients whose body mass index or smoking status were not recorded was higher in those without a Barnett co-morbidity (Tables 1 and 2). The most prevalently recorded of the Barnett co-morbidities in these patients was hypertension (38% in the linked and 37% in the unlinked group), followed by painful condition (30% in the linked and 34% in the unlinked group) and asthma (20% in the linked and 18% in the unlinked group). Only seven Barnett co-morbidities-dementia, chronic liver disease, anorexia/bulimia, Parkinson's, migraine, multiple sclerosis and learning disability-had a recorded prevalence of <1% in the linked group (S2 Table). All of these except dementia also had a recorded prevalence of <1% in the unlinked group. The linked group was randomly split at the practice level into a training set for model development containing 47,964 COPD patients and a held-out test set of 12,096 COPD patients. Five-year mortality was 24% in both these datasets. Development of models within the training set First we compared various modelling approaches, and found logistic regression to perform well (S1 Fig). Using logistic regression we wanted to develop a model, iCOPD, that uses a similar number of variables as the Charlson co-morbidity index uses (or fewer if possible). However, we wanted to include in addition four variables with known relevance to prognosis of survival in COPD patients: gender, smoking status, body mass index and FEV1% predicted. We used repeated 10-fold cross validation (with five replicates) in the training set of linked patients to compare two models, both of which used information on 21 variables, including age, gender, smoking status, body mass index and FEV1% predicted. These models also included not-recorded indicators for body mass index and FEV1% predicted, as well as quadratic terms for age, body mass index and FEV1% predicted. The first of the two models additionally included Charlson co-morbidity index, which is derived from information on 16 variables (i.e. diseases). 
This model was out-performed by iCOPD, which included main effects for the 16 diseases whose variables had the largest absolute log odds ratios in a larger model that included main effects for the 30 co-morbidities with a prevalence >1% in the linked group (S2 Fig). The iCOPD model had a better overall predictive accuracy and discrimination than models using only basic variables and/or multimorbidity risk scores (i.e. the Charlson co-morbidity index or Cambridge multimorbidity score; S1 and S2 Figs). The model was not noticeably improved by the inclusion of additional co-morbidities, a diagnosis year variable or extra blood tests (eGFR, albumin, c-reactive protein, platelets; S1 Fig). The iCOPD model was re-fitted to the full training data (80% of practices in the linked group), resulting in the coefficients provided in Table 3 (and S3 Table in machine readable form). In this model, having cancer (odds ratio (OR) 0.44), heart failure (OR 0.44), alcohol problems (OR 0.49) and being older (e.g. ORs 2.0 and 0.47 for ages 59 and 76, respectively, compared to the median age of 68) were most negatively associated with survival. In contrast, never smoking (OR 1.9) was most positively associated with survival. Not-recorded indicators for FEV1 (OR 0.58) and for BMI (OR 0.60) were negatively associated with survival. Caution should be taken in the interpretation of these odds ratios, which, while useful for prediction, may be biased. This is especially true for odds ratios for variables with associated not-recorded indicators. Validation of models within the held-out test set of English practices Within the held-out test set of the linked group iCOPD performed well (area under the curve of 0.801, calibration slope of 0.991 and Brier score of 0.139) and comparably to its performance in the training set (Table 4). Actual versus estimated deaths in risk quintiles of the held-out data set for iCOPD are compared in Table 4. Positive and negative predictive values for both models in the test set across a range of thresholds are given in Fig 2. Table 4. Actual versus estimated deaths in risk quintiles of the held-out data set for iCOPD. Validation of models within the test set of UK practices Within the unlinked group, which was not used in model development, iCOPD performed well (area under the curve 0.794, calibration slope 0.978 and Brier score 0.134). The performance of iCOPD was comparable between the linked and unlinked patient groups; this was also the case when the unlinked group was stratified by country (Table 5). The largest difference in performance was seen between the linked and unlinked patients from English practices. However, the performance of iCOPD in unlinked English practices was still acceptable (area under the curve 0.783, calibration slope 0.911 and Brier score 0.133). Fig 2. Positive and negative predictive value (PPV and NPV) for prediction of five-year mortality in the held-out test set across a range of probability cut-offs for the iCOPD model. Table 5. Comparison of iCOPD validation performance between the linked and unlinked groups, and regions of the UK.
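A model of this general shape (main effects for binary co-morbidities, quadratic terms for age, BMI and FEV1% predicted, and not-recorded indicators) can be expressed compactly with a model formula. The sketch below is illustrative only: the variable names are hypothetical, the outcome is coded as death within five years (so the odds ratios it reports are for death rather than survival), and it does not reproduce the published coefficients in Table 3.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_icopd_style_model(df, comorbidity_cols):
    """Fit a logistic model in the spirit of iCOPD and return exponentiated coefficients.

    df needs: died_5y (0/1), median-centred age, bmi and fev1_pct_pred,
    gender, smoking_status, bmi_not_recorded, fev1_not_recorded,
    plus one 0/1 column per co-morbidity (names are illustrative).
    """
    base = ("died_5y ~ age + I(age**2) + bmi + I(bmi**2) + "
            "fev1_pct_pred + I(fev1_pct_pred**2) + "
            "C(gender) + C(smoking_status) + bmi_not_recorded + fev1_not_recorded")
    formula = base + " + " + " + ".join(comorbidity_cols)
    model = smf.logit(formula, data=df).fit(disp=False)

    ci = model.conf_int()
    odds_ratios = pd.DataFrame({
        "odds_ratio": np.exp(model.params),   # ORs for death within five years
        "ci_low": np.exp(ci[0]),
        "ci_high": np.exp(ci[1]),
    })
    return model, odds_ratios
```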
Discussion We have used a large primary care cohort to develop and validate the iCOPD model for the prediction of mortality within 5 years of a COPD diagnosis, using only variables already recorded within health records at the time of diagnosis. iCOPD achieved areas under the curve of between 0.783 and 0.809 and calibration slopes between 0.911 and 1.04 in validation cohorts from across the UK not used in model development. As these are the first models to predict 5-year mortality from the point of COPD diagnosis based only on data already available within health records, there is no direct comparison with existing COPD prognosis scores. However, our models outperformed models using the Charlson co-morbidity index and Cambridge multimorbidity score risk scores. Importantly, iCOPD had relatively consistent performance between development and validation cohorts. iCOPD is accessible through an online risk calculator (https://skiddle.shinyapps.io/incidentcopdsurvival/). We used not-recorded indicators for several variables, because it is likely that the fact that data are not recorded within GP records is itself informative of risk. For example, FEV1 data are necessary for COPD diagnosis, and so their absence within GP records at the first recording of COPD is likely to be because patients were diagnosed and tested within secondary care. This could indicate that they are more ill, which is consistent with the negative association of survival with FEV1 not being recorded in GP records. Limitations of this study include that patients may be misclassified due to undiagnosed co-morbidities, or misdiagnosis of COPD or co-morbidities. However, the use of many relevant co-variates, such as never smoking, will partly account for this. For the unlinked group vital status at five years was unknown for 10% of patients. Therefore, we are encouraged by the similarity of the estimated performance measures between the unlinked group and the held-out part of the linked group (where vital status was always known). Additionally, due to the observational and prediction-based nature of this study, associations between variables and mortality should not be interpreted causally. As a substantial proportion of COPD patients are on long-term bronchodilators, it is likely that FEV1 measurements are post-bronchodilator. Unfortunately, specific information on whether FEV1 was measured post-bronchodilator is not routinely recorded in UK GP records. Finally, while we have taken care to rigorously assess the predictive model using cross-validation and held-out data, it has not yet been validated using external data, e.g. other GP record systems or other data from non-UK countries. Within the UK, consistent clinical and recording practices in GP record systems mean that our models are likely to be relevant [19]. While clinical and recording practice may differ subtly in other European countries, we believe that iCOPD is likely to have utility in these settings (and would like to validate this). In countries, including the USA, where diagnosis and management are more often in specialty settings, iCOPD is less likely to have utility. The focus of our work was on developing a good prediction model, rather than searching for significant associations between individual variables and mortality. However, in agreement with the results of the COTE study [3], we found that cancer was strongly associated with risk of mortality.
We see a stronger association between heart failure and death than the COTE study, which may be to do with differences in the populations studied, the data sources (designed study versus primary care records) or the modelling approaches used. Increased risk of mortality in individuals with both heart failure and COPD has previously been found to be associated with intense COPD treatment [20]. Our studies agree that alcohol problems, atrial fibrillation and coronary heart disease are associated with mortality risk. However, we find many more conditions that help to predict mortality in incident COPD patients. In the future we hope to improve iCOPD with the addition of extra variables (e.g. additional COPD symptoms, exacerbation-like events, severity of co-morbidities, or using less broad co-morbidity definitions) and the use of longitudinal (i.e. time-varying) data up to the point of diagnosis. We also plan to use to it as the basis of a model that works equally well for both incident and prevalent cases, and dynamically over time. The most important thing to study, however, would be whether iCOPD is useful for clinicians and their COPD patients. In conclusion, we have developed and validated a model for the prediction of mortality five years after the diagnosis of COPD, providing an online risk calculator. If shown to be helpful, it could be implemented within GP health records, providing prognosis information to GPs automatically using the data that they already collect on their COPD patients. Supporting information S1 Table. Modelling approaches compared in model development for objective 1, for results see S1 Fig. The modelling methods were logistic regression, random forests (a popular machine-learning technique) and Cox regression (i.e. Cox proportional hazards). The variable sets were: just basic variables; basic variables and co-morbidity score; and basic variables and co-morbidity indicators. Logistic regression (glm) and random forest (randomForest) analyse survival as a binary variable: death within five years of COPD diagnosis. Cox regression (survival) analyses survival as a time to event outcome, in this case with survival times censored at 5 years after COPD diagnosis. This censoring has been advocated as a way to improve predictions. Logistic regression and Cox regression were performed with ridge penalisation, lasso penalisation or no penalisation, and, when the variable set included co-morbidity indicators, both with and without pairwise interactions between these indicators. The Aalen-Nelson estimator of the baseline hazard was used to make predictions from the fitted Cox regression model. CRP = C-reactive protein. Default settings were used for all methods and nested crossvalidation of penalized models was used to choose the penalty parameter (cv.glmnet). Co-morbidity indicators and pairwise interactions between co-morbidity indicators were only included in relevant models if they were >1% prevalent, e.g. a pairwise interaction between co-morbidities was only included if at least 1% of patients had both. Table, results of 5 repeats of 10-fold cross validation within the training set (80% of the linked group). Boxplot showing median and interquartile ranges for (a) prediction accuracy (Brier score), (b) discrimination (AUC = Area Under the Curve) and (c) calibration slope of the prediction models. 'B' variables include age, gender, socioeconomic status, smoking status, BMI (value and testing indicator), FEV1% predicted (value and testing indicator). 
'CCI' is a single variable (derived from 17 variables), the Charlson Co-morbidity Index. 'CMS' is a single variable (derived from 20 variables), the general Cambridge Multimorbidity Score, which depends on the presence of Barnett comorbidities. 'C' includes a separate term for each co-morbidity variable. 'C^2' includes main effects and pairwise interactions between each co-morbidity variable. 'All' includes all basic and co-morbidity variables in a non-linear fashion. For (a) and (b) the red dashed line indicates the best median value over all modelling strategies, whereas for (c) it indicates the perfect calibration (slope = 1). CRP = C-reactive protein. (DOCX) S2 Fig. Comparison of 21 variable models with each other, with basic variables only (excluding IMD) and with a larger model. Results of 5 repeats of 10-fold cross validation within the training set (80% of the linked group). Boxplot showing median and interquartile ranges for (a) prediction accuracy (Brier score), (b) discrimination (AUC = Area Under the Curve) and (c) calibration slope of the prediction models. For (a) and (b) the red dashed line indicates the best median value over all modelling strategies, whereas for (c) it indicates the perfect calibration (slope = 1). 'CCI' is a single variable, the Charlson Co-morbidity Index. 'Basic-IMD (B-I)' is a model using age, gender, smoking status, body mass index (BMI) and Forced Expiratory Volume in 1-second (FEV1) % predicted, as well as quadratic terms for age, BMI and FEV1% predicted, and not-recorded indicators for BMI and FEV1% predicted. '(B-I) + CCI' adds the Charlson Co-morbidity Index (CCI) to the previous model, this index is calculated using data on 16 co-morbidities. Our 21 variable model uses adds 16
2020-07-23T09:01:59.737Z
2020-07-21T00:00:00.000
{ "year": 2020, "sha1": "58379639d124d983b2f8206a9b068804d50c0662", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0236011&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b57641ee1c2f504815cd8d4a34b9212f429d16a7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236479575
pes2o/s2orc
v3-fos-license
Differentially Private Outlier Detection in Multivariate Gaussian Signals The detection of outliers in data, while preserving the privacy of individual agents who contributed to the data set, is an increasingly important task when monitoring and controlling large-scale systems. In this paper, we use an algorithm based on the sparse vector technique to perform differentially private outlier detection in multivariate Gaussian signals. Specifically, we derive analytical expressions to quantify the trade-off between detection accuracy and privacy. We validate our analytical results through numerical simulations. Differential privacy provides a much stronger privacy guarantee than anonymization techniques such as k-anonymity [9]. Differential privacy guarantees that the participation or absence of an individual agent does not significantly alter the output of any query (e.g., "is Σ_i x_i an outlier?"). Differential privacy can be achieved through input perturbation (i.e., random noise is added to x_i, ∀i) or output perturbation (i.e., random noise is added to Σ_i x_i) [7]. Higher noise provides more privacy at the cost of query accuracy. A more sophisticated method is the sparse vector technique (SVT) [6], [10], which provides, for certain types of queries, higher accuracy for the same level of privacy. Several differentially private algorithms have been proposed for classical hypothesis testing. For example, [11] and [12] assume categorical data that follow a multinomial distribution in order to prove the privacy properties of their proposed algorithms. Differential privacy has also been considered in the context of anomaly detection, using Monte Carlo (MC) [11], [12] and machine learning-based [13] techniques. [14] and [15] propose statistical tests for normally distributed data under differential privacy constraints to decide whether or not the mean of a sequence of scalar, independent, and identically distributed (i.i.d.) Gaussian random variables attains a given value. Furthermore, [16] proposes a differentially private mechanism to detect distributional changes at an unknown change-point in a sequence of scalar i.i.d. random variables. However, these prior works assume that the privacy-sensitive data provided by each individual are i.i.d., which may not be the case in data generated from networked systems where individuals' data are correlated [5], [8], [17]. Our contribution in this paper is the design and analysis of SVT for detecting outliers in multivariate Gaussian signals. This setting considers agents whose signals may be correlated. Our approach differs from prior work in two ways: unlike the Monte Carlo approaches presented in [11], [12] and the machine learning-based approach of [13], we derive analytic expressions for the accuracy of a differentially private mechanism; unlike [14]–[16], which only consider i.i.d. scalar quantities, we explore multivariate correlated data. The remainder of the paper is organized as follows: we set up the problem in Sec. II, present our main results in Sec. III, validate our approach with numerical simulations in Sec. IV, and provide concluding remarks in Sec. V. A. Notation A generic probability triple is denoted (Ω, F, P), where F stands for a σ-algebra on the sample space Ω, and P is a probability measure defined on F. The ℓ_p-norm of a vector x ∈ R^n is denoted by ||x||_p; we use |·| for absolute values. We denote an n-dimensional Gaussian vector with mean µ = [µ_i] ∈ R^{n×1} and covariance Σ ∈ S^{n×n} by x ∼ N(µ, Σ).
We denote by Lap(b) a zero-mean Laplace distribution with variance 2b 2 and probability density function 1 Σ1. When µ, σ, and σ 2 have no indices, they refer respectively to the mean, standard deviation, and variance of the sum of the elements in X, i.e., ∼ N (µ, σ 2 ). We denote the error function by erf(x) = 2 √ π x 0 e −t 2 dt, and the complementary error function by erfc(x) = 1 − erf(x). ∼ N (µ, Σ), meaning that the samples across k are i.i.d. However, individual values across each vector x (k) are correlated. We design a statistical test to check whether or not there is an outlier in terms of the magnitude of the signal. We first formally define the term outlier, as used throughout this article. B. Outlier Detection Problem In other words, we label an observation as an outlier of level κ if the sum of its elements is at least κ standard deviations away from its expected value. This notion of outliers captures the impact of the signal magnitude. Accordingly, we map observations in the data set O m to one of the two following hypotheses: where the threshold is h = κσ. Consequently, we can compute the following decision rule: where We note that rule (1) determines whether or not an observation is an outlier, and it depends on the data set O m . In this article, we consider cases in which the data set O m is privacy-sensitive. As we discuss in the next subsection, our goal is to publish the results of outlier detection under a differential privacy constraint. In order to satisfy the differential privacy requirement, we will modify the decision rule (1) in Sec. III. In the rest of this section, we briefly review the concepts of differential privacy and differentially private mechanisms [18]. C. Differential Privacy Consider a space H of data sets. Throughout this article, we have that H ≡ R n×m denotes the space containing the observation sequence O m . A mechanism M is defined as a random map from H to some measurable output space. The goal of a differentially private mechanism is to produce outputs with similar distributions for inputs that we wish to make indistinguishable [6]. We define a symmetric binary relation Adj on H, called adjacency, to describe which inputs are considered "close" in some sense. For example, two inputs are termed adjacent if all the entries are the same for all individuals, except for at most one entry corresponding to one individual, that has a bounded difference. More formally, two sequences of are adjacent if, and only if: Next, we provide the formal definition of differential privacy as presented in [18], [19]. Definition 2. Consider H, a space provided with a symmetric binary relation denoted Adj, and let (P, M) be a measurable space, where M is a given σ-algebra over P. Note that (4) O m are adjacent. We now define a quantity that plays a key role in the design of differentially private mechanisms. Definition 3. Consider a space of data sets H with an adjacency relation Adj, and let P be a vector space with norm · P . The sensitivity of a query q : H → P is the quantity In particular, when P = R n (with n = +∞ being a possibility), and given the p-norm for p ∈ [1, ∞], this definition of ∆ Pq is called the p -sensitivity. For notational brevity, we simply write ∆ instead of ∆ Pq when the context is clear. 
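For reference, the inequality and sensitivity expression that Definitions 2 and 3 rely on, together with the Laplace density and error function used later, take the following standard forms (generic statements in the usual notation, which may differ in minor details from the paper's own displayed equations):

```latex
% epsilon-differential privacy of a mechanism M for adjacent data sets
\[
  \mathbb{P}\!\left( M(O_m) \in S \right)
  \;\le\; e^{\varepsilon}\, \mathbb{P}\!\left( M(\tilde{O}_m) \in S \right),
  \qquad \forall S \in \mathcal{M},\ \ \forall\, \mathrm{Adj}(O_m,\tilde{O}_m),
\]
% sensitivity of a query q, Laplace density, and error function
\[
  \Delta_{\mathcal{P}} q \;=\; \sup_{\mathrm{Adj}(O_m,\tilde{O}_m)}
  \bigl\| q(O_m) - q(\tilde{O}_m) \bigr\|_{\mathcal{P}},
  \qquad
  \mathrm{Lap}(b):\ f(x) = \tfrac{1}{2b}\, e^{-|x|/b},
  \qquad
  \operatorname{erf}(x) = \tfrac{2}{\sqrt{\pi}} \int_0^{x} e^{-t^2}\, dt .
\]
```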
In this article, we design a differentially private outlier detection algorithm for multivariate Gaussian signals, namely, an outlier detection algorithm which publishes a decision that is differentially private with respect to the adjacency relation (3) for queries on O m . III. DIFFERENTIALLY PRIVATE DETECTION OF OUTLIERS IN MAGNITUDE Following the SVT as presented in [20], we design the following differentially private outlier detection algorithm for multivariate signals: Differentially private outlier detection algorithm for each observation vector x (k) = x (k) i ∈ R n×1 . We omit k for brevity. Proof. For two observation sequences O m andÕ m with an adjacency relation defined in (3), the sensitivity can be bounded as follows: By using the reverse triangle inequality, it follows that . Fig. 1 summarizes the OUTLIERDETECT algorithm. A. Performance Analysis In this section, we characterize the privacy-utility tradeoff of our privacy-preserving algorithm, OUTLIERDETECT. Our analysis relies on the following calculation. Proposition 1. Consider two independent random variables Z 1 ∼ Lap 4∆ and Z 2 ∼ Lap 2∆ . The probability density function (pdf) of the difference Z = Z 1 − Z 2 can be computed as follows: Proof. Since the Laplace pdf is symmetric about 0, the pdf of Z 2 is the same as pdf of −Z 2 . Thus, the pdf of Z = Z 1 + (−Z 2 ) is given by We evaluate (7) for two cases: z ≥ 0 and z < 0. When z ≥ 0, we split the integration limits in (7) Next, we give formal definitions of the classification errors that will be used to characterize the performance of the OUTLIERDETECT algorithm. Error definitions Two types of errors are important for any classification or hypothesis testing problem: Type I (or false positives) and Type II (or false negatives). In our case, the Type I error rate (P I ) is the probability that a nominal data point is classified incorrectly as an outlier by the OUTLIERDETECT algorithm, and the Type II error rate (P II ) is the probability that an outlier is classified incorrectly as nominal by OUTLIERDETECT. Complementary to P I is the true negative rate given by 1 − P I ; similarly, the true positive rate is given by 1 − P II . Figure 2 shows a geometric perspective of these four probabilities with respect to the threshold h, the query q x (k) , and noise Z drawn according to the density (6) from Proposition 1. Next, we use these error definitions to discuss the performance of the OUTLIERDETECT algorithm. First, we need to derive the pdf of the queries q x (k) . For conciseness, let f Q (q) denote the pdf of q x (k) . The following proposition gives us an expression for f Q (q): Proposition 2. Each query q x (k) defined in (2) is a realization of a random variable Q whose pdf is , and the result follows. Next, in Theorem 2, we derive an analytical formula for the true positive rate 1 − P II of the OUTLIERDETECT algorithm. Theorem 2. For each data point indexed by k = 1, . . . , m, the algorithm OUTLIERDETECT (O m , q(·), h, ρ, ) achieves the following true positive probability: Proof. The outlier detection algorithm gives a true positive result when q Combining the noise terms, we can define z (k) = ζ (k) − η, where z (k) has density (6) from Proposition 1. 
We then have the true positive rate, defined in terms of a conditional probability, as: Denoting the true positive region in Figure 2 (i.e., the 1 − P II region) by R, and using the fact that Q and Z are independent, we have: Expanding using Propositions 1 and 2, we get: The rest of the proof involves algebraic simplification of Equation (9) in terms of erfc, and defining appropriate constants c, a 1 and a 2 . Theorem 3. For each data point indexed by k = 1, . . . , m, the algorithm OUTLIERDETECT (O m , q(·), h, ρ, ) incurs the following false positive probability: with constants c, a 1 , and a 2 as defined in Theorem 2. Proof. The proof is similar to that of Theorem 2. Corollary 2. The probability P I of incurring a Type I error (as derived in Theorem 3) is 1/2 for → 0, and is 0 for → ∞. B. Discussion In the proposed OUTLIERDETECT algorithm, the differential privacy parameter directly affects the scale of the noise terms for both the query and the threshold. Corollaries 1 and 2 confirm our intuitions of the behavior of the true positive and false positive probabilities of OUTLIERDETECT at the two extreme regimes: infinite noise ( → 0) and no noise ( → ∞). In the former case, the -differentially private OUTLIERDETECT algorithm classifies no better than a random guess, with both 1 − P II = P II = 1/2 and P I = 1−P I = 1/2. In the latter case, thedifferentially private variant of OUTLIERDETECT behaves exactly like its non-differentially private counterpart, so no Type I nor Type II errors are expected. In other words, for the case with no noise, we have that P I = P II = 0, and the true positive and true negative probabilities are both equal to 1. IV. NUMERICAL SIMULATIONS We generate a data set of residuals O m with m = 1, 000, ∼ N (µ, Σ), and 1 µ = 1.73 × 10 4 , 1 Σ1 = 3.01 × 10 7 . We set a sensitivity of ∆ = ρ = 500. We first compare the analytical and empirical performance of Theorems 2 and 3, using O m . We set h ≈ 9.13 × 10 3 , resulting in 10% of observations x (k) being classified as outliers. This comparison is plotted in Figure 3. Note the validation of our analytical expressions in the limiting regimes of → 0 and → ∞. Additionally, the classification performance degrades ("i.e., less information is revealed") with higher levels of privacy (i.e., decreases). In Figure 4, we examine the performance of Theorems 2 and 3 with respect to O m , parameterized by threshold h. Each -level curve in Figure 4 sweeps out the P I versus P II probabilities from left to right corresponding to decreasing h. We see that for high privacy requirements (i.e., = 0.01), the classifier conceals information regardless of the threshold h. As the privacy requirements relax, the incidence of false positive classifications decrease with higher h, at the cost of incurring more false negatives. Finally, in Figure 5 we show via a receiver operating characteristic (ROC) curve the trade-off between detection accuracy (i.e., the true positive rate 1 − P II versus false positive rate P I ) and privacy requirements when we use OUTLIERDETECT to analyze O m . Once again, we set h ≈ 9.13 × 10 3 . As expected, the performance of the algorithm is worse in the high privacy regime (i.e., as becomes smaller). V. CONCLUSION In this article, we considered the problem of conducting norm-based outlier detection in multivariate Gaussian signals under a differentially private constraint. We designed a differentially private outlier detection algorithm, and derived closed-form expressions for its classification probabilities. 
Using a numerical example, we quantified the trade-off between classification accuracy and privacy, and empirically validated our theoretical results. Our ongoing work investigates the applications of the proposed framework to mode detection in hybrid systems, and to the classification of graph signals.
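As a rough, self-contained illustration of the mechanism analyzed above, the following Python sketch implements an SVT-style OUTLIERDETECT and checks its error rates by Monte Carlo on synthetic correlated Gaussian data. The noise scales follow the usual sparse-vector split (threshold noise Lap(2Δ/ε) drawn once, per-query noise Lap(4Δ/ε)), matching the Laplace scales in Proposition 1; the data, threshold and sensitivity values are invented stand-ins rather than the residual data set of Section IV, and the known mean of the summed signal is replaced by its empirical value.

```python
import numpy as np

rng = np.random.default_rng(0)

def outlier_detect(X, h, delta, eps):
    """SVT-style differentially private outlier detection (a sketch).

    X: (m, n) array, one multivariate observation per row.
    h: decision threshold on the centred-sum query.
    delta: query sensitivity (rho in the adjacency relation).
    eps: differential privacy parameter.
    """
    mu_sum = X.sum(axis=1).mean()                 # stand-in for the known mean of the sum
    q = np.abs(X.sum(axis=1) - mu_sum)            # query: deviation of the summed signal
    eta = rng.laplace(scale=2 * delta / eps)                  # threshold noise, drawn once
    zeta = rng.laplace(scale=4 * delta / eps, size=len(q))    # per-query noise
    return (q + zeta) >= (h + eta)                # True = reported as an outlier

# Monte Carlo check of the privacy-accuracy trade-off on synthetic data
m, n = 1000, 5
Sigma = 0.5 * np.ones((n, n)) + 0.5 * np.eye(n)   # correlated components
X = rng.multivariate_normal(np.full(n, 10.0), Sigma, size=m)
sums = X.sum(axis=1)
h = np.quantile(np.abs(sums - sums.mean()), 0.9)  # ~10% ground-truth outliers
truth = np.abs(sums - sums.mean()) >= h

for eps in [0.01, 0.1, 1.0, 10.0]:
    flags = outlier_detect(X, h, delta=1.0, eps=eps)
    tpr = (flags & truth).sum() / max(truth.sum(), 1)       # 1 - P_II
    fpr = (flags & ~truth).sum() / max((~truth).sum(), 1)   # P_I
    print(f"eps={eps:>5}: true positive rate={tpr:.2f}, false positive rate={fpr:.2f}")
```

With ε large the empirical rates approach the noiseless behaviour (true positive rate near 1, false positive rate near 0), while for small ε both tend toward 1/2, in line with Corollaries 1 and 2.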
2021-07-29T13:27:25.542Z
2021-05-25T00:00:00.000
{ "year": 2021, "sha1": "1806c22fe18d2f31f9daa941d1df1b30c2139f02", "oa_license": "CCBYNCSA", "oa_url": "https://dspace.mit.edu/bitstream/1721.1/145270/2/Degue-etal-ACC_21_FINAL.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "95cfb531519a18a064b459aa7a9b7fc12b158e08", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
226258790
pes2o/s2orc
v3-fos-license
Multiclonal complexity of pediatric acute lymphoblastic leukemia and the prognostic relevance of subclonal mutations Genomic studies of pediatric acute lymphoblastic leukemia (ALL) have shown remarkable heterogeneity in initial diagnosis, with multiple (sub)clones harboring lesions in relapse-associated genes. However, the clinical relevance of these subclonal alterations remains unclear. We assessed the clinical relevance and prognostic value of subclonal alterations in the relapse-associated genes IKZF1, CREBBP, KRAS, NRAS, PTPN11, TP53, NT5C2, and WHSC1 in 503 ALL cases. Using molecular inversion probe sequencing and breakpoint-spanning polymerase chain reaction analysis we reliably detected alterations with an allele frequency below 1%. We identified 660 genomic alterations in 285 diagnostic samples of which 495 (75%) were subclonal. RAS pathway mutations were common, particularly in minor subclones, and comparisons between RAS hotspot mutations revealed differences in their capacity to drive clonal expansion in ALL. We did not find an association of subclonal alterations with unfavorable outcome. Particularly for IKZF1, an established prognostic marker in ALL, all clonal but none of the subclonal alterations were preserved at relapse. We conclude that, for the genes tested, there is no basis to consider subclonal alterations detected at diagnosis for risk group stratification of ALL treatment. Introduction Improvements in the treatment of pediatric acute lymphoblastic leukemia (ALL) have resulted in high overall survival rates, now approaching 90%. 1,2 Nevertheless, relapse still remains the most common cause of treatment failure and death in children with ALL, and better recognition of individuals at risk of developing relapse will likely aid further improvements in outcome. Recent studies describing the genomic landscape of relapsed ALL have shown that relapse often originates from a minor (sub)clone at diagnosis, at a cellular fraction often undetectable by routine diagnostic methods. [3][4][5] These minor (sub)clones harbor genomic alterations acquired later during leukemia development, which could potentially contribute to clonal drift, but are unlikely to be essential for initiation of the primary disease. However, selective pressure of the upfront treatment may provide a competitive advantage to subclones that harbor alterations in cancer genes, enabling their selective survival, eventually leading to treatment failure. Both the number and clonal burden of the alterations in these genes are expected to be increased at the time of relapse, compared to initial diagnosis. Indeed, mutations in relapse-associated genes, such as those in the histone acetyltransferase (HAT) domain of the histone methyltransferase CREBBP, can often be traced back to minor subclones in the diagnostic sample. 4,6,7 Genomic characterization of relapsed pediatric ALL has revealed multiple alterations that are enriched compared to diagnosis, including activating mutations in RAS pathway genes, HAT domain mutations in CREBBP and deletions or mutations in the B-cell transcription factor IKZF1. [6][7][8][9][10][11][12][13] The presence of these aberrations at the time of diagnosis can be of potential prognostic relevance, as has been demonstrated extensively for IKZF1 in many different treatment protocols 12,[14][15][16][17][18][19] and can even lead to adjustments in stratification and treatment. 
14,20 However, it remains unclear whether mutations in relapse-associated genes when present in a minor subclone at initial diagnosis are also clinically relevant. Subclonal mutations can be identified using deep targeted, next-generation sequencing techniques. 21,22 Despite the sensitivity of these techniques, both amplification and sequencing can easily lead to errors that hamper the reliable detection of low-level mosaic mutations. We previously demonstrated that single molecule molecular inversion probes (smMIP), which use unique molecular identifiers to barcode each DNA copy, can correct for sequencing and amplification artefacts, resulting in a reliable detection of low-level mosaic mutations, down to a variant allele frequency of 0.4%. 23 In this study we used the smMIP-based sequencing approach to perform deep targeted sequencing of seven relapse-associated genes in a cohort of 503 pediatric ALL samples taken at initial diagnosis, resulting in the detection of 141 clonal and 469 subclonal mutations. In addition, we performed real time quantitative polymerase chain reaction (PCR) to sensitively detect subclonal IKZF1 exon 4-7 deletions (del 4-7), which were found at a similar frequency as full-clonal deletions. Subsequently, we estimated their potential as drivers of clonal expansion and prognostic markers for relapse development. Methods In this study we analyzed two cohorts of diagnostic samples from B-cell precursor ALL patients treated according to the Dutch Childhood Oncology Group (DCOG) protocols DCOG-ALL9 (n=131) 12,24 and DCOG-ALL10 (n=245) (Online Supplementary Table S1). Both cohorts were representative selections of the total studies 12,24 (Online Supplementary Table S2). The median age at diagnosis of the patients in these cohorts was 4 and 5 years, and the median follow-up time, estimated with a reverse Kaplan-Meier method, was 138 and 104 months, respectively. 25 Relapse occurred in 18% (24/131) and 11% (27/245) of the patients, while 0.7% (1/131) and 2.8% (7/245) died during the follow-up. DNA was isolated from mononuclear cells obtained from bone marrow or peripheral blood. The median blast percentage of the samples was 92% (Online Supplementary Table S3). To increase the number of patients for the comparisons between relapsed and non-relapsed cases, we used an extended cohort of diagnostic samples from 127 additional ALL patients treated according to the DCOG-ALL9 (n=76) or DCOG-ALL10 (n=51) protocols; this cohort was enriched for patients who had a relapse and also contained 55 patients with T-cell ALL. This latter cohort was not included in the survival analyses. In order to detect mutations preserved in major clones at relapse, we performed Sanger sequencing (73/171) or used previously published Ampliseq-based deep-sequencing data (98/171) to verify alterations observed at diagnosis. 26 In accordance with the Declaration of Helsinki, written informed consent was obtained from all patients and/or their legal guardians before enrollment in the study, and the DCOG institutional review board approved the use of excess diagnostic material for this study (OC2017-024). In order to accurately detect subclonal alterations in diagnostic samples, 166 smMIP were designed in CREBBP, PTPN11, NT5C2, WHSC1, TP53, KRAS and NRAS, seven genes that are frequently mutated in relapsed ALL (Online Supplementary Table S4, Online Supplementary Materials and Methods). 
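The reverse Kaplan-Meier estimate of median follow-up mentioned above simply swaps the roles of death and censoring. A minimal sketch using the Python lifelines package (an assumption; the authors worked in R) is shown below, with toy numbers rather than study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

def median_follow_up_reverse_km(time_months, died):
    """Median follow-up by the reverse Kaplan-Meier method.

    Patients who died are treated as censored for follow-up, and patients
    still alive at last contact are treated as the 'events'.
    """
    kmf = KaplanMeierFitter()
    kmf.fit(durations=time_months, event_observed=[1 - d for d in died])
    return kmf.median_survival_time_

# Illustrative call (toy numbers, not study data)
follow_up = pd.Series([120, 60, 138, 30, 104])
died = pd.Series([0, 1, 0, 1, 0])
print(median_follow_up_reverse_km(follow_up, died))
```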
IKZF1 and ERG deletion status was assessed using the multiplex ligation-dependent probe amplification assay (MLPA) SALSA P335 ALL-IKZF1 and P327 iAMP-ERG kits, respectively (MRC-Holland, the Netherlands), according to the manufacturer's instructions and as described before. 12,24 Additionally, IKZF1 4-7 deletions were assessed with Sanger sequencing and real-time quantitative PCR, using an IQ SYBR Green supermix (Biorad, USA). For detailed descriptions of the smMIP-based sequencing, IKZF1 deletion detection and data analysis, see the Online Supplementary Materials and Methods (Online Supplementary Figures S1 and S2, Online Supplementary Tables S4-S6). To test continuous and categorical variables, the nonparametric Wilcoxon signed rank and Fisher exact tests were used, respectively (R packages ggpubr version 0.2 and stats version 3.5.1). Cumulative incidence of relapse (CIR) was estimated by employing a competing-risk model with death as a competing event. 27 To assess the statistical difference between CIR curves, the Gray test 28 was applied. To investigate the effect of prognostic factors on relapse, univariate and multivariate Cox proportional hazard regression models were estimated. Competing risk analysis was performed with the R packages cmprsk (version 2.2-7) and survminer (version 0.4.3). Univariate and multivariate Cox models were estimated using the R package survival (version 3.1-12). Data were visualized using the R package ggplot2 (version 3.2.1) and cBioPortal MutationMapper. 29,30 Results A total of 503 diagnostic samples from children with ALL (Online Supplementary Table S1) was subjected to targeted deep sequencing of the relapse-associated genes TP53, CREBBP (HAT domain), KRAS, NRAS, PTPN11, NT5C2 and WHSC1 using smMIP, which contain random molecular tags to accurately detect low-level mosaic variants. 23 Each targeted region was covered with an average of 308 unique capture-based consensus reads (Figure 1, Online Supplementary Figure S1A, B), enabling the reliable detection of alterations with allele frequencies even below 1%. A total of 7,836 quality-filtered variants was detected, of which 610 were absent in public and private variant databases and were predicted as pathogenic. The allele frequency of these mutations ranged from 0.03% to 100% (Figure 2A, Online Supplementary Table S3). The majority of the mutations (473/610; 78%) was found in one of the three RAS pathway genes (KRAS, NRAS, PTPN11), of which 418 (88%) were known hotspot mutations. In addition to sequencing the seven relapse-associated genes, we performed sensitive screening for IKZF1 deletions, which are strongly associated with the occurrence of relapse. We chose to focus on exon 4-7 deletions, which represent 25% of all IKZF1 deletions, have a similar unfavorable outcome as other IKZF1 deletions, 31 and show the strongest clustering of deletion breakpoints, thus enabling their sensitive upfront detection by breakpoint-spanning semi-quantitative PCR. 32 Applying this strategy to the 503 diagnostic samples revealed all 22 IKZF1 exon 4-7 deletions previously identified using a standard MLPA method, as well as 28 additional cases carrying deletions that were missed with the MLPA technique. All breakpoints were sequenced to determine their unique breakpoint-spanning sequences (Online Supplementary Table S6).
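The competing-risk estimate of the cumulative incidence of relapse described in the statistical methods above can be sketched with an Aalen-Johansen estimator, treating death as a competing event. The lifelines package and the column names are illustrative assumptions; Gray's test for comparing groups is not reproduced here (the authors used the R package cmprsk for that step).

```python
import pandas as pd
from lifelines import AalenJohansenFitter

def cumulative_incidence_of_relapse(df):
    """Cumulative incidence of relapse with death as a competing event.

    df columns (illustrative): time_months, event (0 = censored,
    1 = relapse, 2 = death without relapse), group (e.g. IKZF1 status).
    Returns one cumulative incidence curve per group.
    """
    curves = {}
    for name, sub in df.groupby("group"):
        ajf = AalenJohansenFitter()
        ajf.fit(durations=sub["time_months"],
                event_observed=sub["event"],
                event_of_interest=1)
        curves[name] = ajf.cumulative_density_
    return curves
```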
Using a dilution series of a control sample with a full-clonal IKZF1 exon 4-7 deletion, we determined the level of clonality of the deletions, which ranged from 100% down to 0.32% (Figures 1 and 2, Online Supplementary Figure S1C). All but one of the subclonal IKZF1 exon 4-7 deletions had allele frequencies below 10% (Online Supplementary Table S7). Subclonal alterations in relapse-associated genes are common at diagnosis Combining sequence mutations and IKZF1 exon 4-7 deletions, we detected 660 genomic alterations in 285 diagnostic samples, of which 165 (25%) were present in the major fraction of cells (allele frequency ≥25%), which were referred to as high-clonal. The remaining 495 mutations (75%), most of which had an allele frequency <10%, were referred to as subclonal (Online Supplementary Figure S2, Online Supplementary Table S7). A total of 147/285 patients carried at least one alteration in a major clone, while 138/285 (48%) patients carried exclusively subclonal alterations. NRAS and KRAS were the most frequently affected genes, showing major clone mutations in 6% and 8% of the cases and subclonal mutations in 20% and 15% of the cases, respectively (Figure 2A, B). Potency of RAS pathway genes as drivers of clonal expansion We identified 473 RAS pathway mutations in 225/503 (45%) cases, of which 78% were subclonal (median allele frequency = 3.5%). Over half of the RAS-affected cases were hyperdiploid (>47 chromosomes), in line with previous studies indicating that RAS mutations are associated with hyperdiploidy at diagnosis. 10,33 The abundance of these mutations in major and minor clones suggests that these mutations drive clonal expansion during the development of leukemia. Major clone RAS pathway mutations (n=102; all being known hotspots) were found to be mutually exclusive, and 52/102 (51%) of these RAS-mutated cases had at least one additional subclonal mutation in one of the three RAS pathway genes. The mutations mostly affected codons 12 and 13 of KRAS and NRAS (Figure 3A-C), and considerable variability in the level of clonality was observed between the different RAS hotspot mutations at the time of diagnosis. For example, NRAS G12A (10 cases), NRAS G12V (7 cases), and PTPN11 E76K (7 cases) were never found to be present in a major clone, whereas 55% (n=11) of the KRAS G13D and 27% (n=9) of the KRAS G12D mutations were found in major clones. With these high numbers of RAS mutations, the variability in clonal burden between hotspot mutations may provide an opportunity to compare the capacity of different hotspots to drive clonal expansion of ALL. In order to test this hypothesis we compared allele frequencies and performed statistical analyses. We found that KRAS hotspot mutations had a significantly higher allele frequency compared to both NRAS and PTPN11 mutations (Wilcoxon signed-rank test, P<0.01) (Figure 3D). When comparing the different hotspot mutations within KRAS, A146V showed the lowest allele frequency, indicating a weaker potential of this hotspot to drive clonal expansion compared to the other KRAS hotspots. Furthermore, the allele frequency of KRAS G13D was significantly higher (Figure 3E). This finding indicates that some RAS hotspot mutations (e.g., KRAS G12D, G13D, A146T) may result in a stronger expansion potential compared to others (e.g., KRAS A146V, NRAS G12D, G13D), and further illustrates the complex heterogeneity of RAS hotspot mutations in their potential to drive clonal expansion. Figure 1. Schematic representation of the study design. Single-molecule molecular inversion probe-based sequencing approach and real-time quantitative polymerase chain reaction were used in order to detect alterations in known relapse-associated genes in a large cohort of diagnostic samples from patients with acute lymphocytic leukemia. Detected alterations were correlated with outcome and Sanger sequencing was performed on available relapse samples in order to confirm that exactly the same alteration was present in the major clone in relapse. smMIP: single-molecule molecular inversion probe; MLPA: multiplex ligation-dependent probe amplification assay; PCR: polymerase chain reaction; qPCR: real-time quantitative polymerase chain reaction.
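The clonality classification and the hotspot comparisons described above might be reproduced along the following lines. The 25% allele-frequency cut-off for "high-clonal" is taken from the text; the data frame layout is hypothetical, and a Mann-Whitney U test is used here for the independent gene groups where the paper reports a Wilcoxon rank test.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def classify_clonality(vaf, threshold=0.25):
    """Label a variant as high-clonal (VAF >= 25%) or subclonal, as in the text."""
    return "high-clonal" if vaf >= threshold else "subclonal"

def compare_hotspot_vafs(variants):
    """Rank-based comparison of allele frequencies between RAS pathway genes.

    variants columns (illustrative): gene, hotspot, vaf (on a 0-1 scale).
    Returns a per-gene clonality summary and the one-sided P value for
    KRAS allele frequencies exceeding those of NRAS/PTPN11.
    """
    variants = variants.assign(clonality=variants["vaf"].apply(classify_clonality))
    kras = variants.loc[variants["gene"] == "KRAS", "vaf"]
    other = variants.loc[variants["gene"].isin(["NRAS", "PTPN11"]), "vaf"]
    stat, p = mannwhitneyu(kras, other, alternative="greater")
    summary = (variants.groupby(["gene", "clonality"])
                       .size()
                       .unstack(fill_value=0))
    return summary, p
```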
Relevance of gene alterations to relapse development The high number of alterations in these relapse-associated genes at the time of diagnosis triggers the hypothesis that these could be used as prognostic biomarkers for relapse development, even when present at subclonal levels. To test this hypothesis, we first explored whether alterations in each of the eight genes were enriched in diagnostic samples from patients who subsequently relapsed compared to diagnostic samples from patients who did not relapse. In general, subclonal alterations were very common at primary diagnosis in patients who relapsed (60/82; 73%) as well as in patients who did not (165/203; 81%). For high-clonal alterations, we only observed a higher percentage of relapse development in cases with IKZF1 deletions compared to wild-type cases, whereas an association with relapse development was not observed for diagnostic samples with subclonal alterations in any of the genes, including IKZF1 (Figure 4). Furthermore, patients with high-clonal IKZF1 4-7 deletions were more often classified as having high minimal residual disease (MRD; >5×10^−4 at day 79 or 84 after start of the treatment) in both representative ALL9 and ALL10 cohorts (Fisher exact test, P<0.01 and P<0.05, respectively), compared to patients without an IKZF1 deletion (Online Supplementary Table S10). The CIR at 5 years was 41.7% (SE 0.04%) and 42.9% (SE 0.03%) in patients with high-clonal IKZF1 4-7 deletions treated according to the ALL9 and ALL10 protocols, respectively (Figure 5). The cause-specific hazard ratio (HRCS) in the two representative cohorts (n=376), estimated with a univariate Cox proportional hazards regression model, revealed an association of high-clonal IKZF1 exon 4-7 deletions with relapse (HR=7.22; 95% CI: 3.27-15.95; P<0.01). In the multivariate Cox model, in which age at diagnosis, gender and MRD status were included, the adjusted HRCS was 3.6 (95% CI: 1.38-9.55; P<0.01) (Table 1, Online Supplementary Table S11). These data are in line with those from earlier studies on these cohorts in which all IKZF1 deletions were included. 12,24 However, when we assessed the clinical relevance of subclonal alterations for relapse development in IKZF1, or any of the other genes, Cox regression analysis revealed no significant associations in the combined ALL9 and ALL10 cohorts compared to wild-type cases (Table 1, Online Supplementary Table S11), and the CIR was similar in the two groups (Figure 5). Furthermore, patients with subclonal IKZF1 4-7 deletions did not have significantly different levels of MRD compared to IKZF1 wild-type patients (Online Supplementary Table S10).
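The univariate and multivariate cause-specific Cox models reported above could be set up as in this sketch, again using lifelines as a stand-in for the R survival package; the covariate names are illustrative and deaths without relapse are handled by censoring at the time of death, as is usual for cause-specific hazards.

```python
import pandas as pd
from lifelines import CoxPHFitter

def cause_specific_cox(df):
    """Cause-specific Cox model for relapse, sketched from the Methods.

    df columns (illustrative): time_months, relapse (1 = relapse, 0 otherwise,
    including deaths without relapse, which are censored), age_dx, male,
    high_mrd, ikzf1_del47_clonal.
    """
    cph = CoxPHFitter()
    cph.fit(df[["time_months", "relapse", "age_dx", "male",
                "high_mrd", "ikzf1_del47_clonal"]],
            duration_col="time_months",
            event_col="relapse")
    # Hazard ratios with 95% confidence intervals, one row per covariate
    return cph.summary[["exp(coef)", "exp(coef) lower 95%",
                        "exp(coef) upper 95%", "p"]]
```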
Since previous studies have shown a lack of association of IKZF1 deletion with relapse in patients who carry a deletion in ERG, 34,35 we used MLPA to test whether there was an enrichment of ERG deletions in cases with subclonal IKZF1 exon 4-7 deletions compared to those with clonal IKZF1 exon 4-7 deletions, but these deletions were infrequent in both groups (Online Supplementary Table S12). Tracing of major and minor clone mutations at the time of relapse To obtain further insight into the clinical relevance of the identified alterations in relapse development, we investigated whether these were preserved in the cases that relapsed. For this analysis, we used all 146 cases that later developed a relapse, of which 82 carried alterations in a major or minor clone in one or more of the genes (Online Supplementary Tables S13 and S14). Overall, we found that for most genes at the time of diagnosis the frequency of subclonal alterations was similar or slightly higher compared to that of the alterations detected in a major clone (Online Supplementary Figure S3A, Online Supplementary Tables S13 and S14). We collected 73 relapse samples from patients who carried these major or minor clone alterations at the time of diagnosis (89%), which enabled us to trace 171 of the 185 sequence mutations, and 25 of the IKZF1 exon 4-7 deletions. We did not assess whether mutations detected at diagnosis were still preserved in minor clones at relapse, since these clones were unlikely to be true relapse drivers. Overall, 56% (22/39) of the tested major clone mutations were found to be preserved in the major clone at relapse, whereas the value for the subclonal mutations was 7% (Online Supplementary Table S7). For IKZF1 exon 4-7 deletions, the difference was even more striking. Here, the presence of deletions was studied in 19 available relapse samples using breakpoint-spanning PCR, followed by Sanger sequencing to confirm that the breakpoint sequences were identical at diagnosis and relapse (Figure 6, Online Supplementary Figure S3B, Online Supplementary Tables S6 and S7). All major clone IKZF1 exon 4-7 deletions were found to be preserved in the major clone at the time of relapse (n=12), which is in agreement with earlier findings and illustrates their relevance to relapse development in these treatment protocols. 12,24 In contrast, none of the subclonal exon 4-7 deletions in IKZF1 (n=13) was preserved in either the major or a minor clone at relapse. Collectively, the data from the present study indicate that these deletions, when present at initial diagnosis at a subclonal level, do not drive relapse in pediatric ALL. Discussion ALL is a heterogeneous disease in which specific genomic alterations show strong associations with relapse risk and outcome. In this study, we assessed the clinical relevance and prognostic value of subclonal alterations in eight genes frequently mutated in relapsed B-cell precursor ALL in a cohort of 503 diagnostic samples. Our data demonstrate that subclonal alterations in these genes are very common at the time of diagnosis, but that these mutations do not provide a basis for risk stratification in pediatric ALL. Figure 4. Prevalence of relapse associated genomic alterations at diagnosis. Bar plot showing the percentage of relapses in cases with high-clonal (blue) or subclonal (yellow) mutations in seven relapse-associated genes, and in cases that were wild-type (black) for these genes. Only cases with high-clonal IKZF1 4-7 deletions showed a significantly higher percentage of relapse development compared to wild-type cases (Fisher exact test, P<0.01) (Online Supplementary Table S9).
This finding is particularly relevant for IKZF1 alterations, which are currently used or implemented for treatment stratification in multiple upfront treatment protocols. 14,20 The selection of these genes was made based on enriched mutation frequencies in relapse found in previous studies. Of all alterations identified in this study, 75% were subclonal at diagnosis, suggesting that these relapse-associated gene mutations accumulate during progression of the leukemia before the initial diagnosis, thereby increasing the clonal complexity. Whereas seven of the genes selected in our study showed this high mutational burden at diagnosis, both in terms of numbers and level of clonality, we identified only a single, not previously reported, subclonal NT5C2 mutation in a non-relapsed case (follow-up time 9.5 years). NT5C2 encodes the cytosolic nucleotidase, which is responsible for inactivating cytotoxic thiopurine monophosphate nucleotides, and activating mutations in this gene are recurrently found in relapsed ALL, mainly T-cell ALL. 4,9,[36][37][38] One explanation for the low number of activating NT5C2 mutations at diagnosis is that these mutations decrease cell fitness, and only obtain their selective advantage during treatment with thiopurine. 36 If already present at the time of initial diagnosis, these mutations are usually detectable in only a very small subset of cells, far below the detection level of our smMIP analysis. 36 Hotspot RAS pathway mutations have been detected in nearly half of the cases, often of the hyperdiploid subtype, and their frequency and clonal burden varied between the different mutations. In our study, we used this variability to compare the potential of different hotspot mutations to drive clonal expansion under physiological conditions. Compared to diagnosis, we observed a less diverse spectrum of KRAS and NRAS hotspot mutations in relapse, with G12D, G12V and G13D together accounting for two-thirds of KRAS and NRAS hotspot mutations found in relapse-fated clones. Studies in other cancers have demonstrated that the prevalence of different RAS pathway mutations varies depending on the type of cancer and tissue of origin, with KRAS mutations G12D, G12V, G13D and G12C being among the most common ones. 39,40 Comparison of oncogenic capacities of different RAS hotspots has also been performed using in vitro and in vivo modeling studies, focusing primarily on KRAS. These studies identified KRAS mutations G12D, G12V and G13D as having higher proliferative and transforming potential compared to other common hotspots in various tumors of epithelial origin. 39,41,42 Our data indicate that in competition between multiple RAS hotspot mutations, some of these not only confer a proliferative advantage but can also more effectively sustain a treatment-induced selective sweep. 4,10 The presence of IKZF1 deletions has been shown to be associated with relapse and survival in multiple clinical ALL studies, 12,14-19 and these deletions have been described to play a role in resistance to tyrosine kinase inhibitors and glucocorticoids. [43][44][45][46] Therefore, with the advance of more sensitive detection techniques, the question of whether subclonal alterations are also associated with relapse is very relevant, both from biological and from clinical perspectives.
We here demonstrate that, in contrast to major clone IKZF1 exon 4-7 deletions, cases that carry this deletion in only a subset of the cells do not show an association with relapse. Moreover, whereas all major clone exon 4-7 deletions were preserved in cases that relapsed, none of the relapses from cases with subclonal exon 4-7 deletions at diagnosis carried this deletion. Importantly, the majority of subclonal deletions had allele frequencies below 10% (Online Supplementary Table S7). Therefore, since a threshold distinguishing subclonal from major clone deletions is difficult to define, deletions with clonality close to our 25% threshold should be evaluated with caution. Nevertheless, the difference between major and minor clone IKZF1 exon 4-7 deletions is striking, and the reason behind this remains unclear. Possibly, the functional impact of full-clonal IKZF1 deletions, which arise early during leukemia development, is different from that of deletions that occur in later stages, when the leukemia has already expanded. Other deletions in IKZF1 show much less clustering in their breakpoints and, therefore, screening for these subclonal deletions in diagnostic samples is much less efficient. We did not, therefore, directly assess the stability and potential prognostic importance of whole-gene and rare intragenic IKZF1 deletions. However, a previous study showed that other IKZF1 deletion subtypes have prognostic relevance similar to that of exon 4-7 deletions, 31 suggesting that subclonal alterations of these other IKZF1 deletion types may show the same lack of association. In summary, we show that alterations in the relapse-associated genes IKZF1, CREBBP, KRAS, NRAS, PTPN11, TP53, and WHSC1 in pediatric ALL are frequently present at initial diagnosis, often at a subclonal level. At relapse, however, most of these subclonal mutations are lost, suggesting that their selective advantage over wild-type clones during treatment is limited. This finding has direct implications for clinical practice, particularly in the case of IKZF1, where deletion status is used for routine risk stratification. We conclude that, at least for the investigated set of genes, there is no basis for the use of subclonal alterations at initial diagnosis as a prognostic marker.

Disclosures

No conflicts of interest to disclose.
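The clonality calls discussed above rest on quantitative read-outs of variant allele frequencies and deletion breakpoints; the exact cut-offs for sequence mutations are not restated in this extract, so the following Python sketch only illustrates the general classification logic and is not the authors' pipeline. It assumes heterozygous alterations in diploid regions (so clonality is roughly 2 × VAF, corrected for blast percentage) and reuses the 25% threshold mentioned above for IKZF1 deletions; both assumptions are placeholders for illustration.

```python
# Illustrative sketch (not the authors' pipeline): classify alterations as
# "major clone" vs. "subclonal" from variant allele frequencies (VAF).
# Assumptions: heterozygous alterations in a diploid region, so the fraction
# of leukemic cells carrying the alteration (clonality) is roughly 2 * VAF,
# and a 25% clonality cut-off (the threshold the text quotes for deletions).
from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    vaf: float                   # variant allele frequency, 0.0-1.0
    tumor_fraction: float = 1.0  # sample purity / blast percentage, 0.0-1.0

def clonality(v: Variant) -> float:
    """Estimated fraction of leukemic cells carrying the alteration."""
    return min(1.0, 2.0 * v.vaf / v.tumor_fraction)

def classify(v: Variant, threshold: float = 0.25) -> str:
    """Label an alteration as major-clone or subclonal at a given clonality cut-off."""
    return "major clone" if clonality(v) >= threshold else "subclonal"

if __name__ == "__main__":
    calls = [Variant("KRAS", vaf=0.42), Variant("IKZF1", vaf=0.04), Variant("TP53", vaf=0.11)]
    for v in calls:
        print(f"{v.gene}: VAF={v.vaf:.2f}, clonality~{clonality(v):.2f} -> {classify(v)}")
```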
Improved Results on Reachable Set Bounding for Linear Delayed Systems with Polytopic Uncertainties

This paper focuses on bounds of reachable sets for delayed linear systems with polytopic uncertainties. Based on Lyapunov-Krasovskii functional theory, the delay decomposition technique, and the reciprocally convex method, some new results expressed in the form of linear matrix inequalities are derived. It should be noted that triple integral functionals are introduced for the first time for reachable set analysis. Consequently, a tighter bound of the reachable set is obtained. Four numerical examples are given to illustrate the effectiveness and advantage of the proposed results compared with existing criteria.

Introduction

In the real world, many phenomena can be described by time delay systems, such as communication networks, biological systems, and physical processes. It is well known that the presence of time delay may lead to complicated behavior in a dynamic system, including instability, oscillations, and degraded robustness [1][2][3][4][5]. In addition to stability and robustness of the state, the input-to-state properties of dynamical systems are also of concern. For a dynamic system, the reachable set is the set of all states in the Euclidean space that are reachable from the origin, in finite time, by inputs whose peak value is bounded by some given positive scalar [6]. The problem was first considered in the late 1960s and has a wide range of applications, such as the peak-to-peak gain minimization problem and control systems with actuator saturation. Thus, the problem of reachable set bounding for time delay systems has received considerable attention in recent years; see, for instance, [6][7][8][9][10][11][12][13][14][15][16][17][18] and the references therein.

There are already some relevant results on the problem of reachable set bounding for linear systems. An LMI condition for an ellipsoid bounding the reachable set of linear systems without time delay was given by Boyd in [13]. In [6], Fridman and Shaked first derived LMI criteria for an ellipsoid bounding the reachable set of uncertain systems with time-varying delays and bounded peak inputs, based on the Razumikhin theory. In [11], Kim proposed an improved condition by using modified Lyapunov-Razumikhin functionals. Nam and Pathirana obtained a smaller reachable set bound by the delay decomposition technique [10]. Maximal Lyapunov functionals, combined with the Razumikhin method, were employed to give a nonellipsoidal description of the reachable set in [15]. More recently, the authors of [17] derived ellipsoidal bounds of reachable sets of uncertain linear discrete-time systems based on the idea of minimizing the projection distances of the ellipsoids on each axis with different exponential convergence rates. Based on properties of Metzler matrices, a new approach that does not involve the Lyapunov-Krasovskii functional method was used to obtain state bounds for linear time-delayed systems [18]. The delays considered in [6-9, 11, 12, 15, 16, 18] range from 0 to an upper bound. However, delays may vary in an interval whose lower bound is not necessarily 0, as in [10]. On the other hand, the authors considered nondifferentiable time-varying delays in [10, 18], whereas differentiable time-varying delays were considered in [6-9, 11, 12, 15, 16]. Paper [11] assumed the derivative of the delay to be less than 1. As is well known, a large value of the delay derivative may yield a larger reachable set bound. These constraints on the delays are strong and may be
relaxed. In this paper, we study the reachable set bounding for linear delayed systems with polytopic uncertainties.Constraints for delay are relaxed.Time delays vary in an interval for which lower bound of delays is not necessarily 0, and value of derivative of delay is not necessarily less than 1.Inspired by the Lyapunov functionals in [2], we construct Lyapunov-Krasovskii functionals, combining with the delay decomposition technique and reciprocally convex method to derive a more accurate description of the reachable set bound.Different from the Lyapunov functionals in [2], the integral terms of Lyapunov functionals in this paper contain (−) .Moreover, to the best of our knowledge, it is first time to introduce triple integral functionals for reachable set analysis.We will show that the reachable set bound is tighter than that of [6,[8][9][10][11][12]14].Numerical examples illustrate the effectiveness and improvement of the obtained results. Notations.The notations are used in this paper except where otherwise specified. is the -dimension Euclidean space and × denotes the set of × -dimension real matrices; real matrix > 0(≥ 0) means that is a symmetric positive definite (positive semidefinite) matrix.Superscript "" denotes transposition of a vector or a matrix; ⋆ represents the elements below the main diagonal of a symmetric block matrix; denotes an identity matrix; "-" in tables represents no feasible solution for linear matrix inequality. Lemma 1 (see [19]).The following relation is known as the Leibniz rule: Lemma 2 (see [4]).For any constant matrix = > 0 and ℎ 2 > ℎ 1 ≥ 0 such that the following integrations are well defined, then Lemma 3 (see [5]).For any constant matrix > 0, scalars ℎ 2 > ℎ 1 ≥ 0 such that the following integrations are well defined, then Proof.By using Lemma 2, one can obtain According to Schur complement, the following inequality holds: Integrating both sides of the above inequality from −ℎ 2 to −ℎ 1 , we have By using Schur complement again, inequality ( 8) is equivalent to the inequality in Lemma 3.This completes the proof. Main Results In order to study the reachable set bounding of uncertain system (1), firstly, we consider Δ = 0, Δ = 0, Δ = 0 in system (1); that is, The reachable set bounding of system (14) with timevarying delay () for case (a) and case (b) is stated in Theorems 7 and 8, respectively. Using the spectral properties of symmetric positive definite matrix , the following inequality holds: This further implies that ‖()‖ ≤ = 1/√ min () due to (27).This completes the proof. Remark 12.In this paper, delay decomposition technique and reciprocally convex method are used to construct Lyapunov functionals, and triple integral terms are introduced in Lyapunov functionals for the first time to investigate bounds of reachable set for systems with uncertainties, which may lead to tighter bounding for reachable set. Remark 14.The reachable set of system (1) can be minimized by solving the following optimization problem for a scalar > 0: Examples In this section, four numerical examples will be presented to show the validity of the main results derived in this paper. 
Example 1.Consider the following uncertain time-delayed system with parameters: By solving optimization problems (42), computed 's for the case ≤ () ≤ with different values of are listed in Table 1.Computed 's for different values of with = 0.7 and = 0.75 for the case ≤ () ≤ , τ () ≤ are obtained in Tables 2 and 3, respectively.It is clear to see that the proposed method in this paper yields tighter bounds than literatures [6,8,11]. Computed 's for the case ≤ () ≤ , τ () ≤ with = 0, = 0.1 are listed in Table 4 to compare with the ones in [6,8,11].It should be noted that there is no feasible solution employing the approaches in [6,11], and the derived method in this paper yields much tighter bounding than [8].Hence, the proposed method leads to a wider application range. From Theorem 9, computed radiuses 's for case (1) and case (2) are listed in Table 5.These results are compared to the ones in [6,8,10,11].It is clear to see that our results decrease radiuses of the ellipsoid. By employing the method of Theorem 8 in this paper, 's for different values of () with = 0 are listed in Table 6.It is easy to see that bounds obtained in this paper are better than the ones of literatures [8][9][10][11][12]14]. Example 4 . Consider the following uncertain time-delayed system with parameters: 2 , . . ., : → have positive values in an open subset of .Then, the reciprocally convex combination of over satisfies [3]is clear to see that radius is smallest if = min Hence, we can use MATLAB's Toolbox to solve the matrix inequalities in Theorems 7-10.Remark 17.The approach is likely to help further work in this area.It may be used to improve estimate partial state bounding for neural networks with time-varying delays, such as[3]. Table 6 : Computed 's of Example 4 for different values of with = 0. contain triple integral terms, which lead to tighter bounding than previous literatures.Numerical examples have been given to illustrate the effectiveness and improvement of the proposed methods.These results are likely to help further work in this area.One future work is to extend the results in this technical note to linear neutral systems and linear mixed delay systems.
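The delay-dependent LMI conditions of Theorems 7-10 are not reproduced in this extract, so the sketch below instead illustrates the basic delay-free bounding-ellipsoid LMI of Boyd et al. (reference [13] above), which the delay-dependent criteria refine: for dx/dt = Ax(t) + Bw(t) with w(t)'w(t) <= 1 and x(0) = 0, if P > 0 and a scalar alpha > 0 satisfy [A'P + PA + alpha*P, PB; B'P, -alpha*I] <= 0, then the ellipsoid {x : x'Px <= 1} contains the reachable set. For a fixed alpha this is an LMI, so, in the spirit of the optimization problem of Remark 14, one can grid over alpha and maximize log det(P) to shrink the ellipsoid. The system matrices in the code are placeholders, not the paper's examples.

```python
# Minimal sketch (not the paper's delay-dependent criteria): Boyd-type
# bounding-ellipsoid LMI for the delay-free system dx/dt = A x + B w,
# with w'w <= 1 and x(0) = 0. For fixed alpha the condition is an LMI in P,
# so we line-search over alpha and maximize log det(P) to shrink the ellipsoid.
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 0.0], [0.0, -0.9]])   # placeholder system matrices
B = np.array([[-0.5], [1.0]])
n, m = A.shape[0], B.shape[1]

best = None
for alpha in np.linspace(0.1, 3.0, 30):    # coarse line search over alpha
    P = cp.Variable((n, n), symmetric=True)
    M = cp.bmat([[A.T @ P + P @ A + alpha * P, P @ B],
                 [B.T @ P, -alpha * np.eye(m)]])
    prob = cp.Problem(cp.Maximize(cp.log_det(P)),
                      [P >> 1e-6 * np.eye(n), M << 0])
    prob.solve(solver=cp.SCS)
    if P.value is not None and (best is None or prob.value > best[0]):
        best = (prob.value, alpha, P.value)

print("alpha =", best[1])
print("P =\n", best[2])   # reachable set bounded by {x : x' P x <= 1}
```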
DNA barcode trnH-psbA is a promising candidate for efficient identification of forage legumes and grasses Objective Grasslands are widespread ecosystems that fulfil many functions. Plant species richness (PSR) is known to have beneficial effects on such functions and monitoring PSR is crucial for tracking the effects of land use and agricultural management on these ecosystems. Unfortunately, traditional morphology-based methods are labor-intensive and cannot be adapted for high-throughput assessments. DNA barcoding could aid increasing the throughput of PSR assessments in grasslands. In this proof-of-concept work, we aimed at determining which of three plant DNA barcodes (rbcLa, matK and trnH-psbA) best discriminates 16 key grass and legume species common in temperate sub-alpine grasslands. Results Barcode trnH-psbA had a 100% correct assignment rate (CAR) in the five analyzed legumes, followed by rbcLa (93.3%) and matK (55.6%). Barcode trnH-psbA had a 100% CAR in the grasses Cynosurus cristatus, Dactylis glomerata and Trisetum flavescens. However, the closely related Festuca, Lolium and Poa species were not always correctly identified, which led to an overall CAR in grasses of 66.7%, 50.0% and 46.4% for trnH-psbA, matK and rbcLa, respectively. Barcode trnH-psbA is thus the most promising candidate for PSR assessments in permanent grasslands and could greatly support plant biodiversity monitoring on a larger scale. Introduction Grasslands are some of the most widespread ecosystems on Earth, covering two-fifth of its land surface [1]. They provide roughage for ruminant livestock production and many other environmental services related to carbon sequestration, water flow regulation and soil stabilization [2,3]. Plant species richness (PSR) is a component of biodiversity with major effects on the ecosystem functioning of grasslands. In experimental grassland plant communities, high levels of PSR stabilize yields and confer tolerance against environmental stressors [4]. Similar effects have been observed in semi-natural grasslands, which are composed of a limited number of species and are an important component of sustainable livestock production [5]. Assessing PSR is thus crucial for tracking its changes and effects on ecosystem services. However, such assessments have traditionally relied on morphology-based surveys that are labor-intensive and require trained taxonomists, limiting their use for surveying PSR over large scales and long time periods [3]. Furthermore, grasses and legumes (the two plant families of major economic relevance in temperate grasslands) can be taxonomically assessed with highest precision only when certain distinctive morphological characters are on display (e.g., flowering bodies and leaves). Still, some grass and legume species are difficult to distinguish from closely related species. A standardized, precise, high-throughput solution for PSR surveys in grasslands is therefore desirable for large-scale assessments of changes in PSR. DNA barcoding is a methodology that has been successfully applied for standardizing and increasing the throughput of PSR surveys in ecological studies [6,7]. DNA barcodes are organellar or nuclear loci that show a high degree of species-level conservation [8,9]. By comparing newly sequenced DNA barcodes to reference databases, it is possible to assign an unknown biological sample to its correct taxonomy. 
An international effort is currently in place to maintain a well-curated, public reference database of DNA barcodes (The Barcode Of Life Datasystems database, BOLD [10]). In animals, the DNA barcode of choice is the mitochondrial COI gene, which can reproducibly differentiate most of the major animal phyla [8]. In plants, in contrast, there is no single DNA barcode with comparable success [11]. Most plant DNA barcodes are located in the chloroplast genome, either within coding sequences (such as rbcLa and matK) or in intergenic regions (such as trnH-psbA) [11,12], although some nuclear loci have also been used as DNA barcodes, e.g., the internal transcribed spacer of the ribosomal DNA (ITS) [13]. More than one barcode per plant individual are typically sequenced and used for taxonomical assignments [11,12]. However, sequencing more than one DNA barcode per plant may not be technically feasible in higher throughput settings, particularly when analyzing mixed-species samples. The aim of the present study was to determine the best DNA barcode sequences for forage species by screening the BOLD database for promising candidates and sequencing three DNA barcodes (rbcLa, matK and trnH-psbA) from multiple cultivars of 16 forage plant species that are common in sub-alpine grasslands. Plant material and DNA extraction Seeds of 2-3 cultivars of 16 forage species (Alopecurus pratensis L., Arrhenaterum elatius L., Cynosurus cristatus L., Dactylis glomerata L., Festuca pratensis Huds., F. rubra L., Lolium perenne L., L. multiflorum Lam., Lotus corniculatus L., Medicago sativa L., Phleum pratense L., Poa pratensis L., Trifolium pratense L., T. repens L. and Trisetum flavescens L.), kindly provided by Agroscope, Zurich, Switzerland were used for the study (Table 1). Seeds were germinated and transferred into pot trays (77 wells, 50 cm × 32 cm, with compost as substrate). The species selected are predominant components of sub-alpine grasslands and hold great potential for multifunctional, species-rich agriculture [14,15]. Plants were grown for 3 weeks after which DNA was extracted from three plants per species. For grasses, three leaf fragments of ~ 1 cm and for legumes three young leaflets were harvested. The plant material was freeze-dried for 48 h and pulverized in a QIAGEN TissueLyser II (QIA-GEN, Hilden, Germany). DNA was extracted using the NucleoSpin ® II kit (Macherey-Nagel, Düren, Germany) and its integrity visually inspected by agarose gel electrophoresis (1% w/v). DNA purity and concentration were determined with a NanoDrop ™ spectrophotometer (ThermoFisher Scientific, Waltham, MA, USA). DNA barcode amplification and sequencing The BOLD database was screened for DNA barcode sequences of the selected species and close relatives; barcodes rbcLa, matK and trnH-psbA were selected as candidates because they reported the most available sequences. Those DNA barcodes are mainly located in the chloroplast genome and are not known to have paralogs that can interfere with taxonomic assignments, as is the case for some nuclear loci such as ITS [13]. Primer sequences for the three barcodes were obtained from BOLD [10] and were optimized for amplification in the target plant families (Additional file 1: Table S1). Amplicons were purified in a MultiScreen PCR96 filter plate (Merck, Darmstadt, Germany). 
Sequencing reactions were prepared with 1× BigDye™ Terminator 3.1 Reaction Mix (ThermoFisher Scientific, Waltham, MA, USA), 1× BigDye™ 3.1 Sequencing Buffer, forward or reverse primer at 0.16 µM and 800 ng of purified amplicon to a final volume of 5 µL. The same primers used for PCR were used for sequencing. Capillary electrophoresis was performed on an ABI 3130 instrument (ThermoFisher Scientific, Waltham, MA, USA). The resulting traces were quality filtered and merged using GAP4 [16] with the default settings. All traces and sequences were uploaded to BOLD v4 (project code: SWFRG; http://www.boldsystems.org/index.php/Public_SearchTerms).

Taxonomical assignments

Sequences of matK, rbcLa and trnH-psbA were downloaded from BOLD v4 on May 23, 2019 [10]. In total, 6232 rbcLa, 11,971 matK and 1236 trnH-psbA sequences were present in the downloaded fasta files, which also include the plants from the BOLD project SWFRG (Additional file 1: Table S2). The taxonomical identifiers of the BOLD fasta files were reformatted to remove spaces and rearrange their informative fields in a consistent manner (fasta_name_reformat.py script from https://github.com/mloera/forage-barcoding). Each barcode-specific fasta file was then used to build a blast database, and the SWFRG sequences were queried against their corresponding database with blastn using the flag outfmt = 6 (i.e., tabular format). The resulting blast output tables were parsed with the blastn_matcher.R script from the above-mentioned GitHub repository. The script removes self-hits and corrects some misspellings in the taxonomy of queries and hits. The script then compares the taxonomy of the queries and hits at the species and genus levels. A "match" was called when the taxonomy of a query sequence was equal to the taxonomy of the highest-scoring hit or hits (Additional file 1: Table S3). A "taxonomical assignment rate" for each barcode was then calculated as the ratio between the number of its correct taxonomical assignments and the total number of query sequences.

Results and discussion

PCR and sequencing results

The primer sequences of trnH-psbA and matK were adapted to allow for amplification within the target species, while the primer sequences of rbcLa did not need any modification (Additional file 1: Table S1). From the 48 processed specimens, 130 sequences were obtained (46 for matK, 43 for rbcLa and 41 for trnH-psbA) after repeating and optimizing failed amplifications. The size of the sequences ranged from 470 to 588 bp for rbcLa, 185 to 888 bp for matK and 268 to 614 bp for trnH-psbA (Table 1).

The low CARs for grass DNA barcodes could be due to various factors. Some grass species, such as Poa spp., are notoriously hard to discriminate morphologically and their phylogeny is subject to controversy [17,18]. This could have resulted in misidentified reference sequences. Another factor is the high genetic similarity between some grass taxa. For example, the genetic similarity of some species of the Festuca-Lolium complex is reported to be > 90%, as calculated from transcriptomic data of orthologous genes [19]. This may result in a higher proportion of incorrect taxonomic assignments for such grass species [20]. Barcode trnH-psbA is a good candidate for large-scale DNA barcoding of forage legumes and some grasses, such as C. cristatus, D. glomerata and T. flavescens (Table 3). However, further work is needed to produce reference sequences for more forage species and cultivars.
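As a rough illustration of the matching step described above, the sketch below re-implements the core of that logic in Python (the published workflow uses the blastn_matcher.R script from the repository, so this is not that script). The identifier format, with the species name embedded at the end of the FASTA header, is an assumption made for the example; real BOLD headers may differ and the parser would need to be adapted.

```python
# Illustrative Python re-implementation of the taxonomic-assignment step:
# parse a blastn -outfmt 6 table, drop self-hits, and call a "match" when all
# highest-scoring hits belong to the query's own species. Header format such as
# "ACC123|Lolium_perenne" is an assumption for this example.
import csv
from collections import defaultdict

def species_of(seq_id):
    """Extract 'Genus species' from an identifier like 'ACC123|Lolium_perenne'."""
    return seq_id.rsplit("|", 1)[-1].replace("_", " ")

def correct_assignment_rate(blast_tsv, n_queries=None):
    # outfmt 6 columns: qseqid sseqid pident length mismatch gapopen
    #                   qstart qend sstart send evalue bitscore
    hits = defaultdict(list)               # query -> list of (bitscore, subject)
    with open(blast_tsv) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            qseqid, sseqid, bitscore = row[0], row[1], float(row[11])
            if qseqid == sseqid:           # remove self-hits
                continue
            hits[qseqid].append((bitscore, sseqid))

    correct = 0
    for qseqid, hit_list in hits.items():
        top_score = max(score for score, _ in hit_list)
        top_species = {species_of(s) for score, s in hit_list if score == top_score}
        # "match": the highest-scoring hit(s) all carry the query's own species label
        if top_species == {species_of(qseqid)}:
            correct += 1

    total = n_queries if n_queries is not None else len(hits)
    return correct / total if total else 0.0

# Example: print(correct_assignment_rate("trnH-psbA_vs_bold.blastn.tsv"))
```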
Overall, our results provide the basic tools to implement DNA barcoding in forage species (i.e., family-specific primer pairs and a standard bioinformatic workflow for taxonomic assignments) and can help in choosing an appropriate DNA barcode for high-throughput applications. Such high-throughput applications could greatly enhance the biodiversitymonitoring protocols that are used to study the ecology of grasslands, its dynamics and its interplay with agriculture. Limitations This is exploratory work focused on the most common forage plant species from sub-alpine temperate grasslands; further work is needed to address other forage species from different kinds of grasslands. As a proof of concept, three specimens per species were analyzed.
[1-(Anthracen-9-ylmethyl)-1,4,7,10-tetraazacyclododecane]chloridozinc(II) nitrate

The ZnII atom in the complex cation of the title salt has a square-pyramidal coordination environment defined by four nitrogen atoms from cyclen (1,4,7,10-tetraazacyclododecane) in the basal plane and one chlorido ligand in the apical position.

In the title salt, [ZnCl(C23H30N4)]NO3, the central ZnII atom of the complex cation is coordinated in a square-pyramidal arrangement by four nitrogen atoms from cyclen (1,4,7,10-tetraazacyclododecane) in the basal plane and one chlorido ligand in the apical position. The anthracene group attached to cyclen contributes to the crystal packing through intermolecular T-shaped π–π interactions. Additionally, the nitrate anion participates in intermolecular N—H⋯O hydrogen bonds with cyclen.

The crystal structure of the title compound comprises a [Zn(C23H30N4)Cl]+ complex cation and a nitrate anion (Fig. 1). The coordination environment around the ZnII atom is slightly distorted square-pyramidal, with the coordination geometry index (Addison et al., 1984) τ = (β − α)/60 = 0.08, where α [132.23 (9)°] and β [136.98 (8)°] are the second-largest and largest angles around the central ZnII atom, respectively. A τ value of 0 corresponds to an ideal square pyramid, while a value of 1 corresponds to an ideal trigonal bipyramid. The four nitrogen atoms N1, N2, N3, and N4 of cyclen form the basal plane, with the chlorido ligand occupying the apical position. The mean Zn1—N bond length of 2.16 Å (Fig. 2) is comparable to that (2.13 Å) observed in the crystal structure of the salt [Zn(C23H30N4)](ClO4)2 (Ichimaru et al., 2021). The ZnII atom is displaced by 0.8306 (12) Å above the mean basal plane toward the apical chlorido ligand. The Zn—Cl bond length of 2.2464 (7) Å is comparable to that found in other ZnII–polyamine complexes with chlorido ligands, such as chlorido(1,4,7,11-tetraazacyclotetradecane-N,N′,N″,N‴)zinc(II) perchlorate [2.2734 (8) Å; Lu et al., 1997] or bis[μ-chlorido-(1,4,8,11-tetraazacyclotetradecane)zinc(II)] tetrachloridozincate(II) hemihydrate [2.288 (5) Å; Alcock et al., 1992]. The presence of Cl− as a ligand can be deduced from the synthesis conditions (see Synthesis and crystallization). The ligand was freed from its bromide salt using an anion-exchange resin. In this process, hydrochloric acid was employed to regenerate the resin to its chloride form, which is the source of the Cl− bound to the ZnII atom.

The anthracene group exhibits a slight deviation from planarity, with fold angles of 4.69 (10)° between the A (C2–C7) and B (C1, C2, C7, C8, C9, C14) rings and 2.78 (11)° between the B and C (C9–C14) rings. The torsion angle defined by Zn1—N1—C15—C1 is 170.33 (18)°, positioning the anthracene group away from the macrocyclic ring, thereby preventing repulsive interactions with the Cl atom. In the crystal, nitrate O1 forms intermolecular hydrogen bonds with H2 of the ZnII complex and H3 of a neighboring molecule. The hydrogen-bond distances O1⋯H2 and O1i⋯H3 are 1.985 and 2.16 Å (Table 1). These interactions contribute to the formation of a spiral structure extending parallel to the b-axis direction of the crystal. Additionally, intermolecular T-shaped π–π interactions (Jin et al., 2022) occur between the anthracene ring and a neighboring anthracene ring [symmetry code (ii): −x, 1/2 + y, 1/2 − z] (Fig. 3). The distance between H8 and the centroid (Cg) of the middle ring of the neighboring anthracene ring is 2.96 Å, and the angle C8—H8⋯Cg is 152°.

The title complex was prepared by adding a MeOH solution (1 ml) of Zn(NO3)2·6H2O (235 mg, 0.8 mmol) to a MeOH solution (5 ml) of N-Ant-cyclen (287 mg, 0.8 mmol). The mixture was heated, with stirring, at 323 K for 2 h and then concentrated. After the resulting residue was dissolved in a MeOH–water mixture (v/v = 1/1; 2 ml each) and filtered, the filtrate was allowed to stand for 10 days at room temperature to obtain the title salt (286 mg, 84%).

Special details

Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.

Figure 1. The molecular structures of the complex cation and the anion in the title salt with displacement ellipsoids drawn at the 50% probability level. C-bound H atoms are omitted for clarity; the hydrogen bond is represented as a red dotted line.

Figure 2. The coordination polyhedron around Zn1, with displacement ellipsoids drawn at the 50% probability level. Bond angles are depicted in red, whereas bond lengths are shown in black.

Figure 3. A schematic drawing of the T-shaped π–π interactions, with displacement ellipsoids drawn at the 50% probability level. Methylene H atoms of cyclen rings and nitrate ions are omitted for clarity; T-shaped π–π interactions are depicted as green dotted lines.

Table 2. Experimental details. Computer programs: CrysAlis PRO
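The τ geometry index quoted above is straightforward to recompute from the two largest L—Zn—L angles; the short sketch below reproduces the reported value of 0.08 and can be reused for other five-coordinate centers.

```python
# Sketch: Addison's tau-5 geometry index for five-coordinate complexes,
# tau5 = (beta - alpha) / 60, with beta >= alpha the two largest L-M-L angles.
# tau5 = 0 for an ideal square pyramid, 1 for an ideal trigonal bipyramid.
def tau5(angles_deg):
    """Compute tau-5 from any iterable of L-M-L bond angles (degrees)."""
    a = sorted(angles_deg, reverse=True)
    beta, alpha = a[0], a[1]
    return (beta - alpha) / 60.0

# The two largest angles reported for Zn1 in the text:
print(round(tau5([136.98, 132.23]), 2))   # -> 0.08, i.e. close to square-pyramidal
```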
THE PRODUCTION OF SPACE: A BALKAN PERSPECTIVE The article describes the approach to and condenses some of the main arguments presented in the author’s book Beyond Balkansim: The Scholarly Politics of Region Making. It charts the main phases in the scholarly conceptualization of the Balkans and its characteristics and, against this background, tackles the question: What can we learn from the Balkan case about the actual production of regions? 1. Introduction s a paradigmatic historical region, the Balkans has of late been exerting a pull for not only its students, but increasingly for humanities scholars and social scientists trying to make sense of their spatial units of analysis. The response to this seduction can be, and has been, rightly read as an undisguised riposte to the 'balkanist' take on the notion of the Balkans, in that it juxtaposes 'western Balkanism' with understandings of the Balkans that have emerged from within the region, and especially from academically embedded discursive practices and political usages. 1 The insistence on the importance of scientific knowledge in the construction of the Balkans springs not simply from its omission in discussions of the western balkanist discourse. We can, of course, easily recognize thatin comparison to media, travelogues, and fiction, which are the main production sites of public 'balkanism'-scholarship plays a lesser role as a channel of disseminating images. We might also concede that scholarly discourse obeys rules that restrict overt political or ideological implication. And yet, it is that discourse which performs the critical function of providing the resources for legitimization and 'empowering' political discourses. After all, knowledge as power is taken to be a natural consequence of the inability of the Orient, or the Balkans, to create its own self-representation. Ideally, then, one should consider in parallel and interaction both extra-regional and A intra-regional expert conceptualizations of the Balkans. This I had tried to do in a recently published book. 2 For the sake of this paper, I shall only briefly sketch some aspects of the external expert or academic engagement with the region and, in the course of the subsequent exposé, detect certain connections or disjunctions with the local conceptualizations. Incipient external drives towards regionalization Scholarly interest in the Balkans as a distinct geographical and cultural area, and even its perception and naming as a single region, does not predate the early nineteenth century. The geographical notions of the 'Balkan peninsula', 'the Balkans', and 'Southeastern Europe' were late coinages of non-local origin, whereas the dominant appellation until almost the end of the nineteenth century was a political one-'Turkey in Europe' or 'European Turkey'-associated not so much with a fixed territory as with the geopolitical implications of the so-called Eastern question. The institutionalization of the study of the Balkans came about in the late nineteenth and early twentieth centuries along with the ultimate disintegration of the Ottoman Empire. Up until World War II, the tendency to treat the Balkan or Southeastern European states en bloc had, as a rule, political and economic incentives. Discrete foreign academes, however, participated with varying weight and proficiency in the regional conceptualizations. 
On the whole, while proximity and imperial expansion ensured the almost uninterrupted German political and economic involvement in the area, German-language scholarship-Austrian and Germancontributed most to the extensive and painstaking study of the area and the stabilizing of the Balkans or Southeastern Europe as a historical region. For the better part of the pre-World War II period, the British interest in the area was aligned with the framework of the 'Near East', which put the whole Balkan problématique in a specific light. The French academic approach to les Balkans was shaped mainly by fears of the 'pan-German' economic and political thrust in the area. This explains the French preoccupation with the South Slavs, who were portrayed as the moral, political, and racial opposite-and the strategic counterforce-to the Germans. For Russia, on the other hand, studying the Balkan religious and ethnic brethren-edinovertsy i edinoplemenniki-meant not only extending Russian influence in the region but also bolstering Russia's historical consciousness, Slavic identity, and imperial status. That being said, the relationship between imperialism (or strategic interest) and academic engagement was not necessarily a straightforward one. While the larger German, British, French, and (later) American geopolitical stakes determined to a great extent the scale of academic investment, Italian imperialist pursuits in the region in the interwar period failed to engender academic interest, while Russian imperial cartography operated with various configurations: the Slavic world, the Balkans, or a satellite Eastern Europe. Regionalizing the Balkans from within Within the region, we can analytically distinguish four periods of academic regionalization: 1. It is quite significant that the first regional self-representations emerged as parallel identity projects amidst the dynamic phase of European nation-state building. The period of the ultimate dismantling of 'Turkey in Europe' at the beginning of the twentieth century, marked as it was by radicalization of national discourses, also saw the inception of an encompassing Balkan and Southeastern European entity. Nation-state building and the construction of an overarching regional unity at that time went hand in hand and were compatible. Different disciplines participated with varying weight in creating Balkan regionality and in defining its attributes. In the origin of the Balkans as a unitary notion, the then vanguard comparative linguistics played a key role. Today, the 'Balkan linguistic area' or 'linguistic league' is considered as 'the first area of contact-induced language change to be identified as such' and the model prototype for language interaction and convergence. Linguists were the first to use the term 'balkanism' to indicate, contrary to the present-day resignification of the term, the opposite of fragmentation: a lexical and, more importantly, grammatical feature shared among the unrelated or only distantly related languages of the Balkans-Slavic, Romance, Albanian, Greek, and Turkish. Such morphological similarities among the Balkan languages, which were first observed by the Habsburg philologists Jernej Kopitar and Franz Miklosich, came to be increasingly interpreted as testimony to 'centuries of multilingualism and interethnic contact at the most intimate levels'. 
3 'The commonality of grammatical features and developments among Balkan languages', Raymond Detrez argues, 'can be taken as a reasonable indication of the presence of social and cultural modes of convergence… An intensive process of mutual exchange of material and spiritual goods, characterized by 'contamination', 'hybridization' or-to use a less connotative term-'osmosis' must have taken place along channels paralleling those of linguistic contact'. 4 At the time, the linguistic approach to the Balkans stirred other academic fields to turn their attention to phenomena like contact, interaction, and convergence. According to Nicolae Iorga, Romania's foremost historian before World War II, regional history revealed a number of similarities strikingly reminiscent of the Balkan linguistic union. Iorga postulated the existence of a 'fundamental unity resting on archaic traditions', a particular culture and heritage common to the whole European southeast. He claimed that this unity was drawing upon the great Thraco-Illyrian-Roman tradition, had been epitomized by Byzantium and later the Ottoman Empire, and was enshrined in a wide range of common institutions. 5 On their part, literary scholars like the Bulgarians Ivan Shishmanov and Boyan Penev or the Romanian Ioan Bogdan charted massive ethnographic, folkloric, and literary borrowings that undermined the romantic notion of national uniqueness and shaped a space of cultural osmosis ensuing from long-standing coexistence and interaction. The commonalities on the level of grammar, syntax, belief, and popular lore, in turn, seemed to imply an underlying primeval unity in the way of thinking, mentality, and the unconscious. This trend was contemporaneous with the upsurge of psychological discourses and disciplines of comparative folk psychology and national characterology across Europe at the beginning of the twentieth century. One of the outcomes of such studies was the notion of a 'Balkan mentality'. Its diffusion, however, was not due-as is commonly claimed-to dubious academic fashions external to the region that tended to portray the Balkan cultures as a sanctuary of patriarchal practices and lifestyles long extinct elsewhere in Europe. In fact, it was the Serbian anthropogeographer Jovan Cvijić, who for the first time implemented this 'scientific/psychological' approach to the Balkans-by the way, to be later taken aboard by Fernand Braudel-elaborating on the link between the mental constitution of populations and geographic factors. 6 2. The interwar period saw the rise of new paradigms promoting ontological and cultural-morphological models for explaining spatial similarities and differences. They were less concerned with interaction and diffusion between nations, which were so characteristic of the previous period, than with devising some common cradle and shared structures for these societies. That was the aim of the 'new science of balkanology', driven by several Yugoslav and Romanian scholars. 'The time has come', wrote the editors of the Belgrade-based new journal Revue internationale des études balkaniques, to contemplate the coordinating of national academic Balkan studies, giving them cohesion and, above all, orienting them towards the study of a Balkan organism that constituted one whole since the most distant times' 7 and elucidating 'the elements of Balkan interdependence and unity'. 
It is in this sense significant that, at a time when the national historiographies were busy eliminating the Ottoman features from the national cultures, scholars of the Balkans endeavored to reverse the notion of the region as the Ottoman legacy in Europe. They did so not by asserting an inherent difference from the Ottomans, but by praising the Ottoman 'primitiveness' and the segregation of the Christians under Ottoman rule, which they saw as prerequisites for the preservation and development of the unique Balkan virtues and potential. The implication of this kind of argument was that-had the Turks been more advanced, that is, more like the West-the culture and identity of the Balkan Christians would not have survived. This reading differed substantially from the contemporary western view, which continued to describe the Ottoman rule as an aberration and unmitigated disaster-a black 'yoke' that was held responsible for all the ills that plagued the development of the Balkan states. In this regard, the western scholars of the Balkans found themselves in the same camp with the Balkan nationalists, not the Balkan regionalists. Even more remarkably, Balkan regional scholars considered the (western idea of) nationhood as a misplaced importation that brought about the disruption of an organic society. 'The principle of nationality, and later the right to self-determination', Romanian medievalist and founder of the 'Institute for Balkan Research' in Bucharest Victor Papacostea wrote, 'has not found in our area the right time and the right solution. Created in the West and for the West, the idea of national states was borrowed by or enforced on the Balkans …; no attempt was made to adapt this idea to the conditions of our region …. It is hard to find another example in world history that reveals more clearly the catastrophic consequences of the blind application of an idea in disregard for the major natural realities'. Against the tendency of framing the Balkans in terms of nationalist discord, Balkan regionalists stressed the 'unnaturalness' of nationalism and the difficulties it encountered in the region. 8 Such a view, predictably, was unpopular outside the region. Arnold Toynbee was one of the very few western scholars who shared the view that the application of the utterly exceptional western formula of making language the basis for political demarcation to the intermixed populations of the Balkans and the whole Near East had resulted in huge human suffering and massacre and, as he put it, 'diminishing returns in happiness and prosperity '. 9 From such positions, Balkan regionalists developed the theoretical and methodological parameters of the new science of balkanology. Its domains outline a truly interdisciplinary field of study: from history, linguistics, and folklore to anthropology, demography, statistics, and human geography to economic development, law, the arts, architecture, and literature. 10 It is indeed remarkable that a genuine blueprint for what would come to be called 'area studies' after World War II, aimed at 'total knowledge' by combining the humanities and social sciences, originated in the 1930s in the region itself. On the symbolic level, the shift was no less stunning. 
Precisely at the time when the western discourse of balkanism reached its peak and when the Balkans became increasingly recreated as the ultimate internal European 'Other', in the local regional context the term 'Balkans' and, being Balkan underwent systematic rehabilitation and veritable thriving as both a political and cultural concept. The movement toward a 'Balkan Conference' and 'Balkan Pact', the founding of 'Balkan institutes' to conduct 'Balkan research', and the appeals for a 'Balkan fatherland', 'Balkan consciousness', and 'Balkan patriotism' converged in the slogan 'the Balkans for the Balkan peoples'. Accordingly, this new political concept was in explicit opposition to Southeastern Europe, which was found to be an artificial and 'faceless' coinage. This forceful rearticulation was aimed not at eluding the western balkanism but at directly confronting and emasculating it. Next to laying the grounds for a new study field, interwar balkanologists sought to resignify the Balkans and turn its Orientalist semantics on its head. They did so through a series of para-historical accounts about a primeval Balkan soul, regenerated Balkan culture, a proper cultural orientation and global mission, and regional self-reliance and self-sufficiency. The Balkans they tried to promote was not just a cultural-historical and socioeconomic entity but an axiological category-one that embodied a peculiar value system underlain by cultural and moral elements. 11 All this went beyond coping with stigma and overturning self-stigmatization. Interwar academic balkanism strived to supply the conceptual toolkit and the scholarly basis for the construction of a Balkan identification. While not denying the still persisting power of the nation-state, this balkanism pursued a more encompassing, regionally anchored collective identity. In the process, the Balkans gelled into a discrete civilizational sphere, occasionally underpinned by overt racism and couched in moralizing oratory or metaphysical, even mystic references. Ironically, this representation borrowed heavily from the then fashionable ethnoontological discourse praising ethnic authenticity, organicity, and autarchy. In essence, the interwar 'Balkan idea' was an emancipatory one. It was an attempt at 8 Papacostea 1996. 9 Toynbee 1922: 16-18. 10 Budimir and Skok 1934: 14-19. 11 See, in particular, Skok and Budimir 1936: 601-613;Balkan i Balkanci 1937. offsetting the impotence of small statehood in the geopolitical environment of the 1930s. 'To protect the Balkans as one entity, to preserve it for the Balkan peoples themselves', wrote one of the founders of the Balkanski institut in Belgrade, 'this today is the only true and the greatest national idea. Our patriotism, if it wants to be real, should be a Balkan patriotism'. 12 Furthermore, the Balkan idea, as conceived at that time, removed the compulsion to choose and define the identity of the Balkans between the poles of Europe and Asia. It asserted the existence of a 'strong and irreducible Balkan individuality', which valorized in-betweenness, liminality, and complexity. It sought to subvert the western notion of progress, where different communities trod towards the pinnacle of history occupied by 'the West'. It professed a proper, Balkan time axis, leading from the deepest past to the present and future, where universal ancient virtues-the bedrock of European civilization-were continuously re-enacted. 
Accordingly, 'The Balkan Other [was] re-imagined as the West's anthropological Utopia, as the Westerner's alternative, or possible self'; he (or she) appeared as 'considerably more gifted, more admirable, and even more appealing than the average, banal Westerner'. 13 It is worth noting that such self-representation tallied with a conspicuous strain in Central European and western literature at that time that estheticized Balkan underdevelopment, spontaneity, and artlessness. Arguably, this convergence of perspectives was the outcome not of imitation, but of a flow of ideas and concepts between east and west. Thinkers as different in other respects as the German Slavicist Gerhard Gesemann, British historian Robert William Seton-Watson, and Spanish philosopher José Ortega y Gasset, saw in the 'pristine Balkans' a way of exploring the contemporary challenges to the self-assurance of 'the West' and of expressing the widely shared feeling of estrangement from modern life. 14 From this perspective, engagement with the Balkans, both inside and outside the region, was a way of engaging with wider domestic and transnational debates about the fate of western modernity and progress. 3. After World War II, the Balkans as a politically or economically relevant notion all but disappeared. It survived as a cultural-historical space plowed by a cluster of historically oriented human sciences and as a terrain for exercising the soft power of cultural diplomacy. The proliferation of regionalist organizations and the consolidation of Balkan studies as an autonomous field in the 1960s brought together cultural politics, geopolitics, and national propaganda and marked a new wave of politicization of Balkan research. The major themes organizing the balkanist academic discourse during those years were ethnogenesis and ethnocultural continuity, the impact of empires, the sources of backwardness and modernization, and relations with 'Europe'. These themes were approached from strongly normativist positions, marked by evolutionism, Eurocentrism, and teleological thinking. Unlike their predecessors, the postwar balkanists showed no enthusiasm for devising a 'Balkan' road to modernity. The neo-Marxist 'dependency', 'world-economy', and 'coreperiphery' paradigms did not produce visible resonance in the region in contrast to other parts of eastern Europe. The same applied to contemporary nationalism studies. The dominant approach to modernization favored comparisons of the local 'stages of development' with those in the west rather than with neighbors or other peripheries. 12 Parežanin and Spanačević 1936: 321;Knjiga o Balkanu, vol. 1, vii-ix. 13 Antohi 2002. 14 Gesemann 1943;Seton-Watson 1917;Radica 1940: 221. As a general rule, regional scholars tended to stress particular aspects of the 'common Balkanness' where 'their' nation could claim a special contribution. The periods that, in theory, featured as crucial for Balkan historical unity were partitioned in similar national chunks. It was common to offer selective Greek, Bulgarian, or Serbian perspectives to the Byzantine Empire or to parcel the study of the Ottoman Empire into 'Greek', 'Bulgarian', or 'Serbian' lands within teleological national narratives. Balkan studies were, in this sense, a virtual playground of 'methodological nationalism'. Not surprisingly, such a 'regional approach' did not affect the writing of national history, which remained a selfcontained, didactic, and parochial field. 
Remarkably, communication across the Iron Curtain was made possible precisely by the consensually shared national framework of history writing and by neither side subjecting the national paradigm to any critical scrutiny. The US journal Southeastern Europe regularly published thematic issues devoted to key national anniversaries featuring the diehards of the Balkan national historiographies. The same was true of the German journal Südost-Forschungen. The 'historical Balkans' thus came to be understood as a mosaic of national spaces validated by immutable ethnic or national communities fully conscious of their distinct character. Unlike interwar balkanology, its postwar continuation never went as far as to interrogate the basic theoretical premise of the discipline: the construction of boundaries per se. Overall, Balkan studies remained isolated from the theoretical and methodological debates taking place since the 1970s in general history and the social sciences, especially in political economy and nationalism studies, in both western and eastern Europe. 4. Finally, the post-1989 period has been characterized by a theoretical clash over the meaning of the Balkans. In reaction to the resurrected ghost of 'balkanism' in the wake of the Yugoslav wars of succession, some scholars, coming mainly from literary and cultural studies, sought to argue for the Balkans not as a product of geography, history, or culture but as 'a place' in a discourse-geography'. A great deal of the research after the mid-1990s has centered around the nature of this discourse as well as how it was established, its characteristics, and its critique. But there are also those who have continued the search for the historical or cultural 'reality' of the Balkans, variously defined in terms of a cluster of structural and cultural characteristics or historical legacies. The theoretical discussions the Balkans gave rise to placed the area at the center of the debates on the meaning of regions and the mechanisms for the production of space that has led to interrogating definitions, traits, and boundaries. A quintessential historical region? So, what can we actually learn from the Balkan case about the production of regions itself? The entanglement of politics with scholarship appears as a major propeller of region making. The politicization of scholarly regionalisms related to, on the one hand, the great European states' economic and political interests in this area and, on the other, various local nationalist or federalist schemes typically conceived in response to external or domestic political pressure. Balkan regionalist projects were steeped in diametrically opposed value systems: conservative, national-liberal, Marxist, social constructivist, etc. When we talk about supranational frameworks, we tend to believe that we are referring to politically 'progressive' projects. Many regional schemes, however, spoke on behalf of far more ambiguous political stances. Consequently, the Balkans could be referred to as the root of European civilization or be envisaged as the driver of an alternative, anti-European value system; it could signify a younger Europe that would revitalize the old one or represent a stigmatizing notion denoting deficiency in civilizational terms to be overcome by consistent efforts at Europeanization. Yet the most enduring source of politicization of scholarly regional terminology is the fusion of regionalist and nationalist designs in the fields of politics, economy, or culture. 
The academic notion of the Balkans was construed in dialogue with national autarchy and nation-centered scholarly paradigms. The outcome was patently ambivalent: Balkan regionalism could at one time erode and at another reinforce national differences. The drive for methodological rescaling beyond the national often originated from essentially nationalist agendas. There is, indeed, no clear-cut difference; rather, there is a complex relationship between the conceptualizations of the national and the regional: Nationalist arguments may be adduced to buttress a regionalist framework, and a regional definition may serve to bolster a nationalist project. Local regionalizations sometimes connected to and other times clashed with the regional discourses produced outside the region. To put it bluntly, as powerful as the post-Enlightenment 'Western discourse' (or rather different national western discourses) of the European east and southeast might have been, it was neither the sole nor, at all times, the dominant 'agent' of regionalization. The flow of ideas, concepts, and narratives were never unidirectional. The ideas of scholars like Shishmanov, Cvijić, Iorga, Papacostea, Budimir, and Skok strongly influenced western conceptualizations. Sometimes they went beyond the understanding of the Balkans: Cvijić's influence is clearly attestable in both Fernand Braudel's conceptualization of the Mediterranean and the paradigm of histoire des mentalités. Iorga partook in both Karl Lamprecht's project of Weltgeschichte and the 'new cultural history' that prepared the ground for the Annales school. Such cases of knowledge transfer bespeak a movement of concepts and ideas that, although being asymmetrical, breaches the rampant view of a mono-dimensional 'West'-to-'East' pattern. The Balkan case is also revealing in the way various disciplines are contributing to the production and life cycle of regions. Until World War II, linguistics, folklore, literature, and ethnography were much more important than history proper for the crystallization of the Balkans as a historical region. The upsurge of the social sciences and of divisions based on socioeconomic and political models after 1945, subsumed to a large extent Southeastern Europe under an Eastern European umbrella, undermining the Balkan narrative, which reemerged with the 'cultural turn' in the 1980s. The recurrent and currently prevailing notion of the Balkans as based on the continuity of its history springs from the assumption that shared historical experiences within this geographical space necessarily produce a structural entity-a historical regionand even something like a regional identity. However, none of the 'regional' historical experiences and legacies was exclusively a Balkan one, as they typically applied to much bigger political configurations; nor did they affect this geographical space as a whole and in the same degree. A closer look at individual historical periods suggests that most of the so-called defining characteristics of the region were not incomparable with other regionsin Europe and beyond. Moreover, social, demographic, religious, cultural, economic or political phenomena draw different lines, shape different zones, and render different regional 'definitions'. Diverging geographies also result from zooming differences-areas charted by criteria on the micro level (like marriage or hereditary patterns, gender relations, household and work organization, etc.) 
differ from those drawn on a macro level (state building, industrialization, urbanization, etc.). There is thus no single 'shared' history that scholars can reify, that might be thought to produce a specific cluster of characteristics, or that could legitimately serve to construct a region. Instead, all histories encompass 'multiple geographies'. Conversely, tailoring academic research to established spatial categories tends to predetermine to a large extent its conclusions. The endless debates about the boundaries of the Balkans have been the result of not only differing political agendas or geographical determinism, but also the scholarly fallacy of projecting a spatial category coined at a particular time and for particular purposes backwards and forwards in time, where it sits uneasily with very different political and social realities. Such challenges to the meaning of 'regions' and the legitimacy of 'area studies' feed on postcolonial critique, sensibilities attuned to an increasingly globalized world, and new theories related to the social construction of space. They inevitably raise hard questions about the rationale and future of regional research. The tackling of these issues falls beyond the purview of this paper, but some general observations may be broached here. Regions have not been overcome or made irrelevant by the demise of traditional 'area studies' and the rise of the 'new transnationalism'. However, sustaining their relevance as a terrain of action and an object of study entails reconfiguring their meaning. A vessellike concept of a historical region marked by objective criteria and a cluster of structural and cultural traits, or even legacies, should recede before a fuzzier, processual, and openended one. This means shifting the focus of discussion to the social, political, and intellectual mechanisms effecting the materialization of space and borders and, most prominently, to human agency. In our time, 'rage to deconstruct has rather given way to a fuller and richer exploration of the capacity, and its limits, of people (and things) to act '. 15 This most surely concerns academics, whose discourses are a powerful social mechanism for constructing space, whereby heuristic frameworks tend to crystallize into cognitive maps and political realities. 15 Geyer 2006.
Newborn infant parasympathetic evaluation (NIPE) as a predictor of hemodynamic response in children younger than 2 years under general anesthesia: an observational pilot study Background It is still unknown whether newborn infant parasympathetic evaluation (NIPE), based on heart rate variability (HRV) as a reflection of parasympathetic nerve tone, can predict the hemodynamic response to a nociception stimulus in children less than 2 years old. Methods Fifty-five children undergoing elective surgery were analyzed in this prospective observational study. Noninvasive mean blood pressure (MBP), heart rate (HR) and NIPE values were recorded just before and 1 min after general anesthesia with endotracheal intubation as well as skin incision. The predictive performance of NIPE was evaluated by receiver-operating characteristic (ROC) curve analysis. A significant hemodynamic response was defined by a > 20% increase in HR and/or MBP. Results Endotracheal intubation and skin incision caused HR increases of 22.2% (95% confidence interval [CI] 17.5–26.9%) and 3.8% (2.1–5.5%), MBP increases of 18.2% (12.0–24.4%) and 10.6% (7.7–13.4%), and conversely, NIPE decreases of 9.9% (5.3–14.4%) and 5.6% (2.1–9.1%), respectively (all P < 0.01 vs. pre-event value). Positive hemodynamic responses were observed in 32 patients (62.7%) during tracheal intubation and 13 patients (23.6%) during skin incision. The area under the ROC curve values for the ability of NIPE to predict positive hemodynamic responses at endotracheal intubation and skin incision were 0.65 (0.50–0.78) and 0.58 (0.44–0.71), respectively. Conclusions NIPE reflected nociceptive events as well as anesthestic induction in children less than 2 years undergoing general anaesthetia. Nevertheless, NIPE may not serve as a sensitive and specific predictor to changes in hemodynamics. Trial registration This study was registered on May 3, 2018 in the Chinese Clinical Trail Registry; the registration number is (ChiCTR1800015973). Background Endotracheal intubation and skin incision are two of the strongest noxious stimuli received by surgical patients under general anesthesia [1]. From one perspective, sufficient analgesic levels are critical to avoid unexpected movements, sympathetic reactions with consequent cardiovascular complications, and the development of pain memory. From another perspective, restriction to a minimum dosage of analgesic is desirable to avoid opioid-induced hyperalgesia, respiratory depression, and nausea as well as to achieve a shorter perioperative treatment period [2,3]. Due to a lack of reliable tools for predicting and assessing the balance between analgesia and nociception during general anesthesia, clinicians mainly use classical symptoms of insufficient analgesia including increases in heart rate (HR), blood pressure, lacrimation, and sweating to tailor the administration of analgesic drugs, an approach that can reduce the side-effects of opioid overdosage but not underdosage [1,4]. Subcortical-derived autonomic nervous system changes induced by nociceptive stimuli was shown to be reflective of the balance between nociception and analgesia [5][6][7][8][9][10][11][12][13]. Two parameters, the newborn infant parasympathetic evaluation (NIPE) and analgesia nociception index (ANI, MDoloris Medical Systems, Lille, France), were derived from a real-time reliable analysis of HR variability (HRV) in a time window of 64 s on a scale from 0 (maximum of nociception) to 100 (complete analgesia). 
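The NIPE algorithm itself is proprietary, so it cannot be reproduced here; purely to illustrate how an HRV-derived index of parasympathetic tone can be computed from R-R intervals over the 64-s sliding window described above, the following Python sketch evaluates a simple time-domain measure (RMSSD) on synthetic data. The window length and step mirror the description; the R-R series and all numbers are invented.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive R-R differences, a common
    time-domain proxy for parasympathetic (vagal) activity."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

def sliding_hrv_index(rr_ms, rr_times_s, window_s=64.0, step_s=1.0):
    """Evaluate the HRV measure on a 64-s window slid along the recording,
    mirroring the windowing described for the NIPE monitor (the actual
    NIPE computation is proprietary and differs from this toy measure)."""
    out = []
    t = rr_times_s[0] + window_s
    while t <= rr_times_s[-1]:
        mask = (rr_times_s > t - window_s) & (rr_times_s <= t)
        if mask.sum() > 2:
            out.append((t, rmssd(rr_ms[mask])))
        t += step_s
    return out

# Synthetic R-R series: ~140 bpm with mild variability (not patient data).
rng = np.random.default_rng(0)
rr = 430 + 15 * np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 5, 300)
times = np.cumsum(rr) / 1000.0
for t, v in sliding_hrv_index(rr, times, window_s=64, step_s=10)[:3]:
    print(f"t = {t:5.1f} s  RMSSD = {v:4.1f} ms")
```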
NIPE is the neonatal version of the ANI used in adults [14,15]. It has been shown that the autonomic nervous system responses to a noxious stimuli would change with the advancing age. As the nervous system matures, sympathetic HR modulation increases, while parasympathetic modulation decreases [16]. The ANI used in adult could not be adapted directly to children less than 2 years old due to the higher respiratory rate and heart rate in children [17]. The NIPE, on the other hand, reflects the parasympathetic tone. It was found that NIPE would decrease significantly in newborn infants after a painful surgical procedure [8]. The NIPE was also significantly reduced in babies borned by instrument-assisted delivery when compared to those delivered naturally [9]. In adult patients, the automomic index ANI had been used to predict hemodynamic changes associated with painful stimulation [10][11][12][13]. Study of the prediction ability of the NIPE in children has not yet been reported. In this observational pilot study, two manuvors, namely endotracheal intubation and skin incision, were chosen as the noxious stimuli. As a primary endpoint, we evaluate whether the pre-event value of NIPE would be a good predictor of the hemodynamic responses of such stimulation. Patients This observational prospective study was approved by the ethics committee of Shanghai Children's Medical Center affiliated to Shanghai Jiao Tong University (SCMCIRB-K2018049) prior to its start and was registerated in the Chinese Clinical Trial website ( http://www.chictr.org.cn/ showproj.aspx?proj=27154, ChiCTR1800015973). Patients were enrolled over a 4-month period between June 2018 and September 2018. Full-term pediatric patients aged 1 month to 2 years with an American Society of Anesthesiologists physical status score I~II were included. Patients were scheduled for elective general or urinary surgery. We excluded children who had a history of premature delivery or neurological, cardiac or respiratory conditions. Children who required prolonged resuscitation at birth, underwent general anaesthesia within the preceding week of study, experienced prolonged exposure to pain, and those who were currently receiving drugs with known effects on sympathetic and parasympathetic activity were also excluded. Written informed consents were obtained from the parents of study subjects. Anesthetic technique and monitoring All pediatric patients were fasted according to the relevant guidelines [18]. Crystalloid fluid (Ringer's acetate) containing 5% glucose was given in the ward by the attending surgeon or in the operating theater by the anesthesiologist in charge as appropriate. Oral midazolam 0.5 mg/kg as sedative premedication was administered to all children 30 min before patients were transferred to the operating room. All patients were accompanied by their parents or a senior nurse staff in our preparation room as they watched cartoon video or listened to stories for relaxation. Upon arrival in the operating theater, standard monitoring was applied using an anesthesia workstation (Datex-Ohmeda Aisys CS 2 , GE Healthcare, USA) with a three-lead electrocardiogram (ECG), pulse oximetry and non-invasive blood pressure measured at the arm. After an intraveous line was secured, all patients received Ringer's acetate as maintenance fluid following the 4-2-1 rule. Fentanyl 2-3 μg/kg was injected over a 15-s period. After 1 min, anesthesia was induced with propofol 2-3 mg/kg administered intravenously (i.v.) over 30 s. 
When the eyelash reflex was absent, the child was ventilated via a facemask with 100% oxygen. Rocuronium 0.6 mg/kg was administered i.v. for muscle relaxation once adequate ventilation could be achieved via the facemask. A senior anesthesiologist (>3 years of experience) decided the timing of endotracheal intubation and performed intubation using a video laryngoscope. The patients were then ventilated in pressure-controlled mode at a frequency of 20 breaths per minute (inhalation-to-exhalation ratio of 1:2). Peak inspiratory pressure was adjusted to achieve a tidal volume between 8 and 10 ml/kg, and end-tidal carbon dioxide was maintained between 35 and 45 mmHg. Anesthesia was maintained with sevoflurane at 1.0-1.3 MAC according to the patient's age, with fentanyl boluses administered as clinically required.

Study protocol The MDoloris system (MDoloris Medical Systems, Loos, France) was integrated with the monitors of the anesthesia workstation for HRV analysis. After calibration, the instantaneous NIPE index was displayed on the monitor screen. The instantaneous NIPE was obtained from four individual windows of 16 s; the R-R interval analysis for each 16-s block is based on a sliding window of 64 s. Continuous measurement of the index is achieved by moving the 64-s window after each calculation, and the sampling rate of the final parameters depends on the window moving period; in practice, a 1-s moving period is used [19]. The timing of endotracheal intubation was decided by the anesthesiologist, who was blinded to the study protocol and the MDoloris monitoring system. The research team was responsible for recording the NIPE, HR and MBP measurements immediately before and 1 min after tracheal intubation and skin incision [20]. The changes in MBP and HR during the observation were calculated as the relative change from the pre-event value: change (%) = (value 1 min after the event - value immediately before the event) / value immediately before the event × 100. A hemodynamic response was considered significant and clinically relevant if an increase of more than 20% in either parameter (HR and/or MBP) was observed after the noxious event. We also calculated the dynamic NIPE to examine its ability to predict a hemodynamic response [21].

Statistical analysis Patient data were presented as the mean (95% confidence interval [CI]) or median (interquartile range [IQR]) as appropriate. All data were tested for normal distribution using the Kolmogorov-Smirnov test. Variables before and after stimulation were compared using paired t tests. Receiver operating characteristic (ROC) curves and the associated area under the curve (AUC) values were computed to assess the ability of the NIPE (pre-stimulation values) to predict hemodynamic reactivity. GraphPad Prism 7 (GraphPad Software, Inc., San Diego, CA, USA) and MedCalc version 18.2 for Windows (MedCalc Software, Ostend, Belgium) were used for statistical analysis. P values < 0.05 were considered statistically significant.

Results Seventy-one pediatric patients were initially recruited into this study. Of these, 16 patients were excluded because of a history of premature delivery, recent general anaesthesia, arrhythmia, or a lack of parental permission for participation. Finally, 55 patients met the inclusion criteria, and written informed consent was obtained from their parents. The characteristics of these patients are presented in Table 1. Of these pediatric patients, 47 children were male and 8 were female. Their mean age was 1.3 years (95% CI 1.1-1.5 years), and their mean weight was 10.6 kg (10.0-11.1 kg).
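To make the response definition above concrete, the sketch below computes the relative change in HR and MBP around an event and flags responders with the >20% rule. The data frame values and column names are hypothetical and are not data from this study.

```python
import pandas as pd

# Hypothetical pre-/post-intubation recordings (not study data).
df = pd.DataFrame({
    "hr_pre":   [118, 132, 101, 140],
    "hr_post":  [150, 135, 128, 142],
    "mbp_pre":  [52, 60, 48, 55],
    "mbp_post": [66, 61, 50, 70],
})

def relative_change(pre, post):
    """Change (%) = (post - pre) / pre * 100, as in the study protocol."""
    return (post - pre) / pre * 100.0

df["d_hr"] = relative_change(df["hr_pre"], df["hr_post"])
df["d_mbp"] = relative_change(df["mbp_pre"], df["mbp_post"])

# A positive hemodynamic response: >20% increase in HR and/or MBP.
df["responder"] = (df["d_hr"] > 20) | (df["d_mbp"] > 20)
print(df[["d_hr", "d_mbp", "responder"]].round(1))
```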
During the period of tracheal intubation, the NIPE recordings for four patients were complicated by noise due to poor electrode-skin contact. Thus, the final analysis included NIPE data from 51 patients undergoing endotracheal intubation and 55 patients undergoing skin incision (Fig. 1). The AUC values for the ability of NIPE to predict a positive hemodynamic response at endotracheal intubation and at skin incision were 0.65 (95% CI 0.50-0.78) and 0.58 (0.44-0.71), respectively. The best cut-off values (the optimal thresholds) for the NIPE index at the respective events were 42 (sensitivity 71.9% and specificity 52.6%) and 60 (sensitivity 69.2% and specificity 52.4%). These results indicate that the probability of correctly predicting a positive hemodynamic response based on the NIPE was similar to that achieved with a random coin toss (Fig. 2). The AUC values for the ability of the dynamic NIPE to predict a positive hemodynamic response at endotracheal intubation and at skin incision were 0.68 (95% CI 0.53-0.80) and 0.54 (0.40-0.68), respectively. Thus, our results showed that the dynamic NIPE was also not sufficiently sensitive and specific to predict a hemodynamic response to these events.

Discussion In the present study, we observed the NIPE as well as the dynamic NIPE in children less than 2 years of age undergoing general anesthesia and found that (1) these indexes failed to predict a hemodynamic response at tracheal intubation and skin incision, and (2) the NIPE nevertheless reflected nociceptive events as well as anesthetic induction in pediatric patients undergoing general anesthesia. Several studies indicated that the ANI was able to predict movement and hemodynamic reactivity in adult patients [10][11][12][13], whereas other studies observed that the hemodynamic response could not be anticipated based on the ANI [1,21,22]. Our study also did not find that the NIPE could predict an HR- or MBP-based response to nociceptive stimuli. The reasons for this discrepancy are ultimately unclear. Patients in previous studies [10][11][12][13] did not receive neuromuscular blocking drugs before stimulation, whereas we used a regimen with rocuronium. The previously described variation in the autonomic stress response to tracheal intubation with and without neuromuscular blockade may partially explain the observed differences [23]. In consideration of perioperative safety, all children enrolled underwent endotracheal intubation after administration of a muscle relaxant. Children who were allowed to resume spontaneous breathing during surgery were not enrolled, in order to avoid a statistical bias introduced by this anesthetic technique. Respiratory sinus arrhythmia (RSA) is also affected by respiratory parameters (e.g., respiratory rate, tidal volume and inspiration-expiration ratio) [24,25]. In the current study, the respiratory rate during mechanical ventilation was kept at a fixed value (20 breaths per minute), which is relatively lower than the normal respiratory rate of awake children (~30 breaths per minute). Furthermore, in infants, the effects of breathing rates and I:E ratios that differ from spontaneous breathing on HRV indices are unknown [26]. The pre-induction and pre-intubation NIPE values were low in our cohort (40 ± 12 and 45 ± 8, respectively). The absence of ventilation before intubation could explain the low NIPE values observed before intubation, as NIPE is an HRV measure that evaluates parasympathetic activity by assessing RSA.
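For readers who want to reproduce this type of analysis, the sketch below computes an ROC curve, its AUC and a Youden-optimal cut-off with scikit-learn. The pre-event NIPE values and responder labels are invented placeholders, not the study data; because a lower NIPE is hypothesized to precede a positive response, the score is negated before computing the curve.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical pre-intubation NIPE values and responder labels (not study data).
nipe_pre = np.array([38, 55, 41, 47, 60, 35, 52, 44, 39, 58])
responder = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])

scores = -nipe_pre                       # lower NIPE = higher predicted risk
auc = roc_auc_score(responder, scores)

fpr, tpr, thresholds = roc_curve(responder, scores)
youden = tpr - fpr                        # Youden's J for each candidate cut-off
best = int(np.argmax(youden))
best_cutoff = -thresholds[best]           # map the threshold back to the NIPE scale

print(f"AUC = {auc:.2f}, best NIPE cut-off = {best_cutoff:.0f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```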
Secondly, we speculate that a lower pre-intubation NIPE value may be associated with injection pain and the withdrawal response, as anesthesia in infants and children is usually induced with propofol and rocuronium. Finally, HRV reflects the balance between parasympathetic and sympathetic nerve outflow from the central nervous system to the cardiac sinus node. Anxiety, which most likely plays a pathogenetic role, is associated with autonomic dysfunction [27], and preoperative emotions (such as anxiety and fear) have been shown to mediate RSA alteration in adults [28,29]. Crying and struggling in older infants and toddlers may also lead to increased sympathetic nervous system activity, and acute stress, with subsequent release of catecholamines into the systemic circulation, causes abnormal HRV responses to acute pain [30]. In the pre-anesthesia period, although a series of strategies was adopted, anxiety in children can be reduced but not eliminated, which also explains the low NIPE values that we observed prior to anesthesia induction. Although the low pre-event NIPE values in the current study made a significant decline after a noxious stimulus less likely, the NIPE nonetheless showed a significant change and reflected nociceptive events as well as anesthetic induction. Thus, NIPE may potentially aid the monitoring of nociception. Finally, the present study has several limitations that should be noted. (1) NIPE evaluation requires ventilation, and NIPE analysis during apnea phases is questionable. (2) With the current sample size, the weak hemodynamic response to skin incision, owing to adequate analgesia, might have reduced the statistical power to detect predictive ability; a larger sample size is needed in further research.

Conclusion In conclusion, the NIPE reflected nociceptive events as well as anesthetic induction in infants and young toddlers undergoing general anaesthesia. Nevertheless, the NIPE may not serve as a sensitive and specific predictor of changes in hemodynamics.
Trajectory planning in Dynamics Environment : Application for Haptic Perception in Safe HumanRobot Interaction In a human-robot interaction system, the most important thing to consider is the safety of the user. This must be guaranteed in order to implement a reliable system. The main objective of this paper is to generate a safe motion scheme that takes into account the obstacles present in a virtual reality (VR) environment. The work is developed using the MoveIt software in ROS to control an industrial robot UR5. Thanks to this, we will be able to set up the planning group, which is realized by the UR5 robot with a 6-sided prop and the base of the manipulator, in order to plan feasible trajectories that it will be able to execute in the environment. The latter is based on the interior of a vehicle, containing a person (which would be the user in this case) for which the configuration will also be made to be taken into account in the system. To do this, we first investigated the software's capabilities and options for path planning, as well as the different ways to execute the movements. We also compared the different trajectory planning algorithms that the software is capable of using in order to determine which one is best suited for the task. Finally, we proposed different mobility schemes to be executed by the robot depending on the situation it is facing. The first one is used when the robot has to plan trajectories in a safe space, where the only obstacle to avoid is the user's workspace. The second one is used when the robot has to interact with the user, where a dummy model represents the user's position as a function of time, which is the one to be avoided. Introduction In human-robot interaction systems, knowing how to compute a path for the robot to follow, while taking into account the human position, is a crucial task to ensure the safety of the individuals around the robot. This is where path and trajectory planning plays its role in the field of robotics, where achieving realtime behaviour is one of the most challenging problems to solve. The result is a constant demand for research into more complex and efficient algorithms that allow robots to perform tasks at higher speeds, reducing the time they need to complete them, resulting in increased efficiency. But this also comes at a cost: to achieve higher speeds and shorter times, robot actuators must work under more demanding conditions that can shorten their overall life or even damage their structure. High operating speeds can also affect the accuracy and repeatability of manipulators. Therefore, it is important to generate well-defined trajectories that can be executed at high speeds without generating high accelerations (to avoid robot wear or end effector vibrations during stopping). Path planning is the generation of a geometrical path from an initial point to an end point and the calculation of the crossing points between them. Each point of the generated trajectory is supposed to be reached by the robot end effector through a specific movement. When the robot is supposed to interact with a human, its velocity and acceleration must be zero at the end of the trajectory. Another important element to take into account is the environment in which the task or the movement is going to be performed. This is what allows the system to identify the robot's environment and the colliding objects that might be present, thus determining the areas in which the robot must be constrained or limited to ensure the safety of the user. 
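One standard way to obtain point-to-point profiles with zero velocity and zero acceleration at both ends, as required above, is a quintic (fifth-order) polynomial time scaling. The sketch below is a generic textbook construction rather than code from the LobbyBot project; the joint values and duration are invented.

```python
import numpy as np

def quintic_coeffs(q0, qf, T):
    """Coefficients of q(t) = a0 + a1*t + ... + a5*t^5 with zero velocity and
    zero acceleration at both ends, reaching qf at t = T."""
    d = qf - q0
    return np.array([q0, 0.0, 0.0, 10 * d / T**3, -15 * d / T**4, 6 * d / T**5])

def evaluate(coeffs, t):
    """Position, velocity and acceleration of the quintic profile at time t."""
    pos = sum(c * t**i for i, c in enumerate(coeffs))
    vel = sum(i * c * t**(i - 1) for i, c in enumerate(coeffs) if i >= 1)
    acc = sum(i * (i - 1) * c * t**(i - 2) for i, c in enumerate(coeffs) if i >= 2)
    return pos, vel, acc

# Example: move one joint from 0.0 rad to 1.2 rad in 2 s (invented values).
coeffs = quintic_coeffs(0.0, 1.2, 2.0)
for t in (0.0, 1.0, 2.0):
    p, v, a = evaluate(coeffs, t)
    print(f"t={t:.1f}s  q={p:.3f} rad  dq={v:.3f} rad/s  ddq={a:.3f} rad/s^2")
```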
The Lobbybot project is a project that allows interaction between a user and a cobot. These interactions allow for the creation of a touch-sensitive interface or intermitant contact interface (ICI). The scenario used allows the user to be inside a car with the possibility to interact with its environment by getting a sensory feedback of the different surfaces thanks to a 6 faces prop providing the different textures. Due to the immersion of the user via a VR headset, the system must ensure the safety of the user, as he cannot see the location of the robot. Therefore, it is necessary to implement trajectory planning techniques to be able to avoid unwanted interactions between the robot and the user. To do this, the system must take into account the obstacles present (environment or user). A virtual mannequin is modelled using data from the HTC Vive trackers which provide an estimate of the user's position, and will give the system a model to plan the movements. Thus, the goal of the LobbyBot project is to provide an immersive VR system that is safe for the user and gives them the ability to interact with the environment at different locations, providing a new level of interaction between VR environments and the real world. 2 State of the art Intermittent contact interface In the area of human-robot interaction and haptic perception, the ability to reproduce the sense of touch to appreciate different textures and motion sensations through the use of cobots has been addressed in [1], where a rotatable metaphorical accessory approach (ENTROPiA) has been proposed to provide an infinite surface haptic display, capable of providing different textures to render multiple infinite surfaces in VR (virtual reality). Studies in [2] [3] have focused on the perception of stiffness, friction, and shape of tangible objects in VR using a wearable 2-DoF (degrees of freedom) tactical device on a finger to alter the user's sense of touch. In [4,5], a 6-DoF cobot is used in a VR environment to simulate the interior of a car, where interaction between the robot and the user is expected just at specific, instantaneous points. This proposal is to use ICIs (Intermittent Contact Interfaces) [6] to minimise the amount of human-robot interactions to increase safety. In order to use the proposed implementations in this study in a real-time environment that involves human movement, it is important to ensure the safety of both the user and the robot to avoid potential collisions or accidents. This is where it is necessary to implement proper path and trajectory planning, in order to determine a feasible path to the desired goal, while avoiding interaction with the human until said goal is reached, generating a human-robot interaction just at the desired time. Path Planning Path planning refers to the calculation or generation of a geometric path, which connects an initial point to an end point, passing through intermediate viapoints. These trajectories are intended to be followed by the end effector of a robot in order to execute a desired task or motion. This geometric calculation is based on the kinematic properties of the robot as well as its geometry (included in its workspace). In the simplest case, path planning is performed within static and known environments. However, this problem can also be generated for robotic systems subjected to kinematic constraints in a dynamic and unknown environment. Path planning can be done using a previously known map. This is called global planning. 
This method is commonly used to determine the possible paths to follow to reach the final position. It applies to a known and static environment, where the position of the obstacles does not change, and it can be performed offline, as it is based on previously known information. In the case of dynamic environments, it is necessary to perform local path planning, which relies on sensors or any other type of interface providing data to obtain updated information about the robot's environment. This planning can only be done in real time, as it depends on the dynamic evolution of the environment. Figure 1 presents the main differences between local and global path planning [7,8]. There have been multiple proposals of path planning algorithms over the years; [9] reviews the basics and workings of the algorithms most commonly found in the robotics literature. The main methods are the following:

- The Artificial Potential Fields (APF) approach [8], introduced by O. Khatib in 1985 and further developed in [10][11].
- The Probabilistic Road-maps (PRM) approach [7], which consists in generating random nodes in the configuration space (C-space) in order to build the road-map graph.
- The Cell Decomposition algorithms [12].
- The Rapidly-exploring Random Trees (RRTs) [13], introduced by S. LaValle in 2001 as an optimisation of the classical Random Trees algorithm.

Algorithm Comparison In robotics, path planning is one of the most difficult tasks in real-time dynamic environments. Among the presented algorithms, APF and its variations adapt well to path planning in dynamically changing environments, where any obstacle entering the C-space generates a new repulsive field that can be taken into account to generate a new path; however, the local minima problem requires the use of alternative algorithms to overcome it. PRM is well known for its ability to find a path without needing to explore the whole C-space, but it is a graph-based algorithm that requires a shortest-path method such as A*. It works well in static environments and can handle changes of the initial and final configurations, but if the objects in the C-space change position, the connections between the nodes must be rebuilt. Some alternatives propose to keep the previously generated nodes, recheck whether they belong to C_free or C_obs, rebuild the graph based on this information, and find a new path. The same holds for cell decomposition methods, where the graph search has to be reconstructed. Nevertheless, these methods have proven to be viable options in real time, capable of adapting to a dynamic environment. Finally, the RRT and RRT* methods and their alternatives are known to be good path planning methods, with the limitations that the generated trees are tied to the initial configuration and that they have high computational demands. The proposed alternatives yield very efficient real-time path planners, but algorithms of this type require a large memory capacity, as the entire tree must be stored at all times, and they only work in bounded environments; unbounded and long-distance environments remain a challenge (a toy illustration of the RRT idea is sketched after this section).

Setup of the experimentation In this section, we present the tools used in the development of the project: the laboratory system, the software used, a description of the system environment, and the laboratory setup.
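As referenced above, the following toy 2-D sketch implements the basic RRT loop (sample, find the nearest node, steer, collision-check, add). It is unrelated to the MoveIt/OMPL planners actually used in the project; only the new point is collision-checked (a real planner also checks the connecting segment), and the obstacle, bounds and step size are invented.

```python
import math
import random

# Toy 2-D world: unit square with one circular obstacle (invented values).
OBSTACLE = ((0.5, 0.5), 0.2)   # (centre, radius)
STEP = 0.05

def collision_free(p):
    (cx, cy), r = OBSTACLE
    return math.hypot(p[0] - cx, p[1] - cy) > r

def steer(a, b, step=STEP):
    """Move from a towards b by at most `step`."""
    d = math.hypot(b[0] - a[0], b[1] - a[1])
    if d <= step:
        return b
    return (a[0] + step * (b[0] - a[0]) / d, a[1] + step * (b[1] - a[1]) / d)

def rrt(start, goal, iters=5000, goal_tol=0.05):
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (random.random(), random.random())
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        new = steer(nearest, sample)
        if collision_free(new):
            nodes.append(new)
            parent[new] = nearest
            if math.dist(new, goal) < goal_tol:
                path = [new]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
    return None   # no path found within the iteration budget

path = rrt((0.05, 0.05), (0.95, 0.95))
print(f"found a path with {len(path)} nodes" if path else "no path")
```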
System Architecture The architecture of MoveIt is based on two main nodes, the node move group and the node planning scene, which is part of the first one. The move group node is responsible for obtaining the parameters, configuration and individual components of the robot model being used, in order to provide the user with services and actions to use on the robot. Collision detection Collision checking in MoveIt is configured within a planning scene using the CollisionWorld object. Collision checking in MoveIt is performed using the Flexible Collision Library (FCL) package -MoveIt's main collision checking library. Kinematics MoveIt uses a plugin infrastructure, specifically designed to allow users to write their own inverse kinematics algorithms. Direct kinematics and Jacobian search are built into the RobotState class itself. The default inverse kinematics plugin for MoveIt is configured using the KDL numerical solver [22] based on Jacobians. This plugin is automatically configured by the MoveIt configuration wizard. ROS-Industrial ROS- Industrial is an open-source project that extends the advanced capabilities of ROS software to industrial hardware and applications. For this project, we used the ROS-Industrial-Universal-Robots metapackage [23], which provides and facilitates the main configuration files for the use of Universal Robots cobots in the ROS environment, providing the different descriptions of the robot, configuration files such as joint boundaries, UR kinematics, etc.. This package also facilitates the use of the robot in MoveIt, providing the setup for its use in simulation or in real implementations. HTC Vive The HTC Vive is a motion tracking system that allows users to be immersed in a VR system [24]. It consists of trackers, which can attach to any rigid object, and work with the VR headset. The tracker creates a wireless connection between the object and the headset and then allows the user to represent the objects movements in a virtual world. Laboratory Setup The laboratory setup consists of a UR5 robotic system and a car chair in a face-to-face configuration ( Figure 2). The location and height of the robot was determined by [4] to be 75 cm above the floor. This position is optimal enough for the robot to reach all the interaction points that the system is interested in reaching. For the user, the VR headset and trackers are attached to the body (the humerus and palms), in order to obtain data and locate the user's location in the VR environment ( Figure 3). 3 Selection of the optimal trajectory planning and its application We present the setup associated with the choice of the optimal trajectory generator available within the MoveIt software and its application for the LobbyBot project. MoveIt Setup The installation of MoveIt consisted of configuring and defining the planning group, as well as making it compatible to work in Gazebo. The start-up phase was very important to analyse the behaviour of the different movement alternatives found in the MoveIt API. For this, it was important to configure the simulation environment in Gazebo so that we could test without compromising the real robot. Planning group The planning group is defined as the group of elements that make up the entire robotic system. These are the UR5 robot, the 6-faced prop and the robot support. These three elements are the ones that the trajectory planning algorithms must consider in order for them to avoid any collision state existing with one of these elements. 
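Besides the links of the planning group, collision checking also considers any objects registered in the planning scene, which is how the car seat or a sphere bounding the user's workspace can be taken into account. A minimal Python sketch with moveit_commander follows; the group name, frame, poses and sizes are placeholders, and add_sphere is assumed to be available (recent MoveIt releases provide it; add_box can substitute otherwise).

```python
#!/usr/bin/env python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("scene_setup")

group = moveit_commander.MoveGroupCommander("manipulator")   # placeholder group name
scene = moveit_commander.PlanningSceneInterface()
rospy.sleep(1.0)   # give the scene interface time to connect

def stamped(x, y, z):
    p = PoseStamped()
    p.header.frame_id = group.get_planning_frame()
    p.pose.position.x, p.pose.position.y, p.pose.position.z = x, y, z
    p.pose.orientation.w = 1.0
    return p

# A box standing in for the car seat (placeholder pose and size).
scene.add_box("car_seat", stamped(0.8, 0.0, 0.4), size=(0.5, 0.5, 0.8))

# A sphere approximating the user's effective workspace (placeholder radius).
scene.add_sphere("user_workspace", stamped(0.8, 0.0, 0.9), radius=0.6)

rospy.loginfo("Known collision objects: %s", scene.get_known_object_names())
```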
The robot support was modelled to match the size of the real system that was optimally defined [4]. For the configuration of the plannig group, MoveIt has an integrated graphical interface to create all the configuration files related to the kinematics, controllers, Semantic Robot Description Format (SRDF) and other files for the usage of the robot in ROS. This interface is called MoveIt Setup Assistant. The MoveIt Setup Assistant creates all the mentioned files based on the robot description given to it, in this case the UR5 robot description files provided by [23] where taken and modified to include the robot support (included in the URDF definition of the robot) and also the mesh file for the prop. User's Model To model the user, a mannequin was defined in a URDF robot model. The main torso of the model is fixed, while the arms are structured as a serial robot with seven revolute joints, where the first three constitute the shoulder, the fourth joint represents the elbow, and the last three revolute joints represent the wrist of the arm. In the model, two small dots have been created in the humerus and palm links, which represent the location of the sensors in the user, as shown in Figure 4. Regarding the movement of the mannequin model, a kinematic model has been developed in parallel to this project in [25], where the connection between the sensor data and the model is defined. This will allow the system to recognise the user's movements and represent it in the simulation 4. Motions To generate collision-free trajectories, the different algorithms implemented in the MoveIt API have been analysed. All the tasks related to planning group movement are handled by the move group class. By specifying the planning group we want to consider, we are able to use all the different functions that the class offers for it, such as getting information about the current values of the joints, the target, configuring the planning algorithm we intend to use, and performing the planning and execution of the movements in the environment. Types of movement The move group class has the ability to perform path planning through different types of movements. These options can be chosen according to the nature of the task. For example, we can define a given pose in workspace or a desired joint value as the goal. Given the nature of the system, we will work with joint value goals, as we hope to achieve the different points in a specific configuration that provides a higher level of safety to the user (elbow up configuration for the UR5). Another important feature is the ability to specify whether one wants to achieve each of the requested objectives or not. As the implementation will receive constantly changing goals, the best implementation is to plan and move towards said goal by allowing the system to replan if the goal changes, meaning that we do not need to reach the initial goal. To do this, the move group class relies on the move group.execute(my plan) function to strictly reach the goal and on the move group.asyncExecute(my plan) function to execute the planned path with the possibility of re-planning during this execution. In Figure 5, two trajectories are calculated from an initial configuration, to an intermediate goal, and then to a final goal. In this case, by using the function move group.execute(my plan), we ensure that the robot will completely execute each of the trajectories and achieve both goals. This is illustrated in Figure 5, where the speeds drop to zero as the robot comes to a stop. 
In the case of figure 6, we have calculated the same two trajectories as before, but using the function move group.asyncExecute(my plane), which allows replanning during the execution of the first plane. In this case, in figure 6(a), we can see the two plans one after the other, while in figure 6(b), we show the representation of the segment that was not executed from the first plan, because a replanning scenario was set up. In this case, the current positions of the first plan were taken as the initial positions for the second plan, resulting in Figure 6(c), showing the two plans that were executed. Algorithm Selection Another parameter to select was the planning algorithm that best suited the task. As mentioned earlier, MoveIt has several built-in path planning algorithms that can be used. In order to determine the best option, we went through all the available options and performed a planning task to a desired target configuration, measuring the time required for each algorithm and recording the data. We ran each of the 12 available planning algorithms five times through nine different paths. We then took the average time it took them to find a solution, to simplify the trajectory (only for the algorithms that had this feature) and calculated the total average time. Using this data, we were able to select the algorithms that performed best with the shortest planning times (Figure 7). After performing these calculations, given the large difference in planning times for some of the algorithms, we select the six best algorithms to compare them on 12 trajectories (Figure 8). Another analysis that allowed us to select the algorithm which behaved the best for the implementation, was to perform an analysis on the generated trajectories with each one of the algorithms for a fixed task. Based on the six best algorithms from the previous analysis as a starting constraint, we computed the average execution time and via-points number for a set of trajectories. The BiTRRT algorithm wins for both comparisons. This analysis was performed for the same trajectories as in the previous graphs for a total of ten iterations for each algorithm, but instead of considering only the computation time (Figure 9), we also took into account the amount of via-points generated ( Figure 10). Between each via-point, a linear interpolation is performed in the joint space. For theses trajectories, the mannequin was placed in its seat so that it was avoided in the calculation, in order to test each algorithm's ability to plan around it. This also allowed us to see how consistent the behaviour of each algorithm was. Unity's Virtual Environment In parallel to the development of the project, and to better explain the developed implementation, it is important to specify how it will fit into the project. The system will receive a desired goal configuration which will be the q goal for the planning algorithm, from the current q init configuration. This goal selection is done in Unity by a Point selection algorithm which determines the interaction point the user intends to reach [26] (Figure 11). Planning Scene For the definition of the planning environment and scene, MoveIt has instances that allow the manipulation and monitoring of the scene to keep it up to date. These instances are : The last of these instances is absolutely necessary to perform the collision check, as we need to ensure that the scene being processed is the last one available. 
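The synchronous/asynchronous execution behaviour discussed above has a direct counterpart in the Python interface, where go(wait=True) blocks until the goal is reached while go(wait=False) returns immediately so that the motion can be pre-empted and replanned, roughly analogous to execute() versus asyncExecute() in the C++ API. The sketch below is schematic; the group name and joint targets are placeholders.

```python
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("motion_demo")
group = moveit_commander.MoveGroupCommander("manipulator")   # placeholder group name

intermediate = [0.0, -1.2, 1.4, -1.8, -1.57, 0.0]   # placeholder joint goals (rad)
final_goal   = [0.8, -1.0, 1.2, -1.6, -1.57, 0.3]

# Synchronous behaviour: the goal is fully reached (velocities drop to zero)
# before the next plan is started.
group.set_joint_value_target(intermediate)
group.go(wait=True)

# Asynchronous behaviour: start moving, then pre-empt and replan towards a new
# goal from wherever the robot currently is.
group.set_joint_value_target(final_goal)
group.go(wait=False)
rospy.sleep(1.0)              # the goal changes while the robot is still moving
group.stop()                  # pre-empt the current execution
group.set_joint_value_target(intermediate)
group.go(wait=True)           # replan and execute from the current state

moveit_commander.roscpp_shutdown()
```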
Mobility Schemes Based on the Unity information, two different motion or mobility schemes and scenarios have been proposed, depending on the nature of the task to achieve at the moment: one for which no interaction with the user is required, and another for when it is. These two scenarios have their own environment to consider and present, in general, two different behaviours.

Movement outside the user's workspace The first scenario is based on [26], where a distinction between velocity zones is made and where a plane divides the environment (the space with the user and the space where the user cannot go). Based on the same idea, we represented the effective working space of the mannequin as a sphere surrounding the model (Figure 12). The mobility scheme consists of alternating between different "safe positions". These positions are so called because they are points out of reach of the user, which means that there is no need to constrain the robot's speeds. Therefore, the movement from one point to another only has to take into account the defined sphere, as we do not want to "collide" with it. Following this idea, we computed all existing trajectories between the different safe positions and stored them in a data file. This allows us to perform offline path planning; at runtime, depending on the initial and desired goal, we can access the pre-calculated paths and execute them directly, eliminating the computational time that would otherwise be required by online planning. Algorithm 1 handles the storage of the trajectories, and the second part of the scheme consists of loading the pre-registered data and using them on demand (Algorithm 2); we simply wait until we know the position we want to reach. [Algorithm 2, trajectory upload and execution, is only partially recoverable from the source: given a desired frame des_frame, a home pose, the number of stored elements noe, the initial positions init_pos_id, the goal positions goal_pos_id and the planned trajectories plan, the stored data are first extracted from the file in a loop over the noe elements; the execution loop then waits for a desired position and plays back the matching pre-computed trajectory.] Unlike [26], we have used a spherical surface to divide the two areas of the space instead of a plane, as this gives the planning group greater flexibility to consider more configurations when calculating the path between points and yields more feasible trajectories for the robot.

Movement inside the user's workspace The second scenario covers movement inside the user's workspace, which means that the movements have to take into account the user's model in order to avoid any collision with him. We also have to take into account that the speed of these movements must be limited in order to ensure safety. Unlike the first scheme, in this case the environment consists of moving obstacles, which requires constant updating of the scene and constant tracking of the objects in it (Figure 13). For this reason, we used the images of the mannequin model to obtain its current positions and orientations in order to track its movement and link it to the objects created in the scene. We also need to be able to determine whether a computed plan will collide or not, which requires taking several aspects into account. First, based on the calculated path to the desired goal, we check whether the path remains valid during the execution of the plan. To do so, we check, for all calculated via-points of the path, whether the respective configurations are currently colliding with any other object present in the scene. If there are no collisions, we continue the execution. If a collision is present in any of the remaining states of the path, we instruct the robot to stop the execution of the computed path and replan it based on the updated scene information (a schematic sketch of this check follows below).
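As referenced above, one way to realise this validity check in Python is to query move_group's check_state_validity service for each remaining waypoint of the active plan and to pre-empt the execution as soon as a waypoint is no longer valid against the current planning scene. The sketch is schematic: the group name and goal are placeholders, and the handling of "remaining" waypoints is simplified.

```python
import sys
import rospy
import moveit_commander
from moveit_msgs.srv import GetStateValidity, GetStateValidityRequest
from moveit_msgs.msg import RobotState

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("validity_watchdog")
group = moveit_commander.MoveGroupCommander("manipulator")   # placeholder group name

rospy.wait_for_service("check_state_validity")
check = rospy.ServiceProxy("check_state_validity", GetStateValidity)

def waypoint_still_valid(plan, index):
    """Ask move_group whether waypoint `index` of `plan` is collision-free
    against the current planning scene (including the tracked mannequin)."""
    req = GetStateValidityRequest()
    req.group_name = group.get_name()
    state = RobotState()
    state.joint_state.name = plan.joint_trajectory.joint_names
    state.joint_state.position = plan.joint_trajectory.points[index].positions
    req.robot_state = state
    return check(req).valid

target = [0.8, -1.0, 1.2, -1.6, -1.57, 0.3]   # placeholder joint goal (rad)
group.set_joint_value_target(target)
result = group.plan()
# MoveIt on ROS Noetic returns (success, trajectory, time, error_code);
# older releases return the RobotTrajectory directly.
plan = result[1] if isinstance(result, tuple) else result

group.execute(plan, wait=False)
for i in range(len(plan.joint_trajectory.points)):
    if not waypoint_still_valid(plan, i):
        group.stop()                            # pre-empt the colliding execution
        group.set_joint_value_target(target)    # replan against the updated scene
        group.go(wait=True)
        break
```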
To test our framework, we performed an initial trajectory planning. Then, during the execution, we created an obstacle. By checking the validity of the trajectory, we are able to detect that the object is in collision with the planned trajectory. We then instruct the robot to stop the current execution and replan towards the same goal, taking into account the updated planning scene. This work is intended to be extrapolated to work according to the size of the mannequin. Thus, we can take into account the user moving in the environment as an obstacle to be avoided (Figure 14).

Conclusions In this paper, we have presented motion generation algorithms that can be used by a cobot to create an intermittent contact interface. A framework was presented including a UR5 cobot, ROS nodes, HTC Vive sensors and a car chair. Taking into account the objects present in the environment, a comparison of trajectory planning algorithms was presented, and the selected algorithm was then used in two examples. An experimental validation is in progress and will be presented in the final version of the paper.
A case report of visual outcome in keratoconus with retinitis pigmentosa Background: It is uncommon to see retinitis pigmentosa in keratoconus patients. The main difficulty of visual rehabilitation in this setting is the restricted visual field. We present the treatment and the screening of the visual system homeobox 1 (VSX1) gene in this case. Case presentation: A 24-year-old man with retinitis pigmentosa presented with progressively blurred vision. Slit lamp examination revealed Vogt's striae in both eyes, and corneal topography indicated bilateral keratoconus. We tested 5 exons of the VSX1 gene and did not find a mutation on direct sequencing. To improve visual acuity, we prescribed a keratoconus rigid gas permeable (RGP) contact lens for him with good efficacy. However, lens dislocation occurred occasionally, and he could not easily find the dislocated lens because of his visual field restriction, so he asked for a more stable visual aid. Therefore, we instead prescribed scleral lenses (SL), which were more stable on the ocular surface and led to more stable vision. Visual acuity also improved with SL, but the tolerance time for SL was shorter than that for the keratoconus RGP contact lens. To compare the efficacy of these two lenses, we surveyed quality of life using the National Eye Institute Visual Functioning Questionnaire-25 in three situations: baseline, with the keratoconus RGP contact lens, and with SL. Conclusion: The patient used the two lens types according to his needs, and benefited from vision rehabilitation with both the keratoconus RGP contact lens and SL.

Background Keratoconus is characterized by a cone-shaped protrusion of the anterior corneal surface [1,2], and the characteristic pattern of the corneal architecture is used for evaluating the severity of keratoconus. It is uncommon to see retinitis pigmentosa in keratoconus patients. These two diseases may coexist in Leber congenital amaurosis 3 (LCA), which is diagnosed by clinical features. The manifestations of LCA are various, including poor vision, nystagmus, and multi-systemic involvement such as renal, cardiac, skeletal or neurological anomalies [3,4]. RGP contact lenses and keratoconus RGP contact lenses have been important tools for the visual rehabilitation of patients with keratoconus [5], and there are various materials and designs of keratoconus RGP contact lenses, e.g., Rose K lenses (Menicon Z material from Menicon Co., Ltd, Nagoya, Japan) and HiClear keratoconus RGP contact lenses (Brighten Optix Corp., Taipei, Taiwan). For severe cases, scleral lenses (SL) offer an alternative strategy [6,7].
However, to the best of our knowledge, there have been few reports comparing keratoconus RGP contact lenses and SL in the same patient over a long follow-up period. Herein, we present a case of severe keratoconus with visual rehabilitation through a keratoconus RGP contact lens and SL used alternately over a 2-year follow-up.

Case Presentation A 24-year-old man visited our outpatient department complaining of progressively blurred vision in both eyes. He had been diagnosed with retinitis pigmentosa for many years. Slit lamp examination revealed Vogt's striae, and corneal topography (Oculus Optikgeräte GmbH, Wetzlar, Germany) confirmed bilateral keratoconus. To improve visual acuity, we applied a keratoconus RGP contact lens (Brighten Optix Corp., Taipei, Taiwan) for him. The patient gained visual acuity of 0.3 logMAR in each eye, and binocular visual acuity improved to 0.2 logMAR (Table 1). However, he experienced instability and dislocation of the lens, and he could not successfully retrieve it because of his restricted visual fields. Moreover, he reported two events of lens loss after dislocation. Thus, the patient was subsequently fitted with scleral lenses (SL) (Brighten Optix Corp., Taipei, Taiwan). Visual acuity was also improved (Table 1); however, he complained of intolerance with extended use of SL, with an upper limit of tolerance time of about 4 to 5 hours. In contrast, the longest tolerable time for the keratoconus RGP contact lens was 8 to 9 hours. Depending on his needs, he wore both alternately. Since the condition of the patient was stable, we assessed his visual function with the National Eye Institute Visual Functioning Questionnaire-25 (NEI VFQ-25) in Mandarin, which is his native language and was validated in 2010 [8] (Table 2). The composite NEI VFQ-25 scores were 45.75, 69.46, and 63.38 at baseline, corrected with the keratoconus RGP contact lens, and with SL, respectively (Table 2). The score improved with both types of lens.

Discussion It is uncommon to see retinitis pigmentosa in keratoconus patients. These two diseases may coexist in congenital disorders, for example, LCA [3,4]. In this subject, the clinical presentation was unlikely to be LCA. The etiology of keratoconus is multigenic [9,10], and the VSX1 gene has been reported to be expressed in the retinal layer [11,12], so we tested 5 exons of the VSX1 gene and did not find a mutation on direct sequencing. The role of VSX1 in the pathogenesis of keratoconus is controversial [13,14] and may account for only rare cases [15]. Contact lenses are the primary form of visual correction for patients with keratoconus [16]. In a large lens trial for keratoconus, aspheric RGP lenses were preferred by patients wearing lenses 4-8 hours/day, but Rose K lenses were reported to be worn for more than 8 hours a day [17]. Our patient wore the keratoconus RGP contact lens for 8 to 9 hours a day. In a 6-year cohort study of SL [18], the overall failure rate of SL wear was 27%, mainly because of ocular complications such as intolerance or corneal edema. In our patient, intolerance presented after 4 hours of SL wear. SL are used for severe keratoconus; however, there is no gold standard for grading severe keratoconus. Koppen [19] reported a cutoff value for maximal keratometry of 70 D, and our patient met this criterion in one eye.
In our patient, the NEI VFQ-25 showed that both lenses improved visual function compared with baseline in composite scores, and the keratoconus RGP contact lens performed even better than SL in the general vision score (Table 2). In conclusion, this subject with keratoconus and retinitis pigmentosa achieved good visual rehabilitation with a keratoconus RGP contact lens and SL, and his visual function improved with both visual aids.
Deciphering the Genome of Polyphosphate Accumulating Actinobacterium Microlunatus phosphovorus Polyphosphate accumulating organisms (PAOs) belong mostly to Proteobacteria and Actinobacteria and are quite divergent. Under aerobic conditions, they accumulate intracellular polyphosphate (polyP), while they typically synthesize polyhydroxyalkanoates (PHAs) under anaerobic conditions. Many ecological, physiological, and genomic analyses have been performed with proteobacterial PAOs, but few with actinobacterial PAOs. In this study, the whole genome sequence of an actinobacterial PAO, Microlunatus phosphovorus NM-1T (NBRC 101784T), was determined. The number of genes for polyP metabolism was greater in M. phosphovorus than in other actinobacteria; it possesses genes for four polyP kinases (ppks), two polyP-dependent glucokinases (ppgks), and three phosphate transporters (pits). In contrast, it harbours only a single ppx gene for exopolyphosphatase, although two copies of ppx are generally present in other actinobacteria. Furthermore, M. phosphovorus lacks the phaABC genes for PHA synthesis and the actP gene encoding an acetate/H+ symporter, both of which play crucial roles in anaerobic PHA accumulation in proteobacterial PAOs. Thus, while the general features of M. phosphovorus regarding aerobic polyP accumulation are similar to those of proteobacterial PAOs, its anaerobic polyP use and PHA synthesis appear to be different. Introduction Economically exploitable phosphate rock, the major source of industrial phosphorus, is estimated to be depleted in 50-100 years. 1,2 Nevertheless, wasted phosphorus is rarely reused and, to make matters worse, can induce eutrophication in the surrounding water. Polyphosphate accumulating organisms (PAOs) are expected to help solve these problems. PAOs are frequently found in activated sludges in the enhanced biological phosphate removal (EBPR) process, where they are believed to play a pivotal role in phosphorus removal from the wastewater. 3 The EBPR process is also attracting interest for its potential use in phosphorus recycling. 4,5 In the EBPR process, PAOs take up phosphate into the cells and accumulate it as polyphosphate ( polyP) under aerobic conditions. In addition, under subsequent anaerobic conditions, PAOs in such sludges are thought to accumulate polyhydroxyalkanoates (PHAs), at the expense of polyP hydrolysis, 3 using volatile fatty acids such as acetate as substrates. To date, the features of PAOs have been studied mainly in bacterial communities, 6 -8 because few such microorganisms have been successfully isolated from EBPR sludge. Proteobacteria and/or actinobacteria are frequently observed in activated EBPR sludges, and several proteobacteria, e.g. Acinetobacter spp. and Lampropedia spp., have been isolated, although their metabolic or morphological characteristics differ from those typically observed in activated sludges. 3 Besides these bacterial isolates, an unculturable proteobacterium, 'Candidatus Accumulibacter phosphatis', is regarded as a typical PAO based on its PolyP and PHA accumulation properties. 7 Because 'Ca. Accumulibacter phosphatis' can predominate in an EBPR community fed with acetate or propionate, such sludges have been used for metagenomic and metaproteomic analyses. 9 -11 In this way, the molecular information of polyP accumulating proteobacteria has gradually been accumulating. Less is known about the cellular and molecular features of actinobacterial PAOs. 
Two species, Microlunatus phosphovorus and Tetrasphaera elongata, have been isolated from EBPR-activated sludges as candidate PAOs. 12 -15 These actinobacteria aerobically accumulate polyP in their cells, as do proteobacterial PAOs. Microlunatus phosphovorus, in particular, accumulates substantially more polyP (.10% of cell mass as phosphorus on a dry weight basis) than T. elongata (,1%) and other proteobacterial PAOs. Unlike proteobacterial PAOs, these candidates do not release phosphate when fed with acetate. 12,15,16 Instead, glucose and mixed substrates (acetate, casamino acids, and yeast extract) induce phosphate release in M. phosphovorus and T. elongata, respectively. Although PHA synthesis in M. phosphovorus had not been observed for over a decade from its first isolation, Aker et al. 17 recently demonstrated the presence of PHA in M. phosphovorus cells using PHA staining and gas chromatography, suggesting the existence of some metabolic systems for PHA production. In the present study, we determined the complete nucleotide sequence of the M. phosphovorus NM-1 T (NBRC 101784 T ) genome. We put particular focus on (i) polyP synthesis and degradation, (ii) polyP transport, (iii) retention of polyP granules (volutin granules), and (iv) PHA synthesis, which are all considered to be essential traits of a typical PAO. Very recently, the whole genome sequence of 'Ca. Accumulibacter phosphatis' was made available (INSD accession number: CP001715). We discuss similarities and differences between these two genome-sequenced organisms. This report is the first detailed analysis of the whole genome sequences of PAOs. Genome sequencing, assembly, and gap closure The complete genome sequence of M. phosphovorus NM-1 T (NBRC 101784 T ) was determined using a conventional whole genome shotgun strategy, as described previously. 18 Shotgun libraries with average insert sizes of 1.5 and 6.0 kb were constructed in pUC118 vector (TaKaRa, Kyoto, Japan), and a fosmid library with an average insert size of 35 kb was constructed in pCC1FOS vector. Plasmid and fosmid clones were end-sequenced using dyeterminator chemistry on an ABI 3730xl DNA Analyzer (Applied Biosystems, Foster City, CA, USA). Raw sequence data corresponding to the 8.6-fold coverage were assembled using PHRED/PHRAP/ CONSED software. 19 Gaps between assembled sequences were closed either by primer walking on gap-spanning library clones or by the transposonmediated random insertion method on bridging fosmid clones with a Template Generation System II Kit (Finnzymes, Vantaa, Finland). Gene identification and annotation Putative non-translated genes were identified using Rfam, 20 tRNAscan-SE, 21 and ARAGORN 22 programs. To predict protein-coding genes, GLIMMER3 23 was used. The initial set of open reading frames (ORFs) was manually selected from the predictions in combination with the similarity search results. Similarity searches against the Uniprot, 24 Interpro, 25 and HAMAP 26 databases were used for the functional prediction of ORFs. The KEGG 27 database was used for pathway reconstruction. Data availability The nucleotide sequence of M. phosphovorus NM-1 T (NBRC 101784 T ) has been deposited in the INSD database with an accession number AP012204. The annotated genome sequence is also available in the genome database DOGAN (http://www.bio.nite.go. jp/dogan/project/view/MP1). Results and discussion 3.1. General information 3.1.1. 
3.1.1. Genome overview
The genome consisted of a single circular chromosome of 5 683 123 bp with an average G+C content of 67.3% (Fig. 1). No plasmid DNA sequence was detected. The chromosome was predicted to encode 5360 protein-coding genes, 46 transfer RNA genes, and a set of ribosomal RNA genes (16S, 23S, and 5S). Of the 5360 predicted protein-coding genes, 4887 (91%) were orthologous.
3.1.2. Adaptation in anaerobic conditions
Microlunatus phosphovorus NM-1T is an obligately aerobic chemoorganotroph but can grow anaerobically if nitrate is added to the medium as an electron acceptor. 13 Consistent with this observation, we found a gene cluster that putatively encodes subunits of the membrane-bound respiratory nitrate reductase, NarG (MLP_46640), NarH (MLP_46650), NarJ (MLP_46660), and NarI (MLP_46670), linked to a NarK-type nitrate/nitrite transporter gene (MLP_46680). In addition, we found a gene that encodes another NarK-type nitrate/nitrite transporter (MLP_35250) located downstream of putative assimilatory nitrite reductase genes (MLP_35260 and MLP_35270). These components together may constitute a system for nitrate respiration in anaerobic conditions as well as for nitrogen assimilation. Under anaerobic conditions, M. phosphovorus NM-1T was reported to generate acetate from glucose, suggesting the presence of an acetate fermentation system. 28 In support of previous experimental data, 28 we found putative genes, pta-ackA (MLP_01330 and MLP_01320), encoding phosphate acetyltransferase and acetate kinase, which usually ferment acetate in bacteria. Furthermore, putative pflBA genes (MLP_33410 and MLP_33420), which are necessary for the synthesis and activation of pyruvate formate lyase, a central enzyme in anaerobic glucose metabolism, were found in the M. phosphovorus genome. Because pflBA genes are present in few lineages of actinobacteria, and because an IS-like sequence, a putative transposase gene flanked by 17-bp inverted repeat sequences, was present just upstream of the pflBA genes, pflBA could have been acquired by horizontal gene transfer.
Substrates for growth
Metabolic pathways reconstructed in this study were roughly in accordance with previous studies on nutrient use in M. phosphovorus NM-1T (Supplementary Table S3). 13,29-31 However, while lactose and malate were predicted to be used as nutrients based on the genome information, the growth of M. phosphovorus NM-1T on these nutrients has not been observed. On the other hand, while growth has been experimentally shown on mannose, galactose, N-acetyl-D-glucosamine, sorbose, salicin, p-arbutin, dulcitol, and adonitol, metabolic pathways for the use of these compounds were either missing or incomplete. Other substrates that have not yet been experimentally investigated, such as formate and butyrate, might also be used based on the predicted pathways. Butyrate metabolism may be advantageous for PAOs because butyrate, as well as acetate, could be a major fatty acid available in sewage water, and butyrate was consumed when nitrate was added under anaerobic conditions in an activated sludge sample. 32
Predicted features as a PAO
3.2.1. PolyP metabolism
As described above, M. phosphovorus NM-1T aerobically accumulates polyP in its cells, and phosphorus can exceed 10% of the cell mass on a dry weight basis. 13 In addition, the rate of phosphorus release under anaerobic conditions is significantly higher in M. phosphovorus NM-1T than in any other isolated PAO candidate. 12,28,33-35
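As an illustrative aside to the genome statistics reported in the overview above (a 5 683 123 bp chromosome with an average G+C content of 67.3%), the short sketch below shows how such a genome-wide G+C figure can be computed from an assembled FASTA file. This is not part of the authors' annotation pipeline; the file name is a placeholder, and the real assembly is available under INSD accession AP012204.

```python
# Illustrative sketch (not from the paper): deriving a genome-wide G+C content
# figure, such as the 67.3% reported above, from an assembled FASTA sequence.

def gc_content(fasta_path: str) -> float:
    """Return the G+C fraction over all sequence lines in a FASTA file."""
    gc = total = 0
    with open(fasta_path) as handle:
        for line in handle:
            if line.startswith(">"):      # skip FASTA header lines
                continue
            seq = line.strip().upper()
            gc += seq.count("G") + seq.count("C")
            total += len(seq)             # ambiguous bases are counted in the total
    return gc / total if total else 0.0

# Hypothetical file name used purely for illustration.
print(f"G+C content: {gc_content('M_phosphovorus_NM1.fasta'):.1%}")
```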
The following sections describe the gene products that were implicated in the accumulation, degradation, and high turnover rate of polyP in M. phosphovorus.
3.2.1.1. PolyP kinase
PolyP kinase (PPK) catalyses the transfer of phosphate between nucleoside phosphates and polyP. There are three main subtypes of PPK, PPK1, PPK2, and polyP-dependent AMP phosphotransferase (PAP), and they are present in a wide variety of bacterial species. 36 Even though the reaction catalysed by PPK is reversible, PPK1 favours polyP synthesis with nucleoside triphosphates as phosphate donors, and thus PPK1 is recognized to play a principal role in polyP accumulation. On the other hand, PAP favours polyP hydrolysis. The PPK2 subtype also catalyses polyP hydrolysis, but the dominance of either kinase or phosphatase activity varies in different actinobacterial species; Corynebacterium glutamicum PPK2 is a polyP kinase and Mycobacterium tuberculosis PPK2 is a polyP phosphatase. 37-39 Four putative PPK genes were identified in the M. phosphovorus genome; a single ppk1 (MLP_47700) and three ppk2 (MLP_05750, MLP_50300, and MLP_23310) homologues. The number of ppk homologues in M. phosphovorus was relatively large; usually 1-4 ppk homologues exist in an actinobacterial genome (Table 1). Based on the similarity of deduced amino acid sequences, one of the M. phosphovorus PPK2s (MLP_05750) was of the C. glutamicum type (63% identity), whereas the other (MLP_50300) was of the Myc. tuberculosis type (78% identity; Fig. 2a). The third ppk2 homologue (MLP_23310) was relatively similar to both ppk2 and pap, but was located in a distinct cluster of undetermined function (Fig. 2a). The presence of multiple PPK genes in M. phosphovorus may have favoured the high polyP turnover rate and accumulating ability, although which of the four PPK homologues are polyP-synthetic or degradative has yet to be clearly distinguished. The possible importance of multiple PPK genes was supported by the genome information of 'Ca. Accumulibacter phosphatis'. 'Ca. Accumulibacter phosphatis' harbours multiple putative PPK genes, a ppk1, four ppk2, and a pap, while proteobacterial species usually contain only one or two ppk subtypes.
Exopolyphosphatase
Exopolyphosphatase (PPX) mediates the hydrolysis of the terminal phosphate of polyP. 36 Two types of PPX, PPX1 and PPX2, are present in a wide variety of bacteria and archaea. 36 Although most actinobacterial species harbour both types of PPX, only the ppx2 homologue (MLP_44770) was found in the M. phosphovorus genome (Fig. 2b). Propionibacterium acnes, which also belongs to the family Propionibacteriaceae, does not harbour ppx1 either. In C. glutamicum, which was recently shown to form volutin granules, mutational analysis showed that the amount of polyP in the cells increased when one of these genes was mutated. 40 In contrast to M. phosphovorus, 'Ca. Accumulibacter phosphatis', the proteobacterial PAO, has two ppx homologues in its genome.
PolyP-dependent glucokinase
Glucokinase is a key enzyme that catalyses phosphorylation of glucose, the first reaction of glycolysis. Glucokinase is widely present in both prokaryotes and eukaryotes. 41 In almost all organisms, glucokinase uses ATP as the sole phosphoryl donor. In some actinobacterial species, however, a paralogue of glucokinase, polyP-dependent glucokinase (PPGK), has been identified which uses polyP as the phosphoryl donor as well as ATP. 42-46
In C. glutamicum, western blot analysis showed that putative PPGK was localized in the volutin granules, suggesting that PPGK uses polyP from those granules. 47 A PPGK has already been identified in M. phosphovorus NM-1T. 48 That report showed that the PPGK was strictly polyP-specific, unlike other actinobacterial PPGKs, suggesting the importance of polyP as an energy source for M. phosphovorus. In the present study, three genes putatively encoding glucokinase were identified in the M. phosphovorus genome; one was a putative ATP-dependent glk (MLP_41670) and another (MLP_05430) was the ppgK reported previously. 48 The third gene (MLP_26610) was inferred to also be ppgK by molecular phylogenetic analysis (Fig. 2c). Internal deletions characteristic of PPGKs, as well as other functional motifs including the possible polyP-binding site, were observed in the deduced amino acid sequence of MLP_26610 (Supplementary Fig. S1), suggesting that MLP_26610 is another ppgK orthologue (hereafter MLP_05430 and MLP_26610 are designated ppgK1 and ppgK2, respectively). This is the first report of two ppgK orthologues being present in a single organism (Table 1). In the presence of glucose, M. phosphovorus may use the dual ppgKs for efficient glycolysis by consuming intracellular polyP, which is then followed by acetate fermentation (see the 'Adaptation in anaerobic conditions' section) to adapt to anaerobic environments. This notion agrees well with the observation that adding glucose to a pure culture of M. phosphovorus NM-1T under anaerobic conditions resulted in a rapid release of Pi from the cells. 12,16
3.2.2. Phosphate transport systems
For rapid polyP metabolism, M. phosphovorus must have efficient phosphate uptake and release. In bacteria, phosphate uptake commonly occurs via the phosphate-specific transport (Pst) and phosphate inorganic transport (Pit) systems. 49 The Pst is an ATP-binding cassette (ABC) transporter, encoded by the gene cluster pstSCAB, for active Pi uptake, while the Pit is a symporter of divalent metal-chelated phosphate (MeHPO4) and H+. In some Gram-negative bacteria, another ABC transporter, PhnCDE, has been identified. 50-52 Recently, the PhnCDE transport system was also identified in an actinobacterium, Mycobacterium smegmatis, 53 although the system is not commonly present in this phylum. A putative pstSCAB (MLP_47720, MLP_47730, MLP_47740, and MLP_47750) and three pits (MLP_00530, MLP_29830, and MLP_51060) were identified in the M. phosphovorus genome. Notably, the number of pit genes in M. phosphovorus was among the largest known in an actinobacterial species (Table 1). Based on the molecular phylogenetic analysis, MLP_29830 and MLP_51060 were most homologous to those in actinobacteria (Fig. 2d). On the other hand, MLP_00530 seemed not to be an actinobacterial gene but rather was similar to proteobacterial genes (Fig. 2d), with a maximum amino acid identity of 48%. This might suggest that MLP_00530 was obtained from another phylum, such as Proteobacteria, although an upstream gene with a putative regulatory function (Pfam: PF01865), which is commonly associated with proteobacterial Pit genes, could not be found in the case of MLP_00530. Thus far, the Pit system has not been much studied in Gram-positive bacteria. Rather, it appears not to be mandatory in the group; pit in Myc. smegmatis was dispensable, with the Pst and Phn systems suggested to compensate for the function of Pit, 54 and some actinobacteria do not harbour pit in their genomes (Table 1).
The reversible Pi transport mediated by Pit without consuming ATP may be favourable to cope with drastic changes in intracellular Pi concentration.
Retention of volutin granules
To accumulate a large amount of intracellular polyP, polyP must be stably maintained as volutin granules. Although molecular mechanisms of volutin granule formation have not been clarified in detail, polyamines were suggested to play a role in polyP accumulation in Escherichia coli. 55 Polyamines mainly consist of putrescine, spermidine, and spermine in eukaryotic cells, whereas spermine is contained in only a few prokaryotic species, including some actinobacteria. 56 Polyamines are aliphatic amines that are highly charged cations under physiological conditions. They are generally recognized as essential for the maintenance of cell growth and macromolecular biosynthesis by interacting with nucleic acids, proteins, and membranes. In addition, Motomura et al. 55 demonstrated that intracellular polyP increased after adding spermidine and putrescine to E. coli cells in which polyamine synthesis genes had been disrupted. They further demonstrated in vitro that adding polyamines to volutin granule-containing solutions increased the retention time of the granules and that spermidine was the best polyamine for stabilizing the granules. Polyamines are synthesized from L-arginine or L-ornithine via reactions catalysed by arginine decarboxylase (SpeA) and agmatine ureohydrolase (SpeB) or ornithine decarboxylase (SpeC) (Fig. 3). The synthesized putrescine is then converted to spermidine and spermine in the reactions catalysed by spermidine synthase (SpeE) and spermine synthase, respectively, although spermine synthase has not been found in prokaryotes thus far. 56 In the M. phosphovorus genome, putative speA (MLP_07520) and speB (MLP_15750) were identified but did not form a gene cluster. Although speB is conserved among actinobacteria, speA or its homologues are rarely identified in actinobacteria. In addition, similarities of M. phosphovorus speA to non-actinobacterial genes are modest, with amino acid identities less than 33%, obscuring the exact origin of this gene. In contrast, the homologue of speE, which is usually present in actinobacteria, was not identified in the M. phosphovorus genome. Busse and Schumann 57 reported that spermidine and spermine were the major polyamines in the cells of M. phosphovorus NM-1T, while only a trace amount of putrescine was found. This result contradicts the prediction from the genomic data, i.e. M. phosphovorus can synthesize only putrescine among the three polyamines, as far as the currently recognized metabolic pathway (Fig. 3) is taken into account. Perhaps spermidine and/or spermine are synthesized via unknown synthases or taken up actively by polyamine transporters. M. phosphovorus harbours eight putative genes whose deduced amino acid sequences contain the amino acid/polyamine transporter I motif (InterPro ID: IPR002293). These transporters may be related to the stabilization of volutin granules, although their substrate specificities could not be clearly assigned.
PHA synthesis
In addition to aerobic polyP accumulation, anaerobic PHA production is a recognized feature of PAOs. 3 Recently, Aker et al. 17 reported the detection in M. phosphovorus NM-1T cells of two PHA species, polyhydroxybutyrate and polyhydroxyvalerate, by Sudan Black B- and Safranin O-staining and gas chromatography methods.
However, the system of PHA production in M. phosphovorus appears to be different from that proposed in proteobacterial PAOs; M. phosphovorus apparently produces PHA under aerobic conditions, whereas proteobacterial PAOs are believed to synthesize PHAs anaerobically. In proteobacterial PAOs, PHA is produced using glycogen/glucose or volatile fatty acids such as acetate as substrates, and the uptake of the acetate is conducted via ActP, a proteobacteria-specific acetate/H+ symporter. Acetate uptake through ActP is hypothesized to be counterbalanced by Pi release via Pit using the proton motive force. 58,59 The intermediate acetyl-CoA is then converted to PHAs by acetyl-CoA acetyltransferase (PhaA), acetoacetyl-CoA reductase (PhaB), and PHA synthase (PhaC) (Fig. 4). 60 This model was further supported in a proteobacterial PAO, 'Ca. Accumulibacter phosphatis', by metagenomic and metaproteomic analyses. 9,11 In contrast, neither actP nor phaABC exists in most actinobacteria, and these genes were not found in the M. phosphovorus genome either. In addition to the Pha system, other pathways derived from the β-oxidation pathway proposed in E. coli and Pseudomonas putida could synthesize PHA (Fig. 4). 61-63 In E. coli, the gene cluster yfcYX is related to the pathway; the encoded YfcX was demonstrated to be a multifunctional protein with enoyl-CoA hydratase, 3-hydroxyacyl-CoA dehydrogenase, and putative 3-hydroxyacyl-CoA epimerase activities, and YfcY to be a β-ketothiolase. In P. putida, (R)-specific enoyl-CoA hydrolase (PhaJ) is involved in the system. Both YfcX and PhaJ result in the synthesis of (R)-3-hydroxyacyl-CoA, the monomer unit of PHAs. In the M. phosphovorus genome, homologues of yfcYX (MLP_23080 and MLP_23090) and phaJ (MLP_12780) were identified, suggesting that these genes might produce PHA in M. phosphovorus rather than the PhaABC system proposed in proteobacterial PAOs. There remains an unresolved issue; both pathways require PhaC for the final polymerization reaction, but the gene is not present in the M. phosphovorus genome. Novel unidentified PHA synthase(s) might exist in M. phosphovorus that enable it to synthesize PHAs via a pathway independent of polyP degradation and distinct from that conventionally proposed in proteobacterial PAOs.
A possible model for an actinobacterial PAO
Based on the genome analysis as described in this report and previous experimental observations, possible pathways of M. phosphovorus that may represent the features of an actinobacterial PAO were summarized in Fig. 5 (upper panels) together with a model proposed in proteobacterial species (lower panels). 4,9,11 Under aerobic conditions, M. phosphovorus takes up Pi through a PstSCAB and multiple Pits. PhnCDE rarely exists in non-proteobacteria and is absent in M. phosphovorus. For Pi uptake via Pits, a proton motive force is required. In 'Ca. Accumulibacter phosphatis', the electron transport chain of aerobic respiration was proposed for that function 11 and may be the primary source of the proton motive force in aerobic M. phosphovorus, as well. The ingested intracellular Pi is then polymerized into polyP by some of the multiple PPK(s) and stored as volutin granules. In M. phosphovorus, PHA may be synthesized through the β-oxidation pathway, whereas proteobacteria degrade PHA and utilize it as an energy source under the aerobic conditions. Under anaerobic conditions, polyP is degraded by other PPK(s), a PPX2 and two PPGKs.
Glucose-6-phosphate generated by PPGKs is possibly fed into glycolysis followed by acetate fermentation, and the NADH produced in the process can be used for nitrate respiration and nitrogen assimilation. Because PPGK has been identified only in actinobacteria thus far, this glycolytic pathway coupled with polyP consumption would be unique within this phylum. The release of Pi through Pit symporters generates a proton motive force. While the proton motive force is thought to be used for acetate uptake via ActP in proteobacterial PAOs, this model cannot be applied to non-proteobacteria, because ActP is present almost exclusively in proteobacteria. Instead, the proton motive force may be used, at least in part, for the symport of nitrate by NarK-type transporters in M. phosphovorus. The mechanism of anaerobic polyP utilization in M. phosphovorus thus appears to be substantially different from that in proteobacterial PAOs, in contrast to the general similarities seen in aerobic polyP accumulation. In proteobacteria, the polyP degradation system is thought to be tightly linked to acetate uptake and the synthesis of PHA as an energy reservoir. The system in M. phosphovorus, however, does not appear to be directly linked to the synthesis of PHA, but may couple with the carbon and energy metabolism necessary for a minimal level of growth under anaerobic conditions. This might be another reason why M. phosphovorus can accumulate much larger amounts of polyP under the alternating aerobic/anaerobic conditions in the EBPR process. Future perspectives In the present study, we found genetic features in the M. phosphovorus genome that could allow it to express phenotypic characteristics of a PAO. Some of these features, such as the high multiplicity of ppk and pit genes, were commonly seen in the genome of a proteobacterial PAO candidate, 'Ca. Accumulibacter phosphatis', supporting the importance of these genes for polyP metabolisms. On the other hand, some other features, such as the presence of duplicated ppgk genes and the unique absence of one type of ppx gene, were specific to the M. phosphovorus genome. Yet to be elucidated, however, is whether other factors, such as expression levels and activities of these gene products or their differential regulation, would also affect the PAO phenotype. Genes possibly related to polyP accumulation were widely dispersed across the genome of M. phosphovorus, and molecular phylogenetic analyses suggested that some had exogenous origins. This tendency was also seen in the 'Ca. Accumulibacter phosphatis' genome (data not shown). In support of the high plasticity of M. phosphovorus genome, the number of genes whose predicted protein product had at least one Pfam domain related to transposase, integrase, or recombinase was highest in M. phosphovorus (99 genes) among the actinobacterial species listed in Table 1. From the aspect of genome evolution, a possible ancestral genetic locus for phosphate metabolism can be seen in the genome of Gemmatimonas aurantiaca (DOGAN database: http://www.bio.nite.go.jp/dogan/project/view/GA1, accession number in INSD: AP009153) which was also isolated from an EBPR-activated sludge. 64 In this organism, all predicted genes related to phosphate metabolism (a pstSCAB, a ppx, a ppk, a pit, and five regulator genes) occur as a single gene cluster. 
Provided that the gene cluster represents an ancestral trait, PAOs may have evolved convergently in a variety of taxa via complex recombinational events including duplication, deletion, and horizontal gene transfer that occurred independently in each lineage. Recently, five species in the genus Microlunatus have newly been isolated; none accumulates polyP. 29 -31,65,66 Genome data for these related species and comparative genomics approaches would provide a clearer picture of the genetic background characteristic of PAOs.
The influence of frailty on perioperative outcomes in patients undergoing surgical resection of liver metastases: a nationwide readmissions database study Background Liver metastases arise frequently from primary colorectal, pancreatic, and breast cancers. Research has highlighted the patient’s frailty status as an important predictor of outcomes, but the literature evaluating the role of frailty in patients with secondary metastatic disease of the liver remains limited. Using predictive analytics, we evaluated the role of frailty in patients who underwent hepatectomy for liver metastases. Methods We used the Nationwide Readmissions Database from 2016-2017 to identify patients who underwent resection of a secondary malignant neoplasm of the liver. Patient frailty was evaluated using the Johns Hopkins Adjusted Clinical Groups (JHACG) frailty-defining diagnosis indicator. Propensity score matching was performed and Mann-Whitney U testing was used to analyze complication rates. Receiver operating characteristic (ROC) curves were created following creation of logistic regression models for predicting discharge disposition. Results Frail patients reported significantly higher rates of nonroutine discharges, longer inpatient stays, greater costs, higher rates of acute infection, posthemorrhagic anemia, urinary tract infection (UTI), deep vein thrombosis (DVT), wound dehiscence and readmission, and greater mortality (P<0.05). Predictive models for patient discharge disposition, DVT and UTI demonstrated that the use of frailty status and age improved the area under the ROC curves significantly compared to models using age alone. Conclusions Frailty was found to be significantly correlated with higher rates of medical complications during inpatient stay following hepatectomy in patients with liver metastasis. The inclusion of patient frailty status in predictive models improved their predictive capacity compared to those using age alone. Introduction Liver metastases are neoplasms that have spread from cancer elsewhere in the body [1], arising most frequently from colorectal, pancreatic, and breast cancers. In fact, 50% of patients with colorectal cancer are diagnosed with liver metastases [2]. The liver is the most common organ affected by metastasis, because of its large blood supply [1,3]. As the incidence of colon cancer continues to rise, it is increasingly important to categorize its association with liver metastases [4]. The median 1-year survival rate of patients with liver metastases (15.1%) is significantly lower than the 1-year survival rate for patients diagnosed with non-hepatic metastases (24.0%). In a study that reviewed 2.4 million patients diagnosed with any type of cancer, 5.14% presented with liver metastases at the time of initial diagnosis. The most frequent primary cancer sources in this study were the pancreas (35.6%) and colonrectum (26.9%) [5]. Despite their advanced cancer stage, most of these patients were asymptomatic, with only some reporting constitutional symptoms [1]. In fact, liver metastases are more common than primary liver cancer in the US, with 5-year survival rates ranging around 25% for those not receiving early surgical intervention [6]. The most common treatment of liver metastases remains surgical resection [7], although these cases are frequently inoperable because of the heavy metastatic burden. Even with resection, the prognosis of this disease remains extremely poor, with recurrence of disease in two thirds of patients [7,8]. 
Frailty is an important factor affecting patients' health outcomes, as it reflects the patients' overall physiological reserve [9]. Frailty is defined as an age-associated condition that reduces the patient's physiologic ability to handle stressors, both chronic and acute [10]. Regarding patients undergoing elective surgery, frailty has been reported to be a more accurate predictor of outcomes than an array of other patient demographics, including age [11]. In fact, frailty has proved to be an independent, more accurate preoperative predictive risk factor, even after adjustment for socioeconomic status, depression, and disability [12]. Because frailty is an important risk factor affecting health outcomes, we aimed to investigate its role in predicting outcomes in patients with liver metastases who underwent surgical resection. We hypothesized that frail patients would have a higher rate of postoperative complications and longer hospital stays. Finally, using statistical modeling and predictive analytics, we investigated the relationship between patient's frailty status and perioperative outcomes. The goals of this study were to help surgeons better identify which patients are metastasectomy candidates, to improve perioperative management for frail patients, and to assist physicians in communicating more accurate prognoses to patients diagnosed with secondary metastatic disease of the liver. Data source In this study, we used the 2016 and 2017 Healthcare Cost and Utilization Project (HCUP) Nationwide Readmissions Database (NRD). The NRD is an annually updated database that contains national information regarding inpatient demographics, diagnoses, procedures and readmissions. Each year of the NRD can be purchased from the HCUP website and is designed to facilitate a nationally-representative analysis of inpatients and readmissions when the appropriate NRD discharge weights are applied. Patient hospital admissions are de-identified and are represented as unique patient linkages to allow for accurate patient tracking throughout the calendar year. Patient diagnoses and procedures of interest for this study were queried using the International Classification of Diseases, Tenth Revision (ICD-10) codes in combination with cost-tocharge ratios. The latter are imputed from national hospitalspecific or hospital-group-averaged all-payer inpatient cost data, which may be used to convert total hospital charges to allpayer inpatient costs. Institutional Review Board approval was not necessary as this study was based on a publicly available de-identified dataset. Patient sample Between 2016 and 2017, we identified a total of 28,781 inpatient admissions with ICD-10 codes for liver resection procedures (ICD-10: 0FT0xZZ, 0FBxxZZ). Within this cohort, appropriate coding was utilized to identify 10,799 (37.5%) patients who underwent a liver resection procedure for liver metastases. Frail patients were identified using the Johns Hopkins Adjusted Clinical Groups (JHACG) frailty-defining diagnosis indicator, which uses 10 categories of ICD-10 codes (malnutrition, dementia, vision impairment, decubitus ulcer, urine control, weight loss, fecal control, social support, difficulty walking, and history of a fall) to predict a patient's frailty status [13]. A patient is deemed categorically frail if at least one of these comorbidities has been discovered. 
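As a minimal sketch of the categorical rule just described (a patient is flagged as frail if at least one of the 10 JHACG frailty-defining diagnosis categories is present), the following snippet illustrates how such a flag might be derived from an admission's diagnosis codes. The ICD-10 code sets shown are placeholders, not the actual JHACG definitions, and the function name is ours.

```python
# Hedged sketch of the categorical frailty indicator described above: an admission
# is flagged frail if at least one frailty-defining diagnosis category is present.
# The ICD-10 code sets below are illustrative placeholders, not the JHACG lists.

FRAILTY_CATEGORIES = {
    "malnutrition": {"E43", "E44"},
    "dementia": {"F03"},
    "vision_impairment": {"H54"},
    "decubitus_ulcer": {"L89"},
    "urine_control": {"R32"},
    "weight_loss": {"R63.4"},
    "fecal_control": {"R15"},
    "social_support": {"Z74"},
    "difficulty_walking": {"R26"},
    "history_of_fall": {"Z91.81"},
}

def is_frail(admission_icd10_codes: set[str]) -> bool:
    """Return True if any frailty-defining category is represented."""
    return any(admission_icd10_codes & codes for codes in FRAILTY_CATEGORIES.values())

# A single qualifying diagnosis is enough to classify the admission as frail.
print(is_frail({"C78.7", "R26"}))   # True  (difficulty walking present)
print(is_frail({"C78.7"}))          # False
```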
Frailty is measured over 5 phenotypic characteristics, including accidental weight loss, tiredness, poor energy expenditure, limited grip strength and/or sluggish walking pace. This measure, which takes into account the decline in a number of physiological systems, was created to help medical professionals identify people more susceptible to suffering negative health effects [13]. Several studies have confirmed the clinical validity of the JHACG frailty-defining diagnosis indicator [13][14][15][16]. Based on the above, the cohort was then subdivided into frail (n=766) and propensity score matched non-frail (n=749) patients. Nearest-neighbor propensity score matching for age, sex, Elixhauser Comorbidity Index (ECI), insurance type, median income by ZIP code, and NRD discharge weighting was performed using the R "MatchIt" algorithm [17]. In this technique, parametric models are chosen based on the minimum "distance" parameter, determined through logistic regression models that minimize the propensity score with no replacement. MatchIt improves parametric statistical models and reduces model dependence by preprocessing data with semi-parametric and non-parametric matching methods. Model balance, defined as the similarity of empirical covariate distributions between the 2 groups undergoing propensity matching, is analyzed and the model with the best balance is selected to ensure the best model fit (Fig. 1). Complications queried for analysis in this study included postoperative infections, acute posthemorrhagic anemia, ileus, wound dehiscence, mortality, readmission rates, urinary tract infection (UTI), pulmonary embolism (PE), deep vein thrombosis (DVT), inpatient length of stay (LOS), costs, and discharge disposition. Nonroutine discharges were defined as discharges to places other than home (e.g., skilled nursing facility, home health care, short-term care facility, etc.) Statistical analysis All statistical analysis was conducted in RStudio (Version 1.3.959). Following propensity score matching, chi-squared tests were performed to evaluate differences between categorical variables. Mann-Whitney U test was performed to evaluate statistically significant differences in continuous data. Continuous variables followed a normal distribution and are thus reported as mean ± standard deviation. Binarized patient complication variables were analyzed using the "Epitools" package, with post hoc receiver operating characteristic (ROC) curves implemented following the creation of logistic regression models for relevant postoperative complications, using both age and frailty status as predictor variables. ROC curves were constructed for outcomes including nonroutine discharge, DVT and UTI, as these complications showed the greatest improvement in predictive power with the addition of frailty when considered against age alone. The area under the curve (AUC) of each ROC was computed and served as a proxy for model performance. DeLong's test for 2 correlated ROC curves was utilized to compare ROC AUCs. All statistical tests were 2-sided, with P<0.05 defined as significant. Demographics The average age of the frail cohort was 61.5±14.2 years and 49.0% were female. The average age of the non-frail cohort was found to be 62.7±13.4 years and 48.6% were female. Because the 2 cohorts were propensity score matched, the age, sex, ECI, insurance type and median income quartile by ZIP code did not differ statistically between the 2 cohorts. 
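To make the modelling comparison described in the statistical analysis above more concrete, the sketch below fits, on simulated data, a logistic model using age alone and one using age plus frailty status, and compares their ROC AUCs. This is a toy reconstruction rather than the authors' analysis; the simulated effect sizes are arbitrary, and the formal AUC comparison in the paper used DeLong's test, which is omitted here.

```python
# Toy illustration (simulated data) of the comparison described above:
# logistic regression with age alone vs. age + frailty, compared by ROC AUC.
# In-sample AUCs only; DeLong's test for comparing correlated AUCs is omitted.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(62, 14, n)
frail = rng.binomial(1, 0.5, n)

# Simulate an outcome (e.g., nonroutine discharge) influenced by both predictors.
logit = -4.0 + 0.03 * age + 1.2 * frail
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_age = age.reshape(-1, 1)
X_both = np.column_stack([age, frail])

auc_age = roc_auc_score(y, LogisticRegression().fit(X_age, y).predict_proba(X_age)[:, 1])
auc_both = roc_auc_score(y, LogisticRegression().fit(X_both, y).predict_proba(X_both)[:, 1])
print(f"AUC, age alone: {auc_age:.3f} | AUC, age + frailty: {auc_both:.3f}")
```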
No significant differences in hospital size (P=0.63) or teaching status (P=0.66) were found between the 2 cohorts. However, significant differences were found between frail and non-frail patients when comparing discharge disposition (P<0.001) (Table 1).
Predictive models and ROC analysis
Two sets of logistic regression models were developed: the first used age alone as the primary predictor, and the second used patient frailty status and age as the primary predictors. These models were used to assess the predictive capabilities of age and frailty status for nonroutine discharge, DVT and UTI. ROCs were plotted for both the logistic regression models for each outcome (Fig. 2-4). As the figures show, the logistic regression models using frailty and age as primary predictors outperformed the model using age alone. In addition, the AUC of the ROC incorporating frailty was found to be significantly higher when compared to age alone for nonroutine discharge (P=0.017), DVT (P=0.040), and UTI (P=0.040).
Figure 2. Receiver operating characteristic (ROC) plot for prediction of nonroutine discharge status. The black ROC represents the logistic model using age alone, the blue ROC the model using frailty alone, and the red ROC the model using frailty status and age; a noticeable increase in predictive power occurs when frailty is considered jointly with age. AUC, area under the curve.
Figure 3. ROC plot for prediction of postoperative deep vein thrombosis (DVT); legend as in Figure 2.
Figure 4. ROC plot for prediction of postoperative urinary tract infection (UTI); legend as in Figure 2.
Discussion
In this retrospective study of patients treated surgically for secondary metastatic disease of the liver in 2016 and 2017, we investigated the influence of frailty on perioperative complications. Using propensity score matching techniques, we analyzed the association between frailty and complications of interest, while controlling for demographic confounders. Further modeling allowed for the creation of several ROCs for nonroutine discharge, DVT and UTI, which demonstrated that the addition of frailty to age alone within predictive models improved the AUC significantly. This study contributes to the body of work dedicated to improving the postoperative management of patients who undergo surgical intervention for metastasis to the liver, highlighting specific complications that may predominate in frail populations. Over the last several decades, frailty has become a topic of particular interest in hepatobiliary surgery, and has been shown to be highly correlated with rates of postoperative morbidity and mortality [18][19][20].
A 2018 review by Laube et al concluded that frailty may affect 17-43% of patients with advanced liver disease: frail patients who undergo hepatectomy have a higher incidence of postoperative complications, with a longer LOS and greater short-and long-term mortality [18,[21][22][23][24]. In addition, 2 recent 2020 and 2021 studies by Yamada et al demonstrated that elderly frail patients undergoing surgery for hepatocellular carcinoma (HCC) had significantly worse overall and diseasefree survival compared to non-frail patients [19,20]. These findings suggest that, even when controlling for age, elderly patients who meet clinical criteria for frailty continue to have worse perioperative outcomes. In other words, the decreased physiological reserve that defines frailty is poorly captured by age alone; thus, considering age together with patient frailty status may provide a superior predictor of perioperative morbidity. Furthermore, while frailty has been well studied within the field of hepatobiliary surgery, data outlining the influence of frailty in patients with metastatic disease to the liver are still limited. A 2021 study by Tokuda et al used multivariate regression analysis to assess the role of frailty in 29 frail and 58 non-frail patients with primary colorectal cancer (CRC) metastatic to the liver [25]. Their study found that overall and disease-specific survival rates were significantly worse in frail patients, while 21 of 58 patients with disease recurrence were frail patients, representing 72.4% of the frail cohort [25]. Similar findings were also demonstrated in a 2021 study by Dauch et al, who used the modified frailty index and multivariable regression analysis to evaluate the influence of frailty on postoperative outcomes in patients with primary CRC metastatic to the liver [26]. In their study, they found that frail patients had significantly higher rates of minor/major complications, readmissions, unfavorable discharges and mortality, and a longer LOS [26]. While both of these studies contribute important information to the existing literature, our study expands upon these studies in several important ways. First, we used propensity score matching techniques, which have been shown to be more robust in estimating causal effects using observational data compared to multivariate and multivariable analyses [27]. Additionally, our study included all patients with secondary metastatic disease to the liver captured in the NRD, allowing our models to be applicable to all patients with liver metastasis, regardless of the primary origin of the cancer. Because frailty is undeniably associated with worse outcomes, several studies have investigated interventions that may reduce frailty burden and improve frail patients' outcomes. Frailty is associated with a reduced physiologic reserve and resistance to stressors, resulting in vulnerability to adverse outcomes [18,28]. Intuitively, approaches that increase physiological reserve could act as a means of combating frailty. In a 2021 propensity score-matched study by Tsuchihashi et al, patients were started on an exercise regimen the day after HCC resection, and received interventions in the form of physical therapy 5 days per week, ranging through stretching, resistance, balance and aerobic exercises [29]. They found that the patients who completed an in-hospital exercise regimen improved their frailty status and had lower rates of postoperative complications [29]. 
This study suggests that specific in-hospital interventions may prevent the development of frailty-associated illnesses following surgical intervention for liver cancer. However, they are limited by patient compliance and additional studies are necessary to demonstrate the same findings in patients with secondary metastatic disease of the liver. In addition, a broad body of literature has evaluated the influence of nutrition on frailty. Specifically, adequate energy intake, especially protein intake, has been shown to reduce rates of frailty in large population studies and systematic reviews [30,31]. Thus, a combination of adequate preoperative nutrition and postoperative physical therapy may reduce rates of patient frailty, leading to lower perioperative complication rates in patients like those in our cohort. This study has several limitations, including those inherent in retrospective cohort analyses. Namely, the quality of analysis is dependent on the depth and accuracy of patient encounters documented in the NRD, and Berkson's bias is present when working with inpatient databases. Furthermore, this study is limited by its retrospective nature, focusing on a narrow range of time (2016 and 2017 only). However, the choice of dates was due to the implementation of mandatory ICD-10 coding in late 2015, which allowed for more detailed codes to be drawn for analysis. Lastly, the NRD allows for retrospective readmission analysis within one calendar year (January to December). Therefore, additional readmissions not occurring within the same calendar year are not captured and cannot be analyzed using the NRD. Our study suggests that patient frailty status strongly correlates with rates of medical complications, costs, LOS and discharge disposition in patients with secondary metastatic disease of the liver following surgical intervention. Frailty also improved the prediction of nonroutine patient discharges, DVT and UTI, compared to patient age alone, when incorporated into logistic models. Overall, frailty represents a robust predictor of patient outcomes and a better understanding of frailty may aid surgeons' decision-making following surgical intervention for liver metastases. Further research, including multicenter analyses with a large number of participants, is necessary to fully understand the influence of frailty on outcomes in patients with secondary metastatic disease of the liver. 
Summary Box
What is already known:
• Research has highlighted patient frailty status as an important predictor of outcomes
• Frailty is a more accurate preoperative predictive risk factor, even after adjusting for socioeconomic status, depression, and disability
• Frailty in hepatobiliary surgery has been shown to be correlated with rates of postoperative morbidity and mortality
What the new findings are:
• Frailty was found to be significantly correlated with higher rates of medical complications during inpatient stay following hepatectomy in patients with liver metastasis
• Inclusion of patient frailty status in predictive models improved their predictive capacity compared to those using age alone
• Predictive modeling allowed for the creation of several receiver operating characteristic curves for nonroutine discharge, deep vein thrombosis and urinary tract infection, which demonstrated that the addition of frailty to age alone within predictive models improved the area under the curve significantly
A survey of visitors on Swedish livestock farms with reference to the spread of animal diseases Background In addition to livestock movements, other between-farm contacts such as visitors may contribute to the spread of contagious animal diseases. Knowledge about such contacts is essential for contingency planning. Preventive measures, risk-based surveillance and contact tracing may be facilitated if the frequency and type of between-farm contacts can be assessed for different types of farms. The aim of this study was to investigate the frequency and types of visitors on farms with cloven-hoofed animals in Sweden and to analyse whether there were differences in the number of visitors attributable to region, season, and type of herd. Data were collected from Swedish farmers through contact-logs covering two-week periods during four different seasons. Results In total, 482 (32%) farmers filled in the contact log for at least one period and the data represent 18,416 days. The average number of professional and non-professional visitors per day was 0.3 and 0.8, respectively. Whereas the number of professional visitors seemed to increase with increasing herd size, this relation was not seen for non-professional visits. The mean numbers of visitors per day were highest in the summer and in the farm category ‘small mixed farm’. Reports of the visitors’ degree of contact with the animals showed that veterinarians, AI-technicians, animal transporters and neighbours were often in direct contact with the animals or entered the stables and 8.8% of the repairmen were also in direct contact with animals, which was unexpected. In a multivariable analysis, species, herd size and season were significantly associated with the number of professional visitors as well as the number of visitors in direct contact with the animals. Conclusion In conclusion there was a large variation between farms in the number and type of contacts. The number of visitors that may be more likely to spread diseases between farms was associated with animal species and herd size. Background Contagious livestock diseases have a negative impact on production and farm economy as well as animal welfare. Moreover, several of the diseases in livestock are zoonotic and affect human health. There are thus major reasons to prevent and control these diseases, both endemic and exotic diseases. In infectious disease prevention and control, an important key is to understand the contact patterns and thus potential routes of spread between livestock farms within a country. Because one of the most important routes of spread is direct contact between live animals, livestock movements are often registered in central databases [1]. However, many diseases, e.g. foot-and mouth disease, classical swine fever, bovine viral diarrhoea and Aujezsky's disease, can also spread via indirect contacts, such as farm visitors, transports or shared equipment [2]. In contrast to data on livestock trade, these indirect contacts are seldom registered centrally. Assessments of the type and frequency of contacts such as visitors can be used in contingency planning as an indication of what can be expected regarding number of contacts during an outbreak. This information can be relevant for assessing potential spread and when designing forms for contact tracing. The identification of farm characteristics associated with more frequent contacts can also be useful input for prioritizations in contact-tracing and in the design of risk-based surveillance activities. 
The risk of disease spread via visitors can be minimized by preventive biosecurity measures such as use of clean protective clothing and boots (preferably provided by the farmer), cleaning of equipment used on the farm and hand wash [3][4][5][6]. However, from a previous study in Sweden it is clear that farmers perceive the risk of disease introduction as low and are not always motivated to apply biosecurity routines [7]. It has also been shown that there was large variation in the biosecurity routines applied by different types of professional visitors [7]. In order to increase awareness among farmers and veterinarians, as well as other visitors, specific information campaigns related to disease prevention and control can be performed. In such activities, knowledge about the average number and type of visitors in different types of farms can be very useful, as it enables targeting of high risk farm categories and visitors. This knowledge is also important when risks for disease spread through indirect contacts and possible contact patterns are communicated. Furthermore, the expected number of contacts is often needed as input data in mathematical modelling of disease outbreaks and for the highly contagious diseases the indirect contacts are also relevant [8,9]. Such modelling can in turn be used to approximate the extent of an outbreak and to assess possible effects of different disease control interventions. The aim of this study was to investigate the frequency and types of visitors as potential indirect contacts between farms with cloven-hoofed animals in Sweden and to analyse whether there were differences in the number of visitors attributable to region, season, and farm characteristics such as herd size or species present on the farm. Selection of farms and contact log This study was based on data collected through a mailed contact log that was sent to Swedish livestock farmers in 2006 and 2007. The participants were asked to register visitors and other farm contacts daily during four twoweek periods, throughout the different seasons of the year. In total, the contact log was sent out on five occasions, covering all four seasons, i.e. July 2006, November 2006, February 2007, April 2007, and July 2007. The reason for the fifth round sent out in summer 2007 was to ensure that data was obtained across seasons also for these farmers who joined the study in the 2nd round. The data collection was done in parallel with a questionnaire dealing with on-farm biosecurity routines [7], i.e. farmers were asked to respond to both the biosecurity questionnaire and to document contacts in the contact logs. Data from the biosecurity questionnaire regarding animal species present on the farm and herd size were also used in this study. The selection process is described in detail in the cited paper. In summary, a stratified random sample of farmers was selected in five different regions, from the very south to the north of Sweden to capture different geographical density and different predominant production. From each region, approximately 200 cattle farmers and 120 pig farmers with different production systems were selected, as well as 40 sheep farmers and 20 goat farmers (not all regions had this many farmers), resulting in a total of 1498 farmers. The basis for the sample was the official register of animal holdings at the Swedish Board of Agriculture. 
The sample size was a compromise between (i) having enough data to analyse, (ii) expected return rate, (iii) time limitations for data entry, and (iv) number of holdings in the regions. The contact log forms were sent by mail approximately ten days before the start of the period of data collection. Each time an accompanying letter was enclosed, in the first round it described the background of the study and on consecutive occasions it reminded participants of the purpose of the study and encouraged continued participation. Farmers were informed that their replies would be treated anonymously and for each round an instruction on how to fill in the log was included. A response envelope (free of charge) was included and a lottery ticket was enclosed as a sign of gratitude. The study was prospective and participants were asked to record data on a daily basis for the defined period. To avoid retrospective data collection and potential recall bias reminders were therefore not sent. However, unless farmers declined participation, we continued to send contact log forms for the remaining periods if they had responded to at least one previous period. The contact log forms were prospective and designed as a table of the different types of contacts and with one page per day (an English translation of the contact log is available as e-supplementary Additional file 1). The different types of contacts specified were transports, professional visitors, other visitors, livestock, dead-stock, shared equipment and farmers' own visits to other farms. Farmers were asked to indicate the number of visitors of each type and their level of contact, i.e. if the visitor was in direct contact with the livestock, entered the stable or stayed outside the stable, and if they were livestock owners. Before submission to the participants, the contact log was tested on a reference group of veterinarians specialised in disease control and thereafter on six farmers. Background population The animal species of interest in this study were cattle, pigs, sheep and goats. In 2006, there were approximately 25,000 agricultural enterprises with cattle in Sweden and of these, 8027 had cattle for milk production [10]. The average cattle herd size was 64 cattle (or 48 dairy cows, 14 suckler cows). Furthermore, there were 2,414 companies with pigs. The average pig herd size was 116 sows and 495 piglets and pigs for fattening. Moreover, there were 9,152 agricultural enterprises with sheep and of these, one third had <10 ewes and only 14% had >50 ewes [10]. In total, there were approximately 5,500 goats in the whole country. However, information on the number of holdings with goats was not available in the official statistics. For all species, the population is concentrated to the southern parts of the country. Since 2006, the number of pig herds and dairy cattle herds has decreased and the average size of herds has increased [11]. Data management and editing The contact logs were entered into a Microsoft Office Access database by single entry. In the editing of the data, some assumptions were made. In the instructions farmers were asked to use integers when registering the number of contacts and when editing the data "x" or "yes" were constantly interpreted as "1", unless other information indicated that another integer should be used. Furthermore, the data were scrutinised after entry and whenever there were indications of typing errors, data were checked and corrected. 
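To make the herd-size classification described above concrete, the sketch below encodes the stated cut-offs as a small function (cattle: <15 hobby, >150 large; pigs: <20 hobby, >1500 large; sheep and goats: <50 hobby, >300 large; everything in between is medium). The function and variable names are ours; how mixed herds were handled is not specified in the text and is therefore not covered.

```python
# Sketch of the herd-size classification described above, using the thresholds
# stated in the text. Names are illustrative; mixed herds are not handled here.

THRESHOLDS = {  # species: (hobby if below, large if above)
    "cattle": (15, 150),
    "pigs": (20, 1500),
    "sheep_or_goats": (50, 300),
}

def herd_size_class(species: str, n_animals: int) -> str:
    hobby_below, large_above = THRESHOLDS[species]
    if n_animals < hobby_below:
        return "hobby"
    if n_animals > large_above:
        return "large"
    return "medium"

print(herd_size_class("cattle", 12))            # hobby
print(herd_size_class("pigs", 400))             # medium
print(herd_size_class("sheep_or_goats", 350))   # large
```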
For herds where data were available for two summer periods, the first was kept in the dataset while the second was dropped. The parallel questionnaire on biosecurity routines [7] included questions on herd size and species present on the farm, and these data were also used in this study. The categories of species were cattle, swine, sheep or goats, and mixed. A herd was considered mixed if animals of more than one of the other categories were present. For herd size, three classes were created; hobby, medium and large. The aim of the classification was to create groups reflecting different levels of production intensity. This was based on the number of animals on the holding reported in the questionnaire, and the limits were set using the rather rough assumptions that hobby farmers do not earn their living from their livestock production and that large farms will in general need employed staff were used. For cattle and pig farmers, herd sizes <15 cattle and <20 pigs, respectively (corresponding to farms below the 30 th percentile) were classified as hobby and >150 cattle and >1500 pigs, respectively as large (above the 90 th percentile). For sheep and goats, <50 animals were classified as hobby (below the 85 th percentile), and >300 animals as large (above the 98.5 th percentile). Locations of the herd were denoted according to the Nomenclature of Territorial Units for Statistics level 2 which divides Sweden into eight regions [12]. Five of these regions were represented in the study; Övre Norrland, Östra Mellansverige, Småland med öarna, Sydsverige and Västsverige. In the statistical analysis, visitors were categorised into either professional or non-professional: veterinarians, AI technicians, inspectors, transporters, hoof trimmers, repairmen etc. were considered professional visitors, while e.g. visitors on field trips, neighbours and customers in farm shops or Bed & Breakfast enterprises were considered non-professional visitors. Data analysis Descriptive statistics were obtained for the different types of visitors and levels of contact, by species category, herd size, region and season. Furthermore, the proportion of visitors reported to have livestock of their own was calculated. Considering their expected high influence on the risk of disease spread, special attention was given to the number of professional visitors and the number of visitors with direct contacts with the animals. These two outcomes were further investigated using regression models where possible associations between number of visitors per twoweek period and different explanatory variables were analysed. The potential explanatory variables investigated were; species, herd size, region and season. Associations between outcomes and explanatory variables were first investigated by univariable regression. The outcome variables also contained an excess of zeroes and zero-inflated negative binomial regression was therefore chosen. In this type of model, a binary (here logistic) model and a negative binomial model are fit simultaneously to capture both the probability of zero counts and the probability of nonzero counts [13]. Because farmers contributed with several observations (i.e. one observation per season), robust standard errors were applied with clustering on herd level. Potential variables were tested in both the logistic and the negative binomial parts of the model in a stepwise process using backward elimination. The limit for keeping the variable in the model was set to p < 0.10. 
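As an illustrative analogue of the zero-inflated negative binomial model described above, the sketch below fits such a model to simulated visitor counts using Python's statsmodels. The covariates, simulated effects and sample size are arbitrary, and the robust standard errors clustered on herd that the authors applied are not reproduced in this sketch.

```python
# Illustrative analogue (simulated data) of the zero-inflated negative binomial
# model described above: a logit part for excess zeros and a negative binomial
# part for nonzero counts. Herd-level clustering of standard errors is omitted.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(1)
n = 1500
large_herd = rng.binomial(1, 0.3, n)
summer = rng.binomial(1, 0.25, n)

# Simulate visitor counts per two-week period with an excess of zeros.
p_zero = 1 / (1 + np.exp(-(0.5 - 1.0 * large_herd + 0.5 * summer)))
mu = np.exp(0.5 + 0.8 * large_herd)
counts = np.where(rng.random(n) < p_zero, 0, rng.poisson(mu))

X = sm.add_constant(np.column_stack([large_herd]))                 # count part
X_infl = sm.add_constant(np.column_stack([large_herd, summer]))    # zero-inflation part

model = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X_infl, inflation="logit")
result = model.fit(method="bfgs", maxiter=500, disp=False)
print(result.summary())
```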
Biologically relevant interactions between the remaining variables were tested, and interaction terms were kept if significant at the 0.05 level. The fit of the final models was examined by comparing the observed and predicted values for the different covariate patterns.

Software used
Data were entered and stored in Microsoft Office Access 2007 (Microsoft Co., Redmond, Washington, USA) and analysed using Stata Statistical Software: Release 11.2 (StataCorp. 2009, College Station, Texas, USA).

Response rate
Out of the selected farmers, 482 (32%) responded to the contact log on at least one occasion. The numbers of responses per period were as follows: summer '06 n = 427, autumn '06 n = 235, winter '06 n = 289, spring '07 n = 327 and summer '07 n = 241. The number of farmers that sent in one, two, three, four or five contact logs was 85, 76, 93, 137 and 95, respectively. After data cleaning, the responses represented a total of approximately 1,315 two-week periods (18,416 days). The number of responding farmers and the response rate by category of registered species on the holding and by region are shown in Table 1. Reasons for non-response were given by 21% of non-responders [7]. The most important reason for non-response among these farmers was "ceased animal production" (50%). Notably, one reason given by a few farmers was "having too many visits to keep track of".

Descriptive statistics
According to the replies, 45 herds (9.3%) had no visitors at all during any of the two-week periods. There were 111 herds (23.0%) that did not have any professional visitors and 133 herds (27.6%) that did not have any non-professional visitors. On average, the number of visitors per day was 1.1 (range 0-221). The mean number of professional visitors per day was 0.3 (median 0; range 0-18) and the mean number of non-professional visitors per day was 0.8 (median 0; range 0-221). The numbers of professional visitors, non-professional visitors and visitors in direct or indirect contact with animals are given by category of species and herd size in Figure 1a-b. The descriptive statistics indicated differences between daily mean numbers of visitors related to herd size (Figure 1a-b). Whereas the number of professional visitors seemed to increase with herd size, this relation was not seen for non-professional visits. Moreover, there were differences related to the species present on the farm. For example, veterinary visits in cattle herds seemed to increase with herd size, but this tendency was not obvious for pig herds or for sheep herds. The highest mean number (6.5) of non-professional visitors per day was found in the category 'small mixed farm'. When seasons were compared, the average number of visitors was higher in summer. This difference was most obvious for non-professional visitors in mixed herds and herds with sheep or goats (Figure 2). Farmers registered information on the animal ownership of the non-professional visitor for 88% of reported visits (3,696 occasions). However, they often indicated this with an "x" or "yes" instead of the actual number of visitors having livestock, and these data were therefore analysed by visit occasion and type of visitor. The results are shown in Table 2. The largest proportion of visitors reported to own livestock was found in the neighbour category. The locations within the farm where visitors were reported to enter are shown in Table 3. As expected, veterinarians and AI-technicians were often in direct contact with animals.
Animal transporters were also often in direct contact with the animals (39.3%). Among dead-stock collectors, on the other hand, few were in contact with animals (6.1%) or entered the stables (5.1%). Notably, 8.8% of the repairmen were in direct contact with animals, and neighbours were in direct contact or entered the stables at 36.5% of their visits.

Results from multivariable regressions
In the final models, the number of professional visitors as well as the number of visitors in direct contact with animals were significantly associated with species, herd size and season (Tables 4 and 5). For both outcomes, species and herd size were included in the negative binomial part of the model (representing counts of visitors), while species, herd size and season were included in the logistic part of the model (representing the probability of no visits at all). Although an interaction between herd size and species was found, the models were not stable with this interaction term (i.e. it resulted in extreme incidence risk ratios and confidence intervals) and it was therefore excluded from the final models. The geographical differences seen in the univariable analyses were not observed when other risk factors were accounted for. For both professional visitors and visitors in direct contact with animals, visits were more likely in large herds compared with small and medium herds. Among herds reporting these types of visits, there were also more visitors in herds with cattle compared with other species. However, the numbers of visitors in direct contact (i.e. including both professional and non-professional visitors) were higher in hobby farms compared with large and medium-sized farms. In addition, professional visits were less likely in summer compared with spring and autumn.
[Table 1 residue: total 1,498. Caption: Contact-log response rates disaggregated by farm type according to species and by region. *Selection was based on species and region; reported species was not always consistent with the species registered on the farm. These respondents had not used the coded response letter and their selection strata could therefore not be identified.]

Discussion
In general, one of the most important routes of disease spread is considered to be live animal trade, and animal movements between Swedish herds have recently been described [14][15][16]. However, for some highly contagious diseases, indirect contacts are also a potential route of disease transmission. Introduction of animals from other herds can to some extent be avoided by limiting the purchase of animals and instead relying on within-farm recruitment. Professionals visiting the farm, on the other hand, can seldom be totally avoided. The non-professional visits could in theory be avoided, but the benefits of having children and urban people visit farms in order to enable a better understanding of agricultural production would then be lost. Studies to investigate these types of contacts have been done in other countries [17][18][19][20][21]; however, information on indirect contacts between Swedish farms has been missing. The results presented here therefore contribute substantially to the knowledge of what can be expected when it comes to between-herd contacts. Based on the replies, there was a large difference in the number of visitors per farm, and substantial variation within categories of farms was observed. In general, however, species and herd size were significantly associated with the number of professional visits, and this is an expected finding.
Depending on the type of production, different professionals will be needed, and with many animals the frequency of visits will increase. For example, if the herd is large, the probability that one animal in the herd will need veterinary care is higher than if the number of animals is low. The contact pattern observed for veterinary visits can also be explained by other underlying structures. For example, provided that they fulfil requirements regarding education and regular veterinary visits, Swedish pig farmers are generally allowed to keep certain drugs (e.g. antibiotics) on their farm and perform first-line treatment of individual animals themselves. At the time of the study this was not possible for cattle owners, which is one explanation why veterinary visits to pig farms did not increase with herd size to the same extent as visits to cattle farms. Another example of a factor that can influence the frequency of visitors is the production cycle on the farm, which will affect how often animal transporters collect animals at the farm. In comparison, the non-professional visitors did not follow the same pattern as the professional ones. From a contingency planning and information perspective, one important finding was that a number of hobby farms with mixed species had large numbers of non-professional visits. From previous studies it was clear that hobby farmers often had low biosecurity [7]. This category of farmers has also been shown to be overrepresented among farmers who were unaware of an ongoing outbreak [22]. Although non-professional visitors may not be as important for disease spread as the professionals, who tend to visit one farm after the other, it is clear that low biosecurity and unawareness in combination with a large number of contacts may present a high risk. Even if only a small proportion of these visitors are in contact with other farms, the actual number may be significant when the total number of visitors is high. With hundreds of visitors per week, tracing of contacts during outbreaks may also be extremely time-consuming and difficult. Thus, information about preventive biosecurity measures is crucial on such farms. Not all visitors pose an equal risk, and focus could be on hygiene measures related to visitors in direct contact with animals, especially if they are livestock owners. New legislation coming into force in Sweden in September 2013 establishes the farmers' responsibility for biosecurity related to farm visits [23]. As part of implementing the new rules, these results provide important information when communicating the risk of disease transmission through visitors to farmers. Further, the findings are relevant for strategies on how to come into contact with visitors in case of a disease outbreak. The study has identified that the number of people that would need to be reached can be very high and that many of them may not be part of the farming community, i.e. they will probably not be reached through the farmers' press or information sent to farmers. The results also highlight the importance of asking the farmer, at an early stage of contact tracing, whether they have many visitors, e.g. due to hosting field trips or having construction workers or repairmen at the farm. Another important finding is the proportion of visitors of different categories that were reported to be in direct contact with the animals or to enter the stable.
These findings also need to be seen in the light of previous findings, where the use of protective clothing was examined [7]. For example, salesmen and repairmen were reported to have poor use of protective clothing, and many farmers did not require such usage, whereas this study found that one fourth of the salesmen entered the stables and almost nine percent of the repairmen were in direct contact with animals. Many visitors within these categories of professionals may not have an education related to animal husbandry, and there is a risk that they do not realise their potential role in spreading disease. These results should be considered in the design of preventive biosecurity programmes or information campaigns during disease outbreaks, where it is important not to forget this category of visitors. There is a need to communicate, both to the farmer and to the visitors, the risk of disease spread through indirect contact. These results have therefore been forwarded to the Swedish Animal Health Services and the Swedish Dairy Association, which are currently working on a new farm biosecurity programme. It is often assumed that farms in the northern parts of Sweden pose a lower risk of disease introduction as they have fewer contacts. This study demonstrates that farm characteristics were more important than geographical region and that, when implementing control measures, region should not be considered a primary factor. From previous outbreak investigations, it is known that it may be difficult for farmers to recall detailed information about events such as farm visits. Farmers have also been surprised when they have realised the actual number of contacts they have had. Making a single assessment at one point in time can thus lead to underestimation of the number of contacts. In order to avoid recall bias and underestimation of contacts, a prospective contact log was chosen. Although the aim was to make data registration as simple as possible, participation in the study was a considerable workload for the farmers. In spite of this, 32% of the invited farmers chose to participate in the study. Some farmers responded to only one period; we did not investigate the reasons for this and can only speculate why. Some might have found the questionnaire too burdensome, and since there is a rapid structural change in Swedish agriculture with a decreasing number of farms, it is probable that some of them quit farming during the study period. There were also farmers that responded at the start and the end of the study period but missed one or two periods in the middle. Although other information would have been interesting to include in the contact log, we tried to minimise the amount of data to be collected by the farmer and did not ask about the duration of contact, biosecurity routines applied during the specific visit, or the origin or destination of the contact. Data on distances are planned to be collected in a future study focusing on the routes travelled by professionals. Compared with similar studies in Switzerland and New Zealand, in which farmers were also asked to register contacts during two- or three-week periods, the response rate in this study lies between the two (22% and 43%); however, in the New Zealand study the participants were recruited through telephone calls [20,24]. The response rate was even higher (70% when non-eligible farms had been excluded) in a corresponding Dutch study where farmers were recruited by letters and telephone calls from their local veterinarians [19].
In recent European studies in the UK and Belgium, data collection was based on estimates made at one point in time and farmers were not asked to register contacts continuously, so the response rates are not directly comparable [17,18]. As already concluded by Ribbens et al. [18], the results of these studies are not straightforward to compare either, because they focus on different groups of farmers and because the methods for data collection have differed between countries. However, some findings are worth mentioning. The large variation between farms in the number of contacts, and its association with herd size, was also observed in Belgium, California, New Zealand and the United Kingdom [17,18,20,21]. In the Belgian study it was observed that professional visitors entered the stable more often than non-professionals [18], and when milk trucks (which were not relevant in the Belgian study, which focused on pigs) were removed from the Swedish data, the proportion of professionals entering stables was clearly higher than that of non-professionals in Sweden as well. In some parts, the study from the Netherlands registered level of contact in a comparable way, and similar findings were seen, with veterinarians, AI-technicians and temporary employees among the visitors most often in direct contact with animals. However, other categories differed between the two countries; e.g. animal obstetricians, which occurred in the Dutch study, do not exist in Sweden, and hoof trimmers did not occur in the Dutch data [19]. This example illustrates both the constraints in comparing results from studies with different designs and the need for collecting country-specific data. Because half of the non-responders who explained their non-response said that they no longer had livestock, the response rate among farmers that in fact had livestock on their farm was even higher. A few non-responders explained that their high number of contacts was the reason why they could not participate in the study. This is unfortunate because, from a disease prevention perspective, farms with many contacts are of special interest. It is noteworthy that one of these farmers indicated that the farm had around a thousand visits each week, due to on-farm sales. Thus, it is possible that the average number of visits reported in this study was underestimated. Another possible reason for underestimation is that farmers, in spite of the prospective study design, forgot to fill in all visits. This was observed in a Dutch study where comparative data were available to check the registrations [19]. There was no simple way to identify differences between responders and non-responders. It can be speculated that farmers who were more interested in biosecurity and disease prevention were more likely to agree to participate in the study. If so, it is possible that the number of non-professional visits, i.e. the category of visitors that a farmer can limit, was higher than reported. However, professional contacts are needed to keep the farm running, regardless of the farmer's attitude towards responding to questionnaires. The results from this study reflect the large variability among farms and contribute to the understanding of the frequency and nature of indirect contacts between Swedish livestock holdings.
The large number of non-professional visits on some farms, the fact that mixed-species hobby farms (potentially with low biosecurity and low outbreak awareness) often had many visitors, and the proportion of salesmen and repairmen entering the farm stables are all important observations. The expected findings, such as the number of visitors being related to species and herd size, are also of value, as this has not been documented before in Sweden. The study results will constitute useful background information in the planning of risk-based surveillance, risk communication and biosecurity information campaigns, as well as in outbreak management and preparedness, and as input in ongoing work on modelling of disease spread, where the distributions of actual numbers of contacts can be used to simulate contact patterns relevant for different types of Swedish livestock farms.

Conclusions
There was a large variation in the number of farm visitors, both professionals and non-professionals. The number of visitors that may be more likely to spread diseases between herds was associated with the animal species and herd size of the farm; however, the non-professional visitors did not show the same association with herd size, and there were small mixed farms with high numbers of non-professional visitors. There were expected findings, with e.g. veterinarians and AI-technicians often in direct contact with animals, but also unexpected findings, with e.g. more repairmen than expected being in direct contact with animals.

Additional file
Additional file 1: The contact log (in English) is available as an additional file.
2017-04-06T14:37:39.644Z
2013-09-16T00:00:00.000
{ "year": 2013, "sha1": "d31fba05f3ba6330361fcc685e3289f0384e1c55", "oa_license": "CCBY", "oa_url": "https://bmcvetres.biomedcentral.com/track/pdf/10.1186/1746-6148-9-184", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0480084eefee6333df832a821af8945ea7ab379a", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
232172336
pes2o/s2orc
v3-fos-license
αB-Crystallin Alleviates Endotoxin-Induced Retinal Inflammation and Inhibits Microglial Activation and Autophagy

αB-Crystallin, a member of the small heat shock protein (sHSP) family, plays an immunomodulatory and neuroprotective role by inhibiting microglial activation in several diseases. However, its effect on endotoxin-induced uveitis (EIU) is unclear. Autophagy may be associated with microglial activation, and αB-crystallin is involved in the regulation of autophagy in some cells. The role of αB-crystallin in microglial autophagy is unknown. This study aimed to explore the role of αB-crystallin in retinal microglial autophagy, microglial activation, and neuroinflammation in both cultured BV2 cells and the EIU mouse model. Our results show that αB-crystallin reduced the release of typical proinflammatory cytokines at both the mRNA and protein levels, inhibited the morphological activation of microglia, and suppressed the expression of autophagy-related molecules and the number of autophagolysosomes in vitro. In the EIU mouse model, αB-crystallin treatment alleviated the release of ocular inflammatory cytokines and the representative signs of inflammation, reduced the apoptosis of ganglion cells, and rescued retinal inflammatory structural and functional damage, as evaluated by optical coherence tomography and electroretinography. Taken together, these results indicate that αB-crystallin inhibits the activation of microglia and suppresses microglial autophagy, ultimately reducing endotoxin-induced neuroinflammation. In conclusion, αB-crystallin provides a novel and promising option for modulating microglial autophagy and alleviating the symptoms of ocular inflammatory diseases.

INTRODUCTION
Uveitis, a common ocular inflammatory disease, is a leading cause of blindness worldwide, affecting individuals regardless of age, sex, or race (1,2). Corticosteroids are the main treatment option for noninfectious uveitis (3,4). In addition to the poor response of 30% of patients (5), corticosteroid treatment is also accompanied by inevitable systemic as well as ocular side effects (6,7). Therefore, there is a need for an efficient and safe intervention for uveitis. Microglia, the primary resident population of innate immune cells in the brain and retina, are activated under stress and produce proinflammatory and neurotoxic mediators such as tumor necrosis factor (TNF) and nitric oxide (NO), leading to a cascade of inflammation that results in irreversible neurodegeneration in various diseases (8,9), including uveitis (10), glaucoma (11), age-related macular degeneration (AMD) (12), and retinitis pigmentosa (RP) (13). Thus, microglial activation plays an important role in a large number of inflammatory diseases. Recently, interventions aimed at inhibiting the activation of microglia in neuroinflammatory disease have gained considerable interest. αB-Crystallin (CRYAB/HSPB5), a member of the small heat shock protein (sHSP) family, has been a recent target of interest for therapy. sHSP family members, induced by numerous stressors, are observed across species (14). Recent studies have shown that αB-crystallin is not only an intracellular chaperone (stabilizing correct protein conformation, folding, and translocation under multiple stresses) (15), but also acts as a signaling molecule in the extracellular space, communicating with other cells, such as microglia, to regulate immune responses and inflammation (16,17).
As an immunomodulatory neuroprotectant, αB-crystallin is involved in several stress-related conditions such as stroke (18), spinal cord contusion (19), and autoimmune demyelination (20). Furthermore, studies report that αB-crystallin alleviates neuroinflammation through inhibition of microglial activation in anterior ischemic optic neuropathy (AION) (21) and experimental autoimmune encephalomyelitis (EAE) (17). Interestingly, retinal αB-crystallin is upregulated in Staphylococcus aureus-induced endophthalmitis and protects the retina from damage. Thus, modulation of microglial activation by αB-crystallin in ocular inflammatory disease requires further investigation, which may lead to promising new therapies for uveitis. Autophagy is a cellular process that eliminates aggregated or unfolded proteins to maintain protein homeostasis. It also removes excess or damaged organelles in cells through several processes, including macroautophagy, microautophagy, and chaperone-mediated autophagy (CMA) (22,23). In addition to maintaining innate proteostasis, molecular chaperones also participate in autophagy. As a molecular chaperone, αB-crystallin is closely linked with autophagy. Studies suggest that αB-crystallin modulates not only the autophagy of the retinal pigment epithelium (RPE) in AMD (16) and of astrocytes in Parkinson's disease (24), but also the autophagy of cardiomyocytes in cardiomyopathy (25). Moreover, an increasing number of studies have shown that upregulated levels of autophagy can promote microglial activation, leading to neuroinflammation, for example in HIV-associated encephalitis (26), intracerebral hemorrhage (ICH) (27), and cocaine exposure (23). In addition, one report showed that suppressing autophagy could inhibit classical microglial activation (28). Therefore, the role that αB-crystallin plays in microglial autophagy requires additional investigation, particularly as microglial autophagy may provide a potential means of influencing microglial activation and, ultimately, ocular neuroinflammation. Whether exogenous αB-crystallin can play a role in microglial autophagy and inhibit microglial activation in acute ocular inflammation remains unclear. Based on existing studies, we hypothesized that αB-crystallin may suppress microglial activation and influence autophagy to alleviate neuroinflammation. We aimed to explore the potential of αB-crystallin in the alleviation of ocular inflammatory diseases such as uveitis. For this purpose, the endotoxin-induced uveitis (EIU) mouse model, a well-established model of acute ocular inflammation, was chosen. We divided our study into two parts, in vitro and in vivo. This study aimed to investigate the role of exogenous αB-crystallin in regulating microglial autophagy and activation in BV2 microglial cells and the EIU mouse model.

Cell Culture and Treatment
BV2 cell lines (mouse microglial cell lines) were purchased from a commercial cell bank (DMSZ, Germany). The cells were cultured in Dulbecco's Modified Eagle Medium (DMEM) (C11330500BT, Gibco) containing 10% fetal bovine serum (FBS) (LB-10270, Roby) at 37°C in a humidified incubator with 5% CO2. BV2 microglial cells were randomly divided into three groups: (1) a control group, (2) a group treated with phosphate-buffered saline (PBS) + lipopolysaccharide (LPS), and (3) a group treated with αB-crystallin + LPS.
The BV2 cells were seeded in six-well plates, followed by the addition of recombinant human αB-crystallin (C7944-53, US Biological) (1 µg/ml) or sterile PBS (C10010500BT, Gibco) 12 h before treatment with LPS. The selection of the appropriate αB-crystallin concentration is shown in Supplementary Figure 1A. For LPS stimulation, LPS (L6529, Sigma) (100 ng/ml) was added to the six-well plates for the PBS+LPS and αB+LPS groups. The cells from one well were harvested at 6 h or 12 h after LPS stimulation for further analysis.

EIU Mouse Model and Treatment
All animal studies were conducted in accordance with the Association for Research in Vision and Ophthalmology resolution and approved by the Zhongshan Ophthalmic Center Animal Care and Use Committee, Sun Yat-sen University, Guangzhou, China (authorization number 2019164). C57BL/6J mice (6-8 weeks old) were maintained on a 12 h light/dark cycle at 23°C with ad libitum access to food and water. They were randomly divided into three groups: (1) a control group, (2) a PBS+LPS group, and (3) an αB+LPS group. The right eyes of mice in the PBS+LPS and αB+LPS groups were intravitreally injected with PBS and LPS (PBS+LPS group) or αB-crystallin and LPS (αB+LPS group). The EIU mouse model was induced by a single intravitreal injection of 1 µl LPS (200 µg/ml). Twenty-four hours before the LPS injection, mice in the PBS+LPS group received one intravitreal injection of 1 µl PBS, and mice in the αB+LPS group received one intravitreal injection of 1 µl αB-crystallin (500 µg/ml). For further analysis, mice in each group were sacrificed and the right eyeballs were enucleated at 6, 12, and 24 h after LPS stimulation. The results for a single intravitreal injection of αB-crystallin are shown in Supplementary Figures 1B-D.

Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR)
Total RNA was isolated from the BV2 microglia of one well of a six-well plate (n = 3) or from mouse retinas (n = 3) using RNAiso Plus (9108, Takara). RNA was reverse transcribed to cDNA using the PrimeScript RT reagent kit (RR036B, Takara). Nucleic acid purity was quantified and analyzed using spectrophotometry (NanoDrop Technologies, Wilmington, DE). Primer sequences are presented in Table 1. Gene expression levels were measured using the LightCycler 480 system (Roche, Switzerland). The PCR procedure was as follows: pre-incubation for 5 min at 95°C, followed by 40 cycles of denaturation for 10 s at 95°C and annealing for 15 s at 60°C. The expression level of each gene was expressed as the fold expression after normalization to the internal control glyceraldehyde 3-phosphate dehydrogenase (GAPDH); a sketch of this calculation is given below.

Western Blotting
Total protein was extracted from the BV2 cells of one well of a six-well plate (n = 3) or from mouse retinas (n = 3) using lysis buffer (KGP250, KeyGen) containing protease and phosphatase inhibitors. The protein concentration was measured using the Pierce BCA Protein Assay Kit (23227, Thermo Scientific). An equal amount of protein from each sample was mixed with 5X SDS loading buffer (KGP101, KeyGen), subjected to 12% SDS-PAGE, and then transferred to PVDF membranes (ISEQ00010, Merck Millipore).
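Referring back to the qRT-PCR analysis above: the fold-expression calculation is not spelled out in the text, so the sketch below assumes the commonly used 2^-ΔΔCt method, with invented Ct values purely for illustration.

```python
# Sketch of a relative-expression (2^-ΔΔCt) calculation, assuming this is
# the fold-expression method used; Ct values below are illustrative only.
def fold_change(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Return fold expression of a target gene relative to the control
    group, normalized to the GAPDH internal control."""
    d_ct_sample = ct_target - ct_gapdh             # ΔCt, treated sample
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl  # ΔCt, control sample
    dd_ct = d_ct_sample - d_ct_control             # ΔΔCt
    return 2 ** (-dd_ct)

# Example: TNF-α in an LPS-treated sample versus the control group.
print(fold_change(ct_target=22.1, ct_gapdh=17.4,
                  ct_target_ctrl=25.0, ct_gapdh_ctrl=17.6))  # ≈ 6.5
```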
The membranes were blocked with 5% non-fat milk (MB3217, Meilunbio) in PBS for 1.5 h at room temperature and then incubated overnight at 4°C with primary antibodies against inducible nitric oxide synthase (iNOS) (ab15323, Abcam), cyclooxygenase 2 (COX2) (12282S, CST), Beclin1 (3738S, CST), Light Chain 3 (LC3) (3868S, CST), and GAPDH (ab8245, Abcam). The membranes were then incubated with secondary antibodies (ab6802, Abcam; ab6789, Abcam) for 1 h. Protein bands were visualized using the ChemiDoc Touch Imaging System (Bio-Rad, USA), and band intensity was quantified using ImageJ software (NIH, USA).

Enzyme-Linked Immunosorbent Assay (ELISA)
The cellular supernatant and intraocular fluid from mice were collected. Cardiac perfusion with PBS was performed on the anesthetized mice before they were sacrificed, followed by removal of the right eyes. The extraocular muscle and fascia were cleared, and the eyeball was washed with PBS and dried with gauze. The eyeball was carefully dissected to remove the cornea, iris, lens, retina, and choroid; the intraocular fluid was then collected with pipette tips. The cellular supernatant or intraocular fluid was centrifuged at 16,000 g for 30 min at 4°C. The supernatant was then collected and stored at −80°C until further analysis. The cell supernatant (n = 3) and intraocular fluid (n = 9) were collected 12 h after LPS stimulation. The cellular supernatant (diluted 1:50 in PBS) and intraocular fluid (diluted 1:20 in PBS) of each group were measured using ELISA kits for tumor necrosis factor alpha (TNF-α) (MTA00B, R&D) and interleukin-6 (IL-6) (M6000B, R&D), following the manufacturer's instructions.

Immunofluorescent Staining
Cells were seeded onto 8-well glass chamber slides (PEZGS0816, Merck Millipore) for fixation and further staining. Cells were fixed with 4% paraformaldehyde (…). The retinal cups were cut into four pieces and flat-mounted with an anti-fade mounting medium (S2100, Solarbio). Images were captured using a confocal microscope (LSM880, Carl Zeiss), and immunofluorescent intensity was quantified using ImageJ software.

Clinical Evaluation of Ocular Inflammation
Twenty-four hours after LPS injection, the mice were anesthetized with an appropriate dose of phenobarbitone (50 mg/kg, intraperitoneal injection). The mice were administered tetracaine and tropicamide before the examination. The severity of anterior and posterior segmental inflammation of the right eye was evaluated with slit lamp biomicroscopy (SL-D7/DC-3/IMAGEnet, Topcon) and a fundus imaging system (Micron IV, Phoenix). For the anterior segmental images, the eyeball was adjusted to a suitable position to observe and photograph the inflammation in the anterior segment. For the posterior segmental images, methylcellulose was applied to the ocular surface to maintain contact with the lens and to acquire the fundus image. Tobramycin ointment was then applied to protect the cornea until recovery from anesthesia. Images were captured for further analysis, and the data are shown in the Supplementary Materials.

Optical Coherence Tomographic (OCT) Imaging
Mice (n = 6) were anesthetized with pentobarbital (50 mg/kg, intraperitoneal injection), and their pupils were dilated with tropicamide. Retinal structure was assessed using an OCT imaging system (Spectralis OCT, Germany). The scan area centered on the optic nerve was 9 × 9 mm (496 data points/A-scan, 1,536 A-scans/horizontal B-scan, 85,000 A-scans/s, 30° × 30°, an average of three frames per B-scan).
Saline was used to keep the cornea moist and the optical media transparent, ensuring an image acquisition quality above 15. Retinal thickness (within one optic disc diameter from the margin of the optic nerve head) was measured using the "measure" tool in the software. Total retinal thickness was measured from the nerve fiber layer to the RPE layer.

Histopathological Analysis
The enucleated eyes (n = 6) were fixed in FAS solution (G1109, Guge) for 24 h, washed with PBS, dehydrated through a graded alcohol series (65, 75, 85, 95, and 100%), and then embedded in paraffin. Mouse eye sections through the optic disc were cut at 5 µm thickness, deparaffinized, and stained with hematoxylin and eosin (H&E) (DH0006-2, Leagene) for histopathologic analysis. The number of intraocular inflammatory cells was counted to assess the severity of the uveitis. The sections through the optic disc were photographed using a microscope (Leica DM4000, Germany). Each image was captured with the optic disc at the center for counting inflammatory cells. A marquee of the same size (length: 6.5 times the scale, width: 5.5 times the scale) was applied across the optic disc and the number of inflammatory cells within it was counted. The counting was performed separately by two experienced researchers, and the average of the two counts was documented.

Apoptosis by Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL) Assay
Cryosections were obtained using the previously described method, and a TUNEL kit (12156792910, Roche) was used. The cryosections (n = 6) were washed with PBS three times and incubated with 0.1% Triton X-100 at 4°C for 5 min. The sections were incubated with the TUNEL reaction mixture at room temperature for 1 h and then washed with PBS three times. The sections were then stained with DAPI for 5 min and washed three times. Finally, the sections were mounted with an anti-fade mounting medium and photographed using a confocal microscope (LSM880, Carl Zeiss). Images of the mid-peripheral retina were taken to count TUNEL-positive cells in the visual field. The counting was performed by two researchers and the average numbers were documented.

Visual Evoked Potential (VEP)
VEP recordings of mice (n = 6) were performed using an electrophysiological system (Diagnosys Celeris, USA). All experimental mice underwent dark adaptation for 12 h prior to the daytime tests. Mice were anesthetized with pentobarbital (50 mg/kg, intraperitoneal injection). Pupils were dilated with tropicamide, and the cornea was anesthetized with tetracaine. VEP was recorded using a gold-plated wire loop electrode contacting the cornea as the active electrode. Stainless steel needles inserted under the skin at the middle of the skull and at the tail served as the reference and ground electrodes, respectively. The amplitudes of the VEP wave were recorded as the average of three responses under a flash stimulus intensity of 0.05 cd·s/m². Examination parameters were as follows: LED intensity, 100%; frame rate, 6 ms; waveform contrast, 100%; and waveform frequency, 0.5 Hz.

Transmission Electron Microscopy (TEM)
Grids were left to dry overnight at room temperature before being imaged using the HT7700 TEM system (Hitachi, Japan). Five fields were randomly selected for each sample and photographed. The number of autophagolysosomes in each field was recorded, and the average of the five fields was taken as the result for the sample.
An autophagolysosome is a single-membrane hybrid structure generated by the fusion of an autophagosome and a lysosome, containing partially or fully degraded organelles as well as cytoplasmic material, together with electron-dense regions representing undegraded residuals (29)(30)(31)(32)(33).

Statistical Analyses
Statistical analyses were performed using SPSS 22.0 (IBM, USA). Comparisons of mean ± SEM values among multiple groups were analyzed using one-way analysis of variance (ANOVA) followed by Tukey's post hoc test. A value of p < 0.05 was considered statistically significant.

RESULTS
αB-Crystallin Inhibited the Release of Inflammatory Cytokines and the Activation of BV2 Cells
After the cultured BV2 microglial cell lines were stimulated with LPS for 6 and 12 h, the mRNA and protein levels of inflammatory cytokines (including COX2, iNOS, TNFα, and IL6) were significantly upregulated compared with those in the control group. Pre-treatment of cells with αB-crystallin reduced the expression of inflammatory cytokines at both the mRNA and protein levels (Figures 1A-D). In terms of cell morphology, the control BV2 cells were flat with filopodia. When challenged with LPS, BV2 cells were activated, as represented by a round shape with retracted filopodia. Pre-treatment with αB-crystallin sustained the ramified microglial morphology (Figure 1E). In conclusion, αB-crystallin efficiently suppressed the activation of microglia and blocked the inflammation in vitro.

αB-Crystallin Suppressed Autophagy of BV2 Cells
Beclin1 and LC3II play important roles in the autophagy process: Beclin1 contributes to the maturation of the autophagosome, whereas LC3II is an indispensable molecule for the formation of the autophagosome membrane. After LPS stimulation, Beclin1 mRNA in BV2 cells increased at 6 h (Figure 2A); furthermore, an increase in Beclin1 protein was observed at 12 h (Figures 2B,C). The conversion of LC3I to LC3II increased at 12 h at the protein level (Figures 2B,D). Autophagolysosomes (the end product of autophagy) were also observed, and LPS increased the number of autophagolysosomes in the cells (Figures 2E,F). Therefore, LPS led to an increase in autophagy. Nevertheless, pre-treating cells with αB-crystallin significantly inhibited the level of autophagy in the cells, as confirmed by the mRNA and protein levels and the intracellular ultrastructure (Figures 2A-F).

αB-Crystallin Alleviated Inflammation in C57BL/6J Mice
Twenty-four hours after the intraocular injection of LPS, we observed anterior and posterior segmental ocular inflammation. Anterior segmental images showed that LPS induced inflammatory reactions such as congestion, hypopyon, hyphema, and pupil synechia, and treatment with αB-crystallin suppressed this anterior ocular inflammation (Supplementary Figure 2A). Regarding the posterior inflammation, the images showed that LPS led to significant inflammation, including vitreous opacity, vascular white sheathing, optic disc edema, and inflammatory cell infiltration, whereas αB-crystallin inhibited these reactions (Supplementary Figure 2B). H&E images and the scatter diagram showed that αB-crystallin decreased the number of inflammatory cells infiltrating the vitreous body in response to LPS (Figures 3A,B).

αB-Crystallin Inhibited the Release of Inflammatory Cytokines and the Activation of Microglia in C57BL/6J Mice
The C57BL/6J mice were sacrificed, and the retina and vitreous fluid were tested for inflammatory cytokines (including COX2, iNOS, TNFα, and IL6).
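As a small, hypothetical illustration of the group comparison described under Statistical Analyses above (one-way ANOVA followed by Tukey's post hoc test), the sketch below uses invented values and SciPy/statsmodels in place of SPSS:

```python
# Illustrative only: one-way ANOVA across the three experimental groups,
# followed by Tukey's post hoc test. Values are made up; the original
# analyses were performed in SPSS.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.0, 1.1, 0.9])      # e.g. relative TNF-α mRNA, n = 3
pbs_lps = np.array([4.2, 3.8, 4.5])      # PBS + LPS group
ab_lps  = np.array([2.1, 2.4, 1.9])      # αB-crystallin + LPS group

f_stat, p_value = f_oneway(control, pbs_lps, ab_lps)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, pbs_lps, ab_lps])
groups = ["control"] * 3 + ["PBS+LPS"] * 3 + ["aB+LPS"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```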
At 6 and 12 h after LPS injection, the mRNA and protein levels of these inflammatory factors were significantly upregulated compared with those in the control group. Pre-treatment with αB-crystallin reduced the expression of inflammatory cytokines at both the mRNA and protein levels (Figures 4A-D). In terms of cellular morphology, retinal flat mounts showed that naive microglia presented a ramified shape with a large covering area and several branches, whereas the activated microglia induced by LPS had an amoeboid shape with a small covering area and few branches. The ramified microglial morphology was sustained by αB-crystallin (Figure 4E). In addition to the direct morphological changes, the statistical results for the subtended area and the number of branches of each cell further confirmed that αB-crystallin suppressed microglial activation (Figures 4F,G).

αB-Crystallin Suppressed Autophagy of Retinal Microglia in C57BL/6J Mice
After the C57BL/6J mice were sacrificed, their eyeballs were collected for immunofluorescence assessment. The eyeball sections were double-stained for Iba1/Beclin1 or Iba1/LC3. The immunofluorescent images showed that Beclin1 and LC3 were mainly expressed in microglia rather than in other retinal cells, indicating that autophagy was an active biological process in the microglial cells. The results showed that LPS increased Beclin1 and LC3 expression compared with the control group, whereas αB-crystallin inhibited the expression of these autophagy-related proteins (Figures 5A-D). (Figures 6A,B). The TUNEL experiments showed that LPS induced retinal ganglion cell (RGC) death, whereas αB-crystallin reduced the number of TUNEL-positive cells (Figures 6C,D). In order to evaluate retinal function, we recorded the VEP of mice at 7 days. A lower VEP amplitude showed that LPS induced functional impairment. This impairment was reduced by αB-crystallin, and the VEP amplitude of the αB+LPS group was higher than that of the PBS+LPS group (Figures 6E,F).

DISCUSSION
To determine the effect of αB-crystallin exposure on LPS-induced inflammation, microglial activation, and microglial autophagy, we utilized indicators representing the inflammatory reaction, microglial activation, microglial autophagy, and retinal structure and function. We first examined whether αB-crystallin influences microglial activation and autophagy, and then investigated whether αB-crystallin can be used as an effective in vivo intervention for ocular inflammatory disease. We hypothesized that modulating microglial autophagy may provide a potential target for suppressing LPS-induced neuroinflammation. Our results showed that prophylactic αB-crystallin treatment effectively reduced the inflammation and the activation of microglia in vitro. Similarly, Guo et al. (17) reported that αB-crystallin inhibited the expression of IL1β, IL6, and TNFα in microglia under stress conditions. Pangratz-Fuehrer et al. (21) concluded that αB-crystallin could dampen microglial activation. Holtman et al. (34) found that human microglia exposed to αB-crystallin induced a series of anti-inflammatory signals, based on hub gene analysis. These results are consistent with the anti-inflammatory effects of αB-crystallin in sustaining cellular homeostasis. In contrast, Bsibsi et al. (35) showed that microglia exposed to αB-crystallin (50 µg/ml) could induce IL6, TNF, and COX2 expression, and Bhat and Sharma (36) showed that α-crystallin could induce microglial activation.
The likely reasons for this are as follows: the α-crystallin used in those studies was extracted from the bovine lens, whereas we used pure recombinant αB-crystallin. In their studies (35,36), microglia were exposed to α-crystallin (0-50 µg/ml), and a high concentration (50 µg/ml) of α-crystallin induced inflammation in microglia, whereas a low concentration of the protein did not. Thus, purity and protein concentration may influence the role of αB-crystallin in inflammation. In order to exert its protective effect, investigators need to ensure its purity and an appropriate concentration. To the best of our knowledge, there are currently no reports of αB-crystallin in an EIU mouse model. In our study, we found that αB-crystallin reduced intraocular inflammation and inhibited the activation of microglia in vivo. Similarly, Holtman et al. (34) showed that systemic administration of αB-crystallin inhibited neuroinflammation, Arac et al. (18) demonstrated that αB-crystallin reduced both the stroke volume and the inflammatory cytokines, and Wu et al. (37) reported that α-crystallin inhibited microglial activation after optic nerve crush. Regardless of the administration route of exogenous αB-crystallin (intravenous, intravitreal, or intraperitoneal), these results show that αB-crystallin has a strong anti-inflammatory effect in various diseases, providing promising evidence for its use in alleviating ocular inflammatory diseases. This differs from the findings of Arac et al. (18), who showed that, after stroke, the number of microglial cells was similar to that in the wild type; however, they only described the total number of microglia between the different groups and did not specifically address the activated microglia of the CRYAB−/− mice. Furthermore, Rao et al. (38) reported that αB-crystallin did not show a protective effect in experimental autoimmune uveitis (EAU). Their EAU mouse model was induced by subcutaneous injection of interphotoreceptor retinoid-binding protein in B10.RIII mice, and the protective effect of crystallin was only evaluated on day 21 (EAU is a chronic process); this approach differed from ours. Studies suggest that the effect of α-crystallin might depend on the severity of oxidative stress and the duration of the stimulus (18). Moreover, results might differ between acute and chronic oxidative stress (39,40). Therefore, the immunoregulatory role of αB-crystallin in different types of disease deserves further investigation. The mechanism underlying αB-crystallin-mediated inhibition of microglial activation is still unclear (17). Studies show that increased autophagy might promote the activation of microglia and increase neuroinflammation in several settings, such as acute infection (41), acute electric stimulation (42), hypoxia (43), and traumatic brain injury (44). Yang et al. (45) found that suppressing autophagy decreased microglial activation and inflammatory injury in ICH. In addition, François et al. (46) found that when microglia, astrocytes, and neurons were tri-cultured with LPS, only the microglia exhibited an increased level of autophagy combined with an upregulated inflammatory reaction. Therefore, autophagy might play an important role in the regulation of microglial activation and neuroinflammation (46). For this reason, we examined microglial autophagy in our study. Our results showed that αB-crystallin suppressed microglial autophagy in vitro and in vivo. Similarly, Lu et al.
(24) reported that CRYAB knockdown in astrocytes resulted in a marked augmentation of autophagy activity. In contrast, Kannan (16) concluded that extracellular αB-crystallin present in drusen increased autophagy-mediated clearance, and Pattison et al. (25) found that mutation of αB-crystallin decreased the expression of Atg7 and reduced the autophagic function of rat cardiomyocytes. Therefore, αB-crystallin may either promote or inhibit autophagy in different cell types. To date, there have been several studies on autophagy, but no unanimous conclusion has been reached regarding its mechanism. Mizushima et al. (22) described the apparent conundrum that autophagy has dual effects on cells, which may be either beneficial or detrimental. Numerous studies have shown that persistent, inefficient, or excessive induction of autophagy is detrimental and promotes cellular injury (47). Based on our results, αB-crystallin may prevent excessive microglial autophagy, which may in turn influence microglial activation (23). However, the specific relationship between microglial autophagy and microglial activation requires further investigation. To our surprise, our results showed that autophagic markers were detected almost exclusively in microglia in the retina (Figures 5A,B); therefore, autophagy played an active and vital role in microglial biological processes. Further studies are required to understand the effect of αB-crystallin on microglial autophagy, including the pathways involved. It has been reported that αB-crystallin is a ligand for TLR2 (35,48), and based on our results it can inhibit microglial autophagy induced by LPS (a ligand for TLR4). Furthermore, both TLR2 and TLR4 participate in microglial autophagy (41,45). Whether there is a connection between the two receptors during autophagy is unclear. A deeper understanding of αB-crystallin, combined with preliminary clinical application, may provide a potential therapy not only for uveitis but also for other inflammatory diseases. In summary, the present study showed that prophylactic αB-crystallin treatment can suppress LPS-induced inflammation. It reduced the release of proinflammatory cytokines as well as the activation of microglia, both in vitro and in vivo. Simultaneously, we found that αB-crystallin inhibits microglial autophagy; microglial autophagy may therefore play a role in microglial activation. Finally, we demonstrated its beneficial effects in protecting the structure and function of the retina in the EIU model. Taken together, the use of αB-crystallin could be considered a novel potential interventional strategy for acute ocular inflammatory diseases induced by microglial activation. Additionally, the regulation of microglial autophagy may be a new and effective target for the alleviation of inflammatory diseases.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

ETHICS STATEMENT
The animal study was reviewed and approved by the Zhongshan Ophthalmic Center Animal Care and Use Committee, Sun Yat-sen University, Guangzhou, China.
2021-03-11T14:14:08.918Z
2021-03-11T00:00:00.000
{ "year": 2021, "sha1": "1455b72b7a521ad2b4e7626eb24c688f855b68c9", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2021.641999/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1455b72b7a521ad2b4e7626eb24c688f855b68c9", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
244680479
pes2o/s2orc
v3-fos-license
Hip Joint Osteoradionecrosis following Pelvic Radiotherapy: A Report of Two Cases

Radiotherapy is a highly effective and curative treatment modality for pelvic malignancies and for metastatic bone disease. One of the most severe and challenging long-term complications of radiotherapy is osteonecrosis. The resulting changes may have deleterious effects on osseointegration, and long-term implant stability in total hip arthroplasty (THA) patients remains a challenge because of high rates of early failure of traditional implants. The addition of a Kerboull reinforcement cross to the reconstruction reduces the risk of loosening of the acetabular component by giving elasticity to the implant, decreasing the stress applied to the peri-acetabular bone, and allowing fixation over a large surface area. Very few cases of radiation-induced hip necrosis have been described in the literature. We report two cases of hip joint necrosis with acetabular protrusion and femoral head deformity after therapeutic pelvic radiation for cervical cancer, managed by total hip arthroplasty with cemented femoral and acetabular components and the addition of a Kerboull reinforcement cross.

Introduction
Pelvic irradiation is commonly used in the treatment of pelvic malignancies and for metastatic bone disease [1]. Modern irradiation planning techniques enable very accurate dose distribution and precise beam delivery; nevertheless, osseous complications, summarized as radiation-induced reactions, remain frequent. One of the most severe and challenging long-term complications of radiotherapy is osteonecrosis [2,3]. There are very few documented cases of radiation-induced hip joint necrosis in the literature [4]. We report two cases of hip joint necrosis with acetabular protrusion and femoral head deformity after therapeutic pelvic radiation.

Case 1
A 53-year-old woman with cervical cancer had been treated with radical hysterectomy followed by postoperative chemotherapy and radiotherapy two years before presentation. The exact dose distribution within the pelvis is not available. One year after therapy, the patient started to feel pain in her right hip and the inguinal region; the pain increased with movement, had been progressive, and had recently worsened. She had no history of fall, trauma or osteoarthritis in the left hip, and no fever, night sweats, or weight loss. Examination of the left hip revealed painful movements with restriction of the range of motion, and a 1 cm shortening of the left lower limb. There were no inflammatory signs, and the rest of the general examination was normal. A radiograph of the pelvis showed acetabular osteolysis with protrusion, destruction of the left femoral head and pathological central dislocation of the hip (Figure 1a). Computed tomography (CT) revealed an irregular osteolytic process of the acetabulum with an ill-defined border and femoral head destruction, with intra-articular bone fragments (Figure 1b). Magnetic resonance imaging (MRI) of the pelvis revealed remodeling of the left hip, with low signal on T1 and high signal on T2 and STIR sequences, moderately enhanced by contrast, and severe destruction of the femoral head, suggestive of osteonecrosis of the hip (Figure 2). A bone scan revealed increased uptake in the left acetabulum and femoral head, but there were no other suspicious areas of increased uptake to suggest metastases (Figure 3).
To exclude bone metastasis and septic arthritis of the hip joint, a CT-guided biopsy was performed, showing chronic inflammatory remodeling without evidence of malignancy and with signs consistent with avascular bone necrosis. Via a standard posterolateral approach, the patient underwent total hip arthroplasty with cemented femoral and acetabular components. To fill the bone loss, a cemented reconstruction with the addition of a Kerboull reinforcement cross and bone graft from the femoral head was used (Figure 4). Cultures obtained at the time of surgery were negative, and pathological examination confirmed osteonecrosis. She received twenty days of thrombo-embolic prophylaxis with low-molecular-weight heparin, with post-operative rehabilitation and full weight-bearing mobilization immediately after the operation. At 12 months of follow-up she had no pain in the left hip and could walk without sticks; the Postel-Merle d'Aubigné (PMA) score was 17.

Case 2
A 50-year-old woman was diagnosed with cervical cancer. She underwent radical hysterectomy, and adjuvant radiotherapy was given to the whole pelvis in April 2015. The exact dose distribution within the pelvis is not available. Two years after therapy, the patient presented to our department with bilateral hip pain which had been progressive and had recently worsened. There was no history of fall, trauma or any significant injury prior to the onset of pain. Examination of the hips revealed painful movements with restriction of the range of motion; the symptoms were more marked on the right side. She had no symptoms of infection and could walk less than one block. The rest of the general examination was normal. Radiographs showed destruction of both femoral heads and osteolysis of the right acetabulum (Figure 5a). Computed tomography revealed no evidence of infection, bone metastasis, tumour recurrence, or radiation sarcoma (Figure 5b). A CT-guided biopsy revealed no evidence of malignancy in the right hip. The patient underwent total hip arthroplasty with cemented femoral and acetabular components. Pathological examination revealed no signs of malignancy. At eight months of follow-up she had no pain in the right hip and could walk without sticks; the Postel-Merle d'Aubigné (PMA) score was 15.

Discussion
Radiotherapy represents a highly effective and curative treatment modality for pelvic malignancies and for metastatic bone disease [1]. Because of the anatomic vicinity of these cancers, the femoroacetabular articulation can be exposed to a large dose of radiation that may induce changes in the skeletal system [2,5,6]. These injuries range in severity from radiation osteitis, through stress fractures and avascular osteonecrosis, to pathological fractures [1,3]. There are very few cases of radiation-induced hip necrosis described in the literature [4]. It is characterized by cellular death of bone components, due to the cellular depletion caused by radiation and the local ischemia resulting from radiotherapy-induced microvascular damage [2,4]. These changes may have deleterious effects on osseointegration and pose a challenge for long-term implant stability in THA patients [7]. Symptomatic osteoradionecrosis involves the pelvic bones in 0.44% of cases, and the median time of onset is 44 months [6]. Post-irradiation lesions are often bilateral (21%) [8].
In the first case, only the left hip was affected by osteonecrosis, perhaps because of a higher radiation dose to the left hip or inadequate shielding of the left hip during radiotherapy. Radiation-induced acetabular protrusion is very rare; the mechanism relates to the weight-bearing forces operating in the hip and to multiple insufficiency-type stress fractures superimposed on the diminished structural strength that occurs during bone revascularization and remodeling [9]. The probability of radiation-induced changes in bone depends on many factors, such as dose per fraction, total dose, dose intensity, and irradiated volume [2]. The reported incidence of pelvic osteoradionecrosis varies widely, from 2.1% to 34%, depending on the technique, the diagnostic criteria applied, and these same dose-related factors [2]. The extent of bone involvement also depends on patient-related factors such as age, sex, body weight, skeletal co-morbidities, and co-medications, primarily corticosteroids [3]. Osteoradionecrosis constitutes a very difficult diagnostic problem, as the symptoms often appear many years after radiotherapy and patients do not associate them with past treatment [2,8]. It is important to differentiate osteoradionecrosis from local recurrence of the malignancy, bone metastasis, radiation-induced sarcoma, and infection, especially in patients with a history of malignancy [6]. While computed tomography (CT) plays an important role in fracture detection, magnetic resonance imaging (MRI) is more sensitive to bone marrow abnormalities and better suited to evaluating the viability of the femoral head [3-10]. A bone scan shows a typically symmetric uptake pattern in osteoradionecrosis, whereas metastases appear asymmetric [6,11]. THA in a patient with a post-irradiated pelvis remains a challenge [7]. Rates of early failure of traditional implants have been documented to be as high as 44% and 52% at 2-6 years for cemented and uncemented components, owing to dense sclerotic bone and to the risk of infection in damaged tissues that may provide a site for colonization following bacteremia [12]. In addition, host defense mechanisms may be compromised by irradiation-induced damage and by lymph stasis or lymphedema [5]. The failure of uncemented components is due to poorly elastic bone, a reduced capacity of the bone matrix to remodel over time, and its decreased capability for osseous integration. The use of cemented cups, with or without augmentation rings, prevents early loss of fixation [12]. However, reported outcomes of THA have been disappointing [7]. The failure of cemented components is due to poor cement interdigitation and the inability of bone to withstand the stress around the implant; microfractures develop and lead to implant failure [1,13]. The addition of a Kerboull reinforcement cross to the reconstruction reduces the risk of loosening of the acetabular component by giving elasticity to the implant, decreasing the stress applied to the peri-acetabular bone, and allowing fixation over a large surface area [1-8]. The use of trabecular metal implants has shown better primary stability and bone ingrowth because of their high coefficient of friction and porosity [1,14]. Joglekar et al. [15] investigated the outcomes of 22 hips undergoing primary THA using tantalum acetabular cups in patients who had previously undergone pelvic radiotherapy. At 5-year follow-up, no patient presented signs of acetabular loosening [8].
Conclusion The long-term side effects of radiotherapy depend on many factors, but the pathologic features are consistent. Although osteoradionecrosis appears to be a very rare side effect of radiotherapy, it may lead to severe functional impairment in patients who often have been cured of their cancer. Early diagnosis and proper treatment may protect patients from long-term morbidity.
Phononic-subsurface flow stabilization by subwavelength locally resonant metamaterials The interactions between a solid surface and a fluid flow underlie dynamical processes relevant to air, sea, and land vehicle performance and numerous other technologies. Key among these processes are unstable flow disturbances that contribute to fundamental transformations in the flow field. Precise control of these disturbances is possible by introducing a phononic subsurface (PSub). This comprises locally attaching a finite phononic structure nominally perpendicular to an elastic surface exposed to the flowing fluid. This structure experiences ongoing excitation by an unstable flow mode, or more than one mode, traveling in conjunction with the mean flow. The excitation generates small deformations at the surface that trigger elastic wave propagation within the structure, traveling away from the flow and reflecting at the end of the structure to return to the fluid-structure interface and back into the flow. By targeted tuning of the unit-cell and finite-structure characteristics of the PSub, the returning waves may be devised to resonate and reenter the flow out of phase, leading to significant destructive interference of the continuously incoming flow waves near the surface and subsequently to their attenuation over the spatial extent of the control region. This entire mechanism is passive, responsive, and engineered offline without needing coupled fluid-structure simulations; only the flow instability’s frequency, wavelength, and overall modal characteristics must be known. Disturbance stabilization in a wall-bounded transitional flow leads to delay in laminar-to-turbulent transition and reduction in skin-friction drag. Destabilization is also possible by alternatively designing the PSub to induce constructive interference, which is beneficial for delaying flow separation and enhancing chemical mixing and combustion. In this paper, we present a PSub in the form of a locally resonant elastic metamaterial, designed to operate in the elastic subwavelength regime and hence being significantly shorter in length compared to a phononic-crystal-based PSub. This is enabled by utilizing a sub-hybridization resonance. Using direct numerical simulations of channel flows, both types of PSubs are investigated, and their controlled spatial and energetic influence on the wall-bounded flow behavior is demonstrated and analyzed. We show that the PSub’s effect is spatially localized as intended, with a rapidly diminishing streamwise influence away from its location in the subsurface. 
Introduction Flow control is a central topic in fluid dynamics that is concerned with devising passive or active means of intervention with the flow structure and its underlying mechanisms in a manner that causes desirable changes in the overall flow behavior. Through flow control, it is possible, in principle, to enable favorable outcomes such as, for example, delay of laminar-to-turbulent transition and reduction of skin-friction drag in wall-bounded flows [1]. These scenarios allow for substantial savings in fuel expenditure for air, sea, and land vehicles, wind and water turbines, long-range gas and liquid pipelines, and other similar applications. Flow control by active means has been extensively investigated over the past few decades [2,3,4,5,6,7]. Passive techniques, on the other hand, are desirable because of their simplicity and low cost, i.e., no active control devices, wires, ducts, slots, etc., are needed and no electric power is required to drive the control process. Passive techniques widely explored in the literature include the use of riblets [8,9], roughness [10,11], or porous features [12] on the surface exposed to the flow, or coating the surface with a compliant material [13,14,15,16,17,18,19,20,21]. An ideal intervention requires an understanding of the key characteristics of the flow dynamics and using this knowledge to tailor, with dynamical precision, a control stimulus that accounts for the underlying flow mechanisms. Recently, this endeavor has been shown to be possible with the use of phononic materials passively employed in the subsurface of a wall-bounded flow [22,23]. Flow transition may occur when external disturbances or inherent fluctuations develop and become significant within the flow field. These disturbances may be in the form of unstable waves that represent a small component of the total velocity field; an example of a widely studied type of disturbance in shear flows is a Tollmien-Schlichting (TS) wave [24,25]. In this context, flow disturbances, also known as perturbations or instabilities, appear at various frequencies, wavenumbers, phases, and orientations and, depending on their character, may grow in amplitude as they travel downstream. In 2015, the general concept of a phononic subsurface (PSub) [22] was introduced as a means to provide a wave-synchronized intervention with flow instabilities to cause either stabilization or destabilization, as desired. A PSub is installed in the subsurface region, and is nominally perpendicularly oriented and configured to extend all the way to expose its edge to the flow, forming an elastic fluid-structure interface. The underlying mechanism that a PSub induces is passive and responsive localized control of both the
sign and rate of production of the perturbation kinetic energy within the flow field.A strong (weak) PSub intervention for flow stabilization causes a strong (weak) negative rate of production, effectively shutting off the source of energy intake into the instability from the mean flow.When a PSub is introduced for flow destabilization, the opposite effect takes place and the instability is forced to acquire energy from the mean flow at a higher rate.These two scenarios are manifestations of a contiguous solid-fluid flow antiresonance or resonance phenomenon, respectively.A PSub takes the form of a finite, relatively stiff elastic structure oriented in a manner that enables only small elastic motion perpendicular to the fluid-structure interface to be admitted and transferred into the flow.Ensuring small vibrations at the surface allows the PSub to modulate mostly (to the extent allowed in practice) a single-velocity component of the instability field (the wall-normal component in the present study), as opposed to simultaneously influencing multiple components at once.With these conditions in place, the PSub is engineered to exhibit specific frequency-dependent amplitude and phase response characteristics at the edge exposed to the flow.These quantities represent the two core properties on which the PSubs design theory is based on.Figure 1 provides an illustration of a PSub in operation, showing clearly its ability to attenuate the instability field exactly within the region in the flow where the PSub is installed [22]. A PSub may be composed of any form of phononic materials ¶.The study of phononic materials, in general, is an area that has received tremendous attention in the literature over the past three decades [26,27].Figure 2 displays a schematic of two types of PSubs.The model in Fig. 2a is based on a phononic-crystal (PnC) rod comprised of a repeated layering of acrylonitrile butadiene styrene (ABS) polymer and aluminum; this is the original configuration used in Ref. [22].In contrast, the PSub configuration in Fig. 2b is formed from a locally-resonant elastic metamaterial (MM) which is here realized in the form of a homogeneous ABS polymer rod with a repeated inclusion of spring-mass units to serve as intrinsic local resonators.Both structures comprise five unit cells in the schematic and throughout the paper.Phononic crystals draw their unique wave propagation properties from wave interference mechanisms, namely Bragg scattering [35].This typically requires the unit cells to be relatively long for intervention at a given frequency regime; e.g., the unit cell used in Ref. [22] is 40 cm long to enable control of a flow instability near 2 kHz.A 10-unit-cell PSub, in that case, would be 4 m long extending into the subsurface in the wallnormal direction, prohibiting practical deployment.The schematic shown in Fig. 
1 is of this particular PSub, passively stabilizing a TS wave [22].Elastic metamaterials, on the other hand, produce their unique wave propagation properties via resonance hybridization−an elastodynamic coupling mechanism that frees the unit cell from any length constraints [36].An alternative PSub configuration for overcoming this length constraint is a coiled PnC [23,37].The reader may refer to books [38,39,40] and extensive reviews [26,27] for in-depth description and analysis of phononic crystals and elastic metamaterials.Another key PSub dimension is its length along the streamwise direction; this has to be designed to correlate with the wavelength of the instability waves (or range of wavelengths in the case of multiple instability waves). In this paper, the notion of a metamaterial-based PSub is introduced to mitigate the large unit-cell length limitation imposed on a phononic-crystal-based PSub.Furthermore, we demonstrate the desirable precision of the PSub's impact on the flow field, showing that the alterations to the flow structure are spatially localized, as targeted, with no (or insignificant) undesirable behavior downstream to the position in the flow where a PSub is installed.The type and intensity of control are also shown to take effect with design precision.Finally, we provide a rigorous analysis of the effect of the PSub on the intrinsic flow dynamics, quantitatively demonstrating the mechanism of the rate of energy exchange between a flow instability and the mean flow as a result of the presence of a PSub.We also examine the PSub's spatial influence on the flow vector field, and, conversely, the flow instability's spatial influence on the elastodynamic energy field within the PSub itself.The unique ability to passively enact perfect wave synchronization across the PSub and flow domains is explicitly demonstrated. provides significantly favorable dynamical properties and attributes because a phononic material has a frequency band structure [26,27] and when rendered finite gives unique local and global resonance characteristics [28,29,30,31,32,33,34] that are not attainable by conventional materials and that are highly controllable by design. .Each PSub is installed in the flow subsurface and extends all the way to allow for direct exposition to the flow.Flow instabilities, e.g., TS waves, will excite the PSub at the top edge (i.e., at the fluid-structure interface), and the PSub, in turn, will respond at or near its structural resonance and out of phase at the excitation point.This passive process will repeat and cause sustained attenuation of reoccurring and continuously incoming instability waves.Alternatively, the PSub could be designed to trigger destabilization instead of stabilization by producing an in-phase elastic response. PSub Design Theory The general elements of the PSub design theory were outlined in Ref. [22].A PSub configuration consists of a finite phononic structure with its principle path of elastic wave propagation typically oriented orthogonal to the fluid-structure interface to enable "pointwise" spatial control as needed.A PSub is engineered offline to exhibit target frequency-dependent amplitude and phase response characteristics at the edge exposed to the flow, i.e., at the top end of each PSub shown in Fig. 
2.This pair of response quantities at this location represents the two principle properties targeted by the PSub design theory.In all cases, the PSub edge should be ensured to vibrate at, or close to, resonance at the frequency of the instability to be controlled.A high vibration amplitude allows for strong interaction with the flow.Yet, still, the regime of operation is intentionally limited to small elastic vibrations, where the local fluid-structure interface remains practically flat, and large finite deformations of the solid surface are avoided.As described above, this confines the control to exclusively, or predominantly, the vertical, i.e., wall-normal, component of the perturbation velocity field (see Section 4 for an analysis and further discussion on this aspect).As for the phase, the PSub is designed to display a negative phase (out of phase) if the target is stabilization or a positive phase (in phase) if the target is destabilization.Given the importance of both the vibration amplitude and phase, a performance metric P was introduced and is defined as the frequency-dependent product of the two quantities.Negative and positive values of P correspond to flow stabilization and destabilization, respectively.The absolute value |P | indicates the strength of the stabilization or destabilization.For example, to impede the growth of a particular instability to delay the transition to turbulence, the PSub is designed to exhibit a strongly negative P value at the frequency of the instability.For a range of instability frequencies, the PSub would need to display this property over that frequency range.As for the spatial size or width of a PSub along the downstream direction; this is tuned according to the wavelength of the flow wave instability to be controlled.In Ref. [22], the PSub length was set to be roughly one quarter of the wavelength of the unstable flow wave. From the flow's perspective, the phase of the elastic waves returning to the flow−after being passively processed by the PSub−will cause destructive or constructive interference with the vertical velocity component of the continuously incoming instability waves.This, in turn, will influence the work done by or on the instability field, causing either a diminishing or an enhancing effect on the transfer of energy from the mean flow into the instability, depending on whether the PSub is designed to stabilize or destabilize, respectively.Figure 3 provides a schematic illustration of this mechanism.This effect on the exchange of energy with the mean flow is quantified by what is known as the production rate term, an averaged quantity involving the wall-normal and streamwise (vertical and horizontal, respectively, in Fig. 2) components of the instability field, which is derived from the Navier-Stokes equations governing the flow [22]. Stop-band truncation-resonance approach In Ref. [22], a PnC was used to form the PSub structure.The frequency band structure of a unit cell for this PnC is shown in Fig. 
4a.The finite extent of the PnC represents a symmetry breaking, or truncation, of an otherwise idealized PnC with an infinite extent.The symmetry breaking has been taken to our advantage as it created a truncation resonance inside a band gap [29,30,31,32,33,34], the first Bragg band gap for the unit cell considered.Associated with the truncation resonance, there is a phase change from positive to negative as the frequency is increased, allowing us to yield a negative value of P with a high absolute value at frequencies higher than the truncation resonance frequency.Furthermore, the negative P properties extend over a relatively wide frequency range compared to what is produced by a standard structural resonance associated with, for example, a statically-equivalent homogeneous structure.The higher the value of |P |, the stronger the control, in the negative for stabilization or in the positive for destabilization, and the broader its frequency range, the more robust the control effect.The performance metric for a PnC-based PSub versus a PSub comprising a statically-equivalent homogeneous structure is shown and contrasted in Fig. 5a. Pass-band lowered-resonance approach As mentioned above, a PnC-based PSub must be relatively long to accommodate low-frequency instabilities.To mitigate this limitation, we demonstrate in this paper the concept of a subwavelength PSub using a locally-resonant elastic MM.An elastic MM may be designed to feature a band gap in the subwavelength regime (i.e., where the wavelength of the elastic wave is larger than the unit-cell size of the periodic medium [36]), as shown in Fig. 4b.In this case, it is also possible to produce a finite structure with a truncation resonance inside a subwavelength band gap [41,42,43,44].However, here we provide an alternative approach whereby the PSub resonance that we utilize is a sub-hybridization resonance.This global-structure resonance appears at a frequency lower than the subwavelength band gap, which otherwise would appear at a much higher frequency if the band gap did not exist. The performance metric for an MM-based PSub versus the same PSub without the local resonators is shown and contrasted in Fig. 5b.The lowest resonance for the MMbased PSub is near 1550 Hz, whereas the lowest resonance for the same rod without the resonators is near 8000 Hz. 
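To make the sign convention behind P concrete, the short sketch below computes the amplitude, phase, and their product for a simple fixed-free homogeneous rod driven harmonically at its free end. The rod, its material values, the damping constants, and the phase-folding convention are illustrative assumptions rather than an actual PSub design; the same amplitude-times-phase bookkeeping applies to the PnC- and MM-based structures compared above.

```python
import numpy as np

# Illustrative sketch: amplitude, phase, and performance metric P(f) for a
# fixed-free homogeneous rod forced harmonically at its free (top) end.
# Material values, damping constants, and discretization are assumed here.
E, rho, A = 3.0e9, 1200.0, 1.0            # ABS-like modulus [Pa], density [kg/m^3], unit area
L, n_el = 0.05, 60                        # rod length [m], number of 2-node elements
q1, q2 = 1.0e2, 1.0e-7                    # assumed proportional damping constants

le = L / n_el
n_nodes = n_el + 1
K = np.zeros((n_nodes, n_nodes))
M = np.zeros((n_nodes, n_nodes))
ke = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
me = (rho * A * le / 2.0) * np.eye(2)     # lumped element mass
for e in range(n_el):
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    M[np.ix_(idx, idx)] += me
C = q1 * M + q2 * K                       # viscous proportional damping

# Node 0 is the flow-exposed (free) top end; the last node is fixed.
free = np.arange(n_nodes - 1)
Kf, Mf, Cf = K[np.ix_(free, free)], M[np.ix_(free, free)], C[np.ix_(free, free)]

freqs = np.linspace(100.0, 10000.0, 2000)         # Hz
force = np.zeros(len(free), dtype=complex)
force[0] = 1.0                                    # unit harmonic force at the top node
P = np.zeros_like(freqs)
for i, f in enumerate(freqs):
    w = 2.0 * np.pi * f
    D = Kf + 1j * w * Cf - (w ** 2) * Mf          # dynamic stiffness matrix
    u_top = np.linalg.solve(D, force)[0]          # complex top-edge displacement
    amp = np.abs(u_top)
    phase = np.arctan(np.tan(np.angle(u_top)))    # fold the phase into [-pi/2, pi/2]
    P[i] = amp * phase                            # performance metric (amplitude x phase)

# Strongly negative P marks candidate frequencies for flow stabilization;
# strongly positive P marks candidates for destabilization.
print("most stabilizing frequency in sweep: %.0f Hz (P = %.2e)" % (freqs[np.argmin(P)], P.min()))
```

Frequencies where this product is deeply negative play the role of the stabilization (green) regions sketched in Fig. 5, while deeply positive values play the role of the destabilization (red) regions.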
Models and Methods As described in Section 2, a PSub is designed without the need for any coupled fluid-structure simulations−a trait that is indicative of the mechanistic nature of the theory of phononic subsurfaces. The theory entails producing four key plots that allow for full characterization of the properties of a given PSub configuration [22]. The first is the dispersion curves (elastic band structure) for the unit cell from which the PSub is formed. The second and third plots are the frequency-dependent amplitude and phase response of the PSub, with both the excitation and response being at the edge that will be exposed to the flow. The fourth characterization calculation produces the frequency-dependent performance metric P for the PSub, defined as the product of the amplitude and phase as mentioned earlier. These plots allow for prediction of the changes that will occur in the instability field in the flow when the PSub is installed. A simulation of the flow coupled to the PSub is then run only to verify and assess the performance. In the simulation, the Navier-Stokes equations are solved simultaneously with Newton's second law governing the elastodynamic motion in the PSub, with appropriate boundary conditions applied at the fluid-structure interface. This section briefly describes the models, numerical procedures, and physical parameters used throughout the paper. The reader is referred to Ref. [22] for more details on the modeling and solution methods. PSub model and analysis approach All the PSub structures we investigate are modeled as one-dimensional (1D) linear elastic solid rods with a constant cross-sectional area, where the elastodynamic motion is governed by ρ_s η̈ + C η̇ − (E η,s),s = f, (1) where the structure's axial spatial coordinate and time are denoted by s and t, respectively, and ρ_s = ρ_s(s), E = E(s), C = C(s), η = η(s, t), and f = f(s, t) represent the material density, elastic modulus, damping constant, longitudinal displacement, and external force, respectively. Differentiation with respect to position is indicated by (.),s, and the superposed single dot (˙) and double dot (¨) denote the first and second time derivatives, respectively. The dispersion curves for a given PSub unit-cell configuration are obtained by setting the force f to zero in Eq. (1) and applying Bloch's theorem [45,46]; this yields a relationship between the frequency ω and the wavenumber κ for longitudinal wave propagation along the axis of the rod. The amplitude and phase response of a finite version of the PSub composed of n_c repeated unit cells are obtained by solving Eq.
( 1) as a boundary value problem.Free and fixed boundary conditions are chosen for the PSub top and bottom edges, respectively, i.e., η ,s (0, t) = 0 and η(l, t) = 0, where l = n c a UC , and a UC is the PSub unit-cell length.The top end is excited harmonically, i.e., f (0, t) = f (0)e iωet and f (s, t) = 0 for s > 0, where ω e is the excitation frequency and f is the amplitude of the forcing.The displacement response is given by η(s, t) = η(s)e iωet , where η(s) is the amplitude of the response.The phase φ = φ(s) is formulated to span the range −π/2 ≤ φ ≤ π/2.When running the coupled fluid-structure simulations, the PSub model must be treated as an initial boundary value problem where no assumptions are made for the displacement field's temporal dependency.As in the steady-state analysis, we set f (s, t * ) = 0 for s > 0 and the value of f (0, t * ) is fed in from an integration of the pressure field exerted by the flow at each time step in the simulation covering the time interval 0 ≤ t * ≤ t * T , where t * and t * T are the coupled simulation dimensional time and end time (in seconds), respectively. The PSub rod material/structure is numerically analyzed using the finite-element (FE) method utilizing 1D 2-node iso-parametric elements [47].Damping is introduced in the form of viscous proportional damping, which yields a unit-cell damping matrix defined as C = q 1 M+q 2 K, where q 1 and q 2 are damping constants and M and K denote the FE mass and stiffness matrices, respectively.The dispersion curves are obtained for values of wavenumber in the range 0 ≤ κ ≤ π/a UC [48].The number of nodes in the unit cell is denoted by n m .For the finite version of the PSub, the number of nodes along the full structure is n s = n c (n m − 1) + 1.For the wave propagation simulation problem, the second-order Newmark time integration scheme is used with the dimensional time step increment ∆t * .We use an implicit version of the scheme by selecting the parameters γ = 1/2 and β = 1/4 in the formulation provided in Ref. [22]. Model of unstable channel flow with PSub installed and simulation approach We examine spatially evolving instabilities in fully-developed incompressible plane channel flows, also known as Poiseuille flows.The flow is driven by a mean pressure gradient between two parallel walls that are nominally rigid except for the region where the PSub is located.An exact solution of the Navier-Stokes equations gives the mean velocity for the flow field [49,50], which is considered the base inflow.The dynamic stability in this flow is governed by the Orr-Sommerfeld equation [51,52,53], which is obtained by linearizing the Navier-Stokes equations using the normal assumption.As mentioned in Section 1, we consider TS waves as examples of two-dimensional (2D) evolving instabilities in parallel shear flows.These waves are represented by growing eigensolutions of the Orr-Sommerfeld equation and have been observed in laboratory experiments for channel flows [54] and earlier in boundary layer flows [55,56].In our coupled fluid-structure simulations, we superimpose a particular Orr-Sommerfeld unstable spatial mode at the channel inflow boundary.This causes an excitation of the parabolic base velocity, which provides a representative model of an unstable spatiallyevolving transitional flow in a typical laboratory experiment. 
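For the Bloch analysis of Section 3.1, the dispersion relation of a two-phase unit cell like the ABS/aluminum PnC can be sketched with the classical one-dimensional transfer-matrix expression for longitudinal waves. The equal 20-cm layer split assumed below is an illustrative guess (the actual layer thicknesses of the PSub design may differ), so the band-gap location printed here is not expected to coincide with the ~1.7 kHz design point discussed in Section 4.

```python
import numpy as np

# Sketch of the Bloch dispersion relation for a 1D bilayer (ABS/aluminum) unit
# cell using the classical transfer-matrix expression for longitudinal waves:
#   cos(kappa*a) = cos(w*d1/c1)*cos(w*d2/c2)
#                  - 0.5*(Z1/Z2 + Z2/Z1)*sin(w*d1/c1)*sin(w*d2/c2)
# The equal layer split below is an assumption; the real design may differ.
E1, rho1 = 2.4e9, 1040.0          # ABS polymer (Section 3.3)
E2, rho2 = 68.8e9, 2700.0         # aluminum (Section 3.3)
d1 = d2 = 0.20                    # assumed layer thicknesses [m] (40-cm unit cell)
a = d1 + d2

c1, c2 = np.sqrt(E1 / rho1), np.sqrt(E2 / rho2)   # longitudinal wave speeds
Z1, Z2 = rho1 * c1, rho2 * c2                     # acoustic impedances

freqs = np.linspace(1.0, 4000.0, 8000)            # Hz
w = 2.0 * np.pi * freqs
rhs = (np.cos(w * d1 / c1) * np.cos(w * d2 / c2)
       - 0.5 * (Z1 / Z2 + Z2 / Z1) * np.sin(w * d1 / c1) * np.sin(w * d2 / c2))

propagating = np.abs(rhs) <= 1.0                  # |cos(kappa*a)| <= 1 -> pass band
kappa = np.full_like(freqs, np.nan)
kappa[propagating] = np.arccos(rhs[propagating]) / a   # Bloch wavenumber [rad/m]

# Frequencies with |rhs| > 1 support only evanescent Bloch waves (band gap).
in_gap = ~propagating
if in_gap.any():
    print("band-gap frequencies detected between roughly %.0f and %.0f Hz"
          % (freqs[in_gap][0], freqs[in_gap][-1]))
```

In a design calculation, the layer-thickness ratio would be tuned so that the gap (and the truncation resonance it hosts once the structure is made finite) lands at the target instability frequency.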
The simulations are based on the time-dependent, three-dimensional Navier-Stokes equations, where the channel half-height δ and the centerline velocity U_c are used for nondimensionalization. The continuity and momentum equations are, respectively, ∇ · u = 0 (2) and ∂u/∂t + (u · ∇)u = −∇p + (1/Re)∇²u, (3) where u(x, y, z, t) = (u, v, w) is the velocity vector with components in the streamwise x, wall-normal y, and spanwise z directions, respectively, and p is the nondimensional pressure. Moreover, Re = U_c δ/ν_f is the Reynolds number based on the centerline velocity, ν_f is the kinematic viscosity, and t (in this context) is the nondimensional time. The ranges of the wall-normal and spanwise domains are 0 ≤ y ≤ 2 and 0 ≤ z ≤ 2π, respectively. We decompose the velocity vector in Eqs. (2) and (3) into u = ū + û, where ū is the mean flow component obtained by averaging u over a time range and û is the perturbation (instability) component. With this decomposition, p = p̄ + p̂, where (ˆ) represents the perturbation part of the flow. The initial and boundary conditions for the decomposed velocity field in an all-rigid-wall channel prescribe the parabolic base flow with the Orr-Sommerfeld mode superimposed at the inflow, where A_2D is the amplitude of the 2D perturbation, u_e2D is the Orr-Sommerfeld eigenfunction we prescribe, and ω_TS is the perturbation dimensionless frequency (which is a real quantity). Only the û and v̂ components of u_e2D are nonzero. Furthermore, periodic boundary conditions are applied in the z direction, and a non-reflective buffer domain is appended to the physical domain for the outflow boundary conditions [22,57,58,59]. The complex wave speed of the perturbation is defined as c = −ω_TS/α, where α = α_R + iα_I denotes the complex wavenumber [60]. The perturbation grows in space when −α_I > 0. The PSub installation region covers a streamwise distance from x_s to x_e and extends uniformly across the entire spanwise direction. For the coupled simulations throughout this paper, (•)* represents dimensional quantities, whereas the omission of the asterisk symbol denotes dimensionless flow quantities. We define the dimensional wall pressure as p*_w = p̄ ρ_f U_c², where ρ_f is the fluid density and p̄ is the pressure averaged between x_s and x_e. At every time step, this quantity is computed on the fluid-structure interface and acts on the top edge of the PSub as a force. Conversely, the resultant displacement η(0, t*) and velocity η̇(0, t*) obtained from the time integration of the structure model are imposed as boundary conditions on the flow field at the interface [22]. These boundary conditions ensure that the stresses and velocities match at the fluid-structure interface and are valid as long as η ≪ δ is maintained throughout the computations. Referred to as transpiration boundary conditions [61,62], Eqs. (5a) and (5b) are obtained by keeping the interface location fixed and retaining only the linear terms following a Taylor series expansion of the exact interface compatibility conditions. Other boundary conditions have been examined by Barnes et al. [23], giving qualitatively similar results. Given our assumption of small displacements, these fluid-structure interface boundary conditions allow wall motion predominantly along the wall-normal y-direction, since the rod admits elastic motion only along its axis. The spanwise velocity w is zero at the interface.
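On the structural side of this coupling, the PSub is advanced in time with the implicit Newmark scheme (γ = 1/2, β = 1/4) noted in Section 3.1. The following minimal sketch shows one such step for a generic system M q̈ + C q̇ + K q = f; the 2-DOF matrices and the sinusoidal top-node force are placeholders, whereas in the coupled solver the assembled PSub matrices would be used and the force would come from the averaged wall pressure p*_w at each time step.

```python
import numpy as np

def newmark_step(M, C, K, q, qd, qdd, f_next, dt, gamma=0.5, beta=0.25):
    """One implicit Newmark step for M q'' + C q' + K q = f.

    q, qd, qdd are displacement, velocity, and acceleration at the current
    time level; f_next is the external force at the next level. gamma = 1/2,
    beta = 1/4 gives the unconditionally stable average-acceleration variant.
    """
    a0 = 1.0 / (beta * dt ** 2)
    a1 = gamma / (beta * dt)
    Keff = K + a0 * M + a1 * C
    rhs = (f_next
           + M @ (a0 * q + qd / (beta * dt) + (1.0 / (2.0 * beta) - 1.0) * qdd)
           + C @ (a1 * q + (gamma / beta - 1.0) * qd
                  + dt * (gamma / (2.0 * beta) - 1.0) * qdd))
    q_new = np.linalg.solve(Keff, rhs)
    qdd_new = a0 * (q_new - q) - qd / (beta * dt) - (1.0 / (2.0 * beta) - 1.0) * qdd
    qd_new = qd + dt * ((1.0 - gamma) * qdd + gamma * qdd_new)
    return q_new, qd_new, qdd_new

# Placeholder 2-DOF system; in the coupled solver the assembled PSub FE
# matrices are used and f[0] is the integrated wall-pressure force at the
# fluid-structure interface, updated every flow time step.
M = np.diag([1.0, 1.0])
K = np.array([[2.0e6, -1.0e6], [-1.0e6, 1.0e6]])
C = 1.0e-5 * K
q, qd, qdd = np.zeros(2), np.zeros(2), np.zeros(2)
dt = 3.0e-7                                        # comparable to the time steps quoted below
for n in range(1, 1001):
    f = np.zeros(2)
    f[0] = np.sin(2.0 * np.pi * 1500.0 * n * dt)   # stand-in for the wall-pressure forcing
    q, qd, qdd = newmark_step(M, C, K, q, qd, qdd, f, dt)
print("top-node displacement after 1000 steps:", q[0])
```

In the coupled procedure, the returned q[0] and qd[0] would play the roles of η(0, t*) and η̇(0, t*) that are handed back to the flow solver as the interface boundary conditions described above.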
For the flow field, the Navier-Stokes equations are integrated using a timesplitting scheme [57,58,59] on a staggered structured grid system, in which the velocity components are computed at the edges, and the pressure is determined at the centers.The wall-normal diffusion term is discretized by implementing the implicit Crank-Nicolson method, and the Adam-Bashforth scheme is used for an explicit treatment of all the other terms.This numerical procedure was verified with the linear theory giving a maximum deviation of 0.05% in the predicted perturbation energy growth [59].Since the equations for the fluid and the structure are inverted separately in the coupled simulations, a conventional serial staggered scheme [63] is implemented to couple the two sets of time integration. Model parameters Table 1 lists the geometric parameters and material properties of the PSubs we examine in this paper.For the PnC-based PSub, we select the values of 2.4 GPa and 1040 kg/m 3 for the elastic modulus and density of ABS polymer, respectively, and the corresponding values of 68.8 GPa and 2700 kg/m 3 for the Al.The unit cell of the PnC rod consists of two layers, aluminum (Al) and ABS polymer.The PSub comprises 5 unit cells, each with a length of a PnC = 40 cm (i.e., l PnC = 2 m).In the FE analysis, each unit cell is discretized into 50 linear elements; hence, the structure has 250 degrees of freedom considering the fixed end at the bottom.The unit cell of the MM-based PSub consists of a homogeneous rod made out of ABS polymer and a local mass-spring resonator attached at the center.This configuration may be realized in practice by, for example, a rod/beam structure with pillars periodically attached to represent the resonators [64,65,66,67].We choose the elastic modulus and density of ABS polymer to be 3 GPa and 1200 kg/m 3 , respectively.The unit cell has a length of a MM = 1 cm, and the PSub is formed from either 5 (l MM = 5 cm), 10 (l MM = 10 cm), 15 (l MM = 15 cm), or 20 (l MM = 20 cm) unit cells.The resonator's mass and spring stiffness are tunable according to the target instability frequency.In the nominal case, the resonator's frequency is set to f res = 2000 Hz.The resonator's point mass is set to be ten times higher than the total mass of the rod portion in the unit cell, m res = 10 × ρ ABS a MM ; this gives a resonator's stiffness equal to k res = m res (2πf res ) 2 .The metamaterial unit cell is discretized into seven FE elements (including six rods and one mass-resonator elements); thus each unit cell has eight degrees of freedom including that of the resonator.A 5 unit-cell MM-based PSub would therefore have 35 degrees of freedom by applying fixed boundary conditions at the bottom.The reader is referred to Ref. [68] for details on the dispersion behavior of this particular elastic metamaterial configuration. 
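As a rough numerical check of these parameters, the sketch below assembles a lumped-mass finite-element model of the 5-unit-cell MM-based PSub (unit cross-sectional area assumed, resonators attached at the cell centers, fixed at the bottom) and solves for its lowest natural frequency, alongside the quarter-wave estimate for the bare ABS rod. Because of the simplified lumping and discretization, the computed value should only approximate the 1547.3 Hz sub-hybridization resonance and the near-8000 Hz bare-rod resonance quoted elsewhere in the paper.

```python
import numpy as np

# Lumped-mass FE sketch of the 5-unit-cell MM-based PSub: homogeneous ABS rod
# with one spring-mass resonator per unit cell, fixed at the bottom and free at
# the flow-exposed top. A unit cross-sectional area is assumed, so all masses
# are per unit area; the simplified lumping makes the result approximate.
E, rho = 3.0e9, 1200.0                   # ABS polymer (Section 3.3)
a_mm, n_cells, n_el_cell = 0.01, 5, 6    # unit-cell length [m], cells, rod elements per cell
f_res = 2000.0                           # resonator tuning frequency [Hz]
m_res = 10.0 * rho * a_mm                # resonator mass (10x the rod mass per cell)
k_res = m_res * (2.0 * np.pi * f_res) ** 2

n_el = n_cells * n_el_cell
n_rod = n_el + 1                          # rod nodes; node 0 is the top (flow side)
le = a_mm / n_el_cell
ndof = n_rod + n_cells                    # rod nodes plus one resonator DOF per cell
K = np.zeros((ndof, ndof))
M = np.zeros((ndof, ndof))
ke = (E / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n_el):                     # assemble the rod elements
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    M[e, e] += rho * le / 2.0
    M[e + 1, e + 1] += rho * le / 2.0
for c in range(n_cells):                  # attach a resonator at each cell's center node
    i = c * n_el_cell + n_el_cell // 2
    j = n_rod + c
    K[i, i] += k_res; K[j, j] += k_res
    K[i, j] -= k_res; K[j, i] -= k_res
    M[j, j] += m_res

keep = [dof for dof in range(ndof) if dof != n_el]    # fix the bottom rod node
Kf, Mf = K[np.ix_(keep, keep)], M[np.ix_(keep, keep)]
d = np.sqrt(np.diag(Mf))                              # M is diagonal (lumped)
lam = np.linalg.eigvalsh(Kf / np.outer(d, d))         # generalized eigenvalues
f_lowest = np.sqrt(lam[0]) / (2.0 * np.pi)

c_abs = np.sqrt(E / rho)
f_bare = c_abs / (4.0 * n_cells * a_mm)               # quarter-wave estimate, bare rod
print("bare fixed-free ABS rod, first resonance ~ %.0f Hz" % f_bare)
print("MM rod with resonators, lowest mode    ~ %.0f Hz" % f_lowest)
```

The point of the comparison is that attaching the heavy, 2000-Hz-tuned resonators pulls the lowest structural resonance far below the bare-rod value, which is precisely the pass-band lowered-resonance effect exploited in the MM-based PSub design.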
The coupled fluid-structure simulations are based on Re = 7500, incorporating an instability with nondimensional frequency ω_TS = 0.25 and wavenumber α = 1.0004 − i0.0062, which corresponds to the least-attenuated eigenmode of the Orr-Sommerfeld equation. Utilizing nondimensional analysis to simulate a given TS wave with a dimensional frequency f_TS = ω_TS U_c/(2πδ) Hz, we vary the centerline velocity (velocity scale) U_c and the half-height of the channel (length scale) δ accordingly in the DNS code. All simulations are done for liquid water, for which the kinematic viscosity is ν_f = 1 × 10⁻⁶ m²/s. While not considered here, PSubs may also be designed for air by adjusting the elastic compliance of the PSub surface exposed to the flow. The δ quantity varies between the different models examined. For example, a value of δ = 4.23 × 10⁻⁴ m is used for a PnC-based PSub targeting strong stabilization of an instability at 1670 Hz (see details in Section 4.1), and δ = 4.38 × 10⁻⁴ m for an MM-based PSub comprising 5 unit cells and targeting strong stabilization of an instability at 1550.3 Hz (see details in Section 4.2). The corresponding centerline velocities for these PnC-based and MM-based PSub simulations are U_c = 17.72 m/s and U_c = 17.11 m/s, respectively. For all the MM-based and PnC-based PSub simulations, the dimensions of the channel are fixed as L_x = 20δ, L_y = 2δ, and L_z = 2πδ. The fluid domain is discretized into n_x = 225, n_y = 65, and n_z = 8 points in the streamwise, wall-normal, and spanwise directions, respectively. The length of the PSub interface (control surface) along the streamwise direction is approximately a quarter of the instability wavelength, λ_TS = 2πδ/α_R. The front and end edges of the PSub interface in the streamwise direction are x_s/δ ≈ 6 and x_e/δ ≈ 8, respectively. The dimensional time step ∆t* is selected such that 2000 time steps cover a period of the instability wave. Specifically, ∆t* = 3 × 10⁻⁷ s and ∆t* = 3.22 × 10⁻⁷ s for the PnC-based and MM-based PSub simulations, respectively. The dimensional time integration step for the flow is the same as that for the PSub. All the simulations are run for 3 million time steps until t*_end ≈ 1 s, where t*_end is the dimensional time at the end of the coupled fluid-structure simulations. The averaging time window for adequately capturing the relevant statistics for the various cases is chosen to begin when the simulation has become quasi-steady, i.e., when the effect of the initial conditions has faded, and to extend sufficiently long to cover approximately 1000 TS wave periods. The buffer region is sized to 40% of the channel length, ending at the outlet [22]. All the simulations were executed on the RMACC supercomputer Summit at the University of Colorado using parallel computation. Results We now examine the detailed characteristics and actual performance, from coupled fluid-structure simulations, of the two types of PSubs considered in Figs. 2, 4 and 5. PnC-based PSub The four key characterization plots for the PnC-based PSub configuration, whose geometric and material properties are given in Section 3.3, are shown in Fig. 6. This structure is identical to that investigated in Ref.
[22] which was designed for a TS instability with a frequency of 1690 Hz, except here it comprises five unit cells instead of 10.The band structure pertaining to the PSub unit cell features a band gap, as shown by the grey region throughout the four plots.A truncation resonance appears inside the band gap at 1660.3 Hz for five unit cells; and, as shown in the third plot, the phase turns from positive (in-phase) to negative (out-of-phase) at that frequency and stays negative until the next resonance.This, in turn, gives a value of P that is positive at pre-resonance and negative at post-resonance, as shown in the fourth plot.Both the amplitude and phase quantities are determined from isolated steadystate harmonic frequency response analysis of a 5-unit-cell long version of the PSub with fixed support at the bottom, as described in Section 3.1.This contrasts with Ref. [22] where the phase spectrum was obtained by running long-time simulations.For comparison, the characterization curves of the statically equivalent homogeneous structure are superimposed in all plots.It is noticeable that the distance between the resonances and the range of the negative phase for the homogeneous structure near the TS wave frequency peak is markedly narrower than that of the PnC-based PSub.Consequently, the dip in the P curve near the TS wave frequency is both wider (broader) and deeper (higher in absolute value) for the PnC-based PSub compared to the corresponding homogenized structure, as marked in Fig. 6d.This advantage is present for both the functions of stabilization and destabilization.In Fig. 7a, we show a portion of the P -function again and mark the frequency values of four different TS wave instabilities.The first from the left (light orange line) is at 1637 Hz, which intersects the performance metric curve at a relatively low positive value (P = 1.02 × 10 −9 rad•m/N)−indicating the ability to trigger weak destabilization once the PSub is applied to a flow carrying an instability at this particular frequency.The second line from the left (dark red) is at 1650 Hz and can be seen to intersect the P curve at a higher positive value (P = 2.32 × 10 −9 rad•m/N), indicating the ability to cause strong destabilization.The third frequency (dark green line) has a value of 1670 Hz; this intersects with the P curve at a relatively high negative value (P = −2.46× 10 −9 rad•m/N) which would bring rise to strong stabilization.Lastly, the fourth vertical line (light green curve) corresponds to a TS wave with a frequency of 1684 Hz; this intersects with the performance metric curve at a lower negative value (P = −1.04×10−9 rad•m/N) which would cause weak stabilization.Figure 7b shows the actual performance of the PSub in passively controlling each of these instabilities as seen from four separate coupled fluid-structure simulations.To serve as a reference case, a fifth simulation is conducted with no PSub installed (i.e., the flow is exposed to a rigid wall all along) with a TS wave at 1660.3 Hz, corresponding to the center between the resonance and anti-resonance peaks in the PSub performance metric shown in Fig. 
7a. The figure shows a time-averaged quantity of the kinetic energy of the perturbation velocity field plotted as a function of the streamwise position. The perturbation kinetic energy K*_p, in units of J/m, is defined as K*_p(x*) = (ρ_f/2) ∫∫ (⟨û*²⟩ + ⟨v̂*²⟩ + ⟨ŵ*²⟩) dy* dz*, where the integration spans the wall-normal and spanwise extent of the channel, û*, v̂*, and ŵ* are the perturbation velocity components in the streamwise, wall-normal, and spanwise directions, respectively, and ⟨•⟩ denotes time-averaged quantities. The channel flow characteristics are expected to be nonhomogeneous along the streamwise direction due to the presence of the instability and PSub. It is clearly observed from Fig. 7b that the K*_p of the instability field rises above the reference rigid-wall case for the destabilization cases and falls under it for the stabilization cases, and this rise or fall takes place exactly where the PSub is installed (as indicated by the two vertical lines). Furthermore, the intensity of the rise or fall of K*_p is consistent with the absolute value of the performance metric at the frequency intersections in Fig. 7a, where a small value of |P| correlates with a weak change in K*_p and a large value of |P| correlates with a strong change in K*_p. We also observe that the K*_p levels return to nearly the same level as the reference rigid-wall case downstream of the PSub, which is a desired outcome as it indicates precise local control of the instability field. The stronger the stabilization or destabilization within the PSub region, the larger the offset of K*_p in the far downstream region compared to the rigid-wall case. In Fig. 7c, we present the skin-friction coefficient calculated at the bottom wall of the channel where the PSub is installed. The skin-friction coefficient C_f for channel flows is defined as C_f = τ̄*_w/((1/2) ρ_f U_B²), where τ̄*_w = µ_f (∂ū*/∂y*)|_wall is the wall mean shear stress, µ_f is the fluid's dynamic viscosity, and U_B is the bulk velocity. The mean shear stress at the wall was computed using a polynomial fit. We observe that the skin-friction coefficient decreases in the stabilization cases and increases in the destabilization cases within the PSub control region. The behavior of the skin friction is, therefore, compatible with what we observe for the perturbation kinetic energy in Fig. 7b. In the region where the perturbation kinetic energy decreases, the wall mean shear stress τ*_w also reduces, resulting in a drop in the skin-friction coefficient values, and vice versa for the destabilization cases. This reduction (or enhancement) for the stabilization (or destabilization) is mild (less than 0.5%) because the PSub region is relatively small, and the TS wave examined is growing slowly and represents a small linear perturbation in the flow. Nevertheless, the PSub is shown to influence the flow precisely as desired, and a stronger influence on the skin friction will be achieved with greater area coverage by PSubs acting on more dominant instability fields. A PSub designed to exhibit even higher values of |P| at the instability frequency will also cause stronger changes to the skin-friction coefficient. The time-averaged spatial distribution of K*_p over both the x and y directions is shown in Fig. 8, for the rigid-wall (Fig. 8a), weak stabilization (Fig. 8b), and weak destabilization (Fig. 8d) cases, respectively. Figure 8c examines a case for ω_TS = 1444 Hz, which corresponds to P = 0, thus offering a neutral effect. For Figs.
8b-d, we also show the corresponding time-averaged quantities of the total elastodynamic energy within the PSub, defined as Ψ(s, t * ) = 1 2 (Eη 2 + ρ s η2 ), as obtained simultaneously from the same coupled simulations.The peaks in the PSub total energy plots correspond to the regions occupied by the aluminum layers, where the speed of sound is higher than that of the ABS polymer layers.We observe the total energy profile for the stabilization case (Fig. 8b) to be lower overall than that of the destabilization case (Fig. 8d), which is expected because the former admitting at out-of-phase (cancelling) wave motion across the fluid-structure assembly, and the latter is admitting in-phase (adding up) wave motion.The total elastodynamic energy in the neutral case (Fig. 8c) is very small (almost negligible) because the PSub response amplitude at that frequency is zero (hence P = 0), thus preventing the system from experiencing any substantial fluid-structure interaction.The results of Fig. 8 demonstrate, most explicitly, a remarkable passive synchronization of response across both the PSub structure and the coupled flowing fluid. MM-based PSub As described in Section 2.2, the MM-based PSub design approach utilizes a passband resonance that has been lowered in its frequency value due to the presence of a locally resonant hybridization band gap.The unit-cell dispersion diagram of the MM-based PSub configuration whose material and geometric properties are given in the following subwavelength resonance frequency for each case: 1547.3Hz (5-unit-cell PSub), 1032.4Hz (10-unit-cell PSub), 742.2 Hz (15-unit-cell PSub), and 573 Hz (20unit-cell PSub).The longer the PSub we can afford to install, the lower the frequency we can target for TS wave stabilization or destabilization for a given MM unit-cell configuration.As seen in Fig. 9b, the PSub unit-cell band gap enables the generation of several structural resonances at frequencies lower than the band gap, which itself is already in the subwavelength regime.In particular, for the shortest PSub with 5 unit cells, we employ the resonance at 1547.3 Hz for our flow control objective.Similar to the results for the PnC-based PSub shown in Fig. 7, when comparing Fig. 10a with Fig. 10b we observe a direct correlation between the P value at the intersection with the TS wave frequency and the corresponding actual K * p performance in the flow simulation.Once again, we observe a perfect a priori prediction of whether the TS wave stabilizes or destabilizes, and at what level in each case.Furthermore, similar to the PnC-based PSub cases, all the reductions in K * p take place exactly where the PSub is placed, and, favorably, the K * p levels return to nearly the same level of the reference rigid-wall case downstream to the PSub. Figure .10c displays the corresponding skin-friction coefficient calculated at the bottom wall of the channel, with qualitatively similar results to the PnC-based PSub results shown in Fig. 7c.The rigid-wall case here is taken for a TS wave at 1547.3 Hz, corresponding to the center between the resonance and anti-resonance peaks in the PSub P metric shown in Fig. 10a. The time-averaged spatial distribution of K * p in the flow and the corresponding time-averaged total elastodynamic energy Ψ(s, t * ) within the MM-based PSub are shown in Fig. 11.The rigid-wall (Fig. 11a), strong stabilization (Fig. 11b), and strong destabilization (Fig. 
11d) cases, as well as a neutral case at 1400 Hz where P ≈ 0 (the MM-based PSub does not generate zero P before the resonance frequency) (Fig. 11c), are shown. The black horizontal lines represent the total energy level of the locally resonating masses depicted in Fig. 2b. In analogy to Fig. 8, we observe the energy in the resonators for the stabilization case (Fig. 11b) to be lower overall than that of the destabilization case (Fig. 11d), and also note that the neutral case (Fig. 11c) experiences very small (almost negligible) energy in the resonators. As in Fig. 8 for a PnC-based PSub, the results of Fig. 11 for an MM-based PSub demonstrate a holistic synchrony in the coupled fluid-structure interaction response, exactly consistent with the corresponding P value in each case. Figure 12 provides a contour plot of the absolute value of the instantaneous velocity perturbation for the strong stabilization and destabilization cases, together with the rigid-wall case for comparison. A snapshot of the instantaneous vector field of the perturbation velocity is overlaid in each subfigure. It is clear that in the PSub region, the stabilization case attains the lowest value of |û| (smallest and least bright yellow spot), followed by the rigid-wall case (where there is no PSub), and then the destabilization case. Consistent with this pattern, the perturbation velocity vector field exhibits the smallest wall-normal components near the wall in the PSub region for the stabilization case, again followed by the rigid-wall case and then the destabilization case. Small wall-normal components compared to the rigid-wall case are indicative of coherent wave cancellation due to the presence of a stabilizing PSub. In contrast, relatively large wall-normal components near the wall are indicative of constructive interference from a destabilizing PSub. In Fig. 13, we examine the exchange of energy within the flow. With no PSub installed, Fig. 13b shows that the mean-flow kinetic energy drops in the upstream region of the channel as the perturbation kinetic energy grows and acquires energy from the mean flow. The trend eventually reverses when the mean flow begins to experience structural changes itself as it carries a growing instability. The time-averaged perturbation kinetic energy for the strong stabilization and destabilization cases is shown, again, in Fig. 13a and contrasted with the corresponding mean-flow component plotted in Fig. 13b. The sum of both components is given in Fig. 13c. The changes incurred in the controlled mean-flow component are very small due to the small magnitude of the perturbation, but they nevertheless reveal valuable qualitative information. In the presence of a PSub, we observe a short rise (fall) in the mean-flow kinetic energy near the upstream border of the PSub while the perturbation kinetic energy drops (rises) for stabilization (destabilization). Subsequently, as the perturbation kinetic energy profile reverses direction, a corresponding opposite change in direction is seen in the mean-flow kinetic energy profile. These trends confirm the energy exchange mechanisms depicted in the Fig. 3 schematic. Fig.
14 examines the influence on the mean flow from a contour-diagram perspective. In this figure, the base flow field is subtracted from the mean flow field, yielding a ū − u_b vector field that is plotted in 2D space. Furthermore, the corresponding time-averaged quantity |ū − u_b| is mapped out using color contours. First, we observe in the rigid-wall case that the velocity vectors point backward (opposite to the flow direction) near the middle of the half-channel and, conversely, point forward near the wall. This pattern reveals that the instability is causing the mean-flow velocity profile to shorten and broaden, demonstrating very early traits of the birth of transition to turbulence. In the stabilization and destabilization plots, we observe an increase (decrease) in the mean-flow resultant amplitude and an upward (downward) pointing of the arrows near the wall for the cases of stabilization (destabilization). This reveals a slower (faster) transition process in comparison with the rigid-wall case, and adds further evidence of the phased energy exchange mechanisms described and discussed earlier. To further examine the underlying anti-resonance and resonance mechanisms within the flow, we compute the production rate of the perturbation energy P*_r, given by P*_r = −⟨û* v̂*⟩ ∂ū*/∂y*. This quantity depicts the energy transfer rate between the mean flow and the instability or, more generally, the rate of perturbation generation (turbulence generation in fully-developed turbulent flows [19,22,69,70,71,72]). Without control, the production rate is generally positive for an unstable laminar flow, indicating a flow resonance phenomenon where energy is being transferred from the mean flow to the instability, causing it to grow as it propagates downstream−hence the positive, upward trend of K*_p that we observe in Figs. 7b and 10b. In contrast, a negative production rate is a PSub-induced flow anti-resonance phenomenon whereby energy is transferred from the instability back to the mean flow. A negative production rate of the perturbation kinetic energy diminishes the intensity of an instability. In Fig. 15, we present the production rate of perturbation kinetic energy (expressed in dimensionless form) with respect to the wall-normal direction at three streamwise x-locations (stations) for both strong (Fig. 15a) and weak (Fig. 15b) passive control. In both plots, the nominal MM-based PSub with five unit cells is used. Since the TS waves are small linear perturbations, we observe only modest changes in the production rate, on the order of ∼10⁻⁶ in dimensionless units; however, these changes elucidate the underlying dynamics of the impact of the PSub on the flow field. In Fig. 15, Station 1 is at the left edge of the PSub, x*/δ = 6.25 (solid curves). This is the position where the flow first "experiences" the influence of the PSub, and according to Fig.
10b, where the strongest reduction in K * p occurs for the stabilization cases.Station 2 is at the right edge of the PSub, x * /δ = 7.86 (dashed-dotted curves).This is the location where the instability initiates its recovery from the effect of the PSub.For the stabilization cases, at this station, we notice the perturbation kinetic energy rises substantially, exceeding even the rigid-wall case, but only for a short distance downstream.The last streamwise station, Station 3, is at x * /δ = 12 (dashed curves) which is at the far downstream where the effect of the PSub has practically vanished, confirming that the influence of the PSub is strictly local, within and very closely around the control region.Similar but opposite trends for P * r are observed for the destabilization cases.A comparison between Figs. 15a and 15b clearly reveals that the absolute strength of the production rate of perturbation kinetic energy is larger for strong PSub control.This is again consistent with the prediction of the performance metric P from Fig. 10a.The impact of the PSub on production rate along the y-direction is also intriguing, showing that it starts with zero at the wall (due to the nominally zero velocity boundary conditions), reaches the peak close to the wall, and then gradually diminishes to zero again around the centerline.The near wall peak of the production rate occurs closer to the wall.Moreover, due to the existence of the PSub at the bottom wall, the flow is not symmetric along wall-normal direction, see Fig. A1.An analysis of the flux of the perturbation energy for PnC-based PSubs is provided in Ref. [22]. Conclusions The theory of phononic subsurfaces enables the design of subsurface structures for the passive responsive control of wall-bounded laminar/transitional flows with growing instabilities.We have investigated an MM-based configuration of PSubs that operates in the elastic subwavelength regime.This renders a PSub much shorter (5 cm) than the PnC-based PSub investigated in Ref. [22] (4 m).We considered channel flows with unstable TS waves as examples for demonstrating the underlying performance of this new form of PSubs.A parallel analysis of a PnC-based PSub was conducted as well for comparison.A PnC-based PSub is designed by tuning a stop-band truncation resonance to engage the target TS wave [22,23], whereas the proposed MM-based PSub uses a pass-band resonance that has been lowered in frequency due to the generation of a subwavelength locally resonant band gap. 
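As a brief post-processing illustration of the two flow diagnostics used throughout Section 4, the sketch below evaluates the perturbation kinetic energy per unit streamwise length and the production-rate profile on a synthetic velocity snapshot. The random perturbation fields, the grid, the parabolic mean profile, and the simple quadrature are stand-ins for actual DNS output and are chosen purely for illustration.

```python
import numpy as np

# Post-processing sketch for the diagnostics used in Section 4: perturbation
# kinetic energy K_p (per unit streamwise length) and production rate P_r(y).
# All fields below are synthetic stand-ins for DNS data at one streamwise station.
rho_f = 1000.0                           # water density [kg/m^3]
ny, nz, nt = 65, 8, 400                  # wall-normal, spanwise, and time samples
delta = 4.4e-4                           # channel half-height [m] (order of the values above)
y = np.linspace(0.0, 2.0 * delta, ny)    # wall-normal coordinate [m]
z = np.linspace(0.0, 2.0 * np.pi * delta, nz)

rng = np.random.default_rng(0)
u_hat = 1.0e-3 * rng.standard_normal((nt, ny, nz))   # streamwise perturbation [m/s]
v_hat = 1.0e-3 * rng.standard_normal((nt, ny, nz))   # wall-normal perturbation [m/s]
w_hat = 1.0e-3 * rng.standard_normal((nt, ny, nz))   # spanwise perturbation [m/s]
u_bar = 17.0 * (1.0 - (y / delta - 1.0) ** 2)        # parabolic mean profile [m/s]

# Time-averaged perturbation kinetic energy integrated over the (y, z) cross
# section, giving an energy per unit streamwise length [J/m].
q2 = np.mean(u_hat ** 2 + v_hat ** 2 + w_hat ** 2, axis=0)   # shape (ny, nz)
dy, dz = y[1] - y[0], z[1] - z[0]
K_p = 0.5 * rho_f * np.sum(q2) * dy * dz
print("K_p  = %.3e J/m" % K_p)

# Production-rate profile, P_r(y) = -<u'v'> dU/dy: positive values indicate
# energy flowing from the mean flow into the perturbation, negative values the reverse.
uv = np.mean(u_hat * v_hat, axis=(0, 2))             # <u'v'> averaged in time and span
dUdy = np.gradient(u_bar, y)
P_r = -uv * dUdy
print("peak |P_r| = %.3e m^2/s^3 at y = %.2e m"
      % (np.max(np.abs(P_r)), y[int(np.argmax(np.abs(P_r)))]))
```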
Both TS wave stabilization and destabilization were demonstrated.It was reaffirmed that the performance metric curve P for a given PSub design (which is calculated a priori without the need for coupled fluid-structure simulations) perfectly predicts both the nature of engagement with the instability (i.e., stabilization versus destabilization) and the intensity of engagement (e.g., weak, moderate, or strong control of the instability).The results clearly display that the perturbation kinetic energy of the flow instability field is altered as desired specifically near the wall in the channel region where the PSub is installed.Furthermore, and importantly, it was shown that the timeaveraged value of K * p returns to nearly the same level as the reference rigid-wall case downstream of the PSub.This ascertains the local nature of PSub-based flow control, which in turn implies the ability to extend control to wider spatial regions by installing more PSubs as desired.The time-averaged total elastodynamic energy in the PSub was also calculated and shown to be relatively low, zero, or high for stabilization, neutral effect, or destabilization, respectively.This demonstrates the coherent nature of the PSub controlled coupled fluid-structure interaction and phased response across both media, and confirms the perfect predictability of the actual response by the predetermined value of the performance metric P .Analysis of the rate of production of the flow perturbation kinetic energy, as a function of both the downstream and wallnormal directions, reveals the intrinsic anti-resonance and resonance mechanisms that take place within the flow when a PSub is installed.For stabilization, a PSub causes steady-state energy transfer from the flow instability into the mean flow at the start of the control region and vice versa closer to its end.The opposite effect takes place for a PSub designed to destabilize the flow. The PSubs theory lays the foundation for a mechanistic, spatially precise, and frequency-and wavenumber-dependent passive and responsive flow control paradigm that is fundamentally based on enabling a targeted contiguous synchronization of wave characteristics across both the flow and an interfacing subsurface elastic structure.Future research will aim to advance PSubs design to enlarge the green area (A S P ) or red area (A D P ) under the P curve in Fig. 10a for flow stabilization or destabilization, respectively.Emphasis will be on both deepening and widening these green and red regions to further strengthen the control and make it more robust over broad-frequency ranges.Ongoing innovative research in phononics (see reviews by Hussein et al. [26], Jin et al. 
[27], and others) will drive this track. Investigation of PSubs will be extended to boundary-layer flows, supersonic and hypersonic flows, advanced transitional flows, and fully developed turbulent flows, among other problems in flow control [1]. Switchable PSub control using piezoelectrics [73,74] is also a potential application. Multifunctional PSub design to target flow control and, simultaneously, vibroacoustic control [75], energy harvesting [76], and/or structural support [77] is another promising research direction that will build on the current investigation.

In the absence of any other disturbances or stochastic variations, the flow remains symmetric in the wall-normal direction y = 1 around the centerline axis of symmetry. Clearly, the intensity of the time-averaged kinetic energy change over the entire channel section increases when two PSubs are applied. This is shown for both the strong (Fig. B1a) and weak (Fig. B1b) stabilization and destabilization cases. The corresponding results for the rate of production of perturbation kinetic energy are shown in Fig. B2. These results indicate promise for the future application of PSubs around the entire circumference of long-range pipelines.

Figure 1. Passive flow stabilization by a PSub. Contours showing the streamwise component of an instability velocity field when a PnC-based PSub is installed (front) versus an all-rigid-wall surface (back) [22]. Yellow color represents low-instability intensity, red color represents high-instability intensity. The reduced color intensity at the position where the PSub is placed is indicative of stabilization exactly at that location.

Figure 2. Schematic of two types of PSubs: (a) a phononic-crystal-based PSub [22] and (b) a locally-resonant elastic metamaterial-based PSub. In this schematic, the PSub in (b) has a unit-cell length 40 times shorter than the PSub in (a). Each PSub is installed in the flow subsurface and extends all the way to allow for direct exposition to the flow. Flow instabilities, e.g., TS waves, will excite the PSub at the top edge (i.e., at the fluid-structure interface), and the PSub, in turn, will respond at or near its structural resonance and out of phase at the excitation point. This passive process will repeat and cause sustained attenuation of reoccurring and continuously incoming instability waves. Alternatively, the PSub could be designed to trigger destabilization instead of stabilization by producing an in-phase elastic response.

Figure 3. Schematic illustration of key energy exchange mechanisms passively triggered by a PSub installed for channel flow control. The flow total velocity field u is decomposed into a mean flow component ū and a perturbation (instability) component û. When designed for stabilization, the PSub causes energy transfer from the instability component to the mean flow component. When designed for destabilization, the opposite effect takes place. The flow model details are given in Section 3.2.

Figure 4.
Dispersion diagram for (a) PnC unit cell and (b) locally resonant elastic MM unit cell with a close-up. The dispersion curves for a corresponding homogeneous rod unit cell in each case are also provided. Schematics of the unit-cell configurations are shown as insets. The frequency and wavenumber are nondimensionalized by multiplication with corresponding unit-cell parameters: a_PnC and a_MM are the PnC and MM unit-cell lengths, respectively, and c_ABS is the long-wave longitudinal speed for the ABS material.

Figure 5. Schematic illustration of two PSub design principles: PnC-based PSub versus MM-based PSub. To produce the desired performance metric properties at the frequency of the instability, in (a) a truncation resonance is utilized (PnC-based PSub), and in (b) a sub-hybridization resonance is utilized (MM-based PSub). The performance metric for the corresponding homogeneous structures is shown for comparison.

(2) and (3) into u = ū + û, where ū is the mean flow component obtained by averaging u over a time range and û is the perturbation (instability) component. With this decomposition, p = p̄ + p̂, where (ˆ) represents the perturbation part of the flow. δ = 4.23 × 10⁻⁴ m is used for a PnC-based PSub targeting strong stabilization of instability at 1670 Hz (see details in Section 4.1) and δ = 4.38 × 10⁻⁴ m for an MM-based PSub comprising 5 unit cells and targeting strong stabilization of instability at 1550.3 Hz (see details in Section 4.2). The corresponding centerline velocities for these PnC-based and MM-based PSub simulations are U_c = 17.72 m/s and U_c = 17.11 m/s, respectively.

Figure 6. Four key characterization plots that form the foundation of the PSubs theory: (a) Dispersion curves for a unit cell from which the PSub is formed. Steady-state vibration (b) amplitude and (c) phase response of the PSub top edge when harmonically excited at the same location. (d) Performance metric obtained by multiplying the amplitude by the phase. The phase is between the force and the displacement at the PSub top edge. All plots are obtained by analyzing a stand-alone FE model of the PSub without yet coupling to the flow. These results are for the PnC-based PSub with a 40-cm long unit cell.

Figure 7. Demonstration of PnC-based PSub performance for both flow stabilization and destabilization. (a) Performance metric curve (grey) and four vertical lines respectively representing four different instability waves investigated (each characterized by a frequency as indicated). Green and red regions quantify the intensity and frequency breadth of the stabilization and destabilization capacity of the PSub; the grey region represents the frequency range of the band gap. Time-integrated (b) kinetic energy of the flow perturbation (instability) and (c) skin-friction coefficient as a function of streamwise position for each of the four cases as obtained from coupled flow-PSub simulations. The PSub location spans the distance between the two dashed lines as indicated. The responses quantitatively correlate with the frequency-performance metric intersection values in (a), indicating a perfect prediction of PSub performance.

Figure 8.
Synchronized passive phased response and energy exchange between PnC-based PSub and flow perturbation (instability) field. The colored contours show the time-averaged spatial distribution of the perturbation kinetic energy within the flow. The black curves represent the time-averaged total elastodynamic energy in the PSub. Stabilization (P < 0) and destabilization (P > 0) cases are shown in (b) and (d), respectively, whereas a neutral case (P = 0) is shown in (c). The rigid-wall case is shown in (a) as a reference.

Figure 9. Four key characterization plots for MM-based PSubs: (a) Dispersion curves for a unit cell from which each MM-based PSub is formed. Steady-state vibration (b) amplitude and (c) phase response at the top edge of a 5-, 10-, 15-, or 20-unit-cell long PSub, all when harmonically excited at the same location. (d) Performance metric obtained by multiplying the amplitude by the phase. The phase is between the force and the displacement at the PSub top edge. Insets show the plots over an extended frequency range for the 5 unit-cell case and its corresponding homogeneous rod. All plots are obtained by analyzing a stand-alone FE model of the PSub without yet coupling to the flow. The unit cell of the MM-based PSub is 1-cm long.

Figure 10.

Figure 11. Synchronized passive phased response and energy exchange between MM-based PSub and flow perturbation (instability) field. The colored contours show the time-averaged spatial distribution of the perturbation kinetic energy within the flow. The black curves represent the time-averaged total elastodynamic energy in the PSub, with horizontal lines indicating the total energy level of the resonating masses. Stabilization (P < 0) and destabilization (P > 0) cases are shown in (b) and (d), respectively, whereas a neutral case (P ≈ 0) is shown in (c). The rigid-wall case is shown in (a) as a reference.

Figure 12. Instantaneous vector field of the perturbation (instability) velocity component û. In the background, the resultant magnitude of the perturbation velocity field |û| is also plotted. Both are plotted along the z = π plane. In the top and bottom panels, the strongest stabilization and destabilization cases of Fig. 10 are shown, respectively. The PSub location spans the distance between the two white dashed lines as indicated. The corresponding all-rigid-wall case is shown in the middle panel for comparison. Close-up views are shown in the right column.

Figure 13. Decomposition of flow kinetic energy. Time-averaged kinetic energy of (a) perturbation (instability) component, (b) mean flow component, and (c) summation of both components, i.e., total kinetic energy. All results are for the MM-based PSub examined in Fig. 10; only the strongest stabilization and destabilization cases are shown. The PSub location spans the distance between the two white dashed lines as indicated. In each of (b) and (c), the corresponding kinetic energy curve for the same channel flow without the presence of an instability is shown (light orange). In all sub-figures, the case with the û (streamwise) component of the fluid-structure interface boundary conditions not applied (i.e., replacing Eq. (5a) with û(x, 0, z, t) = 0) is shown in dashed lines.
Figure 14. Instantaneous vector field of the mean-flow velocity component with the base-flow velocity subtracted, i.e., ū − u_b. In the background, the resultant magnitude of this quantity, i.e., |ū − u_b|, is also plotted. Both are plotted along the z = π plane. In the top and bottom panels, the strongest stabilization and destabilization cases of Fig. 10 are shown, respectively. The PSub location spans the distance between the two dashed lines as indicated. The corresponding all-rigid-wall case is shown in the middle panel for comparison.

Figure 15. Production of flow perturbation (instability) kinetic energy in channel with MM-based PSub for (a) strong and (b) weak stabilization or destabilization as a function of the wall-normal direction, y*/δ, at three streamwise positions (denoted measuring stations). Station 1 is located at the left edge position (beginning) of the PSub, Station 2 at the right edge position (end) of the PSub, and Station 3 at a far position downstream from the PSub. A schematic of the PSub-installed channel with the station locations marked is shown in the insets.

Figure A1. Production of flow perturbation kinetic energy for the cases considered in Fig. 15, but plotted across the entire channel height.

Figure B1. Time-averaged kinetic energy of the perturbation (instability) of the (a) strongest and (b) weakest stabilization and destabilization cases when two PSubs are installed, one at the bottom wall and the other at the top wall. The PSub location spans the distance between the two dashed lines as indicated. The presence of two PSubs increases the intensity of the stabilization or destabilization.

Figure B2. Production of flow perturbation (instability) kinetic energy for the cases considered in Fig. B1, but showing only the bottom half of the channel due to symmetry. Results for only bottom PSub are shown as transparent curves for direct comparison.

Table 1. Geometric parameters and material properties of PSubs
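The captions for Figures 6 and 9 state that the performance metric is obtained by multiplying the steady-state amplitude response by the phase between force and displacement at the PSub top edge, all evaluated from a stand-alone FE model. The following is a minimal sketch of that post-processing step under stated assumptions; the array names and the exact phase convention are illustrative and not taken from the paper.

```python
import numpy as np

def performance_metric(complex_disp, complex_force):
    """Illustrative post-processing of a stand-alone harmonic FE response.

    complex_disp : complex displacement at the PSub top edge (per frequency)
    complex_force: complex applied force at the same point (per frequency)

    Returns amplitude, relative phase (radians), and P = amplitude * phase.
    With this convention, an out-of-phase response gives a negative P
    (stabilization) and an in-phase response a positive P (destabilization),
    consistent with the sign convention used in the captions above.
    """
    receptance = complex_disp / complex_force
    amplitude = np.abs(receptance)          # magnitude of the response
    phase = np.angle(receptance)            # wrapped to (-pi, pi]
    return amplitude, phase, amplitude * phase

# Hypothetical usage with a frequency sweep computed elsewhere:
# amp, phi, P = performance_metric(u_top, f_top)
# stabilizing_band = freqs[P < 0]
```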
2023-03-01T06:42:45.779Z
2023-02-28T00:00:00.000
{ "year": 2023, "sha1": "6c1d01549631348efaa1b8c0e7311acb80174046", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1367-2630/accbe5/pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "6c1d01549631348efaa1b8c0e7311acb80174046", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
254929008
pes2o/s2orc
v3-fos-license
Ursolic acid and rosmarinic acid ameliorate alterations in hippocampal neurogenesis and social memory induced by amyloid beta in mouse model of Alzheimer’s disease Alzheimer’s disease (AD) is a multifaceted neurodegenerative disorder characterized by substantial neuronal damage which manifests in the form of deficits in memory and cognition. In spite of the debilitating nature of Alzheimer’s disease (AD), a dearth of treatment strategies calls for the need to develop therapeutic agents that stimulate neurogenesis and alleviate the associated cognitive deficits. The present study investigates the therapeutic potential of two major phytochemicals, rosmarinic acid (RA) and ursolic acid (UA) in an amyloid beta1–42 (Aβ1–42)-induced model of AD. UA, a natural pentacyclic triterpenoid and RA, a phenolic ester are major bioactive constituents of Rosmarinus officinalis, which is a medicinal herb belonging to family Lamiaceae and exhibiting significant biological properties including neuroprotection. Donepezil, a second generation cholinesterase inhibitor approved for the treatment of mild, moderate and severe Alzheimer’s disease (AD) is used as control. Out of eight groups of male BALB/c mice, stereotaxic surgery was performed on four groups (n = 6 each) to introduce Aβ1–42 in the hippocampus followed by treatment with vehicle (phosphate-buffered saline (PBS)), donepezil, UA or RA. The other four groups were given vehicle, donepezil, UA and RA only. Behavior analysis for social interaction was performed which constitutes the social affiliation and the social novelty preference test. Presence of Aβ plaques and expression of neurogenesis markers i.e., doublecortin (DCX) and Ki-67 were also assessed. Results revealed the neuroprotective effect of UA and RA observed through substantial reduction in Aβ plaques as compared to the Aβ1-42- and donepezil-treated groups. The neuronal density was also restored as evident via DCX and Ki-67 immunoreactivity in Aβ1–42 + RA and Aβ1–42+UA-treated groups in comparison to Aβ1–42-treated and Aβ1–42+donepezil-treated groups. The social affiliation was reestablished in the Aβ1–42 administered groups treated with UA and RA. Molecular docking studies further validated the comparable binding of UA and RA with Ki-67 and DCX to that of donepezil. Our findings suggest that UA and RA are potential neuroprotective compounds that reverses the histological hallmarks of AD and ameliorate impaired social memory and hippocampal neurogenesis. Introduction Alzheimer's disease (AD) is the most common form of dementia accounting for more than 80% of the cases diagnosed. With a prevalence that continues to grow as the world population ages, it has emerged as a leading health problem. The debilitating disorder affects nearly 50 million people worldwide (Crous-Bou et al., 2017). It is characterized by the formation of neurofibrillary tangles (NFTs) of hyperphosphorylated tau protein and amyloid beta (Aβ) plaques which manifests as deficits in cognition and memory (Lopez et al., 2019). Aβ is a major contributing factor in neurotoxicity and neural function and the deposition of amyloid plaques in the hippocampus, cerebral cortex and amygdala can lead to stimulation of astrocytes and microglia, axonal and dendritic damage and synaptic loss which manifest as cognitive impairments (Armstrong, 2009;Chen et al., 2017). 
The plaque formation constitutes the primary pathological process associated with AD while NFT formation and the subsequent neurodegeneration are downstream processes (Lane et al., 2018). Adult hippocampal neurogenesis, is a unique phenomenon hosted by the hippocampus, which confers significant levels of plasticity to the hippocampal circuitry improving pattern separation and spatial memory (Anacker et al., 2018). AD leads to a sharp decline in adult hippocampal neurogenesis in comparison to neurologically healthy subjects (Moreno-Jiménez et al., 2019). Impaired neurogenesis is thereby considered as a relevant mechanism that leads to cognitive deficit associated with AD. Among the various markers of neurogenesis, Doublecortin (DCX) is a brain-specific protein associated with the microtubules which regulates neuronal migration through the polymerization and stabilization of microtubules in migrating neuroblasts (Sadeghi et al., 2018). DCX is vital for the proper initiation and maintenance of differentiation as well as migration during neurogenesis. Neural cells with reduced expression of DCX exhibit impaired migration, differentiation and neurite formation (Shahsavani et al., 2018). Ki-67 another widely acclaimed marker of cell proliferation and neurogenesis is expressed in dividing cells during mitosis except the G0 phase (Sun and Kaufman, 2018). Currently, only two classes of drugs have been approved for the treatment of AD. Cholinesterase enzyme inhibitors and N-methyl d-aspartate (NMDA) antagonists function mainly by treating the symptoms of AD and do not possess preventive or curative effects (Breijyeh and Karaman, 2020). Although a huge amount of research on AD has been directed towards the development of disease-modifying therapy in the last decade, however there is still a dearth of therapeutic agents which will alter the course of disease rather than providing symptomatic treatment alone. Lack of disease modifying drugs even after decades of studies indicates the challenges associated with the development of therapeutic agents with curative potential against AD (Salomone et al., 2012). In the recent times, natural compounds have garnered significant interest due to their pharmacologically significant activities. These natural products, including herbs and spices, possess various phytochemicals which serve as potential sources of natural antioxidants and neuroprotectants and are devoid of the potentially life-threatening side effects characteristic of the existing approved drugs (Twilley et al., 2020;Karthika et al., 2022). Rosmarinic acid (RA), a phenolic ester is abundantly present in the herbs belonging to the family Labiatae and exhibits antioxidant, antimutagenic, antiapoptotic and several other pharmacological activities (Amoah et al., 2016). It also plays a beneficial role against AD through the suppression of Aβ aggregation (Hase et al., 2019). Ursolic acid (UA), a natural pentacyclic triterpenoid also exerts health benefits against inflammation, oxidative stress and fibrosis Xu et al., 2018). Also, RA and UA substantially improve the deficits in cognition as well synaptic dysregulation and the associated neurodegeneration in AD model of Aβ 1-42 -induced neurotoxicity suggesting their therapeutic significance against AD (Mirza et al., 2021). Furthermore, an in silico study also presents the therapeutic potential of these compounds based on drug-likeness, pharmacokinetic properties and binding affinity with AD-associated proteins (Mirza et al., 2022). 
In the current investigation we evaluated the neuroprotective effects of RA and UA against impaired neurogenesis and social memory deficits produced by Aβ 1-42 . The effects were assessed in comparison to donepezil, commonly prescribed for AD (Eskandary et al., 2019). It is routinely used as a standard drug in studies investigating the therapeutic potential of different agents against AD (Adlimoghaddam et al., 2018). Moreover, computer-aided molecular docking assessment also elucidated the interaction of RA and UA with the markers of neurogenesis. The data obtained for the current study may provide further insights into molecular mechanisms and clinical intervention options for AD and its associated consequences.

Experimental animals

Male BALB/c mice were maintained in the animal facility of Atta-ur-Rahman School of Applied Biosciences (ASAB), National University of Sciences and Technology (NUST), Islamabad, Pakistan.

Animal grouping

The male BALB/c mice were segregated into eight groups (n = 6 each). Groups 1-4 were pre-treated with Aβ 1-42 . Groups 2, 3 and 4 received donepezil (15 mg/kg) (Ahmed et al., 2017), RA (16 mg/kg; ab141450, Abcam, United Kingdom) (Farr et al., 2016), or UA (40 mg/kg; ab141113, Abcam, United Kingdom) (Liang et al., 2016), respectively, for 15 days post Aβ 1-42 administration. Group 5 received vehicle (PBS), while groups 6, 7 and 8 received the same doses of donepezil, RA and UA, respectively, for the same duration. All treatments were administered orally, and all groups were given normal feed and water. The purity of UA and RA was >99% and >95%, respectively, while donepezil was taken as a positive control. After the end of the treatment period, the mice were subjected to behavioral tests and subsequently brain tissue harvesting.

Social interaction behavior

The procedure was performed as reported previously (Rizwan et al., 2016). The apparatus consisted of a glass box of dimensions 40 × 40 × 40 cm with two similar cages placed inside the box. The mice placed in the cages were designated mouse A and mouse B, while the treated mice were referred to as test subjects. Mice A and B had never been encountered by the test subject before and were of the same background in terms of age, weight and gender.

Session I: Social affiliation test

After habituating the test subjects (5 min) inside the box, they were introduced back into the box with a cage holding mouse A on one side and an empty cage on the other side. The test subject was then left in the box for 10 min and allowed to explore both cages. The discrimination index (DI), or interaction time, with the empty cage and with mouse A was evaluated, and the difference between the time spent interacting with mouse A or the empty cage and the total time was scored.

Session II: Social novelty preference test

Mouse B was released into the previously empty cage while mouse A remained unaltered. The test subject was again allowed an exploration time of 10 min. DI or interaction time was measured as the difference between the time spent interacting with mouse A or mouse B and the total time spent with mouse B.

Histology and immunohistochemistry

Histology was performed according to the protocol used by Amber et al., 2020. The tissue was dewaxed and rehydrated in ethanol and double distilled water (ddH2O). Congo red stain (1%) was applied to the tissue for 25 min, after which it was thoroughly rinsed with ddH2O, followed by counterstaining with haematoxylin and eosin. The sections were rinsed again with ddH2O, dried and incubated for 20-25 min before clearing in xylene solution.
Immunostaining was performed in accordance with the protocol previously described by Mirza et al., 2021. Hippocampal tissue sections (5 μm) were deparaffinized after antigen retrieval. H2O2 (1%) was applied to quench the peroxidase activity, and the sections were blocked with BSA (5%) in PBS for 1 h. After an overnight incubation with primary antibodies at 4°C, the sections were incubated for 1 h with a secondary antibody, followed by washing thrice with TBST, and visualised with 0.025% 3,3′-diaminobenzidine (DAB Kit, ab50185, Abcam, MA, United States). Ki-67 (Leica Biosystems, PA0230) and DCX (Abcam, ab207175), diluted in the block solution 1:200 and 1:100 respectively, were used as primary antibodies, while anti-rabbit IgG-HRP conjugate (ab97051, Abcam, MA, United States) diluted in block solution 1:100 was used as the secondary antibody. The images were visualized using a B-150 OPTIKA microscope (Italy) at 4× and 10× magnification and captured using Optika Vision Lite 2.1 image analysis software. Quantitative analysis was performed by counting cells in an area of 10,000 μm2 in three randomly selected fields, and the average values were calculated and plotted.

Molecular docking

The PatchDock server was used for docking, with the cluster RMSD at its default value of 4.0, to identify the interactions of RA and UA with the target proteins DCX and Ki-67. The interactions were compared with those of donepezil. The PatchDock algorithm produces potential complexes based on the criterion of shape complementarity (Schneidman-Duhovny et al., 2005). The 3D structures of the target proteins Ki-67 and DCX were acquired from the RCSB Protein Data Bank (PDB) (https://www.rcsb.org/). The PDB IDs of Ki-67 and DCX were 5J28 and 2BQQ, respectively. The 3D structures of RA, UA and donepezil were taken from the PubChem database (https://pubchem.ncbi.nlm.nih.gov/). The automated docking models generated were further assessed using FireDock (Mashiach et al., 2008) and visualized through BIOVIA Discovery Studio (Systèmes, 2016).

Statistical analysis

Data are presented as mean ± SEM, and statistical significance was set at the 95% confidence level and a p-value < 0.05. Behavioural and biochemical data were analysed by one-way analysis of variance (ANOVA) with Bonferroni's multiple comparisons as the post hoc test using GraphPad Prism 5.0.

Results

Rosmarinic and ursolic acid improve social affiliation and social novelty preference in Aβ 1-42 -treated mice

To evaluate the effects of RA and UA on sociability in mice, the social interaction test was performed. The Aβ 1-42 -treated group (2.798 ± 0.7145; p < 0.0001) exhibited a significant reduction in interaction with mouse A, a conspecific placed in one of the cages, in comparison to the control (22.66 ± 1.726; p < 0.0001) and the other experimental groups, indicating a significant decrease in social affiliation. The Aβ 1-42 + RA- (20.25 ± 3.779; p < 0.001) and Aβ 1-42 + UA-treated groups (23.74 ± 3.434; p = 0.0003) performed significantly better than the diseased group and comparably to the Aβ 1-42 + donepezil-treated group (22.91 ± 5.629; p = 0.0051). The Aβ 1-42 + UA-treated group exhibited improved social affiliation in comparison to all of the other groups (Figure 1). The Aβ 1-42 -treated group also showed the least amount of interaction with the empty cage. The decreased social affiliation of the Aβ 1-42 -treated group during session I is inferred from the comparable lengths of time spent with mouse A (2.798 ± 0.7145; p < 0.0001) and with the empty cage (2.276 ± 0.7505; p = 0.0664).
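The statistical analysis above was carried out in GraphPad Prism; as a rough illustration of the same pipeline (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons), a minimal Python sketch is given below. The group names and interaction-time arrays are placeholders, not data from the study.

```python
from itertools import combinations
import numpy as np
from scipy import stats

# Placeholder interaction times (s) per group; not the study's data.
groups = {
    "control":         np.array([20.1, 24.3, 22.8, 21.5, 23.0, 24.2]),
    "abeta":           np.array([2.1, 3.5, 1.8, 2.9, 3.2, 2.6]),
    "abeta_donepezil": np.array([21.7, 25.0, 18.9, 24.4, 22.3, 25.1]),
    "abeta_ua":        np.array([22.9, 26.1, 23.5, 24.8, 21.7, 23.4]),
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Bonferroni-corrected pairwise comparisons (post hoc).
pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    flag = "significant" if p < alpha_corrected else "n.s."
    print(f"{a} vs {b}: p = {p:.4g} ({flag})")
```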
Conversely, the control and experimental groups interacted more with mouse A than with the empty cage (Figure 1). In session II, the mice were assessed for novelty preference and social memory. The mice treated with Aβ 1-42 showed diminished levels (6.572 ± 2.116; p = 0.0144) of interaction with an unfamiliar mouse B in comparison to the control group (21.38 ± 4.261). The treated groups did not show a preference for either cage. The DI for session II revealed a significantly low value for the Aβ 1-42 -treated group (0.3820 ± 0.02107; p < 0.0001) relative to the control (0.7880 ± 0.05054; p < 0.0001) and the other experimental groups, thus indicating alterations in social memory. A significant improvement in DI was evident in the Aβ 1-42 + RA- (0.7300 ± 0.06258; p = 0.0007) and Aβ 1-42 + UA-treated (0.6867 ± 0.03783; p < 0.0001) groups in comparison to the control. The DI in the RA- and UA-treated groups was comparable to that of the standard drug donepezil (0.6850 ± 0.01893; p < 0.0001), demonstrating their protective effects against the social memory deficit in AD (Figure 2).

Improvement in neurogenesis by RA and UA

A considerable reduction in neuronal proliferation was significantly apparent in the group treated with Aβ 1-42 relative to the control mice. An improvement in the density of Ki-67-positive neurons was observed with treatment with RA (22.87 ± 0.4667; p = 0.0111) and UA (25.33 ± 0.8819; p = 0.0027) post Aβ 1-42 administration as compared to the mice treated with Aβ 1-42 only (19.50 ± 0.6455). It was found that RA and UA have greater improvement activity than donepezil (21.30 ± 1.825; p = 0.34), evident through the restoration of the neurons deteriorated by Aβ 1-42 . No obvious neuronal loss was observed in the control mice (27.65 ± 1.525) or in the donepezil, RA and UA groups not treated with Aβ 1-42 (Figure 3).

RA and UA treatment reduces the accumulated amyloid beta burden

The presence of congophilic amyloid plaques in the mice administered with Aβ 1-42 was substantially decreased with the treatment of RA and UA, indicating a reversal of plaque formation. RA and UA treatment significantly rescued the cellular density and morphology in the Aβ 1-42 -treated groups and showed comparatively greater neuronal restoration than donepezil. The control littermates showed a normal neuronal pattern (Figure 5).

Molecular docking studies of Ki-67 and DCX with RA, UA and donepezil

Molecular docking studies were used to predict the receptor-ligand interaction geometries of RA, UA and donepezil with the neurogenesis markers Ki-67 and DCX. All the compounds successfully docked against the target proteins. The lowest atomic contact energy (ACE) is indicative of the highest binding affinity; therefore, the ligand molecules with the lowest ACE were considered the better ligands for Ki-67 and DCX. The docking scores of the complexes are shown in Table 1. The docking results were further refined using FireDock, and the results obtained are stated in Table 2.
UA exhibits ACE comparable to donepezil in binding interactions with Ki-67 and DCX

Binding interactions of UA, RA, and donepezil with Ki-67 revealed that, amongst the compounds, UA, with an ACE of −290.71, had a better affinity than RA (−203.58) and was comparable to donepezil (−293.37). Interactions with DCX also demonstrated a comparable binding energy of UA (−154.2) and donepezil (−157.92). These binding energy values suggest an interaction of UA with Ki-67 and DCX, validating the present in vivo results (Table 1).

UA shows global energy comparable to donepezil in binding interactions with Ki-67 and DCX

Global energy depicts the binding energy of the complex, while the ACE is based on its contribution to the global energy. Low values of ACE correspond to more stable complexes. UA exhibited an ACE of −10.96 in its interaction with Ki-67, which is comparable to that of donepezil (−10.92). UA (−10.06) also showed a comparable ACE to that of donepezil (−11.84) in its interaction with DCX. Attractive and repulsive vdW terms quantify the contribution of van der Waals forces to the global binding energy. The values are tabulated in Table 2, and the interacting residues are shown in Figure 6.

Discussion

Our study elucidated the potential effects of the bioactive compounds of R. officinalis, RA and UA, on Aβ 1-42 -induced neurotoxicity in comparison to donepezil. The data revealed the capability of RA and UA to restore altered social memory in AD mouse models. Social engagement with the surrounding environment is associated with improved angiogenesis, synaptogenesis, and neurogenesis, which are crucial in delaying the progression of AD (Fratiglioni et al., 2004). Notably, RA and UA have previously been shown to exert anxiolytic and antidepressant effects in various models of neurotoxicity (Colla et al., 2015; Ramos-Hryb et al., 2019; Lataliza et al., 2021). The present study showed that RA and UA treatment post Aβ 1-42 administration resulted in significantly greater sociability levels relative to the diseased group. RA and UA treatment post Aβ 1-42 administration also produced a significant improvement in social novelty preference in mice. The social interaction behavior of mice with RA and UA treatment was comparable to that of the standard drug donepezil, indicating comparable protective effects in restoring social memory. These data suggest the potential of RA and UA in alleviating the behavioral deficits in social affiliation and novelty preference induced by Aβ 1-42 . Altered social behavior is demonstrated by a decrease in sociability, avoidance of novel social stimuli and exacerbation of aggression associated with amyloid pathology (Kosel et al., 2021). Impaired social interaction memory also results from mitochondrial damage, which contributes to increased oxidative stress and deficits in hippocampal and medial prefrontal cortex activity (Misrani et al., 2021).
According to Okada et al., the removal of cholinergic cell groups in the basal forebrain cause an impairment of social behavior which was significantly reinstated by cholinesterase inhibitors, suggesting the critical role of cholinergic dysfunction in sociability deficits corelated with psychological and behavioral symptoms of dementia in AD (Okada et al., 2021). Interestingly, modulation of matrix metallopeptidase 9 (MMP9) caused an improvement in sociability and social recognition memory, along with a reduction in anxiety in AD mouse model, supporting the notion that targeting MMP9 could serve as a therapeutic strategy in restoring the neurobehavioral damage in AD (Ringland et al., 2021). Although the effect of RA and UA on social memory has not been reported previously but their effect on the restoration of spatial memory and object recognition memory has been documented. A mechanistic study on RA describes its role in the inhibition of cognitive decline through the suppression of tau phosphorylation (Yamamoto et al., 2021) while UA ameliorates oxidative stress and inflammation to improve cognitive deficits in an Aβ-induced mouse model (Liang et al., 2016). UA also exerts radioprotective effects improving radiation induced deficits in memory and learning in BALB/c mice (Tang et al., 2017) whereas also exhibit a potent anti-dementia effect observed in olfactory bulbectomized mice (Nguyen et al., 2022). Improvement of spatial memory and amelioration of Aβ 25-35 accumulation by UA has also been demonstrated . We further studied the effects of RA and UA on altered hippocampal neurogenesis induced by Aβ 1-42 . Several studies indicate the relevance of Aβ 1-42 induction in the impairment of adult hippocampal neurogenesis eliciting neurodegeneration associated loss of memory and cognition. The intra-cerebrovascular injection of Aβ 1-42 in male kunming mice caused mitochondrial damage, inflammation, loss of memory (Qi et al., 2019) and altered adult hippocampal neurogenesis observed in balb/c mice (Amber et al., 2020). In addition, the interneuronal accumulation of phosphorylated tau protein is also crucial for AD progression and impairs adult hippocampal neurogenesis through the suppression of GABAergic transmission (Zheng et al., 2020). Contrarily, promotion of neurogenesis reconstructs the degenerated neural circuits in AD hindering the associated cognitive decline (Choi et al., 2018). Aβ 1-42 -induced neurotoxicity causes significant deterioration of Ki-67 and DCX expression levels in the hippocampal tissue (Amber et al., 2020). Our results indicated a reduction in the neuronal proliferation induced by Aβ 1-42 evident through a significant decline in the immunoreactivity of DCX and Ki-67 positive cells in the mice treated with Aβ 1-42 , however it was considerably restored upon treatment with UA and RA. The results of behavioral and immunohistochemical analysis indicated the neurogenic potential of UA and RA. Based on these results, in silico analysis was further conducted to determine the binding interactions of the compounds with the neurogenesis markers, Ki-67 and DCX in comparison to donepezil. Molecular docking analysis showed comparable binding energy values of UA to that of donepezil. UA had an ACE value comparable to that of donepezil in its interaction with Ki-67 and DCX. Previously, we reported the effect of UA and RA in normalizing the mRNA expression levels of neurogenesis markers, Ki-67, DCX and NeuN. 
Interestingly, UA exhibited significant restoration of the expression levels of these markers in comparison to RA and donepezil (Mirza et al., 2021). It also enhanced neurogenesis and repressed inflammation in temporal lobe epilepsy and cerebral ischemia models (Liu et al., 2022). Its role in neurite outgrowth and neuronal survival mediated by nerve growth factor has also been reported (Theis et al., 2018). These results reiterate the neurogenic potential of UA through interaction with Ki-67 and DCX, indicating its therapeutic potential against the neurodegeneration associated with AD. The reduction in plaque formation by RA and UA post Aβ 1-42 administration also suggests their role in the suppression of AD progression. RA has previously been found to be effective against copper (II)-induced neurotoxicity through the formation of an original ternary association between Aβ and Cu (II) (Kola et al., 2020). Also, its prevention of fibrillization and assembly of β sheets in tau protein suggests its therapeutic potential against AD (Cornejo et al., 2017). Interestingly, RA also reduced the formation of Aβ and ameliorated tissue structure in an AD-like dementia model induced by scopolamine (Deveci et al., 2021). It also exerts an anti-apoptotic effect that alleviates inflammation- and oxidative stress-associated neurodegeneration, as observed in a model of Parkinson's disease (Lv et al., 2020). Consistently, UA also hindered the deposition of Aβ and lowered the levels of its oligomers and monomers in an Aβ-induced Caenorhabditis elegans transgenic model (Wang et al., 2022). These results are indicative of the potential of UA and RA in the reduction of Aβ plaques, which constitute a hallmark feature of AD.

Conclusion

This study suggests the pro-neurogenic potential and neuroprotective effects of UA and RA against neurotoxicity induced by Aβ 1-42 , which represents pathological hallmarks of AD. Our findings revealed that UA and RA can rescue the AD-like alterations characterized by accumulated amyloid plaques and impaired social memory and neurogenesis induced by Aβ, thereby reiterating their potential as promising therapeutic agents against AD.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

The animal study was reviewed and approved by the Internal Review Board, Atta ur Rahman School of Applied Biosciences-NUST.

Author contributions

SZ: substantial contribution to conception and design of the study and finalization of the manuscript; FM: all experimental work, data analysis, interpretation, and drafting the article.

Funding

This work was supported by National University of Sciences and Technology (NUST), Islamabad, Pakistan. The work was partially supported through Research grant number 5974 awarded to SZ under the National Research Grants Program for Universities, Higher Education Commission, Pakistan, and through the research funding provided to FM under the Indigenous 5000 PhD Fellowship Program, Higher Education Commission, Pakistan. The funding sources had no involvement in study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the article for publication.
Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2022-12-22T14:40:44.164Z
2022-12-22T00:00:00.000
{ "year": 2022, "sha1": "96a7d9232b066a3f060e7194b8639ceecf49be8a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "96a7d9232b066a3f060e7194b8639ceecf49be8a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
245826970
pes2o/s2orc
v3-fos-license
A statistics and physics-based tropical cyclone full track model for catastrophe risk modeling

Catastrophe (CAT) risk modeling of perils such as typhoon and earthquake has become a prevailing practice in the insurance and reinsurance industry. The event generation model is the key component of CAT modeling. In this paper, a physics-based tropical cyclone (TC) full track model is introduced to model typhoon events in the western North Pacific basin. At the same time, a comprehensive test of the model is presented from the perspective of CAT risk modeling for insurance and reinsurance applications. The full track model includes the genesis, track, intensity, and landing models. Driven by the global environmental circulations, the model employs the advection and beta drift theory in atmospheric dynamics to model the track of typhoons. The proposed model is novel in the way of modeling the genesis of TCs with three-dimensional kernel distributions in space and time. This enables the simulation of seasonal characteristics of TCs. By generating 10,000-year TC events, we comprehensively test the model from the standpoint of CAT insurance and reinsurance applications. The typhoon hazard model and the generated events can serve as the inputs for assessing the typhoon risk and insured loss caused by winds, rains, floods, and storm surges.

INTRODUCTION

The western North Pacific (WNP) basin is one of the regions with the most frequent and strongest tropical cyclones (TCs) in the world [1]. Generally, the total number of intense TCs exceeds 20 per year and even reaches 40 in some years. China has suffered an average annual loss of about 25 billion yuan caused by typhoons from 1983 to 2008, with an average of 8 typhoons making landfall per year [2]. It appears that typhoon losses are increasing worldwide due to the warming climate. Insurance and reinsurance provide the necessary financial mechanism to mitigate the impact of natural disasters such as typhoons. At the same time, they enhance the resilience of society. A catastrophe (CAT) is generally defined as a category of natural and manmade disasters with low frequency but huge impact and great uncertainty of consequence, e.g., financial loss and death toll. CATs include natural disasters, such as typhoons, earthquakes, floods, and wildfires, as well as manmade disasters, such as terrorist attacks, cyber-attacks, and pandemics. Due to their low frequency and high uncertainty of loss, the risk assessment of CATs cannot rely on reported historical losses or observations alone, especially for predicting the loss of events at the long tail of the distributions. The use of special computer models based on physical mechanisms and statistics is becoming more prevalent in the insurance and reinsurance industry, driven by a substantial demand for risk premium pricing, portfolio, and cash flow management in these two domains. These computer models, which are called CAT models, are composed of three elements: the hazard module, the engineering module, and the financial module. The hazard module simulates the magnitude, frequency, location, and characterization of the hazard events. The engineering module defines the exposure in terms of range, type and value of the affected objects, and the extent of damage to the objects caused by the hazard. The financial module yields the financial loss or financial implication to the insurers.
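To make the three-module structure concrete, the following is a minimal, illustrative sketch of how a hazard, engineering, and financial pipeline could be wired together. The function names and the simple damage and financial assumptions are placeholders, not the implementation used in this paper or in any commercial CAT model.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class HazardEvent:
    event_id: int
    year: int
    max_wind_ms: float        # hazard intensity at the exposure site

@dataclass
class Exposure:
    location: str
    insured_value: float      # total insured value

def damage_ratio(wind_ms: float) -> float:
    """Placeholder vulnerability curve: damage ratio grows with wind speed."""
    return min(1.0, max(0.0, (wind_ms - 20.0) / 60.0))

def ground_up_loss(event: HazardEvent, exposure: Exposure) -> float:
    """Engineering module: exposure value times the damage ratio."""
    return exposure.insured_value * damage_ratio(event.max_wind_ms)

def insured_loss(loss: float, deductible: float, limit: float) -> float:
    """Financial module: apply a simple deductible and limit."""
    return min(max(loss - deductible, 0.0), limit)

def annual_losses(events: List[HazardEvent], exposure: Exposure,
                  deductible: float, limit: float) -> Dict[int, float]:
    """Aggregate insured losses by simulation year (the basis of an exceedance-probability curve)."""
    totals: Dict[int, float] = {}
    for ev in events:
        loss = insured_loss(ground_up_loss(ev, exposure), deductible, limit)
        totals[ev.year] = totals.get(ev.year, 0.0) + loss
    return totals
```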
In this paper, we focus our scope on the hazard module. The existing typhoon hazard models are mainly based on historical typhoon data. Although researchers began to track and observe the interior of typhoons and hurricanes using reconnaissance aircraft and dropsondes since the 1940s, some TCs were still absent until the application of meteorological satellites in the 1970s [3] . Limited by the accuracy and performance of the early observational instruments, the quality of observational data, in the early years, was worse than that of the meteorological satellite era in post 1970s, especially for the intensity observations. Basically, direct use of a limited amount of historical observation data may not be sufficient to obtain a reliable estimate of typhoon hazards. The current studies extract typhoon characteristics from historical data to simulate a large number of typhoon events to expand its statistical samples. Finally, it completes the regional risk analysis through the extended typhoon samples. In the insurance industry, there are two dominant typhoon stochastic event models: the circular subregion model and the full track model. The circular subregion model was first proposed by Russell [4] in the 1970s and subsequently improved by Tryggvason et al. [5] , Batts et al. [6] , Georgiou [7] , Vickery and Twisdale [8] . The principal idea of the model is to center on the target point. Then, to expand it outward into a circular region with a radius of 200 to 300 km. Following this, to extract the probability distribution of the movement and intensity parameters from historical TCs passing through the circular region. Finally, the regional typhoon events are randomly simulated according to the probability distributions. The combination of the simulated typhoon events with the wind field model can estimate the wind hazard probability in the target region. This method is suitable for small area with multiple coastal cities or distributed infrastructure such as railways, highways, and power grid systems. However, for areas with insufficient historical typhoon data, it may not be possible to obtain sufficient typhoon samples, in the target region, to establish a reliable typhoon hazard model. To overcome the above shortcomings, Vickery et al. [9] developed an empirical full track model using historical samples of TCs in the Atlantic Ocean. Compared to the circular subregion model, the full track model can synthesize the track and intensity of the typhoon from generation to extinction. This facilitates typhoon risk assessment for a large range of areas. Later, Vickery et al. [10] applied it to the hurricane hazard assessment in coastal areas of the United States. Although the full track model was continually improved by many works [11][12][13][14][15][16][17][18][19][20] , its core ideas are still consistent. The full track model simulates the initial genesis position of typhoons. Then, the migration speeds and directions are simulated. Following this, the model predicts the intensity of wind at each location affected by the typhoon. Specifically, for genesis position simulation, Vickery et al. [9] randomly sampled genesis positions directly from the historical data, yielding a restricted distribution of the simulated initial positions. James and Mason [11] randomly interpolated the historical genesis positions to synthesize a large number of genesis positions. Alternatively, Rumpf et al. [12,13] , Emanuel et al. 
[14] , and Hall and Jewson [15] used the Gaussian kernel density function to estimate the generation probability of each location in the ocean. Thus, the simulated genesis position is extended to a larger area covering the entire sea. For movement simulation, one method is to estimate the typhoon velocity through the historical movement speed and direction [11][12][13] . The other method is to establish the relationship between movement of typhoons, ambient airflow, and β-drift term [14,20] . This method is more suitable for areas with insufficient historical typhoon data because the ambient airflow can be easily obtained from reanalysis data. Similarly, there are two approaches for intensity simulation: statistical method and physical method. The former is a statistical intensity model based on historical typhoon intensity samples, such as autoregressive [9,10,18] , and Markov [14] models. The latter, physical method, is a numerical intensity model based on simplified cyclone physics [3,14] . Although it is based on atmospheric physics, it requires more computational time. In general, the full track model can simulate the track and intensity of typhoons from generation to extinction. Therefore, the typhoon hazard assessment can be performed on a large area to quantify the relevance of typhoon activities in different countries or regions. This is critical for insurers and reinsurers with a global presence. One of the main motivations of this paper is to develop a typhoon hazard model for CAT modeling which can be applied and leveraged by the insurance and reinsurance industry. In this paper, we introduce a novel statistical dynamic model which simulates genesis, track, and intensity of typhoons. Compared to the recent model developed by Chen and Duan [20] , our typhoon model improves modeling data, simulation of genesis, track, and intensity. We use our proposed model to generate an event set of 10,000-year typhoon. Then, to demonstrate the competence of our model, we test it on the generated event set comprehensively, from the perspective of catastrophe insurance, including the intensity and frequency of typhoon landings, noting that the insurance industry is mostly interested in these aspects. In this paper, typhoon and TC are used interchangeably and they are the same except for the specified. MODELING DATA The commonly used best track data sets of the WNP basin are from the Shanghai Typhoon Institute of China Meteorological Administration (CMA) [21] , the Japan Meteorological Agency (JMA), and the United States Joint Typhoon Warning Center (JTWC), respectively. Due to the different methods and criteria for discriminating typhoon events, the best track data sets of the three organizations differ in several ways. As shown in Figure 1, prior to the 1990s, the annual TCs recorded by the CMA occurs much more frequently than those of the JMA and JTWC. Especially prior to the 1980s, the number of TCs recorded by CMA is much higher than the others. Notably, the JMA best track data only counts tropical storms with intensity greater than 35 knots (about 18 m/s, 10-min average wind speed). To fairly compare the three data sets, we count the annual number of TCs with the maximum intensity above the tropical storm level (17.2 m/s). As shown in Figure 2, the annual number of TCs in the three data sets post 1980 is relatively close, especially for the CMA and JMA data sets. The difference in the annual TC number of the three data sets is mainly manifested in the number of tropical depression records. 
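The comparison described above hinges on counting, per year, the storms whose lifetime maximum intensity reaches tropical storm strength (17.2 m/s). A minimal sketch of that bookkeeping is given below; the file and column names are assumed for illustration and differ between the CMA, JMA, and JTWC best track formats.

```python
import pandas as pd

# Assumed columns: storm_id, time (datetime), vmax_ms (max sustained wind, m/s)
track = pd.read_csv("best_track.csv", parse_dates=["time"])

# Lifetime maximum intensity per storm, keyed to the year of the storm's first record.
per_storm = track.groupby("storm_id").agg(
    year=("time", lambda t: t.min().year),
    lifetime_vmax=("vmax_ms", "max"),
)

# Annual count of TCs at or above tropical storm strength (17.2 m/s).
annual_counts = (
    per_storm[per_storm["lifetime_vmax"] >= 17.2]
    .groupby("year")
    .size()
)
print(annual_counts)
```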
The insurance and reinsurance industry pay more attention to the landing typhoons. For the typhoons landing in China, the CMA record is believed to be more reliable than the JMA and JTWC records, the latter is employed in the Chen and Duan model [20] . Therefore, the CMA data sets are utilized in this study. The CMA best track data reports the center position and intensity of typhoons that occurred in the NWP basin from 1949 to 2019. They are provided for every 6 h. The period from 1980 to 2019 is used to develop the typhoon hazard model because the data quality post the 1980s is more reliable than prior to it. It is worth noting that the maximum wind speed in the data set is a 2-min average value at a height of 10 meters. The detailed description of the data set is illustrated in Table 1. The environmental parameter data is obtained from the National Centers for Environmental Prediction (NCEP)/the National Center for Atmospheric Research (NCAR) global reanalysis data [22] and the COBE-SST data [23,24] . It includes 6 h atmospheric environmental wind velocity, monthly average atmospheric environmental temperature, monthly average atmospheric specific humidity, and monthly average sea surface temperature (SST) as described in Table 2. The atmospheric environment wind speed and temperature of the NCEP/NCAR global reanalysis data in height is from 1000 hPa to 10 hPa, a total of 17 pressure layers, and that of the specific humidity is at the bottom 8 pressures layers. Since the location of the typhoon center does not necessarily fall on the grid of the reanalysis data, the environmental parameters are linearly interpolated, in the simulation, to the location and time of the typhoon center. Genesis simulation The full track typhoon hazard model is composed of three parts: genesis model, movement model, and intensity model. The genesis model simulates the number, location, and time of TCs generation. It has been found that the historical annual TC occurrence follows Poisson distribution by the K-S test at a 5% significance level. Unlike the kernel density method used in Chen and Duan model, we use this hypothesis distribution to model the annual occurrence of TCs in the WNP basin, as shown in Equation (1). where k is the annual occurrence number; λ is the annual average occurrence number, which is estimated from the typhoon best track data set. The annual occurrence number can be randomly sampled by the Monte Carlo method using the distribution in Equation (1). Regarding the location and time of TC generation, we define the occurrence of TC when the cyclone reaches 15 m/s for the first time. The Gaussian product kernel density method is used to estimate the threedimensional space-time probabilistic distribution of TC generation. The three dimensions are longitude, latitude, and time of TCs. The distribution is shown in Equation (2). where x is the vector of genesis position; x i is the vector of genesis position sample; n is the genesis sample size; S is the standard deviation matrix of the genesis position; σ xx , σ yy , and σ zz are the variances of the longitude, latitude, and time respectively; γ 1 , γ 2 , and γ 3 are the eigen-vectors of the correlation coefficients between the three variables after standardization; w i is the weight of the generated probability; λ 1 , λ 2 , and λ 3 are the eigenvalues, respectively; h otp1 , h otp2 , and h otp3 are the optimum bandwidths which are determined by minimizing the Equation (3) [25] . 
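A minimal sketch of the genesis component described above: annual genesis counts are drawn from the fitted Poisson distribution of Equation (1), whose probability mass function is P(N = k) = λ^k e^(−λ)/k!, and genesis longitude, latitude, and date are drawn by smoothed-bootstrap sampling from the historical genesis samples, which is equivalent to sampling from a Gaussian product-kernel density. The bandwidths below are placeholders for the optimal bandwidths of Equations (2)-(3), and the correlation structure between the three variables used in the full model is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_annual_count(lam: float) -> int:
    """Annual number of TC geneses, Poisson(lambda) as in Equation (1)."""
    return rng.poisson(lam)

def sample_genesis(samples: np.ndarray, bandwidths: np.ndarray, n: int) -> np.ndarray:
    """Smoothed-bootstrap sampling from a Gaussian product-kernel density.

    samples    : (m, 3) historical genesis points (lon_deg, lat_deg, day_of_year)
    bandwidths : (3,) kernel bandwidths, placeholders for the optimal bandwidths
    n          : number of genesis points to draw
    """
    idx = rng.integers(0, len(samples), size=n)        # pick historical samples
    noise = rng.normal(0.0, bandwidths, size=(n, 3))   # perturb by the kernel
    return samples[idx] + noise

# Hypothetical usage:
# hist = np.loadtxt("genesis_samples.txt")   # columns: lon, lat, day-of-year
# n_events = sample_annual_count(lam=27.0)   # lambda estimated from best-track data
# genesis = sample_genesis(hist, bandwidths=np.array([2.0, 1.5, 10.0]), n=n_events)
```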
, and x jk represent the kth dimension of the ith and jth normalized variables; h k represents the kth dimensional bandwidth, k = 1, 2, 3. Track simulation The typhoon track is mainly controlled by the large-scale environmental airflow and β drift. Therefore, we decompose the typhoon migration velocity into the steering velocity component caused by the atmospheric ambient flow, and the β drift component, with which the typhoon interacts with the ambient atmosphere. The steering airflow velocity is estimated using 6 h environmental wind velocity from NCEP/NCAR reanalysis data. In order to further consider the variation in migration velocity, the β drift is modeled with a regression model with random noises, instead of using a regional average value as in Chen and Duan model which will cause the simulated speed to be closer to the average value. The migration velocity is described in Equation (4). where U i and V i are the latitudinal and longitudinal velocity at the current moment, respectively; the subscript i and i-1 present the current and previous 6-h moment, respectively; U steer,i and V steer,i are the latitudinal and longitudinal velocity of steering flow, respectively, defined as a 250 to 850 hPa pressureweighted mean wind speed averaged over annulus 5° latitude from the typhoon center; γ u and γ v are the regression coefficients of the migration velocity. The WNP basin is divided into 5° × 5° grids for estimating the regression coefficients of each grid; β x and β y are the latitudinal and longitudinal β drift, respectively; a 0 , a 1 , b 0 , b 1 are the autoregressive coefficients of the β drift; ε x and ε y are assumed to be normal random terms. To eliminate the interference of the cyclone vortices on the calculation of the steering flow in the historical reanalysis typhoon wind field data, this study uses a filtering method proposed by Kurihara et al. [26] to remove the vortices. Intensity simulation The intensity model includes two parts: ocean intensity model and land decay model. When typhoons travel over the ocean, the ocean intensity model is used. Here we use the maximum wind velocity at typhoon center at 10-m height as the typhoon intensity measure, and develop an ocean intensity model based on the autoregressive method. First, we take the 6-h change in the logarithm of relative intensity as the dependent variable. Independent variables may include climate and persistence variables, as well as atmospheric and marine environmental parameters extracted from the environmental reanalysis data, such as SST, relative humidity, wind shear, and so on. Compared with the Chen and Duan model, we consider more independent variables, such as the relative humidity and the wind shear which are not considered in the previous model. We have found that the intensity simulation results in high latitude areas will be too large without those variables. In addition, the 6-h changes of these variables are also considered to be potentially the explainable variables. The stepwise regression method and the best independent variables are selected from the candidate independent variables to ensure the significance of the regression equation. A random error term is also introduced to model the regression residuals. The ocean intensity model is finalized in the form of Equation (5). The second equation of Equation (5) is for the first step of the regression model. 
where i + 1, i -1, and i are the next 6-h moment, the previous 6-h moment, and the current moment, respectively; LR is a linear regression operator, which divides the WNP into 5° × 5° grids for estimating regression coefficients of each grid; RI is the relative intensity, RI = (V max )/PI; V max is the maximum wind speed; PI is the potential intensity [27] ; SST is the monthly SST; VS is the monthly vertical wind shear; RH is relative humidity; U is the latitudinal migration speed; V is the longitudinal migration speed; ε is the normal random term of intensity. Once the typhoon landfalls, the heat source will be cut off, coupled with friction on the ground, thus the intensity will gradually decrease. From the landing intensity record, the variation of landing intensity presents an e-index decreasing relationship with time [28] . Here we establish a regional intensity decay model in mainland China to estimate the intensity on land, as shown in Equation (6). where ΔP(t) is the central pressure difference at time t after the typhoon makes landfall; ΔP 0 is the central pressure difference at the time of typhoon landfall; a is the decay coefficient; a 0 and a 1 are fitting constants; ε is the normal random residual term. We use both the central pressure difference and the surface maximum wind speed (2-min average wind speed at 10 m height) to qualify the intensity of TCs. The former is used for TCs after landfall, and the latter for TCs over the sea. Equation (7) which relates the maximum wind speed to the central pressure difference is fitted for ocean and land, of which the R-squared value are 0.97 and 0.91, respectively. Where Δp ocean and Δp land is central pressure difference on ocean and land, respectively. Different from the k-nearest neighbor method used in Chen and Duan model to estimate the landing intensity, we group the areas with similar landing terrains together for estimating the decay coefficient. As the decay of typhoon after landing may differ from region to region because of the topography, we divide the coastline of mainland China and Hainan Island (excluding Taiwan island) into the following 6 regions, as shown in Table 3. We respectively fit Equation (6) in the six regions. We find that only the area 1 (Hainan and Leizhou Peninsula) has samples with the decay coefficient a < 0. This is because typhoons passing this area may re-enter sea and make landfall twice. For typhoons that do not enter the sea after landing, a > 0 must be guaranteed; for typhoons that re-enter ocean after landing, we allow a < 0 before reentering the ocean. Considering that the Leizhou peninsula has a marine environment on both sides, we combine the peninsula with Hainan Island into Area 1, and the rest of Guangdong and Guangxi is Area 2. We consider the weakening effect of the Taiwan island on typhoons landing in Fujian. Fujian is separated from its neighbor to define Area 3. For the east China area, the number of typhoons landing to the north of Zhoushan has been greatly reduced. Therefore, we consider this region with Shanghai and Jiangsu as Area 5, and the north China region as Area 6. TESTS AND RESULTS We use the developed full track model to generate a 10,000-year typhoon event set, which has about 300,000 simulated typhoon events with the position, time, and intensity of each typhoon at every 6 h. We compare the statistics of the 10,000-year typhoon event set with the CMA best track data of three different periods (1949 to 1979, 1949 to 2019, 1980to 2019). 
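Before turning to the test statistics, two further pieces of the simulator lend themselves to a short sketch: the 6-h track update with the structure of Equation (4) and the post-landfall filling of the central pressure deficit with the structure of Equation (6). The coefficient names (gamma_u, a0, a1, ...) and the example values are placeholders, the interpretation of U as the eastward and V as the northward component is an assumption, and the linear dependence of the decay coefficient on the landfall deficit is one common choice that may differ from the regression actually fitted to the six coastal regions.

```python
import numpy as np

rng = np.random.default_rng(0)

def track_step(lon, lat, steer_u, steer_v, beta_x, beta_y, c, dt_hours=6.0):
    """One 6-h displacement: migration velocity = regression on the 250-850 hPa
    steering flow + an AR(1) beta-drift term + Gaussian noise (cf. Equation (4)).
    `c` holds the grid-cell coefficients gamma_u, gamma_v, a0, a1, b0, b1, sig_x, sig_y."""
    beta_x = c["a0"] + c["a1"] * beta_x + rng.normal(0.0, c["sig_x"])
    beta_y = c["b0"] + c["b1"] * beta_y + rng.normal(0.0, c["sig_y"])
    u = c["gamma_u"] * steer_u + beta_x            # eastward migration speed (m/s)
    v = c["gamma_v"] * steer_v + beta_y            # northward migration speed (m/s)
    dt = dt_hours * 3600.0
    lat_new = lat + v * dt / 111.0e3               # ~111 km per degree of latitude
    lon_new = lon + u * dt / (111.0e3 * np.cos(np.radians(lat)))
    return lon_new, lat_new, beta_x, beta_y

def pressure_deficit_after_landfall(dp0, t_hours, a0, a1, eps=0.0):
    """Exponential decay of the central pressure difference after landfall
    (cf. Equation (6)): dP(t) = dP0 * exp(-a * t), with a = a0 + a1 * dP0 + eps."""
    a = a0 + a1 * dp0 + eps
    return dp0 * np.exp(-a * np.asarray(t_hours, dtype=float))

# Illustrative decay of a 60 hPa landfall deficit over the first 24 h
# (a0 and a1 below are not fitted values):
print(pressure_deficit_after_landfall(60.0, [0, 6, 12, 18, 24], a0=0.02, a1=0.0005))
```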
The historical data from 1980 to 2019 are what the model was built from, so the comparison with the 1980 to 2019 statistics is an in-sample test. The simulated events are consistent with the historical data from 1980 to 2019 at the 5% significance level, but are not consistent with the historical data from 1949 to 2019. In addition, we also compare the monthly occurrence frequency of the simulated TCs in the WNP basin with that of the CMA best track dataset (1949 to 1979, 1949 to 2019, 1980 to 2019). It can be seen from Figure 4 that the simulation results are consistent with the historical data of 1980 to 2019. Likewise, we used the K-S test to compare the distributions of the historical data and the simulations; the results show consistency between the simulated results and the historical data for all three periods. Statistics of TC events We further divided the WNP basin into 2.5° × 2.5° grids and counted the annual average number of typhoons passing each cell, as in Figure 5. The spatial patterns of the three historical periods are essentially unchanged from 1949 to 2019. Moreover, the correlation coefficient between the simulations and the historical data (1980 to 2019) is 0.97, indicating that the track distribution across the WNP basin is consistent between the simulation results and the historical data. However, the number of simulated TCs passing through parts of eastern Japan and north of 35°N is 13% higher than in the historical records. This may be due to the interference in the steering airflow caused by the subtropical high east of Japan, which was not considered in our model. Figures 6 and 7 show the average value and standard deviation of the migration speed in each cell. The simulation results are consistent with the three historical best track data sets. The average and standard deviation of the moving speed vary with latitude, and the simulation results capture this difference. At high latitudes, both the simulation results and the historical data show higher migration speeds due to the intense prevailing westerlies. In addition, as shown in Figure 7, typhoons moving to the northeastern part of the WNP have large uncertainty in their speed. The correlation coefficient of the average value between the simulation and the historical data from 1980 to 2019 is 0.96, while the correlation coefficient of the standard deviations is 0.94. Furthermore, we compared the average and standard deviation of the migration direction in each cell, as in Figures 8 and 9. The simulation results are consistent with the three historical best track data sets. The correlation coefficient of the average value between the simulation and the historical data from 1980 to 2019 is 0.97 and that of the standard deviation is 0.74. At high latitudes, both the simulation results and the historical data from the three periods show a shift of the migration direction to the northeast. At the same time, the migration direction gradually changes from westward to eastward as the latitude increases. As shown in Figure 9, the typhoon track direction in the central part of the Pacific Ocean and the southern part of the South China Sea shows larger uncertainty. Finally, we compared the maximum intensity of the simulated and historical TCs in each cell (defining the intensity as the typhoon center pressure difference), as shown in Figures 10 and 11. Compared with the historical data from 1980 to 2019, the simulated average maximum intensity is consistent with that of the historical data.
The maximum life intensity mainly occurs at 120°E to 152.5°E and 12.5°N to 32.5°N. The correlation coefficient of average value between the simulation and the historical data from 1980 to 2019 is 0.90 and that of standard deviation between the simulation and the historical data is 0.92. As illustrated in Figure 10, the largest difference between the simulated and historical values occurs at the southern South China Sea where fewer historical typhoons in this area were observed. From the standard deviation shown in Figure 11, the simulated maximum intensity has wider fluctuations compared with that of the historical data. Statistics of landing TC events People are more concerned about the landing typhoons as most losses are due to typhoons near shore and on shore. In this section, we test the statistics of simulated typhoons against that of the best track data. Figure 12A, the simulation results are comparable to the historical data. Although the number of historical TCs in the CMA data set is more before the 1980s than after the 1980s, interestingly, the number of TCs landing in China is almost the same for two periods. It could be the case because many of the tropical depressions died out at sea before landing prior to the 1980s. Figure 12B shows the distribution of historical and simulated annual number of landing typhoons. The simulation has more chance to generate years with both less and more frequent landing typhoons. The statistical distribution of typhoon intensity in terms of the central pressure difference after landing in China's coastlines is shown in Figure 13A. Figure 13B shows statistical distribution of historical and simulated landfall typhoons for different typhoon intensity scales. As demonstrated earlier, the simulation results are in-line with the CMA data set (1980 to 2019) with a maximum deviation of 3.8%. Since the data samples prior to 1980 are problematic, there is a major discrepancy between the statistics of 1949 to 1979 data and 1980 to 2019 data for tropical depression and tropical storms. We first segment the coastline of mainland China (which also extends to a part of Vietnam and North Korea) and Hainan Island with each segment length 100 kilometers, as shown in Figure 14. Four parameters, namely, annual landfall number, landfall migration speed, landfall direction, and landfall intensity of both the simulated and historical typhoons are calculated for each segment as shown in Figure 15. It also shows the 90% confidence interval for the four parameters. Figure 15A indicates that the model can capture both the high-frequency regions (Hainan, Guangdong, Fujian, Zhejiang, section number from 1 to 36) and low-frequency regions (Jiangsu, section number from 37 to 40). Figures 15B and C show that the track model can simulate the migration speed and direction of landing typhoons adequately. The migration speeds are slower when TC lands in high-frequency regions, which may cause more floods brought by typhoon driven-rainfall. Figure 15B and D illustrate that in areas with fewer TC landfalls (western Hainan Island and north of the Yangtze River Delta) there is a higher uncertainty of the landing parameters. Furthermore, we partition mainland China and Hainan Island into five regions: A, B, C, D, and E as shown in Figure 16. Area A (Hainan) and Area B (Guangdong and Guangxi) are mainly affected by TCs generated in the South China Sea followed by TCs originating in the east sea of Philippines; Area C (Fujian, Zhejiang, and Shanghai) is a mountainous region. 
The corresponding landing TCs decay quickly. Area C and E (Shandong, Hebei, Tianjin, and Liaoning) are mainly affected by TCs originated in the east sea of Philippines; Area D (Jiangsu) is mainly affected by TCs moving from Area C. It is less affected by direct landing TCs. Figure 17 shows the historical and simulated frequency distribution of annual landfall numbers of typhoons in the 5 partitions. The annual numbers of landfall typhoons in D and E are less than 3 and it is less than 6 for A. It is quite likely to have more than 6 landing TCs in B. The proportions of the central pressure difference for landfall TCs in the 5 regions are shown in Figure 18 which reveals the consistency between the simulated data and best track data. EXTREME TYPHOON EVENTS Extreme typhoons are the most destructive and rare events which potentially cause big losses. A typhoon is considered to be extreme if it lands multiple times or if its intensity is extreme. Table 4 gives the annual average number of typhoon landings twice or more in the five partitions. It can be found that in both historical records and simulated data there is a considerable number of typhoons made multiple landfalls in the E region and a lot less in the other regions. The reason is that area E covers the area from the Shandong Peninsula to Liaoning. In parallel to this, typhoons made to the Northeast China often make their first landfall in the Shandong Peninsula and Korean Peninsula. Figure 19 presents the historical and simulated tracks in a year with the most typhoons landing in China. In best track data for 1949 to 2019, there were 15 tracks landing in China in 1952. The greatest number of simulated landfall typhoons in a year is 21. The simulated typhoon event set of 10,000 years has about 300,000 events and the CMA best track data set of 71 years (1949 to 2019) has only about 2300 TC events. This implies that the simulation data may have much more super typhoons than those in the historical records. Table 5 shows the maximum wind speeds of landfall TCs in the historical records and the simulated data set for the five partitions. Clearly, the maximum wind speeds of the simulated landfall TCs are higher than those of the historical typhoons. CONCLUSIONS Typhoon hazard module is the key component of the CAT risk modeling system. The function of the hazard module is to generate historically compatible TC events using empirical or physics-based TC hazard models. In this paper, a full track synthesis TC model based on statistics and atmospheric dynamics was introduced. It can simulate a TC from its generation in the ocean, moving track over the ocean, to its landfall and decay in the continent. The generation of TCs in the sea is modeled by the Poisson process. The statistical genesis model employs a three-dimensional kernel density function to estimate the formation of TCs in time and space. For the track modeling, a regression model of the typhoon migration speed, the steering airflow, and β drifting airflow was proposed based on the atmospheric dynamical theory. The intensity model of TC over the sea was established by the autoregressive model, in which the vertical wind shear and relative humidity were considered. After typhoons make landfall, decay models were fitted for the China mainland and offshore islands which were partitioned into six regions taking into account the geography and topography characteristics of China coastal areas. 
The models' parameters were calibrated for the NWP basin using the CMA best track dataset (1979 to 2019) and NCEP/NCAR reanalysis data. The typhoon hazard model was thoroughly and rigorously tested by generating a 10,000-year event set. Then, various comparisons were made using the statistics of the generated event set and CMA historical best track data set. From the perspective of CAT modeling in the insurance and reinsurance industry, characteristics pertaining to the generation and tracks of TCs over the sea, landfall and decay of TCs onshore on China coastal lines, and extreme TC events were studied, too. For TCs in the sea, results showed that the generated event was consistent with the historical best track data in annual average number of TC generations, track spatial distribution in the sea, migration speed and direction, and maximum intensity of TCs in the sea. The maximum life intensity of both simulated and historical typhoons mainly occurred in the sea areas within 120°E to 152.5°E and 12.5°N to 32.5°N. For landing and decay characteristics of typhoons in China coastal lines, the annual average number of landing typhoons of the generated events matched that of the historical. The difference of the maximum intensity at landing is less than 5% between the two event sets. A close look of the landfall statistics by segmenting the China coastal lines into 100 kilometers distance mileposts showed that, the number of landfalls, the moving speed and direction, the intensity of the historical TCs fall into the 90% confidence intervals by the generated TCs. For the five partitions of China's coastal regions, the landing typhoons showed similar characteristics between the two event sets. Rare and extreme events with multiple landfalls and very high intensity were also observed, the greater maximum intensity of simulated landfalls was reasonable with respect to its much larger data size than the historical data.
2022-01-09T16:24:42.131Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "92b8cb2d90450022b36195f239769d03bab75b82", "oa_license": "CCBY", "oa_url": "https://dprjournal.com/article/download/4508", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "cdbfafa90a52e017dbc3276e35705d42ec9c7794", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
221249
pes2o/s2orc
v3-fos-license
The Possibility of Cosmic Acceleration via Spatial Averaging in Lemaitre-Tolman-Bondi Models We investigate the possible occurrence of a positive cosmic acceleration in a spatially averaged, expanding, unbound Lemaitre-Tolman-Bondi cosmology. By studying an approximation in which the contribution of three-curvature dominates over the matter density, we construct numerical models which exhibit acceleration. Introduction We live in an inhomogeneous Universe, whose exact and complicated dynamics is described by Einstein's equations. It is generally assumed that when the spatial inhomogeneities are averaged over, the resulting Universe is described by the standard Friedmann equations for a homogeneous and isotropic cosmology. However, as is known [1], since Einstein equations are non-linear, the averaging over the inhomogeneous matter distribution will in general not yield the solution of Einstein equations which is described by the Friedmann-Robertson-Walker (FRW) metric. There will be corrections to the FRW solution, which could be small or large, and which could in principle lead to observational effects indicating a departure from standard FRW cosmology. The possibility that the observed cosmic acceleration [2] is caused by the spatial averaging of the observed inhomogeneities, rather than by a dark energy, has been investigated and debated in the literature [3,4,5]. A systematic framework has been developed for describing the dynamics of a modified Friedmann universe, obtained after spatial averaging [6]. It has been suggested that, within the framework of standard cosmology with cold dark matter initial conditions, an explanation of the acceleration in terms of averaged inhomogeneities is unlikely to work [7]. However, it is perhaps fair to say that the matter cannot be treated as completely closed, and further studies are desirable [8]. The Lemaître-Tolman-Bondi (LTB) cosmology [9], being an exact solution of Einstein equations for inhomogeneous dust matter, provides a useful toy model for investigating the possible connection between acceleration and averaging of inhomogeneities. Various authors have examined different aspects of the model in this regard. The redshift-luminosity distance relation in an LTB model and its possible connection with cosmic acceleration, or the lack of it, have been studied by Celeriér [10], Alnes et al. [11] and by Vanderveld et al. [12], Sugiura et al. [13], Mustapha et al. [14], Iguchi et al. [15]. Nambu and Tanimoto [16] give examples of cosmic acceleration after averaging in an LTB model consisting of a contracting region and an expanding region. Other works which study cosmic acceleration in LTB models are those by Moffat [17], Mansouri [18], Chuang et al. [19], Räsänen [20] and Apostolopoulos et al. [21]. It has sometimes been suggested in the literature that both an expanding and a contracting region are needed for acceleration. In the present paper we will address a question which does not seem to have been addressed in the above-mentioned works: can spatial averaging in a universe consisting of a single expanding LTB region produce acceleration? We show that the answer is in the affirmative. We do this by considering a low density, curvature dominated unbound LTB model in which the contribution of matter density is negligible compared to the contribution of the curvature function. Further, we concentrate on the late time behaviour of such a model. 
As a result of this proposed simplification, the calculation of the acceleration of the averaged scale factor becomes relatively simpler and conclusions about acceleration can be drawn, for specific choices of the energy function. In Section 2 of the paper we recall the effective FRW equations, resulting from spatial averaging in a dust dominated spacetime. In Section 3 we discuss spatial averaging for the marginally bound LTB model and point out there can be no acceleration in this case. The unbound LTB model is investigated in Section 4, in the approximation that the spatial curvature (equivalently, the energy function) dominates over the dust matter density, and numerical and analytical examples of acceleration are given. Averaging in Dust Dominated Spacetime For a general spacetime containing irrotational dust, the metric can be written in synchronous and comoving gauge 1 , The expansion tensor Θ i j is given by Θ i j ≡ (1/2)h ikḣ kj where the dot refers to a derivative with respect to time t. The traceless symmetric shear tensor is defined as σ i j ≡ Θ i j − (Θ/3)δ i j where Θ = Θ i i is the expansion scalar. The Einstein equations can be split [6] into a set of scalar equations and a set of vector and traceless tensor equations. The scalar equations are the Hamiltonian constraint (2a) and the evolution equation for Θ (2b), where the dot denotes derivative with respect to time t, (3) R is the Ricci scalar of the 3-dimensional hypersurface of constant t and σ 2 is the rate of shear defined by σ 2 ≡ (1/2)σ i j σ j i . Eqns. (2a) and (2b) can be combined to give Raychaudhuri's equatioṅ The continuity equationρ = −Θρ which gives the evolution of ρ, is consistent with Eqns. (2a), (2b). We only consider the scalar equations, since the spatial average of a scalar quantity can be defined in a gauge covariant manner within a given foliation of space-time. For the space-time described by (1), the spatial average of a scalar Ψ(t, x) over a comoving domain D at time t is defined by where h is the determinant of the 3-metric h ij and V D is the volume of the comoving domain given by Spatial averaging is, by definition, not generally covariant. Thus the choice of foliation is relevant, and should be motivated on physical grounds. In the context of cosmology, averaging over freely-falling observers is a natural choice, especially when one intends to compare the results with standard FRW cosmology. Following the definition (4) the following commutation relation then holds [6] which yields for the expansion scalar Θ Introducing the dimensionless scale factor a D ≡ (V D /V Din ) 1/3 normalized by the volume of the domain D at some initial time t in , we can average the scalar Einstein equations (2a), (2b) and the continuity equation to obtain [6] Here R D , the average of the spatial Ricci scalar (3) R, is a domain dependent spatial constant. The 'backreaction' Q D is given by and is also a spatial constant. The last equation (7d) simply reflects the fact that the mass contained in a comoving domain is constant by construction : the local continuity equationρ = −Θρ can be solved to give ρ √ h = ρ 0 √ h 0 where the subscript 0 refers to some arbitrary reference time t 0 . The mass M D contained in a comoving domain D is then which is precisely what is implied by Eqn. (7d). This averaging procedure can only be applied for spatial scalars, and hence only a subset of the Einstein equations can be smoothed out. 
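For orientation, the averaged scalar equations summarized above can be written out in their standard (Buchert) form for irrotational dust; the following is a textbook restatement in the notation of the text, not a reproduction of the paper's own Eqns. (7)-(13).

```latex
% Buchert's averaged equations for irrotational dust (standard form, c = 1)
\begin{align}
3\,\frac{\ddot a_{\mathcal D}}{a_{\mathcal D}} &= -4\pi G\,\langle\rho\rangle_{\mathcal D} + \mathcal Q_{\mathcal D},\\
3\left(\frac{\dot a_{\mathcal D}}{a_{\mathcal D}}\right)^{\!2} &= 8\pi G\,\langle\rho\rangle_{\mathcal D}
   -\tfrac12\,\langle{}^{(3)}R\rangle_{\mathcal D}-\tfrac12\,\mathcal Q_{\mathcal D},\\
\mathcal Q_{\mathcal D} &= \tfrac23\left(\langle\Theta^{2}\rangle_{\mathcal D}
   -\langle\Theta\rangle_{\mathcal D}^{2}\right)-2\,\langle\sigma^{2}\rangle_{\mathcal D},\\
0 &= \partial_t\!\left(a_{\mathcal D}^{6}\,\mathcal Q_{\mathcal D}\right)
   + a_{\mathcal D}^{4}\,\partial_t\!\left(a_{\mathcal D}^{2}\,\langle{}^{(3)}R\rangle_{\mathcal D}\right).
\end{align}
```

The last relation is the integrability condition referred to as Eqn. (12), and the acceleration criterion is the inequality $\mathcal Q_{\mathcal D} > 4\pi G\,\langle\rho\rangle_{\mathcal D}$.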
As a result it may appear that the outcome of such an approach is severely restricted, and essentially incomplete due to the impossibility to analyse the full set of equations. However one should note that the cosmological parameters of interest are scalars, and the averaging of the exact scalar part of Einstein equations provides the requisite needed information. A more general strategy would be to consider the smoothing of tensors, which is beyond the scalar approach that certainly provides useful information, albeit not the full information. Equations (7b), (7c) can be cast in a form which is immediately comparable with the standard FRW equations [22]. Namely,ä with ρ eff and P eff defined as A necessary condition for (10a) to integrate to (10b) takes the form of the following differential equation involving Q D and R DQ and the criterion to be met in order for the effective scale factor a D to accelerate, is The LTB Solution The system of equations (10a), (10b) and (12) is only consistent, it does not close. For a completely general spacetime with dust, therefore, it is not possible to proceed with the analysis without making certain assumptions about the form of the functions Q D and R D [6,7]. For this reason, it becomes convenient to work with the LTB metric, an exact solution of the Einstein equations which is a toy model consisting of a spherically symmetric inhomogeneous dust dominated spacetime. In this section, we describe the LTB solution and apply to it the averaging procedure described above for the simplest, marginally bound case. In the next section we extend the analysis to the unbound LTB solution. The LTB metric for pressureless dust is given in the synchronous and comoving gauge, by The Einstein equations simplify to Surfaces of constant r are 2−spheres having area 4πR 2 (r, t). ρ(r, t) is the energy density of dust, while E(r) and M (r) are arbitrary functions that arise on integrating the dynamical equations. Solutions can be found for three cases E(r) > 0, E(r) = 0 and E(r) < 0. We will restrict our attention to models in which E(r) has the same sign, for all r. The solution for E(r) = 0 (the marginally bound case) has the particularly simple form Here t 0 (r) is another arbitrary function arising from integration. The solution describes an expanding region, with the initial time t in chosen such that t > t in ≥ t 0 (r) for all r. For the other two cases, the solutions can be written in parametric form In the unbound case (E(r) > 0), R(r, t) increases monotonically with t, for every shell with label r. In the bound case (E(r) < 0), R(r, t) increases to a maximum value R max (r) for each shell r and then decreases back to 0 in a finite time. In all cases, there are two physically different free functions, although three arbitrary functions E, M and t 0 appear. One of the three represents the freedom to rescale the coordinate r. We use this freedom to set R(r, t in ) = r. To completely specify the solution, we specify the initial density ρ in (r) and the function E(r). This specifies M (r) = 4π r 0 ρ in (r)r 2 dr (which in the marginally bound case is interpreted as the mass contained in a comoving shell), and t 0 (r) can be solved for using Eqns. (16), (17a) or (17b) as the case may be, at time t = t in . Averaging the LTB Solution The quantities defined in Sec. 2 can be computed for the LTB metric of Eqn. (14). The averages are computed over a spherical domain of radius r D , centered on the observer. 
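For reference, the LTB expressions invoked above (the line element, the two scalar field equations, the marginally bound solution, and the unbound parametric solution) have the following standard form; this is a restatement from the literature in the paper's notation rather than a copy of Eqns. (14)-(17).

```latex
% LTB dust solution in synchronous, comoving coordinates (standard form);
% primes denote d/dr, dots denote d/dt
ds^{2} = -dt^{2} + \frac{R'^{2}(r,t)}{1+2E(r)}\,dr^{2} + R^{2}(r,t)\,d\Omega^{2},
\qquad
\dot R^{2} = \frac{2GM(r)}{R} + 2E(r),
\qquad
4\pi\rho(r,t) = \frac{M'(r)}{R^{2}\,R'}\,;
% marginally bound case E(r) = 0:
R(r,t) = \left(\tfrac{9GM}{2}\right)^{1/3}\bigl(t-t_{0}(r)\bigr)^{2/3};
% unbound case E(r) > 0, in parametric form:
R = \frac{GM}{2E}\bigl(\cosh\eta-1\bigr),
\qquad
t - t_{0}(r) = \frac{GM}{(2E)^{3/2}}\bigl(\sinh\eta-\eta\bigr).
```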
Other choices of the averaging domain will possibly yield different results, however, the choice of a spherical domain seems natural for the spherically symmetric metric of Eqn. (14). For clarity, we suppress the r and t dependences of the various functions in the following where a prime denotes a derivative with respect to r. Note that M D = M (r D ) if E = 0. Only in the marginally bound case is the function M (r) identified with the mass contained in the shell with label r. It is convenient to work with the combination (2/3) Θ 2 D − 2 σ 2 D rather than evaluate the average rate of shear σ 2 D separately. We define this to be C D and obtain The marginally bound case -vanishing backreaction The results of the previous subsection hold for all classes of the LTB solution, provided the averaging domain is spherically symmetric about the center. Now consider the marginally bound case E(r) = 0 for all r. The algebra in this case becomes very simple, and the backreaction can be computed analytically. We will show next that for a single domain with E(r) = 0 throughout, the backreaction Q D is, in fact, zero. Also, the average spatial curvature R D is zero (which is expected by inspection of the metric (14) if we note that a spatially uniform initial density profile in the LTB solution in this case yields the corresponding FRW solution). As described earlier we have To obtain the second equation we have used Eqn. (16) at time t = t in with the condition R(r, t in ) = r. Some algebra then yields the following resultṡ Since the solution was constructed assuming t > t 0 (r) for all r, Eqn. (21d) immediately shows thaẗ a D < 0 and hence acceleration is not possible in this case. Further, Eqn. (21e) shows that Thus the backreaction term vanishes for a region described by the marginally bound LTB solution. This result is not unexpected. We note that mathematically, the General Relativistic equations (15a) and (15b) describing the evolution of a spherical dust cloud are identical to the corresponding Newtonian equations. It has been shown by Buchert,et. al. [23] that the backreaction Q D in a spherically symmetric Newtonian model of dust, must vanish. Further, we note that in the marginally bound case, the mathematical expressions for the averaged quantities defined earlier coincide with their corresponding Newtonian analogues. Hence, for the fully relativistic (marginally bound) case also, the backreaction must vanish. The spatial Ricci scalar and its spatial average for a general E(r) are given by which shows that (3) R and hence R D vanish in the marginally bound case. This is consistent with the requirement of Eqn. (12). The unbound LTB solution Since the solution with zero spatial curvature fails to produce a non-trivial backreaction, we consider next the opposite extreme -a curvature dominated solution in which the contribution to the Einstein equations due to matter is much smaller than that due to spatial curvature. Before describing the construction of such a solution, we present a general treatment of regularity conditions which an unbound LTB model must satisfy. Regularity conditions on unbound LTB models Consider the class of unbound LTB models given by (17a). 
The functions M (r) and E(r) are to be specified by initial conditions at t = t in , and the choice of scaling R(r, t in ) = r fixes t 0 (r) as The regularity conditions imposed on this model, and their consequences, are as follows • No evolution at the symmetry centre: This is required in order to maintain spherical symmetry about the same point at all times, and translates asṘ(0, t) = 0 for all t. The right hand side of Eqn. (15a) must therefore vanish in the limit r → 0. Since the functions involved are non-negative, we assume that we can write Consistency requires β to be constant, and our scaling choice further requires β = 1. We do not require the exponents δ and α to necessarily be integers. • No shell-crossing singularities: Physically, we demand that an outer shell (labelled by a larger value of r) have a larger area radius R than an inner shell, at any time t. Unphysical shell-crossing singularities arise when this condition is not met. Mathematically, this requires R ′ (r, t) > 0 for all r, for all t. • Regularity of energy density: We demand that the energy density ρ(r, t) remain finite and strictly positive for all values of r and t. Combining this with Eqns. (15b) and (25) gives (assuming that R ′ is finite for all r and since β = 1) • No trapped shells: In order for an expanding shell to not be trapped initially, it must satisfy the condition r > 2GM (r). Near the regular center, this condition is automatically satisfied independent of the exact form of M (r), since there M ∼ r 3 . Consider now the function t 0 (r) given by (23). By observing the behaviour of the functions (cosh η in − 1) and (sinh η in − η in ) for values of δ equal to, less than, and greater than 2, it is easy to check that t 0 (r) is finite at r = 0 for all values of δ. However, this involves the assumption that M (r) is positive for r = 0. In the limit of M → 0 for all r, we find Although now, in the limit r → 0, t 0 (r) is finite only when δ ≤ 2, it will turn out that the integrals involved in the averaging procedure are insensitive to the behaviour of t 0 (r) in the r → 0 limit, and remain well defined for all positive values of δ. The expression for (3) R in (22) indicates that the spatial Ricci scalar diverges as r → 0 unless δ ≥ 2. However, we note that the spatial Ricci scalar is not a fully covariant quantity and depends on our choice of time slicing. The four -dimensional Ricci scalar, obtained after taking the trace of the Einstein equations as (4) R = 8πGρ(r, t) is finite at the origin irrespective of the behaviour of E(r). It is interesting to see how this cancellation occurs. We have On using the Einstein equation (15a) we obtain which neatly cancels the contribution from (3) R, leaving precisely 8πGρ(r, t) after applying the second Einstein equation (15b). Hence the 4−dimensional Ricci scalar does not impose any further restrictions on the form of E(r). The fact that the origin is well behaved can also be seen from the behaviour of the Kretschmann scalar, given by [25] (4) R µνσρ (4) R µνσρ = 12 A condition on the value of δ is obtained, however, by the regularity of the energy density ρ(r, t), which assumes that R ′ (r, t) is not only positive, but also finite for all r and t. Equation (27) shows that unless δ = 2, R ′ either diverges or vanishes at the center, violating this regularity condition. Late time solution and curvature dominated unbound models The function R(r, t) is an increasing function of time in all the unbound models described by (17a). 
The Einstein equation (15a) then shows that for sufficiently late times t ≫ t in , neglecting the term involving 1/R, all unbound models have the approximate solution given by (27). If on the other hand, we start with a model which satisfies GM (r)/(rE(r)) ≪ 1 for all r, then since our scaling assumes that R = r at t = t in , we will have GM (r)/(R(r, t)E(r)) ≪ 1 for all r and for all t ≥ t in , and (27) is then an approximate solution at all times, the approximation becoming better as t increases. To make this idea more precise, consider the closed form expression for t in terms of R obtained by integrating Eqn. (15a) [24] t − t 0 (r) = Hence, imposing R(r, t in ) = r we have Let us write GM (r)/E(r) ≡ ǫGM (r)/E(r) ≡ ǫg(r) where ǫ is a dimensionless positive number whose value we can control. This relation also defines the functionsM (r) and g(r). We can rewrite (31) for ǫ ≪ 1 as Here O(x 2 ) represents a power series beginning with a term of order x 2 , and we have used a binomial expansion in ǫg(r)/R and the asymptotic expansion for the inverse hyperbolic sine given by (as x → 0) [26] sinh −1 1 x = ln 2 − ln x + 1 4 The terms in Eqn. (33) involving ǫ vanish as ǫ → 0, although the expression in (33) cannot be inverted to get R = R(r, t), due to the presence of the logarithm. We can, however, make the following statement. Provided the function g(r)/r is finite for all values of r, then given any starting time t in , we can choose ǫ small enough that the terms involving ǫ on the right hand side of Eqn. (33) are all negligible compared to unity. Then, since R increases with time, these terms will always be negligible compared to unity. Alternatively, given some ǫg/r = GM/(Er) which is finite for all r, one can always wait for a sufficiently long time, and find that the ǫ dependent terms become smaller compared to unity. In this case, we need not even assume that ǫ is small. It is in this sense that the approximation involved in writing the equations in (27) becomes better as t increases (with the caveat that if ǫ is not small, then t 0 (r) must be given by the full expression (32) and not the approximation of (27)). This shows that the first of equations (27) is the correct late time solution for all unbound models, with the second being a good approximation when ǫ is small. The condition that for some ǫ > 0, g(r)/r be finite for all r, and in particular as r → 0, implies that δ ≤ 2 where δ is defined in (24). This is not inconsistent with the requirement δ = 2 imposed by the criterion of regularity of energy density. Consider now a model which begins with negligible matter (ǫ → 0) and in which we have waited for a sufficiently long time (t ≫ t in ). Eliminating t 0 (r) from (27) the approximate solution becomes where we have introduced two placeholders λ r and λ t which will remind us of the relative magnitudes of various terms. We will ultimately set λ r = λ t = 1. Substituting for R in the expression for the domain volume V D in Eqn. (18), we find where we have defined the domain dependent integrals The sum of the exponents of λ r and λ t in each term in (36) indicates the relative order of that term with respect to the leading t 3 term. This approach of treating some terms as small compared to others is valid since the various integrals which multiply the powers of t, are all finite and non-zero. We note that the solution in Eqn. (27) actually corresponds to Minkowski spacetime, since in this limit the matter content has been neglected. 
The corresponding Riemann tensor and Kretschmann scalar are exactly zero. The constant time three-spaces are hypersurfaces of negative curvature, with the threecurvature being determined by the function E(r). The 'FRW' limit of this solution is in fact the Milne universe; the solution (27) could hence be thought of as the 'Tolman-Bondi' type generalization of the Milne universe. For our purpose, it is not a problem that the solution describes Minkowski spacetime -we know from the initial conditions that dust matter is present, only its density is negligible compared to the curvature term. The form of the solution then allows us to easily determine if the average scale factor a D undergoes acceleration. We demonstrate this with explicit examples in the next subsection. Subsequently, we argue that if a small amount of matter is introduced, so as to introduce departure from Minkowski spacetime, the sign of the acceleration of a D is preserved. Condition for late time acceleration The expression for the volume V D (t) in (36) allows us to determine the late time behaviour of the effective scale factor a D (t) ≡ (V D (t)/V D (t in )) 1/3 . Using a binomial expansion for t ≫ t in in (36), we get where O (3) represents terms involving λ m r λ n t , i.e. containing (1/t m+n ) with m + n ≥ 3. We see that the generic late time (i.e. t → ∞) behaviour of the unbound models under consideration isä D → 0, and that deviations from zero are small, being a second order effect. Whether the approach tö a D = 0 is via an accelerating or decelerating phase, depends upon the relative magnitudes of the domain integrals involved. A sufficient condition for an unbound model with negligible matter to accelerate at late times, is To proceed further we need to specify a particular model.As an explicit example of models admitting acceleration, we consider the power law models characterized by 2E(r) = r δ , for all r, in some units. (At present we are only demonstrating the existence of such models, and shall therefore not worry about the physical scales involved.) Keeping in mind the discussion of Secs. 4.1 and 4.2, we must strictly speaking only consider the model with δ = 2. The models with δ > 2 cannot be considered at all, since they violate the conditions assumed in Sec. 4.2 which justified the approximation in Eqn. (27). The models with δ < 2 on the other hand, contain a Ricci scalar that diverges and a matter density that vanishes at the center. Despite these pathologies, we display the results for the models with δ < 2 as an existence proof of acceleration using this very simple parametrization. Although it is possible to obtain analytical expressions for the integrals in (37) in terms of the incomplete Beta function, it serves our purpose much better to numerically evaluate the integrals for various values of δ and r D , and plot the function P defined in (39). The results are shown in fig.1. Note that P vanishes along the line δ = 2, but is positive everywhere else in the region plotted, and that the positivity ofä D /a D at late times is independent of the size of the domain r D . We have therefore obtained a continuous range of parameter values (δ, r D ) which admit late time acceleration. In order to demonstrate that the acceleration obtained above is not an artifact of the singular behaviour of those models, we construct another set of models which show late time acceleration, and in which the spatial Ricci scalar (3) R remains finite everywhere. 
Consider the models characterized by the energy function where we have again used arbitrary units. Since a > 0, the r → 0 behaviour of these models is 2E ∼ r 2 , which satisfies the regularity conditions of Sec. 4.1 and keeps (3) R finite at the origin. Also, keeping a < 2 ensures the 'no shell-crossing' criterion of Sec. 4.1. The function P/I E for these models, which controls the magnitude of the late time acceleration, is shown in fig. 2, against r D and a. For clarity, in the second panel we have shown P/I E against a for specific values of r D . Again, we find that P/I E is positive everywhere in the region shown, and hence the models show late time acceleration for all allowed values of a and r D . As an example, we plot the evolution of the dimensionless quantity q D defined by for various fixed values of a and r D . The results are shown in fig. 3. We have used units in which t in = 1, and have displayed the evolution for times t > 100 t in . As mentioned earlier, a potentially contentious issue is that in all of the calculations above, we have actually set ǫ = 0. Since the function t 0 (r) approaches its approximation in Eqn. (27) in a continuous fashion as ǫ → 0, we expect that models with a non-zero but small matter density will also exhibit the same qualitative late time behaviour as the ones above. To demonstrate this, we consider the leading corrections to the function t 0 (r) in the presence of a small but non-zero ǫ. We are still assuming the late time limit so that the ǫ dependent terms on the right hand side of Eqn. (33) can be neglected (more precisely, we treat both ǫ and g/R as small quantities). First, let us rewrite Eqn. (32) as It is easy to check that in the late time limit, the expression for volume V D becomes where I E is the same as defined in (37), and the remaining integrals are defined analogously to those in (37), In arriving at Eqn. (43), we have neglected terms involving (g/R)(ǫ ln ǫ), ǫ(g/R) ln(g/R) and terms of order O(ǫg/R) on the right hand side of Eqn. (33). In order to proceed as before, we further assume that these leading order corrections are smaller than the terms of order λ 2 r coming from the binomial expansion of V D in Eqn. (43). This is essential in order to be able to make a statement analogous to (39), and can be ensured by choosing ǫ small enough, without setting it exactly to zero. The condition for late time acceleration in this situation becomes For ǫ < e −1 , we have 0 < ǫ < −ǫ ln ǫ < 1, and the leading order terms in the expansion of h(r) contain (ǫ ln ǫ) (assuming that the r dependent coefficients are well behaved for all r). On expanding the integrals in (45) to this leading order, we find for the function P h , whereM is defined by M = ǫM , P is defined in (39) and the integrals I give the leading order corrections to I Er and I Er 2 respectively. The function P h of (45) replaces P in Eqn. (38). This shows that a non-zero ǫ brings in an additional correction toä D /a D which is of order λ 2 r (ǫ ln ǫ). We have already neglected terms of order (g/R)(ǫ ln ǫ), and since (g/R) is small because t is large, we should therefore also ignore terms of order λ r (ǫ ln ǫ), λ t (ǫ ln ǫ) and higher. Hence the correction given by P (ǫ ln ǫ) (provided it is finite), should be ignored. We therefore see explicitly that within the late time approximation, we can always have a non-zero but small enough ǫ which does not affect the sign of the acceleration at the leading order. 
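As a numerical cross-check of the curvature-dominated analysis above, the sign of the late-time acceleration can be evaluated directly from the fact that, with R(r,t) = r + sqrt(2E(r)) (t - t_in) and matter neglected, the domain volume V_D = 4π ∫_0^{r_D} R² R' (1+2E)^{-1/2} dr is a cubic polynomial in τ = t - t_in. Expanding a_D = V_D^{1/3} then gives ä_D/a_D → 2(3 c_1 c_3 - c_2²)/(9 c_3² τ⁴), so the sign of 3 c_1 c_3 - c_2² decides between late-time acceleration and deceleration. The sketch below is an independent reconstruction of this check; the indicator is not literally the paper's P, although it plays the same role as the condition in Eqn. (39), and the function names and parameter choices are illustrative.

```python
import numpy as np

def late_time_accel_indicator(E, dE, r_D, n=200_000):
    """Return 3*c1*c3 - c2**2 for a matter-free unbound LTB domain, where
    V_D(tau) = 4*pi*(c3*tau**3 + c2*tau**2 + c1*tau + c0) follows from
    R = r + s(r)*tau with s = sqrt(2E) and the proper-volume weight 1/sqrt(1+2E).
    A positive value means the averaged scale factor accelerates at late times."""
    r = np.linspace(1e-8, r_D, n)
    s = np.sqrt(2.0 * E(r))              # sqrt(2E(r))
    sp = dE(r) / s                       # d/dr sqrt(2E) = E'(r)/sqrt(2E)
    w = 1.0 / np.sqrt(1.0 + 2.0 * E(r))  # proper-volume weight
    c3 = np.trapz(s**2 * sp * w, r)
    c2 = np.trapz((s**2 + 2.0 * r * s * sp) * w, r)
    c1 = np.trapz((2.0 * r * s + r**2 * sp) * w, r)
    return 3.0 * c1 * c3 - c2**2

# Power-law energy function 2E(r) = r**delta (illustrative choices of delta and r_D)
delta = 1.0
E  = lambda r: 0.5 * r**delta
dE = lambda r: 0.5 * delta * r**(delta - 1.0)
print(late_time_accel_indicator(E, dE, r_D=1.0))   # ~ +4e-3: late-time acceleration
```

For this power-law family the indicator vanishes at δ = 2 and is small but positive for δ < 2, in line with the behaviour of P described around Fig. 1.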
A rigorous argument to demonstrate acceleration in models with non-zero matter in general would, of course, require a complete numerical evolution of the LTB equations, and this work is in progress. Our semi-numerical analysis, however, throws open the possibility of a large class of models which may show late time acceleration after averaging. Our results for the zero matter limit also point to the need for caution in interpreting acceleration -by suitable choices of the energy function E(r) one could obtain acceleration, even though in this limit the spacetime coincides with Minkowski spacetime. A similar demonstration was earlier given by Ishibashi and Wald [5]. They showed that by suitably joining two negative curvature slices (hyperboloids) in Minkowski spacetime one can construct a spatial region which exhibits acceleration. In contrast though, the slicing we choose is physically motivated, so as to coincide with the FRW slicing when matter is included. Our own interest of course was in demonstrating, by adding matter beyond the Minkowski limit, that acceleration is possible in a single expanding LTB region. An analytical example (r = 0 excluded) In this subsection we will follow a slightly different approach and try to construct an accelerating model from purely analytic arguments. We begin with a domain in which t 0 (r) > 0 for all r. We now use the approximate solution (27) in the expression for volume in (18) and keep the integration limits unspecified as r 1 , r 2 , i.e. V D = 4π At late times t, by treating t 0 (r) in its entirety as a small quantity compared to t, we findä respectively. By definition, 0 < p < 1 since we require t 0 > 0 for all r. This implies (2p − 1)/p < 1 for all r, so if the first relation in (52) holds, then so does the second. The first relation, in terms of E(r), reduces to E ′ /E > (−1/r) which is necessarily true. Hence we have obtained a class of solutions which show late time acceleration of the effective scale factor a D , with the caveat that we must exclude a sphere around the origin from our domain of integration, the radius of this sphere being determined by the form of E(r). Discussion We have shown that it is possible within the framework of classical General Relativity, to construct models of universes in which the average behaviour of spatial slices is that of accelerating expansion. Although the LTB models are unrealistic (since they place us at the center of the Universe), they are useful in building intuition. Especially, since LTB is an exact solution, it helps towards a deeper understanding of averaged inhomogeneous cosmological solutions of Einstein equations. In particular, our solution could be assumed to apply not necessarily to the whole Universe, but only to a local underdense region such as a void. The volume average of the late Universe is dominated by voids, and structures occupy a tiny fraction of the volume. The average over such a distribution does not lead to a FRW model. The curvature of voids can be estimated to be proportional to minus the square of their Hubble expansion rate and thus must be negative [27]. The negative curvature LTB model discussed in this paper could be useful for describing such a situation. In both the examples which we gave, namely the power law models and the models given by Eqn. (40), the qualitative behaviour of the evolution of the effective scale factor was independent of the size of the averaging domain. 
Whereas the power law models were pathological in the sense that, among other things, their spatial Ricci scalars diverged at the symmetry center, the second class of models described by (40) had no such pathology. The point to be emphasized, though, is that our analysis clearly shows the importance of curvature (expressed as a non-zero energy function E(r)) in causing the late time acceleration. The fact that the limit of vanishing matter density (ǫ → 0) is well defined, means that the qualitative behaviour of curvature dominated models is not expected to change by adding a finite but small amount of matter. A few remarks comparing the results of the present paper with the concordance model in standard cosmology are in order. One could assert that observational data show that our currently accelerating universe has zero spatial curvature. On the other hand, our LTB model with zero spatial curvature (the marginally bound case) shows no acceleration. It might hence appear that our curvature dominated LTB model is of very limited interest. However, observations in the late Universe must not be matched only with a zero-curvature Universe. This assumption may be good in the early stages of evolution, but it is just a fitting ansatz to the late-time inhomogeneous Universe. What the consideration of averaged inhomogeneities in the present and other similar papers demonstrates is that the real Universe could in principle have negative curvature and yet exhibit acceleration. If this were indeed to be the case, it could eliminate the need for a dark energy. It might also be said that observational data show that the Universe has up to thirty percent of its content in luminous and dark matter and hence our low density curvature dominated model showing acceleration does not meet this criterion. However, in reality our model is in disagreement with the concordance model, which while being one of the simplest and most favoured possibilities, need not turn out to be the final answer. The consideration of averaged inhomogeneities shows that low density, curvature dominated models can also in principle fit the observational data. This issue should hence be regarded as an open one. Furthermore, as noted above, our LTB model could by itself be considered relevant for describing a locally underdense region such as a void. Our results highlight the intimate connection between the averaged spatial curvature, and the evolution of the kinematic back-reaction, as anticipated from the integrability condition given by Eqn. (12). It will be interesting to consider the more general dust models described by the metric (1), and enquire if acceleration can again be produced in the approximation where the averaged three-curvature dominates over the averaged matter density during some epoch of cosmic evolution.
2014-10-01T00:00:00.000Z
2006-05-08T00:00:00.000
{ "year": 2006, "sha1": "467dba9fa52aae425d4df7ae67fe89a5b03cd907", "oa_license": null, "oa_url": "http://arxiv.org/pdf/astro-ph/0605195v1.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "467dba9fa52aae425d4df7ae67fe89a5b03cd907", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
10469731
pes2o/s2orc
v3-fos-license
Poly(Dimethylsiloxane) (PDMS) Affects Gene Expression in PC12 Cells Differentiating into Neuronal-Like Cells Introduction Microfluidics systems usually consist of materials like PMMA - poly(methyl methacrylate) and PDMS - poly(dimethylsiloxane) and not polystyrene (PS), which is usually used for cell culture. Cellular and molecular responses in cells grown on PS are well characterized due to decades of accumulated research. In contrast, the experience base is limited for materials used in microfludics chip fabrication. Methods The effect of different materials (PS, PMMA and perforated PMMA with a piece of PDMS underneath) on the growth and differentiation of PC12 (adrenal phaeochromocytoma) cells into neuronal-like cells was investigated using cell viability, cell cycle distribution, morphology, and gene expression analysis. Results/Conclusions After differentiation, the morphology, viability and cell cycle distribution of PC12 cells grown on PS, PMMA with and without PDMS underneath was the same. By contrast, 41 genes showed different expression for PC12 cells differentiating on PMMA as compared to on PS. In contrast, 677 genes showed different expression on PMMA with PDMS underneath as compared with PC12 cells on PS. The differentially expressed genes are involved in neuronal cell development and function. However, there were also many markers for neuronal cell development and functions that were expressed similarly in cells differentiating on PS, PMMA and PMMA with PDMS underneath. In conclusion, it was shown that PMMA has a minor impact and PDMS a major impact on gene expression in PC12 cells. Introduction Microfluidics provides the opportunity to investigate cells on both single and multi-cellular level with excellent spatial and temporal control of cell growth and stimuli [1]. Although microfluidics based cell culturing presents many advantages over conventional cell culturing methods, it is not yet widely used [2]. This may be due to that additional factors have to be considered before using microfluidics for biological experiments, e.g. the influence of flow conditions on the cells and the material used for system construction. While batch cultures are standardized using polystyrene (PS) flasks or microtitre plates, microfluidics devices are made of a whole range of other materials, such as poly(dimethylsiloxane) (PDMS), poly(methyl methacrylate) (PMMA), polycarbonate (PC), cyclic olefin copolymers (COC) and glass [3][4][5][6]. One reason for this is that PS is not straightforward to us for constructing microfluidics devices; the main challenge being to bond two pieces of PS together [4,7]. Composite PDMS based devices, in which a PDMS layer is grafted onto another material like glass, PS, or PMMA, have become widely popular in the microfluidic field. The reason is that it is possible to create highly complex fluidic control features in PDMS, such as pumps and valves that control medium delivery to the cells [8]. We have recently developed a powerful way to create and drive microfluidic cell culturing systems using a modular approach, also containing PDMS parts [9,10], based on a handful of components fabricated in PMMA and PDMS [11][12][13][14]. Although a significant number of PDMS-based microfluidic cell culture systems have been reported [5,[15][16][17][18], remarkably little attention has been paid to the specific properties of PDMS, which may potentially influence the biological results. 
Properties of interests are gas permeability, absorption of hydrophobic molecules and leaching of uncured oligomers from the polymer components into the cell culture medium [4,19]. It has been reported that mouse mammary fibroblasts cultured in PDMSbased microchannels responded significantly different, when compared to culturing in a 96-well plates [20]. Furthermore, PDMS oligomers were detected in the plasma membranes of NMuMG cells cultured in PDMS microchannels for 24 hours [19]. Millet et al. [17] showed that the biocompatibility of PDMS microdevices may be significantly increased by several extractions/washes of PDMS with various solvents to remove impurities. Due to the extensive use of PDMS and its reported negative effects on cells, it is highly important to gather as much information as possible about its effects on cells in order to be able to predict the effect of PDMS on any given assay. The aim of this study was to explore the biocompatibility of cell culturing on PMMA and PDMS in a configuration resembling our previously developed modular system [9,10,10,11], and compare it to cell culturing on PS as the reference material. The study also includes a model for composite PDMS chips where the control features are defined in PDMS while the cells are grown on glass, PS or PMMA [4]. Biocompatibility is often assessed using measurements of cell viability, growth, and morphology. However, these parameters are not sufficient to explain specific material effects on the molecular level [21] (Lopacinska, 2012). For instance, alterations in gene expression can underlie many diseases, e.g. neurodegenerative disorders such as Alzheimer's disease [22][23][24]. Therefore, the cell experimental system must have a minimal impact, or at least a known impact, on the biological system since there is a link between gene expression and disease mechanism. The choice of investigated biocompatibility parameters is thus vital. For a general-purpose cell culture chip, a material is biocompatible when it: (i) supports high proliferation rates, (ii) does not induce cell death, and (iii) does not alter the transcriptome profile, compared to a reference material such as PS. In most microfluidics chips or even at the chip material level, especially the latter requirement is not well met or characterized. We therefore decided to analyze the gene expression profiles of cells by means of DNA microarray expression analysis to check if any differences between tested conditions can be observed. DNA microarrays are powerful tools that provide a means to correlate effects on gene expression profiles to fundamental biological processes [25,26]. We previously reported that a PMMA based cell culture chip showed excellent biocompatibility using both viability assays and gene expression profiling of HeLa cells grown on PS, PMMA and inside PMMA chips [26][27][28]. The results strongly indicated that PMMA is a good candidate for fabrication of general-purpose cell culture chips, as PMMA had no effect even on Hela cells at the molecular level or cellular function level as compared to cells growing on PS. However, our previous studies on PMMA were limited to one cancer cell line (HeLa cells), which has been selected for growth, and only one biological condition -proliferation. Detailed investigation of the impact of cell culture chip-materials on other cell types and other cellular functions is highly needed to understand chip material-cell interactions. 
The present study was therefore designed to explore the response of a more complex cellular model (PC12 cells) that included a detailed characterization of both growth and differentiation of these cells. The PC12 cell line represents a well-established in vitro model to examine neuronal cell differentiation, neurite outgrowth and neurosecretion [29,30]. Upon nerve growth factor (NGF) treatment, PC12 cells stop dividing, start to extend neurites, and acquire the phenotype of sympathetic ganglion neurons [31,32]. Sympathetic-like neuron development is characterized by neurite outgrowth, electrical excitability, and the presence of synaptic vesicles [33]. This study answers questions about the applicability of PMMA as a solid support material and the trans-acting effects of PDMS on the complex rearrangement of the biochemical constitution of dividing cells into non-dividing differentiated cells. The cells were incubated at 37°C in a humidified atmosphere of 95% air and 5% CO2. PC12 cells were grown to 80-90% confluency and harvested with 0.1% trypsin, 0.02% EDTA in Ca2+- and Mg2+-free phosphate-buffered saline (Sigma, USA). The cells, at passage number 8, were seeded into 3 different kinds of laminin-coated (10 µg/ml in PBS, Sigma, USA) cell culture dishes (100 mm × 20 mm, Nalgene Nunc International, Rochester, USA): (1) petri dishes (polystyrene - PS), (2) petri dishes with a 2 mm thick PMMA plate (Nordisk Plast, Denmark) placed at the bottom (PMMA), (3) petri dishes having a 2 mm thick PMMA plate with through holes, placed on top of a 2 mm thick PDMS layer (Sylgard 184, Dow Corning) (PMMA-PDMS). PDMS was prepared by mixing a 10:1 mass ratio of elastomer to curing agent and cured overnight at 65°C. All PMMA pieces were made by a CNC controlled micromilling machine (Folken, Glendale, CA). Tested materials were sterilized by using 0.5 M NaOH. After 10 min of sterilization, NaOH was aspirated and samples were washed three times with sterile 1xPBS. Two independent biological repeats for each culture condition were prepared. Cell Viability and Metabolic Activity Assays The calcein and propidium iodide assay was used to determine the number of live and dead cells on the investigated surfaces. Cells were seeded and cultured for 4 days at a concentration of 3 × 10^5 cells per well in 6-well plates containing the different test materials. After 4 days of cell culturing, the medium was removed and 2.5 ml of 3 µM calcein AM (staining live cells) and 3 µM propidium iodide (PI, staining dead cells) was added to each well. The dyes were incubated with the cells for 30 min, after which the signals from calcein and PI were determined using an automated inverted life science microscope (Axio Observer.Z1, Carl Zeiss). The experiments were repeated three times and for each experiment at least 100 cells were visualized and analyzed. The results were shown as the mean ± SD. The MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide, a yellow tetrazole) assay was utilized to evaluate PC12 cell growth on the test surfaces. Cells were cultured without NGF for 4 days, after which 2.5 ml of a 5 mg/ml solution of MTT in 1xPBS was added to each well. MTT was allowed to be metabolized by the cells for 4 hours at 37°C in an atmosphere of 5% CO2. The medium was thereafter carefully removed from each well and the water-insoluble product dissolved in 2.5 ml DMSO. The plates were shaken for 10 min and the optical density (OD) of the dissolved solute was read at 560 nm, with the background at 670 nm subtracted. 
Each sample analysis was repeated three times and the results were reported as the mean ± SD. Cell Morphology Investigation After 4 days of differentiation with NGF treatment, the PC12 cells were fixed in 2% glutaraldehyde in 0.05 M cacodylate buffer for 15-20 minutes at room temperature, and subsequently washed in 1xPBS. Cell morphology was investigated using an automated inverted life science microscope (Axio Observer.Z1, Carl Zeiss) equipped with a 40× magnification objective for phase contrast and PlasDIC. The number of differentiated cells was determined by manual counting of cells that had at least one neurite with a length equal to the cell body diameter. The data were presented as the percentage of the total number of cells for each experiment, with each experiment repeated three times, and where at least 100 cells were visualized and analyzed. The results were shown as the mean ± SD. Cell Cycle Analysis and Sub-G1 DNA Measurement PC12 cells were cultured at the three different test conditions (PS, PMMA, PMMA-PDMS) with and without NGF treatment, and then analyzed by flow cytometry assays. Briefly, cells were harvested every 24 hours by centrifugation at 200 × g for 10 min, washed with ice-cold PBS and fixed with 70% cold ethanol at 4°C overnight. The fixed cells were suspended in PBS, and further treated with RNase (DNase free, 170 µg/ml final concentration in PBS) and PI (45 µg/ml final concentration in PBS) for 30 min at 37°C in the dark. The cell suspension was stored at 4°C protected from light until analysis. The intensity of PI was measured using a Gallios Flow Cytometer (Beckman Coulter, USA). At least 10,000 cells were analyzed for each sample and gated on the basis of forward and side scatter. The number of cells in the different phases of the cell cycle was analyzed using Cyflogic v. 1.2.1; for the DNA analysis, doublets and higher-order cell clumps were detected by collecting peak versus integrated signals. Error bars show mean ± SD estimated from the results of three independent identical experiments. RNA Preparation After 4 days of subculture, the non-differentiated PC12 cells were harvested from half of the seeded cell culture dishes and the total RNA was isolated and collected. The medium in the rest of the samples was changed to MEM/F-12 (1:1) with GlutaMAX TM (Invitrogen, USA) supplemented with 0.5% horse serum, 0.5% fetal bovine serum, 0.2% of 0.05 mg/ml nerve growth factor, 0.5% HEPES and 1% penicillin/streptomycin (all from Sigma, USA). After 4 days of differentiation, the total RNA was isolated from differentiated PC12 cells grown on the various surfaces. The RNeasy total RNA isolation kit (Qiagen, USA) was used to isolate total RNA from the cells cultured under the different experimental conditions. Quantification of the obtained total RNA was done using an Ultraspec 3000 spectrophotometer (Pharmacia Biotech, UK). To assess RNA quality, an Agilent 2100 Bioanalyzer (Agilent Technologies, USA) was utilized. DNA Microarray The experiments were performed using the NimbleGen Rattus norvegicus 12 × 135K Array, which consists of 26,419 complementary DNA (cDNA) spots. Ten micrograms of the appropriate RNA (PS.ND, PMMA.ND, PMMA-PDMS.ND, PS.D, PMMA.D, PMMA-PDMS.D, see Fig. 1 and Fig. 2 for definitions) were processed and labeled using the standard NimbleGen protocol. Briefly, RNA was converted into cDNA using the SuperScript II cDNA Conversion Kit (Invitrogen, USA). cDNA was random-prime labeled with Cy3-nonamers and hybridized to the microarrays for 16 hours at 42°C. 
The arrays were washed, dried, and scanned at 5 µm resolution using a GenePix 4000B microarray scanner (Molecular Devices, USA). Data were extracted from the scanned images using NimbleScan software (Roche NimbleGen, USA). Data Analysis using R and Bioconductor Data analysis was performed using Bioconductor packages and the statistical program R (version 2.10.1), as described in [34]. The limma package was used to determine differentially expressed genes between the experimental groups, by fitting a linear model to the expression data for each gene [35,36]. The data were normalized using the RMA algorithm offered by the oligo package, which applies three preprocessing steps to the raw data of the obtained expression arrays: convolution background correction, quantile normalization, and summarization via median polish [37,38]. To assess the differentially expressed genes, the Empirical Bayes method implemented in the limma package was used. The resulting test statistic is a moderated t-statistic, which operates on a weighted average of the single-gene estimated variance s_g^2 and a global variance estimator s_0^2, instead of only s_g^2 [35]. Multiple hypothesis testing was controlled by applying the false discovery rate (FDR) algorithm. For the limma package to assess differentially expressed genes from the microarray data, two matrices need to be specified: the design matrix and the contrast matrix [36]. In the same way, expression of the genes of interest was assessed for NGF-differentiated PC12 cells. The genes of interest were selected as those displaying an expression difference of more than two-fold between the compared cell culture conditions. A Venn diagram was utilized, representing each contrast as a circle enclosing the number of more than two-fold regulated genes (up and down). The number of genes similarly regulated in more than one contrast was presented in the overlapping region of the corresponding circles. The top 20 most highly differentially expressed genes, between NGF-differentiated and non-differentiated PC12 cells, were revealed for each cell culture condition by using the topTable function, and the similarly expressed genes at all cell culture conditions were identified. Functional Gene Ontology (GO) annotation of genes of interest was performed using DAVID Bioinformatics Resources 6.7 [39,40] to identify functional gene groups and ontology terms that are significantly overrepresented among the genes of interest. To remove any redundancy in our gene list, i.e. when two or more IDs represent the same gene, the DAVID gene ID was used as a unique identifier. Results Growth (no NGF treatment) and differentiation (NGF treatment of cells) of PC12 cells were investigated on both PS and PMMA. To determine the effect of PDMS, PC12 cells were cultured on a perforated PMMA sheet resting on a PDMS layer (Fig. 1). The PMMA sheet was perforated to ensure that possible soluble compounds from PDMS were distributed evenly to the cells. The cell culture models shown in Fig. 1 were chosen in order to be compatible with the requirements of the large number of cells (up to several million) needed for several of the bioanalytical methods used in this study. The objective of this study was to investigate the biocompatibility of PMMA and PDMS as cell-culturing materials for supporting PC12 cell growth and differentiation, as shown in Fig. 2. Cell Viability and Metabolic Activity The cell viability and metabolic activity (Fig. 
3) of PC12 cells were analyzed after 4 days of cultivation in the absence of NGF. Cells grown on tissue-grade PS were used as the reference condition. As seen, the cell viability and cell death frequencies were comparable at all three cell culture conditions, with about 97% of the cells viable and 3% dead in the cultures (Fig. 3A). Cells, seeded at the same density, displayed similar metabolic activity at the three different culture conditions (Fig. 3B), as measured with the MTT test. Cell Morphology In the absence of NGF, the PC12 cells were relatively small and round-shaped (data not shown). Treatment with NGF for 4 days resulted in larger cell bodies and extended neurites of the cells (Fig. 4). No difference in morphology and neurite network magnitude between PC12 cells was observed when grown and differentiated at the three different cell culture conditions. Multiple neurites from the cell body with multiple branches and multiple neurites from the cell body with a minor branch were seen in all cultures (Fig. 4). The fraction of 4-day NGF-differentiated cells, expressed as the percentage of the total cell number, was similar for the three different culture conditions: PS 51±3%, PMMA 52±2%, PMMA-PDMS 50±2%. Cell Cycle Analysis and Sub-G1 DNA Measurement PC12 cells were collected after 24, 48, 72, and 96 hours, both without and with NGF treatment, and subsequently analyzed by flow cytometry (Fig. 5). An example of cell cycle analysis is presented in Fig. 5A: cell cycle results are presented as a histogram, and the estimation of the percentage of cells in each of the intervals sub-G1, G1, S, and G2/M was provided by the Cyflogic program. The overlay histogram (Fig. 5B) presents PC12 cells treated with NGF for 24 hours (blank graph) and 96 hrs (solid, grey graph). In general, for non-differentiated and NGF-differentiated cells (Fig. 5), the cell cycle phase distribution and the number of cells with less DNA content than 2N (sub-G1 population) were similar for the three different culture conditions. Although some differences can be observed between non-differentiated cells after 48 and 72 hrs, after 96 hrs the number of cells in the various phases was very similar for the three culture conditions. These results are in good agreement with the results from the cell viability and metabolic activity tests (Fig. 3). For NGF-differentiated PC12 cells, the number of cells in the G1 and sub-G1 phases increased with time while the number of cells in the G2/M and S phases decreased. The only exception to this trend was that the number of cells cultured on PMMA-PDMS for 48 hours in G2/M was higher than in G2/M after 24 hours (Fig. 5). The number of sub-G1 cells after 96 hours of culturing upon NGF treatment was 5% for PS and 8-9% for PMMA and PMMA-PDMS. Overall, the results are consistent with the notion that PC12 cells stop dividing when differentiating into nerve cells as induced by NGF. To determine the trans-acting effect of PDMS, differentiation was also initiated on PC12 cells seeded on perforated PMMA resting on a PDMS layer. In the next step, we examined whether cell morphology changes between the different polymeric materials could be identified (Fig. 4). Flow cytometry was employed to analyze cell cycle phases in all tested conditions (Fig. 5). The gene expression profile of non-differentiated cells on PS, PMMA and PMMA-PDMS did not differ significantly (data not shown), indicating that PMMA and PDMS had little impact on non-differentiated cells. 
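As a concrete illustration of the analysis pipeline described in the Data Analysis section, the short Python sketch below reproduces its main steps on simulated data: a limma-style moderated t-statistic, Benjamini-Hochberg FDR control, selection of more-than-two-fold regulated genes, and a Venn-style overlap count between the PMMA-vs-PS and PMMA-PDMS-vs-PS contrasts. The published analysis was performed in R/Bioconductor (limma, oligo); the fixed shrinkage parameters, the simulated expression values, and the 5% FDR cut-off used here are illustrative assumptions rather than values taken from the study.

# Simplified Python sketch of the limma-style workflow described above.
import numpy as np
from scipy import stats

def moderated_t(group_a, group_b, d0=4.0, s0_sq=0.05):
    """Per-gene moderated t-statistic for two groups of log2 expression values.

    group_a, group_b: arrays of shape (genes, replicates). The residual
    variance s_g^2 is shrunk towards a prior s0_sq with d0 prior degrees of
    freedom, mimicking the moderated t described in the Methods.
    """
    na, nb = group_a.shape[1], group_b.shape[1]
    mean_diff = group_a.mean(axis=1) - group_b.mean(axis=1)        # log2 fold change
    dg = na + nb - 2                                               # residual degrees of freedom
    pooled = ((na - 1) * group_a.var(axis=1, ddof=1) +
              (nb - 1) * group_b.var(axis=1, ddof=1)) / dg         # s_g^2
    s_tilde_sq = (d0 * s0_sq + dg * pooled) / (d0 + dg)            # shrunken variance
    t = mean_diff / np.sqrt(s_tilde_sq * (1.0 / na + 1.0 / nb))
    return mean_diff, t, d0 + dg                                   # df of the moderated t

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (false discovery rate)."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]             # enforce monotonicity
    adjusted = np.empty_like(ranked)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

# Simulated data: 1000 genes, two arrays per condition, contrasts PMMA vs PS
# and PMMA-PDMS vs PS for differentiated cells (values are placeholders).
rng = np.random.default_rng(0)
ps = rng.normal(8, 0.3, size=(1000, 2))
pmma = ps + rng.normal(0, 0.3, size=(1000, 2))
pmma_pdms = ps + rng.normal(0, 0.3, size=(1000, 2))
pmma_pdms[:50] += 2.0                                              # spike-in regulated genes

results = {}
for name, grp in [("PMMA_vs_PS", pmma), ("PMMA-PDMS_vs_PS", pmma_pdms)]:
    lfc, t, df = moderated_t(grp, ps)
    p = 2 * stats.t.sf(np.abs(t), df)
    q = bh_fdr(p)
    results[name] = set(np.where((np.abs(lfc) > 1.0) & (q < 0.05))[0])  # >2-fold and FDR<5%
    print(name, "regulated genes:", len(results[name]))

# Venn-style overlap between the two contrasts
print("shared:", len(results["PMMA_vs_PS"] & results["PMMA-PDMS_vs_PS"]))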
The gene expression was significantly different before and after differentiation into neuronal-like cells at all three cell culture conditions. This correlates with the changes in morphology and the inhibition of cell division (Fig. 4 and 5). The top 10 similarly regulated genes during differentiation at all cell culture conditions (PS, PMMA, PMMA-PDMS) were associated with neuronal cell development (Fig. 6 and Tab. 1) and correlated well with the observed morphological changes (Fig. 4). This indicated that the resulting neuronal-like cells were comparable at all three cell culture conditions. Gene expression of the differentiated cells was then compared between the materials. The result of this analysis showed dramatic differences in gene expression in nerve cells grown at different cell culture conditions (Fig. 7): 6 genes were up-regulated and 35 were down-regulated in nerve cells grown on PMMA as compared with nerve cells grown on PS. By contrast, 642 genes were up-regulated and 35 genes were down-regulated in nerve cells grown on PMMA-PDMS compared to nerve cells grown on PS. The regulated genes were involved in neuronal cell development and function (Fig. 7 and Tab. 2), indicating that PMMA, as well as PDMS in combination with PMMA, creates conditions that are not the same as PS. Discussion Despite being widely used in cell culture chips, only a few reports have investigated the effects of PDMS on cells in greater detail. These reports have furthermore suggested that PDMS significantly affects cells by releasing uncured oligomers into the medium [19]. The present study was designed to study PMMA as a cell culture substrate and possible factors released from PDMS (Fig. 1). There are abundant examples in the literature showing that PDMS-based chips can support cell growth and survival, suggesting that the impact of released oligomers is small. These results are, however, not necessarily contradictory. PDMS effects might be sufficiently subtle not to affect major cell functions, such as proliferation, apoptosis and cell morphology (Fig. 3, 4, 5), while still having a significant impact on the molecular machinery of cells (Fig. 6). As noted before, cell growth can be unaffected while large changes in gene expression can still be observed [28]. PMMA is an alternative material for producing microfluidic chips, which can sometimes be biocompatible and sometimes not [26,28,41]; as for PDMS, the compatibility is apparently dependent on cellular context. We have cultured endothelial cells (HUVEC), epithelial cells (HeLa) [26,28], fibroblasts, mesenchymal stem cells, embryonic stem cells, and adipocyte-derived stem cells on PMMA. A corresponding long list of cells grown in PDMS cell culture chips is present in the literature. However, there is a lack of information on what PDMS (and PMMA) do at the molecular level. Molecular similarities and dissimilarities are context dependent, as demonstrated here. There were no molecular differences in cells non-differentiated on PMMA and PS, respectively (and also PMMA-PDMS vs. PS), but the corresponding neuronal-like cell cultures showed large differences in gene expression (Fig. 6 and data not shown), which indicates that factors released from PDMS might have little effect on proliferation in general while still having a large impact on other cell processes. The cell cycle analysis revealed that the number of cells in S phase and G2 phase was decreasing for all tested cell culture conditions during the differentiation process, indicating that more and more cells exit the cell cycle to enter the differentiation process. One exception was however observed. 
The number of cells in the G2/M phase for the PMMA-PDMS material after 48 hours of NGF treatment was higher than after the 24-hour period (Fig. 5). Even if the G2/M phase is as low at later time points for PMMA-PDMS as for the other cell culture conditions, it is possible that differentiation on PMMA-PDMS is slower than on PMMA alone and on PS, which might explain the observed differences in gene expression (Fig. 6 and Tab. 2). PC12 cells represent a well-established in vitro model to examine neuronal processes, assessing the viability, growth and proliferation, as well as the differentiation process from a dividing cell into a sympathetic-like neuron cell. According to previous results, maximum induction of PC12 differentiation upon β-NGF treatment occurs after 6 to 8 days [42,43], where 60-100% of PC12 cells in a culture differentiate into neurite-protruding cells, indicating a differentiated status [42,43]. At day 4 after β-NGF induction, approximately 50% of the cells in the culture had neurite protrusions (Fig. 4). Since maximum neurite length is dependent on time [42,43], it is plausible that some cells have differentiated into an early stage of a nerve cell, but not yet extended any significant neurites. Support for this notion is that the cell cycle analysis showed that the majority of cells had left the cell cycle 4 days after β-NGF induction (Fig. 5), indicating that more cells might be on the way to being differentiated than visualized by the morphological examination (Fig. 4). Overall, the results of the PC12 cell growth, proliferation and differentiation analysis at the three different cell culture conditions seem to be in general agreement (Fig. 4) with what has previously been reported, i.e. that NGF promotes neuronal differentiation associated with neurite extensions [44,45]. Table 1 lists the gene ontology processes and pathways associated with the similarly expressed genes at all tested conditions. Neuropeptide Y (NPY), synapsin II (SYN2), corticotrophin-releasing hormone (CRH), Eph receptor A2 (EPHA2), and neuronal guanine nucleotide exchange factor (Ngef) might be considered as universal markers for PC12 cell differentiation towards neurons by NGF on PS, PMMA and PMMA-PDMS in the tested configurations (Tab. 2). Three genes (NPY, SYN2, CRH) are involved in the neurological system process, while EPHA2 and Ngef are engaged in the axon guidance process. This is supported by a number of other studies, indicating that NPY mRNA levels increase during nerve growth factor treatment of PC12 cells [31,46,47]. Synapsins were proposed as optimal candidates to modulate neurite outgrowth in the first stages of neuronal development, owing to their ability to modulate polymerization and assembly of actin [48]. Recently, it has been demonstrated that corticotrophin-releasing hormone facilitates the outgrowth of axons in spinal neurons [49]. As reported by Sahin and colleagues [50], neuronal guanine nucleotide exchange factor plays an important role in Eph-mediated axon guidance by promoting outgrowth in the absence of ephrins and retraction in the presence of ephrins. In addition, our studies revealed that two types of matrix metallopeptidases (MMPs), MMP3 and MMP10, are involved in NGF-differentiation of PC12 cells on the tested materials (Fig. 6). It has been reported that MMPs are expressed during neuronal development and are up-regulated in nervous system diseases [51]. The reduction in stable focal adhesion points caused by MMPs leads to extracellular matrix degradation and subsequent cell migration [52]. 
Previously, other gene expression profiling studies have been conducted in PC12 cells [53,54]. In the first report, a longer treatment (9 days) with NGF was applied. To identify NGF-regulated genes, serial analysis of gene expression (SAGE) technology was utilized [53,54]. These studies and the present study could identify that the S100 calcium-binding protein A4 (S100a4) and stathmin-like 4 (stmn4) genes were regulated during differentiation. In the second study [54], PC12 cells were cultivated for 4 days in DMEM containing 1% horse serum in the presence or absence of 100 ng/ml NGF or 100 ng/ml IGF-1 (insulin-like growth factor-1). Affymetrix Rat RG_U34A GeneChips, which detect mRNAs for approximately 8,800 known genes and ESTs, were employed to assess the difference between NGF (or IGF-1) treated and non-treated cells. Both NGF and IGF-1 are potent trophic factors, which means that they can maintain cell viability as a sole factor, but IGF-1 does not support PC12 cell differentiation. Therefore, genes differentially regulated by IGF-1 were removed from the set of NGF-induced genes, and a list of 66 genes with a two-fold change between non-treated and treated cells was obtained. Three genes, namely synapsin II, GAP-43, and presenilin-2, were considered neural-specific. Those genes were also found in the present study. It should be noted that tyrosine hydroxylase, which is one of the genes required for the production of dopamine, was found to be equally expressed in non-differentiated as well as NGF-differentiated PC12 cells. This confirms previous findings analysing gene expression [54]. Undifferentiated PC12 cells have also been shown to release dopamine without being treated with NGF [55,56]. Biocompatibility investigations of microfluidic systems can be divided into two approaches with increasing complexity. The first approach is investigating the biocompatibility of a material. Such investigations are usually performed in batch cultures and the reference point is usually a commercial cell culture flask [6]. The second approach is to test the whole microfluidic system and compare the results to batch cultures in cell culture flasks. Microfluidic cell culturing can have multiple additional effects on the cells besides effects caused by the material of the system. The surface-to-volume ratio of a microfluidic system is typically large compared to batch cultures, which increases the effects of the material with respect to adsorption and absorption. Furthermore, cell cultures in microfluidic chips are usually continuously perfused, resulting in effective removal of waste products from the cells, removal of possibly cytotoxic compounds released from materials like PDMS, but also removal of autocrine and paracrine factors essential for the cells. It is therefore very difficult to isolate the effects of a material when investigating the effects of complete microfluidic systems. In addition, many of the powerful analysis tools such as flow cytometry and gene expression profiling require a relatively large number of cells, which can be difficult to obtain from microfluidic systems. In conclusion, the results show that a piece of PDMS underneath a perforated PMMA plate alone can lead to significant variations in gene expression profiles compared to PMMA and PS (Fig. 6). Due to the extensive use of PDMS and its reported negative effects on cells, it is highly important to improve the understanding of how PDMS affects cells. 
Moreover, detecting additional NGF-responsive genes provides new knowledge about how trophic factors perform their functions [53].
2016-04-30T18:35:23.062Z
2013-01-03T00:00:00.000
{ "year": 2013, "sha1": "ce3144a1ad3f8b0f996abc4221ce08421b7b1ca9", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0053107&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ce3144a1ad3f8b0f996abc4221ce08421b7b1ca9", "s2fieldsofstudy": [ "Biology", "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
216553865
pes2o/s2orc
v3-fos-license
Evolution of safety factor profiles in sawteeth Two different definitions of the safety factor are applied to investigate the evolution of the safety factor profile during normal sawteeth, the stationary state, and the incomplete reconnection. It is found that the safety factor profiles from the old definition are sometimes inconsistent with the Poincare plots of the magnetic field during sawteeth. The old safety factor always indicates that the safety factor around the magnetic axis is flattened and equal to 1.0 with the development of the kink instability. However, the Poincare plots of the magnetic field lines indicate that the topology of the magnetic field around the magnetic axis has not been changed. To solve the inconsistency, we propose a new definition of the safety factor, in which the poloidal angle relative to the new twisted magnetic axis is used instead of the poloidal angle relative to the original axis. With the new definition, the safety factor profiles are consistent with the Poincare plots of the magnetic field. We also find that the safety factor profiles obtained from the two different q definitions are significantly different. With the new q definition, the safety factor at the magnetic axis q0 remains unchanged during almost the entire period of a sawtooth and jumps up to 1.0 near the end during normal sawteeth; in the non-axisymmetric equilibrium, q0 is still far below 1.0; and q0 retains its initial value throughout the incomplete reconnection. Introduction Sawteeth are common phenomena in magnetically confined fusion devices whose central safety factor falls below one. [1] During sawteeth, the central plasma pressure periodically crashes after a slow rise. Sawteeth can not only flatten the central plasma temperature but also trigger neo-classical tearing modes on nearby resonant surfaces [2], which results in a significant reduction of energy confinement. Since sawteeth are deleterious for Tokamak operations, many efforts have been made to understand the mechanism of sawteeth. [3][4][5] However, after more than 40 years, two fundamental points of sawteeth are still unknown, i.e., the mechanism of the fast pressure crash and whether the magnetic reconnection is complete or incomplete during the crash. For the first problem, there are several candidates, i.e., the Hall effect, [6] the stochasticity of the magnetic field [7], and pressure-driven instabilities. [8] For the second problem, we are even unable to know whether the incomplete reconnection has actually occurred, since the q profile evolution in different Tokamaks is significantly different. In TFTR [9] and ASDEX-U [10], the safety factor at the magnetic axis q0 remains almost unchanged during sawteeth. However, in other experiments, q0 goes above one after the crash. Therefore, the calculation of q profiles is of great importance for understanding the physical mechanism of sawteeth, especially for incomplete reconnection. The safety factor is defined as q = Δφ/Δθ to reflect the helicity of a magnetic field line, where Δφ and Δθ are the changes of the toroidal and poloidal angles along the magnetic field line, based on the initial untwisted magnetic axis. However, if the magnetic axis is twisted due to kink instabilities, the magnetic field lines not only twist with their own helicity but also have to wind around the twisted magnetic axis. If we still use such a definition, the q profile will be totally misleading and deceptive. 
For example, if the 1/1 kink instability is well developed, the magnetic axis will be twisted, and its helicity is m/n=1/1. We name the old safety factor q_old, where N_{Z=0} denotes the number of times a magnetic field line crosses the Z = 0 plane. Equation (10) indicates that the old definition of the safety factor does not take into account that the magnetic axis is twisted due to the kink instability, and that all magnetic field lines in this region must wind around the twisted magnetic axis. As a result, no matter what the helicity of the magnetic field is, the old definition of q always 'proves' that the profile of the old safety factor around the magnetic axis is flattened and equal to 1.0. Note that magnetic field lines in the m/n=1/1 island do not wind around the new magnetic axis, and the safety factor in the island is not affected. From the above discussion, the old safety factor will always give a wrong value in the region near the twisted magnetic axis. That is the reason why the q profiles are sometimes inconsistent with the Poincare plots of magnetic field lines during sawteeth. II. A new method for the safety factor calculation Since the main problem results from the influence of the twisted magnetic axis, the poloidal angle is calculated relative to the twisted magnetic axis, i.e., using the relative position of the magnetic field line with respect to the twisted magnetic axis. Similarly, the new safety factor q_new is defined, where N_{Z=0}^{new} denotes the number of times the magnetic field line crosses the Z = 0 plane relative to the new axis. Since the magnetic field lines themselves are not affected by the twisted magnetic axis, Equations (17) and (23) indicate that, when the magnetic axis is twisted by the kink instabilities, we can obtain the correct safety factor by using the poloidal angle relative to the twisted magnetic axis in the q calculation. It should also be noted that the new safety factor calculation method can be applied in the whole region, not only in the region where the magnetic field lines have to wind around the twisted magnetic axis. III. Simulation results All the simulations in the present paper are carried out with the CLT code. [11] Since the purpose of the simulations is to verify the accuracy of the new safety factor, we do not repeat the details of the CLT code. Similar simulation results could be found in our previous studies (W. Zhang et al. to be published). i. Normal sawteeth Here, q_old is the safety factor with the old poloidal angle that is still defined based on the untwisted magnetic axis, and q_new is the safety factor with the new poloidal angle that is redefined based on the twisted magnetic axis. Figure 3 shows the contour plots of the old (a) and new (b) safety factors at t = 1597 t_A. It should be noted that, even at this moment (t = 1597 t_A), the new safety factor at the magnetic axis remains below 1.0 and only jumps up to 1.0 at the end of the reconnection process. The minimum q_old, which was wrongly regarded as the safety factor at the magnetic axis, gradually rises to 1.0 before the reconnection finishes (i.e., Figure 2b, 2c, 2f, and 2g). Moreover, if one uses q_old(0), which is the old safety factor located at the original axis, the axis safety factor will remain larger than or equal to 1.0 during the sawteeth. The minimum old safety factor along X=0 is also indicated in the corresponding figure. ii. Stationary state Recently, a non-axisymmetric stationary state that is related to sawteeth has been reported in many experiments [12][13][14]. In those papers, the magnetic field and the stream function both have the helicity of m/n=1/1 at the stationary state. 
It has also been reported that the safety factor in the core region becomes flattened and is about 1.0. However, if the new q calculation method is applied, the safety factor profile at the stationary state is not entirely flattened and is still smaller than 1.0. We could illustrate this by carrying out similar simulations, as shown in Figure 9b. The real safety factor at the magnetic axis is 0.9387, which is still below 1.0, as can also be seen from the contour plot of the safety factor (Figure 10(b)). iii. Incomplete reconnection Several studies [15,16] have reported that magnetic reconnection during sawteeth could be incomplete due to plasmoid instabilities. It is interesting and important to calculate the evolution of the safety factor in this case. The parameters used in this subsection are as follows: the plasma beta is β0 ~ 0, with a finite resistivity; the simulation results are shown in Figure 11. The system is unstable to the resistive-kink mode since the initial safety factor at the magnetic axis is 0.9 (Figure 11e). During the development of the m/n=1/1 resistive-kink mode (Figure 11b), the current sheet near the X-point becomes thinner and thinner. When the current sheet thickness decreases below a critical value, a secondary tearing instability is triggered, and plasmoids form near the original X-point (Figure 11c). The secondary islands finally merge and form a large secondary island, which prevents the resistive-kink mode from growing further and finally results in an incomplete reconnection (Figure 11d). As shown in Figure 11e~11f, the profiles of the new and old safety factors are the same except in the region near the magnetic axis. The old safety factor indicates that the safety factor is flattened and becomes equal to 1.0 near the magnetic axis. IV. Summary and discussion The safety factors from two different definitions are adopted to investigate the evolution of the safety factor profile during normal sawteeth, the stationary state, and the incomplete reconnection. We find that the safety factor profiles from the old definition are sometimes inconsistent with the Poincare plots of the magnetic field. The old safety factor always indicates that the safety factor around the magnetic axis is flattened and equal to 1.0 with the development of the kink instability.
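As a self-contained illustration of the difference between the two q definitions discussed in this paper, the following Python sketch traces a field line in a toy model: the magnetic axis is given a helical m/n = 1/1 displacement of amplitude d, circular flux surfaces are assumed around that displaced axis, and the field line winds around it with a prescribed true safety factor. The model field, the displacement amplitude, and the choice q_true = 0.9 are assumptions for illustration only and are not the CLT equilibrium used in the paper; the sketch only demonstrates why accumulating the poloidal angle about the original axis (old definition) is pushed towards 1.0 near the twisted axis, while the poloidal angle about the twisted axis (new definition) recovers the true helicity.

# Toy comparison of the old and new safety-factor definitions (assumed model).
import numpy as np

def trace_q(r, q_true, d=0.3, n_turns=200, steps_per_turn=400):
    """Return (q_old, q_new) for a field line at radius r about the displaced axis."""
    phi = np.linspace(0.0, 2 * np.pi * n_turns, n_turns * steps_per_turn)
    # helically displaced magnetic axis (1/1 helicity)
    ax, ay = d * np.cos(phi), d * np.sin(phi)
    # field-line position: winds about the displaced axis with its own helicity
    x = ax + r * np.cos(phi / q_true)
    y = ay + r * np.sin(phi / q_true)

    def winding(px, py):
        """Total (unwrapped) poloidal angle swept about a given center."""
        theta = np.unwrap(np.arctan2(py, px))
        return theta[-1] - theta[0]

    dphi = phi[-1] - phi[0]
    q_old = dphi / winding(x, y)             # poloidal angle about the original axis (origin)
    q_new = dphi / winding(x - ax, y - ay)   # poloidal angle about the twisted axis
    return q_old, q_new

for r in (0.05, 0.15, 0.45):
    q_old, q_new = trace_q(r, q_true=0.9)
    print(f"r = {r:4.2f}:  q_old = {q_old:5.3f},  q_new = {q_new:5.3f}")
# Field lines inside the region swept by the displaced axis (r < d) are forced
# to encircle the origin once per toroidal turn, so q_old is pushed towards 1.0
# even though the true helicity is 0.9; q_new recovers 0.9 at every radius.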
2020-04-28T01:00:57.873Z
2020-04-25T00:00:00.000
{ "year": 2020, "sha1": "6e9de3ee794151d6c9d57c82f86a5a83d62f633d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6e9de3ee794151d6c9d57c82f86a5a83d62f633d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
231861058
pes2o/s2orc
v3-fos-license
Metastasis-Directed Radiotherapy for Oligometastatic Castration-Resistant Prostate Cancer The treatment effects of metastasis-directed therapy in patients with oligometastatic disease have received much attention. In our case, a 72-year-old man with oligometastatic castration-resistant prostate cancer was referred to our hospital. The patient had undergone radical radiotherapy with a total dose of 76 Gy in 36 fractions for localized prostate cancer nine years prior to the first visit. Positron emission tomography showed a slight increase in accumulation in the para-aortic lymph nodes. The patient received conventional radiotherapy at a total dose of 50 Gy in 25 fractions to the para-aortic region as oligometastasis-directed local therapy. After radiotherapy, his prostate-specific antigen (PSA) level decreased slightly, but it increased again soon after. According to the results of positron emission tomography, the accumulation around the para-aortic lymph nodes had decreased; however, a slight increase in accumulation in the sub/supra-clavicular lymph nodes was observed. He received radiotherapy at a total dose of 50 Gy in 25 fractions to the sub/supra-clavicular region. We confirmed a significant reduction in lesion volume and a downward trend in PSA levels. Metastasis-directed therapy has shown remarkable effectiveness in controlling disease without severe treatment-related adverse events. Metastasis-directed therapy is considered as one of the treatment options in patients who need salvage therapy. Introduction Oligometastasis is a type of metastasis in which only a limited number of metastases are identified in patients with cancer. The concept was first proposed by Hellman and Weichselbaum in 1995 [1]. While the standard treatment for patients with distant metastases in many types of cancer is systemic therapy, oligometastasis can be treated with radical local therapy, including surgery and radiotherapy, to cure the disease. The efficacy of local therapy for liver metastasis from colorectal cancer has long been known. Surgical resection of liver metastasis is an effective treatment for selected colorectal cancer patients. This approach was reported to achieve five-year survival rates of approximately 40% [2]. In recent years, evidence regarding the treatment outcomes of oligometastasis-directed therapy has been accumulating. Stereotactic ablative radiotherapy for comprehensive treatment of oligometastatic cancers (called SABR-COMET trial), an international randomized phase II trial, has suggested useful results for patients with oligometastasis [3]. Ninety-nine patients with well-controlled primary tumors and oligometastatic lesions were randomized in a 1:2 ratio between the palliative care arm and the stereotactic ablative radiotherapy (SABR) arm. The SABR arm was shown to significantly prolong overall survival. The five-year overall survival rate in the palliative care arm was 17.7%, while that in the SABR arm was 42.3%. Another multi-institutional, phase II randomized study reported the advantage of metastatic local therapy in patients with stage IV non-small-cell lung cancer with three or fewer metastases after front-line systemic therapy [4]. The median progression-free survival improved from 5.6 months in the maintenance therapy or observation group to 14.2 months in the local therapy group. Moreover, the median overall survival also improved from 17.0 months to 41.2 months, without severe treatment-related toxicities. 
This trial was closed early due to the significant efficacy benefit observed in the metastatic local therapy arm. Further evidence is needed regarding oligometastatic treatment focusing on specific primary cancer types. Herein, we present the case of a patient with oligometastatic castration-resistant prostate cancer treated with metastasis-directed radiotherapy. Case Presentation A 72-year-old man was referred to our hospital due to an elevated prostate-specific antigen (PSA) level during androgen deprivation therapy (ADT). Nine years prior to admission, he had undergone radiotherapy with a total dose of 76 Gy in 36 fractions for localized prostate cancer (PSA level, 25.55 ng/mL; Gleason score = 5+4; and cT1cN0M0 stage). The treatment was successful, and his PSA level decreased to 0.02. However, two years after treatment, he experienced biochemical failure, and his PSA level increased. Therefore, he was started on ADT comprising bicalutamide and leuprorelin. Subsequently, his PSA level dropped to 0.024 ng/mL a year prior to the first visit of our department and then rose again to 2.5 ng/mL; thus, he was referred to our department. His serum testosterone level was found to be 38.8 ng/dL, and he was diagnosed with castration-resistant prostate cancer (CRPC). Positron emission tomography (PET) showed a slight increase in accumulation in the para-aortic lymph nodes (fluorodeoxyglucose maximum standardized uptake value [SUV max]: 3.1) (Figure 1). FIGURE 1: Positron emission tomography combined with computed tomography Positron emission tomography combined with computed tomography showing slightly higher fluorodeoxyglucose uptake in the para-aortic lymph nodes In our case, SABR was not suitable due to extensive para-aortic lymph node enlargement. We decided to administer conventional radiotherapy at a total dose of 50 Gy in 25 fractions to the para-aortic lymph nodes as metastasis-directed therapy ( Figure 2). FIGURE 2: Radiotherapy planning for the para-aortic lymph nodes Distribution of the conventional radiotherapy dose for the para-aortic lymph nodes. The isodose lines are shown in the upper left corner. The 95% isodose of the prescribed dose is indicated as the red-colored area. Only grade 1 nausea developed as a radiation-related adverse event. No hematological adverse events were observed. After radiotherapy, his PSA level decreased slightly, but it increased again soon after. On PET, the accumulation around the para-aortic lymph nodes was obscured; however, there was a slight increase in accumulation in the sub/supra-clavicular lymph nodes (SUV max: 3.0-3.3) (Figure 3). FIGURE 3: Positron emission tomography combined with computed tomography Positron emission tomography combined with computed tomography showing slightly higher fluorodeoxyglucose uptake in the sub/supra-clavicular lymph nodes The patient received radiotherapy at a total dose of 50 Gy in 25 fractions to the sub/supra-clavicular region ( Figure 4). FIGURE 4: Radiotherapy planning for the sub/supra-clavicular lymph nodes Distribution of the conventional radiotherapy dose for the sub/supra-clavicular lymph nodes. The isodose lines are shown in the upper left corner. The 95% isodose of the prescribed dose is indicated as the redcolored area. Only grade 1 radiation dermatitis developed as an adverse event, and he had no hematological adverse events. We then confirmed volume reduction of the lesion by magnetic resonance imaging ( Figure 5) and a downward trend in the PSA level ( Figure 6). 
No significant abnormal accumulation was found in whole-body PET images after five months of the radiotherapy. FIGURE 5: Magnetic resonance imaging Magnetic resonance imaging showing a significant reduction in the size of the sub/supra-clavicular lymph nodes A) before radiotherapy and B) after radiotherapy FIGURE 6: Trend of prostate-specific antigen (PSA) levels Trend of prostate-specific antigen (PSA) levels from the first visit to our institution. After the first radiotherapy (RT1), the PSA level decreases slightly but increases soon after. After the second radiotherapy (RT2), the serum PSA level shows a downward trend. Discussion Many patients with recurring or advanced prostate cancer initially respond to ADT, and most patients show progression to CRPC after a few years [5]. Systemic treatment options for CRPC are expanding, including new androgen-targeted agents such as enzalutamide, apalutamide, darolutamide, and abiraterone and a next-generation taxane, cabazitaxel [6]. The efficacy of metastasis-directed therapy has been revealed for oligometastatic hormone-sensitive prostate cancer. In a phase II randomized clinical trial of observation versus stereotactic ablative radiation for oligometastatic prostate cancer, known as the ORIOLE trial, the patients were randomized in a 2:1 ratio to either the SABR treatment or surveillance group [7]. A total of 54 study participants were included: seven of 36 patients (19%) in the SABR group showed disease progression within six months, compared to 11 of 18 (61%) in the surveillance group. Another randomized phase II trial assessed the benefit of metastasisdirected therapy in terms of the initiation of ADT in 62 patients with oligorecurrent prostate cancer. The median ADT-free survival was 21 months in the ADT arm and 13 months in the surveillance arm [8]. Regarding oligo-progressive CRPC, a recent retrospective analysis reported that metastasis-directed therapy prolonged the time to PSA failure compared to systemic therapy alone [9]. There is no evidence-based standardized sequential therapy for CRPC. It is important to determine individual treatment methods based on an understanding of each patient's characteristics, such as performance status, disease symptoms, presence of organ metastases, and the effect of prior therapy. Precision medicine, which is the practice of applying the most appropriate treatment method to individual patient information, might help this decision. For example, in CRPC patients with gene mutations in the homologous recombination repair pathway, the poly (adenosine diphosphate-ribose)-polymerase-1 inhibitor olaparib significantly prolonged progressionfree survival compared to enzalutamide or abiraterone plus prednisone [10]. The hazard ratio for disease progression or death was 0.34 (95% confidence interval: 0.25-0.47, p < 0.001) in the olaparib arm. Moreover, the overall survival rate was also higher in the olaparib arm at the time of analysis, although this difference was not statistically significant. Patients with oligometastatic prostate cancer include those who are clinically inhomogeneous. Evaluating and treating patients with oligometastatic prostate cancer is extremely difficult. Owing to the lack of knowledge regarding disease progression from the oligometastatic state to an extensive metastatic state, treatment decisions should be made carefully.
2021-02-10T06:40:32.413Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "995e23278292400cd4d89cb051417a188ecdb475", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/49747-metastasis-directed-radiotherapy-for-oligometastatic-castration-resistant-prostate-cancer.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "995e23278292400cd4d89cb051417a188ecdb475", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261084512
pes2o/s2orc
v3-fos-license
Improving Cancer Targeting: A Study on the Effect of Dual-Ligand Density on Targeting of Cells Having Differential Expression of Target Biomarkers Silica nanoparticles with hyaluronic acid (HA) and folic acid (FA) were developed to study dual-ligand targeting of CD44 and folate receptors, respectively, in colon cancer. Characterization of particles with dynamic light scattering showed them to have hydrodynamic diameters of 147–271 nm with moderate polydispersity index (PDI) values. Surface modification of the particles was achieved by simultaneous reaction with HA and FA, and results showed that ligand density on the surface increased with increasing concentrations in the reaction mixture. The nanoparticles showed minimal to no cytotoxicity, with all formulations showing ≥ 90% cell viability at concentrations up to 100 µg/mL. Based on flow cytometry results, the SW480 cell line was positive for both receptors, the WI38 cell line was positive for the CD44 receptor, and Caco2 was positive for the folate receptor. Cellular targeting studies demonstrated the potential of the targeted nanoparticles as promising candidates for delivery of therapeutic agents. The highest cellular targeting was achieved with particles synthesized using a folate:surface amine (F:A) ratio of 9 for SW480 and Caco2 cells and at F:A = 0 for WI38 cells. The highest selectivity was achieved at F:A = 9 for both SW480:WI38 and SW480:Caco2 cells. Based on HA conjugation, the highest cellular targeting was achieved at H:A = 0.5–0.75 for SW480 cells, at H:A = 0.75 for WI38 cells and at H:A = 0.5 for Caco2 cells. The highest selectivity was achieved at H:A = 0 for both SW480:WI38 and SW480:Caco2 cells. These results demonstrated that the optimum ligand density on the nanoparticle for targeting is dependent on the levels of biomarker expression on the target cells. Ongoing studies will evaluate the therapeutic efficacy of these targeted nanoparticles using in vitro and in vivo cancer models. Introduction Nanoparticles have received significant attention as targeted drug carriers for cancer due to their large surface area-to-volume ratio, ability to encapsulate therapeutics, and easy surface functionalization [1,2]. Additionally, some unique properties of tumors have been exploited to enhance targeting of nanoparticles for therapeutic or diagnostic applications [3]. Furthermore, ligand-conjugated nanoparticles have been shown to have a high affinity towards cancer cells with overexpressed biomarkers [2,4,5]. This specific targeting can increase the concentration of drugs at the cancer site, improving treatment efficacy and reducing non-specific toxicity [1,6]. Compared to single-ligand targeting, the use of dual- and four-ligands was shown to significantly improve targeting of cancer metastasis in a 4T1 mouse model of breast cancer [7]. 
In the ideal case, the targeted biomarker would be exclusively and homogenously expressed by only and all cancer cells. However, targeted biomarkers are often also expressed on normal cells, and cancer cells themselves may show broad expression levels, leading to undesired and/or suboptimal targeting [8]. One approach to improve differential targeting of cancer cells, within the complex mixtures of cells found in the body, is to develop nanoparticles that target multiple biomarkers [9]. In addition to selection of the multiple targeting ligands with which to decorate the particle surface, the surface density of these ligands has also been shown to be important for cell targeting [10,11]. Other works have shown that ligand conformation and surface density are important factors in determining the receptor-binding affinity of the particles, which directly impacts targeting efficiency [12,13]. A too-high ligand density can promote surface adsorption of proteins, which can lead to nanoparticle clearance by the immune system [14]. Optimization of ligand density is, therefore, necessary to improve cancer cell targeting and limit non-specific interactions [15].
While other studies have compared dual- versus single-targeted nanoparticles, there is a significant gap in knowledge with regards to the effect of dual-ligand density and its optimization for targeting cells with variable biomarker expression. For example, Qhattal et al. presented that, compared to a non-targeted control, particles with a low ligand-grafting density of a single ligand showed no increase in targeting, but targeting increased significantly for particles with a high grafting density [16]. Another study by Moradi et al. showed increasing cellular targeting with higher surface ligand density up to a critical value above which it plateaued [17]. It has also been shown that dual ligands on the particle surface increase cellular targeting due to dual receptor-mediated endocytosis [18]. The goal of our study was to evaluate the impact of dual-ligand density on targeting of nanoparticles to cells with varying levels of biomarker expression. To achieve this goal, we developed silica nanoparticles conjugated with hyaluronic acid (HA) and folic acid (FA), shown in Figure 1, to target the Cluster of Differentiation 44 (CD44) and folate receptors, respectively, using a colon cancer model. As far as we are aware, this is the first study evaluating this dual-targeted system for targeting of colon cancer cells. 
Silica nanoparticles were used as a model because of their biocompatibility, low toxicity, systemic stability, and relatively simple and low-cost preparation [19][20][21]. In addition, their controllable particle size, porosity, and crystallinity make them suitable for various biomedical applications [19]. Silica nanoparticles are 'Generally Recognized As Safe' (GRAS) by the United States Food and Drug Administration (US FDA) [22,23]. In this work, we developed silica nanoparticles conjugated with HA and FA to target colon cancer. In vitro targeting studies were conducted using human colon cancer cells (SW480 and Caco2 cell lines) and a normal lung cell line (WI38), which were selected for their varying expression of CD44 and folate receptors. Nanoparticle Synthesis and Characterization Fluorescently labeled silica nanoparticles were synthesized using the water-in-oil microemulsion method, which is well-suited for formation of functional nanoparticles [43]. In our synthesis, the (3-aminopropyl)trimethoxysilane (APTMS):tetramethyl orthosilicate (TMOS) mole fraction was fixed at 0.12 and the FITC concentration at 0.01 M, as higher amounts resulted in aggregation and fluorescence quenching, respectively. Figure 2 shows representative fluorescence microscopy and TEM images of the core silica nanoparticles. These particles showed excellent fluorescence and spherical morphology with relatively uniform size and served as the base for synthesis of the targeted nanoparticles. 
Single- and dual-targeted nanoparticles were synthesized by modifying the surface of core nanoparticles with HA and FA. The ligand density on the particle surface was varied by adjusting the ligand-to-surface amine mole ratio during the chemical conjugation step. Four different hyaluronic acid-to-surface amine molar ratios (H:A), ranging from 0.5-1.25, and four folic acid-to-surface amine molar ratios (F:A), from 3-9, were selected for this study based on initial data (not shown) which demonstrated that they yielded nanoparticles with varying surface ligand densities. It was also observed that use of higher ratios led to nanoparticles that were prone to aggregate. Molar ratios of HA:EDC, HA:NHS, FA:EDC and FA:NHS were fixed at 1:50, 1:50, 1:2, and 1:2, respectively. Table 1 shows the average hydrodynamic diameter and polydispersity index (PDI) of targeted nanoparticles produced using the various H:A and F:A molar ratios. All data are reported as mean ± SD (n = 3). Nanoparticle sizes ranged from 150-250 nm with PDI in the range from 0.07-0.28. PDI provides an indication of the broadness of the particle size distribution, with values less than 0.1 generally considered to be monodisperse. 
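For readers who want to see how such conjugation conditions are planned, the sketch below shows one way the quoted molar ratios could translate into reagent amounts for a single batch. The batch mass, the assumed surface amine loading, the choice of expressing HA per disaccharide repeat unit, and the approximate molecular weights used are hypothetical placeholders, not values reported in this study.

# Hypothetical conjugation planner based on the fixed ratios quoted above.
MW = {"FA": 441.4, "EDC": 191.7, "NHS": 115.1, "HA_repeat": 401.3}  # g/mol (HA per disaccharide repeat, approximate)

def conjugation_amounts(batch_mg, amine_umol_per_mg, h_to_a, f_to_a):
    """Return micromoles (and mg) of each reagent for given ligand:amine ratios."""
    amine_umol = batch_mg * amine_umol_per_mg          # total surface amines in the batch (assumed loading)
    ha_umol = h_to_a * amine_umol                      # HA expressed per disaccharide repeat unit (assumption)
    fa_umol = f_to_a * amine_umol
    plan = {
        "HA (umol repeat units)": ha_umol,
        "HA-EDC (umol)": 50 * ha_umol,                 # HA:EDC fixed at 1:50
        "HA-NHS (umol)": 50 * ha_umol,                 # HA:NHS fixed at 1:50
        "FA (umol)": fa_umol,
        "FA-EDC (umol)": 2 * fa_umol,                  # FA:EDC fixed at 1:2
        "FA-NHS (umol)": 2 * fa_umol,                  # FA:NHS fixed at 1:2
    }
    plan["FA (mg)"] = fa_umol * MW["FA"] / 1000.0
    plan["HA (mg, as repeat units)"] = ha_umol * MW["HA_repeat"] / 1000.0
    return plan

# Example: a 10 mg particle batch carrying an assumed 0.1 umol amine per mg,
# targeted at H:A = 0.75 and F:A = 9.
for k, v in conjugation_amounts(10.0, 0.1, h_to_a=0.75, f_to_a=9.0).items():
    print(f"{k:28s} {v:8.2f}")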
Figure 3a shows the zeta potential of nanoparticles as a function of F:A molar ratio, with each line representing a different H:A molar ratio. The average zeta potential of non-targeted, bare nanoparticles was −15 mV due to the presence of negatively charged silanol groups on the nanoparticle surface [44]. As expected, the zeta potential shifted towards the positive direction following modification with folic acid (orange line) due to the presence of protonated amino acid groups of folate [45]. Addition of hyaluronic acid produced a negative shift in zeta potential due to the presence of negatively charged, deprotonated carboxyl groups [46]. Zeta potential, therefore, provided some confirmation of the successful linkage of HA and FA to the nanoparticle surface. The amount of HA conjugated on the nanoparticles, as measured by the hexadecyltrimethylammonium bromide (CTAB) turbidimetric method [16], is shown in Figure 3b. CTAB is a cationic surfactant that forms an insoluble complex with polyanionic hyaluronic acid, which shows light absorption proportional to the HA concentration. It was observed that at higher F:A molar ratios (i.e., ≥7), there was no significant change in surface coverage by HA (p = 0.46 and 0.18 for F:A = 7 and 9, respectively) as H:A was increased. At lower F:A molar ratios (i.e., ≤5), however, an increase in H:A molar ratio from 0.5 to 1.25 significantly increased HA conjugation on the nanoparticle surface (p = 0.037, 0.028, and 0.00001 for F:A = 0, 3, and 5, respectively). With increasing F:A molar ratio, less HA was conjugated on the nanoparticles due to ineffective competition for reaction sites on the surface. Figure 3c shows the amount of FA conjugated on the nanoparticles as measured by spectrophotometry. As seen from the figure, at higher H:A molar ratios (i.e., ≥0.75), an increase in F:A molar ratio from 3 to 7 did not significantly change the FA surface coverage (p = 0.20, 0.36, and 0.13 for H:A = 0.75, 1, and 1.25, respectively). However, a further increase in F:A molar ratio to 9 significantly increased the surface coverage (p = 0.0009, 0.027, and 0.034 for H:A = 0.75, 1, and 1.25, respectively). At lower H:A molar ratios (i.e., ≤0.5), an increase in F:A molar ratio from 0 to 9 significantly increased FA conjugation on the nanoparticle surface (p < 0.0001 and 0.0095 for H:A = 0 and 0.5, respectively). At F:A = 9, less FA was conjugated with increasing H:A molar ratio from 0 to 1.25, which again was likely due to competition for the reaction sites.
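The specific statistical test behind these p-values is not stated in this excerpt. As a minimal sketch, a pairwise comparison of surface coverage between two formulation conditions could be run as an unpaired two-sample t-test on the triplicate measurements; the coverage values below are hypothetical.

```python
# Minimal sketch of a pairwise comparison of ligand surface coverage between two
# formulation conditions. The test choice (unpaired two-sample t-test) and the
# coverage values are assumptions for illustration only.
from scipy import stats

coverage_low_HA = [1.8, 2.0, 1.9]    # hypothetical triplicate coverage, H:A = 0.5
coverage_high_HA = [2.6, 2.8, 2.7]   # hypothetical triplicate coverage, H:A = 1.25

t_stat, p_value = stats.ttest_ind(coverage_low_HA, coverage_high_HA)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```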
Protein-nanoparticle interactions are important because the resulting protein corona creates a new 'biological identity' that can alter their destination in the body [31]. As seen in Figure 3d, there was no change in protein adsorption with increasing F:A molar ratio. With an increase in H:A molar ratio, however, there was a slight reduction in BSA protein adsorption, possibly due to electrostatic repulsion between the nanoparticles and the negatively charged BSA protein [47]. Overall, ligand conjugation had negligible impact on protein adsorption, which could help limit non-specific targeting of non-target cells.

In Vitro Cellular Studies
Receptor Expression in Cells
Prior to conducting cell targeting studies, the receptor expression profiles for the cells were determined using flow cytometry. In total, we characterized four colorectal cancer cell lines and three normal cell lines, from which SW480, WI38, and Caco2 were selected because of their varying receptor expression profiles, as shown in Figure 4. Each individual data point in the plots represents a single cell, with its location within the quadrants indicating relative CD44 and FR expression. Cells in the top-right quadrant are positive for both receptors, bottom-right are positive only for CD44, bottom-left are negative for both receptors, and top-left are positive only for FR. These three cell lines were selected because SW480 was positive for both receptors, WI38 was positive for only the CD44 receptor, and Caco2 cells were positive for only folate receptors. While it would have been ideal to have a group that did not express these receptors, none of the cell lines evaluated were negative for both CD44 and FR.
Evaluation of Cellular Targeting
Cellular targeting studies were conducted to evaluate the targeting efficacy of the nanoparticles. Initial competitive inhibition studies were conducted by incubating SW480 cells first with a free HA and FA (50/50 wt.%) mixture at varying concentrations and then adding 0.3 mg/mL of dual-targeted particles for 3 h. As seen in Figure 5, increasing concentrations of the free ligand mixture resulted in increased inhibition of nanoparticle targeting of the cells. Free HA and FA likely bound to CD44 and FR, respectively, and competed with particle-receptor interactions [33,41,48,49]. Decreased targeting was observed up to a concentration of 1 mg/mL, at which it leveled off. These results confirmed that nanoparticle-cellular interactions were based on HA-CD44 receptor and FA-FR interactions.
Figure 6a shows nanoparticle targeting of SW480 cells, which are positive for both receptors. Each data point represents a different particle formulation, with the various lines representing the H:A molar ratio, and the F:A ratio is shown on the x-axis. An increase in F:A molar ratio from 0 to 9 produced a significant increase in cellular targeting of nanoparticles. This increase is likely due to FA-FR mediated interactions because SW480 cells are positive for FR [44,50]. With regards to the H:A molar ratio, an optimum was observed at 0.5 for most F:A ratios. Further increase in ligand density yielded lower cellular targeting, likely due to steric crowding of surface ligands restricting interactions with target receptors [15,51-53].

Nanoparticle targeting of WI38 cells, which are positive only for CD44, is shown in Figure 6b. Each line again represents particles with a specific H:A molar ratio and the x-axis shows the F:A molar ratio. Since WI38 cells do not express the folate receptor, an increase in F:A molar ratio produced particles with reduced cellular targeting. Cellular targeting increased with increasing H:A molar ratio up to a maximum at a ratio of 0.75, beyond which it again decreased. Improved targeting with increasing ligand density is likely due to HA-CD44 mediated endocytosis, while its decrease at higher ligand density could be due to steric crowding [15,51-53].

In Figure 6c, we see the cellular targeting results with Caco2 cells, which are positive only for FR. Increasing the F:A molar ratio produced increased cellular targeting of nanoparticles. There also appeared to be some increase with H:A molar ratio from 0 to 0.5, but it decreased at higher ratios. While the Caco2 cells were highly positive for FR, approximately 10% of the cells showed positivity for CD44, as observed by the few scattered points in the top-right quadrant of Figure 4c. This minor cell population, being positive for FR and CD44, would have exhibited some HA-CD44 mediated targeting by HA-modified particles [54,55].
Selectivity of each nanoparticle formulation was calculated as the ratio of colon cancer cell (SW480) targeting relative to targeting of the other cells tested. Figure 7a shows the selectivity for SW480 cells, which express both receptors, relative to WI38 cells, which express CD44 only. A higher selectivity was observed with increasing F:A molar ratio, likely due to improved interactions with the cancer cells that express FR. As expected, the selectivity was reduced with increasing H:A molar ratio due to the presence of CD44 on both cell types. Figure 7b compares the selectivity relative to Caco2 cells, which primarily express only FR. Nanoparticles with only FA, marked by the orange line, showed higher selectivity with increasing F:A molar ratio. From Figure 6c, it is observed that with only FA (i.e., H:A = 0) the targeting of Caco2 is uniformly low except at the highest ratio, while the targeting of SW480 (Figure 6a) increases, leading to this observed trend. When HA was added in the reaction mixture, however, the selectivity was reduced with increasing H:A and F:A molar ratios. This opposite trend in selectivity could be due to differences in the relative affinity of the two cell types for the targeted nanoparticles. Additionally, the decrease in selectivity with increasing H:A ratio could be a result of the presence of CD44 in approximately 10% of the cells.
Discussion The goal of this project was to study the use of dual ligands for targeting of nanoparticles to cells with varying levels of biomarker expression.To achieve this goal, silica nanoparticles were conjugated with hyaluronic acid (HA) and folic acid (FA) to target CD44 and folate receptors, respectively, in a colon cancer model.Following surface modification, the nanoparticles showed negligible protein adsorption and excellent fluorescence stability to allow for in vitro studies with cell cultures.We hypothesized that dual targeting of the CD44 and folate receptors on colon cancer cells would increase the targeting of nanoparticles, compared to single-targeted particles, and that it would be a function of surface ligand density.We observed that cellular targeting was in fact dependent on ligand type, number of ligands used, ligand densities, and cellular expression of target receptors.Cellular targeting was observed to increase as a function of ligand density up to an optimum, beyond which it decreased.In addition, dual targeted nanoparticles showed higher cellular targeting compared to single targeted nanoparticles for cells that positively expressed both receptors.Using cell lines with different expression of the two biomarkers, we have identified different formulations to maximize the targeting and selectivity of nanoparticles for each cell line.Based on FA conjugation, the highest cellular targeting was achieved at F:A = 9 for SW480 and Caco2 cells and at F:A = 0 for WI38 cell.The highest selectivity was achieved at F:A = 9 for both SW480:WI38 and SW480:Caco2 cells.Based on HA conjugation, the highest cellular targeting was achieved at H:A = 0.75 for SW480 and WI38 cells and at H:A = 0.5 for Caco2 cells.The highest selectivity was achieved at H:A = 0 for both SW480:WI38 and SW480:Caco2 cells.Although the single, FR-targeted nanoparticles showed highest selectivity in both cases, the Caco2 cells were shown to also have some expression of the CD44 receptor, which complicated analysis.Additionally, the extent of cellular uptake and any associated toxicity must be taken into account when designing a drug delivery vehicle for therapeutic applications.Moving forward, the targeted nanoparticles developed in this project can be used to encapsulate a therapeutic drug and in vivo and ex vivo experiments used to evaluate the overall anti-tumor efficacy.Successful application of insights gained in this work could lead to improved therapies for colon cancer and other diseases. 
Nanoparticle synthesis: Silica nanoparticles were synthesized using the water-in-oil microemulsion method. Initially, a mixture of cyclohexane, n-hexanol, Triton X-100, and DI water was stirred vigorously at room temperature to form a microemulsion. After 15 min, FITC dye was added, followed by the addition of TMOS and APTMS after 5 min. NH4OH was added after 30 min of vigorous stirring. The reaction was carried on for 24 h at room temperature. Ethanol was added to break the stability of the microemulsion and the particles were recovered by centrifugation (14,800 rpm for 30 min). Particles were washed three times with ethanol and one time with water to remove any unreacted reagents. Bare silica nanoparticles were stored in DI water at 4 °C prior to use.
Dual-targeted silica nanoparticles with varying densities of the HA and FA ligands on the nanoparticle surface were produced using the well-established EDC-NHS chemistry. In one round-bottom flask, hyaluronic acid was activated by dissolving it in deionized water, followed by addition of EDC and NHS solutions prepared separately in deionized water. The reaction mixture was stirred overnight. In another round-bottom flask, folic acid was activated by dissolving it in DMSO, followed by addition of EDC and NHS solutions prepared separately in DMSO. The reaction mixture was stirred overnight in the dark. The activated solutions of hyaluronic acid and folic acid were mixed together and stirred. FITC-doped, amine-conjugated silica nanoparticles were suspended in deionized water and then added to the mixture of activated solutions of hyaluronic acid and folic acid, and the reaction mixture was stirred for 24 h in the dark. Following functionalization, the particles were washed four times with ethanol to remove the unreacted hyaluronic acid and folic acid. To the best of our knowledge, this project presents the first simultaneous approach of HA and FA conjugation on a nanoparticle surface. Particles were immediately used for studies.

Size and Zeta Potential: Nanoparticle size and zeta potential were measured by dynamic light scattering (DLS) using a Malvern Zetasizer Nano ZS (Malvern, UK). The size was measured with backscatter detection at θ = 173° and zeta potential was measured using the Smoluchowski model.

Quantification of Amines: The primary amine content of silica nanoparticles was measured using a quantitative fluorescamine assay. Fluorescamine (4-phenylspiro[furan-2(3H),1'-phthalan]-3,3'-dione) reacts with primary amines to form fluorescent pyrrolinone moieties. Briefly, 150 µL of nanoparticle suspension was placed into a 96-well plate and 50 µL of 3 mg/mL fluorescamine, dissolved in DMSO, was added to each well and allowed to react for 10 min in the dark. Fluorescence of each sample was measured at 400 nm excitation and 460 nm emission with a FlexStation 3 microplate reader (Molecular Devices, Sunnyvale, CA, USA). Ethanolamine of known concentrations was used as a standard.

Quantification of Hyaluronic Acid: The hyaluronic acid content of the silica nanoparticles was quantified indirectly using a hexadecyltrimethylammonium bromide (CTAB) turbidimetric method [16]. CTAB is a cationic surfactant that forms an insoluble complex with polyanionic hyaluronic acid. Formation of this complex leads to increased light absorption at 570 nm in a manner that is correlated with the hyaluronic acid concentration. Briefly, 50 µL of supernatant samples after each centrifugation was added in triplicate to a 96-well plate. The samples were incubated with 50 µL of 0.2 M sodium acetate buffer (pH 5.5) at 37 °C for 10 min and then 100 µL of 10 mM CTAB solution was added to the wells. The absorbance of the precipitated complex was read within 10 min against the control using a FlexStation 3 microplate reader (Molecular Devices, Sunnyvale, CA, USA). The amount of conjugated hyaluronic acid was measured by subtracting the total amount of hyaluronic acid in the supernatant solutions from the initial amount added to the reaction mixture. Hyaluronic acid of known concentrations was used as a standard.
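A minimal sketch of the indirect HA quantification described above: fit a standard curve of absorbance versus HA concentration, estimate the unreacted HA remaining in the supernatants, and take the conjugated amount as the difference from the amount initially added. The absorbance values, volumes, and initial amount below are hypothetical examples.

```python
# Indirect quantification of conjugated HA from CTAB turbidimetry (sketch).
# Standard-curve and supernatant values are hypothetical examples.
import numpy as np

# Standard curve: absorbance at 570 nm vs. HA concentration (mg/mL)
std_conc = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
std_abs = np.array([0.02, 0.11, 0.20, 0.39, 0.78])
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def ha_conc(absorbance: float) -> float:
    """HA concentration (mg/mL) from absorbance via the linear standard curve."""
    return (absorbance - intercept) / slope

# Unreacted HA summed over the supernatant/wash fractions (hypothetical values)
supernatant_abs = [0.35, 0.08, 0.03]
supernatant_vol_mL = 1.0
ha_unreacted_mg = sum(ha_conc(a) * supernatant_vol_mL for a in supernatant_abs)

ha_added_mg = 1.0  # hypothetical initial amount in the reaction mixture
ha_conjugated_mg = ha_added_mg - ha_unreacted_mg
print(f"Conjugated HA: {ha_conjugated_mg:.2f} mg")
```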
Quantification of Folic Acid: Folic acid content on the silica nanoparticles was measured using a quantitative ultraviolet (UV) spectrophotometric method at 358 nm. Absorbance of 0.2 mg/mL nanoparticle samples was measured using a FlexStation 3 microplate reader (Molecular Devices, Sunnyvale, CA, USA). Folic acid of known concentrations was used as a standard.

Determination of Protein Adsorption: Adsorption of protein on nanoparticles was determined using the Bradford assay with Coomassie brilliant blue dye. Briefly, 0.1 mg/mL of each nanoparticle sample was incubated with 0.5 mg/mL of Bovine Serum Albumin (BSA) at 37 °C and pH 7.4. After 2 h, the mixture was centrifuged at 12,000 rpm for 20 min and 5 µL of supernatant was added to a 96-well plate with 250 µL of the Bradford Reagent. After keeping the mixture at room temperature for 10 min, the absorbance of each sample was measured at 595 nm using a FlexStation 3 microplate reader (Molecular Devices, Sunnyvale, CA, USA). The adsorbed BSA was calculated using the equation q = V(Ci − Cf)/m, where Ci and Cf are the initial and final BSA concentrations in the solution, respectively; V is the BSA solution volume; and m is the mass of nanoparticles added into the solution. BSA of known concentrations was used as a standard.

Stability of Particle Fluorescence: The fluorescence stability of nanoparticles was determined by incubating three different concentrations of nanoparticles in cell culture media at 37 °C and pH 7.4. At various time points, the fluorescence of each sample was measured at an excitation of 495 nm and emission of 525 nm using a FlexStation 3 microplate reader (Molecular Devices, Sunnyvale, CA, USA). Culture media without nanoparticles was used as a control. Particle fluorescence was determined to be stable for at least 24 h.

Visualization by Electron Microscopy: A Zeiss EM 10 transmission electron microscope (TEM) operating at a voltage of 60 kV was used to characterize the shape of the nanoparticles. Samples were prepared by dropping 10 µL of nanoparticle suspension on the formvar-carbon film of a 300 mesh copper grid and wiping off the remaining solution with a filter paper after 15 min. The grid was then placed in a petri dish and allowed to dry overnight at room temperature.

Quantification of Receptor Expression: Expression of CD44 and folate receptor was determined with a BD Accuri C6 flow cytometer (BD Biosciences, San Jose, CA, USA) containing two lasers (488 and 635 nm). The instrument was equipped with a 533/30 band pass filter to examine fluorescence emitted by 488 nm laser excitation and a 675/25 band pass filter to examine fluorescence emitted by 635 nm laser excitation. Measurements consisted of 10,000 events with a flow rate of 12 µL/min and data recorded for 2 min.
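Tying the Bradford readout to the adsorption equation given in the protein adsorption paragraph above, the sketch below computes q = V(Ci − Cf)/m for a single sample; all numerical values are hypothetical, and in practice the concentrations would come from the Bradford standard curve.

```python
# Worked example of the BSA adsorption calculation q = V * (Ci - Cf) / m.
# All values are hypothetical illustrations.

def adsorbed_bsa_mg_per_mg(C_i: float, C_f: float, V_mL: float, m_mg: float) -> float:
    """Adsorbed BSA (mg per mg of nanoparticles)."""
    return V_mL * (C_i - C_f) / m_mg

C_i = 0.50   # mg/mL, initial BSA concentration
C_f = 0.46   # mg/mL, BSA remaining in the supernatant after incubation
V = 1.0      # mL of BSA solution
m = 0.1      # mg of nanoparticles added

print(f"q = {adsorbed_bsa_mg_per_mg(C_i, C_f, V, m):.2f} mg BSA per mg nanoparticles")
```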
Expression of receptors on cells was determined by first seeding 10^6 cells/tube, adding 1 µL of zombie violet, and then incubating for 15 min at room temperature in the dark. Cells were washed twice with stain buffer and then 5 µL/tube of block buffer was added and incubated for 1 h at room temperature. CD44 or FR antibodies were then added at 100 µg/tube and incubated for 2 h at room temperature in the dark. Cells were washed twice with stain buffer before addition of 0.5 mL/tube of fixation buffer and incubation for 1 h at room temperature in the dark. Cells were washed twice with stain buffer, re-suspended in 0.5 mL/tube stain buffer, and then filtered through a 40 µm filter. Cells were kept in the dark at 4 °C before analysis by flow cytometry. Unstained cells and cells with isotype antibodies were used as controls.

Assessment of Cytotoxicity: Chinese Hamster Ovary (CHO) cells were seeded in a 96-well plate at 20,000 cells/well and incubated overnight at 37 °C. After incubation, the culture media was renewed with culture media containing varying concentrations of nanoparticles. Four hours prior to each measurement time point, 2 mg/mL of MTT was added to each well and, after 4 h, the culture media of each well was aspirated completely without touching the blue-purple crystals of insoluble formazan. DMSO was added to each well to dissolve the crystals and the well plate was vortexed for 5 min, at around 500 rpm, with the plate agitator to completely dissolve the crystals. Absorbance was measured at 540 nm using a spectrophotometer. Control samples consisted of cells without nanoparticle treatment. Cell viability was calculated as (absorbance of cells with nanoparticles / absorbance of control cells) × 100%. Effects of particles on cells were negligible, with all samples showing viability ≥ 90% at concentrations up to 100 µg/mL (data not shown).

Cellular Targeting Studies: For qualitative analysis of cellular targeting, cells were seeded at 100,000 cells/well in 6-well plates and incubated overnight at 37 °C. The culture media was replaced with culture media containing varying concentrations of nanoparticles and incubated for a desired time period. Cells were then rinsed thrice with ice-cold 1X PBS (pH 7.4, 4 °C) to eliminate excess nanoparticles and dead cells. Then, the cells were fixed with 4% paraformaldehyde for 1 h at room temperature and rinsed thrice with ice-cold 1X PBS (pH 7.4, 4 °C). The cells were stained with 300 nM DAPI solution for 5 min in the dark at room temperature and again rinsed thrice with ice-cold 1X PBS (pH 7.4, 4 °C). Finally, the cells were imaged under a fluorescence microscope. Cells without nanoparticle treatment were used as a control.

For quantitative analysis of cellular targeting, 40,000 cells/well were seeded in a 96-well plate and incubated overnight at 37 °C.
The culture media was then replaced with culture media containing varying concentrations of nanoparticles, which were incubated for a predetermined amount of time. The media was then aspirated and transferred to a different 96-well plate for analysis. The cells were thoroughly rinsed thrice with ice-cold 1X PBS (pH 7.4, 4 °C) to eliminate excess nanoparticles and dead cells, and the cells were lysed using lysis buffer for 60 min at room temperature. After lysis, the well plate was vortexed for 5 min at 500 rpm and the fluorescence of FITC associated with the cells and with the previously collected media was measured at an excitation of 495 nm and emission of 525 nm. The fluorescence intensity was converted to the number of nanoparticles based on a standard curve obtained with known nanoparticle concentrations in the lysis buffer and culture media. Cells without nanoparticle treatment were used as a control. Cellular targeting of each nanoparticle sample was calculated using the equation:

Targeted (%) = (fluorescence of cell-associated nanoparticles / fluorescence of nanoparticles added) × 100

The same protocol was followed for the competitive inhibition study with the exception that cells were preincubated with different concentrations of the free ligand mixture for 3 h before nanoparticle addition.

Figure 1. Chemical structure of (a) Hyaluronic Acid and (b) Folic Acid.

Figure 2. Typical nanoparticles produced in this study as visualized by (a) fluorescence microscopy and (b) TEM image. Particles showed bright fluorescence and uniform, spherical morphology.

Figure 3. Physicochemical characteristics of targeted nanoparticles. (a) Effect of ligand conjugation on nanoparticle zeta potential. Zeta potential of the nanoparticles shifted toward the positive direction with FA conjugation and reversed towards the negative direction with HA conjugation; (b) Effect of H:A molar ratio on nanoparticle surface conjugation by HA. At lower F:A molar ratios, an increase in H:A molar ratio significantly increased HA conjugation on the nanoparticle surface; (c) Effect of F:A molar ratio on nanoparticle surface conjugation by FA. At lower H:A molar ratios, an increase in F:A molar ratio significantly increased FA conjugation on the nanoparticle surface; (d) BSA protein adsorption on nanoparticles. Ligand conjugation had a negligible effect on protein adsorption.
Figure 4. Dot plots for receptor positivity where the x-axis quantifies CD44 and the y-axis shows folate receptors. Cells selected for targeting studies included (a) SW480 cells, which are positive for both receptors, (b) WI38 cells, which were positive for the CD44 receptor only, and (c) Caco2 cells, which were positive for only the folate receptor.

Figure 5. Competitive nanoparticle (0.3 mg/mL) targeting of SW480 cells as a function of free ligand concentration. Pre-incubation of cells with the free ligand mixture for 3 h resulted in reduced targeting of nanoparticles to cells as the free ligand concentration was increased.

Figure 6. Targeting of silica nanoparticles to (a) SW480 cells; (b) WI38 cells; and (c) Caco2 cells. FR positive cell lines (SW480 and Caco2) showed increased cellular targeting with an increase in F:A molar ratio due to FA-FR mediated endocytosis. The CD44 positive cell line WI38 showed reduced cellular targeting with an increase in F:A molar ratio. All 3 cell lines showed increased cellular targeting with an increase in H:A molar ratio up to a certain point, which then decreased with further increase in H:A molar ratio up to 1.25, probably due to steric crowding.

Figure 7. Selectivity of targeted silica nanoparticles comparing (a) SW480:WI38; and (b) SW480:Caco2. The targeted nanoparticles showed lower selectivity with an increase in H:A molar ratio because both the cancer (SW480) and control cells (WI38 and Caco2) express CD44.

Selectivity of each nanoparticle sample was calculated using the equation:

Selectivity = (concentration of nanoparticles targeted to SW480 cells) / (concentration of nanoparticles targeted to WI38 or Caco2 cells)

The distribution coefficient of each nanoparticle sample was calculated using the equation:

Distribution coefficient = (concentration of nanoparticles targeted to cells) / (concentration of nanoparticles in the media)
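As a minimal sketch, the targeting percentage defined in the Methods and the selectivity and distribution coefficient defined just above can be computed as follows; the fluorescence and concentration inputs are hypothetical examples rather than measured values from this study.

```python
# Sketch of the targeting, selectivity, and distribution coefficient calculations
# defined above. All input values are hypothetical examples.

def targeting_percent(fluor_cell_associated: float, fluor_added: float) -> float:
    """Targeted (%) = cell-associated nanoparticle fluorescence / added fluorescence x 100."""
    return fluor_cell_associated / fluor_added * 100.0

def selectivity(conc_sw480: float, conc_control: float) -> float:
    """Ratio of nanoparticles targeted to SW480 cells vs. a control cell line."""
    return conc_sw480 / conc_control

def distribution_coefficient(conc_in_cells: float, conc_in_media: float) -> float:
    """Ratio of nanoparticles associated with cells vs. remaining in the media."""
    return conc_in_cells / conc_in_media

if __name__ == "__main__":
    print(f"Targeting: {targeting_percent(1200, 15000):.1f} %")
    print(f"Selectivity (SW480:WI38): {selectivity(0.12, 0.04):.1f}")
    print(f"Distribution coefficient: {distribution_coefficient(0.12, 0.18):.2f}")
```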
Table 1. Summary of synthesis conditions and resulting size and polydispersity index (PDI) of targeted nanoparticles.

Maintenance of Cell Cultures: Chinese Hamster Ovary (CHO) cells were maintained in Ham's F-12K nutrient mixture with L-glutamine, supplemented with 10% fetal bovine serum and 1% penicillin. Human Colon Adenocarcinoma (HCT116) cells were maintained in Dulbecco's Modified Eagle's Medium, supplemented with 10% FBS and 1% antibiotics. Human Colorectal Adenocarcinoma cells (HT-29, HCT-116 and SW-480) were maintained in Dulbecco's Modified Eagle's Medium, supplemented with 10% FBS, 1% L-glutamine, and 1% antibiotics. Human Epithelial Prostate (RWPE-1) cells were maintained in Keratinocyte Serum Free Medium supplemented with EGF and BPE. Human Fibroblast (WI38) cells were maintained in Eagle's Minimum Essential Medium, supplemented with 10% FBS and 1% antibiotics. Human Colorectal Adenocarcinoma (Caco-2) cells were maintained in Eagle's Minimum Essential Medium, supplemented with 20% FBS and 1% antibiotics. Human Embryonic Kidney (HEK-293) cells were maintained in Eagle's Minimum Essential Medium, supplemented with 10% FBS. All cell lines were incubated at 37 °C in 5% CO2.
2023-08-24T15:08:50.958Z
2023-08-22T00:00:00.000
{ "year": 2023, "sha1": "5966eec4c9c1b33a5a6e8ab7b0aec0bf3185a981", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/17/13048/pdf?version=1692690573", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7837ce29fb9dcab15fcfae9f3bc8ca8f51c2e3f6", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
159105573
pes2o/s2orc
v3-fos-license
Contributing to Sustainable Healthcare Systems with Case Theory Sustainability is a fundamental concern of our times and is particularly important in healthcare, since people are embedded in the healthcare systems on which their health and the viable usage of resources depend. Due to the complexity of healthcare systems, both in terms of governance and pathways from decisions to behaviours, there is no one-size-fits-all theory, but solutions have to be co-created with the actors operating within the systems. This paper proposes case theory as the methodological approach to analyse and interpret cases in order to identify theories suitable for sustainable healthcare systems. Case theory is a recent extension of case study research that includes network theory and systems theory. An example of a case from an Italian regional healthcare system dealing with stroke patients is presented to show how several methods can be mixed and how a substantive theory emerges.

Introduction
It is widely known that the increasing demand for healthcare services is mainly due to the growth and ageing of the population, the rise of chronic diseases and the increased access that developing countries have to specialized care (OECD, 2017). Notwithstanding the predictability of this scenario, current healthcare systems are still affected by many problems (OECD, 2017), which can be classified according to the economic, social and environmental dimensions of sustainability (Elkington, 1998; U.N.G.A., 2005). Indeed, the unit cost per patient treatment is increasing due, for example, to the development of high technology equipment and drugs. Social issues such as a decrease in the quality of life of patients and their relatives and a lack of health education, civil rights, equality, and patient empowerment are pervasive (Hussain et al., 2018). Environmental concerns have only recently been raised related to the cost-ineffective utilization and protection of resources. For example, pollution is not only due to transformation and transportation activities unrelated to healthcare systems but also to the use of technologies and drugs in medicine: available resources are degraded for exogenous and endogenous reasons, thus impacting wellness. Based on the aforementioned reasons, it seems clear that there is a call for integrated responses to these problems oriented towards the overall sustainability of healthcare systems (W.H.O., 2018). In this regard, it has been argued that "it is more helpful to think like a farmer than an engineer or architect in designing a health care system. Engineers and architects need to design every detail of a system […] because the responses of the component parts are mechanical and, therefore, predictable. In contrast, the farmer knows that he or she can do only so much… [He] simply creates the conditions under which a good crop is possible" (pp. 314-315). This analogy is quite distinctive: every healthcare system actor should feel like a farmer, inspired by the principle of "creating the best conditions". However, while there are many publications related to the optimization of healthcare systems in terms of maximization of effectiveness and efficiency (Hans et al., 2012; Sarno, 2017), only a few studies have adopted a more comprehensive interpretative framework and approach oriented towards sustainability (Prada, 2012; Fischer, 2015; Saviano et al., 2018); studies based on, or at least applicable to, real systems are still needed.
These considerations are in line with Henry Mintzberg's myth #2 of healthcare, which states that healthcare systems can be fixed by clever social engineering driving the change from the top (Mintzberg, 2017). In contrast, it has been recognized that there is no one-size-fits-all solution, and every problem should be treated in its own context. Thus, solutions to better manage healthcare and create the conditions for an "optimum crop" should come from the ground level, where they are experienced and engaged with by people dealing with everyday challenges (Mintzberg, 2012). This should be achieved not by "doing things to people" but by "doing things with people" and co-creating with them (Gummesson et al., 2018). Thus, given the call for sustainability and the complexity of healthcare systems, successful strategies able to create favourable conditions should be grounded on robust theories derived from interaction with the people who are operating within them. Case theory (Gummesson, 2017) is guided by the complexity paradigm and, for this reason, it puts emphasis on interactive research. It is a recent extension of case study research (Yin, 2014) to include two languages that can face complexity in a more systematic and structured way: network theory and systems theory. Due to its characteristics, case theory is suitable as a service research methodology (Gummesson, 2014), and it contributes to the development of new theories and theory testing (comparisons with extant theories). Thus, the research question (RQ) investigated in this paper is as follows: RQ: Can case theory support theory generation for sustainable healthcare systems? The remainder of the paper is structured as follows: in Section 2, a brief literature review on sustainability and sustainable healthcare systems studies is reported; Section 3 addresses the main characteristics and methods of case theory; in Section 4, a straightforward example of case theory applied to a healthcare system struggling for sustainability is shown; implications follow.

Sustainable Healthcare Systems
Given the concern for the future of healthcare systems, individuals and public/private institutions have started to raise awareness about sustainability issues, even trying to adopt strategies to incorporate some managerial solutions into the existing systems. Indeed, sustainable development addresses harmonizing social-ecological systems (Berkes et al., 2003) and socio-technical systems (Trist, 1981). Sustainability can be seen as both an outcome and a goal, since it entails the building of dynamic capabilities within a system to meet internal and external challenges of the continual regeneration of economic, environmental, and social resources in order to not lose options for the future (Lifvergren et al., 2015). Thus, a sustainable healthcare system is designed to meet the health needs of people, resulting in optimal outcomes that are able to adapt to cultural, social and economic conditions without compromising the possibilities of future generations (Prada, 2012). Six pillars of healthcare system sustainability were concisely identified in Prada (2012): disease prevention and health promotion; attention to structure, processes, and approaches; affordability of financial resources and investments; innovation; development of human resources; and government policies outside of healthcare.
To better highlight the environmental role in sustainable healthcare systems, another study (Fischer, 2015) proposed five pillars:
• disease prevention and health promotion;
• long-term strategic perspective and innovation, reflecting the economic/strategic perspectives among policy makers;
• quality, or the degree to which health care services increase the likelihood of desired outcomes consistently with current professional knowledge (Institute of Medicine US, 2001). Quality is a determinant of sustainability because its lack would imply a decrease in the acceptance of the cost sustained for it;
• institutionalization of environmental concerns (in terms of both social and ecological environments);
• institutional accountability (to balance the resources invested with the expected long-term stability of the system) and individual responsibility (patient empowerment to decrease their dependence on gatekeepers).
Although general, these conceptual frameworks are probably not exhaustive in covering all sustainability issues. For example, other research has dealt with the role of organizational commitment in healthcare sustainability, highlighting how training, mentoring, leadership, effective management practices, readiness to adapt, willingness to collaborate and other levers truly influence sustainability strategy implementations in the long run (affective, normative and continuance commitment (Goh & Marimuthu, 2016)). Thus, the social capital dimension has to be strengthened through cooperation among citizens and professionals in planning and evaluating healthcare services (Botturi et al., 2015) and by focusing on the personalization of care paths (Borgonovi & Compagni, 2013). Other dimensions of the sustainability of healthcare systems have been identified in the effectiveness of resource allocation. To this extent, a practical tool was developed to evaluate disinvestment feasibility in Australian health services (Harris et al., 2017). To become theories for sustainable healthcare systems, the different frameworks and pillars identified in these studies should be validated against real data and compared to other theories. Case study research has been recognized as a qualitative method (Yin, 2014) able to produce the best theory derived from real cases (Walton, 1992). Several publications adopted case study research to derive findings suitable to interpret the role of sustainability in healthcare systems; for example, by dealing with intellectual capital in regional healthcare services by means of semi-structured interviews and focus groups (Cavicchi, 2017). An example of action research (a type of case study research requiring the practical involvement of the researcher in the case (Clark, 1972)) with different waves of interviews was reported for a Swedish hospital coping with sustainability and transformation (Lifvergren et al., 2015). In an attempt to explore methods to identify technologies and practices suitable for disinvestments and drive the implementation and evaluation of the related projects, both case study research (with mixed methods and tools such as interviews, workshops, consultations, etc.) and action research methodologies were adopted (Harris et al., 2017). To become theories for sustainable healthcare systems, the different findings of these studies should be generalized.
In summary, to conduct an impactful research on sustainable healthcare systems (i) theoretical conceptual frameworks should be validated against real data and operationalized in practical cases (the limit of the former studies) and (ii) empirical findings should be generalized to become theories (the limit of the latter studies). In other words, new theories can emerge from (and are confuted by) the interplay between extant theories and quantitative and qualitative methods adopted to analyse real cases. Recently, case study research has been elevated to "case theory" by Gummesson (2017). It faces complexity incorporating network and systems theory and emphasizing action research and the role of the researchers in understanding complex issues, possibly resulting in theory generation and testing, as described in the following section. Facing Complexity with Case Theory Cases allow to address complexity by studying "numerous factors and their links and interactions in dynamic context" (Gummesson, 2017, p.8). The scientific contribution of case theory is the conceptualization of cases as the ground for theory generation, reporting, conclusions and practical applications. Thus, the expression "case theory" covers both the process of knowledge generation and the outcome (the new knowledge) of the generation process, constantly performing comparative analyses among new concepts, categories and theories that already exist in a continuous process of validation and generalization of theory. Case theory answers the need to better ground theory in the real world, making efforts to generalize data towards grand theory and giving back condensed complexity to mid-range theory in the form of facilitated actions (Fig. 1). In particular, it has the following purposes: • Particularization, in the sense that it can allow to solve specific problems in a specific real context; • Generalization, because it can be adopted to generalize results from particular cases (Flyvbjerg, 2006) in terms of substantive theory to be adopted in similar cases or towards generalization to mid-range (more general models, frameworks, etc., such as the Boston Consulting Group Model or the model for 5 Forces of Competition) and grand theory (more abstract and general theories, such as Service-Dominant logic). Thinking in Service, a Value Co-Creation Perspective The adoption of case theory should take advantage of the service perspective, which can be very successful for analysing healthcare systems targeting sustainability because it shifts the focus from the goods to the service that actors exchange within these systems. In a healthcare setting, the focus on goods ("Good-Dominant logic") manifests in the spread of more specialized drugs, equipment, facilities and procedures. According to the Good-Dominant logic, some actors (parties) are in charge of producing and delivering goods, while others are in charge of consuming them, passively receiving the value the formers provide. This is revealed in the fact that the ill-person is called "patient", referring also to the fact that he/she needs to wait without asking or getting irritated for healthcare services (Gummesson, 2001). Thus, Good-Dominant logic seems to be the logic of division between actors. It addresses nouns, such as laboratory tests, nurses, and medications, and it does not take into account that patients do not need goods but rather solutions to their health problems and wellness. 
In contrast, the mind-set of Service-Dominant logic has been developed (Vargo & Lusch, 2008). This logic is the logic of togetherness and verbs, such as caring, monitoring, and visiting (Joiner & Lusch, 2016). Service-Dominant logic is based on the assumption that no value can be obtained in isolation, but value is derived from exchanges, interactions, and collaborations. Value is co-created by all involved parties that integrate different types of resources (private, such as friends and families, or public, such as nature, laws, or protocols), knowledge and goods. Thus, the value co-creation process takes place when actors integrate their resources and exchange service, which is the application of competences and other resources for the benefit of another party. In summary, goods are only service distribution mechanisms. Moreover, value is subjectively perceived by each party depending on his/her information variety, needs, and the context of value co-creation (Polese, 2018). Therefore, even taking an appointment for a visit can be valuable, or not, for the parties involved (Kim, 2018) depending on the agreement on the date/time based on these characteristics (for example, the resources available to the patient to travel to the appointment or the availability of the equipment for the physician). Adopting a service perspective, Gummesson et al. (2018) stated that the Italian healthcare system needs new strategies that are not just planned but also emergent, new organizations designed to encourage distributed leadership, new managerial styles oriented to meet patients' expectations, and new, humanized measurement scales. Reynoso (2009) connected value co-creation to sustainability, stating that a balanced integration of economic, social and environmental aspects is the foundation of value co-creation. Accordingly, Saviano et al. (2017) explored the interests of service research in the topic of sustainability, finding a low but increasing rate of usage of the word "sustainab*". Then, based on the sustainability literature, they listed some key requirements for global engagement in the challenge of sustainability in light of service research: multi-stakeholder engagement and participatory processes (addressable by means of concepts such as value co-creation and consonance of the Viable Systems Approach (Golinelli, 2010)), cultural change through education (T-shaped professionals to improve not just the technical but also the comprehension capabilities of managers), and a multi-perspective approach with interdisciplinary thinking and systems thinking (such as the developed Viable Systems Approach and Service Science (Maglio et al., 2009)).

Thinking in Networks, a Relational Perspective
Case theory suggests adopting network theory, which can be very successful in analysing healthcare systems that are targeting sustainability, because it allows for the identification of the actors involved and their relationships. Thus, it provides the researcher with the whole picture of the value co-creation networks (Enquist et al., 2015). According to the relational perspective, reality is seen as a set of relations (links) linking actors (nodes), which makes it possible to describe, with various degrees of sophistication, the possible interactions among the components of a network, translating the narrative description of a case into advanced graphs and mathematics. When enlarging the analysis from one dyad of actors to the whole context, many-to-many marketing is investigated (Gummesson, 2004).
Many-to-many marketing, initially inspired by the IMP researchers (Wilkinson, 2008), "describes, analyses and utilizes the network properties of marketing" (Gummesson, 2008). The general definition of networks attributes to them a scale-free meaning (that is, there is no limit to their size) and a random occurrence. In marketing and management (Gummesson & Polese, 2009), networks are built and defined for a purpose, which requires control and gives rise to limited and planned networks. Partners, suppliers, shareholders, and other stakeholders offer access to external resources as an alternative to a company acquiring its own resources. A member of a network based on cooperation cannot solely maximize its own benefits at all times but-within reasonable limits-has to show respect for the other members. Moreover, there is a cost for building, maintaining or finding a network. There are several concepts and properties of networks (Barabási, 2002) that can be very useful for analysing a real case, looking for peculiarities, commonalities, etc., thus enlarging the power of narratives. For example, by drawing a network, it is possible to identify clusters (dense grouping of links and nodes in which every actor can easily reach the others), while a cluster coefficient is a measure of the closeness of belonging actors. A hub, in contrast, is an actor (individual or organization) with a particular attraction to others (measurable in terms of fitness). A centralized network presents one hub. Several studies have adopted network theory in healthcare systems, as Gummesson (2017) did by comparing the Canadian Shouldice Hospital to a Swedish hospital to understand the unique value proposition of the former. Thinking in Systems and Eco-Systems, Holistic and Institutional Perspectives Case theory suggests adopting systems theory (Bruni et al., 2018), which can be very successful in analysing healthcare systems that are targeting sustainability from a holistic perspective. Moreover, the eco-systemic view of Service-Dominant logic (Vargo & Lusch, 2016) allows for addressing the role of institutions (rules, norms, symbols, etc.) in orienting interactions among actors in healthcare systems. Systems thinking shifts the focus from the parts (reductionism) to the whole (holism, where the whole is not just the sum of its parts (Checkland, 1997)). Thus, in healthcare settings, the focus is shifted from drugs, facilities, equipment, etc. (the "components") to how the components are organized and behave (the "whole"). Moreover, a holistic perspective takes into account the relationships among the parts and the whole. Thus, the universality of health can be interpreted as a sound principle because healthy people handle their lives better and are able to contribute to the nation in which they live and its health (Gummesson et al., 2018). Based on systems thinking, the Viable Systems Approach (Golinelli, 2010) assumes that every system is oriented towards a purpose and is viable (it wants to survive over time) (Barile & Saviano, 2011). A system should be aligned to supra-systems, which are other systems that retain critical resources for the original system's survival (Barile & Polese, 2010). To be aligned, a system shows adaptation traits both as autopoiesis (self-organization (Maturana & Varela, 1975)) and homeostasis (auto-regulation (Beer, 1975)). 
Alignment among systems can be measured in terms of consonance (similarity of information varieties, Barile et al., 2013) and resonance (positive harmonic interaction to achieve a common goal). The Viable Systems Approach community has developed a thorough understanding of sustainability. Among other topics, the community has analysed the intersections among the three spheres of sustainability and the dynamics of the resulting helix (Scalia et al., 2018). Moreover, the relationships among efficiency, effectiveness and sustainability have been defined and state that efficiency is a primary short-term goal of every system's process, effectiveness is a target based on an extended perspective of the organization and its strategies, and sustainability is a wider measure of the overall activity of the system operating in its environment (Barile et al., 2014). A recent study based on the contribution of Viable Systems Approach to sustainability in healthcare systems was developed by Saviano et al. (2018). According to service science, an initiative developed by IBM with a clear technological focus, service systems (Maglio & Spohrer, 2018) are configurations of people, technologies, and other resources that interact with other service systems to create mutual value. Among quantitative methods to address complex systems, system dynamics (Stermann, 2000) is a modelling and computer simulation technique that describes the behaviour of systems when variables interact. In Service-Dominant logic, the metaphor of natural (eco)systems has been adopted to analyse networks of actors who need resources to survive under a service ecosystem view (Akaka et al., 2015). In service ecosystems, actors gain mutual benefit from co-creating value together while their actions are enabled and constrained by shared rules, norms, practices, interpretation schemes and beliefs, which are together named institutional arrangements (Vargo & Lusch, 2016). According to Service-Dominant logic, actors choose to engage in resource integration with other actors by interpreting, in light of the institutional arrangements (potential resourceness (Koskela-Huotari &Vargo, 2016)), their value co-creation potential. Finally, service ecosystems are self-adjusting because social practices are responsive to the changes according to the agency of the individuals acting within (Peters et al., 2010). Several studies have analysed the behaviours of healthcare systems under a service ecosystem view (Frow et al., 2016;Gambarov et al., 2017) Adopting Case Theory for a Sustainable Healthcare System: An Example The presentation of the following short case has the objective of intuitively showing how many insights can be easily derived by adopting a case theory methodological approach, both in terms of theory building (by using case theory to solve problems) and theories comparison and generalization (to compare different theories or simplify findings to make them more theoretical and abstract). Thus, the case-presented in the form of a narrative-exemplifies some of the concepts expressed in the paper. However, in practical implementations of case theory, rigorous descriptions of research the method, context, analysis and findings must be reported. The case dealt with an Italian healthcare system for stroke patients struggling for sustainability (stroke and other circulatory diseases are responsible for more than one in three deaths in the world, OECD, 2017). 
The case initially focused on 1 medical unit (12 beds) of a regional university hospital with its nurses and physicians; then, it was enlarged to a regional technical committee of more than 20 participants from different institutions. Finally, it was widened to a national level, as reported below. The purpose of studying this case was to extend the understanding of sustainability in healthcare systems while helping involved actors make decisions to increase the sustainability of their systems. In particular, the research questions were as follows: how can traces of sustainability in this healthcare system be identified, or how can the sustainability be increased, possibly in terms of innovative social, economic and environmental practices? How can some new theories eventually be derived from that? How can it be ensured that these traces are "better" than other traces identified in other systems? How can sustainability principles be incorporated into the education, values, skills and common practices of the actors of this system? The access to case data was simplified since, as researchers, professors in healthcare management, co-founders of medical devices start-ups, friends of patients, and friends of physicians, the authors were actively involved several times in the healthcare system for more than 5 years. The authors were actually actors in this healthcare system. As researchers, they were asked to identify a method for sizing the nursing staff in a hospital (Sarno & Nenni, 2016). As marketing and management professors, they had to explain to students what the production implications of a clinical pathway are (capacity or resources, etc.) and how the national healthcare systems manage reimbursement to healthcare providers using the stroke case as an example. As entrepreneurs, they interacted with RX-machine providers to test the effectiveness of new dosimeters to measure radiations during diagnostic exams. As friends of patients and physicians, they paid attention to the symptoms of their relatives, took them to hospitals and rehabilitation structures, participated in medical workshops, talked about electronic health records, and collected paper medical records. In summary, with different roles, needs, purposes, resources, knowledge, and institutions, they interacted many times with multiple actors, building a basic understanding and explicit knowledge related to the case study of the management of such a healthcare system under the pressure for sustainability considerations. The number of actors, purposes, resources, relationships, and institutions involved was clearly huge, and the system appeared to be a Complex Adaptive system with decentralized power control. This scenario was fitting for case theory. Different theories were mixed under a service perspective. At first glance, according to network theory, it was clear that the network of actors (including the hospital actors, the post-discharge facilities, the drug and equipment providers, the universities-for education and research purposes-the citizens, etc.) was fragmented. Some nodes were more connected than others as attractors for the remainder of the network. 
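To make this network reading of the case more concrete, a minimal sketch using Python's networkx library is given below; it illustrates the measures discussed earlier (connected components as a sign of fragmentation, degree as a proxy for hubs, and the cluster coefficient). The actors and relationships in the example are illustrative only and do not reproduce the real committee network.

```python
import networkx as nx

# Illustrative (made-up) actor network for the stroke-care case:
# nodes are organizations, edges are existing working relationships.
G = nx.Graph()
G.add_edges_from([
    ("stroke unit", "university"), ("stroke unit", "rehab centre A"),
    ("university", "rehab centre A"), ("stroke unit", "equipment vendor"),
    ("rehab centre B", "home care"),  # a second, disconnected cluster
])

# Fragmentation: number of disconnected components in the network.
print("components:", nx.number_connected_components(G))

# Hubs: the actors with the most relationships attract the rest of the network.
print("degree:", sorted(dict(G.degree()).items(), key=lambda kv: kv[1], reverse=True))

# Cluster coefficient: how tightly each actor's partners are connected to one another.
print("clustering:", nx.clustering(G))
```

In this toy network, the high-degree stroke unit plays the role of a hub, while the second, disconnected component corresponds to the fragmentation observed in the real case.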
In particular, a group of physicians from the hospitals, some professors from the university, and a group of doctors from the rehabilitation network participated in a technical voluntary committee promoted by the region to try to integrate their practices, aligning the education of students from medicine to the best practices developed in the hospitals, and, at the same time, trying to adopt homogeneous treatments in hospitals and in the post-stroke network. A set of nodes partially overlapping with the previous nodes decided to fund a committee to compare their strategies, tactics and operations with those of other hospitals in other Italian regions to identify the most effective way to manage stroke patients and obtain the highest outcomes. This committee included some of the authors. By analysing the work of the committee as a case within a case, it is possible to identify a wide mix of methods adopted to derive practical solutions (Fig. 2). Following the basics of case theory, they started drawing their own networks and developing a graphical representation (as a flowchart) of the adopted clinical pathways to make comparisons. Some parameters-as key performance indicators of clinical and managerial activities (such as the average length of stay collected for each stroke patient category) and as results of stakeholders' satisfaction surveys, both weighted on the case mix treated by the different stroke units-were chosen to make the comparisons more accurate. Incidentally, the survey was based on the findings of a system dynamics model (Stermann, 2000). Finally, while the initial objective of the research was the outcome effectiveness in the context of the regional healthcare system, the committee recognized that there was a tacit objective to take efficiency into account and answer the call for environmental concerns of national and international organizations. These tacit goals were later translated into explicit ones named the social, economic and environmental areas of sustainability of the stroke healthcare regional system even if they were based on fuzzy sets (for example, the change of demission rate in 6 months after the development of a new regional guideline-used to measure the flexibility of the hospital-addressed both economic and social dimensions of sustainability). In particular, the sustainability strategy of the committee was defined according to the five pillars proposed by Fischer (2015) in terms of prevention and the promotion of health in the geographical area, the long-term perspective on innovation in drugs and treatments, the quality of care (even in terms of possible hospitalization at home ), environmental concern, and patient empowerment and responsibility. Moreover, a focus on organizational commitment was introduced. The initial set of key indicators was extended and clustered for each pillar, and a target was set out for each pillar to identify the structures that were able to perform better. The different practices were analysed and compared in terms of processes, equipment and other resources, and actors involved to identify the best practices. To this extent, a system dynamics model and the analysis of supra and sub-systems of Viable Systems Approach were adopted. Indeed, causal relationships among the variables determining the values of the parameters were shown based on the understanding of the influences of the different actors participating in or impacting the system. 
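As an illustration of the system dynamics logic adopted by the committee, the following minimal Python sketch treats occupied stroke-unit beds as a stock fed by admissions and drained by discharges. All parameter values are hypothetical; the sketch only shows how what-if runs of the kind described next can be performed.

```python
# Minimal stock-flow sketch: occupied stroke-unit beds as a stock,
# admissions as inflow and discharges as outflow; parameters are illustrative.
def simulate(weeks=52, admissions_per_week=10.0, mean_stay_weeks=1.5, occupied0=12.0):
    occupied = occupied0
    history = []
    for _ in range(weeks):
        discharges = occupied / mean_stay_weeks       # outflow proportional to the stock
        occupied += admissions_per_week - discharges  # stock accumulates the net flow
        history.append(occupied)
    return history

# What-if analysis: how does reducing the average length of stay
# (e.g., via a new regional guideline) change bed occupancy after a year?
for stay in (1.5, 1.2, 1.0):
    print(f"mean stay {stay} weeks -> final occupancy {simulate(mean_stay_weeks=stay)[-1]:.1f} beds")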
Then, several what-if analyses were performed to finally develop common guidelines to implement sustainability practices. For example, to speed the decision process related to the particular lifesaving treatment of thrombolysis, guidelines were provided to educate the first aid personnel (on the ambulance) to quickly recognize stroke symptoms, evaluate the possibility of administrating a specific class of drugs, and inform the hospital before arrival at the emergency department of the need for treatment. From a service ecosystem perspective, the "sustainability inspired" guidelines are institutions co-created and shared by actors who are willing to cooperate. In such an ecosystem, value co-creation processes are simplified, since actors already know and agree with the same "rules of the game" and are facilitated to integrate resources and exchange service since they better perceive and understand the value propositions of the other actors. As an example, after the release of the report of the committee, other actors agreed to change processes to align to the guidelines and temporary exchange personnel. Thus, these common guidelines, as boundary objects (Sajtos et al., 2018), became the facilitators to aggregate new actors into the network (enlarging the ecosystem) and favour more cohesion within it. Moreover, due to the growth of the network and the importance of the initiatives supported by it, the development of a new information system for patient medical records was financed by the regional healthcare department. The design of this software was inspired by service systems principles. The software was oriented not just to keep medical records but also to allow the different actors (medical records stakeholders) to co-create value on a common platform: tele-medicine initiatives were carried out, and patients were allowed to take part in decision making processes by expressing preferences on visits and self-annotating their health statuses. The software was used as an engagement platform (Storbacka et al., 2016) to communicate healthier lifestyles and the availability of new healthcare services. Furthermore, while respecting privacy laws, social networking was facilitated by specific software functions to allow patients with similar problems to connect and share experiences, making patients and their relatives more empowered. Analysing the software introduction through the lens of case theory, it can be found that, after its first release, value co-creation in the network increased actors' engagement, and engagement increased participation in the service ecosystem in a positive reinforcing loop (a higher number of active participants on the committee and in the healthcare system was recorded). In behaviour of these systems is context-dependent, there is no one-size-fits-all theory, but solutions must be co-created with the actors operating within the systems. This paper proposes case theory as the methodological approach to analyse, interpret and identify theories that are suitable for sustainable healthcare systems. Indeed, this paper adopts interactive case study research (asking about the service-oriented involvement of researchers in the systems and co-creating value with the other actors), and it introduces a system and network approach to face the complexity of real cases. 
A straightforward example from the real case of an Italian regional healthcare system dealing with stroke patients is presented to show how several methods can be mixed in the research plan and how a substantive theory emerges and is institutionalized due to the continuous cycle of comparison with other theories and improvement of the current one, thus enlarging the number and variety of the cases examined. Several theoretical and practical implications are derived from this. First, the literature review reveals that there is still a lack of a clear understanding of the sustainability variables in healthcare systems, which makes it difficult to even state that current theories have been error-proofed. To the contrary, sustainability issues should be addressed based on appropriate methods to address complexity, accepting that no theory will survive forever, also because they are context-dependent. Second, based on case theory and its adoption in the presented case, sustainable healthcare systems need guidelines developed by the "community". Indeed, from a system point of view, since healthcare systems are CAS, there is no single point of control, and self-organization is required. However, resources are scarce, particularly in universalistic systems, and sustainable self-organization can be achieved only with difficulty. Here, the focus on a few simple rules creates the conditions for the emergence of viable systems-as opposed to complicated procedures and processes that can be developed by technicians but may not be applicable to real systems (myth #2 of Mintzberg). Moreover, from a service ecosystem perspective, guidelines developed by groups of involved actors are already aligned to the other shared institutions of the ecosystem, not only in terms of laws and rules but also in terms of symbols, current practices, and beliefs. This also implies that the other actors can easily engage in resource integration and service exchange according to the new guidelines. The information variety alignment of actors can be evaluated by means of measures of consonance and resonance of the Viable Systems Approach. Finally, from a network perspective, it becomes clear that it is useful to organize around existing connections, increasing attractiveness by co-creating new meanings to relationships rather than focusing on vertical silos of knowledge, which encourage only the local optimization of processes and lose the view of the whole picture. Third, the importance of the role of T-shaped professionals is confirmed , since these professionals not only understand sustainability principles but also possess the capabilities needed to be the driver of change by cooperating, negotiating, and leading the other actors towards sustainability decisions and goals. Fourth, as "practical theories" that are adaptable to contexts and re-comprised in case theory, systems and network theories are confirmed to play a fundamental role in healthcare systems (Polese, 2013;Gervasio et al., 2017;Sarno, 2017), while case theory further helps to systematize their findings. Fifth, the interplay between cases and theories; abduction, induction, deduction; particularization and generalization of results are endless. According to case theory, healthcare sustainability studies that are already published may be combined and tested in real cases, taking case theory as a wider framework. 
With a top-down approach, case theory may help to compare current mid-range sustainability theories and look for the best theory to introduce into healthcare systems. Moreover, abstraction from healthcare systems theories may be helpful in deriving a grand theory of sustainability. Finally, new, detailed case studies based on case theory are welcome. Thus, the journey in "methodologyland" opened by case theory does not end in this paper.
2019-05-21T13:04:11.745Z
2019-01-25T00:00:00.000
{ "year": 2019, "sha1": "96d690903a0c0b1543cf2119091699b5fccef50e", "oa_license": "CCBY", "oa_url": "http://www.ccsenet.org/journal/index.php/ijbm/article/download/0/0/38305/38858", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4ca9af734ae72f8d00084bed9f9caca1cb97ab6b", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Economics" ] }
229489123
pes2o/s2orc
v3-fos-license
Delayed pulmonary embolism after COVID-19 pneumonia: a case report

Abstract
Background: Since the onset of the COVID-19 pandemic, several cardiovascular manifestations have been described. Among them, venous thromboembolism (VTE) seems to be one of the most frequent, particularly in intensive care unit patients. We report two cases of COVID-19 patients developing acute pulmonary embolism (PE) after discharge from a first hospitalization for pneumonia of moderate severity.
Case summary: Two patients with positive RT-PCR tests were initially hospitalized for non-severe COVID-19. Both received standard thromboprophylaxis during the index hospitalization and had no strong predisposing risk factors for VTE. A few days after discharge, they were both readmitted for worsening dyspnoea due to PE. One patient was positive for lupus anticoagulant.
Discussion: Worsening respiratory status in COVID-19 patients must encourage physicians to search for PE since SARS-CoV-2 infection may act as a precipitant risk factor for VTE. Patients may thus require more aggressive and longer thromboprophylaxis after COVID-19-related hospitalization.

Learning points
• A deterioration in respiratory status in COVID-19 patients associated with an increased D-dimer level must encourage physicians to search for pulmonary embolism.
• COVID-19 patients may require more aggressive and longer thromboprophylaxis after hospital admission since SARS-CoV-2 could be a precipitant factor for venous thromboembolism.

Introduction
On 11 March 2020, the World Health Organization (WHO) declared the onset of the COVID-19 pandemic, the disease caused by the novel coronavirus, SARS-CoV-2. Since then, several reports have been published on the cardiovascular implications of this emerging disease. 1 Among them, venous thromboembolism (VTE) appears to be a frequent complication, 2 particularly in patients hospitalized for severe acute respiratory distress syndrome. 3,4 In this observation, we report two cases of COVID-19 male patients developing acute pulmonary embolism (PE) after a first hospitalization for pneumonia of moderate severity. To the best of our knowledge, no cases of pulmonary embolism (PE) occurring secondarily after a first hospitalization for non-severe COVID-19 infection have been described yet.

Timeline

Case presentations
Patient 1
Patient 1, aged 68, with a history of heavy smoking (60 pack-years) and hypercholesterolaemia, presented with polypnoea (22 cycles/min) and low oxygen saturation in room air of 88% after 3 days of fever and myalgia. Physical examination demonstrated coarse crackles in both lower lung fields. Reverse transcription-polymerase chain reaction (RT-PCR) testing on a nasopharyngeal swab was positive for SARS-CoV-2. A low-dose computed tomography (CT) scan on the first admission showed peripheral ground-glass opacifications with underlying centrilobular emphysema lesions, with an extension of COVID lesions estimated at 10-25% of the lung parenchyma (Figure 1A). The baseline electrocardiogram (ECG) was normal. A modest lymphopenia was present [750/mm3, referential range (RR) 1000-4000/mm3] with increased values of C-reactive protein (CRP) (41 mg/L, RR < 4 mg/L) and D-dimer (1040 mg/L, RR < 500 mg/L). Brain natriuretic peptide (BNP) and troponin I levels were normal. Lupus anticoagulant (LA) testing was negative. The arterial blood gas on nasal cannula 8 L/min showed PaO2 60 mmHg, PCO2 37 mmHg, and SaO2 90%.
Based on these findings, treatments with low molecular weight heparin (LMWH), enoxaparin 40 mg once a day, ceftriaxone, and hydroxychloroquine were started. The following day, oxygen flow was increased to 15 L/min and delivered on a non-rebreather mask. After 8 days, the clinical status improved, and the patient was discharged home after progressive oxygen weaning. Forty-eight hours after discharge, he presented with rapidly worsening dyspnoea and severe hypoxaemia. D-dimer level was high (>20 000 mg/L), and troponin I and BNP remained normal. A second chest CT evidenced worsening infectious lesions (Figure 1B) with an extension of 30% and filling defects in the right pulmonary artery and its right superior lobe divisions diagnostic for pulmonary embolism (Figure 1C). Unfractionated heparin was then started and switched after 48 h to LMWH twice a day. A second LA test was positive. Vitamin K antagonist treatment with warfarin was initiated. A further LA test was planned after 3 months to decide whether anticoagulation should be discontinued or not. The patient was advised to consult his cardiologist one month after discharge.

Patient 2
Patient 2, aged 62, was referred for dyspnoea after 5 days of fever, dry cough, and myalgia. He had a history of dilated cardiomyopathy with mildly reduced ejection fraction (42%) and several cardiovascular risk factors (smoking, hypertension, type 2 diabetes mellitus, and hypercholesterolaemia). The physical examination did not demonstrate any abnormalities on admission. No crackles had been detected on lung auscultation. RT-PCR testing on a nasopharyngeal swab was positive for SARS-CoV-2. A low-dose CT scan was initially normal (Figure 2A). The baseline ECG showed sinus tachycardia. C-reactive protein was mildly increased (peak value of 20 mg/L) with no leucocytosis. No troponin, BNP, or D-dimer tests were performed during the index hospitalization. Despite subnormal arterial blood gas on room air on admission (PaO2 of 73 mmHg, PCO2 of 42 mmHg, SO2 of 95%), the patient received low-flow nasal oxygen (1 L/min) for 2 days. Enoxaparin 40 mg once a day during the hospital stay of 5 days was the sole treatment. Four days after discharge, he was referred for worsened dyspnoea (New York Heart Association class III). A second CT scan showed multiple subpleural ground-glass opacifications and a filling defect diagnostic for pulmonary embolism in the left inferior lobe (Figure 2B).

Discussion
We describe two cases of COVID-19 patients presenting with post-hospital discharge acute pulmonary embolism despite adequate thromboprophylaxis in a non-intensive care unit setting. Despite the absence of major predisposing risk factors for venous thromboembolism (VTE), the administration of a weight-adjusted thromboprophylaxis during hospitalization and the absence of severe inflammatory syndrome, PE occurred and was associated with a worsening of CT lung injuries. In one case, LA was evidenced, raising questions about the appropriate anticoagulant treatment and its duration. To the best of our knowledge, no reports of acute pulmonary embolism (PE) after discharge from the hospital have been described yet in non-intensive care unit patients. The novel Coronavirus Disease outbreak is a global public health challenge. Since the first cases of SARS-CoV-2 were detected in Wuhan, China, 5 more than 2 500 000 confirmed cases and 175 000 deaths have been documented worldwide. 6
COVID-19-induced interstitial pneumonia, leading potentially to acute respiratory distress syndrome (ARDS) and multi-organ failure, is in the spotlight of all medical teams as it often triggers transfer of patients to intensive care units. Recently, it has been evidenced that SARS-CoV-2 could predispose patients to increased thrombotic disease in the venous and arterial circulations. 7 Severe inflammation, hypoxia, endothelial dysfunction, platelet activation and stasis, particularly in intensive care unit patients, could explain this pro-thrombotic state. Very recently, in an ARDS population, PE was diagnosed in 16.7% of patients and 88.7% of those patients had positive LA. 4 Llitjos et al. observed that systematic screening by complete duplex ultrasound in 26 intensive care unit patients showed a peripheral VTE prevalence of 67%. 8 Interestingly, in the two cases described here, thrombotic events occurred 13-14 days from the onset of COVID-19 symptoms, at home after a first non-intensive care unit hospitalization and in patients who had clinically recovered. There were no clinical signs of severe pneumonia, fever nor major inflammatory syndrome when PE occurred. Interestingly, both patients evidenced an increased extension of the peripheral ground-glass opacifications when PE was diagnosed despite a clear clinical improvement before discharge. Thromboprophylaxis during the first hospital stay was effective but stopped after discharge. Several reports confirmed that attention should be paid to venous thromboembolism prophylaxis in COVID patients during hospitalization, 9 but no recommendations existed regarding routine post-hospital discharge thromboprophylaxis, the recommended agent and/or duration. Our case report suggests the potential role of SARS-CoV-2 as a major precipitant factor for VTE. Some acute viral infections are known to be associated with LA, which are often transient, but can persist and lead to thromboembolic complications by various mechanisms including the release of membrane microparticles and the exposure of pro-thrombotic phospholipids. 10 Although the significance of these antibodies is not well established yet, COVID-19-induced LA could favour the highly frequent thrombo-embolic events in this population and should be systematically tested. COVID-19 patients may thus require longer and more aggressive VTE prophylaxis after discharge. The type of anticoagulant treatment after pulmonary embolism may be adapted according to the presence of COVID-19-induced LA, taking into consideration that oral direct anticoagulants are contraindicated in case of LA in the general population. Since COVID-19 patients are at high risk of developing PE, a sudden deterioration in respiratory status associated with a high level of D-dimers must draw attention to progressive radiographic deterioration on CT and/or pulmonary embolism occurrence.

Lead author biography
Dr Mohamad Kanso is a cardiologist in Strasbourg University Hospital, Strasbourg, France. He graduated in Cardiovascular Medicine from the University of Strasbourg in 2018 and is currently training in interventional electrophysiology.

Supplementary material
Supplementary material is available at European Heart Journal - Case Reports online. Slide sets: A fully edited slide set detailing this case and suitable for local presentation is available online as Supplementary data.
Consent: The author/s confirm that written consent for submission and publication of this case report including images and associated text has been obtained from the patients in line with COPE guidance.
2020-11-26T09:07:18.653Z
2020-11-24T00:00:00.000
{ "year": 2020, "sha1": "02d9b4974388bae6c58603e4f0ae5eb1fe5d5224", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/ehjcr/article-pdf/4/6/1/35545554/ytaa449.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c55812ef4107d290ad19bf8e84a3d1f34c2dc9b5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250489863
pes2o/s2orc
v3-fos-license
Extrapolating evidence for molecularly targeted therapies from common to rare cancers: a scoping review of methodological guidance Objectives Cancer is increasingly classified according to biomarkers that drive tumour growth and therapies developed to target them. In rare biomarker-defined cancers, randomised controlled trials to adequately assess targeted therapies may be infeasible. Extrapolating existing evidence of targeted therapy from common cancers to rare cancers sharing the same biomarker may reduce evidence requirements for regulatory approval in rare cancers. It is unclear whether guidelines exist for extrapolation. We sought to identify methodological guidance for extrapolating evidence from targeted therapies used for common cancers to rare biomarker-defined cancers. Design Scoping review. Data sources Websites of health technology assessment agencies, regulatory bodies, research groups, scientific societies and industry. EBM Reviews—Cochrane Methodology Register and Health Technology Assessment, Embase and MEDLINE databases (1946 to 11 May 2022). Eligibility criteria Papers proposing a framework or recommendations for extrapolating evidence for rare cancers, small populations and biomarker-defined cancers. Data extraction and synthesis We extracted framework details where available and guidance for components of extrapolation. We used these components to structure and summarise recommendations. Results We identified 23 papers. One paper provided an extrapolation framework but was not cancer specific. Extrapolation recommendations addressed six distinct components: strategies for grouping cancers as the same biomarker-defined disease; analytical validation requirements of a biomarker test to use across cancer types; strategies to generate control data when a randomised concurrent control arm is infeasible; sources to inform biomarker clinical utility assessment in the absence of prospective clinical evidence; requirements for surrogate endpoints chosen for the rare cancer; and assessing and augmenting safety data in the rare cancer. Conclusions In the absence of an established framework, our recommendations for components of extrapolation can be used to guide discussions about interpreting evidence to support extrapolation. The review can inform the development of an extrapolation framework for biomarker-targeted therapies in rare cancers. recommendations. This paper is helpful because it provides a timely summary of key recommendations of extrapolation that can be widely used. However, some concerns should be addressed before it can be considered for publication. 1) According to the paper, the methodological guidance for extrapolating evidence were summarized from 19 related papers. It is recommended to list the biomarkers, the tumor types and the brief extrapolating methods of the 19 papers in a table. 2) In the INTRODUCTION section, the example of the V600E mutation given here is over detailed. It will be better to simplify this instance and add another one. 3) Page 13 line 26-27, we suggest the authors specifically describe the components of defining the disease discussed by the eight papers. In addition, page 13 line 34-35, specific description about the four articles is also recommended. 4) There should be more updated references to support the views of this paper. 5) The title of this article should be more concise. 6) The INTRODUCTION section should be segmented into several paragraphs appropriately. 
7) There are some formatting and spelling errors in the manuscript, please check throughout the article and correct them. For example, page 8 line 47, a punctuation is missing here. 8) In the EXTRAPOLATION FRAMEWORK section, page 12 line 38, the authors claimed that "Our search did not identify an explicit framework for extrapolating evidence for targeted therapies from common to rare cancers sharing the same biomarker". Do you mean that there is no existing research or reference about an explicit framework for extrapolating evidence for targeted therapies from common to rare cancers? It is confusing here. VERSION 1 -AUTHOR RESPONSE Reviewer 1: 2.1. The written objectives are clear and although there is a rationale stating the need for this study, it is not clear what you mean by common to rare cancers, the examples given are broad. We thank the reviewer for this comment. Revisions: To improve clarity, we have now defined "common cancers" and "rare cancers" in our revised manuscript. We have inserted in the third paragraph of the introduction on page 7 and 8: "In cancer tissue types of high prevalence where biomarker prevalence is also high, referred to herein as 'common cancers', it is feasible to conduct randomised controlled trials (RCTs) in biomarkerdefined subpopulations within each cancer tissue type. However, in cancer tissue types of low prevalence, particularly where biomarker prevalence is also low, RCTs within biomarker-defined subpopulations may not be feasible or provide timely results. A 'rare cancer' is formally defined as incidence of the disease less than 6 to15 cases per 100,000 persons per year. In this paper, we use 'rare cancer' to mean cancer tissue types where the biomarker-defined subpopulation is sufficiently small that RCTs are deemed infeasible." 2.2. Also in regards to the objectives -are you looking to improve the guidelines on the use of biomarkers? Which aspect -diagnosis, monitoring and some other aspects? We thank the reviewer for this question. In our study, we were looking to improve the guidelines on the use of molecular targeted treatments across different cancer types. We have clarified the aim and objectives as outlined below. Revisions: We have revised the aim (pages 9 to 10) as follows: "The aim of this scoping review is to inform the development of guidance on assessing the effectiveness of molecular targeted treatments across different cancer types defined by the same biomarker -and concomitantly the value of the biomarker to predict treatment benefit for a rare cancer, where randomised trial evidence is only available for a common cancer type. Specifically, we sought to identify guidelines outlining approaches for extrapolation of evidence for targeted therapies from common to rare cancers sharing the same biomarker. The findings were used to develop a framework for extrapolation to assist key stakeholders, such as regulatory and reimbursement bodies, researchers and clinicians. This work will assist interpretation of existing data and make standardised, transparent decisions on whether extrapolation from common to rare cancers is appropriate." We have moved the following from the aim to the methods section, under subheading data extraction and synthesis (page 11) "If a comprehensive framework does not exist, we then sought to identify, categorise and integrate recommendations relevant for extrapolation in this context into a single summary document." 2.3. 
Following the above, what would be the direction of the review for readers besides that more work needs to be done, which areas exactly, what areas have been fully addressed? Who will use this review paper to guide them? We thank the reviewer for this question. Revisions: Please see revision for comment 2.2 Subheading "Future directions" in the discussion was revised to "Application and future directions". The paragraph under the subheading "Application and future directions" have been revised to (page 19 and 20): "This summary document has systematically identified distinct components for extrapolation, namely disease definition, analytical validity of the biomarker test, control ('standard of care') data, biomarker actionability, clinical trial endpoints and safety. Applicability of our work includes appraisal of evidence to support the prognostic and predictive value of the biomarker across different cancer types. This framework is specific for our setting of extrapolating data from common to rare cancers sharing the same biomarker treated with targeted therapy; and is not designed for other cancer and non-cancer settings. This work is valuable to guide discussions between key stakeholders in drug development, regulatory approval and reimbursement when extrapolated evidence from other cancers is being used to interpret evidence for targeted therapies in rare cancers. This work is also relevant for clinical trialists designing future studies in rare cancers where extrapolated data from common cancers are used to justify the trial design. It can also inform trialists designing studies in common cancers so that data generated would be useful and easily applicable for extrapolation in the future. The work is also relevant for clinicians as it outlines important considerations when using extrapolated evidence from common to rare cancers to make targeted treatment recommendations for their patients. Ongoing work will articulate criteria for assessing the level of uncertainty to promote standardised decision-making for clinical and regulatory decisions and facilitate transparent discussion between key stakeholders in the development and evaluation of molecular targeted therapies." 2.4. Consider adding one more Table to compare and contrast the sections you have in discussion of table 1 search -rather then too many paragraphs discussion it, summaries parts into a table and refer to it, and only discuss key points under the subheadings. New references may be needed to address the comments above, add them if that is the case. We thank the reviewer for this comment. We agree that outlining and summarising the recommendations for each component in an additional table will allow the readers to more easily compare and contrast the recommendations made by each author and author group. We have added a new table, Table 2, summarising the recommendations made by each author/author group for each component of extrapolation. Revisions: We have simplified the result section of the manuscript because they are now comprehensively summarised in Table 2. Reviewer 2: 3.1. According to the paper, the methodological guidance for extrapolating evidence were summarized from 19 related papers. It is recommended to list the biomarkers, the tumor types and the brief extrapolating methods of the 19 papers in a table. We thank the reviewer for this comment. We agree that outlining the specific recommendations from each of the included papers would improve transparency. 
Revisions: We have added a new table, Table 2, to summarise the specific recommendations for the components of extrapolation made by each paper. The focus of our data extraction was not on specific biomarker or tumour type examples. Instead, we have outlined general principles and recommendations for extrapolating evidence for targeted therapies from data generated in common cancers to rare cancers where conducting randomised studies are not feasible. The main focus of each paper that we identified is outlined in a new Supplementary Table 1. 3.2. In the INTRODUCTION section, the example of the V600E mutation given here is over detailed. It will be better to simplify this instance and add another one. We thank the reviewer for this comment. We agree the example of the V600E mutation can be simplified however we have retained some detail as it is a hallmark example of the scenario where treatment evidence have been extrapolated from common to rare cancers sharing the same biomarker without reference to established guidelines. We feel this example outlines the scenario sufficiently that another example may not add any further value. Revisions: We have simplified the V600E example in the fourth paragraph of the introduction on page 8 and 9. 3.3. Page 13 line 26-27, we suggest the authors specifically describe the components of defining the disease discussed by the eight papers. In addition, page 13 line 34-35, specific description about the four articles is also recommended. We thank the reviewer for this comment as well as the related comment 3.1. We agree the new Table 2 that summarise the specific recommendations for each of the components of extrapolation made by each paper would be very helpful for the readers. Revisions: We have added Table 2 outlining the specific recommendations made by each paper for all components of extrapolation. 3.4. There should be more updated references to support the views of this paper. We thank the reviewer for this comment and have added further references to support the views of the paper. We have used pivotal studies/references and have chosen more contemporary papers where available. Additional references are outlined below.
2022-07-14T06:16:13.196Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "4a1dd34f77f1f9829bb7a3b8eaaa8e9cae99ec10", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/12/7/e058350.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ce0357dea1ffee0c099afdccc2a182f04ccb398c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266284877
pes2o/s2orc
v3-fos-license
Health Expenditure and Nigeria's Economic Growth

ABSTRACT
Using time series data from 1999 to 2022, this study examines the impact of health financing on economic growth in Nigeria. The findings indicate that the previous year's productive activities have a positive effect on economic growth in both the short and long run. The current domestic government general health expenditure has a negative growth effect on economic growth, whereas the previous year's domestic general government health expenditure has a positive growth effect on economic growth. Domestic private health spending has a significant positive effect on economic growth. As a result, the importance of private health spending over government health spending in improving economic growth is reinforced. Thus, it was determined that health financing is required for long-term economic growth. As a result, the government should increase individual health spending capacity, increase health sector budgetary allocation, and ensure prudent and effective health sector budgetary implementation.

INTRODUCTION
There can be no meaningful economic growth in any country unless adequate investment is made in people's health care. (Udeorah, Obayori and Onuchuku, 2018) argued that investing in health and education has recently become a critical social priority, as adequate human capital improves workers' skills, efficiency, and standard of living. Human capital accumulation is a key determinant of economic performance due to its efficiency, and higher economic growth allows for more human capital investment (Blundell et al., 1999; Eggoh, Houeninvo and Sossou, 2015). Health is an important component of human capital because it increases both worker efficiency and productivity. A country's economic growth is determined by the health of its citizens. The health of a country's population is a major factor driving productivity because only a healthy labor force can contribute meaningfully to production and national output growth. According to (Piabuo and Tieguhong, 2017), one of the key mechanisms for demonstrating leaders' commitments and political will, as well as their ability to translate these commitments into results, is the development of a sound system for financing health care. The desire to develop strong health financing systems is shared by all nations, but the rising cost of health care, combined with poor economic performance in developing countries, particularly in Africa, makes meeting this goal difficult.
Good health is essential for human well-being, which is a measure of increased productivity as well as overall economic growth and development. It is also a driving force for human capital, such as education and skills. The positive impact of good health on economic growth underscores the importance of the past decades' progress in human health. (Bauer et al., 2006) examined how illness and life expectancy affect the economic growth gaps between developed and developing countries. Nigeria is one of the developing countries with poor health outcomes and associated problems. Nigeria's health status is significantly lower than that of other Sub-Saharan African countries. Nigeria's health situation includes low life expectancy at birth, high infant and maternal mortality rates, malaria, and tuberculosis. In Nigeria, for example, life expectancy at birth is expected to be 54 years in 2020, compared to 63 years in Ghana. The high rates of HIV/AIDS infection in Nigeria also contributed to the country's low life expectancy. Nigeria has the world's second largest HIV epidemic and one of the highest rates of new infections (Dube, 2002). In addition, approximately 1.9 million people in Nigeria are HIV positive, with 1.5 percent of adults aged 15 to 49 HIV positive, 130,000 new HIV infections, and a low rate of anti-related death. Furthermore, malnutrition is responsible for approximately 52% of all under-five deaths. Because the provision of basic health services is a significant form of human capital investment and a key determinant of growth and poverty reduction (Dube, 2002), health conditions can influence the design of economic growth and poverty reduction. However, without adequate funding, the health situation cannot be addressed in a sustainable manner. Adequate and sustainable health financing is critical to achieving the sustainable development health goals and achieving sustainable growth and development (Obansa and Orimisan, 2013; Olayiwola, Oloruntuyi and Abiodun, 2017). As a result, a significant portion of the budget is spent on health care in order to achieve economic growth. Given the United Nations (UN) recommendation that countries should spend at least 8-10% of their GDP on the health sector and the 2001 Abuja Declaration committing at least 15% of each African country's annual budget to the health sector, the Nigerian government has been increasing its expenditure on the health sector in order to meet these benchmarks. For example, the government increased its expenditure on health from N84.46 billion in 1981 to N134.12 billion in 1986. However, it fell to N41.31 billion in 1987 before rising to N575.30 billion in 1989. Total government expenditure on health has thus fluctuated considerably over the period. According to the scenario above, the health sector has attracted the attention of the government and received a fair share of the country's GDP in recent years as an important facilitator of economic growth. Despite this, there appears to be no correlation between health-care expenditures, health status, and economic growth in Nigeria. Although the relationship between health expenditure and economic growth has been extensively researched in developed countries, it remains far less studied in less developed countries, including Nigeria, and this creates a research gap that the current study seeks to fill. As a result of the foregoing, the purpose of this study is to fill a knowledge gap concerning the causal links between health expenditure and economic growth in Nigeria from 1999 to 2021.
LITERATURE REVIEW
The mobilization of funds for health care services is known as health care financing (Oyefabi, Aliyu and Idris, 2014; Wagstaff et al., 1999). It is the provision of money, funds, or resources to the government's planned activities to maintain people's health. These activities promote the availability of medical and related services aimed at maintaining good health. The amount of resources allocated to health care in a country is said to reflect the placement of health values in relation to other categories of goods and services. Thus, the pattern of health financing is linked to the delivery of health services. There are various methods of financing health care available throughout the world, including in Nigeria. These methods include, among others, tax-based public sector health financing, household out-of-pocket health expenditure, private sector (donor) funding, and health insurance. External health-care financing includes grants and loans from donor organizations such as the World Bank, the World Health Organization, and the European Union, among others (Eggoh, Houeninvo and Sossou, 2015). Tax-based health financing is derived from the proceeds of government tax-based revenue at all levels and sectors. Government-funded health care is largely determined by revenue. Essentially, there is a strong positive relationship between the proportion of tax-based health spending and the progressivity of health care. Several empirical studies have established a causal relationship between health financing and economic growth in various economies around the world. (Bloom and Canning, 2000; Bloom and Canning, 2003), for example, contend that health as a macroeconomic indicator has a positive impact on aggregate output. (Ogundipe and Lawal, 2011) found that health expenditure has a significant and positive impact on economic growth in the Central African States and selected African countries, and that there is a long-run relationship between the two variables for both groups of countries. The study also demonstrates the existence of a long-run relationship between health expenditure and economic growth for both CEMAC countries and the five other countries that signed the 2001 Abuja Declaration. CEMAC countries exhibited bi-directional causality between economic growth and health expenditure, whereas countries that achieved the 2001 Abuja Declaration exhibited unilateral causality running from economic growth to health expenditure. This implies that income is an important factor in explaining health-care spending; thus, an increase in income can stimulate growth in health-care spending. (Anowor, Ichoku and Onodugo, 2020) show that public or private expenditures on health care in the Economic Community of West African States (ECOWAS) region have a positive effect on economic performance, with a long-run relationship between health care financing and output per capita within and across ECOWAS countries.
According to (Ibukun and Osinubi, 2020), in their study of the relationship between environmental quality, economic growth, and health expenditure in 47 African countries, air pollutants reduce environmental quality while increasing health expenditure per capita. The study adds to the evidence that economic growth has a positive, inelastic effect on per capita health expenditure. This is the case in all five sub-regions (Central Africa, North Africa, East Africa, West Africa and Southern Africa). This means that, while economic growth increases health expenditure per capita, air pollution degrades environmental quality and drives up health expenditure. As a result, the study concluded that economic growth should not come at the expense of the environment. (Aboubacar and Xu, 2017) assessed the impact of health spending on economic growth in Sub-Saharan Africa using the system generalized method of moments (GMM) technique and discovered that health spending has a significant impact on the region's economic growth. In their study on health expenditure, education, and economic growth in Nigeria, (Bakare and Olubokun, 2011) discovered that government expenditure on education and health had a positive and significant impact on economic growth, using the error correction model (ECM) as an estimating approach. The impact of public health expenditure on economic growth in Nigeria between 1981 and 2013 was established by (Ibe and Olulu-Briggs, 2015), who found a positive relationship between public health expenditure and economic growth. The study concluded that improving public health improves labor productivity and leads to economic growth gains. The study recommended that Nigerian policymakers pay more attention to the health sector by increasing budgetary allocations to the sector. (Safdari, Mehrizi and Elahi, 2013) investigated the effect of health expenditure on Iranian economic growth and discovered that variables such as the health expenditure to GDP ratio, the GDP investment ratio, and the graduate growth rate have a positive effect on the economic growth rate. (Bein et al., 2017) findings on the effect of health expenditure on health outcomes in selected West African countries show that public and private health spending have different effects on health outcomes. Government health spending was found to be positively related to health outcomes but had no significant impact, whereas private health spending reduces mortality and has a significant impact on infant mortality. This could be due to how and where these countries fund public health. Private health spending is more important in improving health outcomes than public spending. This is consistent with (Novignon and Lawanson, 2017) findings that the effect of public health spending is less than the effect of private health spending. As a result, the authors emphasized a review of the region's public-private emphasis on health expenditures. (Ibe and Olulu-Briggs, 2015) used a vector autoregressive (VAR) model to investigate the relationship between life expectancy, public health spending, and economic growth in Nigeria and discovered that there was a relationship between public health spending and economic growth (Bokhari, Gai and Gottret, 2007).
METHODS
Endogenous growth models incorporate the mechanism by which health investments affect economic growth and development. These models emphasize the significance of human capital in economic growth. The theoretical model of Buchanan and Tullock (1975), which encourages governments to increase public spending on health care regardless of demand, was used in this study. According to the theory, inefficiency in the provision of health care should be characterized not by a lack of supply but by a reduction in the quality of health care services. Human capital, according to (Barro, 1991; Romer, 1986), is an important factor in boosting economic growth. The (Mankiw, Romer and Weil, 1992) augmented Solow model emphasized the role of human capital in economic growth as well. These endogenous models assume that economic growth is based on human capital's ability to influence growth in both the short and long run. This theoretical model emphasizes a functional relationship between economic growth and health financing via public health human capital investment. Our functional relationship between economic growth and health financing in Nigeria is written as follows, based on Olaniyi and Adams (2000) and the empirical literature on the subject:

RGDP = f(DGHE, DPHE, OOP) ………. (1)

where RGDP stands for real gross domestic product, DGHE stands for domestic government health expenditure, DPHE stands for domestic private health expenditure, and OOP stands for out-of-pocket health expenditure. Equation (1) can be rewritten in explicit linear form as follows:

RGDPt = a0 + a1DGHEt + a2DPHEt + a3OOPt + εt ………. (2)

where a1, a2 and a3 are the coefficients of health care financing and εt is the stochastic factor or error term. A priori, a1 > 0, a2 > 0 and a3 > 0. The study's data is a time series data set spanning 22 years. The data stationarity was tested using the Augmented Dickey-Fuller unit root test. This also influences our estimation technique selection. The Augmented Dickey-Fuller (ADF) equation for unit root testing is as follows:

ΔYt = α1 + α2t + δYt−1 + Σ βiΔYt−i + εt

where Yt is the variable's level, t is the time trend, α1 is the constant term, and εt is the error term, which is assumed to be normally distributed with zero mean and constant variance. The Akaike Information Criterion (AIC) is used to determine the optimal lag length. The Autoregressive Distributed Lag (ARDL) technique can be used when dealing with time series data that are integrated in different orders, I(0), I(1), or a combination of both, and the model was estimated in this form. The error correction model (ECM) can be derived from the ARDL model via a single linear transformation that combines short-run adjustments with long-run equilibrium without sacrificing long-run information. In a time series analysis, the Error Correction Model (ECM) depicts the rate of adjustment from a short-run equilibrium to a long-run equilibrium. The main reason for developing the error correction model is to indicate the speed with which departures from the long-run equilibrium are corrected. The ECM coefficient is expected to be negative and significant in order for the errors to be corrected; the greater the coefficient of the parameter, the faster the departure from the long-run equilibrium is corrected. The Granger-causality test was used to test the relationship between health funding and economic growth. The rule states that there is a causal relationship if the probability value is between 0 and 0.05. The study relied on secondary data from the World Development Indicators.
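A minimal sketch of this estimation pipeline is given below in Python, assuming a recent version of statsmodels (with the ardl module) and a hypothetical data file holding the four log-transformed series; it is illustrative only and does not reproduce the paper's actual estimates.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ardl import ARDL

# Hypothetical data file with the four series used in the paper (log-transformed).
df = pd.read_csv("nigeria_health_growth.csv", index_col="year")  # columns: LRGDP, LDGHE, LDPHE, LOOP

# Step 1: ADF unit-root tests on each series in levels and in first differences.
for col in ["LRGDP", "LDGHE", "LDPHE", "LOOP"]:
    stat, pval, *_ = adfuller(df[col].dropna(), regression="ct", autolag="AIC")
    d_stat, d_pval, *_ = adfuller(df[col].diff().dropna(), regression="c", autolag="AIC")
    print(f"{col}: level p={pval:.3f}, first difference p={d_pval:.3f}")

# Step 2: ARDL(1, 1) of LRGDP on the three financing variables (lag 1, as chosen by AIC
# in the paper); the error-correction representation and long-run coefficients can then
# be derived from this fitted model.
model = ARDL(df["LRGDP"], lags=1, exog=df[["LDGHE", "LDPHE", "LOOP"]], order=1, trend="c")
res = model.fit()
print(res.summary())
```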
RESULTS AND DISCUSSION

More detailed results can be seen in Table 1, Table 2, Table 3 and Table 4. The unit root stationarity test using ADF statistics is shown in Table 1. LRGDP is stationary at the level, whereas the other variables, LDGHE, LDPHE, and LOOP, are stationary at first difference. We therefore concluded that LRGDP is integrated of order zero, I(0), whereas the others are integrated of order one, I(1). These findings imply that the autoregressive distributed lag (ARDL) technique is the appropriate estimation technique for studying the impact of health financing on economic growth in Nigeria. The Akaike Information Criterion (AIC) was used to determine lag length, and lag 1 was chosen as the best lag length for the model. Table 2 displays the ARDL results for the model's short run. According to the findings, the lag of real gross domestic product (LRGDP) has a positive effect on economic growth in both the short and long run, though only the long-run effect is significant. This means that last year's productive activities had a positive growth effect on the current year's productive activities, though this was only significant in the long run. The current domestic general government expenditure on health (DGHE) has a significant negative impact on economic growth, whereas the previous year's domestic general government expenditure on health had a significant positive impact on real GDP in both the short and long run. The implication of these findings is that only consistent and committed general government health spending over time can significantly improve economic growth. All else being equal, current government general health spending may not have a positive impact on economic growth and may even have a negative impact. Our previous year (one-year lag) government general health spending results corroborate Ibe and Olulu-Briggs (2015), but our current year government health spending results do not. This could be due to the estimation technique used or the data used in the study. The short-run results of current out-of-pocket health spending and previous out-of-pocket health spending follow the same path as government general health spending. Previous out-of-pocket health expenditure had a significant positive impact on economic growth, whereas current out-of-pocket health expenditure had a significant negative impact on economic growth in the short run. In the short run, this situation may be the rule rather than an exception. As a result, increasing health spending every year may be required in the short run to influence economic growth through health financing. However, long-run out-of-pocket health expenditure has a significant positive effect on economic growth.
Furthermore, both current domestic private health expenditure (DPHE) and previous domestic private health expenditure have a significant positive effect on economic growth in Nigeria, both in the short and long run. This highlights the significance of private health spending relative to government health spending. As a result, increasing individuals' capacity to spend on their own health may be more effective than direct government health spending. According to the R2 for both the short-run and long-run models, the results have more than 76 percent explanatory power in both the short run and the long run. The F-test validates the results in both the short and long run. Table 3 shows the rate of correction of departures from the long-run equilibrium using the cointegration equation, CointEq(-1). The results show that CointEq(-1) is both negative and statistically significant. Its absolute value of 0.65 indicates that the rate of adjustment towards the long-run equilibrium is approximately 65%; that is, approximately 65 percent of the deviation from the long-run equilibrium is corrected within a year.

The CUSUM and CUSUM of squares recursive tests for model stability for all variables show that the estimated models are stable, implying that the models are significant. The Granger-causality tests between the health financing mechanisms and economic growth are shown in Table 4. The probability of the F-statistic must be less than or equal to 0.05 for causality to exist between two variables. The findings show one-way (unidirectional) causality between domestic government general health expenditure and economic growth, between domestic private health expenditure and economic growth, and between economic growth and out-of-pocket health expenditure. This implies that increased domestic government general health spending and domestic private health spending lead to economic growth, and that economic growth increases out-of-pocket health spending. This backs up some of our earlier findings.

CONCLUSION

This study uses data from 1999 to 2021 to examine the impact of health financing on economic growth in Nigeria. The unit root results favored the use of estimation techniques based on the autoregressive distributed lag (ARDL) model. The results show that the previous year's productive activities have a short-run and long-run growth effect on the current year's productive activities. The current year's domestic general government expenditure on health has a negative growth effect on economic growth, whereas the previous year's domestic general government expenditure on health has a positive growth effect.

Table 3. The Restricted Error Correction Model
2023-12-16T16:29:36.216Z
2023-07-08T00:00:00.000
{ "year": 2023, "sha1": "6c939bc5f55d771af173fb6728f80b9aca9fbbdb", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.17509/jpis.v32i1.53371", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "5182026810eb752ba9c283ebaef134f5edd314d6", "s2fieldsofstudy": [ "Economics", "Medicine" ], "extfieldsofstudy": [] }
17505743
pes2o/s2orc
v3-fos-license
Using Taxonomies to Facilitate the Analysis of the Association Rules The Data Mining process enables end users to analyze, understand and use the extracted knowledge in an intelligent system or to support decision-making processes. However, many algorithms used in the process generate large quantities of patterns, complicating the analysis of the patterns. This fact occurs with association rules, a Data Mining technique that tries to identify intrinsic patterns in large data sets. A method that can help the analysis of the association rules is the use of taxonomies in the knowledge post-processing step. In this paper, the GART algorithm, which uses taxonomies to generalize association rules, and the RulEE-GAR computational module, which enables the analysis of the generalized rules, are proposed.

Introduction The development of data storage technologies has increased the data storage capacity of companies. Nowadays, companies have the technology to store detailed information about each performed transaction, generating large databases. This stored information may help companies to improve themselves, and because of this, companies have sponsored research and the development of tools to analyse the databases and generate useful information. For years, manual methods were used to convert data into knowledge. However, the use of these methods has become expensive, time consuming, subjective and non-viable when applied to large databases. The problems with the manual methods stimulated the development of automatic analysis processes, such as the process of Knowledge Discovery in Databases or Data Mining. This process is defined as a process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data [6]. In the Data Mining process, the use of the association rules technique may generate large quantities of patterns. This technique has caught the attention of companies and research centers [3]. Much research has been developed with this technique, and the results are used by companies to improve their businesses (insurance policy, health policy, geo-processing, molecular biology) [8,4,9]. A way to solve the problem of the large quantities of patterns extracted by the association rules technique is the use of taxonomies in the knowledge post-processing step [1,8,10]. The taxonomies may be used to prune uninteresting and/or redundant rules (patterns) [1]. In this paper the GART algorithm and the RulEE-GAR computational module are proposed. The GART algorithm (Generalization of Association Rules using Taxonomies) uses taxonomies to generalize association rules. The RulEE-GAR computational module uses the GART algorithm to generalize association rules and provides several means to analyze the generalized rules. This paper is organized as follows: first, the association rules technique and some general features of the use of taxonomies are presented; second, the GART algorithm and the RulEE-GAR computational module are described. Finally, the results of some experiments performed with the GART algorithm are presented, along with our conclusions.

Association Rules and Taxonomies An association rule LHS ⇒ RHS represents a relationship between the sets of items LHS and RHS [2]. Each item I is an atom representing the presence of a particular object. The relation is characterized by two measures: support and confidence.
The support of a rule R within a dataset D, where D itself is a collection of sets of items (or itemsets), is the number of transactions in D that contain all the elements in LHS ∪ RHS. The confidence of the rule is the proportion of transactions that contain LHS ∪ RHS with respect to the number of transactions that contain LHS. The problem of mining association rules is to generate all association rules that have support and confidence greater than the minimum support and minimum confidence defined by the user. High values of minimum support and minimum confidence generate only trivial rules. Low values of minimum support and minimum confidence generate large quantities of rules (patterns), complicating the user's analysis. A way of overcoming the difficulties in the analysis of large quantities of association rules is the use of taxonomies in the knowledge post-processing step. The use of taxonomies may help the user to identify interesting and useful knowledge in the extracted rule set. The taxonomies represent a collective or individual characterization of how the items can be classified hierarchically [1]. In Fig. 1 an example of a taxonomy is presented, in which it can be observed that: t-shirts are light clothes, shorts are light clothes, light clothes are a kind of sport clothes, and sandals are a kind of shoes. In the literature there are several algorithms to generate association rules using taxonomies (generalized association rules). Algorithms like Cumulate and Stratify [10] generate rule sets larger than the rule sets generated without taxonomies (because they generate association rules both with and without taxonomies). To try to decrease the quantity of generated rules, a subjective measure is used to prune the uninteresting rules [10]. The subjective measure does not guarantee that the quantity of rules will decrease. Our method proposes an algorithm and a post-processing module [5]. Using the module, the user looks at a small set of rules without taxonomies, builds some taxonomies and then uses the algorithm to generalize the association rules, pruning the original rules that are generalized. Thus our algorithm always decreases or maintains the volume of the rule sets. The proposed algorithm and module are presented in Sections 3 and 4.

The Algorithm GART We analysed the structure of the association rules generated by algorithms that do not use taxonomies. The results of the analysis show that it is possible to generalize association rules using taxonomies, as illustrated in Fig. 2. The two rules generated by Step 1 (Fig. 2) were generalized again. We replaced the items slipper and sandal with the item light shoes (which represented another generalization), generating two rules light clothes & light shoes ⇒ cap. Then we pruned the repeated generalized rules again, keeping only one generalized association rule: light clothes & light shoes ⇒ cap. Due to the possibility of generalizing the association rules (Fig. 2), we propose an algorithm to generalize association rules. The proposed algorithm is illustrated in Fig. 3. We called the proposed algorithm GART (Generalization of Association Rules using Taxonomies). The proposed algorithm generalizes only one side of the association rules - LHS or RHS (after looking at a small set of rules without taxonomies, the user decides which side will be generalized). First, we grouped the rules into subsets that have equal antecedents or consequents.
If the algorithm were used to generalize the left hand side of the rules (LHS), the subsets would be generated using the equal consequents (RHS). If the algorithm were used to generalize the right hand side of the rules (RHS), the subsets would be generated using the equal antecedents (LHS). Next, we used the taxonomies to generalize each subset (as illustrated in Fig. 2). In the final step of the algorithm we stored the rules in a set of generalized association rules. At that point, we also calculated the contingency table for each generalized association rule to obtain more information about the rules. The contingency table of a rule represents the coverage of the rule with respect to the database used in its mining [7]. With the calculation of the contingency table we finished the algorithm.

The Computational Module RulEE-GAR In this section we present the RulEE-GAR computational module, which provides means to generalize association rules and also to analyze the generalized rules [5]. The generalization of the association rules is performed by the GART algorithm, described in the previous section. Next we describe the means to analyze the generalized association rules. Fig. 4 shows the screen of the interface that enables the user to analyze and explore the generalized rule sets. On the screen of the analysis interface of generalized rules (Fig. 4) there are fields where the user enters data to make a query and selects a set of generalized rules, accompanied or not by several evaluation measures [7], to be analyzed. Besides allowing the user to select a set of rules, the interface provides four links in the Downloads section to view and/or download the files. The files contain, respectively, the set of transactional data (Data Set), the set of source rules (Rule Set), the set of generalized rules (Generalized Rule Set) and the set of taxonomies used to generalize the rules (Taxonomy Set). Besides links for visualization and/or download of the files, each generalized association rule presents other links that enable the user to explore information about the generalization of the rule. The links are positioned at the left side of the rules (Fig. 4). The links are described as follows: Expanded Rule - represented in the interface by the letter "E". This link enables the user to see the generalized rule in expanded form; the generalized items of a rule are replaced by the respective specific items. Source Rules - represented in the interface by the letter "S". This link enables the user to see the source rules that were generalized. Measures - represented in the interface by the letter "M". This link is available only if the user selects the support (Sup) and/or confidence (Cov) measures in the query and these measures present values lower than the minimum support and/or minimum confidence values defined for the mining of the non-generalized rule set. With this link it is possible to see which generalized rules have support and/or confidence values lower than the minimum support and/or minimum confidence values. In Fig. 4 we also see that the generalized items in a rule (items between parentheses) are presented as links. These links enable the user to see the source items that were generalized. In the analysis interface, the user can also store the information selected by the query in a text file.
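As a concrete illustration of the generalization procedure described in the GART section above, the following Python fragment sketches a GART-like generalization of the LHS for the clothing example used in the text. The taxonomy, the example rules, and the helper function are illustrative assumptions, and the sketch collapses the two generalization steps of the example into a single pass; it is not the authors' original implementation and omits the contingency-table calculation.

```python
# Illustrative sketch of GART-style LHS generalization (not the authors' code).
# A taxonomy is modelled as a child -> parent mapping; rules are (LHS, RHS) pairs
# with item sets stored as frozensets.
from collections import defaultdict

taxonomy = {"t-shirt": "light clothes", "shorts": "light clothes",
            "slipper": "light shoes", "sandal": "light shoes"}

rules = [
    (frozenset({"t-shirt", "slipper"}), frozenset({"cap"})),
    (frozenset({"shorts", "slipper"}), frozenset({"cap"})),
    (frozenset({"t-shirt", "sandal"}), frozenset({"cap"})),
    (frozenset({"shorts", "sandal"}), frozenset({"cap"})),
]

def generalize_lhs(rules, taxonomy):
    """Group rules by equal RHS, lift every LHS item to its taxonomy parent
    (when one exists) and prune the duplicates produced by the lifting."""
    groups = defaultdict(set)
    for lhs, rhs in rules:
        lifted = frozenset(taxonomy.get(item, item) for item in lhs)
        groups[rhs].add(lifted)
    return [(lhs, rhs) for rhs, lhs_set in groups.items() for lhs in lhs_set]

for lhs, rhs in generalize_lhs(rules, taxonomy):
    print(" & ".join(sorted(lhs)), "=>", " & ".join(sorted(rhs)))
# Output: light clothes & light shoes => cap  (four source rules collapse into one)
```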
Experiments We performed some experiments using the GART algorithm to demonstrate that the use of taxonomies to generalize large rule sets reduces large quantities of association rules and makes the analysis of the rules easier. The experiments were performed using a sales database of a Brazilian supermarket. The database contained sales data from the most recent 3 months. We made 4 partitions of the database to perform the experiments. The partitions were made using the sales data spanning 1 day, 7 days, 14 days and 1 month. To generate the association rules, we used the implementation of the Apriori algorithm by Christian Borgelt with a minimum support value of 0.5, a minimum confidence value of 0.5 and a maximum of 5 items per rule. The generated rule sets are described as follows:
- RuleSet 1day - 32668 rules generated using the partition of 1 day;
- RuleSet 7days - 19166 rules generated using the partition of 7 days;
- RuleSet 14days - 16053 rules generated using the partition of 14 days;
- RuleSet 1month - 21505 rules generated using the partition of 1 month;
- RuleSet 3months - 19936 rules generated using the whole database (3 months of sales data).
To perform the experiments, we examined the database and the 5 generated sets of association rules and built 18 sets of taxonomies. Then we ran the GART algorithm, combining each set of taxonomies with each set of rules. Fig. 5 presents a chart showing the reduction rates of the 5 rule sets after running the GART algorithm using the 18 sets of taxonomies to generalize each rule set. In Fig. 5 the sets of taxonomies are called "T" followed by an identification number, for example T01. As can be observed in Fig. 5, the experiments show reduction rates of the sets of association rules varying from 14.61% to 50.11%.

Conclusion A problem found in the Data Mining process is the fact that several of the algorithms used generate large quantities of patterns, complicating the analysis of the patterns. This problem occurs with association rules, a Data Mining technique that tries to identify all the patterns in a database. The use of taxonomies in the knowledge post-processing step, to generalize and to prune uninteresting and/or redundant rules, may help the user to analyze the generated association rules. In this paper we proposed the GART algorithm, which uses taxonomies to generalize association rules. We also proposed the RulEE-GAR computational module, which uses the GART algorithm to generalize association rules and provides several means to analyse the generalized association rules. We then presented the results of some experiments performed to demonstrate that the GART algorithm may reduce the volume of the sets of association rules. As the sets of taxonomies were made by the user, other sets of taxonomies may produce reduction rates higher than those reported in our experiments, especially if the sets are made by experts in the application domain.
2011-12-07T15:33:15.000Z
2011-12-07T00:00:00.000
{ "year": 2011, "sha1": "51c5b77697ffc836bd9c649c6873b16dc9b473e5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d9c7d0ab45b91d10012205b8e3d0b999b648c9a3", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
248801047
pes2o/s2orc
v3-fos-license
COVID-19 vaccines in patients with decompensated cirrhosis: a retrospective cohort on safety data and risk factors associated with unvaccinated status Background Safety data reported from the large-scale clinical trials of the coronavirus disease 2019 (COVID-19) vaccine are extremely limited in patients with decompensated cirrhosis. The vaccination campaign in this specific population could be difficult due to uncertainty about the adverse events following vaccination. We aimed to assess the COVID-19 vaccination rate, factors associated with unvaccinated status, and the adverse events following vaccination in patients with decompensated cirrhosis. Methods This is a retrospective study from Ruijin Hospital (Shanghai, China) on an ongoing prospective cohort designed for long-term survival analysis of decompensated cirrhotic patients who recovered from decompensating events or acute-on-chronic liver failure (ACLF) between 2016 and 2018. We assessed the COVID-19 vaccination rate, the number of doses, type of vaccine, safety data, patient-reported reasons for remaining unvaccinated, factors associated with unvaccinated status, and the adverse events of the COVID-19 vaccine. Binary logistic regression was used to identify factors associated with unvaccinated status. Results A total of 229 patients with decompensated cirrhosis without previous SARS-CoV-2 infection participated (mean age, 56 ± 12.2 years; 75% male; 65% viral-related cirrhosis). Modes of decompensation were grade II-III ascites (82.5%), gastroesophageal varices bleeding (7.9%), and hepatic encephalopathy (7.9%). Eighty-five participants (37.1%) received at least one dose of vaccination (1 dose: n = 1; 2 doses: n = 65; 3 doses: n = 19), while 62.9% remained unvaccinated. Patient-reported reasons for remaining unvaccinated were mainly fear of adverse events (37.5%) and lack of positive advice from healthcare providers (52.1%). The experience of hepatic encephalopathy (OR = 5.61, 95% CI: 1.24–25.4) or ACLF (OR = 3.13, 95% CI: 1.12–8.69) and post-liver transplantation status (OR = 2.47, 95% CI: 1.06–5.76) were risk factors for remaining unvaccinated, independent of residential area. The safety analysis demonstrated that 75.3% had no adverse events, 23.6% had non-severe reactions (20% injection-site pain, 1.2% fatigue, 2.4% rash), and 1.2% had a severe event (development of acute decompensation requiring hospitalization). Conclusions Patients with decompensated cirrhosis in eastern China largely remain unvaccinated, particularly those with previous episodes of ACLF or hepatic encephalopathy and liver transplantation recipients. Vaccination against COVID-19 in this population is safe. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s40249-022-00982-0. Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection is a high-risk comorbid condition in immune-compromised populations. Coronavirus disease 2019 (COVID-19) related mortality in patients with cancer is significantly increased despite the absence of symptoms [1,2]. Twenty percent of these asymptomatic patients may suffer for a long time, as evidenced by their failure to achieve seroconversion of the COVID-19 antibody [3]. Patients with cirrhosis are at risk of SARS-CoV-2 infection due to innate and humoral immune dysfunction [4], with an increased risk of hospitalization, intensive care unit admission, and death [5][6][7][8].
In a national COVID-19 cohort study across the United States, SARS-CoV-2 infection in patients with cirrhosis was associated with a 2.38-fold all-cause mortality hazard within 30 days [5]. Baseline decompensated cirrhosis and high organ failure scores are predictors of mortality in cirrhosis following SARS-CoV-2 infection [5,7,9]. Based on the accumulating evidence, it is now recommended by major hepatology societies that patients with cirrhosis, particularly those with decompensation, should be vaccinated against SARS-CoV-2 [10,11]. Although the vaccine immunogenicity in patients with cirrhosis is inferior to that of the general population [12], receipt of an mRNA-based COVID-19 vaccine in patients with cirrhosis is associated with a 66.8% reduction in SARS-CoV-2 infection 28 days after the first dose [13], and vaccination significantly reduces mortality despite breakthrough infections [14]. Nevertheless, 40% of the patients with cirrhosis deferred COVID-19 vaccination in a United States Veterans Health setting despite wide access to COVID-19 vaccines [15]. Among them, younger age, current smoking, and living in rural areas were associated with vaccination hesitancy [15]. Although the generalizability of these findings to other settings of cirrhosis remains unknown, the reported 40% deferral rate of COVID-19 vaccination urges more efforts to develop strategies to guide vaccine campaigns. Although COVID-19 vaccinations are provided free of charge to the public in China, where people have strong trust in the central government and a high acceptance rate [16], concerns about the safety and side-effects of the vaccines still significantly decrease the willingness to get vaccinated [17]. Currently, safety data reported from the large-scale clinical trials of the COVID-19 vaccine are still extremely limited in patients with cirrhosis, who made up less than 0.1% of more than 100,000 participants [18]. It is therefore important to investigate the SARS-CoV-2 vaccine coverage and safety profiles in patients with decompensated cirrhosis in real-world settings. To assist in the development of vaccination strategies during the ongoing pandemic, we assessed the COVID-19 vaccination rate, factors associated with unvaccinated status, and the adverse events following vaccination in an ongoing prospective cohort previously designed for long-term survival analysis of cirrhotic patients who recovered from decompensating events or acute-on-chronic liver failure (ACLF).

Study design, participants, and follow-up This retrospective study was performed in the Ruijin Hospital (RJH) cohort of patients with established cirrhosis who were non-selectively admitted for decompensation or ACLF between 2016 and 2018. This cohort has been utilized for comparisons of the clinical utility of different ACLF criteria in the management of hospitalized cirrhosis [19]. All patients were enrolled at Ruijin Hospital (an academic hospital), Shanghai, China, and were actively followed up via the hospital information system and/or phone calls at scheduled time points (28 days, 90 days, and 180 days post-enrollment and every year thereafter). An extra follow-up was performed between 3 and 15 January 2022 via telephone for vital status and COVID-19 vaccination status, including the number of doses, type of vaccine, safety data, and reasons for remaining unvaccinated. The study was monitored by a data management committee (QX, HGX) to improve the consistency and accuracy of the data.
Inclusion criteria for the RJH cohort: patients aged between 18 and 80 who were non-electively admitted with cirrhosis for ascites, gastroesophageal varices bleeding, hepatic encephalopathy (HE), bacterial/fungal infection, and/or jaundice (total bilirubin ≥ 5 mg/dL). The diagnosis of cirrhosis was either biopsy-proven or based on the usual clinical, laboratory, endoscopic, and radiologic diagnostic criteria. Exclusion criteria for the RJH cohort: (1) pregnancy or lactation, (2) the presence of human immunodeficiency virus infection, (3) the presence of hepatocellular carcinoma irrespective of size, (4) other nonhepatic disseminated malignancies, (5) previous solid organ transplant, (6) treatment with immunosuppressive agents for diseases other than severe hepatitis, or (7) severe extrahepatic diseases with an expected poor short-term survival. To account for potential delays in the launch of the local COVID-19 vaccination strategy after the national vaccination campaign was advocated at the end of March 2021, patients who died before April 30, 2021 (one month after the national vaccination campaign began in the population aged between 18 and 80 years in China) were further excluded. Detailed information on vaccination campaigns in China is provided in Additional file 1. Outcomes The primary outcome was the unvaccinated rate as of 15 January 2022. Secondary outcomes included safety data regarding local (pain, redness, swelling, and lymphadenopathy) and systemic solicited adverse events (fever, chills, headache, fatigue, myalgia, arthralgia, nausea and vomiting, diarrhea, rash) for 7 days post-vaccination and unsolicited adverse events after vaccination until 15 January 2022. A serious adverse reaction was defined as any adverse event at any dose that results in death, is life-threatening, requires hospitalization, or prolongs an existing hospitalization. Data collection Demographics, comorbidities, etiology of cirrhosis, mode of decompensation, and episodes of bacterial infection, acute kidney injury (AKI), and ACLF during the initial hospitalization since enrollment were prospectively collected in the RJH study. Information on the current residential province/municipality, Shanghai vs other areas, and rural vs urban areas was collected. Vaccination information included vaccination status, vaccine type, number of doses, and adverse events following vaccination if vaccinated, and reasons for remaining unvaccinated if unvaccinated. Statistical analysis Patient characteristics were presented according to the type of data: mean ± SD and median [interquartile range (IQR)] for normally distributed and skewed continuous variables, respectively, and counts (percentages) for categorical variables. Comparisons between two groups were performed using Student's t-test, the Mann-Whitney U test, the Chi-square test, or Fisher's exact test, as appropriate. The proportion of patients who remained unvaccinated was calculated in the full analysis set and compared among different regions in China. Univariable and multivariable logistic regression was used to identify variables associated with unvaccinated status. The safety analysis was conducted in the population who received at least one dose of COVID-19 vaccination. Based on the previously reported vaccination rates in cirrhosis, we assumed the vaccination rate would be 80% in non-LT recipients and 50% in LT recipients by the time of our follow-up.
The power of our study was 93.15% (β = 0.068) based on the current sample size (LT 37, non-LT 192), using a two-group proportion test with a significance level of α = 0.05. In all statistical analyses, a 2-tailed P < 0.05 was considered statistically significant. Data handling and analysis were performed with R 4.1.2 (http://www.r-project.org/). Study population Of all the participants in the RJH cohort, 51 were lost to follow-up and 188 patients died [182 of them (96.8%) did not receive an LT] as of 30 April 2021 (Fig. 1). The remaining 229 survivors [37 (16.2%) LT recipients and 192 (83.8%) who survived without an LT] were included in the current study. The median survival time since enrollment in the RJH cohort was 4.42 years (IQR: 3.73, 5.05). Subject characteristics Characteristics of the patients included in this study are shown in Table 1 and compared between those with and without LT. These were mainly male patients (75.1%) with viral-related decompensated cirrhosis (65%), currently living in Shanghai and peripheral cities, with a mean age of 56 ± 12.2 years. None of the participants in the current study had been infected by SARS-CoV-2 by the time they were enrolled. No differences in age, gender, etiology of cirrhosis, comorbidities, and current living areas were observed between patients with and without LT. Modes of decompensation included moderate-to-large ascites, HE, and GEVB. Other complications at the initial enrolment included bacterial/fungal infection, the rates of which were similar between the two groups, but there were significantly more cases of AKI (21.6% vs 5.7%, P < 0.01) and ACLF (35.1% vs 7.8%, P < 0.01) amongst LT recipients. No statistically different distribution of vaccination status was observed between groups, but LT recipients tended to be more likely to remain unvaccinated than those without LT (78.4% vs 59.9%, P = 0.052), and no LT recipient received additional vaccination. Among all the 85 vaccinated participants, 60 reported the type of vaccine, all of which were inactivated SARS-CoV-2 vaccines (CoronaVac = 47, COVILO = 11, Mixed = 2). There was no significant difference in the types of vaccine between groups. Patient-reported reasons and objective factors associated with unvaccinated status Among the 144 participants who remained unvaccinated, 75 (52.1%) were not vaccinated for COVID-19 due to the lack of positive medical advice, 54 (37.5%) had fear of negative side events despite positive medical advice, and 15 (10.4%) were unwilling to report reasons (Fig. 3). The reported reasons for vaccination hesitancy were not different between those with and without LT. Safety analysis Safety analysis was performed on the 85 participants who received at least one dose of the SARS-CoV-2 vaccine. Sixty-four (75.3%) patients did not report side events after SARS-CoV-2 vaccination, and the remaining 21 (24.7%) participants reported at least one side event (Table 3). Overall, adverse events were mostly non-severe, with injection-site pain (20%) being the most common one. Systemic side events were reported by 1 patient with fatigue and 2 patients with rash. All the systemic symptoms were transient and recovered without medication. Discussion This study demonstrated that, in the RJH cohort, which consists of patients from the eastern provinces of China, more than half of the patients with decompensated cirrhosis who survived previous episodes of decompensation or ACLF remained unvaccinated against COVID-19 despite the ongoing pandemic.
Lack of positive advice from medical providers and fear of negative events from COVID-19 vaccination were the main reasons for remaining unvaccinated. The vaccination rate varied among different regions, and the experience of HE, ACLF or LT was identified as a risk factor for unvaccinated status independent of living area. Among all the vaccinated patients, side events were reported in one-quarter, mainly injection-site pain. Patients with cirrhosis, particularly those at the decompensated stage, should be prioritized for COVID-19 vaccines, as recommended by the major liver societies [10,11]. However, the vaccination rate in cirrhosis is relatively low (~ 60%) in a report of the Veterans Health Administration data across the United States [13,15]. The vaccination rate further decreased in decompensated cirrhosis (37.1%), as revealed by our current study, sharply contrasting with the overall 90% rate of COVID-19 vaccination in the general population in our country (China) as of 17 Jan 2022 [21]. That is, despite wide access to free vaccines for the general public, specific populations such as patients with decompensated cirrhosis and LT recipients lag far behind. Although it is unclear whether this phenomenon also exists in other regions, more efforts are needed by the hepatology community to further advocate the necessity of vaccination. Several cases of acute liver injury following COVID-19 vaccination have been reported [22], including acute exacerbation of AIH [23,24]. Such an acute insult to the liver could act as a trigger of ACLF in decompensated cirrhosis. Therefore, the uncertainties about safety contribute most to the low vaccination rate in our cohort. The lack of published safety data on COVID-19 vaccination in this particular population makes it difficult for patients to decide and also makes it hard for healthcare providers to advocate with evidence. Ninety percent of the unvaccinated patients remained unvaccinated either because of a lack of positive medical advice or because of fear of side events despite positive medical advice. With the analysis of the safety dataset, we demonstrate that in decompensated cirrhosis, COVID-19 vaccination is generally safe, similar to previous reports in chronic liver disease [25], non-alcoholic fatty liver disease [26], LT recipients, cirrhosis (mainly compensated stage) [9] and chronic hepatitis B [27]. Patients with decompensated cirrhosis are especially vulnerable to developing severe COVID-19 due to immune dysfunction [5,9,[28][29][30]. The benefits of COVID-19 vaccination greatly surpass the risks of the post-vaccination adverse events. We therefore recommend that hepatologists and physicians include discussions about COVID-19 vaccination with decompensated cirrhotic patients who remain unvaccinated and educate them on the risks and benefits while addressing their reservations about vaccination. Identification of risk factors associated with unvaccinated status is critical to target interventions. In our cohort, there is a significant variation of vaccination rates among different provinces/municipalities, being the lowest in Shanghai (25.3%) and highest in Jiangxi (66.7%). This could partly be explained by the exposure risk analysis showing that Shanghai had the lowest number of domestically transmitted COVID-19 cases among these regions (Fig. S1). Of note, geographical variation of the COVID-19 vaccine coverage in a specific population is also subject to regional policy, vaccine accessibility, delivery strategy, individual factors, etc. [31][32][33].
Taking advantage of the pre-collected information on acute episodes of GEVB, moderate-to-large ascites, HE, AKI, infection or ACLF in our database, we demonstrated for the first time that previous episodes of HE or ACLF and post-liver transplantation status were associated with remaining unvaccinated, independent of residential area.

[Table: factors associated with unvaccinated status. Values are number (%) for categorical variables and mean (SD) for continuous variables; odds ratios were determined by logistic regression taking "unvaccinated status" as the outcome, adjusted either for residence in Shanghai or for the experience of HE, ACLF and LT in the multivariate analyses. AIH auto-immune hepatitis, GEVB gastro-esophageal varices bleeding, HE hepatic encephalopathy, AKI acute kidney injury, ACLF acute-on-chronic liver failure, LT liver transplant, SD standard deviation.]

[3,34]. In patients with decompensated cirrhosis, approximately one-third of the cirrhotic patients had a low cellular vaccine response [9], requiring an additional primary shot and a booster shot to enhance immunogenicity. It would be too late for patients with decompensated cirrhosis to start primary shots once COVID-19 invades their living communities. Strengths of the present study include a large cohort of patients with a well-documented history of decompensating events and, more importantly, ACLF episodes. It is currently the first and largest study to describe vaccination acceptance and the safety profile in decompensated cirrhosis, a population that was poorly represented in previous clinical trials of COVID-19 vaccines. Limitations of the present study include the retrospective design and the lack of information on current disease severity, socioeconomic status, and other potential residual confounding factors associated with unvaccinated status. Patients included for analysis in the current study were those with decompensated cirrhosis who survived for more than 3 years from the previous decompensation; some of them might have been re-compensated, though this status is hard to define. Secondly, serologic response data could not be captured since patients have no regular serologic assessment after vaccination in China. Thirdly, Cox regression would be more appropriate for investigating the factors associated with time to vaccination, but it was not feasible with our database because we did not collect the exact date on which vaccination became accessible or the vaccination date. The odds ratios used in our study overestimate the relative risk because of the relatively high rate of unvaccinated patients. Finally, the study was performed in a single-center setting in China during a period when the COVID-19 situation was under control. There is selection bias, and it remains unclear whether our observations can be extrapolated to other settings.
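As a concrete illustration of the limitation noted above (odds ratios overstating relative risk when the outcome is common), the short sketch below recomputes an unadjusted odds ratio and risk ratio from the crude proportions reported in this study (78.4% of 37 LT recipients vs 59.9% of 192 non-LT patients remaining unvaccinated). The counts are rounded and this is only a back-of-the-envelope check, not the authors' adjusted analysis.

```python
# Back-of-the-envelope comparison of odds ratio (OR) and risk ratio (RR)
# for "remaining unvaccinated" in LT vs non-LT patients, using the crude
# proportions reported in the text (counts rounded to whole patients).
lt_total, lt_unvacc = 37, round(0.784 * 37)            # ~29 unvaccinated LT recipients
non_lt_total, non_lt_unvacc = 192, round(0.599 * 192)  # ~115 unvaccinated non-LT patients

p1 = lt_unvacc / lt_total          # risk of being unvaccinated, LT group
p0 = non_lt_unvacc / non_lt_total  # risk of being unvaccinated, non-LT group

rr = p1 / p0
or_ = (p1 / (1 - p1)) / (p0 / (1 - p0))
print(f"RR = {rr:.2f}, OR = {or_:.2f}")
# With a common outcome (~63% unvaccinated overall), the OR (~2.4) is much larger
# than the RR (~1.3), which is why the adjusted ORs reported above should not be
# read directly as relative risks.
```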
2022-05-16T13:34:55.779Z
2022-05-16T00:00:00.000
{ "year": 2022, "sha1": "438f49311b72343c576c13af4dea3d28bb92767d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "438f49311b72343c576c13af4dea3d28bb92767d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266286512
pes2o/s2orc
v3-fos-license
Changes in Commercial Dendromass Properties Depending on Type and Acquisition Time: Forest dendromass is still the major raw material in the production of solid biofuels, which are still the most important feedstock in the structure of primary energy production from renewable energy sources. Because of the high species and type diversity of production residues generated at wood processing sites, as well as at logging sites, the quality of commercial solid biomass produced there has to be evaluated. The aim of this study was to assess the thermophysical characteristics and the elemental composition of ten types of commercial solid biofuels (pinewood sawdust; energy chips I, II, and III; veneer sheets; shavings; birch bark; pine bark; pulp chips; and veneer chips), depending on their acquisition time (August, October, December, February, April, and June). Pulp chips had the significantly lowest moisture content (mean 26.92%), ash content (mean 0.39% DM; DM, dry matter), nitrogen (N) content (mean 0.11% DM), and sulfur (S) content (mean 0.011% DM) and the highest carbon (C) content (mean 56.09% DM), hydrogen (H) content (6.40% DM), and lower heating value (LHV) (mean 13.61 GJ Mg-1). The three types of energy chips (I, II, and III) had good energy parameters, especially regarding their satisfactory LHV and ash, S, and N content. On the other hand, pine and birch bark had the worst ash, S, and N contents, although they had beneficial higher heating values (HHVs) and C contents. Solid biofuels acquired in summer (June) had the lowest levels of moisture and ash and the highest LHV. The highest moisture content and the lowest LHV were found in winter (December).

Introduction Forests occupy 9274.8 thousand ha of land, which accounted for 29.7% of the area in Poland at the end of 2022 [1]. Publicly owned forests dominated in terms of ownership structure, as they accounted for 80.8% of the total forest areas. Coniferous trees accounted for 68.6% of the forest area, and deciduous trees accounted for 31.4% [2]. Broken down into species, pine (Pinus sylvestris L.) accounted for nearly 58.6% of forest areas, and it was the dominant species. Oak trees occupied the largest area among deciduous trees (8.0%). Pine is a common species in Poland and in Europe [3], and it is one of the most economically important species [4]. This species is widely used for timber production, in the furniture and construction industries, and for paper pulp production. Moreover, the production residue of this species is used for bioenergy generation [5]. Forests and wood resources provide the basis for the development of many branches of industry in Poland [6]. In 2022, 44,646.7 thousand cubic meters of wood were acquired in Poland, including 42,702.8 thousand cubic meters of large timber, 1943.8 thousand cubic meters of small timber, and 0.8 thousand cubic meters of stumpwood. Compared with 2021, the quantity of harvested wood grew by 5.7%, with that of large timber increasing by 4.9% and that of small timber increasing by as much as 25.2% [1]. This dendromass is acquired by the wood industry, which produces higher added-value products, and the process is accompanied by the generation of production waste, which can be (and is) used in energy generation.
Forest dendromass is still the major raw material in the production of solid biofuels, which are still the most important feedstock in the structure of primary energy production from renewable energy sources. Solid biofuels account for as much as 70%, with the average for the EU being about 40% [7]. Forests and the wood processing industry are sources of dendromass as production residues, e.g., twigs, edgings, shavings, sawdust, bark, etc. It is estimated that more than 63% of dendromass residue is derived at sawmills [8][9][10], and sawmill residues can account for as much as approximately 55% of a log charge [11]. This residue is used for a variety of purposes, including the production of chipboard and fiberboard, paper pulp, boxes, cardboard, bedding for farm animals, and compost [12][13][14]. These residues can also be used as feedstock for power plants, combined heat and power plants, and heating plants [15,16]. This is the reason for the growing demand for these materials, including wood chips, especially for the generation of bioenergy [17,18]. Finding suitable sources of biomass to use as energy feedstock in commonly used conversion technologies is a current and important issue. It is critical to understand the energy equivalence of biomass for its effective use in bioenergy generation [19]. Dendromass consists mainly of bark, wood, and green material (small twigs and leaves), with wood accounting for 60-75% of deciduous dendromass, bark accounting for 5-20%, and green biomass accounting for 15-20%. As for coniferous dendromass, wood accounts for 70-80%, bark accounts for 5-15%, and green material accounts for 10-15% [20]. Because of the high species and type diversity of production residue generated at wood processing sites, as well as at logging sites [21], the quality of commercial solid biomass produced there has to be evaluated. There is a lack of precise information in this regard, and this is very important from the point of view of logistics companies, biomass producers, and end users of these solid biofuels. Therefore, the aim of this study was to assess the thermophysical characteristics and elemental composition of ten types of commercial solid biofuels (pinewood sawdust; energy chips I, II, and III; veneer sheets; shavings; birch bark; pine bark; pulp chips; and veneer chips), depending on the month of their acquisition (August, October, December, February, April, and June).

Study Object This study dealt with ten types of dendromass marketed by Quercus Sp.
z o.o. This company is one of the leading producers of dendromass transported by lorries and trains, both to large power plants and combined heat and power plants, as well as to small local heating plants [22,23]. Depending on the type of energy-generating installation and its technical equipment, as well as the contracts signed, the company supplies various biofuel types (dendromass) to different end customers. Therefore, it produces and offers different solid biofuels from raw materials obtained from wood processing plants and forest logging sites to suit end customers' requirements. Ten solid biofuel types offered by the company were examined in this study (Figure 1): (1) pinewood sawdust; (2) energy chips I, which comprised sawmill edgings, shavings, bark, sawdust, and branches from forest logging sites; (3) veneer sheets generated in poplar and linden processing; (4) shavings from pinewood and fir processing; (5) energy chips II, which comprised sawmill edgings, shavings, bark, sawdust, post-handling waste, and so-called "fronts"; (6) birch bark; (7) pine bark; (8) pulp chips, which consisted of pure (no bark) deciduous and coniferous wood; (9) energy chips III, which comprised sawmill edgings, bark, sawdust, and post-handling waste; and (10) veneer chips, which consisted of pure (no bark) poplar, linden, and aspen. All of these solid biofuel types were prepared and stored in an open concrete-paved logistics yard at the company site. Samples of each solid biofuel type were collected for one year at two-month intervals, i.e., they were collected six times. The collection of representative samples started in early August 2018 and continued at the beginning of the following months: October 2018, December 2018, February 2019, April 2019, and June 2019. Samples were taken from random places in the heaps of each of the solid biofuels during these periods. The samples were packed into plastic bags, 3-5 kg in each, and transported to the laboratory for analyses.
Laboratory Analyses The tests were started by separating the laboratory samples, which was followed by the determination of selected thermophysical characteristics and elemental composition. First, the moisture content (MC) of the biomass was determined in an FD BINDER drier at 105 °C, in accordance with PN-EN ISO 18134-2 [24]. After being completely dried, the dendromass was ground in a Retsch SM 200 laboratory mill equipped with a 1 mm mesh sieve. An Eltra Tga-Thermostep thermogravimetric oven was used to determine the ash content at 550 °C as well as the volatile matter (VM) and fixed carbon (FC) content at 650 °C, in accordance with PN-EN ISO 18122:2016-01 [25] and PN-EN ISO 18123:2016-01 [26]. The nitrogen (N) content of the dendromass was determined using the Kjeldahl method with a K-435 mineralizer and a BUCHI B-324 distilling device. The carbon (C), hydrogen (H), and sulfur (S) contents were determined with an ELTRA CHS-500 automatic analyzer in accordance with PN-EN ISO 16948:2015-07 [27] and PN-EN ISO 16994:2016-10 [28]. The higher heating value (HHV) was determined with the dynamic method in an IKA C2000 calorimeter. Subsequently, the HHV, moisture, and hydrogen content were used to calculate the lower heating value (LHV) in accordance with PN-EN ISO 18125:2017-07 [29] (Equation (1)). All the laboratory analyses were performed at each biofuel acquisition time in triplicate. In consequence, 180 analyses were performed for each attribute.
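For readers who want to reproduce the HHV-to-LHV conversion referred to as Equation (1), the sketch below implements a commonly used textbook-style approximation that subtracts the latent heat of the water formed from fuel hydrogen and of the water present as moisture. The coefficients used here are generic physical constants and may differ slightly from the exact formula of ISO 18125, so this is an illustration of the calculation rather than the authors' exact equation, and the HHV value in the example is only a placeholder.

```python
# Approximate conversion of higher heating value (HHV, dry basis) to lower
# heating value (LHV, as-received basis). Generic approximation, NOT
# necessarily the exact Equation (1) / ISO 18125 form used in the study.
LATENT_HEAT_WATER = 2.447  # MJ per kg of water evaporated (around 25 degrees C)
WATER_PER_KG_H = 8.94      # kg of water formed per kg of hydrogen burned

def lhv_as_received(hhv_dry_mj_per_kg: float, moisture_pct: float, h_dry_pct: float) -> float:
    """Return LHV (MJ/kg, as received) from dry-basis HHV, moisture (%) and H (% DM)."""
    m = moisture_pct / 100.0
    h = h_dry_pct / 100.0
    hhv_ar = hhv_dry_mj_per_kg * (1.0 - m)           # dilution by moisture
    water = m + WATER_PER_KG_H * h * (1.0 - m)       # total water per kg of as-received fuel
    return hhv_ar - LATENT_HEAT_WATER * water

# Example with values of the same order as those reported for pulp chips
# (moisture ~26.92%, H ~6.40% DM); the HHV of 20 MJ/kg is a placeholder.
print(round(lhv_as_received(20.0, 26.92, 6.40), 2), "MJ/kg")
```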
Statistical Analysis The statistical analyses of all the data for the thermophysical characteristics and elemental composition were based on two-way ANOVA. Ten types of solid biofuels were the first factor in the analysis, and six acquisition times were the other. The arithmetic mean, the coefficient of variation, and the standard deviation were calculated for each of the analyzed attributes. Homogeneous groups were identified with Tukey's honestly significant difference (HSD) test at the significance level of p < 0.05. Moreover, descriptive statistics were determined for the whole data set: mean, median, minimum value, maximum value, lower quartile, upper quartile, standard deviation, and coefficient of variation. Moreover, an agglomerative hierarchical clustering analysis was performed for the biofuel types and their attributes. The input data were standardized in columns before the analyses. Ward's method was applied for data agglomeration. Clusters were identified with Sneath's criterion. Two cut-off lines were applied: the first at 2/3 Dmax and the second at 1/3 Dmax, where Dmax denoted the maximum measure of distance D. All the statistical analyses were performed with STATISTICA 13 software (TIBCO Software Inc., Palo Alto, CA, USA).

Thermophysical Characteristics All of the thermophysical characteristics under study, i.e., MC, Ash, FC, VM, HHV, and LHV, were differentiated significantly by the primary factors (biomass type and acquisition time) as well as by the interactions between them, with significance levels below 0.001 (p < 0.001). Among the solid biofuels under study, pulp chips had the significantly lowest moisture content (26.92%) and were assigned to homogeneous group "h" (Table 1). The moisture content of the veneer sheets was also below 30%, but it was in a different homogeneous group, "g". There were another five biofuel types within the interval between 30 and 40% of mean moisture content, including the three types of energy chips (I, II, and III). The moisture content of birch bark slightly exceeded 40%, and that of pinewood sawdust was higher (44.5%). The significantly highest moisture content (51.56%) was determined for pine bark, homogeneous group "a". Higher moisture contents for consecutive solid biofuel acquisition times were determined in the winter and autumn months than in spring and summer. In consequence, the significantly highest value of this attribute was determined in December (47.43%), homogeneous group "a". A moisture content exceeding 40% was also determined in solid biofuels obtained in October and February. The biofuel moisture content in August and April ranged from 37 to 38%. The lowest moisture content was determined in the biofuels obtained in June (19.51%), with the coefficient of variation in that month exceeding 38%. The moisture content of the solid biofuels under study ranged from 10% to nearly 70% for energy chips III obtained in June and pine bark obtained in December, respectively (Figure 2). This is not surprising because the moisture content of solid biofuels may be diverse and may depend mainly on the season of the year; the weather conditions; the methods of dendromass acquisition and processing; the period of storage, if any; and the plant species from which the dendromass was obtained. It is obvious that the moisture content of freshly harvested dendromass will be higher compared with dendromass stored periodically for natural drying. The maximum moisture content of raw wood or branches may reach 70% for bark [30]. Moreover, depending on the species and conditions, the moisture
content of freshly felled wood can range from 35 to 60%. On the other hand, the moisture content of wood dried in the open can decrease to 20-25%, and that of wood dried under a roof can decrease to 15-20%. Therefore, the moisture content of sawdust from fresh pinewood was about 60%, and it was over 50% for sawmill residue [31]. An equally high moisture content (over 59%) was determined in sawdust from the industrial processing of pinewood, and the moisture content of chips produced from small logs and twigs was slightly lower (52.5%) [32]. The moisture content of wood slabs, as measured in other studies (55%), was higher than that of sawdust (43%) and was 49% in P. sylvestris [33]. A lower moisture content (38%) was determined in chips from pinewood edgings, which was a consequence of their several weeks of storage in summer and their drying under natural conditions [22]. This was confirmed in other studies, in which the wood chip moisture content ranged from 29 to 46%, depending on the acquisition time, with the mean being 38.3% [23]. The higher moisture content in the cited studies was determined in chips in winter (45.6%). The value of this attribute decreased significantly in spring and summer (by 8 and 17 percentage points (pp.), respectively). The moisture content of the chips reached 41% in autumn. The moisture content of chips obtained from logging residues in Sweden was higher and was 50.6% immediately after harvesting, and then, it decreased with the storage period [34]. The moisture content of logging residues from various tree species, when dried in summer, was definitely lower and ranged from 23 to 36% for Norway spruce and Black alder, respectively [35]. The moisture content of short-rotation woody crop (SRWC) dendromass was also diverse. Black locust biomass contained definitely less moisture (approx. 40%) compared with willow (approx. 50%) and poplar (approx. 60%) [36][37][38][39].
The pulp chips had the significantly lowest ash content (0.39% DM) and formed homogeneous group "h" (Table 1). A low ash content, below 0.5% DM, was also determined in veneer chips and pinewood sawdust. The ash content of veneer sheets and shavings was 0.6% DM, homogeneous group "f". The ash content of energy chips ranged from 0.97 to 1.37% DM for energy chips III and I, respectively. A definitely higher ash content was determined in birch and pine bark (2.59% and 4.46% DM, respectively). For the consecutive dates of solid biofuel acquisition, the lowest ash content (<1% DM) was determined in June and in December, homogeneous group "e". The value of this attribute in August and February was higher by 30-40%, and that in October and April was higher by 69% and 81%, respectively. The ash content at the acquisition times under study had a very high coefficient of variance, ranging from 78 to 129% in June and April, respectively. The ash content of the solid biofuels under study ranged throughout the experiment from 0.2% to nearly 8.0% for pulp chips obtained in December and pine bark obtained in April, respectively (Figure 3). Pine bark contained the highest ash levels at most of the acquisition times under study. Its higher content in birch bark compared with pine bark was determined only in December. Pine bark contained the significantly highest FC levels (27.87% DM) and the significantly lowest VM levels (67.66% DM) (Table 1). The FC content of birch bark was lower by more than 3 pp., and the VM content was higher by more than 5 pp. Moreover, the FC content was over 20% DM in all three types of energy chips. It was less than 20% in the other five solid biofuel types. The highest FC content with respect to the consecutive harvest dates for the solid biofuels (21.26% DM) and the lowest VM content (77.00% DM) was determined in April. The FC content, as determined on other dates, ranged between 20.73 and 20.93% DM, and the VM content was between 77.49 and 78.28% DM. The FC content ranged between 16.9% DM and 28.7% DM throughout the experiment for veneer sheets acquired in October and pine bark acquired in April (Figure 4). The VM content ranged from 63.2% DM to 82.6% DM for pine bark acquired in April and veneer sheets acquired in October (Figure 5). The FC content of wood slabs, as determined in a different study (21.4% DM), was higher than that of sawdust (20.0% DM), and the mean for P.
sylvestris biomass was 20.7% DM [33]. The significantly higher VM content was determined in sawdust (79.7% DM) than in wood slabs (78.1% DM) because of a strong negative correlation between FC and VM. The VM content of Picea sp. sawdust, as determined in a different study, was close (79.2% DM) [48] or higher, at 80.7% DM [49] and 82.1% DM [50]. In general, bark contains more ash than wood [40]. Therefore, sawmill residue contains less ash than forest residue, which has a higher bark and mineral content [41]. The ash content of wood slabs was higher (0.5% DM) compared with sawdust, which was caused by the fact that wood slabs contained an admixture of bark whose ash content was higher [33]. The ash content of pinewood sawdust from a sawmill, as determined in a different study, was 0.36% DM [42], and it was higher in sawdust from forest residues (0.50% DM) [43]. A very low ash content (0.26% DM) was determined in wood chips produced from P. sylvestris slabs [22]. The ash content, as measured in chips supplied over a period of two years, ranged from 2.05 to 4.75% DM [23]. The ash content determined in wood chips in Sweden was similar (2.88% DM) [34]. The ash content of dendromass was differentiated significantly by the species and part of the tree. It was found to be 0.24% DM in the pure wood of Norway spruce and 7.80% DM in the bark of European beech [20]. The ash contents of P. sylvestris stem wood (0.22% DM), branch base (0.48% DM), branch twigs (1.56% DM), and stem bark (1.78% DM) were also highly diverse [44]. The ash contents of Picea abies wood, bark, and needles were also diverse (0.28%, 2.32%, and 3.22% DM, respectively) [45]. Similar relationships between the ash content of wood and the bark of the species were demonstrated by Neiva et al. [46]. The ash content, as determined in the branches and bark of Greek spruce, was higher (3.2% and 9.5% DM, respectively) [47].
Birch bark had the significantly highest HHV (21.41 GJ Mg−1 DM, homogeneous group "a") (Table 1). The value of this attribute for pulp chips and pine bark was lower by 3%, and it was higher than 20.7 GJ Mg−1 DM. The HHV of more than 20 GJ Mg−1 DM was also determined for pinewood sawdust and all three types of energy chips (I, II, and III). The HHV determined for the other three solid biofuels (veneer sheets, shavings, and veneer chips) did not exceed 20 GJ Mg−1 DM and was lower than the highest value by approx. 7%. Regarding consecutive dates of solid biofuel acquisition, the lowest HHV (19.99 GJ Mg−1 DM) was determined in February, homogeneous group "d". The value of this attribute, as determined in the other months, ranged from 20.40 to 20.58 GJ Mg−1 DM in April and October, respectively. The HHV, as calculated for the solid biofuel types under study, ranged from 19.34 to 21.60 GJ Mg−1 DM throughout the experiment for veneer sheets obtained in February and for birch bark obtained in October, respectively (Figure 6). Birch bark had the highest HHV calculated at most of the dates under study. It was higher only for pine bark acquired in June. HHV was significantly correlated with the N, S, C, FC, and ash contents (Table 2). The HHV of P. sylvestris wood slabs, as determined in a different study (20.49 GJ Mg−1 DM), was close to that for sawdust (20.45 GJ Mg−1 DM) [33]. A similar HHV of coniferous biomass (20.4 GJ Mg−1 DM) was reported by Pretzsch [51], and it was lower for deciduous trees (19.8 GJ Mg−1 DM). Further, Telmo [52] determined the HHV of coniferous wood to be 20.5 GJ Mg−1 DM and 20.2 GJ Mg−1 DM for deciduous dendromass.
According to literature reports, a higher HHV was determined for bark compared with other dendromass types [21,53], and this was also confirmed in this study. Obviously, the LHV was negatively correlated with the moisture content (−0.99) (Table 2). Therefore, pulp chips (with the significantly lowest moisture content) had the significantly highest LHV among the solid biofuels under study (13.61 GJ Mg−1, homogeneous group "a") (Table 1). The second homogeneous group "b" included veneer sheets, with the LHV being lower by 9% (12.45 GJ Mg−1). Further, the LHV of birch bark and shavings was lower by 19%, and it was slightly over 11 GJ Mg−1. The LHV of the three types of energy chips and veneer chips was lower than the highest value by 20-23%, homogeneous groups "d, e". The LHVs of pinewood sawdust (9.48 GJ Mg−1) and pine bark (8.33 GJ Mg−1) were lower by 30 and 39%, compared with pulp chips, which was a consequence of their high moisture content. The significantly highest LHV among the solid biofuel acquisition times was determined in June (14.99 GJ Mg−1) in the homogeneous group "a". The LHV slightly exceeded 11 GJ Mg−1 in another summer month (August) and in the spring (April), and it was lower by 25-27%. The LHV was lower by 35-36% in autumn (October) and winter (February). The lowest value of this attribute (8.98 GJ Mg−1) was determined in December, and it was lower by 40%. The LHV of the solid biofuels under study ranged from 4.37 to nearly 17.21 GJ Mg−1 throughout the experiment for pine bark obtained in December and pulp chips obtained in June, respectively (Figure 7). The LHV of wood chips determined in a different study was 10.46 GJ Mg−1 [23]. The value of this attribute was significantly affected by the period when they were acquired. The significantly highest LHV (12.35 GJ Mg−1) was determined for the chips in summer when their moisture content was the lowest. The value of this attribute decreased significantly in spring, autumn, and winter, by 14%, 19%, and 28%, respectively. In a study in Sweden, the LHV of fresh wood chips was lower (8.35 GJ Mg−1), and it increased to 9.00 GJ Mg−1 after four months of storage [34]. The LHV of fresh P. sylvestris biomass, as determined in a different study, did not exceed 9 GJ Mg−1, and it was 8.63 GJ Mg−1 [33]. This attribute for sawdust was significantly higher (9.91 GJ Mg−1) than for wood slabs (7.35 GJ Mg−1). These values lay within the same range as the results of the authors' experiment for chips obtained in autumn and winter. The LHV for chips obtained in late autumn from naturally dried logging residues of Norway spruce and Scots pine was higher (14 GJ Mg−1) [35]. It was lower for black alder (12.5 GJ Mg−1) and silver birch (11.3 GJ Mg−1). The LHV of fresh SRWC dendromass varied depending on the plant species. The LHV calculated for black locust was significantly the highest (10.25 GJ Mg−1) [54]. The value of this attribute for willow and poplar was significantly lower by 21% and 34%, respectively, which was a consequence of a higher moisture content of willow and poplar compared with black locust.
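Because LHV is reported as received while HHV refers to the dry matter, the near-perfect negative correlation between LHV and moisture (−0.99) is expected. The snippet below illustrates this with one commonly used approximation (a constant latent-heat term of about 2.441 MJ per kg of water and the 8.936·H factor for water formed from fuel hydrogen); the exact conversion used in the study may differ, so the constants, function name and example values are assumptions for illustration only.

```python
# Hedged sketch: how lower heating value (as received) falls with moisture,
# given HHV of the dry matter. The 2.441 MJ/kg latent-heat term and the
# 8.936*H water-from-hydrogen factor are a common textbook approximation,
# not necessarily the exact formula applied in the study.
def lhv_as_received(hhv_dry_gj_per_mg: float, h_dry_pct: float, moisture_frac: float) -> float:
    """HHV input and LHV output in GJ Mg^-1 (numerically equal to MJ kg^-1)."""
    lhv_dry = hhv_dry_gj_per_mg - 2.441 * (8.936 * h_dry_pct / 100.0)  # subtract water formed from fuel H
    return lhv_dry * (1.0 - moisture_frac) - 2.441 * moisture_frac     # dilute by water and pay its evaporation

# Example close to the reported pulp chips (HHV ~20.8 GJ/Mg DM, H ~6.4% DM, MC ~27%)
print(round(lhv_as_received(20.8, 6.4, 0.27), 2))   # roughly 13-14 GJ/Mg, near the reported 13.61
print(round(lhv_as_received(20.8, 6.4, 0.52), 2))   # at pine-bark-like moisture the LHV drops to ~8 GJ/Mg
```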
Elemental Composition

The C, H, S, and N contents were significantly differentiated by the primary factors and the interactions between them (p < 0.001). The pulp chips had the significantly highest C content (56.09% DM) in homogeneous group "a" (Table 3). The same homogeneous group included birch bark, and its C content was lower by 0.3 pp. The C content of the six solid biofuels ranged from 54 to 55% DM in homogeneous groups "b, c, d". The lowest C content was determined in veneer sheets (53.44% DM). The C content for the five biofuel acquisition times ranged from 54 to 55% DM in homogeneous groups "a, b, c". The lowest value of this attribute was determined in August (53.85% DM). The C content of the solid biofuels under study ranged from 51.9% DM to 56.7% DM throughout the experiment for veneer sheets obtained in June and pulp chips obtained in June, respectively (Figure 8). The C content was correlated positively with HHV and FC and negatively with VM (Table 2). The mean C content of P. sylvestris biomass, as determined in a different study, was 53.43% DM [33], with sawdust (54.21% DM) containing more of this element by 2.4 pp. than wood slabs. A lower C content (48.4% DM) was determined in Pinus sp. sawdust in China [55] and in Hevea brasiliensis sawdust (48.5% DM) obtained from a wood processing plant [56]. Betula pendula wood chips also contained less C (50.4% DM) [57]. A high C content was found in SRWC poplar and willow biomass (over 53.3% DM) compared with black locust (52.6% DM) [54]. This attribute was found to be lower in a different study [38]. Moreover, black locust and poplar (over 51.5% DM) contained more C than willow (48.8% DM).
The H content of the nine solid biofuel types exceeded 6% DM, with the highest value being determined in pulp chips (6.40% DM) (Table 3). The lowest H content was determined in pine bark (5.64% DM). The H content for the five dates of the biofuel acquisition exceeded 6% DM, with the highest being determined in August (6.39% DM). The lowest value of the attribute was determined in June (5.97% DM). The H content throughout the experiment ranged between 5.53% DM and 6.66% DM for pine bark obtained in February and birch bark obtained in August (Figure 9). This attribute was correlated positively with VM and negatively with ash and FC content (Table 2). The mean H content of P. sylvestris biomass, as determined in a different study, was 6.64% DM [33], with sawdust (6.75% DM) containing more of this element by 0.22 pp. than wood slabs. A high H content was also determined in Pinus sp. wood chips (6.64% DM) [58] and sawdust (6.72% DM) from China [59]. The element content of Populus sp. sawdust in that country was lower (5.91% DM) [60]. A similar H content (approx. 5.9% DM) was also determined in the biomass of poplar, willow, and black locust [54]. A higher H content of the species biomass (6.2-6.4% DM) was determined in a different study [38]. The pulp chips and pine sawdust had the significantly lowest S content (0.011% DM) in homogeneous group "e" (Table 3). The S content of the six solid biofuels did not exceed 0.017% DM in homogeneous groups "b, c, d". The highest S content was determined in birch and pine bark (0.033 and 0.032% DM), respectively. Therefore, these values were higher by 300% compared with the lowest S content. The S content for the five dates of the biofuel acquisition was lower than 0.019% DM in homogeneous groups "b, c, d". The highest value of this attribute was determined in August (0.020% DM). The S content of the solid biofuels under study ranged between 0.007% DM and 0.046% DM throughout the
experiment for veneer sheets obtained in February and birch bark obtained in April (Figure 10). This attribute was significantly negatively correlated with VM, H, and LHV and positively with the other parameters under analysis (Table 2). The mean S content of P. sylvestris biomass determined in a different study was 0.009% DM [33], with wood slabs (0.011% DM) containing more of this element than sawdust (0.007% DM). This may have been a consequence of the higher bark content of wood slabs, as bark contains more sulfur than wood [21,61]. The S content of the solid biofuels from forest dendromass, as demonstrated in the current study, lay within the range indicated for wood (0.01-0.05% DM), as well as for Pinus spp. wood (0.009-0.03% DM) [21]. A higher S content can be expected in dendromass from SRWC. Meanwhile, the element content of the SRWC willow and poplar did not exceed 0.026% DM, and it was 0.033% DM in black locust [54]. The S content of willow, poplar, and black locust in a study in Spain [38] was higher (0.03, 0.04 and 0.05% DM), respectively. Therefore, the S content for black locust was similar to or higher than those for pine and birch bark in this study. This is important information as the sulfur dioxide emission from biomass combustion depends on the sulfur content, combustion temperature, and the amount of S retained in the ash [62].
The pulp chips had the significantly lowest N content (0.11% DM) and formed homogeneous group "g" (Table 3). The N content of the three solid biofuels (pinewood sawdust, veneer sheets, and shavings) did not exceed 0.15% DM in homogeneous group "f". The element content of energy chips III and veneer chips was 0.20% DM. The N content of energy chips II and I was under 0.30% DM. It increased to 0.41% DM in pine bark and to 0.55% DM in birch bark. Therefore, these values were higher by 370% and 500% compared with the lowest N content of pulp chips. As for the dates of biofuel acquisition, the N content ranged from 0.22 to 0.28% DM in December and April, respectively. The N content of the solid biofuels under study ranged between 0.07% DM and 0.81% DM throughout the experiment for pulp chips obtained in October and birch bark obtained in April (Figure 11). The N content was significantly negatively correlated with VM, H, and LHV and positively correlated with S, FC, and ash (Table 2). The mean N content of P. sylvestris biomass determined in a different study was 0.12% DM [33], with wood slabs (0.15% DM) containing more of this element than sawdust (0.08% DM), which could be a consequence of a higher bark content of wood slabs. As in this study, other authors also demonstrated a definitely higher N content of bark compared to wood [21,63]. Moreover, a higher N content of this solid biofuel results in higher NOx emissions [64]. A low N content, similar to the current results, was determined in P. sylvestris sawdust (0.13% DM) obtained in Spain [65].
Further, the element content of SRWC biomass was higher: it was 0.38 and 0.43% DM in willow and poplar, and 0.91% DM in black locust [54]. The N content of black locust determined in a different study [38] was high (0.63% DM). Therefore, these N content values were even higher than in pine and birch bark in the current study.

General Characteristics of Dendromass Solid Biofuels

Table 4 shows the descriptive statistics for the whole data set for all the dendromass solid biofuels of forest origin, obtained at two-month intervals during one year. These results show that the strongest dispersion, expressed as a coefficient of variation, was determined for ash content (>107%). The span (minimum-maximum) of this important attribute was very wide, and it ranged from 0.19 to 8.13% DM, with a mean of 1.31% DM. A lower but also strong dispersion was found for the N and S contents (60.1% and 52.0%, respectively). The N content spanned from 0.07 to 0.82% DM, with a mean of 0.24% DM. The values ranged from 0.006 to 0.046% DM for sulfur content, and the mean was 0.017% DM. An average variability of 25-34% was determined for LHV and moisture content, and the mean values for these attributes were 10.91 MJ kg−1 and 38.25%, respectively. The moisture content lay within a broad range (minimum-maximum) from 10 to 70%, and LHV ranged from 4.18 to 17.28 GJ Mg−1. Low variability (coefficient of variation < 15%) was determined for FC, VM, HHV, and C and H contents. Moreover, the highest result uniformity (coefficient of variation < 3%) was determined for the C content and HHV.
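The descriptive statistics summarized in Table 4 (mean, median, quartiles, extremes, SD and coefficient of variation for the pooled N = 180 observations) can be reproduced with a few lines of code; the file name and attribute column names below are assumptions for illustration.

```python
# Sketch of the whole-data-set descriptive statistics behind Table 4.
import pandas as pd

df = pd.read_csv("biofuel_measurements.csv")   # hypothetical file, N = 180 observations
attrs = ["MC", "Ash", "FC", "VM", "HHV", "LHV", "C", "H", "S", "N"]

summary = df[attrs].describe().T               # count, mean, std, min, quartiles, max per attribute
summary["CV_%"] = 100 * df[attrs].std() / df[attrs].mean()
print(summary.round(2))                        # ash should show the largest CV (>100%) in this data set
```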
The cluster analysis based on the values of all the attributes of the ten solid biofuels from forest dendromass at the cut-off point of 2/3 Dmax allowed for grouping them into two main clusters (Figure 12a). Pine bark and birch bark made their own cluster. The other eight biofuel types (veneer sheets, pulp chips, veneer chips, shavings, pinewood sawdust, and three types of energy chips) made a second, separate cluster. When the analysis accuracy increased, four clusters were identified at the cut-off at 1/3 Dmax. Pine bark and birch bark made two separate clusters. Moreover, pulp chips formed a separate cluster. A fourth cluster included all the remaining seven types of solid biofuels, including three types of energy chips: pinewood sawdust, veneer sheets, shavings, and veneer chips. Two clusters were identified for the analyzed biofuel attributes at the cut-off at 2/3 Dmax (Figure 12b). One cluster included LHV, H content, and volatile matter content. The next cluster included the other seven analyzed parameters: moisture content; ash content; FC; HHV; and C, S, and N content. With an increase in the accuracy of the analysis, three clusters were identified at the cut-off at 1/3 Dmax. The first cluster remained unchanged. A second cluster was identified, containing HHV and C content. The third cluster included the remaining five analyzed parameters.
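A sketch of the clustering step is given below: attributes standardized by column, Ward's linkage, and cluster membership read off at the two Sneath cut-off lines (2/3 and 1/3 of the maximum linkage distance). The input table of mean biofuel profiles and its column names are assumptions, not the study's actual data file.

```python
# Hedged sketch of the agglomerative clustering described above; data layout
# and names are assumptions made only for illustration.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

profiles = pd.read_csv("biofuel_mean_profiles.csv", index_col="biofuel")  # hypothetical: 10 rows x 10 attributes
X = profiles.apply(zscore)                    # standardize each column before clustering

Z = linkage(X.values, method="ward")          # agglomerative clustering, Ward's method
d_max = Z[:, 2].max()                         # maximum linkage distance D_max

for frac in (2 / 3, 1 / 3):                   # the two Sneath cut-off lines
    labels = fcluster(Z, t=frac * d_max, criterion="distance")
    print(f"cut at {frac:.2f}*Dmax ->", dict(zip(profiles.index, labels)))
```

Transposing the standardized matrix and repeating the same calls yields the attribute dendrogram in Figure 12b.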
Conclusions

This study characterized the thermophysical characteristics and the elemental composition of ten solid biofuel types produced over a period of one year from dendromass of forest origin, traded between producers and end customers. This is very important from both the scientific and practical perspectives as it will affect the further effectiveness and justifiability of solid biofuels used for heat and electricity generation. This study showed that the solid biofuel quality was significantly differentiated by the biomass type from which they were produced and the acquisition time, and the interactions of these two factors. Pulp chips proved to be the most valuable solid biofuel because of their beneficial thermophysical characteristics and elemental composition. However, this material is also
known to have other potential applications. Therefore, its price and availability, depending on the demand for it from other branches of industry, may put some restrictions on the power generation sector. Consequently, attention should be drawn to the three types of energy chips (I, II, and III) produced from various production residues, which also had beneficial energy-related parameters. The other biofuels can be (and are) successfully used for energy generation, although the properties of pine and birch bark were the least beneficial. Obviously, the thermophysical characteristics of all of the solid biofuels obtained in the summer (June) were better. Nevertheless, they can be successfully used in the all-year supply chain for dendromass used for energy generation. Data on changes in the quality of various commercial solid biofuels are important both for companies dealing with the production and logistics of production residues of forest origin and also for end consumers of such biofuels who use them as energy feedstock. Obviously, various bioenergy installations can be dedicated to various biofuel types with respect to their thermophysical characteristics and elemental composition. Nevertheless, the knowledge of commercial solid biofuel characteristics can facilitate the organization of supply logistics and can provide a specific installation with the optimal fuel produced from production residues of forest origin.

Figure 1. Types of tested commercial solid biofuels.
Figure 2. The moisture content of the solid biofuel types under study depending on the acquisition time; error bars denote standard deviation.
Figure 3. The ash content of the solid biofuel types under study depending on the acquisition time; error bars denote standard deviation.
Figure 4. The fixed carbon content of the solid biofuel types under study depending on the acquisition time; error bars denote standard deviation.
Figure 5. The volatile matter content of the solid biofuel types under study depending on the acquisition time; error bars denote standard deviation.
Figure 6. Higher heating value of the solid biofuel types under study depending on the acquisition time; error bars denote standard deviation.
Figure 7. The lower heating value of the solid biofuel types under study depending on the acquisition time; error bars denote standard deviation.
Figure 8. The carbon content of the solid biofuel types under study depending on the acquisition time; error bars denote standard deviation.
Figure 9. The hydrogen content of the solid biofuel types under study depending on the acquisition time; error bars denote standard deviation.
Figure 10. The sulfur content of the solid biofuel types under study depending on the acquisition time; error bars denote standard deviation.
Figure 11. The nitrogen content of the solid biofuel types under study depending on the acquisition time; error bars denote standard deviation.
Figure 12. The dendrogram of a hierarchical cluster analysis showing the similarities between solid biofuels from dendromass (a) and their thermophysical characteristics and elemental composition (b). The red vertical lines mark the Sneath criterion at 2/3 Dmax and 1/3 Dmax. D: linkage distance; Dmax: maximum linkage distance.

Funding: The results presented in this paper were obtained as part of a comprehensive study financed by the University of Warmia and Mazury in Olsztyn, Faculty of Agriculture and Forestry, Department of Genetics, Plant Breeding and Bioresource Engineering (grant No. 30.610.007-110) and co-financed by the National (Polish) Centre for Research and Development (NCBiR), titled "Environment, agriculture and forestry", No. BIOSTRATEG3/344128/12/NCBR/2017.

Table 1. Solid biofuel thermophysical characteristics depending on the biomass type and its acquisition time.
Table 2. Simple correlation coefficient between the solid biofuel attributes under study.
Table 3. The solid biofuel elemental composition depending on the biomass type and the acquisition time.
Table 4. Selected statistical analysis indicators for the attributes under study (N Valid = 180).
2023-12-16T17:29:17.673Z
2023-12-08T00:00:00.000
{ "year": 2023, "sha1": "526ecb0bf4f08a39ec9872a16711aa4709437589", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/16/24/7973/pdf?version=1702046550", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f21ff06898eb2d86cfcee39f24f05bbef148f353", "s2fieldsofstudy": [ "Environmental Science", "Materials Science" ], "extfieldsofstudy": [] }
23899501
pes2o/s2orc
v3-fos-license
Effect of a quadrivalent vaccine against respiratory virus on the incidence of respiratory disease in weaned beef calves We investigated the effect of vaccination of male beef calves (mean age ± S.D.: 158 ± 31days) against bovine herpes virus (BHV-1 or IBR virus), bovine respiratory syncitial virus (BRSV), bovine viral diarrhea (BVD) virus and para-influenza (PI3) virus on the incidence of respiratory disease during the first forty days after weaning and entering a feed-lot in Portugal. In May 2003, Mertolenga, Preta and mixed-breed calves from 10 different beef herds, were systematically assigned (by order of entrance in a chute) to two treatment groups, before moving to a common feed-lot. One hundred and twenty five male calves were vaccinated with a quadrivalent vaccine (Rispoval 4®) and revaccinated after 21–27 days while 148 herdmates were injected with saline (0.9% NaCl) on the same occasions. The incidence and severity of clinical cases of “bovine respiratory disease” (BRD) were evaluated every day during the first 40 days after entering the feed-lot. Morbidity (3% vs. 14%) and mortality (0% vs. 4%) due to BRD were significantly lower in the vaccinated group. Ten days after revaccination, the calves were treated with an antimicrobial – ending the study – after an outbreak of BRD caused a high incidence of disease in the non-vaccinated group. In conclusion, our results showed that Rispoval 4®, a quadrivalent vaccine against respiratory viruses, under field conditions, reduces morbidity and mortality due to BRD in beef calves after weaning. Introduction Respiratory disease in weaned feedlot calves (''bovine respiratory disease complex''; BRD), is the leading cause of morbidity and mortality in feed-lots worldwide (Griffin, 1997). This syndrome has a complex and multifactorial aetiology that usually is divided into three major categories: environmental factors, host factors and infectious factors (Stilwell, 2003;Ellis, 2001;Dyer, 1993). Bacteria, such as M. haemolytica and P. multocida, cause a severe illness when environmental factors (stress, lack of ventilation, crowding and others) and/or viruses reduce the capacity of the animal to control the infection (Storz et al., 2002;Thomson, 1993). Although environmental factors are crucial in the origin of BRD and should be addressed in a preventive program (Engelken, 1997), their control is usually harder to achieve than the management of the infectious element of the disease. This is the main reason why, although the etiopathogenesis of the disease is well known, the beef producer depends so much on antimicrobials and vaccines. The control of BRD through the continuous use of antimicrobials has many disadvantages (Ellis, 2001;Brumbaugh, 1996): expense, inefficiency, risk of antimicrobial resistance, and threats to animal welfare (because it only controls the bacterial stage of the disease). As a consequence of this, it follows that the administration of antimicrobials (even in a prophylactic approach) should be kept for situations when the control of BRD is not possible through other means. Vaccination against respiratory viruses seems to be a sensible and prudent approach, especially when we can forecast moments of important immunity suppression associated with these viruses, such as during the weaning and grouping of young animals from different herds (Dyer, 1993). 
However, the use of vaccination for the control of BRD in animals on arrival to feed-lots is still controversial (Ellis, 2001) and only occasionally used in Europe and very seldom used in Portugal (personal observations). The economic benefit of vaccination against BRD is not fully demonstrated (Smith et al., 1996), although there are some important factors (e.g. animal welfare) that are not considered in many of these studies (Tizard, 2000). Most vaccine trials are conducted under controlled conditions or followed by experimental infection. Field studies, on the contrary, are still scarce (Engelken, 1997). The respiratory viruses commonly responsible for BRD include bovine herpes virus-1 (Infectious Bovine Rhinotracheitis or IBR), bovine respiratory syncytial virus (BRSV), bovine viral diarrhea (BVD) virus and para-influenza (PI3) virus (Storz et al., 2002). Other viruses that have been implicated in the pathogenesis of BRD include coronavirus, adenovirus, parvovirus and rhinovirus (Storz et al., 2002; Smith et al., 1996). All of these viruses are known to be involved in BRD solely or in synergism with each other and bacteria. Some affect the lung parenchyma directly (BRSV and PI3), while others act on the immune system (BVD and BRSV) or local defences (IBR), like the ciliated epithelium. The characteristics and role of these viruses in the pathogenesis of BRD are well documented (Ellis, 2001; Baker et al., 1997; Smith et al., 1996; Thomson, 1993). However, in the field, it is usually difficult to identify the virus or viruses implicated in an outbreak of BRD, especially under feed-lot conditions. This is the main reason why multivalent vaccines are preferred. Our objective was to evaluate the benefit of a multivalent vaccine (Rispoval 4®; Pfizer Animal Health) against the four respiratory viruses, by comparing the morbidity incidence and the mortality of clinical cases of BRD between groups of vaccinated and non-vaccinated male beef calves, during the 40 days in a feed-lot.

Study animals

The weaned calves were from 10 different herds from the Ribatejo region of Portugal. The calves' mean age at weaning (overall and by breed) is presented in Table 2. Weaning of calves born during winter takes place in May, and on the day of weaning the calves are all transported to one nearby feed-lot. Morbidity due to BRD has always been approximately 10% in this unit, and in 2002 mortality reached 6% in the month following weaning. All calves are kept in the same open barn with slatted concrete floor. Feeding includes commercial corn-based concentrate and alfalfa hay. Water is always available. The history of these herds showed that no vaccination against the respiratory viruses was ever performed.

The vaccination

The best vaccination program would be to administer the first dose three weeks before weaning and revaccinate on the day of weaning (Engelken, 1997). However, our main objective was to test the efficacy of this vaccination in field conditions, where vaccination on the weaning day and revaccination 15-28 days later is the most practical schedule. The vaccine used, Rispoval 4® (Pfizer), contains the following: - On weaning day, 126 male calves were vaccinated intramuscularly with 5 ml of Rispoval 4® and 147 received a 5-ml intramuscular injection of saline solution. No calf showed any signs of illness and all were in excellent body condition.
In each herd, all males to be weaned were moved into a race and were alternately allocated to the two treatment groups by order of entrance (the first was always allocated to the non-vaccinated group). With two herds, due to vaccine unavailability, the last calves of the last chute were all included in the non-vaccinated group. In contrast, the last four calves in two Mertolenga herds were all vaccinated to use the last doses of already open bottles of vaccine. The first dose of vaccine was administered between the 15th and 22nd of May 2003 (weaning of the 10 herds occurred during this period) and the second on the 12th of June. With this schedule we had calves revaccinated 21-27 days after the first dose. The distribution of calves vaccinated and non-vaccinated is presented in Table 1.

Disease evaluation and treatment

Animals were inspected daily by herdsmen (blind to the study) and sick or suspected calves were separated. Clinical evaluation and selection of animals for antimicrobial treatment were made by the feed-lot veterinarian (blind to the study). Clinical signs used in the selection of BRD-affected animals were: isolation from herdmates, decreased appetite and depression (first signs detected by the herdsman), and dyspnea, cough, nasal and ocular discharge and hyperthermia (>39.5 °C), which resulted from the vet-conducted physical examination. Only animals showing all of these signs (albeit with different severities) were included in the BRD-affected group. Four sudden deaths occurred and were also considered BRD after blinded post-mortem examination confirmed extensive pulmonary lesions. The lungs of three dead animals were sent for microbiology and histopathology exams at the appropriate laboratories of the Faculdade de Medicina Veterinaria de Lisboa (all lab personnel were blind to the study).

Statistical analysis

Numbers of morbidity and mortality due to BRD were submitted to a logistic-regression analysis through PROC LOGISTIC and PROC GENMOD, respectively, using SAS software (SAS, 2004). Effects of vaccination, breed, calf's age at vaccination (co-variable) and their respective interactions were included in the logistic multiple regression models. The Wald Chi-square test was used to assess the importance of each factor. Because a significant effect was not shown for the interactions (P > 0.05), these were removed from the model. The need to use PROC GENMOD for mortality was due to the fact that the data showed some "quasi-complete separation" problems. If the data are completely or partially separated, it may not be possible to obtain reliable maximum likelihood estimates because convergence may not occur. Convergence does not occur because one or more parameters in the model become theoretically infinite. Such is the case if the model perfectly predicts the response or if there are more parameters in the model than can be estimated because the data are sparse (Webb et al., 2004).

3. Results

Table 1 shows the data concerning morbidity and mortality during the first 40 days after weaning and first vaccination. Mean ages at weaning and at onset of disease are not comparable because calves got sick on different days during the 40-day study. Logistic-regression analysis (Tables 2 and 3) shows that vaccination did have a significant effect (P < 0.01) on morbidity and mortality due to BRD. The odds ratio between non-vaccinated and vaccinated animals showed that the former are 4.822 times more likely to get BRD.
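For readers less familiar with PROC LOGISTIC output, the sketch below shows the equivalent calculation in Python: a logistic model for the BRD outcome and the conversion of fitted coefficients to odds ratios by exponentiation. The data frame, file and variable names are assumptions; only the relation OR = exp(beta) and the coefficients quoted in the text are taken from the study.

```python
# Minimal Python analogue of the SAS logistic-regression step described above;
# file and column names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

calves = pd.read_csv("calves.csv")  # hypothetical: one row per calf, brd coded 0/1

fit = smf.logit("brd ~ C(vaccinated) + C(breed) + age_days", data=calves).fit()
odds_ratios = np.exp(fit.params)    # e.g. OR for non-vaccinated vs vaccinated ~ 4.8 reported in the text
conf_int = np.exp(fit.conf_int())   # 95% confidence intervals on the odds-ratio scale
print(pd.concat([odds_ratios, conf_int], axis=1))

# The reported age slope translates to an odds ratio per extra day of age:
print(np.exp(-0.0133))              # ~0.987, i.e. older calves are slightly less likely to show BRD
```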
Age at weaning/vaccination also had a significant effect on respiratory disease incidence (P < 0.05), with a slope coefficient of −0.0133 ± 0.0066, which represents the change in log odds of the liability per unit increase in age (days). The odds ratio for age showed that older calves are less likely (0.987) to get BRD (Fig. 1). Breed did not have any effect on morbidity or mortality. Calves showed signs of BRD throughout the study period, but mortality was especially high after a few very hot days (>40 °C). During this outbreak (20-24th of June, 40 days after the first group was weaned), it was decided to finish the trial, and the majority of the animals were treated with antibiotics (10 mg/kg BW of Tilmicosin) to try to control mortality. From that moment onwards the study comparing susceptibility to BRD was considered closed. There was no report of treatments or deaths due to BRD after the outbreak previously described.

Post-mortem exams

Post-mortem examination was performed on the four animals that died at three different times: 3, 5 and 8 days after the second vaccination. Gross lesions included pleurisy, emphysema and oedema (especially evident in the dorsal lobes), thickening of the interlobular septae and signs of bronchopneumonia in the cranioventral lung. The lungs of three dead animals were sent for histopathology exam (the fourth animal had been dead for too long and it was felt the laboratory analyses were not going to be useful). Microscopic exam showed pleurisy lesions, tracheitis with pseudo-membranes, intense proliferation of the sub-mucosa lymphoid tissue, pronounced hyperplasia of the peribronchic lymphoid tissue (BALT), hyperplasia of the bronchiolar and alveolar epithelium, and oedematous and emphysematous alveolitis showing a large number of multinucleated giant cells (syncytial cells). No bacterium was found in two samples sent for microbiology and one revealed P. multocida. Virus isolation was not attempted.

Discussion

The study was slightly affected in its duration by the need to treat animals, after 40 days of permanence in the feed-lot, because of the outbreak of clinical respiratory disease that severely affected the non-vaccinated population (Table 2). This outbreak and the end of the study occurred 13 days after the second dose of the vaccine was given to the last weaned calves and 20 days after the revaccination of the first weaned calves. Our study promoted more adverse conditions for the vaccinated group than would occur if all animals were to be vaccinated. Virus shedding and circulation is heavier when more than half the population has not been vaccinated. In the same perspective, the non-vaccinated calves were favoured because of the relatively lower circulation of virus compared to the situation where no animal had been vaccinated. In spite of this drawback, vaccine efficacy was demonstrated by reducing the incidence of BRD and mortality due to respiratory disease in calves recently weaned (Table 2). The odds ratio analysis also shows that animals that are weaned at an older age are less likely to show respiratory disease. This is especially evident in the non-vaccinated group (Fig. 1). Clinical signs, necropsy lesions and histopathology exams suggest that at least one of the respiratory viruses (BRSV) was involved in the BRD cases. The use of a live vaccine against this virus seems appropriate when vaccination is very close to weaning, transport and commingling because of the rapid immunity response.
In conclusion, vaccination with Rispoval 4® reduced morbidity and mortality associated with BRD in calves recently weaned and commingled in a feed-lot.
2018-04-03T01:51:30.251Z
2008-04-18T00:00:00.000
{ "year": 2008, "sha1": "dec58004bc3322463bfbb8461d0875a85fe9df65", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.prevetmed.2008.02.002", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "88dec42a659f668bf70726314ae475831a1239a0", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4599371
pes2o/s2orc
v3-fos-license
Investigation of GeSn Strain Relaxation and Spontaneous Composition Gradient for Low-Defect and High-Sn Alloy Growth Recent development of group-IV alloy GeSn indicates its bright future for the application of mid-infrared Si photonics. Relaxed GeSn with high material quality and high Sn composition is highly desirable to cover mid-infrared wavelength. However, its crystal growth remains a great challenge. In this work, a systematic study of GeSn strain relaxation mechanism and its effects on Sn incorporation during the material growth via chemical vapor deposition was conducted. It was discovered that Sn incorporation into Ge lattice sites is limited by high compressive strain rather than historically acknowledged chemical reaction dynamics, which was also confirmed by Gibbs free energy calculation. In-depth material characterizations revealed that: (i) the generation of dislocations at Ge/GeSn interface eases the compressive strain, which offers a favorably increased Sn incorporation; (ii) the formation of dislocation loop near Ge/GeSn interface effectively localizes defects, leading to the subsequent low-defect grown GeSn. Following the discovered growth mechanism, a world-record Sn content of 22.3% was achieved. The experiment result shows that even higher Sn content could be obtained if further continuous growth with the same recipe is conducted. This report offers an essential guidance for the growth of high quality high Sn composition GeSn for future GeSn based optoelectronics. propagate through the entire epitaxial layer. Threading segments have very limited contribution for strain relaxation but could severely deteriorate material quality by acting as non-radiative recombination centers. Benefiting from the maturity of epitaxial technology, such as molecular beam epitaxy (MBE), and chemical vapor deposition (CVD), high quality single crystal GeSn could be grown under non-equilibrium conditions. MBE has achieved single crystalline GeSn on Si 12 , Ge 11,13,14 , and other substrates, leading to the demonstration of light emitting diodes (LEDs) 15 and photo detectors 16 . However, the epitaxy of high Sn content GeSn with decent quality to show clear photoluminescence (PL) spectra is still under development. Recently, CVD growth of GeSn has made significant progress by using industry standard manufacture technique [17][18][19][20][21][22][23] . High quality GeSn with 12.6% Sn content has been grown using Ge 2 H 6 and SnCl 4 as precursors 17 , which has enabled the first demonstration of optical pumping GeSn laser with an operating temperature up to 90 K 18 . The lasing of GeSn micro-disks with 16% Sn has also been realized with a wavelength up to 3.1 μm at 180 K by using the same growth chemistry 19 . High order Ge hydrides such as Ge 3 H 8 or Ge 4 H 10 were also utilized for GeSn growth in order to pursue high Sn incorporation 20 . Those highly reactive Ge hydrides enable the high growth rate at low temperature due to their weak Ge-Ge molecular bond to favor more Sn incorporation. However, utilizing GeH 4 as precursor for GeSn growth remains very attractive for industrial manufacturing due to its low cost and high thermal stability at room temperature [21][22][23][24] . We previously reported low-defect and thick GeSn growth with Sn incorporation up to 17% using GeH 4 and SnCl 4 as precursors 21,22 . The optically pumped GeSn edge-emitting lasers using these GeSn materials demonstrated a broad wavelength coverage of 2-3 μm and the highest lasing temperature of 180 K 23 . 
Historically it is generally acknowledged that the Sn incorporation via CVD epitaxy growth of GeSn is limited by chemical reaction dynamics. Therefore, substantial growth efforts were devoted to the process optimization of surface chemistry kinetics and thermodynamics 20,24,25 . However, recently we discovered a spontaneous-relaxation-enhanced (SRE) Sn incorporation mechanism for growth using GeH 4 and SnCl 4 as precursors 23 . It was found that Sn incorporation is primarily limited by compressive strain under Sn oversaturation condition while surface chemical reaction being secondary 26 . Since Sn exhibits lower free energy 10 , the excess provided Sn atoms will float and segregate on the surface, or be desorbed from the surface. For example, when the nominal growth recipe was used with targeting Sn content of 12%, the Sn incorporation starts from 12% and then increases continuously to 15% due to the material gradual relaxation. More Sn incorporation also results in the reduction of surface Sn segregation. Since all the growth parameters maintained invariable, the gradient GeSn was grown spontaneously rather than intentionally. Guided by SRE discovery, new growth strategies were carefully designed which lead to high quality and high Sn incorporation 23 . Other research group also observed SRE mechanism in the study of GeSn epitaxy 27 . While this approach showed its effectiveness based on the previous work, the microscopic mechanism of GeSn strain relaxation induced high Sn incorporation as well as high quality material formation is still unclear. A thorough understanding of the mechanism would provide great insights to guide the future high quality and high Sn composition GeSn material growth for the development of high performance Si based optoelectronics. In this work, a systematic study of strain relaxation mechanism for CVD grown GeSn and its effects on Sn incorporation during the GeSn growth process was performed. This study fills the blanks of in-depth understanding of SRE mechanism. It is revealed that the generation of dislocations at GeSn/Ge interface accommodates the large lattice mismatch and favors the crystalline nucleation for initial GeSn growth. A self-organized dislocation network is formed within the first 200-300 nm GeSn layer near the GeSn/Ge interface, which blocks the propagation of dislocation, leaving the subsequent GeSn layer low defect. In addition, the spontaneous gradient Sn incorporation was generated at entire GeSn layer due to the compressive strain relaxation. This paper is organized as the following: First of all, the sample growth methodology for high Sn incorporation was presented and the detailed growth results were summarized. 
One sample has reached a world-record final Sn composition of 22.3%, which significantly breaks the previously reported Sn incorporation limit even for using high order Ge hydrides as precursors; Second, following the growth sequence, the dislocation configuration at GeSn/Ge interface was investigated using transmission electron microscopy (TEM) to study the initial dislocation generation; Third, the formation of dislocation network region beyond the initial critical thickness was examined to understand the mechanism of dislocation network efficiently localizing dislocations and preventing the upward propagation of threading dislocations; Finally, the Gibbs free energy model was used to further discuss the relationship between compressive strain and Sn incorporation; Based on the mechanism discovered in this work, the approach to achieve higher Sn composition was proposed. Results Growth methodology and sample characteristics. GeSn samples were grown on relaxed Ge buffered Si substrate, using ASM Epsilon ® 2000 Plus reduced pressure CVD (RPCVD) system with GeH 4 and SnCl 4 precursors (see methods). Two strategies have been used in GeSn epitaxy to obtain high Sn incorporation: (i) the SRE approach for sample A, B and C with nominal growth recipe of 9%, 10% and 11% Sn and the corresponding finally achieved Sn compositions 12.5%, 12.9% and 15.9%, respectively; (ii) The GeSn virtual substrate (VS) approach for sample D and E in which GeSn VS was prepared through SRE approach with a nominal growth recipe of 12% and achieved intermediate Sn composition of 16.5% for both samples. Sample D was then grown with single-step Sn enhanced recipe on GeSn VS with the final achieved Sn composition 17.5%. Sample E was grown with a three-step gradient GeSn recipe (see methods). For each step the grading rate of Sn incorporation was designed to be moderate in order to suppress the breakdown of continuous growth. The graded structure eases compressive strain gradually, leading to the continuous increase of Sn concentration. The final Sn content was obtained as 22.3%, an unprecedented achievement so far for CVD technology. The material characterizations were performed after the growth. Table 1 summarizes layer thicknesses, Sn compositions, compressive strains and degrees of relaxation for five samples. Typical dark field TEM images and Secondary Ion Mass Spectrometry (SIMS) of sample A, D, E are presented in Fig. 1 For sample A, two GeSn layers could be clearly resolved as shown in TEM images of Fig. 1(a): (i) highly defective layer (1 st layer) of 180 nm thickness on Ge buffer and (ii) low defect layer (2 nd layer) of 660 nm thickness above the 1 st layer. Correspondingly, the two-layer structure observed from TEM image was also marked on the SIMS plot. From SIMS, both layers show Sn composition spontaneously enhanced gradient. The 1 st layer could be further subdivided into two regions defined by the boundary of the critical thickness h c , in which region-I and II represent constant low Sn content and Sn-enhanced gradient, respectively. Hereby the critical thickness h c was calculated based on People -Bean (P-B) model 28 (Supplementary section 3). The region I maintains with 8.8% Sn contents while the region II obtains the maximum 10.2% Sn contents with sharp Sn composition gradient. Through linear data fitting the gradient rate at region II of the 1 st layer was obtained as 12.6%/μm, which is 5.5 times of 2.3%/μm in the 2 nd layer. 
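The gradient rates quoted above come from linear fits of the SIMS depth profiles; a minimal sketch of that kind of fit is shown below, with purely hypothetical depth/composition samples standing in for the measured profile.

```python
import numpy as np

# Hypothetical SIMS samples: depth (um, measured from the Ge/GeSn interface)
# and Sn content (%) inside one region of the profile.
depth_um = np.array([0.10, 0.12, 0.14, 0.16, 0.18])
sn_percent = np.array([8.9, 9.2, 9.5, 9.9, 10.2])

# Linear fit Sn(%) = rate * depth + offset; 'rate' is the gradient rate in %/um.
rate, offset = np.polyfit(depth_um, sn_percent, deg=1)
print(f"Sn gradient rate ~ {rate:.1f} %/um (offset {offset:.1f} %)")
```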
Note that since the 2 nd layer was Sn gradient with a small gradient rate, part of the strain could be relaxed by the biaxial distortion of lattice constant without breaking the cubic crystal structure. For Sample D, the GeSn VS exhibits a similar two-layer structure with the thickness of 310 nm and 550 nm for the 1 st and 2 nd layer, respectively. The additional 3 rd layer was measured with a thickness of 260 nm, which is below the theoretical critical thickness of 1171 nm calculated by P-B model (Supplementary section 3). Therefore, the 3 rd layer of sample D could be considered as pseudomorphic growth which was further confirmed by RSM of XRD analysis (Supplementary section 2). One bright line was observed to travel across the 2 nd and 3 rd layers, indicating the penetration of threading dislocation which is partly responsible for strain relaxation. From SIMS, the two-layer structure was marked and the 1 st layer was divided into region-I and II, similar to sample A. Region I shows the constant Sn composition as 11.2% while region II reaches the final Sn composition of 13.7%. The gradient rate at region II of the 1 st layer and the 2 nd layer are 15%/μm and 3.4%/μm, respectively, which is higher than that of sample A. At the 3 rd layer the Sn composition was slightly enhanced with gradient rate of 1.6%/μm. Eventually 17.5% Sn incorporation was achieved on top of the 3 rd layer. For sample E, the GeSn VS shows a similar two-layer structure with sample D. The 1 st defective layer was observed with thickness of 380 nm and the 2 nd layer of 830 nm thickness is low defect without distinct boundaries between the steps, suggesting the smooth growth transition. From SIMS, the two-layer structure and region-I and II in the 1 st layer were marked. The Sn content at region I is 11.9% and the final Sn content at region II is 15.5%. The gradient rate at region II and the 2 nd layer is 21.5%/μm and 6.2%/μm, both of which are higher than sample A and sample D. Sample E achieved the final world-record Sn composition of 22.3% after three-step gradient growth. It is noteworthy that based on the sharp gradient rate at the end of 2 nd layer, higher Sn incorporation than the value of 22.3% is expected if sample E could be grown thicker with the same recipe. High Sn content of sample E increases light emission efficiency due to more directness of bandgap and extends operating wavelength up to 3220 nm, evidenced by photoluminescence spectra shown in Supplementary section 4. Dislocation configuration at GeSn/Ge interface. The current study of dislocation configuration at GeSn/Ge interface is preliminary while the relaxation process at the interface for other group IV epitaxy such as Ge or SiGe on Si has been extensively studied. In this section, the GeSn/Ge interface was investigated by using the high resolution TEM (HRTEM) technique to probe the dislocation configuration, which could microscopically reveal the accommodation mechanism of large lattice mismatch at the interface for the initial GeSn nucleation. Through the analysis of atomic level TEM at the interface, it was found that perfect 90° pure edge (Lomer) and 60° mixed dislocations are formed and the 60° mixed dislocations are dominant over Lomer dislocations at the interface, as shown in Fig. 2. Intrinsic stacking faults were also observed, which are associated with two different reactions of dislocations: 60° dislocation dissociation and Lomer dislocation formation, as shown in Fig. 3. 
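As a side note on the critical-thickness values quoted above, the following sketch iterates the People–Bean relation to a self-consistent h_c. The formula is the standard published form of the model; the Poisson ratio, Burgers vector length and lattice constant used here are representative placeholder values rather than those used in the paper, so the outputs are order-of-magnitude estimates only.

```python
import numpy as np

def people_bean_hc(f, a=0.566e-9, b=0.40e-9, nu=0.27, h0=10e-9, n_iter=200):
    """Fixed-point iteration of the People-Bean critical-thickness relation
    h_c = (1-nu)/(1+nu) * 1/(16*pi*sqrt(2)) * b**2/a * 1/f**2 * ln(h_c/b),
    with f the misfit strain, a the film lattice constant (m), b the Burgers vector (m)."""
    pref = (1 - nu) / (1 + nu) / (16 * np.pi * np.sqrt(2)) * b**2 / a / f**2
    h = h0
    for _ in range(n_iter):
        h = pref * np.log(h / b)
    return h

# Illustrative misfits roughly corresponding to low- and high-Sn GeSn on Ge
for f in (0.005, 0.013, 0.02):
    print(f"misfit {100 * f:.1f} %  ->  h_c ~ {people_bean_hc(f) * 1e9:.0f} nm")
```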
By introducing dislocations, the compressive strain near the interface is partially relaxed, favoring the initial crystalline nucleation of GeSn on Ge buffer. The dislocation generation at the GeSn/Ge interface was studied for each sample with sample B as a typical example presented in this paper. Both Lomer and 60° mixed dislocations are formed near the interface, as shown in bright field HRTEM image viewed from [110] direction in Fig. 2(a). The observation of existence of Lomer dislocation was highly consistent with other studies, even though different CVD reactor and precursors were utilized. From the image, it is clear that the 60° dislocations outnumbered the Lomer dislocations, which indicates that the 60° dislocations are dominant for the low-temperature (<400 °C) GeSn epitaxy on Ge. This is because the activation energy of 60° dislocations is lower than that of Lomer dislocations since the gliding will proceed through the switching inter-atomic bonds instead of the diffusion motion of individual atoms 29 . The inset shows fast Fourier transform (FFT) pattern with different planes marked. In order to verify 60° dislocation, magnified high resolution TEM images of area A is shown in Fig. 2(b1), exhibiting the typical core structure of 60° dislocation. Its burger vector is identified as [112] Fig. 2(b2) and (b3), respectively. The extra half plane of 60° dislocation was observed to be located at (111) plane. The magnified high resolution TEM image of area B presents core structure of Lomer edge dislocation, as shown in Fig. 2(c1). The corresponding burger vector of [110] a 2 , which is obtained by drawing burger circuit, lies in (001) plane. Since {001} planes are not the gliding planes, Lomer dislocation is hardly mobile. Inverse FFT images of Lomer dislocation are shown in Fig. 2(c2) . Lomer edge dislocation is twice effective for strain relaxation in comparison with that of the 60° mixed type because the efficiency of relaxation scales with the length of edge component of burger vector projected into the interface 29 . However, due to high nucleation energy, the onset of Lomer dislocation is kinetically limited for GeSn epitaxy at low temperature growth. The zigzagged intrinsic stacking faults were also observed at the GeSn/Ge interface of sample B as shown in Fig. 3(a). The stacking faults start from the GeSn/Ge interface, and then wander through both Ge and GeSn along {111} planes, and end at the GeSn/Ge interface. The stretching angle between two stacking faults is 54.7°. The inset shows the corresponding FFT pattern, in which the streaks along [111] direction indicate the formation of stacking faults. In order to investigate the formation mechanism of stacking faults, two areas were specifically studied, as annotated as area A and area B in Fig. 3(a). Magnified image of area A shown in Fig. 3(b) indicates the presence of Frank partial dislocation at the interface, verified by drawing burger circuit. The stacking fault associated with Frank partial dislocation was also observed at GeSn alloy in the inserted inverse FFT images of Fig. 3(b), which was obtained by masking {(111)(111)} planes in FFT pattern. As stacking fault extends into GeSn, a 90°Shockley partial dislocation with burger vector [112] a 6 is formed at the edge of stacking fault. Shockley partial dislocation glides to the interface and reacts with Frank partial dislocation to form a perfect Lomer dislocation. Therefore, stacking fault is terminated after this reaction 31 . 
The reaction is given as: On the Thompsons tetrahedron diagram, shown as Fig. 3(c), it is marked as, It should be noted that during the reaction no energy reduction occurs before and after this reaction according to Frank's criteria: + = . However, there is a significant energy reduction when stacking fault is terminated during the reaction. Therefore, the overall energy of Frank, Shockley partials and stacking faults is greater than the energy of perfect Lomer dislocation, indicating that this process is energetically favorable. Another mechanism of stacking fault associated with 60° dislocation dissociation was studied as well. The magnified image of area B shown in Fig. 3(d) presents the stacking fault extending into Ge buffer along (111) plane with the length of 9 nm. Using burger circuit enclosing the stacking faults, the projected burger vector [ n the Thompsons tetrahedron diagram of Fig. 3(c). The total energy after the dissociation is one third lower than that of 60° dislocation, comparing the energy states of pre-and post-reaction. For the dissociation of 60° dislocation, the glide motion of top atoms in (111) plane was illustrated in Fig. 3(e) with burger vector → b marked. Since the glide of 60° dislocation experiences higher energy barrier than 30° or 90° partial dislocations, it will decompose into the glide of 30° and 90° partial dislocation with the following sequence. The 30° partial dislocation glide takes the lead and 90° partial dislocation closely follows in order to maintain the close packing structure 33 . Self-organized dislocation network. After the initial nucleation of dislocations at GeSn/Ge interface, the subsequent growth is pseudomorphic within critical thickness. Meanwhile, elastic strain energy accumulates with increasing thickness which impedes the Sn incorporation. Beyond the critical thickness, pseudomorphic epitaxy collapses and dislocations are generated to release the strain energy. As a result, more Sn atoms are incorporated into lattice sites. After the generation of dislocations, a self-organized dislocation network is formed as the result of dislocation propagations and reactions. The typical dark field TEM image of sample B, as shown in Fig. 4(a), confirms the formation of self-organized dislocation network, which was marked as the 1 st layer. The 2 nd layer is low-defect, suggesting that the majority of dislocations are localized in the 1 st layer. The formation mechanism of dislocation network could be explained by the following process: (i) half-loop nucleation of 60° dislocation, (ii) half loop propagation and (iii) formation of Lomer dislocation. Magnified TEM image of area A shown in Fig. 4(a1) exhibits the nucleation of half loops, which is attributed to the generation of 60° dislocation 34 . The schematic diagram of half-loop nucleation of 60° dislocation was drawn in Fig. 4(b). In the diagram, the 60° mixed dislocations are dominantly nucleated on the epitaxial surface at critical thickness due to its low activation energy. Afterwards, the 60° dislocations will elongate along {111} planes as semicircular half loops 34,35 . Normally the half loops increase the radius continuously due to the strain field, which would eventually reach the GeSn/Ge interface and then form the linear misfit segments at the interface and two arms travelling upwards as threading dislocations 34 . 
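The Burgers-vector bookkeeping behind the statements above (the Lomer edge dislocation relieving twice as much misfit as the 60° type, the Frank + Shockley reaction conserving the sum of |b|², and the 60° dissociation lowering it by one third) can be checked with a few lines of Python. The specific index choices below are one representative set, equivalent up to symmetry to those in the figures.

```python
import numpy as np

a = 1.0                                              # lattice constant (arbitrary units)
n_hat  = np.array([0.0, 0.0, 1.0])                   # (001) interface normal
xi_hat = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)    # dislocation line direction [110]

def misfit_component(b):
    """Edge component of b lying in the interface and perpendicular to the line,
    i.e. the part of the Burgers vector that actually relieves misfit strain."""
    b_eff = b - np.dot(b, n_hat) * n_hat - np.dot(b, xi_hat) * xi_hat
    return np.linalg.norm(b_eff)

def b2(v):
    """|b|^2, the usual elastic self-energy proxy compared in Frank's criterion."""
    return float(np.dot(v, v))

b_60      = a / 2 * np.array([0.0, 1.0, 1.0])    # 60-degree mixed dislocation, b = a/2<011>
b_lomer   = a / 2 * np.array([1.0, -1.0, 0.0])   # Lomer pure edge, b = a/2<110> in the interface
frank     = a / 3 * np.array([1.0, 1.0, 1.0])    # Frank partial,    b = a/3<111>
shockley  = a / 6 * np.array([1.0, 1.0, -2.0])   # Shockley partial, b = a/6<112>
lomer_110 = a / 2 * np.array([1.0, 1.0, 0.0])    # Lomer produced by Frank + Shockley
sh1       = a / 6 * np.array([2.0, 1.0, 1.0])    # Shockley partials of the dissociation
sh2       = a / 6 * np.array([1.0, 2.0, -1.0])   # a/2[110] -> a/6[211] + a/6[12-1]

# (i) misfit-relief efficiency: the Lomer edge component is twice that of the 60-degree type
print("misfit-relieving component: 60-deg =", round(misfit_component(b_60), 3),
      " Lomer =", round(misfit_component(b_lomer), 3))

# (ii) Frank + Shockley -> Lomer: sum of |b|^2 unchanged (Frank's criterion),
#      but the bounding stacking fault is removed, so the reaction is favorable overall
print("Frank + Shockley =", frank + shockley,
      "; |b|^2:", round(b2(frank) + b2(shockley), 3), "->", round(b2(lomer_110), 3))

# (iii) dissociation of a perfect 60-degree dislocation into two Shockley partials:
#      |b|^2 drops from a^2/2 to a^2/3, i.e. by one third, as stated in the text
print("60-deg -> Shockleys =", sh1 + sh2,
      "; |b|^2:", round(b2(lomer_110), 3), "->", round(b2(sh1) + b2(sh2), 3))
```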
However, another mechanism occurs before the half loops arrive at the interface: When two 60° dislocations glide along different {111}planes and meet with each other, they will intersect by cross-slipping mechanism in which one of the 60° dislocations climbs to another plane 35 . If two 60° dislocations have appropriate burger vectors, they will react and form Lomer dislocation. The threading components of two 60° mixed dislocations annihilate after the reaction without having to travel through the film. This process facilities strain relaxation since Lomer dislocation is more efficient of relieving strain energy. The total eight equivalent reactions on the pair of {111} planes have been studied and summarized in ref. 36 . One typical reaction was schematically drawn in Fig. 4 Similar process will repeat with the continuous growth to gradually relieve the residual strain energy. After the formation of Lomer dislocation, a self-assembled dislocation network is formed within the 1 st GeSn layer, efficiently accommodating the lattice mismatch between GeSn and Ge. Therefore, the 1 st GeSn layer could act as a sacrificial layer with large amount of dislocations, leaving low-defect GeSn in the 2 nd layer. With gradual strain relaxation beyond critical thickness, the Sn gradient GeSn layer is formed spontaneously in the 1 st layer, as indicated as region II in Fig. 1. The spontaneous Sn gradient GeSn is crucial to obtain high quality GeSn because of the Hagen-Strunk multiplication mechanism in compositional gradient layer 37,38 . The Hagen-Strunk multiplication will help generate of 60° dislocations with complementary burger vectors 39,40 , which react and form Lomer dislocations as shown in Fig. 4. As a result, threading branches of two adjacent 60° dislocations will cancel out with each other. The ideal arrangement of 60° dislocation helps terminate threading dislocation which would otherwise propagate through the film and deteriorate material quality. It also results in the minimum number of dislocations that are required to relieve strain since the Lomer dislocations are formed during this process, which have more energy-relieving efficiency. Through Hagen-Strunk multiplication the self-organized dislocation network acts as a "filter" of threading dislocations, enabling successive low defect GeSn growth. This process is unique for GeSn compared with other group IV alloys such as SiGe because of the Hagen-Strunk multiplication induced by the formation of spontaneous Sn gradient layer. Discussion In order to in-depth investigate the strain effect on Sn incorporation, especially during the 1 st GeSn layer growth, the Gibbs free energy was calculated in both completely relaxed and compressively strained systems for comparison. Gibbs free energy was proven to provide good approximation of thermal stability of alloys under equilibrium condition. The minimization of free energy is the thermodynamic driving force of the stable crystallization. The calculation of Gibbs free energy has been used to study the thermal stability of GeSn and SiGeSn alloys 25,41 . In this work, the elastic strain energy was introduced into Gibbs free energy to estimate thermodynamic properties of compressively strained GeSn alloy. It is revealed that although metastable GeSn alloy was grown under nonequilibrium condition, our calculation of Gibbs free energy provides good description of strain effects on Sn incorporation with good agreement of experimental results. 
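A minimal numerical sketch of this strained-versus-relaxed free-energy comparison is given below, anticipating the expressions detailed in the next paragraph. The regular-solution interaction parameter and the simple quadratic elastic term with its prefactor are placeholders (chosen only so that the outputs fall in the range quoted later in the text); they are not the values fitted from the phase diagram or the exact strain-energy expression used in the paper.

```python
import numpy as np

KB = 8.617e-5          # Boltzmann constant (eV/K)
ALPHA = 0.27           # regular-solution interaction parameter (eV/atom) -- placeholder
K_STRAIN = 15.0        # prefactor of the quadratic biaxial strain energy (eV/atom) -- placeholder

def mixing_enthalpy(x):
    return ALPHA * x * (1.0 - x)

def mixing_entropy(x):
    return -KB * (x * np.log(x) + (1.0 - x) * np.log(1.0 - x))

def strain_energy(x, strained=True):
    # misfit of pseudomorphic Ge(1-x)Sn(x) on Ge via Vegard's law; zero if fully relaxed
    eps = x * (6.489 - 5.658) / 5.658 if strained else 0.0
    return K_STRAIN * eps**2

def delta_g(x, T, strained=True):
    return mixing_enthalpy(x) - T * mixing_entropy(x) + strain_energy(x, strained)

x = np.linspace(1e-4, 0.10, 2000)
T = 400.0 + 273.15     # growth upper-limit temperature, in K
for strained in (False, True):
    g = delta_g(x, T, strained)
    x_max = x[g < 0].max() if np.any(g < 0) else 0.0
    label = "strained" if strained else "relaxed "
    print(f"{label}: max stable Sn fraction at 400 C ~ {100 * x_max:.1f} %")
```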
The calculation assumes: (i) Thermodynamic equilibrium condition, (ii) random distribution of Sn atoms in Ge crystal and vibration of lattice constants due to fluctuation of alloy mixing negligible, (iii) thin film thickness below the critical thickness limit and (iv) Sn oversaturation condition. The Gibbs free energy could be expressed as where x and T are Sn composition and system temperature, respectively, ΔH and ΔS are mixing enthalpy and entropy while E s is elastic strain energy per atom. The ideal enthalpy and entropy are given as where α is interaction parameter which scales proportionally with the square of bond length between two nearest-neighbor atoms 42 . The value of α could be obtained experimentally by fitting of liquidus and curves in the T x − phase diagram of the alloy system (Supplementary section 5). The elastic strain energy per atom is written as 43 Fig. 5(a), representing the stability boundary of unstrained (fully relaxed) and compressively strained GeSn alloy systems, respectively. Hereby, the stability area (ΔG < 0) was marked as grey field while instability area (ΔG > 0) was colored as purple field. Compared with unstrained curve ΔG 0 = 0, the whole ΔG s = 0 curve shifts to the lower Sn composition range because of elastic strain energy, thus narrowing the stability area. As a result, the maximum Sn incorporation obtained in stable strained GeSn system decreases compared with unstrained system under the same temperature. For T = 400 °C, which is the upper limit temperature of our growth, the maximum Sn composition for strained system is 2.5%, smaller than 3% of unstrained one. More discrepancy of maximum Sn composition appears at higher temperature range. Similar composition-stability relationship was further clarified by the calculation of Gibbs free energy G(x,T) at T = 400 °C, which was plotted in Fig. 5(b). As shown in the inserted zoomed-in curves, the minimum free energy for strained and unstrained system occurs at 0.9% and 1.1% Sn composition, respectively. Since ΔG < 0 is the driving force of stable crystallization, Sn atoms tend to incorporate less in strained GeSn system compared to unstrained one in order to minimize Gibbs free energy. The more strained energy accumulates, the less Sn incorporates into GeSn system. In the 2 nd layer of GeSn, the relationship between compressive strain and Sn incorporation are interactive. As discussed above, more Sn could be incorporated with gradual relaxation of compressive strain, leading to the increase of local microscopic lattice constant. Therefore, more compressive strain is introduced for successive epitaxy of several atomic layers, which in turn impedes Sn incorporation. The interplay between strain and Sn incorporation delays the occurrence of maximum Sn incorporation and results in the long Sn-enhanced tail with continuous growth. Although the world-record maximum Sn composition of 22.3 % has been achieved in this study, the steep gradient rates at 2 nd layer shown in SMIS of Fig. 1 suggest that Sn incorporation is far from saturation at the end of GeSn growth. The further continuous growth with the same recipe would lead to the "true" maximum Sn composition which is eventually limited by surface chemical reaction. The detailed analysis is under investigation and will be reported later. Conclusion Two growth strategies were investigated in order to analyze strain relaxation, high Sn incorporation, and their relationship. 
Compressive strain has a strong effect on Sn incorporation, which is well explained by the Gibbs free energy calculation including elastic strain energy. The gradual strain relaxation results in the spontaneous formation of gradient GeSn. The step-graded GeSn structure relaxes compressive strain more smoothly, and a final Sn incorporation of 22.3% was achieved, which is unprecedented so far for CVD technology. Different dislocation arrangements are revealed at the GeSn/Ge interface. The mixed-type 60° dislocations are dominant at the interface over Lomer dislocations. Intrinsic stacking faults are associated with two different reactions: 60° dislocation dissociation and Lomer dislocation formation. Beyond the critical thickness, half loops of 60° dislocations are nucleated and expand outwards as threading dislocations. Spontaneous gradient GeSn helps terminate threading dislocations by Hagen-Strunk multiplication. The well-ordered dislocation network is formed in the gradient layer, leading to low-defect GeSn on top. This work provides a thorough analysis of the strain relaxation mechanism of GeSn and offers essential guidance for low-defect GeSn growth with high Sn content.

Figure 5. (a) Plots of Gibbs free energy ΔG0 = 0 and ΔGs = 0 correspond to the stability boundary of the fully relaxed and compressively strained GeSn systems, respectively. The grey and purple zones marked in the plots represent the stable (ΔG < 0) and unstable (ΔG > 0) regions. Under compressive strain, the stability boundary (ΔGs = 0) shifts to lower Sn composition, shrinking the stable region. At T = 400 °C, the maximum Sn composition for the strained and unstrained systems is 2.5% and 3%, respectively. (b) Gibbs free energy plot at T = 400 °C with (Gs) and without strain (G0). The local minimum Sn contents for the strained and unstrained systems occur at 0.9% and 1.1%, respectively.

Methods CVD growth. A Ge buffer layer with approximately 600 nm thickness was grown by a low/high temperature two-step growth followed by post thermal annealing. The 1st, 150 nm-thick layer was grown at a temperature of 375 °C while the 2nd, 450 nm-thick layer was grown at a temperature of 600 °C, using 10% GeH4 in purified H2 carrier gas. Afterwards, in-situ annealing was done at >800 °C. GeSn growth was initiated on the Ge buffer at a temperature between 200 and 400 °C. SnCl4 is a liquid at room temperature and must be delivered using a bubbler in which H2 gas is metered to control the SnCl4 mass flow rate. All the samples were grown with a SnCl4/(GeH4+SnCl4+H2) molar flow fraction on the order of 10−5. For sample E, the three-step Sn-enhanced layer structure was grown with a target thickness of 100 nm for each step layer. The SnCl4 flow fraction for each step of the epitaxy increases by ~8% compared with the previous step. TEM imaging. The TEM specimen was mechanically polished until its thickness was down to 20 μm. It was then transferred onto a copper grid with a 2 mm-diameter concentric hole. Focused ion beam thinning was followed by low-angle ion milling with a Fischione 1010 machine. The final thickness of the TEM-observed area is 50-300 nm. Cross-sectional TEM images were acquired using a Cs-corrected Titan 80-300 with a Schottky field emission gun (FEG) operating at 300 kV.
2018-04-05T13:23:18.538Z
2018-04-04T00:00:00.000
{ "year": 2018, "sha1": "916101558c0c5e36ddbc76f6e5ef784a5dab7f4f", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-24018-6.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "916101558c0c5e36ddbc76f6e5ef784a5dab7f4f", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
119353398
pes2o/s2orc
v3-fos-license
Dark Energy Accretion onto Van der Waal's Black Hole We consider the most general static spherically symmetric black hole metric. The accretion of the fluid flow around the Van der Waal's black hole is investigated and we calculate the fluid's four-velocity, the critical point and the speed of sound during the accretion process. We also analyze the nature of the universe's density and the mass of the black hole during accretion of the fluid flow. The density of the fluid flow is also taken into account. We observe that the mass is related to redshift. We compare the accreting power of the Van der Waal's black hole with Schwarzschild black hole for different accreting fluid. Introduction On the Anti de-Sitter(AdS) boundary, the studies of strongly coupled thermal field theories led us to some interesting results regarding the physics of asymptotically AdS black holes. In the reference [1], for Schwarzschild-AdS black hole space-time, a first order phase transition of thermal radiation/black hole is observed. The thermodynamic behaviours make after those of Van der Waal's fluid if charge and rotation are incorporated [2,3,4,5]. A more prominent analogy [6,7] is observed when cosmological constant is treated as a thermodynamic pressure, P , given as This also extends the first law of black hole thermodynamics as δM = T δS + V δP + .... To do so we need to introduce a quantity which is thermodynamically conjugated to P and is interpreted as a black hole thermodynamic volume [8,9] which is given as Immediately after the consideration of analogical pressure and volume, one may query about the nature of black hole's equation of state. Comparison between the temparetures of the black holes and concerned fluid or the comparison between the volumes or the pressures of the black holes to the corresponding physical properties of concerned fluid (with which the black holes' equation of state does resemble) can be built up. References like [10] have chosen the equation of state possessed by Van der Waal's fluids. For one mole of such fluid, the equation of state turns to be where v = V N , N being the fluid's degrees of freedom. The attraction between two molecules of the concerned fluid is measured by the parameter ′ a ′ (a > 0). On the other hand, the volumetric measure is kept by the parameter ′ b ′ . It is studied in the references [6,7] that the rotating (or nonrotating as well) AdS black holes' thermodynamic natures resemble with this particular type of fluid to a large extent. Whenever a phase transition is observed in the black hole, a swallowtail catastrophe is exposed by Gibb's free energy for both the fluid and the black hole. An exact match between the properties of Van der Waal's fluid and a particular type of black holes are tried to be obtained in the references [11,12]. The static spherically symmetric black hole metric obtained by these article is given as with the lapse function where M is the mass of the Van der Waal's black hole. For a small scale connected to the inverse cosmological constant Λ, l is a parameter with dimensions of length (Hubble length) and the parameter a vw > 0 measures the attraction between the molecules of the fluid, and the parameter b vw measures their volume. We find several literatures, nowadays, where [13,14] the thermodynamic natures of the Van der Waal's black holes have been studied. 
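For reference, the Van der Waals equation of state quoted above has the textbook critical point v_c = 3b, T_c = 8a/(27b), P_c = a/(27b²). The short sketch below checks numerically that the first and second derivatives of P with respect to v vanish there; the values of a and b are illustrative only.

```python
import numpy as np

a_vdw, b_vdw = 1.5, 0.4        # illustrative Van der Waals parameters

def pressure(v, T):
    """One-mole Van der Waals equation of state with k_B = 1: P = T/(v-b) - a/v**2."""
    return T / (v - b_vdw) - a_vdw / v**2

# Textbook critical point of the Van der Waals fluid
v_c = 3.0 * b_vdw
T_c = 8.0 * a_vdw / (27.0 * b_vdw)
P_c = a_vdw / (27.0 * b_vdw**2)

# Check numerically that dP/dv and d2P/dv2 vanish at the critical point
h = 1e-5
dP  = (pressure(v_c + h, T_c) - pressure(v_c - h, T_c)) / (2 * h)
d2P = (pressure(v_c + h, T_c) - 2 * pressure(v_c, T_c) + pressure(v_c - h, T_c)) / h**2
print(f"v_c={v_c:.3f}  T_c={T_c:.3f}  P_c={P_c:.3f}  dP/dv={dP:.2e}  d2P/dv2={d2P:.2e}")
```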
A common conclusion drawn from all these articles is that the Van der Waal's black hole solution is qualitatively analogical (on a thermodynamic perspective) with the Van der Waal's fluid. Else we can treat it as just another type of static spherically symmetric black hole ansatz. As we know, since almost twenty years from now, that our universe is experiencing a late time cosmic acceleration and to justify such kind of accelerated expansion, we have proposed many models, where a homoganeous exotic fluid coined as 'quintessence', 'dark energy' or 'phantom field' etc is assummed to be present in the universe (spreaded all over in it) and is exerting negative pressure. Until now, many dark energy models have been speculated. Between these models, the most appealing one is the cosmological constant [16] which is the simplest candidate of dark energy, distinguished by the equation of state p = ωρ with ω = −1. However, two problems raise for the cosmological constant model : the coincidence problem [15] and the fine tuning problem. Some methods are given to solve these problems, such as considering the holographic principle [17], anthropic principle [18], invoking an interaction between dark matter and dark energy [19] and variable cosmological constant scenario [20]. In this context, several well-known models such as phantom, quintom, Chaplygin gas, quintessence, geographic dark energy and holographic dark energy, etc., have been proposed [21,22,23,24,25,26,27,28]. The Chaplygin gas is an exotic type of fluid whose energy density ρ and pressure p fit the equation of state p = −B/ρ, where B is a positive constant [29]. At large value of scale factor, the Chaplygin gas tends to accelerate the universe's expansion, however, at small value of the scale factor it acts as pressureless fluid. In general terms, when such exotic fluids violate the strong energy condition 3p + ρ > 0, we call them to be in the quintessence era. But as they violate the weak energy condition, i.e., p + ρ > 0 we say that these fluids are in phantom era. In phantom era, a future singularity named "Big Rip" may occur where all the four fundamental forces might be defeated by the dark energy and every matter will be destroyed. Now it was a challenging question that the compact objects like black holes will exist at big rip or not. Babichev et al [30] for the first time in literature, have general relativistically studied the phantom energy accretion on Schwarzschild black hole and shown that if the black hole is surrounded by a fluid which is following the equation p + ρ < 0 then the mass of the black hole will be decreased by the effect of such kind of fluid's accretion. Dark energy accretion and its properties for different black holes as central engine and different equation of states of accreting fluid are studied in different references [20,32,33,34,35,36,37]. The general overview is that the black holes will loss mass due to such kind of exotic matter accretion. A series of pseudo Newtonian studies of accretion disc nature for dark energy accretion are done in references [38,39,40,41]. It is speculated that dark energy accretion strengthens the wind branch and weakens the accretion branch which as a result faints the accretion disc, i.e., it weakens the feeding process of the black holes. Effects of different modified gravity parameters on accretion are interesting topics to study. 
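The interpolating behaviour of the Chaplygin gas mentioned above (pressureless at small scale factor, cosmological-constant-like at large scale factor) follows from the textbook solution of the conservation equation, ρ(a) = √(B + C/a⁶). A small sketch, with illustrative values of B and of the integration constant C:

```python
import numpy as np

B, C = 1.0, 1.0        # Chaplygin-gas constant and integration constant (illustrative)

def rho_chaplygin(a):
    """Density of a pure Chaplygin gas (p = -B/rho) from the conservation equation:
    rho(a) = sqrt(B + C / a**6)."""
    return np.sqrt(B + C / a**6)

for a in (0.1, 1.0, 10.0):
    rho = rho_chaplygin(a)
    w = -B / rho**2                 # effective equation-of-state index p/rho
    regime = "dust-like" if w > -1/3 else ("quintessence" if w > -1 else "phantom-like")
    print(f"a = {a:5.1f}:  rho = {rho:9.3f},  w = {w:+.3f}  ({regime})")
```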
Van der Waal's black hole is a thermodynamically modified version of static spherically symmetric black hole which incorporates extra parameters a vw and b vw which modify the black hole's gravitating power. As a fluid, Van der Waal's gas incorporats the mutual attractions between molecules and the pressure exerted by molecules on the container's wall. It is expected that the Van der Waal's black hole will take into account the mutual interactions of microstates in it. This mutual attraction might have effect on the external power of the black hole. Our motivation for this paper is to study how does the black hole accretion disc behave if both the central black hole is modified as a Van der Waal's black hole and the accreting fluid is taken to be an exotic one. This may give rise to a new construction of black hole's mass reduction formula. Besides the nature of the mass of the black holes, the density variation near the black hole will be studied. This may speculate how much of negative pressure generating nature is carried out by the dark energy models near this particular type of compact objects. In the present paper, we consider most general static spherically symmetric black hole solution in section 2. An investigation regarding the accretion of any general kind of fluid flow around the black hole is done in the same section. Next, we analyze the accretion of the fluid flow around Van der Waal's black hole in section 3. Here we calculate the existence(s) of critical point(s), velocity of sound and the fluid's four velocity during the process of accretion. Finally, we briefly conclude through a discussion. Accretion onto a General Static Spherically Symmetric Black Hole We will consider a general static spherically symmetric metric [11,12] written as equation (3), where f (r)(> 0) considered as a function of r and M as the mass of the black hole. For the accreting fluid, energy-momentum tensor is given by (8πG = c = 1) where p and ρ are the pressure and energy density of the fluid. Also, u µ = dx µ ds = (u 0 , u 1 , 0, 0) is the four-velocity vector of the fluid flow, where u 0 and u 1 are the components (non-zero) of velocity vector satisfying Let us consider the radial velocity of the flow u 1 = u. Therefore, u 0 = g 00 u 0 = u 2 + f where √ −g = r 2 sinθ. From the equation (5), we get T 1 0 = (ρ + p)u 0 u. For inward flow, assuming u < 0 (as the fluid flows towards the black hole). For the fluid flow, we may take that the fluid is any kind of dark energy or dark matter. When a static spherically symmetric black hole is considered, a proper dark-energy accretion model should be gained by generalizing Michel's theory [42]. Babichev et al. [43,44] have performed the generalization of the dark energy accretion onto Schwarzschild black hole. T µν ;ν = 0 is the energy-momentum conservation law for the relativistic Bernoulli equation (the time component). Taking the radial temporal component of relativistic energy momentum conservation equation, we have d where C 1 is an integrating constant, having the dimension of the energy density. For the energy momentum tensor, the energy flux equation can be defined by the projection of the conservation law, i.e., u µ T µν ;ν = 0 ⇒ u µ ρ ,µ +(ρ+p)u µ ,µ = 0. From this, we get (taking µ = 1), where C is a constant of integration constant and for convenience the minus sign is taken. Moreover, ρ ∞ and ρ h denote the the energy densities at infinite distance from the black hole and at the black hole horizon respectively. 
From equation (6) and (7), we get, where where C 3 is another integrating constant. From (6) and (9), we get, Let us assume From the equation (9), (10) and (11), we get, If one or the other of the bracketed factors in (12) is terminated, we obtain a turn-around point and for this case, the solutions will give two values in either r or u. The solutions are passing through a critical point that assembles the material falling out (or flowing into) and along with the particle trajectory the object has monotonically increasing velocity. Critical point is a point where the speed of the flow is equal to the speed of sound inside the fluid. Assuming at r = r c , where the critical point of accretion is located, which can be obtained by assuming the two bracketed terms (the coefficients of dr and du) in equation (12) to be zero. Therefore, at r = r c , we get, and where u c is the critical speed of the flow at r = r c (at the critical point) and the subscript c is denoting the critical value. From (13), we get, and At r = r c , the sound speed can be obtained by The solutions are physically admissible if u 2 c > 0 and V 2 c > 0. Accretion onto a Van der Waal's Black Hole Considering that the fluid flow accretes upon the Van der Waal's black hole, we will compute the expressions of u 2 c , V 2 c and c 2 s at r = r c (i.e., at the critical point). We get (using equations (14) and (15)): and c 2 s can be obtained by using the equations (16), (17) and (18). The physically admissible solutions of the above equations are obtained if u 2 c > 0 and V 2 c > 0, i.e., and Now, we consider p = Aρ is the equation of state and A is constant and it accretes upon the Van der Waal's black hole. Then we get c 2 s = A, V 2 c = 0 and u 2 c = 0. Therefore, from (14), we get, LetṀ is the rate of change of mass of Van der Waal's black hole which is computed by integrating the flux over the two dimensional surface of the black hole and is defined by [45], If we assume M 0 be the initial mass corresponding to the initial time and if we neglect the cosmological evolution of ρ ∞ , then using the equation (22), we get the mass of the black hole to be The result (22) can be written for any general ρ and p as done in [32,33,34] (satisfying the holographic equation of state and violating weak energy condition), i.e., can be written aṡ Again, black hole simultaneously radiates energy when it accretes fluid . This radiation is known as Hawking radiation [46]. The black hole evaporates for this radiation which is balanced by the accretion of matter into the black hole and as a outcome the total system is guessed to be under equilibrium. But when we examine the parameters (eg., temperature) of the accreting fluid at very far from the black hole with very close to the black hole, there will be a big difference. But the parameters show equilibrium nature in local cells. Such type of equilibrium is called quasi equilibrium. In this work, we have considered large black holes (in general). For small black holes, the relation between temperature and mass is given by T = (8πM ) −1 . For this reason, the black holes radiate more following to the standard fourth order rule of black body radiation. The accretion radiation equilibrium may not be the equilibrium one under such large amount of Hawking radiation. For this type of cases we will unable to talk whether the accretion process is at all dependent of the mass or not. The process of accretion for very small black holes is still a fact to research with. 
From the equation (4) and (10), we have (with r h = 1) the index of the equation of state as Note: ω D > or < −1 depends on the sign of the constant C 4 . Thermodynamic Analysis of Accreting Matter on Van der Waal's Black Hole Now, we will discuss about the thermodynamics of the dark energy accretion. The equation related to the thermodynamic studies is given by the equation of state p = ω D ρ. First, we wish to evaluate the value of C such that the sign of M can be determined and secondly, we verify the exactness of the generalized second law of thermodynamics which is an invariant law and will search any limitation on the equation of state ω D from thermodynamic point of view. The energy supply vector ψ i and the work density(W ) are defined as [47,48] where T j i is the projected energy-momentum tensor (normal to the 2-sphere). Therefore, the change of energy (across the event horizon) is given by [48] The amount of energy crossing the event horizon is [48,49] (taking r e = 1) given by From (22) and (26) (as c = 1 and E = mc 2 ), we get, the arbitrary constant C, given by C = u 2 , i.e.,Ṁ = 4πu 2 (ρ + p) . In quintessence era, we can say thatṀ > 0, i.e., the black hole mass is increasing although the rate of increment is slowly decreasing as we move to the line of phantom barrier, whereas, in phantom era,Ṁ < 0, i.e., the black hole mass is decreasing. The holographic energy density given by where Ha 2 which directs to result balanced with observations. Here 'a' is the scale factor of the background metric of the universe and H is the Hubble parameter. We can identify the dimensionless dark energy density parameter as: For a dark energy subjected universe, dark energy enlarge similar to the conservation laẇ ρ + 3H(ρ + p) = 0 (29) or identically [35]:Ω where p = ω D ρ is the equation of state. Also, the equation of state of the index is of the form [35]: Here, w D depends on the parameter c. Since, the observation predicts [36] Ω h → 1 for the present time, therefore, at c = 1, ω D → −1, i.e., our model acts like cosmological constant. Also for c > 1, we get, −1 < ω D < − 1 3 , i.e., our model shows the quintessence region and if c < 1, we get, ω D < −1, i.e., the phantom type behaviour occurs. Using the equation (27) and (31), we get, If R h < 3 2 R H , thenṀ increases where R H is the Hubble radius and R h is the radius of the event horizon. If R h > 3 2 R H , thenṀ decreases. Dark Energy Accretion upon Van der Waal's Black Hole Here, we will discuss about dark energy model such as extended Chaplygin gas. We consider the spatially flat, homogeneous and isotropic Friedmann-Robertson-Walker (FRW) model of the universe is described by the following metric where a(t) represents time-dependent scale factor. The Einstein's equations for FRW universe are It is also assumed that the total matter and energy are conserved with the following conservation equation (29) Now, we consider the extended Chaplygin gas [37] as dark energy model. The equation of state is given by I. For n = 1 Special case of n = 1 reduces the equation (35) to the modified Chaplygin gas equation of state with the density (using the equation (29)) where C > 0 is an integration constant. Therefore, the current value of the energy density For MCG model, we get, and fig.1a we have plotted ρ vs M , the mass of the central gravitating object. Accretion has been considered. The plots show if the mass of the central engine is increased, the density of the accreting dark energy is reduced. 
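The sign behaviour of Ṁ discussed above can be illustrated by integrating Ṁ = 4πu²(ρ + p) along the expansion for a constant equation-of-state parameter w. In the sketch below, u² is held constant and the accreting fluid is assumed to dominate the expansion; these are simplifying assumptions for illustration, not the paper's full treatment.

```python
import numpy as np

U2 = 1.0e-3            # |u|^2 at the horizon, held constant (illustrative; paper's C = u^2)
RHO0 = 1.0             # present-day dark-energy density (units 8*pi*G = c = 1)

def mass_history(w, m0=1.0, a_start=1.0, a_end=5.0, n=20000):
    """Integrate dM/dt = 4*pi*u^2*(rho + p) for p = w*rho along the expansion, using
    rho(a) = RHO0 * a**(-3*(1+w)) and H = sqrt(rho/3) (flat FRW, fluid-dominated).
    Returns the final black-hole mass."""
    a_grid = np.linspace(a_start, a_end, n)
    m = m0
    for a0, a1 in zip(a_grid[:-1], a_grid[1:]):
        a = 0.5 * (a0 + a1)
        rho = RHO0 * a**(-3.0 * (1.0 + w))
        H = np.sqrt(rho / 3.0)
        dm_dt = 4.0 * np.pi * U2 * rho * (1.0 + w)
        m += dm_dt / (a * H) * (a1 - a0)      # dt = da / (a*H)
    return m

for w in (-0.8, -1.0, -1.2):                  # quintessence, cosmological constant, phantom
    print(f"w = {w:+.1f}:  M(a=5) / M0 = {mass_history(w):.4f}")
```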
Not only that but also we observe that the density profile is high if α is low. So dark energy, whenever is strongly repulsive (i.e., α is high) we find it to reduce the density to be accreted in. Super massive black holes are less capable to accrete a strongly dense dark energy flow towards it than the local stellar mass black holes can do. For α = 0, we get back the barotropic fluid accretion. We see for small mass of the black hole, the accretion is very high (than dark energy cases) and as black hole's mass is increased, density of the exotic fluid decreases. The negative pressure of dark energy faints the strength of accretion. Here, we assume the last term of expression in EoS (35) is dominant. In this case, we can write the energy density in terms of the scale factor as (using the equation (29)) where C is an arbitrary integrating constant. Therefore, the current value of the energy density looks like Also, we get, and Density vs mass for α = −1 case has been plotted in fig.1b. The basic features do match with fig.1b. However, as we increase the value of n, the initial (for low M ) decrease of density becomes more steeper. If we take n = 1, it is obsereved that the accretion density almost becomes constant with the variation of mass M . In this case the equation of state given by equation (35) can be written as, Using the equation (29), we get, Therefore, the energy density is given by where φ 1 can be evaluated by expressing ρ as a function of a by using the equation (45). Therefore, the current value of the energy density is ρ 0 = φ 1 (1). Also, we get, and In fig.1c we have plotted ρ vs M and we fix r c , a vw , b vw , l, A 1 , A 2 and B, and see that if the mass is increasing then the density is decreasing. In this case the EoS (35) can be written as, Using the equation (29), we get, Therefore, the current value of the energy density ρ 0 = φ 2 (1). Also, we get, and Relation between ρ and M r c = 10, a vw = 0.2, b vw = 1, l = 1.2, Density vs mass for "n = 3 and α = 1 2 " case has been plotted in fig. 1d. The basic features do match with figures 1a, 1b and 1c. Now, using the equation (29), (33) and (34), we get, which implies where M 0 is the current value of the Van der Waal's black hole's mass. If a is very large (z → −1), i.e., at the last stage of the universe, the mass of the black hole will be M = M 0 + 8πu 2 √ 3 √ ρ 0 . Using the solution of ρ in equation (54), the black hole mass M can be written in terms of scale factor a and then using z = 1 a − 1, the formula of redshift, M will be written in terms of redshift z. For n = 1, M can be written as, We will now compare Chaplygin gas accretion for Van der Waal's black hole and Schwarzschild black hole. A quantitative study of Chaplygin gas accretion and detailed density profile are studied in the reference [41]. For a nonrotating (j = 0) i.e., Schwarzschild black hole, we observe that if the specific angular momentum (λ c ) of the accretion is 2.7, for adiabatic accretion (Fig 2a) the density of accretion near to the black hole is very high (∼ 10 −13 gm/cc), whereas, at thousand Schwarzschild radius this will be of the order of10 −27 gm/cc. As we increase x, we observe that the density become asymptotic to the distance axis. Finally, in fig 2b, we draw the Chaplygin gas accretion on Schwarzschild black hole [41]. Here, we see effect of dark energy accretion terminates the accretion disc very near to the black hole. 
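For the modified Chaplygin gas case (n = 1) discussed above, the conservation equation integrates to the standard closed form used in the sketch below; the parameter values and the integration constant are illustrative, not those of the figures.

```python
import numpy as np

A, B, ALPHA, C = 0.1, 1.0, 0.5, 1.0     # illustrative MCG parameters and integration constant

def rho_mcg(a):
    """Density of the modified Chaplygin gas, p = A*rho - B/rho**alpha, obtained by
    integrating rho' + 3H(rho + p) = 0:
    rho(a) = [ B/(1+A) + C * a**(-3*(1+A)*(1+alpha)) ]**(1/(1+alpha))."""
    return (B / (1 + A) + C * a**(-3 * (1 + A) * (1 + ALPHA))) ** (1.0 / (1 + ALPHA))

rho0 = rho_mcg(1.0)                      # current value of the energy density
print(f"rho_0 = {rho0:.3f}")
for a in (0.2, 0.5, 1.0, 3.0, 10.0):     # very dense at early times, nearly constant late
    print(f"a = {a:5.1f}:  rho = {rho_mcg(a):10.3f}")
```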
So we can speculate that the Chaplygin gas type dark energy accretion on the Van der Waal's black hole is stronger than the dark energy accretion on Schwarzschild black hole (i.e., the Van der Waal's black hole is able to even attract dark energy from distant regions) and is weaker than an adiabatic fluid accretion on Schwarzschild black hole (i.e., Schwarzschild black hole is able to accrete adiabatic fluid from distant region but Van der Waal's black hole is unable do that). Now M vs z is drawn in figures 3a(i), 3a(ii) and 3a(iii). Since our solution for extended Chaplygin gas model produces only quintessence, so from the figures, we can say that the mass M of the Van der Waal's black hole always increases with decreasing z. So we conclude that the mass of the Van der Waal's black hole increases if the extended Chaplygin gas accretes onto the Van der Waal's black hole. Also, if we fix C and vary α then increment of α decreases the value of the mass. However, if we fix α and vary C then increasing C increases the value of the mass. For α = −1, M can be written as, The basic features of the figures 3b(i), 3b(ii) and 3b(iii) do match with the figures 3a(i), 3a(ii) and 3a(iii). So we conclude that the mass of the Van der Waal's black hole increases if the extended Chaplygin gas accretes onto the Van der Waal's black hole. Also, if we fix C and vary n, then increment of n decreases the value of the mass. However, if we fix n and vary C then increment of C increases the value of the mass. where φ 1 can be evalutated by expressing ρ as a function of scale factor ′ a ′ by using the equation (45). For n = 3 and α = 1 2 , M can be written as, where φ 2 can be evalutated by expressing ρ as a function of scale factor ′ a ′ by using the equation ( Discussions In this work, first we have considered the most general static spherically symmetric black hole metric. Then we have studied the accretion onto the Van der Waal's black hole and found some inequalities for physical validation. Next, we have analyzed the thermodynamics of accreting matter around the black hole. We can say that the mass is increasing, i.e.,Ṁ > 0 in quintessence era, however when we move towards the phantom barrier line, the rate of increment is slowing down, and in phantom era the mass of the black hole is decreasing, i.e.,Ṁ < 0, which is a point of interest. Finally, we have discussed about dark energy model such as extended Chaplygin gas and the nature of the universe's density. For special case of the modified Chaplygin gas, we have seen that the universe was infinitely dense at its beginning but when the scale factor has turned higher, the universe has started to grow in size. For α = −1 in extended Chaplygin gas, the density of accreting fluid is increasing with a steep slope firstly and then the slope is reduced down. In extended Chaplygin gas for n = 2, α = 1 2 and n = 3, α = 1 2 we obtain an identical nature of the accretion density, i.e., increment in scale factor causes a loss of the density of the accreting fluid. Another reason for decrease of density of the accreting dark energy is increasing mass of the central engine. Since in our solution of modified Chaplygin gas, this model generates only quintessence dark energy and so the Van der Waal's black hole mass increases during the whole evalution of the accelerating universe. 
We find that the strength of dark energy accretion process on Van der Waal's black hole lies between dark energy accretion on Schwarzschild black hole and adiabatic accretion on Schwarzschild black hole. We speculate that the mutual interactions of microstates of Van der Waal's black hole reduce the outward acting attracting power of it as compared to Schwarzschild black hole when both of them are to attract adiabatic fluid. But when the accreting fluid is Chaplygin gas, due to its negative pressure, Van der Waal's black hole attracts it more than the Schwarzschild one. The accreting power of Van der Waal's black hole lies somewhere between different extremities. It is not so high or not so fainted for different cases like Schwarzschild black hole.
2018-07-19T10:50:19.000Z
2018-02-21T00:00:00.000
{ "year": 2019, "sha1": "e4403a313cf9b78353df4b34b49d36d0bd44daa7", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1802.08553", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e4403a313cf9b78353df4b34b49d36d0bd44daa7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
221371876
pes2o/s2orc
v3-fos-license
Approximating morphological operators with part-based representations learned by asymmetric auto-encoders : This paper addresses the issue of building a part-based representation of a dataset of images. More precisely, we look for a non-negative, sparse decomposition of the images on a reduced set of atoms, in order to unveil a morphological and explainable structure of the data. Additionally, we want this decomposition to be computed online for any new sample that is not part of the initial dataset. Therefore, our solution relies on a sparse, non-negative auto-encoder, where the encoder is deep (for accuracy) and the decoder shallow (for explainability). This method compares favorably to the state-of-the-art online methods on two benchmark datasets (MNIST and Fashion MNIST) and on a hyperspectral image, according to classical evaluation measures and to a new one we introduce, based on the equivariance of the representation to morphological operators. Introduction Mathematical morphology is strongly related to the problem of data representation.Applying a morphological filter can be seen as a test on how well the analyzed element is represented by the set of invariants of the filter.For example, applying an opening by a structuring element B tells how well a shape can be represented by the supremum of translations of B. The morphological skeleton [18,24] is a typical example of description of shapes by a family of building blocks, classically homothetic spheres.It provides a disjunctive decomposition where components -for example, the spheres -can only contribute positively as they are combined by supremum.A natural question is the optimality of this additive decomposition according to a given criterion, for example its sparsity -the number of components needed to represent an object.Finding a sparse disjunctive (or part-based) representation has at least two important features: first, it allows saving resources such as memory and computation time in the processing of the represented object; secondly, it provides a better understanding of this object, as it reveals its most elementary components, hence operating a dimensionality reduction that can alleviate the issue of model over-fitting.Such representations are also believed to be the ones at stake in human object recognition [25]. Similarly, the question of finding a sparse disjunctive representation of a whole database is also of great interest and will be the main focus of the present paper.More precisely, we will approximate such a representation by a non-negative, sparse linear combination of non-negative components, and we will call additive this representation.Given a large set of images, our concern is then to find a smaller set of non-negative image components, called dictionary, such that any image of the database can be expressed as an additive combination of the dictionary components.As we will review in the next section, this question lies at the crossroad of two broader topics known as sparse coding and dictionary learning [17]. 
Besides a better understanding of the data structure, our approach is also more specifically linked to mathematical morphology applications.Inspired by recent work [1,28], we look for image representations that can be used to efficiently calculate approximations to morphological operators.The main goal is to be able to apply morphological operators to massive sets of images by applying them only to the reduced set of dictionary images.This is especially relevant in the analysis of remote sensing hyperspectral images where different kinds of morphological decomposition, such as morphological profiles [19] are widely used.For reasons that will be explained later, sparsity and non-negativity are sound requirements to achieve this goal.What is more, whereas the representation process can be learned o ine on a training dataset, we need to compute the decomposition of any new sample online.Hence, we take advantage of the recent advances in deep, sparse and non-negative auto-encoders to design a new framework able to learn part-based representations of an image database, compatible with morphological processing.To that extent, this work is part of the resurgent research line investigating interactions between deep learning and mathematical morphology [9,22,23,27,32].However with respect to these studies, focusing mainly on introducing morphological operators in neural networks, the present paper addresses a different question. The existing work on non-negative sparse representations of images is reviewed in Section 2, that stands as a baseline and motivation of the present study.Then we present in Section 3 new results about part-based approximations of morphological operators.The proposed model for part-based representation learning is described in Section 4, a preliminary version of which can be found in [20].Results on two image datasets (MNIST [13] and Fashion MNIST [29]) are discussed in Section 5, and we show how the proposed model compares to other deep part-based representations.An example on hyperspectral images is illutrated as well.We finally draw conclusions and suggest several tracks for future work.The code for reproducing our experiments is available online¹. Non-negative sparse mathematical morphology The present work finds its original motivation in [28], where the authors set the problem of learning a representation of a large image dataset to quickly compute approximations of morphological operators on the images.They find a good representation in the sparse variant of Non-negative Matrix Factorization (sparse NMF) [11], that we present hereafter. Consider a family of M images (binary or gray-scale) x (1) , x (2) , ..., x (M) of N pixels each, aggregated into a M × N data matrix X = (x (1) , x (2) , ..., x (M) ) T (the i th row of X is the transpose of x (i) seen as a vector).Given a feature dimension k ∈ N * and two numbers s H and s W in [0, 1], a sparse NMF of X with dimension k, as Figure 1: A subset of 30 images, extracted from a larger synthetic dataset of 1000 images, built as non-negative linear combinations of the five atom images of Figure 2(a).Although some images may look identical they are not, as the gray levels slightly differ. 
defined in [11], is any solution (H, W) of the problem

min_{H ∈ R^{M×k}, W ∈ R^{k×N}} ‖X − HW‖²_F  subject to  H ≥ 0, W ≥ 0, σ(H_{:,j}) = s_H and σ(W_{j,:}) = s_W for 1 ≤ j ≤ k,   (1)

where the second constraint means that both H and W have non-negative coefficients, and the third constraint imposes the degree of sparsity of the columns of H and lines of W respectively, with σ the function defined, for a non-zero vector v ∈ R^n, by

σ(v) = (√n − ‖v‖₁/‖v‖₂) / (√n − 1).   (2)

Note that σ takes values in [0, 1]. The value σ(v) = 1 characterizes vectors v having a unique non-zero coefficient, therefore the sparsest ones, and σ(v) = 0 the vectors whose coefficients all have the same absolute value. Hoyer [11] designed an algorithm to find at least a local minimizer for the problem (1), and it was shown that under fairly general conditions (and provided the L₂ norms of H and W are fixed) the solution is unique [26].

In representation learning, each row h^(i) of H is called the encoding or latent features of the input image x^(i), and W holds in its rows a set of k images called the dictionary. In the following, we will refer to the images w_j = W_{j,:} of the dictionary as atom images or atoms. As stated by Equation (1), the atoms are combined to approximate each image x^(i) := X_{i,:} of the dataset by an estimate x̂^(i), which writes as follows:

x̂^(i) = ∑_{j=1}^{k} h_{i,j} w_j,   (3)

where h_{i,j} is the coefficient at row i and column j in matrix H (see Figures 3 and 4 for illustration). The assumption behind this decomposition is that the more similar the images of the set, the smaller the required dimension to accurately approximate this set. Note that only k(N + M) values need to be stored or handled when using the previous approximation to represent the data, against the NM values composing the original data.

For illustration purposes, we propose a toy example. We generated a dataset of 1000 images of size 32×32 pixels, as non-negative linear combinations of the five atom images shown in Figure 2(a). We call this dataset the Rectangles dataset and show 30 samples of it in Figure 1. Here the matrix X counts M = 1000 rows and N = 32 × 32 = 1024 columns. We apply the sparse NMF algorithm to recover five atoms (stored in a non-negative matrix W ∈ R^{5×1024}_+) and 1000 encodings (stored in a non-negative matrix H ∈ R^{1000×5}_+) such that X̂ = HW approximates X well. The five recovered atoms are shown in Figure 2(b), and Figure 3 shows two examples of approximate non-negative reconstructions. Note that the excellent results here are due to the 1000 images of the Rectangles dataset being created precisely as sparse, non-negative combinations of only five, pairwise disjoint, atoms. As such, it is close to verifying the hypothesis under which the NMF yields a unique and accurate part-based representation of data [6]. In the remainder of the paper we will no longer work with this dataset and focus on more realistic data.

Figure 3: The leftmost images are the x^(i), the second column images are their approximations x̂^(i), the gray coefficients are the h_{i,j} and the other images are the five computed atoms w_j, 1 ≤ j ≤ 5, also shown in Figure 2(b).
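For readers who want to experiment with this decomposition, the following minimal NumPy sketch implements the sparseness measure of Equation (2) and the reconstruction of Equation (3). It does not reimplement Hoyer's sparse-NMF solver itself; the array shapes follow the M × N, M × k and k × N conventions used above, and the random toy data is only illustrative.

```python
import numpy as np

def hoyer_sparseness(v):
    """Sparseness measure of Equation (2): 1 for a single non-zero entry,
    0 when all entries have the same absolute value."""
    v = np.asarray(v, dtype=float).ravel()
    n = v.size
    l1, l2 = np.abs(v).sum(), np.sqrt((v ** 2).sum())
    if l2 == 0:
        return 0.0
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

def reconstruct(H, W):
    """Part-based reconstruction of Equation (3): each row of X_hat is a
    non-negative combination of the atom images stored in the rows of W."""
    return H @ W  # shape (M, N)

# toy usage: M = 1000 encodings of dimension k = 5, atoms of N = 1024 pixels
rng = np.random.default_rng(0)
H = np.abs(rng.normal(size=(1000, 5)))
W = np.abs(rng.normal(size=(5, 1024)))
X_hat = reconstruct(H, W)
print(hoyer_sparseness(H[0]), X_hat.shape)
```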
By choosing the sparse NMF representation, the authors of [28] aim at approximating a morphological operator ψ on the data X by applying it to the atom images W only, before projecting back into the input image space. That is, they want Φ(x^(i)) ≈ ψ(x^(i)), with Φ(x^(i)) defined by

Φ(x^(i)) = ∑_{j=1}^{k} h_{i,j} ψ(w_j),   (4)

where the h_{i,j} and w_j are the same as in Equation (3). The operator Φ in Equation (4) is called a part-based approximation to ψ. To understand why non-negativity and sparsity help this approximation to be a good one, we can point out a few key arguments. First, sparsity favors the supports of the weighted atom images to have little pairwise overlap. Secondly, a sum of images with disjoint supports is equal to their (pixel-wise) supremum. Finally, dilations commute with the supremum and, under certain conditions that are favored by sparsity, this also holds for the erosions. This will be developed in more detail in Section 3. For now, Figure 4 illustrates the part-based approximation D_B of the dilation δ_B by a structuring element B, expressed as:

D_B(x^(i)) = ∑_{j=1}^{k} h_{i,j} δ_B(w_j).   (5)

Deep auto-encoder approaches

The main drawback of the NMF algorithm is that it is an offline process, and the encoding of any new sample with regard to the previously learned basis W requires either solving a computationally extensive constrained optimization problem, or relaxing the non-negativity constraint by using the pseudo-inverse W⁺ of the basis. Some approaches proposed to overcome this shortcoming rely on deep learning, and especially on deep auto-encoders, which are widely used in the representation learning field and offer an online representation process [8,10,15]. An auto-encoder, as represented in Figure 5, is a model composed of two stacked neural networks, an encoder and a decoder, whose parameters are trained by minimizing a loss function. A common example of loss function is the mean square error (MSE) between the input images x^(i) and their reconstructions x̂^(i) by the decoder:

MSE = (1/M) ∑_{i=1}^{M} ‖x^(i) − x̂^(i)‖²₂.   (6)

Part-based approximation

In this framework, and when the decoder is composed of a single linear layer (possibly followed by a non-linear activation function), the model approximates the input images as:

x̂^(i) = f(W h^(i) + b),   (7)

where h^(i) is the encoding of the input image by the encoder network, b and W are respectively the bias and weights of the linear layer of the decoder, and f is the (possibly non-linear) activation function, applied pixel-wise to the output of the linear layer. The output x̂^(i) is called the reconstruction of the input image x^(i) by the auto-encoder. It can be considered as a linear combination of atom images, up to the addition of an offset image b and to the application of the activation function f. The images of our learned dictionary are hence the columns of the weight matrix W of the decoder. We can extend the definition of part-based approximation, described in Section 2.1, to our deep learning architectures, by applying the morphological operator to these atoms w₁, ..., w_k, as pictured by Figure 5. Note that a central question lies in how to set the size k of the latent space. This question is beyond the scope of this study and the value of k will be arbitrarily fixed (we take k = 100) in the following.

The NNSAE architecture (for Non-Negative Sparse Autoencoder), from Lemme et al.
[15], proposes a very simple and shallow architecture for online part-based representations using a linear encoder and decoder with tied weights (the weight matrix of the decoder is the transpose of the weight matrix of the encoder). Both the NCAE architecture (Non-negativity-Constrained Autoencoder), from Hosseini-Asl et al. [10], and the work from Ayinde et al. [2], which aims at extending it, drop this transpose relationship between the weights of the encoder and of the decoder, increasing the capacity of the model. These three networks enforce the non-negativity of the elements of the representation, as well as the sparsity of the image encodings, using various techniques.

Enforcing sparsity of the encoding

The most prevalent idea to enforce sparsity of the encoding in a neural network can be traced back to the work of H. Lee et al. [14]. This variant penalizes, through the loss function, a deviation S of the expected activation of each hidden unit (i.e. the output units of the encoder) from a low fixed level p. Intuitively, this should ensure that each of the units of the encoding is activated only for a limited number of images. The resulting loss function of the sparse auto-encoder is then:

L_sparse = MSE + β ∑_{j=1}^{k} S(p̂_j, p),   (8)

where p̂_j denotes the expected activation of the j-th hidden unit over the training images, the parameter p sets the expected activation objective of each of the hidden neurons, and the parameter β controls the strength of the regularization. The function S can be of various forms, which were empirically surveyed in [31]. The approach adopted by the NCAE [10] and its extension [2] both rely on a penalty function based on the KL-divergence between two Bernoulli distributions, whose parameters are the expected activation and p respectively, as used in [10]:

S(p̂_j, p) = p log(p/p̂_j) + (1 − p) log((1 − p)/(1 − p̂_j)).   (9)

The NNSAE architecture [15] introduces a slightly different way of enforcing the sparsity of the encoding, based on a parametric logistic activation function at the output of the encoder, whose parameters are trained along with the other parameters of the network.

Enforcing non-negativity of the decoder weights

For the NMF (Section 2.1) and for the decoder, non-negativity results in a part-based representation of the input images. In the case of neural networks, enforcing the non-negativity of the weights of a layer eliminates cancellations of input signals. In all the aforementioned works, the encoding is non-negative since the activation function at the output of the encoder is a sigmoid. In the literature, various approaches have been designed to enforce weight positivity. A popular approach is to use an asymmetric weight decay, added to the loss function of the network, to enact more decay on the negative weights than on the positive ones. However this approach, used in both the NNSAE [15] and NCAE [10] architectures, does not ensure that all weights will be non-negative. This issue motivated the variant of the NCAE architecture [2,15], which uses either the L₁ rather than the L₂ norm, or a smooth version of the decay using both the L₁ and the L₂ norms. The source code of this method being unavailable at the time the present work was done, we did not use this more recent version as a baseline for our study.
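As an illustration of the regularization of Equations (8)-(9), here is a small NumPy sketch of the KL-based sparsity penalty. In the actual networks this term is computed inside the training framework so that gradients flow through the mean activations; the default values of p and β below only echo the orders of magnitude explored later and are not prescriptive.

```python
import numpy as np

def kl_sparsity_penalty(H, p=0.05, eps=1e-8):
    """Sparsity regularizer of Equations (8)-(9): KL divergence between a
    Bernoulli(p) target and the mean activation p_hat_j of each hidden unit,
    summed over the k units. H has shape (batch, k), with values in (0, 1)."""
    p_hat = np.clip(H.mean(axis=0), eps, 1 - eps)  # expected activation per unit
    kl = p * np.log(p / p_hat) + (1 - p) * np.log((1 - p) / (1 - p_hat))
    return kl.sum()

def sparse_ae_loss(X, X_hat, H, p=0.05, beta=5e-4):
    """Total loss of Equation (8): reconstruction MSE plus beta times the penalty."""
    mse = np.mean((X - X_hat) ** 2)
    return mse + beta * kl_sparsity_penalty(H, p)
```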
Another type of approach consists in initializing the decoder weights with non-negative values and ensuring they remain so after each update during the optimization process. The simplest strategy, as implemented in the projected gradient descent [5], is to project the weights onto the positive orthant by setting negative components to zero. More recently, the exponentiated gradient descent was proposed as an alternative to the projected gradient descent [8]. The idea is to update the weights by multiplying them by a positive coefficient, which is an exponentially decreasing function of the partial derivative of the loss with respect to the weights. Although promising, the latter proposition does not include any sparsity constraint and the authors provide no quantitative measure on image reconstruction errors.

As far as non-negativity of weights is concerned, we may also mention [30], which uses an optimization process inspired by the NMF to satisfy the non-negative probability constraints of Random Neural Networks stacked in auto-encoders.

We will present in Section 4 our own auto-encoder solution for an online, non-negative and sparse representation of data, compatible with the approximation of morphological operators. In the next section we provide some mathematical insights on how non-negativity and sparsity are connected to such an approximation.

Equivariance of morphological operators to non-negative linear combinations

In this section we make precise the intuitions sketched in Section 2.1 about the part-based approximation of morphological operators. Let L be the complete lattice of images with N pixels and with values in [0, +∞], ordered by the Pareto ordering (x ≤ y iff for any q, 1 ≤ q ≤ N, x_q ≤ y_q). Consider a flat, extensive dilation δ_B on L and its adjoint anti-extensive erosion ε_B, B being a flat structuring element. Let x ∈ L be an image approximated by the non-negative combination x̂ = ∑_{j=1}^{k} h_j w_j of k atom images w₁, ..., w_k ∈ L. Following Equation (4), we define the part-based approximations of the four operators δ_B, ε_B, γ_B = δ_B ε_B and φ_B = ε_B δ_B as:

D_B(x̂) = ∑_{j=1}^{k} h_j δ_B(w_j),  E_B(x̂) = ∑_{j=1}^{k} h_j ε_B(w_j),  G_B(x̂) = ∑_{j=1}^{k} h_j γ_B(w_j),  F_B(x̂) = ∑_{j=1}^{k} h_j φ_B(w_j).

We focus on establishing whether these expressions approximate well their exact counterparts δ_B(x), ε_B(x), γ_B(x) and φ_B(x), assuming x is well approximated by x̂ = ∑_{j=1}^{k} h_j w_j = Wh. It is likely to be so as soon as these part-based approximations coincide with the operators applied to x̂, which is to say as soon as the four operators commute with the non-negative linear application h ↦ Wh = ∑_{j=1}^{k} h_j w_j, where W = [w₁, ..., w_k]. As sketched earlier, sums can be identified with suprema if the involved images have disjoint supports, and this also favors the commutation of the erosion with the supremum. This is why we introduce the following hypothesis, which characterizes the disjunction of the supports (i.e. the regions where the image is non-zero) of the h_j w_j.

Let H₁ denote the hypothesis:

H₁: for any i ≠ j, δ_B(h_i w_i) ⋀ δ_B(h_j w_j) = 0,

where 0 denotes an image equal to zero everywhere (i.e. with empty support), and more generally, for an integer n,

Hn: for any i ≠ j, δ_nB(h_i w_i) ⋀ δ_nB(h_j w_j) = 0,

where δ_nB denotes the n-fold iteration of δ_B, which is the identity for n = 0. Note that, since δ_B is extensive, Hn implies any Hp with p ≤ n. In particular, any Hn implies H₀, which simply states the disjunction of the supports of any two images h_i w_i and h_j w_j, i ≠ j. We can now state the following result:

Proposition 1. If H₁ holds for the representation x̂ = ∑_{j=1}^{k} h_j w_j, then:

D_B(x̂) = δ_B(x̂),  E_B(x̂) = ε_B(x̂),  G_B(x̂) = γ_B(x̂).

If additionally H₂ holds, then we also have:

F_B(x̂) = φ_B(x̂).

A proof of this result is detailed in Appendix A.
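The statement of Proposition 1 can be checked numerically on a toy configuration. The sketch below uses SciPy's flat grey-scale dilation and erosion with a 3×3 square structuring element (an illustrative choice, not the one used later in the experiments) and two components whose supports are far enough apart for H₁ and H₂ to hold.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

# Two flat components with well-separated supports, so H1 and H2 hold for a 3x3 square B.
x1 = np.zeros((32, 32)); x1[4:9, 4:9] = 0.7       # plays the role of h1 * w1
x2 = np.zeros((32, 32)); x2[20:27, 18:26] = 0.4   # plays the role of h2 * w2
x_hat = x1 + x2
B = (3, 3)  # flat 3x3 structuring element

# Dilation commutes with the sum because the dilated supports stay disjoint.
d_exact = grey_dilation(x_hat, size=B)
d_part = grey_dilation(x1, size=B) + grey_dilation(x2, size=B)
print(np.abs(d_exact - d_part).max())   # 0.0 under H1

# Same check for the erosion, which Proposition 1 also covers under H1.
e_exact = grey_erosion(x_hat, size=B)
e_part = grey_erosion(x1, size=B) + grey_erosion(x2, size=B)
print(np.abs(e_exact - e_part).max())   # 0.0 under H1
```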
Proposition 1 implies that under the Hn hypothesis the error || B (x) − Φ B (x)|| 2 between the actual transformed image and its part-based approximation only depends on the quality of the reconstruction, that is to say on the error ||x and so on.Obviously, the more constrained the representation, the smaller the class of images that can be accurately represented.The non-negativity and sparsity constraints are therefore likely to increase the representation error ||x − x|| 2 .Hence, unless the data can be perfectly represented by non-negative combinations of atoms complying with a hypothesis Hn, a trade-off needs to be found to achieve a good approximation of morphological operators.This is the target of our asymmetric auto-encoder presented in Section 4. We shall now generalize Proposition 1 by applying it to the representation that we note x(n−1) = ∑︀ k j=1 h j δ (n−1)B (w j ).Notice that H 1 holds for x(n−1) if and only if Hn holds for x.This yields the following corollary. Corollary 1. If Hn holds for the representation x = ∑︀ k j=1 h j w j , then for any integer p ≤ n: and for any integer p ≤ n − 1 Remarks Choice of the complete lattice L. At the beginning of this section we chose L as the complete lattice of images with N pixels and with values in [0, +∞] ordered by the Pareto ordering.However, in practice we deal more commonly with images whose values are in a bounded interval such as [0, 1].The previous results still hold in the latter case, provided we add the hypothesis h j ∈ [0, 1].More generally, we only need to make sure that w ∈ L ⇒ hw ∈ L. Interpretation of Hn.The hypothesis Hn, n ≥ 0, characterizes the degree of disjunction of the supports of the h j w j involved in the part-based approximation of an image x.The dilation δ B being extensive, the degree of disjunction, intended as distance between supports of the initial images, "increases" with n.Note that no assumption is made on the disjunction of the whole set of atom images w j , but only on those atoms that are used in the approximation of x, in other words the w j weighted by a positive h j .This helps realize that the number of atoms used to approximate an image matters.In the limit case where only one atom is used, Hn is verified for any n.By contrast, if as many as N atoms contribute to the approximation, then even H 1 becomes impossible.In the context of the representation of a large dataset, the ideal case seems to be when every image is well approximated by few atoms, as disjoint as possible.This indicates that the Hn are not unrealistic hypotheses in practice, provided a sparse part-based representation approximates well the data, and nB is small enough compared to the supports of the atoms. How necessary is Hn?The proof of Proposition 1 mainly stands on points 3 and 5 (see Appendix A).Therefore, we may ask whether the hypothesis δ B (x) ⋀︀ δ B (y) = 0 is necessary to have δ B (x + y) = δ B (x) + δ B (y) and ε B (x + y) = ε B (x) + ε B (y), which comes down to questioning the necessity of H 1 and H 2 in Proposition 1, or Hn in the corollary.The answer is they are not necessary in general.For example, for any increasing function g : R + → R + and y = [g(x 1 ), . . 
., g(x N )] such that x + y ∈ L, we do have δ B (x + y) = δ B (x) + δ B (y) and ε B (x + y) = ε B (x) + ε B (y).However, if we consider rather "independent" components, it is easy to build fairly general configurations where a certain degree of disjunction is necessary.In particular, as shown in the examples of Figures 6 and 7, a simple disjunction (corresponding to H 0 ) is not sufficient in general.This section was meant to precise mathematically the role played by sparsity and non-negativity in the part-based approximation of morphological operators.Motivated by previous approaches described in Section 2.2, we present in the next section our proposed auto-encoder, designed to achieve the desired trade-off between explainability, accuracy of the data reconstruction and accuracy of the approximation of morphological operators. Proposed model We propose an online part-based representation learning model, using an asymmetric auto-encoder with sparsity and non-negativity constraints.As pictured in Figure 8, our architecture is composed of two networks: a deep encoder and a shallow decoder (hence the asymmetry and the name of AsymAE we chose for our architecture).The encoder network is based on the discriminator of the infoGAN architecture introduced in [4], which was chosen for its average depth, its use of widely adopted deep learning components such as batch-normalization [12], 2D-convolutional layers [7] and leaky-RELU activation function [16].It has been designed specifically to perform interpretable representation learning on datasets such as MNIST and Fashion-MNIST.The network can be adapted to fit larger images.The decoder network is similar to the one presented in Figure 5.A Leaky-ReLU activation has been chosen after the linear layer.Its behavior is the same as the identity for positive entries, while it multiplies the negative ones by a fixed coefficient α lReLU = 0.1.This activation function has shown the best performances in similar architectures [16].The sparsity of the encoding is achieved using the same approach as in [2,10], that consists in adding to the previous loss function the regularization term described in Equations ( 8) and ( 9).We only enforced the non-negativity of the weights of the decoder, as they define the dictionary of images of our learned representation and as enforcing the non-negativity of the encoder weights would bring nothing but more constraints to the network and lower its capacity.Similarly to [5], we enforced this non-negativity constraint explicitly by projecting our weights on the nearest points of the positive orthant after each update of the optimization algorithm (such as the stochastic gradient descent).The main asset of this other method that does not use any additional penalty functions, and which is quite similar to the way the NMF enforces non-negativity, is that it ensures positivity of all weights without the cumbersome search for good values of the parameters of the various regularization terms in the loss function. Experiment 1 on MNIST and Fashion MNIST To demonstrate the goodness and drawbacks of our method, we have conducted experiments on two wellknown datasets MNIST [13] and Fashion MNIST [29].These two datasets share common features, such as the size of the images (28 × 28), the number of classes represented (10), and the total number of images (70000), divided into a training set of 60000 images and a test set of 10000 images. 
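To make the architecture of Section 4 concrete, here is a minimal PyTorch sketch of an asymmetric auto-encoder with a convolutional encoder, a single linear decoder followed by a Leaky-ReLU, and an explicit projection of the decoder weights onto the positive orthant. The exact encoder used in the experiments follows the infoGAN discriminator [4]; the two-convolution encoder below and its layer sizes are illustrative assumptions, not the configuration actually trained.

```python
import torch
import torch.nn as nn

class AsymAE(nn.Module):
    """Sketch of the asymmetric auto-encoder: deep encoder, shallow linear decoder.
    The columns of the decoder weight matrix play the role of the k atom images."""
    def __init__(self, k=100, img_size=28):
        super().__init__()
        self.img_size = img_size
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.1),
            nn.Flatten(),
            nn.Linear(128 * (img_size // 4) ** 2, k),
            nn.Sigmoid(),                       # encodings in (0, 1), hence non-negative
        )
        self.decoder = nn.Linear(k, img_size * img_size)  # columns of weight = atoms
        self.act = nn.LeakyReLU(0.1)            # identity on positive outputs

    def forward(self, x):
        h = self.encoder(x)
        x_hat = self.act(self.decoder(h))
        return x_hat.view(x.size(0), 1, self.img_size, self.img_size), h

    def project_decoder_weights(self):
        """Projection onto the positive orthant, as in projected gradient descent [5]."""
        with torch.no_grad():
            self.decoder.weight.clamp_(min=0.0)
```

In a training loop, project_decoder_weights() would be called after each optimizer step, and the loss would combine the MSE of Equation (6) with the sparsity penalty of Equations (8)-(9).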
Setting the parameters

For our AsymAE algorithm, we studied the effect of the sparsity objective p and regularization weight β in the loss function in Equation (8). In Figure 9 we present the results of the proposed approach on the Fashion-MNIST dataset. The maximum of the sparsity measure was reached with the sparsity parameters p = 0.01 and β = 0.01, whose atoms are shown in Figure 10e. It appears that these atoms are closer to full clothes shapes than to parts. A possible interpretation is that, as the sparsity constraint gets stronger, the model is pushed to the limit where an atom should be involved in the reconstruction of a proportion p of the training images, that is approximately p · M images. When the number k of atoms is much smaller than p · M (which is the case for k = 100, p = 0.01 and M = 60000), each atom needs to be shared by a whole subset of images as their unique (or almost unique) representative. The model is therefore performing some sort of k-means clustering, each atom being a barycenter of a subgroup of the training set.

In Figure 10 we show examples of atom images for other values of the sparsity parameters. The representations shown in Figures 10b, 10c and 10d are quite close to a part-based representation, even though the supports of the atom images are less disjoint than they would be in an ideal part-based representation, such as the sparse NMF, whose atom images are very neat. From this visual inspection as well as the plots of Figure 9, we found that a better trade-off seems to be reached for the values p = 0.05 and β = 0.0005 in the case of the Fashion-MNIST dataset. A similar study led to choosing p = 0.05 and β = 0.001 for the MNIST dataset.

Comparison to state-of-the-art methods

We compared our method to three baselines: the sparse-NMF [11], the NNSAE [15], and the NCAE [10]. The three deep-learning models (the proposed AsymAE, NNSAE and NCAE) were trained until convergence on the training set and evaluated on the test set. The sparse-NMF algorithm was run and evaluated on the test set. Note that all models but the NCAE may produce reconstructions that do not fully belong to the interval [0, 1]. In order to compare the reconstructions and the part-based approximations produced by the various algorithms, their outputs will be clipped between 0 and 1. There is no need to apply this operation to the output of the NCAE, as a sigmoid activation enforces the output of its decoder to belong to [0, 1]. We used three measures to conduct this comparison:

- the reconstruction error, that is the pixel-wise mean squared error between the input images x^(i) of the test dataset and their reconstruction/approximation x̂^(i): (1/M) ∑_{i=1}^{M} (1/N) ∑_{j=1}^{N} (x^(i)_j − x̂^(i)_j)²;
- the sparsity of the encoding, measured using the mean on all test images of the sparsity measure σ in Equation (2): (1/M) ∑_{i=1}^{M} σ(h^(i));
- the approximation error to the dilation by a disk of radius one, obtained by computing the pixel-wise mean squared error between the dilation δ_B by a disk of radius one of the original image and the part-based approximation D_B to the same dilation, using the learned representation: (1/M) ∑_{i=1}^{M} (1/N) ‖δ_B(x^(i)) − D_B(x^(i))‖²₂.

The parameter settings used for the NCAE and the NNSAE algorithms are the ones provided in [10,15]. For the sparse-NMF, a sparsity constraint of S_h = 0.6 was applied to the encodings and no sparsity constraint was applied on the atoms of the representation. The quantitative results (Table 1) and the reconstruction images (Figure 11) demonstrate the capacity of our model to reach a better trade-off between the accuracy of the reconstruction and the sparsity of the encoding
(that usually comes at the expense of the former criteria), than the other neural architectures.Indeed, in all conducted experiments, varying the parameters of the NCAE and the NNSAE as an attempt to increase the sparsity of the encoding came with a dramatic increase of the reconstruction error of the model.We failed however to reach a trade-off as good as the sparse-NMF algorithm that manages to match a high sparsity of the encoding with a low reconstruction error, especially on the Fashion-MNIST dataset.The major difference between the algorithms can be seen in Figure 12 that pictures 16 of the 100 atoms of each of the four learned representations.While sparse-NMF manages, for both datasets, to build highly explainable and clean part-based representations, the two deep baselines build representations that picture either too local shapes, in the case of the NNSAE, or too global ones, in the case of the NCAE.Our method suffers from quite the same issues as the NCAE, as almost full shapes are recognizable in the atoms.We noticed through experiments that increasing the sparsity of the encoding leads to less and less local features in the atoms.It has to be noted that the L 2 Asymmetric Weight Decay regularization used by the NCAE and NNSAE models allows for a certain proportion of negative weights.As an example, up to 32.2% of the pixels of the atoms of the NCAE model trained on the Fashion-MNIST dataset are negative, although their amplitude is lower than the average amplitude of the positive weights.The amount of negative weights can be reduced by increasing the corresponding regularization, which comes at the price of an increased reconstruction error and less sparse encodings.Finally Figure 13 pictures the part-based approximation to dilation by a structuring element of size one, computed using the four different approaches on ten images from the test set.Although the quantitative results state otherwise, we can note that our approach yields an interesting part-based approximation, thanks to a good balance between a low overlapping of atoms (and dilated atoms) and a good reconstruction capability. 
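The three comparison measures used above can be computed along the following lines. This is a hedged sketch in which the structuring element is a cross-shaped discrete disk of radius one and all model outputs are clipped to [0, 1], as described above; it is not the exact evaluation script of the experiments.

```python
import numpy as np
from scipy.ndimage import grey_dilation

CROSS = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)  # discrete disk of radius one

def _sparseness(v):
    n, l1, l2 = v.size, np.abs(v).sum(), np.linalg.norm(v)
    return 0.0 if l2 == 0 else (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

def evaluation_measures(X, X_hat, H, W):
    """Reconstruction MSE, mean Hoyer sparseness of the encodings, and MSE between
    the exact dilation and its part-based approximation D_B (Equation (5)).
    Images are stored as rows of length N; square images are assumed."""
    side = int(np.sqrt(X.shape[1]))
    dil = lambda img: grey_dilation(img.reshape(side, side), footprint=CROSS).ravel()
    X_hat = np.clip(X_hat, 0.0, 1.0)
    rec_err = np.mean((X - X_hat) ** 2)
    sparsity = np.mean([_sparseness(h) for h in H])
    D_B = np.clip(H @ np.stack([dil(w) for w in W]), 0.0, 1.0)  # part-based approximation
    dil_err = np.mean((np.stack([dil(x) for x in X]) - D_B) ** 2)
    return rec_err, sparsity, dil_err
```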
Experiment 2: the Pavia University hyperspectral image

In order to test our approach on more realistic and complex data, we carried out an experiment on the Pavia University hyperspectral image², of spatial size 610 × 340 pixels and containing M = 103 spectral bands (Figure 14). For memory issues, and in order to take advantage of the previous experiment, we divided each channel image into 9 × 5 = 45 non-overlapping 64 × 64 patches, covering 576 × 320 pixels starting from the top left-hand corner. The database thus counted 45 × 103 = 4635 patches, which we split into a training set and a test set by dedicating a fixed proportion ρ ∈ [0, 1] of the spectral bands to the training. This means the patches of a given spectral band were all assigned to the training set or all to the test set. What is more, the spectral bands assigned to the test set were sampled regularly (not randomly). We trained on these data the asymmetric auto-encoder presented earlier, with the same latent dimension (k = 100), the same parameter p = 0.05 but a larger β = 0.005. For comparison, as before, we also trained the sparse-NMF [11] and the NCAE [10] model, with the same parameters as before (those suggested by the authors). Despite all our attempts, we did not succeed in training the NNSAE [15] model to achieve sufficiently good performances so as to be interestingly compared to the other models. This might be a limitation of the model but could also be a misunderstanding on our part on how to set its parameters properly. We decided anyway not to report the obtained results, which were well below those presented hereafter. The two other deep-learning models were trained until they reached a reconstruction error of approximately 10⁻³ on the test set. Regarding the sparse NMF, we observed that both the reconstruction error and the sparsity of the encoding could be easily controlled, and high quality results could be achieved that were out of reach for the online methods, at least during the tests we ran. Therefore, the sparse NMF shall be considered as a reference for the online methods, and this is why here we decided to apply it a posteriori to the whole dataset (training set and test set), targeting the best performance of the online models: a reconstruction error of approximately 10⁻³ and a sparsity of the encoding of approximately 0.7 (we set S_h = 0.7). In this comparison, the training set represented ρ = 6/7 of the whole set of patches.

Since the present experiment applies to richer data, the methods are compared on the four basic morphological operators (dilation, erosion, opening and closing) with several sizes of structuring elements. In order to enhance the differences across methods, we present the quality of the morphological approximations through the Peak Signal to Noise Ratio (PSNR), defined here by

PSNR = −10 log₁₀(MSE),   (11)

where MSE is the pixel-wise mean squared error between the actual morphological operator and its part-based approximation. We recall that this comparison was made among models achieving a similar reconstruction error of the original images (≈ 10⁻³) and a similar level of sparsity of the encoding (0.72 for our AsymAE, 0.75 for the NCAE and the Sparse NMF). The plots of Figure 15 and Table 2 sum up the results, whereas Figures 16-20 provide visual examples for a structuring element of size three.
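Equation (11) translates directly into code; returning infinity when the error is exactly zero is an implementation choice, not something specified above.

```python
import numpy as np

def psnr(exact, approx):
    """Peak signal-to-noise ratio of Equation (11); higher is better."""
    mse = np.mean((np.asarray(exact, dtype=float) - np.asarray(approx, dtype=float)) ** 2)
    return float("inf") if mse == 0 else -10.0 * np.log10(mse)
```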
In general, the Sparse NMF achieves the best part-based approximations and our model (AsymAE) is the best online method. This is the case except for the erosions of sizes two and onward, and the openings of sizes three and onward, where the NCAE achieves better PSNRs than the AsymAE (and sometimes even than the Sparse NMF). This seems surprising, as the visual examples for a structuring element of size three (Figures 16-20) do not show a better accuracy for the NCAE. These exceptions might have the same cause as the U-shape of the erosion curve, which we observe for all methods: for darker images, such as the erosions and openings by large structuring elements, the PSNR tends to favor over-dark approximations. In the limit case, it seems that approximating such dark images by a constant zero-valued image yields a better PSNR than an approximation which would try to keep some structure.

As for the atom images, shown in Figures 21-23, they might not correspond to the intuition of a part-based representation, as their supports are quite extended. However, there seems to be approximately one scale represented per atom, as in a granulometry decomposition, which is also a possible approximation of a part-based representation. Furthermore, we note that the NCAE's atoms are the noisiest whereas the Sparse NMF's are the least noisy.

Another important remark is that the Sparse NMF could achieve even better results, but it still has the drawback of being an offline method. By contrast, it is remarkable that both the NCAE and the AsymAE maintain almost exactly the same performance when we reduce the relative size of the training set down to ρ = 0.5. We do not report the results here as the difference with the presented ones is negligible. This shows the great interest of having a good online model when the training set is statistically representative of the whole data.

Conclusions and future works

We have presented an online method to learn a part-based dictionary representation of an image dataset, designed for accurate and efficient approximations of morphological operators. This method relies on auto-encoder networks, with a deep encoder for a higher reconstruction capability and a shallow linear decoder for a better interpretation of the representation. Among the online part-based methods using auto-encoders, it achieves the state-of-the-art trade-off between the accuracy of reconstructions and the sparsity of image encodings. Moreover, it ensures a strict (that is, non-approximated) non-negativity of the learned representation. These results would need to be confirmed on color images, as the proposed model is scalable, but the illustration on the hyperspectral image already shows the potential use of the proposed approach in real applications. We especially evaluated the learned representation on an additional criterion, that is the commutation of the representation with morphological operators, and noted that all online methods perform worse than the offline sparse-NMF algorithm. A possible improvement would be to impose a greater sparsity on the dictionary images with an appropriate regularization. Additionally, using a morphological layer [3,21,32] as a decoder may be more consistent with our definition of part-based approximation, since a representation in the (max, +) algebra would commute with the morphological dilation by essence.

Appendix A (fragment of the proof of Proposition 1)

Then ε_B(x ∨ y)_i = 0 = ε_B(x)_i = ε_B(y)_i = (ε_B(x) ∨ ε_B(y))_i. Case 2: x_i > 0.
Then for any j ∈ B_i, y_j = 0; otherwise there would be j₀ ∈ B_i such that y_{j₀} > 0 and therefore δ_B(y)_{j₀} ≥ y_{j₀} > 0; since i ∈ B_{j₀} we would also have δ_B(x)_{j₀} ≥ x_i > 0, yielding δ_B(x)_{j₀} ∧ δ_B(y)_{j₀} > 0, which contradicts the initial hypothesis. We just showed x_i > 0 ⇒ ∀j ∈ B_i, y_j = 0, which also implies y_i = 0 and ε_B(y)_i = 0. As a consequence, ∀j ∈ B_i, x_j ≥ y_j, which leads to ε_B(x ∨ y)_i = ⋀_{j∈B_i} (x_j ∨ y_j) = ⋀_{j∈B_i} x_j = ε_B(x)_i = (ε_B(x) ∨ ε_B(y))_i.

With the five points listed here above, the conclusions of Proposition 1 are straightforward. Assuming H₁ is true:

- D_B(x̂) = ∑_{j=1}^{k} h_j δ_B(w_j) = ∑_{j=1}^{k} δ_B(h_j w_j) = δ_B(∑_{j=1}^{k} h_j w_j) = δ_B(x̂), where point 1 was applied in the second equality and point 3 in the third equality. The first and last equalities are definitions.
- E_B(x̂) = ∑_{j=1}^{k} h_j ε_B(w_j) = ∑_{j=1}^{k} ε_B(h_j w_j) = ε_B(∑_{j=1}^{k} h_j w_j) = ε_B(x̂), where point 1 was applied in the second equality and point 5 in the third equality. The first and last equalities are definitions.
- G_B(x̂) = ∑_{j=1}^{k} h_j γ_B(w_j) = ∑_{j=1}^{k} h_j δ_B(ε_B(w_j)) = ∑_{j=1}^{k} δ_B(ε_B(h_j w_j)) = δ_B(∑_{j=1}^{k} ε_B(h_j w_j)) = δ_B(ε_B(∑_{j=1}^{k} h_j w_j)) = γ_B(x̂), where point 1 was applied (twice) in the third equality, point 3 was applied to the ε_B(h_j w_j) in the fourth equality, since the ε_B(h_j w_j) verify H₁ as the h_j w_j do and ε_B(h_j w_j) ≤ h_j w_j; and the fifth equality is given by point 5. The other equalities are definitions.
- F_B(x̂) = ∑_{j=1}^{k} h_j φ_B(w_j) = ∑_{j=1}^{k} h_j ε_B(δ_B(w_j)) = ∑_{j=1}^{k} ε_B(δ_B(h_j w_j)), where point 1 was applied (twice) in the third equality. If the δ_B(h_j w_j) comply with H₁, or equivalently if H₂ is true, then point 5 applies and we get F_B(x̂) = ε_B(∑_{j=1}^{k} δ_B(h_j w_j)) = φ_B(x̂).

Figure 2: (a) The five atom images used to build a dataset of 1000 images such as those of Figure 1. (b) Computed atoms by the sparse NMF of the latter dataset. Up to a permutation in indexing, the computed atoms are very similar (but not strictly identical) to the original ones.

Figure 4: Process for computing the part-based approximation to dilation, based on Equations (3) and (5).

Figure 5: The auto-encoding process and the definition of part-based approximation to a morphological operator in this framework.

Figure 6: An example of non-equivariance of the dilation to non-negative linear combination. (a) The components h₁W₁ and h₂W₂ are piece-wise constant, equal to h₁ > 0 (in green) and h₂ > 0 (in red) respectively, where they are non-zero. (b) Dilation of the sum δ_B(h₁W₁ + h₂W₂), where B is the cross structuring element shown in blue. The color yellow represents the value h₁ ∨ h₂. (c) Sum of the dilations δ_B(h₁W₁) + δ_B(h₂W₂). The color purple represents the value h₁ + h₂, which is larger than h₁ ∨ h₂. Thus although the two components do not overlap (H₀ holds), (b) and (c) are not equal.

Figure 9: Some evaluation measures for sparse non-negative asymmetric auto-encoders for various parameters of the sparsity regularization, using a test set not used to train the network. (a) Reconstruction error as a function of the parameter β (sparsity penalty strength). (b) Sparsity measure (Hoyer 2004) as a function of β. (c) Max-approximation error to dilation (of the original images) as a function of β. (d) Max-approximation error to dilation (of the reconstructed images) as a function of β.

Figure 10: Some atoms (out of the 100 atoms) of various versions of the proposed asymmetric auto-encoder.

Figure 13: Part-based approximation of the dilation by a structuring element of size one (first row), computed using the sparse-NMF, the NNSAE, the NCAE and the AsymAE.

Figure 14: Four bands of the Pavia University hyperspectral image and two examples of patch per band.

Figure 15: Quality of the approximation of different morphological operators on the test set (full lines) and training set (dashed lines), depending on the size of the structuring element (always a discrete disk). The quality of the approximation is expressed by the peak signal to noise ratio (PSNR, as defined by Eq. (11)). Higher is better. The size 0 corresponds to the identity operator, showing therefore the reconstruction PSNR of the corresponding method. The results on training and test sets are almost identical. Here the proportion of the training set is ρ = 6/7. The figures are also shown in Table 2.

Figure 16: Examples of test patches (first row) and their reconstructions computed using the sparse-NMF, the NCAE and the AsymAE (from top to bottom).

Figure 17: Part-based approximation of the dilation by a structuring element of size three. First row: dilation by a disc B of radius three (same patches as in Figure 16); following rows: approximation using the sparse-NMF, the NCAE and the AsymAE (from top to bottom).

Figure 18: Part-based approximation of the erosion by a structuring element of size three. First row: erosion by a disc B of radius three; following rows: approximation using the sparse-NMF, the NCAE and the AsymAE (from top to bottom).

Figure 19: Part-based approximation of the opening by a structuring element of size three. First row: opening by a disc B of radius three; following rows: approximation using the sparse-NMF, the NCAE and the AsymAE (from top to bottom).

Figure 20: Part-based approximation of the closing by a structuring element of size three. First row: closing by a disc B of radius three; following rows: approximation using the sparse-NMF, the NCAE and the AsymAE (from top to bottom).

Figure 21: Examples of atoms for the sparse NMF in the experiment on the Pavia University image.

Figure 22: Examples of atoms for the NCAE auto-encoder in the experiment on the Pavia University image (proportion of the training set: ρ = 6/7).

Figure 23: Examples of atoms for the AsymAE auto-encoder in the experiment on the Pavia University image (proportion of the training set: ρ = 6/7).

Table 1: Comparison of the reconstruction error, sparsity of encoding and part-based approximation error to dilation produced by the sparse-NMF, the NNSAE, the NCAE and the AsymAE, for both MNIST and Fashion-MNIST datasets.

Table 2: Peak signal to noise ratio (PSNR, as defined by Eq. (11)) for the approximation of morphological operators on the test set for different models and different sizes of structuring elements (disks). Higher is better. The size zero corresponds to the identity operator, showing therefore the reconstruction PSNR of the model. The figures can also be visualized in the plots of Figure 15.
2020-08-27T09:12:51.149Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "502109b3703869f9936efc38768789fc310385bf", "oa_license": "CCBY", "oa_url": "https://www.degruyter.com/document/doi/10.1515/mathm-2020-0102/pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "b2633ecde03ca7d64b9daabbfb3a457d928a2a47", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
259305550
pes2o/s2orc
v3-fos-license
Special Issue “Genomics of Fungal Plant Pathogens”

Plant diseases can be classified according to pathogenic organisms, and 70-80% of them are fungal diseases [...].

Ultimately, one of the direct ways to determine how a gene (and the protein it encodes) functions in cellular processes is to see what happens to the organism when that gene is lacking or overexpressed. Genes are contained within the nucleus, mitochondria and chloroplasts (in plants) in eukaryotes. In this Special Issue, the functions of two mitochondrial genes in Botrytis cinerea and Fusarium graminearum were elucidated. One is mitochondrial transport protein (MTP), which catalyzes the transport of biochemical substances across the mitochondrial inner membrane. Shao et al. [7] generated the Bcmtp1 mutant of B. cinerea. The results demonstrate that BcMTP1 is involved in the regulation of the vegetative growth, asexual reproduction, stress tolerance and virulence of B. cinerea. Han et al. [8] investigated F. graminearum mitochondrial porin, or voltage-dependent anion-selective channels, which regulate the complex interactions associated with organellar and cellular metabolism. The authors generated the Fgporin mutant in F. graminearum and characterized the function of FgPorin. The results showed that FgPorin is involved in the regulation of fungal hyphal growth, conidiation, sexual reproduction, virulence on wheat and autophagy.

The small Rho GTPase family regulates most fundamental processes of eukaryotic cells, including (but not limited to) morphogenesis, polarity, movement, cell division, gene expression and cytoskeleton reorganization [9,10]. In order to elucidate the function of the small GTPase MoRho3 in M. oryzae, Li et al. [11] performed comparative transcriptomic analysis of a MoRho3 constitutively active mutant (MoRho3-CA) and a MoRho3 dominant-negative mutant (MoRho3-DN). In MoRho3-CA vs. WT, about 874 up-regulated differentially expressed genes (DEGs) were detected, and these DEGs were significantly enriched in the ribosome biogenesis pathway, while 1511 down-regulated DEGs were also detected and were enriched in different amino acid and chemical metabolism pathways. Meanwhile, in MoRho3-DN vs. WT, the authors detected 986 up-regulated DEGs, which were enriched in genes associated with some metabolic pathways, ABC transporters and regulators of autophagy. They similarly detected 1215 down-regulated DEGs, which were enriched in genes associated with some selected metabolic pathways. This reveals that MoRho3 plays crucial roles in ribosome biogenesis and protein secretion. In another study, Zheng et al. [12] used another phytopathogenic fungus, Fusarium odoratissimum, to elucidate the function of the small GTPase FoSec4. The results also showed that FoSec4 plays a crucial role in vegetative growth, reproduction, pathogenicity and response to environmental stress in F. odoratissimum.

Sorting nexins (SNX) are a highly conserved and diverse family of cellular trafficking proteins that confer a wide variety of functions, including signal transduction, membrane deformation and cargo binding. Moreover, sorting nexins are key modulators of endosome dynamics and autophagic functions [13]. In this Special Issue, Yu et al. [14] generated Chsnx4 and Chsnx41 mutants and elucidated their functions in Cochliobolus heterostrophus.
The results demonstrated that both ChSNX4 and ChSNX41 are involved in regulating vegetative growth, asexual reproduction, appressorium formation, oxidative stress, adaptation to antifungal agents and virulence of C. heterostrophus. These phenotypes are similar to those characterized in M. oryzae and F. graminearum [15][16][17][18][19]. This indicates that, at the very least, the morphological functions of SNX4 and SNX41 are similar in phytopathogenic fungi. Members of the Glycosyltransferase 2 (GT2) family include cellulose synthase, chitin synthase, glycosyl-transferase, mannosyltransferase, galactosyltransferase, rhamnosyltransferase, etc. [20]. Glycosyltransferases (GT) play crucial roles in fungal biosynthesis pathways, including fungal cell wall synthesis, and many glycosyltransferases are unique to these fungi [21]. In this Special Issue, Blandenet et al. [22] generated the membrane protein glycosyl-transferase BcCps1 deletion mutant in B. cinerea and elucidated the functions of BcCps1. The results indicate that BcCps1 is essential for the mycelial growth, sexual reproduction, stress tolerance and cell wall biosynthesis of B. cinerea. These phenotypes are also similar to those observed in M. oryzae, F. graminearum, F. verticillioides and Zymoseptoria tritici [23][24][25], indicating that the morphological functions of Cps1 are similar in phytopathogenic fungi. The regulator family of protein, zinc binuclear cluster proteins (Zn(II)2Cys6), are unique to fungi. Zn(II)2Cys6 play crucial roles in fungal development, carbon and nitrogen utilizations, secondary metabolites biosynthesis, stress response, virulence, chromatin remodeling and so on. In this Special Issue, Bansal et al. [26] used the Zn(II)2Cys6 coding sequences from nine ascomycetes phytopathogenic fungal species and yeast to analyze their composition and codon usage bias patterns. The nine fungal species were divided into two major groups based on their zinc binuclear cluster coding sequences, and the phytopathogenic fungal species in cluster-1 (B. maydis, B. oryzae, Alternaria alternate, F. graminearum and Aspergillus flavus) showed a lower number of GC-rich high-frequency codons than the species in cluster-2 (Gaeumannomyces tritici, P. oryzae, Colletotrichum graminicola and Verticillium dahliae), while C. cerevisiae tends to be AT-rich. The presence of Zn(II)2Cys6 GC-rich codons could facilitate the invasion process. The results also showed that specific codons and sequences can modulate the interaction between a host and pathogen through genome editing functional genomics tools. In plant pathology, unveiling the mechanisms of host-pathogen interactions is of paramount importance. From both sides, many genes are involved in the process. In this Special Issue, Wang et al. [27] obtained 229 isolates of Blumeria graminis (Bgh) and analyzed their virulence and genetic traits. Isolates form Yunnan showed the highest diversity in virulence complexity and genetic diversity. The results demonstrated that inter-group genetic variation was 54.68%, while inter-and intra-group genetic variation were 21.4% and 23.9%, respectively. The results indicated that the Bgh population in Tibet has undergone expansion recently, resulting in increased virulence on wheat and a loss of genetic diversity. These results are similar to the virulence and genetic diversity of B. graminis in Southeastern and Southwestern China [28]. 
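As a rough illustration of the kind of codon-level statistics summarized above for the Zn(II)2Cys6 coding sequences (Bansal et al. [26]), the following plain-Python sketch counts codons and computes GC content at third codon positions. The input sequence is a toy example, not a real WT1 or Zn(II)2Cys6 CDS, and the published analysis relied on dedicated codon-usage tools rather than this snippet.

```python
from collections import Counter

def codon_usage(cds):
    """Codon counts and GC content at the third codon position (GC3) for an
    in-frame coding sequence; a rough proxy for the GC-richness discussed above."""
    cds = cds.upper().replace("U", "T")
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    counts = Counter(codons)
    third = [c[2] for c in codons]
    gc3 = sum(base in "GC" for base in third) / len(third) if third else 0.0
    return counts, gc3

counts, gc3 = codon_usage("ATGGCCGGCAAGTGCTGA")  # toy sequence for demonstration only
print(counts.most_common(3), round(gc3, 2))
```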
In conclusion, we would like to thank all authors who contributed to this Special Issue on "Genomics of Fungal Plant Pathogens". We also thank each of the peer reviewers who provided valuable comments on the 13 manuscripts, and the members of the Journal of Fungi Editorial Office, for their support.
2023-07-01T23:40:34.210Z
2023-06-29T00:00:00.000
{ "year": 2023, "sha1": "17d53a556261ed6c8480e6093d8d005b50983317", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "17d53a556261ed6c8480e6093d8d005b50983317", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
201172915
pes2o/s2orc
v3-fos-license
Mutated WT1, FLT3-ITD, and NUP98-NSD1 Fusion in Various Combinations Define a Poor Prognostic Group in Pediatric Acute Myeloid Leukemia Acute myeloid leukemia is a life-threatening malignancy in children and adolescents treated predominantly by risk-adapted intensive chemotherapy that is partly supported by allogeneic stem cell transplantation. Mutations in the WT1 gene and NUP98-NSD1 fusion are predictors of poor survival outcome/prognosis that frequently occur in combination with internal tandem duplications of the juxta-membrane domain of FLT3 (FLT3-ITD). To re-evaluate the effect of these factors in contemporary protocols, 353 patients (<18 years) treated in Germany with AML-BFM treatment protocols between 2004 and 2017 were included. Presence of mutated WT1 and FLT3-ITD in blasts (n=19) resulted in low 3-year event-free survival of 29% and overall survival of 33% compared to rates of 45-63% and 67-87% in patients with only one (only FLT3-ITD; n=33, only WT1 mutation; n=29) or none of these mutations (n=272). Including NUP98-NSD1 and high allelic ratio (AR) of FLT3-ITD (AR ≥0.4) in the analysis revealed very poor outcomes for patients with co-occurrence of all three factors or any of double combinations. All these patients (n=15) experienced events and the probability of overall survival was low (27%). We conclude that co-occurrence of WT1 mutation, NUP98-NSD1, and FLT3-ITD with an AR ≥0.4 as triple or double mutations still predicts dismal response to contemporary first- and second-line treatment for pediatric acute myeloid leukemia. Introduction Pediatric acute myeloid leukemia (AML) is a rare and heterogeneous disorder, for which continuous improvement of risk-adapted treatment approaches over the last 30 years has led to overall survival rates of approximately 70% [1,2]. In current pediatric AML treatment protocols, cytogenetic abnormalities of the leukemic blasts at initial diagnosis are important indicators for risk group stratification and treatment assignment [1,2]. Approximately, 25% of pediatric patients have AML blasts with a normal karyotype, but even these cases often harbor somatic mutations in genes such as WILMS TUMOR 1 (WT1), NPM1, NRAS, KRAS, Fms-like tyrosine kinase 3 (FLT3), and/or c-KIT/CD117 [1,2]. The WT1 gene is located on chromosome 11, has ten exons and four zinc finger domains, and functions as a transcription factor and master regulator of tissue development [3]. Within normal hematopoiesis, WT1 has two distinct roles: in early 2 Journal of Oncology stages, it mediates quiescence of primitive progenitor cells, and later, WT1 expression is important for differentiation towards the myeloid lineage [4]. In AML, WT1 mutations are present in approximately 10% of patients and predominantly located in exons 7 and 9, which contain the DNA-binding zinc finger domains of the protein. The majority of these mutations are out-of-frame deletion/insertions or premature termination codons that will lead to truncated proteins with altered functional consequences for the cells [5]. If these truncated proteins are stable, they might have dominant negative effects by partially blocking the wild-type WT1 protein; if unstable, the diminished WT1 protein levels may lead to haploinsufficiency [5]. Nevertheless, it has been clearly established that the occurrence of WT1 mutations in AML blasts with normal karyotypes is associated with adverse clinical outcomes in adult [6][7][8][9] as well as pediatric patients [10,11]. 
Somatic WT1 mutations in AML blasts often co-occur with other genetic aberrations, most frequently with an internal tandem duplication in the juxta-membrane domain of the tyrosine kinase receptor FLT3 (FLT3-ITD) [5]. Classified as type-I or proliferating mutation, FLT3-ITDs are present in 10-15% of pediatric AML cases and lead to poor clinical outcomes [12][13][14]. We previously demonstrated in a cohort of 298 pediatric patients with de novo AML treated before 2004 on AML-BFM protocols that the combination of FLT3-ITD and mutated WT1 is associated with even worse survival [10]. Comparably, an independent study from the Children's Oncology Group (COG) in a cohort of 842 children with de novo AML showed that the poor prognostic impact of WT1 mutations depends on the FLT3-ITD status [11]. These two pediatric studies confirmed earlier findings in adults that first established the adverse prognostic impact of both WT1 and FLT3-ITD mutations [15,16]. Two additional prognostic indicators in FLT3-ITDpositive AML cases established in the last few years are the mutational burden in each patient defined as the ratio between mutant and wild-type FLT3-ITD alleles (allelic ratio, AR) [12,17,18] and the co-occurrence of FLT3-ITD with a cytogenetically cryptic translocation of chromosomes 5 and 11 or t(5;11)(q35;p15) [19]. This translocation leads to fusion of the nucleoporin (NUP98) gene on chromosome 11 and the gene for nuclear receptor binding SET-domain protein 1 (NSD1) of chromosome 5 (NUP98-NSD1). As the breakpoints for the NUP98 gene are often not detected by classical cytogenetic due to its terminal localization at 11p15, it has been described in AML cases with a "normal" karyotype [20]. Importantly, this rare recurrent aberration is mutually exclusive with other recurrent translocations and more prevalent in pediatric AML, in which it is associated with the presence of FLT3-ITD and poor survival outcomes [21,22]. In the present study, we re-evaluated the role of mutations in WT1, FLT3-ITD, and the NUP98-NSD1 translocation as prognostic factors in two contemporary pediatric treatment protocols by analyzing their association with co-occurring genetic and cytogenetic aberrations and by determining their clinical significance and influence on treatment outcome. Thereby, we were able to define a group of high-risk patients for which the efforts for salvage/second line treatment largely failed. Materials and Methods From April 2004 to May 2017, 841 patients aged 0-18 years with de novo AML (excluding FAB M3 and Down Syndrome) were treated in Germany according to the AML-BFM 04 trial (ClinicalTrials.gov Identifier: NCT00111345) or the AML-BFM 2012 registry and trial (EudraCT number: 2013-000018-39) (Figure 1(a)). Both trials were approved by the ethical committees and institutional review boards of university hospitals of Münster and Hannover and an informed consent was obtained from each patient or their legal guardians before the beginning of treatment. Standard procedures for the diagnosis of AML were carried out by the German AML-BFM reference laboratory as previously described [23][24][25]. This included mutation analysis in WT1, FLT3-ITD, NPM1, NRAS, and c-KIT by Sanger and/or next-generation sequencing or GeneScan analysis. In 353 patients (42%), sufficient material and clinical data were available for further analysis. 
As a confirmation, material from WT1 and/or FLT3-ITD positive and negative cases was re-analyzed by next-generation sequencing (NGS) using the TruSight Myeloid Panel (Illumina) [26], with median read counts for WT1 and FLT3-ITD of around 4,200 and 6,000 reads, respectively, as we described previously [27]. In addition, the allelic ratio of FLT3-ITD to FLT3 wild-type was calculated via GeneScan analysis [13], and the expression of NUP98-NSD1 was analyzed in 246 out of 353 patients with available material by real-time quantitative PCR using previously described primers [19]. Initial analysis demonstrated that the selected cohort was representative of all patients treated between 2004 and 2017 on the AML-BFM protocols for features such as gender, age, AML subtype, initial cytogenetics, and preliminary, early response to treatment (data not shown). Clinical end-points were defined as previously described [28,29], and survival rates were calculated via Kaplan-Meier analysis and compared by log-rank test. Multivariate analysis was performed using a Cox regression model evaluating the hazard ratio (HR) of each covariate with 95% confidence interval (CI). Stem cell transplantation was included in the Cox regression model as a time-dependent variable. Differences with a p value less than 0.05 were considered significant. Data were analyzed using the Statistical Analysis System software version 9.4 (SAS Institute, Cary, NC). Data acquisition was stopped on June 30, 2018, with a median follow-up of 3.6 years.

Study Cohort and Patient Characteristics. In this study, we included 353 patients treated on either the AML-BFM 2004 or AML-BFM 2012 protocol for whom sufficient material and information were available (Figure 1(a)). As shown in Table 1, 48 (14%) patients had WT1 and 52 (15%) FLT3-ITD mutations in their leukemic blasts at diagnosis. Mutations in NPM1, NRAS, and c-KIT were present in the blasts of 9%, 17%, and 12% of patients, respectively. Most patients with mutated WT1 (n=35, 73%) harbored at least one co-occurring mutation in the AML blasts, with the most common being FLT3-ITD (n=19, 40%) followed by NRAS mutations (n=11, 23%, Table 1 and Figure 1(b)). Comparably, the majority of patients with FLT3-ITD had additional mutations in other genes (n=32, 62%), most commonly in WT1 (n=19, 37%) and NPM1 (n=11, 21%). Patients with mutated WT1 or FLT3-ITD were older compared to the rest of the study cohort, and AML FAB M1/M2 was the most common morphologic subtype in both groups (Table 1). In addition, the AML blasts of more than half of patients with WT1 (n=25/48, 52%) and FLT3-ITD (n=28/52, 54%) mutations had a normal karyotype at diagnosis; these percentages were significantly higher than those in patients without mutations in each of the two genes (p<0.0001, Table 1).

Characteristics of WT1 Mutations. We identified 64 different WT1 sequence alterations in 48 patients (Table 2). These alterations were frequently located in exon 7 (n=55, 86%) and predominantly resulted in frameshifts producing premature termination codons (PTCs). In total, nine single nucleotide variants (SNVs) were found, mostly in exon 9 (n=7, 78%). Only three of the nine SNVs were not previously reported as pathogenic (Table 2). Using NGS, we characterized multiple distinct WT1 mutations with highly diverse variant allele frequencies in 13 patients (11 patients had two and 2 patients had three distinct mutations).
We then analyzed the heterozygosity of these mutations via the integrative genomic viewer (Broad Institute, MA, USA) and determined that they were all located on individual/different alleles/reads ( Table 2). Survival Significance of the Genomic Aberrations. Next, we analyzed the impact of each mutation on the clinical outcomes. Our analysis identified WT1 and FLT3-ITD, but not NRAS, NPM1, or c-KIT mutations as single factors that significantly increased the chance of relapse or treatment failure and reduced the probability of 3-year overall survival (OS) in our patient cohort (Figures 2(a), 2(b), and 3). In addition, FLT3-ITD but not WT1 mutations significantly decreased the 3-year probability of event-free survival (EFS, Figure 2(b)). When we grouped the two mutations together, the survival analysis revealed a 3-year EFS of 29±11% for patients with both WT1 and FLT3-ITD mutations compared to 63±3% for patients with none of these mutations (p=0.0004) and 61±11% or 45±9% for patients with only WT1 mutation (p=0.016) or FLT3-ITD (p=0.16), respectively ( Figure 2(c)). Corresponding to this low EFS, co-occurrence of these two mutations was associated with an increased cumulative incidence of relapse (CIR) of 65±12% compared to 32±12% for patients with none of these mutations (p=0.002) and 39±11% or 46±9% for patients with only WT1 mutation (p=0.05) or FLT3-ITD (p=0.08), respectively ( Figure 2(c)). Furthermore, we identified a low 3-year OS probability of 33±12% in patients with co-occurrence of WT1 and FLT3-ITD, which was significantly lower than those of patients without these mutations (81±3%, p<0.0001), patients with only mutated WT1 (87±7%, p=0.0007), and patients with only FLT3-ITD (67±9%, p=0.017, Figure 2(c)). Comparing the curves for EFS and OS clearly demonstrated that our second line treatment was not able to rescue any patient with co-occurrence of WT1 and FLT3-ITD mutations, while the OS rates increased by more than 20% for the other three subgroups (Figure 2(c)). Impact of NUP98-NSD1 Fusion. To further characterize the prognostic significance of WT1 and FLT3-ITD mutations, we analyzed the expression of NUP98-NSD1 fusion in our patient cohort (Figure 1(a)). From 246 patients with available material for this retrospective real-time quantitative PCR analysis, 15 (6%) of them were identified to have the NUP98-NSD1 translocation. Most of these patients (12/15, 80%) harbored additional WT1 or FLT3-ITD mutations: 3 patients carried both WT1 and NUP98-NSD1, 4 had a co-occurrence of FLT3-ITD and NUP98-NSD1, and 5 patients carried all three genetic alterations (Figure 1(b)). Only 1 of these 15 patients had a previous known status of NUP98-NSD1 by conventional karyotyping: 2 others were previously diagnosed with deletion of chromosome 5, 1 carried an inversion of chromosome 16 (no other mutations and still in continuous complete remission), 4 carried complex karyotypes or rare aberrations, and 7 had no other cytogenetic abnormalities (data not shown). We then analyzed the prognostic significance of NUP98-NSD1 in the cohort of 246 patients with the known status of this fusion gene (Figure 1(a)). As a single factor, the presence of NUP98-NSD1 in AML blasts of patients at diagnosis was associated with a significant increase in CIR (81%) in addition to decreased probabilities of 3-year EFS and OS (Figure 4(a)). 
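The survival comparisons reported here were computed with SAS 9.4 as described in the Methods. Purely as an illustration, the sketch below shows how a comparable Kaplan-Meier analysis could be set up with the open-source lifelines package, using hypothetical column names and a simplified two-level grouping in which FLT3-ITD counts as a risk factor only when its allelic ratio is ≥0.4; this is one reading of the combined criterion and not the exact stratification code used in the study.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def risk_group(wt1_mut, nup98_nsd1, flt3_itd_ar):
    """Simplified grouping: 'double/triple' if at least two of WT1 mutation,
    NUP98-NSD1 fusion and FLT3-ITD with allelic ratio >= 0.4 co-occur."""
    factors = int(bool(wt1_mut)) + int(bool(nup98_nsd1)) + int(flt3_itd_ar >= 0.4)
    return "double/triple" if factors >= 2 else "0-1 factors"

# df is a hypothetical per-patient table with columns: wt1_mut, nup98_nsd1,
# flt3_itd_ar, efs_years, event (1 = relapse/death/non-response, 0 = censored)
def compare_efs_by_group(df):
    df = df.assign(group=[risk_group(w, n, a) for w, n, a in
                          zip(df.wt1_mut, df.nup98_nsd1, df.flt3_itd_ar)])
    kmf = KaplanMeierFitter()
    for name, sub in df.groupby("group"):
        kmf.fit(sub.efs_years, event_observed=sub.event, label=name)
        kmf.plot_survival_function()          # Kaplan-Meier curve per group
    hi = df[df.group == "double/triple"]
    lo = df[df.group == "0-1 factors"]
    return logrank_test(hi.efs_years, lo.efs_years,
                        event_observed_A=hi.event, event_observed_B=lo.event)
```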
Combining NUP98-NSD1 with WT1 and FLT3-ITD mutations in our multifactor survival analysis revealed that patients with all three or either two of these mutations had worse survival outcomes. These patients had a higher CIR of 73±11% compared to the CIR of 30±4% for patients with none of these aberrations or NUP98-NSD1 alone (p<0.0001) and the CIR of 37±13% or 38±10% for patients with only mutated WT1 (p=0.0078) or FLT3-ITD (p=0.013), respectively (Figures 4(a) and 4(b)). The increased CIR translated into a lower 3-year EFS probability of 23±10% for patients with triple or double mutations compared to the EFS of 62±4% for patients with none of these mutations or only NUP98-NSD1 (p<0.0001) and the EFS of 63±13% or 54±10% for patients with only WT1 (p=0.003) or FLT3-ITD (p=0.036) mutations, respectively (Figure 4(b)). Moreover, co-occurrence of all three or any double mutations resulted in a significantly lower 3-year OS probability of 42±12% compared to 80±8% for patients with none of the mutations or only NUP98-NSD1 (p=0.0003) and 88±8% or 73±10% for patients with only WT1 (p=0.0007) or FLT3-ITD (p=0.049) mutations, respectively (Figure 4(b)). Discussion Treatment of pediatric AML has significantly improved over the past three decades due to the development of intensified first-line treatments, efficient second-line therapies, and optimized supportive care [2,30]. The success is, at least partly, achieved by more efficient risk group stratification using factors such as somatic mutations and cytogenetic aberrations of AML blasts at diagnosis as well as considering the primary response to treatment to optimize the allocation of patients to standard or enhanced treatment options [1]. In the present study, we analyzed the influence of three parameters, mutations in WT1 and FLT3 and the translocation of NUP98-NSD1, on the outcome of pediatric patients in the German AML-BFM 2004 and 2012 protocols. Although all three parameters have been established by us and others as important prognostic factors in both pediatric and adult patients [8][9][10][11][12][13][14][20][21][22], their combined utility to identify highrisk patients likely to experience dismal treatment results has not yet been reported in a contemporary pediatric AML trial. In a cohort of 237 patients treated within the AML-BFM 2004 and 2012 protocols and with sufficient material for re-analysis, we observed favorable outcomes for 3-year EFS of 61% and 69% and OS of 79% and 90% in patients without WT1 mutations or NUP98-NSD1 fusion or with only one of these factors. Patients with leukemic blasts that were FLT3-ITD positive but negative for WT1 and NUP98-NSD1 mutations and that had an FLT3-ITD AR ≥0.4 still achieved an EFS of 45% and an OS of 73%. Surprisingly, our data therefore suggests that without WT1 and NUP98-NSD1 mutations, the negative impact of FLT3-ITD even with an AR≥0.4 might not be as severe as previously published [12,17]. However, all patients positive for at least two of the three risk factors and with an FLT3-ITD AR ≥0.4 had events within the first three years and only 27% could be rescued by our salvage therapies. These unfavorable results in our double or triple mutated group unequivocally demonstrate that our current first-line treatment strategies for these patients are still insufficient/inadequate and urgently need improvement. Of the three risk factors, currently only the FLT3-ITD mutation can be specifically targeted with inhibitors [31]. 
Although the first generations of these drugs only achieved limited and often transient efficacy due to intrinsic and extrinsic adaptations in the AML blasts and/or the environment [31], combination therapies of newer tyrosine kinase inhibitors such as Quizartinib with standard chemotherapy seem to be relatively well tolerated and in initial studies have demonstrated survival improvement in relapsed or refractory AML patients [32][33][34]. Due to the important role of FLT3 pathway activation in AML, numerous combinations of FLT3 inhibitors with other drugs are currently being tested. Whether these results will also be helpful for the treatment of pediatric AML will need to be carefully determined in future studies, especially considering the clonal heterogeneity of FLT3-ITD and the additional survival burden that it causes by increasing drug resistance through clonal evolution or selection and further expansion of resistant AML clones [35,36]. Nevertheless, it is tempting to speculate that the simple addition of a newer FLT3 inhibitor to our standard therapy might be a feasible, well-tolerated, and effective approach for all patients with blasts that are positive for the FLT3-ITD mutation, regardless of the status of alterations in WT1 or NUP98. The role of WT1 in patients with AML is still controversial [4]. Although WT1 is overexpressed in the majority of leukemias and can be used as a marker for minimal residual disease and maybe even vaccination attempts, the prognostic and therapeutic relevance of high or absent WT1 expression levels is not unequivocally accepted [37][38][39]. In contrast, mutations in WT1 are clearly identified as determinants of poor prognosis and, as we showed here, confer a dismal prognosis especially in combination with FLT3-ITD or NUP98-NSD1 fusion. In the present study, we identified 64 monoallelic WT1 sequence alterations in exon 7 or exon 9 in the leukemic blasts of 48 patients. The majority of these alterations leads to frameshifts and/or premature terminations codons and thus shortened proteins. These mutant proteins can act in a dominant negative manner [40], which may contribute to a myeloid differentiation block present in AML blasts [41]. However, similar mutations have also been described in the context of Wilms tumors as gain-of-function mutations promoting proliferation [42]. Here, we show a favorable prognosis for patients with single WT1 mutations, with 26 out of 29 cases reaching continued complete remission (CCR) (Figure 1(b)). Therefore, based on a 3-year EFS of 69% and an OS of 90%, the development of new treatment approaches is not as urgently needed for these patients with WT1 mutated blasts that do not harbor FLT3-ITD or NUP98-NSD1 mutations. Among the 31 different fusion gene partners of NUP98 identified so far, the NUP98-NSD1 t(5:11) translocation is the most frequent and present in 4-7% of patients in pediatric AML patients [20][21][22]. Importantly, the NUP98 translocations that occur in AML all share the N-terminus of the protein and are thought to initially lead to epigenetic dysregulation of different leukemia-associated genes including HOXA7, HOXA9, and HOXA10 in myeloid precursor cells [20]. Additional somatic mutations in other genes occur as secondary events and promote malignant transformation and uncontrolled cell growth [20]. As also shown in our patient data set, these secondary alterations often include activating mutations in FLT3 (FLT3-ITD) or truncating mutations in WT1 [21]. 
Strikingly, only three patients in our study had a NUP98-NSD1 translocation without mutations in FLT3 or WT1; two of these patients achieved and remained in first CCR at the end of data acquisition. The third patient had no other genetic risk factors but a very high initial white blood cell count of almost 400,000 cells/µl. Complete remission induction was delayed, and the patient relapsed a year later but was successfully treated by allogeneic stem cell transplantation, with a follow-up of 10 years. Therefore, as also described previously [21], our patients with NUP98-rearranged blasts carrying WT1 and/or FLT3-ITD mutations had a poor prognosis, especially in contrast to patients with only WT1 and FLT3-ITD mutations, who could at least partially be rescued by allogeneic transplantation. However, due to the high risk of failure of the first-line treatment, stem cell transplantation already in first CCR seems to be an attractive option for cases of NUP98-rearranged AML [21,22]. Nevertheless, it should be noted that even allogeneic stem cell transplantation is not always effective in improving the treatment outcome in patients with a high probability of treatment failure based on risk stratification. Thus, introducing novel treatment approaches such as small-molecule inhibitors, e.g., venetoclax and idasanutlin [43], or cellular therapies with allogeneic NK cells or engineered T cells with chimeric antigen receptors (CARs) [44] targeting leukemic blasts harboring NUP98 rearrangements or WT1 mutations should be taken into consideration in future clinical studies.

A recent analysis from a collaborative study between the American and Dutch children's oncology groups (COG and DCOG) included patients from three clinical COG/DCOG trials as well as young adults less than 39 years of age in the Therapeutically Applicable Research to Generate Effective Treatments (TARGET) AML initiative [45]. Analysis of the different cohorts revealed similarly unfavorable outcomes, with an EFS of 14-25% and an OS of 15-40% for patients with FLT3-ITD and WT1 mutations and/or the NUP98-NSD1 translocation [45]. In contrast to our findings, however, the authors reported an EFS range of 15-35% in patients with FLT3-ITD only, which is lower than that achieved with our current protocols, in which patients with FLT3-ITD only reached an EFS of 45% and an OS of 73%. Notably, in the American-Dutch study, patients with co-occurrence of NPM1 mutations and FLT3-ITD (and without WT1 and NUP98-NSD1) were separated from patients with FLT3-ITD only and had a slightly increased, albeit probably not statistically significant, survival. Similarly, we have previously observed favorable outcomes for patients with NPM1 mutations in their AML blasts with normal karyotype and showed that this impact was not affected by the presence of FLT3-ITD [46]. In the current cohort, five patients were positive for FLT3-ITD and NPM1 mutations and negative for WT1 and NUP98 alterations. At present, four of these patients, all with a normal karyotype, are still in first CCR, and the fifth patient, with a complex karyotype and an FLT3-ITD AR >11, experienced early death. In summary, the principal findings of this American-Dutch study and the present study are very similar. However, the treatment outcomes for our patient groups are superior, most likely because we included only patients between 0 and 18 years of age treated in Germany according to two contemporary protocols from the AML-BFM study group.
Conclusion

Despite the fact that our study was partly based on data collected prospectively since 2004 and partly on data assessed de novo on stored material by either NGS or PCR, we can safely conclude that co-occurrence of mutated WT1, FLT3-ITD, and/or the NUP98-NSD1 translocation still defines a subgroup of AML patients with devastating EFS and OS outcomes, even with our current treatment protocols. Although the number of pediatric AML patients available for analysis of these three risk factors was limited, and therefore not all interesting factors could be assessed in multivariate analysis, it is obvious that patients with double or triple mutations benefitted very little from the improved EFS and OS in our AML-BFM studies in recent years. Thus, for these pediatric patients, new and more targeted approaches are urgently needed for both first- and second-line treatments.

Data Availability

The data used to support the findings of this study are included within the article.
The histone code reader PHD finger protein 7 controls sex-linked disparities in gene expression and malignancy in Drosophila Drosophila l(3)mbt malignant brain tumors present sexual dimorphism. INTRODUCTION Cancer susceptibility and mortality rate are significantly higher in the male population, even after occupational and behavioral risk factors are taken into account (1)(2)(3). Male predominance is also observed in childhood malignancies that present themselves in very young infants before puberty (4)(5)(6)(7). Studies covering a wide panel of cancer types show that the expression of many clinically relevant genes is strongly sex biased in malignant tumors (8,9), hence suggesting molecular sexual dimorphism at the cellular level as a key determinant of sex-linked disparities in cancer (1,2,10). Understanding the molecular basis of sex-linked differences in cancer incidence and survival may pave the way for gender-specific, more efficient therapeutic strategies. However, the molecular cell biology for sex disparities in cancer remains very poorly understood. Drosophila can be used to experimentally induce a wide range of tumors that affect a variety of organs in both adult flies and developing larvae (11). These tumor types range from hyperplasias to frankly malignant neoplasias that exhibit classic hallmarks of mammalian cancer. In addition, natural hyperplasias can develop in the adult fly testis and gut and are age dependent (12,13). Some Drosophila tumors are being used as experimental models for leukemia, neuroblastoma, glioblastoma, ovarian cancer, and others [reviewed in (11,(14)(15)(16)]. Available information about sex-biased phenotypes in Drosophila tumors in organs with nonreproductive function is limited to nonmalignant, genetically induced hyperplastic tumors induced by altering Notch (N) or APC-ras signaling in the adult midgut (17) and natural hyperplasia formed in the aging gut (13), which are more frequent in females. To determine whether Drosophila experimental models of malignant growth may serve to investigate the cell biological axes that control sex-linked tumor dimorphism, we have studied brain tumor (brat) (18) and lethal(3)malignant brain tumor [l(3)mbt] (19) malignant brain neoplasias (henceforth referred to as brat and mbt tumors, respectively). The TRIpartite Motif -NCL-1/HT2A/LIN-41 (TRIM-NHL) protein Brat inhibits translation, and its loss of function results in tumors that originate from type II intermediary neuronal progenitors in the larval brain (20,21). Human Brat ortholog TRIM3 is a tumor suppressor that regulates neural stem cell equilibrium (22). L(3)mbt harbors three MBT repeats and a zinc finger domain (23) and has been shown to repress the expression of dozens of genes, including germline genes, in somatic tissues as well as testisspecific and neuronal genes in the female germline (24)(25)(26)(27). L(3) mbt interacts sub-stoichiometrically with the dREAM/Myb-Muv B complex (25,28) and is a stoichiometric component of the L(3) mbt-interacting (LINT) complex (29). Loss-of-function conditions for l (3)mbt result in neoplastic growth that originates in the neuroepithelial regions in the larval brain lobes (30). Some of the germline genes that are ectopically expressed in l(3)mbt mutant brains are essential for mbt tumor growth (24,31). embedded in strongly DAPI-positive tissue (likely a tumorous version of the medulla) that invades most of the brain lobe including the central brain (CB) (Fig. 1A, outlined in blue). 
In female mbt brain lobes, however, the NE and the tissue that stains strongly with DAPI are restricted to the lateral half of the lobe, and the CB remains distinct (Fig. 1A). Differences in the fraction of the brain lobe area occupied by the NE and the CB are highly significant between mbt males and their female siblings (P < 10⁻⁴) and insignificant between mbt females and wild-type brains of either sex (Fig. 1B). The mean maximum Feret diameter of the brain lobe is also significantly larger (P < 10⁻¹⁰) in male than in female mbt individuals (fig. S2). To determine growth potential, we allografted mbt tissue dissected from late third instar larvae [132 ± 12 hours at 29°C after egg laying (AEL)] and ts1/Df samples from early second instar larvae (36 ± 12 hours at 29°C AEL), a stage at which male and female mbt and wild-type brains are still indistinguishable. Under all four experimental conditions, male mbt implants (Fig. 1C, red lines) killed more than 85% of the hosts, half of them within 14 to 18 days after implantation, while female mbt implants (Fig. 1C, blue lines) took longer to develop and never reached the 50% host lethality mark. Notably, these results also apply to second instar male ts1/Df brains [Fig. 1C, ts1/Df(L2)], which grow faster and kill more hosts than the much larger late third instar female mbt implants. Male and female brat implants are equally aggressive in allograft tests (fig. S1). Thus, in summary, mbt tumors present sex-dependent dimorphism, while brat tumors do not.

The mbt proteome is sex dependent

To investigate the molecular basis of the sex-dependent dimorphism that we have observed in mbt tumors, we searched for proteins that present sex-biased expression levels in these tumors. We analyzed the proteome of male and female ts1/Df and w 1118 brain lobes by Tandem Mass Tag (TMT) labeling and nano-liquid chromatography electrospray ionization tandem mass spectrometry (nanoLC-ESI-MS/MS). We identified a total of 7035 proteins and obtained reliable quantitative data for 5985 of them. Among these, we found a group of 127 proteins that are expressed at significantly different levels in males and females in mbt tumors, but not in wild-type samples (Fig. 2A). This mbt proteome sex-linked dimorphic signature (pSDS) includes 66 proteins that are more expressed in males (M-pSDS) and 61 that are overexpressed in females (F-pSDS) (Fig. 2A and table S1). None of these proteins appears to be expressed in one sex only. M-pSDS and F-pSDS proteins (Fig. 2, B and C, red and blue dots, respectively) appear as two distinct clouds that are well apart in plots showing the expression level of proteins in male versus female mbt tumor samples (Fig. 2B), but are mixed in plots showing sex-dependent expression levels in wild-type samples (Fig. 2C). Expression of the sex determination and Male-Specific Lethal (MSL) complex proteins that we have unequivocally identified (i.e., Sxl, Msl-1, Msl-3, and Mof) is equally sex biased in mbt tumors and wild-type brains (Fig. 2, B and C, black dots), thus suggesting that the sexual identity of the XX and XY mbt tumors is not compromised.

[Fig. 1 legend (fragment): (A) ... larvae stained with DAPI (gray) and anti-DE-cadherin antibody (green). Male mbt lobes present reduced central brains (CBs; blue) and overgrown neuroepithelia (NE; yellow) that invade medial regions. In contrast, in female mbt lobes, the NE do not overgrow and CBs remain as distinct as in wild-type larvae. Scale bar, 50 µm.
(B) Relative sizes of NE and CB (as a fraction of brain lobe area) in male (red) and female (blue) control (w 1118 ) and mbt mutant larvae. Differences in NE and CB sizes between mbt male and female brain lobes and between mbt males and control larvae of either sex are highly significant. (C) Tumor growth rate and host lethality caused by male (red) and female (blue) mbt larval brain lobes allografted to adult hosts. Allografted tissues were dissected from third instar larvae (132 ± 12 hours AEL; 29°C), except for ts1/Df (L2) samples, which were dissected from second instar larvae (36 ± 12 hours AEL; 29°C). Male implants kill significantly more hosts and faster than do female implants. MED, medulla; LAM, lamina.]

Enriched Gene Ontology (GO) terms refer to DNA replication and mitosis in the M-pSDS and to signaling, ecdysone, redox, and lipid transport in the F-pSDS (table S2). These results are fully consistent with the very different proliferative potential of mbt male and female samples and strongly substantiate the sex-dimorphic nature of mbt tumors. Differences in the expression level of L(3)mbt itself and other proteins of the LINT and Myb-MuvB/Dream complexes (25,29) between male and female wild-type brains are not significant and are therefore unlikely to account for the different levels of transformation observed in male and female mbt mutants (table S3). As a first step toward investigating the functional relevance of pSDS proteins, we focused our attention on those of the male signature (M-pSDS), whose contribution to the enhanced malignant traits observed in mbt male tumors can be tested by simple loss-of-function studies. Among them, the group that includes CG15930, CG2812, HP1D3csd, TrxT, and Phf7 stands out: first, because those genes are up-regulated in Drosophila snf 148 ovarian tumors (henceforth referred to as snf tumors), which are driven by the unscheduled expression of PHD finger protein 7 (Phf7) (32)(33)(34), and, second, because the group includes Phf7 itself. Phf7 encodes two transcripts: Phf7-RA and Phf7-RC. Phf7-RA transcripts have been found in a variety of tissues including adult ovaries and salivary glands, and larval central nervous system, trachea, and salivary glands (www.flyatlas2.org). Phf7-RC, which includes an additional small exon, is transcribed from an upstream testis-specific transcription start site (TSS) and, consistently, has only been reported to be expressed in testis. These two mRNA isoforms have different 5′ untranslated regions that affect translation efficiency, and the PHF7 protein has only been detected in the male germline (33,34,35,36). Ovarian snf tumors up-regulate Phf7-RC and therefore express the Phf7 protein, which is both necessary and sufficient for the tumor-forming pathway (34). Visualization of our RNA-sequencing (RNA-seq) data in the Integrated Genome Browser shows that Phf7-RC, estimated by the number of reads that map to the Phf7-RC-specific exon, is absent in wild-type samples but is expressed in mbt tumor samples, at a higher level in males than in females (Fig. 3A). This conclusion is further substantiated by reverse transcription quantitative polymerase chain reaction (RT-qPCR) data (Fig. 3B).

Phf7 depletion suppresses sex-linked dimorphism by inhibiting male-specific phenotypic traits

Because ectopic expression of Phf7 is a necessary step in the development of snf ovarian tumors, we wondered whether it may also have a function in the enhancement of malignant traits that we have observed in mbt male tumors.
To test this hypothesis, we investigated the effect of Phf7 depletion on mbt tumor development in situ and on its behavior in allograft assays. We found that depletion of Phf7 has no effect on wild-type brain lobe development (Fig. 4A; compare to Fig. 1A), which is consistent with its reported strictly male germline function in flies (33). Depletion of Phf7 has little effect on the anatomy of female mbt tumors but significantly suppresses the two main anatomy traits that make male mbt tumors distinct: it brings about a significant reduction of the NE (P = 6 × 10⁻⁵), which no longer spreads over the brain lobe, and the recovery of a well-defined CB region (P = 10⁻⁸) (Fig. 4, A and B). As a result, double-mutant Phf7 N2 ; l(3)mbt ts1 brain lobes lack sex-linked dimorphism. In allograft assays, depletion of Phf7 results in a significant drop in host lethality, from 93% [male l(3)mbt ts1 , n = 28] down to 38% [male Phf7 N2 ; l(3)mbt ts1 , n = 24], a rate that is very similar to that of mbt female implants [female l(3)mbt ts1 , 38%, n = 26] (Fig. 4C). The same applies to the timing of tumor development and to the onset of host lethality, which are delayed in male Phf7 N2 ; l(3)mbt ts1 compared to male l(3)mbt ts1 implants (Fig. 4C). Phf7 depletion has no significant effect on the parameters of growth and lethality of female mbt tissue in allograft assays. Thus, depletion of Phf7 suppresses both the anatomy traits and the greater growth potential that make mbt tumors more aggressive in males than in their female siblings.

[Fig. 4 legend (fragment): (A) ... double-mutant larvae stained with DAPI (gray) and anti-DE-cadherin antibody (green). Phf7 N2 mutant lobes appear wild type. Compared to lobes from l(3)mbt ts1 males, those from Phf7 N2 ; l(3)mbt ts1 males present a much reduced NE (yellow) and a sizeable CB (blue). Scale bar, 50 µm. (B) Relative sizes of NE and CB (as a fraction of brain lobe area) in male (red) and female (blue) control (w 1118 ), mbt [l(3)mbt ts1 ], and Phf7-depleted mbt [Phf7 N2 ; l(3)mbt ts1 ] larvae. Differences in NE and CB sizes between male l(3)mbt ts1 and Phf7 N2 ; l(3)mbt ts1 are highly significant. (C) Tumor growth rate and host lethality caused by allografted l(3)mbt ts1 single-mutant and Phf7 N2 ; l(3)mbt ts1 double-mutant, male (red and green, respectively) and female (blue and purple, respectively) brain lobes. Male Phf7 N2 ; l(3)mbt ts1 implants develop and kill hosts at a much lower rate than do male l(3)mbt ts1 implants.]

Phf7 depletion erases phenotypic dimorphism by respectively down- and up-regulating the male and female sex-linked dimorphic signatures in males

Given Phf7's reported function as a histone code reader that binds lysine 4 di- and tri-methylated histone H3 (H3K4me2/me3) and controls gene expression programs (33)(34)(35), we decided to test whether Phf7 function in mbt sex-linked dimorphism could be due to a role for Phf7 in controlling sex-dependent differences in gene expression. To this end, we carried out RNA-seq to quantify the transcripts that present sex-biased expression in l(3)mbt ts1 , but not in w 1118 larval brains [i.e., the male and female transcriptome sex-linked dimorphic signatures of l(3)mbt ts1 tumors; M-tSDS and F-tSDS, respectively] and then determined whether such a bias is affected in Phf7 N2 ; l(3)mbt ts1 double-mutant samples. To analyze these data, we plotted the expression levels of transcripts in male and female brain samples from wild-type (w 1118 ), mbt [l(3)mbt ts1 ], and double-mutant Phf7 N2 ; l(3)mbt ts1 larvae (Fig. 5A).
As expected, dots corresponding to the expression levels of M-tSDS (red) and F-tSDS (blue) genes are mixed in control w 1118 and form two distinct clouds that are well apart in l(3)mbt ts1 samples. The red and blue clouds remain distinct in Phf7 N2 ; l(3)mbt ts1 double mutant, but they are much closer than in l(3)mbt ts1 alone, showing that sex-biased transcription of M-tSDS and F-tSDS genes is reduced in mbt tumors that lack Phf7 (Fig. 5A). Such a reduction of the difference in expression levels of the M-tSDS and F-tSDS gene sets in Phf7 N2 ; l(3)mbt ts1 samples could be accounted for by changes in expression that affect either of the two signatures, or both, in either sex, or in both. To determine which are the actual changes that account for the observed results, we plotted the expression levels of each of the genes of M-tSDS and F-tSDS in mbt (l(3)mbt ts1 ) and Phf7-depleted mbt [Phf7 N2 ; l(3)mbt ts1 ] male and female tissues. For ease of understanding, signature genes were ordered from left to right along the x axis as a function of their expression level in l(3)mbt ts1 (Fig. 5B). The same conclusions are derived from plots showing the significance of the fold change in the expression level of M-tSDS and F-tSDS genes between Phf7 N2 ; l(3)mbt ts1 and l(3)mbt ts1 in male and female samples (volcano plots; fig. S3). In male tissues, most dots representing M-tSDS and F-tSDS transcripts are shifted to negative (i.e., down-regulated) and positive (i.e., up-regulated) values, respectively, while in female tissues fold-change value distribution is rather symmetric and most changes are not significant for both signatures. In contrast to its effect in controlling gene expression differences in male and female tumors, Phf7 appears to have a minor role in controlling gene expression differences between tumor and wildtype samples. Thus, for instance, only six MBTS genes are downregulated to any significant extent after the loss of Phf7 in mbt males and only one in females [Phf7 N2 ; l(3)mbt ts1 compared to l(3)mbt ts1 ]. None of these are germline genes (table S4). These results strongly suggest that Phf7 contributes to bringing about sex-linked molecular disparities in mbt tumors by acting in male tissue both up-regulating M-tSDS genes and down-regulating F-tSDS genes while having little, if any, effect in the expression of either of these signatures in female samples. DISCUSSION Epidemiological studies show that in a wide range of cancer types unrelated to reproductive function, men have a worse prognosis than women (1,2,4,6). The molecular basis for such disparities remains very poorly understood. We have found that the tumors that develop in Drosophila l(3)mbt mutant larvae are strongly dimorphic: Malignant traits are much more prominent in males than in females, to the extent that they can be used to objectively stratify mbt tumor samples into two populations that correlate tightly with the sex of the tumor bearer. Using mbt tumors as a genetically tractable experimental model to investigate the molecular basis of sexlinked disparities in malignant growth, we have identified two protein signatures that include those proteins that are significantly upregulated in one sex compared to the other (male and female proteomic sex-linked dimorphic signatures; M-pSDS and F-pSDS, respectively). Many of the proteins that belong to these signatures have homologs in humans and are therefore promising leads for future research. 
A conspicuous group of the proteins that we have found to be expressed at a higher rate in male mbt tumors is also ectopically expressed in the tumors that develop in the ovaries of flies homozygous for the viable allele of sans fille, snf 148 . One of these common proteins is Phf7, which is both necessary and sufficient for the snf tumor-forming pathway (33,34). Phf7 is a histone code reader that bears three PHD domains and binds histone H3 N-terminal tails with a preference for dimethyl lysine 4 (H3K4me2) (33). In wild-type flies, expression of Phf7 is restricted to male germline stem cells and spermatogonia (33,35). Loss of Phf7 impairs the ability of male germline cells to transit through the different stages of spermatogenesis and results in reduced fertility (33,35). There are notable similarities as well as differences in Phf7 function, regulation of expression, and targets in wild-type testis, snf ovarian tumors, and mbt tumors. As far as function is concerned, similarly to its oncogenic effect in ovaries, we have found that Phf7 has a key role in enhancing malignant traits in male mbt brain tumors both in situ and in allograft tests. Our transcriptomics data and the little, if any, phenotypic consequences brought about by the loss of Phf7 in female mbt tumors strongly suggest that Phf7 exerts this role by contributing to the dysregulation of dozens of genes mostly in male mbt tumors. We do not know the reason for such a sex-linked differential impact of Phf7 depletion. It could be quantitative (i.e., the PHF7 level in female tumors is not sufficient), qualitative [i.e., dependent upon male-specific (or male-enriched) factors], or both. [Figure legend (fragment): ... (HP1D3csd, TrxT, nos, and piwi) and F-tSDS (CG31997 and CG32006) genes.] Unscheduled expression of Phf7-RC in wild-type female germ cells is prevented by deposition of the H3K9me3 repressive mark over the testis-specific TSS of Phf7-RC through a process that is controlled by the female sex determination protein Sxl and depends on the eggless/SETDB1 methyltransferase and other members of the H3K9me3 pathway (36). Phf7-RC is expressed in snf tumors because the homozygous condition for snf 148 interferes with the splicing of Sxl in germ cells (32). The mechanisms that repress Phf7-RC transcription in wild-type somatic cells of both sexes remain unknown but, certainly in males, cannot depend on Sxl. Ectopic Phf7-RC expression in mbt tumors and the overlap of the TSS of Phf7-RC with an L(3)mbt binding site identified in cephalic complex samples from third instar larvae (30) suggest that repression of Phf7-RC in the soma may depend on L(3)mbt itself. Something similar may apply to another M-pSDS protein, CG15930, also known as Tudor domain-containing protein 5-prime (Tdrd5p), which is also normally highly expressed in male germ cells and is repressed by Sxl in female germ cells (34,37). Expression of CG15930 is strongly up-regulated in mbt tumors of both sexes and, like Phf7-RC, more so in males than in females. The TSS of CG15930 overlaps with both L(3)mbt (30) and LINT binding sites (29). A common theme between ovarian snf and larval brain mbt tumors is the unscheduled expression of hundreds of genes, including many testis genes. However, the extent of overlap between the genes dysregulated in both tumors is low: only 10% of the MBTS genes (11) are up-regulated in snf compared to wild-type ovaries. Included among these is the CG15930 gene referred to above, which we have previously shown to be essential for mbt tumor growth (31).
In wild-type flies, CG15930 localizes to cytoplasmic granules with some characteristics of RNA processing (P-) bodies and has been shown to promote proper male fertility and germline differentiation (37). Likewise, the overlap between genes controlled by Phf7 in testis and those dysregulated in mbt is negligible. Transcriptomic comparisons between single-mutant bag-of-marbles (bam) testis and double-mutant Phf7; bam testis identified 45 genes that are dysregulated upon Phf7 loss (35). Only one of those (EbpIII) is also dysregulated in male mbt tumors that lack Phf7 [Phf7 N2 ; l(3)mbt ts1 compared to l(3)mbt ts1 ]. These results reflect a fundamental difference in the targets of Phf7 function in snf tumors and wild-type testis, and those that we have identified in mbt tumors, and suggest possible differences in the distribution of H3K4 methylation marks. A distinct feature of mbt tumors is the up-regulation of a signature of transcripts, the MBTS, that can be used to unequivocally tell these tumors apart not only from wild-type brains but also from other malignant brain neoplasms, like those caused by loss-of-function conditions for lethal giant larvae (lgl), miranda (mira), prospero (pros), or brat. Our new RNA-seq data obtained from male and female samples provide an opportunity to determine the extent to which up-regulation of the MBTS, which was identified in a study where male and female tissues were not examined separately, is sex dependent. Our data show that the majority of the MBTS genes (80) are up-regulated in both sexes: 8 only in males and 1 only in females. These results show that L(3)mbt safeguards larval brain cells against unscheduled gene expression in both sexes. This includes the germline genes that account for a quarter of the MBTS genes, many of which are necessary for mbt tumor growth (24,31). Notably, however, 19 of the 80 MBTS genes up-regulated in both sexes are significantly more expressed in male than in female mbt tumors (i.e., belong to the M-tSDS). Whether such quantitative-not qualitative-differences reflect a sex-dependent efficiency in the role of L(3)mbt safeguarding against ectopic gene expression remains unclear. L(3)mbt's function as a repressor of unscheduled gene programs is not limited to the somatic cells of the larval brains. Loss of l(3)mbt in some Drosophila cell lines and in the somatic cells of the ovary leads to the ectopic activation of germline genes, including components of the PIWI ping-pong cycle, vas, nos, and others (26,27). Loss of l(3)mbt in the female germline results in the ectopic activation of testis and neuronal genes (26). The human genome contains three orthologs to Drosophila l(3) mbt-L3MBTL1, L3MBTL3, and L3MBTL4-that, like the fly gene, encode chromatin-interacting transcriptional repressors (23). L3MBTL3 maps to chromosomal region 6q23 that is frequently altered in acute leukemia cells, and homozygous deletion of this gene has been observed in human patients with medulloblastoma (38,39). Moreover, reexpression of L3MBTL3 attenuates malignancy in human medulloblastoma cell lines that are deleted for L3MBTL3 (39). Medulloblastoma groups 3 and 4, which are very frequently metastatic, exhibit a 2:1 male to female incidence ratio (40,41). The cause for such gender disparities remains unknown. 
Homologs of Drosophila Phf7 have been identified in vertebrates including mammals, and human Phf7 expression is also highly enriched in the testis (42,43), but there is currently no evidence suggesting a role for Phf7 in sex dimorphism in human cancer. Despite substantial sequence homology, human Phf7 and Drosophila Phf7 did not evolve from a common Phf7 ancestor; rather, both genes evolved in parallel through independent duplication events from an ancestral G2-M phase-specific E3 ubiquitin protein ligase (G2E3) (42). Functional overlap between the two homologs is high, but not complete: human Phf7 can rescue the fertility defects brought about by the loss of Phf7 in Drosophila males, but does not have the deleterious effect brought about by the fly protein when expressed in the female germline (33,42). A significant number of clinically actionable genes show strong sex-biased signatures in different cancer types, but the functional relevance of such disparities remains to be determined (8). Our results show that proteins that belong to sex-biased tumor signatures can be targeted to eliminate sex-linked enhanced malignancy.

Fly stocks

The following mutant alleles were used in this study: l(3)mbt ts1 (19), l(3)mbt E2 (44), Df(3R)ED10966 (DGRC#150208 from the Kyoto Stock Center), brat k06028 (45), and Phf7 N2 (33). The wild-type strain used was w 1118 . To distinguish male from female mbt mutant larvae, the strains Dp(1;Y)y + and an X insertion of pUbq-tub84B-GFP were used (46). Because of the temperature-sensitive condition of mbt, all crosses, including controls, were maintained at 29°C. Double-mutant Phf7 N2 ; l(3)mbt ts1 was generated using standard genetic techniques. In addition, sexing of larvae in some allograft assays was achieved by using the X insertion pUbq-tub84B-GFP. A further unbiased method for sexing larvae was to allograft the lobes of unmarked larvae (i.e., the same genotype for male and female progeny) and to subsequently assign the sex by immunostaining of the corresponding salivary glands with an H4K16ac-specific antibody (17). The actual method used for sexing larvae did not alter the dimorphic outcome of independent allograft assays done with the same allelic combination.

brat experiments

Crosses for brat experiments were done with females w; brat k06028 /CyO,Tb and males pUbq-tub84B-GFP; brat k06028 /CyO,Tb. The progeny used for experiments were males w; brat k06028 and females pUbq-tub84B-GFP /+; brat k06028 .

Allograft assays

Larval brain lobe grafts were carried out in female hosts as described in (48). Tumor lethality (%) was calculated as the number of hosts killed by the developing tumor divided by the total number of allografted adults. Implanted hosts were kept at 29°C.

Quantification and characterization of phenotypes

Eggs were collected for 24 hours and allowed to develop for up to 7 days (156 ± 12 hours AEL) for anatomy analysis and up to 8 days (180 ± 12 hours AEL) for Feret diameter measurement, except for control w 1118 larvae, which were dissected at 5 days AEL. The ratios of NE area to brain lobe area (NE/BL) and of CB area to brain lobe area (CB/BL) were calculated from images acquired with a Leica SP8 confocal microscope by measuring the areas corresponding to the NE, the CB, and the brain lobe in ImageJ. The Feret diameter measurement of the brain lobe pairs was performed by analyzing images of brains taken with a Leica EC3 camera coupled to a Nikon SMZ800 stereoscope. The images were analyzed by a purpose-made macro (31) written in ImageJ to measure the maximum Feret diameter of the brain lobe pair. Ventral ganglia were digitally masked before measurement. The results were represented as boxplots, and P values were calculated by nonparametric Mann-Whitney U tests using GraphPad Prism 7.00 for Mac OS X (GraphPad Software, La Jolla, CA, USA) (www.graphpad.com). All genotypes and crosses were as described above.
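The published measurements were made with ImageJ and the cited purpose-made macro (31), and the statistics with GraphPad Prism; the sketch below is not that macro. It only illustrates, in Python with invented values, the two computational steps involved: the maximum Feret diameter of a binary mask and a Mann-Whitney U comparison of two groups.

```python
# Illustrative sketch only: maximum Feret diameter of a binary mask and a
# Mann-Whitney U test. Pixel sizes and all measurement values are made up.
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist
from scipy.stats import mannwhitneyu

def max_feret_diameter(mask, pixel_size_um=1.0):
    """Longest distance between any two foreground pixels (computed on the convex hull)."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    hull_pts = pts[ConvexHull(pts).vertices]
    return pdist(hull_pts).max() * pixel_size_um

# Toy mask: a filled ellipse standing in for a segmented brain lobe pair.
yy, xx = np.mgrid[0:200, 0:300]
mask = ((xx - 150) / 120.0) ** 2 + ((yy - 100) / 70.0) ** 2 <= 1.0
print(max_feret_diameter(mask, pixel_size_um=2.5))

# Nonparametric comparison of male vs. female diameters (values invented).
male = [420, 455, 470, 490, 510, 530]
female = [350, 365, 380, 395, 410, 420]
print(mannwhitneyu(male, female, alternative="two-sided").pvalue)
```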
Protein extraction

Larval brains were dissected in PBS, transferred to a 1.5-ml tube, and frozen in liquid nitrogen after removing the buffer. One hundred fifty brains of each genotype were homogenized in 150 µl of a buffer containing 4% SDS, 100 mM tris-HCl (pH 7.6), and 0.1 M dithiothreitol and incubated at 95°C for 3 min. The samples were sonicated to shear the DNA and reduce the viscosity of the lysate. Before starting sample processing, the lysate was clarified by centrifugation at 16,000g for 5 min.

Sample preparation, TMT labeling, and basic reversed-phase prefractionation

Protein extracts were quantified using the Pierce 660 Protein Assay Kit (#22662) and Ionic Detergent Compatibility Reagent (#22663). They were alkylated with 2-iodoacetamide and digested with trypsin following the Filter Aided Sample Preparation (FASP) protocol (49). After digestion and requantification at the peptide level by the Colorimetric Peptide Assay (Pierce Thermo, #23275), samples were isotopically labeled with the corresponding TMT10plex reagent (Thermo Fisher Scientific) according to the experimental design (labels 126 to 131). Validation of correct isotopic labeling was performed by LC-MS/MS, and samples were then mixed in two different batches (taking into account peptide quantification) and desalted using PolyLC C18 and PolyLC SCX strong cation exchange tips. Each of the two TMT10plex experiments was fractionated offline by high-pH reversed-phase peptide chromatography using Pierce columns (ref. 84868). Ten fractions were collected for each batch (F0 to F9), dried, and reconstituted in 1% formic acid, 3% acetonitrile for nanoLC-ESI-MS/MS analysis (600 ng of protein on column).

Nano-liquid chromatography electrospray ionization tandem mass spectrometry

Peptides from the basic reversed-phase prefractionation (20 fractions from two TMT10plex experiments) were analyzed using an Orbitrap Fusion Lumos Tribrid mass spectrometer (Thermo Fisher Scientific) equipped with a Thermo Scientific Dionex Ultimate 3000 ultrahigh-pressure chromatographic system (Thermo Fisher Scientific) and an Advion TriVersa NanoMate (Advion Biosciences Inc.) as the nanospray interface. Peptide mixtures were loaded onto a µ-precolumn (300 µm inside diameter × 5 mm, C18 PepMap100, 5 µm, 100 Å, C18 trap column; Thermo Fisher Scientific) at a flow rate of 15 µl/min and separated using a C18 analytical column (Acclaim PepMap RSLC: 75 µm × 75 cm, C18, 2 µm, nanoViper) at a flow rate of 200 nl/min over a 300-min run comprising three consecutive steps with linear gradients from 1 to 35% B in 262 min, from 35 to 50% B in 5 min, and from 50 to 85% B in 2 min, followed by isocratic elution at 85% B for 5 min and stabilization to initial conditions (A = 0.1% formic acid in water, B = 0.1% formic acid in acetonitrile). The mass spectrometer was operated in data-dependent acquisition mode. In each data collection cycle, one full MS scan (400 to 1600 m/z) was acquired in the Orbitrap [1.2 × 10⁵ resolution setting and an automatic gain control (AGC) target of 2 × 10⁵].
The following MS2-MS3 analysis was conducted with a top speed approach. The most abundant ions were selected for fragmentation by collision-induced dissociation (CID). CID was performed with a collision energy of 35%, 0.25 activation Q, an AGC target of 1 × 10 4 , an isolation window of 0.7 Da, a maximum ion accumulation time of 50 ms, and turbo ion scan rate. Previously analyzed precursor ions were dynamically excluded for 30 s. For the MS3 analyses for TMT quantification, multiple fragment ions from the previous MS2 scan (SPS ions; synchronous precursor selection) were co-selected and fragmented by HCD using a 65% collision energy and a precursor isolation window of 2 Da. Reporter ions were detected using the Orbitrap with a resolution of 60,000, an AGC of 1 × 10 5 , and a maximum ion accumulation time of 120 ms. Spray voltage in the NanoMate source was set to 1.60 kV. Radio frequency lenses were tuned to 30%. The spectrometer was working in positive polarity mode, and singly charge state precursors were rejected for fragmentation. Database search Database searches were performed with Proteome Discoverer v2.1.0.81 software (Thermo Fisher Scientific) using Sequest HT search engine and UniProt Canonical and Isoforms DROME_2017_06 with contaminants. Search was run against targeted and decoy database to determine the false discovery rate (FDR). Search parameters included trypsin, allowing for two missed cleavage sites, carbamidomethyl in cysteine and TMT peptide N terminus as static modification and TMT in K, methionine oxidation, and acetylation in protein N terminus as dynamic modifications. Peptide mass tolerance was 10 parts per million (ppm), and the MS/MS tolerance was 0.6 Da in MS2 and 20 ppm in MS3. Peptides with a q value lower than 0.1 and an FDR of <1% were considered as positive identifications with a high confidence level. Quantitative analysis TMT reporter ion intensities were used for protein quantification. Unique peptides (peptides that are not shared between different protein groups) were considered for further quantitative and statistical analysis. Within each TMT experiment, peptide quantitation was normalized by summing the abundance values for each channel over all peptides identified within an experiment, and then the channel with the highest total abundance was taken as a reference and all abundance values were corrected in all other channels by a constant factor per channel so that, at the end, the total abundance is the same for all channels. Protein quantitation was done by summing all peptide normalized intensities for a given protein. Protein intensities were scaled so that, for every protein in an experiment, the average of all channels is 100. Proteins were only considered quantifiable if all quan channels have abundance values. DanteR (50), by Pacific Northwest National Laboratory, was used to preprocess, visualize data (boxplots and principal components analysis), and perform relative quantification of proteins labeled with TMT. Protein quantitative measurements were log 2 -transformed, and normalization across the four TMT10plex experiments was performed using quantile normalization (51). Two-way analysis of variance (ANOVA) was performed at the protein level using a linear model. Conditions were considered as the principal factor and TMT batch as the second factor. Weighting function was used to allow data variability to depend on data value. Comparisons considering condition or age were performed. 
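Before the statistical testing described next, the channel-normalization and protein roll-up steps above (total-abundance scaling per channel, protein quantitation as the sum of its peptides, per-protein channel average scaled to 100) can be summarized in a few lines. The sketch below is only an illustration in pandas, not the Proteome Discoverer/DanteR workflow actually used; the column names are invented.

```python
# Illustration of the described TMT roll-up, not the study's actual pipeline.
import pandas as pd

def summarize_tmt(peptides, channels):
    totals = peptides[channels].sum()                  # summed abundance per channel
    factors = totals.max() / totals                    # scale every channel to the largest total
    normalized = peptides[channels].mul(factors, axis=1)
    proteins = normalized.groupby(peptides["protein"]).sum()      # protein = sum of its peptides
    proteins = proteins.div(proteins.mean(axis=1), axis=0) * 100  # per-protein channel mean = 100
    return proteins.dropna()                           # keep proteins quantified in all channels

peptides = pd.DataFrame({
    "protein": ["Phf7", "Phf7", "Sxl"],
    "tmt126": [1000.0, 2000.0, 500.0],
    "tmt127": [1500.0, 2500.0, 400.0],
})
print(summarize_tmt(peptides, ["tmt126", "tmt127"]))
```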
Last, P values were adjusted for multiple testing using the Benjamini-Hochberg FDR correction. Data were also processed with a one-way ANOVA statistical analysis to also take into account those proteins found in only one batch. Differentially expressed proteins were determined using an adjusted P value cutoff of 0.05 and a fold change lower than 0.67 (down) or higher than 1.5 (up).

GO analysis

Functional annotation of GO terms was performed using the online tool Database for Annotation, Visualization and Integrated Discovery (DAVID 6.8; http://david.abcc.ncifcrf.gov/). GO terms for biological process (GOTERM_BP_DIRECT) with a P value of <0.05 were accepted as a significant enrichment.

RNA-seq sample preparation and sequencing

RNA was isolated using magnetic beads (RNAClean XP, Beckman Coulter, A63987) from 10 Drosophila larval brains following the protocol described in (24). RNA concentration was determined with a Qubit fluorometer (Thermo Fisher Scientific), and integrity was assessed on the Agilent 2100 Bioanalyzer. RNA poly(A) purification was performed from 0.8 to 1.2 µg of total RNA using the NEBNext Poly(A) mRNA Magnetic Isolation Module Kit (NEB, E7490). Then, complementary DNA (cDNA) generation, adaptor ligation, and library amplification were done with the NEBNext Ultra RNA II Library Prep Kit for Illumina and NEBNext Multiplex Oligos for Illumina Sets 1, 2, and 3 (NEB E7770, E7335, E7500, and E7710, respectively) following the manufacturer's instructions. Library amplification was performed with SYBR Green (Sigma, S9430) to establish the number of cycles necessary before quantification (Qubit fluorometer), size-distribution checks (2100 Bioanalyzer), and sequencing. Libraries were sequenced in 125-nucleotide paired-end lanes of an Illumina HiSeq 2500 system, obtaining between 27 million and 56 million reads per sample.

RNA-seq data processing

Data were processed with the Grape RNA-seq pipeline (https://github.com/guigolab/grape-nf). Raw reads were aligned to the fly genome (dmel6 assembly from http://hgdownload.soe.ucsc.edu/goldenPath/dm6/bigZips/dm6.fa.gz) and transcriptome (dmel6-05 from ftp://ftp.flybase.net/genomes/Drosophila_melanogaster/dmel_r6.05_FB2015_02/gff/dmel-all-no-analysis-r6.05.gff.gz) using STAR (https://doi.org/10.1093/bioinformatics/bts635, v2.4.0j). A maximum of four mismatches per sequence was allowed, and only reads with at most 10 multiple mappings were retained. Genes and transcripts were quantified using RNA-seq by Expectation Maximization (RSEM) (https://doi.org/10.1186/1471-2105-12-323, v1.2.21) with default parameters. RPKM (reads per kilobase of exon per million fragments mapped) was used as a measure of gene and transcript abundance. The library size for each sample was scaled according to the TMM (trimmed mean of M values) normalization method (https://doi.org/10.1186/gb-2010-11-3-r25). Several filters were applied to the gene expression matrix to obtain a stable gene set for the analysis, as sketched below. We removed all ribosomal RNA (rRNA) and transfer RNA (tRNA) genes as well as messenger RNA (mRNA) genes coding for ribosomal proteins. Lowly expressed genes with fewer than 10 reads in all replicates were discarded. To ensure reproducibility across replicates, we applied the nonparametric Irreproducible Discovery Rate (IDR) (53) on read counts for all pairwise combinations of the three replicates: if IDR < 0.01 for any comparison, the read counts in all replicates were set to 0. Last, we only kept genes with RPKM > 1 in at least three samples.
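The simplified sketch below mirrors only the count- and RPKM-based filters just described; the IDR reproducibility step is omitted, and the gene names, biotype labels, and column layout are placeholders rather than the Grape/RSEM output format actually used.

```python
# Simplified illustration of the gene-level filters (biotype exclusion, low-count removal,
# RPKM threshold). Not the study's pipeline; all labels and values are invented.
import pandas as pd

def filter_gene_matrix(counts, rpkm, biotype, min_reads=10, min_rpkm=1.0, min_samples=3):
    keep = ~biotype.isin(["rRNA", "tRNA", "ribosomal_protein"])   # drop rRNA/tRNA/RP genes
    keep &= (counts >= min_reads).any(axis=1)                     # >=10 reads in at least one replicate
    keep &= (rpkm > min_rpkm).sum(axis=1) >= min_samples          # RPKM > 1 in at least 3 samples
    return counts.loc[keep].index

# Toy data: three genes x four samples.
idx = ["Phf7", "CG15930", "some_rRNA"]
counts = pd.DataFrame([[50, 60, 5, 80], [3, 2, 4, 1], [900, 800, 700, 600]], index=idx)
rpkm = pd.DataFrame([[5.0, 6.0, 0.5, 8.0], [0.2, 0.1, 0.3, 0.1], [90.0, 80.0, 70.0, 60.0]], index=idx)
biotype = pd.Series(["protein_coding", "protein_coding", "rRNA"], index=idx)
print(list(filter_gene_matrix(counts, rpkm, biotype)))   # -> ['Phf7']
```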
This resulted in 9340 genes out of 17,159. Expression coverage files in BigWig format (https://genome.ucsc.edu/goldenPath/help/bigWig.html) were generated using STAR (v2.4.0j) and later normalized according to the scaling factors obtained by the TMM normalization method. The resulting files were visually inspected with the Integrated Genome Browser (http://igb.bioviz.org/) software.

RNA-seq data analysis

Batch effect correction was applied to the gene expression matrix using Limma (https://doi.org/10.1093/nar/gkv007, v3.30.13). Gender was used as the fixed factor for batch correction.

Differential expression analysis

Pairwise comparisons were performed to identify differentially expressed genes between females and males and between genotypes using edgeR. Genes with fold change > 2 and FDR < 0.05 were considered differentially expressed. To define the transcriptomic signatures (M-tSDS and F-tSDS), we filtered out genes with a coefficient of variation (CV%) greater than 50% across all replicates.

Wilcoxon test and generation of volcano plots

Two-sample Wilcoxon tests were performed using the function wilcox.test from the R standard library (default parameters). Volcano plots represent the log2 fold change of the expression of M-tSDS and F-tSDS genes (x axis) versus the negative log10 of the P values (y axis), as obtained from the differential expression analysis between Phf7 N2 ; l(3)mbt ts1 and l(3)mbt ts1 in male and female samples.

Gene expression analysis by RT-qPCR

Total RNA was isolated from dissected larval brains using proteinase K and deoxyribonuclease (DNase) (Invitrogen) and purified using magnetic beads. RNA yield and quality were assessed with Qubit, followed by reverse transcription using random hexamers. Transcript levels were measured with PowerUp SYBR Green Master Mix on a QuantStudio 6 Flex system (Applied Biosystems). Initial activation was performed at 95°C for 20 s, followed by 40 cycles of 95°C for 5 s and 60°C for 15 s. The melting curve was generated from 50° to 95°C with an increment of 0.5°C every 5 s. Primers specific for the Phf7-RC transcript used for RT-qPCR were AGTTCGGGAATTCAACGCTT (forward) and GAGATAGCCCTGCAGCCA (reverse) (34). Measurements were performed on biological triplicates, with technical duplicates of each biological sample. RNA levels were normalized to rp49. Relative transcript levels were calculated using the 2^−ΔΔCt method (54); a worked example of this calculation is given below, after the supplementary material list.

SUPPLEMENTARY MATERIALS

Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/5/8/eaaw7965/DC1
Fig. S1. brat tumors do not present sex-dependent dimorphism.
Fig. S2. mbt tumor size is sex dependent.
Fig. S3. Loss of Phf7 reduces the expression of mbt M-tSDS genes and increases the expression of mbt F-tSDS genes in male tissue.
Table S1. Proteomic sex-linked dimorphic signatures.
Table S2. GO terms enriched in the M-pSDS and the F-pSDS.
Table S3. Relative expression levels of L(3)mbt and other components of the LINT and Myb/Muv/Dream complexes between male and female wild-type brain lobes.
Table S4. Sex-dependent dysregulation of MBTS genes in l(3)mbt ts1 tumors versus wild-type and in Phf7 N2 ; l(3)mbt ts1 versus l(3)mbt ts1 .
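As referenced in the RT-qPCR paragraph above, relative transcript levels were obtained with the 2^−ΔΔCt method; the small worked example below shows the arithmetic. The Ct values are invented for illustration; only the normalization gene (rp49) is taken from the text.

```python
# Worked example of the 2^-ΔΔCt calculation; Ct values are hypothetical.
def fold_change_ddct(ct_target, ct_rp49, ct_target_ctrl, ct_rp49_ctrl):
    """ΔCt = Ct(target) - Ct(rp49); ΔΔCt = ΔCt(sample) - ΔCt(control); fold change = 2^-ΔΔCt."""
    ddct = (ct_target - ct_rp49) - (ct_target_ctrl - ct_rp49_ctrl)
    return 2.0 ** (-ddct)

# e.g., Phf7-RC in a tumor brain vs. a wild-type brain (illustrative Ct values only):
print(fold_change_ddct(24.1, 18.0, 28.5, 18.2))   # ~18-fold higher in the tumor sample
```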
Staple aneurysmorrhaphy and suture venoplasty for repair of large bilateral external iliac vein aneurysms in an adolescent

Aneurysms of the iliac veins are very rare; thus, the best approach to management has not yet been defined. We have presented the case of a 17-year-old boy with incidentally identified large bilateral external iliac vein aneurysms. Given the risks of potentially fatal thromboembolism or rupture, he underwent definitive repair of his aneurysms using staple aneurysmorrhaphy combined with additional vein tailoring by suture venoplasty, a technique not previously described for these aneurysms. We have also discussed the etiology, presentation, and our surgical technique to manage this rare condition. Aneurysms of the iliac veins are exceedingly rare, especially in the absence of an underlying arteriovenous fistula or venous outflow obstruction.1,2 Given the rarity of this condition, the optimal approach to management has not yet been identified.1 We have presented the case of an adolescent boy with very large bilateral external iliac vein aneurysms that were repaired using staple aneurysmorrhaphy with additional tailoring by suture venoplasty. The patient provided written informed consent for the report of his case details and imaging studies.

CASE REPORT

A 17-year-old boy was referred to our Vascular Anomalies Center because of an incidental finding of bilateral large external iliac vein aneurysms. He had recently developed an episode of self-resolving hematuria after being hit in the abdomen by a ball, prompting an abdominal imaging study. He denied lower extremity or groin pain, lower extremity swelling, chest pain, and shortness of breath. Additionally, he denied a history of abdominal or pelvic trauma, prior surgical intervention, and a family history of vascular anomalies or connective tissue disorders. The physical examination findings were unremarkable, including no findings to suggest additional venous anomalies or an underlying connective tissue disorder. Computed tomography angiography confirmed the presence of large bilateral iliac vein aneurysms extending nearly from the inferior vena cava confluence to the internal aspect of the inguinal canal (Fig 1). The right external iliac vein aneurysm measured 6.8 × 6.0 cm, with a length of 14 cm, and the left Given the risks of thrombosis, pulmonary embolism, and spontaneous rupture, surgical repair was recommended. After a multidisciplinary discussion, low-molecular-weight heparin was initiated in the interim to decrease the risk of thromboembolism, since the history of these aneurysms could not be clearly defined without prior imaging studies. The patient was taken to the operating room electively for excision. The bilateral external iliac vein aneurysms were exposed through a lower laparotomy incision (Fig 2, A), with care taken to identify and protect the vas deferens, ureter, and gonadal vessels. The aneurysms were carefully separated from the iliac arteries. Vessel loops were placed for proximal and distal control at the level of the junction with the common iliac vein and at the inguinal ligament, respectively. Circumferential dissection of the aneurysms was performed. To repair the aneurysm in our patient, a fully grown adolescent, we elected to perform primary repair by staple aneurysmorrhaphy with additional tailoring by suture venoplasty. We started with the larger right-sided aneurysm. First, the aneurysm was manually compressed.
Multiple loads of an Endo GIA stapler (Medtronic, Dublin, Ireland) were fired longitudinally along the anterior portion of the vein, using the nonaneurysmal proximal and distal vein as a guide and intentionally leaving the vein larger than normal to allow for additional tailored tapering using suture venoplasty. Running 5-0 Prolene suture was used along the length of the staple line, imbricating the vein further with each bite to tailor it to a size that was intentionally somewhat larger than a normal iliac vein (Fig 2, B). At completion, the shape and size match between the proximal and distal nonaneurysmal segments of the vein were excellent, with good flow visualized via Doppler ultrasound examination to ensure the manipulation had not caused thrombosis. The same procedure was then performed on the left aneurysm, with similarly excellent size, shape, and venous flow found at completion (Fig 2, C and D). Postoperatively, the patient continued receiving a therapeutic heparin infusion until postoperative day 5, when he was transitioned to low-molecular-weight heparin. He was discharged home the next day with a prescription for anticoagulation therapy. Computed tomography of the abdomen and pelvis was obtained 6 months postoperatively to guide anticoagulation management. The scan showed mild residual dilation bilaterally, as expected, with a maximal diameter of the external iliac vein of 1.9 cm on the right and 2.1 cm on the left without evidence of thrombosis (Fig 3, A). He was transitioned to aspirin, 81 mg daily, with annual duplex ultrasound. Computed tomography was performed 4 years postoperatively to allow for complete evaluation of the iliac venous system, which showed stable, mild residual ectasia (Fig 3, B). At the last follow-up, he remained without symptoms. DISCUSSION Venous aneurysms are uncommon vascular abnormalities, with aneurysms of the iliac veins being particularly rare. 3,4 Owing to their rarity, diagnosis and management have remained a challenge. These aneurysms are classified as primary when arising de novo or secondary when occurring due to trauma, arteriovenous fistula, or proximal venous outflow obstruction. 2,5 Most iliac vein aneurysms are secondary aneurysms, most often occurring in the setting of an arteriovenous fistula. 1 The only three previously reported patients with bilateral iliac vein aneurysms were very active athletes. 6-8 Additionally, unilateral iliac vein aneurysms have been reported in patients who participated regularly in long-distance running and bicycling. 5,9,10 Given the young age of our patient, bilateral involvement, and absence of iliac vein compression, we believe our patient had a congenital venous malformation. Rather than occurring as a direct result of our patient's physical activity, the injury he had sustained as an active adolescent had led to the incidental discovery of these asymptomatic aneurysms on the imaging study. The presentation of these aneurysms is highly variable. In a recent review, 16% of patients were asymptomatic, with the aneurysms identified incidentally. 1 Others have presented with unilateral extremity swelling, venous stasis, pain, abdominal mass or pain, testicular pain, back pain, and/or signs of venous insufficiency. 1,2 However, the initial presentation of some patients will be secondary to the potentially fatal complications of these aneurysms, including pulmonary thromboembolism 10-12 and rupture. 13
Given the rarity of this condition, the approach to management has varied widely and has included observation, anticoagulation, and/or endovascular or open surgical intervention. 1 Because of the known risks of these aneurysms and the young age of our patient, we elected for definitive surgical intervention. Although no approach has been standardized regarding the use of preoperative anticoagulation therapy, we administered it to decrease the risk of thrombosis while awaiting surgery, which also decreased the risk that intraoperative manipulation of the vein would lead to thromboembolism. Multiple surgical techniques, including open and endovascular approaches, have been successfully used. The most commonly used technique for primary aneurysms has been tangential aneurysmectomy with lateral venorrhaphy. 8,13-16 Other approaches to primary aneurysms are shown in the Table. Given that bilateral repair was indicated for our patient, we performed primary repair to obviate the need for multiple prosthetic grafts or harvesting of multiple native veins. We used the stapler to resect a large portion of the aneurysm, followed by suture venoplasty, to effectively control the final size and shape of the vein. The repaired iliac veins were intentionally left slightly larger than normal to decrease the risk of thrombosis within a noncylindrical vein with intimal irregularity and imperfect laminar flow. In this setting, anticoagulation therapy was continued for 6 months postoperatively to further decrease the thrombotic risk. To the best of our knowledge, this approach has not been described for repair of an iliac vein aneurysm and certainly not for bilateral iliac vein aneurysms. CONCLUSIONS Iliac vein aneurysms are exceedingly rare but warrant strong consideration for intervention owing to the risk of potentially fatal complications. Staple aneurysmorrhaphy, combined with suture venoplasty, offers a safe and effective approach for the repair of iliac vein aneurysms, including bilateral aneurysms.
Implementation Of Multi Factor Evaluation Process (MFEP) In Assessment Of Employee Performance Achievement

Human resources are one of the competitive advantages and key elements important for success in competing to achieve goals; they are the element of management that comprises manpower and other resources. Humans are always active and dominant in every organizational activity, and the goals of a company cannot be achieved without the active role of its employees. Performance appraisal is therefore a function of motivation and ability: to complete a task or job, a person must have a certain degree of willingness and level of ability. This study discusses how to carry out a selection in the case of choosing an employee who performs and achieves well. MFEP is a method for obtaining the best solution from several alternative solutions, using 'pairwise comparison' as a basis for making choices; the Multi Factor Evaluation Process (MFEP) method is thus also described as a comparison-based method. Based on performance appraisals, it can be concluded that employee performance appraisal in an organization is an important mechanism for a manager or leader.

Introduction Human resources are a source of competitive advantage and a key element important for success in competing to achieve goals; therefore, the management of human resources is important for the organization and for service to the community, and human resources are part of management, the management element in which the company's workers are found. Humans are always active and dominant in every activity of the organization. Objectives cannot be achieved without the active role of employees, no matter how sophisticated the company's tools are; sophisticated tools owned by the company are of no benefit if the active role of the employees is excluded. Managing employees is difficult and complex, because they bring heterogeneous thoughts, feelings, statuses, desires, and backgrounds into the organization; employees cannot be regulated and fully controlled in the way machines, capital or buildings are managed [1]. Performance appraisal is a function of motivation and ability. To complete a task or job, a person should have a certain degree of willingness and level of ability. A person's willingness and skills are not effective enough without a clear understanding of what is to be done and how to do it. Performance is the real behavior displayed by every person as the work achievement generated by employees in accordance with their role in the company. Based on this notion of performance appraisal, it can be concluded that in a modern organization performance appraisal is an important mechanism for management to use in explaining performance goals and standards and in motivating future individual performance [2]. Performance appraisal is the basis for decisions that affect salary, promotion, termination, training, transfers, and other staffing conditions. From observations and field studies, several problems were found in employee performance appraisal, including:
1. The long duration of the implementation of each successive selection stage, because data processing is still manual.
2. Difficulties in filing all assessment results from one period as evaluation material for the next period.
3. Difficulties in presenting assessment data quickly and transparently.
4. Difficulties in making decisions for employee appraisals due to the lack of supporting data from the results of the previous selection stages.
MFEP is a quantitative method that uses a "weighting system". For each decision that has a strategic influence, it is recommended to use a quantitative approach such as MFEP. The first step in the MFEP method requires that all criteria that are important factors in the consideration be given appropriate weights. The same step is taken for the alternatives to be chosen, which can then be evaluated in relation to these factors of consideration [3]. A Decision Support System (DSS) uses data, provides an easy user interface, and can incorporate the decision maker's thinking into decision making. DSS is intended to support management in carrying out analytical work in situations that are less structured and with unclear criteria [4,5]. DSS is not intended to automate decision making, but rather provides an interactive tool that allows decision makers to carry out various analyses using available models. MFEP is a decision-making model that uses a collective approach to the decision-making process. The steps of the calculation process using the MFEP method [6] are:
1. Determine the factors and their weights, where the total of the weights must be equal to 1 (∑ weighting = 1); these are the factor weights (FW).
2. Fill in the value for each factor that influences the decision making for the data to be processed. The value entered is an objective value, namely the factor evaluation (E), whose value is between 0 and 1.
3. Calculate the weighted evaluation, which is the product of the factor weight and the factor evaluation, and sum all weighted evaluations to obtain the evaluation result.
The MFEP model can be expressed as: WE = FW × E and ∑WE = ∑ (FW × E), where WE = Weighted Evaluation, FW = Factor Weight, E = Evaluation, and ∑WE = Total Weighted Evaluation (a short computational sketch of these steps is given after the Conclusion below).
Research Method This research is development research intended to produce a product for determining the factors that support the achievement and performance of employees in a company; the study is arranged in stages that must be carried out from the beginning of data collection through to producing outcomes or conclusions that are useful and consistent with the introduction. Before designing a system for employee performance appraisal, the problems in the appraisal process must first be described, from collecting data on the running system to performing the data processing, in this case using the MFEP method, and drawing conclusions. Result and Discussion Assessment means choice, that is, the mechanism of choosing between two or more existing possibilities in order to achieve predetermined goals. Assessment is not just the activity of choosing one alternative among the available alternatives; it is a systematic overall process carried out for decision making so that the decision is the best choice [8]. The assessment process begins with identifying a problem, determining the needs, analyzing and choosing alternatives that can solve the problem, implementing that alternative, and ends with evaluating the effectiveness of the decision. The stages passed in the process are as follows: 1. Setting goals (needs) and identifying problems. The design of an appraisal system starts with the existence of a problem or a gap between the current situation and the desired condition.
Before designing a decision support system in a company/agency, it must first be determined what problems are being faced and what goals the company wants to achieve. The problem faced in general is how to make a good and optimal assessment of employees and what the conditions are for making correct, fast and high-quality decisions. The global weight of each alternative, obtained by completing the pairwise comparison matrix of the testing results, can be seen in Table 4. The results of the previous processes, i.e. the assessment criteria for the employees to be appointed, evaluated by the MFEP method, can be seen in the following table, and an example comparison of employee assessment criteria data is shown in Table 8. Conclusion The choice of decisions in evaluating employee performance and achievement with the multi factor evaluation process method is one solution to improve the efficiency and effectiveness of the employee evaluation process. This system can help companies by giving an overview and providing decision support data to the leadership in assessing an employee, namely:
1. The MFEP method is more appropriate for solving multi-dimensional problems such as employee performance appraisal, with many criteria as assessment components for each alternative.
2. The implementation of the MFEP method in evaluating employee performance has the advantage that it can be used to conduct an assessment even if only one employee or object is assessed.
3. The factors that influence the results of calculations using the MFEP method are the criteria or sub-criteria weights, the preference weights, and the nature (type) of the criteria or sub-criteria; in this case the criteria used in assessing employee performance and achievement are Testing, Discipline, Working Time and Loyalty.
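To make the MFEP calculation steps described above concrete, the following short Python sketch computes the weighted evaluations WE = FW × E and the total ∑WE for each alternative. It is a minimal illustration under our own assumptions: the criterion names follow the paper (Testing, Discipline, Working Time, Loyalty), but the weights, employee names and scores are hypothetical examples, not values from the paper's tables.

```python
# Minimal MFEP sketch: factor weights must sum to 1, evaluations lie in [0, 1].
FACTOR_WEIGHTS = {"Testing": 0.35, "Discipline": 0.25, "Working Time": 0.20, "Loyalty": 0.20}

# Hypothetical factor evaluations for each employee (alternative).
EVALUATIONS = {
    "Employee A": {"Testing": 0.80, "Discipline": 0.90, "Working Time": 0.70, "Loyalty": 0.85},
    "Employee B": {"Testing": 0.95, "Discipline": 0.70, "Working Time": 0.80, "Loyalty": 0.60},
}

def total_weighted_evaluation(scores, weights):
    """Return (per-factor WE, total WE) for one alternative."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "factor weights must sum to 1"
    we = {factor: weights[factor] * scores[factor] for factor in weights}
    return we, sum(we.values())

if __name__ == "__main__":
    ranked = []
    for name, scores in EVALUATIONS.items():
        _, total = total_weighted_evaluation(scores, FACTOR_WEIGHTS)
        ranked.append((total, name))
    # The alternative with the highest total weighted evaluation is selected.
    for total, name in sorted(ranked, reverse=True):
        print(f"{name}: total weighted evaluation = {total:.3f}")
```

In this sketch the employee with the highest total weighted evaluation would be chosen as the best-performing alternative, which mirrors the selection logic the MFEP steps describe.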
Glass surface modification using diffusion coplanar surface barrier discharge (DCSBD)

The paper deals with the surface modification of glass substrates using diffuse coplanar surface barrier discharge (DCSBD). The changes of the surface properties after modification with the plasma discharge were observed over time. During eight days, contact angles were measured by the sessile drop method, using distilled water and diiodomethane as testing liquids. The surface free energy and its polar and dispersive components were calculated using the Fowkes method. The morphology and rms-roughness of the modified substrates were evaluated using atomic force microscopy. The effect of the modification by the plasma discharge on the observed surface properties is most noticeable immediately after modification. After 3 days, the observed surface properties of the modified surface are comparable to those of the unmodified surface.

Introduction At present, the economic aspect is a very important factor in the choice of a material and its properties. It is not always necessary or advantageous to create new materials. From an economic point of view, it is very useful when the desired properties can be achieved only by modifying an existing material. Diffuse Coplanar Surface Barrier Discharge (DCSBD) is a recent plasma source with tremendous potential in the surface treatment industry. The DCSBD discharge has unique properties that distinguish it from other plasma sources for surface treatment. It generates a diffuse type of plasma at atmospheric pressure in air, but also in many other working gases without adding a noble gas. Very high plasma power densities of up to 100 W cm⁻³ allow short plasma exposure times and hence high processing speeds [1]. Due to its affordability and high industrial usability, modification of the glass surface by the DCSBD plasma discharge is a method of increasing interest and use in industry. Its great advantage is that the original advantageous properties of the material are maintained while major changes in the surface properties of the material are possible. Significant changes of the material surface include an increase in the surface energy of the material and thus an increase in wettability. This change in surface properties is important, for example, when applying additional coatings [2]. The contact angle (CA) is one of the few solid-liquid-gas interface properties that can be measured directly. It is the angle between the tangent to the drop surface at the three-phase contact point and the solid-liquid interface. A particularly important feature of this method is its high sensitivity to the chemical structure of the topmost layer of molecules; it is a relatively simple, inexpensive and widely used technique for characterizing different types of surfaces and a tool for calculating their surface energy [3,4]. Atomic force microscopy (AFM) is a complex instrument used to investigate surfaces at the micro- and nanometer scale [5,6]. The pictorial representations of the surfaces can be similar to those seen from scanning electron microscope (SEM) measurements, but AFM provides much more information in the z-direction than SEM [7]. To our knowledge, there is no published work on the effectiveness of glass surface modification using DCSBD. The aim of this paper is to determine the effectiveness of glass surface modification by DCSBD on the basis of monitoring changes in surface properties: hydrophobicity, surface free energy and rms-roughness.
Preparation of samples Microscope slides with dimensions 76 × 26 mm and thickness 1-1.2 mm were used for the experiments. The samples were washed with water and detergent, and rinsed with distilled water and isopropyl alcohol. The cleaned slides were dried at 100 °C for 10 minutes and placed in a container in which they were protected from light and air humidity.

Plasma treatment The cleaned slides were placed on the KPR 200 plasma reactor electrodes and treated for 60 seconds at 375 W (figure 1). The distance between the electrode and the sample was 0.16 mm. We carefully put the treated slides into a container that protected them from light and air humidity.

Measurement of contact angles Contact angles (CA) were measured at 0, 1, 2, 3, 4, 7 and 8 days after surface modification by the DCSBD plasma discharge. The contact angle values of distilled water and diiodomethane on the modified surfaces were determined by the sessile drop method. Ten drops of 10 µl of the test liquid were placed on the examined modified surface by means of a micropipette so that they were evenly distributed over the surface. The contact angles of distilled water and diiodomethane were calculated from the scanned drop profiles. The results were statistically processed [8].

Determination of surface free energy From the measured contact angles of distilled water and diiodomethane, the surface free energy (γ_s) values of the modified surfaces and their polar (γ_s^p) and dispersive (γ_s^d) components were calculated using the Fowkes method [9]. First, the contact angle θ for the solid surface was determined using a non-polar liquid. Then the value of γ_s^d was calculated from equation (1):

γ_s^d = γ_l (1 + cos θ)² / 4,  (1)

where γ_l is the surface energy of the non-polar liquid, for which γ_l = γ_l^d is valid. The value of the contact angle θ determined for the polar liquid, for which γ_l = γ_l^d + γ_l^p, and the calculated value of γ_s^d were used to calculate the value of γ_s^p according to equation (2):

γ_l (1 + cos θ) = 2 [√(γ_s^d γ_l^d) + √(γ_s^p γ_l^p)],  (2)

with the total surface free energy given by γ_s = γ_s^d + γ_s^p. Diiodomethane was chosen as the non-polar liquid (γ_l = γ_l^d = 50.8 mJ m⁻²) and distilled water as the polar liquid (γ_l^d = 21.8 mJ m⁻², γ_l^p = 51.0 mJ m⁻²).

Measurement on AFM Atomic force microscopy (AFM) images were measured by an NT-206 (Micro Test Machines, Belarus) device with the probe head operated in contact regime. A Si3N4 tip (Micro Masch NSC 11/AlBS) with stiffness (spring constant) k = 3 N m⁻¹ was used. The dimensions of the analysed area were 10 × 10 µm. The surface topography was measured at room temperature and in ambient atmosphere. AFM images were analysed using the Surface Xplorer software. AFM was used as a method for observing the surface quality. During the topography evaluation using the AFM method, height irregularity values at certain points of the surface were determined. A set of height values was obtained; as an adjustment, the obtained height values were referenced to the plane passing through the three lowest values of the given image, and this led to the measurement output. A similar adjustment was also used in the fast scan direction, where the obtained height values were related to the line assigned to the two lowest values of the given scan. With these adjustments, the surface image most relevant to reality is obtained. From the obtained set of height values z_ij, the root-mean-square deviation of the measured values around their mean z̄, the so-called rms-roughness of the surface, was calculated using the following equation [10]:

rms = √[ (1/(n·m)) Σ_i Σ_j (z_ij − z̄)² ],

where n is the number of rows and m is the number of columns of the AFM raster image, z_ij is the height at point ij and z̄ is the average of the measured height values z_ij.
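The Fowkes calculation and the rms-roughness evaluation described above are easy to reproduce numerically. The following Python sketch is a minimal illustration under our own assumptions: the function and variable names are ours, the reference liquid data are the values quoted in the paper (diiodomethane 50.8 mJ m⁻²; water 21.8 mJ m⁻² dispersive and 51.0 mJ m⁻² polar, i.e. 72.8 mJ m⁻² total), and the example contact angles and synthetic height map are hypothetical inputs.

```python
import numpy as np

# Reference liquid data used in the paper (mJ/m^2); water total = dispersive + polar.
DIIODOMETHANE = {"total": 50.8, "dispersive": 50.8, "polar": 0.0}
WATER         = {"total": 72.8, "dispersive": 21.8, "polar": 51.0}

def fowkes_surface_energy(theta_diiodomethane_deg, theta_water_deg):
    """Return (dispersive, polar, total) surface free energy of the solid in mJ/m^2."""
    th_d = np.radians(theta_diiodomethane_deg)
    th_p = np.radians(theta_water_deg)
    # Eq. (1): purely dispersive liquid gives the dispersive component of the solid.
    gamma_s_d = DIIODOMETHANE["dispersive"] * (1 + np.cos(th_d)) ** 2 / 4.0
    # Eq. (2): polar liquid, solve for the polar component of the solid.
    lhs = WATER["total"] * (1 + np.cos(th_p)) / 2.0
    root_polar = (lhs - np.sqrt(gamma_s_d * WATER["dispersive"])) / np.sqrt(WATER["polar"])
    gamma_s_p = max(root_polar, 0.0) ** 2
    return gamma_s_d, gamma_s_p, gamma_s_d + gamma_s_p

def rms_roughness(z):
    """Root-mean-square roughness of an n x m array of AFM height values."""
    z = np.asarray(z, dtype=float)
    return np.sqrt(np.mean((z - z.mean()) ** 2))

if __name__ == "__main__":
    # Hypothetical contact angles, only to exercise the formulas.
    d, p, total = fowkes_surface_energy(theta_diiodomethane_deg=45.0, theta_water_deg=20.0)
    print(f"dispersive = {d:.1f}, polar = {p:.1f}, total = {total:.1f} mJ/m^2")
    # Synthetic 256 x 256 height map (same role as the 10 x 10 um AFM scan).
    heights = np.random.default_rng(0).normal(0.0, 2.5, size=(256, 256))
    print(f"rms roughness = {rms_roughness(heights):.2f} (units of the input heights)")
```

The sketch simply inverts equations (1) and (2) for the two measured contact angles and applies the rms formula to a height array; it is not the authors' analysis code.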
The rms parameter was used to quantify the surface roughness. The graphical dependence (figure 4) of the calculated surface free energy on the time elapsed after surface modification by the diffuse coplanar surface barrier discharge (DCSBD) shows a gradual decrease in the surface free energy value. On the day of surface modification the surface free energy was highest, and it gradually decreased over time. After 8 days its value was close to the surface free energy of the unmodified surface. Based on this, we can conclude that the surface modification is most effective in the shortest possible time after treatment, and its effects gradually disappear. The time dependence of the polar component follows a trend similar to that of the surface free energy for all samples (figure 5). On the day of surface modification the polar component had the highest value; over time it dropped to the value observed for the sample without surface modification. The values of the dispersion component of the surface free energy (figure 6) showed behaviour similar to that of the polar component. Figure 7 shows 2D AFM images of the surface of the glass samples from the time of surface modification by DCSBD. The surface of the plasma-untreated glass has scratches that disappear after the modification by the DCSBD plasma discharge, and the bumps are uniformly distributed over the surface. The uniform distribution of the bumps is still observed on the 2nd day after the modification; on the 3rd day there is a partial disappearance of the bumps, and in the following days after the plasma treatment the bumps on the glass surface gradually disappeared.
Telomeric Repeat-Binding Factor Homologs in Entamoeba histolytica: New Clues for Telomeric Research

Telomeric Repeat Binding Factors (TRFs) are architectural nuclear proteins with critical roles in telomere-length regulation, chromosome end protection and fusion prevention, DNA damage detection, and senescence regulation. Entamoeba histolytica, the parasite responsible for human amoebiasis, harbors three homologs of human TRFs, based on sequence similarities to their Myb DNA binding domain. These proteins were dubbed EhTRF-like I, II and III. In this work, we revealed by an in silico approach that EhTRF-like I and II share similarity with human TRF1, while EhTRF-like III shares similarity with human TRF2. The analysis of the ehtrf-like genes showed they are expressed differentially under basal culture conditions. We also studied the cellular localization of the EhTRF-like I and III proteins using subcellular fractionation and western blot assays. The EhTRF-like I and III proteins were enriched in the nuclear fraction, but they were also present in the cytoplasm. Indirect immunofluorescence showed that these proteins were located at the nuclear periphery, co-localizing with Lamin B1 and trimethylated H4K20, which is a characteristic mark of heterochromatic regions and telomeres. We found by transmission electron microscopy that EhTRF-like III was located in regions of more condensed chromatin. Finally, EMSA assays showed that EhTRF-like III forms specific DNA-protein complexes with telomere-related sequences. Our data suggest that the EhTRF-like proteins play a role in the maintenance of the chromosome ends in this parasite. INTRODUCTION Telomeres are specialized protein-DNA complexes localized at the end of eukaryotic chromosomes (Blackburn and Gall, 1978; Meyne et al., 1989; Giraud-Panis et al., 2013). Telomeric DNA consists of tandem arrays of G + T-rich repetitive sequences ending in a single-stranded G-rich overhang which is added by the ribonucleoprotein enzyme telomerase (O'Sullivan and Karlseder, 2010; Giraud-Panis et al., 2013). The length of these arrays varies among species, ranging from a few hundred base pairs arranged in irregular tandem repeats in Saccharomyces cerevisiae (Lue, 2010) to thousands of base pairs of a TTAGGG repeat in vertebrate telomeres (Moyzis et al., 1988; Giraud-Panis et al., 2013). Proteins that bind to telomeric DNA play critical roles in telomere length regulation and chromosomal end protection in eukaryotic organisms. They form a machinery known as the Telosome or Shelterin complex (Palm and de Lange, 2008). These protein complexes include members or functional homologs of the TTAGGG Repeat Binding Factors, Telomeric Repeat Binding Factors (TRF) and telobox family members (Chong et al., 1995; Broccoli et al., 1997b). TRF proteins are architectural nuclear proteins involved in diverse roles, such as telomere length regulation, chromosome end protection, prevention of chromosome fusion, sensing of DNA damage, and regulation of senescence (de Lange, 2005; Palm and de Lange, 2008). These proteins are conserved from lower eukaryotes to plants and mammals (Horvath, 2000-2013). The genome of H. sapiens contains two genes coding for TRF proteins: TRF1 and TRF2; these proteins bind as homodimers to the double-stranded telomeric DNA sequence (Chong et al., 1995; Broccoli et al., 1997b).
TRF1 controls the length of telomeric repeats, whereas TRF2 is involved in the assembly of the terminal t-loop, negative telomere length regulation and chromosomal end protection (Palm and de Lange, 2008). Proteins of the TRF family have similar architectures, defined by the presence of two domains: (i) the conserved single MYB-type helix-turn-helix (HTH) DNA-binding domain (MYB DBD; 55 amino acids) located in their C-terminal region; this domain contains three evenly spaced tryptophan residues and presents a telobox motif (VDLKDKWRT, consensus VxxKDxxR) in the third α-helix (Bilaud et al., 1996); and (ii) the TRF-homology domain (TRFH; 200 amino acids) situated in the N-terminal region, which is unique to members of this family (Broccoli et al., 1997b); its function is related to homodimerization and protein-protein interactions with other telomeric proteins (Fairall et al., 2001). Additionally, the TRF1 and TRF2 proteins diverge in their N-terminal domain, which is rich in acidic or basic residues, respectively (Broccoli et al., 1997a; Palm and de Lange, 2008). Genes encoding homologs of TRF1 and TRF2 have been found in the genomes of Trypanosoma brucei, Trypanosoma cruzi and Leishmania major based on similarities to the C-terminal MYB DBD (Li et al., 2005; da Silva et al., 2010). Besides TRF1 and TRF2, telomeric DNA also requires the binding of other specific proteins, such as Rap1 (repressor/activator protein 1), POT1 (protection of telomeres 1), TIN2 (TRF1-interacting nuclear factor 2) and TPP1, which together form the Shelterin complex in H. sapiens (Palm and de Lange, 2008). In trypanosomes, besides TRF2, a homolog of Rpa-1 has also been identified, suggesting that the telomeric function is conserved and that the telomeric machinery evolved early in eukaryotes (Lira et al., 2007). In Entamoeba histolytica, the causative protozoan of human amoebiasis, the MYB DBD is the most abundant domain related to transcriptional regulation (Clark et al., 2007). MYB DBD-containing proteins in this parasite are clustered into three monophyletic groups. Families I and III are related to transcriptional factors and were dubbed EhMybR2R3 and EhMybSHAQKYF, respectively (Meneses et al., 2010). Family II includes single-repeat proteins related to human telomeric binding proteins (Meneses et al., 2010). In E. histolytica the identification of telomeric signatures has been a challenging task. Although the first draft of the E. histolytica genome was published in 2005, it has not been possible to identify sequences corresponding to the terminal ends of the chromosomes; neither canonical telomeric sequences nor orthologs of telomerase genes have been identified (Loftus et al., 2005; Clark et al., 2007; Lorenzi et al., 2010). However, 10% of the E. histolytica genome corresponds to tRNA genes which are associated with short tandem repeats (Loftus et al., 2005). There are 25 different types of long tandem arrays that contain between 1 and 5 tRNA types per repeat unit and STRs which resemble microsatellites (Clark et al., 2006; Tawari et al., 2008). It has been proposed that these arrays could localize at the chromosome ends, acting as telomeric regions that fulfill a structural role in the genome (Clark et al., 2006; Tawari et al., 2008). In addition, E. histolytica chromosomes do not completely condense and there is considerable variation in their chromosome size, maybe due to expansion and contraction of telomeric repeats, as in other protists (Patarapotikul and Langsley, 1988; Melville et al., 1999; Willhoeft and Tannich, 2000).
Until now, no telomeric sequences or protein complexes implicated in telomere function have been described in this parasite. The study of TRF homologs will help to gain insight into the telomere biology of E. histolytica. Thus, in this work, we identified and characterized the TRF-like proteins of E. histolytica as homologous to human TRF1 and TRF2. We observed their nuclear localization in condensed chromatin regions, their co-localization with Lamin B1 and trimethylated H4K20, and their capacity to form DNA-protein complexes with telomere-related sequences. Our results suggest that the TRF-like proteins of E. histolytica play a similar function to that of their human counterparts. However, further experiments are still needed to address their role in the maintenance of telomeres in this parasite. In silico Analysis of the EhTRF-Like Proteins of E. histolytica Amino acid sequences of proteins coded by the genes from loci EHI_001090, EHI_148140, EHI_001110, EHI_009820 and EHI_074810 were obtained from the AmoebaDB database (http://amoebadb.org/amoeba/). Such sequences were used as bait to perform queries using the DELTA-BLAST (Domain Enhanced Lookup Time Accelerated BLAST) algorithm to identify orthologs in other members of Entamoeba or other eukaryotes (https://blast.ncbi.nlm.nih.gov/Blast.cgi). The Percent Identity (PID) was calculated using the amino acid sequences of the E. histolytica proteins coded by the above-mentioned genes, TRF1, TRF2, and proteins from the organisms corresponding to the best hits, taking gaps into account, using the following equation: PID = (identical positions/length of the alignment) × 100. The predicted secondary structure of the complete TRF1 and EhTRF-like I, II and III proteins (corresponding to AmoebaDB IDs EHI_001090, EHI_001110, and EHI_148140, respectively) was determined using the PSIPRED tool (http://bioinf.cs.ucl.ac.uk/psipred/) and aligned with the ClustalW2 software (https://www.ebi.ac.uk/Tools/msa/clustalw2/). Molecular weight, pI and post-translational modifications were analyzed using the ExPASy Compute pI/Mw (https://web.expasy.org/compute_pi/), ProtPi (https://www.protpi.ch/) and ModPred (http://www.modpred.org/) tools. Nuclear localization signals (NLS) were located using the SeqNLS (http://mleg.cse.sc.edu/seqNLS/) and cNLS Mapper (http://nls-mapper.iab.keio.ac.jp/cgi-bin/NLS_Mapper_help.cgi) tools. The amino acid sequences of the MYB DBD from TRF proteins were aligned using ClustalW2 and phylogenetic analysis was inferred using the Neighbor-Joining method. The evolutionary distances were computed using the Poisson correction method. Evolutionary analyses were conducted using MEGA 7 (Kumar et al., 2016). The phylogenetic tree was constructed with a bootstrap of 1,000 replicates. To identify orthologous components of the Shelterin machinery in the E. histolytica genome, the amino acid sequences of Homo sapiens Rap1 (Q9NYB0), TIN2 (Q9BSI4), TPP1 (Q96AP0), and POT1 (Q9NUX5) were obtained from the UniProt database (http://www.uniprot.org/) and used as bait to make queries with the BLAST algorithm (http://blast.ncbi.nlm.nih.gov). The presence of conserved functional domains in the identified proteins was analyzed in the Pfam database (http://pfam.xfam.org/). RT-PCR Assays Total RNA from E. histolytica trophozoites was isolated following the Trizol® LS Reagent (Invitrogen) protocol and then semiquantitative RT-PCR assays were performed with 100 ng of total RNA.
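As a side note on the in silico analysis above, the PID formula is straightforward to compute from a pairwise alignment. The short Python sketch below is a minimal illustration (the function and variable names are ours, not from the paper), assuming two already-aligned sequences of equal length in which gaps are represented by '-' and counted in the alignment length:

```python
def percent_identity(aligned_seq1: str, aligned_seq2: str) -> float:
    """PID = (identical positions / length of the alignment) x 100, gaps included in the length."""
    if len(aligned_seq1) != len(aligned_seq2):
        raise ValueError("Aligned sequences must have the same length")
    identical = sum(
        1
        for a, b in zip(aligned_seq1, aligned_seq2)
        if a == b and a != "-"  # a gap aligned to a gap does not count as an identity
    )
    return 100.0 * identical / len(aligned_seq1)

# Example with two short, hypothetical aligned fragments (8 identities over 9 columns ~ 88.9%).
print(percent_identity("VDLKDKWRT", "VDLKD-WRT"))
```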
Primers used to amplify the ehtrf-like I, ehtrf-like II, ehtrf-like III and 40s rps2 genes were designed using Primer-BLAST (https://www.ncbi.nlm.nih.gov/tools/primer-blast/) and manually corrected (Table S1). To validate the specificity of these primers, we amplified the specific sequences from 50 ng of genomic DNA obtained with the Wizard Genomic purification kit (Promega). PCR products were separated by gel electrophoresis in 2% agarose gels, stained with RedGel™ Nucleic Acid Gel Stain 10,000X (BIOTUM) and visualized in a standard UV transilluminator. Relative quantification by qRT-PCR was performed using the QuantiFast SYBR Green RT-PCR kit (Qiagen) and 50 ng of total RNA, according to the manufacturer's instructions. Relative changes in gene expression were calculated by the comparative Ct (ΔΔCt) method using 40s rps2 as the internal reference gene. Absolute quantitation was performed using a 10-fold serially diluted standard curve of each of the ehtrf-like I, II, III and 40s rps2 genes in parallel with qRT-PCR of RNA from trophozoites grown in basal conditions. Reaction volumes were set at 25 µl with the QuantiFast SYBR Green RT-PCR kit (Qiagen), and qRT-PCR was performed using 50 ng of total RNA and 1 µM of each primer, according to the manufacturer's instructions. Initial thermal cycling conditions were 1 cycle of 50 °C for 10 min for reverse transcription, followed by 1 cycle of 95 °C for 5 min and 40 cycles of denaturation at 95 °C for 10 s and annealing/extension at 55 °C for 30 s. Plotting Ct values vs. copy number of the different genes in a standard curve allowed the copy numbers of the ehtrf-like genes to be approximated from the Ct values. The data shown are displayed as the mean with standard error of triplicates from duplicate independent experiments. GraphPad Prism 6.0e software was used for two-tailed Student's t-test analyses. Production of Polyclonal Antibodies Against EhTRF-Like I and EhTRF-Like III The complete amino acid sequences of the EhTRF-like I and EhTRF-like III proteins were aligned using the ClustalW2 tool to identify unique regions of these proteins. Subsequently, to determine hydrophobic regions, the EhTRF-like I and III amino acid sequences were analyzed using the Hopp-Woods program (Hopp and Woods, 1981). Finally, a prediction of B epitopes was made using the ABCpred server (http://crdd.osdd.net/raghava/abcpred/). Differential peptides, CTLPSVGNALIPPS and CNKQKVQPQVSQPH for EhTRF-like I and III, respectively, were synthesized, conjugated to Keyhole Limpet Hemocyanin (KLH) (GL Biochem, Shanghai) and used to immunize New Zealand male rabbits. Before immunization, the pre-immune serum (PS) was obtained, and then the rabbits were subcutaneously inoculated with 400 µg of each peptide diluted in TiterMax® Gold (Sigma-Aldrich). Four booster injections (500 µg each), at 15-day intervals, were given. One week after the last immunization, rabbits were bled, and polyclonal antisera were obtained and tested by western blot assays against total extracts of E. histolytica trophozoites. Subcellular Fractionation E. histolytica trophozoites were recovered by centrifugation and lysed as follows to obtain subcellular fractions. Soluble nuclear extracts (Ns) were prepared as described by Schreiber et al. (1989). Briefly, 1 × 10⁶ trophozoites were resuspended by gentle pipetting in 400 µl of cold buffer A (10 mM HEPES pH 7.9, 10 mM KCl, 0.1 mM EDTA, 0.1 mM EGTA, 1 mM DTT, 0.5 mM PMSF).
Trophozoites were allowed to swell on ice for 15 min, after which 25 µl of 10% NP-40 solution (Sigma-Aldrich) was added and the mixture was vigorously vortexed for 30 s. The homogenate was centrifuged for 10 min at 14,000 rpm to separate the supernatant containing the cytoplasmic extract (C) and the pellet containing the nuclei. The pellet was resuspended in 1 ml of buffer A and layered on 1 ml of buffer A containing 0.34 M sucrose (Sigma-Aldrich), then mixed and centrifuged for 10 min at 14,000 rpm. The nuclear pellet was resuspended in 150 µl of ice-cold buffer C (20 mM HEPES pH 7.9; 0.4 M NaCl; 1 mM EDTA; 1 mM EGTA; 1 mM DTT; 1 mM PMSF) and vigorously rocked at 4 °C for 15 min. Nuclear soluble extracts (Ns) were obtained after centrifugation for 10 min at 13,000 rpm at 4 °C and frozen in aliquots at −70 °C until used. The remaining pellet was lysed in RIPA buffer (50 mM Tris-Cl, pH 8, 150 mM NaCl, 2 mM EDTA, 1% Triton X-100, 0.1% SDS) and sonicated on ice with three pulses of 30 s (30% power), with a 2 min refractory period after each pulse, to disrupt the chromatin. Later, samples were centrifuged at 14,000 rpm at 4 °C for 10 min and the supernatants were labeled as nuclear insoluble fractions (Ni). Total extracts (T) were obtained by freeze-thawing trophozoites in the presence of 100 mM Tris-HCl pH 7.9, 100 mM p-hydroxymercuribenzoate (PHMB) (Sigma-Aldrich), Complete™ protease inhibitor cocktail (Sigma-Aldrich), 10 µM E64 (Sigma-Aldrich) and 100 mM PMSF (Sigma-Aldrich). Protein concentration was determined by the Bradford method (Bradford, 1976) using the Bio-Rad protein assay or the DC Protein Assay (BIO-RAD). Subcellular Localization by Confocal Microscopy Trophozoites cultured on cover slides were fixed with absolute ethanol for 20 min at −20 °C and washed with PBS pH 6.8. Then, the coverslips were incubated with 50 mM NH4Cl for 30 min at 37 °C and blocked with 1% bovine serum albumin for 30 min. For co-localization experiments, the α-EhTRF-like I, α-Lamin B1, α-EhTRF-like III and α-H4K20me3 antibodies were labeled with Alexa Fluor 647, 555, 488 and 488, respectively, using the Molecular Probes® Antibody Labeling kit (ThermoFisher Scientific) and following the manufacturer's instructions. In other experiments, samples were incubated overnight at 4 °C with α-TRF-like III (1:200) and α-Myc (1:100, Cell Signaling) antibodies. Cells were washed and then incubated with a fluorescein-labeled secondary antibody (1:200, Jackson ImmunoResearch). Slides were mounted with Vectashield containing DAPI (Vector Lab), visualized through a Nikon inverted microscope attached to a laser scanning confocal system (Leica TCS SP2) and analyzed with the Confocal Assistant image software. Construction of pehtrf-like Iox, pehtrf-like IIIox, and pCold-ehtrf-like I Plasmids The ehtrf-like I (1,210 bp) and ehtrf-like III (1,383 bp) complete open reading frames (ORFs) were cloned in the pKT-3M plasmid (Saito-Nakano et al., 2004) to express the EhTRF-like I and III proteins tagged with the Myc sequence at their N-terminal end. The ehtrf-like I gene was amplified by PCR using the following oligonucleotides: sense 5′-TCCCCCCGGGATGAATAACCCTCAGTTGC-3′ and antisense 5′-GCGCGCCTCGAGTTATTGAGAAAGATCCAATTGTTTAAAT-3′. The ehtrf-like III gene was amplified using the following oligonucleotides: sense 5′-TCCCCCCCGGGATGGAGAAAAAACTAA-3′ and antisense 5′-GGGGCCTCGAGTTAAAATTATCAGAATTA-3′. Forward primers contained a SmaI site and the reverse primers contained a XhoI site (underlined). PCR was performed with 100 ng of E.
histolytica genomic DNA, using the following conditions: 92 °C for 5 min, then 92 °C for 1 min, 55 °C for 1 min, and 70 °C for 1 min (28 cycles). PCR products were digested with SmaI and XhoI and cloned into the previously digested pKT-3M vector. For the EhTRF-like I recombinant protein (rEhTRF-like I), the 1,210 bp entire ORF was amplified using the forward primer 5′-CGCGGATCCTTATTGAGAAAGATCCAATTG-3′ and the reverse primer 5′-GGAATTCCATATGAATAACCCTCAGTTTGC-3′, which contained BamHI and NdeI restriction sites, respectively. The 1,210 bp amplicon was digested and cloned into a previously linearized pCold I vector (Takara). All plasmids obtained were confirmed by sequencing using the BigDye Terminator v3.1 Cycle Sequencing Kit and the ABI Prism® 310 Genetic Analyzer (Applied Biosystems). Transfection of Trophozoites Trophozoites were transfected by lipofection as previously described (Olvera et al., 1997; Abhyankar et al., 2008). Briefly, 8 × 10⁵ trophozoites were placed for adhesion in 6-well plates for 20 min, and then washed with M-199 medium (Invitrogen) supplemented with 5.7 mM cysteine, 1 mM ascorbic acid and 25 mM HEPES pH 6.8. Later, 20 µg of the pKT-3M, pehtrf-like Iox or pehtrf-like IIIox plasmids were added to a tube containing 20 µl of Superfect (Qiagen), incubated at room temperature (RT) for 20 to 30 min, mixed with 0.8 ml of supplemented M-199 medium plus 15% bovine serum and added to each well of the plate. Trophozoites were incubated for 4 h at 37 °C and then harvested and added to a 125 mm cell culture tube containing pre-warmed TYI-S-33 medium. Transfected trophozoites were grown for 48 h and selected initially with 3 µg ml⁻¹ G-418 (ThermoFisher Scientific), gradually increased to 20 µg ml⁻¹. Expression and Purification of rEhTRF-Like I Escherichia coli Rosetta (DE3) competent cells (Novagen) were transformed with the pCold-ehtrf-like I construct. The expression of rEhTRF-like I was induced with 1 mM isopropyl-1-thio-β-D-galactopyranoside (IPTG) at 16 °C for 18 h and analyzed by SDS-PAGE and WB assays. The WB assays were performed using an anti-histidine monoclonal antibody (α-His) (Santa Cruz Biotechnology, Inc) (1:1,000) as the primary antibody and a peroxidase-conjugated goat anti-mouse polyclonal secondary antibody (Invitrogen-Gibco) at a dilution of 1:3,000. The recombinant protein was obtained from the soluble fraction and purified by affinity chromatography on Ni Sepharose High Performance agarose (GE Healthcare), following the manufacturer's instructions. The rEhTRF-like I protein was dialyzed against 25 mM Tris-HCl pH 7.5 and 300 mM NaCl. The protein was quantified and used as described below. Ethics Statement The Institutional Animal Care and Use Committee (Cinvestav IACUC/ethics committee) reviewed and approved our protocol for the care and use of the rabbits employed to produce antibodies (Protocol Number 0313-06, CICUAL 001). All steps were taken to safeguard the welfare of the animals and to avoid their suffering. Food and water were available ad libitum. Animals were monitored pre- and post-inoculation. All procedures were conducted by trained personnel under the supervision of veterinarians, all invasive clinical procedures were performed while the animals were anesthetized and, when required, animals were humanely euthanized.
The ethics committee verified that our Institute fulfills the NOM-062-ZOO-1999 standard regarding the Technical Specifications for Production, Care and Use of Laboratory Animals, given by the General Direction of Animal Health of the Ministry of Agriculture of Mexico (SAGARPA-Mexico). The technical specifications approved by SAGARPA-Mexico fulfill the international regulations/guidelines for the use and care of laboratory animals and were verified and approved by the Cinvestav IACUC/ethics committee (Verification Approval Number: BOO.02.03.02.01.908). In silico Characterization of E. histolytica EhTRF-Like Proteins In this work, we focus on family II of the MYB DBD-containing proteins of E. histolytica (Meneses et al., 2010). This family comprises genes encoding hypothetical proteins with AmoebaDB accession numbers (AN) EHI_001090, EHI_001110, EHI_148140, EHI_009820, and EHI_074810, which were reported as single-repeat telomeric proteins related to human telomeric binding proteins (Meneses et al., 2010). On one hand, analysis of the amino acid sequences of the proteins coded by the genes from loci EHI_009820 and EHI_074810 showed identities with MYB DBD-containing proteins related to c-Myb, with narrowed identity to TRF-related proteins (Table 1). In contrast, the amino acid sequences of the proteins coded by the genes from loci EHI_001090, EHI_001110, and EHI_148140 present about 25.8 to 29% identity with TRF1 from H. sapiens (Table 1). Moreover, these proteins share 29.6 to 30.3% identity with TRF proteins from other organisms, such as Pan troglodytes or Desmodus rotundus (Table 1). Therefore, we considered these E. histolytica hypothetical proteins as TRF-like proteins and from now on we name them EhTRF-like I (EHI_001090), II (EHI_001110), and III (EHI_148140). The E. histolytica EhTRF-like proteins showed higher homology to each other than to the human homologs. The sequence conservation between the full-length EhTRF-like proteins was 35.89 to 63.94% identity, in contrast to human TRF1 and TRF2, which have 27.77% identity between them. Interestingly, the Myb DBD domain, which characterizes TRF proteins, showed 80.32 to 93.44% identity among the EhTRF-like proteins, exhibiting the highest identity between EhTRF-like I and II (Table S3). The alignment of the MYB DBD of the EhTRF-like proteins with TRF1 and TRF2 from H. sapiens showed that the sequences conserve the three α-helices characteristic of the MYB DBD, as well as the positions of the second and third tryptophan residues responsible for the HTH conformation (Figure 1A; Figure S1). In addition, EhTRF-like I, II and III have a telobox motif (VxKDxxR) in the third α-helix, and the lysine and arginine residues involved in telomeric DNA recognition in human TRF1 and TRF2 (Figure 1A; Figure S1). Even though there was scarce amino acid identity among the TRFH domains of the EhTRF-like proteins and human TRFs, several hydrophobic residues required for the dimer interface, such as leucines, isoleucines, and phenylalanines, were conserved (Figure S1). In addition, we deduced the secondary structure of the complete amino acid sequences of the EhTRF-like proteins and of human TRF1 and aligned them to obtain a structural comparison (Figure S2). Based on their secondary structure, we propose the presence of a TRFH domain located in the N-terminal region of the EhTRF-like proteins, because we found, at the same position, the nine α-helices essential for domain dimerization in human TRF1 (Figure 1B; Figure S2).
According to the ORF sizes (1,215, 1,326, and 1,383 bp for ehtrf-like I, II and III, respectively), the predicted molecular weights of EhTRF-like I, II and III are 46.8, 51.5, and 53.1 kDa, respectively, showing similarity to their human counterparts (Figure 1B). (FIGURE 2 | Phylogenetic tree of TRF proteins. The amino acid sequences of the MYB DBD from TRF proteins were aligned using ClustalW2 and phylogenetic analysis was inferred using the Neighbor-Joining method. Evolutionary distances were computed by the Poisson correction method. Bootstrap values >50% (from 1,000 replicates) are shown near the individual branches. Evolutionary analyses were conducted using MEGA 7 (Kumar et al., 2016). HsTRF1 (Homo sapiens, NP_059523).) TRF1 and TRF2 have isoelectric points (pI) of 5.99 and 9.22, respectively, which are related to the amino acid content of their N-terminal end. The theoretical pIs of EhTRF-like I and II were predicted as 5.84 and 6.68, respectively, similar to that of TRF1, which is an acidic protein. On the contrary, EhTRF-like III presented a predicted pI of 8.39, akin to the basic protein TRF2. Consistently, the first 50 amino acids of EhTRF-like I and II were acidic residues, similar to those found in the same region of TRF1. On the other hand, the EhTRF-like III N-terminal region was composed of basic residues, as in TRF2 (Figure 1B). TRF2 also contains binding sites for the Shelterin proteins Rap1 and TIN2 (de Lange, 2005). In concordance, in EhTRF-like III we identified the presence of two α-helices with conserved hydrophobic residues, corresponding to a Rap1-binding domain (RBM domain; Figure 1B). Likewise, all EhTRF-like proteins showed a nuclear localization signal (NLS) and sites susceptible to phosphorylation and SUMOylation, present in their human counterparts (Figure 1B). Altogether, we predict that the EhTRF-like proteins from E. histolytica have an architecture similar to TRF1 and TRF2 from H. sapiens and are homologs of the mammalian telomeric repeat-binding factors. In summary, EhTRF-like I and II share common properties with TRF1, while EhTRF-like III does so with TRF2. Phylogenetic Analysis of TRF Proteins With MYB DBD or Telobox Domain In order to shed light on the evolutionary relationships of the EhTRF-like proteins, we aligned the MYB DBD amino acid sequences of TRFs from E. histolytica, H. sapiens, representative vertebrates, plants and deep-branching protozoa, including other members of the Entamoeba genus and Trypanosomatids (Figure S3). The proteins coded by the genes from loci EHI_009820 and EHI_074810 from E. histolytica and the TvTBP protein from Trichomonas vaginalis served as the outgroup for tree reconstruction. Interestingly, the alignment of the MYB DBD showed that the first tryptophan residue in the TRF-like proteins from the Entamoeba genus is replaced with the aromatic amino acid phenylalanine, and, unlike TRF1 and TRF2, they present serine or cysteine residues in the third α-helix of their MYB DBD. In addition, all Entamoeba TRF-like proteins conserved the telobox signature to a greater extent than other unicellular organisms such as the Trypanosomatids, where homologs of TRF have already been characterized (Li et al., 2005; da Silva et al., 2010). The phylogenetic inference showed that the TRF proteins are separated into two branches: one of them includes the vertebrate proteins related to TRF1 and TRF2 and the TRF-like proteins from plants, while the other branch contains TRF-like proteins from protozoan parasites, including members of the Entamoeba genus (Figure 2).
It is important to highlight that all TRF-like proteins from the Entamoeba genus clustered into a unique clade, subdivided into two groups that separate members of the EhTRF-like I and II type (group A) from members of the EhTRF-like III type (group B). The topology observed in the phylogenetic analysis could be the result of a gene duplication event. Interestingly, the genes that encode the EhTRF-like I and II proteins are contiguously located in the same genomic location (DS571146). These results suggest that TRF proteins evolved from a common ancestor before vertebrate TRFs diverged and that, in the case of E. histolytica, a gene duplication event could have occurred and thereby increased the TRF gene number. Shelterin Machinery Survey To determine whether E. histolytica contains other components of the Shelterin machinery besides the TRFs, we searched its genome for proteins homologous to Rap1, TIN2, TPP1 and POT1. Through this analysis, we identified a hypothetical protein (AmoebaDB AN: EHI_064550, 156 residues) with 26% identity to H. sapiens Rap1 (399 amino acids; Table 2). Sequence analysis of this protein revealed the presence of a TRF2-interacting telomeric protein/Rap1 C-terminal domain (e-value of 2e-19), suggesting that in E. histolytica this hypothetical protein could bind to EhTRF-like III. For these reasons, we decided to name EHI_064550 the EhRap1-like protein. In H. sapiens, RAP1 binds to DNA through a MYB-type domain, which is absent in the EhRap1-like protein, suggesting that its function is limited to telomere protection. Using the H. sapiens proteins as bait, no other members of the Shelterin machinery were identified in the E. histolytica genome. Nevertheless, the E. histolytica genome encodes a protein which is annotated in the AmoebaDB database as the Replication Factor A1 (AN: EHI_062980); therefore this protein was dubbed EhRpa1-like. EhRpa1-like conserves an oligonucleotide/oligosaccharide-binding (OB) domain (e-value of 9e-50 according to Pfam), which could be used for single-stranded telomeric DNA recognition in the absence of POT1 (Table 2). These findings support the idea that E. histolytica exhibits a rudimentary Shelterin-like complex formed by EhTRF-like I, II, III, and EhRap1-like. In addition, EhRpa1-like could have a possible role in telomere maintenance. Expression of the EhTRF-Like Proteins in Trophozoites of E. histolytica To determine whether all three trf-like genes are constitutively expressed by E. histolytica, the mRNA expression patterns were analyzed using qRT-PCR assays on RNA derived from trophozoites grown in basal culture conditions. The results showed that the three trf-like genes had differential expression levels, with ehtrf-like III apparently more expressed than the other two (Figure 3A). In order to analyze their protein expression, we produced polyclonal antibodies against the EhTRF-like I and III proteins only, using synthetic peptides (N-term-CTLPSVGNALIPPS and N-term-CNKQKVQPQVSQPH, respectively) to immunize rabbits. In western blot analysis using trophozoite lysates, these antibodies revealed two bands of ∼55 and 65 kDa, respectively (Figure 3B). Those bands are of higher molecular weight than expected for EhTRF-like I (46 kDa) and III (53 kDa), respectively. Differences between the theoretical and experimental molecular weights could be explained by post-translational modifications of EhTRF-like I and III, such as phosphorylation, ubiquitination and SUMOylation (Figure 1B).
To determine the subcellular localization of both proteins, we carried out a cellular fractionation according to Schreiber et al. (1989), in which cytoplasmic and soluble nuclear fractions were isolated. To extract the insoluble nuclear proteins, RIPA buffer was added to the remaining nuclear pellet. (FIGURE 3 | Expression of the EhTRF-like proteins in E. histolytica trophozoites. (A) The trf-like genes have differential expression levels. Absolute RT-qPCR quantification of ehtrf-like genes in trophozoites grown in basal conditions. The ribosomal 40s rsp2 subunit gene was used as a control. (B) Detection of EhTRF-like I and EhTRF-like III proteins in different trophozoite fractions. Coomassie blue-stained SDS-PAGE of total protein (T), nuclear soluble (Ns), nuclear insoluble (Ni), and cytoplasmic (C) extracts. Replicates were transferred to nitrocellulose membranes to immunodetect EhTRF-like I and EhTRF-like III with the corresponding antibodies.) The results evidenced that EhTRF-like I was present in the cytoplasmic and nuclear fractions; however, in the cytoplasmic and soluble nuclear fractions the antibody recognized a 55 kDa band, while in the insoluble nuclear fraction it only detected a 36 kDa band (Figure 3B). Otherwise, the EhTRF-like III protein was only observed in the soluble nuclear fraction, with the same 65 kDa molecular weight as in the total lysates. In these assays, to probe the purity of the fractions, different cellular fractionation markers were included. As a control of the soluble nuclear fraction, we detected the K27me3 modification of H3 in the total and soluble nuclear fractions, previously identified in E. histolytica as a repressive epigenetic mark by Foda and Singh (Foda and Singh, 2015; Figure 3B). We also detected the K20me3 modification of histone H4, which is specific for heterochromatin and related to telomeric regions (Blasco, 2007) and was previously identified in E. histolytica by Borbolla-Vázquez (Borbolla-Vázquez et al., 2015). H4K20me3 was also present in the total and soluble nuclear fractions. Then, we included the detection of the methyltransferase enzyme EhKMT4 as a control of the cytoplasmic and soluble nuclear fractions. This protein was detected in the total, cytoplasmic and soluble nuclear extracts (Figure 3B). For the insoluble nuclear fraction, we used Lamin B1, which has been reported at the nuclear periphery in contact with the inner side of the nuclear envelope in this parasite (Lozano-Amado et al., 2016). This protein was only detected in the total and insoluble nuclear fractions (Figure 3B), as expected. Additionally, we used actin as a loading control for all cellular fractionations; however, a lesser amount of protein was detected in the insoluble and soluble nuclear fractions than in the cytoplasm, maybe due to a different polymerization state of the actin within the nucleus. Nevertheless, a Coomassie blue-stained gel of the obtained fractions demonstrated a similar protein amount in all samples (Figure 3B). Our results revealed that E. histolytica trophozoites differentially express the ehtrf-like I, ehtrf-like II and ehtrf-like III transcripts, and that the EhTRF-like I and III proteins are concentrated at the nucleus. EhTRF-Like I and III Are Nuclear Proteins That Co-localize With Lamin B1 and H4K20me3 In order to confirm the localization of the EhTRF-like proteins, we performed immunofluorescence assays.
Trophozoites cultured in basal conditions were processed for immunofluorescence using α-EhTRF-like I, α-EhTRF-like III, α-Lamin B1 and α-H4K20me3 antibodies coupled to Alexa-647, −488, −555 and −488, respectively. Confocal images showed EhTRF-like I and III mainly at trophozoite nuclei, and EhTRF-like I was also localized in the cytoplasm (Figure 3C). Staining of both proteins appeared at the nuclear periphery; thus, we employed Lamin B1 as a specific marker of this localization (Goldman et al., 2002; Lozano-Amado et al., 2016). Images revealed that EhTRF-like I and III co-localized with Lamin B1 at the nuclear periphery, but EhTRF-like I presented a more diffuse staining pattern inside nuclei. Therefore, we also investigated whether EhTRF-like I was present in telomeric regions, employing H4K20me3 as a telomeric chromatin marker (Blasco, 2007). We found that EhTRF-like I co-localized with H4K20me3, sometimes showing diffuse patterns or well-defined foci (Figure 3D). These results indicated that the EhTRF-like I and III proteins are localized in specific nuclear regions such as the nuclear periphery or distributed in foci, co-localizing with Lamin B1 and H4K20me3. Considering their localization, the EhTRF-like proteins could be participating in the protection of the chromosome terminal ends of E. histolytica. To validate the localization and to gain insight into the functional effect of the EhTRF-like proteins in E. histolytica trophozoites, we over-expressed the ehtrf-like I and III genes fused to the Myc tag and cloned into the pKT-3M vector to generate the pTRF-like Iox and pTRF-like IIIox overexpression vectors. Trophozoites were transfected with the pTRF-like Iox, pTRF-like IIIox or empty (pKT-3M) plasmid and stably selected in medium supplemented with 20 µg ml⁻¹ G-418. According to semi-quantitative and quantitative RT-PCR results, the expression of ehtrf-like I and ehtrf-like III was 10.2- and 8-fold higher, respectively, compared to the expression in pKT-3M transfected trophozoites (Figures 4A,B, 5A,B). In agreement, WB experiments on lysates from pEhTRF-like Iox trophozoites showed that EhTRF-like I was overexpressed (55 kDa band corresponding to EhTRF-like I) when compared with lysates derived from trophozoites transfected with the empty vector (Figure 4B). Similarly, WB experiments on lysates from pEhTRF-like IIIox transfected trophozoites using the α-Myc antibody detected a 65 kDa band corresponding to EhTRF-like III (Figure 5C). As expected, in pKT-3M transfected trophozoites this antibody did not recognize any protein. EhTRF-like I and EhTRF-like III overexpression was confirmed in confocal images of pEhTRF-like Iox and pEhTRF-like IIIox transfected trophozoites using the α-EhTRF-like I or α-EhTRF-like III antibody. In these parasites, the proteins were more abundant in the nucleus, but they were also observed in the cytoplasm (Figures 4D, 5D,j-l), in comparison to the weaker staining of trophozoites transfected with the empty vector (Figures 4D, 5D,f-h). Similar nuclear and perinuclear staining was obtained using the α-Myc antibody, which detected only the heterologous proteins in trophozoites overexpressing EhTRF-like I and EhTRF-like III (Figures 4D, 5D,r-t). In comparison, no staining was detected in trophozoites transfected with the empty vector (Figures 4D, 5D,n-p). No signal was obtained in trophozoites incubated with either pre-immune serum (Figures 4, 5,b-d). All of these data confirm the nuclear localization pattern of the EhTRF-like I and III proteins.
EhTRF-Like III Localizes at Nuclear Heterochromatin Regions

The localization of EhTRF-like III at nuclear foci (Figures 3-5) suggested that this protein could act in specific and functional regions of the nucleus. Thus, we analyzed the localization of this protein by transmission electron microscopy (TEM) in pEhTRF-like IIIox transfected trophozoites.

FIGURE 4 | … using the 40s rsp2 gene as control. (C) Coomassie blue-stained SDS-PAGE showing total extracts of transfected trophozoites (empty plasmid pKT-3M or pEhTRF-like Iox). A duplicate gel was transferred to a nitrocellulose membrane and submitted to WB using α-EhTRF-like I and α-actin antibodies. The anti-actin antibody was used as loading control. (D) Transfected trophozoites were processed for immunofluorescence and incubated with pre-immune serum (PS), α-EhTRF-like I or α-Myc antibodies, followed by the α-rabbit FITC-coupled secondary antibody. Nuclei were stained with DAPI and preparations were visualized by confocal microscopy. Arrowheads: EhTRF-like I location at nuclei. ph c, phase contrast.

Using the α-EhTRF-like III antibody, we found EhTRF-like III abundantly in the trophozoite nuclei, enriched in the heterochromatin or highly condensed chromatin regions, close to the nuclear periphery (Figures 6C,D,g,h). This location was corroborated using the α-Myc antibody (Figure 6F,i). No signal was obtained in trophozoites incubated with pre-immune serum (Figures 6A,B) or in pKT-3M transfected trophozoites stained with the anti-Myc antibody (Figure 6E). These data showed that EhTRF-like III is found in chromatin regions with a high degree of compaction, which is suggestive of telomeric areas, where EhTRF-like proteins could protect the chromosome terminal ends.

EhTRF-Like III Forms DNA-Protein Complexes With Telomere-Related Sequences

TRF proteins bind as homodimers to double-stranded telomeric DNA sequences (Broccoli et al., 1997b; Blasco, 2007). Hence, we analyzed whether the nuclear extracts obtained from this parasite contained proteins that could recognize canonical telomeric sequences. We employed EMSA assays using nuclear extracts obtained from wild-type trophozoites and the human telomeric sequence (HsTEL). The nuclear extracts from wild-type trophozoites formed three DNA-protein complexes (Figure 7A, lane 2), presumably due to the binding of the three EhTRF-like proteins to the HsTEL probe. In the absence of nuclear extract, no DNA-protein complexes were formed (Figure 7A, lane 1). The unlabeled HsTEL probe competed with the formation of the three complexes when added at increasing concentrations (50-, 100- and 200-fold molar excess), suggesting that the proteins forming these complexes are related to telomeric sequences (Figure 7A, lanes 3-5). Moreover, when the competition was performed with an E. histolytica sequence, the EhTEL probe, it was even more efficient (Figure 7A, lanes 6-8), showing a greater affinity for this sequence. Mutated or non-related probes, such as mut-TEL or non-REL, did not compete with any of the DNA-protein complexes (Figure 7A, lanes 9-14). To investigate whether the EhTRF-like proteins were able to bind double-stranded telomeric DNA, we purified the recombinant EhTRF-like I protein (rEhTRF-like I) (Figure S4). As shown in Figures 7B,C, rEhTRF-like I bound specifically to the HsTEL and EhTEL probes.
Competition assays showed that the complex formed by the recombinant protein was abolished in the presence of an excess of unlabeled HsTEL and EhTEL probes (Figures 7B,C, lanes 3-8). There was no competition for binding when the mut-TEL probe was used at the same molar excess (Figures 7B,C, lanes 9-11). Next, we used the EhTEL probe and nuclear extracts derived from transfected trophozoites. In these EMSA assays, we also obtained three DNA-protein complexes, but complex III was enriched when nuclear extracts from pEhTRF-like IIIox transfected trophozoites were used (Figure 7D, lane 3). To corroborate the identity of the proteins that formed the DNA-protein complex, a super-shift assay was performed using the EhTEL oligonucleotide, nuclear extracts from pEhTRF-like IIIox transfected trophozoites and the α-EhTRF-like III antibody. We observed a shifted complex in the presence of the α-EhTRF-like III antibody (Figure 7E, lane 3), and the pre-immune serum did not modify the formation of any DNA-protein complex (Figure 7E, lane 4). We also included an anti-Myc antibody, but under our conditions we did not obtain slower-migrating complexes; probably the Myc epitope becomes hidden upon binding to DNA (inducing a conformational change). These data indicated that EhTRF-like III is able to form DNA-protein complexes with EhTEL sequences. Overall, these results indicated that the nucleus of this parasite contains proteins interacting with telomeric sequences.

FIGURE 5 | … (empty plasmid pKT-3M or pEhTRF-like IIIox). A duplicate gel was transferred to a nitrocellulose membrane and submitted to WB using the α-Myc antibody. (D) Transfected trophozoites were processed for immunofluorescence and incubated with pre-immune serum (PS), α-EhTRF-like III or α-Myc antibodies, followed by the α-rabbit TRITC-coupled secondary antibody. Nuclei were stained with DAPI and preparations were visualized by confocal microscopy. Arrowheads: EhTRF-like III location at nuclei. ph c, phase contrast.

DISCUSSION

All eukaryotes protect their chromosome ends through telomere-binding proteins, which are well conserved among these organisms (Linger and Price, 2009). These proteins form a protein complex dubbed Shelterin and bind specifically to telomeric regions in mammalian organisms (de Lange, 2005). However, less complex machineries are present in fission yeast, protozoan ciliates, and plants (Watson and Riha, 2010). The stable interaction of Shelterin with telomeres depends on the association of two proteins, TRF1 and TRF2, with double-stranded telomeric repeats. The presence of TRF-like proteins in primitive unicellular eukaryotes is remarkable and might represent the ancestral scenario during the evolution of telomeres and their protein counterparts. In silico analysis showed that Entamoeba histolytica has three genes coding for TRF-like proteins. These proteins conserve the Telobox motif in their MYB DBD, which is as highly conserved as in higher eukaryotes, and showed 25 to 35% identity with the amino acid sequences of the human Shelterin proteins TRF1 and TRF2. It is very interesting that this parasite has three genes that encode TRF proteins, which could be the result of a gene duplication event. It has been proposed that selective pressure, acting through recombination mechanisms, was involved in TRF paralog formation. Stress responses are a selective pressure that generates elevated paralog formation and leads to an exceedingly high rate of telomeric binding protein evolution (Lustig, 2016).
In the case of E. histolytica, the host's environment subjects the parasite to a variety of stress conditions (oxidative stress derived from the immune response, tissue invasion, migration, or simply the need to persist in the host). It has been proposed that gene duplication is the main process by which an organism obtains new genetic material. Our qRT-PCR results showed a differential expression pattern of the ehtrf-like genes: ehtrf-like III is the most expressed under basal conditions, while ehtrf-like II shows the lowest expression. These results suggest that the mechanisms that control their expression could change depending on environmental conditions. Such differential behavior of the trophozoite has been observed when the growth conditions of the parasite are changed (Weber et al., 2016). The presence of three ehtrf-like genes may result in a differential functionality of telomeric binding proteins, improving the organism's responses to environmental challenges and protecting its chromosome ends. Few telomeric binding proteins have been identified in protozoan parasites. Our analyses suggested that E. histolytica has a simpler machinery to protect its telomeric DNA. This Shelterin-like machinery could be similar to that of other unicellular parasites, such as Trypanosoma and Leishmania, where orthologs of TRF-2, Rpa-1 (replication protein A subunit 1) and RAP1 have been identified and characterized, suggesting that the telomeric machinery evolved early in eukaryotes (Lira et al., 2007; Yang et al., 2009).

FIGURE 7 | TRF-like III forms DNA-protein complexes with telomeric sequences. (A) EMSA was done using radiolabeled double-stranded human canonical telomeric DNA (HsTEL) as probe and nuclear extracts obtained from E. histolytica trophozoites. Competition assays were done in the presence of 50-, 100-, and 200-fold excess of non-labeled sequences: HsTEL specific competitor, E. histolytica sequence (EhTEL), mutated telomeric sequence (mut-TEL) or non-related sequence (non-Rel). (B) EMSA assay using the rEhTRF-like I recombinant protein and the HsTEL probe; competition assays were done in the presence of 50-, 100-, and 200-fold excess of non-labeled sequences (HsTEL, EhTEL and mut-TEL). (C) EMSA assay using rEhTRF-like I and EhTEL; competition assays were done in the presence of 50-, 100-, and 200-fold excess of non-labeled sequences (HsTEL, EhTEL and mut-TEL). (D) EMSA assay using the EhTEL sequence and nuclear extracts obtained from trophozoites transfected with the pKT-3M or pEhTRF-like IIIox plasmids. (E) Super-shift assay was done using the radiolabeled double-stranded EhTEL sequence as probe, nuclear extracts obtained from transfected trophozoites and the α-EhTRF-like III antibody or pre-immune serum (PS). Protein-DNA complexes were separated in a 6% PAGE. Black arrowheads: specific protein-DNA complexes. Open arrowhead: super-shifted protein-DNA complex.

The EhTRF-like I and III proteins span 404 to 460 amino acids with theoretical molecular weights (MW) of 46.8 and 53.1 kDa, respectively; however, in our experiments, EhTRF-like I was recognized as a 55 or a 36 kDa band and EhTRF-like III was observed in all fractions as a 60 kDa band. The MW increase observed in both proteins could be related to post-translational modifications (PTMs), such as phosphorylation, ubiquitination and SUMOylation, that change protein mobility (Audagnotto and Dal Peraro, 2017).
This is relevant since the function of the TRF proteins, including their ability to bind telomeric DNA, their dimerization and localization, as well as their degradation and interaction with other proteins (Walker and Zhu, 2012), is regulated through PTMs. Therefore, we performed an in silico analysis and found that the EhTRF-like proteins can be modified by SUMOylation at different residues, some of them conserved with respect to TRF-1 and TRF-2. The K302 and K303 SUMOylation sites of EhTRF-like I were conserved with respect to TRF1 (K338 and K339), which could explain the MW difference, since it has been reported that SUMOylated proteins increase their MW by 8 to 17 kDa for each unit of the bound SUMO peptide (Hilgarth and Sarge, 2005). We propose a similar scenario for EhTRF-like III. SUMOylation regulates proteins with nuclear functions, since it is related to nuclear transport, transcription, location in subnuclear compartments, chromatin organization and DNA damage repair, and is linked to cell cycle regulation, growth and apoptosis (Flotho and Melchior, 2013). In TRF1, SUMOylation is related to telomere maintenance through the ALT pathway (Alternative Lengthening of Telomeres), which occurs in the absence of telomerase. SUMOylated TRF-1 is recruited to PML bodies, where telomeric DNA regions have been identified (Yu et al., 2007; Royle et al., 2008). Therefore, the increase in the MW of the EhTRF-like I and III proteins could be explained by this PTM, which suggests a similar role. Interestingly, all the enzymes involved in SUMOylation have been identified in E. histolytica (Bosch and Siderovski, 2013). This allows us to propose that EhTRF-like I could carry PTMs similar to those of TRF-1 to modulate its activity and protect telomeric DNA.

FIGURE 8 | Model of EhTRF-like participation in the protection of chromosome ends in E. histolytica. Scheme of the nuclear periphery location of EhTRF-like I and EhTRF-like III linked to double-stranded DNA. These proteins are close to the nuclear lamina, co-localizing with the telomeric heterochromatin label H4K20me3. EhTRF-like I and III could interact with Lamin B1, probably through an unknown protein with a function similar to the Sun anchoring protein in H. sapiens (Giraud-Panis et al., 2013). Additionally, some putative Shelterin proteins are shown: EhRap1-like (EHI_064550) and the single-stranded DNA-binding protein EhRpa1-like (EHI_062980).

Detection of the EhTRF-like I protein in the insoluble nuclear fraction with a molecular weight of 36 kDa was also obtained. This molecular weight is lower than that predicted for EhTRF-like I. The EhTRF-like I protein contains different residues susceptible to proteolytic cleavage; E352 was predicted as a proteolytic cleavage site. If this site is functional, it would generate a peptide with a weight similar to the one we found in the insoluble fraction. Interestingly, this residue is also conserved in H. sapiens TRF1. Finally, it has been reported that a high content of acidic residues might affect the gel mobility of a protein and thereby explain the molecular weight variations found (Guan et al., 2015). In human cells, telomeres are tethered to the nuclear envelope during post-mitotic nuclear assembly. TRF proteins are associated with the nuclear membrane through Lamin B1. Binding of lamins to telomeres is partially mediated by TRF2, via its interaction with RAP1, which interacts with the nuclear envelope protein Sun1 (Hediger et al., 2002; Crabbe et al., 2012; Gonzalo and Eissenberg, 2016).
In agreement, we found that EhTRF-like I and III co-localized with Lamin B1 at the nuclear periphery, suggesting that they occupy a similar position. In accordance with this localization, in EhTRF-like III overexpressing trophozoites this protein was also localized at the nuclear periphery, proximal to the nuclear membrane, in condensed heterochromatin regions. Likewise, telomeres have been found in interphase nuclei close to the nuclear envelope; however, telomeres do not occupy this position in all organisms; for example, in plants such as A. thaliana they have been reported close to the nucleolus (Schrumpfová et al., 2014). Therefore, the subnuclear location of telomeres is species-, cell type- and cell-phase dependent (Giraud-Panis et al., 2013). Telomeres carry features of repressive chromatin associated with constitutive heterochromatin. Different histone signatures have recently been identified in association with mammalian telomeres: trimethylation of H3K9 and H4K20 (Blasco, 2007). Here we selected H4K20me3 because it was previously identified and described in this parasite (Borbolla-Vázquez et al., 2015) and could suggest its participation in the organization and regulation of telomeric DNA at the E. histolytica nuclear periphery. Consistent with this, we observed that EhTRF-like I co-localized with the H4K20me3 mark, suggesting that EhTRF-like proteins occupy regions of silenced, compacted chromatin, consistent with the compacted structure of telomeres (Giraud-Panis et al., 2013). Given the position of EhTRF-like I and III at the nuclear periphery and their co-localization with Lamin B1 and trimethylated H4K20, we propose that these proteins could participate in the protection of chromosome ends. Finally, we explored whether nuclear proteins from E. histolytica recognized the human telomeric sequence (HsTEL). Interestingly, we found three specific DNA-protein complexes from nuclear extracts of trophozoites that were competed by the human canonical telomeric sequence and by the probe derived from the STR repeat of the NK2 array that could be related to telomeric DNA (EhTEL). These DNA-protein complexes were not competed by the mutated telomeric sequence (mut-TEL) or the non-related sequence (non-REL), showing specificity for telomeric sequences. Moreover, in an attempt to evaluate the telomeric DNA binding properties of the EhTRF-like proteins, we observed that rEhTRF-like I recognized both the HsTEL and EhTEL sequences. Since E. histolytica has three genes that code for three proteins with a telomeric MYB DBD, it would be interesting to determine whether the DNA-protein complexes correspond to these three proteins. Using nuclear extracts from trophozoites overexpressing EhTRF-like III, we found that this protein formed DNA-protein complexes with the STR sequence of E. histolytica. Under our conditions, only complex III was enriched when extracts from TRF-like III overexpressing clones were used with the EhTEL probe, indicating that the TRF-like III protein interacts with the STR repeat of the tRNA genes. This is the first report where a sequence derived from the tRNA arrays ([NK2]) is used in EMSA assays to determine its recognition by nuclear proteins and rEhTRF-like I. Previously, the YE array was used in fluorescence in situ hybridization (FISH) analysis, detecting six distinct signals at the parasite nucleus (Willhoeft and Tannich, 2000). However, as the ploidy of E. histolytica remains to be determined, the interpretation of this evidence was difficult.
It will be of interest to determine the in vivo association of EhTRF-like proteins with sequences derived from the tRNA arrays. In conclusion, the protection of chromosome ends is critical to the survival of any cell, as its disruption can induce genomic instability and consequently compromise the viability of the organism. Although no canonical Shelterin proteins have been identified in this protozoan, TRF-like proteins might accomplish this role given their ability to recognize and interact with telomeric DNA through their MYB-DBD. Therefore, this work shows the first evidence of telomeric proteins in E. histolytica able to interact with the proposed Entamoeba telomeric DNA sequences, forming a simple Shelterin complex as in other protozoans (Figure 8). This work raises several interesting questions, and further investigation will contribute to a better understanding of the role of EhTRF-like proteins in telomeric function in E. histolytica.

AUTHOR CONTRIBUTIONS

All authors contributed equally to the design and conception of this work. FR-G, VÁ-H, EC-O, and RC-G collected E. histolytica experimental data. HC-H performed the in silico analysis; BC-M and AL-G performed the TEM analysis. AB contributed the confocal microscopy. FR-G, AB, JV, EO, LL-C, and EA-L contributed to experimental design, intellectual input, interpretation of data, and writing of the manuscript.

FUNDING

EA-L thanks CONACYT for a research grant to study proteins that preserve genome integrity in Entamoeba histolytica (Ciencia Básica # 222956).
2018-10-02T13:03:21.778Z
2018-10-02T00:00:00.000
{ "year": 2018, "sha1": "41fe0e170b42e418a39787fcfa5cc7c4798ba3b8", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcimb.2018.00341/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "41fe0e170b42e418a39787fcfa5cc7c4798ba3b8", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
13339846
pes2o/s2orc
v3-fos-license
Cyclooxygenase-2 and B-cell lymphoma-2 expression in cystitis glandularis and primary vesicle adenocarcinoma Background Although cystitis glandularis (CG) is a common benign urinary bladder epithelial abnormality, it remains unclear whether CG is a premalignant lesion. Cyclooxygenase-2 (COX-2) and B-cell lymphoma-2 (Bcl-2) overexpression has recently been reported as a potential tumor initiator or promoter. We evaluated and compared COX-2 and Bcl-2 expression in CG, chronic cystitis (CC), and primary vesicle adenocarcinoma (ADC) tissues. Methods We conducted a retrospective study to investigate COX-2 and Bcl-2 levels in CG and ADC. We obtained tissue samples from 75 patients (including 11 cases of CC, 30 typical cases of CG (CGTP), 30 cases of intestinal CG (CGIT), and 4 cases of ADC) between 1989 and 2009 from the Surgical Pathology Archives of the No. 2 People’s Hospital of Zhenjiang, affiliated with Jiangsu University. COX-2 and Bcl-2 immunohistochemical staining was performed on all tissues. Nine normal bladder epithelial specimens were evaluated as control samples. Correlations between COX-2 and Bcl-2 expression in CG were also analyzed. Results COX-2 and Bcl-2 expression was higher in the ADC group compared to other groups (p < 0.05). COX-2 and Bcl-2 levels were higher in the CGIT group compared to the CGTP group (p = 0.000 for both). The CGIT and CGTP groups both showed higher COX-2 expression compared to the CC group (p = 0.000 for both). There was no difference in Bcl-2 expression between the CGTP and CC groups (p = 0.452). Additionally, the difference in COX-2 and Bcl-2 expression between the control and CC groups was also insignificant (p = 0.668 and p = 0.097, respectively). Finally, we found that COX-2 and Bcl-2 levels were positively related (r = 0.648, p = 0.000). Conclusion COX-2 and Bcl-2 overexpression in the CG group suggests that CG, particularly the intestinal type, may be a premalignant lesion that converts into a tumor in the presence of carcinogens. Background Cystitis glandularis (CG) is a common benign epithelial abnormality that occurs in the presence of chronic inflammation [1,2]. Based on morphology and behavior, CG has been subdivided into two subtypes. Typical CG (CGTP) is characterized by nests of columnar epithelial cells within the bladder lamina propria that form glandular structures. The intestinal type (CGIT) has similar glandular architecture in the lamina propria but contains abundant mucin-secreting goblet cells in the lining epithelium [3]. Although the cause of CG is debatable [4], it is generally agreed that in the presence of chronic inflammation, the bladder mucosa becomes hyperproliferative. When proliferation projects into the lamina propria, epithelial nests (von Brunn's nests) [5] and cystitis cystica or glandular lesions (CG) form [5,6]. CG, particularly the intestinal type, has been described as premalignant; however, not all studies agree with this conclusion [3,7]. Due to rare reported instances of CG progression to adenocarcinoma or CG associated with adenocarcinoma, the relationship between CG and subsequent bladder adenocarcinoma remains unclear. Cyclooxygenase is an important enzyme that catalyzes the conversion of arachidonic acid to prostaglandin. COX-1 is constitutively expressed in most tissues and regulates multiple physiological processes. 
In contrast, COX-2 is frequently undetectable in normal tissues, but can be induced by a variety of stimuli, including mitogens, cytokines, growth factors, and hormones, thereby resulting in inflammation and cellular proliferation [8]. COX-2 overexpression is observed in chronic inflammation as well as in various tumors, including bladder, prostate, colon, and lung [9][10][11][12]. To this end, we assessed the differential expression of COX-2 in normal bladder transitional cell tissue, chronic cystitis, two subtypes of CG, and bladder adenocarcinoma tissue. In addition, we determined whether COX-2 expression is associated with expression of Bcl-2, a regulator and marker of apoptosis.

Patient samples

Tissues from 75 patients, including 60 cases of CG, 11 cases of chronic cystitis (CC), and 4 cases of primary vesicle adenocarcinoma (ADC), were obtained from the Surgical Pathology Archives of the No. 2 People's Hospital of Zhenjiang, affiliated with Jiangsu University, between 1989 and 2009. Normal bladder specimens from nine subjects who underwent cystectomy for benign causes were used as controls. The Institutional Review Board of Nanjing Medical University (Nanjing, China) approved this study. At the time of patient recruitment, written informed consent was obtained from all participants. We classified CG into CGTP and CGIT based on routine hematoxylin and eosin-stained sections. One of the ADC patients had the intestinal type of CG and interrupted use of antibiotics rather than intravesical instillation of the anticancer agent. Another patient had a neurogenic bladder with a suprapubic cystostomy for fifteen years. The other two patients had classic bladder exstrophy with an unsuccessful initial closure.

Immunohistochemistry and staining evaluation

Sections (5 μm thick) were cut from formalin-fixed, paraffin-embedded tissue blocks and stained with hematoxylin and eosin. Additional sections from appropriately selected blocks were cut for use in immunohistochemical analyses as described previously [13,14]. Two primary antibodies were used for immunohistochemical staining: monoclonal antibodies against COX-2 (monoclonal mouse anti-human D12; Santa Cruz Biotechnology, USA) and Bcl-2 (monoclonal mouse anti-human; Dako, Carpinteria, USA). Briefly, sections were baked for 2 hours at 72°C and deparaffinized by sequential immersion in xylene, 95% ethanol, 80% ethanol, and distilled water for 5 min each. Next, slides were placed in an autoclave containing antigen retrieval solution (0.1 M citrate buffer from BDH at pH 6.0) for 2 min at 121°C. Diluted primary antibodies (100 μl) were applied to the sections and slides were incubated in a humid chamber for 2 h at 37°C. Slides were rinsed gently with PBS and placed in a fresh PBS bath for 5 min. Next, one or two drops of diluted biotinylated secondary goat anti-mouse antibodies (Dako Cytomation) were applied to the sections and the slides were incubated in a humid chamber for 2 h at 37°C. After rinsing, one or two drops of streptavidin-horseradish peroxidase reagent (Dako Cytomation) were added to the sections and slides were incubated for 30 min at 37°C. Next, the prepared DAB substrate chromogen solution was applied to the sections and slides were incubated in the dark at room temperature for 5 min. Mayer's hematoxylin was used as a counterstain, and slides were dehydrated and mounted. Staining was evaluated as described previously [14,15].
Briefly, two pathologists who were unaware of the clinical data scored immunohistochemical expression in a semiquantitative fashion. Expression levels were assessed by evaluating the percentage of cells stained, and recorded as absent, weakly, moderately, or markedly positive (5-25% indicated weakly positive, 25-50% moderately positive, and >50% markedly positive). Using light microscopy, the mean percentage of positively stained cells in each section was calculated from three dense, medium, and light staining areas. In each area, the percentage of brown-stained cells was calculated from the total number of countable cells in five high-power fields. Therefore, expression scoring was considered discernible and reproducible.

Statistical analyses

Kruskal-Wallis H tests were employed to evaluate differences in the amount of COX-2 and Bcl-2 expression among control, CC, CGTP, CGIT and ADC specimens. To further compare the expression between two groups, we performed Mann-Whitney U tests. Spearman's tests were used to analyze the correlation between COX-2 and Bcl-2 expression in CG specimens. P values less than 0.05 were considered significant, and all P values are two-sided. All analyses were performed using SPSS version 13.0 (SPSS, USA).

Results

The immunohistochemical staining results are summarized in Table 1, and typical examples from the CGIT and ADC groups are shown in Figure 1. There were significant differences in COX-2 and Bcl-2 expression among the five groups (χ² = 58.917, p = 0.000; χ² = 50.993, p = 0.000, respectively). The ADC group showed the highest levels of COX-2 and Bcl-2 expression compared to the other groups (p < 0.05). COX-2 and Bcl-2 expression levels were higher in the CGIT group compared to the CGTP group (Z = −4.473, p = 0.000; Z = −5.580, p = 0.000, respectively), and both of these groups showed higher COX-2 expression compared to the CC group (Z = −5.227, p = 0.000; Z = −4.482, p = 0.000, respectively). However, the difference in Bcl-2 expression between the CGTP and CC groups was not significant (Z = −0.752, p = 0.452). COX-2 and Bcl-2 levels were not different between the control and CC groups (Z = −0.429, p = 0.668; Z = −1.658, p = 0.097, respectively). To determine whether increased COX-2 expression was associated with up-regulation of the anti-apoptotic protein Bcl-2 in CG patients, Spearman's tests were performed to analyze the correlation between expression of the two proteins in specimens. We found that COX-2 and Bcl-2 expression were positively related (r = 0.648, p = 0.000).

Discussion

COX-2 overexpression contributes to tumorigenesis through multiple and complex mechanisms [9]. Liu et al. reported that strong COX-2 expression in murine mammary gland epithelial cells resulted in breast tumor development [16]. Nevertheless, other mouse models of skin carcinogenesis found that COX-2 plays a role in tumor promotion rather than initiation [17,18]. In the current study, we observed that COX-2 expression in CGIT and CGTP specimens was significantly higher than in CC and control specimens. CGIT tissue showed stronger COX-2 expression than CGTP. Additionally, COX-2 was aberrantly expressed in ADC tissue. These data suggest that COX-2 overexpression in these two CG subtypes likely contributes to sensitizing premalignant lesions to genotoxic carcinogens. Apoptosis is a programmed cell death process that depends on a balance of pro- and anti-apoptotic factors. It is vital for tissue homeostasis and defense against pathogens.
Decreased apoptosis has been observed in premalignant lesions. It is well known that COX-2 overexpression increases expression of the proto-oncogene Bcl-2 and inhibits apoptosis [19]. Bcl-2, the first apoptotic regulator identified, was originally discovered as the defining oncogene in follicular lymphomas [20]. Unlike other oncogenes that increase cell proliferation, Bcl-2 inhibits programmed cell death and affects the apoptotic pathway, which is critical for cancer development [19]. Our results demonstrate that Bcl-2 expression in CGIT, but not in CGTP, was significantly higher than in CC and control specimens. However, ADC cases had the highest levels of Bcl-2. Additionally, Bcl-2 expression in CG cases was positively related to COX-2 expression, similar to the report by Tsujii et al. [21]. These data suggest that impaired apoptosis may occur in both CG subtypes and play a critical role in premalignant lesions. Several reports have shown that adenocarcinoma of the bladder is associated with CG [22][23][24][25]. However, after more than ten years of follow-up, Corica et al. reported that none of the 53 patients with CG developed bladder cancer [26]. As a result, the association between CG and adenocarcinoma remains unclear. Although inflammation is regarded as a possible initiator of cancer [27,28] and COX-2 and Bcl-2 expression have been reported to act as tumor initiators or promoters [28], the malignant potential of CG should be examined in future studies. In conclusion, COX-2 and Bcl-2 overexpression in CG suggests that CG, particularly the intestinal type, may be a premalignant lesion that converts into a tumor in the presence of carcinogens. However, further molecular and clinical studies are needed to test this hypothesis.

Conclusion

COX-2 and Bcl-2 overexpression in CG suggests that CG, particularly the intestinal type, may be a premalignant lesion that converts into a tumor in the presence of carcinogens.
2016-05-04T20:20:58.661Z
2014-01-03T00:00:00.000
{ "year": 2014, "sha1": "e7a0ffb8dea10eaee138b264e77d75177da6da92", "oa_license": "CCBY", "oa_url": "https://bmcurol.biomedcentral.com/track/pdf/10.1186/1471-2490-14-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "20220173f2a621fabcc5e2e1b3a7c10444765118", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16041136
pes2o/s2orc
v3-fos-license
BeppoSAX Observations of PKS 2155-304 during an active Gamma-ray State

PKS 2155-304 was observed with BeppoSAX in November 1997 for 64 ksec (total elapsed time 33.5 hours) and, for the first time, simultaneously in gamma-rays with EGRET on board the Compton Gamma Ray Observatory and with the ground-based TeV telescope CANGAROO, during a phase of high brightness in the X-ray band. The LECS and MECS light curves show a pronounced flare (with an excursion of a factor 3.5 between min and max), with evidence of spectral hardening at maximum intensity. The source is weakly detected by the PDS in the 12-100 keV band with no significant evidence of variability. The broad band X-ray data from BeppoSAX are compared with the gamma-ray results and discussed in the framework of homogeneous synchrotron self Compton models.

Introduction

PKS 2155-304 is one of the brightest BL Lacertae objects in the X-ray band and one of the few detected in γ-rays by the EGRET experiment on CGRO (Vestrand et al. 1995). No observations at other wavelengths simultaneous with the gamma-ray ones were ever obtained for this source, yet it is essential to measure the Compton and synchrotron peaks at the same time in order to constrain emission models unambiguously (e.g., Dermer et al. 1997, Tavecchio et al. 1998a). For these reasons, having been informed by the EGRET team of their observing plan and of the positive results of the first days of the CGRO observation, we asked to swap a prescheduled target of our BeppoSAX blazar program with PKS 2155-304. During November 11-17, 1997 (Sreekumar & Vestrand 1997), the γ-ray flux from PKS 2155-304 was very high, roughly a factor of three greater than the previously published value from this object. BeppoSAX pointed at PKS 2155-304 for about 1.5 days starting Nov 22. Here we report and discuss the data obtained from the BeppoSAX observation. A complete paper including a detailed description of the data analysis procedure is in preparation (Chiappetti et al. 1998) and we plan to submit it jointly with a full EGRET paper (Vestrand et al. 1998).

Light curves

Here we summarize some of the results. Fig 1 (left frame) shows the light curves binned over 1000 sec obtained in different energy bands. The light curves show clear high-amplitude variability: three peaks can be identified. The most rapid variation observed (the decline from the peak at the start of the observation) has a halving timescale of about 2 × 10⁴ s, similar to previous occasions (see e.g. Urry et al. 1997). No shorter timescale variability is detected, although we would have been sensitive to doubling timescales of order 10³ s. The variability amplitude is energy dependent, as shown by the hardness ratio histories plotted at the bottom of Fig 1. The HR correlates positively with the flux, indicating that states with higher flux have harder spectra.

Spectral analysis

We found that the LECS and MECS spectra are individually well fitted by a broken power law model with galactic absorption (N_H = 1.36·10²⁰ cm⁻²), while single power law fits are unacceptable. The fitted spectral parameters are given in Table 1. The change in slope between the softest (0.1-1 keV) and hardest (3-10 keV) bands is ≃ 0.8. A broken power law fit to the combined LECS and MECS spectra yields unsatisfactory results, indicating that the spectrum has a continuous curvature. Fitting together the MECS and PDS data yields spectral parameters very similar to those obtained for the MECS alone.
The residuals show that the PDS data are consistent with an extrapolation of the MECS fits up to about 50 keV. Above this energy the PDS data show an indication of an excess, suggesting a flattening of the spectrum.

Spectral Energy Distributions and Discussion

The deconvolved spectral energy distributions (SED) measured by SAX (0.1-300 keV) at maximum and minimum intensity during this observation are compared in Fig. 1 (right frame) with non-simultaneous data at lower frequencies and with the gamma-ray data from the discovery observation (Vestrand, Stacy and Sreekumar 1995). The latter are also shown multiplied by a factor of three to represent the gamma-ray state of November 1997 as communicated in an IAU circular (Sreekumar & Vestrand 1997). The final γ-ray data are not available yet. From the public X-ray data obtained by the All Sky Monitor on XTE we infer that the source was brighter during the first week of the EGRET pointing, which yielded the high γ-ray flux (Sreekumar & Vestrand 1997), than during the BeppoSAX observations. We therefore suppose that the γ-ray flux simultaneous to our observations could be intermediate between the two states reported in the figure. Note also that the PDS data refer to an "average" state over the SAX exposure time. In order to estimate the physical parameters of the emitting region in PKS 2155-304 we fitted the observed SEDs in the full X-ray range with a simple SSC model involving a homogeneous spherical region of radius R, magnetic field B, filled with relativistic particles with an energy distribution described by a broken power law (4 parameters: n₁, n₂, γ_b and a normalization constant, K), and with Doppler factor δ. This seven-parameter model is strongly constrained by the data, which yield a determination of the two slopes (X-ray and gamma-ray), the frequency and flux of the synchrotron peak, a flux value for the Compton component and a lower limit to the Compton peak frequency. Assuming R = c·t_var with t_var = 2 hours, the system is practically closed. A general discussion of the parameter determination procedure for this class of models, with analytic formulae, is given in Tavecchio et al. (1998a, b). In Fig. 1 we show two models representing the high and low X-ray intensity intervals in our observation. We arbitrarily assumed that the lower intensity state corresponds to the gamma-ray intensity reported in 1995, and we chose not to fit the low frequency data since there the variability timescales are longer and they could refer to a larger emission region. In order to account for the flaring state, the break energy of the electron spectrum was shifted to higher energies, leaving the other parameters unchanged. Correspondingly, the Compton peak also increases in flux and shifts to higher energies. Both effects are however reduced with respect to the "quadratic" relation expected in the Thomson limit, since for these very high energy electrons the Klein-Nishina limit plays an important role. The predicted TeV flux is F(>1 TeV) = 10⁻¹¹ ph cm⁻² s⁻¹ and F(>1 TeV) = 2.5·10⁻¹² ph cm⁻² s⁻¹ in the two states, respectively. Unfortunately the CANGAROO telescope did not detect PKS 2155-304 in November 1997, but the upper limits are consistent with the predicted values (Kifune, priv. comm.). The sensitivity of the CANGAROO observatory is expected to improve significantly in the next year, with the addition of new telescopes.
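As a brief numerical aside (not taken from the paper's analysis code), two ingredients quoted above, the causality size estimate R = c·t_var and a continuous broken power law, can be written out in a few lines of R; the photon indices and break energy below are placeholders rather than the fitted values of Table 1.

```r
# Illustrative sketch only; parameter values are placeholders.
c_cm_s <- 2.998e10            # speed of light [cm s^-1]
t_var  <- 2 * 3600            # variability timescale assumed in the text [s]
R      <- c_cm_s * t_var      # emitting-region size, R = c * t_var (~2.2e14 cm)

# Continuous broken power law F(E): index g1 below the break E_b, g2 above it.
bknpow <- function(E, K, E_b, g1, g2) {
  ifelse(E < E_b,
         K * E^(-g1),
         K * E_b^(g2 - g1) * E^(-g2))   # equals K * E^(-g1) at E = E_b, so the branches join smoothly
}
E_keV <- 10^seq(-1, 1, length.out = 100)                 # 0.1-10 keV grid
f     <- bknpow(E_keV, K = 1, E_b = 1.5, g1 = 1.6, g2 = 2.4)
```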
It will therefore be worthwhile to repeat the "experiment" of simultaneous X-ray and TeV observations to verify whether the predicted TeV flux is actually observed.
2014-10-01T00:00:00.000Z
1998-08-18T00:00:00.000
{ "year": 1998, "sha1": "d9ba63f463f836d7141806220cb6c31e224240a1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "74dd71f7b1cdf68a0f6919cdfe72f6aaf43fb03d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257986035
pes2o/s2orc
v3-fos-license
Identification of natural killer cell-associated subtyping and gene signature to predict prognosis and drug sensitivity of lung adenocarcinoma

Introduction: This research explored the immune characteristics of natural killer (NK) cells in lung adenocarcinoma (LUAD) and their predictive role in patient survival and immunotherapy response. Material and methods: Molecular subtyping of LUAD samples was performed by evaluating NK cell-associated pathways and genes in The Cancer Genome Atlas (TCGA) dataset using consensus clustering. Twelve programmed cell death (PCD) patterns were acquired from a previous study. RiskScore prognostic models were constructed using least absolute shrinkage and selection operator (Lasso) and Cox regression. Model stability was validated in the Gene Expression Omnibus (GEO) database. Results: We classified LUAD into three different molecular subgroups based on NK cell-related genes, with the worst prognosis in C1 patients and the best in C3. Homologous Recombination Defects, purity and ploidy, TMB, LOH, and Aneuploidy Score were highest in C1 and lowest in C3. ImmuneScore was highest in the C3 subtype, suggesting greater immune infiltration in C3. The C1 subtype had higher TIDE scores, indicating that C1 patients may benefit less from immunotherapy. Generally, the C3 subtype presented the highest PCD pattern scores. With four genes, ANLN, FAM83A, RHOV and PARP15, we constructed a LUAD risk prediction model with significant differences in immune cell composition and cell cycle-related pathways between the two risk groups. Samples in the C1 subtype and the high-risk group were more sensitive to chemotherapeutic drugs. PCD scores differed between the high- and low-risk groups. Finally, we combined the RiskScore and clinical features to improve the performance of the prediction model, and the calibration curve and decision curve verified the robustness of the model. Conclusion: We identified three stable molecular subtypes of LUAD and constructed a prognostic model based on NK cell-related genes, which may have great potential for application in predicting immunotherapy response and patient prognosis.

Background

Lung cancer is a leading cause of cancer mortality in the world (Hirsch et al., 2017). Statistics indicate that a large number of people in the United States will die from cancer in 2022, approximately 350 of them from lung cancer every day (Siegel et al., 2022). Adenocarcinoma (lung adenocarcinoma, LUAD) is currently the predominant histologic type, accounting for approximately 50% of all lung cancer cases, and is notable for its high incidence, high mortality, and poor prognosis (Succony et al., 2021). Currently, surgery is recommended for early-stage lung cancer and is considered the most effective treatment option, while patients with advanced disease are often further treated with radiotherapy, chemotherapy, targeted therapy, and immunotherapy (Hoy et al., 2019). Regardless of the interventions used, the overall 5-year survival of LUAD patients remains below 20% (Duma et al., 2019). Therefore, it is necessary to deepen the current understanding of the pathogenesis of LUAD to provide a theoretical basis for reducing the occurrence of LUAD and improving its treatment and prognosis. The development of LUAD involves the external environment, gene mutation, tumor immunity, and family genetics, and is a multistep, cascade process (Suster and Mino-Kenudson, 2020).
As a component of the tumor microenvironment, tumor immune cells are present at all stages of LUAD and play an important role in shaping tumor development (Saab et al., 2020). For example, tumor-associated macrophages can accelerate tumor progression by promoting tumor angiogenesis, metastasis and immune escape. Regulatory T cells inhibit anti-tumor immune responses, thereby promoting the development of an immunosuppressive tumor microenvironment and cancer progression (Hsieh et al., 2012). Cytotoxic CD8+ memory T cells kill tumor cells by recognizing specific antigens on them and stimulating an immune response (Arneth, 2019). Dendritic cells are antigen-presenting cells and an important bridge between innate and adaptive immunity. Dendritic cells can not only induce cellular and humoral immunity, but also activate natural killer (NK) cells and NK T cells (Sadeghzadeh et al., 2020). NK cells are anti-tumor immune cells that kill cancer cells in the body, but in the tumor microenvironment NK cells are generally reduced in number and impaired in function (Russell et al., 2022). Basic experiments and clinical studies together have shown that NK cells form the first line of defense against tumors and do not require pre-stimulation to migrate to the lesion and play an immunomodulatory role (Guillerey, 2020). Phenotypically, NK cell subpopulations display potent anti-tumor immune cytotoxicity via the MEK/ERK and PI3K/Akt/mTOR pathways upon stimulation by cytokines such as interleukin (IL) (Valipour et al., 2019). Although the patient's immune system can recognize neoantigens produced by tumors with a high mutational load (immunogenic "hot" tumors), lung cancer is only moderately immunogenic in terms of its mutational load. Therefore, the highly complex interaction between LUAD and NK cells is a major challenge for improving immunotherapy. Studies on the role of NK cells in the pathogenesis of LUAD have delved into the genetic-molecular field, and it is mostly believed that the development of LUAD results from multigene, multistage involvement (Crinier et al., 2020). However, the genetic landscape and immune profile of NK cells in LUAD are unclear, and NK cell-based prediction of prognosis and immunotherapy efficacy in LUAD has not been reported. This study first identified stable molecular subtypes of LUAD by consensus clustering of NK cell-associated genes, and further compared clinicopathological, mutational, immunological, and pathway characteristics among the subtypes. Then, we constructed a risk model and a clinical prognostic model, which can be used to evaluate personalized treatment for LUAD patients.

2 Materials and methods

2.1 Source of clinical information and gene expression profile data of NK cells

The clinical information and mRNA transcriptome data of LUAD patients were downloaded from the TCGA GDC API (Colaprico et al., 2016). To verify the accuracy of the results, we also downloaded the clinical and mRNA gene expression data of LUAD patients from the Gene Expression Omnibus (GEO) database (Toro-Domínguez et al., 2019), including the GSE72094 and GSE31210 datasets. The TCGA dataset contained 500 LUAD samples and was used as the training set, while the GSE72094 and GSE31210 datasets contained 398 and 226 LUAD samples, respectively, and were used as validation sets.
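The retrieval step described above can be sketched as follows. This is a hedged, generic example using the Bioconductor packages commonly paired with these repositories (GEOquery, TCGAbiolinks), not the authors' actual script, and the workflow name for the GDC counts depends on the TCGAbiolinks/GDC release.

```r
# Minimal sketch of one way to obtain the cohorts named above (illustrative only).
library(GEOquery)       # GEO validation sets (GSE72094, GSE31210)
library(TCGAbiolinks)   # TCGA-LUAD training set via the GDC API

gse      <- getGEO("GSE72094", GSEMatrix = TRUE)[[1]]   # ExpressionSet
expr_geo <- Biobase::exprs(gse)                          # probes x samples matrix
clin_geo <- Biobase::pData(gse)                          # clinical annotations

query <- GDCquery(project = "TCGA-LUAD",
                  data.category = "Transcriptome Profiling",
                  data.type = "Gene Expression Quantification",
                  workflow.type = "STAR - Counts")       # name varies with GDC release
GDCdownload(query)
luad <- GDCprepare(query)   # SummarizedExperiment holding expression plus clinical columns
```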
To ensure the quality and reliability of the downloaded data, quality control was performed. The inclusion and exclusion criteria were: (1) to remove samples with incomplete clinical information; (2) to remove samples with unknown survival time or survival status; (3) to remove probes for which one probe matched multiple genes; when multiple probes matched one gene, the mean value was taken as the expression value of that gene.

Subtyping of LUAD patients based on NK cell-associated genes

A total of 213 NK cell-associated genes and 18 NK cell-associated pathways were obtained from the three databases, and we used the single-sample gene set enrichment analysis (ssGSEA) method to evaluate these 213 NK cell-associated genes and 18 NK cell-associated pathways in the TCGA and GEO datasets, respectively. The samples were then clustered with ConsensusClusterPlus using these pathway scores in the TCGA and GEO cohorts, with the "K-M" algorithm and "1-Pearson correlation" as the distance metric (Azman et al., 2006). We conducted 500 bootstraps, each including 80% of the patients of the training set and 20% of those of the validation set. Finally, based on the cumulative distribution function (CDF), the optimal number of clusters was determined, and the optimal classification and the molecular subtype of each sample were obtained by calculating the consistency matrix and the consistency cumulative distribution function (Zhang et al., 2021a).

Immunological features and pathway analysis among different molecular subtypes

We obtained the molecular characteristics of LUAD genomic alterations from the published literature, including LOH, Aneuploidy Score, tumor mutation burden (TMB), purity and ploidy, Homologous Recombination Defects, and intratumor heterogeneity. The relative abundance of 22 immune cell types was calculated using the CIBERSORT R package. At the same time, we used the ESTIMATE algorithm R package to calculate the proportion of immune cells, and finally compared the inflammatory and immune activity scores (Chakraborty and Hossain, 2018; Chen et al., 2018). We performed gene set enrichment analysis (GSEA) on all NK cell-associated genes in the Hallmark database, and then used the ssGSEA method in the GSVA package to calculate the pathway scores for both the TCGA and GEO datasets (Barbie et al., 2009). A false discovery rate (FDR) of <0.05 was considered statistically significant in this study.

Drug sensitivity analysis between molecular subtypes

Immune checkpoint inhibitor (ICI)-based therapy has become one of the standard treatments for advanced lung cancer (Zhang et al., 2021b). We first assessed the expression of genes associated with immunotherapy, such as CTLA4, PD-L1, and PD-1, among the molecular subtypes to determine whether there were differences in immunotherapy responsiveness among them. Next, we used the TIDE software (http://tide.dfci.harvard.edu/) to assess the potential clinical effects of immunotherapy in our defined molecular subtypes. A greater likelihood of immune escape correlates with a higher TIDE prediction score, suggesting that such patients may benefit less from immunotherapy (Jiang et al., 2018). Finally, we performed drug sensitivity prediction for LUAD with the "pRRophetic" package (Geeleher et al., 2014).
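For readers who want to see the shape of the subtyping workflow described above (under "Subtyping of LUAD patients based on NK cell-associated genes"), the following is a hedged sketch in R: `expr` and `nk_sets` are placeholder objects (a genes-by-samples expression matrix and a named list of NK cell-associated gene sets), the classic `gsva()` interface is shown (newer GSVA releases wrap it in `ssgseaParam()`), and `pam` with a Pearson-based distance stands in for the "K-M"/1-Pearson combination described in the text.

```r
# Illustrative sketch of ssGSEA scoring followed by consensus clustering (not the authors' code).
library(GSVA)
library(ConsensusClusterPlus)

# ssGSEA scores: one row per NK cell-associated pathway, one column per sample
nk_scores <- gsva(expr, nk_sets, method = "ssgsea")

cc <- ConsensusClusterPlus(as.matrix(nk_scores),
                           maxK = 6, reps = 500,          # 500 resamplings, as in the text
                           pItem = 0.8, pFeature = 1,     # 80% of samples per resampling
                           clusterAlg = "pam",            # distance-based stand-in for the "K-M" option
                           distance = "pearson",          # 1 - Pearson correlation
                           seed = 1234, plot = "png")
subtype <- cc[[3]]$consensusClass                          # k = 3 assignment, chosen from the CDF curves
```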
Identification of key NK cell-related genes among molecular subtypes

The differentially expressed genes among the molecular subtypes were calculated with the "limma" package, using FDR <0.05 and |log2FC| > 1 as the criteria for statistical significance, and were visualized in a heatmap and a volcano plot with the "pheatmap" and "ggplot2" R packages. Then, all genes with statistically significant differences were subjected to enrichment analysis using the "clusterProfiler" package. Next, we performed univariate Cox regression analysis on the differentially expressed genes between molecular subtypes, and then reduced the prognosis-related genes by Lasso regression (Sun et al., 2021), which can better handle multicollinearity in regression analysis by compressing some coefficients and setting others to zero. As lambda gradually increased, we selected the number of factors at which the coefficients of the independent variables tended to zero. We then applied stepwise regression with the Akaike information criterion (AIC), which has the advantage of considering both the statistical fit of the model and the number of parameters used to achieve it, so that a sufficient fit is obtained with fewer parameters (Zhang, 2016).

Construction and validation of the prognostic model

We calculated the NK cell-related prognostic RiskScore for each sample according to the defined RiskScore formula and normalized it (Nie et al., 2021). After that, LUAD patients were divided into high- and low-risk groups based on the relationship between the RiskScore and 0, where those with RiskScore >0 were considered high risk and those with RiskScore <0 were considered low risk. Finally, the survival difference between the two groups was compared with the log-rank test. To verify the robustness of the model, we performed immune signature analysis, survival curve analysis, and drug treatment difference analysis for the patients in the two groups.

Improvement of prognostic models and survival prediction in LUAD patients

To more accurately quantify the risk assessment and survival probability of LUAD patients, we combined the RiskScore with other clinicopathological characteristics of LUAD patients and constructed a nomogram using the "nomogramEx" R package. To validate the accuracy of the model, a calibration curve was plotted with the "PredictABEL" package to visualize the goodness-of-fit. This was followed by decision curve analysis (DCA) to describe the change in net benefit as the threshold probability of model-guided intervention changed (Van Calster et al., 2016; Van Calster et al., 2018).

Programmed cell death (PCD) analysis

Twelve PCD patterns (apoptosis, necroptosis, pyroptosis, ferroptosis, cuproptosis, entotic cell death, netotic cell death, parthanatos, lysosome-dependent cell death, autophagy-dependent cell death, alkaliptosis, and oxeiptosis) were taken from previous research (Zou et al., 2022). ssGSEA analysis was performed on the expression data of PCD-related genes using the R package GSVA. Spearman correlation analysis was conducted to assess the relationships among PCD patterns, clinical features, and RiskScore in LUAD samples.

Statistical analysis

Unless otherwise specified, all statistical tests were two-sided and conducted using R software (version 4.1.3, https://www.r-project.org/), and p < 0.05 was considered statistically significant.
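As a compact illustration of the RiskScore construction described above (under "Construction and validation of the prognostic model"), the Lasso-Cox step, the zero-threshold grouping, and the log-rank comparison can be written as below. This is a sketch under assumptions, not the published pipeline: `x`, `time`, and `status` are placeholder objects for the candidate-gene expression matrix and survival data, and the paper additionally applies AIC-based stepwise Cox after Lasso.

```r
# Hedged sketch of Lasso-Cox gene selection and RiskScore grouping (illustrative only).
library(glmnet)
library(survival)

y     <- cbind(time = time, status = status)         # survival response for family = "cox"
cvfit <- cv.glmnet(x, y, family = "cox", alpha = 1, nfolds = 10)

coefs <- coef(cvfit, s = "lambda.min")
sel   <- rownames(coefs)[as.numeric(coefs) != 0]      # genes retained by Lasso

# RiskScore: weighted sum of the selected genes, z-scored; >0 defines the high-risk group
riskscore <- as.numeric(scale(x[, sel, drop = FALSE] %*% as.numeric(coefs[sel, ])))
group     <- ifelse(riskscore > 0, "High", "Low")

survdiff(Surv(time, status) ~ group)                  # log-rank test between the two risk groups
```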
Molecular subtyping of LUAD based on NK cell-associated genes

We first identified NK cell-related genes closely associated with LUAD survival by univariate Cox regression analysis and screened 63 prognostically significant genes (p < 0.05, Figure 1A), including risk-associated genes such as SHC1, TICAM1, PVR, RAET1E and RAC1 (HR > 1), and protective genes such as KLRB1, CD160, KIR3DL2, CLEC12B and KIR2DL1 (HR < 1). Then, we used these 63 genes for consensus clustering and determined the best cluster number according to the CDF. As seen in Figures 1B, C, clustering was most stable at Cluster = 3; hence, k = 3 was selected to obtain three molecular subtypes (C1, C2, and C3) (Figure 1D). Then, we performed survival analysis of patients in these three molecular subtypes using the K-M survival method, and the results identified a significant difference in prognosis among the three molecular subtypes, with C1 patients having the worst prognosis and C3 patients having the best prognosis (Figure 1E). The results were also validated in the GSE72094 dataset (Figure 1F). Meanwhile, the heatmap showed that the "Risk" genes were highly expressed in the C1 subtype and the "Protective" genes were highly expressed in the C3 subtype (Figure 1G). These results suggested that the molecular subtyping based on NK cell-related genes was reasonable, and that there were significant differences in gene expression and prognosis among patients with different subtypes.

Genetic landscape between molecular subtypes of LUAD

To explore the differences in specific genomic profiles among the molecular subtypes, we compared the molecular profiles of the C1, C2, and C3 subtypes of LUAD samples. As is evident from Figure 2A, purity and ploidy, TMB, Aneuploidy Score, LOH, and Homologous Recombination Defects were highest in C1 but lowest in C3, which was consistent with previous studies (Thorsson et al., 2018). In addition, we compared the molecular subtyping of published studies with that in this study. Here it was found that the published C3 subclass accounted for most of the C3 subtype we defined, suggesting that the C3 subtype is the major subtype of LUAD (Figure 2B). In addition, a significant correlation between molecular subtypes and gene mutations was detected when analyzing the correlation between gene mutations and molecular subtypes; TTN, MUC16, CSMD3, and RYR2 were the most widely mutated genes in LUAD (Figure 2C), and this finding indicated that the development of LUAD is closely related to the above-mentioned gene mutations.

Pathway enrichment analysis among the molecular subtypes of LUAD

To investigate pathway differences in LUAD among the different molecular subtypes, we performed GSEA enrichment analysis among the subtypes. As shown in Figure 3A, we identified a total of 33 significantly enriched pathways in the TCGA-LUAD dataset, including MYC_TARGETS_V2, E2F_TARGETS, INFLAMMATORY_RESPONSE, MYOGENESIS, INTERFERON_GAMMA_RESPONSE, MYC_TARGETS_V1, GLYCOLYSIS, G2M_CHECKPOINT, EPITHELIAL_MESENCHYMAL_TRANSITION and ALLOGRAFT_REJECTION, suggesting that these NK cell genes are mainly associated with the cell cycle and immunity in C1 and C3. Additionally, pathways differing between the C1 and C3 subtypes, between the C2 and C3 subtypes, and between C1 and C2 were analyzed (Figure 3B).
Overall, the cell cycle pathway was activated in C1 patients while immune-related pathways were suppressed; we therefore hypothesized that these NK cell genes might play an important role in the cell cycle pathway as well as in the tumor microenvironment. To validate these results, we presented the pathway differences between C1 and C2 and between C2 and C3 as radar plots, and both comparisons showed consistent differences in cell cycle- and immune-related pathways (MYC_TARGETS_V2, MTORC1_SIGNALING, MYC_TARGETS_V1, G2M_CHECKPOINT, E2F_TARGETS, UNFOLDED_PROTEIN_RESPONSE) (Figure 3C).

Immune characteristics among different molecular subtypes of LUAD

The immune system plays a dual role in the development of LUAD: it can recognize and destroy tumor cells, but under the pressure of immune selection tumor cells can also evade host immune attack by exploiting the immune system's own negative regulatory mechanisms to form a complex immunosuppressive network, so the TME is in a constant state of change (Anichini et al., 2020; Spella and Stathopoulos, 2021). To explore the immune landscape among the different molecular subtypes of LUAD, we first assessed the differences in immune cell components in the TCGA-LUAD cohort using the CIBERSORT algorithm and observed that most immune cell populations (B cells, T cells, NK cells, etc.) differed significantly (p < 0.05) (Figure 4A). We then used the ESTIMATE algorithm to assess immune cell infiltration, and the results showed that StromalScore, ImmuneScore, and EstimateScore differed significantly among C1, C2, and C3 (p < 0.05), with the ImmuneScore being highest in the C3 subtype, suggesting a higher degree of immune infiltration in C3 (Figure 4B). Similarly, the results obtained in the GSE72094-LUAD cohort were consistent with those in the TCGA-LUAD cohort (Figures 4C, D). In addition, we assessed the inflammatory activity of C1, C2, and C3: except for IgG, the remaining six of the seven metagene clusters (HCK, Interferon, LCK, MHC I, MHC II, and STAT1) showed significantly different enrichment scores, with the C3 subtype having higher inflammatory activity (Figure 4E). The findings were consistent in the GSE72094-LUAD cohort (Figure 4F).

Differences in immunotherapy between molecular subtypes

In recent years, immunotherapy has created new opportunities in the treatment of lung cancer, and clinical trials of several immune checkpoint inhibitors have demonstrated their efficacy and safety in LUAD (Hua et al., 2021). On this basis, we first evaluated the expression of representative immunotherapy target molecules (PD-1, PD-L1, CTLA4) among the three molecular subtypes and observed that PD-1, PD-L1, and CTLA4 were expressed at significantly higher levels in the C3 subtype (p < 0.05) (Figure 5A). We also applied the "T-cell-inflamed GEP score" to assess the predictive potential of the different molecular subtypes for cancer immunotherapy, and this score was likewise highest in C3 (Figure 5B). Considering that IFN-γ is a cytokine with a key role in immunomodulation and immunotherapy, we downloaded the GOBP_RESPONSE_TO_INTERFERON_GAMMA gene set from the GO database for ssGSEA analysis and found that the IFN-γ response was significantly enhanced in the C1 subtype (Figure 5C). We also compared IFNG gene expression among the three subtypes and found that IFNG was markedly more highly expressed in the C3 subtype (Figure 5D).
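A minimal sketch of the ssGSEA-based IFN-γ response scoring used above is given below, assuming ifng_genes holds the GOBP_RESPONSE_TO_INTERFERON_GAMMA gene symbols and subtype the C1-C3 labels; the older gsva() interface is used, and the CIBERSORT and ESTIMATE steps are only indicated in comments.

```r
library(GSVA)

## ssGSEA score of the GO "response to interferon-gamma" gene set per sample
## (ifng_genes is a placeholder character vector of gene symbols)
ifng_set   <- list(IFNG_RESPONSE = ifng_genes)
ifng_score <- gsva(as.matrix(expr), ifng_set, method = "ssgsea")

## Compare the scores among the three molecular subtypes
kruskal.test(as.numeric(ifng_score[1, ]) ~ subtype)
boxplot(as.numeric(ifng_score[1, ]) ~ subtype, ylab = "IFN-gamma response (ssGSEA)")

## Immune cell fractions (CIBERSORT) and Stromal/Immune/ESTIMATE scores were
## compared among subtypes in the same way; those steps need the CIBERSORT
## source code and the estimate package and are not reproduced in this sketch.
```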
Moreover, the CYT score, which reflects cytotoxic activity, was significantly higher in the C3 subtype than in the other subtypes (Figure 5E). In addition, TIDE prediction indicated that the C1 subtype had a higher TIDE score, suggesting that C1 patients were less likely to benefit from immunotherapy (Figure 5F). The estimated IC50 values of docetaxel, vincristine, paclitaxel, and cisplatin among the three subtypes showed that C1 was more sensitive to these four chemotherapy drugs (Figure 5G). The above results indicated that predicting the response of LUAD to immunotherapy based on NK cell-related genes is a practical approach.

ssGSEA was used to calculate the scores of the 12 PCD patterns in each sample of the TCGA and GSE72094 datasets. Nine PCD scores differed among the three subtypes in both datasets (Figures 6A, B). In the TCGA dataset, stage, gender, and especially age were closely associated with the PCD patterns (Figure 6C), whereas in the GSE72094 dataset the clinical features showed little association with PCD patterns (Figure 6D). In the TCGA dataset, the autophagy score was increased in early-stage disease, the pyroptosis, autophagy, necroptosis, and oxeiptosis scores were higher in male samples, and samples from patients older than 60 years had higher pyroptosis and entotic cell death scores (Figure 6E). In the GSE72094 dataset, the oxeiptosis score was highest in Stage III, and the ferroptosis and necroptosis scores were greater in patients older than 60 years (Figure 6F).

Establishment of the LUAD risk model

We first identified the NK cell-related genes significantly differentially expressed among the three molecular subtypes with the limma package; significant expression differences among C1, C2, and C3 were detected, comprising 11 upregulated and 180 downregulated genes (Supplementary Figures S1A, B). Enrichment analysis showed that the downregulated genes were related to immune pathways (Supplementary Figure S1C), while the upregulated genes were related to inflammatory and immune pathways (Supplementary Figure S1D). From these genes, 173 with a strong prognostic impact (p < 0.05), including 159 "Protective" and 14 "Risk" genes, were identified by univariate Cox regression analysis (Supplementary Figure S2A). We then followed the coefficient trajectory of each gene as lambda changed in the Lasso analysis; the model was optimal at lambda = 0.0382, corresponding to nine genes (Supplementary Figures S2B, C). After that, we further reduced the gene set by AIC-based stepwise regression, which yielded the four model genes ANLN, FAM83A, RHOV, and PARP15 (Supplementary Figure S2D). We then calculated the RiskScore for each TCGA-LUAD patient using these four genes and the formula described above (Figure 7A). Patients with RiskScore ≤ 0 were classified into the low-risk group and those with RiskScore > 0 into the high-risk group. Next, we performed time-dependent ROC analysis with the "timeROC" package to evaluate the 1-, 2-, 3-, and 5-year prognostic classification efficiency and found that the model had high AUC values (0.71, 0.69, 0.70, and 0.67, respectively) (Figure 7B). Survival analysis showed that patients in the low-risk group had a significantly better prognosis (p < 0.001) (Figure 7C). To confirm the robustness of this prognostic model, we validated it in the GSE72094 and GSE31210 cohorts using the same approach to calculate patient RiskScores (Figures 7D-G).
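The RiskScore construction and validation can be illustrated with the sketch below. The coefficients are taken from the final stepwise Cox model described in the Methods, survival times are assumed to be stored in days (hence the 365-day conversion), and all object names are placeholders.

```r
library(survival)
library(timeROC)

## RiskScore = sum over model genes of (Cox coefficient x z-scored expression),
## then standardized so that 0 separates the high- and low-risk groups
beta <- coef(final_fit)                           # e.g. ANLN, FAM83A, RHOV, PARP15
z    <- t(scale(t(expr[names(beta), ])))          # z-score each model gene across samples
RiskScore  <- as.numeric(scale(colSums(z * beta)))
risk_group <- ifelse(RiskScore > 0, "High", "Low")

## Time-dependent ROC at 1, 2, 3 and 5 years
roc <- timeROC(T = os_time, delta = os_status, marker = RiskScore,
               cause = 1, times = c(1, 2, 3, 5) * 365)
roc$AUC

## Log-rank comparison of the two risk groups
survdiff(Surv(os_time, os_status) ~ risk_group)
```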
Pathological characteristics of the high- and low-risk groups

To investigate the reliability of this risk classification, we first compared the clinical characteristics of patients in the high- and low-risk groups. The RiskScores of patients with Stage III-IV disease, and of those with more advanced T, N, and M stages, were significantly higher than those of Stage I-II patients. In addition, male patients had a higher RiskScore (Figure 8A). We also compared RiskScore differences by molecular subtype and found that the RiskScore of the C1 subtype, which has the poorer prognosis, was significantly higher than that of C3, which has the better prognostic outcome (Figure 8B), and that the majority of samples with a high RiskScore were C1 patients (Figure 8C). In addition, we examined whether the prognostic difference between the high- and low-risk groups held within the different clinicopathological subgroups of the TCGA-LUAD cohort. Across the different clinical subgroups the risk grouping performed equally well, pointing to the reliability of the grouping (Figure 8D). This finding also applied to the GSE72094-LUAD cohort (Supplementary Figure S3).

Immune infiltration and pathway characteristics of low-risk and high-risk patients

We compared the relative abundance of 22 immune cell types between the high- and low-risk groups of the TCGA-LUAD cohort and found that most immune cell populations (B cells, macrophages, T cells, and mast cells) differed significantly between the two groups (p < 0.05, Figure 9A). Notably, activated NK cells showed no significant difference between the high- and low-risk groups. We also examined the correlation between the RiskScore and the 22 immune cell components (Figure 9B). In addition, we assessed immune cell infiltration using the ESTIMATE method; the three scores differed significantly between the two risk groups (p < 0.05), and the low-RiskScore group had higher immune infiltration (Figure 9C). The relationship between biological functions and the RiskScore was analyzed by ssGSEA, which showed that the high-risk group was significantly enriched for cell cycle-related pathways such as HALLMARK_SPERMATOGENESIS, HALLMARK_DNA_REPAIR, HALLMARK_MYC_TARGETS_V2, and HALLMARK_UNFOLDED_PROTEIN_RESPONSE (Figure 9D). We then selected the functional pathways with correlation coefficients greater than 0.4, which showed that the RiskScore was positively correlated with cell cycle-related pathways such as HALLMARK_MYC_TARGETS_V1.

Differences in immunotherapy/chemotherapy for patients in the high- and low-risk groups

First, we used the "T-cell-inflamed GEP score" to assess the predictive potential of the RiskScore subgroups for cancer immunotherapy. The score was elevated in the low-risk group, but the difference was not statistically significant (Figure 10A); however, the IFN-γ response was markedly elevated in the low-risk group (Figure 10B). The CYT score, which reflects cytotoxic activity, was also elevated in the low-risk group, but the difference was not statistically significant (Figure 10C).
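The pathway-correlation step mentioned above (hallmark ssGSEA scores filtered at |r| > 0.4) can be sketched as follows; the GMT file name is a placeholder and Pearson correlation is assumed.

```r
library(GSVA)
library(GSEABase)

## ssGSEA scores for the 50 MSigDB hallmark gene sets
hallmark   <- getGmt("h.all.v7.5.1.symbols.gmt")        # placeholder file name
hall_score <- gsva(as.matrix(expr), hallmark, method = "ssgsea")

## Correlate every hallmark score with the RiskScore and keep |r| > 0.4
r <- apply(hall_score, 1, cor, y = RiskScore)
sort(r[abs(r) > 0.4], decreasing = TRUE)
```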
The expression of representative immunotherapy target molecules (CTLA4, PD-L1, and PD-1) was then calculated in the two risk groups; CTLA4 was expressed at significantly higher levels in the low-risk group (p < 0.05), whereas the differences in PD-1 and PD-L1 expression were not significant (Figure 10D). To better understand the impact of the RiskScore on drug response, we examined the relationship between the RiskScore and drug response in cancer cell lines. Using Spearman correlation analysis of the Genomics of Drug Sensitivity in Cancer (GDSC, http://cancer.sanger.ac.uk/cell_lines#) database, we found 49 substantially correlated RiskScore-drug sensitivity pairs. Of these 49 pairs, 15, involving drugs such as Vinorelbine, Sabutoclax, Vinblastine, Entinostat, Vincristine, and Sorafenib, were most significantly associated with the RiskScore (Figure 10E). By examining the signaling pathways of the genes targeted by these drugs, we found that they mainly act on EGFR signaling and TNKS2-related pathways (Figure 10F). In addition, we explored the response of the two risk groups in the TCGA-LUAD cohort to the traditional chemotherapeutic agents Docetaxel, Vinorelbine, Paclitaxel, and Cisplatin and found that, overall, patients in the high-risk group were more sensitive to all four chemotherapeutic agents (Figure 10G), suggesting that patients in the high-risk group may benefit from these four drugs.

FIGURE 10 Differences in immunotherapy/chemotherapy between RiskScore subgroups. (A) Difference in "T cell inflamed GEP score" between molecular subtypes. (B) Difference in "response to IFN-γ" between molecular subtypes. (C) Differences in "Cytolytic activity" between molecular subtypes. (D) Differences in expression of immune checkpoint genes between molecular subtypes. (E) 15 drug pairs significantly associated with the RiskScore. (F) These drugs mainly target EGFR signaling and TNKS2 pathways. (G) IC50 box plots of docetaxel, vincristine, paclitaxel and cisplatin in the TCGA-LUAD dataset.

PCD characteristics in the high- and low-risk groups

We also determined the PCD characteristics of the high- and low-risk groups using ssGSEA. Six of the 12 PCD patterns differed between the high- and low-risk groups in the TCGA dataset (Figure 11A). In the GSE72094 dataset, 10 PCD pattern scores differed between the high- and low-risk groups (Figure 11B), and differences in nine PCD scores between the two groups were observed in the GSE31210 dataset (Figure 11C). The RiskScore, as well as the four model genes, was clearly associated with the PCD patterns (Figure 11D).

RiskScore combined with clinicopathological features to further improve prognostic models and survival prediction

Univariate and multivariate Cox regression analyses identified the RiskScore as the most significant prognostic factor (Figures 12A, B). We created a nomogram (Figure 12C) combining the RiskScore and other clinicopathological traits for risk assessment and prediction of survival probability in LUAD patients; in this model the RiskScore had the greatest influence on survival prediction. We further assessed the prediction accuracy of the model using calibration curves (Figure 12D): the curves at the 1-, 3-, and 5-year calibration points nearly overlapped with the standard curve, indicating that the nomogram had excellent prediction performance.
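The nomogram and calibration analysis above used the "nomogramEx" and "PredictABEL" packages; the sketch below takes the rms package as a commonly used substitute rather than reproducing the authors' code, with Age and Stage as placeholder covariates and survival time converted to years.

```r
library(rms)

## Placeholder covariates illustrate how clinicopathological features are
## combined with the RiskScore; this is an rms-based substitute, not the
## nomogramEx/PredictABEL workflow used in the original analysis.
d  <- data.frame(time = os_time / 365, status = os_status,
                 RiskScore = RiskScore, Age = age, Stage = stage)
dd <- datadist(d); options(datadist = "dd")

cox_nomo <- cph(Surv(time, status) ~ RiskScore + Age + Stage, data = d,
                x = TRUE, y = TRUE, surv = TRUE, time.inc = 3)

sv   <- Survival(cox_nomo)
nomo <- nomogram(cox_nomo,
                 fun = list(function(x) sv(1, x), function(x) sv(3, x), function(x) sv(5, x)),
                 funlabel = c("1-year OS", "3-year OS", "5-year OS"))
plot(nomo)

## Bootstrap calibration at 3 years (u must equal time.inc; repeat for 1 and 5 years)
cal <- calibrate(cox_nomo, cmethod = "KM", method = "boot", u = 3, m = 80, B = 200)
plot(cal)
```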
We also used decision curve analysis (DCA) to test the model's reliability, which showed that the RiskScore and the nomogram performed much better than the extreme curves and had the strongest ability to predict survival among the clinicopathological factors (Figure 12E).

Discussion

Lung cancer is currently the most aggressive malignancy in the world. LUAD is the most common histological subtype of primary lung cancer, accounts for 64% of peripheral lung cancers, and covers a spectrum that has been reclassified from preinvasive precancerous lesions to invasive adenocarcinoma (Denisenko et al., 2018; Hutchinson et al., 2019). Despite current advances in the treatment of LUAD, the median survival is only 8.6 months, and immune escape is considered one of the main factors leading to treatment failure in LUAD (Yotsukura et al., 2021). In contrast to the remarkable efficacy of immune checkpoint inhibitors (ICIs) in metastatic melanoma, Hodgkin's lymphoma, and bladder cancer, not all patients with LUAD are sensitive to ICIs. Mechanisms of immune escape in the absence of an adaptive immune response include hypoxia-driven immunosuppressive factors, anti-apoptotic pathways, chronic inflammation, metabolic damage, and immunosuppressive cells such as regulatory T (Treg) cells, tumor-associated M2 macrophages (TAMs), and myeloid-derived suppressor cells (MDSCs) (Yu et al., 2021). Recent studies have shown that T and NK cell dysfunction and the depletion or deficiency of antitumor-specific effector cells are involved in LUAD immune escape (Hong et al., 2019); although the exact mechanism is unclear, this points to new directions for the study of immune escape in LUAD and provides new targets for immunotherapy. LUAD is usually resistant to chemotherapy and/or radiotherapy, which leads to the development of distant metastases. NK cell dysfunction and failure in patients with LUAD could be caused by immune escape mechanisms mediated by lung cancer cells or the tumor microenvironment, leading to failure of immunotherapy. This is related to tumor upregulation of inhibitory ligands (e.g., HLA-C molecules) and their recognition by autoinhibitory KIR receptors carrying ITIM motifs (Daëron et al., 2008). Cellular experiments showed that other inhibitory receptors, for instance KLRG-1, LAG-3, CD94/NKG2A, TIM3, and TIGIT, and their ligands are also frequently upregulated on NK cells from LUAD patients (Lee et al., 1998; Nayyar et al., 2019), which is consistent with our study, in which NK cell-related gene expression differed significantly between subtypes. In addition to the common PD-L1 inhibitors (avelumab, atezolizumab, durvalumab) and PD-1 inhibitors (camrelizumab, spartalizumab, nivolumab, pembrolizumab), the CTLA-4 inhibitor ipilimumab has improved the clinical prognosis of patients with LUAD (Paulsen et al., 2017). Our study identified the expression patterns of PD-1/PD-L1 and CTLA-4 in the different subtypes, confirming a possible immune escape mechanism of NK cells in LUAD and providing a new perspective for blocking immune dysregulation. The tumor microenvironment (TME) consists of cancer-associated fibroblasts (CAFs), tumor cells, other immune cells, and endothelial cells (ECs) (Vitale et al., 2019). Ghiringhelli et al. showed that suppressive immune cells such as Treg cells, CTLA-4+ regulatory cells, N2 neutrophils, and M2 macrophages can disrupt the anti-lung cancer activity of NK cells (Domagala-Kulawik et al., 2014).
Similarly, our data showed significant differences in the proportions of NK cells, B cells, and T cells among the different molecular subtypes, suggesting that other immune cells with numerical and functional advantages may impair the cytotoxic and migratory activity of NK cells and thereby cause NK cell depletion (Bi and Tian, 2017). However, we found that activated NK cells did not differ between the high- and low-risk groups, possibly because of an insufficient sample size. Changes in NK cell counts, including in peripheral blood, the circulation, and the TME, relative to healthy individuals can be used as prognostic markers in patients with head and neck and lung tumors (Lin et al., 2017; Lin et al., 2020; Zhong et al., 2021). We constructed the prognostic model from NK cell-related genes (ANLN, FAM83A, RHOV, and PARP15), providing a powerful tool to assist clinical decision-making through effective prediction of patient survival and drug sensitivity. ANLN is an actin-binding protein, and previous studies have demonstrated that ANLN is associated with actin cytoskeleton dynamics (Xu et al., 2019). Xu et al. showed that ANLN overexpression promotes distant metastasis of lung cancer cells and is associated with epithelial-mesenchymal transition (EMT) in LUAD cells. Consistent with previous bioinformatic analyses, our study found that FAM83A was upregulated in LUAD tissues, which was related to LUAD prognosis (Suzuki et al., 2005; Deng et al., 2021). Knockdown of FAM83A inhibited the proliferation, migration, and invasion of LUAD cells. In addition, the lncRNA FAM83A-AS1 regulates FAM83A expression by acting as a competing endogenous RNA for miR-495-3p. These results suggest that FAM83A plays an oncogenic role in LUAD and that FAM83A-AS1 can regulate FAM83A expression by sponging miR-495-3p. Similar to FAM83A, the invasion, migration, and proliferation of LUAD cells can be stimulated by RHOV overexpression, whereas RHOV knockdown inhibits these malignant behaviors. In addition, RHOV knockdown inhibits metastasis and LUAD tumor growth in nude mice, which may be related to RHOV-mediated activation of the JNK/c-Jun signaling pathway (Zhang et al., 2021c). There are few basic studies on PARP15 in LUAD, although genomic data from large sample sets suggested that it is a useful marker for immunotherapy and survival in LUAD (Han et al., 2020). Together, these studies reveal a novel regulatory mechanism of NK cells in LUAD tumor development, which may provide new biomarkers and therapeutic targets for LUAD. Docetaxel, Vinorelbine, Paclitaxel, and Cisplatin are currently widely used chemotherapy drugs for lung cancer that cause cell cycle arrest (Clegg et al., 2001; Dasari and Tchounwou, 2014). However, resistance can develop, leading to further tumor progression, and side effects such as myelosuppression, drug-induced nephritis, nausea, vomiting, hearing loss, and polyneuropathy significantly reduce patients' quality of life (Dasari and Tchounwou, 2014). Acquired chemotherapy resistance is a major problem faced by clinicians and a major cause of treatment failure. Regardless of the type of resistance, loss of tumor sensitivity to a drug leaves very little time to adjust therapy with the goal of improving patient survival. Patients' clinical outcomes can be significantly improved by personalizing treatment regimens and predicting the effects of drug therapy. The results of this study showed that patients in the C1 subtype and the high-risk group were more sensitive to the four chemotherapy drugs and may therefore benefit from them.
We speculate that the number of NK cells may affect drug sensitivity. Although this study reveals the immune signature of NK cell-related genes in LUAD and confirms their role in the prognosis and immunotherapy of LUAD, the following limitations remain: (1) the wide variety and rapid development of bioinformatics tools can help predict potential key molecules and pathways, narrowing the scope and improving the efficiency of the study, but the final findings should be validated with real genetic data in basic and clinical settings; (2) the databases used for functional and signaling pathway enrichment analysis contain comprehensive and complete data, but their slow updates may have some unpredictable effects on the results; (3) the results were based on extrapolation from the raw signal algorithms and should be supported by further laboratory and clinical evidence.

Conclusion

Based on NK cell-related genes, we identified three stable molecular subtypes of LUAD that differed significantly in terms of immunity, pathways, prognosis, and drug sensitivity. Based on the same NK cell-related genes, this study also developed a prognostic model that was highly robust and has considerable potential for application in predicting immunotherapeutic response and patient prognosis.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Materials; further inquiries can be directed to the corresponding author.

Author contributions

All authors contributed to this work: DZ designed the study; YZ acquired the data; YZ drafted the manuscript; DZ revised the manuscript. All authors read and approved the manuscript.

Funding

This study was supported by the Clinical Study on Diagnosis and Treatment of Peripheral Pulmonary Nodules by Bronchoscopic Navigation and Thoracic Wall Navigation (No. S2023-YF-YBSF-0407).

Conflict of interest

Author YZ was employed by Yuce Biotechnology Co., Ltd. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
The FAM3C locus that encodes interleukin-like EMT inducer (ILEI) is frequently co-amplified in MET-amplified cancers and contributes to invasiveness

Background: Gene amplification of MET, which encodes for the receptor tyrosine kinase c-MET, occurs in a variety of human cancers. High c-MET levels often correlate with poor cancer prognosis. Interleukin-like EMT inducer (ILEI) is also overexpressed in many cancers and is associated with metastasis and poor survival. The gene for ILEI, FAM3C, is located close to MET on chromosome 7q31 in an amplification "hotspot", but it is unclear whether FAM3C amplification contributes to elevated ILEI expression in cancer. In this study we have investigated FAM3C copy number gain in different cancers and its potential connection to MET amplifications.

Methods: FAM3C and MET copy numbers were investigated in various cancer samples and 200 cancer cell lines. Copy numbers of the two genes were correlated with mRNA levels, with relapse-free survival in lung cancer patient samples, as well as with clinicopathological parameters in primary samples from 49 advanced stage colorectal cancer patients. ILEI knock-down and c-MET inhibition effects on proliferation and invasiveness of five cancer cell lines and growth of xenograft tumors in mice were then investigated.

Results: FAM3C was amplified in strict association with MET amplification in several human cancers and cancer cell lines. Increased FAM3C and MET copy numbers were tightly linked and correlated with increased gene expression and poor survival in human lung cancer and with extramural invasion in colorectal carcinoma. Stable ILEI shRNA knock-down did not influence proliferation or sensitivity towards c-MET-inhibitor induced proliferation arrest in cancer cells, but impaired both c-MET-independent and -dependent cancer cell invasion. c-MET inhibition reduced ILEI secretion, and shRNA-mediated ILEI knock-down prevented c-MET-signaling induced elevated expression and secretion of matrix metalloproteinase (MMP)-2 and MMP-9. Combination of ILEI knock-down and c-MET inhibition significantly reduced the invasive outgrowth of NCI-H441 and NCI-H1993 lung tumor xenografts by inhibiting proliferation, MMP expression and E-cadherin membrane localization.

Conclusions: These novel findings suggest that MET amplifications are often in reality MET-FAM3C co-amplifications with tight functional cooperation. Therefore, the clinical relevance of this frequent cancer amplification hotspot, so far dedicated purely to c-MET function, should be re-evaluated to include ILEI as a target in the therapy of c-MET-amplified human carcinomas.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13046-021-01862-5.

healing [9]. In cancer, EMT activities might be switched on transiently and reversibly to convert adherent epithelial tumor cells into motile and invasive mesenchymal cells [10]. In murine and human cellular models of breast, hepatocellular carcinoma, and lung cancer ILEI is required and sufficient to induce EMT and invasion in vitro and metastasis in vivo [6,11,12]. ILEI is overexpressed in several human tumors and shows altered subcellular localization, which is related to changes in the secretion levels of the protein [13]. ILEI localization strongly correlates with metastasis formation and survival in human breast and hepatocellular carcinomas [6,11,12]. In colorectal cancer, upregulation of ILEI protein expression correlates with EMT and poor prognosis [14].
However, whether increased expression of ILEI in some cancers is associated with increased CN of the FAM3C locus is unclear. The proximity of FAM3C to the MET hotspot suggests that the two genes may be often co-amplified. The aim of this study was to investigate whether increased FAM3C CN was evident in various cancers and reveal whether there was a relationship with MET amplification. Our results suggested a strong correlation between the copy number of the two genes. Further in vitro mechanistic investigations and xenograft experiments with combined counteraction of ILEI and c-MET activities suggest that c-MET and ILEI cooperate to increase the invasiveness of cancer cells. 26) and NIH3T3 (CRL-1658) cell lines were obtained from ATCC (http://www. lgcstandards-atcc.org/en.aspx) or DSMZ (http://www. dsmz.de/), tested for mycoplasma infection on a regular basis using a commercial biochemical test (Lonza) and authenticated using STR profiling. All cells were cultured in Dulbecco's Modified Eagle's Medium/Nutrient Mixture F-12 Ham, 1:1 mixture supplemented with 10% fetal calf serum (FCS), or in case of MKN45 with 20% FCS. Upon serum withdrawal, FCS was replaced by 0.1% bovine serum albumin. Animals Eight-twelve week-old female severe combined immunodeficiency disease (SCID) mice (Envigo, Italy) were used for xenograft experiments in this study. All animal work was done by following earlier protocols ethically approved by the Institutional Animal Care and Use Committee of the Medical University of Vienna and by the Austrian Bundesministerium für Bildung, Wissenschaft und Forschung (BMWFW-66.009/0081-WF/V/3b/2015). Human tumor samples Formalin fixed paraffin embedded primary tumor samples of advanced stage colorectal carcinoma were obtained from stored samples from de-identified patients treated in Kecskemet General Hospital, Hungary, who had previously provided informed consent for their use in clinical research. Genomic DNA isolation and qPCR-based determination of gene CNs from formalin fixed paraffin embedded human tumor samples Genomic DNA was isolated from non-stromal regions of 3-4 10 μm thick sections of formalin fixed paraffinembedded tumors of 49 advanced-stage colorectal carcinoma patients using the Gentra Puregene Tissue Kit (Qiagen) according the manufacturer's instructions. 60 ng of isolated genomic DNA was used as template in the quantitative real-time PCR reaction. All samples were done in triplicates and the MET and FAM3C copy numbers were derived by standardizing the input DNA to the control signal (TOP3A, chromosome 17p11) as described earlier [15]. The sequences of the primer pairs and probes for TOP3A and MET were as described in [16] using FAM as flourogenic label. For FAM3C the primers Hsp.FAM3C_F 5′-GTCACACTCTTGTGCCAG TCT-3′ and Hsp.FAM3C_R 5′-GAGCAAAGGTCAGG GTTGAAAG-3′ were used with the HEX-labeled probe Hsp.FAM3C_probe 5′-TCTGCAGCTTCAAATCCC CTCCTG-3′ allowing duplex PCR with the TOP3A control gene. Copy number (CN) data of 200 tumor cell lines were generated using the GeneChip Human Mapping 250 K Nsp Arrays (Affymetrix) and subsequently analyzed on the Affymetrix® Genotyping Console™ software (GTC) using the unpaired CNAT 4.0 analysis algotrythms. Gene copy number ± 3 was considered as amplification, < 3 as non-amplified. 
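For orientation, the ΔCt-based readout behind the duplex copy-number assay can be expressed as a small helper function; this is a simplified sketch that assumes a diploid TOP3A reference locus and uses invented Ct values, and is not the exact derivation of refs. [15, 16].

```r
## Relative copy number from a duplex qPCR (target vs. TOP3A control locus),
## assuming two copies of the control locus per genome; Ct values would be
## the means of triplicate reactions. Numbers below are purely illustrative.
copy_number <- function(ct_target, ct_control, control_copies = 2) {
  control_copies * 2 ^ (ct_control - ct_target)
}

copy_number(ct_target = 25.0, ct_control = 26.0)  # 4 copies -> called amplified (CN >= 3)
copy_number(ct_target = 26.0, ct_control = 26.0)  # 2 copies -> non-amplified
```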
Generation of stable cell lines expressing ILEI shRNA vectors For ILEI and mammalian non-targeting control shRNA knock-down in NCI-H1993, NCI-H441, MKN45, OE33 and SKBR3 cells, MISSION shRNA lentiviral transduction particles (Sigma, St Louis, MS, USA) were used according to the manufacturer's instructions. Five shRNA sequences were pretested for ILEI knock-down (sh261 CCGGGATGCAAGTTTAGGAAATCTACTCGAGTAG ATTTCCTAAACTTGCATCTTTTTG, sh328 CCGG CCAGATATAAGTGTGGGATCTCTCGAGAGATCCC ACACTTATATCTGGTTTTTG, sh506 CCGGAGGAGA AGTATTAGACACTAACTCGAGTTAGTGTCTAATA CTTCTCCTTTTTTG, sh579 CCGGGCCATACAAG ATGGAACAATACTCGAGTATTGTTCCATCTTGTA TGGCTTTTTG and sh1767 CCGGCCTGTGTTTATC TAACTTCATCTCGAGATGAAGTTAGATAAACACA GGTTTTTG) and two were selected (sh261 and sh506) as the most efficient for later studies. In studies with only one shILEI cell line, "shILEI" indicates sh506. Stable cell lines were established using selection for puromycin resistance of transduced cells. ILEI expression was validated in whole cell lysates and conditioned medium (CM) by Western blotting. H-thymidine incorporation assay Cells were pretreated with different concentrations of crizotinib for 24 h and seeded in triplicates in 96 well plates in the presence of the same inhibitor concentrations. After 24 h of incubation, cells were labeled with 30 μCi/ml methyl 3 H-thymidine for 2 h. Radioactive media was removed, cells were washed in phosphatebuffered saline (PBS) and trypsinized. Cells were fixed by Tomtec cell harvester (Tomtec Inc., USA) onto a waxembedded filtermat, and radioactive intensity was determined by a Wallac 1450 MicroBeta liquid scintillator (PerkinElmer Inc., USA). Results were normalized according to cell number. Trans-well invasion assay Cells were pre-starved overnight in starvation or low (1%) FCS medium and seeded in the same medium into transwell invasion chambers with 8 μm pore size, coated with Matrigel (Corning Inc., USA) and pre-equilibrated with medium (50.000 cells/inlet, each condition in triplicates). The lower chamber of the trans-well unit contained conditioned medium of NIH3T3 cells or human HGF (40 ng/ ml) as attractant. For c-MET inhibition, medium was supplemented with 500 nM crizotinib both in the upper and lower chambers. Cells were allowed to invade for 24 h, non-invaded cells were removed from the upper side of the inlets, and cells were fixed and stained with 4′,6-diamidino-2-phenylindole (DAPI). Total numbers of invasive cells were counted using fluorescent microscopy imaging followed by ImageJ analysis. RNA isolation and real-time quantitative PCR analysis Total RNA was extracted, reverse transcribed and cDNA was amplified with primers for the genes MMP9, MMP2 and CDH1 as described earlier [17]. Gelatin zymography For MMP-9 and MMP-2 detection in the CM of NCI-H441 and NCI-H1993 shCont and shILEI cells upon HGF stimulus with or without crizotinib treatment cells were plated in 6-well plates. At 70-80% confluency, cells were crizotinib-(500 nM) or DMSO-treated for 30 min followed by medium change to FBS-free media with continued crizotinib supply and the addition of human HGF (40 ng/ml). 24-h CM were collected and concentrated as described for Western blot analysis and all samples were adjusted to the same protein concentration followed by equal volume gel loading. 
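The RT-qPCR data in this study are reported as fold changes; a generic 2^-ΔΔCt helper is sketched below. The reference gene and calibrator are assumptions (GAPDH, which is used to normalize the tumor qPCR data later in the paper, and the untreated shCont sample), and the Ct values are invented for illustration.

```r
## 2^-ddCt fold change for a gene of interest (e.g. MMP9) relative to a
## reference gene (assumed GAPDH) and a calibrator sample (assumed untreated shCont).
fold_change <- function(ct_gene, ct_ref, ct_gene_cal, ct_ref_cal) {
  d_ct     <- ct_gene - ct_ref          # normalize the treated sample to the reference gene
  d_ct_cal <- ct_gene_cal - ct_ref_cal  # normalize the calibrator sample
  2 ^ -(d_ct - d_ct_cal)                # 2^-ddCt
}

fold_change(ct_gene = 27.5, ct_ref = 18.2, ct_gene_cal = 29.0, ct_ref_cal = 18.4)
## ~2.5-fold induction relative to the untreated control
```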
For MMP-9 and MMP-2 detection in protein extracts, NCI-H441 and NCI-H1993 shCont and shILEI snapfrozen tumor pieces with or without crizotinib treatment were homogenized in lysis buffer (see Western blot analysis for composition), total protein concentration was determined and equal protein amounts were loaded. Experimental mouse xenografts Mouse xenografts were established with subcutaneous injection of 1.7 × 10 6 control (shCont) or ILEI KD (sh506) NCI-H1993 and NCI-H441 cells into 8-12-week old female SCID mice (n = 4). Injected mice were distributed into randomized cohorts and vehicle and compound treatment started as the mean tumor size reached 100 mm 3 . Crizotinib (LC Laboratories) was applied orally (50 mg/kg, dissolved in 5% DMSO, 10% ethanol and 10% Cremophor) in a 5-days treatment 2-days pause protocol. For NCI-H1993 tumors, at the time point of sacrifice of vehicle-treated mice, crizotinib treated animals were monitored for an additional 11 days without further supply of the compound. Tumors were measured regularly by a caliper and tumor volume was calculated by the formula a x b 2 / 2 (a for the major and b for the minor tumor diameter). The tumors were dissected, and tumor mass was determined 40-50 days after injection. Statistical analysis Data are expressed as the mean ± standard error of the mean (SEM) where applicable. Data normality was checked using Shapiro-Wilk and Kolmogorov-Smirnov tests. Statistical significance was determined by unpaired two-sided Student's t-test, one-way and two-way analysis of variance (ANOVA) tests and Kruskal-Wallis tests followed by Dunn's multiple testing adjustment using Graph Prism software (version 5.0). Survival differences were calculated using log rank tests. p < 0.05 was considered significant. Kendall's tau-b tests and Chi-square tests were calculated using R software (version 3.2.1). Alluvial plots were generated using Caleydo 3.0 software [18]. Results Gene amplification of FAM3C and MET is tightly linked in several human carcinomas and correlates with increased gene expression and poor prognosis To determine the frequency of CN amplification of the FAM3C and MET genes, we investigated a variety of datasets of different cancer entities from the TCGA database. Connections in gene function and physical location of the two genes were uncoupled by analysis of two additional RTKs with dedicated driver functions in the progression of many malignancies: Epidermal growth factor receptor (EGFR), located on the p arm of the same chromosome and fibroblast growth factor receptor 1 (FGFR1) located on chromosome 8. Of 501 lung squamous cell carcinoma (LUSC) cases, 8 (1.9%) indicated MET, out of these 6 (1.4%) also FAM3C CN amplifications (Fig. 1a). Of 516 lung adenocarcinoma (LUAD) cases, 18 (3.5%) had MET and 9 (1.7%) FAM3C CN amplification, 8 deriving from the MET amplification group (Fig. 1d). 1480 hepatocellular carcinoma (LIHC) cases had in 11 (0.7%) and 6 (0.4%) cases with amplification for the MET and FAM3C genes, respectively, in 5 cases with a shared amplification for both, 615 colorectal adenocarcinoma (COADREAD) cases showed CN amplification in 1 case (0.2%) shared for both MET and FAM3C and 1080 breast cancer (BRCA) cases had CN amplification in 10 cases (0.9%) for both MET and FAM3C, 7 sharing amplification for both (Supplemental Fig.S1A). Correlation analyses showed that the FAM3C gene CNs were tightly correlated with CNs of MET but not with the distant EGFR or unlinked FGFR1 genes in all analyzed data cohorts for LUSC (Fig. 
1a), LUAD (Fig. 1d), LIHC, COADREAD, and BRCA (Supplemental Fig.S1A), indicating that co-amplification might be a consequence of chromosomal proximity. To test if genomic amplification influenced gene expression, MET and FAM3C gene CNs were then compared to mRNA levels and relapse-free survival of LUSC and LUAD patients from the TCGA database. The analysis showed that CNs of both genes significantly correlated with gene expression levels, patients with strong MET and/or FAM3C amplification showing the highest expression of these genes ( Fig. 1b and e). Patients with MET and/or FAM3C amplification also had a significantly worse survival compared to the pooled cohort of patients with deletions, normal, or slight gain in the CN of the two gene loci, albeit low case number and early loss on patient follow-up prevented a proper analysis on the survival of LUSC FAM3C amplified patients ( Fig. 1c and f). These data indicate that the FAM3C and MET genes are frequently co-amplified in human cancers contributing to increased gene expression and poor survival. We then tested the linkage of the two genes on genomic DNA isolated from formalin fixed paraffin-embedded tumors of 49 advanced-stage colorectal carcinoma patients by qPCR. Over 72% (24/33) of the FAM3C-and/or MET-amplified tumors showed a co-amplification of the two genes, further supporting the tight linkage of these loci (Supplemental Fig.S1B). In addition, cluster and correlation analysis of FAM3C amplification with available clinicopathological parameters elucidated a significant enrichment of FAM3C amplification in patients with extramural venous invasion (EMVI) (Supplemental Fig.S1B, C). EMVI, the spreading of cancer cells into the nearby blood vessels, is an invasive characteristic connected to worse prognosis. Although ILEI has not been linked so far to EMVI, our finding is in accordance with the described function of ILEI in inducing EMT and invasion and reflects that FAM3C amplification might affect gene function resulting in a clinically worse outcome. So, these results support the database analysis showing frequent co-amplification of MET and FAM3C and the likelihood of poor survival rates in patients with increased CNs of these genes. Increased CNs of FAM3C and MET are tightly linked and frequently present in multiple human cancer cell lines To evaluate if our findings on FAM3C-MET coamplification in human primary tumors can be b, e mRNA expression levels of MET and FAM3C copy number calls defined in A and D. Statistical significance was determined by one-way analysis of variance (ANOVA) followed by Dunn's multiple comparison adjustment and marked with asterisks (*p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001). c, f Kaplan-Meier plot on relapse-free survival of LUSCC (c) and LUAD (f) patients without (DD_SD-Dipl_Gain) and with (Amp) FAM3C and MET amplification recapitulated in cultured human cancer cell lines, FAM3C and MET gene CNs were determined in a panel of 200 human cancer cell lines of diverse tissue origins by microarrays and analysed using the CNAT 4.0 analysis algotrythms in the GTC analysis (Fig. 2a). Increased CN, using a cut-off of CN 3 or higher, of both genes was present in cell lines of all tumor entities at a frequency of 47% on average. The frequency varied in the different cancer types, breast and lung cancer had the lowest (25 and 30%) and melanomas the highest (76%) (Fig. 2a, left panel). 
Importantly, over 90% of the cell lines with an increased CN for at least one of the genes showed an increase for both loci (Fig. 2a, right panel), confirming that the amplification event of the two genes is tightly coupled and indicating that cancer cell lines representatively illustrate in vitro the FAM3C-MET coamplification characteristics of primary tumors. Of the 85 cell lines with increased CN for both FAM3C and MET, we selected five for detailed investigation (Fig. 2b). These were a pair of gastrointestinal cancer cell lines (MKN45 and OE33), a pair of lung adenocarcinoma cancer cell lines (NCI-H1993 and NCI-H441), and a breast cancer cell line (SKBR3). One cell line in each of the gastrointestinal and lung adenocarcinoma pairs had previously been described as sensitive and the other as resistant to the c-MET inhibitor PHA665752 [19]. Both these pairs of cell lines expressed high levels of ILEI and c-MET (Fig. 2c) as compared to control samples from non-metastatic MCF7 and the metastatic MDA-MB-231 human breast cancer cell line lacking FAM3C and MET amplifications but with upregulated ILEI expression as a characteristics of metastatic capacity [4]. These cell lines with high expression levels also secreted ILEI into the CM during culture. In SKBR3 cells, c-MET expression is absent despite locus amplification due to epigenetic silencing [5]. Accordingly, this cell line did not express c-MET and interestingly, though not silenced, ILEI expression was also only moderate and secretion almost absent despite increased gene CN (Fig. 2c). Stable ILEI knock-down does not influence proliferation capacity and sensitivity towards c-MET-inhibitor induced proliferation arrest Since there are no specific pharmacological inhibitors for ILEI available, we mimicked ILEI inhibition by RNA interference-mediated (RNAi) stable knock-down (KD) of the protein expression. Both intracellular and secreted [19]. c Western blot analysis of ILEI secretion into conditioned media (CM) and expression within the cells and MET and Erk expression and activity in above five selected cell lines. MCF-7 and MDA-MB-231 human breast cancer cell lines were used as normal FAM3C CN controls with low and high ILEI expression ILEI protein levels showed an apparent reduction by two independent shRNAs in all five cell lines, which was most evident in the levels of the functionally relevant secreted form in the CM (Fig. 3a). For cMET blockade, we used crizotinib, a small-molecule tyrosine kinase inhibitor that efficiently inhibits c-MET, anaplastic lymphoma kinase 5 (ALK5) and ROS1 and is approved by the FDA for treatment of ALK-rearranged NSCLC [4]. As none of the selected cell lines including the two lung adenocarcinoma lines NCI-H1993 and NCI-H441 harbored an ALK rearrangement, inhibitor effects were expected to occur primarily due to c-MET inhibition. Since SKBR3 cells did not express c-MET despite of a MET and FAM3C amplification, they served as a control to monitor the potential influence of ALK5 and ROS1 targeting effects of crizotinib. First, we analyzed the dose-dependency of crizotinib on the proliferation capacity of the five selected cancer cell lines. The OE33 cell line that had previously shown resistance to another small molecule c-MET inhibitor displayed sensitivity to increasing concentrations of crizotinib in proliferation capacity (Fig. 3b), comparable to the sensitive cell lines NCI-H1993 and MKN45. 
NCI-H441 and SKBR3 tolerated high doses of the drug without remarkable drop in their proliferation rate or viability (Fig. 3b), confirming their described resistance towards MET inhibitors [19]. ILEI KD did not influence the proliferation behavior of the selected five cell lines (Fig. 3c). Furthermore, ILEI KD also did not influence the sensitivity of these cells towards crizotinib-induced growth arrest (Fig. 3c), indicating that ILEI does not affect proliferation and does not influence c-METdependent regulation of proliferation. To address any concerns of the polypharmacological action of crizotinib, which inhibits other targets such as ALK5, we also investigated the action of two additional c-MET inhibitors: PHA665752 and savolitinib. The results showed similar effects on cell viability as crizotinib (Supplemental Fig.S2). ILEI KD impairs both c-MET-independent and c-METdependent invasion of cancer cells with FAM3C and MET CN gains Next, we investigated invasiveness and its sensitivity to c-MET and ILEI signaling inhibition in the five selected cancer cell lines. First, we tested c-MET-independent invasive capacity in an in vitro trans-well invasion assay by using NIH3T3 CM as chemoattractant, as murine HGF produced by these cells does not cross-activate the human c-MET receptor [20,21]. ILEI KD strongly impaired the invasiveness of all five cancer cell lines, whereas invasion capacity was not influenced by crizotinib treatment (Fig. 4a). This supported the view that ILEI signaling induces invasiveness. To test the influence of ILEI on c-MET induced invasion, the same assay was performed this time using human HGF as chemoattractant (Fig. 4b). As expected, crizotinib efficiently inhibited HGF-induced invasion in the four c-MET-expressing cell lines. Importantly, ILEI KD also significantly impaired HGF-induced invasiveness in all c-MET-expressing cells, indicating that ILEI might be a contributing factor in c-MET-driven cellular invasion. ILEI KD derivatives of NCI-H441 and OE33, even showed an increased sensitivity towards crizotinib with invasion almost completely eliminated, suggesting that ILEI depletion might have an additive inhibitory effect to crizotinib in these cells. In summary, stable ILEI KD efficiently reduced both c-MET-dependent and c-MET-independent invasion in all tested cells. While ILEI does not influence c-MET signaling activity, c-MET acts on ILEI signaling activity by regulating ILEI secretion As c-MET and ILEI interact during c-MET-dependent invasion, we tested the possibility of the two signaling pathways being linked by investigating the effect of crizotinib and ILEI KD on c-MET signaling. Crizotinib efficiently inhibited c-MET autophosphorylation in all four c-MET-expressing cell lines (Fig. 5a). Activation of Erk, an important downstream effector of c-MET, was also significantly reduced upon drug treatment in these cells, whereas it remained unaltered in SKBR3 cells, which do not express c-MET (Fig. 5a). Knock-down of ILEI did not have an influence on the expression and activation levels of c-MET and Erk (Fig. 5a). Similarly, c-MET inhibition did not influence ILEI expression levels in the tested cell lines (Fig. 5a), indicating that c-MET and ILEI expression is not cross-regulated. Importantly, however, crizotinib decreased the secretion of ILEI in all c-Met expressing cell lines, but not in SKBR3 cells (Fig. 5a). Similar results were also found with the c-MET specific inhibitor savolitinib (Supplemental Fig.S3A). 
This suggests that c-MET may positively regulate ILEI secretion. Elevated expression and secretion of MMPs upon c-MET activation depends on ILEI and the two pathways cooperate in E-cadherin repression To uncover potential mechanisms by which c-MET and ILEI might cooperate to increase invasiveness, we next investigated markers of invasion. MMPs remodel the extracellular matrix (ECM) and are activated during invasion to ease the movement of cancer cells [22]. So, we explored the mRNA expression levels of two prominent MMPs, MMP-2 and 9 in response to HGF in the five cell lines. Of note, none of the five cells showed expression of both of these MMPs; NCI-H441, MKN45 and OE33 expressed only MMP-9, whereas NCI-H1993 and . 5b) and [17]. Secondly, in NCI-H441, MKN45, and OE33 cells HGF induced expression of MMP-9 mRNA and this expression was inhibited by crizotinib supporting the view that c-MET-dependent invasion activates MMP-9 (Fig. 5b) and [23,24]. Even more importantly, however, in ILEI KD derivatives of these cells HGF treatment was not able to elevate MMP-9 mRNA expression (Fig. 5b). This result suggests that c-METmediated increased expression of MMP-9 mRNA during invasion is dependent on ILEI. A similar pattern was seen in NCI-H1993 cells for MMP-2 mRNA expression, but not in SKBR3 cells, where c-MET is not expressed and hence, HGF did not increase MMP-2 mRNA levels ( Fig. 5b). As MMPs act on ECM, their activity is dependent upon secretion, so we analyzed the effect of c-MET and ILEI signaling inhibition on MMP secretion. For easier detection of inhibitory effects, high baseline secretion was ensured by HGF trigger. The secretion of MMP-9 from NCI-H441 cells decreased slightly with crizotinib treatment or ILEI KD, and the combination of the two lead to a significant reduction (Figs. 5c, d). In NCI-H1993 cells ILEI KD was sufficient for remarkable reduction of MMP-2 secretion (Fig. 5c, d). The specificity of these results to c-MET inhibition was also tested with savolitinib with similar results (Supplemental Fig.S3b). Overall, these results suggest that both c-MET and ILEI contribute to efficient secretion of MMPs in a cooperative and partially complimentary manner, c-Met most probably acting indirectly, via regulating ILEI secretion. Another important marker of invasion and EMT status is the loss or reduction of the cell adhesion molecule Ecadherin [25]. Therefore, we checked potential changes in the levels of E-cadherin mRNA (CDH1) and protein upon crizotinib treatment and ILEI KD in each of the cell lines used in this study, and found high variance according to the cell type (Fig. 5e, f). On the one hand, in the crizotinib-resistant NCI-H441 and SKBR3 cells no significant differences in CDH1 mRNA expression levels were observed (Fig. 5e). On the other hand, NCI-H1993, MKN45 and OE33 cell lines showed a significant c-MET and ILEI mediated regulation of CDH1 transcription. All three cell lines showed increase in CDH1 mRNA levels in ILEI KD derived cells or with crizotinib, and combination of these conditions resulted in more stable or superior effects. Cells tested with savolitinib showed similar results (Supplemental Fig.S3c). E-cadherin protein levels showed a similar trend of differences as of transcription (Fig. 5f). This once again suggests a cooperation between c-MET and ILEI in the regulation of E-cadherin transcription, however, these data also point out that cancer cells show very different sensitivity towards this regulation. 
Combined ILEI KD and crizotinib treatment significantly reduced the outgrowth of NCI-H441 and NCI-H1993 tumor xenografts To assess the in vivo relevance of the above findings, we next investigated the growth of tumor xenografts induced by NCI-H441 and NCI-H1993 cells and their ILEI KD derivatives in the presence and absence of crizotinib treatment in a mouse model. The original rationale was to compare the effect of ILEI KD on the growth capacity of crizotinib-sensitive vs crizotinib resistant tumors. Because we were expecting a very low response of the NCI-H441 cell line towards crizotinib we did not plan a withdrawal step for that cell line, while the NCI-H1993 cell induced xenografts were evaluated for crizotinib withdrawal for an additional 11 days. However, in accordance with some earlier findings [26], we observed that inhibition of c-MET with crizotinib reduced the growth and tumor mass of both of the xenografts (Fig. 6a, b, j, k), indicating that crizotinib was able to (See figure on previous page.) Fig. 5 HGF-induced expression and secretion of MMPs requires ILEI, efficient ILEI secretion requires c-MET signaling. a Western blot analysis of ILEI secretion and expression, and c-MET and Erk activity and expression in the five selected cell lines (parental) and their control (shCont) and ILEI KD (sh261 and sh506) derivatives after crizotinib (500 nM) treatment for 24 h. b qPCR analysis of MMP-9 (for NCI-H441, MKN45 and OE33) and MMP-2 (for NCI-H1993 and SKBR3) mRNA expression in control (shCont) and ILEI KD (shILEI) cells after 24 h of HGF treatment (40 ng/ml) in the absence or presence of crizotinib (500 nM). Data are normalized as fold change to untreated control cells. Error bars represent SEM of three independent experiments. Statistical significance was determined by one-way ANOVA. c Secretion of MMP-9 and MMP-2 by control (shCont) and ILEI KD (shILEI) NCI-H441 and NCI-H1993 cells treated with HGF (40 ng/ml) for 24 h in the absence or presence of crizotinib (500 nM) determined by gelatin zymography from harvested conditioned medium. The three lanes of each treatment group represent samples of three independent assays. Recombinant pro-MMP-9 was used as assay control. d Quantification of the gelatin zymography gels shown in C. Relative differences in secreted MMP-9 and MMP-2 levels were determined by ImageJ analysis and normalized to HGF treatment-induced control cells. Error bars represent SEM of three independent experiments. Statistical significance was determined by Student's t-test. e qPCR analysis of E-cadherin mRNA expression (CDH1) in NCI-H1993, NCI-H441, MKN45, OE33 and SKBR3 control (shCont) and ILEI KD (shILEI) cells treated or non-treated with crizotinib (500 nM) for 24 h. Data are normalized as fold change to untreated control cells. Error bars represent SEM of three independent experiments. Statistical significance was determined by one-way ANOVA and marked with asterisks (*p < 0.05; **p < 0.01). f Representative Western blot analysis of E-cadherin expression in the control (shCont) and ILEI KD (shILEI) derivatives of the five selected cell lines after crizotinib (500 nM) treatment for 24 h counteract the resistance of NCI-H441 cells most probably via non-cell intrinsic mechanisms not addressed here. Importantly for this study, ILEI KD also slowed the growth of both xenografts. Tumor growth was most efficiently reduced when ILEI KD was combined with crizotinib (Fig. 6a, j). 
The tumor mass was significantly decreased in cells with ILEI KD compared to those with ILEI expression and was lowest in cells with ILEI KD and crizotinib combined (Fig. 6b, k). Immunohistochemistry for ILEI (Supplemental Fig. S4a, g), c-MET (Supplemental Fig. S4b, h), and phospho-cMET (Supplemental Fig. S4c, i) confirmed significant ILEI KD, unaltered c-MET expression upon ILEI KD and inhibitor treatment, as well as efficient inhibition of c-MET activation by crizotinib, respectively. The latter was also quantified as percentage of phospho-c-MET positive tumor cells (Supplemental Fig. S4d, j). It is notable that decrease in c-MET activation was no more evident in the NCI-H1993 derived tumors due to crizotinib withdrawal over the last 11 days of the experiment (Supplemental Fig. S4j). To investigate the reasons for smaller tumors resulting from crizotinib treatment and ILEI KD we investigated the proliferation and apoptosis of the tumor cells by quantifying the percentage of Ki67 and activated Cas-pase3 positive cells on tissue sections, respectively. In NCI-H441 xenografts, crizotinib treatment significantly reduced the proliferation of cells, but ILEI KD alone did not influence this parameter (Fig. 6c) [26]. So, this result supported the cell-based assays and showed that the cells were behaving in a similar manner in vivo to the in vitro analysis. In NCI-H1993 xenografts, the graph shows the recovery of the tumor cells after crizotinib treatment was halted on day 25 and observed for an additional 11 days. This indicated that after a period of drug withdrawal, proliferation was apparently no longer affected by the previous crizotinib treatment (Fig. 6l). Apoptosis was also decreased in NCI-H441 xenografts treated with crizotinib and in ILEI KD derived cells (Fig. 6d). In NCI-H1993 xenografts that had a withdrawal period from the crizotinib treatment phase there was no apparent significant difference in apoptosis with crizotinib or ILEI KD (Fig. 6m). So, the smaller tumors are likely to be due to the decreased proliferation upon crizotinib treatment rather than an increase in apoptotic cell death. A higher level of apoptosis in larger tumors without c-MET inhibition or ILEI KD may be indicative of the fast turnover of cells in these rapidly proliferating tumors that was also manifested in a highly ulcerated appearance. We also addressed if c-MET and ILEI had a consequence on tumor vascularization by determining blood vessel density and size on CD31 immunostained tumor sections. The vessel density remained constant between tumors (Supplemental Fig. S4e, k), and though NCI-H441 tumors showed decreased vessel size upon crizotinib treatment, it was less evident in the tumors with ILEI KD and not evident in any of the NCI-H1993 tumors (Supplemental Fig. S4f, l). These data indicate that decreased tumor size upon combined ILEI KD and c-MET inhibition is not primarily due to a switch in vascularization capabilities. Combined ILEI KD and crizotinib treatment decreased MMP expression in NCI-H441 and NCI-H1993 tumor xenografts To investigate whether the relationship between MMPs, c-MET, and ILEI seen in the cell lines was also evident in vivo, the expression of MMP-9 and MMP-2 from the tumors was investigated. 
In line with the results from gelatin zymography, the expression of MMP-9 protein in NCI-H441 tumors decreased slightly with crizotinib inhibition and even more when the cells also had ILEI KD, while MMP-2 expression, which became detectable only under in vivo conditions, was significantly decreased in ILEI KD tumors (Fig. 6e, f). At the mRNA level, there was a significant decrease of MMP-9 in tumors from ILEI KD cells (Fig. 6g), and quantification of MMP-9 immunohistochemistry in tumor sections showed a similar trend, with a significant difference between the shCont tumors and those with ILEI KD and crizotinib in combination (Fig. 6h, i). NCI-H1993 tumors also expressed lower levels of MMP-2 protein upon ILEI KD (Fig. 6n, o) and this result was supported at the mRNA level, though without significance (Fig. 6p). Overall, these results show that both c-MET and ILEI cooperate for efficient MMP expression during growth of tumor xenografts and support the results from the cell-based assays.
Fig. 6 Combined ILEI KD and crizotinib treatment reduces tumor growth by inhibiting proliferation and MMP expression. a, j Fold growth ± SEM of NCI-H441 (a) and NCI-H1993 (j) control (shCont) and ILEI KD (shILEI) tumors upon vehicle or crizotinib (Crizo) treatment, normalized to size at treatment start. Crizotinib-treated NCI-H1993 tumors were allowed to grow for an additional 11 days after treatment termination. b, k Tumor masses ± SEM of NCI-H441 (b) and of NCI-H1993 (k) shCont and shILEI tumors of vehicle- or Crizo-treated mice. c, l Percentage of Ki67-positive tumor cells ± SEM of NCI-H441 (c) and NCI-H1993 (l) shCont and shILEI tumors of vehicle- or Crizo-treated mice. d, m Percentage of activated caspase 3 (actCasp3)-positive tumor cells ± SEM of NCI-H441 (d) and NCI-H1993 (m) shCont and shILEI tumors of vehicle- or crizotinib-treated mice. e, n Gelatin zymography of protein extracts of NCI-H441 (e) and NCI-H1993 (n) shCont and shILEI tumors of vehicle- or crizotinib-treated mice (n = 3 per group). Recombinant pro-MMP-9, assay control. f, o Quantification of the gels in panels e (f) and n (o). Pro- (filled color) and activated (patterned color) MMP-9 and MMP-2 levels ± SEM were normalized to the respective total MMP levels of vehicle-treated control tumors. Statistics compare total MMP-9 and MMP-2 levels. g, p mRNA expression levels of MMP9 in NCI-H441 (g) and of MMP2 in NCI-H1993 (p) shCont and shILEI tumors of vehicle- or crizotinib-treated mice (n = 3 per group). Expression was normalized to GAPDH and shown as fold change ± SEM over vehicle-treated control tumors. h Representative images of MMP-9 IHC on NCI-H441 shCont and shILEI tumor sections of vehicle- or crizotinib-treated mice. Arrowheads mark intracellular granular MMP-9 localization. Scale bar, 100 μm. i Percentage of MMP-9-positive tumor cells ± SEM in NCI-H441 shCont and shILEI tumors of vehicle- or crizotinib-treated mice. Statistical significance was determined by two-way ANOVA (a, j), Student's t-test (a, j) and one-way ANOVA (b, c, d, f, g, i, k, l, m, o, p) and is marked with asterisks (*p < 0.05; **p < 0.01; ***p < 0.001).
Combined ILEI KD and crizotinib treatment increased E-cadherin membrane localization
To further compare tumor invasiveness and EMT status, E-cadherin-mediated cell-cell adhesion was investigated.
In both NCI-H441 and NCI-H1993 xenografts, immunohistochemistry of tumor sections showed a slight increase of E-cadherin at the membranes of tumors treated with crizotinib and of those derived from ILEI KD cells, and this became significant when the two were in combination (Fig. 7a, b, f, g). Similar to their in vitro behavior, neither of the two xenografts showed regulation of E-cadherin expression at the mRNA level under the different conditions (Fig. 7c, h), nor did E-cadherin levels in tumor protein extracts show a uniform trend of regulation (Fig. 7d, e, i, j). Therefore, these results suggest that ILEI and c-MET mainly cooperate to reduce E-cadherin protein localization at the membrane to decrease cell-cell adhesion and increase the potential for invasion.
Discussion
The aim of this study was to investigate whether the FAM3C CN contributes to elevated ILEI expression in cancer and its potential relationship to MET. The results show a close correlation between FAM3C and MET CNs and that cancers with high CN had higher gene expression levels of both ILEI and c-MET, which was also related to poorer outcome. Investigation of the mechanisms involved suggests that there is a cooperation between ILEI and c-MET signaling during cancer invasion, as summarized in the model in Fig. 8a. During c-MET-dependent invasion, as seen in some previous studies, MMP secretion was increased [23,24] and E-cadherin levels at the cell membranes were decreased [27,28]. This study showed that both these processes were supported by ILEI expression and that c-MET also increased the secretion of ILEI. The secretion of active ILEI requires mobilizing its intracellular protein pool in a urokinase-type plasminogen activator receptor (uPAR)-dependent manner [13]. So, we suggest that i) c-MET might be involved in that process and ii) regulatory functions of c-MET on invasion might work indirectly via regulating ILEI secretion and thus ILEI signaling activity. c-MET signaling was not cross-regulated by ILEI, and ILEI did not have an influence on c-MET-dependent proliferation in cancer cell lines, showing that the interplay between these two signaling pathways on proliferation, invasion, and overall tumor growth acts rather in a complementary manner, as shown in Fig. 8b. Although the MET gene undergoes many different types of mutation to become oncogenic, amplification of the MET locus has been reported in a variety of human cancers [4]. Amplified MET CN has been shown to negatively influence patient survival in many cancer types including esophageal squamous cell carcinoma [29], NSCLC [30], clear-cell renal cell carcinoma [31], and ovarian carcinoma [32]. However, this association with poor prognosis is not always evident with high c-MET protein expression [32,33], highlighting the complexities of the MET amplicon's involvement in cancer. This observation also suggests that additional genes of the amplified region might contribute to poor patient survival. This study focused on one of MET's closest neighbors, FAM3C. For its gene product, ILEI, high levels of protein expression correlated with poor prognosis in colorectal cancer [14]. We found in this study that FAM3C frequently co-amplified with MET and that patients with MET and FAM3C amplification had poor prognosis.
The high efficiency of the combined inhibition of c-MET and ILEI function on the inhibition of invasion and tumor growth of cancer cells bearing MET and FAM3C amplifications found in this study supports the relevance of this co-amplification for clinical outcomes. Other neighboring genes in close proximity with possible co-amplification, e.g. Wnt family members and B-Raf, might have additional modulatory effects on c-MET and/or ILEI action, and future studies of them will be interesting to fully resolve all functionally important players of this amplification hotspot in cancer. A previous study about the variety of MET mutations in cancer used database analysis of 14,466 cancer cases and identified 186 cases with MET CN amplification [34]. This is around 1%, similar to the TCGA database analysis in our study, which showed CN amplification rates of 1.9% (8 in 501) for LUSC cases, 3.5% (18 in 516) for LUAD cases, 0.7% (11 of 1480) for LIHC cases, 0.2% (1 of 615) for COADREAD cases, and 0.9% (10 in 1080) for BRCA cases, with a total rate of 1.1% (48 in 4192). For further comparison, increased MET gene CN measured by fluorescence in situ hybridization was found in 1 to 4% of tumors from NSCLCs [30], 8.5% of lung sarcomatoid carcinomas [35], and 4.2% of colorectal cancer tissue samples [36]. However, these values are lower than the 47% rate in cancer cell lines and 67% in colon carcinoma patient samples that had CN amplification of MET and FAM3C. The main reason for the differences observed between the database, cell line and patient samples is the different cut-off points. We considered a CN of 3 or higher to be amplification in the qPCR and gene chip analyses of this study, while databases use higher cut-off points and a CN of 3 would be recorded as a gain rather than amplification. When the higher numbers of cases with gains in CN were added to the amplified CNs for the TCGA analysis, the overall rate of CN amplification increased to nearly 30% of the total cases, which is much closer to the rates for the cell lines and COAD patients. The differences in CN frequencies that remain between the different samples might result from various factors. As c-MET activation leads to increased proliferation and growth of cancer cells, it is possible that this type of growth factor receptor gene amplification will be of benefit during selection of stable cancer cell lines. This indicates that cancer cell lines might be more likely to gain CN amplifications. Furthermore, the 49 colorectal carcinoma cases in this study were all advanced-stage cancer patients at stage T3 and T4, with 55% already displaying lymph node and/or distant metastasis, which, as our data suggest, is associated with invasion and poorer prognosis and might result in an additional enrichment of MET and FAM3C CN. Our results are also in agreement with the conclusion of the above-mentioned database analysis that MET CNs in general show a wide variation among cancer types, albeit relative frequencies for different tumor entities show differences between the studies. All these data show that MET amplification has broad relevance in many cancers, and our study indicates that FAM3C co-amplification may play a comparably important role in all these different cancer types.
Fig. 7 Combined ILEI KD and crizotinib treatment increases E-cadherin membrane localization in tumor xenografts. a, f Representative images of E-cadherin IHC on NCI-H441 (a) and NCI-H1993 (f) shCont and shILEI tumor sections of vehicle- or crizotinib-treated mice. Scale bar, 100 μm. b, g E-cadherin membrane score of NCI-H441 (b) and NCI-H1993 (g) shCont and shILEI tumors of vehicle- or crizotinib-treated mice. Error bars represent SEM. Statistical significance was determined by one-way ANOVA and marked with asterisks (*p < 0.05; **p < 0.01). c, h qPCR analysis of E-cadherin mRNA expression (CDH1) in NCI-H441 (c) and NCI-H1993 (h) shCont and shILEI tumors of vehicle- or crizotinib-treated mice (n = 3 per group). Relative expression was normalized to GAPDH and shown as fold change to vehicle-treated control tumors. Error bars represent SEM. Statistical significance was determined by one-way ANOVA and marked with asterisks (*p < 0.05). d, i Western blot analysis of E-cadherin protein expression in NCI-H441 (d) and NCI-H1993 (i) shCont and shILEI tumors of vehicle- or crizotinib-treated mice (n = 3 per group). e, j Quantification of E-cadherin protein expression from the Western blot analyses shown in panels d and i. Expression was normalized to the actin loading control and shown as fold expression relative to vehicle-treated shCont tumors. Error bars represent SEM. Statistical significance was determined by one-way ANOVA and marked with asterisks (*p < 0.05).
As the alternative name of c-MET indicates, Hepatocyte Growth Factor Receptor (HGFR) has an important role in liver development and regeneration [37], thus predestining the gene for a pivotal oncogenic driver function in liver cancer. Indeed, aberrant c-MET function, including gene amplification, is frequent in HCC [38]. However, the above statistics show that it is no less frequent in other epithelial cancer types, and MET amplification is one of the frequent acquired alterations (5-20% of patients) upon resistance towards EGFR tyrosine kinase inhibitor (TKI) therapies in lung cancer [39]. Thus, our mechanistic studies on NSCLC cell lines with MET amplification are of direct clinical relevance to many cancer types. c-MET signaling involves many different processes and significant crosstalk with other signaling pathways. For example, there is an interaction between c-MET signaling and the vascular endothelial growth factor (VEGF) and VEGF receptor (VEGFR) pathways [40]. Interactions between c-MET and human epidermal growth factor receptor (HER) family members allow tumor progression and treatment resistance, and cooperative signaling between c-MET and HER2 might be a mechanism by which c-MET promotes cancer progression [41]. Interestingly, in breast cancer cells the ILEI-uPAR score, which is indicative of the potential for active secreted ILEI, was shown to be significantly correlated with the HER2 status of the tumor cells [13], suggesting that these three signaling pathways may cooperate to increase invasiveness. In addition, uPAR-bound activated uPA is required for the proteolytic maturation of the c-MET ligand HGF [4]. At the same time, active uPA is needed for the activation of plasminogen, the protease responsible for the maturation of ILEI, and activation of the uPA-uPAR system also has a key role in triggering ILEI secretion [13]. This shared use of the same proteolytic cascade by both signaling pathways for activation also indicates a strong positive regulatory connection. Because of the importance of c-MET in many cancer types, inhibitors of c-MET are in clinical trials as cancer treatment, but a significant percentage of tumors acquire resistance to these treatments [4,19].
This may in part be due to the ability of c-MET to crosstalk and interact with alternative RTKs such as VEGFR and HER2 [40,41]. Combination treatments may help address this problem, and the results of this study suggest that ILEI might be a potential target for these treatments, as highlighted by the xenograft experiments showing that tumor growth was most strongly inhibited by combined crizotinib and ILEI KD. ILEI is less well understood than c-MET, but its role in cancer progression is starting to emerge. ILEI translation appears to be stimulated by transforming growth factor beta (TGF-β) and silenced by heterogeneous nuclear ribonucleoprotein E1 (hnRNP E1). Part of the signaling pathway, at least in breast cancer cells, involves the leukemia inhibitory factor receptor (LIFR) and signal transducer and activator of transcription 3 (STAT3) signaling [42]. The active form of ILEI requires proteolysis and is self-dimerized [13,17,43]. However, until this study it was unclear whether over-expression of ILEI could result from CN amplification. Once we had established that MET and FAM3C were often co-amplified and that this led to combined overexpression of c-MET and ILEI, it was important to investigate whether they interact during cancer progression. For this, we used five cancer cell lines that had shown co-amplified MET and FAM3C. For c-MET inhibition, we used crizotinib, and ILEI KD was used to mimic ILEI inhibition. Somewhat different from the published data on the MET inhibitor PHA665752 [19], OE33 cells showed high sensitivity to increasing concentrations of crizotinib in terms of proliferation capacity and were also sensitive to both PHA665752 and savolitinib in this study. The other cell lines showed in vitro sensitivity (NCI-H1993 and MKN45) or resistance (NCI-H441 and SKBR3) towards crizotinib, as expected from previous publications. ILEI KD did not influence the proliferation of the cell lines, suggesting that ILEI is not involved in proliferation of cancer cells or in c-MET-regulated proliferation. However, ILEI KD strongly impaired the invasiveness of all five cancer cell lines. This was in accordance with earlier findings [6] and further supported the view that ILEI signaling induces invasiveness in different types of cancer [44]. Importantly, the impairment by ILEI KD of HGF-induced, hence c-MET-dependent, invasion suggests that c-MET-driven invasion depends on ILEI. In addition, the importance of ILEI in c-MET-independent invasion suggests that ILEI has a broad influence on invasion and does not rely exclusively on c-MET as a secretion trigger. In terms of invasion, an important step is ECM degradation, which allows tumor dissemination. MMPs are implicated in this process because they mediate the constant remodeling of the ECM [22]. Our results suggested that ILEI was required for HGF-induced expression and secretion of MMP-9 and MMP-2 through c-MET signaling. Although the expression levels of ILEI were not influenced by crizotinib treatment and ILEI did not affect activation of c-MET signaling, there was an apparent decrease in ILEI secretion upon c-MET inhibition. This suggests c-MET may regulate MMPs indirectly via ILEI secretion. So, the independent c-MET and ILEI processes show a vital interplay and cooperation to support invasiveness. The results of the cell-based studies were then further investigated in xenografts in mice. Our expectation was that the growth of crizotinib-sensitive tumors would be slower than that of crizotinib-resistant tumors when c-MET was inhibited.
However, crizotinib reduced the growth and tumor mass of xenografts induced by both sensitive (NCI-H1993) and resistant (NCI-H441) cell types. This is in accordance with some earlier findings [26] and suggested that crizotinib was able to counteract the resistance of NCI-H441 cells to c-MET inhibition, most probably via non-cell-intrinsic mechanisms that will require further investigation. This result meant that we then focused on ILEI and found that both cell types showed reduced growth of tumor xenografts upon ILEI KD and a superior reduction when ILEI KD was combined with crizotinib treatment, supporting the interplay between the two signaling pathways. Combination of ILEI KD and crizotinib also resulted in lower expression of MMPs, in a similar way to the cell-based studies. MMP-2 and MMP-9 can degrade components of the ECM such as type IV collagen to release tension and allow growth and invasion of tumors and, as such, are implicated in the late stages of cancer [45]. During EMT, cell-cell junctions begin to disassemble [9]. The best characterized alteration at this point involves the loss of E-cadherin, a key cell-to-cell adhesion molecule [25]. E-cadherin helps to assemble and maintain epithelial cell sheets through adherens junctions. Therefore, increased expression of E-cadherin acts as an antagonist of invasion and metastasis [46]. When E-cadherin was investigated in the xenograft tumors, combined crizotinib and ILEI KD significantly increased E-cadherin at the membranes. This was less obvious at the mRNA and protein levels. Overall, our study showed high variance among tumor cells in how E-cadherin levels were regulated, indicating that it is a highly dynamic process with strong control at the transcription level, but also via internalization or altered stability in cancer cells [47]. Also, in some cancers, such as hepatocellular carcinoma, E-cadherin protein accumulation is prevented by mRNA retention in the nucleus [48].
Conclusions
The results of this study show that amplification of FAM3C CN can contribute to increased levels of ILEI expression in a wide range of cancer types. There was a close correlation between FAM3C and MET CNs in cancer patients, and those with high CNs had poorer outcomes. Investigation of the mechanisms involved showed interplay between the two separate ILEI and c-MET signaling pathways during cancer invasion, suggesting that MET amplifications are in reality MET-FAM3C co-amplifications with tight functional co-operation. In vivo investigation showed that ILEI knock-down and c-MET inhibition in combination significantly reduced the invasive outgrowth of lung tumor xenografts in mice, apparently by inhibiting proliferation and MMP expression and by increasing E-cadherin membrane localization. Therefore, including ILEI as a
2020-11-05T09:07:43.874Z
2020-11-04T00:00:00.000
{ "year": 2021, "sha1": "2dc3ff047ebf32f4e6931492a35940e913669e15", "oa_license": "CCBY", "oa_url": "https://jeccr.biomedcentral.com/track/pdf/10.1186/s13046-021-01862-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a4e9464feab74f30ca551c059dadce1fcdd8a95", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
267146178
pes2o/s2orc
v3-fos-license
Synthesis of carboxylated magnetite nanoparticles covalent conjugates with folic acid antibody FA-1 for lateral flow immunoassay: Magnetite nanoparticles (MNPs) are a highly suitable material for different bioassays because of their low toxicity both for cells and for mammals and the large variety of approaches to their surface functionalization. We have synthesized MNPs via a simple and convenient co-precipitation method with preliminary filtration of the FeCl2 and FeCl3 solution, under argon atmosphere and with non-magnetic stirring. MNPs were citrate-stabilized and then modified stage-by-stage with tetraethoxysilane (TEOS) and (3-Aminopropyl)triethoxysilane (APTES) and acylated with succinic anhydride, resulting in carboxylated MNPs. Carboxylated MNPs were covalently bound to folic acid antibody (FA-1) via 1-Ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC). MNP-EDC-FA-1 conjugates were passed through a test-stripe with a line containing folic acid-gelatin conjugate. The conjugation of MNP-EDC-FA-1 with folic acid was observed visually, and the magnetic signal distribution along the test-stripe was scanned with the magnetic particle quantification (MPQ) technique developed earlier. Visually, the line with folic acid-gelatin conjugate on the test-stripe turned dark, with color intensity strongly depending on MNP-EDC-FA-1 concentration. MPQ showed that the great majority of MNP-EDC-FA-1 was bound to the folic acid-gelatin conjugate. The MPQ technique allowed quantifying down to 5 ng of MNP-EDC-FA-1 in this experiment with the MNPs synthesized, with a strong peak at the folic acid-gelatin conjugate line.
Introduction
Magnetite nanoparticles (MNPs) are a highly suitable material for different bioassays [1] because of their low toxicity both for cells [2] and for mammals [3] and the large variety of approaches to their surface functionalization [4][5]. Superparamagnetic behavior enables the application of MNPs as magnetic labels both for cells [6] and for molecules [7]. The combination of the optical properties of MNPs in the visible range with their magnetic properties has laid the foundation for magnetometric lateral flow immunoassay on test-stripes for rapid and sensitive qualitative and quantitative analysis of different biomolecules, for which the magnetic particle quantification (MPQ) technique was developed earlier [8][9].
Methods
Figure 1. Synthesis of MNPs via the co-precipitation method.
We have synthesized MNPs via a simple and convenient co-precipitation method (Figure 1). Briefly, FeCl2 and FeCl3 were dissolved in degassed water in stoichiometric ratio and filtered in order to exclude hydroxy- and oxychlorides that may act as undesirable, large crystallization centers due to their low solubility. The synthesis of MNPs was carried out by adding NaOH in degassed water solution to the mixture of FeCl2 and FeCl3, with non-magnetic stirring in order to minimize the formation of non-spherical structures, under argon atmosphere to prevent the MNPs from oxidation. MNPs were washed and stabilized with sodium citrate. Citrate-stabilized MNPs (MNP-cit) were modified stage-by-stage with tetraethoxysilane (TEOS) and (3-Aminopropyl)triethoxysilane (APTES), resulting in aminated MNPs (MNP-NH2), and acylated with succinic anhydride, resulting in carboxylated MNPs (MNP-COOH). Carboxylated MNPs were covalently bound to folic acid antibody (FA-1) via 1-Ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) by incubation of carboxylated MNPs with EDC, washing them and subsequent incubation with FA-1. The MNP-EDC-FA-1 suspension was mixed with albumin buffer solution in order to block the unreacted EDC and to simulate blood serum media. Porous test-stripes with folic acid-gelatin conjugates were put into the MNP-EDC-FA-1 suspension and left for 15 min. Then, the test-stripes were scanned with the MPQ-scanner in order to measure the magnetic signal distribution along the test-stripes.
Figure 1. XRD of MNPs synthesized: MNP (Ar) - pristine MNPs synthesized in Ar atmosphere, MNP-cit (Ar) - citrate-stabilized MNPs synthesized in Ar atmosphere, MNP-cit (air) - citrate-stabilized MNPs synthesized in air atmosphere.
The suspension of synthesized nanoparticles was of dark black color. The magnetic response was strong. XRD (Figure 1) has shown that the synthesized nanoparticles consisted of pure magnetite, with a crystallite size of about 12 nm according to the Scherrer equation. Sodium citrate dihydrate peaks are observed in the diffractogram of MNP-cit synthesized in air atmosphere, which may indicate its excess on the MNP surface due to insufficient washing of the MNPs. Peaks corresponding to magnetite are better pronounced when MNPs are synthesized in Ar atmosphere.
Figure 2. Hydrodynamic radii of particles in MNP suspensions: MNP - pristine MNPs, MNP-cit - citrate-stabilized MNPs, MNP-NH2 - aminated MNPs, MNP-COOH - carboxylated MNPs.
Hydrodynamic radii (Figure 2) of pristine MNP agglomerates were about 380 nm and decreased to 136 nm after modification with sodium citrate. Carboxylation caused no significant change in MNP agglomerate size. The ζ-potential changed from neutral to -48±7 mV after modification with sodium citrate, which, together with the size decrease, indicates stabilization of the suspension; it turned to +25±7 mV after amination with APTES and to -25±10 mV after acylation, indicating that carboxylation was successful.
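The crystallite size of about 12 nm quoted above follows from the Scherrer equation, D = K·λ / (β·cos θ), where λ is the X-ray wavelength, β the peak broadening (FWHM, in radians) and θ the Bragg angle of the chosen reflection. The snippet below is only an illustrative sketch of that arithmetic: the Cu Kα wavelength, the (311) peak position and the FWHM value are assumed for the example (they are not reported in the text above), and instrumental broadening is ignored.

```python
import numpy as np

def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, shape_factor=0.9):
    """Estimate mean crystallite size D = K*lambda / (beta * cos(theta)) in nm."""
    beta = np.radians(fwhm_deg)            # peak FWHM converted to radians
    theta = np.radians(two_theta_deg / 2)  # Bragg angle (half of 2-theta)
    return shape_factor * wavelength_nm / (beta * np.cos(theta))

# Hypothetical input for the magnetite (311) reflection with Cu K-alpha radiation:
# lambda ~ 0.15406 nm, 2-theta ~ 35.5 deg, FWHM ~ 0.7 deg (assumed values).
print(f"{scherrer_size_nm(0.15406, 0.7, 35.5):.1f} nm")  # ~11.9 nm with these assumptions
```

With an assumed FWHM near 0.7°, the estimate lands close to the reported ~12 nm; a real analysis would first subtract instrumental broadening from the measured FWHM before applying the formula.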
2024-01-24T18:17:23.736Z
2023-10-20T00:00:00.000
{ "year": 2023, "sha1": "0cd487d9dc7fbb2d2008cd4b672006b436438b70", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2673-4591/48/1/66/pdf?version=1705315446", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "e2f37625213340e35fdf746f2a694430f4910662", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
212778275
pes2o/s2orc
v3-fos-license
The Course of Chinese Culture’s New Life Since the May 4th Movement: From Cultural Awakening to Cultural Self-Confidence The May 4th Movement is the beginning of Chinese culture. Summarizing the development of Chinese culture since the May 4th Movement has important practical significance for enhancing the cultural confidence of the Chinese people and the Chinese nation. Before the May 4th Movement, the changes in Chinese culture were mostly cultural movements led by a few enlighteners, top-down, and lacking a strong leadership core. As a result, they failed to lead Chinese culture to prosperity. After the May 4th Movement, history and people handed over the baton of Chinese cultural development to the Chinese Communist Party. Chinese culture has embarked on the path of cultural renewal led by the Communist Party of China, guided by Marxism and centered on the people, and has achieved the overall awakening of the people. Socialism with Chinese characteristics has entered a new era. Chinese culture has gradually moved toward cultural self-confidence. This has united the consensus for the Chinese dream of realizing the great rejuvenation of the Chinese nation and provided China's wisdom and China's plan for the development of world civilization. The great banner of socialism with Chinese characteristics based on Chinese culture is elevated in the world. Keywords—comprehensive awakening; cultural selfconfidence; the May 4th Movement; course of new life I. INTRODUCTION This year marks the 100th anniversary of the May 4th Movement and the 70th anniversary of the founding of the People's Republic of China. Summarizing the process of the Chinese Communist Party leading the Chinese people to realize the new life of the national culture is of great practical significance to enhancing the self-confidence of the Chinese people and the Chinese nation, uniting the social consensus, accelerating the building of a socialist modern country, and realizing the Chinese nation's great rejuvenation. II. THE ATTEMPT OF CULTURAL CHANGE BEFORE THE MAY 4TH MOVEMENT: ATTEMPTS BY A FEW ENLIGHTENED PEOPLE The changes in Chinese culture before the May 4th Movement were mostly cultural movements led by a few enlightened people, top-down, and lacking a strong leadership core. After the changes in science and technology, political culture, and ethical culture, Chinese culture was still not prosperous at the time. This shows that Chinese culture cannot be reborn without changing the nature of China's semi-colonial and semi-feudal society and not mobilizing the broad masses of the people. In the face of serious social problems caused by the Opium War, a small number of people of insight in the feudal intellectuals began to face up to and learn about science and technology. Lin Zexu is the first person in modern China to open his eyes to the world. When he banned opium in Guangzhou, he organized people to translate Western books and magazines, and introduced the geographical and historical conditions of other countries in the world in detail, making "opening the world" an unstoppable cultural trend. Since then, Wei Yuan has further proposed the idea of "learn from foreigners to compete with them" in "Records and Maps of the World", and advocated studying foreign advanced science and technology in order to achieve the goal of enriching the country and resisting external abuse. 
The "learn from foreigners to compete with them" has realized the ideological leap from "observing foreign countries" to "learning foreign advanced technology" and laid the ideological foundation for the Westernization Movement. The Westernization Group actively promoted modern industry and put forward the idea of "westernized Chinese style" in culture. That is, the Confucian ethical culture is the "subject" and the western science and technology culture is the "method" to achieve the goal of "selfimprovement" and "seeking wealth" and continuing to maintain feudal rule. The fiasco of the Sino-Japanese War declared the Westernization Movement, which was based on the "westernized Chinese style", went bankrupt. In the late Qing Dynasty, intellectuals attempted to maintain the purpose of feudal rule through the transformation of science and technology and determine the destiny of its inevitable defeat. Because the new productive forces are incompatible with the feudal production relations, it is impossible to develop under the shackles of feudalism. The failure of the Sino-Japanese War of 1894 made the Chinese people realize that learning Western powers only on the material and technical level could not free China from the situation of backward beatings. It must carry out the "legal reform" at the institutional level, and the reform of Chinese political culture began. The Reform Movement of 1898 initiated by the bourgeois reformists represented by Kang Youwei and Liang Qichao hopes to adopt the "present item" of the imperial court, implement the reform and reform, and take the path of "Yingdiwang". Kang Youwei launched the "Gongche Shangshu movement" in conjunction with the examinations in Beijing to express the necessity and urgency of the reform. Although the reformists also set up books and set up schools, these activities are basically limited to the small circles of bureaucrats and intellectuals. Because of the fear of the strength of the masses of the people, the inability to mobilize the masses as a guarantee of power, and the inherent weakness of the bourgeois reformists, the Reform Movement of 1898 quickly failed. The failure of the Reform Movement of 1898 led some people to abandon their political reforms and embark on the path of changing the Chinese political culture with the social revolution. The 1911 Revolution led by Sun Yat-sen established the first bourgeois republican government, which enabled the concept of democratic republic to be deeply rooted in the hearts of the people and promoted political and cultural changes, due to the weakness and compromise of the bourgeoisie itself, it is impossible to propose a revolutionary program that is completely anti-imperialist and anti-feudal. Without the leadership of political parties armed with advanced theories, the bourgeois revolutionaries quickly handed over the power to the so-called "strong man" 1 who has both new ideas and old means, and naturally gave up the power to lead China's political and cultural changes. without the support of the broadest masses of the people, achieving national independence and cultural rejuvenation can only be a piece of empty talk. The Westernization Chinese style of the Westernization School fundamentally hopes to learn Western scientific and technological civilization to maintain feudal rule. The reformism of the bourgeois reformers is even more illusory to "reform" without touching the foundation of the feudal economy. 
The bourgeois revolutionaries did not propose a clear anti-imperialist slogan, hoping that compromise and concession would win the imperialists' support for the Chinese revolution. Different from the natural weakness and compromising character of the bourgeoisie, the Communist Party of China clearly proposed a democratic revolutionary program with anti-imperialism and anti-feudalism as its goals of struggle. The Second National Congress of the CCP set the defeat of the warlords, the overthrow of imperialist oppression, and the unification of China into a genuine democratic republic as the program for the current stage of the revolution, pointing out clear goals for the Chinese people. Under the leadership of the Communist Party of China, the Chinese people adhered to the combination of Marxism and China's reality, carried out a resolute anti-imperialist and anti-feudal struggle, and finally won the victory of the new-democratic revolution and changed China's semi-colonial and semi-feudal society. The May Fourth Movement promoted the establishment of the Communist Party of China, and the Chinese revolution has since had a strong and unified leadership. The absence of a strong political party to provide leadership was an important reason for the failure of the bourgeois democratic revolutionaries. In the Tung Meng Hui, which should have united all revolutionary forces, the internal organization was lax, factions were mixed, and there was no core leadership. Sun Yat-sen deplored that "the internal elements are divided and the steps are messy. There is no spirit of unity and self-government. The virtues of inheritance are not guaranteed. The party leader is equal to the shackles, and the party members are like scattered sand." 3 Unlike the bourgeois revolutionaries, the Chinese Communists were honed in a harsh and difficult environment of struggle, forging a spirit of steel, such as the spirit of the Red Boat, the spirit of Jinggangshan and the spirit of the Long March. General Secretary Xi Jinping pointed out: "The great spirit of the Long March is to regard the fundamental interests of the people of the whole country and the Chinese nation above all else, to hold firmly to the ideals and beliefs of the revolution, and to firmly believe in the justice of the cause. It is the spirit of not fearing any difficulties and obstacles in order to save the country and save the people, and of not hesitating to give up everything. It is the spirit of adhering to independence, seeking truth from facts, and proceeding from reality. It is the spirit of taking care of the overall situation, observing strict discipline, and maintaining close unity. It is the spirit of relying closely on the people, living and dying with them, sharing weal and woe, and struggling arduously." 4 The great spirit of the Long March is a concentrated expression of the firm revolutionary ideals of the Chinese Communists and a vivid reflection of the Chinese people's hard-working national spirit. The May 4th Movement promoted the combination of the Communists with the workers and peasants. The Chinese Communist Party has always relied on the masses and mobilized the masses to promote the overall awakening of the Chinese people and the Chinese nation. From the beginning of its founding, the Communist Party of China adhered to the route of encircling the cities from the countryside, focusing on rural areas and mobilizing the masses in rural areas.
The implementation of the comprehensive war of resistance is a concentrated expression of this thinking and a more comprehensive and thorough spiritual awakening conducted by the party leading the people throughout the country. The Kuomintang regime, which represents the interests of the big landlords and the big bourgeoisie, does not dare to rely on and mobilize the masses to implement a one-sided war of resistance. The Communist Party of China is convinced that only by mobilizing and relying on the masses and carrying out a protracted war can the final victory be achieved. Therefore, the Chinese Communist Party has focused its work on the countryside behind the enemy, actively opened up the enemy's back battlefield, and continued to follow the route of "surround the cities from the countryside". IV. NEW LIFE OF CHINESE CULTURE: TOWARDS CULTURAL SELF-CONFIDENCE IN PRACTICE EXPLORATION After the founding of New China, the Chinese Communist Party led the Chinese people to gradually embark on a road of socialist cultural construction with Chinese characteristics. Entering a new era, the Chinese people and the Chinese nation have gradually moved toward cultural self-confidence, and have reached consensus on the Chinese dream of realizing the great rejuvenation of the Chinese nation and provided China's wisdom and China's plan for the development of world civilization. The great banner of socialism with Chinese characteristics based on Chinese culture is elevated in the world. In the new era, cultural self-confidence is systematically developed. First, the basic connotation of cultural selfconfidence is clarified. Xi Jinping clearly pointed out in the "July 1" speech that "the Chinese traditional culture cultivated in the 5,000 years of civilization development, the revolutionary culture and the advanced socialist culture fostered in the great struggle of the party and the people, accumulate the deepest spirit pursuit of the Chinese nation, represent the unique spiritual identity of the Chinese nation." It can be seen that the connotation of cultural self-confidence includes traditional cultural self-confidence, revolutionary cultural self-confidence and socialist advanced culture selfconfidence. The establishment of cultural self-confidence has laid a solid foundation for the in-depth study of cultural selfconfidence. Second, the "four self-confidence" theory is put forward, emphasizing that cultural self-confidence is a more basic, broader and deeper self-confidence. Cultural selfconfidence provides deep cultural support for road selfconfidence, theoretical self-confidence and institutional selfconfidence. The road of socialism with Chinese characteristics is the choice of history and people. This choice itself is consistent with the value orientation of Chinese culture. The theoretical system of socialism with Chinese characteristics is a scientific theory based on the forefront of the times and advancing with the times. There is no precedent to follow, and strong cultural self-confidence is needed to provide theoretical strength. The socialist system with Chinese characteristics has distinctive Chinese characteristics. It does not copy the Western model, and does not succumb to self-restraint and self-respect. It requires a firm cultural confidence to take its own path. The "four self-confidences" complement each other, dialectical unity, and together constitute a complete cultural theory. The systematic presentation of cultural self-confidence has important practical value. 
Second, cultural self-confidence is to unite people's consensus. First of all, cultural self-confidence embodies the Chinese spirit and promotes the leadership of ideological work. Ideology is about flags and roads. Cultural self-confidence makes the whole people firmly unite in the ideals, beliefs, values and moral concepts, consolidates the guiding position of Marxism in the field of ideology, and firmly grasps the leadership of ideological work. Second, cultural selfconfidence embodies Chinese values and is conducive to nurturing and practicing socialist core values. "It is necessary to make clear the historical origins, development context, and basic direction of China's excellent traditional culture, and the unique creation, values, and distinctive characteristics of Chinese culture, and enhance cultural self-confidence and confidence in values." Culture is an important source of values. The core values of socialism are based on the fine traditional Chinese culture and revolutionary culture, and are cast into the advanced socialist culture. Therefore, cultural self-confidence is an important premise for cultivating and practicing the core values of socialism, and it helps them to play an important role in cohesiveness and maintaining national spirit. Third, cultural self-confidence unites China's strength and leads and promotes comprehensive deepening of reforms. The new era is an era in which the people of all nationalities in the country unite and struggle, constantly create a better life, and gradually realize the common prosperity of all the people. Finally, cultural self-confidence provides China's wisdom and China's program for the development of world civilization. The great banner of socialism with Chinese characteristics based on Chinese culture is lifted high in the world. With the advancement of economic globalization and multi-polarization of the world, on the one hand, the links between economy, politics and culture are increasingly close. Culture has become an important factor in the country's comprehensive national strength competition, and cultural exchanges between the international communities have become increasingly frequent. On the other hand, the world is in a period of great development, great change, great adjustment, instability, lack of certainty, and the international community urgently needs to build a more just and reasonable international system and order. Since the 18th National Congress, the party and state undertakings have undergone historic changes, and socialism with Chinese characteristics has entered a new era. China's development concept has been increasingly recognized. China has the confidence and ability to make greater contributions to the world. Based on the new international and domestic Advances in Social Science, Education and Humanities Research, volume 378 situation, General Secretary Xi Jinping proposed the idea of constructing a community with shared future for mankind. This provides China's wisdom and China's program for the development of world civilization. "In today's world, countries are interdependent and co-existing. People need to inherit and carry forward the purposes and principles of the UN Charter, build a new type of international relations with cooperation and win-win, and build a community with shared future for mankind." 
5 To build a community with a shared future for mankind, in terms of culture, people need to respect the diversity of world civilizations and to let exchange among civilizations transcend estrangement, mutual learning transcend clashes, and coexistence transcend any sense of superiority. Cultural differences should not be the root of conflict, but should be the driving force for the progress of human civilization.
V. CONCLUSION
It will be necessary to grasp the logic of Chinese cultural development in a historical and holistic manner, especially the development of Chinese culture since the May 4th Movement. People need to understand profoundly that the Party's leadership of the Chinese nation in achieving cultural renaissance is the choice of history and of the people, and that it is a correct choice that has stood the test of practice. This is of great significance to enhancing cultural self-confidence, maintaining political strength, and pushing the cause of socialism with Chinese characteristics to a new starting point.
2020-02-20T09:12:50.025Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "0eb7e096cafbc66e1fbf296131a659e2a66b42e7", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125934262.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "aed4104204a0431f5b13a8e1588a630ec877ef8d", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
49665980
pes2o/s2orc
v3-fos-license
A new species of Kurixalus from western Yunnan, China (Anura, Rhacophoridae) Abstract A new species of the genus Kurixalus (Anura: Rhacophoridae) is described from western Yunnan, China. Genetically the new species, Kurixalus yangi sp. n., is closer to Kurixalus naso than to other known congeners. Morphologically the new species is distinguished from all other known congeners by a combination of the following characters: smaller ratios of head, snout, limbs, IND, and UEW to body size; male body size larger than 30 mm; curved canthus rostralis; weak nuptial pad; brown dorsal color; absence of large dark spots on surface of upper-middle abdomen; presence of vomerine teeth; gold brown iris; single internal vocal sac; serrated dermal fringes along outer edge of limbs; granular throat and chest; rudimentary web between fingers; and presence of supernumerary tubercles and outer metacarpal tubercle. Introduction The genus Kurixalus Ye, Fei, & Dubois in Fei (1999) distributes widely in eastern India, Indochina, Sunda Islands, Philippine archipelago, montane forests of southern China, and adjacent continental islands, and cur rently contains 15 species (Frost 2018). Owing to its morphological conservativeness, the taxonomy and systematics of Kurixalus were once very confusing (Yu et al. 2017a). For instance, Kurixalus hainanus (Zhao, Wang, & Shi in Zhao et al. 2005) was once thought to be a synonym of Kurixalus odontotarsus (Ye & Fei in Ye et al. 1993) by some authors (e.g., Fei et al. 2010) or a synonym of Kurixalus bisacculus (Taylor, 1962) by Yu et al. (2010). On the basis of broad sampling, recently Yu et al. (2017a) suggested that K. hainanus is valid and revealed six lineages that might represent undescribed species in the genus Kurixalus, one of which occurs in western Yunnan, China and northern Myanmar and is genetically closer to Kurixalus naso (Annandale, 1912) than to other known congeners with a divergence of 6.18% estimated from COI sequences (clade C, Fig. 1). Here we further describe the lineage consisting of specimens from western Yunnan, China as a new species. Morphological comparisons demonstrate that the new species is distinctive from K. naso and other known congeners and therefore warrants taxonomic recognition. Materials and methods Sampling. Specimens were collected during fieldwork in Dehong Autonomous Prefecture, western Yunnan, China in June and July, 2014 (Fig. 2). They were euthanized with diethyl ether anesthesia and fixed by 90% ethanol before being stored in 70% ethanol. Liver tissues were preserved in 99% ethanol. Specimens were deposited at Kunming Institute of Zoology, Chinese Academy of Sciences. Morphology. Morphometric data were taken using digital calipers to the nearest 0.1 mm. Morphological terminology follows Fei (1999). 
Measurements include: SVL snout-vent length (from tip of snout to vent); HL head length (from tip of snout to rear of jaws); HW head width (width of head at its widest point); SL snout length (from tip of snout to anterior border of eye); IND internarial distance (distance between nares); IOD interorbital distance (minimum distance between upper eyelids); UEW upper eyelid width (maximum width of upper eyelid); ED eye diameter (diameter of exposed portion of eyeball); TD tympanum diameter (the greater of vertical or horizontal diameter of tympanum); DNE distance from nostril to eye (from posterior border of nostril to anterior border of eye); FLL forelimb length (distance from elbow to tip of third finger); THL thigh length (distance from vent to knee); TL tibia length (distance from knee to heel); FL foot length (distance from proximal end of inner metatarsal tubercle to tip of fourth toe); TFL length of foot and tarsus (distance from tibiotarsal joint to tip of fourth toe). A multivariate principal component analysis (PCA) was conducted using SPSS 17.0 (SPSS Inc.) based on a correlation matrix of size-standardized measurements (all measurements divided by SVL). Scatter plots of the scores of the first two factors of the PCA were used to examine the differences between the new species and K. naso. Additionally, the differences between the new species and its two congeners known from Yunnan, China (K. odontotarsus and K. hainanus) were also similarly examined based on morphometric data.
Results
Morphometric data of the new species and K. naso are summarized in Table 1. We retained the first two principal components, which accounted for 63.03% of the total variance and had eigenvalues above 2.0 (Table 2). PC 1, which accounted for 48.69% of the total variance, had loadings that were all positive except for TD, with the heaviest loadings on HL, SL, IND, UEW, FLL, THL, TL, and TFL (Table 2). Differentiation was found along the PC 1 axis between K. naso and the new species (Fig. 3). This result indicates that the new species differs from K. naso by a series of characters associated with the head and limbs, such as shorter HL, shorter SL, narrower IND, narrower UEW, shorter FLL, shorter THL, shorter TL, and shorter TFL. The second principal component (PC 2) accounted for 14.34% of the total variance and loaded heavily and positively on IOD and negatively on HW (Table 2), but no clear separation was observed along this axis between the new species and K. naso (Fig. 3). In addition, the new species can be separated from K. odontotarsus and K. hainanus by having a smaller ratio of head length to body size (Fig. 4). Diagnosis. The new tree frog species is assigned to the genus Kurixalus based on a combination of the following characters: tips of digits enlarged to discs, bearing circum-marginal grooves; small body size (SVL range of 31.6-34.7 mm in adult males; Table 1); finger webbing poorly developed and toe webbing moderately developed; serrated dermal fringes along outer edge of forearm and tarsus; an inverted triangular-shaped dark brown mark between eyes; dorsal brown ")(" saddle-shaped marking; and coarse dorsal and lateral surfaces with small, irregular tubercles (Nguyen et al. 2014a, Nguyen et al. 2014b, Yu et al. 2017b). Our previous molecular study placed the new species in Kurixalus with other known congeners (Yu et al. 2017a). Kurixalus yangi sp. n.
can be distinguished from its congeners by a combination of the following characters: male body size larger than 30 mm; smaller ratio of head length to body size; curved canthus rostralis; weak nuptial pads; brown dorsal color; absence of large dark spots on upper-middle abdomen; presence of vomerine teeth; gold brown iris; single internal vocal sac; serrated dermal fringes along outer edge of limbs; granular throat and chest; interorbital space longer than upper eyelid; rudimentary web between fingers; and presence of supernumerary tubercles and thenar tubercle. Description of holotype. A small rhacophorid; HL shorter than HW; snout pointed, no dermal prominence on tip, projecting beyond margin of lower jaw in ventral view; canthus rostralis blunt and curved; lore region oblique, slightly concave; nostril oval, slightly protuberant, closer to tip of snout than eye; IND slightly narrower than IOD; pineal spot absent; pupil oval, horizontal; tympanum distinct, rounded, slightly less than half ED; supratympanic fold distinct, curving from posterior edge of eye to insertion of arm; vomerine teeth in two oblique patches, touching inner front edges of oval choanae; tongue notched posteriorly; single internal vocal sac. Relative length of fingers is I < II < IV < III. Tips of all four fingers expanded into discs with circum-marginal and transverse ventral grooves; relative width of discs is I < II < IV < III; nuptial pad present on first finger; fingers weakly webbed at base; lateral fringes on free edges of all fingers; subarticular tubercles prominent and rounded, formula 1, 2, 2, 1; supernumerary tubercles present; two metacarpal tubercles present; series of white tubercles forming serrated fringe along outer edge of forearm. Heels overlapping when legs at right angle to body; relative length of toes is I < II < III < V < IV; tips of toes expanded into discs with circum-marginal and transverse ventral grooves; toe discs smaller than finger discs; relative size of discs is I < II < III < V < IV; webbing moderate on all toes, webbing formula is I1.5-2II1-2III1-2IV2-1V following Myers and Duellman (1982); subarticular tubercles prominent and rounded, formula 1, 1, 2, 3, 2; supernumerary tubercles present; inner metatarsal tubercle distinct, oval; outer metatarsal tubercle absent; series of tubercles forming serrated dermal fringe along outer edge of tarsus and fifth toe. Numerous small to large tubercles scattered on top of head, upper eyelids, dorsum, and flanks; patch of white tubercles below vent; white tubercles on tibiotarsal articulation; throat and chest finely granulated and abdomen coarsely granulated; dorsal surface of limbs smooth with tubercles and ventral surface of thighs granulated. Color of holotype in life. Iris golden brown; dorsal surface brown, mottled with green patches and a dark brown saddle-shaped mark on dorsum behind eye; a dark brown inverted triangular-shaped mark between eyes, posterior of which extends to and touches the saddle-shaped mark; lateral head and tympanic region brown, mottled with green patches below canthus and dark brown spots on edge of upper jaw; flank light yellow, mottled with green and brown patches; limbs dorsally brown with three clear dark brown bands, mottled with green; palm of hand light red; rear, anterior, and venter of thigh red; inner side of tarsus and foot red; chest and abdomen white, fringed with yellow and mottled with small brown spots; chin clouded with dark brown and mottled with yellow patches.
Color of holotype in preservative. In preservative, green, yellow, and red faded. Dorsal ground color brown, pattern same as in life. Flank white with brown patches; margin of lower jaw clouded with dark brown; chin, chest, and abdomen white with scattered brown spots; palm of hand dirty white; anterior, posterior, and venter of thigh dirty white, with many fine brown speckling scattered on venter of thigh; inner side of tarsus and foot dirty white. Variations. Because the holotype and paratypes of the new species are all male, sexual dimorphism could not be determined. IND is smaller than IOD in holotype and most paratypes, but IND is larger than IOD in paratype KIZ 14102913 (Table 1). In addition, IOD is larger than UEW in holotype and most paratypes, but IOD is smaller than UEW in paratypes KIZ 14102912 and KIZ 14102913 (Table 1). Additionally, color pattern of paratype KIZ 14102912 also differs from other specimens in that its chin has much less spotting. Distribution and natural history. The new species is known from border region with northern Myanmar in western Yunnan, China (Fig. 2) and northern Myanmar according to Yu et al. (2017a). At the type locality, the new species was found calling on leaves of bushes adjacent to a road at night (Fig. 8). Specimens from the other two sites were found calling on broad leaves at the edge of an evergreen forest. Tadpoles, eggs and females were not found. Comparisons. The new species, Kurixalus yangi sp. n., is genetically closer to K. naso than to other known members of Kurixalus according to our previous work (Yu et al. 2017a), but morphologically it can be separated from K. naso by having smaller ratios of head, snout, IND, UEW, and limbs divided by SVL (Table 2 and Fig. 3). The smaller IND and UEW ratios in the new species can be observed when comparing these distances with the IOD, which is generally larger in the new species but smaller in K. naso (Table 1). Currently, three Kurixalus species (K. odontotarsus, K. hainanus, and K. lenquanensis Yu, Wang, Hou, Rao, & Yang, 2017) are recognized in Yunnan, China (Yu et al. 2017a, Yu et al. 2017b). The new species differs from K. odontotarsus and K. hainanus by having smaller ratio of head length to body size and no large dark spots on abdomen (versus larger ratio of head length to body size and large dark spots on entire abdomen; Figs 4, 9) and from K. lenquanensis by larger body size (SVL of 31.6-34.7 mm in adult males), more pointed snout, and presence of green coloration on dorsal surface and lateral side of head and body (versus smaller body size [SVL of adult males less than 30 mm], somewhat rounded snout, and absence of green coloration on dorsum; Fig. 9, Yu et al. 2017b). The new species is distinguished from Kurixalus idiootocus (Kuramoto & Wang, 1987) by larger body size, absence of a pair of symmetrical large dark patches on chest, and single internal vocal sac (versus smaller body size [SVL of adult males less than 30 mm], presence of a pair of symmetrical large dark patches on chest, and single external vocal sac; Yu et al. 2017a); from Kurixalus berylliniris Wu, Huang, Tsai, Li, Jhang, & Wu, 2016 by gold brown irises, weak nuptial pads, and coarsely granular abdomen (versus emerald to light green irises, greatly expanded nuptial pads, and smooth abdomen; Wu et al. 
2016); from Kurixalus wangi Wu, Huang, Tsai, Li, Jhang, & Wu, 2016 by larger body size, weak nuptial pads, and presence of supernumerary tubercles on foot (versus smaller body size [SVL of 28.6-31.6 mm in adult males], greatly expanded nuptial pads, and absence of supernumerary tubercles on foot; Wu et al. 2016); and from Kurixalus eiffingeri (Boettger, 1895) by weak nuptial pads, oblique loreal region, and curved canthus rostralis (versus greatly expanded nuptial pads, vertical loreal region, and straight canthus rostralis; Wu et al. 2016).

Discussion

Species diversity of the genus Kurixalus seems to be underestimated, with at least five unnamed lineages in the K. odontotarsus species group, apart from the new species described here, remaining to be described according to our earlier work (Yu et al. 2017a; Fig. 1). Taxonomic confusion in the K. odontotarsus species group mainly involved K. bisacculus. Of the remaining five clades that might represent unnamed species, four (clades F, G, H, and K; Fig. 1) were placed in K. bisacculus (Stuart and Emmett 2006, Thy et al. 2010, Yu et al. 2010, Nguyen et al. 2014a, Nguyen et al. 2014b). Even K. hainanus (clade J) was considered a synonym of K. bisacculus (Yu et al. 2010). A reason for this situation is the relatively low divergence of 16S rRNA sequences between K. bisacculus and these clades, which resulted in these lineages being considered conspecific even though morphological differences exist between them (e.g., Yu et al. 2010). Another source of taxonomic confusion in the K. odontotarsus species group involves K. verrucosus, as specimens from northern Myanmar (Kurixalus sp5; Fig. 1) and Kurixalus naso from southern Tibet (clade A, Fig. 1) had been wrongly treated as K. verrucosus in previous molecular studies (Yu et al. 2010, Yu et al. 2013, Li et al. 2013, Nguyen et al. 2014a, Nguyen et al. 2014b) according to Yu et al. (2017a). Additionally, with the exceptions of those unnamed lineages revealed by Yu et al. (2017a), cryptic species likely also exist in Philippine populations of K. appendiculatus according to Gonzalez et al. (2014). In short, the combination of the two recent molecular studies based on broad sampling (Gonzalez et al. 2014, Yu et al. 2017a) has provided a relatively clear genetic framework for the taxonomy of Kurixalus, and more morphological studies will be necessary to verify the specific status of those lineages. Phylogenetically, the K. odontotarsus species group comprises two clades; one contains K. yangi sp. n., K. naso, and K. sp5, and the other contains the remaining species from Indochina and southern China (Fig. 1). Kurixalus yangi sp. n. is known from western Yunnan, China and northern Myanmar, K. naso is known from southern Tibet and northeastern India, and K. sp5 is known from northern Myanmar. This pattern suggests that frogs of Kurixalus might have colonized the Indian subcontinent from northern Indochina.
Lactational amenorrhoea among adolescent girls in low-income and middle-income countries: a systematic scoping review

Introduction Fertility levels among adolescents remain high in many settings. The objective of this paper was to review the available literature about postpartum and lactational amenorrhoea among adolescents in low-income and middle-income countries (LMICs). Methods We searched Medline, Embase, Global Health and CINAHL Plus databases using terms capturing adolescence and lactational or postpartum amenorrhoea. Inclusion criteria included publication date since 1990, data from LMICs, and topic related to lactational amenorrhoea as a postpartum family planning method or as an effect of (exclusive) breast feeding among adolescents. Thematic analysis and narrative synthesis were applied to summarise and interpret the findings. Results We screened 982 titles and abstracts, reviewed 75 full-text articles and included nine. Eight studies assessed data from a single country (three from India, two from Bangladesh, two from Turkey, one from Nigeria). One study using Demographic and Health Survey data included 37 different LMICs. The five studies measuring duration of postpartum or lactational amenorrhoea reported a wide range of durations across the contexts examined. Four studies (from Bangladesh, Nigeria and Turkey) examined outcomes related to the use of lactational amenorrhoea as a family planning method among adolescents. We did not find any studies assessing adolescents' knowledge of lactational amenorrhoea as a postpartum family planning method. Likewise, little is known about the effectiveness of lactational amenorrhoea method among adolescents using sufficiently large samples and follow-up time. Conclusion The available evidence on lactational amenorrhoea among adolescents in LMICs is scarce. Given the potential contribution of lactational amenorrhoea to prevention of short interpregnancy intervals among adolescents and young women, there is a need for a better understanding of the duration of lactational amenorrhoea, and the knowledge and effective use of lactational amenorrhoea method for family planning among adolescents in a wider range of LMIC settings.
INTRODUCTION Early pregnancy and parenthood are some of the most prominent challenges with which adolescents globally are dealing. Approximately 16 million girls aged 15 to 19 years and 2.5 million girls under 16 years give birth each year in low-income and middle-income countries (LMICs). [1][2][3] Despite the global decline in adolescent birth rate between 1990 (64.8 births per 1000 girls 15-19 years of age) and 2020 (42.5 births per 1000), 4 the number of adolescent pregnancies globally will continue to increase due to the size of the adolescent cohorts, with the greatest proportional increase in West and Central Africa and Eastern and Southern Africa. 5

Key questions
What is already known?
► Lactational amenorrhoea method (LAM) is an effective postpartum contraceptive method available to breastfeeding women and does not require a health provider or replenishment of contraceptive supplies.
► The duration of lactational amenorrhoea and the role of LAM as a family planning method among adolescents in low-income and middle-income countries (LMICs) are not known.
What are the new findings?
► We identified nine studies from LMICs, all of which were quantitative and observational.
► There was heterogeneity in the findings about the duration of lactational amenorrhoea among adolescents compared with older women across these settings.
► We identified evidence gaps surrounding adolescents' knowledge of LAM and transition from the use of LAM to other contraceptive methods.
What do the new findings imply?
► This study highlights the need for a better understanding of breastfeeding practices, barriers and enablers of LAM use among adolescents.
► There is a need for additional research in a wider range of settings and using qualitative research methods.

It is estimated that in 2020, 257 million women globally had an unmet need for modern contraception, and 218 million of them were in developing countries. 6 Accessibility and availability of contraceptives for adolescent girls, especially unmarried, in LMICs are even more problematic compared with older women of reproductive age. 7 Different barriers including stigma, social pressure, legal restrictions, provider biases and misinformation may prevent adolescents from obtaining contraceptives. 8 9 Additional barriers include interruptions in contraceptive supplies and lack of financial affordability. 10 A study estimated that 90% of the over 6 million annual unplanned pregnancies, either unwanted or mistimed, among adolescent girls in Sub-Saharan Africa, Latin America and the Caribbean, and South Central and Southeast Asia are due to non-use of a modern method of contraception.
11 In 2016, an estimated 38 million adolescents in developing regions wanted to avoid pregnancy, 23 million of them have an unmet need for modern contraception and are thus at elevated risk of unintended pregnancy. 11 Nearly one-fifth of young women in LMICs are estimated to become pregnant before age 18, and 2 million births occur to girls under age 15 annually in LMICs. 12 For example, median age at first childbirth among women 20-24 years was <20 years in all 16 Sub-Saharan African countries with a Demographic and Health Survey collected since 2010 and where this indicator is available. 13 While not all adolescent pregnancies are unintended, almost half of the 20 million pregnancies among adolescents in LMICs are. 14 Pregnancy and childbirth complications are the leading cause of death among girls 15 to 19 years old globally. [15][16][17] Compared with babies of women in their twenties, infants born to adolescents face a higher risk of preterm birth, which is among the leading causes of neonatal mortality and morbidity. 18 First-order births carry an increased risk of complications, 19 20 and in many LMICs, the majority of first-order births are to adolescent girls. 21 Existing limited research from LMICs shows that repeat teenage pregnancy or childbirth is common. 22 23 New evidence is emerging on the length of what constitutes a short interpregnancy interval and its effects on maternal, perinatal and child survival and health outcomes. 24 25 In LMICs, short interpregnancy intervals and other factors appear to play an important role in an increased risk of adverse outcomes among adolescent mothers with repeat pregnancies and their babies. [26][27][28] The concept of an ideal interpregnancy interval emerged from a report published by WHO in 2005 and, based on the best available evidence at that time, consensus was reached that an optimal interval was a minimum of 24 months, 29 consistent with the joint WHO and Unicef recommendation for women to breast feed for at least 2 years. 30 Immediately following childbirth, the inhibitory effect of oestrogen and progesterone levels of pregnancy decreases, with the resumption of regular ovulation at around 25 days after birth. 24 Consequently, all postpartum women are assumed to be protected from conception for 4 weeks following childbirth. The period of postpartum amenorrhoea can be prolonged by breast feeding (lactational amenorrhoea), which changes the level and rhythm of gonadotropin-releasing hormone (GnRH) secretion by sending neural signals to the mother's hypothalamus through stimulation of the nipple. GnRH influences the pituitary release of follicle-stimulating hormone and luteinising hormone, the hormones needed to stimulate and resume ovulation. 31 Lactational amenorrhoea method (LAM) is the name given to the informed use of breast feeding as a method of contraception. For lactational amenorrhoea to serve as an effective method of contraception, the woman must be exclusively or near exclusively breast feeding (at least 85% of infant feeding coming from breast feeding), 29 be within the first 6 months following childbirth, and remain amenorrhoeic. The typical use failure rate of LAM is 0.45%-7.5%. 32 In the first 6 months post partum, amenorrhoeic women have a very low cumulative chance of conception, even if they are not exclusively breast feeding, because a large fraction of first menstrual cycles in this period are anovulatory. 33 As the duration of post partum lengthens, the protective effect of amenorrhoea progressively weakens. 
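The three LAM criteria summarised above reduce to a simple decision rule. The sketch below is purely illustrative and is not drawn from any of the studies discussed in this review; the function name, its arguments and the way the 85% breastfeeding threshold is encoded are our own assumptions for the example.

```python
def meets_lam_criteria(months_postpartum: float,
                       breastmilk_fraction: float,
                       menses_returned: bool) -> bool:
    """Illustrative check of the three LAM criteria summarised above.

    breastmilk_fraction: share of infant feeds that are breast milk;
    exclusive or near-exclusive feeding is taken here as >= 0.85.
    """
    within_six_months = months_postpartum < 6
    near_exclusive = breastmilk_fraction >= 0.85
    amenorrhoeic = not menses_returned
    return within_six_months and near_exclusive and amenorrhoeic


# Example: 4 months post partum, fully breast feeding, no return of menses.
print(meets_lam_criteria(4, 1.0, False))  # True -> LAM can be relied on
print(meets_lam_criteria(7, 1.0, False))  # False -> another method is needed
```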
Nevertheless, among amenorrhoeic women, the level of risk of conception remains at 6% at 12 months post partum, 34 which is not substantively different from that of condoms or oral contraception under real-life conditions. Literature has shown that reliance on the absence of menses as an indicator that conception is unlikely is widespread. 35 However, most women do not associate breast feeding with a reduced risk of conception. 3 35 Beyond 6 months post partum, women might continue breast feeding and remain amenorrhoeic. While this period is included in calculations of the duration of lactational amenorrhoea, it is no longer considered to be an effective contraceptive method. Addressing the unmet need for family planning is of paramount importance to improve the lives of girls and young women, particularly in LMICs. Despite the significant implications that countries face if this issue remains neglected, adolescents' sexual and reproductive health has traditionally been overlooked. LAM is available to breastfeeding women, does not require a health provider or replenishment of contraceptive supplies, and is effective at preventing pregnancy (it is classified as a modern contraceptive method). 36 37 Therefore, it can play a role in efforts to address unintended repeat childbearing among adolescents, including pregnancies preceded by a short interpregnancy interval. However, little is known about the extent to which adolescents in LMICs know about this method, are aware of the criteria for its effective use and are using it. If adolescent girls face different barriers, a targeted approach to awareness-raising about LAM might be required in contrast to mothers from older age groups. In order to understand these issues better, a thorough search in the literature can help map current evidence. This scoping review was conducted to BMJ Global Health answer the following primary research question: What is the current state of evidence on knowledge about postpartum/lactational amenorrhoea among adolescents in LMICs? Objective The objective of this review is to systematically scope the published literature, to synthesise what is known about postpartum/lactational amenorrhoea among adolescents in LMICs and to identify existing gaps in available evidence. Search strategy Our review was guided by the standard principles of Arksey and O'Malley's framework and the PRISMA-ScR checklist (online supplemental material 1). 38 39 Arksey and O'Malley's approach can be described as an iterative process involving post hoc inclusion and exclusion criteria. According to this framework, there are five stages: (1) identifying the research question, (2) identifying relevant studies, (3) study selection, (4) charting the data and lastly (5) collating, summarising and reporting the results. The optional step of consultation exercise involving key stakeholders to validate findings was not found necessary in this study and was not performed. The protocol for this scoping review was not registered. We searched four databases, Medline, Embase, Global Health and CINAHL Plus, using a combination of search terms comprising the terms adolescents and lactational amenorrhoea (full electronic search strategy is presented in online supplemental material 2). After deduplication, titles and abstracts of identified references were all screened independently by two reviewers (MNSF and LB). 
Additional references were identified through hand searching the DHS programme publications site, 40 the website of the Population Council, 41 the WHO Reproductive Health Library 42 and reference lists of all articles reviewed in full text. Eligibility criteria We applied the following inclusion criteria during title/abstract and full-text search: (1) studies published between 1990 (included) and 23 September 2019 (date of the search), because only after August 1988 consensus on LAM was reached through the Bellagio consensus 43 ; (2) contained data collected in LMIC as defined by the World Bank 44 ; (3) can be a research paper (qualitative or quantitative), editorial or commentary, peer-reviewed paper or not (report, research paper), but conference abstracts were included only if they presented research results; (4) data were presented (or disaggregated) for adolescent girls between the age of 10 and 19 years; (6) the topic of lactational amenorrhoea was examined as a postpartum family planning method, or as an effect of (exclusive) breast feeding, including through quantitative indicators such as median duration or knowledge of lactational amenorrhoea, or qualitative analysis such as perceptions or barriers to use. Studies were excluded if they (1) mentioned amenorrhoea in a context without previous childbirth/ pregnancy (eg, amenorrhoea among girls treated for BMJ Global Health anorexia or following cancer treatment, primary amenorrhoea among adolescents), (2) presented measures of postpartum infecundability (amenorrhoea and abstinence combined, without disaggregating lactational amenorrhoea separately), and (3) examined contraceptive/family planning use and lactational amenorrhoea combined with other family planning methods without disaggregation. Two reviewers (MNSF and LB) independently screened all full-text articles. Differences were reconciled through discussion and consensus. Figure 1 presents the full search flowchart. Data charting process To extract relevant data from the references included in full text, a standard template sheet was used specifying the author(s), year of publication, journal, time of data collection, country(ies), site within country(ies), objective of each study, study design, sample size of adolescents, description of the sample, recruitment and eligibility, follow-up period (prospective studies) or time since childbirth (retrospective studies), completeness of follow-up or missingness of data, measurement/analysis method, definition(s) of the lactational amenorrhoea outcome(s) or indicator(s) used, and key findings in reference to adolescents. Two coauthors (MNSF and LB) independently extracted all data from studies included in full text. Any differences were resolved through discussion. As is common for scoping reviews, we did not formally assess the quality of included studies. Collating, summarising and reporting the results Descriptive information about the included studies is summarised in a table. To synthesise and interpret the findings of this scoping review, we used thematic analysis and narrative synthesis. We summarised the methods used by included studies and report the findings of studies according to the key theme identified. Based on these findings, we highlighted the gaps in available literature. Patient and public involvement No patient or public involvement took place in the design or conduct of this literature review. 
The results are intended for wide dissemination, including to researchers, programme implementers and governmental agencies, all of whom reach the public and the key population of this study. Overview of included studies We screened 982 unique references in title and abstract, 75 in full text, and included nine in this review (figure 1). The main reason for exclusion of studies in full-text screening (51 out of 66 studies) was that while adolescents were included in the analysis sample, they were not disaggregated from women older than 20 years (at all or were included in a broader age category conflating adolescents and young women, eg, an age group from 15 to 24 years). Table 1 shows the characteristics of the included studies. Three studies were published in the 1990s, three in the 2000s, and three since 2010. Eight studies assessed data from a single country (three from India, two from Bangladesh, two from Turkey and one from Nigeria), and one study using Demographic and Health Survey data included 37 different LMICs (18 countries in Sub-Saharan Africa, 4 near East/North Africa, 7 in Asia, and 8 in Latin America and the Caribbean). One study from Uttar Pradesh in India 45 appears to have used the same data as a second included study 46 ; the results reported are identical. Two of the nine included studies had as their main objective to investigate lactational amenorrhoea among adolescents. 47 48 Three additional studies were interested in differentials in durations of amenorrhoea according to mother's age, 46 demographic and biodemographic characteristics, 43 or were focused on sociodemographic influences. 47 The remaining four studies did not specifically set out to investigate lactational amenorrhoea among adolescents or the effect of age on lactational amenorrhoea, but presented findings relevant to this scoping review. All nine included studies were quantitative and used observational study designs (eight used retrospective and one prospective data collection). One of the nine included studies had only adolescents in their sample; all other studies included women older than 20 years and provided comparisons with adolescents. Five studies were analytical, three of which reported findings from crude analysis 47 49 50 and two conducted multivariable analyses of the association between age and lactational amenorrhoea. 46 51 The remaining four studies presented descriptive analyses only, meaning they did not perform statistical tests of the differences between indicators of lactational amenorrhoea among adolescents and older women. The studies included and disaggregated lactational amenorrhoea among females under the age of 20 (ie, adolescents). Five of the studies only included currently married and one study only ever-married women and girls. The remaining three studies did not specify any inclusion criteria or sample characteristics related to marital status. The age group relevant for this scoping review was defined as <20 years in four studies, 10-19 years in one study and 15-19 years in two studies, and 2 studies used further disaggregation by age among adolescents (10-14 and 15-19; <15 and 15-20). Maternal age was measured at the time of birth of the index child (three studies), at the time of marriage (one study), at time of receiving antenatal care (one study), and at the time of survey or at some point during the study (not further specified) in the remaining four studies. 
To establish women's age, two studies (both from Turkey) used medical records to identify women eligible for their sample and might have collected the birth date or age of study participants from this source (it is unclear whether this was further validated when interacting with the respondents). The BMJ Global Health remaining seven studies relied on either a household member or the woman's own report of her age. Table 2 summarises the nine studies' findings. We identified two key themes examined by included studies. The first theme, identified in five studies, captures the length of amenorrhoea. This was expressed either as median duration in months or weeks and/or the percentages of women in the sample resuming menses (or the opposite, remaining amenorrhoeic) at specific time intervals since the birth. The second theme concerned the use of lactational amenorrhoea as a family planning method and was used by four studies. No study presented findings related to both themes. All nine studies relied on respondent's recall to capture data relevant to the length of lactational amenorrhoea, the use of lactational amenorrhoea as a family planning method or return of menses. None of the studies used biomarkers to measure or validate selfreported outcomes. Theme 1: duration and outcomes of postpartum or lactational amenorrhoea The five studies measuring duration of postpartum or lactational amenorrhoea reported a wide range of durations across the contexts examined. In Assam 46 and Bangladesh, 47 duration of lactational amenorrhoea did not appear to differ between adolescents and women older than 20 years. In Uttar Pradesh, 46 47 the duration was substantially longer (around 12 months) among women >30 years compared with adolescents (3.5 months). Singh et al reported that in bivariate analysis, duration of amenorrhoea appeared longer in the two categories of adolescents (<15 and 15-20 at time of marriage) compared with older women (p<0.01), but this association was no longer significant in multivariate analysis. 51 Haggerty and Rutstein, in their analysis of 37 countries in the early 1990s, report a wide variability in duration of amenorrhoea. 52 Sub-Saharan African countries had the longest durations of amenorrhoea; 14 of 18 countries had medians longer than a year. 52 The four countries included in analyses for the Near East/North Africa region had the shortest durations, ranging from 4 to 6 months. 52 The largest variability was documented in the Latin America/Caribbean region where median durations ranged from 4 months to 11 months. In most countries, there was either a pattern where duration of amenorrhoea increased with each older age group or a U-shape pattern where duration was longer among adolescents and women 35 years and older compared with the age groups between age 20 and 34 at time of birth. 52 Across the four regions examined, the increase in duration of postpartum amenorrhoea with older age group was most notable in Sub-Saharan Africa. 52 The most recent data available across the five studies examining duration of lactational amenorrhoea were collected in 2009 in India. 51 Theme 2: use of lactational amenorrhoea as a family planning method Four studies (from Bangladesh, Nigeria and Turkey) examined outcomes related to the use of lactational amenorrhoea as a family planning method among adolescents. Two of these studies included respondents who have never had a child together with those who have. 
47 50 The first such study, 47 assessing married adolescents from Bangladesh, found that 13.2% of adolescents who were not using contraceptives at the time of survey cited postpartum amenorrhoea as a reason for non-use. Audu and colleagues found that in their sample in Nigeria, which included women with and without children, the percentage reporting ever-use of LAM was lowest among adolescents (5.0% among those 15-19 years), increasing to 10.0% among those 20-24 years and highest among women 35-39 years old (p value of differences <0.001). 50 One potential reason for the low percentage among the adolescent age group is that not all respondents in this sample had ever had children and therefore had the opportunity to have ever used LAM. In a small sample of postpartum adolescents age <20 years (n=10) in Turkey, Türk et al found that 70% considered themselves to be users of lactational amenorrhoea for family planning. 49 This compared with 33% of those 20-29 years old (n=135) and 30% of those ≥30 years old (n=43); p value <0.001. 49 However, many who considered themselves users of LAM also reported having menses (28 of 64 self-reported users of LAM, not disaggregated by age group), and one-third of them became pregnant during the study follow-up period. 49 Authors of this study highlighted that while LAM is one of the main family planning methods used in their sample, women might not be sufficiently aware of the criteria/conditions for its effective use. The study on use of LAM as a family planning method with the most recent data was conducted in Turkey in 2010-2012. 48 It found that 50.6% of those 10-19 years old were using lactational amenorrhoea as a family planning method compared with 33% among women 20-35 years. 48 LAM was the most preferred method of contraception in this study (no quantitative indicators were provided to support this statement). 48 Contraceptive failure in the adolescent age group was 2.37% in the first year post partum (12 unintended pregnancies among 506 adolescents in the sample), compared with 2.0% in the older age group. However, the failure rates were not available for LAM users separately. 48

Mechanisms for differences in lactational amenorrhoea between adolescents and older women
Given the variability in findings on duration of amenorrhoea and the use of lactational amenorrhoea method between adolescents and older women in the included studies, we attempted to understand the potential mechanisms that study authors found or proposed. The main determinant of the length of lactational amenorrhoea is the duration and intensity of breast feeding. Few of the included studies attempted to interpret the findings relevant to adolescents or elucidate the mechanisms which might lead to differences in lactational amenorrhoea between adolescents and women older than 20 years, whether at a more granular or a more distal level. For example, are any differences identified due to duration of breast feeding, suckling frequency (including nighttime feeding), timing and pattern of supplementation (including formula feeding), nutritional profile of the mother, or potentially an artefact of self-reporting by adolescents or older women; or variations in knowledge and purposeful use of lactational amenorrhoea as a family planning method (potentially affected by education, literacy, and/or parity), or empowerment levels (ability to negotiate breastfeeding duration/frequency with other household members)? Nath et al, 46 who found large differences in the duration of lactational amenorrhoea between adolescents and older women in Uttar Pradesh but no differences in Assam, recognised that without detailed data on suckling pattern and supplementation, it is difficult to understand the causes of the differences in duration of lactational amenorrhoea. Mechanisms they listed include different patterns of night nursing and nutritional status of women (malnourished women produce less breast milk which is also less nutritious, therefore their babies suckle longer, meaning women are likely to remain amenorrhoeic for longer periods). They explained the difference in duration of lactational amenorrhoea by age that they found in Uttar Pradesh as being due to a biological delay in hormonal mechanisms responsible for ovulation (no further detail was given). Bhattacharya et al, 45 who analysed the same data as Nath et al from Uttar Pradesh, mentioned the maternal nutrition mechanism to explain differences in duration of lactational amenorrhoea found between social groups and castes. If this mechanism were to be involved in the differences by age, then we would expect older women to be more malnourished as their lactational amenorrhoea duration is longer. Neither of the two studies using data from Uttar Pradesh tested this hypothesis. Lastly, Rahman et al, 47 who found an inverse U-shape pattern in duration of lactational amenorrhoea across age groups, offered no explanation or potential mechanisms for this association. However, they note that women in their sample did not always know their exact ages.

Gaps in the available literature
This review identified a scarcity of studies in LMICs focusing on lactational amenorrhoea among adolescent mothers, or more broadly, comparing lactational amenorrhoea characteristics and determinants across maternal age groups. The only comparative study across countries used data collected between 1990 and 1996, 52 and only one of the nine included studies used data collected in the past decade (since 2000). 48 Geographically, we found gaps in available literature in several world regions, including Middle East/North Africa, where the comparative study reported very short durations of lactational amenorrhoea, and studies conducted in Latin America/Caribbean.
52 Further, the studies using primary data included only married adolescents, meaning we have essentially no recent evidence about lactational amenorrhoea among unmarried adolescents. Last, none of the included studies used an intervention study design.
[Table note: In multivariate Cox regression with the outcome of duration of postpartum amenorrhoea (included variables were duration of breast feeding, age at marriage, sex of index child, survival of index child, use of contraceptives, place of residence, husband's education and family income), the relative risk for the variable 'age (years) at marriage of wife' was 1.011 (95% CI 0.994 to 1.029), meaning that there was no association after other factors were accounted for. However, it is unclear whether the age variable in this model was categorised (<15, 15-20, 20-25, 25-30, >30).]
In regard to thematic areas, we did not find any studies assessing adolescents' knowledge of LAM as a postpartum family planning method. The study by Kaplanoglu et al 48 from Turkey suggested there is insufficient knowledge among adolescents, but no formal assessment was conducted. Likewise, little is known about the effectiveness of LAM among adolescents using sufficiently large samples and follow-up time. In many of the included studies, sample sizes of adolescents were limited, and analysis methods restricted to descriptive or bivariate. Further analyses attempting to understand the factors associated with lactational amenorrhoea duration using more sophisticated methods (multivariate adjusted models) are also needed. Crucially, more work is needed to understand the mechanisms leading to different durations of lactational amenorrhoea overall and the use of LAM as a family planning method across women's age groups. We found no qualitative research on lactational amenorrhoea among adolescents. Research on perceived enablers and barriers of breast feeding and lactational amenorrhoea, as well as on the reliability and validity of adolescents' report of the duration of lactational amenorrhoea, is lacking. We did not identify any studies assessing the role of postpartum or lactational amenorrhoea within the framework of proximate determinants of fertility 53 among adolescents in LMICs. Such an approach would need to incorporate a broader understanding of lactational amenorrhoea within a postpartum infecundability period, which combined lactational amenorrhoea with postpartum abstinence. Last, we did not find any studies on double (redundant) use of lactational amenorrhoea and other modern methods, 35 or on the characteristics of transitions from LAM to other contraceptives after 6 months post partum. The importance of postpartum use of modern methods appears key given the finding from Turkey that a high percentage of respondents incorrectly believed they were being protected from pregnancy by lactational amenorrhoea despite their menstrual period having returned. 49

DISCUSSION
This scoping review systematically identified and summarised the findings of studies on lactational amenorrhoea among adolescent girls in LMICs. Using a set of selection criteria, two independent reviewers screened 75 full-text research papers published in the past 30 years and included a total of nine studies. Among these, only two had a main objective related to adolescents. The main reason for exclusion was a lack of disaggregation of individuals under study in the age categories 10-19 or 15-19 years old.
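As a quick check, the screening flow reported in this review (982 records screened on title and abstract, 75 assessed in full text, nine included, and 51 of the full-text exclusions attributed to a lack of age disaggregation) can be re-derived with a few lines of arithmetic. The snippet below is only a worked tally of those published figures and is not part of the original analysis.

```python
# PRISMA-style tally re-derived from the counts reported in this review.
screened_title_abstract = 982
assessed_full_text = 75
included = 9

excluded_title_abstract = screened_title_abstract - assessed_full_text  # 907
excluded_full_text = assessed_full_text - included                      # 66
excluded_no_age_disaggregation = 51                                     # main reason reported

print(excluded_title_abstract, excluded_full_text)
print(f"{excluded_no_age_disaggregation / excluded_full_text:.0%} of full-text "
      "exclusions lacked disaggregation for 10-19 year olds")           # ~77%
```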
Furthermore, several of the included studies which included adolescents and disaggregated lactational amenorrhoea estimates within this age group had very small sample sizes. There was heterogeneity in the findings about the duration of lactational amenorrhoea among adolescents compared with women >20 years. We also identified several important thematic gaps in currently available evidence, including adolescents' knowledge of LAM and transitions from the use of LAM to other contraceptive methods.

The heterogeneity in findings about duration of lactational amenorrhoea among adolescents compared with older women is not surprising given differences in breastfeeding practices across countries and contexts, as documented by Haggerty and Rutstein. 52 Potential mechanisms leading to differences in duration of lactational amenorrhoea between adolescents and women older than 20 years were mentioned in two of the included studies. The authors highlighted the role of low maternal nutritional status (babies suckling more often and longer due to less nutritious milk) and biological delays in hormonal mechanisms responsible for ovulation. These mechanisms imply that adolescent mothers might have longer durations of postpartum amenorrhoea, which was not the case in all the settings from which studies included in this scoping review reported on this indicator. Weaning and supplementation patterns have an important effect on resumption of menses, as does complete cessation of breast feeding (eg, due to death of infant or transition to formula feeding among women with HIV). Socioeconomic factors appear to have bi-directional effects on breast feeding: maternal education (higher education being linked to better awareness of benefits of breast feeding, including its contraceptive effects, and greater receptiveness to health promotion), maternal occupation (competing demands on woman's time and higher likelihood of supplementation/formula feeding) and urbanisation/household wealth are linked to better ability to access and afford formula. 54 Therefore, adolescents might have lower education levels, especially if they dropped out of education due to pregnancy and childrearing, and they might be more likely to supplement or wean earlier if they are returning to school after childbirth. The effect of lower wealth among adolescent mothers compared with older mothers, 55-58 particularly availability of disposable income to purchase formula and other items necessary to formula feed, might contribute to higher rates and duration of breast feeding among adolescents. Examination of the effect of marital status on lactational amenorrhoea in general, and understanding of lactational amenorrhoea among unmarried adolescents in particular, is completely absent from the identified literature. One might hypothesise that unmarried adolescent mothers who reside with their own family receive different types of support and advice with breast feeding and child-rearing compared with those living with husband or in-laws. However, how this and other factors affect breast feeding and lactational amenorrhoea is not known. The effect of parity is crucial to the topic of this paper, as adolescent mothers have, on average, substantially lower parity than women >20 years of age.
The effect of parity on duration of lactational amenorrhoea might also operate in both directions due to a combination of a cohort effect (older/higher parity women might breast feed longer) and mechanisms influencing breastfeeding patterns (nutritional depletion and/or lack of time to breast feed among high parity women). 54 Adolescent mothers, of which a higher proportion are primiparous, might be more likely to encounter difficulties with initiating and sustaining breast feeding (eg, poor latch, engorgement, painful nipples), which in turn could make them more likely to supplement or wean early. 59 Other issues linking low parity and breastfeeding behaviour among women in LMICs include, for example, higher rates of cesarean section 60 and delivery in health facilities. 61 It is possible that adolescent mothers have lower levels of knowledge about the existence and criteria for effective use of LAM compared with older women. This would likely be a combined effect of several factors, for example, lower education levels and lower parity (lack of previous use of reproductive/maternal health services where counselling on LAM use is covered) among adolescents. Lack of knowledge or inaccurate information about lactational amenorrhoea in general and LAM in particular can lead to two scenarios: (1) women are protected from conception by lactational amenorrhoea but are not using it intentionally as a contraceptive method (eg, due to lack of trust in it or knowledge about contraceptive effectiveness)-this might include women who also use another contraceptive method, and (2) women believe and report that they are using LAM, but are not doing so correctly, as the Turkish study found. 49 Therefore, understanding the extent to which adolescents who report they are using LAM fulfil the criteria for this method is critical, and could be assessed through existing secondary data such as Demographic and Health Surveys. The low accuracy of women's self-report on the use of LAM for family planning was shown previously (26% of reported users meet the criteria for correct LAM practice in analysis of data collected between 1998 and 2011 in 45 countries). 62 However, we found no studies assessing whether the levels of reporting accuracy are differential between adolescent and older mothers, and if so, what are the patterns and mechanisms leading to such differentials. We found no qualitative research on lactational amenorrhoea among adolescents. Research on perceived enablers and barriers of LAM use, as well as on the reliability and validity of adolescents' report of the duration of lactational amenorrhoea, are lacking. Levels of unmet need for family planning during the first year post partum are high in LMICs, [63][64][65] and in some settings are higher among adolescent mothers compared with older women. 66 No matter how effective the use of LAM is, it must be followed by another contraceptive method when one of the three criteria is no longer met and the woman desires to prevent a pregnancy. Evidence from LMICs shows that there is a gap in evidence on this transition among adolescents. [67][68][69] It would be important to describe and address the context-specific barriers adolescent mothers are facing in choosing another modern method. 
This is particularly important because the range of family planning methods in the postpartum period, as well as the range offered/available to adolescents, might already be limited, and discontinuation levels of short-term contraceptives such as pills and injections are high. [70][71][72] These barriers underscore the importance of health worker training and provision of counselling and support for breast feeding, 71 with strong adolescent-friendly components. Studies identified by this review were conducted mainly in four countries (India, Bangladesh, Turkey, Nigeria), while a cross-national study included 37 countries across Africa, Asia and Latin America. The four main countries have specific contexts in terms of sexual, reproductive and adolescent health, some of which might be similar to other LMIC settings, but may also differ in terms of cultural and religious aspects. In addition, only one of these articles analysed data collected in the past 10 years, while national and international policies in terms of sexual and reproductive health and rights, as well as maternal health service provision and utilisation, have evolved. The limited number of settings and lack of recent data, in addition to the limited evidence base and the variable quality of measurement methods used, preclude us from making any broad generalisations.

Limitations
This scoping review has several limitations. First, we only conducted searches for references in English; relevant studies in other languages might have been excluded. This is particularly the case in regard to research describing high-fertility settings in French-speaking West Africa. Second, given our primary interest in adolescents, we might have missed studies which used maternal age or age groups in descriptive or multivariable analyses (eg, as a population characteristic or a confounder), but did not mention this in the title or abstract, and thus were not identified in title and abstract screening. However, we also reviewed reference lists of all included studies, which provided another opportunity to find such studies. Third, we only reviewed literature on adolescents from LMICs. Fourth, while the duration of postpartum insusceptibility is determined by whichever is longer, postpartum amenorrhoea or postpartum abstinence, 73 our focus was solely on the former in this review.

CONCLUSION
While lactational amenorrhoea is not relevant for prevention or delay of first adolescent childbirth, it might be important for lowering repeat births among adolescents and young women, particularly those preceded by a short interpregnancy interval. Therefore, there is a need for more studies on duration of lactational amenorrhoea, knowledge and effective use of LAM for family planning among adolescents in a wide range of LMIC settings. Related to this, this study highlights the need for a better understanding of context-specific breastfeeding practices, barriers and enablers of lactational amenorrhoea use among adolescents, and transitioning from LAM onto other modern methods.

Twitter Lenka Benova @lenkabenova
Contributors MNSF and LB conceptualised the study, with input from TD and SB. Data were collected, screened and extracted by MNSF and LB. All coauthors contributed to the interpretation of findings. MNSF, LB and SB wrote the first draft of the manuscript, and all coauthors revised it critically.
Apolipoprotein L, a New Human High Density Lipoprotein Apolipoprotein Expressed by the Pancreas In this study, we have identified and characterized a new protein present in human high density lipoprotein that we have designated apolipoprotein L. Using a combination of liquid-phase isoelectrophoresis and high resolution two-dimensional gel electrophoresis, apolipoprotein L was identified and partially sequenced from immunoisolated high density lipoprotein (Lp(A-I)). Expression was only detected in the pancreas. The cDNA sequence encoding the full-length protein was cloned using reverse transcription-polymerase chain reaction. The deduced amino acid sequence contains 383 residues, including a typical signal peptide of 12 amino acids. No significant homology was found with known sequences. The plasma protein is a single chain polypeptide with an apparent molecular mass of 42 kDa. Antibodies raised against this protein detected a truncated form with a molecular mass of 39 kDa. Both forms were predominantly associated with immunoaffinity-isolated apoA-I-containing lipoproteins and detected mainly in the density range 1.123 < d < 1.21 g/ml. Free apoL was not detected in plasma. Anti-apoL immunoaffinity chromatography was used to purify apoL-containing lipoproteins (Lp(L)) directly from plasma. Nondenaturing gel electrophoresis of Lp(L) showed two major molecular species with apparent diameters of 12.2–17 and 10.4–12.2 nm. Moreover, Lp(L) exhibited both pre-β and α electromobility. Apolipoproteins A-I, A-II, A-IV, and C-III were also detected in the apoL-containing lipoprotein particles. Epidemiological studies have demonstrated a strong inverse correlation between the levels of plasma high density lipoproteins (HDL) 1 and risk of premature coronary heart disease (1,2). However, the mechanisms by which HDL protect against atherosclerosis need further exploration. One proposed protective role of HDL involves reverse cholesterol transport (3)(4)(5), a process in which HDL acquire cholesterol from peripheral cells and facilitate its esterification and delivery to the liver. In this process, small, relatively lipid-poor HDL particles, termed pre-␤ 1 -HDL, have been postulated to be the first acceptors of cholesterol from the cells (4,6,7). An additional mechanism may involve the ability of HDL to impede the oxidation of other plasma lipoproteins (8 -10). A major difficulty in understanding HDL metabolism is the molecular heterogeneity of HDL (11,12). Until recently, ultracentrifugation was the most practical way to purify HDL. This methodology has been the basis for the vast majority of the studies in this field. However, it is now well documented that ultracentrifugation causes protein dissociation and can modify structures of HDL particles (13)(14)(15)). An alternative purification strategy that conserves lipoprotein integrity is immunoaffinity chromatography, which isolates lipoproteins on the basis of their protein content (15)(16)(17). The development of the strategy of selected affinity immunosorption is particularly suited to investigation of the protein constituents of lipoprotein complexes because it permits isolation of the lipoproteins under minimally perturbing conditions (17). For example, functional components such as lecithin:cholesterol acyltransferase and cholesterol ester transfer protein are present in higher concentrations in immunopurified lipoproteins, whereas they are depleted or absent in ultracentrifugally purified lipoproteins (12,15,18). 
These observations demonstrate the importance of immunoaffinity chromatography in identifying novel HDL-associated proteins of potential physiological significance. In this study, we employed selected affinity immunosorption and two-dimensional gel electrophoresis to identify a new protein we have designated apolipoprotein L (apoL) that is associated with plasma lipoproteins, predominantly with apoA-Icontaining lipoproteins (Lp(A-I)). We report here the isolation and plasma lipoprotein distribution of apoL and the cloning and characterization of the cDNA encoding apoL. The apoA-I-containing lipoproteins (Lp(A-I)) were isolated by selected affinity immunosorption (17). Plasma was applied to a selected affinity anti-apoA-I column. The unbound fraction was eluted with Tris-buffered saline (5 mM Tris (pH 7.4), 150 mM NaCl, 0.04% EDTA, and 0.05% NaN 3 ). The Lp(A-I) fraction was eluted with 0.2 M acetic acid (pH 3) and 0.15 M NaCl. The eluate was immediately neutralized to pH 7.4 with 2 M Trizma (Tris base), and preservatives were added as described above. Finally, Lp(A-I) were passed through protein A-Sepharose and anti-albumin columns to remove traces of albumin and immunoglobulins. * This work was supported by National Institutes of Health Grants HL-31210, HL-50782, HL-50779, and AA-11205; by the Joseph Drown Foundation; and by Donald and Susan Schleicher. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. The nucleotide sequence(s) reported in this paper has been submitted to the GenBank TM /EBI Data Bank with accession number(s) AF019225. ‡ To whom correspondence should be addressed. First, 200 g of apoL was purified by electroelution from two-dimensional gels. The purified protein was used to raise rabbit antisera. Antibodies were adsorbed to protein A-Sepharose, and the IgG fraction was eluted with 0.2 M acetic acid and neutralized with 2 M Tris. The IgG fraction was cross-linked to CNBr-activated Sepharose (Pharmacia, Uppsala) to construct an anti-apoL column. Identification of ApoL-ApoL was purified by a combination of preparative liquid-phase isoelectric focusing (ROTOFOR, Bio-Rad) and high resolution two-dimensional gel electrophoresis (20). 200 mg of Lp(A-I) was fractionated with the ROTOFOR into 20 fractions over a pH range of 3.5-10.0. Fractions of interest were then subjected to two-dimensional gel electrophoresis. After electrophoresis, the proteins were electrotransferred to a polyvinylidene difluoride membrane (Bio-Rad). Proteins were stained with Coomassie Blue, and individual spots were subjected to N-terminal sequence analysis (Model 473 A, Applied Biosystems, Inc., Foster City, CA) (21). The amino-terminal sequences were compared with the SWISS-PROT and GenBank™ data bases (22). Northern Hybridization-A multiple-tissue Northern blot (CLON-TECH, Palo Alto, CA) was probed, strictly adhering to the recommended protocol. Each lane contained 2 g of highly purified poly(A) ϩ RNA from various human tissues. The Northern blot was probed with a synthetic guessmer (5Ј-AIGGGCTTIGACTGGGG(G/A)TCGCCTGT-(G/A)TCTGTGCCIGAGGGCACATTCTGCTGCACIC(T/G)GGCGCCA-GCCTCCTCC-3Ј) that corresponds to the first 25 residues of the circulating form of apoL. cDNA Cloning-We used a reverse transcription-polymerase chain reaction (PCR)-based cDNA cloning strategy (23) to isolate a cDNA encoding apoL. 
RNA was prepared from human pancreas using a total RNA isolation kit (CLONTECH). mRNA was purified using an mRNA purification kit (Pharmacia Biotech Inc.). Single-stranded cDNA was synthesized using 1 g of human pancreas mRNA and 500 ng of oligo(dT)-primer 5Ј-(T) 18 10 l of single-stranded cDNA was used for amplification of apoL cDNA. The first round of PCR contained 100 ng each of oligo(dT) and primer 331 (5Ј-CACTTTTCCTTGGTGTGAGAGTGAG-3Ј) in a final volume of 50 l. The PCR conditions were 40 cycles of denaturation for 30 s at 95°C, annealing for 30 s at 55°C, extension for 1 min at 72°C, and a final extension for 7 min. An aliquot (0.5 l) of this product was used in a second round of PCR using 100 ng each of oligo(dT) and primer 329 (5Ј-GAGGAAGCTGGAGCGAGGGTGCAAC-3Ј) under the same reaction conditions. Oligonucleotides 329 and 331 were designed using the expressed tag DNA sequence recently cloned (24). A band of 1.3 kilobase pairs, in accordance with the apoL mRNA size on a Northern blot, was extracted from an agarose gel. A third round of PCR was carried out using the same primers as the second. The final PCR product was directly cloned using the pCR-Script™ SK(ϩ) cloning kit (Stratagene, La Jolla, CA). 30 clones were found to have the correct insert. Both strands were sequenced by chain termination using the Thermo Sequenase cycle sequencing kit (Amersham Life Science, Inc.). Proteins were transferred to nitrocellulose membranes (0.2 m; Bio-Rad) (28). The membranes were soaked in 10 mM Tris (pH 7.4) and 0.5 M NaCl with 5% nonfat dry milk and then incubated for 16 h with antiserum. The blots were washed extensively in Tris buffer and incubated for 1 h with horseradish peroxidase-conjugated secondary antibodies. After four washes, proteins were detected by 4-chloro-1naphthol. Pre-␤-and ␣-HDL were prepared by starch block electrophoresis of immunopurified Lp(A-I) (29). Briefly, purified potato starch was hydrated with 50 mM barbital (pH 8.6) and poured into a Plexiglas form. After removal of excess buffer, the starch formed a rigid block. The Lp(A-I) sample (10 mg) was loaded onto the cathodic end, and 200 V was applied across the block. After the lipoprotein had migrated 20 cm as judged by a dye marker (bromphenol blue), the block was fractionated into 1-cm segments, and the apoA-I content of each was determined by immunonephelometry (30). Pre-␤-and ␣-HDL were recovered from the appropriate fractions, and purity was verified by immunoelectrophoresis on 1% agarose with anti-apoA-I antibodies. Quantitative Assay of ApoL-To quantify apolipoprotein L in plasma, we developed a competitive enzyme-linked immunosorbent assay using IgG purified from rabbit anti-apoL antiserum with purified apoL as a standard. A series of dilutions of plasma were incubated for 16 h at 4°C with a constant amount of antibody diluted 4000 times with phosphatebuffered saline. The samples were then added to a 96-well plate coated with immunopurified apoA-I-containing lipoprotein (500 ng/well) to quantify the uncoupled antibody. After a 1-h incubation at 23°C, the plate was washed with phosphate-buffered saline. Horseradish peroxidase-labeled anti-rabbit IgG was added. After 45 min, the plate was washed, and the substrate 3,3Ј,5,5Ј-tetramethylbenzidine was added. Plates were read at 450 nm using a computer-linked plate reader (Vmax, Molecular Devices, Menlo Park, CA). 
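Competitive ELISAs of the kind described above are commonly quantified by fitting the absorbance of the standard dilution series with a four-parameter logistic curve and interpolating unknown wells. The sketch below illustrates that generic approach; it is not the authors' analysis, and the standard concentrations, absorbances and starting guesses are invented placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ec50, hill):
    """Four-parameter logistic: signal falls with concentration in a competitive assay."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** hill)

# Hypothetical apoL standard curve (ng/well vs absorbance at 450 nm).
std_conc = np.array([3.9, 7.8, 15.6, 31.2, 62.5, 125, 250, 500])
std_abs = np.array([1.85, 1.70, 1.45, 1.10, 0.75, 0.48, 0.30, 0.20])

params, _ = curve_fit(four_pl, std_conc, std_abs,
                      p0=[2.0, 0.1, 60.0, 1.0], maxfev=10000)
top, bottom, ec50, hill = params

def conc_from_abs(a450):
    """Invert the fitted curve to estimate apoL in an unknown well."""
    return ec50 * ((top - bottom) / (a450 - bottom) - 1.0) ** (1.0 / hill)

print(round(conc_from_abs(0.90), 1), "ng/well (interpolated)")
```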
RESULTS

Identification of Apolipoprotein L: Through the use of our minimally perturbing method of anti-apoA-I selected affinity immunosorption (17), we have identified subspecies of HDL that contain a new protein that we have designated apolipoprotein L. ApoL was identified from 200 mg of immunoisolated Lp(A-I) prepared from normolipidemic human plasma. Lp(A-I) were depleted of albumin by anti-albumin immunosorption. Because apoA-I is the predominant protein in the Lp(A-I) fraction, our initial purification step involved preparative liquid-phase isoelectric focusing. This was followed by high-resolution two-dimensional gel electrophoresis. Fig. 1A shows a typical fractionation obtained after preparative isoelectric focusing. ApoA-I, the most predominant band in fraction 6, was greatly depleted in succeeding fractions. Fractions enriched in apoL (fractions 9-10) were then submitted to analytical two-dimensional gel electrophoresis, and the resolved proteins were electrotransferred to polyvinylidene difluoride membranes (Fig. 1B). Each Coomassie Blue-stained spot was submitted to N-terminal microsequencing, and their sequences were compared with known proteins. The first 27 amino acid residues were identified (Table I). This sequence was confirmed in 15 samples from three donors. By analytical two-dimensional gel electrophoresis, two major proteins (Fig. 1B, indicated by arrows) were determined to have identical N-terminal sequences (Table I). This sequence was compared with those in the SWISS-PROT and GenBank™ databases. At this time, no sequence matching that of apoL was found. However, during this study, our search for matching sequences revealed the existence of a 143-base pair expressed tag DNA sequence (clone C22-280), recently cloned from human chromosome 22 (24), that matched the apoL sequence. This clone contained 12 amino acid codons upstream of our sequence. Computerized analysis of the sequence using prediction programs (31,32) indicated that these 12 amino acids constitute the signal peptide.

Characterization of ApoL: We prepared apoL antigen by two-dimensional gel electrophoresis, isolating 200 µg by gel electroelution (Fig. 2A). Fig. 2B illustrates the reactivity of rabbit antiserum to Lp(A-I). Unexpectedly, the antiserum detected two bands with apparent molecular masses of 42 and 39 kDa. No proteins were detected with preimmune serum (Fig. 2B). The same pattern was obtained under nonreducing and reducing conditions, suggesting the absence of disulfide-bridged subunits. To rule out cross-reaction with other Lp(A-I) proteins, we analyzed the sequence of the 39-kDa protein detected by the antiserum. The N terminus of the 39-kDa protein did not correspond, at this time, to any known protein. Later, we found that apoL cDNA and part of clone C22-280 (Table I) matched, showing that the 39-kDa protein identified by immunoblotting and amino acid sequence analysis was a truncated form of mature apoL (42 kDa).

Molecular Cloning and Sequence Analysis of ApoL: To clone apoL cDNA, we performed a Northern analysis of poly(A)+ RNA from various human tissues (CLONTECH). A single mRNA transcript of ~1.3 kilobase pairs was detected in the pancreas, but not in the heart, brain, placenta, lung, liver, skeletal muscle, kidney, spleen, thymus, prostate, testis, ovary, small intestine, colon, or peripheral blood leukocytes (Fig. 3). The mark between the ovary and testis lanes in the lower right blot is an artifact.
Using oligonucleotides 331 and 329, an oligo(dT) primer, and purified pancreas mRNA as a template, we were able to amplify a band of ~1.3 kilobase pairs, as expected according to the Northern analysis (Fig. 3). This was cloned into pCR-Script™ SK(+), and both strands were sequenced. The sequence revealed only one possible open reading frame, encoding 383 amino acids with a typical signal peptide of 12 residues and a mature protein of 371 amino acids (Fig. 4). The molecular mass calculated from the deduced sequence of the mature protein (41,041 Da) is in agreement with the value estimated experimentally.

(Fig. 2 legend: Immunoreactivity of the antiserum raised against apoL. ApoL was purified using two-dimensional gel electrophoresis. A shows an SDS-polyacrylamide gel of the purified protein used to raise rabbit antisera. In B, 40 µg of Lp(A-I) protein was separated by SDS-PAGE and stained with Coomassie Blue R-250 or electrotransferred to nitrocellulose for Western blotting. CBB, Coomassie Blue-stained protein profile; anti A-I, immunoblot with antibodies to apoA-I; anti L, immunoblot with antiserum to apoL; pre imm., immunoblot with preimmune serum.)

(Table I legend: Lp(A-I) proteins were separated by a combination of liquid-phase isoelectric focusing (Bio-Rad) and two-dimensional gel electrophoresis. After transfer to a polyvinylidene difluoride membrane and staining with Coomassie Blue R-250, N-terminal sequencing was performed on each spot. 15 different samples confirmed these sequences. Using antibodies against the 42-kDa form of apoL, we detected a second form of apoL with an approximate molecular mass of 39 kDa. Clone C22-280 represents the amino acid sequence deduced from the matching DNA sequence found in the SWISS-PROT and GenBank™ databases. The table lists residues −12 through 29.)

Apolipoprotein L Is Not Free in Plasma and Is Mainly Associated with Apolipoprotein A-I-containing Lipoproteins: We studied the distribution of apoL among apoA-I and apoB lipoproteins using selected affinity immunosorption. The experimental procedure is shown in Fig. 6A. Using successive anti-apoA-I and anti-apoB columns, we obtained the following fractions: lipoprotein-deficient plasma, apoA-I-containing lipoproteins (Lp(A-I)), and apoB-containing, apoA-I-deficient lipoproteins (Lp(B w/o A-I)). Fig. 6B shows an immunoblot obtained after SDS-PAGE of 20 µg of protein from each. The 42-kDa apoL was present as a single reactive band in Lp(A-I). ApoL was undetectable in the Lp(B w/o A-I) and lipoprotein-deficient plasma fractions. This result suggests that apoL is associated chiefly with the apoA-I-containing lipoproteins.

Apolipoprotein L Is Present in a Dense HDL Fraction: The distribution was also compared among lipoproteins prepared by ultracentrifugation. VLDL, IDL, LDL, HDL2, and HDL3 were isolated by conventional sequential ultracentrifugation. Because this technique is known to disrupt lipoprotein structure (13-15, 33-35), we attempted to minimize lipoprotein alteration by submitting each lipoprotein class to equivalent ultracentrifugal force exposure, expressed as the ω²t product, using the same rotor. Qualitative detection of apoL in the different subclasses was maximized by immunoblotting samples overloaded on SDS-polyacrylamide gels (100 µg of each fraction). Only minor amounts of apoL were detected in VLDL and HDL2. ApoL was primarily present in HDL3 (1.12 < d < 1.21 g/ml) (Fig. 7A). Because of the long exposure to the substrate, artifact bands (larger than apoL) were also revealed in IDL and LDL.
The quantitative assay for apoL confirmed this by showing an apoL content in HDL of 2 ± 0.7 µg of apoL/mg of total protein versus 0.13 ± 0.13 µg of apoL/mg of total protein in VLDL (Fig. 7B). ApoL was also detected in the bottom fraction (d > 1.25 g/ml) in trace amounts.

Immunosorption with an Anti-apolipoprotein L Affinity Gel: To find the subpopulation of Lp(A-I) containing apoL (Lp(A-I:L)), we constructed an anti-apoL column using purified anti-apoL IgG. We isolated apoL-containing lipoproteins (Lp(L)) directly from normolipidemic plasma. Fig. 8 shows an SDS gel comparing the bound fraction (Lp(L)) with the apoA-I-containing lipoproteins. By immunoblotting with specific antisera, we were able to detect the presence of apolipoproteins A-I, A-II, A-IV, and C-III (data not shown). Fractionation of Lp(L) by particle size is shown in Fig. 9. ApoL was mainly distributed in large apoA-I-containing lipoproteins (12.2-17 nm and 10.4-12.2 nm) and was totally absent in the small particles. Moreover, the analysis of Lp(L) lipoproteins by immunoelectrophoresis revealed α- and pre-β-migrating components (Fig. 10).

DISCUSSION

We have reported here the identification, characterization, and cloning of a new human apolipoprotein that we have designated apolipoprotein L. This new apolipoprotein is mainly associated with the apoA-I-containing lipoproteins of plasma. High density lipoproteins comprise a number of molecular subspecies that differ with respect to protein and lipid composition, particle morphology, and size. The numerous HDL molecular species are not fully apparent when HDL is prepared by ultracentrifugation. Hydrostatic pressure developed in the ultracentrifuge causes the dissociation of a portion of the complement of apolipoproteins (such as apolipoproteins A-I, A-II, C, and E) from HDL and leads to concomitant protein and lipid rearrangements (13-15, 33-35). The contents of proteins such as lecithin:cholesterol acyltransferase and cholesterol ester transfer protein, which have been shown to interact and to form physical complexes with apoA-I-containing lipoproteins, are diminished or totally depleted in HDL altered by ultracentrifugal isolation (15,18). Thus, ultracentrifugation hinders identification of the molecular species of HDL and characterization of their constituent proteins. Classification of lipoproteins on the basis of their apolipoprotein composition, first proposed by Alaupovic (16), instead relies on immunoaffinity isolation, which preserves protein constituents that dissociate during isolation by ultracentrifugation (13-15, 33-35). In this study, we combined liquid-phase isoelectric focusing and high-resolution two-dimensional gel electrophoresis to surmount the problem posed by the predominance of apoA-I in the immunoisolated Lp(A-I) fractions, which would otherwise hinder purification of proteins present at lower concentrations.

ApoL isolated from the Lp(A-I) particles was observed in two forms: 42 and 39 kDa (minor form) (Fig. 1). The truncated species could represent a proteolytically activated form of the protein, as is the case for several other plasma apolipoproteins (36,37). If so, the putative precursor form (42 kDa) represents the main constituent. So far, we have not been able to determine whether this truncation occurs in vivo or during isolation. Recently, Trofatter et al. (24) published an expressed sequence tag (clone C22-280, human chromosome 22) that matched the N-terminal sequence we had found for apoL. This sequence revealed 12 residues upstream of the first amino acid of the plasma form of apoL.
Since this structure is typical of a signal peptide (38), and since the cDNA sequence of apoL reveals only one possible open reading frame and encodes a mature protein of 371 amino acids with a molecular mass of 41,041 Da, in agreement with the experimental value, we propose that these 12 residues (starting with a methionine) represent the signal peptide of apoL. Therefore, the cDNA presented in this report encodes the full-length apoL protein. The analysis of apoL cDNA (32) reveals one putative N-glycosylation site (NISN, residues 246-249) and several candidate serine and threonine residues for O-glycosylation. Post-translational modifications at these sites could explain the charge isoforms of apoL found in plasma (Fig. 1).

Because we did not find any significant homology between the apoL sequence and any present in SWISS-PROT or GenBank™ (22), it is not yet possible to predict any function of apoL based on homologies. However, the transcription of apoL mRNA by the pancreas suggests a very specific function, possibly enzymatic, in lipid metabolism. Indeed, preliminary data (not shown) seem to indicate a positive correlation between plasma levels of apoL and plasma triglyceride levels. Analysis of the secondary structure of apoL (31) reveals four possible amphipathic helices (Fig. 5). These would confer a high level of lipophilicity, in agreement with our finding of very little detectable free apoL in plasma. That apoL in plasma is entirely bound to lipoproteins and remains associated with them during exposure to large volumes of buffer during column washing supports the view that it has very high affinity for HDL. Hence, it should be regarded as a true apolipoprotein rather than a plasma protein that exists partially in a lipoprotein-associated form, such as haptoglobin. This is the basis of our designating it an apolipoprotein.

ApoL, with a mean plasma concentration of 5.9 ± 0.9 µg/ml (n = 5), is a marker for a distinct subpopulation of HDL. Indeed, apoL was found almost exclusively in association with apoA-I in lipoproteins prepared by immunoaffinity chromatography (Fig. 6). Moreover, the presence of apoL in plasma lipoproteins isolated by ultracentrifugation and its localization to HDL3 (Fig. 7A) corroborate the results obtained by immunoaffinity chromatography. Because of the close association between apoA-I and apoL, and because it is well known that lipoprotein integrity is better preserved by immunoaffinity isolation, we used the latter methodology to isolate specific lipoprotein subpopulations containing apoL. In agreement with our previous data, the apoL-containing lipoproteins (Lp(L)) contained apoA-I (Fig. 8). Moreover, Lp(L) exhibited diameters typical of HDL (Fig. 9). However, it is interesting to note the discordance of the data between HDL purified by ultracentrifugation and the lipoproteins purified by immunoaffinity, showing the protein redistribution occurring during ultracentrifugation (13-15, 33-35). We found apoL to be preponderantly in HDL3; however, the Lp(L) particles isolated by selected immunosorption exhibited heterogeneity of size. ApoL was chiefly associated with large HDL particles (Fig. 9). Fig. 9 also shows the existence of a very large apoL-containing lipoprotein corresponding to VLDL. Due to their low content in Lp(L) particles, these minor populations were not detectable by immunoblotting of apoB-containing lipoproteins (Fig. 7), but were only measurable by enzyme-linked immunosorbent assay.
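Amphipathic helix predictions of the kind cited above are commonly summarized by the Eisenberg hydrophobic moment, which measures how strongly hydrophobic residues cluster on one face of an idealized helix. The sketch below computes this quantity for an arbitrary sequence window, assuming 100° of rotation per residue and the Eisenberg consensus hydrophobicity scale; the example window is a placeholder, not a segment of apoL.

```python
# Minimal sketch: mean hydrophobicity <H> and Eisenberg hydrophobic moment
# <muH> of a peptide window, assuming an ideal alpha-helix (100 deg/residue).
# The example window is a hypothetical placeholder sequence.
import math

EISENBERG = {  # Eisenberg consensus hydrophobicity scale
    "A": 0.62, "R": -2.53, "N": -0.78, "D": -0.90, "C": 0.29,
    "Q": -0.85, "E": -0.74, "G": 0.48, "H": -0.40, "I": 1.38,
    "L": 1.06, "K": -1.50, "M": 0.64, "F": 1.19, "P": 0.12,
    "S": -0.18, "T": -0.05, "W": 0.81, "Y": 0.26, "V": 1.08,
}

def hydrophobic_moment(window: str, deg_per_residue: float = 100.0) -> float:
    """Return the mean hydrophobic moment per residue for a window."""
    delta = math.radians(deg_per_residue)
    s = sum(EISENBERG[aa] * math.sin(i * delta) for i, aa in enumerate(window))
    c = sum(EISENBERG[aa] * math.cos(i * delta) for i, aa in enumerate(window))
    return math.hypot(s, c) / len(window)

def mean_hydrophobicity(window: str) -> float:
    return sum(EISENBERG[aa] for aa in window) / len(window)

if __name__ == "__main__":
    window = "LWKLLESALQKLQE"   # hypothetical 14-residue window
    print(f"<H>   = {mean_hydrophobicity(window):+.2f}")
    print(f"<muH> = {hydrophobic_moment(window):.2f}")
    # Windows with a high moment and moderate mean hydrophobicity fall in the
    # "surface-seeking" region of an Eisenberg plot, consistent with lipid binding.
```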
Fig. 7B shows the amount of apoL relative to the amount of total protein in the lipoprotein. The apoL content of HDL was more than 10 times higher than that in VLDL. No apoL was detectable in LDL. Apparently due to protein dissociation during ultracentrifugation, we also found apoL in the fraction of d > 1.25 g/ml. Moreover, apoA-II, apoA-IV, and apoC-III were also detected in Lp(L) (data not shown), indicating, as for other subclasses of HDL, the presence of a complex protein complement in Lp(L). Populations of HDL designated as pre-β-HDL have been postulated as serving key roles in reverse cholesterol transport (4,6,39). One, pre-β1-HDL, appears to act as the initial acceptor of cellular unesterified cholesterol (6). In this study, we showed that a subpopulation of apoL-containing lipoproteins also exhibits pre-β mobility (Fig. 10), possibly belonging to the larger pre-β2- or pre-β3-HDL particle populations.

In summary, we have reported in this study the nucleotide and deduced amino acid sequences for a new human apolipoprotein that we have designated apolipoprotein L. This is the first apolipoprotein shown to be secreted by the pancreas. Its origin in that organ may reflect a non-insulin-dependent role of the pancreas in lipid metabolism. This new apolipoprotein is found in plasma, mainly associated with apoA-I-containing lipoproteins. Moreover, apoL-containing lipoproteins clearly define new HDL subspecies. Since no sequence homology was found with any known protein, its function cannot be inferred on a structural basis.
2018-04-03T02:06:14.654Z
1997-10-10T00:00:00.000
{ "year": 1997, "sha1": "59fda4f10926fa6773ecc64d730cc4d34fdb61fd", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/272/41/25576.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "6bf35275dc113caf3d3f31b820199ac08c66e302", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
89605095
pes2o/s2orc
v3-fos-license
Optimization of Process Parameters for Friction Stir Welding of Aluminum and Copper Using the Taguchi Method

Producing joints of aluminum and copper by means of fusion welding is a challenging task. However, the results of various studies have proven the potential of friction stir welding (FSW) for manufacturing aluminum-copper joints. Despite the proven feasibility, there is currently no series application in the automotive industry to produce aluminum-copper joints for electrical contacts by means of FSW. To make FSW as efficient as possible for large-scale production, maximized welding speed is desired. Taking this into account, this paper presents the results of a parametric investigation, the objective of which was to increase the welding speed for FSW of aluminum and copper in comparison to welding speeds that are considered to be state of the art. The Taguchi method was used to design an experimental plan, and the target figures of the investigations were the resultant tensile strengths and electrical resistances. Dependencies between input parameters and target figures were determined systematically. The optimal welding parameters, at which joints failed in the weaker aluminum material, included a welding speed of 700 mm/min. Consequently, it could be shown that joints with a performance similar to those of the base materials can be obtained using significantly higher welding speeds than reported in the relevant literature.

Introduction

Excellent electrical and thermal conductivity combined with high ductility, creep resistance and corrosion resistance are the reasons for copper materials being considered to be state of the art in current-carrying components for automotive applications. However, using copper is disadvantageous regarding the high procurement costs and the high material density. Taking this into account, dissimilar aluminum-copper joints represent a solution with great potential for weight- and cost-optimized conductors [1,2]. In order to produce joints for electrical contacts, it is well known that firmly bonded joining is preferred to interlocking and force-locking joining techniques, due to the better electrical performance of the joint [3]. However, joining aluminum and copper is a challenging task by means of conventional fusion welding. Different melting temperatures of the base materials, the high thermal conductivities, and the low mutual solubility, which leads to the formation of brittle intermetallic phases, make it difficult to achieve sound welds [4]. Instead, joining processes in which the formation of a melt is avoided are receiving much interest [5]. Friction stir welding (FSW) also belongs to these so-called solid-state joining techniques, and various authors report on the suitability of this process for joining aluminum and copper materials [2,6-10].

FSW was developed and patented in 1991 by Thomas et al. [11]. In order to produce a firmly bonded joint, this process uses a non-consumable tool, which typically consists of a shoulder and a pin. This rotating tool is pressed into the joint gap and then traversed along the joint line. As a result of tool rotation and feed, the two joining partners are plasticized and stirred [12].

Most studies carried out in the field of FSW of aluminum and copper provide proof of feasibility and focus on the influence of tool and process parameters on the resulting mechanical and microstructural joint properties. Important findings have been obtained through the work of Xue et al.
[9] and Akinlabi [7]. These authors report unanimously on the importance of positioning the harder copper material on the advancing side (AS) and the softer aluminum workpiece on the retreating side (RS) in order to manufacture sound welds free of defects. Moreover, a lateral offset towards the softer aluminum material is recommended to improve the material flow, and thus, the weld quality. Further publications on FSW of aluminum and copper are summarized in Table 1. All of these literature references report on the successful joining of aluminum and copper using FSW.

(Table 1. Overview of previous studies on dissimilar FSW of aluminum and copper.)

Despite the proven feasibility, there is currently no series application in the automotive industry to produce aluminum-copper joints for electrical contacts by means of FSW. In order to achieve this, there are several aspects that require further investigation. This study addresses a research question that is of particular relevance to the use of FSW for the production of aluminum-copper joints in the automotive industry. As can be seen in Table 1, FSW in previous research studies has been conducted at relatively low welding speeds. The objective of this work is to determine a significantly higher welding speed than in published studies, at which butt welds with excellent mechanical and electrical performance can be manufactured, in order to make the FSW process as efficient as possible for large-scale production.

Materials and Methods

The applied materials in this study were EN AW-1050A and EN CW004A. Table 2 shows the chemical compositions of both materials, which were taken from the material supplier. The dimensions of the blanks were 160 mm, 100 mm, and 3 mm (length, width, thickness). The FSW experiments were performed on a PTG Powerstir portal system (PTG Heavy Industries Ltd, West Yorkshire, UK) in position-controlled operation. The clamping setup used for fixation of the blanks is shown in Figure 1. The FSW tool used for the welding tests was made of heat-treated steel (X40CrMoV5-1) and consisted of a flat shoulder with a diameter of 18 mm and an unthreaded pin with a diameter of 6 mm. The length of the variably adjustable pin was set to 2.9 mm. All the friction stir welds produced within this study had a length of 120 mm.

In order to increase the welding speed in comparison to published studies in the field of Al-Cu FSW, it was necessary to consider a wider parameter window for the parametric investigation. Design of experiments (DoE) was used to ensure an efficient procedure in terms of test effort and quality of results. Using the statistics software Minitab 18 (Minitab GmbH, Munich, Germany), an experimental plan was created. This was a fractional factorial Taguchi L25 design with three factors and five levels. Taguchi orthogonal plans are known to be suitable for parameter optimization purposes. The process parameters that were kept constant during the welding tests are listed in Table 3. The plunge depth and the tool tilt angle were determined based on preliminary tests and were not varied during the welding tests in order to achieve a complete penetration depth.
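For reference, a 25-run layout of the kind used in the L25 plan mentioned above can be generated from two base-5 index columns; a third column formed as their sum modulo 5 stays balanced against both, so every pair of factors sees each level combination exactly once. The sketch below builds such a layout in Python. The level values are placeholders chosen only to be equidistant; the actual Tables 4 and 5 of the study are not reproduced here, although 700 mm/min, 400 rpm, 1.4 mm, and 3.0 mm are levels mentioned in the text.

```python
# Minimal sketch: construct a 25-run orthogonal layout for three factors at
# five levels (an L25(5^3)-type plan). Columns a and b enumerate all 25
# base-5 combinations; column (a + b) mod 5 is balanced against both.
# Level values are placeholders, not the study's Table 5.

LEVELS = {
    "traverse_speed_mm_min": [300, 400, 500, 600, 700],    # placeholder spacing
    "rotation_speed_rpm":    [400, 600, 800, 1000, 1200],  # placeholder spacing
    "offset_mm":             [1.4, 1.8, 2.2, 2.6, 3.0],    # placeholder spacing
}

def l25_runs(levels):
    names = list(levels)
    runs = []
    for a in range(5):
        for b in range(5):
            idx = (a, b, (a + b) % 5)
            runs.append({name: levels[name][i] for name, i in zip(names, idx)})
    return runs

if __name__ == "__main__":
    for i, run in enumerate(l25_runs(LEVELS), start=1):
        print(f"AlCu_{i:02d}", run)
```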
As recommended by Xue et al. [9] and Akinlabi [7], the copper workpiece was positioned on the AS throughout the investigations. The process parameters, hereinafter also referred to as factors, which were varied equidistantly during the parametric investigation, are the traverse speed (factor 1), the tool rotation speed (factor 2), and the offset towards the aluminum side (factor 3). The structure of the Taguchi L25 design with 25 individual experiments is shown in Table 4, and the levels for each process parameter are listed in Table 5. For statistical purposes, three samples were welded for each factor-level combination.

The evaluation of joint quality for the individual factor-level combinations was carried out by means of tensile testing and electrical resistance measurement. Moreover, hardness tests and metallographic analyses were performed on selected samples by digital microscopy and scanning electron microscopy (SEM) to assess the quality of the welds. The tensile tests were conducted according to DIN EN ISO 25239-5 [20] by the test machine Zwick Z100 (Zwick GmbH & Co. KG, Ulm, Germany) at an operating speed of 10 mm/min. Transversal sections of the friction stir welds were detached by water jet cutting for evaluation of the mechanical joint properties. In order to avoid excessive material consumption, a distance of 20 mm from the plunging spot has been set for detaching the samples. This distance deviates from the 50 mm specified in DIN EN ISO 25239-5 [20]. The shape of the samples for tensile testing accorded with DIN EN ISO 4136 [21]. In addition to the friction stir welds, five samples each of the respective base materials were tensile tested.
For analyzing the electrical joint properties, the four-point resistance measurement setup that is shown in Figure 2 was applied. This setup consists of the Micro-Ohmmeter MR5-600 (Schuetz Messtechnik GmbH, Teltow, Germany) and a clamping device that was designed for the rectangular samples with widths of 40 mm and lengths of 190 mm. A test current of 200 A was chosen and the measuring tips had a distance of 30 mm. The used setup allowed the measurement of the electrical resistance of the weld seam and the respective base materials simultaneously. The electrical resistance of the copper base material was measured via measuring tips 1 and 2, and the aluminum base material was analyzed via measuring tips 3 and 4. The welded area was positioned between tips 2 and 3. For each of the three areas, ten values were recorded that were averaged subsequently.

The samples for digital microscopy were prepared using the standard metallographic procedures. After mounting, the samples were ground using 1200 SiC abrasive paper and then polished using 1 µm aluminum oxide suspension and 50 nm colloidal silica suspension. Grinding and polishing were done manually to avoid the shifting of aluminum particles into the copper side and vice versa. A digital microscope VHX-2000 (Keyence Deutschland GmbH, Neu-Isenburg, Germany) was used to analyze the metallographic features of the friction stir welds.
The samples for scanning electron microscopy were mounted, ground with 1200 and 2400 SiC abrasive papers, and then polished with 6 µm, 3 µm, and 1 µm diamond suspension. This procedure prevented topographical differences at the Al-Cu interfaces, so that the relevant areas could be analyzed properly. Scanning electron microscope model Scios (Field Electron and Ion Company, Hillsboro, OH, USA) was used for further analysis of the Al-Cu interfaces by means of backscattered electrons (BSE). Vickers hardness tests were carried out using the Leco AMH-43 test device (Leco Corporation, Saint Joseph, MO, USA) with a test load of 0.1 kp.

Once the optimal FSW parameters had been determined, the scalability of the results was tested. Since the ratio of tool rotation speed to traverse speed is a key figure for the heat input in FSW, these parameters were scaled up, while the optimal ratio that was determined through the parametric investigation was kept constant. The motivation for these experiments was a further increase in welding speed.

In order to be able to compare the properties of the friction stir welds to those of the respective base materials, the base materials are characterized at first. Five tensile specimens were tested per base material. Moreover, the electrical properties of the base materials were analyzed using the four-point resistance measurement method. The measured values of the 75 samples from the parametric investigation were used for both base materials.

The last part of this section describes the labelling of the samples. Table 6 includes all the different variants.

Table 6. Material and specimen labeling:
- Al: aluminum base material EN AW-1050A
- Cu: copper base material EN CW004A
- AlCu: friction stir welds produced as part of the Taguchi experimental plan
- AlCu_opt: friction stir welds produced using welding parameters with the optimal ratio of tool rotation speed to traverse speed

Mechanical and Electrical Properties of the Base Materials

Table 7 provides an overview of the mean values and standard deviations of the tensile strengths and the electrical resistances of the base materials used.
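The scalability tests described earlier in this section hold the rotation-to-traverse ratio fixed while both parameters are increased. A minimal sketch of how such scaled parameter sets can be generated is shown below; the scaling factors are illustrative and the actual values of Table 8 are not reproduced here.

```python
# Minimal sketch: generate scaled FSW parameter sets that preserve the
# rotation-to-traverse ratio (n/v) of the optimal setting.
# Scaling factors are illustrative; Table 8 of the study is not reproduced.

V_OPT_MM_MIN = 700.0    # optimal traverse speed stated in the text
N_OPT_RPM = 400.0       # optimal tool rotation speed stated in the text
NV_RATIO = N_OPT_RPM / V_OPT_MM_MIN    # ~0.57 revolutions per mm

def scaled_sets(factors):
    """Return (traverse speed, rotation speed) pairs at constant n/v."""
    return [(V_OPT_MM_MIN * f, round(V_OPT_MM_MIN * f * NV_RATIO)) for f in factors]

if __name__ == "__main__":
    for v, n in scaled_sets([1.0, 1.5, 2.0]):   # illustrative scaling factors
        print(f"v = {v:6.0f} mm/min, n = {n:4d} rpm, n/v = {n / v:.2f} 1/mm")
```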
Mechanical and Electrical Properties of the Al-Cu Friction Stir Welds

The evaluation of the welding experiments for parameter optimization starts with analyzing the mechanical properties of the friction stir welds for the different factor-level combinations (Figure 3). The diagram shows that the averaged tensile strength for parameter settings 1, 6, 22, and 23 is at the level of the aluminum base material. For three of the four parameter combinations mentioned, failure in all tensile specimens occurred in the weaker aluminum base material, which is always the objective when welding dissimilar joints. However, it was observed that most specimens failed in the area of the weld seam. This leads to the conclusion that the parameter window for the production of welds with optimal tensile strength is relatively small. In order to be able to compare the heat input between different welds, the ratio between tool rotation speed and traverse speed (n/v-ratio) will be used in the following. This n/v-ratio indicates the number of revolutions per mm feed, and thus allows a rough estimation of the heat input [22]. Low heat input is represented by a low n/v-ratio, and a high n/v-ratio stands for high heat input into the workpieces. Throughout the welding experiments, the n/v-ratio was in a range from 0.15 1/mm to 1.2 1/mm. Taking into account that the n/v-ratios for the parameter settings that lead to the highest tensile strengths are comparatively low, with values of 0.4 1/mm (AlCu_1), 0.29 1/mm (AlCu_6), 0.23 1/mm (AlCu_22), and 0.31 1/mm (AlCu_23), it can be concluded that cold welding tends to lead to better mechanical properties.

Figure 3 also shows that the tensile strengths of samples that were produced with parameter settings AlCu_5, AlCu_9, AlCu_13, AlCu_17, and AlCu_21 are amongst the lowest values. All of these parameter settings included an offset of 3.0 mm into the aluminum side. Since the tool pin has a diameter of 6 mm, no scratching of the copper workpiece should have taken place, leading to an insufficient material mixing. Consequently, the joint strength can only be attributed to an adhesive bonding of the base materials. In order to follow up this consideration, further examination is given in Subsection 3.5 by means of metallographic analyses.

The results of the electrical resistance measurements are given in Figure 4. The diagram shows that the averaged electrical resistances for the 25 parameter combinations are at a level of approximately 5.7 µΩ. Since this value corresponds to the resistance average of both base materials, it can be concluded that the mass proportions of aluminum and copper in the joining area are balanced and that welds with excellent current-carrying behavior have been produced. This observation confirms a good choice of the considered parameter window for the experimental design.
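The measured level of roughly 5.7 µΩ is close to what one would expect for a 30 mm span of this cross-section carrying, on average, equal shares of aluminum and copper. The sketch below makes that estimate from R = ρL/A, using the sample geometry stated in the methods (40 mm width, 3 mm thickness, 30 mm tip spacing); the resistivity values are typical handbook figures and are assumptions here, not data from the paper.

```python
# Minimal sanity check: expected four-point resistance over a 30 mm span of
# the 40 mm x 3 mm cross-section, using R = rho * L / A.
# Resistivities are typical handbook values (assumptions, not from the paper).

RHO_CU = 1.72e-8       # ohm*m, approx. for EN CW004A (Cu-ETP), annealed
RHO_AL = 2.80e-8       # ohm*m, approx. for EN AW-1050A

LENGTH = 0.030         # m, spacing between measuring tips
AREA = 0.040 * 0.003   # m^2, sample width times thickness

def resistance(rho, length=LENGTH, area=AREA):
    return rho * length / area

r_cu = resistance(RHO_CU)
r_al = resistance(RHO_AL)
print(f"Cu segment : {r_cu * 1e6:.2f} micro-ohm")
print(f"Al segment : {r_al * 1e6:.2f} micro-ohm")
# A weld region containing roughly equal shares of Al and Cu should fall
# between the two single-metal values, near their mean:
print(f"Al/Cu mean : {(r_cu + r_al) / 2 * 1e6:.2f} micro-ohm")
```

With these assumed resistivities the mean comes out near 5.7 µΩ, which is consistent with the interpretation in the text that the mass proportions of aluminum and copper in the joining area are balanced.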
A comparison of the results for tensile testing with electrical resistance measurements shows that the electrical resistances are subject to significantly lower deviations than the resultant tensile strength. Consequently, it is evident that the target figure electrical resistance is more robust against parameter changes than the tensile strengths of the friction stir welds.

Analysis of the Taguchi Experimental Plan

After the tensile strengths and the electrical resistances of the friction stir welds from the Taguchi experimental plan have been compared with each other and initial dependencies have been identified, the influence of each factor on the respective target figure is presented by the main effect plots in Figure 5.
From the main effect plots for the target figure tensile strength it can be seen that only the offset has a steady influence, whereby the tensile strength decreases with larger offsets. In contrast, the traverse speed and the tool rotation speed do not have a steady effect on the tensile strength. Due to the fact that the joint properties, and thus also the tensile strength, depend essentially on the heat input and the associated n/v-ratio during FSW, the effects of the factors traverse speed and tool rotation speed are difficult to separate from each other clearly. Instead, the interaction of these two factors, which is expressed by the n/v-ratio, is crucial for the joint quality. Consequently, no clear correlation between traverse speed and tensile strength or tool rotation speed and tensile strength results from the main effect plots. However, it should be noted that the most powerful levels for these two parameters (traverse speed 700 mm/min and tool rotation speed 400 rpm) result in an n/v-ratio of 0.57 1/mm. This value is relatively low compared to the highest n/v-ratio from the Taguchi experimental plan (1.2 1/mm). Hence, the observation that relatively cold welds achieve better tensile strengths could be confirmed by the main effect plots.

In accordance with the main effect plots for the target figure tensile strength, the main effect plots for the mean of electrical resistance also show a steady influence from the offset and an unsteady influence from the factors traverse speed and tool rotation speed. In addition, it can be seen that for all three factors the courses for the tensile strength are nearly contrary to those for the electrical resistance. Since each tensile strength maximum results in a minimum electrical resistance, the following optimal welding parameters can be considered to maximize the tensile strength and, at the same time, minimize the electrical resistance of the friction stir welds.
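Main effect plots of the kind discussed above are computed by averaging the response over every run at a given factor level. The sketch below shows this reduction in Python, together with the "larger-is-better" signal-to-noise ratio commonly used in Taguchi analysis; the runs and response values are placeholders, not the measured strengths of this study.

```python
# Minimal sketch: main-effect means and larger-is-better S/N ratios for a
# Taguchi-style plan. The plan and responses below are hypothetical.
import math
from collections import defaultdict

# Each run: (factor levels, replicate tensile strengths in MPa) - placeholders.
runs = [
    ({"v": 700, "n": 400, "offset": 1.4}, [78.0, 79.5, 77.8]),
    ({"v": 700, "n": 800, "offset": 3.0}, [52.1, 49.7, 55.0]),
    ({"v": 300, "n": 400, "offset": 2.2}, [70.3, 68.9, 71.2]),
    # ... remaining runs of the plan ...
]

def sn_larger_is_better(values):
    """Taguchi S/N ratio for a 'larger is better' response (in dB)."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in values) / len(values))

def main_effects(runs, factor):
    """Mean response for every level of one factor."""
    sums, counts = defaultdict(float), defaultdict(int)
    for levels, ys in runs:
        sums[levels[factor]] += sum(ys) / len(ys)
        counts[levels[factor]] += 1
    return {lvl: sums[lvl] / counts[lvl] for lvl in sums}

if __name__ == "__main__":
    for factor in ("v", "n", "offset"):
        effects = main_effects(runs, factor)
        best = max(effects, key=effects.get)   # for resistance, use min instead
        print(factor, effects, "-> best level:", best)
    print("S/N of first run:", round(sn_larger_is_better(runs[0][1]), 2), "dB")
```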
Scaling of Optimal Welding Parameters

In order to verify the optimal welding parameters to maximize the tensile strength and minimize the electrical resistance, which were determined by the analysis of the Taguchi experimental plan, welding tests were carried out using these parameter settings. Since the aim of the parametric investigation is to maximize the welding speed, further welding experiments were performed. Therefore, the factors traverse speed and tool rotation speed were scaled up, while maintaining the n/v-ratio of 0.57 1/mm and using a constant offset of 1.4 mm. Table 8 gives an overview of the parameter combinations used. Three welds were produced per parameter setting. The tensile strengths and electrical resistances resulting from these parameter settings are presented in Figure 6. The diagram shows that only parameter combination AlCu_opt_1 leads to tensile strengths on the level of the aluminum base material. The average tensile strength for this parameter set is even higher, by 1.98 MPa, than that for the most effective factor-level combination from the Taguchi experimental design (AlCu_1). Also, the resulting electrical resistance for parameter combination AlCu_opt_1 is 0.03 µΩ lower than that for the most low-resistance parameter combination from the Taguchi experimental design (AlCu_22). The conclusion is that the optimal welding parameters determined by the main effect diagrams could be verified. On the other side, it can be seen that scaling up the traverse speed and the tool rotation speed leads to an almost linear decrease in tensile strength and to an increase in electrical resistance. Therefore, it is evident that scaling up these parameters is not feasible without a loss in joint quality. However, based on the welding experiments carried out within this study, it was proved that significantly higher welding speeds than those specified in the state of the art can be achieved.

Metallographic Analysis of the Al-Cu Friction Stir Welds and Hardness Testing

In order to be able to understand the observations made in the previous subsections, metallographic analyses were carried out on selected specimens. The first objective within this subsection is to explain why welds with lower offset lead to higher weld quality. Then, it is to be shown why parameter settings that represent lower heat input achieve friction stir welds with better tensile strengths. In addition, the reduced joint qualities when scaling up the factors traverse speed and tool rotation speed while maintaining the optimal n/v-ratio will be discussed.

As could be determined during the evaluation of the mechanical and electrical joint properties and the analysis of the Taguchi experimental plan, both the tensile strength and the electrical resistance are clearly dependent on the choice of the offset. Considering the macrostructures shown in Figure 7, it can be seen that the quantity as well as the size of copper particles stirred into the aluminum side vary depending on the chosen offset. Furthermore, it can be seen from the figure that with an offset of 3 mm there was no scratching of the copper through the tool pin. As a result, no copper particles were stirred into the aluminum side. These findings lead to the conclusion that more intense material mixing, which is achieved by smaller offsets, leads to better electrical and mechanical properties. However, it should be said that, as shown by Xue et al. [9] and Akinlabi [7], the offset should not be too small, in order to ensure a beneficial material flow.
In order to explain why parameter settings representing lower heat input tend to achieve higher tensile strengths than hot welds, the formation of intermetallic compounds (IMC) was investigated by scanning electron microscopy. Backscattered electron (BSE) images from the stir zone were taken for welds that were performed using parameter combinations AlCu_1 (n/v-ratio 0.4 1/mm), AlCu_opt_1 (n/v-ratio 0.57 1/mm), and AlCu_10 (n/v-ratio 0.86 1/mm). These three parameter sets include an offset of 1.4 mm, and thus differ only by the heat input. Figure 8 shows that IMC could not be detected using parameter combination AlCu_1, neither at the Al-Cu interface nor at the copper particle stirred into the aluminum side. From this it can be concluded that no IMC were formed or that these phases are too small to be detected by the SEM. Taking into account the BSE images in Figure 9 for parameter combinations AlCu_opt_1 and AlCu_10, it can be seen that at both welds a continuous layer of IMC was formed at the transition between the examined copper particle and the aluminum matrix. The average thickness of this layer is 150 nm for the specimen that was welded according to parameter combination AlCu_opt_1 (n/v-ratio 0.57 1/mm) and 265 nm for parameter setting AlCu_10 (n/v-ratio 0.86 1/mm). As a result, a correlation between heat input and resulting intermetallic compound formation could be observed. This effect was also shown in previous work by Galvão et al. [14] and Khodir et al. [23]. However, the thickness of the determined IMC layers is so small that an effect of the IMC formation on the resultant tensile strengths is to be excluded, according to publications by Xue et al. [10], Khodir et al. [23], and Schmidt [24]. Due to the low thickness of the respective layers formed, it was not possible to determine an exact composition of the IMC by means of energy dispersive X-ray spectroscopy.
For further investigation of the tensile strength differences between parameter sets representing low or high heat input, Figure 10 shows hardness profiles on cross-sections of welds that were obtained using parameter settings AlCu_1 (n/v-ratio 0.4 1/mm) and AlCu_10 (n/v-ratio 0.86 1/mm). By means of hardness testing, process-related hardening or softening of the examined welds can be detected, so that any strength-reducing microstructural features that have occurred can be localized. The Vickers hardness of the respective base materials was found to be 37.7 HV 0.1 for the aluminum base material and 80.1 HV 0.1 for the copper base material. As shown for parameter setting AlCu_1 (n/v-ratio 0.4 1/mm) in Figure 10a, both in the stir zone (SZ) and on both sides in the thermo-mechanically affected zone (TMAZ), there is a significant increase in hardness compared to the respective base materials, with a hardness peak of 122 HV 0.1 in the SZ. This increase in hardness is to be explained by the effect of work hardening due to the cold welding parameters. On the other side, for the weld that was obtained using the parameter combination AlCu_10 (n/v-ratio 0.86 1/mm), the peak hardness values are significantly lower. Furthermore, a decrease in hardness can be seen on the aluminum side of the SZ, and the plateau on which the copper bulk material undergoes cold hardening is clearly smaller. Therefore, the effect of recrystallization seems to dominate here.
In order to understand how the mechanisms of work hardening and recrystallization affect a sample produced using the determined optimal parameter combination, the hardness profile shown in Figure 11 was analyzed. It can be seen that the aluminum material in the TMAZ as well as in the SZ is slightly hardened compared to the aluminum base material. A hardening of the copper particles introduced into the aluminum matrix cannot be detected, whereas the plateau on which the copper bulk material undergoes cold hardening is slightly wider than for AlCu_10. Taking into account that parameter combination AlCu_opt_1 achieved the highest tensile strength and the lowest electrical resistance, it is to be concluded that using these parameter settings, the ideal window for sufficient plasticization of the copper and for avoiding excessively high recrystallization in the SZ was determined.

At the end of this subsection, it is aimed to understand why scaling up of the optimal welding parameters while maintaining the n/v-ratio of 0.57 1/mm could not be realized without losses in mechanical and electrical properties. An explanation for this is provided by the cross-sectional macrostructures in Figure 12. From the macrostructures, it becomes clear that the number and size of defects in the welded area increases with increasing traverse speed. While parameter combination AlCu_opt_1 shows a homogeneous distribution of the copper particles without the occurrence of cavities or any other defects, parameter setting AlCu_opt_4 leads to areas with insufficient bonding and strength-reducing tunnel defects in the root of the SZ.
The parameter set AlCu_opt_7 finally leads to a completely open seam root. From this, it can be concluded that although the tool rotation speed has been adjusted according to the feed speed, the material transport in the vertical direction is reduced with increasing welding speeds. Thus, the plasticized material does not have enough time to be stirred behind the tool pin and sufficiently compacted by the tool shoulder. The shorter the time for plasticizing and stirring the materials, the more the inertia of the joining partners promotes the formation of defects in the weld.
[25,26], the heat exchange mechanisms during the friction stir welding process need to be considered.The authors have found that the heat dissipation into the clamping device and the preheating of material in front of the welding tool vary depending on the traverse speed and the tool rotation speed.Actually, it is stated that the parameters "traverse speed" and "tool rotation speed" have a different influence on the heat exchange mechanisms, and thus, on the resulting temperature in the welding area.Consequently, it is to say that using the n/v-ratio as a heat index allows only a rough comparison of the heat input between different parameter settings in a limited range of process parameters.completely open seam root.From this, it can be concluded that although the tool rotation speed has been adjusted according to the feed speed, the material transport in the vertical direction has been reduced with increasing welding speeds.Thus, the plasticized material does not have enough time to be stirred behind the tool pin and sufficiently compacted by the tool shoulder.The shorter the time for plasticizing and stirring the materials is, the more the inertia of the joining partners promotes the formation of defects in the weld.Moreover, as shown by two publications from Lambiase et al. [25,26], the heat exchange mechanisms during the friction stir welding process need to be considered.The authors have found that the heat dissipation into the clamping device and the preheating of material in front of the welding tool vary depending on the traverse speed and the tool rotation speed.Actually, it is stated that the parameters "traverse speed" and "tool rotation speed" have a different influence on the heat exchange mechanisms, and thus, on the resulting temperature in the welding area.Consequently, it is to say that using the n/v-ratio as a heat index allows only a rough comparison of the heat input between different parameter settings in a limited range of process parameters. Conclusions In this study, a parametric investigation on dissimilar friction stir butt welding of 3 mm thick aluminum EN AW-1050A and copper EN CW004A was performed, with the objective to maximize the welding speed at which joints with excellent mechanical and electrical performance can be produced.After designing a Taguchi experimental plan, welding tests were carried out and dependencies between input parameters and the target figure tensile strength and electrical resistance were determined.1.The target figure electrical resistance is more robust against parameter changes than the tensile strengths of the friction stir welds.2. It was found that the lowest offset in the considered parameter window (1.4 mm) led to the best mechanical and electrical properties.Cross-sectional macrostructures have proved that more intense material mixing when using low offsets improved the performance of the joint.3. The main effect plots did not show a steady effect of the factors traverse speed and tool rotation speed on the resultant tensile strength and electrical resistance.Instead, it was shown that the interaction of these two factors, which was expressed by the n/v-ratio, is crucial for the quality of the friction stir welds.4. It was recognized that cold welds, which were represented by a low n/v-ratio, tended to lead to better mechanical and electrical properties.This observation could be confirmed by the analysis of the Taguchi experimental plan. 
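Since the n/v-ratio serves throughout this section as a simple heat index, a minimal Python sketch of how it is computed may be helpful; the 400 rpm and 700 mm/min values are those reported for the optimal parameter set AlCuopt_1 in the conclusions below, while the doubling factor in the last line is purely hypothetical and only illustrates that scaled-up parameter sets share the same ratio.

```python
# n/v-ratio ("heat index"): tool rotation speed n [rev/min] divided by
# traverse speed v [mm/min], giving revolutions per millimetre of weld seam.
def n_v_ratio(rotation_speed_rpm: float, traverse_speed_mm_min: float) -> float:
    return rotation_speed_rpm / traverse_speed_mm_min

# Optimal parameter set AlCuopt_1 reported in this study: 400 rpm at 700 mm/min.
print(round(n_v_ratio(400, 700), 2))          # 0.57 1/mm

# A hypothetical scaled-up set (factor 2 chosen only for illustration) keeps the
# same heat index, yet leaves less time per millimetre of seam for stirring and
# compaction, which is consistent with the defect formation discussed above.
print(round(n_v_ratio(2 * 400, 2 * 700), 2))  # still 0.57 1/mm
```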
Conclusions
In this study, a parametric investigation on dissimilar friction stir butt welding of 3 mm thick aluminum EN AW-1050A and copper EN CW004A was performed, with the objective of maximizing the welding speed at which joints with excellent mechanical and electrical performance can be produced. After designing a Taguchi experimental plan, welding tests were carried out and the dependencies between the input parameters and the target figures tensile strength and electrical resistance were determined.
1. The target figure electrical resistance is more robust against parameter changes than the tensile strength of the friction stir welds.
2. It was found that the lowest offset in the considered parameter window (1.4 mm) led to the best mechanical and electrical properties. Cross-sectional macrostructures proved that the more intense material mixing when using low offsets improved the performance of the joint.
3. The main effect plots did not show a steady effect of the factors traverse speed and tool rotation speed on the resultant tensile strength and electrical resistance. Instead, it was shown that the interaction of these two factors, expressed by the n/v-ratio, is crucial for the quality of the friction stir welds.
4. It was recognized that cold welds, represented by a low n/v-ratio, tended to lead to better mechanical and electrical properties. This observation could be confirmed by the analysis of the Taguchi experimental plan.
5. The effect of IMC on the resultant joint properties could be excluded. Instead, the varying tensile strength of welds obtained with low or high heat input could be explained by the results of Vickers hardness testing.
6. It was found that the optimal welding parameters for sufficient plasticization of the copper and for avoiding excessively high recrystallization in the SZ were a traverse speed of 700 mm/min, a tool rotation speed of 400 rpm, and an offset of 1.4 mm. Friction stir welds that were manufactured using this parameter combination failed in the weaker aluminum base material during tensile testing and achieved an electrical resistance that was exactly between the resistances of the respective base materials.
Scaling up the traverse speed and the tool rotation speed while maintaining the optimal n/v-ratio of 0.57 1/mm could not be realized without losses in mechanical and electrical joint properties. However, the investigations carried out showed that joints with a performance similar to that of the base materials used can be obtained at significantly higher welding speeds than reported in the relevant literature.
Figure 1. Clamping setup used for FSW experiments.
Figure 3. Tensile strengths of the friction stir welds from the Taguchi experimental plan.
Figure 4. Electrical resistances of the friction stir welds from the Taguchi experimental plan.
Figure 5. Main effect plots for mean of tensile strength and mean of electrical resistance.
Figure 6. Tensile strengths and electrical resistances for welding experiments with optimal n/v-ratio.
Figure 9. (a) BSE image of the Al-Cu friction stir weld produced with parameter setting AlCuopt_1 (Cu particle); (b) BSE image of the Al-Cu friction stir weld produced with parameter setting AlCu_10 (Cu particle).
Figure 10. Hardness profiles on cross-sections of Al-Cu joints: (a) produced with parameter setting AlCu_1; (b) produced with parameter setting AlCu_10.
Figure 11. Hardness profile on the cross-section of the Al-Cu joint produced with parameter setting AlCuopt_1.
Table 3. Constant process parameters for the welding experiments.
Table 4. Structure of the Taguchi L25 design with three factors and five levels.
Table 5. Factors and their levels.
Table 7. Tensile strength and electrical resistance of the base materials used.
Table 8. Parameter combinations for welding experiments with optimal n/v-ratio.
2019-04-01T13:12:50.882Z
2019-01-10T00:00:00.000
{ "year": 2019, "sha1": "3def4ce6b27da4cfc15fad889e35944543e40cbc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4701/9/1/63/pdf?version=1547458811", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "3def4ce6b27da4cfc15fad889e35944543e40cbc", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
2251909
pes2o/s2orc
v3-fos-license
Spatial relationship between Taenia solium tapeworm carriers and necropsy cyst burden in pigs Background Taenia solium, a parasite that affects humans and pigs, is the leading cause of preventable epilepsy in the developing world. Geographic hotspots of pigs testing positive for serologic markers of T. solium exposure have been observed surrounding the locations of human tapeworm carriers. This clustered pattern of seropositivity in endemic areas formed the basis for geographically targeted control interventions, which have been effective at reducing transmission. In this study, we further explore the spatial relationship between human tapeworm carriers and infected pigs using necroscopic examination as a quantitative gold-standard diagnostic to detect viable T. solium cyst infection in pigs. Methodology/Principal findings We performed necroscopic examinations on pigs from 7 villages in northern Peru to determine the number of viable T. solium cysts in each pig. Participating humans in the study villages were tested for T. solium tapeworm infection (i.e., taeniasis) with an ELISA coproantigen assay, and the distances from each pig to its nearest human tapeworm carrier were calculated. We assessed the relationship between proximity to a tapeworm carrier and the prevalence of light, moderate, and heavy cyst burden in pigs. The prevalence of pig infection was greatest within 50 meters of a tapeworm carrier and decreased monotonically as distance increased. Pigs living less than 50 meters from a human tapeworm carrier were 4.6 times more likely to be infected with at least one cyst than more distant pigs. Heavier cyst burdens, however, were not more strongly associated with proximity to tapeworm carriers than light cyst burdens. Conclusion/Significance Our study shows that human tapeworm carriers and pigs with viable T. solium cyst infection are geographically correlated in endemic areas. This finding supports control strategies that treat humans and pigs based on their proximity to other infected individuals. We did not, however, find sufficient evidence that heavier cyst burdens in pigs would serve as improved targets for geographically focused control interventions. Introduction Taenia solium, the pork tapeworm, is a parasite that affects 50 million people worldwide [1]. When the parasite infects the human central nervous system, the result is a severe neurological condition called neurocysticercosis (NCC), which may lead to seizures, headaches, and stroke. In Latin America alone, 1.3 million people have epilepsy from NCC [2], and, in rural Peru, 1 in 200 people suffer from epilepsy caused by NCC [3]. T. solium is transmitted between humans and pigs, and is commonly found in rural areas of low income countries where access to sanitation is limited and free-roaming pigs have access to human feces. An adult tapeworm residing in the human gut produces millions of infectious eggs over its lifespan that are expelled through the feces of the infected human host (a condition called taeniasis). When infected humans defecate outside, T. solium eggs may be consumed by free-ranging pigs and develop into larval cysts that lodge in the soft tissue of the pigs (a condition called porcine cysticercosis). Humans may, in turn, be infected with the intestinal tapeworm by consuming these cysts in undercooked pork. Transmission of the T. 
solium parasite varies considerably by location, as significant variations in prevalence have been observed both at a regional scale [3][4][5][6], and among households within a community [7,8]. Detecting these spatial patterns of T. solium infection, whether on a regional scale or at the household level, is an important step in the development of effective control strategies. The most common spatial pattern analysis that has been used to study T. solium has been the detection of clusters of a single type of infection (e.g., porcine cysticercosis). To this end, studies in both Latin America [4,9] and Africa [7,10] have found that cases of porcine cysticercosis tend to be clustered within the same households and among neighboring households within the same communities. Such studies that identify univariate clusters of infection are important first steps in understanding disease distribution, and may be used to prioritize the allocation of scarce resources for prevention. Other studies have sought to examine these clusters of cysticercosis in relation to the locations of human tapeworm carriers as potential sources of infection. Examining the precise spatial relationship between human and porcine hosts allows for the investigation of physical and biologic mechanisms dictating T. solium transmission, which can then be used to design spatially explicit control strategies. In separate studies conducted in endemic regions of Peru, Garcia et al. [4] and Lescano et al. [11] found that living in the same household as a tapeworm carrier was an important risk factor for cysticercosis seropositivity among humans. Similarly, clusters of porcine cysticercosis have been found to occur in hotspots surrounding human tapeworm carriers. Lescano et al. found that pigs living less than 50 meters from a tapeworm carrier were much more likely to be seropositive than more distant pigs [12], and O'Neal et al. found that the prevalence of human taeniasis was significantly increased among individuals residing within 100 meters of an infected pig [13]. The results of these latest distance analyses directly led to the development of a control methodology known as "ring strategy". Ring strategy targets anti-helminthic treatment to only those humans and pigs that reside within 100 meters of a positively identified pig. This targeted approach to treatment was developed as an alternative to mass anti-helminthic treatment in Peru, and was based on the assumption that pig and human disease are spatially dependent and are likely to be found in close proximity to each other. Ring interventions have now been trialed in endemic communities of Peru, and have shown early success, with significant reductions in pig seroincidence observed in intervention communities [14]. The strong associations observed between cysticercosis infection and tapeworm carriers in previous spatial analyses, together with the early success of ring strategies, suggest that location and proximity are important determinants of T. solium transmission. Despite this knowledge, important gaps remain in our understanding of the spatial dynamics of T. solium transmission that impede our ability to understand transmission mechanisms and improve upon existing control strategies. First, most studies that have investigated the spatial association between human tapeworm carriers and infected pigs have relied on testing pig sera for the presence of antibodies against T. solium, which does not distinguish active cyst infection from cleared infection or exposure to T.
solium eggs without infection [15]. Necroscopic examination of pigs, which provides a count of the total number of viable T. solium cysts in pigs, is the most sensitive and specific diagnostic currently available for T. solium cyst infection in pigs, and would allow us to draw more confident conclusions about the spatial relationships that have been observed. In addition, previous studies have been limited to only assessing the presence or absence of pig infection based on serologic markers. Counting the total number of T. solium cysts in necropsied pigs provides a quantitative measure of the degree of infection. A spatial analysis of cyst burden would allow us to detect a biologic gradient between the degree of infection (i.e., the number of cysts counted on pig necropsy) and proximity to a tapeworm carrier. This association could provide important insight into the environmental and biologic mechanisms driving T. solium egg dispersion and cyst infection, and may lead to the identification of more specific diagnostic targets (e.g., pigs of a specific cyst burden) for ring strategies. In order to fill these knowledge gaps, we performed a distance analysis examining the relationship between T. solium cyst infection in pigs and their distance to infected human tapeworm carriers in an endemic region of northern Peru. Specifically, we assessed this spatial relationship at different burdens of cyst infection and at different distances. Our objectives were to determine if pigs with heavier cyst burdens were more likely to be found in close proximity to tapeworm carriers, and to determine if a critical distance threshold could be identified at which the relationship between human tapeworm carriers and infected pigs could no longer be observed. Based on the positive findings of previous studies, we hypothesized that a strong association between infection in pigs and their proximity to human tapeworm carriers would exist, and would become stronger at heavier burdens of infection. Methods Data for this study were collected in 2015 as part of a trial testing a community-driven ring treatment strategy for the control of T. solium in 7 villages of northern Peru. By the time data for this study were collected, pigs in these communities had not received antiparasitic treatment for at least 9 months, and the human population had not yet been intervened upon. At the conclusion of the trial, all humans were offered testing and treatment for taeniasis, and seropositive pigs were purchased and euthanized for necroscopic examination of cyst burden. Human participants We performed a door-to-door survey of all households in the 7 villages and attempted to recruit all human residents older than 2 years of age for participation. Consenting participants were interviewed for household and demographic characteristics. Survey questions included the age and sex of each pig, the presence and condition of a pig corral on the property, the source of household drinking water, and human waste disposal (open field defecation or latrine). We used handheld GPS receivers (GeoExplorer II; Trimble, Sunnyvale, CA) with post-processed differential correction for sub-meter accuracy to record a single set of coordinates in front of each household. These coordinates were used to represent the locations of both human and pig participants in each household.
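The household coordinates described above are the basis for the pig-to-nearest-carrier distances analyzed later in the paper; as a rough illustration of how such distances can be derived from projected coordinates, the following Python sketch uses scipy's cKDTree. The coordinate values and array names are hypothetical and are not taken from the study data, and this is not the authors' actual GIS workflow (the study used ArcMap).

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical projected (UTM) coordinates in metres: one (x, y) pair per
# tapeworm-carrier household and one per pig household.
carrier_xy = np.array([[615200.0, 9345100.0],
                       [615840.0, 9344760.0]])
pig_xy = np.array([[615200.0, 9345100.0],   # same household as a carrier -> distance 0
                   [615600.0, 9344900.0],
                   [616900.0, 9343500.0]])

# Euclidean distance from each pig household to its nearest carrier household.
tree = cKDTree(carrier_xy)
dist_m, _ = tree.query(pig_xy, k=1)
print(np.round(dist_m, 1))
```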
At the conclusion of the trial period, all participants were presumptively treated for taeniasis with a single oral dose of niclosamide according to their weight (11-34 kg received 1 g; 35-50 kg received 1.5 g; > 50 kg received 2 g), and were instructed to collect their next stool. Niclosamide was chosen for mass treatment because it is highly effective against taeniasis [16,17] and, unlike other available chemotherapies, does not affect the cystic stage of T. solium, an effect that could cause neurological symptoms in undiagnosed cases of NCC [18]. Post-treatment stool samples were first tested with enzyme-linked immunosorbent assays for T. solium coproantigens (CoAg-ELISA) as previously described [19]. Reactive samples (optical density ratio (ODR) > 7.5%) were examined microscopically for the presence of Taenia spp. eggs in stool using the test tube spontaneous sedimentation technique [20], and humans with reactive samples were followed up after two weeks with further testing and treatment to confirm clearance. Other intestinal parasites that were detected during stool screening were provided appropriate treatment through the local health center. For this analysis, we considered humans to be positive for T. solium taeniasis if Taenia spp. eggs were visualized in stool or the CoAg-ELISA test produced an ODR greater than 20%. We chose to use ODR > 20% as the case definition in this analysis to reduce the rate of false positives due to non-specific binding and cross-reaction with other Taenia spp., which may occur at low ODR values [21,22]. Swine participants and necropsy All pigs older than four weeks of age were eligible for participation in this study. Serum samples were collected from all eligible pigs in the study villages at the conclusion of the year-long trial, and were analyzed by enzyme-linked immunoelectrotransfer blot (EITB) to detect the presence of serum antibodies that indicate exposure to T. solium eggs. Briefly, the EITB assay measures reactivity of pig serum to seven lentil-lectin purified glycoprotein antigens isolated from native cysts [23]. Reaction to 1 or more of these glycoprotein antigen bands is highly sensitive for detecting active cyst infection (89%) but lacks specificity (48%), while reaction to 4 or more bands is less sensitive (61%) but has improved specificity (92%) [15]. Given that the expected prevalence of active cyst infection among pigs in this region of Peru is around 5-10%, the predictive value of a negative EITB assay is high (Garcia et al. found that 99% (144 out of 146) of seronegative pigs in this region were necropsy-negative [17]). Results from the EITB assay were used to select pigs for necroscopic examination. In order to prevent the unnecessary sacrifice of uninfected pigs, we attempted to purchase only pigs with one or more positive EITB bands for necropsy. Pigs with negative serologic results were assumed to contain zero cysts, as negative EITB results are highly predictive of negative necropsy results [17]. Of the 791 pigs tested from the seven study villages, 419 (53%) seropositive pigs were identified. Study staff attempted to purchase all seropositive pigs for necroscopic examination; however, due to the reluctance of villagers to sell their animals, only 146 (35%) of these seropositive pigs could be purchased. Purchased pigs were anesthetized and humanely euthanized. To determine the number of viable T.
solium cysts in each necropsied pig, the entire carcass was dissected and systematically inspected using fine tissue slices of less than 0.5 cm. Viable cysts were those with well-delineated thin-walled cystic structures containing clear vesicular fluid and a visible white protoscolex; however, a formal bile test was not conducted to confirm viability. Degenerated and calcified cysts, while enumerated by examiners, were not included in this analysis. For pigs with particularly dense cyst burdens, a weighed sample of forelimb muscle was counted for cysts and extrapolated to estimate the total body burden. Our final analysis was carried out on a sample of 515 pigs (65% of 791 total pigs). This sample was composed of the 146 (28%) seropositive pigs that study staff purchased from pig owners for necroscopic examination and 369 (72%) seronegative pigs for which a cyst count of zero was imputed. The remaining 272 seropositive pigs were excluded from our sample because necroscopic examination was not performed and cyst burden could not be estimated. Statistical analysis In ArcMap 10.3 (ESRI; Redlands, CA), we plotted the household locations of study participants (humans and pigs) using a transverse Mercator projection (UTM Peru 17S, 1996). We then calculated the Euclidean distance in meters from each pig's household to the nearest human tapeworm carrier household. Pigs living in the same household as a human tapeworm carrier were given a distance value of zero. Distances were categorized into bins of < 50 meters, 50-500 meters and > 500 meters. These groupings were chosen because they produced a well-delineated gradient of infection prevalence at increasing distances. The 100 meter distance threshold was not included in our results because no effect was observed among pigs 50-100 meters from a tapeworm carrier. Due to the lack of normality in the dependent variable (cyst burden), we elected to categorize this variable into bins based on the following schema: heavy infection (≥ 100 viable cysts), moderate infection (10-99 viable cysts), light infection (1-9 viable cysts) and no infection (zero viable cysts or negative EITB serology). We used logistic regression models with binary outcomes to examine predictors for three different cyst burden thresholds (≥ 1 cyst, ≥ 10 cysts, and ≥ 100 cysts), using pigs with no infection (zero viable cysts or negative EITB serology) as the reference group in all models. Logistic regression models with robust sandwich estimators from the generalized estimating equations (GEE) family were used to account for household clustering (i.e., dependence between pigs from the same household). We first created bivariate models for pig- and household-level predictors and selected covariates to include in our final multivariable models if they were significant (α = 0.05) in any of the three cyst burden models. Ethics This study was reviewed and approved at Oregon Health and Science University, Portland, Oregon, USA, by the Institutional Review Board (protocol #10116) and the Institutional Animal Care and Use Committee (protocol #2843). It was also reviewed and approved at Universidad Peruana Cayetano Heredia, Lima, Peru, by the Institutional Ethics Committee (protocol #61326) and the Institutional Committee for the Ethical Use of Animals (protocol #61326). Written informed consent was obtained from all human participants. The consent of an adult or guardian was required for the participation of children <18 years old.
Treatment of animals adhered to the Council for International Organizations of Medical Sciences (CIOMS) International Guiding Principles for Biomedical Research Involving Animals. Pigs were humanely euthanized by administering 0.1 mg/kg of xylazine with 5 mg/kg of ketamine intravenously to achieve deep anesthesia, followed by injection of 100 mg/kg of sodium pentobarbital. Human population The 7 participating villages ranged in population from 130 to 596 human inhabitants, for a total population of 1,890 individuals (Table 1). 32% of the population reported practicing open field defecation, and 63% of the population reported raising pigs. In total, 1,420 (75%) participants submitted stool samples for parasite testing. Residents who declined to submit stool samples were more likely to be male, younger, and practice open defecation (S1 Table). A geographic analysis of participating and non-participating households across the 7 study villages revealed no concerning spatial patterns of non-participation (Ripley's K1-K2 test for random labelling [24,25], S1 Appendix). Swine serology Serum samples were collected from all eligible pigs in the study villages (n = 791 pigs). Overall, 53% tested positive for antibodies against T. solium cysts (at least one positive EITB band), and seropositivity ranged from 38% to 69% among the seven study villages (Table 1). 9% of the pigs had 4 or more positive bands, and the prevalence of 4 or more bands ranged from 1% to 20% among the study villages. Swine characteristics and necropsy The 515 pigs included in this study consisted of 146 (28%) pigs that were necropsied and 369 (72%) seronegative pigs for which a cyst count of zero was imputed. Among study pigs, the median age was 8 months and 54% were female. In terms of T. solium cyst burden, 471 (92%) of the study pigs were uninfected, 26 (5%) pigs had light infection (1-9 cysts), 8 (2%) pigs had moderate infection (10-99 cysts), and 10 (2%) pigs had heavy infection (≥ 100 cysts). The prevalence of infection was greatest among pigs living within 50 meters of a tapeworm carrier and decreased proportionally at greater distances. The prevalence of at least one viable cyst was 15.6% (12 out of 77) at < 50 meters from a tapeworm carrier, 8.3% (27 out of 325) between 50 and 500 meters, and 4.4% (5 out of 113) at > 500 meters (p < 0.01 for trend) (Fig 2). Of the 12 infected pigs living within 50 meters of a tapeworm carrier, 3 (25%) pigs resided in the same household as the tapeworm carrier. Overall, the prevalence of T. solium infection among pigs owned by tapeworm carriers was 12.5% (3 out of 24). This was not significantly different from the prevalence of pig infection among all pigs living within 50 meters of a tapeworm carrier. The prevalence of moderate-to-heavy cyst infection (≥ 10 viable cysts) and heavy cyst infection (≥ 100 viable cysts) showed similar trends of increasing prevalence at closer distances to tapeworm carriers; however, the distance trend for heavy infection was non-significant. The prevalence of pigs with ≥ 10 viable cysts was 6.5% (5 out of 77) at < 50 meters, 3.7% (12 out of 325) between 50 and 500 meters, and 0.9% (1 out of 113) at > 500 meters (p = 0.04 for trend), while the prevalence of heavy infection (≥ 100 viable cysts) was 3.9% (3 out of 77) at < 50 meters, 1.8% (6 out of 325) between 50 and 500 meters, and 0.9% (1 out of 113) at > 500 meters (p = 0.15 for trend).
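Before the regression results below, it may help to sketch the type of model described in the Statistical analysis section (distance bands plus a GEE logistic regression clustered on household). The following Python example is illustrative only: the column names, the synthetic data, and the exchangeable working correlation structure are assumptions, not the authors' actual code, variable names, or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analysis data set: one row per pig, with illustrative column names.
rng = np.random.default_rng(0)
n = 515
df = pd.DataFrame({
    "household_id": rng.integers(0, 200, n),        # clustering unit
    "dist_to_carrier_m": rng.uniform(0, 1500, n),   # metres to nearest tapeworm carrier
    "age_months": rng.integers(1, 36, n),
})
# Distance bands used in the paper: <50 m, 50-500 m, >500 m.
df["dist_band"] = pd.cut(df["dist_to_carrier_m"], bins=[-1, 50, 500, np.inf],
                         labels=["<50m", "50-500m", ">500m"]).astype(str)
# Synthetic binary outcome (infected with at least one viable cyst), for illustration only.
df["infected"] = rng.binomial(1, 0.08, n)

# GEE logistic regression with robust (sandwich) standard errors to account for
# clustering of pigs within households; the exchangeable working correlation is an
# assumption, as the paper does not state which structure was used.
model = smf.gee(
    "infected ~ C(dist_band, Treatment(reference='>500m')) + age_months",
    groups="household_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(np.exp(result.params))  # odds ratios relative to pigs >500 m from a carrier
```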
Logistic regression When examined in bivariate logistic regression, only two predictors, distance to the nearest human tapeworm carrier and the age of the pig, were significantly associated with cyst infection. Pigs residing within 50 meters of a tapeworm carrier were significantly more likely to be infected than pigs living more than 500 meters from a tapeworm carrier (Table 2). This association increased in strength between pigs with at least one viable cyst (OR = 4.6; 95% CI: 1.4, 15.4) and 10 or more viable cysts (OR = 8.7; 95% CI: 1.0, 76.1). The 50 meter distance threshold, however, was not significant when tested for heavily infected pigs (100 or more viable cysts). Table 2. Crude associations between select pig characteristics and cyst burden (n = 515 pigs). Based on our findings from the bivariate analysis, only distance to the nearest tapeworm carrier and pig age were included in the final adjusted GEE logistic regression models (Table 3). Table 3. Multivariable regression of distance to tapeworm carriers and different cyst burdens in pigs. After adjusting for pig age, pigs living less than 50 meters from a human tapeworm carrier were 4.56 times (95% CI: 1.33, 15.6) more likely to be infected with at least one cyst than pigs living more than 500 meters from a tapeworm carrier. In the two models that assessed the effect of distance at heavier cyst burdens, we found strong but non-significant effects of living less than 50 meters from a tapeworm carrier (OR = 7.27, p = 0.07 for ≥ 10 cysts; OR = 4.25, p = 0.21 for ≥ 100 cysts). Similar to our findings in the unadjusted analysis, distances greater than 50 meters from a tapeworm carrier, including pigs living 50 to 100 meters from a tapeworm carrier, were not significantly associated with increased pig infection at any cyst burden. Number of infected pigs The distance bins of < 50 meters, 50-500 meters, and > 500 meters from a tapeworm carrier were chosen for the logistic regression models above because of the strong positive association we found among pigs residing < 50 meters from a tapeworm carrier. In order to compare our results with previously trialed ring interventions, which initiated targeted interventions within 100 meters of infected pigs [13,14], we also evaluated the odds of cyst infection among pigs living < 100 meters from a tapeworm carrier. We found that pigs residing < 100 meters from a tapeworm carrier had significantly increased odds of cyst infection compared to pigs living > 500 meters from a tapeworm carrier (OR = 3.54, 95% CI: 1.09, 11.6); however, this association was driven by the strong association among pigs residing < 50 meters from tapeworm carriers, and was not significant for moderate or heavy cyst burdens. Overall, there were few infected pigs residing 50 to 100 meters from tapeworm carriers (8%, 3 out of 36 pigs), and pigs residing only in the distance band of 50 to 100 meters from tapeworm carriers did not have significantly increased odds of infection compared to the reference distance of > 500 meters (OR = 2.01, 95% CI: 0.31, 12.9) (S2 Table). Discussion In this analysis, we investigated the association between T. solium cyst burden and proximity to human tapeworm carriers in villages of northern Peru where T. solium is endemic. There were a few key findings to highlight in our analysis. First, consistent with our hypothesis, the locations of human tapeworm carriers and pigs infected with viable T.
solium cysts were geographically correlated in the study communities. The prevalence of T. solium cysticercosis decreased monotonically as distance from a human tapeworm carrier increased (15.6%, 8.3%, and 4.4% for pigs living < 50 meters, 50-500 meters, and > 500 meters from a tapeworm carrier, respectively). Our second hypothesis was that proximity to human tapeworm carriers would show a stronger association with pig infection when examined at heavier cyst burdens, thus representing a gradient effect between distance and cyst burden. However, the only statistically significant association observed in the final adjusted models was the comparison of all infected pigs (at least one cyst) with uninfected pigs. At moderate (≥ 10 cysts) and heavy (≥ 100 cysts) cyst burdens, where we expected to find stronger associations, we found that the associations with proximity to human tapeworm carriers became non-significant. Therefore, we were unable to detect any significant biologic gradient between the burden of infection and proximity to tapeworm carriers. Finally, we found that distances less than 50 meters from human tapeworm carriers were associated with an increased prevalence of viable T. solium cyst infection in pigs. Pigs living less than 50 meters from a human tapeworm carrier were 4.6 times more likely to be infected with at least one cyst than pigs living more than 500 meters from a tapeworm carrier. Pigs living more than 50 meters from a tapeworm carrier, including pigs living between 50 and 100 meters from a tapeworm carrier, did not have increased odds of infection at any cyst burden analyzed. These findings are consistent with a previous study that examined the effect of proximity to human tapeworm carriers on the prevalence of pig seropositivity (as measured by EITB serology) in a similar rural region of Peru [12]. Lescano et al. found that the prevalence of T. solium seropositivity in pigs decreased as distance from a human tapeworm carrier increased (69%, 36%, and 18% among pigs living < 50 meters, 50-500 meters, and > 500 meters from a tapeworm carrier, respectively). Additionally, they concluded that the 50 meter areas surrounding human tapeworm carriers represented significant foci of transmission, with pigs living in these rings 3.7 times more likely to be seropositive than pigs living more than 500 meters from a tapeworm carrier. Our study, therefore, contributes additional evidence that pigs living within 50 meters of a human tapeworm carrier in this region are at increased risk for T. solium infection, and uses the gold-standard diagnostic for pig infection to demonstrate the proclivity for tapeworm carriers to shed infectious T. solium eggs in the areas immediately surrounding their homes. Neither our study nor previous work provides evidence that distances greater than 50 meters (e.g., the 100 meter rings used in ring strategies) are associated with an increased risk of T. solium infection. While a distance gradient was observed in both our study and the study referenced above, neither found a significant independent effect of distances greater than 50 meters on pig infection. O'Neal et al. found that 100 meter rings represented significant foci of T. solium transmission in rural Peru; however, this study did not specifically evaluate 50 meter rings to determine which distance was responsible for the increased level of transmission [13].
The idea that 50 meters is a critical distance at which pigs are exposed to increased risk of T. solium infection in this region is consistent with our understanding of pig range and behavior. A GPS tracking study of pigs in rural Peru found that pigs spent an average of 70% of their time within 50 meters of their residence and interacted with human defecation areas nearly 30 minutes per day inside these 50 meter rings (compared to just 7 minutes per day outside of 50 meters) [26]. Based on these findings, we propose that 50 meter rings accurately represent T. solium transmission foci in endemic areas of rural Peru. Our finding that the association between proximity to tapeworm carriers and pig infection did not strengthen at heavier cyst burdens was unexpected. While we observed a strong gradient of infection among heavily infected pigs (the prevalence of heavy infection was 3.9%, 1.8%, and 0.9% at < 50 meters, 50-500 meters, and > 500 meters, respectively), we expected to also find an increase in the strength of the proximity effect at higher cyst burdens. The independent effect of proximity to tapeworm carriers on the odds of moderate (≥ 10 cysts) and heavy (≥ 100 cysts) infection, however, was not statistically significant. There are a few possible explanations for this unexpected finding. First, it is possible that pigs living in close proximity to tapeworm carriers are, in fact, exposed to greater concentrations of T. solium eggs in their residential environments, but that the burden of established cyst infection in pigs is mediated by host factors such as differential immune responses, rather than being driven purely by exposure dose. It is also possible that the lack of an observed dose-response distance effect in this analysis could simply be explained by the small numbers of infected pigs that were represented in our sample. Only 1.9% (10 out of 515) of pigs in our sample were heavily infected, and 3.5% (18 out of 515) had 10 or more cysts. These low cell counts likely made it difficult to observe a significant effect in these groups. Although distance to human tapeworm carriers was an important predictor of T. solium infection among pigs, many infected pigs in our study did not reside in close proximity to a tapeworm carrier. In fact, only 27% (12 out of 44) of the infected pigs in this study lived within 50 meters of a tapeworm carrier, and some infected pigs lived more than 1 km from the nearest identified tapeworm carrier. There are a number of possible explanations for this unexpected finding that should be investigated with future studies. First, due to the cross-sectional nature of this study, we were only able to detect prevalent cases of porcine cysticercosis, meaning that cyst infection in older pigs could have been caused by previously treated or recovered tapeworm carriers that were not detected. This could have caused pigs with older infections to appear further from tapeworm carriers than they were at the time of infection. It is also possible that pigs appearing distant from tapeworm carriers were infected through egg dispersion mechanisms, such as dung beetles and flies, which have been identified as possible mechanical vectors capable of dispersing T. solium eggs over long distances [27][28][29][30][31]. Finally, it is possible that we did not identify all tapeworm carriers in the study area.
25% of human inhabitants did not provide stool specimens for testing, which could represent a significant number of undetected human tapeworm carriers and could explain the appearance of large distance values for some pigs. There are a few additional limitations to our study that must be noted. First, while the CoAg-ELISA is the most sensitive and specific diagnostic available to detect human taeniasis [22], cross-reaction with other Taenia species and non-specific binding of the CoAg-ELISA assay with host factors are known to occur [21,22]. Therefore, it is possible that our use of the CoAg-ELISA assay for T. solium tapeworm detection could have allowed for false positive diagnoses, which may have diluted the observed spatial relationship. Our sensitivity analysis, however, found that this scenario is unlikely to have occurred. Additionally, our use of household coordinates to represent human and pig locations could have misrepresented the true location of transmission events. For example, it is possible that some free-roaming pigs in our study were infected by consuming infected human feces distant from their household location. Previous studies of human and pig behavior in this region, however, have shown that pigs tend to roam in close proximity to their owners' homes, and that open human defecation areas are concentrated near household locations [26], suggesting that transmission events are most likely to occur in the immediate proximity of the household locations used in this study. Finally, in order to reduce unnecessary animal sacrifice, we imputed cyst burdens of zero for seronegative pigs without performing full necropsies on these animals. Based on our knowledge of the sensitivity of EITB serologic tests [15,17], it is possible that a small proportion of these seronegative pigs were truly infected, likely biasing our estimates towards observing weaker associations. Cysticercosis imposes a substantial economic and health burden on populations in endemic rural areas of Peru. The results of this study provide an important first step in understanding the spatial dynamics of cysticercosis infection to support the use of ring strategy in Peru. In order to advance control efforts, however, more research must be done to improve diagnostic tests and to improve our understanding of factors affecting the transmission of T. solium between humans and pigs. Answering these questions and optimizing ring strategy could lead to profound reductions in the burden of cysticercosis and ultimately contribute to elimination in the region. Supporting information S1 Table. Demographic comparison of participants and non-participants (human). (DOCX) S2 Table. Crude associations between cyst burden in pigs and alternative distances from tapeworm carriers (n = 515 pigs). (DOCX) S1 Appendix. Spatial analysis of clustering among participants and non-participants (human). (DOCX)
2018-04-03T03:16:17.869Z
2017-04-01T00:00:00.000
{ "year": 2017, "sha1": "818dbf6280cb13b48d96b968792db13266be924e", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0005536&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fbc07254eafef7430bfc4d881d4231fb9db97f44", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
245986583
pes2o/s2orc
v3-fos-license
A two-strain reaction-diffusion malaria model with seasonality and vector-bias To investigate the combined effects of drug resistance, seasonality and vector-bias, we formulate a periodic two-strain reaction-diffusion model. It is a competitive system for resistant and sensitive strains, but the single-strain subsystem is cooperative. We derive the basic reproduction number $\mathcal {R}_i$ and the invasion reproduction number $\mathcal {\hat{R}}_i$ for strain $i~(i=1,2)$, and establish the transmission dynamics in terms of these four quantities. More precisely, (i) if $\mathcal {R}_1<1$ and $\mathcal{R}_2<1$, then the disease is extinct; (ii) if $\mathcal {R}_1>1>\mathcal{R}_2$ ($\mathcal {R}_2>1>\mathcal{R}_1$), then the sensitive (resistant) strains are persistent, while the resistant (sensitive) strains die out; (iii) if $\mathcal {R}_i>1$ and $\mathcal {\hat{R}}_i>1~(i=1,2)$, then two strains are coexistent and periodic oscillation phenomenon is observed. We also study the asymptotic behavior of the basic reproduction number with respect to small and large diffusion coefficients. Numerically, we demonstrate the phenomena of coexistence and competitive exclusion for two strains and explore the influences of seasonality and vector-bias on disease spreading. Introduction Malaria, one of the most common vector-borne diseases, is endemic in over 100 countries worldwide and causes serious public health problems and a significant economic burden worldwide [1]. Human malaria infection is caused by the genus Plasmodium parasite, which can be transmitted to humans by the effective bites of adult female Anopheles mosquitoes (after taking a blood meal from humans) [2]. According to the 2020 WHO report [3], the global tally of malaria cases was 229 million in 2019, claiming some 409 000 lives compared to 411 000 in 2018. Therefore, a deep understanding of malaria transmission mechanisms will undoubtedly contribute to disease control. Mathematical models have been proposed to study the dynamics of malaria outbreaks in different parts of the world, the earliest model dates back to the Ross-Macdonald model [4,5]. Since then, various mathematical models have been designed to describe and predict the spreading of malaria (see, e.g., [6,7,8,9,10,11,12,13,14]). However, few studies consider the following three biological factors for malaria transmission simultaneously. Vector-bias effect. The vector-bias describes that mosquitoes prefer biting infectious humans to susceptible ones. Kingsolver [15] first introduced a vector-bias model for the dynamics of malarial transmission. Following Kingsolver's work, Hosack et al. [16] included the incubation time in mosquitoes to study the dynamics of the disease concerning the reproduction number. Further, Chamchod and Britton [7] extended the model from previous authors by defining the attractiveness in a different way. Motivated by these works, Wang and Zhao incorporated the seasonality into a vector-bias model with incubation period [11]. Bai et al. formulated a time-delayed periodic reaction-diffusion model with vector-bias effect [12] and found that the ignorance of the vector-bias effect will underestimate the infection risk. All these results show that the vector-bias has an important impact on the epidemiology of malaria. Drug-resistance. Currently, due to the lack of effective and safe vaccine, the main strategy in controlling malaria is drugs. 
However, the use of anti-malarial drugs such as chloroquine, malaraquine, nivaquine, aralen and fansidar results in the appearance and spread of resistance in the parasite population [2,17,18]. This poses a significant challenge to the global control of malaria transmission or eradication of the disease. Therefore, it is essential to investigate the resistance in malaria transmission. Seasonality. It is generally believed that climatic factors such as temperature, rainfall, humidity, wind, and duration of daylight greatly influence the transmission and distribution of vector-borne diseases [19,20,21]. For example, rising temperatures will reduce the number of days required for breeding, and thereby increase mosquito development rates [22]. There have been some mathematical models and field observations suggesting that the strength and mechanisms of seasonality can change the pattern of infectious diseases [22,23]. These results are beneficial for forecasting the mosquito abundance and further effectively controlling the disease. Except these considerations above, human and vector populations have also contributed to the spread of vector-borne diseases [6,9]. Therefore, this paper will investigate a periodic two-strain malaria model with diffusion, which is an extension of autonomous limiting system in [24]. In view of the intrinsic mathematical structure of the model, we choose a time-varying phase space to carry out dynamical analysis. This idea has also been used in [25]. In particular, we prove that no subset forms a cycle on the boundary with the aim of using uniform persistence theory. Its proof is nontrivial (see Theorem 4.3). The rest of this paper is organized as follows. In the next section, we formulate the model and study its well-posedness. In Section 3, we define the basic reproduction number R i and the invasion reproduction numberR i (i = 1, 2) for the sensitive and resistant strains, respectively. In Section 4, we investigate the uniform persistence and extinction in terms of the reproduction numbers. In Section 5, we analyze the asymptotic behavior of the basic reproduction number concerning small and large diffusion coefficients. In Section 6, we conduct numerical study for our model. And the paper ends with a brief discussion. Model formulation Motivated by [12,24], we consider the model with no immunity; that is, individuals who recovered from malaria cannot resist reinfection of the disease and can become susceptible directly. We assume that no susceptible individual or mosquito can be infected by two virus strains. The total human population N h (t, x) is divided into three groups: susceptible S h (t, x), infected individuals with drug sensitive strain I 1 (t, x) and infected individuals with drug resistant strain I 2 (t, x). For the vector population, only adult female mosquitoes can contract the virus due to adult males and immature mosquitoes do not take blood. Thereby, we consider only adult female mosquitoes in our model. The vector population M (t, x) has the epidemiological classes denoted by S v (t, x), I v1 (t, x) and I v2 (t, x) for the susceptible, infected with sensitive and resistant strains, respectively. Assume that all populations remain confined to a bounded domain Ω ⊂ R m (m ≥ 1) with smooth boundary ∂Ω (when m ≥ 1). Following the line in [12], we suppose that the density of total human population N h (t, x) = S h (t, x) + I 1 (t, x) + I 2 (t, x) satisfies the following reaction-diffusion equation: where ∆ is the usual Laplacian operator. 
D h > 0 is the diffusion coefficient of humans, b and d (0 < d < b) are respectively the maximal birth rate and the nature mortality rate of humans, and K(x) denotes the local carrying capacity, which is supposed to be a positive continuous function of location x. By employing [26, Theorems 3.1.5 and 3.1.6], we arrive at that system (2.1) admits a globally attractive positive steady state N (x) in C(Ω, R + ) \ {0}. We also assume that the equation of the total mosquito population M (t, where D v > 0 is the diffusion coefficient of mosquitoes, Λ(t, x) is the recruitment rate at which adult female mosquitoes emerge from larval at time t and location x, and η(t, x) is the natural death rate of mosquitoes at time t and location x. Functions Λ(t, x) and η(t, x) are Hölder continuous and nonnegative nontrivial on R ×Ω, and ω-periodic in t for some ω > 0. It easily follows that system (2.2) admits a globally stable positive ω-periodic solution M * (t, x) in C(Ω, R + ) (see, e.g., [27,Lemma 2.1]). Biologically, we may suppose that the total human and mosquito density at time t and location x respectively stabilize at N (x) and M * (t, x), that is, x) for all t ≥ 0 and x ∈ Ω. For model parameters, since the impact of climate change on mosquitoes activities is much more than that on humans, the parameters corresponding to mosquitoes are assumed to be time-dependent. To incorporate a vector-bias term into the model, we use the parameters p and l to describe the probabilities that a mosquito arrives at a human at random and picks the human if he is infectious and susceptible, respectively [7,11]. Since infectious humans are more attractive to mosquitoes, we assume p ≥ l > 0. Let β(t, x) be the biting rate of mosquitoes at time t and location x; c 1 (α 1 ) be the transmission probability per bite from infectious mosquitoes (humans) with sensitive strain to susceptible humans (mosquitoes), and c 2 (α 2 ) be the transmission probability per bite from infectious mosquitoes (humans) with resistant strain to susceptible humans (mosquitoes). According to the induction in [24], we obtain diffusion model: Here, the positive constants γ 1 and γ 2 denote the recovery rate of the sensitive and resistant strains for humans, respectively. The function β(t, x) is Hölder continuous and nonnegative but not zero identically on R ×Ω, and ω-periodic in t. Other parameters are the same as above. Let X := C(Ω, R 4 ) be the Banach space with supremum norm · and X + := C(Ω, R 4 + ). For each t ≥ 0, we define Let Y := C(Ω, R) and Y + := C(Ω, R + ). Let T 1 (t, s), T 2 (t, s), T 3 (t, s) : Y → Y, t ≥ s, be the linear evolution operators associated with subject to the Neumann boundary condition, respectively. Noting that T j (t, s) = T j (t − s), j = 1, 2, we have T j (t + ω, s + ω) = T j (t, s) for (t, s) ∈ R 2 with t ≥ s, j = 1, 2. Proof. For any given ϕ ∈ X(0), one easily sees , it then follows from the parabolic maximum principle [31, Proposition 13.1] that u i (t, x, ϕ) > 0 for all t > t 0 and x ∈Ω. Reproduction numbers In this section, we first define the basic reproduction number R 0 of (2.3), and then introduce the invasion reproduction numberR i for strain i (i = 1, 2). Basic reproduction number In order to derive the basic reproduction number of (2.3), we first consider subsystems: one involves sensitive strains alone and the other involves resistant strains alone. We fix i ∈ {1, 2} and let I j (t, x) ≡ 0, I vj (t, x) ≡ 0, ∀t ≥ 0, x ∈Ω, j = 1, 2 and j = i. 
Then system (2.3) reduces to the following single-strain model: subject to the Neumann boundary condition. The exponential growth bound of Ψ i (t, s) is defined as By the Krein-Rutman Theorem and [31, Lemma 14.2], we have where r(Ψ i (ω, 0)) is the spectral radius of Ψ i (ω, 0). Then, it follows from [32, Proposition Let C ω (R, E) be the Banach space of all ω-periodic and continuous functions from R to E equipped with the maximum norm. Following the theory developed in [33,34], we define two linear operators on C ω (R, E) by Motivated by the concept of next generation operators [32,35], we define the basic reproduction number as The disease-free state of (2.3) is (0, 0, 0, 0) and the corresponding linearized system is Similarly, we can derive the basic reproduction number of (2.3), which is given by For any given t ≥ 0, let P i (t) be the solution map of (3.2) on E. Then P i := P i (ω) is the associated Poincaré map. Let r(P i ) be the spectral radius of P i . By [34,Theorem 3.7] with τ = 0, we have the following nice property. Lemma 3.1. R i − 1 has the same sign as r(P i ) − 1, i = 1, 2, and thus R 0 − 1 has the same sign as r(P ) − 1, where r(P ) = max{r(P 1 ), r(P 2 )} is the spectral radius of the Poincaré map P associated with (3.3). Invasion reproduction number In this subsection, we define the invasion reproduction number for each strain. The invasion reproduction number gives the ability of strain i (i = 1, 2) to invade strain j (j = 1, 2, j = i) measured as the number of secondary infections strain i one-infected individual can produce in a population where strain j is at an endemic state [36]. We express it byR i (i = 1, 2) and give their definition by analyzing the boundary ω-periodic solution of (2.3), that is, the sensitive strain ω-periodic solution or resistant strain ω-periodic solution. For each t ≥ 0, let E(t) be subset in E defined by After a similar process in [25, Lemma 3], we obtain that for any ψ ∈ E(0), system (3.1) has a unique solution v i (t, ·, ψ) = ( for all t ≥ 0. Moreover, by employing the arguments in [25, Theorem 1], one immediately obtains the following result. For ease of presentation, we introduce the following notations: The resistant strain ω-periodic solution of (2.3). By Theorem 3.1, we see that when R i > 1 (i = 1, 2), system (2.3) admits a unique semitrivial boundary ω-periodic solution E i (t, x). Linearizing (2.3) at the E j (t, x), j = i, i, j = 1, 2, and considering only the equations for I i (t, x) and I vi (t, x), we get (3.4) Similar to Section 3.1, we can define the invasion reproduction numbersR i (i = 1, 2). Further, we have the following characterization ofR i . Lemma 3.2.R i −1 has the same sign as r(P i )−1, whereP i is the Poincaré map associated with (3.4), and r(P i ) is the spectral radius ofP i . Disease extinction and uniform persistence In this section, we establish the dynamics of (2.3) in terms of R i andR i , i = 1, 2. Therefore, the desired result is established. In order to study the coexistence of strains, we first give the following lemma for our subsequent coexistence result. Next we prove that Q is uniformly persistent with respect to (X 0 (0), ∂X 0 (0)). Recalling the definitions of E 0 , E 1 (t, x), E 2 (t, x) in Section 3.2, we let Then we have the following claims. This claim directly follows from Lemma 4.1. In a similar way, we can prove the following claim. 
Asymptotic behavior of R_0

In this section, we use the recent theory developed in [38] to study the asymptotic behavior of the basic reproduction number as the diffusion coefficients go to zero and to infinity. To do this, we introduce some auxiliary notation. Observe that for each x ∈ Ω, the corresponding kinetic equation admits a globally stable positive ω-periodic solution M_0(t, x), which is continuous on R × Ω. Define ḡ(t) := |Ω|^{-1} ∫_Ω g(t, x) dx. One immediately sees that the associated scalar periodic equation admits a positive ω-periodic solution, which is globally asymptotically stable. It is easy to verify that assumptions (H1)–(H5) in [38] are valid, so [38, Theorems 5.2 and 5.5] apply directly. For each x ∈ Ω, let {Γ^i_{x,0}(t, s) : t ≥ s} (i = 1, 2) be the evolution family on R^2 associated with the limiting system as the diffusion coefficients go to zero, and let {Γ^i_∞(t, s) : t ≥ s} (i = 1, 2) be the evolution family on R^2 of the limiting system as the diffusion coefficients go to infinity. Let C_ω(R, R^2) be the Banach space of all continuous and ω-periodic functions from R to R^2, endowed with the maximum norm. For each x ∈ Ω, we respectively define bounded linear positive operators L^i_{x,0} and L^i_∞ on C_ω(R, R^2).

Numerical simulations

To verify these analytic results and to examine the effects of seasonality and vector-bias on malaria transmission, we perform illustrative numerical investigations.

Competitive exclusion and coexistence

We choose the period of our model to be T = 12 months and concentrate on the one-dimensional domain Ω = [0, π]. For illustrative purposes, we only let β(t, x) be time-dependent, with a seasonal form adapted from [8]. Unless stated otherwise, the baseline parameters are listed in Table 1. We use the numerical scheme proposed in [34, Lemma 2.5 and Remark 3.2] to compute the reproduction number of each strain. In order to demonstrate the outcomes of competitive exclusion and coexistence, we consider the following three cases.

Case 1. R_1 > 1, R_2 > 1, R̂_1 > 1 and R̂_2 > 1. We choose γ_1 = 0.096 month^{-1}, γ_2 = 0.082 month^{-1}, α_1 = 0.56, α_2 = 0.6, c_1 = 0.25, c_2 = 0.2. Then we obtain R_1 = 11.1267, R_2 = 10.3022, R̂_1 = 2.2919, and R̂_2 = 2.2605. Fig. 1 shows that the disease is uniformly persistent and a periodic oscillation phenomenon occurs, which is consistent with Theorem 4.3.

Effects of parameters on R_0

In order to explore the effect of seasonality, we set the biting rate β(t) ≈ a_0(1 − b_0 cos(0.523599t)), where a_0 is the average biting rate and b_0 ∈ [0, 1] is the strength of seasonal forcing. We use the same parameter values as in Case 1 in Section 6.1. Fig. 4 describes the dependence of R_0 on a_0 and b_0. More precisely, Fig. 4(a) shows that R_0 is an increasing function of a_0 for fixed b_0. Fig. 4(b) compares the influences of the time-dependent biting rate and the time-averaged biting rate on R_0. As can be seen in Fig. 4(b), R_0 increases as b_0 increases. This implies that the use of the time-averaged biting rate may underestimate the risk of disease transmission. It should be emphasized that this phenomenon is not observed in all malaria models; it depends on the model parameters. Next, we investigate the vector-bias effect. We use q := l/p to measure the relative attractivity of a susceptible host versus an infectious one. Our numerical result in Fig. 5 shows that R_0 decreases as q increases, which indicates that ignoring the vector-bias effect will underestimate the value of R_0. In fact, we can analytically prove the monotonicity of R_0 with respect to q.
Let A_i and B_i (i = 1, 2) be two bounded linear operators on C_ω(R, E), where Ψ_i and F_i(t) are defined as in Section 3. Inspired by Section 4.2 in [37], we write [B^i_1 v_2](t) = c_i β(t, ·) v_2(·) and [B^i_2 v_1](t) = α_i β(t, ·) (p M*(·) / (l N(·))) v_1(·), i = 1, 2.

Discussion

In this paper, we have proposed a two-strain malaria model with seasonality and vector-bias. It is of interest to note that our model is a competitive system for the sensitive and resistant strains, but the corresponding subsystem of each strain is cooperative. To characterize this mathematical structure, we define a time-dependent region X(t). Although the introduction of a time-varying region brings some mathematical difficulties, the solution map Q(t) : X(0) → X(t) is an ω-periodic semiflow. This nice property allows us to use uniform persistence theory to study the model dynamics. Our results show that the zero solution is globally attractive if R_0 = max{R_1, R_2} < 1 (see Theorem 4.1); the sensitive (resistant) strain is uniformly persistent if R_1 > 1 > R_2 (R_2 > 1 > R_1) (see Theorem 4.2); and the model is uniformly persistent and admits a positive periodic solution if R_1 > 1, R_2 > 1, R̂_1 > 1 and R̂_2 > 1 (see Theorem 4.3). We have also analyzed the asymptotic behavior of the basic reproduction number with small and large diffusion coefficients. Numerically, we have demonstrated the long-time behaviors of solutions, namely competitive exclusion and coexistence, and revealed the influences of some key parameters on the basic reproduction number. It is found that R_0 increases as the strength of seasonal forcing increases, but it is a decreasing function of the relative attractivity of a susceptible host versus an infectious one. Finally, we mention that under certain conditions, system (2.3) is a monotone system with respect to the partial order ≤_K induced by the cone K = E_+ × (−E_+). Hence, if we can prove the uniqueness of the positive periodic solution in Theorem 4.3, then the positive periodic solution is globally attractive in X(0) \ {0} by virtue of the theory of monotone systems. This is a challenging problem and is left for future study.
Inflammatory Response-Related Long Non-Coding RNA Signature Predicts the Prognosis of Hepatocellular Carcinoma

Background Hepatocellular carcinoma (HCC) is a high-mortality malignant tumor with genetic and phenotypic heterogeneity, making predicting prognosis challenging. Meanwhile, the inflammatory response is an indispensable player in the tumorigenesis process and regulates the tumor microenvironment, which can affect the prognosis of tumor patients. Methods Using HCC samples in the TCGA-LIHC dataset, we explored lncRNA expression profiles associated with the inflammatory response. The inflammatory response-related lncRNA signature was constructed by univariate Cox regression, LASSO regression, and multivariate Cox regression methods based on inflammatory response-related differentially expressed lncRNAs in HCC. Results A seven-lncRNA inflammatory response-related signature was identified for predicting HCC prognosis. Kaplan–Meier (K-M) survival analysis indicated that HCC patients in the high-risk group had a poor prognosis. The utility of the inflammatory response-related lncRNA signature was demonstrated by AUC and DCA analyses. The nomogram further confirmed the accuracy of the novel signature in predicting HCC patients' prognoses. In validation, our novel signature was more accurate than traditional clinicopathological characteristics for prognosis prediction in HCC patients. GSEA analysis further elucidated the underlying mechanisms and pathways of HCC progression in the low- and high-risk groups. Moreover, immune cell infiltration and immune function analyses revealed significant differences between the high- and low-risk groups in cytolytic activity, MHC class I, type I IFN response, type II IFN response, inflammation-promoting function, and T cell co-inhibition. Finally, HHLA2, NRP1, CD276, TNFRSF9, TNFSF4, CD80, and VTCN1 were expressed more highly in the high-risk group in the immune checkpoint analysis. Conclusions A novel inflammatory response-related lncRNA signature (AC145207.5, POLH-AS1, AL928654.1, MKLN1-AS, AL031985.3, PRRT3-AS1, and AC023157.2) is capable of predicting the prognosis of HCC patients and providing new insight into immune-targeted therapies.

Introduction

Hepatocellular carcinoma (HCC) is the primary histological subtype of liver cancer, with genetic and phenotypic heterogeneity, and the third most invasive and lethal tumor globally [1]. The inflammatory response caused by risk factors such as chronic hepatitis B, hepatitis C virus infection, smoking, obesity, and diabetes promotes liver fibrosis, which progresses to cirrhosis and ultimately to HCC [2][3][4]. HCC patients are asymptomatic early, which delays timely diagnosis. Patients who are only diagnosed at an advanced stage of liver cancer are not candidates for radical surgery, and treatment options are limited in availability and effectiveness [5]. Therefore, novel biomarkers to discriminate high-risk HCC patients are urgently needed to improve personalized liver cancer therapy. Over the past few decades, research on tumor-associated inflammatory responses has increased rapidly, and inflammation is also regarded as one hallmark of cancer [6]. Many tumors arise from inflammatory responses, a critical component of the neoplastic process. The tumor microenvironment, primarily orchestrated by inflammatory cells, is indispensable in promoting tumor proliferation, survival, and migration [7]. Induction of inflammatory response-related therapies holds promise as an opportunity to inhibit HCC development.
Meanwhile, long non-coding RNAs (lncRNAs) are endogenous cellular RNA molecules (>200 nucleotides) that regulate gene expression and are involved in a variety of inflammatory biological pathways, including oncogenesis, development, and metastasis [8,9]. Functional lncRNAs are considered to play a critical role in the process of inflammatory response regulation [10][11][12][13]. However, studies elucidating the mechanisms of inflammatory response-related lncRNAs in HCC progression remain scarce. A systematic evaluation of an inflammatory response-related lncRNA prognostic signature in liver cancer patients may deepen our understanding of HCC progression mechanisms and offer novel approaches for specific, precise diagnosis and effective therapies. In our study, a novel prognostic signature was first established based on inflammatory response-related differentially expressed lncRNAs. We then studied the roles of the novel lncRNA signature-associated mRNAs, immune responses, and N6-methyladenosine (m6A) modification status in HCC prognosis.

Data Collection. RNA sequencing data with complete clinical information annotation were downloaded from the public TCGA-LIHC dataset (https://portal.gdc.cancer.gov/repository). Clinical information of HCC patients is shown in Table 1. The corresponding inflammatory response-related genes were identified from the Molecular Signatures Database (http://www.gsea-msigdb.org/gsea/login.jsp) [14] and are provided in Table S1. Pearson's correlation analysis was used to identify inflammatory response-related lncRNAs by comparing the expression levels of inflammatory response-related genes and lncRNAs. Correlations were considered significant when |R²| > 0.4 and P < 0.05, and the corresponding inflammatory response-related lncRNAs were selected. The thresholds for differentially expressed inflammatory response-related lncRNAs were set as |log2FC| ≥ 1.00 and false discovery rate (FDR) < 0.05. The biological functions, including biological process (BP), cellular component (CC), and molecular function (MF), of the inflammatory response-associated differentially expressed lncRNAs (DEGs) were investigated using Gene Ontology (GO). The pathways of differentially expressed inflammatory response-related lncRNAs involved in HCC progression were analyzed with the Kyoto Encyclopedia of Genes and Genomes (KEGG) using the "clusterProfiler" package in R software (version 4.1.0).

Development of the Inflammatory Response-Related lncRNA Prognostic Signature. The inflammatory response-related DEGs significantly associated with the prognosis of HCC patients were first screened via univariate Cox regression analysis. Then, LASSO regression analysis was used to reduce the number of lncRNAs filtered by univariate Cox regression and to prevent overfitting of the risk model. Finally, an inflammatory response-related lncRNA signature was constructed by multivariate Cox proportional hazards regression analysis, and HCC patients were stratified by the resulting risk score formula: risk score = Σ_{i=1}^{7} x_i × y_i (x_i: coefficient, y_i: expression level of lncRNA i). Additionally, HCC patients were divided into high-risk and low-risk groups based on the median risk score.

The Predictive Nomogram. A hybrid nomogram model incorporating independent predictive factors, including the risk signature and gender, age, TNM classification, stage, and grade, was established for predicting the 1-, 3-, and 5-year overall survival rates of HCC patients.
Then, the agreement between the calibration curve and the actual observed values was used to judge the accuracy of the hybrid nomogram for clinical prognosis prediction.

Immune Profile Analysis. Single-sample gene set enrichment analysis (ssGSEA) was used to quantify the immune cell infiltration levels of individual specimens in the low-risk and high-risk groups. The differences in immune responses between the two risk groups were assessed based on the results of multiple algorithms, including CIBERSORT [15,16], CIBERSORT-ABS [17], QUANTISEQ [18], MCPCOUNTER [19], XCELL [20], EPIC [21], and TIMER [22]. In addition, a heatmap was used to demonstrate the differences in immune responses between the two risk groups, stratified by the inflammatory response-related lncRNA signature, under the different algorithms. Moreover, the immune function of tumor-infiltrating immune cell subsets in the low-risk and high-risk groups was analyzed.

2.5. Statistical Analysis. We used packages including "limma," "survival," and "survminer" in RStudio software (version 1.4.1106) for analyzing data. The Wilcoxon test and the unpaired Student's t-test were used to compare non-normally and normally distributed expression variables, respectively. P values for differential lncRNA expression were corrected by the Benjamini–Hochberg method based on the FDR. The "GSVA" package in R was used to compare the ssGSEA-normalized HCC DEGs. We applied time-dependent receiver operating characteristic (ROC) curves and decision curve analysis (DCA) [23] to compare the performance of the inflammatory response-related lncRNA signature and clinical characteristics in predicting HCC prognosis. Furthermore, a clinical heatmap with Fisher's test was utilized to assess the relationship between the inflammatory response-related lncRNAs and clinicopathological manifestations. The overall survival of HCC patients was evaluated with Kaplan-Meier (K-M) survival analysis based on the inflammatory response-related lncRNA signature. P < 0.05 was considered statistically significant in all analyses. The flow chart in Figure 1 summarizes this study.

Enrichment Analysis of Inflammatory Response-Related Genes. We identified 154 inflammatory response-related DEGs between HCC and noncancerous liver tissues (36 upregulated and 118 downregulated; Table S2). Enriched BP terms included inflammatory response, positive regulation of defense response, and immune system process. Meanwhile, the MF terms of the DEGs in HCC were cytokine activity, cytokine receptor binding, and receptor ligand activity. Collagen-containing extracellular matrix, vesicle lumen, and the plasma membrane were predominantly enriched among the CC terms. Additionally, KEGG pathway analysis of the DEGs indicated that the PI3K-AKT signaling pathway, the NOD-like receptor signaling pathway, focal adhesion, the TNF signaling pathway, the NF-kappa B signaling pathway, and proteoglycans in cancer were highly enriched (Figure 2). In the primary screening, 62 inflammatory response-related lncRNAs associated with HCC prognosis were obtained using univariate Cox analysis from the differentially expressed inflammatory response-related lncRNAs in HCC (Figure 3(a)). Next, LASSO regression was used to penalize the 62 inflammatory response-related lncRNAs (Figures 3(b) and 3(d)). Finally, multivariate Cox regression analysis identified the seven inflammatory response-related lncRNAs constituting the signature, which served as an independent prognostic indicator for HCC patients (Figure 3(c); Table S3).
Then, the novel risk score was calculated as follows: risk score = (coefficient of AC145207.5 × expression of AC145207.5) + ···, summed over the seven signature lncRNAs. Kaplan-Meier analysis confirmed that patients in the high-risk group had worse overall survival than patients in the low-risk group (Figure 4(a)). Meanwhile, the inflammatory response-related lncRNA signature had an AUC of 0.758, which outperformed the traditional clinical characteristics in predicting the prognosis of HCC patients (Figure 4(b)). From the risk survival status plots and heatmaps, it could be seen that a higher risk score is associated with a lower survival rate in patients with HCC (Figure 4(c)). The AUCs of the ROC analysis were 0.784, 0.739, and 0.670 for the prediction of 1-year, 3-year, and 5-year survival, respectively (Figure 4(d)). Besides, the net benefit in the DCA plot revealed a stable and robust prognostic-predictive ability of the inflammatory response-related lncRNA signature (Figure 4(e) and Table S4). Univariate and multivariate Cox analyses verified that the novel risk score model (HR: 1.41, 95% CI: 1.26-1.59) is an independent prognostic predictor of HCC patients' overall survival (Figures 5(a) and 5(b)). The inflammatory response-related lncRNA-mRNA interactions were presented in the correlation network (Figure 5(c)). Also, a clinical heatmap was used to analyze the relevance of the inflammatory response-related lncRNA signature to clinicopathological manifestations (Figure 6). The calibration curves showed excellent agreement between the predicted overall survival and the actual observed values with longer follow-up, which confirmed that the nomogram is reliable (Figure 7). Thus, this nomogram model is suitable for the clinical management of HCC patients.

Gene Set Enrichment Analysis. The pathways and biological processes involved in tumorigenesis were analyzed by GSEA, which revealed that the inflammatory response-related lncRNA signature modulated both tumor progression and essential immunity-associated pathways, mainly including the JAK-STAT signaling pathway, the toll-like receptor signaling pathway, the WNT signaling pathway, the T cell receptor signaling pathway, the MAPK signaling pathway, the NOTCH signaling pathway, and natural killer cell-mediated cytotoxicity (Figure 8; Table S5).

Immunological Reaction and Related Gene Expression. The heatmap showed that immune responses differed markedly between the low- and high-risk groups across multiple algorithms (Figure 9 and Table S6). Single-sample GSEA correlation analyses showed significant differences in the expression of the corresponding immune functions between the low- and high-risk groups. In the high-risk group, immune functions such as T cell co-inhibition and co-stimulation and the type II IFN response were markedly attenuated (Figure 10(a)). Given the critical role of checkpoint inhibitors in HCC immunotherapy, we examined the differences in immune checkpoint expression between the two risk groups. In the high-risk group, the expression of immune checkpoints including HHLA2, NRP1, CD276, TNFRSF9, TMIGD2, TNFSF4, CD80, and VTCN1 was higher than in the low-risk group (Figure 10(b)). Comparison of the expression of m6A-related modification genes between the two risk groups indicated that the high-risk group had higher expression of RNA methyltransferases (METTL3, METTL14, RBM15, and WTAP), demethylases (ALKBH5 and FTO), and readers (YTHDF1, YTHDF2, YTHDC1, YTHDC2, and HNRNPC) (Figure 11).
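The risk stratification used throughout these results is a weighted sum of the seven signature lncRNAs' expression values followed by a median split. A minimal sketch of that arithmetic is shown below; the coefficients and expression matrix are randomly generated placeholders, not the values fitted by the authors' multivariate Cox model, and the original analysis was carried out with R packages rather than Python.

```python
# Minimal sketch of the risk-score construction: risk = sum_i coef_i * expr_i, then a
# median split into high- and low-risk groups. Coefficients and expression values are
# placeholders for illustration only.
import numpy as np
import pandas as pd

signature = ["AC145207.5", "POLH-AS1", "AL928654.1", "MKLN1-AS",
             "AL031985.3", "PRRT3-AS1", "AC023157.2"]
coef = pd.Series([0.4, 0.2, 0.1, 0.3, 0.25, -0.15, 0.05], index=signature)  # placeholder values

# expr: patients x lncRNAs expression matrix (e.g., log-transformed FPKM from TCGA-LIHC)
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.lognormal(size=(200, 7)), columns=signature)

risk_score = (expr * coef).sum(axis=1)                  # weighted sum over the 7 lncRNAs
group = np.where(risk_score > risk_score.median(), "high", "low")
print(pd.Series(group).value_counts())
```

The median split guarantees two groups of roughly equal size, which is the design choice that makes the subsequent Kaplan-Meier and immune comparisons between "high" and "low" risk well balanced.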
Discussion

The inflammatory response is critical in neoplastic progression: it generates reactive oxygen species and deoxyribonucleic acid damage, increasing the frequency of genomic DNA mutations and driving oncogenesis [24,25]. Meanwhile, inflammation-induced changes in the hepatic immune system make cancer cells prone to escape immune surveillance and destruction [6,25]. We first identified 154 inflammatory response-related DEGs by comparing HCC and normal liver tissues. KEGG analysis discovered that these DEGs mainly participated in focal adhesion, proteoglycans in cancer, the PI3K-AKT signaling pathway, and the NF-kappa B and TNF signaling pathways. Some recent studies have shown that inflammatory interferon regulates the cellular metastasis, vasculogenic mimicry, and anti-apoptotic activity of tumor cells, mainly by activating the PI3K/AKT/mTOR pathway [26]. At the same time, FGFR1 and TLR4 regulate tumor cell hyperplasia and migration and promote proinflammatory responses via the PI3K/Akt signaling pathway [27]. Studies by Balkwill [28], Ringelhan et al. [29], and Taniguchi et al. [30] reported that TNFα activated the NF-κB signaling pathway, contributing to the promotion and progression of human HCC through hepatic inflammation, hepatocyte death, and compensatory proliferation. Therefore, blocking the link between inflammation and liver cancer may inspire a new strategy for HCC treatment. In our study, the seven-lncRNA inflammatory response-related signature was confirmed as an independent prognostic factor for HCC patients. Among the seven lncRNAs in the signature, only a few have been studied previously. Zhou et al. [31] reported that AC145207.5 and AL031985.3 were overexpressed in HCC cell lines and were related to the poor prognosis of HCC patients. MKLN1-AS could intensify the hyperplasia, migration, and invasion of liver cancer cells by positively regulating YAP1 expression [32]. Li et al. [33] revealed that silencing of the lncRNA PRRT3-AS1 could activate the expression of the PPARγ gene and then block the mTOR signaling pathway to inhibit prostate cancer cell proliferation and promote apoptosis and autophagy. Although the other three lncRNAs have not been reported yet, according to the coexpression network, we found that POLH-AS1 has a coexpression relationship with GPS2, DHX9, and MAPK7; AL928654.1 has a coexpression relationship with PSEN1; and AC023157.2 has a coexpression relationship with FCGR2B and IL20RB. From this, we speculated that these inflammatory response-related lncRNAs are likely to participate in proliferation, migration, and the immune response in cancer. However, the functions of these inflammatory response-related lncRNAs and their roles in hepatocarcinogenesis and progression need to be explored through further clinical and experimental studies. Then, the risk score model classified HCC patients into high- and low-risk groups. The survival analysis determined that patients in the high-risk group had a poor prognosis. Furthermore, the risk score model had an AUC of 0.758 and performed well in terms of the net benefit in the DCA validation.
Moreover, the nomogram's calibration curves validated that our novel risk score model performs better than traditional clinicopathological characteristics in predicting the prognosis of HCC. The direct correlation between lncRNAs and cancer-derived inflammatory responses emphasizes their potential as tumor biomarkers and therapeutic targets [34]. Accumulating evidence suggests that lncRNAs are crucial in mediating inflammatory responses and their dysregulation in HCC [35][36][37][38], so lncRNAs could be an essential class of prevalent genes involved in the development of liver cancer. However, the biological and molecular mechanisms of lncRNAs in HCC are not fully understood. Therefore, this novel signature could help us further explore the roles of lncRNAs in cancer. In our study, GSEA was used to analyze the immune- and tumor-related pathways associated with the novel signature in individuals in the high- and low-risk groups. The related immune function analysis indicated that patients in the high-risk group exhibited significantly reduced cytolytic activity, type II IFN response, and T cell co-inhibition. However, the high-risk group patients had increased expression of immune checkpoints including HHLA2, NRP1, CD276, TNFRSF9, and TNFSF4. Recently, lncRNAs have been gaining attention as critical regulators of gene expression through versatile interactions with DNA, mRNA, or proteins. Notably, lncRNAs play vital roles in the development of diverse immune cells by controlling the dynamic transcriptional programs that are hallmarks of immune cell activation and inflammatory gene expression [39,40]. Some studies have found that activation of inflammatory response pathways, such as the IFN response, can improve sensitivity to immune checkpoint inhibitors in cancer patients and has a positive effect on antitumor activity [41], but also that the lncRNA Mirt2 functions as a checkpoint to prevent aberrant activation of inflammation [42]. For now, few studies have delved into the association between the inflammatory response, lncRNAs, and immune checkpoint inhibitors. Thus, inflammatory response-related lncRNAs may be critical factors in the immune microenvironment driving HCC transformation. Although we revealed a novel inflammatory response-related lncRNA prognostic risk signature and demonstrated the reliability of this risk model, our study has several limitations. This bioinformatics research needs to be confirmed by multicenter experiments with larger samples. In addition, the relationship between the seven inflammatory response-related lncRNAs in the model and immune activity deserves further exploration.

Conclusion

A specific inflammatory response-related lncRNA signature is capable of predicting the prognosis of HCC patients and providing new insight into immune-targeted therapies.

HCC: Hepatocellular carcinoma
TCGA: The Cancer Genome Atlas
GO: Gene Ontology
BP: Biological process
MF: Molecular function
CC: Cellular component
KEGG: Kyoto Encyclopedia of Genes and Genomes
GSEA: Gene set enrichment analysis
FDR: False discovery rate
ssGSEA: Single-sample gene set enrichment analysis

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare no conflicts of interest.

Authors' Contributions
Zhang SQ and Li XY designed and analyzed the research study; Zhang SQ, Tang CZ, Kuang WH, and Zhang SJ wrote and revised the manuscript; Li XY collected and analyzed the data; and all authors have read and approved the manuscript.
Xinyu Li and Shuqiao Zhang have contributed equally to this work.
Osteosarcoma pre-diagnosed as another tumor: a report from the Cooperative Osteosarcoma Study Group (COSS) The course of osteosarcoma patients primarily treated as such has been well described. Little, however, is known about patients who were primarily treated assuming a different tumor diagnosis. The database of the Cooperative Osteosarcoma Study Group COSS was searched (4.435 primary high-grade central osteosarcomas registered prior to 01/01/21). A different tumor entity had to have been assumed for at least one month after the initial diagnostic procedure before the correct diagnosis of osteosarcoma was finally made. Identified patients were analyzed for demographic, tumor-, and treatment-related factors as well as for survival outcomes. 37 patients were identified. They were a median of 19.7 (2.7—60.4) years old at first presentation and were more likely to be females than males (23:14). Bone cysts (n = 8), giant cell tumor of bone (n = 6), and osteoblastoma (n = 6) were the most frequent of 29/37 (78%) benign, chondrosarcoma and its variants (n = 6) the most frequent of 8/37 (22%) malignant original diagnoses. Tumors affected the extremities in 23 (62%), the trunk in 11 (30%), and the craniofacial bones in 3 (8%). Only one patient received systemic treatment while assuming the different diagnosis (1/37, 3%). The median time until the correct diagnosis of osteosarcoma was made was 8 months (range: 1 month–14.1 years). At that time, 6/37 (16%) presented with metastatic disease. All patients went on to receive chemotherapy, 17/37 (46%) neo-adjuvantly. Histologic response was only evaluated in 13/17 (76%) patients and was good (< 10% viable tumor) in only 4/13 (31%) patients. In 31/37 (84%) patients, a surgically complete resection of all macroscopically identified tumor manifestations could be achieved. Five-year overall and event-free survival rates at 5 years were 50.2% (standard error: 8.6%) and 42.6% (8.5%), respectively. Osteosarcoma may initially be misdiagnosed and hence subjected to inappropriate treatment including misguided surgery. Once diagnosed correctly, some of the affected patients may still be cured if finally treated according to modern osteosarcoma standards. Introduction Osteosarcoma treatment has been well established for many decades. A combination of intensive chemotherapy and surgery can cure approximately 60-70% of patients with apparently localized extremity disease and above 20% of those with axial tumors or with primary metastases (Bielack et al. 2002(Bielack et al. , 2021Whelan and Davis 2018). Prospective trials, however, characteristically exclude pretreated patients. This includes those pretreated under the correct diagnosis, but also those who first received treatment assuming they were suffering from a completely different disease. Searching the literature, we were not able to detect a single analysis focusing on this particular group of patients. It is therefore unknown if certain conditions are more likely to lead to misdiagnoses and if and to which extent they will be able to survive following their protracted course to the correct diagnosis and to correct therapy. The Cooperative Osteosarcoma Study Group (COSS) has been running a comprehensive osteosarcoma registry for more than four decades (Bielack et al. 2009;Ferrari et al. 2018;Smeland et al. 2019). In addition to those patients eligible for trials, it is open for all other patients suffering from osteosarcoma. 
We searched the COSS database for patients with high-grade, central osteosarcoma who had started treatment under the assumption of a different tumor. Affected patients were analyzed for presenting signs and symptoms, treatments received under the distinct diagnoses, and outcomes. Patients and methods The database of the Cooperative Osteosarcoma Study Group COSS, open for recruitment since 11/1979, was searched for all of 4,435 patients registered prior to 01/01/21 with a primary high-grade central osteosarcoma who had received at least one month of pre-treatment (surgery, chemotherapy, or radiotherapy) for their primary tumor under assumption of a different diagnosis. The analysis was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki. All registered patients and/or their parents, whichever appropriate, were required to give their informed consent into treatment, data capture, and unlimited follow-up. All study and registry protocols were approved by the appropriate Ethics committees. Once the correct diagnosis had been made, patients were to be treated according to the various COSS regimens active at time of enrollment, generally with neoadjuvant and adjuvant multidrug chemotherapy and surgery (Bielack et al. 2009;Ferrari et al. 2018;Smeland et al. 2019). Affected patients were analyzed for presenting patient and tumor-related factors and treatments at time of the original, erroneous diagnosis as well as for that of the final osteosarcoma diagnosis, the interval between both, treatments, and outcomes. Tumor response to osteosarcoma therapy, if available, was coded according to Salzer-Kuntschik et al. (1983), with a good response assumed when tumor viability was below 10%. Survival outcomes were calculated using the Kaplan-Meier method (Kaplan and Meier 1958). The starting point was that of the osteosarcoma diagnosis. Overall survival was calculated from this date until death from any cause. Event-free survival was calculated until the date of diagnosis of any renewed osteosarcoma manifestation or death from any cause, whichever occurred first. Only patients considered to have achieved a macroscopically complete surgical remission of all diseased sites were considered to have been made disease-free, all others were coded as having suffered an event at day 1 after recurrence. Secondary malignancies were not to be considered as events, but to be analyzed separately. All patients had been subjected to a diagnostic biopsy or primary surgery. In total, 36/37 (97%) received an operation intended to remove the lesion. Radiotherapy or chemotherapy were administered to one (3%) patient each. In total, 35/37 (95%) of all patients were considered cured by firstline therapy. The correct diagnosis of osteosarcoma was finally made at the second disease manifestation in 22/37 (59%), the third in 11 (30%), the fourth in two (5%), and the fifth and sixth in one (3%) patient each. By then, it was a correct assumption in 9/22 (41%) and an unexpected finding in 13/22 (59%) among 22/37 (59%) with appropriate information. The latency period from the first incorrect diagnosis to the diagnosis of osteosarcoma was reported as 8 months (range: 1 month-14.1, interquartile range: 2.1 years). It was longer than one year in 15/37 (42%) and longer than five years in 2/37 (5%) affected patients. Patients were 22.1 (5.4-60.4) years old when the osteosarcoma diagnosis was made. 
Among patients with involvement of the primary site at correct diagnosis, treatment included surgery of this site in 32/34 (94%) tumors. In the subgroup of 20/21 (95%) operated extremity primaries, it was reported as ablative in 12/20 (60%), limb-salvage procedures were performed in 8/20 (40%). Local radiotherapy of 40, 51, and 71 Gy was given to 3/37 (8%) patients, all as an adjunct to surgery. As a result of local therapy, a complete macroscopic remission of all disease sites was achieved in 31/37 (84%), one of these with known microscopic residuals. Median follow-up from osteosarcoma diagnosis was 4.3 (range: 0.3-32.1) years for all patients and 15.7 (0.3-32.1) years for survivors. The corresponding event-free observation period was 2.1 (one day-32.1) years. During this period, 21/37 (57%) patients developed in an event as defined (six without surgical remission (including one without disease progression after radiotherapy), ten metastatic, three local, two combined, one death of unknown causes), 16/37 (43%) remained event-free. There were no secondary malignancies. At last follow-up, 18/37 (49%; 16 1st, 1 2nd complete remission, one with irradiated tumor residual) patients were still alive and 19/37 (51%; five without ever having achieved a remission, ten 1st, three 2nd recurrence, one death of unknown causes while in first remission) had died. Discussion Diagnostic and therapeutic delays caused by erroneous diagnoses may still occur in osteosarcoma. Diagnostic diligence including reference pathology may help reduce the incidence of such cases. Despite inappropriate initial treatment, some affected patients may still have a realistic chance to achieve cure. The unparalleled size of the COSS-registry allowed us to amass a relevant cohort of patients with osteosarcomas who had originally received therapy assuming a divergent diagnosis. We can only assume that not all of these were erroneous, but that some patients indeed developed an osteosarcoma arising in the same location as a previous tumor of divergent histogenicity. This is probably most likely for tumors with particularly long interim periods between both events. Most of the tumors we were able to analyze were, however, clearly osteosarcomas from the very beginning and misdiagnosed at initial presentation. The unselected nature of our analysis is a clear advantage. We must nevertheless assume that we are not describing anything close to the true incidence of misdiagnoses. Centers may have been reluctant to register affected patients and to thus make their own mistakes public. In other, registered patients, pretreatments may have been concealed. Medicolegal concerns may have been another reason for not reporting everything. Osteosarcoma was camouflaged by a wide variety of primary diagnoses. If assessed as malignant, distinguishing chondro-from osteosarcoma seems to have posed the most challenging distinction. This distinction is, however, most relevant, as systemic therapies are largely ineffective against chondrosarcoma, while they are an essential part of osteosarcoma therapy (Whelan and Davis 2018). Aneurysmatic bone cysts and giant cell tumor of bone were the main benign mimics of osteosarcoma. Experienced bone pathologists should be able to reliably distinguish these as well as the variety of other lesions from a life-threatening malignancy such as osteosarcoma. Reference pathology should therefore be encouraged in all bone tumors, independent of whether they are assumed to be malignant or benign (Strauss et al. 2021). 
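The overall and event-free survival figures reported above are Kaplan–Meier (product-limit) estimates. The short sketch below shows how such an estimate is assembled from censored follow-up data; the follow-up times and event flags are invented for illustration and have no relation to the COSS cohort.

```python
# Toy sketch of the product-limit (Kaplan-Meier) estimate used for overall survival:
# S(t) = product over event times t_i <= t of (1 - d_i / n_i), where d_i is the number
# of deaths at t_i and n_i the number still at risk. All values below are invented.
import numpy as np

time  = np.array([6, 12, 14, 20, 25, 30, 41, 55, 60, 72], dtype=float)  # months of follow-up
event = np.array([1,  1,  0,  1,  0,  1,  1,  0,  1,  0])               # 1 = death, 0 = censored

order = np.argsort(time)
time, event = time[order], event[order]

surv, at_risk = 1.0, len(time)
for t, d in zip(time, event):
    if d == 1:
        surv *= 1.0 - 1.0 / at_risk     # one death among `at_risk` patients still followed
        print(f"t = {t:5.1f} months: S(t) = {surv:.3f}")
    at_risk -= 1                        # patient leaves the risk set (death or censoring)
```

Censored patients contribute to the risk set up to their last follow-up without forcing the curve downward, which is why the method suits registry data with very unequal observation periods.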
The median latency period between the original diagnosis and that of osteosarcoma was eight months, but much longer time-spans occurred. It is of note that the latency was longer than one year in almost 40% of patients and longer than five years in two or 5% of these. There is no interval that could distinguish two clearly distinct neoplasms from one single, misdiagnosed malignancy. We can only assume that the former were among those with the longest lag-times. The age at original presentation, a median of 19.7 years, was some years higher than that of osteosarcoma in general (Bielack et al. 2002), but no age was safe. It is not unusual to include osteosarcoma into the differential diagnosis in school-age children, adolescents, and even young adults. The infrequency of osteosarcomas in other age groups, however, may have impeded its inclusion into the differential diagnosis in older adults. Our study cohort consisted of more females than males, which is rather unexpected for osteosarcoma (Bielack et al. 2002). We have no unequivocal explanation for this finding other than clinicians might have been less aware of a potential osteosarcoma in females. In general, the rate of non-extremity osteosarcomas is considered rather low, certainly much lower than the frequency detected in our (Whelan and Davis 2018;Bielack et al. 2021). Here, this atypical region of presentation obviously prevented physicians from including osteosarcoma into their differential diagnosis. Again, a correct diagnosis requires an appropriate index of suspicion. The interpretation of tumor size is not straightforward and hindered by the paucity of data in our cases. On the one hand, osteosarcomas can be expected to progressively grow over time until detection. On the other hand, relevant parts of the tumor are prone to be removed during surgery performed under the assumption of a different diagnosis. This would explain why most lesions were considered small when the correct diagnosis was made. It is quite evident that more metastases were present when osteosarcoma was finally diagnosed than when another tumor was still assumed. This probably represents a clear sign of disease progression in the ensuing interval. It cannot be excluded, however, that some of the metastases were already originally present but had not been searched for. While still assuming the first (usually erroneous) diagnosis, systemic therapy was administered for one patient only. This is no surprise, as benign tumors would pose no indication for such treatment and the assumed malignancies (mis-) diagnosed in our cohort are also largely considered largely chemo-refractory (Whelan and Davis 2018). On the other hand, chemotherapy was generally as intensive as for other osteosarcomas once the correct diagnosis had been made (Bielack et al. 2021). This was not influenced by whether the osteosarcomas were still localized at time of their diagnosis or had spread detectably. Preoperative chemotherapy, a standard in modern osteosarcoma treatment, was, however, not quite as routinely administered as in common practice. This may be explained by surgical procedures which were performed while the diagnostic process had not yet been finalized. Other than preventing to assess the response to chemotherapy, this primary surgery is, however, unlikely to have had any negative effects on prognosis (Bielack et al. 2002; Provisor et al. 1997). In those few patients treated preoperatively, the response rate to upfront chemotherapy seems to have been rather low. 
The limited number of patients prohibits us from concluding if this was a true finding or due to chance. Given the somewhat higher rate of axial primaries, the overall prognosis of the analyzed cohort seems to have been in the lower range of that of previously untreated patients (Bielack et al. 2002(Bielack et al. , 2021Whelan and Davis 2018). It is of note that those osteosarcomas initially misdiagnosed as other malignant tumors did worse than those misdiagnosed as benign. The former may have presented with more dramatic symptoms or have been more likely to involve unfavorable sites. There was also a clear prognostic disadvantage for those patients in whom metastatic spread was present when osteosarcoma was finally unveiled. This reflects the situation with primary metastases in general (Bielack et al. 2002). Too few tumors were eligible for response assessment to draw definitive conclusions, but the data point in the same direction as usual with good responders doing better. Not all patients who would otherwise have been considered candidates for limb-salvage surgery might still have been considered as such following unsuitable surgical procedures. It is, however, remarkable that limb-salvage surgery was still performed for many extremity tumors upon osteosarcoma diagnosis. It is particularly important to include previous surgical fields into the surgical planning, as exemplified by involvement of the former primary site in five of 16 individuals with osteosarcoma recurrences during later follow-up. Incorrect diagnoses might lead to inappropriate surgical attempts, thereby predisposing to local failure. However, while the local failure rate seems to have been somewhat higher than expected (Andreou et al. 2011), the overall risk of recurrence was almost that which we would have expected had the osteosarcoma been diagnosed primarily (Bielack et al. 2002(Bielack et al. , 2021. Given the resulting lag-time, this may come as a surprise, as osteosarcomas usually seem to progress rather rapidly. It may have been that the most aggressive tumors were more likely to be diagnosed correctly from the very beginning, while it was those with a slower evolution and fewer symptoms that were more likely to be misdiagnosed. Whatever the case: If still amenable to surgery, cure was often still possible even when the osteosarcoma was only unveiled following unsuitable approaches at local treatment. Our results clearly demonstrate that combined local and systemic therapy is still indicated when the correct diagnosis of osteosarcoma is only made after pretreatment under another assumed tumor diagnosis. In summary, erroneous diagnoses seem particularly likely with somewhat atypical osteosarcoma presentations. The likelihood of detectable metastatic spread may increase in the time leading to the correct diagnosis. Ablative surgery may be indicated more frequently than usual due to pre-diagnostic, unsuitable operations. Systemic therapy is still possible and usually requires little to no alterations from standard. The extra latency period leading to the correct diagnosis should not deter from providing state of the art osteosarcoma care. Have a high index of suspicion: If something does not fit, rethink!
Development and interlaboratory evaluation of a NIST Reference Material RM 8366 for EGFR and MET gene copy number measurements Background The National Institute of Standards and Technology (NIST) Reference Material RM 8366 was developed to improve the quality of gene copy measurements of EGFR (epidermal growth factor receptor) and MET (proto-oncogene, receptor tyrosine kinase), important targets for cancer diagnostics and treatment. The reference material is composed of genomic DNA prepared from six human cancer cell lines with different levels of amplification of the target genes. Methods The reference values for the ratios of the EGFR and MET gene copy numbers to the copy numbers of reference genes were measured using digital PCR. The digital PCR measurements were confirmed by two additional laboratories. The samples were also characterized using Next Generation Sequencing (NGS) methods including whole genome sequencing (WGS) at three levels of coverage (approximately 1 ×, 5 × and greater than 30 ×), whole exome sequencing (WES), and two different pan-cancer gene panels. The WES data were analyzed using three different bioinformatic algorithms. Results The certified values (digital PCR) for EGFR and MET were in good agreement (within 20%) with the values obtained from the different NGS methods and algorithms for five of the six components; one component had lower NGS values. Conclusions This study shows that NIST RM 8366 is a valuable reference material to evaluate the performance of assays that assess EGFR and MET gene copy number measurements. Introduction Reference materials (RMs) are intended to provide a uniform source of stable samples that can be used to ensure reliable measurement results. RMs can be used to track and compare the performance over time of different methods, instruments, laboratories and operators. NIST has developed reference material RM 8366 to improve the gene copy number measurements of EGFR (epidermal growth factor receptor) and MET (proto-oncogene, receptor tyrosine kinase). The amplification (increased copies) of the EGFR gene and its protein overexpression are useful biomarkers for determining the therapeutic treatments and predictive clinical outcomes of cancer patients in response to anti-EGFR targeted therapy [1,2]. Abnormal MET activation in cancer, which may be triggered by MET overexpression, correlates with poor prognosis, tumor growth and metastasis and tumor angiogenesis [3]. Clinical trials are ongoing to evaluate the safety and efficacy of selective MET inhibitors in cancer patients [4]. Rapid and specific quantitative PCR (qPCR) and digital PCR (dPCR) assays are used to measure gene copy number measurements of cancer biomarkers in patient samples. The results from qPCR analysis for ERBB2 (HER2) testing positively correlated with the results from immunohistochemistry and fluorescence in situ hybridization methods [5]. Nextgeneration sequencing (NGS) assays are being used more frequently in clinical laboratories and provide a powerful tool to detect multiple genetic alterations in a quantitative manner. However, the assessment of copy number variation (CNV) poses challenges because different NGS assay platforms may use different chemistries (hybrid capture versus amplification-based target enrichment), different bioinformatic approaches for the calculation of copy number alteration, and different algorithms to adjust for tumor cellularity in the specimen tested. 
While several NGS platforms have demonstrated strong correlations with fluorescence in situ hybridization in the assessment of CNV [6][7][8], not all laboratories have access to FISH for CNV validation. The availability of CNV reference materials facilitates the evaluation of NGS assay performance. NIST developed Standard Reference Material (SRM) 2373 for the measurement of HER2 (ERBB2) gene amplification and showed that the reference material was useful for evaluating NGS assay performance and increasing confidence in CNV measurements [9,10]. Digital and quantitative PCR measurements were used for the determination of ERBB2 (HER2) copy number levels in the five components (genomic DNA from breast cancer cell lines) of SRM 2373. Digital PCR (dPCR) is a sensitive and mature tool for the measurement of DNA target concentrations. Efforts are underway at NIST to make dPCR a traceable measurement method [11][12][13]. Guidelines on the quality management of NGS in clinical applications have been proposed, including test validation, quality-control procedures, proficiency testing and the use of reference materials [14]. NGS assays intended for clinical oncology applications have been performance-evaluated using pooled cancer cell lines and clinical samples [15,16]. In this report, a new NIST reference material (RM) 8366 is shown to be useful for evaluating and monitoring the performance of assays for EGFR and MET gene copy number measurements. We developed and performance-evaluated new dPCR assays for the target genes EGFR and MET. Digital PCR assays for the reference genes had been performance-evaluated previously [9]. These assays were used to measure values for the ratios of the gene targets to the reference genes in the six different genomic samples derived from human cancer cell lines. We then compared the reference values (established using the dPCR assays) to values obtained from different NGS assay platforms and bioinformatic pipelines to illustrate the utility of RM 8366 for comparing measurements made with different methods.

Cell lines and cell culture

The NIST RM 8366 consists of genomic DNA samples prepared from six human cancer cell lines. The cell lines were obtained from ATCC (Manassas, VA, USA) as frozen stocks and cultured in the NIST laboratory using standard tissue culture methods. The identities of the cell lines were authenticated when received from the repository and after production of the genomic DNA using short tandem repeat (STR) DNA genotyping (Supplementary material). This project was approved by the NIST Human Subjects Protection Office in accordance with the principles of human subject research and research ethics.

DNA extraction and purification

Large batches of cells were prepared from each cell line and used to prepare the genomic DNA. The cells were sub-cultured for four or five passages and were harvested when they reached 85% to 95% confluence from 10 T-175 flask cultures. The culture medium was removed, and the cells were washed twice with Dulbecco's Phosphate Buffered Saline (DPBS). The cells were detached from the flask surface using 0.25% (w/v) Trypsin-0.53 mM EDTA solution (Life Technologies, Carlsbad, CA, USA, Cat# 25200-056). Large-scale DNA extraction was accomplished using the modified Zymo Quick-gDNA™ midiPrep kit (Cat# D3100) procedure. After the initial extraction, the samples were pre-treated with bovine pancreatic ribonuclease A before re-extraction. All purified genomic DNA samples were dissolved or eluted in TE−4 buffer (10 mmol/L Tris, 0.1 mmol/L EDTA, pH 8.0) and stored at 4 °C (range 4-6 °C).
Digital PCR assays and control gene selection We used the guidelines for the minimal information of quantitative PCR experiments to guide the development and reporting of the digital PCR assays [17]. Four sets of PCR assays were developed for both EGFR and MET, and the details and characterization of the assays are contained in the Supplementary materials section. The dPCR assays were done using a Bio-Rad QX200 digital PCR system using TaqMan™ fluorescent probebased methods. All the assays worked well according to the minimal information for publication of quantitative digital PCR experiments guidelines [17]. The PCR products obtained from the four EGFR primer pairs and the four MET primer pairs were analyzed by agarose gel electrophoresis (details and results in the Supplementary materials, Figure S1). For each assay, one PCR product band was detected at the expected position. These results indicate the success of using such primer pairs for PCR reaction, and they can be used in qPCR to calculate the efficiency of the assays and used for melting curve analysis. The amount of DNA added to the assays was determined by absorbance at 260 nm and the same amount of DNA (20 ng) was added to the dPCR assays, although in the case of highly amplified targets (MET and EGFR) the amount of DNA added was decreased in those target assays (4 ng). The single plex dPCR assays at NIST were done using only FAM labeled probes (Tables 1 and 2). The assays were transferred to the Molecular Characterization (MoCha) Laboratory at Frederick National Laboratory for Cancer Research (Frederick, MD, USA) and Thermo Fisher Scientific laboratories (Fremont, CA, USA). However, these laboratories used duplex assays with one of the targets (MET or EGFR) in conjunction with a reference gene. MoCha used a single reference gene (RPS27A, probe name 2PR4-P) that was labeled with HEX ( Table 2). The Thermo Fisher Scientific laboratory used duplex assays with the target probes (MET and EGFR) labeled with VIC ( Table 2) and one of the four reference gene probes labeled with FAM. The Thermo Fisher Scientific results from four duplex assays, respectively pairing the target with each reference gene, were averaged to calculate the ratios for each target. NGS assays Three different NGS-based assays were used to characterize RM 8366. Whole genome sequencing (WGS) at greater than 30 × coverage depth was done by Macrogen (Rockville, MD, USA). WGS sequencing runs were conducted with approximately 1 × and 5 × coverage depth, respectively, at the MoCha Laboratory. Whole exome sequencing (WES) and Oncomine targeted amplicon sequencing on the samples were done at the MoCha laboratory. Peter MacCallum (Peter Mac) Cancer Centre, Australia ran their targeted hybridization pancancer panel on the samples. Details of the sequencing methods are described in the Supplementary materials section. Genomic DNA concentration and reference material packaging DNA concentration and purity were determined by absorption measurements at 260 nm and 280 nm as in previous preparation of NIST SRM 2373 [9,18]. The RM was prepared at 110 μL in 0.5 mL polypropylene tubes (approx. 20 ng/μL DNA) for each of the six components. Additional details on the preparation of RM 8366 are in the Supplementary materials section. 
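DNA concentration and purity checks like the A260/A280 measurements mentioned above typically rely on the conventional conversion of one A260 unit ≈ 50 ng/µL of double-stranded DNA (1 cm path length) and a purity ratio near 1.8. The sketch below applies those rules of thumb; the conversion factor and example absorbance values are generic assumptions, not values taken from the RM 8366 certification procedure.

```python
# Generic sketch of estimating dsDNA concentration and purity from UV absorbance.
# One A260 unit ~ 50 ng/uL dsDNA (1 cm path) and A260/A280 ~ 1.8 for pure DNA are the
# usual rules of thumb, assumed here rather than taken from the RM 8366 documentation.
def dna_concentration_ng_per_ul(a260: float, dilution_factor: float = 1.0) -> float:
    return a260 * 50.0 * dilution_factor

def purity_ratio(a260: float, a280: float) -> float:
    return a260 / a280

a260, a280 = 0.40, 0.22   # hypothetical readings
print(f"concentration ~ {dna_concentration_ng_per_ul(a260):.0f} ng/uL, "
      f"A260/A280 = {purity_ratio(a260, a280):.2f} (pure dsDNA ~ 1.8)")
```

With these example readings the estimate lands near the approximately 20 ng/µL at which the RM components were dispensed, which is simply a consequence of the chosen illustrative absorbance value.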
Calculation of the ratios of MET and EGFR to the reference genes

The ratios for MET and EGFR in each of the components of RM 8366 were calculated from measurements of 10 sets of RM 8366 by dividing the average copy numbers of the target genes by the average copy numbers of either the four reference genes (for components A and C) or three reference genes (for components B, D, E and F). Measurements for each set of components were done in triplicate. The resulting values are given in Tables 3 and 4. Metrological traceability is to the natural counting unit, the ratio one [19]. The gene abundance ratio, Ratio_s, is defined as

Ratio_s = Target_s / ( (1/G_s) Σ_{g=1}^{G_s} Ref_sg ),

where s denotes one of the six components included in RM 8366, G_s denotes the number of reference genes considered for component s, g denotes one of the reference genes, Target_s denotes the measured abundance of the EGFR or MET gene in sample s, and Ref_sg denotes the measured abundance of reference gene g in sample s. The values in Tables 3 and 4 were calculated by fitting a statistical model to the measurements made on the RM 8366 materials using the dPCR assays. The Bayesian paradigm with vague priors was used for statistical inference [20]. Further details regarding the statistical model are provided in the Supplementary materials. The 95% posterior credible interval (PCI), used in place of a 95% confidence interval to characterize the uncertainty of NIST scientists regarding the true copy number ratios, is an interval calculated in a manner consistent with the International Organization for Standardization/Joint Committee for Guides in Metrology (ISO/JCGM) Guide [21,22]. The 95% PCI can be interpreted as the approximate range of values within which the true EGFR or MET copy number ratios to the average among the selected set of reference genes (as listed in the "Reference Genes Used for Analysis" column of Tables 3 and 4) fall for each of the six components. That is, for each 95% PCI there is a 0.95 probability that the corresponding true copy number ratio for a randomly chosen RM 8366 set falls within the provided bounds. The posterior predictive intervals (PPI) can be interpreted as the approximate range of values within which NIST would expect the next independent, triplicate measurement of the EGFR or MET copy number ratios (formed using the average among the selected set of reference genes as listed in the "Reference Genes Used for Analysis" column of Tables 3 and 4) to fall for each of the six components in a randomly chosen RM 8366 set, based upon the measurement performance of NIST analysts and instruments. The observed value is expected to fall within the provided interval approximately 95% of the time. The ratios of gene copy number for either EGFR or MET were multiplied by 2 to give the gene copy numbers that are frequently used in clinical laboratories.

Cell line authentication for RM 8366 components

Established human cancer cell lines were screened for EGFR and MET amplification based on the scientific literature and their availability from biological repositories. The cell lines were confirmed to have different levels (low and high) of MET and EGFR amplification for inclusion in RM 8366. The identities of the cell lines were confirmed before and after production of RM 8366 using short tandem repeat (STR) genotyping of the DNA from the cells. Complete concordance was observed for all six of the DNA samples prepared before and after scale-up. The results also agreed with the nine-locus STR profile provided by ATCC (method and results are shown in the Supplementary material section, Table S2).
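The certified ratios described above carry 95% posterior credible intervals from a Bayesian model with vague priors; the full hierarchical model is given in the Supplementary materials and is not reproduced here. As a hedged, much simpler stand-in, the sketch below computes an interval for a mean ratio from hypothetical triplicate measurements under a normal model, where a vague (Jeffreys) prior makes the posterior credible interval numerically identical to the Student-t interval shown.

```python
# Simplified stand-in for the interval calculation: under a normal model with a vague
# (Jeffreys) prior, the 95% posterior credible interval for the mean of triplicate ratio
# measurements coincides with the Student-t interval below. This is NOT the hierarchical
# model actually fitted for RM 8366; the triplicate values are hypothetical.
import numpy as np
from scipy import stats

ratios = np.array([6.4, 6.6, 6.5])          # hypothetical triplicate ratio measurements
n, mean, sd = len(ratios), ratios.mean(), ratios.std(ddof=1)
half_width = stats.t.ppf(0.975, df=n - 1) * sd / np.sqrt(n)
print(f"mean ratio {mean:.2f}, 95% interval ({mean - half_width:.2f}, {mean + half_width:.2f})")
```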
Development of EGFR and MET dPCR assays

The EGFR gene has a total length of 192.6 kilobase pairs (kbp). Four primer pairs were designed to span the EGFR gene at different exon and intron positions (locations shown in Supplementary Table S3). These locations were chosen to confirm that the entire gene was present at the same degree of amplification. The expected PCR products (amplicons) range in length from 79 to 112 bp. The locations of the amplicons are: primer pairs 1 and 2 are in intron 1, primer pair 3 is in exon 12, and primer pair 4 spans the region between exon 22 and intron 22 (details and results in the Supplementary materials, Figure S3). The MET gene has a total length of 126 kbp. Four primer pairs were designed to span the MET gene at different exon and intron positions (locations shown in Supplementary Table S3). The expected PCR products (amplicons) range in length from 81 to 112 bp. The locations of the amplicons are: primer pair 1 is in exon 2, primer pair 2 is in intron 2, primer pair 3 is in intron 5, and primer pair 4 is in exon 8 (details and results in the Supplementary material, Figure S3). SYBR green was used for qPCR measurements with the four primer pairs from each gene. The amplification efficiency of each qPCR reaction was calculated from the slope of the calibration curve, and primer specificity was determined from the melt/dissociation curve. All eight primer pair sets used for the qPCR assays showed satisfactory amplification efficiencies (greater than 90%) and primer specificity (results and details in Supplementary material, Figures S2 and S3). Non-specific amplification products, which would have a melt curve profile different from that of the target sequence, were not detected, indicating that each reaction yielded the expected single product consistent with its G + C content (Supplementary material). As we obtained similar results from all four primer pairs for both the EGFR and MET assays, we used the EGFR_2 and MET_2 assays for the further extensive measurement of EGFR and MET gene copy number (primers in Table 1 and probes in Table 2).

Selection of reference genes for ratio calculations

The literature and cancer mutation databases were screened to avoid selecting reference genes in chromosomal regions where amplifications, deletions or mutations frequently occur in cancer cell lines. The selection of the reference genes is important in cancer cell studies because of the gene mutations and gains or losses of DNA that are frequently observed in tumor samples and cancer cell lines. We previously developed four assays for the reference genes EIF5B, RPS27A, DCK and PMM1 [9] (primers in Table 1 and probes in Table 2). All the primers passed the quality control steps prior to measuring the reference copy numbers (details in previous studies [9,10] and Supplementary material). The selection of the reference genes used to calculate the ratios was based on the agreement between the reference gene measurements. If, for a given component, all four reference genes gave similar values, then all four reference genes were used (components A and C, Figure 1A); for the other components, the reference gene with the lowest copies/μL was excluded and the other three reference genes with higher values were used for the ratio calculations. The reference values for the dPCR measurements are valid only when used with the indicated reference genes that were used for the calculations (Tables 3 and 4).
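The text does not spell out how efficiency follows from the calibration-curve slope; the standard relation, assuming a Cq-versus-log10(template) dilution series, is sketched below (our illustration, not code from the study).

```python
def qpcr_efficiency(slope):
    """Per-cycle amplification efficiency (as a percentage) from the slope of
    a Cq versus log10(template amount) calibration curve; perfect doubling
    gives a slope of -1/log10(2), about -3.32, i.e. 100 % efficiency."""
    return (10 ** (-1.0 / slope) - 1.0) * 100.0

print(round(qpcr_efficiency(-3.32), 1))   # ~100.2 % for a near-ideal assay
print(round(qpcr_efficiency(-3.60), 1))   # ~89.6 %, below the >90 % threshold
```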
The concentrations of the reference and target genes were measured in 10 selected vials of each component of RM 8366 using the dPCR assays, as shown in Figures 1 and 2. These measurements were used to calculate the ratios of the target genes to the selected reference genes shown in Tables 3 and 4. The reference genes were used to normalize for the amount of genomic DNA in the assays by calculating ratios of the target gene copies (MET and EGFR) to the individual reference gene copies. The amount of DNA (20 ng) added to each assay was based on 260 nm absorbance measurements, so the copies per μL should be similar across assays. The ratio of the target gene to each of the reference genes should equal 1 for a gene that has not been amplified or deleted. When the ratios of targets to reference genes were calculated for the control human genomic DNA (genomic DNA from the Coriell Institute for Medical Research, Camden, NJ, USA, cell line GM 24385), the ratios were all close to 1 (Tables 3 and 4). These results show that the reference assays can be used to normalize the target gene copies for DNA with a normal karyotype. The reference genes were selected from regions of the genome where copy number changes are not frequently seen in cancer, but the agreement among the reference genes in the cancer cell line components was not perfect because of the extensive copy number changes in these lines.

Stability study

The gene copy concentrations of the target genes (EGFR and MET) and the reference genes (EIF5B and RPS27A) in selected vials were measured using the dPCR assays. These data did not show any significant drift in values (within the uncertainty of the measurements) for the six components over periods of up to 408 days (Figure S4 in the Supplementary material section). The samples were stored at 4 °C (range 4-6 °C) in the dark for the indicated times before analysis.

Homogeneity study

The components of RM 8366 were distributed into tubes (550 for each component) that were then stored at 4 °C (range 4-6 °C) in the dark. Homogeneity studies were performed by selecting 10 vials of each component, distributed throughout the order of dispensing. These vials were analyzed using the dPCR assays for the four reference genes and the target genes (EGFR and MET). Visually, the data did not indicate any obvious trend in the values of any of the six components with dispensing order (Figure 2). Data were also examined by analyst and by sample position on the 96-well plates, and showed no obvious trend in the values attributable to an individual analyst or to plate position (Figure 2). Tables 3 and 4 show the reference values for the ratios of the EGFR and MET gene copies to the indicated reference genes. The 95% PCIs (reflecting uncertainty in the true copy number ratio for a randomly chosen RM 8366 set) and the 95% prediction intervals (PIs) (reflecting uncertainty in the measured copy number ratio for a randomly chosen RM 8366 set, based on triplicate measurements) were calculated.

Reference ratio values

The six cell lines used for RM 8366 represent a diversity of EGFR and MET gene copy levels and tissues of origin (Table 5). Two of the components (A and B) had high levels of EGFR amplification but no MET amplification, with tissue origins from skin and breast cancers, respectively. Two of the components (E and F, both derived from gastric cancer) had high levels of MET amplification and normal or low levels of EGFR amplification, respectively.
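One simple way to screen for the kind of dispensing-order (homogeneity) or storage-time (stability) trends described above is a linear fit of concentration against order or time; the sketch below is an illustrative check with made-up numbers, not the statistical treatment applied to the RM itself.

```python
import numpy as np

def trend_slope(order, copies_per_ul):
    """Least-squares slope of concentration versus dispensing order (or
    storage day); a slope indistinguishable from zero at the assay's
    resolution supports homogeneity (or stability)."""
    slope, _intercept = np.polyfit(order, copies_per_ul, 1)
    return slope

# Ten vials spread across the dispensing order, hypothetical EGFR values:
order = np.arange(1, 11)
copies = np.array([612, 605, 618, 609, 611, 607, 615, 604, 610, 613], float)
print(round(trend_slope(order, copies), 2))   # close to zero -> no trend
```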
Component C (derived from a melanoma) had low levels of MET amplification and no amplification of EGFR. Component D (derived from a brain cancer) had low levels of EGFR and low levels of MET amplification.

Inter-laboratory dPCR comparison study

The NIST singleplex dPCR assay methods were transferred to the MoCha and Thermo Fisher Scientific laboratories in order to compare the inter-laboratory performance of the assays. The dPCR assays were performed using different reagents, operators and instruments. The dPCR assays in the MoCha laboratory were performed as duplex assays pairing each gene target (EGFR or MET) with a single reference gene (2PR4). The MoCha duplex assay used a FAM-labeled probe for the target gene and a HEX-labeled probe for the reference gene. The Thermo Fisher Scientific laboratory used duplex assays with the gene targets (MET and EGFR) labeled with VIC, each paired with one of the four reference genes labeled with FAM (Table 2). Figure 3 shows the correlation of the MoCha and Thermo Fisher Scientific values with the NIST reference values. The assays from each laboratory were done in triplicate for the six components. The Thermo Fisher Scientific laboratory data were the average of four duplex assays using all four reference genes, whereas the MoCha measurements used a duplex assay with a single reference gene (RPS27A). The results from both laboratories showed good correlations with the NIST values (Figure 3).

Comparison of NGS methods with the NIST reference values

RM 8366 was used to assess MET and EGFR copy numbers determined by five NGS assay platforms. The NGS assays included two pan-cancer gene panels: MoCha used an amplicon-based assay, and Peter Mac used a hybridization-enriched random-fragment assay. WGS was done at median coverage levels of 1×, 5× and over 30×, and WES at a 30× median depth of coverage (Supplementary material section). Each assay used a different bioinformatic approach to assess copy number variants, and three different CNV-calling algorithms were used for analysis of the WES data. Comparison of the WGS data at the three median coverage levels indicated that the three levels gave consistent results for all of the components in this data set. The EGFR and MET copy numbers evaluated from the WES data using the three CNV-calling algorithms were comparable with the reference values. The targeted methods (the Oncomine and Peter Mac assays) gave results similar to those of the more complex and extensive WGS and WES methods.

Discussion

In this study, we demonstrated that five components (components A-D and F) of RM 8366 provided consistent results across multiple testing laboratories using two measurement methods (dPCR and NGS). The results of this study showed that the five different NGS methods, with their different bioinformatic analysis pipelines, compared favorably to the reference values obtained from extensive dPCR measurements for five of the six components of NIST RM 8366. We do not know why the NGS methods gave lower values for component E (Hs 746T cell line DNA), a genomic DNA from a gastric cancer cell line for which we measured a very high level of MET amplification and near-normal levels of EGFR. Hs 746T has a highly abnormal karyotype associated with many structural variants (https://www.atcc.org/products/all/HTB-135.aspx#characteristics). Highly abnormal karyotypes are a significant challenge to accurate measurement of copy numbers using both digital PCR and NGS methods.
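As a rough illustration of the kind of comparison summarized in Figure 3, the sketch below (ours, with placeholder numbers) correlates one laboratory's ratios for the six components with the NIST reference values.

```python
import numpy as np

def compare_to_reference(lab_ratios, nist_ratios):
    """Pearson correlation and least-squares slope of a laboratory's ratios
    against the NIST reference ratios for the six RM 8366 components."""
    r = np.corrcoef(lab_ratios, nist_ratios)[0, 1]
    slope, _intercept = np.polyfit(nist_ratios, lab_ratios, 1)
    return r, slope

nist = np.array([1.0, 1.1, 2.3, 3.4, 12.0, 25.0])     # placeholder ratios A-F
lab = np.array([0.98, 1.15, 2.2, 3.5, 11.4, 24.1])    # placeholder lab means
r, slope = compare_to_reference(lab, nist)
print(round(r, 3), round(slope, 2))   # r near 1 and slope near 1 -> good agreement
```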
The dPCR and NGS methods both measured high levels of MET amplification and close-to-normal levels of EGFR for the Hs 746T sample (component E). Mutations in the splice site of exon 14 of the MET gene can result in skipping of that exon, and these mutations are frequently found in lung and other cancers [23,24]. Screening of 34 gastric cancer cell lines found that four of the cell lines, including SNU5 (component F) and Hs 746T (component E), were MET amplified, and that cell line Hs 746T had an exon 14 skipping mutation and overexpressed the altered protein [25]. Comparison of the three different CNV-calling algorithms using the same WES data also yielded similar EGFR and MET copy number results. These results confirm our previous results with the WES data for ERBB2 (HER2) [10]. Not surprisingly, each NGS method gave slightly different EGFR and MET copy numbers, likely associated with platform-specific biases, which may depend on the total size, G + C content, and complexity of the genes. The data show that varying the coverage level beyond 1× for the WGS method did not substantially affect the performance of the EGFR and MET gene amplification assays. Our data indicate that both low coverage levels (1× and 5×) performed as well as higher-coverage (and more expensive) WGS (>30×). WGS has also been shown to be useful for copy number measurements at low coverage levels, even for single-cell analysis [26]. We used a control DNA sample from a "normal" cell line, GM 24385, one of the cell lines used for producing the NIST Genome in a Bottle human reference materials. Differences observed in the EGFR and MET copy number measurements between NGS platforms may be attributed to differences in the chromosomal locations of the interrogated regions, the biases of the measurement method (e.g. capture efficiency in WES and primer-annealing efficiency in targeted amplicon sequencing), and the choice of data analysis pipelines. The availability of stable and uniform reference materials (such as RM 8366, SRM 2373 and the NIST Genome in a Bottle samples) will allow greater in-depth investigation of the factors that cause the differences among the measurement methods. Standards made from well-established cell lines have advantages: the lines have a history of research studies, and they are renewable resources that can be scaled up to produce large amounts of material. However, these materials have limitations as simulants for patient samples. Cell lines do not reflect the complexity of a tissue biopsy sample, which contains tumor and non-tumor cells (e.g. stromal fibroblasts, endothelial cells, inflammatory cells and others). We are working on reference materials that will be better simulants for clinical samples. An example of improved reference materials would be matched cell lines established from tumor and normal somatic cells, which would allow us to make mixtures of different fractions in an isogenic background. NIST will be pursuing this approach in the future to determine the utility of such paired cell line materials as standards. These results demonstrate the value of RM 8366 for evaluating the performance of copy number measurements for MET and EGFR and for tracking assay performance over time on a consistent basis for comparing intra-laboratory and inter-laboratory results.
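The low-coverage WGS results discussed above rest on the idea that copy number scales with normalized read depth. A very reduced sketch of that estimate follows (our illustration only; real CNV callers add GC correction, segmentation and other refinements that are omitted here).

```python
def depth_copy_number(mean_gene_depth, mean_genome_depth, ploidy=2):
    """Naive copy number estimate from sequencing depth: the mean read depth
    over the gene, normalized to the genome-wide mean depth, times the
    assumed baseline ploidy."""
    return ploidy * mean_gene_depth / mean_genome_depth

# Hypothetical 1x WGS run: genome-wide mean depth 1.1x, MET region 13.2x.
print(round(depth_copy_number(13.2, 1.1), 1))   # ~24 copies
```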
Along with NIST SRM 2373 (the standard reference material for HER2/ERBB2 copy number amplification measurements), these reference materials will be useful for improving the confidence and reliability of research and clinical measurements of copy number amplification of these important cancer therapeutic targets by NGS and dPCR methods.

[Figure 4 caption] EGFR and MET gene copy numbers measured by NGS assays. Data are expressed as means with error bars of 1 standard deviation (A and B) and 1 coefficient of variation (C and D). All NGS assays were n = 3 samples, except the Peter Mac and 30× WGS assays, which were single measurements. A: EGFR gene copy numbers; B: MET gene copy numbers; C: EGFR as a percentage of the NIST reference value (in parentheses); D: MET as a percentage of the NIST reference values (in parentheses). The components in Figure 4C and D are ordered from low to high EGFR or MET copy number, respectively.

[Table caption] TaqMan® fluorescent probe sequences.

[Table caption] Reference values of the ratios of EGFR copies to reference gene copies.
Applying a castling tree of tight Dyck words All tight Dyck words form, via a castling operation, the vertices of an ordered tree T , from which a blowing reaches all Dyck words. These represent both: (a) the cyclic and dihedral vertex classes of the odd and middle-levels graphs, respectively, and (b) the cycles of their 2-factors, as found by T. M¨utze et al. The vertices of T can be updated all along T , which simplifies an existing arc-factorization view of their Hamilton cycles. Introduction An investigation of the relation of Dyck words [14] controlled by an infinite ordered tree T of tight restricted-growth strings [1, p. 325] to odd and middle-levels graphs [2,11] is undertaken. The definitions of such graphs are recalled below and that of T in Sections 2-4. (Tight) Dyck nests F are introduced in Section 3 associated to corresponding (tight) Dyck words f via their Dyck path heights. A castling procedure assigning tight Dyck nests F to the vertices of T is given in Section 4. Sections 5-6 show how to blow and anchor these words and nests, adapting them as vertex representatives of the said graphs. In Section 7, an edge-supplementary arc-factorization F of each odd graph based on those representatives is given. In Section 8, a permutation is assigned to each Dyck nest F , leading Section 9 to establish uniform 2-factors both in the odd graphs and in the middle-levels graphs, as in [13]. This allows (Section 13) an arc-factorization approach [6] via F to Hamilton cycles [14]. The said uniform 2-factors yield one of two partitions, presented in Section 10, of the vertex sets of the cited graphs, the other partition based on the actions of cyclic and dihedral groups on those vertex sets. In Section 11, we assign a clone to each Dyck nest F , that results to be equivalent to F (Theorem 8), and that can be universally updated along T (Theorem 7). This generates an infinite sequence of such updates. A convenient set of strings is established in Section 12, allowing to associate the elements of the said graphs as arising from the tree T (Theorems 17-18 and Corollary 19) based on either of the two partitions. For 0 < k ∈ Z, the odd graph O k [2] has as vertices all k-subsets of the set [0, 2k] = {0, 1, . . . , 2k}, with any two such vertices adjacent if and only if their intersection as ksubsets of [0, 2k] is empty. We represent the vertices of O k by the characteristic vectors of those k-subsets, so that each such k-subset is the support of its characteristic vector. Those vectors represent the members of the k-level L k of the Boolean lattice B 2k+1 on [0, 2k]. The complements of the reversed strings of those vectors are taken to represent the (k + 1)-level L k+1 of B 2k+1 . Recall that the union of these two levels, L k ∪ L k+1 , is the vertex set of the middle-levels graph M k , with adjacency given by the inclusion of L k in L k+1 . This yields a 2-covering graph map Ψ k : M k → O k expressible via the reversed complement bijection ℵ : V (M k ) → V (M k ) given by ℵ(v 0 v 1 · · · v 2k−1 v 2k ) =v 2kv2k−1 · · ·v 1v0 , where0 = 1 and1 = 0, so that ℵ(L k ) = L k+1 and ℵ(L k+1 ) = L k , and extending to a graph automorphism of M k , again denoted w.l.o.g. by ℵ. In fact, Ψ : M k → O k is characterized by its restriction to the identity map over L k and to ℵ over L k+1 . A set of such strings is said to be lexicographically ordered if each two of its members can be compared via such precedence criterion. 
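To make the graph definitions above concrete, here is a short Python sketch (ours, not part of the paper) that builds O_k from the k-subsets of [0, 2k] and implements the reversed-complement map ℵ on characteristic vectors; the function names are our own.

```python
from itertools import combinations

def odd_graph(k):
    """O_k as defined above: vertices are the k-subsets of [0, 2k],
    two vertices adjacent exactly when the subsets are disjoint."""
    vertices = list(combinations(range(2 * k + 1), k))
    edges = [(u, v) for u, v in combinations(vertices, 2)
             if not set(u) & set(v)]
    return vertices, edges

def aleph(word):
    """Reversed-complement map: complement every bit, then reverse the string."""
    return ''.join('1' if b == '0' else '0' for b in reversed(word))

if __name__ == "__main__":
    k = 2
    vertices, edges = odd_graph(k)
    # Each k-subset of [0, 2k] is disjoint from exactly C(k+1, k) = k+1 others,
    # so O_k is (k+1)-regular; for k = 2 this construction gives the Petersen graph.
    assert len(vertices) == 10 and len(edges) == 10 * (k + 1) // 2
    # aleph sends a weight-k characteristic vector of length 2k+1 (level L_k)
    # to a weight-(k+1) vector (level L_{k+1}), and vice versa.
    assert aleph("00110") == "10011" and aleph("00110").count("1") == k + 1
```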
Dyck words, Dyck nests and blowing By a Dyck word we will understand any binary 2k-string f = f 1 f 2 · · · f 2k of weight k with the number of 0-bits at least equal to the number of 1-bits in each prefix of f , (which differs from [10,12,13,14] just by complementation). Each such f determines a Dyck path, given as a continuous piecewise-linear curve g f in the Cartesian plane R 2 such that g f (0) = g f (2k) = 0 with g f (x) > 0, for 0 < x < 2k, and formed by replacing successively from left to right each 0-bit of f by an up-step and each 1-bit of f by a down-step, where up-steps and down-steps are segments of the forms (x, y)(x + 1, y + 1) and (x, y)(x + 1, y − 1), respectively. We assign the integers in [1, k] successively to the up-steps (respectively, down-steps) of g f in the horizontal unit layers [y, y + 1] ⊂ R 2 , for y = 0, 1, 2, . . ., as needed, and from right to left at each layer. The resulting 2k-string is said to be the Dyck nest F associated to f . A Dyck nest F = " · · · i(i + 1)(i + 1)i · · · " (arising from a Dyck word f = " · · · 0011 · · · ") is said to be obtained by blowing the shorter Dyck nest F = " · · · ii · · · " (arising from a corresponding Dyck word f = " · · · 01 · · · "). The inverse operation to blowing, adequately iterated, leads to an irreducible Dyck nest, meaning it cannot be further reduced. Dyck words f yielding irreducible Dyck nests F will be said to be irreducible. When lexicographically ordered, the tight Dyck nests start as follows, according to displays (2), (3) and (6) The corresponding originating Dyck words here start the sequence of associated tight Dyck words: . . , f (13), . . .) = (01, 0101,001011,010011,010101,00010111,00101101,00100111,00110011,01001101,00101011,01001011,01010011,01010101,...), (8) so that the associated tight Dyck nests in (7) act as their pull-backs, before replacing, in each of them, the first appearance j 1 of each integer entry j by a 0-bit and each second appearance j 2 of j by a 1-bit. That is: each j 1 in an F (n) of N corresponds to a 0-bit in the corresponding f (n) of W, so j 2 in F (n) corresponds to a 1-bit in f (n). The concept of empty Dyck word ǫ also makes sense here and is used in Section 13. The procedure in items (a)-(b) yields the lexicographically-ordered tight Dyck nests F (n) = F k (n) ∈ N for all n ≥ 0, and the corresponding Dyck words f (n) = f k (n) ∈ W. Dyck nests blown from Dyck nests Every Dyck nest F = F k = F (n) = F k (n) of length 2k contains a substring kk = k 1 k 2 . Given a Dyck nest F k of length 2k, let F k+1 = q(F k ) be the string of length 2(k + 1) obtained from F k by inserting the substring (k+1)(k+1) = (k+1) 1 (k+1) 2 between the two entries k 1 and k 2 equal to k in F k , so that q(F k ) contains the substring k(k + 1)(k + 1)k = k 1 (k + 1) 1 (k + 1) 2 k 2 . Then, F k+1 = q(F k ) is a Dyck nest of length 2k + 2 and its corresponding f k+1 = q(f k ) is a Dyck word of length 2k + 2. We say that F k+1 = q(F k ) and f k+1 = q(f k ) are blown from F k and f k , respectively. By repeating the q(·)-operation, we say that we blow F k successively to Dyck nests q 2 (F k ), q 3 (F k ), etc. We also say that F k (n) and f k (n) are k-blown, where necessarily 0 ≤ n < C k = (2k)! k!(k+1)! , with upper bound C k equal to the k-th Catalan number [15, A000108]. Note that λ(n) = k, ∀n ∈ [C k−1 , C k ). Arc factorizations of odd graphs . 
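A small Python sketch of these definitions (ours, not the authors'): is_dyck_word follows the prefix condition stated above, dyck_nest encodes our reading of the layer-by-layer, right-to-left numbering of up-steps and down-steps, and blow implements the q(.) insertion of (k+1)(k+1) between the two adjacent entries equal to k.

```python
def is_dyck_word(f):
    """Dyck word in the convention above: every prefix has at least as many
    0-bits as 1-bits, and the whole string is balanced."""
    balance = 0
    for bit in f:
        balance += 1 if bit == '0' else -1
        if balance < 0:
            return False
    return balance == 0

def dyck_nest(f):
    """One reading of the nest construction: number the up-steps (0-bits) and
    down-steps (1-bits) layer by layer from the bottom, right to left within
    each horizontal layer [y, y+1], using the integers 1..k."""
    heights, h = [], 0
    for bit in f:                       # record the layer of every step
        if bit == '0':
            heights.append(h); h += 1   # up-step occupies layer [h, h+1]
        else:
            h -= 1; heights.append(h)   # down-step occupies layer [h, h+1]
    nest = [0] * len(f)
    for step_bit in ('0', '1'):         # label up-steps, then down-steps
        label = 1
        for layer in range(max(heights) + 1):
            for pos in reversed(range(len(f))):
                if f[pos] == step_bit and heights[pos] == layer:
                    nest[pos] = label
                    label += 1
    return nest

def blow(nest):
    """The q(.) operation: insert (k+1)(k+1) between the two adjacent entries
    equal to k, the maximum entry of the nest."""
    k = max(nest)
    i = nest.index(k)                   # the nest contains the substring k k
    return nest[:i + 1] + [k + 1, k + 1] + nest[i + 1:]

assert is_dyck_word('010011') and not is_dyck_word('0110')
assert dyck_nest('01') == [1, 1] and dyck_nest('0101') == [2, 2, 1, 1]
assert blow([1, 1]) == [1, 2, 2, 1]     # k1 (k+1)1 (k+1)2 k2
```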
Then, such a class can be expressed (Example 1, Table 1 Each entry of value j ∈ [0, k] in a (2k + 1)-tuple χ = ℓ F as in (10) (ℓ ∈ [0, 2k]) is either the first, j 1 , or second, j 2 , appearance of j in χ counting from the entry of value 0, from left to right and cyclically mod 2k + 1, e.g., 2 F has value 0 in the penultimate entry (see (10)). Let ℓ ∈ [0, 2k]. Given the entry of value j = 0 in ℓ F or the first appearance j 1 of an integer j ∈ [1, k] in ℓ F , there exists just one anchored k-blown Dyck nest ℓ j+ℓ F = ℓ F (with j + ℓ taken mod n) adjacent as a vertex of O k to ℓ F via one of the two arcs forming an edge e of O k , namely the arc − → e = ( ℓ F, ℓ j+ℓ F ), and having: 1. (k − j) 1 as first appearance of k − j in the same position as j 1 in 0 F , and 2. the other j = 0 or first (resp., second) appearance positions of integers j ∈ [1, k] in ℓ F as second (resp., j = 0 or first) appearance positions of integers j ∈ [1, k] in ℓ j+ℓ F . Remark 2. Assigning color j to the arc − → e implies assigning color (k − j) to its opposite arc ← − e = ( ℓ j+ℓ F, ℓ F ). This insures an arc factorization of O k that we call the edge-supplementary 1-arc-factorization of O k , since by considering the color set [0, k] as if it were a set of weights with values in N, then the weight of each edge seen as the sum of the weights of its two arcs is constantly equal to k in O k . This justifies our supplementary terminology. We use the following notation for the vertices v ∈ V (O k ): v = n j k , where n is the order of the TRGS b(n) ∈ S yielding the Z 2k+1 -class of v, and j is taken by expressing v as j F in (10). Thus, for each k ∈ N, the arcs of O k form an arc factorization F composed by the arc factors F j formed by those arcs (n j k , ℓ j ′ k ) with fixed j ∈ [0, k] and adequate n, ℓ and j ′ . Example 3. Table 2 shows the twenty adjacency pairs that represent the arcs (n 0 k , ℓ j k ), for k = 3, in a sandwich fashion, with each of the two shown layers being " v j k = j F "; j F shown with concatenation bars delimiting either j = 0 or the first appearance j 1 of j ∈ [1, 2k] on top (showing n 0 k ) and the first appearance (k − j) 1 of k − j below (showing ℓ j k ). Lifting F via pull-back from O k to M k via the 2-covering graph map Ψ k : M k → O k yields the so-called modular arc factorization of M k [7,13,14], again mentioned in relation to item 2 in Section 10. Add a terminal parenthesis to f ′ , so that the last "1" in f ′ is transformed into "1)". Denote by g the string resulting from such addition of a closing parenthesis to f ′ . 2. By proceeding from left to right, replace the bits of g by the successive integers from 1 to |g|, keeping all pre-inserted parentheses and commas of g in their position. This yields a version h 0 of g. 3. Set h 0 as a concatenation (w 1 )|(w 2 )| · · · |(w t ) of expressions (w i ), (1 ≤ i ≤ t), the terminal ")" of each (w i ) being the closing ")" nearest to its opening "(". Let w ′ i be the number string obtained from w i by removing parentheses and commas. For i = 1, . . . , t, perform a recursive step R consisting in transforming w ′ i into its reverse substring w ′′ i and then resetting w ′′ i in place of w ′ i in (w i ), with the parentheses and commas of (w i ) kept in place. Denote the resulting expression by R(w i ). This yields a string h 1 = R(w 1 )|R(w 2 )| · · · |R(w t ). For i,j = (w i,j ) has terminal ")" being the closing ")" nearest to its opening "(". 
Apply the treatment of the (w i )'s in item 3 to each In each such concatenation, the strings η 2 I ="(,)" are of the form (w I ) and must be treated as (w i,j ) is in item 4 (or (w i ) in item 3), producing a modified string R(w I ) that forms part of the subsequent string h 3 . Eventually ahead, to pass from In each such concatenation, those η ℓ−1 I ′ ="(,)" would be of the form (w I ′ ), to be treated again as in items 3-4. 6. A sequence (h 0 , . . . , h s+1 ) is eventually obtained for some s ≥ 0 when all innermost expressions (w I ) = (a, a ± 1) with a, a ± 1 ∈ [1, 2k] have been processed. Disregarding parentheses and commas in h s+1 yields a 2k-string g ′ and an assignment i → p(i), Example 4. Table 3 illustrates the determination of the permutation π for the five cases with k = 3 and just four of the 14 cases with k = 4, (exemplifying that not necessarily π = p). Each such case is headed by an indication b(n) → b ′ (n) = 0 k−ℓ(n) |b(n), where ℓ(n) is the length of b(n) and k is the length of b ′ (n). The second and third lines of Table 3 show F ′ (b ′ (n)) and f ′ (b ′ (n)) with parentheses and commas as in item 1 of the procedure. The rest of each case follows items 2-6 in order to produce p expressed without parentheses or commas, followed by the identity permutation ι = 12 · · · (2k) to ease visualizing π as the inverse permutation of p, in the final line of each case of the table. Uniform 2-factors of the odd graphs Departing from each anchored k-blown Dyck word taken as ∈ [0, 2k)) the edge arc whose assigned color, according to Remark 2, is the corresponding entry of the reversed permutation rev(π) of π. Now, the terminal vertex v 2k of −→ P 2k v is at distance 1 from v by means of an arc − → e v in the arc factor F 0 . An oriented (2k k yields an oriented 2(2k + 1)-cycle −−→ M C k v in M k containing all pairs of opposite vertices (w, ℵ(w)) (i.e, at distance k from each other along −−→ M C k v via two internally disjoint paths, one oriented, the other one anti-oriented). This provides O k (resp., M k ) with a 2-factor of C k components, all as oriented cycles of uniform length 2k + 1 (resp., 2(2k + 1)) [6,13,14]. Example 5. Table 4 is headed by k = 1, 2, 3 and n ∈ [0, C k ) followed by the corresponding anchored Dyck words 0f k (n); for k = 3 it contains information clarified in Section 13. Below its second horizontal line, the table presents each − → e k v in vertical fashion, with each pair of contiguous downward lines, say χ, χ ′ , representing an arc whose j-th entry is underlined, where j, shown as a subindex to the right, is the position containing the pair of k-supplementary entries in χ, χ ′ . The vertical column of such subindices j conform the reversed permutation rev(π v ) of π v associated to the anchored k-blown Dyck word v, for each such word v in V (O k ). For each value of k, the columns of (2k + 1)-tuples χ will be denoted L(n) (0 ≤ n < C k ). For k = 3, the subindex j of each (2k + 1)-tuple χ in a vertical list L(n) is further extended, first with the value m of the TRGS b(m) such that 0F k (m) is the Dyck nest representing the corresponding Z 2k+1 -class [0F k (m)] of χ in V (O k ), and second with the index j ′ such that χ = j ′ F k (m), in the notation of (10). Additional underlined entries and superindices with middle up-or-down vertical arrows are explained in Section 13. Partitions of odd-graph vertex sets So far, we have two different partitions of V (O k ) into (2k + 1)-subsets. 
In terms of anchored k-blown Dyck nests, these partitions are: where v = 0F k (n), for 0 ≤ n < C k in Section 9, from [13]. Item 2 refers to the graph-theoretical partition of V (O k ) that leads to the determination of Hamilton cycles both in O k and M k (k > 3) [14] via the modular arc factorization of M k mentioned at the end of Section 7. Note that the Petersen graph M 3 is hypo-hamiltonian, a constraint for Section 13 and its Theorem 23 that reformulate those determinations. Let (P 0 , P 1 , P 2 , . . .) be a partition of V (T ) into threads P i , each thread inducing a path T [P i ] with an initial vertex b(n i 0 ) such that γ(n i 0 ) > 1 and its remaining vertices b(n i j ) = b(n i 0 + j) such that γ(n i j ) = γ(n i 0 + j) = 1, for 0 < j ≤ s i , where the length s i of P i is maximal. Here, the indices n i 0 form an integer sequence (n 0 0 , n 1 0 , n 2 0 , . . .), as shown vertically on the second, sixth and tenth columns on the triptych left of Table 5. The induced paths T [P i ] have respective lengths s i and values γ i = γ(n i 0 ) shown subsequently in the triptych. Table 6 shows the first 42 TRGS b(n) = b(n i j ), with the columns (divided again as in a triptych) headed n i j , b(n i j ), ρ(n i j ), γ(n i j ) and h(n i j ), where h(.) is introduced in Observation 6. We reunite the threads P i into braids, which are subsets Q ℓ (0 ≤ ℓ ∈ N) of V (T ), each inducing a maximal connected ordered subtree with initial vertex b(m ℓ 0 ) such that γ(m ℓ 0 ) > 2 and its remaining vertices b(m ℓ j ) such that γ(m ℓ j ) = γ(m ℓ 0 +j) ∈ {1, 2}, for 0 < j ≤ P i ⊆Q ℓ s i . This yields a partition {Q ℓ } of V (T ) coarser than {P i }. The growth of the tree T of Dyck nests F (n) will be simplified further from that given in the procedure of Section 4 by recursively updating just one entry of the parent clone σ(ρ(n)) to obtain the clone σ(n). This uses an equivalence of the set of anchored k-blown Dyck nests F (n) and that of their clones σ(n), provided in Theorem 8, below. Observation 6. In the transformation from F (ρ(n)) to F (n) in Section 4, k 1 k 2 is present either in X or in Y ; if k 1 k 2 is in X, then σ γ(n) (n) depends on k because of blowing, and so σ γ(n) (n) = k + h(n), for some value h(n) < 0; if on the contrary k 1 k 2 is in Y , then σ γ(n) (n) does not depend on k, and so σ γ(n) (n) = h(n), for some h(n) ≥ 0. In both cases, the remaining entries of σ(n) other than σ γ(n) (n) are kept the same in F (n) as in F (ρ(n)). Theorem 8. The correspondence that assigns each n-nest to its clone is a bijection. Setting successively 1 2 , 1 1 , 2 2 , 2 1 , . . . , (k − 1) 2 , (k − 1) 1 instead of the zeros of F (n, 0) from right to left according to the indications σ i (n), for i = 1, 2, . . . , k − 1, is done in stages F (n, i − 1) → F (n, i) by setting each pair (i 1 , i 2 ) as an outermost pair, only constrained by the presence of already replaced positions; after setting a value i 1 in the initial position, we restart if necessary on the right again with the replacement of the remaining zeros by the remaining pairs (i 1 , i 2 ) in ascending order from right to left. This allows to recover F (n) from the σ(n)'s by finally replacing the only resulting substring 00 in F (n, k −1) by We introduce a family {Φ j |j > 0} of subsequences of N defined by the following properties: Theorem 9. If either b(n) = 10 · · · 0 or b(n) is the endvertex of a thread, then h(n) = 0 ∈ S. Proof. The first case in the hypothesis insures the presence of all feasible substrings j 1 j 2 in F (n). 
The second case insures at least the presence of the substring 1 1 1 2 in F (n). Theorem 10. Let n ∈ N. Then, ∃ r > n such that b(r) = 1|b(n) and Proof. Item 1 occurs exactly when the substring k 1 k 2 in F (n) changes position from one side of 1 1 to the opposite side in F (r), in the procedure of Section 4 starting at b(ρ(n)) and b(ρ(r)) and ending at b(n) and b(r), respectively. Otherwise, item 2 happens. Example 11. In the upper-left box of Table 8, the values h(·) for the five initial threads P i (i = 0, 1, 2, 3, 4) are disposed in order to start illustrating Theorems 9-10. Each such thread is shown with a singly underlined heading containing the value γ(·) > 1 of its initial vertex, and in the entries below the heading, the subsequent values h(·) for the vertices of the thread. In this upper-left box, the braid Q 0 = P 0 ∪ P 1 is over the braid Q 1 \ P 4 = P 2 ∪ P 3 . The final column for P 4 has its values h(·) also determined by the Theorems 9 and 10. In the upper-right box of Table 8, similar changes are observed with Q 0 ∪ Q 1 on top of Q 2 ∪ Q 3 , and to the right of them, the columns of Q 4 for threads P 10 , P 11 , P 12 , P 13 have their values h(·) determined. The lower box of Table 8 shows an upper level composed by threads P 0 , . . . , P 13 , a middle level composed by threads P 14 , . . . , P 25 and a lower level composed by threads P 26 , . . . , P 41 , disposed vertically as to facilitate verifying our assertions. Similarly with Table 9, where the threads P 0 , . . . , P 126 are disposed in four levels with their underlined headings γ(·) as above and the vertically disposed values of h(·) replaced by rings "•" if h(·) / ∈ Φ 1 and bullets "•" if h(·) ∈ Φ 1 . The final five threads here are shown in the box on the right hand side of Table 5 with the data disposed as in Table 8. Controlling odd and middle-levels graphs via T We introduce strings ξ b γ , for all pairs (γ, b) ∈ N 2 with 1 < γ ≤ b. The entries of each ξ b γ are integer pairs (α, β), denoted α β , starting with α β = 1 1 , initial case of the more general notation 1 β , for β ≥ 1. The strings ξ b γ are determined following Table 10. The components α in the entries α β represent the indices γ = γ(b(n)) (see Section 2) in their order of appearance in S, and β is an indicator to distinguish different entries α β with α locally constant. Next, consider the infinite string J of integer pairs α β formed as the concatenation with ξ 1 1 = * |1 1 standing for the first two lines of Tables 6 and 7, where * , representing the root of T , stands for the first such line, and ξ 1 1 for the second line. A partition of a string A is a sequence of substrings σ 1 , σ 2 , . . . , σ n whose concatenation σ 1 |σ 2 | · · · |σ n is equal to A. We recur to Catalan's reversed triangle ∆ ′ , whose lines are obtained from Catalan's triangle ∆ (see [4]) by reversing its lines, so that they may be written as in Table 11 that shows the first eight lines ∆ ′ k of ∆ ′ , for k ∈ [0, 7]. Each ξ b γ in the statement of Theorem 17 is presented in the Tables 6-7 in γ(n)-columnwise disposition. Proof. The statement represents the set of vertices of the induced truncated tree T [S ∩ b([0, C k ))], (1 ≤ k ∈ N) via the prefix ξ k k of J and the line ∆ ′ k−1 of ∆ ′ . Theorem 18. The sequence h(S\b(0)) can be recreated by stepwise generation of the induced truncated trees T [S ∩ b([0, C k ))], (1 ≤ k ∈ N). In the k-th step, the determinations specified in Theorems 9-10 are performed in the natural order of the TRGS's. 
The k-step completes those determinations, namely (n, h(n)) → (r, h(r)), for the lines of ∆ ′ corresponding to the sets ξ j j (j = 1, . . . , k −1), and ends up with the determinations (n, h(n)) → (r, h(r)) for j = k in the line corresponding to ξ k k−2 and (n, h(n)) → (r, h(r)) in the final line for j = k + 1, corresponding to X k k−2 . Proof. Theorem 18 is used to express the stepwise nature of the generation of the sequence h(S \ b(0)). The methodology in the statement is obtained by integrating steps applying Theorem 12 in the way prescribed, that yields the correspondence with the lines of ∆ ′ . allows to retrieve v by locating either its oriented (2k + 1)-(resp., (4k + 2)-) cycle in the uniform 2-factor (Section 9) or in a specific Z 2k+1 -(resp., D 2k+1 -) class (Sections 6-8) and then locating v in such cycle or class by departing from its only anchored Dyck word. The sequence (12) allows to enlist all vertices v by ordering their cycles or classes, including all vertices in each such cycle or class, starting with its anchored Dyck word. Proof. Let n ∈ N. Then, γ(n) yields the required update location in the TRGS b(n) ∈ S with respect to the parent TRGS b(ρ(n)) ∈ S, while h(n) yields the specific update, as determined in Theorems 17-18. This produces the corresponding clone. Then, Theorem 7 allows to recover the original Dyck word from that clone, and thus the corresponding vertex of O k (resp., M k ) by local translation in its containing cycle in the cycle factor of Section 9, or cyclic (resp., dihedral) class (as pointed out in Section 10). Hamilton cycles in odd and middle-levels graphs A flippable tuple [14] in a vertical list L(n) is a pair F T (n, j) of contiguous lines in L(n) having its k-supplementary entry pair at the j-th position counted from the right (j ∈ [0, 2k]). Let 2 < κ ∈ Z. A flipping κ-cycle [14] is a finite sequence of pairwise different vertical lists L(n j ), (n = 1, . . . , κ) determining a 2κ-cycle in O k containing successive pairwise disjoint edges whose endvertex pairs {χ j 0 , χ j 1 } are flippable tuples in their corresponding vertical lists L(n j ) (n = 1, . . . , κ), with the vertical pairs of number k-supplementary entries happening at pairwise different coordinate positions. Example 20. As mentioned for k = 3, the five right columns in Table 4 contain the lists L(n) (n ∈ [0, 4]). These lists contain in those five final columns additional information that allows to assemble the flipping triples τ 0 = (L(0), L(1), L(2)) and τ 1 = (L(0), L(3), L(4)), with the initial line 0F 3 (n) of each such L(n) having a sole underlined entry per flippable tuple corresponding to the underlined entries of its two constituent contiguous (2k + 1)tuples, say χ 0 , χ 1 . Here, χ 1 is provided with a superindex containing: (i) an index z ∈ {0, 1} relating to a sole associated triple τ z (z ∈ {0, 1}); (ii) a vertical arrow indicating a definite orientation of the edge χ 0 χ 1 which determines an arc χ ′ χ ′′ with {χ 0 , χ 1 } = {χ ′ , χ ′′ }; (iii) an index n ′ such that L(n ′ ) is in τ z and contains a flippable tuple determining an arc χ ′′′ χ ′′′′ so that the arc χ ′′ χ ′′′ is in τ z . By considering the three flippable tuples obtained in this way and the additional neighbor adjacencies, an oriented 6-cycle is obtained. The triangles τ 0 , τ 1 form the two hyperedges of a connected acyclic hypergraph on the vertex set {L(n)|i ∈ [0, 4]} that yields the simplest case of the construction of a Hamilton cycle in the odd graphs, in this case in O 3 . 
Each of τ 0 and τ 1 yields a 21-cycle in O 3 by means of symmetric differences. The presence of L(0) in both τ 0 and τ 1 then allows to transform both 21-cycles into the claimed Hamilton cycle of O 3 by means of corresponding flippable tuples in L(0), one for τ 0 and the other one for τ 1 . The respective triples of Dyck words ξ j 1 w or ξ j 1 w or ξ j i or ξ j i (j = 2, 3, 4) may be expressed as follows by replacing the Greek letters ξ by the values of the correspondence Φ: where we can also write (5 1 3 , 4 2 3 , 0 3 3 ) = (0 1 3 , 1 2 3 , 5 3 3 ). The flippable tuples F T (i, j) allow to compose five flipping 6-cycles and one flipping 8-cycle, allowing to integrate by symmetric differences a Hamilton cycle in O 4 . We represent H k as a simple graph ψ(H k ) with V (ψ(H k )) = V (H k ) by replacing each hyperedge e of H k by the clique K(e) = K(V (e)) so that ψ(H k [e]) = K(e), being such replacements the only source of cliques of ψ(H k ). A tree T of H k is a subhypergraph of H k such that: (a) ψ(T ) is a connected union of cliques K(V (e)); (b) for each cycle C of ψ(H k ), there exist a unique clique K(V (e)) such that C is a subgraph of K(e). A spanning tree T of H K is a tree of H k with V (T ) = V (H k ). Clearly, the subhypergraphs H ′ k of H k for k = 3 and 4 are corresponding spanning trees. A subset G of hyperedges of H k is said to be conflict-free [14] if: (a) any two hyperedges of G have at most one vertex in common; (b) for any two hyperedges g, g ′ of G with a vertex in common, the corresponding images by Φ (as in display (15)) in g and g ′ are distinct. A proof of the following final result is included, as our viewpoint and notation differs from that of its proof in [14]. Theorem 23. A conflict-free spanning tree of H k yields a Hamilton cycle of O k , for every k ≥ 3. Moreover, distinct conflict-free spanning trees of H k yield distinct Hamilton cycles of H k , for every k ≥ 6. Proof. Let D k be the set of all Dyck words of length 2k and, recalling display (13), let In particular, 0101(01) k−2 ∈ E k and 0011(01) k−2 ∈ F k . Now, let Let us set F k as a function of E 2 , . . . , E k−1 , F 2 , . . . , F k−1 , T k−2 , as follows: For 1 < j ≤ k, let F j k = ∪ j i=2 {0u1v|u ∈ D i−1 , v ∈ D k−1 }. Since F k = F k k , then the following implies the existence of a spanning tree of H k [F k ]. Lemma 24. For every 1 < j ≤ k, there exists a spanning tree F j k of H k [F j k ]. Proof. Lemma 7 [14] asserts that if τ is a flippable tuple and u, v are Dyck words, then: (i) uτ v is a flippable tuple if |u| is even; (ii) uτ v is a flippable tuple if |u| is odd. Lemma 8 [14] insures that the collections in (13) are flippable tuples. Using those two lemmas of [14], we define Ψ as the set of all the flippable tuples uτ v and uτ v arising from (13). Moreover, we define Ψ 2 = ∅ and Ψ k = Ψ ∩ D k , for k > 2. Proof. For each vertical list L(i), let L M (i) be a corresponding vertical list in M k which is obtained from L(i). Then, Theorem 23 can be adapted to producing Hamilton cycles in the M k by repeating the argument in its proof in replacing the lists L(α) by lists L M (α), since they have locally similar behaviors, being the cycles provided by the lists L M (α) twice as long as the corresponding lists L(α), so the said local behavior happens twice around opposite (rather short) subpaths. 
Combining Dyck-word triples and quadruples as in display (13) into adequate pullback liftings (of the covering graph map M k → O k in the lists L M (α) of those parts of the lists L(α) in which the necessary symmetric differences take place to produce the Hamilton cycles in O k will produce corresponding Hamilton cycles in M k .