Matched Paired Primary and Recurrent Meningiomas Points to Cell-Death Program Contributions to Genomic and Epigenomic Instability along Tumor Progression Simple Summary Meningioma (MN) is the most frequent primary brain tumor with a high frequency of recurrences and a lack of objective tools for predicting their prognosis. In this study, we analyzed a careful selection of patients in which both the primary tumor and at least one recurrence were available, allowing us to extend the changes that occur during tumor progression. We developed a histological, genetic, and epigenetic analysis of the samples. Thus, we identified markers of quick recurrence, increased tumor instability by copy number alterations, and the accumulation of epigenetic changes during tumor progression. Interestingly, the genes involved seemed to be randomly distributed along the genome but eventually suggest a common impact on cell-death programs such as apoptosis and autophagy. Abstract Meningioma (MN) is an important cause of disability, and predictive tools for estimating the risk of recurrence are still scarce. The need for objective and cost-effective techniques addressed to this purpose is well known. In this study, we present methylation-specific multiplex ligation-dependent probe amplification (MS-MLPA) as a friendly method for deepening the understanding of the mechanisms underlying meningioma progression. A large follow-up allowed us to obtain 50 samples, which included the primary tumor of 20 patients in which half of them are suffering one recurrence and the other half are suffering more than one. We histologically characterized the samples and performed MS-MLPA assays validated by FISH to assess their copy number alterations (CNA) and epigenetic status. Interestingly, we determined the increase in tumor instability with higher values of CNA during the progression accompanied by an increase in epigenetic damage. We also found a loss of HIC1 and the hypermethylation of CDKN2B and PTEN as independent prognostic markers. Comparison between grade 1 and higher primary MN’s self-evolution pointed to a central role of GSTP1 in the first stages of the disease. Finally, a high rate of alterations in genes that are related to apoptosis and autophagy, such as DAPK1, PARK2, BCL2, FHIT, or VHL, underlines an important influence on cell-death programs through different pathways. Introduction Meningiomas (MN) are the most frequent primary intracranial tumors [1,2]. They cause many surgical procedures every year, and despite being usually benign, their delicate location makes them a lurid cause of disability. It is disconcerting that after initial successful treatments, the overall recurrence rate is above 20% of the cases [3,4]. In other words, after the first neurosurgery, many patients are condemned to a second one with all clinical consequences affecting their wellness. The diagnosis of meningioma relies on morphological features related to their tendency to recur. The last edition of the WHO's classification describes 15 different meningioma subtypes: 9 grade 1 MN that show slow growth rates and benign biological behavior, 3 grade 2 MN, which show an increased risk of recurrence, and 3 grade 3 MN, displaying an aggressive clinical outcome and elevated recurrence rates. In addition, some molecular biomarkers such as TERT promoter mutation and/or the homozygous deletion of CDKN2A/B have now been included as the criteria for the diagnosis of grade 3 MN because of their associations with tumor aggressiveness [1]. 
The loss of chromosome 22 and/or del (22q) is by far the most frequent chromosomal alteration in meningiomas and involves the NF2 tumor-suppressor gene (TSG). Its alteration is an early event in all WHO grades, and it is relevant for MN development and progression [5]. Higher-grade MN usually exhibit more complex genetic changes, with losses on 1p, 6p/q, 10q, 14q, and 18p/q, and the deletions of CDKN2A/B, in which the latter is confirmed as a progression event in NF2-altered mice models [1,6]. Genomic sequencing of different series of sporadic MN defined two subsets of MN according to NF2 status: Mutated or lost NF2 characterizes the first while different alterations in AKT1 or SMO, TRAF7, KLF4, or PIK3CA characterizes the second. In non-NF2 MN, single mutations of TRAF7, AKT1, KLF4, and SMO (TRAKLS mutation genotype) or combinations of some of them seem to be associated with WHO grade 1 and favorable progression-free survival [7][8][9][10][11]. Conversely, MNs with altered NF2 are more likely to be atypical, and additional copy number alterations or general genomic instability tend to be more frequent in this MN group [7,8]. In this context, the genes recently involved in MN have been mainly linked to WHO grade 1 MN, displaying a favorable outcome. However, grade 1 MN causes the majority of recurrences in absolute numbers, emphasizing that there is still scarce knowledge on how to predict which grade 1 MN will behave in an aggressive manner [2]. An interesting approach for improving our understanding is the study of matched primary and recurrent samples, although there are not many large series previously reported. Recurrent meningiomas constitute a major health problem and the follow-up of the patients is problematic. A common situation in patients diagnosed as grade 1 MN is that, after the radiological identification of a recurrence, clinicians are cautious and ask the patients to wait in order to observe an evolution. This fact leads many patients to seek a second opinion to achieve surgical resections rapidly, complicating the generation of a large series of matched primaryrecurrent MNs. In fact, we can find previous studies with many tumor samples collected over 10 years but the studies only have a small percentage of paired recurrences [9]. The present study addresses this problem and focuses on paired primary and recurrent tumors. We aim to retrospectively characterize primary tumors that have recurred and study their genetic and epigenetic landscape both on the primary neoplasm and in their recurrences. The strength of this study is the long period of sample collection that allow us to present a series of paired samples of meningioma from 20 patients. Our cohort includes primary tumors and one or more recurrences, reaching a total of 50 paired samples. In these samples, we offer a picture of copy number alterations (CNAs) and epigenetic aberrations (HMs) to better understand the characteristics of these recurrent meningiomas from both grade 1 and grade 2-3 primary neoplasms. Patients, Samples, and Clinical Study Tumor samples from 20 patients diagnosed with meningioma from the Hospital Clínico Universitario in Valencia (HCUV) were collected between 1986 and 2011. They comprised 50 samples: 20 primary tumors (PT) and the recurrences (RCs) suffered by these patients. First, RC was achieved from all patients except one (19 RC1). In addition, 6 matched second recurrences (RC2), 3 samples from third recurrences (RC3), and 2 fourth recurrences (RC4) were included in this study. 
Globally, 20 PTs and 30 RCs were collected (50 samples). For the case in which RC1 was not accessible, clinical data and the second recurrence were available. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Ethics Committee at the University of Valencia and Hospital Clínico Universitario de Valencia (protocol 2014/183). Clinical information was accessed from the historical archive of the hospital, including data on age, sex, and recurrence-free survival period (RFS). None of the patients received chemotherapy or radiotherapy before the first surgery. After surgery, tumor specimens were fixed in neutral-buffered formalin, embedded in paraffin, sectioned, and stained with hematoxylin-eosin. Samples were categorized according to the WHO classification [1]. Among the histopathological features, mitoses were counted on 10 high-power fields (10 HPF), and the presence of prominent nucleoli, increased cellularity, necrosis, infiltration of the dura mater (or adjacent CNS structures), and sheeting was determined. Molecular Analysis Selected areas of the paraffin blocks from each sample were used for DNA extraction using the QIAamp DNA FFPE tissue kit (Qiagen, Inc., Valencia, CA, USA). The quality and quantity of DNA were improved by using the standard ethanol precipitation procedure in all tumor and control samples. Multiplex ligation-dependent probe amplification using the SALSA MLPA Probemix P044-NF2 kit (MRC-Holland, Amsterdam, The Netherlands) was performed following the manufacturer's instructions to assess the genetic status of NF2. This kit included 17 probes covering all the exons of the gene, and the CNA status of NF2 was determined as their average value. A threshold of x < 0.75 was established to classify losses and 0.7 < x < 1.3 was considered wild-type (wt), based on previous descriptions [12]. Methylation-specific multiplex ligation-dependent probe amplification (MS-MLPA) was performed to determine the methylation status of 24 genes using a SALSA MLPA kit (ME001-C2, lot 0808), following the manufacturer's instructions (MRC-Holland). The genes included were TP73*, CASP8, FHIT, RASSF1*, VHL, MLH1*, CASR, RARB*, APC, ESR1*, PARK2, CDKN2A*, CDKN2B*, DAPK1*, CREM, PTEN*, CD44, GSTP1*, CD27, ATM*, PAH, BRCA2*, MLH3, TSC2, CDH13*, HIC1*, BCL2, KLK3, and TIMP3*. The (*) indicates the genes for which the probes allowed us to determine both the CNA and the methylation status (HM). They were mainly TSGs that were selected because they are already known to frequently display genetic alterations in meningioma but little is known about their epigenetic status, or vice versa (e.g., CDKN2A/B, PTEN, TIMP3, CD44, RASSF1, and TP73). Other genes have important functions in different cancer-related processes, e.g., the regulation of tumor growth, cell-cycle control, differentiation and proliferation, angiogenesis, cell adhesion, DNA damage repair, and apoptosis [12][13][14][15][16]. Briefly, DNA was denatured at 98 °C for 5 min and hybridized with the appropriate probe mix at 95 °C for 1 min followed by a 60 °C overnight incubation. Ligation and digestion reactions with HhaI were carried out at 48 °C for 30 min followed by a step at 98 °C for 5 min. PCR was performed using the SALSA PCR primer mix and SALSA polymerase and consisted of 35 cycles of 95 °C/30 s, 60 °C/30 s, and 72 °C/1 min with a final step at 72 °C/20 min (all reagents from MRC-Holland).
The thresholds established were x < 0.75 as losses, 0.7 < x < 1.3 as normal, 1.3 < x < 2 as unspecific, and x > 2 as gains, according to previous reports [12,15]. The studied genes that were wt in all the samples were removed from the data shown. The most frequent CNA detected was the loss of one allele; however, in order to refer to all alterations detected, including sporadically homozygous deletions or gains, we refer to them as CNA. The amplified fragments were separated by capillary electrophoresis in an ABI 310 Sequencer (Applied Biosystems, Inc., Foster City, CA, USA) and were analyzed with Coffalyser excel-based software (MRC-Holland). Data were intra-normalized and results above 20% were considered positive for promoter hypermethylation, as previously described [12,16]. We used three non-related blood samples from healthy donors as negative controls. MLPA results were analyzed to obtain information about what genes were genetically or epigenetically affected in meningioma. Furthermore, based on previous reports, we took into account the total amount of genetic and epigenetic changes per case to determine the copy number's alteration burden [15,17] and also the epigenetic burden. Fluorescence In Situ Hybridization Fluorescence in situ hybridization (FISH) studies for chromosomes 1, 14, and 22 were performed to validate the MLPA kits used. A random cohort of 50% of the paraffinembedded samples was selected. Non-neoplastic tissues from the brain were used as control. To carry out the FISH analysis probes, LSI 22q12, LSI 1p36/LSI 1q25, and t (11;14) IGH/CCND1 were used according to the manufacturer's instructions (Vysis, Abbot scientific, Madrid, Spain). The process of counterstaining nuclei was performed using DAPI. The fluorescent signals were detected using a Leica LAS AF photomicroscope with appropriate filters. Signals were counted in a range of 100-150 non-overlapping tumor cell nuclei per case. An interpretation of deletion was made when >20% of the nuclei harbored losses based on the cutoffs established in control samples [18]. FISH probe LSI 22q12 was compared to the data obtained from the P044-NF2 kit. The FISH analysis of chromosomes 1 and 14 was compared to the values obtained from the TP73 and MLH3 MLPA probes from the ME001-C2 kit, respectively, which are located on 1p36 and 14q24.3, respectively. Cohen's Kappa (K) statistic was used to determine the agreement between both assessments. 0 < K < 0.2 was considered as slight agreement, 0.2 < K < 0.4, as fair agreement, 0.41 < K < 0.60 as moderate agreement, 0.61 < K < 0.8 as substantial agreement, and 0.81 < K < 1 as almost perfect agreement. Statistics Statistical analysis was performed with IBM SPSS v. 24 software (IBM, Madrid, Spain). When possible, the variables were categorized. Quantitative variables were evaluated by Kolmogorov-Smirnov and Levene tests; depending on their results, Student's T, Mann-Whitney's U, or Kruskal-Wallis tests were carried out. Categorical variables were evaluated using the Chi-square (χ 2 ), Fisher's exact, and Cramer's V statistic tests depending on their characteristics. Bivariate correlation analysis was performed using Pearson's statistic for association among variables. Significance was accepted when the probability level was p < 0.050. Kaplan-Meier curves were built in SPSS for primary tumors stratified by genetic alterations to evaluate differences in recurrence-free survival. 
In addition, we built Cox regression hazards models to evaluate these genetic alterations as independent prognostic factors in primary meningiomas. Clinical Data and Histopathological Results This study analyzed 50 samples that came from 20 patients: 60% were men and 40% were women. Patient age at diagnosis ranged from 7 to 68 years, with an average of 52.0 ± 3.4 years. Of note is that 70% of the cases were under 60 years old at diagnosis. The primary tumors were located at the sphenoid wing in 20% of the patients; in the olfactory groove and frontal and parietal location in 15% of patients; in posterior cranial fossa, parasagittal, and occipital in 10% of patients; and ventricular location in 5% of patients. The average tumor size was 5.3 cm 3 , ranging from 2 to 8 cm 3 . Upon initial diagnoses, the standard treatment consisted of maximal surgical resection in all patients. Recurrence-free survival period (RFS) ranged from 10.2 to 120.0 months, with a mean RFS of 46.8 months. Fifteen percent of the cases recurred before 1.5 years from diagnosis. Among the 20 patients, 10 suffered a single recurrence and 10 suffered more than one recurrence. Thus, the series included 20 primary tumors (PT), 19 first recurrences (RC1), and 11 subsequent recurrences (RC+). The main clinical features are shown in Table 1. Histologically, all 20 PT demonstrated characteristics of MN. Following the last WHO classification [1], primary MNs were diagnosed as grade 1 in 8 cases, grade 2 in 10 cases, and grade 3 in 2 cases. All the recurrences were diagnosed as grade 2 and 3, in which there were 14 grade 2 and 5 grade 3 recurrences (Table 1). For subsequent analysis, grades 2 and 3 were considered together. Primary grade 1 MNs were diagnosed upon histology as transitional (four cases), meningothelial (three cases), and fibrous (one case). All these grade 1 cases showed less than three morphologic criteria of aggressiveness: High cell densities were found in one case, nuclear atypia was observed in two cases, prominent nucleoli were observed in one case, sheeting was observed in four cases, necrosis was observed in two cases, and mitosis was observed at an average of 1.1 ± 0.5. All grade 2-3 primary MNs presented three or more morphologic criteria of aggressiveness: high cell density was observed in 7 cases, nuclear atypia was observed in 5 cases, prominent nucleoli were observed in 6 cases, sheeting was observed in 10 cases, necrosis was observed in 7 cases, and mitosis count showed an average of 3.5 ± 1.1. The infiltration of the dura was quite frequent, and it is observed in seven grade 2-3, and in five grade 1; but the infiltration of the CNS surrounding structures was only found in one grade 2-3 meningioma. The Ki-67 index showed similar values in grade 1 and grade 2-3 PTs as an average 4.7% of the cells in the former and 4.1% in the latter. Primary Meningiomas with Similar Outcomes Displayed Genetic Differences Depending on the Grade The average of CNA detected in PT was 4.9 ± 0.6 CNA per case, showing a similar tumor mutation burden (TMB) in the PT of the different WHO grades: It was 5.1 ± 1.1 CNA per case in grade 1 PT and 4.9 ± 0.7 in grade 2-3 PT ( Table 2). The NF2 gene showed a loss of heterozygosity (LOH) in 65.0 % of the PT. It showed LOH in 50% of grade 1 PT, and 75% of grade 2-3 PT (p = 0.029). Regarding other analyzed genes, the ones that displayed CNA the most were TP73 in 1p36 (40%); ESR1 and PARK2 in 6q (50 and 35.0%, respectively); BCL2 in 18q21 (40.0%); and TIMP3 in 22q12 (45.0%) ( Table 2). 
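As an aside, the burden measures used throughout this study (CNA per case and hypermethylated genes per case) and their group comparisons can be sketched in a few lines of Python. This is an illustration only, with a hypothetical alteration matrix and column names; it is not the SPSS workflow used by the authors, although the Mann-Whitney U test is one of the tests named in the Statistics section.

import pandas as pd
from scipy.stats import mannwhitneyu

# rows = primary tumors, columns = alteration calls (1 = altered, 0 = wild-type); toy data
cna = pd.DataFrame({
    "NF2":   [1, 1, 0, 1, 0, 1],
    "TP73":  [1, 0, 0, 1, 0, 0],
    "TIMP3": [0, 1, 0, 1, 0, 1],
})
grade = pd.Series(["1", "1", "1", "2-3", "2-3", "2-3"])  # WHO grade of each PT

burden = cna.sum(axis=1)                       # CNA burden: altered genes per case
print(burden.groupby(grade).agg(["mean", "sem"]))

# compare burdens between grade groups (nonparametric two-group comparison)
u, p = mannwhitneyu(burden[grade == "1"], burden[grade == "2-3"])
print(f"Mann-Whitney U = {u}, p = {p:.3f}")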
Comparing MN from different grades, we found that grade 1 PT demonstrated a significantly higher frequency of CNA on CDH13, showing losses in 50% of grade 1 PT and none in grade 2-3 PT (p = 0.014). Conversely, grade 2-3 PT showed significantly higher rates of alterations in MLH3 (14q), accounting for 50.0% of grade 2-3 PT and none in the grade 1 PT that recurred (p = 0.048), and BCL2 (18q) in 58.3% of grade 2-3 vs. 12.5% in grade 1 (p = 0.054) (Figure 1a). The hypermethylation study showed a tumor epimutation burden (TEB), measured as the average number of genes hypermethylated per case, of 1.5 ± 0.3. PT showed an average TEB of 1.1 ± 0.5 in grade 1 PT and 2.1 ± 0.3 in grade 2-3 PT (p > 0.05). The most frequently hypermethylated genes in PT were RASSF1A, CDKN2A/B, and CDH13 (Table 2). No epigenetic difference in any single gene reached statistical significance per se in these PT. However, the increase in Ki67 index in the cases showing CNA of TP73 (7.7% vs. 2% in wt), ESR1 (6.7% vs. 1.7%), and CD44 (7.7% vs. 3.5%) is noteworthy. Interestingly, cases with losses of NF2 showed an average of 2.2 ± 1.3 hypermethylated genes while it was 0.4 ± 0.5 in the cases with NF2 wild-type (p = 0.001). Recurrence-Free Survival Associated with the Epigenetic Burden and Specific Changes We analyzed the association between these genetic (CNA) and epigenetic alterations (HM) and the recurrence-free survival period (RFS) until the first recurrence. Independently of the tumor grade, patients that recurred before 1.5 years (n = 3) presented 3.7 ± 0.3 HM genes per case while patients that suffered recurrences after 1.5 years (n = 17) presented 1.4 ± 0.3 (p = 0.007) (Figure 1b). It was significant that all these cases that recurred early showed the epigenetic alteration of RASSF1A (p = 0.031) and 2/3 of them showed epigenetic alteration of PTEN (p = 0.046). Regarding CNA burden, the average was also higher but not significant, reaching 7.3 ± 1.2 genes in the early recurrent patients but 4.6 ± 0.7 in the late ones (p > 0.05). Recurrence-free survival (RFS) curves were calculated on primary meningioma samples using the Kaplan-Meier analysis, and the differences between curves were assessed by the log-rank test. Interestingly, the CNA of HIC1 and the HM of CDKN2B and PTEN showed statistical associations with the time-to-recurrence curves (Figure 1c). The CNA of HIC1 caused a 3.2-fold reduction in RFS (p = 0.002), and the epigenetic alteration of PTEN and CDKN2B resulted in 3.5-fold and 2.2-fold reductions in the time to the first recurrence, respectively (p = 0.001 for PTEN and p = 0.021 for CDKN2B). The MN with these genetic and epigenetic alterations had a significantly shortened time until recurrence. To confirm these results, we used the Cox proportional-hazards regression method, confirming a higher hazard ratio of recurrence in the same period for tumors with a CNA of HIC1 (HR = 39, 95% CI between 5.2 and 294, p < 0.001), HM of PTEN (HR = 26.3, 95% CI between 2.9 and 235, p = 0.003), or HM of CDKN2B (HR = 5.9, 95% CI between 1.4 and 25, p < 0.017) than for tumors without these alterations (Figure 1d). Afterwards, we explored the status of chromosomes frequently altered in MN (22, 1, and 14) by FISH as a second technique to validate our MLPA results. FISH analysis revealed losses of chromosome 22 in 60.9% of the samples, losses of chromosome 1p in 38.1% of cases, and losses of chromosome 14q in 40.0%.
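The Kaplan-Meier, log-rank, and Cox proportional-hazards analyses described above can be reproduced with standard survival-analysis tooling. The following is a minimal sketch using the lifelines Python package rather than the SPSS workflow reported in the paper; the data frame, its values, and its column names are hypothetical placeholders for the per-patient RFS and marker status.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# toy data: months to first recurrence (all primaries recurred, so event = 1)
df = pd.DataFrame({
    "rfs_months": [12, 30, 48, 70, 95, 110, 18, 40],
    "recurred":   [1, 1, 1, 1, 1, 1, 1, 1],
    "HIC1_cna":   [1, 1, 0, 0, 1, 0, 1, 0],   # 1 = copy-number alteration
    "PTEN_hm":    [1, 0, 0, 1, 0, 0, 1, 0],   # 1 = hypermethylated
})

altered = df["HIC1_cna"] == 1
km_alt, km_wt = KaplanMeierFitter(), KaplanMeierFitter()
km_alt.fit(df.loc[altered, "rfs_months"], df.loc[altered, "recurred"], label="HIC1 altered")
km_wt.fit(df.loc[~altered, "rfs_months"], df.loc[~altered, "recurred"], label="HIC1 wild-type")
print(km_alt.median_survival_time_, km_wt.median_survival_time_)

# log-rank test between the two curves
res = logrank_test(df.loc[altered, "rfs_months"], df.loc[~altered, "rfs_months"],
                   df.loc[altered, "recurred"], df.loc[~altered, "recurred"])
print("log-rank p =", res.p_value)

# Cox proportional-hazards model: hazard ratios for the candidate markers
cph = CoxPHFitter()
cph.fit(df, duration_col="rfs_months", event_col="recurred")
print(cph.summary[["exp(coef)", "p"]])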
Comparing these FISH results to the MLPA data obtained from NF2 for chromosome 22, TP73 for 1p, and MLH3 for 14q (Table 2), we found a concordance of 88.9% when the status was wt and a concordance of 92.9% for the losses detected for chr. 22. This resulted in a Cohen's K (chr. 22) = 0.817, which indicates almost perfect agreement. For chr. 1, we found a concordance of 84.6% for wt samples and 75.0% for the losses, which led to a Cohen's K (chr. 1) = 0.512, indicating moderate agreement. Finally, for chr. 14 we found concordance for wt status in 83.3% and for losses in 87.5%, resulting in a Cohen's K (chr. 14) = 0.694, indicating substantial agreement. Thus, globally, the concordance was good. Copy Number Alterations and Hypermethylation Increased in the First Recurrence All cases included in this study recurred; thus, we could analyze the genetic and epigenetic features that changed from the PT to the RC1. The average number of CNA increased from the initial 4.9 ± 0.6 up to 7.2 ± 0.7 genes per case in those first recurrences (RC1, p = 0.021). Regarding the epigenetic status, the average number of hypermethylated genes was 1.5 ± 0.3 in PT and 2.9 ± 0.6 in RC1 (p = 0.029). Globally, the first recurrence (RC1) displayed subtle increases in CNA and HM affecting a variety of genes, but none reached statistical significance (Figure 1e). However, it is worth mentioning that the NF2 deletion that was present in 65.0% of PT increased to 78.9% in RC1. Other changes were less frequent but displayed increases of 2.5-fold or more: CNA on CASP8 (in 2q) and TSC2 (in 16p), which rose from 10.0% of PT to 26.3% of RC1 in both cases; CNA on CDH13 (in 16q), from 20.0% to 36.8%; CNA on KLK3 (in 14q), from 10.0% of PT to 31.6% of RC1; and HM in GSTP1 (in 11q), from 5.0% of PT to 26.3% of RC1 (Table 2). Pearson's correlation analyses offered a correlation matrix between genetic and/or epigenetic alterations in these primary meningiomas and first recurrences that showed numerous significant differences (Figure 1f). PT showed fewer statistically significant correlations (25 correlations), which were weaker and exclusively positive correlations between CNAs, offering a landscape of co-alterations of those genes in PT (Figure 1(f1), lower left quadrant) different from that of RC1. RC1 displayed more statistically significant correlations (46 correlations) with higher positive and negative correlation coefficients; this meant that RC1 displayed both co-alterations and alternative alterations that affected pairs of genes involved in distinct pathways (Figure 1(f2), lower left quadrant). The CNA of CDH13 is the alteration correlating with CNA in most other genes in both PT (6 correlations) and RC1 (10 correlations). Most correlations between CNA events were conserved in RC1 vs. PT despite the increase and occurrence of those exclusive changes. Interestingly, the CNA in CASP8 did not show any association with other CNA in PT but showed negative correlations with CNA in four genes in RC1 (CD44, CDH13, TIMP3, and TP73). KLK3 also showed only one correlation for CNA in PT, whereas it correlated with nine other CNA in RC1. Associations between CNA and epigenetic changes showed 8 positive correlations in PT (Figure 1(f1), upper left quadrant), while RC1 showed 10 positive correlations and 3 negative correlations (Figure 1(f2), upper left quadrant). None of these correlations can be explained by the proximity of gene chromosomal loci.
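The co-alteration matrices just described (Figure 1f) amount to Pearson correlations between binary alteration calls across cases. A small sketch of such a computation is given below; the alteration matrix, its values, and the gene pairs are hypothetical, and pandas/scipy are used here only for illustration (the paper's analysis was run in SPSS).

import pandas as pd
from scipy.stats import pearsonr

# rows = cases; 1 = alteration present (CNA or hypermethylation), 0 = wild-type
pt = pd.DataFrame({
    "CDH13_cna": [1, 0, 1, 1, 0, 0],
    "CASP8_cna": [0, 0, 1, 0, 0, 1],
    "KLK3_cna":  [0, 1, 0, 0, 1, 0],
    "PTEN_hm":   [0, 1, 0, 0, 1, 1],
})

corr = pt.corr(method="pearson")   # correlation matrix analogous to Figure 1f
print(corr.round(2))

r, p = pearsonr(pt["CDH13_cna"], pt["PTEN_hm"])   # significance for one pair
print(f"CDH13 CNA vs PTEN HM: r = {r:.2f}, p = {p:.3f}")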
These correlations between CNA and hypermethylation were in general not conserved between primary and recurrent meningiomas, as most of the correlations found between CNA and hypermethylation in RC1 were not observed in PT (only 1 out of 14). Conversely, only 1 out of the 10 correlations between CNA and hypermethylation in PT was present in RC1. Notably, TP73, a well-known tumor suppressor gene reported previously in meningiomas, showed no correlation in PT but demonstrated three negative correlations with DNA hypermethylation in RC1 (TP73 itself, PTEN, and HIC1). Finally, correlations between hypermethylation events (upper right quadrants) were scarce both in PT and RC1 (only 3 and 4, respectively). The lower right quadrants are empty by construction of the method. Recurrence Genetic Background Depended on the Histologic Grade of the Primary Tumor The lapse of time from the PT to the first recurrence was 53.5 ± 13.8 months for recurrent grade 1 PT and 45.0 ± 11.3 for grade 2-3 PT (p > 0.050), but their genetic and epigenetic changes were different, as described above. Thus, we analyzed the genetic background of the first recurrence of these cases (n = 19) according to the WHO grade of the primary tumor from which each one evolved instead of their own WHO grade (Figure 2). On average, we found 7.5 ± 1.4 CNA per case in RC1 from grade 1 PT (compared to 4.9 in grade 1 PT, p > 0.050) and 6.9 ± 0.8 in RC1 from grade 2-3 PT (compared to 4.8 in grade 2-3 PT, p > 0.050). Regarding epigenetics, we found 2.9 ± 0.8 HM genes per case in RC1 from grade 1 PT (compared to 1.1 in grade 1 PT, p > 0.050) and 3.0 ± 0.8 in RC1 from grade 2-3 PT (compared to 1.8 in grade 2-3 PT, p > 0.050). Thus, globally, the figures seemed similar. Although the burden was similar, it is of interest that RC1 from grade 1 PT displayed CNA in DAPK1 with a significantly higher frequency, showing losses in 50% of RC1 from grade 1 PT and none in RC1 from grade 2-3 PT (p = 0.018). A similar pattern was observed for CDH13, which was altered in 62.5% of RC1 from grade 1 PT and in 18.2% of RC1 from grade 2-3 PT (p = 0.048); for HIC1, which was altered in 75.0% of RC1 from grade 1 PT and in 20.0% of RC1 from grade 2-3 PT (p = 0.031); and for GSTP1, which was altered in 8% of RC1 from grade 1 PT and none in RC1 from grade 2-3 PT (p = 0.058). The CNA in TSC2 did not reach statistical significance but was found in 50.0% of RC1 from grade 1 PT and in 9% of RC1 from grade 2-3 PT (p = 0.071). The opposite occurred for CREM, which displayed CNA in 63.6% of RC1 from grade 2-3 PT and in 12.5% of RC1 from grade 1 PT (p = 0.037), and also for the methylation status of GSTP1, which was hypermethylated in 45.5% of RC1 from grade 2-3 PT and none in RC1 from grade 1 PT (p = 0.040). (Figure 2 caption: (a) grade 2-3 PTs concentrate alterations on chromosomes 1, 14, and 18, while grade 1 PTs display randomly distributed alterations; RCs from primary grade 2-3 MN introduce few changes because their initial burden is already high, whereas grade 1 MN, which initially displayed more random alterations, introduce many concrete changes when they recur. (b) Timing of the disease according to the grade of the PT (grade 1 in the upper part, grade 2-3 in the lower part), indicating the location of the primary tumor in each case.) Genetic Evolution in Subsequent Relapses Was Heterogeneous Out of the 20 patients studied, 13 suffered only one recurrence but 7 suffered additional recurrences. As expected, 87.5% of multiple recurrences came from grade 2-3 primary meningiomas and only 12.5% came from grade 1 primary MN (p = 0.040).
The timing according to the grade of the PT and the location is represented in Figure 2b. The genetic burden on these RCs continued to increase up to 7.5 ± 0.6 CNA per case (p = 0.003), and all of them displayed more than five CNAs among the loci explored. Similar findings were detected regarding TEB, with an increase of up to 2.9 ± 0.4 HM genes per case in RCS (p = 0.015). The recurrence-free survival period until the first recurrence was significantly shorter in patients that suffered multiple recurrences (27.92 ± 4.8 months) than in patients that suffered only one MN RC (68.7 ± 13.9 months, p = 0.018). Heterogeneous alterations were found in the different cases showing a random genetic and epigenetic evolution. It is noteworthy that homozygous deletions that were scarce in RC1 occurred from RC2 and were sustained in subsequent RCs. The most outstanding findings were the homozygous deletions in TP73 in 4/7 RC2 and hypermethylated in the rest. CNAs were found in CD27, CDKN2A/2B, and in CDH13, in two RC2, and the hypermethylation of GSTP1 was observed in 3/7 RC2. Other alterations seemed to be more randomly distributed, affecting VHL, FHIT, KLK3, or DAPK1. Discussion Recurrent MNs are lurid causes of disability and their frequencies, together with the evolution in the conception of wellness and aging, have changed the status quo: middleaged people that develop this intracranial neoplasm suffer many complications derived from the disease. Their high frequency, delicate location, complicated risk prediction, the presence of different morphological patterns within the same sample, and interobserver biases [1] underline the need for additional objective molecular markers to improve the clinical management of affected patients. The series presented here sheds light on the clinical, genetic, and epigenetic features of matched samples of primary and recurrent tumors from the same patients. It constitutes, to our knowledge, the second-largest series of these characteristics: 50 samples that represent the primary tumor of 20 patients, half of them suffering one recurrence and the other half suffering more than one recurrence [19]. We deepen on the stepwise progression of the disease by analyzing features on those recurrences. Studying this type of series offers some results different from the standard descriptions of MN as a consequence of the selected cohort. The first one is that our patients, who suffered recurrences in all the cases, are younger than the average in the literature [1], and they are more than 2/3 under 60 years old at diagnosis. The second differential finding is the inversion of the ratio of men:women (here set at 1.5:1), which agrees with previous descriptions about aggressive MN in men and is also concordant with the highest frequent location on the skull's base [20]. Whether surgical difficulties that lead to Simpson's grade 2 or higher resections are responsible for the recurrence [21,22] or whether they occur because of the selective pressures in that region [20,23,24] continue to be unanswered, but in the present study, all resections are macroscopically complete, suggesting an important effect by this particular region. Regarding histological features, sheeting was the main histopathological characteristic in this series of tumors indistinctively of the tumor grade. Here, we show that genetic and epigenetic changes are abundant in these aggressive MN, similarly to what is usually described for high-grade MN. 
We also agree that these DNA changes increase with the grade of the tumor and with the development of recurrences, in concordance with previous reports [1,4,25]. It has been described that radiosurgery and/or radiotherapy increase tumor instability in addition to causing other side effects [7,21,25]. Our patients did not receive any adjuvant therapy before the first surgery. However, all cases with multiple recurrences progressed to grade 2 tumors at some moment of the disease, and when the gross total resection of a recurrence was not possible, post-surgical fractionated radiotherapy was administered. Although this could influence the continuous increase in tumor instability detected, it does not detract from the changes detected between PT and RC1, which were different depending on the grade of the primary (there was only one grade 3 PT with adjuvant therapy before RC1). In addition, the different growth rates between genetic and epigenetic burdens found are also notable. The most common alteration, in agreement with the literature, is the loss of NF2 in chromosome 22q [1,8,10,26]. It is an early event occurring in half of the primary tumors analyzed, but its loss increases with tumor grade and with the progression. Indeed, we observed this loss in the last recurrence of every one of the cases that developed multiple recurrences. Interestingly, primary NF2-altered MNs show higher TEB in this series, and TEB is statistically associated with early recurrences, in concordance with previous descriptions [7,12,27,28]. The involvement of both types of changes in aggressiveness raises the need to improve our comprehension of the common mechanisms underlying them. On the other hand, CNAs are high from the onset of the disease and increase only subtly. Regarding CNAs, the main affected genes in this series of primary MN that recurred are located on chromosomes 22q (TIMP3), 1p (TP73), 6q (ESR1 and PARK2), and 18q (BCL2); the association between alterations of these chromosomes and tumor recurrence has been previously demonstrated [1,4,29]. The increase in CNAs in primary grade 2-3 MN affecting 14q (MLH3), 18q (BCL2), and 10p (CREM) is significant. Barresi et al. demonstrated the association of 18q losses with RFS in atypical meningiomas [26], in agreement with a shortening of 17 months in our series; however, our data did not reach statistical significance. When all these primary MNs progress, the highest rates in their RC1 affect TIMP3, TP73, ESR1, and PARK2 again. These findings are consistent with cytogenetic descriptions of losses on those chromosomes in MN and the development of recurrences [1,4,29]. Among them, losses of TIMP3 (22q) have been suggested to take part in reduced apoptosis [30,31] or angiogenesis [32], and our finding supports the extensive belief that additional genes located on 22q should contribute to MN tumorigenesis [3,4]. The influence of estrogens on MN has been widely discussed, with many discrepancies in the literature, but little has been described at the genetic level [33,34]. Both ESR1 and PARK2 are located in a frequently altered chromosomal region in MN (6q), and both display effects on the Wnt/β-catenin pathway [35][36][37][38][39]. Interestingly, it has been suggested that this pathway influences the formation of recurrences in MN [38,39], emphasizing the interest in further research on the effects of ESR1 or PARK2 losses. Regarding CREM, it is located on chromosome 10, and its loss is widely associated with high-grade MN and progression [1,40,41].
BCL2 immunostaining has been reported to increase with tumor grades in MN [21], but the dysregulation of the control of apoptosis could impact in both directions [42], suggesting a potential role of BCL2 losses in cell-death control that would agree with our findings. Finally, MLH3 losses on 14q completes the set of altered genes that are located in cytogenetic hotspots of meningioma [1,29]. The co-alteration of these genes could be a mechanism for acquiring aggressiveness in some subgroups of meningiomas. Although CNAs seem to be high since the diagnosis in these potentially recurrent PTs, it increases significantly during the disease, as we can corroborate with the analysis of the different recurrences. This fact is not only aligned with the idea that tumor instability increases with MN progression [26] but also highlights the high rate of random CNAs in grade 1 primary meningiomas that behave aggressively. Our most outstanding finding, thanks to the comparison of paired PT-RC, is that TEB has a negative impact on recurrence-free survival and it also increases during the progression of the disease, similarly to previous descriptions from ours and others [12,43,44]. Although our previous work explored the epigenetic burden effect as a 'yes' or 'no' issue, here, we delve deeper into its ability to shorten the disease-free survival period. Our results emphasize an important role of the epigenetic inactivation of CDKN2B and PTEN in the timing of the disease. Both genes, together with CNA in HIC1, demonstrate independent prognostic value. Although losses in PTEN (10p) are related to the progression of this disease [1,40,41], no references to its epigenetic inactivation have been reported. Of note is that, in contrast to the abundant literature about the influence of CDKN2B and PTEN on MN, as far as we have reviewed the literature, there is no previous study describing HIC1 in MN. However, its epigenetic inactivation has been described in other benign brain tumors such as intracranial ependymoma and medulloblastoma [45,46]. Unexpected Pearson's correlations suggest that different pathways activated in tumor initiation compared to progression: CNAs in HIC1 were inversely associated with HM in PTEN on PT and in itself, offering at least two clusters of genes differentially affected when the tumor progresses and both causing a shortened recurrence-free survival. This novel finding highlights the heterogeneity of MN and plays in favor of the frequently proposed model of clonal evolution in meningioma [47][48][49][50]. Our next question was whether or not recurrences from grade 1 PT that progress are genetically/epigenetically similar to those from grade 2 to 3 PT. With a marked involvement of the aforementioned chromosomes 14, 6, and 10, we found clusters of alterations of interest in the recurrences that depend on the grade of the primary tumor. The high rate of genetic/epigenetic changes that are present on grade 2-3 tumors from the beginning of the disease softens its increase during progression, while the substantial boost on alterations becomes most evident in the recurrences that come from grade 1 PT. Interestingly, GSTP1 seems to represent a bridge on these recurrences as it is the only individual gene for which its inactivation was significant in RC from grade 2 to 3 tumors via epigenetic alteration and from grade 1 tumors via copy number alteration. GSTP1 (11p) hypermethylation has been previously reported in MN literature [28,43], But it is not associated with an additional parameter. 
Different studies report that GSTP1 silencing activates the JNK and ERK1/2 pathways, with effects on apoptosis and uncontrolled growth [51][52][53][54]. In either case, the close connection to cell-death programs leads us to think of a major role for the apoptosis-necroptosis-autophagy axis. A subtle association between losses of BCL2 and PARK2 is of interest, as it would promote autophagy in MN. On the other hand, the loss of the influence of TSC2 over the mTOR pathway would result in reduced autophagy, which could be reinforced by reduced DAPK1-mediated phosphorylation of Beclin [55,56]. These interrelations are more evident in the multiple RCs that we have had the chance to evaluate in this series. Patients that developed more than one RC offer a wide spectrum of alterations. DNA methylation profiling seems to be the most robust method for estimating the risk of recurrence [26], although it can be unaffordable for many clinical facilities. Our data emphasize not only the interest of studying the accumulation of epigenetic changes in MN progression but also the possibility of assessing them by MLPA, a cost-effective technique available to almost every kind of laboratory that is, in addition, straightforward to interpret. Of the entire epigenetic burden found in MN progression, the most outstanding is GSTP1 hypermethylation, occurring in five of seven RC2 that simultaneously presented TP73 loss. This may represent a pathway different from others that affected CASP8-CASR-PARK2-MLH3 from the first recurrence. Surprisingly, both the recurrences that displayed hypermethylation of GSTP1 and those with other changes affecting the MLH3-CREM-BCL2 axis or the DAPK1-HIC1-TSC2 axis, or even displaying stochastic alterations of VHL and/or FHIT and/or KLK3 and/or DAPK1, underline the involvement of apoptosis-autophagy in MN aggressiveness. Although previous descriptions associating these genes are scarce, their close relation with cell-death pathways is remarkable. In addition to the above-described functions, VHL acts as a target recruitment subunit in an E3 ubiquitin ligase complex (as does PARK2), which recruits hydroxylated hypoxia-inducible factor (HIF), and its loss may help apoptosis evasion. FHIT plays a role in the induction of apoptosis via the SRC and AKT1 signaling pathways; DAPK1 is involved in apoptosis and autophagy; KLK3 is an androgen receptor-responsive gene and its downregulation has been associated with increased autophagy [39]. These findings, taken together, underline the unexplored role of apoptosis-autophagy as a relevant mechanism of MN progression worthy of further research. Conclusions The study of tumors' self-evolution is important for understanding disease progression and developing novel therapies. Most previous studies focused on which meningiomas recurred and which did not within the histological grades classified by the WHO, but to the best of our knowledge, this is the second-largest pair-matched collection and the first report studying recurrence by considering the grade of the primary neoplasm, with an emphasis on grade 1 MN with aggressive behavior. We observed similarities not only between these recurrent tumors but also identified differences in prognosis thanks to the long follow-up conducted. The different statistical analyses show interesting interrelations between apoptotic genes and autophagy, and a surprising involvement of GSTP1, BCL2, DAPK1, and HIC1 that deserves further consideration in meningioma.
Our approach sheds some light on MN heterogeneity and emphasizes the importance of introducing user-friendly and cost-effective methods in the molecular characterization of brain tumors in the daily clinical routine.
The thermodynamics of large-N QCD and the nature of metastable phases In the limit of a large number of colors (N), both Yang-Mills and quantum chromodynamics are expected to have a first-order phase transition separating a confined hadronic phase and a deconfined plasma phase. One aspect of this separation is that at large N, one can unambiguously identify a plasma regime that is strongly coupled. The existence of a first-order transition suggests that the hadronic phase can be superheated and the plasma phase supercooled. The supercooled deconfined plasma present at large N, if it exists, has the remarkable property that it has negative absolute pressure --- i.e. a pressure below that of the vacuum. For energy densities of order unity in a 1/N expansion but beyond the endpoint of the hadronic superheated phase, a description of homogeneous matter composed of ordinary hadrons with masses of order unity in a 1/N expansion can exist, and acts as though it has a temperature of $T_H$ at leading order in the 1/N expansion. However, the connection between the canonical and microcanonical descriptions breaks down and the system cannot fully equilibrate as $N \rightarrow \infty$. Rather, in a hadronic description, energy is pushed to hadrons with masses that are arbitrarily large. The thermodynamic limit of large volumes becomes subtle for such systems: the energy density is no longer intensive. These conclusions follow provided that standard large-N scaling rules hold, the system at large N undergoes a generic first-order phase transition between the hadronic and plasma phases, and the mesons and glueballs follow a Hagedorn-type spectrum. I. INTRODUCTION Quantum chromodynamics (QCD) at finite temperature and zero chemical potential consists of a confined hadronic regime and a deconfined plasma regime, which are connected by a crossover [1]. In this crossover regime, the medium is neither unambiguously hadronic nor unambiguously a plasma, and the physical description is not straightforward. The nature of matter near the crossover temperature can be probed with heavy-ion experiments. Phenomenological models have been developed to describe heavy-ion collisions and have successfully explained many aspects of the experiments [2][3][4][5]; however, the assumptions underlying these models, particularly in the crossover regime, are not always consistent with experiment [6]. Many of the awkward ambiguities of the crossover regime vanish in the large-N limit of QCD [7][8][9]. At large N the hadronic and plasma regimes are expected to become unambiguously distinct phases separated by a well-defined phase transition. The clean separation of two phases arises due to different characteristic scalings with N: the energy density of the hadronic phase scales as N^0, whereas that of the plasma phase scales as N^2. Thus, they cannot be smoothly connected-at least at infinite N. Moreover, there is strong reason to believe that the transition between the phases is first-order at sufficiently large N. We expect that the thermodynamics of large-N QCD becomes equivalent to that of SU(N) Yang-Mills theory due to the suppression of quark loops. Yang-Mills with N = 3 is known to have a first-order transition [10], and lattice simulations suggest that the first-order transition persists at larger N [11,12], with a latent heat that appears to grow as N^2. This behavior is precisely what one would expect if the first-order transition persists up to infinite N.
In the rest of the discussion in this paper, we will make the standard assumption that a first-order phase transition exists between the hadronic phase and the plasma phase in large N QCD. With a first-order transition, the structure of the phase diagram becomes cleaner. The temperature of a homogeneous medium naively determines the phase of that medium-if the medium's temperature is lower than the confinement temperature, the medium is in the hadronic phase and above it is in the plasma phase. However, as a feature of the first-order transition, a hadronic medium can be superheated and a plasma supercooled, up to endpoints of those metastable regimes. Apart from the assumption that a first-order transition persists in the N → ∞ limit, the analysis in this paper will also assume it is of the generic type, with exactly two stable phases, and exactly two metastable phases. The metastable regimes are globally unstable homogeneous phases. While the free energy would be reduced if the system went into an inhomogeneous mixed phase at the transition temperature, the system is locally stable against small amplitude fluctuations in energy density over large volumes. Since large amplitude fluctuations over large regions are exponentially unlikely, such metastable regimes can be very long-lived-at least in the absence of external perturbations introducing spatially large fluctuations that could induce transitions to an inhomogeneous regime. The metastable superheated (supercooled) phase does not extend to arbitrarily high (low) temperatures. Beyond endpoints of the metastable regimes, the homogeneous medium becomes locally unstable-any spatial fluctuation over a large region, no matter how small the amplitude, will grow exponentially. Since the existence of such random fluctuations is a necessary feature of thermal systems, the homogeneous system will decay towards an inhomogeneous one, eventually yielding a mixed phase at the transition temperature. This paper focuses on the thermodynamic features of these metastable and unstable regimes in the large-N limit. Much of the analysis will be from the perspective of the microcanonical ensemble in which the key functional relation is the entropy as a function of energy. However for certain parts of the analysis it is sensible to use a canonical description in which the free energy as a function of temperature is the key relation. Note there is no distinction between the canonical and grand canonical ensembles for this system as we are taking all chemical potentials to be zero. Most of the analysis in this paper is done for large-N QCD in the absence of any chemical potentials and with the standard version of the large-N limit with quarks assumed to be in the fundamental representation of color. The qualitative conclusions of this analysis hold equally well for a variety of large-N gauge theories, including pure gauge(Yang-Mills). It is worth noting that for some observables the large-N world is a recognizable caricature of the physical world with N = 3 and can be used to make qualitative [9,13] and sometimes semiquantitative predictions (for example baryon axial coupling constant ratios [14]), of direct relevance to the physical world. The thermodynamic issues at the heart of this paper are not of this type. Indeed, the behavior of QCD at large N with a first-order transition is qualitatively quite different from the N = 3 world with a smooth crossover; the metastable phases on which this paper principally focuses do not even exist in the N = 3 world. 
Nevertheless, it is worth trying to understand QCD thermodynamics at large N, as it may give significant insight into QCD more generally. For example, as will be discussed in Sec. III, in the large N limit one can explicitly demonstrate that a strongly interacting plasma must exist-this is in accord with the phenomenological understanding from the analysis of heavy ion collisions that a regime of strongly interacting plasma is formed [15]. The analysis in this paper depends on three assumptions: 1. Standard large-N scaling rules hold both for properties of hadrons and for properties of a quark-gluon plasma. 2. As the large-N and thermodynamic limits are approached, the system, when fully equilibrated in a stable phase, has a single phase transition between a hadronic and a plasma phase; the transition is generic first order and thus allows the system to superheat into a metastable hadronic regime and supercool into a metastable plasma phase (over a nonzero range of temperatures). There are no other (meta)stable phases. 3. The mesons and glueballs have a Hagedorn spectrum [16]. This is a spectrum for which N_had(m), the number of hadrons with mass less than m, at asymptotically high mass behaves as N_had(m) ∝ (m/T_H)^{-d} exp(m/T_H); T_H is a parameter with dimensions of mass. Moreover, d, the power in the subexponential prefactor, is assumed to be greater than 7/2. Assumption 2 is quite plausible in light of lattice studies of gauge theories at multiple values of N. The Hagedorn spectrum of Assumption 3 is discussed in detail in Subsection IV A. In Section II, we review general features of first-order phase transitions from the perspective of the microcanonical ensemble; we also summarize the scaling of thermodynamic quantities in the two phases of large-N QCD. Section III focuses on the plasma phase of large-N QCD and establishes two facts: firstly, that the deconfined medium near the transition is a strongly coupled plasma, and secondly that if supercooled plasma exists over a finite temperature range in the large-N limit, then it must achieve negative absolute pressure. Section IV focuses on the hadronic phase and the regime after the endpoint of the metastable hadronic regime for the situation when d > 7/2. We show that in the N → ∞ limit, the energy density of homogeneous matter can be increased without limit while its temperature remains fixed at the Hagedorn temperature. However, this requires the energy of the system to reside in hadronic states with arbitrarily high masses that may be beyond the regime of validity of standard large-N scaling rules for hadrons. We summarize these results and discuss open questions in Section V. A. The microcanonical ensemble Since much of the analysis in this paper is done using microcanonical reasoning, we begin by reviewing some elementary general features of first-order phase transitions as described via the microcanonical ensemble [17]. The key quantity in a microcanonical description of a finite volume system is the entropy as a function of the energy, S(E), where the entropy of the system is the logarithm of the number of accessible states at that energy. The temperature is defined as 1/T = S'(E). We will focus on homogeneous systems in the thermodynamic limit; therefore we will work with intensive quantities such as the entropy and energy densities, s = S/V and ε = E/V. The pressure P will play a central role in this work. For a homogeneous system, the pressure (or equivalently, the negative of the free energy density) is P = Ts − ε. FIG. 1 (caption):
An entropy density-energy density curve for a system with a generic first-order phase transition. As drawn, the curve is analytic everywhere. Despite this, the thermodynamics has nonanalytic behavior. Note that, depending on details, a system with a generic first-order transition can have nonanalytic behavior of s(ε) at the points of inflection-while the first derivative is continuous and the second derivative zero, higher derivatives can be discontinuous or divergent. Geometrically, both the temperature and pressure for a system with energy density ε_0 are given in terms of the line t(ε), which is defined as the line in the ε-s plane that is tangent to the curve s(ε) evaluated at the point (ε_0, s(ε_0)): the temperature is the multiplicative inverse of the slope of this line while the pressure is given by the negative of the ε-intercept. Fig. 1 shows the entropy density s(ε) as a function of energy density for a homogeneous medium in a system which has a first-order phase transition at a critical temperature T_c. A key feature of a first-order transition is a region in which s''(ε) > 0 (see Fig. 1). This region corresponds to an absence of stable homogeneous configurations: at fixed energy, introducing an appropriate inhomogeneity will increase the entropy, and therefore inhomogeneous configurations are preferred. Critically, in the region where the entropy density is concave up, this is true even locally, so that even small amplitude fluctuations can 'trigger' the instability (in contrast to metastable regimes). The condition s''(ε) > 0 describes locally accessible instabilities, which can begin from arbitrarily small fluctuations. Homogeneous systems in the thermodynamic limit can have regions which are globally unstable against the formation of inhomogeneities regardless of the sign of s''(ε). If two points (ε_1, s_1), (ε_2, s_2) lie on the homogeneous curve s(ε), and the line between them is above s(ε) everywhere in ε_1 < ε < ε_2, then a spatially separated system-a mixed phase-with part of the system at energy density ε_1 and part at ε_2 will have a higher entropy than the homogeneous phase with the same average energy density. Clearly, the optimal choice of mixed phase is the one for which the resulting curve is nowhere concave upward. Thus, the equilibrium curve is constructed as the convex hull of all points on s(ε). This is the famous Maxwell construction. The illustrative curve shown in Fig. 1 contains two phases-as we expect happens for large-N QCD. Note that the convex hull contains a segment of the line t_c(ε) that is tangent to s(ε) at two points. These two points represent the properties of the two regions in the mixed phase. Since the two points lie on the same line, the tangent lines share a common slope, and hence correspond to the same temperature, T_c, and a common ε-intercept, and hence correspond to the same pressure. For a medium at T_c, any energy density between ε_l and ε_h is achievable in equilibrium-albeit in a mixed phase. In addition to the region ε_H < ε < ε_L where s''(ε) > 0 and thus the system is locally unstable, there are two regions in Fig. 1 where the medium is homogeneous and locally stable with s''(ε) < 0, but nevertheless globally unstable. These metastable regions, defined by ε_l < ε < ε_H and ε_L < ε < ε_h, are respectively termed the superheated and supercooled phases. The two endpoints of the metastable phases, ε_H and ε_L, are the inflection points of the s(ε) curve.
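For concreteness, the tangent-line construction described in the caption above can be written out explicitly (a restatement of those definitions in formulas, with ε_0 denoting the energy density at which the tangent is taken):

t(\varepsilon) = s(\varepsilon_0) + s'(\varepsilon_0)\,(\varepsilon - \varepsilon_0), \qquad \frac{1}{T} = s'(\varepsilon_0), \qquad P = T\,s(\varepsilon_0) - \varepsilon_0 .

The temperature is the inverse of the slope of t(ε), and since t(ε) vanishes at ε = ε_0 − T s(ε_0) = −P, the pressure is minus the ε-intercept of t(ε), as stated above.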
Metastable phases do not decay until a sufficiently large thermal fluctuation appears, and their decay time is exponentially long in the size of the required fluctuation. The superheated phase is smoothly connected to the phase defined by T < T_c, but nevertheless has a temperature above the critical temperature; correspondingly, the supercooled phase is smoothly connected to the hot phase with T > T_c, but nevertheless has a temperature below the critical temperature. To summarize, there are five regions on the homogeneous s(ε) curve: the stable cold phase (ε < ε_l), the superheated region (ε_l < ε < ε_H), the locally unstable region (ε_H < ε < ε_L), the supercooled region (ε_L < ε < ε_h), and the stable hot phase (ε > ε_h). Let us now move to the specific case of large-N QCD. The cold and hot phases correspond respectively to the hadronic phase and the quark-gluon plasma phase. The transition between them is assumed, on the basis of strong lattice evidence, to be first-order [11,12]. We denote the curve of the homogeneous medium for QCD with N colors as s_N(ε), and will focus on properties that persist in the limit of large N. The two phases possess different characteristic scalings of energy and entropy density with N. In the hadronic phase, both scale with N^0 (meaning that they have a finite limit as N → ∞), while in the plasma phase, both scale with N^2, the number of species of gluons. In the corresponding scaling forms (Eq. (3); a plausible reconstruction is sketched after this passage), Λ is an arbitrary but fixed scale (chosen for convenience to be of order of the mass of a typical low-lying hadron), and σ and Σ are dimensionless functions characterizing the hadronic phase and plasma phase, respectively. When plotted on the s-ε plane, in the large-N limit, the plasma phase scales out when focusing on the hadronic phase, and the hadronic phase shrinks into the origin when focusing on the plasma phase. In the hadronic phase at low energy densities, the system becomes a noninteracting gas of hadrons as the N → ∞ limit is approached. Since the baryon mass grows linearly with N [9], their contributions are exponentially suppressed at large N for systems in thermal equilibrium, and can be ignored entirely. In contrast, the masses of mesons and glueballs are of order unity at large N [7,8]. The scattering amplitude for meson-meson interactions is of order O(N^{-1}) while the meson-glueball and glueball-glueball scattering amplitudes are of order O(N^{-2}) [9]. Thus, at particle densities of order unity (and hence energy densities of order unity), the effects of interactions become negligible as N → ∞. Along with interaction strengths, hadronic widths also vanish: Γ_meson ∼ O(N^{-1}) and Γ_glueball ∼ O(N^{-2}). Finally, as shown by Witten [9], the number of distinct glueballs and mesons becomes infinite in the large-N limit. Thus, up to corrections of order 1/N, the entropy density as a function of the energy density, when the energy density is of order N^0, is that of a noninteracting multi-species boson gas (see the sketch after this passage). Here s_nbg(ε; m_k) is the entropy density of a noninteracting gas of bosons of a single species with mass m_k, and (2J_k + 1) is the spin degeneracy factor; effects of any flavor degeneracies are included by treating these states as separate hadrons in the sum. In the plasma phase at (asymptotically) high energy densities, the system is known to behave like a weakly interacting plasma of quarks and gluons, with gluons dominating in the large-N limit. The characteristic momentum scale for the gluons is (ε/N^2)^{1/4}. The β function at large N is for the 't Hooft coupling λ = N g^2, rather than g^2 itself [7]; the renormalization group evolution of λ is independent of N at large N.
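The display equations referred to in this passage (the N-scaling forms of Eq. (3) and the hadron-gas entropy sum) are not reproduced above. A plausible reconstruction, consistent with the surrounding description but with the precise form and normalization chosen here only for illustration, is

s_{\rm had}(\varepsilon) \;=\; \Lambda^{3}\,\sigma\!\left(\frac{\varepsilon}{\Lambda^{4}}\right) \sim N^{0},
\qquad
s_{\rm plasma}(\varepsilon) \;=\; N^{2}\Lambda^{3}\,\Sigma\!\left(\frac{\varepsilon}{N^{2}\Lambda^{4}}\right) \sim N^{2},

and, for the dilute gas of mesons and glueballs (summing over species k, written most simply in the canonical ensemble),

s(T) \;\approx\; \sum_{k} (2J_k+1)\, s_{\rm nbg}(T;\, m_k),

from which s(ε) follows by eliminating T.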
Thus the characteristic momentum scale at which the theory becomes weakly coupled is independent of N, and therefore the corresponding characteristic energy density scales as N². Equivalently, the characteristic argument of Σ is of order unity. The previous argument implies that when the energy density is high enough for the perturbative expansion to be accurate, which happens in the domain ε ∼ N², s_N scales as in Eq. (3). However, the plasma phase may extend down to ε sufficiently low that the system is no longer perturbative. Nevertheless, the scaling in Eq. (3) should hold; anything else would introduce an additional nonanalyticity in s(ε), in contradiction to the assumption of a single generic first-order transition. One important consequence of Eq. (3) is the behavior of the specific heat c_V ≡ dε/dT: the specific heat scales as N^0 in the hadronic regime and as N² in the plasma phase. Thus the ratio r(T) ≡ c_V/ε is of order unity in either regime and remains finite as N → ∞, except at putative points where c_V diverges, as at a second-order phase transition. More generally, c_V is analytic except at phase transitions; thus r(T) will be analytic at large N except at phase-transition points. Thus Assumption 2 implies that in the large-N limit, r(T) is finite and analytic everywhere except at T_c. The behavior of s(ε) for homogeneous matter at large but finite N, in an intermediate regime between ε ∼ N^0 and ε ∼ N², must interpolate between the two regimes. The precise way it does so is not determined by general scaling considerations. This regime is of comparatively little interest in any case, since in this regime the system is locally unstable. Fig. 2 summarizes the scaling of thermodynamic quantities in the two regimes and sketches the mixed phase, t_c(ε). The transition temperature and pressure are respectively denoted T_c and P_c. The scalings given in Eq. (3) imply that T_c and P_c are independent of N for large N. These scalings imply a remarkable cancellation in the plasma phase. Generically the plasma phase has a pressure of order N². For example, at asymptotically large energy densities, the pressure is given by P = ε/3 ∼ N². However, at the phase transition point, which is denoted ε_h in Fig. 2, a very large cancellation must take place: P_c = T_c s_h − ε_h. For P_c to be of order N^0, the two terms of Eq. (6) must exactly cancel at leading order. The effects of this cancellation are central to Sec. III, where we show the existence of a strongly coupled plasma, and demonstrate that at large N the supercooled plasma phase has negative absolute pressure. Before we turn to Sec. III, it is important to note that this cancellation depends on the first-order transition occurring at a temperature for which the hadronic phase has an energy density of order N^0. This is a consequence of Assumption 2, which implies that in the large-N limit, r(T) ≡ c_V/ε is finite and analytic throughout the hadronic phase. From the definition of r(T), the energy densities at two different temperatures T_a and T_b are related by integrating r(T) = (dε/dT)/ε between them. If the hadronic phase contained regimes with both ε ∼ N^0 and ε ∼ N², then, taking T_b in the latter and T_a in the former, r(T) would have to become parametrically large in N somewhere in between, in contradiction to Assumption 2.
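As a simple consistency check of these scalings (a free-gas illustration only, not a statement about the interacting plasma near T_c), consider an ideal gas of gluons with 2(N² − 1) polarizations:

\[
\epsilon(T) \;=\; 2\,(N^{2}-1)\,\frac{\pi^{2}}{30}\,T^{4},
\qquad
c_{V} \;=\; \frac{d\epsilon}{dT} \;=\; \frac{4\,\epsilon}{T},
\qquad
r(T) \;\equiv\; \frac{c_{V}}{\epsilon} \;=\; \frac{4}{T}.
\]

Here ε and c_V are separately of order N², while their ratio r(T) is independent of N, which is precisely the property the argument above relies on.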
Therefore, if there were a hadronic regime with energy density of order N² at large N, it would need to be separated from the regime with energy density of order unity by a phase transition; such a putative phase transition would need to be between two distinct hadronic phases and would be in addition to the first-order phase transition between the hadronic and plasma phases. Assumption 3, that the mesons and glueballs satisfy a Hagedorn spectrum, is fully consistent with this behavior. Using the type of analysis discussed in Sec. IV A, it is easy to see that with a Hagedorn spectrum at large N the hadronic phase cannot extend to temperatures above T_H. The assumption of a single first-order transition with a nonvanishing superheated phase then implies that T_c < T_H.

III. PLASMA PHASE

In this section we discuss the behavior of the low-temperature end of the plasma phase of large-N QCD, including the supercooled regime.

A. Achieving negative absolute pressure

In this subsection, we show that if a generic first-order transition persists in the large-N limit, with the latent heat growing as N², then a supercooled plasma with negative absolute pressure, a pressure less than the vacuum pressure, must exist. Such a situation is quite unusual. Consider the following Gedankenexperiment: suppose one had a rigid cylinder with a movable piston capable of containing large-N QCD matter in the plasma phase. The piston starts locked, so the system has fixed volume, and the system starts in the stable plasma phase. The cylinder is then brought into thermal contact with a heat bath whose temperature is slowly lowered, so that the plasma in the cylinder equilibrates in the metastable phase. At this point, the cylinder is thermally insulated so that it can no longer exchange energy with the outside; the system is isolated from other matter and sits in the vacuum. Next, the piston is allowed to move freely. Remarkably, instead of pushing outward into the vacuum, it is sucked inward. This behavior is counterintuitive from the viewpoint of the kinetic theory of gases, in which particles in a box hit the wall of a chamber and transfer momentum to it when they bounce back; this necessarily results in a positive pressure. The breakdown of such intuition indicates that the system is not describable, even qualitatively, as a plasma or gas of weakly interacting particles or quasiparticles. Moreover, a negative absolute pressure runs counter to intuition gleaned from stable phases. One does not come across stable phases with negative absolute pressure in systems with zero chemical potentials. Indeed, it is easy to see that a negative absolute pressure is impossible for such systems, provided that they also satisfy the condition that only positive temperatures are possible. This follows from the requirement of global stability, which implies that s''(ε) ≤ 0. The temperature is thus everywhere a nondecreasing function of ε (provided T is nonnegative). This in turn implies that s(ε) ≥ ε s'(ε) for ε > 0 (using s(0) ≥ 0), and therefore the pressure P = Ts − ε = s/s'(ε) − ε is positive. This argument holds only for the curve s(ε) describing globally stable matter. It does not apply to a metastable supercooled phase. The supercooled phase does not lie on the purely concave-downward curve for the stable phase. Rather, as seen in Fig. 2, it lies along the curve for homogeneous phases, which includes regions that are concave upward as well as regions that are concave downward. Thus, systems need not have positive pressure relative to the vacuum in the supercooled phase.
The scaling relations of Eq. (3), along with the cancellation seen in Eq. (6), imply that not only is negative pressure possible for the supercooled phase of large-N QCD, it is necessary, provided that Assumption 2 holds. We assume the existence of a supercooled plasma phase over a nonzero range of temperatures in the large-N limit in the following discussion. At the transition temperature T_c, cancellations ensure that the pressure is of order unity, so that T_c s_h = ε_h + O(N^0) (ε_h and s_h are as defined in Fig. 1). Recall that for zero chemical potential the Helmholtz free energy density is minus the pressure; thus ∂P/∂T = s, with s ∼ N² in the supercooled phase. Given these conditions, consider what happens as we lower the temperature from T_c to some temperature T_sc in the supercooled phase, an order-unity distance below T_c. The pressure decreases by an amount of order s ΔT ∼ N². However, the pressure was only of order unity at T_c, and therefore must be negative at T_sc. Algebraically, P(T_sc) = P_c − ∫_{T_sc}^{T_c} dT s(T) < P_c − (T_c − T_sc) s(T_sc), where the inequality follows from the fact that s_N(ε) is positive and monotonically increasing with T; since s(T_sc) ∼ N² while P_c ∼ N^0, the right-hand side is negative at large N. This argument is quite simple and the conclusion is striking: given the assumptions outlined in the introduction, at large N the supercooled region has negative absolute pressure. A key part of Assumption 2 is that a supercooled metastable phase persists in the large-N limit over a nonzero range of temperatures. It is plausible that this is not the case, that is, that in the large-N limit the supercooled phase vanishes. Based on the analysis of the supercooled metastable phase above, one of the following must be true if a first-order transition persists at large N: either the medium achieves negative absolute pressure in a supercooled regime, or no supercooled regime exists in the large-N limit.

B. A strongly interacting plasma

Next we consider the implications of N scaling for the existence of a strongly coupled plasma at large N. In QCD with N = 3 and physical quark masses, the hadronic regime goes over to the quark-gluon regime via a continuous crossover as the temperature increases; there is no true phase transition. In the vicinity of the crossover, there is strong empirical evidence that the system is strongly coupled. For example, the ratio of viscosity to entropy density, η/s, extracted from hydrodynamic simulations of heavy-ion collisions is small, η/s ∼ 1/(4π) [15], which is an indication that the system, whatever it is, must be strongly coupled. Conventionally, this medium has come to be called a strongly interacting quark-gluon plasma (sQGP). While the evidence that the system is strongly coupled is compelling, its description as a quark-gluon plasma might be regarded as less so. The principal logic behind such a description is that the energy density is too high for the system to be described as a gas of clearly discernible hadrons. While this is true, it is also true that the system is not in a regime where it can be described as a plasma of clearly discernible quarks and gluons (as one has in a weakly coupled plasma). Thus the description as an sQGP, as opposed to a strongly interacting hadronic gas, could be thought of as merely a convention. This raises an interesting issue of principle: does there exist a gauge theory that shares with N = 3 QCD an unambiguously hadronic regime and an unambiguous plasma, but also has the feature that there exists a regime that is both unambiguously strongly coupled and unambiguously a plasma?
If the answer to this question is in the affirmative, it gives at least some justification for the conventional description of an sQGP in QCD for N = 3, where the situation is more ambiguous. In this section we will see that the large-N limit of QCD is precisely such a theory. The first-order transition cleanly separates two phases with two different N-scalings, and thus defines unambiguous hadronic and plasma regimes. In what follows, we will show that as the energy density approaches ε_h from above, the system becomes arbitrarily strongly coupled. It may not be obvious how to measure the strength of the interaction for this system. With present numerical and theoretical methods, there is no practical way to determine η/s for a large-N QCD system. One could imagine a computation of η/s from first-principles lattice QCD simulations; however, such a numerical evaluation of the shear viscosity has technical issues stemming both from the largeness of N [18] and from the real-time nature of the observable [19,20]. So what other observables are useful for distinguishing strongly coupled from weakly coupled systems? For massless constituents such as gluons, a useful measure of the strength of the interaction between constituents in a medium is the ratio ε/(3P). For a noninteracting system of massless particles, this ratio is 1; it deviates from unity for a system of massive particles or for a strongly interacting medium. Thus the condition ε/(3P) ≫ 1 in a plasma might be taken as a signal of a strongly coupled plasma. One might worry that the quarks in a quark-gluon plasma are not massless. However, in the plasma phase, their contributions are suppressed by relative order 1/N. It is not clear a priori how large ε/(3P) should be for the system to be identified as strongly coupled. However, at large N, the ratio can be made arbitrarily large, even as the system remains in the plasma phase. This occurs in the double limit N → ∞, ε → ε_h^(+), where the superscript (+) indicates that the limit is taken from above to ensure that the system is in the plasma phase. The unbounded growth happens in this limit because in the plasma phase the energy density is of order N² throughout, and this extends down to ε_h, while the pressure is of order N^0 at ε_h. Thus the ratio ε/(3P) diverges in the large-N limit, and so the plasma can be made arbitrarily strongly coupled. It is worth noting that if the plasma supercools at large N, as discussed above, the ratio ε/(3P) wraps around infinity to become negative, remaining strongly coupled. In summary, we have demonstrated that at large N there exists a regime in which the system is unambiguously in the plasma phase and strongly coupled. In light of the issue of principle set out at the beginning of this subsection, it is worth noting that while a strongly coupled plasma regime exists at large N, a strongly coupled hadronic regime does not. This gives at least some modest support to the notion that the strongly interacting medium seen empirically in QCD with N = 3 might be identified as an sQGP, as opposed to a strongly interacting hadronic gas.

IV. SUPERHEATED HADRONIC PHASE AND BEYOND

Now we move to considering the hadronic phase. In the large-N limit this phase consists of a noninteracting gas of hadrons. The analysis in this section is based on Assumption 3 from the introduction: we assume an exponential Hagedorn spectrum with a subexponential prefactor m^{−d}, with d > 7/2. The motivation for this assumption is discussed below.
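For definiteness, the asymptotic form assumed in what follows can be written as a density of meson and glueball states (the overall constant C and the scale used to make the prefactor dimensionless play no role in the arguments below):

\[
\rho(m) \;\sim\; C\, m^{-d}\, e^{\,m/T_{H}} \quad (m \to \infty),
\qquad d > \tfrac{7}{2},
\]

with N_had(m), the number of states below mass m, growing in the same exponential fashion up to power-law factors.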
In Subsection IV A, key thermodynamic quantities such as s and ε are discussed in terms of the Hagedorn spectrum. A key aspect of this analysis is the assumption d > 7/2. It implies that at large N, as the temperature approaches the Hagedorn temperature T_H, the energy density remains finite and independent of N. We will denote the limiting energy density as T → T_H by ε_H. Subsection IV B considers the extension of the s(ε) curve into the locally unstable regime with ε > ε_H in the N → ∞ limit. All of the analysis in this section is aimed at the hadronic regime. The properties of this regime are independent of N in the large-N limit. Accordingly, in all of the analysis we take the large-N limit at the outset, unless otherwise specified. Energy densities play a central role in the analysis; in this section we will assume that the energy densities under consideration do not grow with N as N gets large, i.e., they scale as N^0.

A. Hagedorn spectrum

In the N → ∞ limit, the maximum temperature of the locally stable hadronic phase is the Hagedorn temperature, T_H. It is necessary that T_c ≤ T_H, but whether the inequality is strict is unknown from first principles. If the inequality is strict, then there exists a metastable superheated hadronic phase; if not, then this metastable phase (if it exists at all) must disappear in the large-N limit. In the analysis here, Assumption 2 requires a generic first-order phase transition that allows superheating, and so we take the inequality to be strict. It has long been believed that large-N QCD has a Hagedorn spectrum [21]. This belief is partly motivated by the fact that highly excited states at large N appear to act like excitations of long flux tubes, which may be regarded as string-like; string theories have Hagedorn spectra [22]. However, there is a compelling argument that large-N QCD has a Hagedorn spectrum that does not explicitly assume stringy dynamics; rather, it is based on commonly accepted properties of QCD correlation functions and the fact that the number of local operators of fixed mass dimension grows exponentially with the mass dimension [23]. Supported by the arguments above, it is believed that the spectrum of mesons and glueballs at large N is a Hagedorn spectrum [16], with the number of hadrons N_had(m) of mass less than m governed at asymptotically large masses by Eq. (10); the constant C is a dimensionless numerical factor, and Δ(m) is a correction term that, at large m, is asymptotically smaller than N_had(m). Note that N_had(m) increases by discrete steps of unity as m increases. While the leading term is smooth, the discrete behavior is encoded in Δ(m). Given the current state of the art, the power-law prefactor m^{−d} in the spectrum of Eq. (10) cannot be determined from first principles theoretically, nor can it be determined reliably from fits to the currently available spectrum [24,25]. Although Hagedorn originally proposed d = 5/2 [16], there are very good reasons to believe that d = 4: a simple bosonic string theory has d = 4 [22], and such a string theory is natural at large N if flux-tube dynamics dominate [26]. Here we assume that d > 7/2, so that the energy density and entropy density are finite up to T_H, as discussed below. The value of d plays a nontrivial role in large-N QCD thermodynamics [21,27]. The Hagedorn spectrum creates divergences in various thermodynamic quantities due to the exponential growth in the number of hadrons with mass, but the nature of these divergences depends strongly on d.
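A quick numerical illustration of how this d-dependence plays out (anticipating the analysis of the next subsection) is given below. The sketch assumes the Hagedorn density C m^{−d} e^{m/T_H} quoted above and the Boltzmann-limit single-species energy density m (mT/2π)^{3/2} e^{−m/T} per state; the values of T_H, C and the split mass m_0 are placeholders chosen only for illustration.

```python
import numpy as np

T_H, C, m0 = 0.16, 1.0, 2.0          # GeV units; illustrative placeholder values

def eps_high(T, d, m_max, npts=200001):
    """Energy density from Hagedorn-spectrum hadrons with m0 < m < m_max in the
    Boltzmann (m >> T) limit.  The two exponentials are combined into a single
    exp so that nothing overflows when T is at or near T_H."""
    m = np.geomspace(m0, m_max, npts)
    integrand = C * m**(1.0 - d) * (m * T / (2.0 * np.pi))**1.5 \
        * np.exp(m / T_H - m / T)
    return np.trapz(integrand, m)

# At T = T_H the exponentials cancel and the integrand falls like m**(2.5 - d):
# the cutoff dependence saturates for d = 4 (> 7/2) but keeps growing for d = 3.
for d in (3.0, 4.0):
    values = [eps_high(T_H, d, m_max) for m_max in (1e2, 1e4, 1e6)]
    print(f"d = {d}:", ", ".join(f"{v:.3g}" for v in values))
```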
We begin by establishing that the entropy density and energy density diverge as T approaches T_H from below only when d ≤ 7/2. As the divergences are related to the large-mass asymptotics of the Hagedorn spectrum, we are justified in working in the limit m ≫ T. For an ideal gas of one species in this limit, the energy and entropy density are given by Eqs. (11) and (12). To understand the behavior of ε (and, equivalently, s) for T near T_H, which we refer to as the Hagedorn point, it helps to divide the energy density into contributions from low- and high-mass hadrons, split arbitrarily at some large mass m_0: ε(T) = ε_low(T; m_0) + ε_high(T; m_0). (13) The mass m_0 is chosen to be much larger than T_H, large enough that the asymptotic form of the Hagedorn spectrum is accurate. After fixing m_0, ε_low(T; m_0) is an analytic function of T, as it is the sum of the contributions from a finite number of species. The form of the Hagedorn spectrum does not determine ε_low, but neither is ε_low relevant to understanding divergences at the Hagedorn point. From Eqs. (10) and (11), the contribution from the large-mass portion of the Hagedorn spectrum is given by Eq. (14). Two approximations have been made, both becoming arbitrarily accurate as m_0 is increased. First, we have assumed the large-mass limit in Eq. (11). Second, in neglecting subleading terms in m⁻¹, we are permitted to replace the discrete sum over masses by an integral and treat the Hagedorn spectrum as if it were continuous. The integral in Eq. (14) converges for any T < T_H and diverges for any T > T_H. Because ε_high contains the entire nonanalytic part of ε, we see that for any fixed m_0, ε(T; m_0) is a nonanalytic function of T at T = T_H. It is important, however, to recall that the behavior is only required to be nonanalytic in the large-N limit. It has long been known that the precise nature of the nonanalyticity at T = T_H depends on the value of d [21,27]. We see from Eq. (14) that when d ≤ 7/2 the integral diverges, and thus ε goes to infinity as T_H is approached from below. However, for d > 7/2, as is assumed here, the integral converges even at T = T_H: a finite energy density is attained in the limit T → T_H. We denote this value ε_H. Although the integral in Eq. (14) converges when d > 7/2, the nonanalyticity is manifested in derivatives of ε with respect to T that diverge as T → T_H (Eq. (15)). The situation for the entropy density is essentially identical. Decomposing s(T) = s_low(T; m_0) + s_high(T; m_0), a quick calculation (Eq. (16)) reveals that s(T) is also nonanalytic at T_H and also has a finite limit as T → T_H from below when d > 7/2. For the remainder of this section we will employ Assumption 3 and take d > 7/2. With this assumption, both the energy density and entropy density are finite at large N in the entire locally stable hadronic phase, but are ill-defined at any T > T_H. We note that if d = 4 (as expected from string theory), or more generally for any d satisfying 7/2 < d ≤ 9/2, then while the energy density remains finite at large N as T → T_H, the specific heat c_V diverges at the Hagedorn point and the speed of sound, given by c_s² = ∂P/∂ε = s/c_V, goes to zero. An interesting question arises when the energy density of a system composed of hadrons exceeds ε_H. We describe such a system as having an energy density beyond the Hagedorn point.

B. Beyond the Hagedorn point

At large N, the maximum temperature in the hadronic phase is T_H.
Moreover, if d > 7/2, as we are assuming here, the energy density remains finite, limiting to ε_H, as T → T_H. This creates a somewhat paradoxical situation. Consider a very large box in the metastable superheated phase at a temperature just below T_H, so that the energy density is just below ε_H. Now suppose that a number of hadrons, with a total energy density of order unity and sufficient to raise the energy density above ε_H, are carefully injected into the system. Given that T_H is the maximum temperature, the temperature cannot increase while the system remains in the hadronic phase. The paradox is that at large N hadrons are very weakly interacting, so that there is nothing to keep us from continuing to introduce hadrons into a large box until their energy density exceeds ε_H, with the system still being composed of clearly identifiable hadrons. It is difficult to reconcile this fact with T_H being the upper bound on the temperature. This is at least suggestive of a breakdown of the canonical ensemble, so we will proceed with an analysis in the microcanonical ensemble. A natural way to try to reconcile these facts is to focus on the behavior of the curve s(ε) for homogeneous matter in the region with ε > ε_H; this is the regime that is locally unstable for a generic first-order transition. We will show that in the limit N → ∞, the curve s(ε) extends past ε = ε_H linearly, with the temperature (given by the inverse slope) constant and equal to T_H. This is illustrated in Fig. 3. One consequence of this behavior is that there is a discontinuity in the third derivative ∂³s/∂ε³ at the Hagedorn point. Such behavior is typically seen at a second-order phase transition. Beyond ε_H, as ε increases the temperature remains fixed, as typically happens for a mixed phase after a first-order transition. This behavior, containing features of both first- and second-order transitions, is quite unusual. It may be a hint that at large N the system does not in fact behave like a generic first-order transition, invalidating Assumption 2. In the remainder of this section, however, we will assume that Assumption 2 remains correct and that higher-order 1/N effects in the region beyond ε_H reconcile this behavior with a generic first-order transition. Let us see how this behavior comes about. To do so we must calculate s(ε), in the N → ∞ limit, in the region ε > ε_H. While our interest is in s(ε), a quantity which arises naturally in a microcanonical description, we will exploit the equivalence, in the thermodynamic limit of large volumes, between predictions using canonical and microcanonical ensembles. As will be seen below, this equivalence breaks down when ε > ε_H, but the way it breaks down will allow us to determine the behavior for ε > ε_H. Up to the Hagedorn point, where ε = ε_H, one may work with the canonical ensemble to calculate both s and ε as functions of T; the microcanonical curve s(ε) is obtained parametrically by varying T. This method works up to the Hagedorn point. However, if one tries to go beyond ε_H by increasing the temperature, the canonical expressions diverge. To proceed beyond the Hagedorn point, we will first impose a constraint on the class of microstates we consider. This is a legitimate procedure in microcanonical physics. If one computes s_const(ε), the logarithm of the number of microstates subject to the constraint, one knows that by construction the unconstrained entropy density s(ε), which is calculated with a superset of those constrained states, must satisfy s(ε) > s_const(ε).
The constraint we impose is that only configurations in which all of the hadrons in the system have masses less than some large value, m_max, are considered. This constraint might be envisioned either as arising in some possible physical realization (at least in principle) or simply as a mathematical device to facilitate counting. In practice, it is easy to see how this works: the constraint cuts off at m_max the contributions of the high-mass hadrons, those responsible for the divergences, and thereby renders ε and s finite and analytic. Ultimately we consider the behavior of the system when we remove the constraint by letting m_max become arbitrarily large. When the constraint is imposed, the problem becomes one of a relativistic ideal Bose gas containing a finite, albeit very large, number of species. The fact that the number of (noninteracting) species is finite ensures that the microcanonical and canonical descriptions agree. This holds whether or not ε > ε_H. Note, moreover, that with a finite number of species of free particles there is no possibility of a phase transition, so that, subject to the constraint, s(ε) is an analytic function; nonanalyticities can arise only when the m_max → ∞ limit is taken. Our goal is to determine the m_max → ∞ limit of s(ε; m_max), the entropy density as a function of energy density subject to the constraint that all hadrons have masses below m_max, in the region ε > ε_H. To do so we will first determine the limit as m_max → ∞ of s''(ε), and then integrate twice to find s(ε; m_max). The most straightforward way to proceed is to compute s''(ε) for the hadronic regime with ε > ε_H and asymptotically high values of m_max. The calculation is somewhat long but essentially straightforward; one obtains Eq. (17). Clearly, as one takes m_max to infinity, s''(ε) goes to zero, and s'(ε) = 1/T is a constant. The value of that constant is its value at the low end of the regime, 1/T_H, and the form seen in Fig. 3 emerges. Equation (17) depended on a somewhat involved calculation. However, one can intuitively understand the form of s(ε) beyond the Hagedorn point by a very simple, if indirect, argument. Recall that s'(ε) = 1/T. If we fix m_max to be very large but finite, there is no possibility of a phase transition. Therefore, T(ε; m_max) must be a monotonically increasing function of ε. In particular, we obtain the lower bound T(ε; m_max) > T_H for ε > ε_H. The limit as we lift the cutoff is therefore similarly bounded below, T(ε) ≥ T_H, with equality now permitted. Since the temperature in the canonical description, at m_max → ∞, cannot be permitted to exceed T_H, this suggests that T(ε) = T_H for all ε above ε_H (recalling that in this section we only consider energy densities that scale as N^0). Thus s(ε) is linear in that regime, with a slope determined by the Hagedorn temperature. In fact, we can be more rigorous in determining the upper bound on T(ε), without invoking Eq. (17). Fixing a temperature T > T_H, observe that ε(T; m_max) can be made arbitrarily large by increasing m_max, due to the Hagedorn divergence. This implies that, for any desired ε and any T > T_H (no matter how close to the Hagedorn temperature), we can find some finite m_max for which ε(T; m_max) > ε, and therefore T(ε; m_max) < T. As the limit m_max → ∞ is taken, T(ε; m_max) is thus sandwiched between a lower bound of T_H and an upper bound which asymptotically approaches T_H from above, and we are forced to conclude that T(ε) = T_H for all ε > ε_H (in the order-one regime).
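The upper-bound half of this sandwich can be seen numerically with the same illustrative truncated Hagedorn gas used above (again, all parameter values are placeholders): at any fixed T > T_H, the energy density grows without bound as the cutoff is raised, so for any target ε > ε_H there is a finite m_max with ε(T; m_max) > ε, and hence T(ε; m_max) < T.

```python
import numpy as np

T_H, C, m0 = 0.16, 1.0, 2.0          # GeV units; illustrative placeholder values

def eps_high(T, d, m_max, npts=200001):
    """Truncated Hagedorn gas in the Boltzmann limit (same toy model as above)."""
    m = np.geomspace(m0, m_max, npts)
    integrand = C * m**(1.0 - d) * (m * T / (2.0 * np.pi))**1.5 \
        * np.exp(m / T_H - m / T)
    return np.trapz(integrand, m)

d = 4.0
T = 1.005 * T_H                       # any fixed temperature above T_H
eps_H = eps_high(T_H, d, 1e8)         # finite limiting energy density for d > 7/2
for m_max in (10.0, 100.0, 300.0, 1000.0):
    ratio = eps_high(T, d, m_max) / eps_H
    print(f"m_max = {m_max:6g} GeV:  eps(T; m_max)/eps_H = {ratio:.3g}")
# The ratio grows without bound with m_max, which is the statement used in the
# sandwich argument: T(eps; m_max) can be pushed below any fixed T > T_H.
```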
This resolves our puzzle: one can introduce energy into a hadronic system so that ε > ε_H without pushing the temperature beyond T_H; T hits T_H and stays there. But this creates a new paradox. Clearly, the microcanonical and canonical ensembles differ in this regime, since knowledge that T = T_H is not sufficient to determine the energy density. However, for any given species of hadron, the canonical description should be valid and should thus determine the energy density associated with that species. If one sums these together, one necessarily gets ε = ε_H. The paradox is that there is more energy in the system than this. The resolution of this paradox is that the standard thermodynamic behavior of energy becoming an intensive quantity in the limit of large volumes breaks down. Since it is only in the thermodynamic limit that the canonical and microcanonical descriptions agree, there is no incompatibility. (The value of m_max beyond which the limiting behavior sets in depends on the value of ε − ε_H; the required regime is pushed off to infinity as ε − ε_H approaches zero. The underlying reason for this is easy to understand: s(ε) is nonanalytic at ε = ε_H when m_max is infinite but is analytic for any finite value of m_max. Thus the limiting behavior when m_max gets large and ε − ε_H gets small depends on the order in which the limits are taken. However, our goal is to study the m_max → ∞ limit of the system, and in that limit s''(ε) = 0 in the entire ε > ε_H regime, extending all the way down to ε_H. Ultimately, it is straightforward to show that lim_{ε→ε_H^+} (lim_{m_max→∞} s'(ε)) = 1/T_H.) To see why the standard thermodynamic limit breaks down, let us again introduce the constraint that only hadrons with mass less than m_max contribute. With this cutoff, we can determine which hadrons dominate ε − ε_H when the system is beyond the Hagedorn point. By taking the cutoff m_max to be arbitrarily large, we can take the temperature T at the desired energy density to be arbitrarily close to T_H. The difference in energy densities is then determined by the specific heat. The specific heat diverges as T → T_H from below in the absence of the constraint, but it remains finite when the constraint is imposed. Thus, the divergent behavior in the specific heat is dominated by contributions from large-mass hadrons. Therefore, it is highly plausible that, with a cutoff imposed, the excess energy density is dominated by hadrons at the scale of the cutoff. When the constraint is released and m_max is taken to infinity, these contributions are pushed to arbitrarily high masses. In effect, all of the light hadrons equilibrate normally as an ideal gas at T_H, and all of the extra energy is pushed to arbitrarily high masses. This intuitive argument can be made precise. In the hadronic regime at large N, the energy density of the system is constructed from the contributions of the various species of noninteracting hadrons. We define ε_j(ε, m_max) as the contribution of the j-th species of hadron to the energy density when the total energy density is ε and a constraint including only hadrons with mass below m_max is imposed. When ε > ε_H, one can then define m̄ to be the energy-weighted average mass of the hadrons contributing to the difference between ε and ε_H, as in Eq. (18), where m_j is the mass of the j-th hadron. A straightforward but somewhat involved calculation gives the asymptotic form of m̄ at large m_max (Eq. (19)). As expected, when m_max gets large, so does the mass of a typical hadron contributing to the energy density above ε_H.
It is striking that the typical mass is not merely at the scale of m_max; it is parametrically close to m_max itself. Now consider what happens if one drops the constraint of m_max and has a large volume V into which energy E in the form of hadrons is injected, such that E/V > ε_H. Note that even without m_max, the finiteness of the system imposes an upper bound of E on the possible masses. Thus, one could impose m_max = E without changing the physics. If one assumes that a standard thermodynamic limit with an intensive energy holds, one sees from Eq. (19) that all of the additional energy between E and ε_H V is contained in hadrons with typical masses very close to E itself: one has virtually all of the excess energy in a single hadron. But this is true regardless of the volume of the system, which is incompatible with the assumption of a standard thermodynamic limit; the assumption of a standard thermodynamic limit implies its own contradiction. While the lack of a well-defined thermodynamic limit is a reasonable mathematical explanation, it is important to understand what is happening physically. The physical picture is actually quite simple: after additional energy is injected into the system, raising the average energy density above ε_H, interaction effects (which are subleading in 1/N) would continuously rearrange the energy so that the lighter hadrons move toward distributions compatible with the temperature T_H, while the additional energy flows to higher and higher masses; for a system with infinite volume this process would never stop, and the system would never fully equilibrate. This thermodynamic system would appear to exemplify the maxim that "there is always room at the top". For a finite-volume system, this process would have to stop: eventually, the mass of the hadrons absorbing the excess energy would become comparable to the excess energy itself, and the system can equilibrate. However, for large volumes the equilibration time becomes long. Of course, the behavior of an infinite system with energy flowing to ever higher masses cannot be realized in practice. For one thing, all real systems have finite volume. Another obvious reason is that N is finite for any conceivable physical system, and rather small in QCD itself. There are two principal effects associated with finite N. The first is associated with the masses of the hadrons. As the masses of the hadrons are pushed ever upward in this scenario, they will eventually reach a point at which they can no longer be regarded as being of order unity in a 1/N expansion. However, the notion of well-defined narrow mesons and glueballs with well-defined masses, which is the basis of this analysis, is only valid for hadrons with masses of order unity. Secondly, we have used ideal-gas expressions, since hadron-hadron interactions are subleading in the 1/N approximation. Presumably, these subleading effects restore a well-defined thermodynamic limit as volumes increase, but when N is large this limit is approached very slowly. These 1/N corrections are essential if Assumption 2 of the introduction is to hold. Assumption 2 posits a single generic first-order phase transition, which implies that in the region where ε > ε_H, s''(ε) > 0. However, at large N, s''(ε) = 0; instead of corresponding to a locally unstable system, as expected in a generic first-order transition, the system is neutral, with neither positive nor negative feedback from small-amplitude fluctuations of large size. However, higher-order 1/N corrections can ensure that s''(ε) > 0, even if it is small.
In that case one can have a generic first-order transition between a hadronic regime, in which ε and s are both of order N^0, and a plasma regime, in which they are both of order N².

V. DISCUSSION

To summarize the principal results of this paper: given the three apparently innocuous assumptions of the introduction, QCD (and pure gauge theory) in the large-N limit has the following properties.
1. The system has negative absolute pressure, a pressure below that of the theory's vacuum, in the supercooled metastable plasma phase.
2. There exists a regime in which the theory is both unambiguously in the plasma phase and unambiguously strongly coupled.
3. A well-defined thermodynamic limit does not exist in the hadronic regime when the energy density exceeds ε_H.
Of course, one possibility is that these assumptions are not innocuous. For example, one might imagine that Assumption 2 fails in a subtle way, such that for every large value of N a first-order transition exists with a finite temperature domain of supercooling for the plasma phase, but the size of this domain drops to zero as N approaches infinity. Were that to occur, the conclusion that a negative absolute pressure occurs in the supercooled phase could be evaded. However, such a situation would be very interesting in its own right, since, unlike in a typical first-order transition, there would be large fluctuations as one approached the transition point from above; these would be driven by the nonanalyticity at the endpoint of the supercooled phase, which at large N would have to coincide with the first-order point. One might also consider scenarios where something similar happens in the superheated hadronic phase. Lattice evidence on whether T_c approaches T_H in the large-N limit is inconclusive [28]. The two temperatures are the same in N = 4 super Yang-Mills, as well as in some models of the large-N limit of SU(N) Yang-Mills [29]. Any scenario of this sort is worth exploring. One very interesting possibility concerns the hadronic region with energy density beyond ε_H. It may turn out that in certain circumstances this region is long-lived at large N, in a parametric sense, despite being locally unstable. The basic issue is that at large N the system is not unstable but neutral; the instability must come about through 1/N effects, which implies that the lifetime of the matter, before the instability substantially affects things, must be long. There is a subtlety, however: there is also a natural time scale over which such a system can thermally equilibrate. This is also long at large N, since the interactions needed to equilibrate it are likewise 1/N suppressed. Thus, there remains an interesting open question: at large N, does the system equilibrate more rapidly, in a parametric sense, than the natural time scale for the instability? If it does, then a homogeneous medium could form and equilibrate and, if N were sufficiently large, live for a parametrically long time before the instability destroyed it; one would have a locally unstable but nevertheless long-lived system. At this stage it is unclear whether this happens. It is worth noting that this may depend on whether one is studying large-N QCD or pure gauge theory, since the 1/N corrections for glueballs and mesons differ.
Eco-Friendly Approach for Graphene Oxide Synthesis by Modified Hummers Method

The aim of this study is to produce graphene oxide using a modified Hummers method without using sodium nitrate. This modification eliminates the production of toxic gases. Two drying temperatures, 60 °C and 90 °C, were used. The material was characterized by X-Ray Diffraction, Fourier Transform Infrared Spectroscopy, Raman Spectroscopy and Scanning Electron Microscopy. The FTIR study shows various functional groups such as hydroxyl, carboxyl and carbonyl. The XRD results show that the spacing between the layers of GO60 is slightly larger than that of GO90. SEM images show a homogeneous network of graphene oxide layers of ≈6 to ≈9 nm. The procedure described has an environmentally friendly approach.

Introduction

Graphene has excellent mechanical, electronic, optical and thermal properties. It has a unique two-dimensional structure one atom thick [1]. Many researchers have been interested in investigating this two-dimensional (2D) form of carbon because it has become a relevant topic for the development of materials with many applications [2]. As reported in the literature, graphene has a large specific surface area [3], an efficient electron mobility (200,000 cm² V⁻¹ s⁻¹) [4,5], a high Young's modulus (1 TPa) [6], and good thermal conductivity (4.84 × 10³ to 5.30 × 10³ W/mK) [7]. Graphene oxide (GO) can be manufactured or self-assembled into materials with controlled compositions and microstructures for different applications [8]. Previous work has reported the use of graphene oxide combined with fullerene in thin-film form to produce lightweight three-dimensional hybrid structures with high surface area [9]. The arrangement of other molecules within graphene oxide layers has shown that multilayer structures exhibit high biocatalytic activity [10]. The Langmuir-Blodgett process has recently been used for the production of graphene oxide, by which a uniform dispersion and controllable development of graphene oxide flakes has been achieved [11]. The most important and widely applied method for GO synthesis is that developed by Hummers and Offeman [12]. This method has three important advantages over other techniques: first, the reaction is complete in a few hours; second, potassium chlorate can be replaced by potassium permanganate for a safer reaction; and third, the use of sodium nitrate eliminates acid mist formation. However, the method also has some drawbacks, since in the oxidation process some toxic gases such as nitrogen dioxide and dinitrogen tetroxide are released. In addition, sodium and nitrate ions are difficult to remove from the wastewater formed during the process of synthesis and purification of graphene oxide. In previous works, the Hummers method has been improved by excluding sodium nitrate and increasing the amount of potassium permanganate, carrying out the reaction in a single mixture [13]. With this modification, it is possible to increase the performance of the reaction and reduce the release of toxic gases; also, phosphoric acid is introduced in the reaction system. Previous research has reported that the mixture of sulfuric acid and nitric acid used in the Hummers method acts as a "chemical scissors" for graphene planes that facilitates the penetration of the oxidation solution [14]. On the other hand, potassium permanganate can achieve the complete intercalation of graphite, forming graphite bisulfate [15,16].
This interaction ensures the effective penetration of potassium permanganate into the graphene layers for graphite oxidation. Because of this, potassium permanganate takes over the function of sodium nitrate, which is therefore not necessary for the reaction. In this investigation, we show an easy synthesis route to produce GO using a low-cost and environmentally friendly modified Hummers method. In addition, the synthesis route is highly reproducible in obtaining graphene oxide for its subsequent reduction (rGO) for possible biocatalytic applications, as reported in previous works.

Synthesis of GO

Materials included: natural graphite (99%) supplied by Aldrich Chemistry, potassium permanganate (KMnO4) supplied by Sigma Aldrich, sulfuric acid (H2SO4) supplied by Jalmex, hydrochloric acid (HCl) supplied by Sigma Aldrich, and hydrogen peroxide (H2O2) supplied by J.T. Baker. All the reagents were obtained in the city of Queretaro, Mexico. The synthesis process for obtaining graphene oxide is described below. Two glycerin baths were preheated to 45 °C and 98 °C, respectively. Then, 1 g of graphite and 23 mL of sulfuric acid (H2SO4) were placed in a round-bottom flask in a cold bath for 5 min and stirred for 5 min. Subsequently, potassium permanganate (KMnO4) was added and the mixture was placed in the glycerin bath at 45 °C for 2 h. After 2 h, the mixture was transferred to the glycerin bath at 98 °C and 46 mL of distilled water at room temperature was added; it was then kept there for 15 min. After 15 min, 140 mL of hot water was added along with 10 mL of hydrogen peroxide (H2O2). The mixture obtained was poured into a strainer for vacuum filtration. The sample was removed and divided into 6 jars with 1 g of sample each. Finally, 2 mL of hydrochloric acid (HCl) and distilled water were added to wash the samples by centrifugation. The final sample was placed in a Petri dish and dried in an oven at 60 °C or 90 °C for 24 h. In Table 1, the number of washes, the centrifugation speed and the time for each sample are shown. In addition, in Figure 1, the results after washing and after drying, respectively, are observed. Finally, in Figure 2, the synthesis process is shown in a flow chart. Table 2 shows a comparison between the traditional synthesis method and the modified Hummers method used in this work.
X-ray Diffraction (XRD)

In this study, X-Ray Diffraction (XRD) was used to determine the crystal structure and verify the spacing between the GO layers. The XRD patterns for the samples dried at 60 °C, GO60-1 and GO60-2, are presented in Figure 3. These samples exhibit a diffraction peak at 9.28° due to the (002) plane of GO [17]. In addition, a small peak at ≈26° is observed; according to the literature, this peak corresponds to graphite. When the graphite is oxidized, the diffraction peak should shift from ≈26° to ≈11°; this agrees with the results observed in Figure 3. On the other hand, the XRD patterns for the samples dried at 90 °C, GO90-1 and GO90-2, are presented in Figure 4. These samples present a diffraction peak at 9.6°, which is slightly different from the GO60 samples.
In GO90-1, a small peak at ≈26° corresponding to graphite can also be observed; however, in GO90-2 this peak is no longer present, which suggests that in that sample all the graphite was oxidized to GO. In addition, the intensity of the main peak in GO90-2 is three times greater than in GO90-1, which suggests a greater number of (002) planes in that direction. This difference could be due to the increase in the drying temperature compared to the GO60 sample. In both samples, the observed peaks are sharp, which indicates that the graphite was completely oxidized by this method. The spacing between the GO layers was calculated using Bragg's law [18], nλ = 2d sin θ, where n is the diffraction order, λ is the X-ray wavelength (0.154 nm), d is the interlayer spacing, and θ is the diffraction angle. The spacing between the layers was 0.95 nm for GO60 and 0.92 nm for GO90, according to Bragg's law. The two spacings are slightly different, since a different drying temperature was used. Usually, the interlayer spacing d of graphene oxide is in the range of 0.6-1.0 nm and is controlled by the degree of oxidation of the graphite and the number of intercalated water molecules in the interlayer space [19]. In previous works, it has been reported that the increase in the spacing between the layers is due to the oxygen-containing functional groups and water molecules intercalated in the structure of the carbon layers [20]. On the other hand, Zeng et al. mention that the increase is related to the weaker Van der Waals bonding produced by the epoxy, hydroxyl, carbonyl and carboxyl groups on the basal planes [21].
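As a quick arithmetic cross-check of these numbers (the wavelength is the value quoted above and first-order diffraction, n = 1, is assumed), Bragg's law reproduces the reported spacings directly from the peak positions:

```python
import numpy as np

LAMBDA_NM = 0.154                     # X-ray wavelength quoted in the text

def d_spacing(two_theta_deg, n=1):
    """Interlayer spacing from Bragg's law  n*lambda = 2*d*sin(theta)."""
    theta = np.radians(two_theta_deg / 2.0)
    return n * LAMBDA_NM / (2.0 * np.sin(theta))

for label, two_theta in [("GO60 (002)", 9.28), ("GO90 (002)", 9.60), ("graphite (002)", 26.0)]:
    print(f"{label}: 2theta = {two_theta:5.2f} deg  ->  d = {d_spacing(two_theta):.2f} nm")
# GO60 -> ~0.95 nm, GO90 -> ~0.92 nm, graphite -> ~0.34 nm, in line with the text.
```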
Fourier Transform Infrared Spectroscopy (FTIR)

The FTIR spectra of the GO60 and GO90 samples are shown in Figure 5. The spectra contain vibrational bands of GO that include carbonyl (C=O), aromatic (C=C), and hydroxyl (O-H) groups; these groups are listed in Table 3. The O-H stretching vibrations in the region of 3500-3000 cm⁻¹ are attributed to the carboxyl and hydroxyl groups and to residual water present in the GO samples. These hydrophilic oxygen-containing functional groups provide the GO samples with good dispersibility in water [22]. The peak at 1730 cm⁻¹ is due to the ketone group (C=O), while the peak at 1567 cm⁻¹ corresponds to the main graphitic domain and is due to sp² hybridization [23]. Finally, the band at 1100 cm⁻¹ indicates the C-O stretching of the epoxy groups [24]. These results suggest that the graphite powder was successfully oxidized in the presence of acid with potassium permanganate (KMnO4).

Raman Analysis

The Raman spectra of GO60 and GO90 are shown in Figure 6. Both Raman spectra contain bands marked as the D and G bands. The D peak appears at ≈1300 cm⁻¹, while the G peak appears at ≈1600 cm⁻¹. The G band is associated with graphitic carbon, and the D band is related to structural defects or partially disordered graphitic domains [25]. In both spectra, the D bands are strong, which confirms the distortion of the graphene basal-plane lattice. In addition, the G band is prominent for sp² carbon lattices.
Ferrari et al. mention that the D band reveals disorder in crystalline materials and defects associated with vacancies and grains [26]. On the other hand, the G peak corresponds to the optical phonons at the center of the Brillouin zone, which result from the stretching of the bonds of sp² carbon pairs in rings as well as in chains [27]. The intensity ratio I_D/I_G was therefore calculated for both samples, resulting in 1.26 and 1.2 for GO60 and GO90, respectively. These results provide evidence of the degree of functionalization of the graphene oxide [28]. According to the literature, it is possible to estimate the number of layers in the graphene oxide flakes from the position and shape of the D band in the Raman spectra [29]. Our results indicate a few layers in the flakes, on the order of 3. Silva et al. mention that the D band gives information about the exfoliation of graphene, while the G band provides information about the number of layers [30].
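The text does not state how the I_D/I_G ratio was extracted, so the sketch below is only one common way to do it: fit a D and a G Lorentzian plus a constant baseline and take the ratio of the fitted peak heights (some groups use integrated areas instead). The spectrum generated here is synthetic placeholder data, not the measured GO60/GO90 spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, height):
    """Lorentzian line with peak height `height` at x0 and half-width gamma."""
    return height * gamma**2 / ((x - x0)**2 + gamma**2)

def d_plus_g(x, xd, gd, hd, xg, gg, hg, baseline):
    return lorentzian(x, xd, gd, hd) + lorentzian(x, xg, gg, hg) + baseline

# --- synthetic placeholder spectrum (two Lorentzians plus noise), NOT real data ---
rng = np.random.default_rng(0)
shift = np.linspace(1000.0, 2000.0, 1000)                 # Raman shift, cm^-1
spectrum = d_plus_g(shift, 1340, 60, 1.26, 1595, 40, 1.00, 0.05)
spectrum += rng.normal(0.0, 0.01, shift.size)

p0 = (1300, 50, 1.0, 1600, 50, 1.0, 0.0)                  # rough initial guesses
popt, _ = curve_fit(d_plus_g, shift, spectrum, p0=p0)
I_D, I_G = popt[2], popt[5]                               # fitted peak heights
print(f"I_D/I_G = {I_D / I_G:.2f}")                       # ~1.26 for this synthetic example
```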
It was observed that the folded regions (Figure 7c) have an average width of ≈6 nm, while the fold thickness (Figure 8c) averages ≈9 nm. These measurements represent the thickness of the stacked graphene oxide layers. The high-resolution SEM data suggest the presence of individual sheets in GO60 and GO90, and the measured fold thickness in both samples carries a confidence limit of approximately ±1 nm. The absence of charging during SEM imaging indicates that the network of graphene oxide sheets, as well as the individual sheets, is electrically conductive [31]. Energy-Dispersive Spectroscopy (EDS) EDS analyses were performed in order to obtain the elemental composition of the samples. A sweep over different regions showed that carbon and oxygen are the predominant elements in both samples. Other impurities such as potassium, chlorine, sulfur and silicon are present in smaller amounts; these elements originate from the precursors used in the synthesis, and the washing steps applied to the samples reduce their presence. The aluminum signal is due to the sample holder used in the study. Figure 9 shows the analyzed region and the spectrum obtained for each sample at a magnification of ×5000, and Table 4 gives the elemental composition of GO60 and GO90.
Conclusions In conclusion, we developed a modified Hummers method that does not use sodium nitrate (NaNO3) to obtain graphene oxide. This method eliminates the generation of toxic gases and simplifies the procedure, thus reducing the cost of GO synthesis. The GO characterizations indicate that the products have a similar chemical structure, thickness and dimensions. Excluding sodium nitrate yields graphene oxide with the same characteristics and does not affect the overall reaction yield. The modified Hummers method developed here can be used to prepare GO on a large scale and is a first step toward obtaining pure graphene and its derivatives. The synthesis described in this work follows an environmentally friendly approach, and the resulting graphene oxide can find use in applications such as energy storage and as a conductive filler in composite materials.
6,339.2
2022-10-01T00:00:00.000
[ "Materials Science" ]
Automated Microsegmentation for Lateral Movement Prevention in Industrial Internet of Things (IIoT) The integration of the IoT network with the Operational Technology (OT) network is increasing rapidly. However, this incorporation of IoT devices into the OT network makes the industrial control system vulnerable to various cyber threats. By hacking an IoT device at the network edge, an attacker can move laterally to compromise the main control server and manipulate the whole control system of the industrial infrastructure. In this paper, we propose an automated Micro-segmentation (MS) model based on Machine Learning (ML) algorithms to reduce the lateral movement of an attacker or malware. The proposed model generates micro-segments based on network traffic and blocks malicious traffic at each segment. We used the UNSW-NB15 and IoTID20 datasets for our experiments. Experimental results show that, after generating micro-segments and separating the normal traffic, the model limits redundant links and blocks malicious traffic. Limiting the use of redundant links reduces lateral movement and the spreading of malware. We also considered a deterministic epidemic model to analyze the device infection rate due to lateral movement or malware propagation. I. INTRODUCTION Information technology (IT) is the application of computers, networking devices and communication technologies for collecting, processing, storing, and communicating digital data [1]. In contrast, OT involves industrial infrastructures, which use SCADA or control system networks for directly monitoring and controlling industrial equipment [2]. Unlike IT, the OT network, which includes devices such as Programmable Logic Controllers (PLC), has power constraints, slower processing capability, low memory and a much longer upgrade cycle [3]. The integration of the Industrial Internet of Things (IIoT) in the industrial manufacturing environment converges the IT and OT networks. The convergence of IT and OT offers various benefits, including improved safety, increased productivity, efficiency, and predictive maintenance [4]. Along with these benefits, the convergence of IT and OT networks faces severe security risks. Due to the connection with the IIoT network, the OT network becomes accessible through the Internet [5]. Moreover, OT devices such as PLCs or other controlling devices were not designed with security vulnerabilities in mind [5]. An attacker can gain access to the OT network by passing through the IoT network using lateral movement. Lateral movement, or east-west traffic, enables an attacker to compromise the entire network, including internal servers and other devices [6]. Such a compromise of controlling devices may result in massive damage in the industrial domain. For instance, an attacker took control of the main server of Oldsmar's water treatment plant in Florida, USA, in February 2021 [7]. By taking control of the water treatment plant, the attacker abnormally increased the amount of NaOH in the water, which may cause vision problems, pain and shock if consumed. Securing the IoT devices may prevent lateral movement. However, the IoT network is vulnerable to various security threats [8], and these vulnerabilities create loopholes for lateral movement.
Moreover, replacing the cloud network with edge devices removes the centralized control over the IoT devices. An attacker can hack or hijack the IoT devices at the network edge and inject malware. Without special security measures, any device in the network can access any other device, as in a mesh topology [9], which enables the malware to reach anywhere in the network. This malware enables an attacker to compromise the internal servers. Therefore, securing the IoT network to prevent lateral movement has become indispensable. However, according to a survey, 99% of security professionals are struggling to secure IoT devices and face challenges in deploying security patches through firmware updates [10]. Network MS is a promising way to prevent lateral movement throughout the IoT network. MS prevents lateral movement and reduces the attack surface by splitting a large network into several smaller network segments [11]. The access of each device in a micro-segment is then restricted to the segment perimeter by imposing specific security rules, so devices within a micro-segment cannot communicate with devices outside their restricted perimeter. Restricting access can confine malware or an attacker within the segment and reduce further movement outside the compromised device's segment. Although MS is widely applied to secure the cloud and server workloads [11], it is challenging for IoT networks for several reasons. First, the IoT network is large and dynamic, which makes it difficult to identify proper segments. Second, it is difficult to maintain and periodically update a large number of micro-segments with their security rules. Intelligent algorithms can be used to take over these tedious tasks of maintaining MS and security policies for IoT networks. In this work, we propose an automated MS procedure and security-rule generation for each segment based on ML algorithms. The micro-segments are generated with the OPTICS clustering algorithm, and a Decision Tree (DT) classification algorithm is then used to separate malicious network traffic from legitimate traffic. These traffic data are then used to generate the packet-filtering policy. Section II discusses the related work. Section III presents the system model, including the network model, threat model, and proposed MS process. Section IV demonstrates the experimental results. Section V analyzes the security enhancement provided by MS, and finally, Section VI concludes this paper with future work. II. LITERATURE REVIEW Very few research works have addressed the prevention of lateral movement in the IoT network domain; some related studies are discussed in this section. The authors in [6] proposed a micro-segmentation technique based on an edge-cloud architecture for smart-home IoT networks, using OpenFlow rules. The proposed model blocks attackers from accessing the LAN and WAN of the smart-home IoT network. However, the OpenFlow rules are static and need to be updated manually, and the approach applied to smart homes is not suitable for large-scale and dynamic IIoT networks. The authors in [12] proposed an evidence-reasoning lateral-movement detection technique for the cloud-edge environment and introduced a vulnerability-correlation process into lateral-movement detection. However, this model is not appropriate for networks that replace the cloud architecture with only edge computing devices.
A micro-segmentation technique based on the K-means clustering algorithm was proposed in [13] for enterprise networks. However, the K-means algorithm requires the number of clusters to be defined in advance, which is not effective for a large-scale network such as an industrial IoT or other sensor network. The MITRE ATT&CK framework also takes lateral movement into consideration. MITRE ATT&CK can be described as the set of individual techniques performed by an attacker to accomplish malicious tasks. It was shown in [14] that MITRE ATT&CK encompasses 440 attack techniques belonging to 27 different tactics. These malicious activities may include gaining access to the IoT network through phishing links and then compromising other devices through lateral movement. Furthermore, a public repository (referred to as the MITRE ATT&CK Framework) is available that contains adversary tactics, techniques and procedures based on real-world observations [15]. This publicly available knowledge base provides a rich resource for the development of specific attack detection, prediction and mitigation models. A. Network Model In a traditional OT network such as SCADA, all the data are collected and analyzed in a centralized server. The IIoT network improves the SCADA network by introducing edge devices at the network edge. Figure 1 shows the IIoT- and edge-enabled SCADA network, where edge devices are connected to the RTUs. These edge devices receive data from the sensors, which are connected to the industrial equipment. After receiving the data, the edge devices process them and provide real-time decisions for maintaining the industrial machinery. An administrator can control the whole network from the control centre and send commands through the RTUs. The data are also stored in central servers for future analysis and optimization [16]. This integration of IIoT and edge devices enables administrators to monitor and control the industrial control system remotely. B. Threat Model In this work, we considered the threat posed by the lateral movement of an attacker. Advanced Persistent Threats (APT) [17] are severe and long-lasting cyber attacks in which lateral movement is an attack phase where the attacker moves from compromised devices to other devices [18], [19]. APTs can be characterized as prolonged, stealthy attacks aimed at the theft of intellectual property or espionage rather than immediate financial gain [20], [21]. To take control of the main server of the industrial control system, the attacker moves deeper inside the network after hacking an IoT device, and can thus gradually compromise the whole network, with potentially devastating consequences. Moreover, an internal employee may intentionally try to compromise a device to achieve a malicious goal. C. Background on ML Algorithms 1) OPTICS clustering algorithm: OPTICS is an upgraded version of the DBSCAN algorithm. It was demonstrated in [22] that DBSCAN performs well in clustering network traffic compared with other models. However, unlike DBSCAN, OPTICS is better suited for large-scale datasets [23] and does not require the epsilon parameter (i.e., domain knowledge). For these reasons, we chose the OPTICS clustering algorithm.
2) DT algorithm: A DT is a supervised classification technique that consists of internal nodes, which represent features of the traffic data (for instance, IP address or Flow ID); branches, which represent the decision rules; and leaf nodes, which represent the outcomes (Malicious or Normal). This algorithm uses feature-selection measures such as information gain or the Gini index to select the best features for the root node and the internal nodes. Information gain (IG) can be defined as in Equation (1) [24], and it tells us how much information a feature provides about a class: IG(A) = E(S) − Σ_{i=1..n} (|S_i|/|S|) E(S_i), (1) where n is the number of values of attribute A, |S_i| is the number of cases in partition S_i, |S| is the total number of cases, and E is the entropy, E(S) = −Σ_c p_c log2 p_c, with p_c the proportion of cases in S belonging to class c. In this subsection, we discuss the MS generation process using the ML algorithms described in the previous subsection. Figure 2 shows the proposed MS creation model based on ML algorithms. As shown in [11], MS implementation consists of several steps. First, we need to identify and group the devices that show similar functionality or behavior. Here, we use the similarity of traffic data to group the IoT devices through the OPTICS clustering algorithm; each group of IoT devices then acts as a micro-segment. After generating the groups of similar devices, the traffic of each group of devices is classified as malicious or normal in order to create the security policies. For the classification task, we use the DT classifier. After classification, the algorithm looks for multiple connections of each IoT device and restricts access to the redundant links, keeping only one link per IoT node. Upon failure of the current link, the algorithm makes one of the restricted links available for use. This also results in blocking the malicious traffic. C. Training and Testing The OPTICS clustering algorithm and the DT classifier are implemented using Python's scikit-learn library. We took 1000 samples from each dataset at random to conduct the OPTICS clustering, since our experimental configuration could not cluster the entire dataset. We set min_samples=2, max_eps=np.inf, metric='chebyshev' and cluster_method='xi' as the parameters of the OPTICS clustering method; we found that OPTICS yields good results with the 'chebyshev' distance metric. For the classification algorithm, our environment supported the entire dataset, and we split it in a 70:30 ratio for training and testing the DT classifier. D. Results After running the OPTICS clustering algorithm, we obtained 178 clusters (micro-segments) for the UNSW-NB15 dataset and 295 clusters (micro-segments) for the IoTID20 dataset based on the 1000 random samples of each dataset. Table I shows the clustering results. After training and testing the DT classifier on both datasets, we computed the Accuracy, Sensitivity and Specificity metrics; Table II shows the evaluation results. From this table, we can see that the DT classifier performed similarly on both datasets in terms of Accuracy and Specificity, but the Sensitivity for the UNSW-NB15 dataset is slightly lower than that for the IoTID20 dataset. Figures 3 and 4 show the confusion matrices of the DT classifier for the IoTID20 dataset and the UNSW-NB15 dataset, respectively. We then used this trained DT classifier to differentiate between normal and malicious traffic in each cluster or micro-segment (as depicted in Figure 2).
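A minimal scikit-learn sketch of the clustering-plus-classification pipeline described above is given below. The OPTICS parameters, the 1000-sample subset, the 70:30 split, and the 0/1 labels follow the text; the CSV file name, the "label" column name, and the preprocessing are placeholders, not the authors' code.

```python
# Sketch of the micro-segmentation pipeline (assumptions noted in comments).
import numpy as np
import pandas as pd
from sklearn.cluster import OPTICS
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Placeholder input: a preprocessed traffic table with a binary "label" column
# (0 = normal, 1 = malicious). File name and column layout are illustrative.
df = pd.read_csv("unsw_nb15_preprocessed.csv")
X, y = df.drop(columns=["label"]).to_numpy(), df["label"].to_numpy()

# Step 1: cluster a random subset of flows into micro-segments (parameters from the text).
subset = np.random.default_rng(0).choice(len(X), size=1000, replace=False)
optics = OPTICS(min_samples=2, max_eps=np.inf, metric="chebyshev", cluster_method="xi")
segments = optics.fit_predict(X[subset])
print("micro-segments found:", len(set(segments)) - (1 if -1 in segments else 0))

# Step 2: train a decision tree on a 70:30 split to separate malicious from normal traffic.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, y_pred))
print(confusion_matrix(y_te, y_pred))

# Step 3 (per segment): flag flows predicted malicious so the packet-filtering
# policy for that micro-segment can block them.
for seg_id in set(segments):
    idx = subset[segments == seg_id]
    flagged = int(clf.predict(X[idx]).sum())
    print(f"segment {seg_id}: {flagged} flows flagged as malicious")
```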
Table III shows the security policies for a security group generated by the clustering algorithm. The MS model with the DT classifier blocks traffic generated outside of a security perimeter from entering the micro-network bounded by that perimeter, and any malicious traffic is also blocked. Hence, a single device is restricted from accessing the redundant links (Section V explains this in more detail), and the malicious traffic is blocked automatically. A. Dataset For the experiments, we used the UNSW-NB15 [25] and IoTID20 [26] datasets. These datasets contain various features of network traffic, including anomalous and normal data. The UNSW-NB15 dataset contains 48 features of the network traffic; the last feature is the class label, which is 0 for normal and 1 for malicious traffic. The IoTID20 dataset comprises 80 network features, including three class labels, and the Normal and Anomaly classes are subdivided according to various cyber attacks. B. Data Pre-processing Before applying the ML models to the datasets, we performed data preprocessing. First, the categorical features were encoded using the LabelEncoder() function and normalized using the StandardScaler() function. Both datasets are high dimensional, so we conducted a correlation analysis and found that 8 pairs of features are highly correlated with each other in the UNSW-NB15 dataset. However, among these 8 pairs, only the ('swin', 'dwin') and ('Stime', 'Ltime') pairs showed 100% correlation; therefore, the 'swin' and 'Stime' features were dropped from the UNSW-NB15 dataset. From the IoTID20 dataset, 21 highly correlated features showing 100% correlation were dropped. We chose a correlation threshold of 0.95. To further reduce the dimensionality of the datasets, we applied Principal Component Analysis (PCA). From PCA, we found that the first 30 principal components are sufficient to represent the overall information of the UNSW-NB15 dataset, and the first 20 principal components are adequate for the IoTID20 dataset. However, for the DT classifier, we did not apply the PCA procedure. The deterministic epidemic model can be written as dI(t)/dt = β I(t) (N − I(t)) / N, where β is the infection rate, which is constant for a specific malware, N is the total number of devices, and I(t) is the number of infected devices at time t. From the above analysis, we can see that the parameter β is proportional to the number of links in the IoT network for any malware: as the number of links increases, the probability of device infection also rises. Therefore, we consider β as the link parameter. A closed-form solution of the epidemic model is also given in [27] as I(t) = N I(0) / (I(0) + (N − I(0)) e^(−βt)), where I(0) is the number of devices infected at t = 0 units of time. Figure 6 (log plot) shows the device infection rate of the mesh network shown in Figure 5 with I(0) = 1, t = 15 time units, β = 10 without MS, β = 4 with MS, and N = 5. The device infection rate is clearly higher without MS than after applying MS. It is therefore evident that MS reduces the device infection rate by limiting lateral movement (at t = 14, almost 3 devices are infected without MS but only 2 devices are infected with MS). The device infection rate grows exponentially according to the closed-form expression above; therefore, if we consider a large network instead of the simple mesh network depicted in Figure 5, the difference between the two lines shown in Figure 6 will increase.
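A rough numerical sketch of the comparison behind Figure 6 is shown below. It integrates the SI-type model stated above with a simple Euler scheme; N = 5 and I(0) = 1 follow the text, while the two infection rates (0.30 vs. 0.12) are illustrative assumptions that keep the 10:4 ratio quoted for Figure 6 but are rescaled so that the transition is visible over 15 time units in this normalized form.

```python
# Euler integration of dI/dt = beta * I * (N - I) / N, comparing "full mesh"
# against "redundant links restricted by MS" (smaller effective infection rate).
def infected_over_time(beta, n_devices=5, i0=1.0, t_end=15.0, dt=0.01):
    infected = [i0]
    for _ in range(int(t_end / dt)):
        i = infected[-1]
        infected.append(i + dt * beta * i * (n_devices - i) / n_devices)
    return infected  # infected[k] approximates I(k * dt)

dt = 0.01
without_ms = infected_over_time(beta=0.30, dt=dt)  # more links -> larger effective rate
with_ms = infected_over_time(beta=0.12, dt=dt)     # links restricted by MS (same 10:4 ratio)

for t in (5, 10, 14):
    k = int(t / dt)
    print(f"t={t:>2}: infected without MS = {without_ms[k]:.2f}, with MS = {with_ms[k]:.2f}")
```

The printed counts illustrate the qualitative claim in the text: fewer devices are infected at any given time once the redundant links are restricted.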
If no restriction is imposed explicitly, an IoT device can communicate with multiple other devices, as in a mesh topology [9]. Multiple links therefore help malware spread more rapidly within a network, and attackers get more paths to move laterally and compromise devices. It is not acceptable simply to block the redundant links of an IoT device, since IoT devices must communicate through other available links if the current link fails. However, we can control the number of links to reduce the spreading of malware through lateral movement. MS has the potential to minimize the spreading of malware over the network by imposing specific security policies. In this section, we theoretically analyze the effectiveness of MS in terms of reducing malware dissemination. As an example, let us consider the segment of the IoT network shown in Figure 5, where D5 is the gateway node and D1, D2, D3 and D4 are the sensor nodes. The devices are connected in a mesh topology, so without any specific security measures the malware may spread through all the links. The number of links in this mesh topology is n(n − 1)/2 = 5 × 4/2 = 10. After applying MS, each device keeps only one active link; for device D1 this is link 2, so MS restricts access to the links numbered 1, 3 and 4 for device D1. Similarly, after restricting all the redundant links of the other devices, the total number of allowed links in this security group is reduced from 10 to 4. We can then use the deterministic epidemic model introduced above to compare the IoT device infection rate with and without MS. After applying MS, it takes more time for an attacker to move from the compromised device to the internal nodes; therefore, the administrator will be able to identify and revoke the compromised devices before the attacker takes control of the main server. VI. CONCLUSION In this work, we have proposed an automated MS model based on the OPTICS clustering algorithm and a DT classifier for preventing lateral movement in IIoT, using ML algorithms to automate the micro-segmentation process, since maintaining micro-segmentation for large-scale IoT networks manually is difficult and tedious. We use the network traffic to find and group similar IoT devices with the OPTICS clustering algorithm, so that IoT devices producing similar traffic can be grouped together. We then trained a DT classifier and used the resulting model to separate normal traffic from malicious traffic. The model restricts access to the redundant links of each IoT device, which reduces the spreading of malware, and MS further reduces the lateral movement of an attacker or malware over the entire IIoT network by imposing security rules. Furthermore, we analyzed the effectiveness of MS in the IIoT network and showed that MS reduces the device infection rate. However, only a static mesh topology of IoT devices was considered in the security analysis, whereas a real IoT network is more complex, heterogeneous, and dynamic. Therefore, in future work, we will apply statistical distributions to model the dynamic nature of large-scale IIoT networks. We also intend to integrate a malware detection model with the MS process in order to identify and revoke infected devices before an extensive portion of the network is compromised through lateral movement. We believe our work will open the door to further experiments on lateral-movement prevention using ML in IoT networks.
4,519.2
2021-12-15T00:00:00.000
[ "Computer Science" ]
Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers To build an interpretable neural text classifier, most prior work has focused on designing inherently interpretable models or finding faithful explanations. A new line of work on improving model interpretability has just started, and many existing methods require either prior information or human annotations as additional inputs in training. To address this limitation, we propose the variational word mask (VMASK) method to automatically learn task-specific important words and reduce irrelevant information for classification, which ultimately improves the interpretability of model predictions. The proposed method is evaluated with three neural text classifiers (CNN, LSTM, and BERT) on seven benchmark text classification datasets. Experiments show the effectiveness of VMASK in improving both model prediction accuracy and interpretability. Introduction Neural network models have achieved remarkable performance on text classification due to their capacity for representation learning on natural language texts (Zhang et al., 2015; Yang et al., 2016; Joulin et al., 2017; Devlin et al., 2018). However, the lack of understanding of their prediction behaviors has become a critical issue for reliability and trustworthiness and has hindered their application in the real world (Lipton, 2016; Ribeiro et al., 2016; Jacovi and Goldberg, 2020). Many explanation methods have been proposed to provide post-hoc explanations for neural networks (Ribeiro et al., 2016; Lundberg and Lee, 2017; Sundararajan et al., 2017), but they are only able to explain model predictions and cannot help improve their interpretability. In this work, we consider interpretability as an intrinsic property of neural network models. Furthermore, we hypothesize that neural network models with similar network architectures could have different levels of interpretability, even though they may have similar prediction performance. Table 1 shows explanations extracted from two neural text classifiers with similar network architectures. (Table 1 caption: two post-hoc explanation methods, LIME (Ribeiro et al., 2016) and SampleShapley (Kononenko et al., 2010), are used to explain the model predictions on examples 1 and 2, respectively; the top three important words are shown in pink or blue for models A and B. Whichever post-hoc method is used, explanations from model B are easier to understand because the sentiment keywords "clever" and "gimmicky" are highlighted.) Although both models make correct predictions of the sentiment polarities of the two input texts (positive for example 1 and negative for example 2), they have different explanations for their predictions. In both examples, no matter which explanation method is used, the explanations from model B are easier to interpret with respect to the corresponding predictions. Motivated by this difference in interpretability, we investigate the possibility of building more interpretable neural classifiers with a simple modification of the input layer. The proposed method does not demand significant effort in engineering network architectures (Rudin, 2019; Melis and Jaakkola, 2018). Also, unlike prior work on improving model interpretability (Erion et al., 2019; Plumb et al., 2019), it does not require pre-defined important attributions or pre-collected explanations.
Specifically, we propose variational word masks (VMASK) that are inserted into a neural text classifier, after the word embedding layer, and trained jointly with the model. VMASK learns to restrict the information of globally irrelevant or noisy word-level features flowing to subsequent network layers, hence forcing the model to focus on important features to make predictions. Experiments in Section 5 show that this method can improve model interpretability and prediction performance. As VMASK is deployed on top of the word-embedding layer and the major network structure remains unchanged, it is model-agnostic and can be applied to any neural text classifier. The contribution of this work is three-fold: (1) we propose the VMASK method to learn global task-specific important features that can improve both model interpretability and prediction accuracy; (2) we formulate the problem in the framework of the information bottleneck (IB) (Tishby et al., 2000; Tishby and Zaslavsky, 2015) and derive a lower bound of the objective function via the variational IB method (Alemi et al., 2016); and (3) we evaluate the proposed method with three neural network models, CNN (Kim, 2014), LSTM (Hochreiter and Schmidhuber, 1997), and BERT (Devlin et al., 2018), on seven text classification tasks via both quantitative and qualitative evaluations. Related Work Various approaches have been proposed to interpret DNNs, ranging from designing inherently interpretable models (Melis and Jaakkola, 2018; Rudin, 2019), to tracking the inner workings of neural networks (Jacovi et al., 2018; Murdoch et al., 2018), to generating post-hoc explanations (Ribeiro et al., 2016; Lundberg and Lee, 2017). Beyond interpreting model predictions, explanation generation methods are also promising for improving a model's performance. We propose an information-theoretic method to improve both prediction accuracy and interpretability. Explanation from the information-theoretic perspective. A line of work that motivates ours leverages information theory to produce explanations, either maximizing mutual information to recognize important features (Chen et al., 2018; Guan et al., 2019) or optimizing the information bottleneck to identify feature attributions (Schulz et al., 2020; Bang et al., 2019). The information-theoretic approaches are efficient and flexible in identifying important features. Different from generating post-hoc explanations for well-trained models, we utilize the information bottleneck to train a more interpretable model with better prediction performance. Improving prediction performance via explanations. Human-annotated explanations have been utilized to help improve model prediction accuracy (Zhang et al., 2016). Recent work has used post-hoc explanations to regularize the prediction behavior of models and force them to put more emphasis on predefined important features, hence improving their performance (Ross et al., 2017; Ross and Doshi-Velez, 2018; Liu and Avci, 2019; Rieger et al., 2019). Different from these methods, which require expert prior information or human annotations, the VMASK method learns global important features automatically during training and incorporates them seamlessly to improve model prediction behavior. Improving interpretability via explanations. Some work focuses on improving model interpretability by aligning explanations with human judgements (Camburu et al., 2018; Du et al., 2019b; Chen and Ji, 2019; Erion et al., 2019; Plumb et al., 2019).
Similarly to the prior work on improving model prediction performance, these methods still rely on annotations or external resources. Although they enhance model interpretability, they may cause a drop in prediction accuracy due to the inconsistency between human recognition and the model's reasoning process (Jacovi and Goldberg, 2020). Our approach can improve both prediction accuracy and interpretability without resorting to human judgements. Method This section introduces the proposed VMASK method. For a given neural text classifier, the only modification to the network architecture is to insert a word mask layer between the input layer (e.g., word embeddings) and the representation-learning layer. We formulate our idea within the information bottleneck framework (Tishby et al., 2000), where the word mask layer restricts the information flowing from the words to the final prediction. Interpretable Text Classifier with Word Masks Consider an input text x = [x_1, ..., x_T], where x_t (t ∈ {1, ..., T}) denotes the word or the word index in a predefined vocabulary, and let x_t ∈ R^d be the word embedding of x_t. A neural text classifier is denoted as f_θ(·) with parameters θ, which by default takes x as input and generates a probability distribution p(Y|x) over all possible class labels. In this work, beyond prediction accuracy, we also expect the neural network model to be more interpretable by focusing on important words to make predictions. To help neural network models perform better feature selection, we add a random layer R after the word embeddings, where R = [R_{x_1}, ..., R_{x_T}] has the same length as x. Each R_{x_t} ∈ {0, 1} is a binary random variable associated with the word type x_t rather than the word position. This random layer together with the word embeddings forms the input to the neural network model, i.e., Z = R ⊙ x, (1) where ⊙ is an element-wise multiplication and each Z_t = R_{x_t} · x_t. Intuitively, Z contains only a subset of x, selected randomly by R. To ensure that Z has enough information for predicting Y while containing the least redundant information from x, we follow the standard practice in information bottleneck theory (Tishby et al., 2000) and write the objective function as max I(Z; Y) − β I(Z; X), (2) where X is a random variable representing a generic word sequence as input, Y is the one-hot output random variable, I(·; ·) is the mutual information, and β ∈ R+ is a coefficient that balances the two mutual-information terms. This formulation reflects our exact expectation on Z. The main challenge here is computing the mutual information. Variational Word Masks Inspired by the variational information bottleneck proposed by Alemi et al. (2016), instead of computing p(X, Y, Z), we start from an approximation distribution q(X, Y, Z). Then, with a few assumptions specified in the following, we construct a tractable lower bound of the objective in Equation 2; the detailed derivation is provided in Appendix A. For I(Z; Y) under q, we have I(Z; Y) = Σ_{y,z} q(y, z) log(q(y|z)/q(y)). By replacing log q(y|z) with the conditional probability derived from the true distribution, log p(y|z), we introduce the constraint between Y and Z from the true distribution and also obtain a lower bound of I(Z; Y), I(Z; Y) ≥ Σ_{y,z} q(y, z) log p(y|z) + H_q(Y) = E_{q(x)} E_{q(y|x)} E_{q(z|x)}[log p(y|z)] + H_q(Y), where H_q(·) is entropy and the last step uses q(x, y, z) = q(x) q(y|x) q(z|x), a factorization based on the conditional dependency structure (Y and Z are independent given X).
Given a specific observation (x^(i), y^(i)), we define the empirical distribution q(X^(i), Y^(i)) as the product of two Delta functions, q(x, y) = δ(x − x^(i)) δ(y − y^(i)). The lower bound of I(Z; Y) can then be further simplified as E_{q(Z|x^(i))}[log p(y^(i) | Z)], up to a constant that does not depend on q(Z|X) (Equation 4). Similarly, for I(Z; X) under q, we obtain an upper bound of I(Z; X) by replacing q(Z) with a predefined prior distribution p_0(Z): I(Z; X) ≤ E_{q(x)} KL[q(Z|x) || p_0(Z)] = KL[q(Z|x^(i)) || p_0(Z)] (Equation 5), where KL[·||·] denotes the Kullback-Leibler divergence and the simplification in the last step is analogous to Equation 4 with the empirical distribution q(X^(i)). Substituting (5) and (4) into Equation 2 gives a lower bound L of the information bottleneck objective, L = E_{q(Z|x^(i))}[log p(y^(i) | Z)] − β KL[q(Z|x^(i)) || p_0(Z)]. (6) The learning objective is to maximize Equation 6 with respect to the approximation distribution q(X, Y, Z) = q(X, Y) q(Z|X). As this is a classification problem, X and Y are both observed and q(X, Y) has already been simplified to an empirical distribution, so the only remaining factor in the approximation is q(Z|X). Similarly to the objective function in variational inference (Alemi et al., 2016; Rezende and Mohamed, 2015), the first term in L ensures that q(Z|X) carries the information needed to predict Y, while the second term regularizes q(Z|X) towards a predefined prior distribution p_0(Z). The last step in obtaining a practical objective function is to notice that, given X, each Z_t is fully determined by a binary mask variable, with R_{x_t} ∈ {0, 1} following a Bernoulli distribution. Then, Z can be reparameterized as Z = R ⊙ x with R ∼ q(R|x), and the lower bound L can be rewritten with the random variable R as L = E_{q(R|x^(i))}[log p(y^(i) | R ⊙ x^(i))] − β KL[q(R|x^(i)) || p_0(R)]. (8) Note that, although β is inherited from information bottleneck theory, in practice it is used as a tunable hyper-parameter to address the notorious posterior-collapse issue (Bowman et al., 2016; Kim et al., 2018). Connections The idea of modifying word embeddings with the information bottleneck method has recently shown some interesting applications in NLP. For example, Li and Eisner (2019) proposed two ways to transform word embeddings into new representations for better POS tagging and syntactic parsing. According to Equation 1, VMASK can be viewed as a simple linear transformation of the word embeddings. The difference is that {R_{x_t}} is defined over the vocabulary and can therefore be used to represent the global importance of word x_t. Recalling that R_{x_t} ∈ {0, 1}, from a slightly different perspective Equation 1 can also be viewed as a generalization of word-embedding dropout (Gal and Ghahramani, 2016), although there are two major differences: (1) in Gal and Ghahramani (2016) all words share the same dropout rate, while in VMASK every word has its own dropout rate specified by q(R_{x_t}|x_t); and (2) the motivation of word-embedding dropout is to force a model not to rely on single words for prediction, while VMASK learns a task-specific importance for every word. Another way to make word masks sparse is to add L0 regularization (Lei et al., 2016; Bastings et al., 2019; Cao et al., 2020), but this regularizer only distinguishes words as important or unimportant, rather than learning continuous importance scores. Model Specification and Training We resort to a mean-field approximation (Blei et al., 2017) to simplify the assumption on our q distribution. For q_φ(R|x), we have q_φ(R|x) = ∏_{t=1}^{T} q_φ(R_{x_t}|x_t), which means the random variables are mutually independent and each is governed by x_t. We use amortized variational inference (Rezende and Mohamed, 2015) to represent the posterior distribution q_φ(R_{x_t}|x_t) with an inference network (Kingma and Welling, 2014).
In this work, we adopt a single-layer feedforward neural network as the inference network, whose parameters φ are optimized together with the model parameters θ during training. Following the same factorization as in q_φ(R|x), we define the prior distribution p_0(R) as p_0(R) = ∏_{t=1}^{T} p_0(R_{x_t}), with each p_0(R_{x_t}) = Bernoulli(0.5). By choosing this non-informative prior, every word is initialized with no preference for being important or unimportant, and thus has an equal probability of being masked or selected. As p_0(R) is a uniform distribution, we can further simplify the second term in Equation 8 to a conditional-entropy term, giving (up to an additive constant) L = E_{q_φ(R|x^(i))}[log p(y^(i) | R ⊙ x^(i))] + β H_q(R | X = x^(i)). (9) We apply stochastic gradient descent to solve the optimization problem in Equation 9. In particular, in each iteration the first term in Equation 9 is approximated with a single sample from q(R|x^(i)) (Kingma and Welling, 2014). However, sampling from a Bernoulli distribution (as from any other discrete distribution) makes backpropagation difficult. We therefore adopt the Gumbel-softmax trick (Jang et al., 2016; Maddison et al., 2016) to obtain a continuous differentiable approximation and tackle the discreteness of sampling from Bernoulli distributions (Appendix B). During training, we use Adam (Kingma and Ba, 2014) for optimization and KL cost annealing (Bowman et al., 2016) to avoid posterior collapse. For a given word x_t and its word embedding x_t, in the training stage the model samples each r_{x_t} from q(R_{x_t}|x_t) to decide whether to keep or zero out the corresponding word embedding x_t. In the inference stage, the model takes the multiplication of the word embedding x_t and the expectation of the word-mask distribution, i.e., x_t · E[q(R_{x_t}|x_t)], as input.
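As a rough illustration of the mechanism just described, the sketch below implements a VMASK-style masking layer in PyTorch: a single-layer inference network produces Bernoulli logits from each word embedding (with a frozen embedding table this is effectively per word type, as in the text), masks are sampled with the Gumbel-softmax relaxation during training, and the expectation is used at inference time. This is a minimal reconstruction based only on the description above; the layer sizes, temperature and variable names are assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordMaskLayer(nn.Module):
    """Sketch of a VMASK-style layer: Bernoulli masks applied to word embeddings."""

    def __init__(self, embed_dim: int, tau: float = 0.5):
        super().__init__()
        # Single-layer feedforward inference network producing logits for
        # q(R = 0 | x) and q(R = 1 | x) from each word embedding.
        self.infer = nn.Linear(embed_dim, 2)
        self.tau = tau  # Gumbel-softmax temperature (assumed value)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, embed_dim)
        logits = self.infer(embeddings)                       # (batch, seq_len, 2)
        if self.training:
            # Differentiable (soft) sample of the binary mask via Gumbel-softmax.
            sample = F.gumbel_softmax(logits, tau=self.tau, hard=False)
            mask = sample[..., 1]                             # "keep" component
        else:
            # Inference: use the expectation E[q(R | x)] instead of a sample.
            mask = torch.softmax(logits, dim=-1)[..., 1]
        return embeddings * mask.unsqueeze(-1)                # Z = R ⊙ x

    def keep_entropy(self, embeddings: torch.Tensor) -> torch.Tensor:
        # Conditional entropy H_q(R | x), the regularizer added to the data term in Eq. (9).
        probs = torch.softmax(self.infer(embeddings), dim=-1)
        return -(probs * torch.log(probs + 1e-9)).sum(-1).mean()

# Usage sketch: insert after the embedding layer of any text classifier and minimize
# loss = cross_entropy - beta * mask_layer.keep_entropy(embeddings).
```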
Experiment Setup The proposed method is evaluated on seven text classification tasks, ranging from sentiment analysis to topic classification, with three typical neural network models: a long short-term memory network (Hochreiter and Schmidhuber, 1997, LSTM), a convolutional neural network (Kim, 2014, CNN), and BERT (Devlin et al., 2018). Datasets. We adopt seven benchmark datasets: the IMDB movie reviews (Maas et al., 2011), the Stanford Sentiment Treebank with fine-grained labels SST-1 and its binary version SST-2 (Socher et al., 2013), Yelp reviews (Zhang et al., 2015), AG's News (Zhang et al., 2015), the 6-class question classification dataset TREC (Li and Roth, 2002), and the subjective/objective classification dataset Subj (Pang and Lee, 2005). For the datasets without a standard train/dev/test split (e.g., IMDB, Subj), we hold out a proportion of the training examples as the development set. Table 2 shows the statistics of the datasets. Models. The CNN model (Kim, 2014) contains a single convolutional layer with filter sizes ranging from 3 to 5. The LSTM (Hochreiter and Schmidhuber, 1997) has a single unidirectional hidden layer. Both models are initialized with 300-dimensional pretrained word embeddings (Mikolov et al., 2013). We fix the embedding layer and update the other parameters on each dataset to achieve the best performance. We use the pretrained BERT-base model with 12 transformer layers, 12 self-attention heads, and a hidden size of 768. We fine-tune it on the different downstream tasks, and then fix the embedding layer and train the mask layer together with the rest of the model. Baselines and Competitive Methods. As the goal of this work is to propose a novel training method that improves both prediction accuracy and interpretability, we employ two groups of models as baselines and competitive systems. Models trained with the proposed method are named with the suffix "-VMASK". We also provide two baselines: (1) models trained by minimizing the cross-entropy loss (suffixed with "-base") and (2) models trained with ℓ2-regularization (suffixed with "-ℓ2"). The comparison with these two baseline methods mainly focuses on prediction performance, as no explicit training strategies are used to improve interpretability. In addition, we include two competitive methods: models trained with the explanation framework "Learning to Explain" (Chen et al., 2018) (suffixed with "-L2X") and with "Information Bottleneck Attribution" (Schulz et al., 2020) (suffixed with "-IBA"). L2X and IBA were originally proposed to find feature attributions as post-hoc explanations for well-trained models; we integrated them into model training, where they work as mask layers that directly generate mask values for the input features (L2X) or restrict the information flow by adding noise (IBA). In our experiments, all training methods were combined with random dropout (ρ = 0.2) to avoid overfitting. More details about the experiment setup are given in Appendix C, including data pre-processing, model configurations, and the implementation of L2X and IBA in our experiments. Results and Discussion We trained the three models on the seven datasets with the different training strategies. Table 3 shows the prediction accuracy of the different models on the test sets; the validation performance and average runtime are given in Appendix D. As shown in Table 3, all base models have prediction performance similar to the numbers reported in prior work (Appendix E). The models trained with VMASK outperform those with similar network architectures trained differently, showing that VMASK can help improve generalization. Apart from the base models and the models trained with the proposed method, the records of the other three competitors are mixed. For example, the traditional ℓ2-regularization does not always help improve accuracy, especially for the BERT model. Although the performance with IBA is slightly better than with L2X, training with them does not show a consistent improvement in a model's prediction accuracy. To echo the purpose of improving model interpretability, the rest of this section focuses on evaluating the model interpretability quantitatively and qualitatively. Quantitative Evaluation We evaluate the local interpretability of VMASK-based models against the base models via the AOPC score (Nguyen, 2018; Samek et al., 2016) and the global interpretability against the IBA-based models via post-hoc accuracy (Chen et al., 2018). Empirically, we observed agreement between local and global interpretability, so there is no need to exhaust all possible combinations in our evaluation. Local Interpretability: AOPC We adopt two model-agnostic explanation methods, LIME (Ribeiro et al., 2016) and SampleShapley (Kononenko et al., 2010), to generate local explanations for the base and VMASK-based models, where "local" means explaining each test example individually. The area over the perturbation curve (AOPC) (Nguyen, 2018; Samek et al., 2016) metric is used to evaluate the faithfulness of explanations to the models. It calculates the average change in the prediction probability of the predicted class over all test data when the top n words of the explanations are deleted. We adopt this metric to evaluate a model's interpretability with respect to post-hoc explanations; higher AOPC scores are better. For the TREC and Subj datasets, we evaluate all test data.
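A compact sketch of the AOPC computation described above follows. It assumes a predict_proba(texts) function returning class probabilities and a per-example list of top important words (e.g., from LIME or SampleShapley); both names are placeholders rather than part of the original implementation.

```python
import numpy as np

def aopc(texts, predict_proba, top_words_per_text, max_n=5):
    """Average drop in predicted-class probability after deleting the top-n explanation words."""
    base_probs = predict_proba(texts)                      # (num_texts, num_classes)
    pred_classes = base_probs.argmax(axis=1)
    drops = []
    for i, text in enumerate(texts):
        top_words = set(top_words_per_text[i][:max_n])
        reduced = " ".join(tok for tok in text.split() if tok not in top_words)
        perturbed_prob = predict_proba([reduced])[0, pred_classes[i]]
        drops.append(base_probs[i, pred_classes[i]] - perturbed_prob)
    return float(np.mean(drops))

# Usage sketch: score = aopc(test_texts, model.predict_proba, lime_top_words, max_n=5)
```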
For each of the other datasets, we randomly pick 1000 examples for evaluation due to computation costs. Table 4 shows the AOPCs of the different models on the seven datasets when the top 5 words identified by LIME or SampleShapley are deleted. The AOPCs of the VMASK-based models are significantly higher than those of the base models on most of the datasets, indicating that VMASK can improve a model's interpretability with respect to post-hoc explanations. The results on the TREC dataset are very close, because the top 5 words are likely to include all informative words in short sentences with an average length of 10. Global Interpretability: Post-hoc Accuracy The post-hoc accuracy is defined as post-hoc-acc(k) = (1/M) Σ_{m=1}^{M} 1[y_m(k) = y_m], where M is the number of examples, y_m is the predicted label on the m-th test example, and y_m(k) is the predicted label based on the top k important words. Figure 1 shows the results of the VMASK- and IBA-based models on the seven datasets with k ranging from 1 to 10. VMASK-based models (solid lines) outperform IBA-based models (dotted lines) with higher post-hoc accuracy, which indicates that our proposed method is better at capturing task-specific important features. For CNN-VMASK and LSTM-VMASK, using only the top two words can achieve about 80% post-hoc accuracy, even for the IMDB dataset, which has an average sentence length of 268 tokens. The results illustrate that VMASK can identify the words that are informative for model predictions. We also noticed that BERT-VMASK has lower post-hoc accuracy than the other two models, probably because BERT tends to use a larger context through its self-attention for predictions. This also explains why the post-hoc accuracies of BERT-VMASK on the IMDB and SST-1 datasets catch up slowly as k increases. Qualitative Evaluation Visualizing post-hoc local explanations. Table 5 shows some examples of LIME explanations for the different models on the IMDB dataset. We highlight the top three important words identified by LIME, where the color saturation indicates word attribution. Each pair of base and VMASK-based models makes the same, correct prediction on the input texts. For the VMASK-based models, LIME captures the sentiment words that indicate the same sentiment polarity as the prediction, while for the base models LIME selects some irrelevant words (e.g., "plot", "of", "to") as explanations, which illustrates the relatively lower interpretability of the base models with respect to post-hoc explanations. Visualizing post-hoc global explanations. We adopt SP-LIME, proposed by Ribeiro et al. (2016), as a third-party measure of the global interpretability of the base and VMASK-based models. Without considering the restriction on the number of explanations, we follow this method to compute the global importance of a feature from the LIME local explanations (Section 5.1.1) by summing all local importance scores of the feature. To distinguish it from the global importance learned by VMASK, we call it the post-hoc global importance. Table 6 lists the top three post-hoc globally important words of the base and VMASK-based models on the IMDB dataset. For the VMASK-based models, the global important features selected by SP-LIME are all sentiment words, while for the base models some irrelevant words (e.g., "performances", "plot", "butcher") are identified as important features, which makes the model predictions unreliable. Frequency-importance correlation. We compute the Pearson correlation coefficients between word frequency and the global word importance of the VMASK-based models in Appendix F.
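The post-hoc accuracy above is straightforward to compute once per-example predictions are available; a minimal sketch follows. predict_label and keep_top_k are placeholder helpers (the first returns the predicted class for a text, the second keeps only the top-k important words), not functions from the original implementation.

```python
def post_hoc_accuracy(texts, predict_label, keep_top_k, k):
    """Fraction of examples whose prediction is unchanged when only the top-k words are kept."""
    agree = 0
    for text in texts:
        full_pred = predict_label(text)                     # prediction on the full text
        reduced_pred = predict_label(keep_top_k(text, k))   # prediction on top-k words only
        agree += int(full_pred == reduced_pred)
    return agree / len(texts)

# Usage sketch: curves like Figure 1 come from sweeping k.
# curve = [post_hoc_accuracy(test_texts, model_predict, keep_top_k, k) for k in range(1, 11)]
```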
The results show that they are not significantly correlated, which indicates that VMASK is not simply learning to select high-frequency words. Figure 2 further verifies this by plotting the expectation E[q(R_{x_t}|x_t)] of the word masks from the LSTM-VMASK trained on Yelp against the word frequency in the same dataset. Here, we visualize the top 10 high-frequency words and the top 10 important words according to the expectation of the word masks. The global importance scores of the sentiment words are over 0.8, even for some low-frequency words (e.g., "funnest", "craveable"), while those of the high-frequency words are all around 0.5, which means the VMASK-based models are less likely to focus on irrelevant words to make predictions. Task-specific important words. Figure 3 visualizes the top 10 important words for the VMASK- and IBA-based models on three datasets via word clouds. We can see that the words selected by VMASK are consistent with the corresponding topic, such as "funnest" and "awsome" for sentiment analysis and "encyclopedia" and "spaceport" for news classification, while IBA selects some irrelevant words (e.g., "undress", "slurred"). Conclusion In this paper, we proposed an effective method, VMASK, which learns global task-specific important features to improve both model interpretability and prediction accuracy. We tested VMASK with three different neural text classifiers on seven benchmark datasets and assessed its effectiveness via both quantitative and qualitative evaluations.
Table 5 (the LIME examples referenced above; the word highlighting cannot be reproduced in plain text):
CNN-base | "Primary plot, primary direction, poor interpretation." | negative
CNN-VMASK | "Primary plot, primary direction, poor interpretation." | negative
LSTM-base | "John Leguizamo's freak is one of the funniest one man shows I've ever seen. I recommend it to anyone with a good sense of humor." | positive
LSTM-VMASK | (same text) | positive
BERT-base | "Great story, great music. A heartwarming love story that's beautiful to watch and delightful to listen to. Too bad there is no soundtrack CD." | positive
BERT-VMASK | (same text) | positive
The following derivation is similar to that of the variational information bottleneck; the difference is that our starting point is the approximation distribution q(X, Y, Z) instead of the true distribution p(X, Y, Z). The lower bound for I(Z; Y): I(Z; Y) = Σ_{y,z} q(y, z) log(q(y|z)/q(y)) = Σ_{y,z} q(y, z) log q(y|z) + H_q(Y), where H_q(·) represents entropy. Now, if we replace log q(y|z) with the conditional probability derived from the true distribution, log p(y|z), we have Σ_{y,z} q(y, z) log q(y|z) = Σ_{y,z} q(y, z) log p(y|z) + E_{q(z)} KL[q(Y|z) || p(Y|z)] ≥ Σ_{y,z} q(y, z) log p(y|z), where KL[·||·] denotes the Kullback-Leibler divergence. Therefore, we can obtain a lower bound of the mutual information, I(Z; Y) ≥ E_{q(x)} E_{q(y|x)} E_{q(z|x)}[log p(y|z)] + H_q(Y), where the last step uses q(x, y, z) = q(x) q(y|x) q(z|x), a factorization based on the conditional dependency Y ↔ X ↔ Z (Y and Z are independent given X). Since q(X, Y, Z) is the approximation defined by ourselves, given a specific observation (x^(i), y^(i)) the empirical distribution q(X^(i), Y^(i)) is the product of two Delta functions, and the lower bound can be further simplified as E_{q(Z|x^(i))}[log p(y^(i)|Z)] up to an additive constant. The upper bound for I(Z; X): by replacing q(z) with a prior distribution of z, p_0(z), we have I(Z; X) = Σ_{x,z} q(x, z) log(q(z|x)/q(z)) ≤ Σ_{x,z} q(x, z) log(q(z|x)/p_0(z)), and hence we obtain the upper bound of the mutual information I(Z; X) ≤ E_{q(x)} KL[q(Z|x) || p_0(Z)]. C Supplement of Experiment Setup Data pre-processing. We clean up the text by converting all characters to lowercase and removing extra whitespace and special characters.
We tokenize the texts and remove low-frequency words to build the vocabulary, and we truncate or pad sentences to the same length for mini-batching during training. Table 7 shows the pre-processing details for the datasets. Implementation of L2X and IBA. • The explanation framework of L2X (Chen et al., 2018) is a neural network that learns to generate importance scores w = [w_1, w_2, ..., w_T] for the input features x = [x_1, x_2, ..., x_T]. The network is optimized by maximizing the mutual information between the selected important features and the model prediction, i.e., I(x_S; y), where x_S contains a subset of the features of x. In our experiments, we adopt a single-layer feedforward neural network as the interpreter to generate importance scores for an input text and multiply each word embedding by its importance score, x̃ = w ⊙ x. The weighted word-embedding matrix x̃ is sent to the rest of the model to produce an output ŷ. We optimize the interpreter network together with the original model by minimizing the cross-entropy loss between the final output and the ground-truth label, L_ce(y_t; ŷ). • We adopt the Readout Bottleneck of IBA, which uses a neural network to predict mask values λ = [λ_1, λ_2, ..., λ_T] with λ_t ∈ [0, 1]. The information of a feature x_t is restricted by adding noise, i.e., z_t = λ_t x_t + (1 − λ_t) ε_t, where ε_t ∼ N(μ_{x_t}, σ²_{x_t}), and z is learned by optimizing the objective function in Equation 2. By assuming that the variational approximation q(z) is a Gaussian distribution, the mutual information can be calculated explicitly (Schulz et al., 2020). We again use a single-layer feedforward neural network as the Readout Bottleneck to generate the continuous mask values λ and construct z for the model to make predictions. The Readout Bottleneck is trained jointly with the original model by minimizing the sum of the cross-entropy loss L_ce(y_t; ŷ) and an upper bound L_I = E_x[KL[p(z|x) || q(z)]] of the mutual information I(Z; X); see Schulz et al. (2020) for the proof of the upper bound. D Validation Performance and Average Runtime The validation accuracy corresponding to each reported test accuracy is given in Table 8, and the average runtime for each approach on each dataset is recorded in Table 9. All experiments were performed on a single NVIDIA GTX 1080 GPU. Table 10 shows some prediction-accuracy results for the base models reported in previous papers.
6,767.6
2020-10-01T00:00:00.000
[ "Computer Science" ]
An MINLP Model that Includes the Effect of Temperature and Composition on Property Balances for Mass Integration Networks

The synthesis of water networks based on properties has commonly ignored the effect of temperature on the property balances that are part of the formulation. When wide differences of temperatures are observed within the process, such an effect might yield significant errors in the application of conventional property balances. In this work, a framework for the development of water networks that include temperature effects on property balances is presented. The approach is based on the inclusion of constants in the property operators that are commonly used to carry out the property balances. An additional term to take care of composition effects is also included. The resulting approach is embedded into a formulation based on a mixed-integer nonlinear programming model for the design of water networks. A case study is presented that shows that the proposed approach yields an improvement in the prediction of the resulting properties for the integrated network, thus affecting the optimal solution.

Introduction
Mass integration techniques have found special applications in the development of water networks that minimize the consumption of both fresh water requirements and wastewater sent to the environment. The initial studies were based on extensions of the pinch concept for energy integration [1][2][3]. The first reported mass integration strategy considered a set of process streams that served as removers of contaminants contained in other streams [4]. Another structure was then considered in the form of a direct recycle network, in which process streams could serve as sources to be allocated into process units that could serve as sinks. From this approach, a mass pinch point was detected that guided the design of a mass integration network with minimum consumption of fresh sources. Additional studies were developed following these concepts and objectives [5][6][7][8][9]. A review on the works for mass integration based on pinch methods is available in Foo [10]. Alternative approaches to the mass pinch approach were developed to formulate network synthesis methods based on mathematical programming techniques [11][12][13][14][15][16][17]. These works typically considered the concentration of contaminants as the task to be treated for the design of the network. It was later recognized that integration tasks are not only affected by the concentration of pollutants within water and process streams, but also by their properties such as pH, COD, color and odor, among others. A novel framework for water networks based on properties was proposed by Shelley and El-Halwagi [18]. Applications of this concept and further developments were reported in several works from El-Halwagi's research group [19][20][21][22][23]. To trace properties within the network, the use of property balances was needed, for which property operators were used to allow linear mixing rules. The first mathematical programming optimization model for the synthesis of mass integration networks based on properties was reported by Ponce-Ortega et al. [24]. A mixed-integer nonlinear programming (MINLP) formulation was used to find the structure of the network with a minimum cost. Other network structures and optimization formulations were then developed [25][26][27][28]. A good description on the development of mass integration concepts and applications can be found in the books by El-Halwagi [29][30][31].
The works developed for mass and property integration networks have commonly neglected the effect of temperature on properties. Given the various levels of temperature observed in process units and resulting process streams, it becomes important to account for that factor. In this work, a framework for the inclusion of temperature effects on stream properties is proposed. The approach is based on the modification of the property operators that are used to carry out the property balances, which can also be used to include the effect of additional variables such as composition.

Problem Statement
Given is a set of process units that can be used as sinks {j = 1, 2, …, J}. Each sink can process a given feed rate G_j, and its contents have some property values that are constrained between minimum and maximum values. In addition, given is a set of process streams {i = 1, 2, …, I} with a flowrate W_i that can be recycled and/or reused in the process sinks. There is a set of fresh sources {r = 1, 2, …, R} with given costs ($/kg) and properties. There is a set of property interceptors {m = 1, 2, ..., M}. Each property can be modified with the set of treatment units {u = 1, 2, …, U_p}, with given costs and separation efficiencies. The waste stream must meet environmental constraints. The objective is to synthesize an optimal water network such that the total annual cost is minimized.

Model Formulation
The model is based on a recycle/reuse structure, as in Ponce-Ortega et al. [25] (see Figure 1). It includes mass, energy and property balances, and disjunctive programming is used for the selection of property interceptors. A formulation that takes into account temperature and composition functionalities for the property operators is included. Each process stream from the system in Figure 1 is split into unknown flows, to be allocated in process sinks, property interceptors and/or a waste stream. Fresh sources can only be sent to process sinks. The feed to property interceptors consists of process streams and/or streams exiting from other interceptors. The outlet of property interceptors can be sent to process sinks and/or the waste stream.

Splitting of Process Streams
Each process stream i is split into J, M and waste fractions, with flows sent to process sinks, property interceptors and the waste stream (Equation (1)).

Splitting of Fresh Streams
Similarly, each fresh source is split into J fractions with flows f_{r,j} that can be sent to different process sinks (Equation (2)).

Mass Balance at Inlet of Process Interceptors
The flow to property interceptor m, d_m, that treats property p′ is the summation of the flows from process streams and the flows from other interceptors, q_{m,m′} (Equation (3)).

Property Balance at Inlet of Property Interceptors
Property balances are needed to calculate the property values at the inlet of the property interceptors, and are carried out using property operators. Interceptor m treats property p, with property operators defined for process stream i and for the outlet from other interceptors m′, respectively. The resulting property for stream d_m is calculated through the corresponding property balance. Some property operators are shown in Table 1. It must be stressed that the effect of temperature is originally not included in such operators.
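To make the splitting and inlet-balance constraints above concrete, here is a minimal Pyomo sketch (not the authors' GAMS model; the set sizes, flowrates and symbol names are invented for illustration) of Equation (1) and Equation (3):

```python
from pyomo.environ import (ConcreteModel, Set, Var, Param, Constraint,
                           NonNegativeReals)

m = ConcreteModel()
m.I = Set(initialize=["w1", "w2", "w3"])           # process (source) streams
m.J = Set(initialize=["sink1", "sink2", "sink3"])  # process sinks
m.M = Set(initialize=["int1", "int2"])             # property interceptors

# Assumed flowrates, kg/h
m.W = Param(m.I, initialize={"w1": 1000.0, "w2": 800.0, "w3": 500.0})

m.g = Var(m.I, m.J, domain=NonNegativeReals)   # stream i sent to sink j
m.v = Var(m.I, m.M, domain=NonNegativeReals)   # stream i sent to interceptor m
m.waste = Var(m.I, domain=NonNegativeReals)    # stream i sent to waste
m.q = Var(m.M, m.M, domain=NonNegativeReals)   # interceptor-to-interceptor flows
m.d = Var(m.M, domain=NonNegativeReals)        # total feed to each interceptor

# Equation (1): each process stream is split among sinks, interceptors and waste.
def split_rule(mdl, i):
    return (sum(mdl.g[i, j] for j in mdl.J) + sum(mdl.v[i, mm] for mm in mdl.M)
            + mdl.waste[i] == mdl.W[i])
m.split = Constraint(m.I, rule=split_rule)

# Equation (3): interceptor feed = flows from process streams + other interceptors.
def interceptor_feed_rule(mdl, mm):
    return mdl.d[mm] == (sum(mdl.v[i, mm] for i in mdl.I)
                         + sum(mdl.q[mp, mm] for mp in mdl.M if mp != mm))
m.interceptor_feed = Constraint(m.M, rule=interceptor_feed_rule)
```

The property and energy balances, the interceptor-selection binaries and the cost objective would be added in the same way, and the resulting MINLP could then be handed to a solver such as DICOPT.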
Energy Balances at the Inlet of Property Interceptors
The energy balance for each interceptor m is written in terms of a reference temperature T_0, the temperature of process stream i, the outlet temperature from other interceptors, the inlet temperature to property interceptor d_m, and the heat capacities Cp. Property interceptors are assumed to operate at constant temperature (Equation (6)).

Property Interceptors
For property treatment, there is a set of property interceptors, u(p′), for which efficiency separation factors and operating costs are given. It is assumed that only one property is treated with the use of each interceptor. For the selection of interceptors to treat property p′, a disjunction is used, where Y_{u,m} is a Boolean variable which, when true, implies the selection of the intercepting unit m, with its total cost, operating with an efficiency separation factor α_{u,m}; the disjunction relates the inlet value of property p to the property value at the exit of the unit. The disjunction is reformulated with the Convex Hull technique. The Boolean variables are substituted by 0-1 integer variables, such that when the binary variable y_{u,m} is equal to one, the unit m is used; otherwise, it is equal to zero. Since at most one type of interceptor is selected for the treatment of each property, the summation of the integer variables is bounded by one. The reformulated constraints are written in terms of the disaggregated variables (Equations (12) and (13)), and bounds on the disaggregated variables are used to complete the reformulation.

Stream Splitting at the Outlet of Each Property Interceptor
Each stream at the outlet of each interceptor m is split into flows that go to process sinks, other property interceptors (q_{m,m′}), and the waste stream (Equation (18)).

Mass Balance for Process Sinks
The flow into process sink j, G_j, is the summation of the flows from process streams, from property interceptors, and from fresh streams, f_{r,j}.

Property Balance for Process Sinks
To implement property balances in the process sinks, property operators are used (Equation (20)), corresponding to process stream i, the outlet of property interceptor m, and the fresh sources, respectively.

Energy Balance for Process Sinks
Energy balances for process sinks (Equation (21)) are written in terms of the temperature of process stream i, the outlet temperature from interceptor m, the temperature of fresh source r, and the heat capacities Cp.

Mass Balance for Waste Stream
The waste stream flow is the summation of the flows from the process streams and from the property interceptors (Equation (22)).

Property Balance in Waste Stream
In terms of property operators, the property balance in the waste stream is written analogously. It should be noticed that temperature adjustments are carried out with standard heaters or coolers, which are not considered as property interceptors.

Energy Balance in Waste Stream
Since temperature effects are included in the formulation, the energy balance for the waste stream is needed (Equation (24)). Heat capacities in the energy balances are taken as a function of temperature; for instance, a temperature-dependent expression is used for each component i of the waste stream (Equation (25)), from which the heat capacity of the stream is obtained (Equation (26)).

Constraints
For process sinks, lower and upper limits on properties are given (Equation (27)). Likewise, for waste streams, limits are established by environmental constraints (Equation (28)). The objective function is the minimization of the total annual cost, which consists of the cost of fresh streams and the yearly cost of property interceptors, where H_v is an annualization factor.

Modification of Properties
Temperature and composition are two variables that affect properties, and they are considered in this work via the following framework.
Properties as a Function of Temperature
Previous works have ignored the effect of temperature on properties. However, given a set of properties p, there will be at least a subset that depends on temperature, p(T). Each process stream has known properties or, equivalently, known property operators. Property balances are used when different streams are mixed, e.g., at the inlet of the property interceptors, at the process sinks, and at the waste stream. In this work we establish a framework for including temperature effects on properties through the use of property operators, which is a convenient approach given their use in property balances. In general, for any property operator we can write a corrected form (Equation (30)) in which, for stream m, j, or waste, the corrected property operator is expressed through the temperature functionality, the operator for concentration, and the uncorrected value of the operator for property p(T).

Methodology for Estimation of Parameters
The estimation of the parameters is carried out ahead of the design. For instance, for the case when composition and temperature data are known, the following steps can be used to estimate such values. (1) For different process streams, values of the properties of interest (viscosity, density, etc.) are calculated at different temperatures (from process simulators or from experimental data), and the functionality with temperature is developed. (2) From the set of data, Equation (31) is used to obtain the parameter values; temperature, concentration and the uncorrected property operator are the independent variables, and the uncorrected operator takes the value of the property operator p(T) for the original process streams. (3) With the calculated parameters, the equation is implemented into the MINLP model. We take two properties, viscosity and density, to illustrate this procedure.

Viscosity
Viscosity depends on composition and temperature. For pure substances, Duhne [32] proposed a logarithmic relationship of the form ln μ = A + B/T for the estimation of the viscosity of liquids (Equation (32)), where A and B are constants. Based on this relationship, the functionality for viscosity is taken as inversely proportional to temperature, so that the constraints for the optimization model can be written accordingly (Equations (33) and (34)).

Density
Variations in density are directly related to temperature. Therefore, the corresponding constraints can be written accordingly (Equations (35) and (36)).

Case Study
To illustrate the proposed approach, the process for the production of phenol from cumene hydroperoxide is taken as a case study [25,33]. Phenol is subject to environmental regulations because of its toxic nature. The process involves three stages. In the first one, the production of cumene hydroperoxide via air oxidation of cumene is carried out. The reaction takes place at temperatures between 90 and 120 °C, with pressures between 0.5 and 0.7 MPa. In the second stage, sulfuric acid is used to break the hydroperoxide molecule to produce phenol, with acetone as a byproduct. The sulfuric acid is then neutralized with sodium hydroxide, forming organic and aqueous phases; the aqueous phase is sent to wastewater treatment, and the organic phase is sent to a final distillation system to purify the products (Figure 2). There are three aqueous streams that can be reused via process integration. Two fresh sources are also available. Data on stream flowrates, composition, toxicity, chemical oxygen demand and pH were taken from Ponce-Ortega et al. [25], while temperature data were taken from Kheireddine et al.
[33]. There are two fresh sources, one consisting of high-quality water and another one of slightly contaminated water. From the several streams of the process, only wastewater streams are considered as process sources: one from the cumene peroxidation unit, another one from the cleavage section, and the last one from the output from a final washer that uses fresh water. Three equipment units serve as process sinks: one washer after the peroxidation and separation units, a neutralizer that uses NaOH in the separation stage, and a final unit consisting of a washer to purify the product. Table 2 shows the relevant data for each stream. Values for heat capacity, density and temperature were calculated from the given conditions for each process stream. The constraints for the three process units that serve as process sinks are given in Table 3 and Table 4. Temperature constraints are given in Table 5. For the waste stream, the operating cost due to cooling in order to meet environmental constraints was calculated from Equation (37), where the utility cost is taken as 0.06688 $/MMkJ. Data on cost and efficiency factors for the property interceptors are reported in Table 6. Table 7 gives the constants for the calculation of heat capacities, while the parameters for the density and viscosity corrections are reported in Table 8.

The model was implemented in the GAMS software and solved using DICOPT. In order to observe the effect of the corrections of properties with temperature, two solutions were obtained. The first solution was obtained using the model with the proposed estimations. The second one considered only the effect of mixing on the calculations of viscosity and density. When both temperature and composition effects on properties were included, the solution reported in Figure 3 was obtained. Values for the properties at the process sinks and the waste stream are given in Table 9. Three property interceptors are used as part of the network: one to treat phenol concentration, u(z1), another one for toxicity treatment, u(Tox1), and a third one for pH adjustment, u(pH1). In this case, some direct use of split process streams into process sinks is observed, with the use of fresh sources in the second sink for adjustment. Cooling of the waste stream in order to meet environmental constraints is also observed. The total annual cost for this case was $7.8686 × 10^5/year.

To establish a comparison basis, the model was also solved without including the effect of temperature on properties. Figure 4 shows the solution obtained for such a case. Three property interceptors are again observed, along with the use of one fresh source and temperature treatment for the waste stream. The total annual cost of the network amounts to $7.5557 × 10^5/year. One can observe that the network structure for the two solutions is similar, but with differences in the flows of the streams. The difference in total annual cost is 4.14%, which is related to an improvement in the estimation of properties with the functionality implemented for the temperature dependence. To observe how the density and viscosity estimations were affected by including the dependence on temperature and composition, such values were also obtained from the Aspen Plus process simulator in order to validate the predictions. Values for density and viscosity obtained from the proposed optimization model (i.e., the model with p(z,T)), along with the values obtained without such modification, were used for comparison.
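Before turning to those comparisons, the parameter-estimation step can be illustrated as follows (a minimal sketch with made-up data points, assuming the Duhne-type form ln μ = A + B/T discussed earlier); this is the kind of pre-design regression behind the temperature-dependence parameters reported in Table 8.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical viscosity data for one process stream (values for illustration only):
T = np.array([313.15, 338.15, 348.15, 363.15])      # temperature, K
mu = np.array([6.5e-4, 4.4e-4, 3.8e-4, 3.1e-4])      # viscosity, Pa*s

# Duhne-type relationship for liquids: ln(mu) = A + B / T
def log_viscosity(T, A, B):
    return A + B / T

(A, B), _ = curve_fit(log_viscosity, T, np.log(mu))
print(f"A = {A:.3f}, B = {B:.1f} K")

# The fitted constants define the temperature functionality used to correct the
# property operator before it enters the property balances of the network model.
mu_at_308 = np.exp(log_viscosity(308.15, A, B))
print(f"predicted viscosity at 308.15 K: {mu_at_308:.2e} Pa*s")
```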
Table 10 shows the results for density. It can be observed that the model that includes p(T) provides lower errors for the streams of sinks G1 and G2 and for the waste stream. From the viscosity results of Table 11, one can see that for all streams the error is lower with the model that includes p(T); the most notable difference is observed for the waste stream, which can be attributed to the cooling treatment from process streams at 348.15 K, 338.15 K, and 313.15 K to a waste stream temperature of 308.15 K.

Conclusions
An optimization model for the design of water networks that includes the effects of temperature and composition on property operators has been presented. Property operators have been used as a convenient tool to carry out property balances for the design of the network. Typical formulations based on optimization techniques have included mass and property balances. In this work, a formulation that includes mass, property and energy balances has been used, along with a methodology to include the effect of variables such as temperature and composition on properties in the MINLP formulation. The results from the case study show that an improvement in the solution for the optimal network is obtained with the proposed approach.

Figure captions: Figure 1. A recycle/reuse integration network with in-plant property interceptors. Figure 2. Main steps for the production of phenol. Figure 3. Optimal structure when effects of temperature and composition were considered. Figure 4. Optimal structure when no effect of temperature and property composition was considered.

Table captions: Table 1. Some property operators. Table 2. Data for case study (* values obtained from simulations with Aspen Plus; ** values obtained from reported sources). Table 3. Upper limits for process sinks and waste stream. Table 4. Lower limits for process sinks and waste streams. Table 5. Constraints for temperature in process sinks and waste stream. Table 6. Data for cost and efficiency for property interceptors. Table 7. Constants for heat capacity estimations. Table 8. Parameters for temperature dependence. Table 9. Property values for sinks and waste stream. Table 10. Deviations in density estimations in process and waste streams. Table 11. Deviations in viscosity estimations in process and waste streams.
4,320.4
2014-08-18T00:00:00.000
[ "Engineering" ]
Computational and experimental studies of salvianolic acid A targets 3C protease to inhibit enterovirus 71 infection Hand, foot, and mouth disease (HFMD) is a common childhood infectious disease caused by enterovirus (EV) infection. EV71 is one of the major pathogens causing hand, foot, and mouth disease and is more likely to cause exacerbation and death than other enteroviruses. Although a monovalent vaccine for EV71 has been developed, there are no clinically available anti-EV71 specific drugs. Here, we performed virtual screening and biological experiments based on the traditional Chinese medicine monomer library. We identified a traditional Chinese medicine monomer, Salvianolic acid A (SA), a polyphenolic compound isolated from Salvia miltiorrhiza. Salvianolic acid A inhibits EV71 virus infection in a concentration-dependent manner, and its antiviral activity is higher than that of other reported natural polyphenols and has a high biosafety. Furthermore, molecular dynamics simulations showed that salvianolic acid A can anchor to E71, a member of the enzyme catalytic triad, and cause H40 to move away from the catalytic center. Meanwhile, molecular mechanics generalized born surface area (MMGBSA) and steered molecular dynamics (SMD) results showed that the P1 group of SA was most easily unbound to the S1 pocket of 3Cpro, which provided theoretical support to further improve the affinity of salvianolic acid A with 3Cpro. These findings suggest that salvianolic acid A is a novel EV71 3Cpro inhibitor with excellent antiviral activity and is a promising candidate for clinical studies. Introduction Hand, foot, and mouth disease (HFMD) is an infectious disease caused by enteroviruses that primarily affects infants and young children (Chan et al., 2003). The clinical features are vesicular eruptions mainly on the skin of the hands, feet, and oral cavity, and accompanied by fever (Chen et al., 2007). Enterovirus 71 (EV71) and coxsackievirus A6 (CA6) and A16 (CA16) are the main prevalent pathogens of HFMD. A statistical report from Beijing, China, showed that CA6, CA16, and EV71 were detected in 36.1, 24.1, and 12.0% of the 440 HFMD clusters in 2016-2020, respectively (Cui et al., 2022). EV71 has the highest probability of causing severe illness and death compared to other enteroviruses (Xing et al., 2014). Indeed, EV71 has been associated with a wide spectrum of acute central nervous system (CNS) syndromes, including aseptic meningitis, brain-stem encephalitis, and fulminant neurogenic pulmonary edema (McMinn, 2002). Over the past 20 years, HFMD caused by EV71 has become a major public health challenge throughout the Asia-Pacific region, and the magnitude and severity of the HFMD have caused global concern (Zeng et al., 2012). To prevent a pandemic, China has successfully developed an inactivated monovalent EV71 vaccine (Lin et al., 2019). However, there is still a lack of safe and reliable treatment for patients infected with EV71 (Diarimalala et al., 2020), and the risk of being permanently disabled or fatal after EV71 infection remains, there is a pressing need to develop anti-EV71 drugs to combat HFMD. EV71 is a non-enveloped virus whose genome is a single positive-stranded RNA that encodes a 5′-UTR, a polyprotein, and a 3′UTR (Solomon et al., 2010). EV71 has been classified into subtypes A, B, C, and D based on the phylogenetics of its major antigenic protein, VP1. 
Among these, subtype A contains only the prototype strain BrCr, subtypes B and C each have five different subgenogroups (B1-B5 and C1-C5), and the strains circulating in China belong to subtype C4, subtype D is represented by a single strain which has been isolated from India (Lei et al., 2015). The polyprotein of EV71 contains three precursor proteins (P1-P3). P1 is in turn cleaved into four viral capsid proteins (VP1-VP4), P2 and P3 are cleaved into seven non-structural proteins (2A-2C, 3A-3D) involved in protein processing and genome replication (Solomon et al., 2010). The viral 3C protein (3C pro ) is a cysteine protease containing 183 amino acids and E71, H40, and C147 form a conserved catalytic triad of the protease (Wen et al., 2020). 3C pro is involved in the hydrolysis of all seven non-structural proteins of EV71 as well as two structural proteins (VP1, VP3) (Yuan et al., 2018), and also cleaves host proteins related to the immune response, e.g., 3C pro suppresses RIG-I signaling by disrupting the RIG-I-IPS-1 complex and IRF3 nuclear translocation, affecting the innate immune response (Hornung et al., 2006). The central roles played by EV71 3C pro make it a very promising target for antiviral drug development (Cui et al., 2011). Structure-based drug design and screening based on 3C pro have identified several active compounds with significant inhibitory effects against EV71 infection (Supplementary Figure S1) (Diarimalala et al., 2020). In 2011, Rupintrivir (AG7088), a 3C pro inhibitor of human rhinovirus, was shown to have strong antiviral activity against EV71 3C pro . Subsequently, we designed and synthesized NK-1.8k and NK-1.9k (Wang et al., 2017a) based on this peptidomimetic compound with better stability and drug properties than rupintrivir, and we resolved the complex structures of NK-1.8K and NK-1.9K with 3C pro and elucidated the interaction modes of small molecules with 3C pro (Wang et al., 2017b). Also, we show that these inhibitors have the highest activity and higher selectivity when the three-residue mimics (AG7088) is shortened to a two-residue peptidyl mimics and the inhibitor P1 group is a δ-lactam and the P1′ group is an aldehyde group or a cyanohydrin group (Supplementary Figure S1). Another peptidomimetic inhibitor reported by our group is (1R, 2S, 2′S, 5S)-9, which is one of the most potent 3C pro inhibitors to date . However, the presence of cyanohydrin in the structure gives it unstable and toxic properties. Ma et al. (2016) discovered a novel 3C pro inhibitor, DC07090, which can bind 3C pro and reversibly inhibit their protease activity, showing a high potential for drug generation. In addition, several natural products and derivatives have been shown to have low cytotoxicity and potent antiviral activity, including Luteoloside (Cao et al., 2016), Quercetin (Yao et al., 2018), Chrysin and Diisopropyl Chrysin-7-i1 Phosphate (CPI) . However, the above active small molecules cannot reach the clinical stage due to their poor oral availability or higher toxicity and easy degradation, thus the discovery of antiviral agents that can enter clinical use is the most urgent task for EV71 drug development (Lu et al., 2011;Diarimalala et al., 2020). In this study, we propose a drug screening strategy for traditional Chinese medicine monomer targeting EV71 3C pro (Figure 1). We identified a novel EV71 3C pro inhibitor, Salvianolic acid A (SA) by constructing a traditional Chinese medicine monomer compound library and docking-based virtual screening. 
We used molecular dynamics simulations and molecular biology experiments to reveal the molecular mechanism of 3C pro inhibition by SA, and tested its antiviral activity by measuring luciferase expression in EV71-infected cells. The data show that SA is an EV71 3C pro orthosteric-site inhibitor with high antiviral activity. Importantly, two SA-rich Chinese drug agents, DanShenDiWan and FuFangDanShenPian, have been marketed and used for decades as a treatment for angina pectoris, and Danhong injection (containing SA) has entered the clinic for the treatment of stroke in China (Lin et al., 2022), suggesting that SA has good biosafety. Therefore, the development of SA as an antiviral agent would be more economic than innovative drug development. Our strategy offers new ideas for the discovery of safe and effective antiviral drugs.

(FIGURE 1. Flowchart of the work. This work consists of three parts: drug screening, inhibition mechanism, and interaction mode.)

SA is a novel EV71 3C pro inhibitor
The utilization of traditional Chinese medicine monomers to treat diseases has been a popular research topic for decades and has shown significant curative effects in many cases (Harvey, 2008). To screen antiviral small molecules, we collected more than 2,300 monomers from the Traditional Chinese Medicine Systems Pharmacology Database (Ru et al., 2014). We performed a virtual screen using the previously identified 3C pro inhibitor binding pocket as the receptor docking region. As shown in Figure 2A, the binding pocket was originally an NK-1.8K binding region and was structurally very stable at all sites except for the β-ribbon region (Figure 2B). First, we evaluated the usability of the docking software Vina; as shown in Figure 2A, the RMSD of the docked conformation of NK-1.8K to the crystal conformation was less than 2 Å, indicating that the scoring function of Vina can accurately describe the ligand binding mode to 3C pro. Then, the 2,300 monomer compounds were sequentially docked to 3C pro. Small molecules with higher affinity to the protein (binding energy < −8 kcal/mol) were further screened visually to ensure structural diversity of the molecules (Figure 2C). Finally, 10 candidates were tested for biological activity (Supplementary Table S1). As shown in Figure 2D, 10 μM salvianolic acid A (SA) almost completely inhibited EV71 infection, indicating its high antiviral activity. The binding energy of SA to 3C pro is −8.6 kcal/mol, and its binding mode is different from that of NK-1.8K (Figure 2E). To verify whether SA targeted 3C pro and affected its enzymatic activity, we carried out in vitro inhibition assays based on Fluorescence Resonance Energy Transfer (FRET). The inhibition curves showed that 1 μM SA significantly reduced the hydrolytic activity of 3C pro (Figure 2F). Then, we quantified the half-maximal inhibitory concentration (IC50) of SA against 3C pro, and the IC50 value was 0.69 µM (Figure 2G). The activity of SA in inhibiting 3C pro is approximately 5.8 times higher than that of chrysin. Virtual screening and FRET experiments showed that SA is a novel 3C pro inhibitor with higher activity than other natural polyphenols that have been reported (Diarimalala et al., 2020). To explore which infection stages were impacted by SA, we performed time-of-addition assays. The single-round EV71 luciferase virus was used to test virus propagation when treated with SA, which was beneficial for excluding reinfection with the virus. NK-1.8k (targeting EV71 3C pro) and GPP3 (targeting the
viral capsid) were applied as controls (De Colibus et al., 2015; Wang et al., 2015). NK-1.8k and GPP3 are compounds that inhibit viral replication and entry, respectively. RD cells were infected with EV71 luciferase virus and treated with 5 μM SA, 2 μM NK-1.8k, and 1 μM GPP3 at different time points (−6, −4, −2, 0, 2, 4, 6, 8, and 10 hpi). As shown in Figures 3A, B, the inhibition effect of SA from −6 to 10 hpi was independent of the treatment time. A similar pattern of results was obtained with the viral inhibitor NK-1.8k. In contrast, the antiviral effect of the virus entry inhibitor GPP3 decreased dramatically from 4 hpi onward (Figure 3C). This experiment showed that SA inhibited the virus in the same pattern as NK-1.8K: it cannot inhibit virus entry into cells, but can inhibit virus replication by acting on 3C pro. To assess whether the inhibitor's antiviral activity against this virus depends on the cell type or species, we analyzed the antiviral effect on different cells. The EC50 values of SA on RD, HEK-293T, and Vero cells were 1.27, 0.67, and 0.79 μM, respectively (Figures 3D-F). This indicates that SA has a significant ability to inhibit EV71 infection in different cell types, and its activity is significantly higher than that of other natural products that have been reported (Diarimalala et al., 2020). Furthermore, we tested the cytotoxicity of SA on RD, HEK-293T, and Vero cells. Even at 100 μM, SA did not affect the viability of the three different cell lines (Figures 3G-I). Taken together, this evidence suggests that SA has excellent antiviral activity and biological safety in vitro.

The member of the catalytic triad, E71, is the structural basis for SA inhibition of 3C pro
The activity of 3C pro depends on the catalytic triad consisting of E71, H40, and C147 (Cui et al., 2011; Wen et al., 2020). First, H40 in the triad exchanges protons with C147, thereby deprotonating C147. Then, the 3C pro reacts with the substrate by acylation to form and release the first product, the amine R-NH2. Finally, the acyl-3C pro reacts with a water molecule to release the second product (Yuan et al., 2018). To determine the molecular mechanism of SA inhibition of 3C pro, we performed conventional molecular dynamics (CMD) simulations (Supplementary Table S2). We calculated the electrostatic surface potential (ESP) of the residues and SA and determined the protonation state of the residues in the simulated system (Figures 4A-C). In the Apo system, E71 forms a hydrogen bond (H-bond) with the -NH at the H40 δ site and stabilizes the -N at the H40 ε site pointing to C147. This is the structural basis for the deprotonation of C147 (Figure 4A). However, unlike the Apo system, when we performed protein-ligand complex simulations in the Holo system without any changes, SA was not stabilized in the binding pocket during all three 100 ns simulations, and the RMSD of SA fluctuated drastically (Figures 4D-F). This is understandable because the initial conformation of the protein comes from the NK-1.8K complex with 3C pro, and the reason for this phenomenon is that the key interaction between the protein and SA is not formed. We calculated the ESP of the key sites in the Holo system and found that SA has two phenolic hydroxyl groups close to the carboxyl group of E71.
The ESP of the SA phenolic hydroxyl site is 74.92 kcal/mol, and the ESP of the H40 δ site, which forms a hydrogen bond with E71, is 51.45 kcal/mol (Figure 4G). The binding mode of 3C pro and SA suggests that E71 can form electrostatic interactions with only one of H40 and SA. The ESP values indicate that the potential of the phenolic hydroxyl group of SA is significantly higher than that of the δ-site amino group of H40, so SA may preferentially bind to E71 (Figure 4D). In fact, we also observed the formation of H-bonds between SA and E71 in the Holo simulation system (Supplementary Figure S1). Therefore, for the unaltered Holo system, SA would fall into a meaningless fluctuating state and could not be stabilized. To prevent the system from falling into a meaningless equilibrium, we modified the protonation state of H40 in the Holo system so that H40 would not compete with SA for binding E71 (Figure 4G). We increased the simulation time to 300 ns and analyzed the dynamic behavior of SA in the Holo system. The RMSD of SA and the final conformations of the three trajectories showed that the RMSD of SA stabilized in the range of 2-2.5 Å (Figure 4H), and the binding pose was consistent (Figure 4I). This indicated that the interaction mode between SA and 3C pro was consistent and representative. To evaluate the conformational stability of the Apo and Holo systems during the MD simulations, we mapped the protein conformational free energy landscape. The RMSD and the radius of gyration (Rg) of the proteins were used as reaction coordinates for the free energy. As shown in Figures 5A, B, the RMSD values of the proteins are distributed in the range of 1.0-1.8 Å and the Rg values are distributed in the range of 15.2-15.4 Å, and there is only one stable state for both systems. Furthermore, we examined the RMSF of the protein in both systems; the flexibility of the β-ribbon in the Apo system is significantly higher than that in the Holo system (Supplementary Figures S3A, B), indicating that SA contributes to the stability of this region (Figure 5C). Indeed, the β-ribbon itself has a larger B-factor value and its conformation moves away from the catalytic center in the absence of ligand binding (Supplementary Figures S3C, D) (Cui et al., 2011). To assess the stability of the electrostatic interaction between SA and E71, we calculated the number of H-bonds between the two. All three replicate trajectories of the Holo system showed the presence of stable H-bonds between SA and E71, and the occupancy of H-bond OE2-H14 and H-bond OE2-H13 reached 86% (Figures 5D-F). To characterize the H-bonds, we calculated the electrostatic surface potentials of SA and E71. As shown in Figure 5G, there is an overlap of the van der Waals surfaces in the regions where residues form H-bonds with SA, and the electrical properties of the overlapping regions are complementary (Figure 5G). As the H-bond between E71 and H40 is disturbed by SA binding, the H40 side chain no longer points toward residue C147 and is deflected away from the catalytic center by π-π stacking with SA (Figure 5H). The dihedral angle of H40 is deflected by about 100° (Figures 5I, J). The binding of SA thus leads to an allosteric rearrangement of the 3C pro catalytic triad, which ultimately prevents the 3C pro from initiating the catalytic process.
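As a rough sketch of the kind of post-processing behind the reported H-bond occupancy and dihedral deflection (illustrative only, with hypothetical two-column data files and a generic geometric criterion; it is not the CPPTRAJ workflow actually used), per-frame distances, angles and dihedrals exported from the trajectories can be analyzed as follows:

```python
import numpy as np

# Per-frame series assumed to have been exported as (frame, value) text files:
d_da  = np.loadtxt("e71_oe2_sa_o14_distance.dat")[:, 1]   # donor-acceptor distance, Angstrom
angle = np.loadtxt("e71_oe2_sa_h14_angle.dat")[:, 1]      # D-H...A angle, degrees

# Geometric hydrogen-bond criterion: distance < 3.5 Angstrom and angle > 135 degrees.
hbond_present = (d_da < 3.5) & (angle > 135.0)
print(f"H-bond occupancy: {100.0 * hbond_present.mean():.1f}% of frames")

# Side-chain dihedral deflection of H40 between the Apo and Holo simulations,
# using a circular mean so angles near +/-180 degrees are handled correctly.
def circular_mean_deg(x):
    rad = np.deg2rad(x)
    return np.rad2deg(np.arctan2(np.sin(rad).mean(), np.cos(rad).mean()))

apo_chi  = np.loadtxt("his40_chi_apo.dat")[:, 1]
holo_chi = np.loadtxt("his40_chi_holo.dat")[:, 1]
deflection = circular_mean_deg(holo_chi) - circular_mean_deg(apo_chi)
print(f"mean H40 dihedral shift upon SA binding: {deflection:.0f} degrees")
```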
Binding and dissociation mechanism of SA and 3C pro
To quantitatively assess the affinity between 3C pro and SA, binding free energy calculation and decomposition were performed using the molecular mechanics generalized Born surface area (MMGBSA) method. The ΔG_MMGBSA of SA bound to 3C pro was −31.43 kcal/mol (Supplementary Table S3), and the ΔG_MMGBSA was driven by the electrostatic interaction (ΔE_ele), polar solvation (ΔG_GB), the vdW interaction (ΔE_vdW), and non-polar solvation (ΔG_SA). Specifically, the contribution of ΔE_ele to the binding energy is the largest in this system (Supplementary Table S3). As in Figure 6A, the residues with the most favorable contributions (lower than −2.0 kcal/mol) to the binding free energy were labeled. According to the binding free energy decomposition spectrum, the three residues that most significantly promoted binding were E71 (−6.80 kcal/mol), H40 (−2.55 kcal/mol), and L127 (−2.28 kcal/mol). To determine the binding mode of SA to 3C pro, we mapped the free energy landscape of the SA conformation (Figure 6B). First, we calculated the distance between the E71 side chain and the SA terminal carbon atom, as well as the angle of the SA molecular structure (Figure 6C). We used these data as reaction coordinates to determine the lowest-energy conformation.

To explore the 3C pro-SA interactions and the affinity of the binding pockets for SA during SA dissociation, we performed steered molecular dynamics (SMD) simulations. Unlike pore and groove binding models, the 3C pro inhibitor binding pocket is located on the protein surface, and thus the SA molecule has a large degree of freedom and the dissociation pathway is difficult to determine. However, with limited calculations, each SA group can be made to dissociate from its respective pocket separately to determine the affinity of each group for the protein, which is critical for the druggability optimization of SA. Here, the PMF profiles display the energy changes as each SA group unbinds from 3C pro (Figures 7A-C). In the S1 pocket, the lower energy barrier (~10 kcal/mol) makes it easier for the P1 group of SA to unbind (Figures 7C, D). The P2 group of SA requires the highest energy (~25 kcal/mol) to unbind from the S2 pocket, indicating that P2 has the highest affinity for the 3C pro, which is consistent with the estimate of the binding free energy (Figure 7D). It should be noted that the energy of the P3 group did not converge in 30 simulations, because the dynamic simulations show that the dissociation of the P3 group drives the P1 group to unbind from the S1 pocket together with it, which is determined by the structural rigidity of SA itself (Figure 7D).

Discussion
HFMD has become a serious public health problem in the Asia-Pacific region. Since May 2008, more than 13 million cases of HFMD have been reported cumulatively, including more than 3,300 deaths (Yang et al., 2017). Although the EV71 vaccine has been approved by the NMPA, there is an urgent need for efficient and safe antiviral drugs in the face of the mutation of the virus and the risk of its spread (Diarimalala et al., 2020). However, drug development has not progressed as far as vaccine development, and still no relevant and effective drugs have been brought to market (Diarimalala et al., 2020). Currently, extensive research has focused on the structure of viral proteases and their interaction with synthetic inhibitors with a view to designing drugs that can combat the devastating epidemic of HFMD (Kuo et al., 2008).
Considering the high safety and structural diversity of traditional Chinese medicine monomers, we screened an excellent antiviral small molecule, SA, using virtual screening as well as biological experiments targeting the 3C pro of EV71. The results of the molecular dynamics simulations show that SA can bind to a member of the catalytic triad, E71, and thus anchor to the protease substrate active site. The binding of SA disrupts the interaction between E71 and H40, which in turn causes H40 to move away from the catalytic center, making the 3C pro unable to initiate its catalytic function (Figure 7E). Indeed, disruption of the 3C pro catalytic triad conformation is the key for an inhibitor to gain antiviral activity. The deflection of the H40 side chain was observed as early as 2011, when X-ray crystallography was used to resolve the complex of rupintrivir and 3C pro. This is similar to the results found in our simulations, where a π-π stacking interaction was formed between H40 and the inhibitor P2 group. Differently from the inhibition mechanism of NK-1.8k, the P2 group of SA plays a key role in binding 3C pro, whereas NK-1.8k anchors the sulfhydryl group of the cysteine protease through the aldehyde group of its P1′ group (Wang et al., 2017a). The results of the SMD simulations and MMGBSA calculations indicate that the P2 group can stabilize the binding conformation of the whole molecule by anchoring in the S2 pocket. Meanwhile, the binding and dissociation of the ligand also show that the P1 group of SA is the most weakly bound to 3C pro and is the easiest to unbind from it. Based on the experience gained when optimizing NK-1.8K and NK-1.9K, the P1 group is very important for ligand activity, and replacing the γ-lactam at the P1 position of rupintrivir with a δ-lactam not only increases the binding of the inhibitor to the target protein but also produces higher hydrophobicity, which allows the compound to pass through the plasma membrane more easily (Wang et al., 2017a). In the future, if optimization and modification of the SA molecule are needed to enhance the affinity of SA for 3C pro, it is recommended to enhance the binding energy of the P1 group to the S1 pocket. Although our group has previously identified numerous small molecules that have inhibitory effects on EV71 infection, such as FOPMC/FIOMC and SLQ-4/SLQ-5, the pharmacology and toxicology of these compounds are unknown, and these compounds require long-term testing and optimized modifications before they can hope to pass the evaluation phase. Here, we screened a compound with significant inhibitory activity against EV71 3C pro, SA, based on a library of traditional Chinese medicine monomer compounds. In addition, SA, with its structural analogue (−)-epigallocatechin gallate (EGCG), was reported to have a significant inhibitory effect on SARS-CoV-2 (Supplementary Figure S4) (Zhong et al., 2022). Also, the predicted targets of SA from SwissTargetPrediction showed that proteases are the predominant targets of SA action (Supplementary Figure S5); these results further support the molecular mechanism of SA as a viral protease inhibitor. SA is one of the major water-soluble phenolic acids extracted from Salvia miltiorrhiza (Li et al., 2008). S. miltiorrhiza has been used clinically to treat and prevent cardiovascular disease, hyperlipidemia, and cerebrovascular disease (Jiang et al., 2005). Notably, SA is one of the most potent compounds in S. miltiorrhiza, with the strongest protective effect against peroxidative damage to biological membranes.
Studies have shown that SA has a variety of pharmacological activities, including prevention of brain lesions, defense against oxidative damage, and antithrombotic effects (Fan et al., 2010). Although the content of SA in S. miltiorrhiza is relatively low, some studies have shown that SA is more abundant in the Chinese drug agents DanShenDiWan and FuFangDanShenPian (Sun et al., 2016). Moreover, Danhong injections containing SA have already entered clinical use, which indicates that the safety of SA in humans can be fully guaranteed. In conclusion, the discovery of SA will facilitate the research process of HFMD drugs and provide opportunities for the discovery of novel antiviral drugs.

Experimental procedures
Drug screening
The protein in the complex structure of EV71 3C pro with NK-1.8K (PDB ID: 5GSO) (Wang et al., 2017a) was used as the docking receptor. Virtual screening of drugs was performed using the molecular docking program AutoDock Vina 1.0 (Trott and Olson, 2010). The traditional Chinese medicine database contains 2,300 monomers. Monomers with molecular weights >700 Da or <300 Da were removed. AutoDockTools 1.5.6 (Morris et al., 2009) was used to prepare the PDBQT files of the receptor and the drugs. The receptor was kept rigid, while the ligand was flexible. The grid center was determined according to the center of the binding pocket, with a search space size of 24 × 24 × 20 Å3. The global search exhaustiveness value was set to 50. The maximum energy difference between the optimal binding mode and the worst case was set to 5 kcal/mol to ensure diverse docked poses. The test molecules were purchased from MedChemExpress (MCE), and the purity of SA was 99.75%. SwissTargetPrediction (Daina et al., 2019) was used to predict the possible protein targets of the molecules.

Molecular dynamics simulations and quantum chemical calculations
The simulation systems were constructed using the tleap program of Amber 16 (Case et al., 2016). The simulation boxes contained approximately 33,000 atoms, with dimensions of ~76 × 73 × 74 Å3. All simulations were performed using Amber16 (Case et al., 2016). The Amber ff14SB force field and the Joung/Cheatham ion parameters (Li et al., 2013; Li and Merz, 2014) were used. Parametrization of SA was performed using the Antechamber module of Amber16, using the General Amber Force Field to assign atom types and the AM1-BCC method to assign charges. First, energy minimization was performed sequentially on the solvent and then on the entire system. Next, the system temperature was increased from 0 to 100 K under the NVT ensemble, and then from 100 to 300 K under the NPT ensemble, during which the protein was restrained (1 kcal mol−1·Å−2). Finally, for each simulation system, three separate production simulations were performed under NPT conditions at 300 K and 1 bar. The other parameters were the same as set previously (Shi et al., 2021; Shi et al., 2022). Snapshots were extracted every 100 ps from all equilibrium MD trajectories to calculate statistical distributions. The CPPTRAJ module of the Amber 16 program was used to analyze the generated trajectories. Quantum chemical calculations were performed using the Gaussian 03 (Frisch et al., 2003) and Multiwfn (Tian and Feiwu, 2012) programs. The wave function data used in the electrostatic surface potential analysis were generated at the B3LYP/6-31G** level.
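For reference, the docking setup described in the Drug screening subsection above can be reproduced roughly as sketched below; the receptor/ligand file names and grid center are placeholders, while the box size (24 × 24 × 20 Å), exhaustiveness (50) and energy range (5 kcal/mol) follow the stated settings, using standard AutoDock Vina command-line options.

```python
import subprocess

receptor = "3Cpro_5GSO.pdbqt"     # prepared receptor (hypothetical file name)
ligand = "monomer_0001.pdbqt"     # one prepared monomer from the library

# Grid center coordinates are illustrative placeholders; in practice they are
# taken from the center of the NK-1.8K binding pocket.
cmd = [
    "vina",
    "--receptor", receptor,
    "--ligand", ligand,
    "--center_x", "10.0", "--center_y", "5.0", "--center_z", "-3.0",
    "--size_x", "24", "--size_y", "24", "--size_z", "20",
    "--exhaustiveness", "50",
    "--energy_range", "5",
    "--out", "monomer_0001_docked.pdbqt",
]
subprocess.run(cmd, check=True)
```

Looping this call over the prepared library and parsing the best score from each output is then enough to rank the 2,300 monomers before visual inspection.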
MMPBSA.py in the AmberTools16 package (Miller et al., 2012) was employed to conduct the free energy calculations for the two complexes. One hundred conformations were extracted from each equilibrated trajectory (from 250 to 300 ns) for the calculations. SMD simulations (Jensen et al., 2002) were performed using the Amber16 software package. Here, the stable state of SA bound to 3C pro was selected as the initial conformation for SMD. To obtain a converged potential of mean force (PMF) in SMD, SA was simulated 30 times along each of the three reaction directions. In these simulations, the trajectories with energy values closest to the Jarzynski average (JA) are considered representative. The stretching velocity was 10 Å/ns in the SMD simulations, with a spring constant k of 40 kcal/(mol × Å2). Once the distance between the two selected atoms reached 10 Å, there was no longer any interaction between the ligand-related groups and the corresponding binding pocket.

Preliminary screening of antiviral activity
The inhibitory activity of the traditional Chinese medicine monomers against EV71 was evaluated by phenotype screening. Briefly, 3 × 10^4 RD cells were seeded in a 96-well plate and cultured overnight at 37°C in 5% CO2. Monomers (10 µM) and EV71 luciferase virus (MOI = 1) were added and incubated for 24 h. The luciferase expression level was monitored using a Microplate Reader (Tecan, Austria).

In vitro inhibition assay
The fluorescent peptide NMA-IEALFQGPPK(DNP)FR was employed as the substrate for the inhibition assay based on the FRET effect. The inhibition assay mixture contained 1 μM EV71 3C pro, 20 μM substrate and 1 μM SA in 50 mM HEPES (pH 7.5), 100 mM NaCl, 2 mM DTT at 30°C. The fluorescence intensities were read at λex = 340 nm and λem = 440 nm every 1 min for 60 min. The IC50 determination was performed with serially diluted SA (0.2-50 μM) incubated at 30°C for 2 h, after which 20 μM substrate was added into each well. The fluorescence intensities were read on a Microplate Reader, and the IC50 was calculated with GraphPad Prism 7.0.

The inhibition effect and cytotoxicity of SA
RD (3 × 10^4 per well), Vero (3 × 10^4 per well), and HEK-293T (2 × 10^4 per well) cells were seeded in 96-well plates and cultured at 37°C in 5% CO2 overnight. Each cell line was treated with serial dilutions of SA ranging from 0.05 to 50 μM. EV71 luciferase reporter virus was added after 2 h and cultured for 24 h. The supernatants were removed, and cells were lysed using Bright-Glo Luciferase substrate. The luciferase values were read on a Microplate Reader (Tecan, Austria), and the EC50 was calculated using GraphPad Prism. A cell viability assay was used to measure the cytotoxicity of SA on the different cell lines. Serial dilutions of SA (1.56-100 μM in DMEM) were added and incubated for 48 h at 37°C. Cells were then incubated for 10 min with 100 μL of CellTiter-Glo® reagent (Promega, United States). The luminescence signals were determined using the Microplate Reader. The viability of cells treated with inhibitors was normalized to that of the non-treated cells.

Time of addition assay
We performed the time-of-addition assay with SA, NK-1.8k, and GPP3 to elucidate the stage at which the compound inhibited viral replication. RD (3 × 10^4) cells were cultured in 96-well plates at 37°C under 5% CO2 overnight. Cells were treated with 5 μM SA, 2 µM NK-1.8k, and 1 µM GPP3, respectively, and infected with EV71 luciferase reporter virus for different periods.
At 24 h post-infection (hpi), antiviral activity was determined by the reduction in luciferase activity compared with the control cultures, using Bright-Glo Luciferase substrate.

Data analysis
Graphical presentation and data analysis were performed using Microsoft Excel 2019. The data are presented as mean ± standard deviation (S.D.), and the number of replicates is given in the figure legends. Statistical significance of the differences between group means was evaluated by one-way analysis of variance (ANOVA) using Tukey's honestly significant difference (HSD) test as a post hoc test; p values ≤0.05 were considered statistically significant (*p < 0.05, **p < 0.01). Discovery Studio Visualizer was used to analyze non-covalent interactions between SA and its binding pocket. Visualization and analysis of model features were performed with VMD (Humphrey et al., 1996) and open-source PyMOL (https://pymol.org).

Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
7,397.6
2023-03-02T00:00:00.000
[ "Biology" ]
Enhanced TabNet: Attentive Interpretable Tabular Learning for Hyperspectral Image Classification : Tree-based methods and deep neural networks (DNNs) have drawn much attention in the classification of images. Interpretable canonical deep tabular data learning architecture (TabNet) that combines the concept of tree-based techniques and DNNs can be used for hyperspectral image classification. Sequential attention is used in such architecture for choosing appropriate salient features at each decision step, which enables interpretability and efficient learning to increase learning capacity. In this paper, TabNet with spatial attention (TabNets) is proposed to include spatial information, in which a 2D convolution neural network (CNN) is incorporated inside an attentive transformer for spatial soft feature selection. In addition, spatial information is exploited by feature extraction in a pre-processing stage, where an adaptive texture smoothing method is used to construct a structure profile (SP), and the extracted SP is fed into TabNet (sTabNet) to further enhance performance. Moreover, the performance of TabNet-class approaches can be improved by introducing unsupervised pretraining. Overall accuracy for the unsupervised pretrained version of the proposed TabNets, i.e., uTabNets, can be improved from 11.29% to 12.61%, 3.6% to 7.67%, and 5.97% to 8.01% in comparison to other classification techniques, at the cost of increases in computational complexity by factors of 1.96 to 2.52, 2.03 to 3.45, and 2.67 to 5.52, respectively. Experimental results obtained on different hyperspectral datasets demonstrated the superiority of the proposed approaches in comparison with other state-of-the-art techniques including DNNs and decision tree variants. Introduction Hyperspectral imagery (HSI) consists of abundant spatial and spectral information in a 3D data cube with hundreds of narrow spectral bands. Due to high spectral resolution, it has been applied in many applications, such as pollution monitoring, urban planning, analysis for land use, and land cover [1][2][3][4]. However, an increase in spatial and spectral information poses a challenge in HSI analysis. Thus, analysis of HSI, such as classification, dimensionality reduction [1,5], and feature extraction [6,7], has obtained much attention among the remote sensing community for decades [8]. Moreover, such approaches can be applicable towards vision technology applications in other engineering domains [9][10][11], multispectral remote sensing, and synthetic aperture radar (SAR) imagery [12,13]. In the last decades, spectral-based classification approaches such as support vector machine (SVM) and composite kernel SVM (SVM-CK) have been widely used in remote sensing [14][15][16]. In addition, different spatial-spectral features have been introduced for HSI classification [17,18]. Sparse representation (SR) for HSI classification was successfully applied in [19], inspired by the successful application of sparse representation in face recognition [20]. Consequently, many sparse and collaborative In this work, we observed enhanced performance of unsupervised pretraining on TabNet (uTabNet) for HSI classification, and pretraining was extended to TabNets, resulting in uTabNets. The unsupervised pretrained version of TabNets, i.e., uTabNets, can consider sequential attention in addition to spatial processing of masks by using 2D CNN in the attentive transformer. 
Moreover, the existing TabNet does not include any preprocessing stage, weakening its ability to learn in a better way. Certainly, including spatial information in a spectral classifier has led to increased classification accuracy. Many deep learning classifiers, such as recurrent neural networks (RNN) [42] and generative adversarial network (GAN) [43], use CNN for deep feature extraction with several convolutional and pooling layers [44,45]. However, most deep learning methods need massive training to accurately learn of parameters. To deal with such issues, various classification frameworks, such as active learning [46] and ensemble learning [47], are introduced. In addition, spatial optimization using structure profile (SP) is introduced in [48] for feature extraction purposes. In this paper, we incorporate SP in the TabNet with structure profile (sTabNet). Similarly, SP is used in extended versions of TabNet, including uTabNet with SP (suTabNet), TabNets with SP (sTabNets), and uTabNets with SP (suTabNets). The main contribution of this work can be summarized as follows: 1. It introduces TabNet for HSI classification and improves classification performance by applying unsupervised pretraining in uTabNet; 2. It develops TabNets and uTabNets after including spatial information in the attentive transformer; 3. It includes SP in sTabNet as a feature extraction to further improve the classification performance of SP versions of TabNet, i.e., suTabNet, sTabNets, and suTabNets. The remainder of this article is organized as follows. Section 2 presents related work. Section 3 discusses the proposed TabNet versions for hyperspectral image classification. Section 4 shows experimental results along with a discussion. Section 5 summarizes the article conclusively. Related Work Features should be picked wisely for meaningful prediction in machine learning. Global feature selection methods are techniques of selecting appropriate features based on the entire training dataset. Forward selection and LASSO regularization are broadly used global feature selection techniques [49]. Forward selection uses an iterative approach in a step-by-step fashion to select appropriate features from each iteration, and Lasso regularization can allocate zero weights for irrelevant features in a linear model. As stated in [50], instance-wise feature selection can be used to select individual features for each input and explainer model to maximize the mutual information between the response variable and the selected features. Moreover, the actor-critic framework can be used to mimic a baseline by optimizing the feature selection [51]. Using the actor-critic framework, reward can be generated by the predicting network for the selecting network. However, TabNet can be used for soft feature selection by controlling the sparsity that can perform feature selection and output mapping, and can provide better representations of features to enhance performance. Tree Based Learning Tree-based methods are well suited for tabular data learning, as they can provide statistical information gains by picking global features [52]. Ensembling can be done to enhance the performance of tree-based models, such that random forests (RF) can use random subsets of data with randomly selected features to grow many trees [28,30]. Furthermore, CatBoost [53], XGBoost [31,32], and LightGBM [54] are recent ensemble decision tree approaches that can provide better performance for classification. 
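As a point of comparison for the tree-ensemble baselines mentioned above, the sketch below (a generic illustration, not this paper's experimental protocol; the cube shape, class count and split ratio are assumptions) treats each labeled pixel spectrum as one tabular row and trains a random forest on it.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical hyperspectral cube: rows x cols x spectral bands, with integer labels
# per pixel (0 = unlabeled). Shapes and class count are illustrative only.
cube = np.random.rand(145, 145, 200)
labels = np.random.randint(0, 17, size=(145, 145))

mask = labels > 0
X = cube[mask]          # (n_labeled_pixels, n_bands): each pixel is one tabular row
y = labels[mask]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.9, stratify=y,
                                          random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```

Gradient-boosted variants such as XGBoost or LightGBM can be substituted for the random forest with the same per-pixel tabular layout.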
Deep learning architectures equipped with a feature-selection capability can outperform purely tree-based techniques. Attentive Interpretable Tabular Learning (TabNet) TabNet is built on tree-like functionality: it forms linear combinations of features by determining the coefficients with which each feature contributes to the decision process. It uses sparse instance-wise feature selection learned from the training data, and it constructs a sequential multi-step architecture in which part of the decision is determined at each decision step using the selected features. Furthermore, the features are processed nonlinearly. In an advanced task such as HSI classification or anomaly detection, intrinsic spectral features need to be considered in detail to avoid the problems of non-identical spectra from the same material or similar spectra from different materials [55]. Conventional DNNs, such as multi-layer perceptrons (MLP) or stacked convolutional layers, lack a proper mechanism for soft feature selection. TabNet is therefore an attractive alternative to conventional DNN-based approaches, because it has a powerful soft feature selection capability and controls sparsity through sequential attention. Proposed Method The different variants of enhanced TabNet classifiers proposed in this work are summarized in Table 1: TabNet: attentive interpretable tabular learning; uTabNet: unsupervised pretraining on attentive interpretable tabular learning; TabNets: attentive interpretable tabular learning with spatial attention; uTabNets: unsupervised pretraining on attentive interpretable tabular learning with spatial attention; sTabNet: structure profile on attentive interpretable tabular learning; suTabNet: structure profile on unsupervised pretrained attentive interpretable tabular learning; sTabNets: structure profile on attentive interpretable tabular learning with spatial attention; suTabNets: structure profile on unsupervised pretrained attentive interpretable tabular learning with spatial attention. TabNet for Hyperspectral Image Classification Suppose that a hyperspectral dataset with d spectral bands contains M labeled samples from C classes, each represented by a spectral vector x with a corresponding label vector y. As shown in Figure 1, the spectral features are used as inputs to TabNet. Suppose the training data X is passed to the initial decision step with batch size B. The feature selection process then includes the following steps: (1) the "split" module separates the output of the initial feature transformer to obtain the features a[i-1] (with i = 1 in Step 1); (2) if we disregard the spatial information in the attentive transformer of TabNets shown in Figure 4 below, it becomes the attentive transformer of TabNet; it uses a trainable function h_i, consisting of a fully connected (FC) layer and a batch normalization (BN) layer, to generate high-dimensional features; (3) at each step, interpretable information is provided by the masks used for selecting features, and global interpretability can be attained by aggregating the masks from the different decision steps. This process can enhance the discriminative ability in the spectral domain by providing local and global interpretability for HSI feature selection.
The attentive transformer then generates the masks, as a soft selection of salient features, from the processed features a[i-1] of the previous step as M[i] = entmax(P[i-1] * h_i(a[i-1])) (Equation (1)). Entmax normalization [56] inherits the desirable sparsity of sparsemax while providing a smoother, differentiable curvature, whereas sparsemax is piecewise linear. P[i-1] is the prior scale term, which denotes how much a particular feature has been used previously, P[i] = prod_{j<=i} (gamma - M[j]), where gamma is a relaxation parameter: a feature can be used at only one decision step when gamma = 1, and features can be used in multiple decision steps as gamma increases. For an input attention vector z, the sparsemax output is the Euclidean projection of z onto the probability simplex Delta^D, sparsemax(z) = argmin_{p in Delta^D} ||p - z||^2, which assigns zero probability to choices with low scores; entmax normalization, in contrast, provides a continuous probability distribution and estimates better distributions than sparsemax normalization. (4) A sparsity regularization term in the form of an entropy [57] is used for controlling the sparsity of the selected features, L_sparse = sum_{i=1}^{N_steps} sum_{b=1}^{B} sum_{j=1}^{D} -M[i]_{b,j} log(M[i]_{b,j} + epsilon) / (N_steps * B), where epsilon takes a small value for numerical stability. The sparsity regularization is added to the overall loss with weight lambda_sparse as lambda_sparse * L_sparse, which provides a favorable bias for convergence to high accuracy on datasets with redundant features; (5) a sequential multi-step decision process with N_steps steps is used in TabNet's encoder. The processed information from the (i-1)-th step is passed to the i-th step to decide which features to use, and the outputs are obtained by aggregating the processed feature representations in the overall decision function, as shown by the feature attributes in Figure 1. With the masks M[i] obtained from the attentive transformer, the following steps are used for feature processing. (1) The feature transformer in Figure 2 processes the filtered features M[i] * f, producing the decision step output and the information for subsequent steps, [d[i], a[i]] = f_i(M[i] * f); (2) for efficient learning with high capacity, the feature transformer comprises layers that are shared across decision steps, so that the same features can be input to different decision steps, and decision step-dependent layers, in which the features of the current decision step depend on the output of the previous decision step; (3) as can be observed in Figure 2, the feature transformer consists of the concatenation of two shared layers and two decision step-dependent layers, in which each fully connected (FC) layer is followed by batch normalization (BN) and a gated linear unit (GLU) [58]; normalization with sqrt(0.5) is also used to stabilize learning throughout the network [59]; (4) all BN operations, except the one applied to the input features, are implemented as ghost BN [60] by selecting only part of the samples rather than the entire batch at one time, reducing the cost of computation; this improves performance by using a virtual (small) batch size B_v and momentum m_B instead of the entire batch. Moreover, decision tree-like aggregation is implemented by constructing the overall decision embedding as d_out = sum_{i=1}^{N_steps} ReLU(d[i]), where N_steps represents the number of decision steps.
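The mask mechanics described above can be made concrete with a short numerical sketch. The code below is an illustrative reconstruction rather than the authors' implementation: plain sparsemax is used in place of entmax, and a random linear map stands in for the trainable FC+BN function h_i, while the prior-scale update and the entropy regularizer follow the expressions given above.

```python
# Illustrative sketch of TabNet's sequential mask generation (not the paper's code).
# sparsemax stands in for entmax; h_i is a random linear map for demonstration only.
import numpy as np

def sparsemax(z):
    """Project each row of z onto the probability simplex (Martins & Astudillo, 2016)."""
    z_sorted = np.sort(z, axis=1)[:, ::-1]
    k = np.arange(1, z.shape[1] + 1)
    cssv = np.cumsum(z_sorted, axis=1) - 1.0
    cond = z_sorted - cssv / k > 0
    k_z = cond.sum(axis=1)                        # support size per row
    tau = cssv[np.arange(z.shape[0]), k_z - 1] / k_z
    return np.maximum(z - tau[:, None], 0.0)

rng = np.random.default_rng(0)
B, D, n_steps, gamma, eps = 4, 200, 3, 1.3, 1e-15   # batch, bands, steps, relaxation
features = rng.normal(size=(B, D))
prior = np.ones((B, D))                           # P[0] = 1: no feature used yet
a = features.copy()                               # stand-in for the split output a[0]
L_sparse = 0.0
for i in range(n_steps):
    W = rng.normal(size=(D, D)) / np.sqrt(D)      # stand-in for the FC+BN function h_i
    mask = sparsemax(prior * (a @ W))             # M[i] = sparsemax(P[i-1] * h_i(a[i-1]))
    prior = prior * (gamma - mask)                # P[i] = P[i-1] * (gamma - M[i])
    L_sparse += (-mask * np.log(mask + eps)).sum() / (n_steps * B)
    a = mask * features                           # masked features stand in for a[i]
print("sparsity regularizer:", L_sparse)
```

Running it shows how the prior scale shrinks the scores of already-used features, so that later decision steps are pushed toward bands that have not yet contributed.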
TabNet with Unsupervised Pretraining To include unsupervised pretraining in TabNet (uTabNet), a decoder architecture is incorporated [39,40]. As shown in Figure 3, the decoder is composed of a feature transformer and FC layers at each decision step, and it reconstructs the features by combining the outputs, so that missing feature columns can be predicted from the other feature columns. Here S is a binary mask and r is the pretraining ratio, i.e., the fraction of features randomly discarded (masked by S) for reconstruction. The prior scale term in the encoder is initialized as P[0] = 1 - S so that the model focuses on the known features, and the last FC layer of the decoder is multiplied by S so that only the unknown (masked) features are output. The reconstruction loss L_rec, used in an unsupervised manner without label information, is formed from the masked reconstruction errors, where X_hat_{i,j} represents the reconstructed output (Equation (8)). TabNet with Spatial Attention (TabNets) The attentive transformer generates the masks for soft feature selection. Spatial information is incorporated by including a 2D CNN inside the attentive transformer, resulting in TabNet with spatial attention (TabNets), as shown in Figure 4. The output feature maps of each layer in TabNets are listed in Table 2. In a CNN, 2D kernels convolve the input data by computing the sum of the products of the kernel and the input; the kernel is strided over the input data to cover the whole spatial area, and nonlinearity is introduced by applying an activation function to the convolved features, so that the activation value of the l-th feature map of layer k is a function psi of the convolution output plus a bias parameter e_{k,l}. First, the 3D input patch of size T × P × P, where T is the number of channels retained after principal component analysis (PCA) and P × P is the patch size, is converted into a 1D input vector. For instance, for the Indian Pines data, the 3D input of size 10 × 25 × 25 becomes a 6250 × 1 vector. The feature size of each layer in the encoder is shown in the second part of Table 2: (1) the first BN generates a 6250 × 1 vector; (2) the first feature transformer layer before Step 1 converts it into a feature vector of size 512. For the spatial attention inside the attentive transformer, the feature maps of the different layers are listed in the first part of Table 2: (1) the output of entmax from Equation (1) is reshaped to 10 × 25 × 25 as input to the first 2D convolution layer; with a kernel size of 3 × 3 and stride 3, this layer produces a 16 × 8 × 8 output; (2) the second convolution layer generates an output of size 32 × 6 × 6 with a kernel size of 3 × 3 and stride 1; (3) the third convolution layer generates an output of shape 64 × 4 × 4 with a kernel size of 3 × 3 and stride 1; (4) the flatten layer provides an output of size 1024 × 1; (5) finally, the FC layer generates an output of size 6250 × 1, which is fed to the prior scales for updating the abstract features generated by the FC and BN layers inside the attentive transformer. In addition, TabNets with unsupervised pretraining (uTabNets) is obtained by applying the unsupervised pretraining steps and Equation (8) to TabNets.
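The layer sizes quoted above for the Indian Pines configuration can be checked with a few lines of PyTorch. The module below is a sketch of the spatial branch of the attentive transformer as described (reshape, three convolutions, flatten, FC back to the input dimension); it is not the authors' released code, and the ReLU activations are an assumption, since the activation function is not specified here.

```python
# Sketch of the spatial branch inside the attentive transformer (Indian Pines sizes).
# Reconstructed from the layer description above; not the authors' implementation.
import torch
import torch.nn as nn

class SpatialAttentionBranch(nn.Module):
    def __init__(self, channels=10, patch=25):
        super().__init__()
        self.channels, self.patch = channels, patch
        self.conv = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, stride=3), nn.ReLU(),  # -> 16 x 8 x 8
            nn.Conv2d(16, 32, kernel_size=3, stride=1), nn.ReLU(),        # -> 32 x 6 x 6
            nn.Conv2d(32, 64, kernel_size=3, stride=1), nn.ReLU(),        # -> 64 x 4 x 4
        )
        self.fc = nn.Linear(64 * 4 * 4, channels * patch * patch)         # 1024 -> 6250

    def forward(self, x_flat):
        # x_flat: (batch, 6250) entmax output reshaped back into a 3D patch
        x = x_flat.view(-1, self.channels, self.patch, self.patch)
        x = self.conv(x).flatten(1)                                       # (batch, 1024)
        return self.fc(x)                                                 # (batch, 6250)

out = SpatialAttentionBranch()(torch.zeros(2, 6250))
print(out.shape)  # torch.Size([2, 6250])
```

The printed shape confirms that the 64 × 4 × 4 feature map flattens to 1024 units and maps back to the 6250-dimensional vector used to update the prior scales.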
Structure Profile on TabNet (sTabNet) By using spatial feature extraction with a structure profile (SP) [48] in the preprocessing stage, the performance of TabNet can be enhanced, yielding TabNet with structure profile (sTabNet). Spatial feature extraction with a structure profile proceeds as follows. First, the original input image is divided into M subsets. The structure profile S is extracted from the input image X using an adaptive texture smoothing model, where λ is a free parameter and w is a weight that controls the similarity of adjacent pixels. For smoothing purposes, a local polynomial is fitted around each pixel x ∈ Ω through an optimization function whose solution is a polynomial of degree at most L, with w deciding the contribution of the neighboring pixels X(x_i) to the construction of the polynomial. Here Y(·) denotes the small region used for comparing the patches around x_i and x, the scale parameter h_0 is set to 1, and G_σ is a Gaussian function with standard deviation σ. With these weights, Equation (10) can be rewritten in weighted form and solved using the Bregman iteration algorithm [61], in which a soft-thresholding step is applied at each iteration; these updates are repeated until convergence. After convergence, the aforementioned TabNet classifier is applied to the extracted SPs to obtain the classification results of sTabNet. Structure Profile on Unsupervised Pretrained TabNet (suTabNet) By applying SP feature extraction before uTabNet, the TabNet with unsupervised pretraining and SP feature extraction (suTabNet) is obtained. Similarly, SP feature extraction can be applied to TabNets and uTabNets to obtain their SP-extracted versions sTabNets and suTabNets, respectively, and to the other comparative methods for a fair comparison. Datasets Three different datasets were used to validate the proposed methods. The first dataset is the Indian Pines dataset, collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. It consists of 16 different classes with a spatial size of 145 × 145 pixels and 220 spectral bands (200 after noise removal); the water-absorption bands 104-108, 150-163, and 220 were removed. The spectral wavelength ranges from 0.4 to 2.5 μm. Ten percent of the samples from each class were taken for training, and the remainder were used for testing. The number of training and testing samples for each class is listed in Table 3. The second dataset is the University of Pavia dataset, acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor in Italy. It has a spatial size of 610 × 340 pixels and a total of 103 spectral bands after noisy-band removal, covering the range 0.43 to 0.86 μm. Nine different classes exist in this dataset; 200 samples were taken from each class for training, and the remainder were used for testing. Table 4 shows the number of training and testing samples for each class. The third dataset is the Salinas dataset, collected with the AVIRIS sensor in Salinas Valley, California. It has a spatial size of 512 × 217 pixels with 224 bands (204 after band removal); the water-absorption bands 108-112, 154-167, and 224 were removed. It has a spatial resolution of 3.7 m per pixel with 16 different classes. For training, 200 samples from each class were taken, and the remainder were used for testing. Table 5 shows the number of training and testing samples in the different classes. Experimental Setup For all methods in comparison, such as RF, MLP, LightGBM, CatBoost, XGBoost, and CAE, the parameters were set according to [28-32,35,41,53,54]. For the proposed methods, the Adam optimizer was used to estimate the optimal parameters. In all three datasets, 10% of the training samples were set aside for validation, used to tune the hyperparameters of the network, and the remaining 90% were used to learn the optimal network weights.
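The per-class sampling scheme described above (a 10% fraction per class for Indian Pines, a fixed 200 samples per class for University of Pavia and Salinas, and a further 10% validation split of the training pool) can be written as a small utility. The function below is an illustrative sketch; the array name and the handling of very small classes are assumptions.

```python
# Illustrative per-class train/validation/test split as described in the experimental setup.
# `labels` is a 1D array of class ids for labeled pixels (0 = unlabeled); names are assumptions.
import numpy as np

def per_class_split(labels, n_train=200, frac_train=None, val_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels[labels > 0]):
        idx = rng.permutation(np.flatnonzero(labels == c))
        k = int(round(frac_train * len(idx))) if frac_train else min(n_train, len(idx) - 1)
        train_idx.append(idx[:k])
        test_idx.append(idx[k:])
    train_idx = np.concatenate(train_idx)
    rng.shuffle(train_idx)
    n_val = int(round(val_frac * len(train_idx)))   # 10% of the training pool for validation
    return train_idx[n_val:], train_idx[:n_val], np.concatenate(test_idx)

# Example: Indian Pines uses a 10% fraction per class; Pavia and Salinas use 200 per class.
# tr, va, te = per_class_split(gt.ravel(), frac_train=0.10)
```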
The performance of TabNet, uTabNet, TabNets, uTabNets, and their SP-extracted versions sTabNet, suTabNet, sTabNets, and suTabNets was investigated over a predefined set of parameters. In particular, the spatial window size was varied to incorporate more spatial information; however, choosing too large a window may add redundancy because of interclass variation among neighboring pixels. As shown in Table 7, a 25 × 25 window was found to be the most suitable for all datasets. For the Indian Pines and Salinas data, an input of 10 × 25 × 25 was used, and 7 × 25 × 25 was used for the University of Pavia data. In Figures 5-7, the classification maps of the three datasets are consistent with the results in Tables 8-13. Figure 5 shows the classification maps for Indian Pines, with the original image and ground truth in Figure 5a,b. In these classification maps of the labeled pixels, sTabNet outperforms TabNet and the SP versions of the other techniques, and suTabNet outperforms uTabNet and sTabNet. The proposed TabNets shows less noise in the areas of Soybean-notill and Woods, and uTabNets shows less noise in the region of Woods. Moreover, their SP-extracted versions sTabNets and suTabNets show less noise in the areas of Soybean-mintill and Woods, respectively. Figure 6 shows the classification maps for the University of Pavia. It can be observed that the maps from the proposed TabNets and uTabNets are smoother in the regions of Bare soil and Meadows, respectively. Similarly, their SP-extracted versions sTabNets and suTabNets produce smoother areas of Bare soil and Meadows, respectively. Figure 7 shows the classification maps of the different methods on the Salinas dataset. The maps from the proposed TabNets and uTabNets are less noisy in the regions of Corn-senesced-green-weeds and Grapes-untrained, and the maps from their SP-extracted versions sTabNets and suTabNets contain less noise in the areas of Grapes-untrained and Vinyard-untrained. The statistical significance of the differences was assessed with a standardized test statistic z: a value of |z| larger than 1.96 or 2.58 indicates a statistically significant difference at the 95% or 99% confidence level, respectively. The comparison among TabNet, uTabNet, TabNets, uTabNets, sTabNet, suTabNet, sTabNets, suTabNets, and the other classifiers indicates the superiority of the proposed methods over their counterparts. To estimate the computational complexity of the proposed algorithms, the execution times of the different algorithms on the three hyperspectral datasets are given in Table 15. All experiments were run using an NVIDIA Tesla K80 GPU and MATLAB on an Intel(R) Core(TM) i7-4770 central processing unit with 16 GB of memory. It can be observed that TabNet has higher computational complexity than the other tree-based methods, which may be due to the sequential attention involved in tabular learning. In addition, the unsupervised pretraining version of TabNet (uTabNet) has higher complexity than TabNet because of the pretraining operation. The proposed TabNets and its unsupervised pretraining version uTabNets show slightly higher complexity than TabNet and uTabNet because of the convolution layers in the attentive transformer used for spatial processing of the masks. Moreover, the SP-extracted versions sTabNet, suTabNet, sTabNets, and suTabNets are slightly costlier than their counterparts due to the SP extraction.
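A common way to obtain a z statistic of this kind is a McNemar-type test on the paired correctness of two classifiers over the same test pixels; the snippet below is an illustrative sketch under that assumption, not necessarily the exact statistic used in the paper.

```python
# Illustrative McNemar-style z statistic for comparing two classifiers on the same test set.
# y_true, pred_a, pred_b are 1D label arrays of equal length; the names are assumptions.
import numpy as np

def mcnemar_z(y_true, pred_a, pred_b):
    a_ok = pred_a == y_true
    b_ok = pred_b == y_true
    f12 = np.sum(a_ok & ~b_ok)        # samples only classifier A classifies correctly
    f21 = np.sum(~a_ok & b_ok)        # samples only classifier B classifies correctly
    return (f12 - f21) / np.sqrt(max(f12 + f21, 1))   # |z| > 1.96 (95%) or 2.58 (99%)
```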
Conclusions In this work, we proposed the TabNets network, which uses spatial attention to enhance the performance of the original TabNet for HSI classification by including a 2D CNN in the attentive transformer. Moreover, unsupervised pretraining on TabNets (uTabNets) was introduced, which can outperform TabNets. SP-extracted versions of TabNet, uTabNet, TabNets, and uTabNets were also developed to further exploit spatial information. The experimental results obtained on different hyperspectral datasets illustrate the superiority of the proposed TabNets and uTabNets and their SP versions in terms of classification accuracy over other techniques, such as RF, MLP, LightGBM, CatBoost, XGBoost, and their SP versions. However, the proposed networks show slightly higher complexity for network optimization. In future work, more spatial and spectral information will be incorporated into TabNet to enhance the classification performance at reduced computational cost. Moreover, the performance of the enhanced TabNet on hyperspectral anomaly detection will be investigated. The approach also has potential for solving similar classification and feature extraction problems for high-resolution thermal or remote sensing images.
5,462.8
2022-02-03T00:00:00.000
[ "Computer Science" ]
Privacy-Enhancing Security Protocol in LTE Initial Attach Long-Term Evolution (LTE) is a fourth-generation mobile communication technology implemented throughout the world. It is the communication means of smartphones, which send and receive all of the private data of individuals. M2M, IoT, etc., are the base technologies of mobile communication that will be used in the future cyber world. However, identification parameters such as the International Mobile Subscriber Identity (IMSI) and Radio Network Temporary Identities (RNTI) are exposed as clear text in the initial attach section when accessing the LTE network. Such a vulnerability does not end with a mere identification parameter, but can lead to secondary attacks using the identification parameter, such as replication of the smartphone or illegal use of the mobile communication network. This paper proposes a security protocol to safely transmit identification parameters in the different cases of the initial attach. The proposed security protocol solves the exposed vulnerability by encrypting the parameters in transmission. Using an OPNET simulator, it is shown that the average delay rate and processing ratio are efficient in comparison to the existing process. Introduction LTE is an abbreviation for Long-Term Evolution, a fourth-generation mobile communication technology. LTE is designed for high-speed transmission, reduced cost per bit, low transmission delay and applicability to existing frequency bands, and it is currently implemented worldwide. LTE technology is used not only as the base technology of smartphones, which store, send and receive the sensitive personal information of individuals [1,2], but also as the base technology of mobile communication technologies to be used in the future cyber world, such as M2M and IoT [3,4], and as the research base for fifth-generation mobile communication technology [5]. However, in the current LTE technology, there is a vulnerability in which the identification parameter values of the UE (user equipment) are exposed as clear text in the initial attach process [6][7][8][9]. This vulnerability has existed since the initial release of the LTE standards and is still present in Release 12. In the LTE technical documentation, according to the "Technical Specification Group Services and System Aspects; Rationale and track of security decisions in Long-Term Evolved (LTE) RAN/3GPP System Architecture Evolution (SAE) (Release 9)", there is a vulnerability during the initial attach process for access to the LTE network, in which the UE-identifying parameters are transmitted in plain text. Problems such as tracing and privacy infringement could occur. This paper proposes a method for safely transmitting identification parameters by classifying the initial attach processes into two tasks: initial attach with the International Mobile Subscriber Identity (IMSI) and initial attach with the Globally Unique Temporary Identifier (GUTI).
This paper consists of six sections. Section 2 analyzes the structure of LTE, the initial attach process, the security process and the threats. Section 3 proposes a security protocol, classifying the initial attach process into multiple cases in order to safely transmit identification parameters. Section 4 carries out a security analysis of the proposed protocol, and Section 5 compares and evaluates the performance of the proposed process with the security protocol against the existing process. Section 6 concludes the discussion. Please see Table 1 for the definitions and terms used in this paper. LTE Network Structure The LTE network consists of LTE entities dealing with the wireless access network technology and EPC entities dealing with the core network technology. Its structure is shown in Figure 1. Among the LTE entities, the UE accesses the evolved Node B (eNB) through the LTE-Uu wireless interface. The eNB, serving as the base station, provides the user with a wireless interface and provides radio resource management (RRM) features, such as radio bearer control, radio admission control, dynamic radio resource allocation, load balancing and inter-cell interference coordination (ICIC) [10]. The EPC entities consist of the mobility management entity (MME), S-GW, P-GW and home subscriber server (HSS). The MME is an E-UTRAN control plane entity, communicating with the HSS for user authentication and user profile download, and through NAS signaling it provides the user terminal with EPS mobility management (EMM) and EPS session management (ESM) features. The S-GW is the termination point between E-UTRAN and EPC and the anchoring point for handovers with the eNB and handovers with the 3GPP system. The P-GW connects the UE to an external PDN network and provides packet filtering. In addition, the P-GW allocates an IP address to the user terminal and serves as the mobility anchoring point in handovers between 3GPP and non-3GPP networks. Lastly, the HSS manages the users' personal profiles [8,[10][11][12][13][14]. The IMS/Internet domain is the domain commonly used for calling external Internet services. LTE Initial Attach for UE The "initial attach for UE" process is the case of the first access to the network by a user subscribing to the LTE network with a UE [15][16][17]. The LTE initial attach process is shown in Figure 2. The "initial state after radio link synchronization" process is one in which the UE selects an eNB and synchronizes the wireless link. The "ECM connection establishment" process is a NAS-layer process of transmitting the IMSI to the MME to request network access; through this process, the RRC connection and the S1 signaling connection are established. The "authentication" process is a mutual authentication procedure between the UE and the MME using EPS-AKA, and the "NAS security setup" process is a key-setting process to safely transmit NAS messages between the UE and the MME [18,19]. The "location update" process is the one that receives personal profile information from the HSS after registering the location, and the "EPS session establishment" process is the one that allocates network resources so that the users can be provided with services [20,21].
LTE Security After the "ECM connection establishment" process between the UE and the eNB, the UE starts mutual authentication by transmitting the IMSI to the MME. Centered on the LTE security layers, the LTE network carries out mutual authentication based on EPS-AKA. LTE security is broadly divided into three processes: the UE-HSS mutual authentication process, the UE-MME NAS security setup process and the UE-eNB AS security setup process [6,7,9,14,[22][23][24]. The LTE security process is shown in Figure 3. LTE Threats The IMSI is the unique ID requested from each user when the network administrator registers the user for the service, and this value is the unique identification number saved in the USIM of the user device [14]. Yet, when the "initial attach for UE" is carried out, the UE transmits the IMSI to the MME in plain text during the "ECM connection establishment" process. The IMSI transmitted in plain text passes through a number of eNBs on its way to the MME, creating a vulnerability in which it can leak to an attacker through a malicious eNB. In addition, the leaked IMSI can be used for user tracking, device tracking and privacy abuse attacks. The RNTI, the unique ID that differentiates UEs at the eNB, and the GUTI, which is used instead of the IMSI after the initial procedure, are also transmitted in plain text in a variety of initial attach processes, so the same vulnerability and attack threats as for the IMSI may occur [6,7,9,25]. An attack that could allow an attacker to use a stolen identification parameter is shown in Figure 4. The UE constantly transmits location information to the eNB and the MME, so an attacker can track the user using a leaked IMSI. Furthermore, the attacker can mount a DoS attack using a leaked IMSI and GUTI, which is used for requesting the RNTI [26,27]. The LTE threat model and its process are shown in Figure 4. Proposed Security Protocol for Initial Attach in LTE The proposed security protocol was designed to protect unique identification information, such as the IMSI and RNTI, which is transmitted in plain text when the UE attempts an initial attach to the network. It consists of four cases according to the initial connection of the UE, and the terms and symbols used in the proposed security protocol are shown in Table 1. Initial Attach with IMSI The first protocol is shown in Figure 5 and is carried out after the "initial state after radio link synchronization" process in the initial attach with IMSI case. It was designed to protect the IMSI, leaked in plain text from the "ECM connection establishment" process, and the RNTI, leaked in plain text from the "EPS session establishment" process. After the "initial state after radio link synchronization" process, the UE and the MME start the "ECM connection" process. The UE transmits a generated random number and the "UE network capability" to the MME in the attach request. The MME receiving the attach request generates a random number and transmits it to the UE, and the UE and the MME carry out a series of arithmetic operations to safely transmit the IMSI.
The UE and the MME enter the transmitted and received random numbers and the Public Land Mobile Network ID (PLMN ID) into the f() function, secretly shared according to the Mobile Network Code (MNC), and generate an F string of 4n bits: f() = hash( Expansion P-Box( hash( S-Box(RN_UE_1), S-Box(RN_MME_1), PLMN ID ) ) ). The generated F string is divided into four numerical progressions of n bits each (lr, ad, r0 and r1). After this process, the MME generates a random number progression used as the challenge bits, and the UE generates the second random number; through the lr numerical progression and an exclusive-OR operation, it generates RN_UE_2. The MME generates the challenge bits C_i using lr_i, ad_i and c_i: if lr_i is zero, C_i = c_i||ad_i, and if lr_i is one, C_i = ad_i||c_i. The MME transmits C_i to the UE to verify it through the response value, and the UE verifies the MME through C_i. Since the UE knows lr, it can separate the received C_i into c_i and ad_i regardless of the ordering. The UE then responds with r0_i if the challenge bit c_i transmitted by the MME is zero and with r1_i if c_i is one; at this time, the MME receives the transmission of the UE's RN_UE_2 and saves it. If the ad_i transmitted by the MME does not match the UE's locally computed ad_i, the UE detects an error and transmits a random value as the response. The MME, too, halts the attach process if an error is detected through a mismatch of r0_i or r1_i. After the challenge-response process, the UE uses the concatenation of the unused r0_i and r1_i bits as a key to encrypt the IMSI and transfers it to the MME. The MME generates the key through the same process as the UE and then decrypts the transmitted cipher text to obtain the IMSI. Initial Attach with GUTI After the IMSI is safely transmitted, the UE, eNB, MME and HSS carry out the "ECM connection establishment", "authentication", "NAS security setup", "location update" and "EPS session establishment" processes up to the "AS security setup". After the "AS security setup", the eNB encrypts the RNTI to be allocated to the UE using the secret key of the "AS security setup" and sends it to the MME. The MME encrypts the received RNTI with RN_UE_2, saved during the "ECM connection establishment" process, and transmits it to the eNB, and the eNB allocates the RNTI by transmitting this value to the UE. Initial attach with GUTI is the initial attach process for the case in which a UE that has successfully performed the initial attach with IMSI process re-accesses the network due to a series of events. The f() function used in Section 3.2 is of the same form as the formula given above. Case 1: MME Unchanged The first case is that the MME connected during the initial connection is not changed, and the UE re-accesses through the same MME. The security protocol for the first case is shown in Figure 6. In a re-access, the initial attach process carries out authentication using the GUTI to protect the IMSI. The process of transmitting the GUTI is the same as the IMSI transmission process of Section 3.1, and the initial attach process is carried out using the existing information saved in the MME according to information such as the GUTI, NAS-MAC and NAS Seq. No. Since the information about the UE has already been saved in the MME, no "authentication", "NAS security setup" or "location update" process is carried out; only the "EPS session establishment" process is carried out.
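The shape of this challenge-response exchange can be illustrated with a short script. The sketch below is a loose reconstruction under several assumptions: SHA-256 stands in for the shared f() (hash/S-Box/expansion P-Box construction), the four n-bit progressions are taken to be lr, ad, r0 and r1, and a byte-wise XOR stands in for the cipher (e.g., AES-256) that would actually protect the IMSI.

```python
# Illustrative reconstruction of the Section 3.1 challenge-response idea; NOT the paper's
# exact protocol. SHA-256 replaces the shared f(); XOR replaces the real cipher.
import hashlib, secrets

n = 32                                              # bits per progression (assumed)

def bits(data: bytes, length: int) -> list:
    """Derive `length` pseudo-random bits from data via repeated SHA-256 (stand-in for f())."""
    out, ctr = [], 0
    while len(out) < length:
        digest = hashlib.sha256(data + ctr.to_bytes(4, "big")).digest()
        out += [(byte >> i) & 1 for byte in digest for i in range(8)]
        ctr += 1
    return out[:length]

rn_ue, rn_mme, plmn_id = secrets.token_bytes(16), secrets.token_bytes(16), b"00101"
F = bits(rn_ue + rn_mme + plmn_id, 4 * n)           # shared 4n-bit string
lr, ad, r0, r1 = F[:n], F[n:2*n], F[2*n:3*n], F[3*n:]    # four n-bit progressions (assumed roles)

c = [secrets.randbits(1) for _ in range(n)]         # MME's random challenge bits
challenge = [(c[i], ad[i]) if lr[i] == 0 else (ad[i], c[i]) for i in range(n)]  # C_i ordering
response = [r0[i] if c[i] == 0 else r1[i] for i in range(n)]                    # UE's response

# Key from the *unused* response bits, then a placeholder XOR "encryption" of the IMSI.
key_bits = [r1[i] if c[i] == 0 else r0[i] for i in range(n)]
key = int("".join(map(str, key_bits)), 2).to_bytes(n // 8, "big")
imsi = b"460001234567890"                           # example value only
cipher = bytes(b ^ key[i % len(key)] for i, b in enumerate(imsi))   # stand-in for AES-256
print(cipher.hex())
```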
Case 2: MME Changed The second case is that the MME is changed, but the old MME has saved the information about the UE, so it transmits the information about the UE to the new MME. The UE uses the old MME's information to encrypt the GUTI, so the new MME obtains the encryption key from the old MME using the challenge-response method. The security protocol for the second case is shown in Figure 7. Case 3: MME Changed and IMSI Needed The third case is that the MME connected during the initial connection has been changed to a new MME, and there is no information about the UE in the MME connected during the initial connection. In a re-access, the initial attach process carries out authentication using the GUTI to protect the IMSI. The process of transmitting the GUTI is the same as the IMSI transmission process of Section 3.1. When the MME is changed, the new MME requests the information about the UE from the old MME according to information such as the GUTI, NAS-MAC and NAS Seq. No. If there is no information about the relevant UE in the old MME, the new MME requests the IMSI from the UE. In this process, to safely transmit the IMSI, the proposed protocol encrypts the IMSI using the series of values generated when the GUTI was transmitted. For the encryption, the UE hashes K_GUTI and RN_UE_2 (RN, random number) to generate a key, K_IMSI, and encrypts the IMSI using this key for transmission to the MME. After the transmission of the IMSI, the "authentication", "NAS security setup", "location update" and "EPS session establishment" processes are carried out. The security protocol for the third case is shown in Figure 8. Security Analysis The "initial attach for UE" process specified in LTE Standards Release 12 transmits the parameters that identify the UE in plain text, whereas the proposed security protocol encrypts and safely transmits the relevant identification parameters. A comparative security analysis is shown in Table 2. The proposed security protocols generate the key value used to encrypt the identification parameters through the challenge-response process. The key used to encrypt the IMSI and GUTI is defined as the bits, from the numerical progressions generated through the f() function secretly shared between the UE and the MME, that are not used in the challenge-response process. This key value is defined only in the UE and the MME and is never transmitted to the outside during communication. Therefore, to learn the encrypted IMSI and GUTI, an attacker can find the identification parameters only by attacking the encryption algorithm, such as AES-256. The key value encrypting the RNTI is transmitted through the challenge-response process, but the bit positions keep changing during transmission. Even if an attacker collects the bit string through an attack such as tapping, the probability of finding the key value of n bits in total is (1/4)^n. Error Detection and Verification In the proposed security protocols, since the challenge-response and the encryption are carried out through the f() function secretly shared between the UE and the MME and the key definition method, the UE and the MME can verify whether the other is a legitimate entity. In addition, in the challenge-response process, the UE can detect errors through a mismatch of ad_i, and the MME can detect errors through mismatches of r0_i and r1_i.
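Case 3's key derivation (hashing K_GUTI with RN_UE_2 to obtain K_IMSI) can be sketched in a few lines; the hash choice, the byte lengths and the XOR stand-in for the actual cipher are assumptions, consistent with the earlier sketch.

```python
# Illustrative sketch of the Case 3 key derivation: K_IMSI = hash(K_GUTI, RN_UE_2).
# SHA-256 and the XOR stand-in for the cipher are assumptions.
import hashlib, secrets

k_guti, rn_ue_2 = secrets.token_bytes(32), secrets.token_bytes(16)
k_imsi = hashlib.sha256(k_guti + rn_ue_2).digest()           # key for IMSI encryption
imsi = b"460001234567890"                                    # example value only
cipher = bytes(b ^ k_imsi[i % len(k_imsi)] for i, b in enumerate(imsi))  # placeholder for AES
```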
Reliability In order to secure the credibility of the new MME, communication is only possible when the UE, the old MME and the new MME have cross-authenticated each other. In this process, as the challenge-response method supports error detection, even a one-bit mis-transmission will stop the communication. Additionally, since the IMSI is transmitted to the new MME using values mutually shared by the UE and the old MME during past communication, the credibility of the new MME can be secured. In essence, as the UE moves between MMEs the key maintains continuity, like a hash chain, so safety against IMSI takeover through a fake MME is enhanced. Conclusions In this paper, a security protocol was proposed in order to solve the vulnerability of the unique identification values being transmitted as clear text, which has repeatedly been pointed out in LTE standards and related technical documents, and to serve as a basic study for the mobile communication technology to be used in the future cyber world. The proposed security protocol was designed to safely transmit the IMSI, RNTI and GUTI in a variety of initial attach processes when the UE accesses the LTE network. The proposed security protocol generates a key through the challenge-response method to encrypt and transmit the unique IDs and supports error detection and verification. As a result of the performance analysis, the security protocol encrypted and safely transmitted the vulnerable parameters and showed an average performance of 32.0% with respect to the VoIP baseline of 100%, demanding a lower rate of delay, so the safety and performance of the proposed protocol were found to be efficient. After all of the analyses, the proposed security protocols had less overhead and fully satisfied the privacy requirements and the maximum permissible delay defined in the LTE standards. Table 1. The terms and symbols used in the proposed security protocol. Table 6. Summary of performance analysis.
4,093.6
2014-12-12T00:00:00.000
[ "Computer Science" ]
Piezoelectric Energy Harvesting Based on Bi-Stable Composite Laminate Energy harvesting employing nonlinear systems offers considerable advantages compared to linear systems in the field of broadband energy harvesting. Bi-stable piezoelectric energy harvesters have proved to be good candidates for broadband frequency harvesting due to their highly geometrically nonlinear response during vibration. These bi-stable energy harvesters consist of a bi-stable structure and piezoelectric transducers, and the nonlinear response depends on the host bi-stable structure. A possible category of bi-stable structures is the bi-stable composite laminate, which has two stable equilibrium states resulting from the mismatch in thermal expansion coefficients between plies. It has received considerable interest for deformable structures, since the "snap-through" between the two stable states produces a significant deformation without a continuous energy supply. Combining piezoelectric transducers with a bi-stable composite laminate is a feasible way to obtain a bi-stable energy harvester. Piezoelectric energy harvesters based on bi-stable composite laminates have been shown to exhibit high levels of power output over a wide range of frequencies. This chapter aims to summarize and review the various approaches to piezoelectric energy harvesting based on bi-stable composite laminates. Introduction With the advent of low-power, wireless and autonomous sensors, energy harvesting, which is considered a potential way to replace batteries and realize self-powering, has become a highly active research area [1,2]. Piezoelectric materials can be embedded in a host structure to convert the strain energy of the host structure into electrical energy through the direct piezoelectric effect, so the piezoelectric energy harvesting technique has become one of the primary methods of harvesting vibration energy. Compared with electromagnetic and electrostatic methods, the main advantages of the piezoelectric energy harvesting technique are its larger power density and higher flexibility for integration into one system [3]. Because of its simple structure and the ease of producing relatively high average strain for a given force input, the typical piezoelectric energy harvester, consisting of a cantilever beam with piezoelectric elements attached near its clamped end, is widely analyzed and designed. The cantilever-type harvester operates on the fundamental principle of linear resonance: the maximum energy transduction from the vibration source to the harvester is achieved by tuning the host beam's natural frequencies to be equal or very close to the excitation frequency. Therefore, the frequency bandwidth of a linear harvester is usually limited to a specific range, and the power output of linear harvesters is reduced drastically when the frequency of the vibration source deviates even slightly from the resonant frequency of the harvester. However, in many applications the energy of ambient vibrations is distributed over a broad spectrum of frequencies, or the dominant frequencies drift with time. Several solutions have been presented to solve this problem. A tuning mechanism is a potential method for broadband energy harvesting, which uses passive or active means to vary the fundamental frequency of the harvester to match the dominant frequency of the vibration source [4,5].
However, it is not very efficient when the frequency of vibration is random or varies rapidly, and a tuning mechanism requires external power or a complicated design. Nonlinear harvesters have been proposed for broadband energy harvesting, benefiting from the ability of nonlinearities to extend the coupling between the excitation and a harmonic oscillator to a broader range of frequencies. The nonlinear harvester with a bi-stable potential has proved to be a good candidate for broadband energy harvesting. It has been shown that, when carefully designed, bi-stable energy harvesters can provide significant power levels over a wide range of frequencies under steady-state harmonic excitation. Bi-stable systems have two stable equilibrium positions between which they may snap through under a certain level of excitation. Generally, a bi-stable harvester can be achieved by applying a magnetic force [6] or an axial load [7] to buckle the piezoelectric beam, and the potential energy of the bi-stability can be tuned by the magnitude of the magnetic force or axial load. However, such a magnetic bi-stable system would require an obtrusive arrangement of external magnets and could generate unwanted electromagnetic fields [8]. As an alternative, the bi-stable composite laminate has been developed for broadband energy harvesting. This chapter reviews piezoelectric energy harvesting technologies based on two different bi-stable composite laminates and identifies the potential benefits and drawbacks of the existing energy harvesting techniques that use them. Bi-stable composite laminate The bi-stable asymmetric composite laminate was first reported by Hyer in 1981 [9,10]. He found that thin asymmetric laminates may have two stable cylindrical shapes, which are attributed to the thermal stresses caused by the difference in thermal expansion within the laminate. Because of the limitation of classical lamination theory, which predicts that all asymmetric laminates have a curved saddle shape in which the two curvatures are always of opposite sign, Hyer developed a theory to explain the characteristics of the curved shapes of thin asymmetric laminates. This theory introduced the von Karman geometric nonlinearities into classical lamination theory to capture the room-temperature shapes, and the Rayleigh-Ritz method based on the concept of minimum total potential energy was used to obtain the curved shapes. The prediction of the room-temperature shapes of asymmetric laminates became one of the major research directions of the following decades. The analytical model can be improved with more reasonable hypotheses for the displacements and strains, so that accuracy and efficiency are well balanced [11,12]. Additionally, the development of the finite element method has made it possible to predict the room-temperature shapes of asymmetric laminates with more complex geometric conditions [13,14]. Another interesting feature of the bi-stable laminate is its snap-through behavior, which has been studied extensively [15,16]. Recently, the potential applications of the bi-stable laminate have received considerable attention. One potential application is the morphing structure. The main advantage of the bi-stable laminate as a morphing structure is that it has two stable positions that the structure can maintain without demanding external power. Moreover, only a very small energy input is needed to trigger a snap from one stable position to the other with a relatively large deflection.
Many researchers have studied the feasibility of bi-stable laminates as morphing structures from different perspectives, such as actuation methods [17] and dynamics [18]. Another potential application of the bi-stable laminate is energy harvesting. The bi-stable laminate can provide large structural deformation resulting from its snap-through behavior, so piezoelectric elements attached to the surface of a bi-stable laminate experience large strain and produce high electric power. Compared to using a magnetic mechanism, this bi-stable energy harvester has four main advantages: (1) the arrangement can be designed to occupy a smaller space; (2) there are no magnetic fields; (3) the laminate can be easily combined with piezoelectric materials; (4) there is potential to control the harvester response by adjusting the lay-up and geometry [19]. According to the lay-up and stable shapes, bi-stable harvesters based on composite laminates can be classified into two categories. The first is the asymmetric laminate-based harvester. This asymmetric bi-stable laminate is made from a carbon fiber reinforced polymer (CFRP) with a [0/90]_T layup, as shown in Figure 1(a). For the asymmetric laminate, the residual thermal stress causing the curved deformation results from the differences in the thermal expansion coefficients of the carbon fiber and the epoxy matrix during cooling from an elevated cure temperature to room temperature. When the ratio of edge length to thickness increases to a specific value, asymmetric laminates have two approximately cylindrical stable shapes, as shown in Figure 1(b). The curvature directions of the two shapes are orthogonal to each other, while the curvature magnitudes are equal. The other category is the hybrid symmetric laminate-based harvester. Li et al. [20] presented a bi-stable hybrid symmetric laminate (BHSL) combining aluminum plies and CFRP plies, in which the bi-stability is induced by the difference in thermal expansion coefficients between aluminum and CFRP. A typical stacking sequence of the BHSL is shown in Figure 2(a). There are two types of regions: the hybrid region with [90_2/Al/90_2]_T and the composite region with [90_2/0_2/90_2]_T. The stacking sequence of both regions is symmetric. This hybrid symmetric laminate exhibits two stable cylindrical shapes with the same longitudinal curvature in opposite directions, as shown in Figure 2. 3. Piezoelectric energy harvesting based on bi-stable laminate 3.1. Asymmetric laminate-based harvester The asymmetric laminate has been investigated for several decades. The prediction of room-temperature shapes and bifurcation phenomena with different parameters (layup, geometry, thickness and so on) can be achieved with a reasonable level of accuracy using reliable theoretical models and the finite element method. The new challenge for the asymmetric laminate is to find a suitable application scenario matching its particular features. The application to morphing structures was presented first, for example morphing airfoils [21] and a trailing edge box [22]. Though the asymmetric laminate is a good candidate for morphing structures, how to trigger the snap-through behavior becomes a problem. To address this problem, many studies have investigated the snap-through motion of asymmetric laminates with different methods, such as piezoelectric actuation [23], shape-memory-alloy actuation [24] and heating actuation [25].
The method adopted in piezoelectric actuation is to apply a voltage to piezoelectric patches attached to the surface of the laminate so that the patches deform; the asymmetric laminate then snaps once the applied voltage is high enough. Piezoelectric actuation utilizes the converse piezoelectric effect. Conversely, a voltage is generated by the piezoelectric patches through the direct piezoelectric effect when the asymmetric laminate deforms under external excitation. Therefore, the application of the asymmetric laminate extends to piezoelectric energy harvesting. Arrieta et al. [26] first presented a piezoelectric nonlinear broadband energy harvester based on an asymmetric laminate in 2010. Only experimental results were reported in this early work. A square 200 × 200 mm asymmetric laminate with a [90_2/0_2]_T layup was mounted from its center to an electromechanical shaker, and four PZT-5A flexible piezoelectric patches were bonded to the surface of the laminate, as shown in Figure 3. Five types of nonlinear responses were observed in the experiments. As shown in Figure 4(a), these five responses are linear oscillation, large-amplitude limit cycle oscillation (LCO), chaotic oscillation, intermittency oscillation and 1/2 subharmonic oscillation. Large-amplitude LCO, chaotic oscillation and intermittency oscillation involve snap-through between the two stable shapes, so the voltage output corresponding to these three types of responses is appreciably higher than that of the remaining two; accordingly, the output powers of these three types are also higher, as shown in Figure 4(b). The intermittency and large-amplitude LCO responses can give 34 and 27 mW, respectively, under a forcing acceleration level of 2.0 g. For the chaotic oscillation, the maximum average power was found to be 9 mW. The experimental results of this work illustrate that the asymmetric bi-stable laminate has rich dynamics for nonlinear energy harvesting and is a potential candidate for broadband energy harvesting by combining the ranges where large-amplitude LCO, chaotic oscillation and intermittency oscillation occur. Betts et al. [8,27,28] published several works in 2012 on optimizing the configuration of the asymmetric laminate to improve electrical power generation. The piezo-laminate was considered in the actuation arrangement shown in Figure 5: the actuation force was applied at the center of the laminate surface while the displacements of all four corners were constrained in the z-direction. The optimization was based on the static states of the system, so the electrical energy output depended on the snap-through between the two stable shapes. The M8557-P1 MFC was employed as the piezoelectric material. Maximization of the electrical energy generated in two sets of four piezoelectric layers was the optimization objective, under several constraints such as the laminate surface area, the piezoelectric strain limit and bi-stability. Four variables were adopted: ply orientation, single-ply thickness, aspect ratio and piezoelectric surface area. It was found that square cross-ply laminates [0_P/0/90/90_P]_T offered the largest energy outputs when the laminate curvatures were maximized and aligned with the piezoelectric polarization axis. The optimization results are meaningful for the design of bi-stable energy harvesters based on asymmetric laminates, but the optimization was carried out from a static perspective.
Energy harvesting from vibration is a dynamic problem, so the optimization results are not very suitable for dynamic situations. Therefore, in the same year, Betts et al. [29] carried out preliminary work on the dynamic transition between stable states as the piezoelectric laminate was exposed to an oscillating mechanical force. It was found that thicker laminates produced higher levels of energy when snap-through was fully induced. Furthermore, the dynamic loading causes higher strain levels in the piezoelectric transducers, which may cause their failure. (Figure 5 shows the actuation arrangement for a [0_P/0/90/90/90_P]_T laminate with 40% piezoelectric coverage [27].) The following year, Arrieta et al. [30,31] presented a new concept for broadband energy harvesting. They changed the boundary condition of the asymmetric laminate from center fixation to a cantilever. A symmetric-asymmetric layup was designed to realize the cantilevered boundary condition, as shown in Figure 6(a): the symmetric layup was used for clamping, and the asymmetric layup provided the bi-stable laminate. Two flexible piezoelectric transducers (Piezo QP16n) were bonded to the surface of the asymmetric laminate, close to the clamped root. The two stable shapes of this cantilevered bi-stable laminate are shown in Figure 6(b). (Figure 6 shows (a) the symmetric-asymmetric lay-up enabling the cantilever configuration and the positioning of the piezoelectric transducers [30], and (b) the stable shapes and critical displacement for a cantilevered bi-stable laminate [31].) There are three significant advantages to this design. Firstly, this arrangement allows the piezoelectric transducers to exploit the high strains developed close to the clamped root; secondly, the cantilever configuration results in large displacements, given the considerable distance from the root to the tip; thirdly, the cantilevered configuration allows for more natural integration with the host structure. Two specimens of different lengths were fabricated and tested. The nonlinear behaviors of the two specimens were studied by frequency sweeps at a given base acceleration, as shown in Figure 7, where blue dots and red crosses show the frequency sweep with the initial condition on state 1 and state 2, respectively, and arrows mark regions of cross-well dynamics. Due to the cantilevered boundary condition, the two stable shapes are entirely asymmetric, which leads to different nonlinear responses depending on the initial state. Specimen B, with the longer length, has a broader range of cross-well oscillation than specimen A. The cantilevered bi-stable laminate can be triggered to snap at a low level of base excitation. Additionally, the obtained power of this design ranges from 35 to 55 mW when a Synchronized Switch Harvesting on Inductor (SSHI) circuit is adopted. In the same year, Betts et al. [32] extended their work to an experimental investigation of the dynamic response and power generation characteristics of a piezoelectric energy harvester based on a square [0/90]_T laminate of size 190 × 190 mm. A single piezoelectric Macro Fiber Composite (MFC) layer (M8585-P2, 85 × 85 mm) was attached to the laminate surface, as shown in Figure 8(a). Additional masses were attached to the four corners of the laminate to increase the achievable curvatures and help snap-through during oscillation. The whole device was mounted to the shaker from its center, as shown in Figure 8(b). The open-circuit voltage was measured, and the three-dimensional displacements were captured by a Digital Image Correlation (DIC) system.
They obtained experimental results similar to those found by Arrieta et al. [26]. The experimental results revealed that the modes of oscillation are sensitive to the frequency and amplitude of the external vibration, as shown in Figure 9. There are three oscillation patterns: (1) small-amplitude oscillation without snap-through; (2) uniform and non-uniform intermittent snap-through; (3) repeatable snap-through. The largest power output, 3.2 mW, was found when snap-through occurred. In 2014, Betts et al. [33] continued their work and presented an analytical model and experimental characterization of a piezoelectric bi-stable laminate of 200 × 200 × 0.5 mm with a [0/90]_T stacking sequence. As before, a single flexible Macro Fiber Composite (MFC) was employed as the piezoelectric transducer and attached to one surface of the laminate. The analytical model was an extension of the model presented in Ref. [34]. This analytical model can capture the mix of nonlinear modes in the response of this bi-stable piezoelectric energy harvester subjected to mechanical vibrations. As before, these modes are continuous snap-through, intermittent snap-through (both periodic and chaotic), and small-amplitude oscillations, and they were validated by experiments. A map of these modes while varying the acceleration (g level) and frequency of excitation was obtained from the experimental results, as shown in Figure 10(a). The results show that the continuous snap-through mode desired for energy harvesting requires a high acceleration level, while intermittent snap-through modes can cover a broader range of frequencies. The average powers at different frequencies and accelerations were measured and compared with the analytical results. The analytical results are higher than the experimental results; the analytical and experimental average powers for 10 g excitation are shown in Figure 10(b). This design can cover a bandwidth of 19.4 Hz involving snap-through behavior at an acceleration of 10 g, and the peak power is as high as 244 mW. Although the power output and bandwidth are excellent, the required acceleration level is much higher than in the work of Arrieta et al. [30]. In 2015, Syta et al. [35] employed the same design as Betts et al. [33]; the difference is that the "0-1 test" was introduced in the experimental work to identify the chaotic dynamics of this bi-stable energy harvester. The following year, Syta et al. [36] examined the modal responses of this bi-stable electro-mechanical energy harvester with the same design by means of the Fourier spectrum and Recurrence Quantification Analysis (RQA); RQA was used to identify periodic and chaotic responses from reasonably short time series.
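The oscillation regimes discussed above (single-well oscillation, intermittent snap-through and continuous snap-through) are commonly illustrated with a lumped bi-stable, Duffing-type oscillator under harmonic base excitation. The script below is a generic sketch of such a model with arbitrary illustrative parameters; it is not the specific model of any of the cited works.

```python
# Generic bi-stable (Duffing-type) single-degree-of-freedom oscillator under base excitation.
# Parameters are illustrative only and are not fitted to any laminate from the cited studies.
import numpy as np
from scipy.integrate import solve_ivp

zeta, w0, A, w_exc = 0.02, 2 * np.pi * 20.0, 30.0, 2 * np.pi * 18.0  # damping, scale freq, drive
k1, k3 = -w0**2, (w0**2) / (0.01**2)    # negative linear + cubic stiffness -> wells at x = ±10 mm

def rhs(t, y):
    x, v = y
    accel_base = A * np.sin(w_exc * t)                      # harmonic base acceleration
    return [v, -2 * zeta * w0 * v - k1 * x - k3 * x**3 + accel_base]

sol = solve_ivp(rhs, (0.0, 5.0), [0.01, 0.0], t_eval=np.linspace(0, 5, 5000), rtol=1e-8)
x = sol.y[0]
crossings = np.sum(np.diff(np.sign(x)) != 0)                # sign changes ~ snap-through events
print("well crossings:", crossings)
```

Sweeping the drive amplitude A in this toy model reproduces the qualitative transition from single-well motion to intermittent and then continuous well crossings.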
Two optimized RPBLs were identified by finite element analysis, which showed that the frequency bandwidth of the LCO can be broadened by decreasing the deformation needed to trigger the local snap-through. Although this design lowers the required excitation, its power output of 0.98 mW is relatively low.

In 2014, Harris et al. [38] manufactured two piezoelectric energy harvesters and compared their performance experimentally. One of them is linear with an asymmetric layup, as shown in Figure 12(a); the other is bi-stable with an asymmetric layup, as shown in Figure 12(b), and its two stable states are also shown in Figure 12. The results showed that the bi-stable harvester had higher power output over a broader range of frequencies at low frequency and low excitation, whereas the linear one had the potential to produce a higher peak power but over a narrower bandwidth. In 2016, Harris et al. [19] continued their work and investigated the dynamics of this bi-stable energy harvester using multiscale entropy and the "0-1" test. As before, the oscillation modes, including single-well oscillation, periodic and chaotic intermittent snap-through, and continuous periodic snap-through, were captured in experiments. Multiscale entropy and the "0-1" test can be helpful in characterizing the response. One benefit of this analysis method is that the continuous plate system may be characterized by a single variable (voltage or displacement). In the same year, Harris et al. [47] added magnets to the bi-stable system to lower the level of excitation that triggers snap-through of this bi-stable cantilever from one stable state to another, as shown in Figure 13. The system performance can be adjusted by varying the separation between the magnets. The scenario without magnets was taken as a control, and different separations were measured. The results showed that this approach could adjust the fundamental frequency of the harvester; the magnets increased the bandwidth, with a lower peak power, at low acceleration levels and increased the peak power, with a narrower bandwidth, at higher excitation levels. Additionally, a single-degree-of-freedom (SDOF) model was established based on the experimental load-deflection characteristic.

Hybrid symmetric laminate-based harvester

Besides the traditional asymmetric bi-stable laminate, bi-stable laminates with novel layups offer new ways to design energy harvesters. In 2015, Pan et al. [40] presented a bi-stable piezoelectric energy harvester (BPEH) based on a bi-stable hybrid symmetric laminate (BHSL). The most apparent difference between the asymmetric bi-stable laminate and the BHSL is the stable shape: the BHSL has two double-curved shapes with identical curvatures of opposite sign, as shown in Figure 2. It has better designability than the traditional asymmetric laminate owing to the variable position of the metallic layer. Moreover, the cantilever-type boundary condition can be realized thanks to its unique symmetric stable shapes. In this work, to obtain more deformation, 20 pieces of PZT-5H were bonded to the middle of the two BHSL surfaces, where the curvatures are distributed uniformly. Because of the electrical conductivity of the carbon fiber, the laminate can serve as an electrode, and a parallel connection was employed for the piezoelectric transducers.
Two types of stacking sequences and two types of piezoelectric transducer shapes with identical areas were chosen, and four types of bi-stable harvesters and one linear harvester (LPEH) as a control sample were designed, as shown in Figure 14.

Figure 14. Stable shapes of the BPEH and LPEH samples [40].

Through finite element analysis, it was found that the stress in the PZTs consists of bending stress and residual thermal stress, and that the stable shapes cannot reflect the deformation of the PZT. A rough power measurement, in which the device was shaken by hand to actuate continuous snap-through behavior, was employed. A maximum power of 37.06 mW with an optimal resistance of 40 kΩ was obtained at 5 Hz from the BPEH with the highest open-circuit voltage. Additionally, the comparison between the BPEHs and the LPEH showed that the BPEH can take advantage of each piece of PZT: the PZTs on the BHSL undergo uniform deformations during the snap-through process, which allows the BPEH to output much higher power. This preliminary investigation showed that the BPEH has good potential as an energy harvester.

In 2017, Pan et al. [41] continued their work and investigated the dynamics of this bi-stable energy harvester. The bi-stable laminate was redesigned with a smaller size of 100 × 40 mm, and 8 pieces of PZT-5H of 1.0 × 1.0 mm were bonded in the middle of the laminate surface as before. Three types of linear harvesters with two different layups and two types of PZT positions were designed as control samples, as shown in Figure 15. Forward and reverse sweeps at five acceleration levels were carried out for the BPEH, as shown in Figure 16. Unlike the responses of the three linear harvesters, which have only one sharp peak, the responses of the BPEH exhibited nonlinear characteristics. The BPEH had a softening response under low levels of excitation, and it could switch to a hardening response when the excitation increased beyond a certain level. Two oscillation modes were observed in the experiments: single-well oscillation and cross-well oscillation. The BPEH only exhibits the single-well oscillation mode under the softening response at relatively low excitation levels. When the response switches to hardening, the BPEH can repeatedly travel between its two potential wells, i.e., it operates in the cross-well oscillation mode. When the BPEH is in the cross-well oscillation mode, the output voltage is much higher than in the single-well oscillation mode. The responses are also affected by the sweep direction: the frequency range of the high-voltage branch in the forward sweep is wider than that in the reverse sweep when the BPEH exhibits hardening-type nonlinearity. Compared to the LPEHs, the BPEH has a higher output voltage over a wider frequency range under the same excitation level. In terms of output power, the BPEH also performs markedly better than the LPEHs: the maximum output power of the BPEH subjected to 5 g acceleration at 36 Hz is 5.7 times more than that of LPEH-1. This can be attributed to two factors, the higher voltage and the more uniform strains in the piezoelectric elements. The average-power results showed that the hardening responses help the BPEH extend its high-output bandwidth, and that the output power is associated with the excitation frequency and acceleration. All the experimental results demonstrated that the BPEH has the potential to harvest vibration energy under broadband excitations. In the same year, Pan et al. [42] analyzed the influence of the layup design on the performance of this bi-stable energy harvester.
The initial voltage induced by the stable configuration and the longitudinal curvature were calculated for different lay-ups and hybrid widths by a static finite element analysis. The results showed that the lay-up varies the initial voltage and the longitudinal curvature in opposite directions, whereas the hybrid width adjusts these two variables in the same direction, as shown in Figure 17. The finite element analysis also showed that the initial voltage of the BPEH depends on the strain variations in the two directions. Three types of BPEHs were manufactured to verify the analytical results. In the experiments, three types of oscillation modes were observed: continuous cross-well vibration, single-well vibration, and intermittent cross-well vibration. The inherent characteristics of the BPEH determine the frequency of the characteristic voltage, and the stable configuration affects the vibration mode. The BPEH with the lowest curvature can undergo continuous snap-through vibration, but the BPEH with the highest initial voltage can only undergo single-well vibration. The layup and hybrid width can affect the inherent characteristics and the configurations at the same time. A combination of lower frequency and lower longitudinal curvature makes it easier to obtain the continuous cross-well vibration desired for energy harvesting. The three types of BPEH were actuated by two methods, a hand-driven method and a shaker-driven method. The hand-driven method was used to confirm the contribution of the initial voltage to the output power. As shown in Figure 18(a), the BPEH with the higher initial voltage generates higher power under the hand-driven method, as expected. However, different results were found with the shaker-driven method, as shown in Figure 18(b). The BPEH under continuous cross-well vibration outputs the highest power, whereas the BPEH with the highest initial voltage can only undergo single-well vibration and therefore produces lower power, and the BPEH under intermittent cross-well vibration generates the lowest maximum power.

Summary

This chapter has reviewed piezoelectric energy harvesting techniques based on bi-stable composite laminates. As an essential branch of bi-stable energy harvesting, the composite laminate has unique advantages, such as inherent nonlinearity and easy combination with piezoelectric materials. A comparison of the reviewed piezoelectric energy harvesters based on bi-stable composite laminates is shown in Table 1. Most of the designs are based on the asymmetric laminate because of its more mature theory and design methods. However, the asymmetric laminate exhibits a size-dependent bifurcation of its stable shapes, so current designs are relatively large and difficult to miniaturize, which may limit further applications. The hybrid laminate allows a more flexible design and has excellent potential for size reduction; however, theories covering both the statics and the dynamics of the hybrid laminate are still lacking. In addition, most of the designs employ composite piezoelectric materials (MFC) as the transducer, whose useful piezoelectric volume is limited; this runs counter to the objective of decreasing the size while maintaining the output. So far, the excitation level required for large-amplitude oscillation (snap-through) remains high. How to lower the required excitation while guaranteeing reliability, stability, and performance is a challenging task in both theory and practice.
The harvesters based on bi-stable composite laminates have a relatively high output power (mW level), which can satisfy the requirements of conventional wireless sensors. However, it is difficult to compare their quality directly because the power is characterized by different methods across studies. Portable designs and low excitation demand remain hot topics in this area of energy harvesting. Another direction for harvesters based on bi-stable laminates is finding suitable application scenarios, such as large-amplitude, low-frequency vibration.
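To make the single-well and cross-well behaviors discussed in this chapter more concrete, a lumped-parameter (SDOF) bi-stable harvester model of the kind Harris et al. established from their load-deflection data can be simulated directly. The following Python sketch is a generic illustration with assumed parameter values (mass, damping, stiffnesses, coupling, capacitance, and load); it does not reproduce any of the reviewed devices.

import numpy as np
from scipy.integrate import solve_ivp

# Assumed (illustrative) parameters of a Duffing-type bi-stable piezoelectric harvester
m = 0.01        # effective mass, kg
c = 0.05        # viscous damping, N s/m
k1 = 50.0       # magnitude of the negative linear stiffness, N/m (gives bi-stability)
k3 = 1.0e6      # cubic stiffness, N/m^3
theta = 1.0e-3  # electromechanical coupling, N/V
Cp = 50e-9      # piezoelectric capacitance, F
R = 100e3       # load resistance, ohm
A = 30.0        # base acceleration amplitude, m/s^2 (about 3 g)
f = 20.0        # excitation frequency, Hz
w = 2 * np.pi * f

def rhs(t, y):
    x, xdot, v = y
    # Bi-stable restoring force k1*x - k3*x^3 creates two potential wells
    force = k1 * x - k3 * x**3 - c * xdot - theta * v - m * A * np.cos(w * t)
    vdot = (theta * xdot - v / R) / Cp   # load-circuit equation
    return [xdot, force / m, vdot]

# Start at one stable equilibrium, x0 = sqrt(k1/k3)
sol = solve_ivp(rhs, (0.0, 5.0), [np.sqrt(k1 / k3), 0.0, 0.0], max_step=1e-4)
x, v = sol.y[0], sol.y[2]
half = len(sol.t) // 2
power = np.mean(v[half:] ** 2) / R                       # mean harvested power after transients
crossings = np.sum(np.diff(np.sign(x[half:])) != 0)      # zero crossings indicate cross-well motion
print(f"mean power = {power * 1e3:.3f} mW, zero crossings = {crossings}")

Varying the excitation amplitude A and frequency f in such a model reproduces, qualitatively, the single-well, intermittent, and continuous snap-through regimes mapped experimentally in the studies above.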
6,616.4
2018-06-27T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
A Semantic Ontology for Disaster Trail Management System

Disasters, whether natural or human-made, leave a lasting impact on human lives and require mitigation measures. In the past, millions of human beings lost their lives and properties in disasters. Information and Communication Technology provides many solutions. The issue with the disaster management systems developed so far is their semantic inefficiency, which causes failure in producing dynamic inferences. Here comes the role of semantic web technology, which helps to retrieve useful information. A semantic web-based intelligent and self-administered framework utilizes XML, RDF, and ontologies for a semantic presentation of data. The ontology establishes fundamental rules for data searching in the unstructured world, i.e., the World Wide Web. Afterward, these rules are utilized for data extraction and reasoning purposes. Many disaster-related ontologies have been studied; however, none conceptualizes the domain comprehensively. Some of the domain ontologies are intended for a precise end goal, such as disaster plans. Others have been developed for the emergency operation center or for the recognition and characterization of objects in a calamity scene. A few ontologies depend on upper ontologies that are excessively abstract and exceptionally difficult to grasp by individuals who are not conversant with the theories behind the upper ontologies. The semantic web-based disaster trail management ontology developed here covers almost all vital facets of disasters, such as disaster type, disaster location, disaster time, losses (including casualties and infrastructure loss), services, service providers, relief items, and so forth. The objectives of this research were to identify the requirements of a disaster ontology, to construct the ontology, and to evaluate the ontology developed for Disaster Trail Management. The ontology was assessed efficaciously via competency questions: externally by the domain experts and internally with the help of SPARQL queries.

Keywords—Semantic web; ontology; information retrieval; disaster trail management

I. INTRODUCTION

A large number of human beings are affected by disasters every year. In disaster-affected areas, the survivors suffer greatly due to interruptions in essential services like health care, communication, and transportation. Infrastructural damage can also affect the food and water supply. Although humankind has made considerable progress in science, engineering, and technology, it is still unable to control the occurrence of disasters. All efforts so far aim at managing hazards, mitigating them, and reducing the impact of disasters. Due to the devastating effects of disasters on human lives, catastrophe and crisis management have always been given vital importance. Disaster management is the planning, arrangement, and deployment of resources with the precise aim of reducing a disaster's damaging effects. The socio-economic conditions of the affected area and the existence of an effective information system regarding the occurrence of the emergency are the significant factors that influence this management. Timely information plays a vital role in reducing disaster impact up to a certain level. The arrangement and organization of resources and efforts for mitigation depend largely upon the situation of the affected areas and the effects of the disaster on the local population.
Efforts are made to gather, organize, and disseminate factual information to the various stakeholders taking part in the mitigation process. Efficiency in the deployment of resources is one of the significant concerns in disaster management, as it can minimize the disastrous aftereffects to a great extent.

A. Overview

The word "disaster" itself shows that it is something troublesome that needs to be avoided, or that requires mitigation to reduce its outcomes if it ever happens again. Disaster mitigation focuses on long-term measures for diminishing risks. These measures can be structural or non-structural. Developing technological solutions and training key personnel are examples of structural measures, whereas legislation and communicating potential threats to the public are considered non-structural measures. The disaster mitigation or management process can be divided into three major phases. "The word "disaster" comes from ancient Greek words dis means "bad" and aster means "star". The astrological sense of disaster based on calamity blamed on star positions." (ewonago.wordpress.com)

Ontology is gaining importance for providing clear and definite search by focusing on the concepts in document collections and data sources. Ontologies are designed to help improve communication, whether between human and machine or between computers. In other words, ontology helps in managing knowledge. An ontology primarily comprises concepts (classes), properties (attributes), and possible relationships (slots) among concepts. There may exist some constraints (facets) on slots, or cardinalities on relationships among concepts. Collectively, the components and instances (individuals) form the knowledge base that helps in reasoning. Various types of ontologies have been defined and discussed by researchers. Many existing disaster ontologies are designed primarily to formulate sketches of disaster plans. Operational centers set up for emergencies may be aided by task ontologies, for example for the study and planning of objects in disaster scenes. Upper ontologies are enigmatic and abstract and hence hamper understanding for those who are not familiar with their application. The developed DTM ontology covers nearly all salient facets, such as the nature of the hazard, occurrence date, damages and losses (including the loss of infrastructure), refugee camps and the facilities available or required in them, rehabilitation tasks and their associated contributors, location, relief items, and so forth.

Disasters cannot be avoided, but their effects can be minimized by active warning systems and better disaster trail management systems. In disaster management, the real-time availability of information can improve result-oriented rescue operations. People generate an enormous mass of information on blogs and social media that can be employed to aid relief services. A system can be devised to electronically extract the precise information related to disaster damages and to filter, arrange, and format it appropriately so that it can be utilized in disaster trail management. Consequently, semantic web technologies can play a vigorous role in providing up-to-date information that can then be disseminated to other stakeholders. Disaster trail management is a challenging task due to its complexity and enormous requirements.

"Insufficient coping capacity, pathetic communication, and collaboration between different concerned departments, lack of community awareness, resources gathering, and insufficient budgeting, lack of technology awareness, adoption and integration are the most common barriers in disaster management domain" [4]. Many efforts have been made in the field of ICT to come to the aid of affected populations. Some ICT solutions are available, but they depend on hand-operated data entry. A massive amount of current data shared on the internet can be searched and utilized. Information from the internet can be retrieved via search engines, either keyword-based or semantic-based. A semantic-based search is more relevant to the user's information needs, as it makes use of an ontology to capture relations among query words and understand their meaning, instead of searching only keywords and using page-ranking algorithms, which are the basis of conventional keyword-based search. The machine-readable semantic features of the ontology result in more contextual and significant search output. Moreover, no domain ontology for disaster trail management has so far been developed and evaluated by domain experts from earthquake-prone areas for completeness and relevance by scrutinizing competency questions. Some disaster-related ontology research exists, but it does not serve the purpose of disaster trail management. To enhance the previous research work and to overcome the highlighted issues, a disaster trail management ontology was needed to benefit the relevant user group. This research will not only play its role in improving ontology development in the disaster management domain but also add value to information retrieval in general. The remainder of the paper is organized as follows: Section II reviews the existing ontologies for disaster management; Section III discusses the proposed disaster trail management ontology; Section IV deals with the ontology evaluation; and the last section is dedicated to the conclusion and future work.

II. LITERATURE REVIEW

As its title shows, this research involves the terms ontology and disaster management; the literature review discusses both, one by one. The section starts with previous ontology work. The discussion then focuses on disaster-related ontologies and disaster management systems, beginning with a brief description of the disaster management phases, followed by existing technological and semantic web-based solutions for disaster management.

A. Ontology: State of the Art

Information retrieval systems not only have to deal with the structural complexity of complex databases but also with the semantic relationships between data, which encourages the use of ontologies for knowledge representation [5]. Ontologies are growing more prevalent as a means for knowledge management, knowledge representation, knowledge sharing, and information retrieval, especially after the evolution of Semantic Web technologies. An ontology represents a machine-understandable grammar consisting of concepts and relationships among these concepts to describe an area of knowledge [6,7]. Among the causes of the increased prevalence of ontology is its capability to aid information exchange between various systems, which is the significant success factor of the semantic web [8].
An ontological approach is proving more practical in all developments of information retrieval, whether relating to opinion mining or cybercrime classification schemes. Practicing ontology with big data yields noteworthy gains in efficiency and productivity. Ontology finds extensive use in many domains, including machine learning, medical science, and genetic algorithms. Gruber [17,18] defined ontology as "a formal, explicit specification of a shared conceptualization." A need always drives every development, and the same applies to ontology development. The following are some salient reasons that motivate researchers to develop an ontology:
- to share common domain knowledge;
- to detach domain knowledge from operational knowledge;
- to allow a thorough analysis of domain knowledge to make it definite;
- to enable optimal reuse of domain knowledge in a specific area.

B. Disaster Management

Due to the devastating effects of disasters on human beings, catastrophe and crisis management have always been given vital importance. Disaster management is the planning, arrangement, and deployment of resources with the precise aim of reducing a disaster's damaging effects. The socio-economic conditions of the affected area and the existence of an effective information system regarding the occurrence of the emergency are the significant factors that influence this management. Timely information plays a vital role in reducing disaster impact up to a certain level. Efficiency in the deployment of resources is one of the significant concerns in disaster management, as it can minimize the disastrous aftereffects. Disaster management has four phases, namely, mitigation, preparedness, response, and recovery. Fig. 1 briefly describes the disaster management phases.

C. Technological Advancements and Disaster Ontologies

When disasters occur, government agencies, non-government organizations, and volunteers come forward for immediate rescue operations. First and foremost, the rescuers need to know the nature and intensity of the disaster and the resulting damages and casualties. Secondly, the timeliness of information is critical. The disaster relief or control process executes three primary tasks, specifically data acquisition and interpretation, data communication, and data synthesis. Effective information and communication technology can contribute notably to all these fundamental functions and expedite collaboration among workers and organizations. The evolution of ICT solutions has helped experts and researchers devise routines that operate more reasonably while incorporating all the necessary measures to alleviate and control the emergency. Research is improving present-day systems and is continuously working toward a complete solution for disaster management. A computer system deals only with those operations that run fluently without facing any hurdle, and "disaster preparedness" in computer science is a term that concerns the computer system itself, with data recovery as its central aim. There has not been much scientific contribution toward improving the tools at hand and reshaping the ideas in this field. It should be the humanitarian duty of scientists to study the problems of the affected areas and affectees and to conduct such research before mayhem strikes. The research community has proposed many solutions, including semantic-based ontological solutions, to address various needs of disaster management. The following are some noteworthy semantic web-based contributions. Kontopoulos et al.
[14] present an ontology for climate crisis management. The proposed solution claims to cover all relevant aspects of the domain in order to facilitate a decision support system for crisis management. The authors pointed out the overwhelming flow of varied information as the most critical challenge for decision-making authorities and proposed a semantic ontological solution. A set of three figures helps to understand the complete model of the proposed ontology: Fig. 2 elaborates natural disasters, Fig. 3 depicts analyzed data, whereas Fig. 4 semantically represents the response unit assignments.

Notable Research Gaps: This research provides a knowledge base to aid authorities in decision support for emergency management. Although the research aims to include all the pertinent aspects of the domain, the focus of the study is on the response phase of disaster management. Thus, it has limitations in fulfilling the requirements of the recovery phase, such as orphan care, relief camp services, facilities, and so on.

The study by Isbandono et al. [20] concentrates on the capacity building of society through research, education, and training for handling disastrous situations. The study aims at developing disaster awareness among people by training them in a planned and controlled manner, along with handling the media for spreading information regarding the emergency. The interest of the community in participating in disaster management activities can improve the institutions' efficiency.

Notable Research Gaps: The research does not produce any concrete solution in the form of software or an analytical methodology; instead, it proposes a program for developing awareness in society about crises, to strengthen and support the catastrophe management institutions through the participation of the community.

Bouyerbou et al. [21] proposed a geographic ontology to process satellite images after a disastrous event. Damage maps are prepared with the help of pre- and post-disaster images by the photo-interpreter team, which is a complex and time-demanding task. The use of automatic or semi-automatic tools can make this activity easy and efficient, but automated processing has semantic limitations. This research proposed an ontological solution to reduce the semantic gap and help improve automatic processing, and it was evaluated by processing the Haiti 2010 satellite images. It consists of three sub-ontologies, namely surface, disaster, and damage. The surface sub-ontology illustrates geographical concepts. The concepts of the disaster sub-ontology and the damage sub-ontology are divided into two groups each: the former has Manmade and Natural, whereas the latter has Land Cover Damage and Material Damage groups. "The ontology aims to describe the content of satellite images, but a large number of concepts may cause complexity for an automatic process."

Notable Research Gaps: The ontology is aimed at the semantic annotation of satellite images but lacks a comprehensive domain representation, change detection, and the detection of operational roads and of the locations of the highest-priority areas, like hospitals, residential buildings, and schools, in the impacted areas.

Ahmad et al. [22] highlighted the data problems of disaster information management in the setup and planning of the emergency response and recovery stages and suggested improvements.
They pointed out some challenges regarding data transmission, including data fragmentation, the data transfer capacity of the communication channel, and the heterogeneity of the information structures. They described a data framework for system administration, apparatus arrangement, and resource reservation.

Peterson et al. [23] conducted a case study by engaging a team of 20 digital volunteers to capture medical-related information shared on social media after an earthquake disaster in Nepal. This study discusses potential strategies for future research joint ventures between the research and practitioner communities to utilize social media content. The research claims that near-real-time, mission-specific, actionable information can be generated during disasters and then used in decision-making.

Notable Research Gaps: The proposed system has some significant gaps. The data shared on social media involves a lot of replication, and there should exist some mechanism to avoid duplications. Not all the data floating on the public data stream is factual; fake news or unauthentic information is also widespread and needs identification. Data is shared in different forms, including text and multimedia; only relevant data in an appropriate data type must be collected and analyzed, so the data collection mechanism should be smart enough to accept only relevant data. The use of abbreviations instead of complete words is widespread in messages on social media and needs to be handled correctly. Information shared in multiple languages, including native languages, requires translation; ignoring content shared in the local language may cause essential data to be missed. The commonly used descriptive terms and area names also need to be identified.

Zhou et al. [24] elaborated the model and facets of emergency decision making. In an emergency, decisions have to be taken in many areas of activity and at different levels; this increases the complexity of forming a decision-makers group. Also, natural differences in human perception, cognitive level, and interest, together with limited information, can raise conflicts. Situation evaluation relies on experience and knowledge, which are not always sufficient in unexpected events. Furthermore, the use of mathematical models and knowledge management tools in emergency decision making has its limitations. The study proposed a model for seismic infrastructure hazard using Bayesian networks but lacks verification of the usefulness of the model in risk reduction.

In 2018, Inan et al. [25] proposed a decision support system for disaster management. The DSS mechanisms adopt either a bottom-up or a top-down approach for decision making, depending upon the knowledge trigger. Knowledge-triggering factors may be internal, i.e., the initiatives of disaster management authorities, or external, i.e., environmental changes. The study aims at specific disaster plans related to volcanic eruption. A case study of the Mt. Agung volcano eruption demonstrates the efficacy of the proposed mechanisms. "The adopted knowledge analysis framework (KAF) allows the authorities to deal with uncertainties in the DM domain by understanding, analyzing, and finally structuring them into a format acceptable by the familiar stakeholders."

Notable Research Gaps: The authors feel that the system requires performance and efficacy evaluation in a real-time operational environment. They are also of the view that the DM agencies lack fully documented DM plans.
Although the research discusses the general scenario of the disaster management domain, it focuses on addressing the disastrous situations caused by volcanic eruption.

An effective early warning system (EWS) for emergencies is a service to humanity, because the intended population can take precautionary safety measures if they are made aware of the crisis well in time. Moreira et al. [26] proposed an ontology-based EWS for alerting of distress. The study aims to develop an epidemiological surveillance EWS for the detection of infectious disease outbreaks in an area. The model-driven engineering framework of the system relies on the Situation Modeling Language (SML). The authors claim that their model is also suitable for the detection of floods, landslides, and wildfires.

Notable Research Gaps: The study proposed an ontology-based EWS for generating alerts to make the population aware of an imminent disaster. The focus of the research is to detect infectious disease outbreaks well in time, although the authors are confident that their model is suitable for the detection of other catastrophes. The proposed model is not designed to work with post-disaster situations.

Anbarasi [27] proposed an ontology-based solution to crisis management using data mining approaches. The study aims at developing a decision support system to address the information requirements of the disaster response. The software uses ontology concepts in a data mining framework, and the authors named the ontology the Humanitarian Assistance Ontology (HAO). Disaster-related data shared on social media can be collected for processing. Data integration and usage in the decision-making process in an emergency become difficult because the data obtained from social media has structural and semantic heterogeneity.

Notable Research Gaps: Data acquisition from social media has some dilemmas and challenges, which have been briefly discussed above in the research gaps of [23]; the limitations of this research are quite similar.

Zhong et al. [28] presented a geo-ontology in 2016 as an emergency management solution for meteorological disasters. The factors of a meteorological disaster are typically time- and space-bound. Thus, the semantic relationships between concepts of the disaster domain are geographic-location-specific. Due to these geographic characteristics, the proposed solution for the meteorological disaster system is a Geo-Ontology. The primary objective of the ontology is to address the information needs of the preparedness and response phases of meteorological disasters. A geo-ontology-based semantic conceptual model for earthquake emergency response knowledge was proposed by Xu et al. [29]. A geo-ontology can represent the geospatial aspects of the knowledge and satisfy the semantic needs of information interchange in the modeling process. "The model aims to solve knowledge problems to improve earthquake disaster response." The architecture classifies knowledge into four categories, namely factual knowledge, rule knowledge, procedural knowledge, and meta-knowledge. "The study presents a geo-ontology; and geo-ontology-based knowledge modeling primitives contain the spatial and earthly characteristics needed to adequately represent "earthquakes on the ground," rather than earthquakes in general."

Notable Research Gaps: The research focused on addressing the response phase of disaster management and proposed a knowledge architecture only for earthquake disaster response.
Disaster management requires solutions for acquiring, analyzing, disseminating, and integrating data, in which information and communication technologies can play a viable role. Among several disaster management systems, Sahana has been exercised in numerous crises globally and is recognized as the most mature DMS. The strenuous work of manual data entry is one of the vital issues of disaster management systems to date; it is quite time-consuming and tough, especially in the case of crises. The immediate availability of essential information is always critical in crisis management, which encourages the processing of real-time data. Secondly, the existing DMS systems are semantically deficient, which is another significant problem to address. A semantic web-based disaster management system is the solution to the issues highlighted and is the best strategy to acquire, investigate, and share essential data where required. The semantic web uses ontologies as a fundamental constituent. The discussion above identified numerous ontologies, though all have limitations and require structural reforms. Most of the research conducted so far aims at specific perspectives of the catastrophe administration domain.

III. THE PROPOSED DISASTER TRAIL MANAGEMENT ONTOLOGY

There exist multiple ways to accomplish ontology development; in other words, there exists no exact and definite way or methodology for developing ontologies. In the development of the disaster trail management ontology, this research followed the methodology proposed by Noy & McGuinness [1], in addition to taking guidance from the Protégé practical guide by Horridge et al. [30]. They proposed an iterative approach to ontology development: start with a rough first pass at the ontology, then revise and refine the evolving ontology and fill in the specifications. The research also uses a naming convention in the ontology development to maintain uniformity and consistency of the ontology structure. The concept names start with a capital letter followed by lowercase letters, whereas the relationships and data properties begin with a lowercase letter. Relationship names are mostly of the form "hasRelationship", and the inverse relationship is of the form "isRelationshipOf". The following steps were taken into consideration during the ontology development.

A. The Domain and Scope of the Ontology

Competency questions and their answers can be used as a tool to better understand the domain and scope of an ontology [31]. These questions are used as a litmus test for the ontology because they determine whether the ontology contains enough information to answer this type of query or not. This test also indicates whether the answers require a specific level of detail or the representation of a particular area. The disaster trail management ontology should answer these competency questions. The list of prepared questions has been divided into three groups according to their nature: Disaster-related questions, Effects- and Losses-related questions, and Services- and Facilities-related questions. Although the competency questions are just a sketch and not an exhaustive list of inquiries, they include the most appropriate questions, which are enough to judge the domain and scope of the disaster trail management ontology. The following is a sample of a few questions from the list of competency questions.
- What is the magnitude and depth of the earthquake disaster?
- A secondary disaster is caused by which disaster?
- Floods are generated by the bursting or seiche of which dam/water reservoir?
- How much area was damaged due to a forest fire?
- How much crop area was affected by the disaster?
- What relief items are provided in the disaster area?
- What facilities are available in a refugee camp?

B. Reusing Existing Ontologies

It is admirable to see what others have achieved over time and to look forward to improving and enhancing those accomplishments for the specific domain. Reusing existing ontologies may be essential in situations where the proposed ontology has to interact with other applications that are already entangled with controlled vocabularies [1]. This research explored various ontology libraries but could not find any existing disaster ontology that answers the complete list of prepared competency questions. So, it was assumed that no relevant ontologies existed, and the ontology was developed from scratch.

C. Enumeration of Important Terms in the Ontology

It is a convenient approach to write down a comprehensive list of the relevant terms of the domain, which are either required for making statements or need to be clarified for a user. As the terms are the building blocks of an ontology, it is essential to be specific and clear about the principal terms and their related properties. This step addresses the basic concepts of the ontology.

D. The Classes and the Class Hierarchy

As far as the strategy of defining the classes and class hierarchy is concerned, there are three types of approaches, namely the top-down, bottom-up, and mixed design approaches. All three ontology design approaches are equally good, and a developer can select any of them depending on his or her personal view of the domain. According to Rosch [32], the combination design approach is the most convenient and practical approach for ontology development because the concepts "in the middle" tend to be the clearest concepts in the domain. Regardless of the adopted design approach, the development usually starts by defining classes. From the list of terms created in the "enumeration of important terms in the ontology" step, it is more convenient to start by selecting the terms describing objects with independent existence. These terms become classes in the ontology. After analyzing the disaster domain and obtaining a critical view from the domain experts, the domain is conceptualized in the following way. "Every OWL class is a subclass of Thing" (w3.org). Under the Thing class, the very first level of classes is considered to contain the top-level concepts in this documentation, such as Activity, CampFacility, Damage, Disaster, Location, Person, Miscellaneous, Organization, RefugeeCamp, ReliefItem, and Service, which are shown in Fig. 7. The next level in the hierarchical class taxonomy is to define subclasses. For example, the concept Activity is further divided into the subclasses StrategicPlanning and Vaccination. Among them, the concept StrategicPlanning has the specialized concepts ResponseTeam, Rehabilitation, TaskReview, ScopeOfAction, and Evacuation. The hierarchical class taxonomy of the Activity class is illustrated in Fig. 8. The top-level concept Damage is divided into the subconcepts Agriculture, Building, Crop, Forest, Infrastructure, and Livestock, as illustrated in Fig. 9.
The next level of subclasses is defined for Infrastructure, which contains the further specialized concepts Airport, Bridge, CommunicationLine, FireStation, Road, ElectricitySupplyLine, FuelStation, GasSupplyLine, Seaport, MobileCommunicationTower, ParkOrPlayGround, RailwayTrack, SewagePipeLine, WaterReservoir, and WaterSupplyLine. Although this research did not capture all types of damage, it tried to cover all those damages that are likely to occur.

E. The Properties of Classes - Slots

The properties of classes, or slots, are defined to describe the internal structure of concepts, as the defined class terms alone do not contain enough information to answer the competency questions. Some of the terms from the prepared enumerated list of important terms in the ontology are the concepts, called classes, and the remaining terms are the properties of the classes (slots) and the facets on those slots. Now, from the prepared list of class properties, it is required to determine which property is described for, or related to, which class. The properties of classes become slots attached to the classes. For example, from the list of terms including Disaster, name, date, Location, Activity, CalamityArea, Damage, hasAffect, hasConsequence, hasDamage, and hasDemand, the terms hasAffect, hasConsequence, hasDamage, and hasDemand are object properties because they associate the concept Disaster with some other concept, whereas the terms name and date are taken to be data properties because they correlate the concept Disaster with a datatype value like string or dateTime. The properties defined above are assumed to be an essential part of every disaster and hence are set at the root level. The focus of this study is on the disaster trail management of earthquakes, but an earthquake can cause other disasters as well. Thus, the concept Disaster has the component classes Earthquake and Concomitant (the concept that captures knowledge about the concomitant disasters of the earthquake). In the class hierarchy, the properties of the parent class are inherited by its subclasses. Hence, the subsumed classes of Disaster, that is, Earthquake and Concomitant, inherit the properties from their subsuming class Disaster and also have additional properties defined at their level. For example, the Earthquake class adds properties like hasConsequence (which associates Earthquake with its Concomitant disasters), hasEpicenter, depth, and magnitude, whereas the concept Concomitant has the additional property isConsequenceOf, which is the inverse relationship of hasConsequence and correlates the concomitant disaster with the earthquake that caused it. Concomitant is a generalized concept which is further divided into specialized concepts of disasters caused by the earthquake, like Avalanche, Faulting, Fire, Flood, Landslide, RadioactivityFromNuclearPlant, Rockslide, SoilLiquefaction, SpillOfChemical, Tsunami, and VolcanicEruption. These inherited concepts share the properties defined in their parent concept Concomitant and grandparent concept Disaster, along with their specific properties added at their level. For example, the concepts Avalanche, Faulting, Landslide, RadioactivityFromNuclearPlant, Rockslide, SoilLiquefaction, SpillOfChemical, and VolcanicEruption also require the hasEpicenter property. Fire adds the properties areaTemperature (a data property), hasDamage (associating the Fire concept with Forest), and hasEpicenter.
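To illustrate how the class and property structure just described might look in machine-readable form, the following is a minimal, hedged sketch in Python using rdflib. The namespace IRI, the tiny selection of classes and properties, and the use of plain domain/range axioms are illustrative assumptions by the editor, not the authors' actual OWL file; the full ontology was built in Protégé and uses OWL restrictions rather than simple domain/range statements.

from rdflib import Graph

# A small, assumed fragment of the DTM class/property structure (not the authors' file).
ttl = """
@prefix :     <http://example.org/dtm#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

:Disaster    a owl:Class .
:Earthquake  a owl:Class ; rdfs:subClassOf :Disaster .
:Concomitant a owl:Class ; rdfs:subClassOf :Disaster .
:Fire        a owl:Class ; rdfs:subClassOf :Concomitant .
:Location    a owl:Class .
:Epicenter   a owl:Class .
:Forest      a owl:Class .

:hasAffect       a owl:ObjectProperty   ; rdfs:domain :Disaster   ; rdfs:range :Location .
:hasConsequence  a owl:ObjectProperty   ; rdfs:domain :Earthquake ; rdfs:range :Concomitant .
:isConsequenceOf a owl:ObjectProperty   ; owl:inverseOf :hasConsequence .
:hasEpicenter    a owl:ObjectProperty   ; rdfs:range :Epicenter .
:hasDamage       a owl:ObjectProperty .
:magnitude       a owl:DatatypeProperty ; rdfs:domain :Earthquake ; rdfs:range xsd:decimal .
:depth           a owl:DatatypeProperty ; rdfs:domain :Earthquake ; rdfs:range xsd:decimal .
:areaTemperature a owl:DatatypeProperty ; rdfs:domain :Fire       ; rdfs:range xsd:integer .
"""

g = Graph()
g.parse(data=ttl, format="turtle")   # load the fragment into an RDF graph
print(len(g), "triples loaded")      # quick sanity check

The sketch only shows the flavor of the encoding; the authors' ontology attaches these properties to classes as restrictions (for example, hasEpicenter exactly 1 Epicenter), which are discussed in the next subsections.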
Along the same lines, the concept Flood needs the additional properties isCausedByBurstOf and isCausedBySeicheOf to associate this concept with the concept WaterReservoir. A volcanic eruption may also cause a tsunami; that is why the concept Tsunami requires an additional property, isConsequenceOf, to correlate this concept with its causing concept VolcanicEruption.

F. The Facets of the Slots

Facets are the restrictions on slots. A property restriction describes an anonymous class: all individuals that satisfy the restriction become members of the anonymous class. A facet may represent the value type, value domain, cardinality, or another such feature related to a slot value, and a slot may have different facets. For example, the value of the slot name (as in "the name of a disaster") is a string with a single value; that is, name is a slot with value type string. The slot hasAffect (as in "a disaster has affected these locations") can have multiple values, which are instances of the class Location; that is, hasAffect is a slot with value type Instance and with Location as an allowed class. The value types used in the ontology are integer, decimal, unsignedLong (for huge whole numbers), string, and dateTime. "Instance-type slots allow the definition of relationships between individuals. Slots with value type Instance must also define a list of allowed classes from which the instances can come [1]". Let's start explaining the facets with the most prominent concept in the ontology, Disaster. This concept is created with several restrictions (anonymous classes), i.e., date exactly 1 dateTime, hasAffect some Location, hasConsequence some CalamityArea, hasDamage some Damage, hasDemand some Activity, name exactly 1 string, etc. The restricted property hasAffect is an object property whose domain is Disaster, the restriction type is some (existential), and the restriction filler is Location (the range of hasAffect). In other words, the restriction hasAffect some Location is an existential restriction (as denoted by the keyword some) that works with the hasAffect property and has the filler Location. This restriction describes the class of individuals that have at least one hasAffect relationship to an individual of the class Location; the restriction is a class which holds the individuals that satisfy it. The restriction hasAffect is also depicted in Fig. 10, along with the other restrictions related to Disaster. Fig. 11 illustrates all the restrictions on the Disaster concept, which is the domain of all these restrictions. The depiction shows that all object property restrictions are existential, whereas the datatype restrictions are exactly 1. The restrictions hasConsequence, hasDamage, and hasDemand have the concepts CalamityArea, Damage, and Activity as their respective ranges. The mathematical representation of the concept Disaster with its slots and facets is defined in Description Logic (DL); a reconstruction of this representation, together with that of Earthquake, is sketched below.

G. The Hierarchical Taxonomy of Disaster

The concept Disaster has two inherited concepts, Earthquake and Concomitant. The following figure depicts the slots, and the facets of the slots, of the subsumed concept Earthquake. The concept Earthquake has two additional object properties, namely hasConsequence and hasEpicenter (along with the properties inherited from the subsuming concept Disaster), as illustrated in the following figure.
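The DL formulas themselves are not reproduced here; based on the restrictions listed above, the representations of Disaster and Earthquake can be sketched as follows (a hedged reconstruction, not necessarily the authors' exact notation):

\[
\begin{aligned}
\mathit{Disaster} \sqsubseteq{}& (=\!1\,\mathit{date}.\mathsf{dateTime}) \sqcap (=\!1\,\mathit{name}.\mathsf{string}) \sqcap \exists\,\mathit{hasAffect}.\mathit{Location} \\
 &\sqcap \exists\,\mathit{hasConsequence}.\mathit{CalamityArea} \sqcap \exists\,\mathit{hasDamage}.\mathit{Damage} \sqcap \exists\,\mathit{hasDemand}.\mathit{Activity} \\
\mathit{Earthquake} \sqsubseteq{}& \mathit{Disaster} \sqcap \exists\,\mathit{hasConsequence}.\mathit{Concomitant} \sqcap (=\!1\,\mathit{hasEpicenter}.\mathit{Epicenter}) \\
 &\sqcap \exists\,\mathit{depth}.\mathsf{decimal} \sqcap \exists\,\mathit{magnitude}.\mathsf{decimal}
\end{aligned}
\]

Here \(\exists\) denotes an existential (some) restriction and \(=\!1\) an exact cardinality restriction.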
The slot hasConsequence some Concomitant, as shown in the representation, has the existential restriction type (some), and the slot is linked with the concept Earthquake as its domain and the concept Concomitant as its range. Another slot, hasEpicenter exactly 1 Epicenter, has an exact cardinality equal to 1 and is attached to the concept Epicenter as the range of the slot. Earthquake is defined with two additional datatype properties, namely depth and magnitude; both slots have existential facets and are linked to the decimal datatype as their range. The mathematical representation in DL of the concept Earthquake, as portrayed in Fig. 12, follows the pattern sketched above. The other inherited concept of Disaster, Concomitant, with all its additional slots and inherited concepts, is described in Fig. 13. For simplicity of the representation, the subsumed concepts of Concomitant shown in the blue-bordered rectangle are not shown with their slots; they are portrayed in Fig. 14. Concomitant has one additional existential object property, isConsequenceOf some Earthquake, which is the inverse of the object property of the concept Earthquake (hasConsequence some Concomitant, presented in Fig. 12). The inherited concept Flood of Concomitant has two additional existential object properties, isCausedByBurstOf some WaterReservoir and isCausedBySeicheOf some WaterReservoir, as presented in the corresponding figure. Two subclasses of Concomitant, Tsunami and VolcanicEruption, are linked to each other via an existential slot isConsequenceOf, with Tsunami as the domain and VolcanicEruption as the range. Avalanche, Faulting, Fire, SoilLiquefaction, Landslide, Rockslide, RadioactivityFromNuclearPlant, SpillOfChemical, and VolcanicEruption are the inherited concepts of Concomitant, all having an additional slot hasEpicenter some Epicenter, which is an object property of existential restriction type, as illustrated in Fig. 14. The concept Fire has another additional existential object property, hasDamage some Forest, and a datatype property, areaTemperature some integer, which also has an existential facet. The mathematical representations in DL of all the concepts presented in Fig. 13 and Fig. 14 can be defined along the same lines.

IV. ONTOLOGY EVALUATION

The authors of [33] expressed ontology evaluation as a technical process of judging the quality of the ontology. Various ontology evaluation techniques are found in past research work. The ontology evaluation in this research follows the guidelines and criteria recommended by Gomez et al. [2] and Kreider [3]. Ontology evaluation is essential to ensure the overall quality of the developed ontology. It not only assesses the technical aspect of the ontology but also encourages the domain experts' involvement for their worthy judgment. The efficacy of ontological knowledge depends on the quality of the ontology, and evaluation is a way to gauge the level of quality of the ontology. This section presents some ontology evaluation techniques accompanied by the significant ontology evaluation results. The results produced by the assessment process delineate the correctness and usefulness of the ontology. The term "Evaluation" subsumes the terms "Validation" and "Verification". The validation process is completed by involving domain experts, whereas, for the verification process, a software technique is brought into play. As a first step, and to measure the domain and scope of the ontology, some appropriate competency questions (CQs) were formulated and then evaluated by the domain experts.
The relevance and completeness of these CQs validate the ontology. The verification process gauges the correctness and usefulness of the ontology, which is done by developing queries in SPARQL to provide answers to the CQs. According to Fernández et al. [34], verification refers to the activity that assures the correctness of an ontology. Vrandečić [35] refers to verification as an evaluation task for assessing that the ontology has been built correctly. This task can be performed during each phase of ontology development or between phases of the development life cycle. This technical process of verification guarantees the usefulness and accuracy of an ontology according to the accepted understanding of the domain in specialized knowledge sources. The verification process can be performed by technically generating answers to the CQs using an appropriate query language; this research uses the SPARQL query language. The methodology of evaluating an ontology by generating answers to CQs using a query language has been adopted by many researchers. This section furnishes the results produced from the execution of the CQs codified in SPARQL. CQs are among the most widely applied and familiar contexts for ontology assessment. This competency appraisal is performed to examine the preciseness of the ontology using a query language within the ontology development tools, for example, DL queries or SPARQL queries. The SPARQL Query plugin is commonly used within Protégé and is employed here as a tool to gauge the adequacy of the ontology. Within the frame of reference of this research, SPARQL queries are developed for the execution of the CQs. The CQs are composed at the ontology specification stage to define the scope of the ontology. The concluded set of CQs is then codified in the query language before execution in SPARQL Query. "SPARQL allows for a query that consists of triple patterns, conjunctions, disjunctions, and optional patterns. SPARQL does not have a native inference mechanism incorporated into the language. SPARQL queries return what is contained in the information model in the form of graph bindings [36]". To get appropriate results from a SPARQL query, individuals can be added to the ontology with some actual or sample data. These queries help evaluate the ontology and may also help improve the architecture of the ontology. The CQs are listed in Section III (A). The presentation is threefold, starting with the natural language query (question), followed by the SPARQL query, and finally the result produced by the execution of the SPARQL query; a hedged illustration of such a query is given after the conclusion below.

V. CONCLUSION AND FUTURE WORK

Firstly, ontology design is a creative undertaking; thus, different ontology developers will undoubtedly come up with ontologies designed for different purposes. Secondly, multiple ontologies may serve the purpose for a domain correctly. This research proposes an ontology for disaster trail management. The study included the relevant concepts very carefully after analyzing real data from credible sources and disaster-related news and discussing the scenario with domain experts. Positive comments by the domain experts on the ontology validation assessment, and the results generated by the SPARQL queries executed for the relevant CQs, showed that the ontology meets the required criteria.
The research proposed a distinct and fundamental ontology that encompasses the entire domain of earthquake disaster trail management for Pakistan, using appropriate semantic relationships among the ontology concepts. It is hoped that the semantic web research community will contribute further to enhance the ontology, to make it fit for all types of disasters, including human-made disasters, and to accommodate the region-specific requirements of other countries in the world. Whether we talk about ontology development or an architectural model for an application, both are completed through a progressive approach; thus, no such work can be assumed to be final, and there is always room for modification and enhancement. The following are the recommendations of this study as future work to enhance the proposed semantic web-based ontology.
- The proposed ontology can be extended into a multilingual corpus.
- The ontology aims at addressing disaster trail management in Pakistan; it can be modified or enhanced to fit the region-specific requirements of other countries.
- The ontology design can be improved to accommodate human-made disasters like war.
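As an illustration of the verification procedure referred to above (natural-language CQ, SPARQL query, result), the following is a minimal, hedged Python sketch using rdflib. The namespace, the sample individual, and the exact property names are illustrative assumptions; the authors ran their queries in the Protégé SPARQL Query plugin rather than in Python.

from rdflib import Graph

# CQ: "What is the magnitude and depth of the earthquake disaster?"
# A tiny, assumed sample individual (not the authors' data) so the query has something to answer.
ttl = """
@prefix :    <http://example.org/dtm#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

:Kashmir2005 a :Earthquake ;
    :name      "Kashmir Earthquake 2005" ;
    :magnitude "7.6"^^xsd:decimal ;
    :depth     "26.0"^^xsd:decimal .
"""

g = Graph()
g.parse(data=ttl, format="turtle")

query = """
PREFIX : <http://example.org/dtm#>
SELECT ?name ?magnitude ?depth
WHERE {
  ?quake a :Earthquake ;
         :name ?name ;
         :magnitude ?magnitude ;
         :depth ?depth .
}
"""

for row in g.query(query):
    # Prints the bound values for the competency question
    print(f"{row['name']}: magnitude {row['magnitude']}, depth {row['depth']}")

In the authors' workflow, the same three-fold pattern (question, SPARQL query, result) was applied to the full list of competency questions against the populated ontology.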
9,385.6
2019-01-01T00:00:00.000
[ "Environmental Science", "Computer Science", "Engineering" ]
A Federated Learning-Based Approach for Improving Intrusion Detection in Industrial Internet of Things Networks : The Internet of Things (IoT) is a network of electronic devices that are connected to the Internet wirelessly. This group of devices generates a large amount of data containing information about users, which ultimately makes the whole system sensitive and prone to malicious attacks. The rapidly growing number of IoT-connected devices under a centralized ML system could threaten data privacy. The popular centralized machine learning (ML)-assisted approaches are difficult to apply due to their requirement of enormous amounts of data in a central entity. Owing to the growing distribution of data over numerous networks of connected devices, decentralized ML solutions are needed. In this paper, we propose a Federated Learning (FL) method for detecting unwanted intrusions to guarantee the protection of IoT networks. This method ensures privacy and security by federated training of local IoT device data. Local IoT clients share only parameter updates with a central global server, which aggregates them and distributes an improved detection algorithm. After each round of FL training, each of the IoT clients receives an updated model from the global server and trains on its local dataset, so IoT devices can keep their own privacy intact while optimizing the overall model. To evaluate the efficiency of the proposed method, we conducted exhaustive experiments on a new dataset named Edge-IIoTset. The performance evaluation demonstrates the reliability and effectiveness of the proposed intrusion detection model by achieving an accuracy of 92.49% with the FL method, close to the 93.92% accuracy offered by the conventional centralized ML models. Introduction The development of the industrial Internet of Things (IIoT) has advanced significantly over the last few years as a result of the rapid development of wireless transmission and processing. A range of cutting-edge portable devices, such as smart phones, smart watches, and smart applications, have emerged on the IoT networks. Numerous businesses, including live gaming, smart manufacturing, navigational systems, smart cities, and smart healthcare, have extensively used these devices. The architecture of IoT networks still faces a number of important challenges due to their rapid proliferation. The creation of efficient and flexible control for IoT systems that can aid in energy savings, increase the number of applications, and be advantageous for potential future expansion is one of the main challenges. Another significant barrier is that, along with guaranteeing security and privacy against unauthorized access, IoT networks are computationally demanding, time-sensitive, and in constant need of computing resources. Due to the rapid development of digital technology and the growth in personal awareness, people are beginning to think about personal data security even more [1]. Distributed learning methods are needed so that devices can work together to create a single way to learn with local training. Federated learning (FL) is a decentralized platform for machine learning (ML) [2].
Unlike centralized learning frameworks, the FL framework automatically promotes confidentiality and privacy because data created on an end device does not leave the device. The data from the participating devices is used on the device itself to train the distributed learning model. A client device (e.g., a local Wi-Fi router) and a cloud server only share the settings that have been changed. Some of the benefits of using FL in wireless IoT networks are: (i) instead of exchanging huge amounts of training data, sharing only local ML system settings can save power and use less wireless bandwidth; (ii) locally calibrating the parameters of an ML model can greatly reduce transmission delay; (iii) FL can help protect data privacy because only the local learning model variables are sent and the training data stays on the edge devices. As shown in Figure 1, edge devices in FL work together to make a learning model by only sending locally learned designs to a global aggregation server and keeping the local training input at the device end [3]. The security of wireless IIoT devices is becoming an increasing concern for both manufacturers and consumers. IIoT devices are susceptible, as they lack the essential built-in security mechanisms to resist intrusions. The confined environment and low computational power of these devices are two of the primary reasons for this. The functions that can be performed on IoT devices are typically constrained by their low power consumption. Security measures subsequently keep failing as a result of that. Moreover, when individuals kept their data online without a password or with weak or default settings, the researchers uncovered one of the most critical security flaws. Frequently, IIoT devices are shipped with default, easy-to-remember passwords or no password at all. By gaining access to these devices, hackers may exploit this vulnerability with relative ease. Due to this vulnerability, attackers may exploit IIoT devices to perform serious assaults, putting the privacy of users at risk [4]. In order to prevent catastrophic harm, the research community is working on designing systems that can respond quickly and effectively to these assaults. An intrusion detection system (IDS) is a specialized security system that continuously examines network or computer system events for signs of an intrusion [5]. It examines the metadata found in network packets and employs pre-established rules to determine whether to allow or deny traffic. Intrusion detection methods can primarily be divided into two types: deployment-based techniques and detection-based techniques.
Additionally, each of these categories can be grouped into a further two subcategories. Depending on how they are used, the intrusion detection systems can be of two types: host-based IDS (HIDS) and network-based IDS (NIDS). Depending on how intrusions are detected, IDS techniques can be categorized as either signature-based or anomaly-based systems. The first is established based on predefined behavioral patterns. As a result of that, a new or unknown threat cannot be detected with signature-based IDS. In order to detect any anomaly in the system's behavior and identify it as a potential threat, anomaly-based IDS employ specific network traffic characteristics. Currently, there are few ML-based solutions that are being used to produce a practical IDS to aid IIoT networks in discovering system irregularities. However, typical centralized systems are susceptible to a single point of failure (SPOF) [6], in addition to other limitations. One feasible solution to these issues is FL, which enables devices to train a common ML model without trading or receiving data. This paper presents a method for accurately identifying unwanted intrusions in wireless IIoT networks in order to protect the privacy and security of IIoT data. We designed and tested an FL-based IIoT network capable of detecting network breaches and increasing security where ML takes place locally on local distributed clients rather than a centralized server. The contributions of our work can be found below: • We have proposed an FL approach for IIoT intrusion detection in order to train the models at the local device end and accumulate their learning to ensure enhanced security against unwanted attacks. • We have deployed two deep learning classifiers, namely, convolutional neural network (CNN) and recurrent neural network (RNN) for both centralized ML and federated learning (FL). • We have utilized the Edge-IIoTset dataset [7] to demonstrate a thorough evaluation that contains real-world data about different types of attacks. • We compared our proposed method to other works to show that it is superior. The remainder of the article is organized as follows. Section 2 includes a review of the literature on the most current FL advancements related to IoT and intrusion detection. Section 3 discusses the proposed method in detail. Section 4 presents and thoroughly discusses the experimental data and evaluation. Finally, Section 5 summarizes our findings and suggests many potential future research topics. Related Works We have highlighted the contributions and limitations of existing related works focused on FL and ML adoption in IoT networks and intrusion detection. In the most recent state of the art, there are two surveys about IoT intrusion detection [8,9]. Zarpelo et al. [8] gave an overview of IDS that are specific to the IoT and a taxonomy to organize them. Additionally, they presented a thorough comparison of the different IDS for IoT, taking things such as installation strategy, detection mechanism, and validation strategy into account. Benkhelifa et al. [9] were more concerned with enhancing IoT intrusion detection processes. They investigated the current state of the art, with a focus on IoT architecture. It provided a more comprehensive and critical analysis. In another paper, Ahmad et al. [2] presented the use of ML to defend massive IoT devices from various attacks. This work proved ML approaches and their efficacy by evaluating hundreds of research publications. Samek et al. [10] focused on ML in IoT devices. 
Modern communication systems generate massive amounts of data, and ML optimizes resource allocation, saves energy, and improves performance. The research studied distributed learning to improve wireless device connectivity. Gunduz et al. discussed denial-of-service (DoS) attacks in wireless sensor networks (WSNs) [11]. DoS attacks can damage any layer of a WSN's architecture, compromising its security. Their study focused on five TCP/IP protocol levels and ML DoS solutions. In addition, they studied ML-based IDSs to solve the issue. Moreover, most conventional solutions lacked an ML-oriented approach; hence, the authors recommended ML IDS to secure TCP/IP layers. Using a dimension reduction technique and a classifier, Zhao et al. [12] proposed a system that detects anomalies in IoT networks. Dimension reduction was performed with the help of principal component analysis to stop the performance of fault diagnostics from getting worse and to stop real-time demand threats from complex computations. This article is lacking an example of how to calculate memory size. Vallathan et al. [13] suggest a novel deep learning-based technique for predicting the likelihood of abnormal events utilizing footage from connected surveillance systems and alerting users to those events in an IoT environment. A deep neural network, a multiclassifier, and kernel density functions make up the suggested solution. Here, abnormal behaviors are anticipated using the Random Forest Differential Evolution with Kernel Density (RFKD) method, and any abnormal activities that are identified result in signals being delivered to IoT devices using the MQTT (Message Queuing Telemetry Transport) protocol. The work does not, however, cover the tracking and detection of multiple anomalies in living environments. Ferrag et al. [14] carried out a study that investigated a deep learning-based IDS for distributed DoS (DDoS) attacks that is built on three models: convolutional neural networks, deep neural networks, and recurrent neural networks. The performance of each model was investigated across two classification types (binary and multiclass) using two new actual traffic datasets, CIC-DDoS2019 and TON_IoT, which include various forms of DDoS attacks. The intrusion detection approach presented by Pajouh et al. [15] is built on a two-layer dimension reduction and two-tier classification module. This method is also reported to be able to identify User-to-Root (U2R) and Remote-to-Local (R2L) cyberattacks. Both linear discriminant analysis and component analysis have been used to minimize the number of dimensions. The Network Security Laboratory-Knowledge Discovery Dataset (NSL-KDD) dataset is used throughout the entire experiment. The two-tier classification module employs the Naive Bayes and certainty factor versions of KNN to identify anomalous behavior. An intrusion detection approach based on a two-step classification system was proposed by Pamukov et al. [16]. The first stage was to employ a negative selection approach that was resistant to difficult classification problems. The attack samples were then classified using a trained neural network. So, this strategy saved what little power and computing resources the end devices had by getting rid of the overhead of the training process. This work is limited to the creation of the Negative Selection Neural Network (NSNN) algorithm, and currently, there is no best way to implement online learning for it. Khan et al. 
[17] focused on the application of decentralized ML technology to handle the massive amounts of data from the increasing IoT devices and subsequently discussed the challenges of FL. Their work developed a Stackelberg game-based methodology in order to create an FL incentive mechanism to improve the interaction between devices and edge servers. For their experiment, they utilized the Modified CoCoA framework and the MNIST dataset. The authors claimed that this approach was useful for big IoT networks since it enables customization of different ML computing characteristics based on the capabilities of connected devices. Tang et al. [18] suggested an FL-based approach to network intrusion detection. It supposedly addresses the issues of an inadequate network intrusion detection data set and privacy protection. For iterative training, this technique runs the GRU deep learning model locally. Network traffic data is stored locally, and the central server aggregates and averages the parameters, as is the case with all federated learning techniques. They employed the CICIDS2017 intrusion detection data set for their experiment, which is a popular intrusion detection data set but just a lab simulation data set. Chen et al. [19] discussed the importance of distributed learning and the integration of FL in large IoT networks. However, achieving ML accuracy with the FL technique in a large IoT network requires regular updates to global algorithms, which cost a significant amount of data. They investigated the problem of maximizing resources and learning performance in wireless FL systems at the same time in order to reduce communication costs and improve learning performance. A Lagrange multiplier method is first used by decoupling variables, such as power variables, bandwidth variables, and transmission indicators, in order to maximize effective information flow via networks. Then, a power and bandwidth allocation mechanism based on linear search is established. In their work, Cao et al. [20] said that it was a major challenge for local differential privacy (LDP) in power IoTs to show how to find a balance between utility and privacy while still letting the native IoT terminal run. It was suggested to use an optimized framework that considered the trade-off between local differential privacy, data utility, and resource utilization. Additionally, users were divided into groups according to their level of requirement for privacy, and sensitive users received better privacy protection. The authors used Sparse Coding Randomized Aggregable Privacy-Preserving Ordinal Response (SCRAPPOR) and Factorial Hidden Markov Model (FHMM) algorithms and evaluated the Reference Energy Disaggregation Dataset (REDD) in the proposed method. This research was limited to LDP and power IoTs. Attota et al. [21] offered the MV-FLID FL-based intrusion detection method, which trains on various IoT network data perspectives in a decentralized manner to identify, classify, and prevent intrusions. Maximizing the learning effectiveness of various kinds of attacks is facilitated by the multi-view ensemble learning component. The FL feature efficiently aggregates profiles through the use of peer learning, as the device's data is not shared with the server. However, they did not explore unsupervised and reinforcement ML systems, which can improve intrusion detection by detecting untrained attacks, in their work. 
In order to identify cyber threats in smart Internet of Things (IoT) systems, including smart homes, smart e-healthcare systems, and smart cities, Tabassum et al. [22] suggested a federated deep learning (DL) intrusion detection system utilizing GAN, named FEDGAN-IDS. In order to train the GAN network using augmented local data and serve as a classifier, they distributed it among IoT devices. They demonstrated that their model performed better and converged earlier than the most standalone IDS by comparing the model's convergence and accuracy. Driss et al. [23] described a framework based on FL for detecting cyberattacks in vehicular sensor networks (VSNs). The proposed FL approach makes it possible to share computing resources and train with devices. For better performance in attack detection, a Random Forest (RF)-based ensemble unit is used in the suggested approach in conjunction with a group of Gated Recurrent Units (GRU). Du et al. [24] highlighted that vehicle IoT devices feature sensors that produce device-specific information that can affect device security if leaked. GPS, cameras, radar, etc., must be shared in cooperative driving. The authors proposed using FL to improve system security and performance by integrating massive IoT networks with various devices. Ghourabi et al. [25] proposed a new intrusion and virus detection system to secure the healthcare system's whole network. The proposed approach consists of two parts: an IDS for medical devices installed on the healthcare network and a malware detection system for data servers and medical staff devices. The goal was to protect the entire network, regardless of the devices and computers installed. The limitation is that it demands the installation of numerous systems across the healthcare network. Additionally, a correlation procedure is needed to compile the outcomes. We have summarized the key contributions of the similar works in Table 1. As can be seen, we have included the publication year, experiment dataset, used classifiers/algorithms and key findings in the table. In this work, abnormal behaviors are predicted using the RFKD method, and any abnormal activities that are identified result in signals being delivered to IoT devices using the MQTT protocol. The work does not, however, cover the tracking and detection of multiple anomalies in living environments. 2021 [14] CIC-DDoS2019, TON_IoT CNN, DNN, RNN This deep learning-based IDS was proposed for cybersecurity in agriculture 4.0 and identified DDoS attacks using three ML models: CNN, DNN, and RNN. The performance of each model was investigated across two classification types (binary and multiclass) using two real traffic datasets, CIC-DDoS2019 and TON_IoT, which include various forms of DDoS attacks. 2020 [15] NSL-KDD Naive Bayes, KNN This proposed IDS for anomaly detection in IoT networks is based on a two-step classification system. This method is capable of identifying cyberattacks from U2R and R2L. The two-tier classification module employs the Naive Bayes and certainty factor versions of KNN to identify anomalous behavior. [16] NSL-KDD NSNN An intrusion detection approach based on a two-step classification system was proposed in this paper using a negative selection algorithm and neural network. This strategy saved what little power and computing resources the end devices had by getting rid of the overhead of the training process. This work is limited to the creation of the NSNN algorithm, and currently, there is no best way to implement online learning for it. 
[17] MNIST Modified CoCoA framework This research was focused on the application of FL technology for resource optimization and incentive mechanisms in edge networks. A Stackelberg game-based methodology was proposed in order to create an FL incentive mechanism to improve the interaction between devices and edge servers. This approach is useful for big IoT networks since it enables customization of different ML computing characteristics based on the capabilities of connected devices. [18] CICIDS2017 GRU This FL-based IDS supposedly addresses the issues of an inadequate network intrusion detection data set and privacy protection. This technique makes it possible for numerous ISPs or other organizations to carry out joint deep learning training under the premise of preserving local data. The privacy protection of the network traffic can be solved using this method, but no real-world scenario simulation was conducted to prove its feasibility. 2022 [19] MNIST CNN This work investigated the problem of optimizing resources and learning performance in wireless FL systems at the same time in order to reduce communication costs and improve learning performance. A Lagrange multiplier method is used by decoupling variables, such as power variables, bandwidth variables, and transmission indicators, in order to maximize effective information flow via networks. A power and bandwidth allocation mechanism based on linear search is also established. The framework can successfully schedule clients based on wireless channel dynamics and learned model parameter features. 2020 [20] REDD SCRAPPOR, FHMM This paper proposed an optimized framework that considered the trade-off between LDP, data utility, and resource utilization in power IoTs. Additionally, users were divided into groups according to their level of requirement for privacy, and sensitive users received better privacy protection. This work is limited to LDP and power IoTs. [21] MQTT RF Maximizing the learning effectiveness of various kinds of attacks is facilitated by the multi-view ensemble learning component in this work. The FL feature efficiently aggregates profiles through the use of peer learning, as the device's data is not shared with the server. However, they did not explore unsupervised and reinforcement ML systems, which can improve intrusion detection by detecting untrained attacks. NSL-KDD, KDD-CUP99, and UNSW-NB15 GAN In order to identify cyber threats in smart IoT systems, including smart homes, smart e-healthcare systems, and smart cities, this federated DL intrusion detection system was proposed. In order to train the GAN network using augmented local data and serve as a classifier, they distributed it among IoT devices. By comparing the convergence and accuracy of the model, they showed that their model worked better and converged faster than most standalone IDS. This method consists of two parts: an IDS for medical devices installed on the healthcare network and a malware detection system for data servers and medical staff devices. The goal was to protect the entire network, regardless of the devices and computers installed. The limitation is that it demands the installation of numerous systems across the healthcare network. Additionally, a correlation procedure is needed to compile the outcomes. 
2023 This paper Edge_IIoTset CNN, RNN, FedAvg Algorithm We have proposed an FL approach for IIoT intrusion detection in order to train the models at the local device end and accumulate their learning to ensure enhanced security against unwanted attacks. We have deployed two deep learning classifiers, namely, CNN and RNN, for both centralized ML and FL. We have utilized the Edge-IIoTset dataset [7] (which contains real-world data about different types of attacks) to demonstrate a thorough evaluation and our method's performance. In this paper, we propose a methodology for intrusion detection in IIoT networks in which data is securely stored at the end devices (often a collection of IIoT devices) and aggregated learnings are transmitted to the central server. We also conducted experiments and evaluations of our methodology using both centralized and FL on the Edge-IIoTset dataset, where our proposed FL model performed admirably, allowing us to confidently state that our method can accurately identify unwanted intrusions in IIoT networks. Proposed Method In conventional settings for anomaly detection, training data from the objects to be modeled is utilized to construct the model. However, the IIoT environment creates challenges for this method. As IIoT devices are primarily single-function devices with limited capabilities, they do not generate massive volumes of data. This makes it difficult to train a model using only data collected from a user's local network, as it may take some time to collect sufficient data for training a consistent model. This needs a method that combines training data from several users into a single local training dataset, thus accelerating the learning of a stable model. FL clients are those end devices (e.g., a local Wi-Fi router) that collect data from their respective connected IIoT devices. In a FL environment, each FL client trains a local model utilizing locally accessible training data, as opposed to transmitting data to a central entity as in a fully centralized learning architecture. After that, the learnings from those local trainings are transmitted back to the global server for aggregation. At the global server, the model is again improved by global training and distributed back to the local FL clients for the next FL iteration. By this way, both the global and local models get improved and can effectively classify the intrusions and benign traffic inside an IIoT dataset. This section describes and explains the operation of our proposed approach in detail. System Architecture The layout of the suggested FL technique for IIoT intrusion detection is shown in Figure 2, where a number of devices are installed in various locations and connected to the network. Our proposed model is divided into three parts: Local-End Learnings and Intelligence In this part of the framework, each k client (k ∊ [1, …, K]) at the local end trains the data acquired from their separate IIoT devices with the local models shared by the server, while the IDS at the client end detects any unwanted attacks. Additionally, an analyzer is employed to keep track of their network data for subsequent analysis. This kind of smart learning on the device protects the independence of local intrusion detection by requiring local training, tweaking of parameters, and improving inference procedures. 
Learnings Distribution With the aim of integrating the models and developing a better intrusion detection system with optimum parameters, the clients exchange their trained learning with a server-based system for aggregation. The intelligent communication administrator (e.g., security gateway) is in charge of all interactions between clients and the aggregation server. Accumulated Global Learnings In order to obtain the efficiency of centralized ML methods, which somewhat contain global data learning, the detection models are exchanged with the server-based aggregation platform. The aggregation server is in charge of aggregating local learning and transforming it into global learning. The distributed clients receive the optimized model through the communication platform, which facilitates knowledge sharing. A client can detect intrusions using comparable behaviors obtained from several participating devices thanks to this sharing mechanism, which gradually improves learning. Adversary Model and Assumptions In an IIoT network, a threat or adversary M might be internal or external. An external adversary generally uses the Internet to launch cyberattacks, such as abusing digitally equipped systems, injecting malicious content into databases, stealing confidential information, and so on; an internal adversary may be a member of an insider group that remains inside the network, such as a compromised IIoT device or another networked device. IIoT malware can locate and exploit weak IIoT systems and devices by manipulating devices with lower security as a platform for cyberattacks. In our investigation, we made a few other assumptions.
They are as follows: • Trustworthy FL Aggregator: As aggregation servers are an essential part of the learning process, it is necessary that there always be some level of trust in the system that coordinates learning. • No Malicious IIoT Device by Design: In some cases, security problems may already be present in a newly released IIoT product. These devices, however, must not be tainted or infected before they are put to their intended purpose. As a result, devices will only generate allowed interactions until an adversary M identifies and exploits any vulnerabilities, providing our model with valuable data from which to learn. • Secure Clients: Since clients are essential components of FL learning in IIoT systems, we assume that they are secure. If such clients are compromised, the IIoT infrastructure is no longer safeguarded. FL for Intrusion Detection As previously stated, in our FL model, all K clients train a local model from the same shared global model, but each is trained on its own local dataset rather than on a central server. Following that, they communicate the learning from these local trainings to an aggregation server over an SSL/TLS-authenticated connection through the communication administrator (e.g., a gRPC channel [26]). The aggregation server integrates all of them and produces an updated global model with optimal parameters. The letter w stands for the starting weights, and R represents the number of FL rounds, which are repeated until a convergence level is reached. When each local client's weights are submitted to the aggregation server during communication round t, the following equation (Equation (1)), adapted from the FedAvg algorithm [27], is used to update the model weights: W_{t+1} = Σ_{k=1}^{K} (n_k/n) · w^k_{t+1}, (1) where n is the total size of all client datasets, n_k represents the size of each client dataset, and W_{t+1} is the updated global model after the iteration. Figure 3 depicts a scenario illustrating the interconnections between the various FL IIoT intrusion detection system participants. The server initially selects clients that have connectivity with active IIoT devices that are powered on, charging, and connected to an unmetered Wi-Fi network to take part in the FL process. After that, the different parts of the system interact in the following ways to finish the whole process: 1. At t = 0, the server generates a neural network model from a global intrusion detection model. At this point, the number of hidden layers, neurons, epochs, etc. is set. The symbol w denotes the model's initial weights. 2. Every client k (k ∈ [1, . . . , K]) needs to download and utilize the global model, regardless of whether it contributes to the FL process or not. With their own private data, each of the K clients retrains the global model locally in parallel and creates a fresh set of local weights w^k_{t+1}. 3. The designated clients use the data collected from the IIoT devices under their control to improve the model under investigation while maintaining the privacy of their local data. 4. To protect the privacy of the clients, only the updated model parameters for the improved intrusion detection model are sent to the central server. 5. Once all the changes are received, the server combines the weights from the various node models to produce a new improved model (Equation (1)). The FedAvg method is used for the aggregation; in this method, the parameters are weighted based on the size of the dataset at each node. 6. The updated model parameters are pushed back to the clients by the central server. 7.
Each client uses the new model parameters and makes changes to them based on the new data. 8. For continuing model learning and improvement, steps 4, 5, 6, and 7 are repeated. Complexity of FL Approach The FL parameters add complexity to FL over a conventional ML system because of the distinct scenario they represent. The skewness of the heterogeneous data spread among the client devices, for instance, is a crucial parameter. In a multi-class classification issue, a severely skewed data scenario can involve each client having just one label of data. The number of participating clients, as well as each client's communication and processing budget, are further important FL variables. Provided that FL is supposed to function in the edge layer, it should have a low time complexity, as the software being run has a significant impact on the behavior of the associated processes. The time complexity of the global model depends on the time complexity of the clients, the time complexity of the aggregation server, and the complexity of the exchanged parameters, excluding transmission times, as these factors typically vary significantly across networks. The local client's time complexity can be defined as O(E · n_k · (l_0·l_1 + l_1·l_2 + … + l_{L−1}·l_L)), where l_x represents layer x, E represents the number of local epochs, and n_k represents the size of each client dataset. The global server's time complexity depends on the total number of clients (K) and on the cumulation of all local model parameters (W), so it can be defined as O(K·W). Many parameters are also exchanged between the global server and the local clients; the complexity of the exchanged parameters can be defined as O(W).
So, the total time complexity O(total) of our proposed FL approach can be represented by the following equation (Equation (2)): O(total) = O(E · n_k · (l_0·l_1 + l_1·l_2 + … + l_{L−1}·l_L)) + O(K·W) + O(W). (2) ML Classifiers for Intrusion Detection The intelligent IDS solution now has an entirely revolutionary path for growth thanks to the rapid advancements in ML techniques and applications. Neural network methods have proved to be very useful in extracting improved data representations for building effective models. While neural networks come in a variety of types, they all share these common basic components: neurons, weights, biases, and functions. In addition, neural networks frequently serve the same goal of connecting an input x and an output y so that y = f(x, θ), where θ is the parameter vector. For intrusion detection, we have kept the number of classifiers in a centralized setting very limited, since each classifier trains on the full dataset. We have used two different types of classifiers: the recurrent neural network (RNN) and the convolutional neural network (CNN). Convolutional Neural Network The purpose of a CNN is to interpret data in the form of multiple arrays. The initial layers in this method consist of a collection of learnable filters acting as convolutional feature extractors. Every bit of input data is traversed by a sliding window created by the applied filters. The outputs are referred to as "feature maps", and the overlapping distance is termed the "stride". A CNN layer that is employed to create distinct feature maps is composed of convolutional kernels. In the feature map of the subsequent layer, a neuron is related to neighboring neuron areas. The kernel needs to be shared among all input spatial locations in order to produce a feature map. One or more fully connected layers are used to complete the classification after the convolutional and pooling layers are constructed [28]. In CNN architectures, the convolutional operation over the input feature maps in the convolutional layers is shown by the following equation (Equation (3)): h^l_j = f(Σ_i h^{l−1}_i * k^l_{ij} + b^l_j), (3) where * is the convolution operator, h^l_j is the j-th output feature map of layer l, h^{l−1}_i is the i-th input feature map, k^l_{ij} is the convolutional kernel, f(·) is the activation function, and b^l_j is the bias. Recurrent Neural Network RNNs are improved feed-forward neural network models that have the capacity to memorize data at each step for subsequent outputs. In an RNN, the neurons' output is linked to their own and other neurons' input. RNNs can therefore use their internal memory to represent data sequences and time series [29]. The standard RNN at time t can be formalized as follows (Equation (4)): h_t = g(W_(in,hi) x_t + W_(hi,hi) h_{t−1} + b), y_t = g(W_(hi,ou) h_t + b), (4) where x_t is the input sequence and h_t is the hidden state. The weight matrices for the input layer to the hidden layer, the hidden layer to the hidden layer, and the hidden layer to the output layer are denoted by W_(in,hi), W_(hi,hi), and W_(hi,ou), respectively. In addition, g(·) denotes the activation function, and b stands for the bias. We use the long short-term memory (LSTM), a special type of RNN, for our model evaluation. The addition of the cell state to the LSTM network makes a difference compared to the regular RNN. The cell state, which acts as long-term memory, is passed on to the subsequent cell at each time step t. The cell individually chooses whether or not to forget throughout the calculation of the cell sequence, allowing it to retain past information for a considerable amount of time [30]. The LSTM hidden layer is developed using Equation (5) at time step t, where i_t stands for the input gates, f_t for the forget gates, c_t for the memory cells, o_t for the hidden output, and y_t for the output layer.
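To make the round structure and the FedAvg aggregation of Equation (1) concrete, the following is a minimal sketch in TensorFlow/Keras. The toy dense model, optimizer settings, and helper names are illustrative assumptions for the sketch, not the paper's actual CNN/RNN configuration or implementation.

```python
# Minimal sketch of the FedAvg aggregation step (Equation (1)) and one FL round,
# assuming Keras models with identical architectures on every client.
import numpy as np
import tensorflow as tf

def build_model(num_features: int, num_classes: int) -> tf.keras.Model:
    """Placeholder global/client model; the real classifiers are a CNN and an RNN."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_features,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

def fedavg(client_weights, client_sizes):
    """W_{t+1} = sum_k (n_k / n) * w^k_{t+1}: dataset-size-weighted average of client weights."""
    n = float(sum(client_sizes))
    avg = [np.zeros_like(layer) for layer in client_weights[0]]
    for weights_k, n_k in zip(client_weights, client_sizes):
        for i, layer in enumerate(weights_k):
            avg[i] += (n_k / n) * layer
    return avg

def federated_round(global_model, client_datasets, local_epochs=1):
    """One FL round: broadcast global weights, train locally, aggregate with FedAvg."""
    client_weights, client_sizes = [], []
    for x_k, y_k in client_datasets:                      # each client's private data stays local
        local = tf.keras.models.clone_model(global_model)
        local.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        local.set_weights(global_model.get_weights())     # start from the shared global model
        local.fit(x_k, y_k, epochs=local_epochs, verbose=0)
        client_weights.append(local.get_weights())        # only parameters leave the client
        client_sizes.append(len(x_k))
    global_model.set_weights(fedavg(client_weights, client_sizes))
    return global_model
```

In a full experiment, federated_round would be called repeatedly (50 rounds are reported in the evaluation below), with the global model evaluated on held-out test data after each round.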
Experiments, Results, and Discussion This section includes experimental details for our proposed technique as well as a detailed discussion of performance. Experimental Setup We performed our experiments on Google Colaboratory using Python 3 as the programming language. In order to implement our approach, NumPy and other well-known libraries are employed, along with multi-dimensional arrays and matrices. Pandas offers powerful data-structure manipulation and analysis tools, which we also utilized. In addition, TensorFlow and Keras are used for ML and DL. Moreover, numerous supervised and unsupervised ML method implementations are available in Scikit-learn. Furthermore, we used SMOTE [31] to oversample minority classes in order to increase the overall model efficiency. Dataset, Data Pre-Processing, and Feature Selection Datasets are necessary in IIoT networks for both training and testing IDSs, so the selection of the appropriate dataset is crucial. There is now a new cybersecurity dataset called Edge-IIoTset [7] that was created for IIoT and IoT applications. Numerous IoT devices, such as heart rate sensors, flame sensors, temperature and humidity sensors, etc., generate the data. The testbed was subjected to 14 different types of attacks, including injection, malware, DDoS attacks, and man-in-the-middle (MITM) attacks. For FL-based tasks, the data distribution needs to be non-independently identically distributed (Non-IID), imbalanced, and reflect the elements of the real-world scenario. For the experimental purpose, we have divided our dataset (Edge-IIoTset) into several local datasets to train them as per requirement for FL. This was necessary because FL-specific datasets were unavailable. We begin by grouping the data and removing duplicates and missing elements such as "NAN" (Not a Number) and "INF" (Infinite Value). Following that, we eliminate additional flow characteristics such as tcp.payload, tcp.options, tcp.len, tcp.flags.ack, tcp.flags, tcp.connection.rst, tcp.connection.fin, tcp.checksum, tcp.ack_raw, tcp.ack, arp.dst.proto_ipv4, ip.dst_host, ip.src_host, and frame.time. In the second step, we performed the data encoding step by using dummy encoding categorical data and by normalizing numeric data using the Z-score normalization defined by (x − µ)/σ, where the feature value is x, the mean is µ, and the standard deviation is σ. Then, the oversampling was performed using the SMOTE. We have applied the feature selection strategy to enhance the proposed model's performance and reduce its training and classification times. The feature selection approach looks for the most pertinent features and eliminates the irrelevant ones. Recursive Feature Elimination (RFE) has been applied in the suggested model. It is a wrapper method that repeatedly assesses how well a certain model performs with various feature combinations. In order to enhance accuracy, RFE recursively removes features from each feature set. The features are then ranked according to the order in which they were eliminated. Table 2 displays the randomly selected data for ML models after data pre-processing (cleaning and splitting) and feature selection, as well as the train and test data sets that are generated. We performed a series of tests to evaluate the efficacy of FL with varying client numbers (from 3 to 15) contributing to model training. We trained our model for a total of 50 epochs before reaching optimal output. 
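The pre-processing pipeline described above (dropping duplicates and NAN/INF rows, removing the listed flow features, dummy-encoding categorical columns, z-score normalization, SMOTE oversampling, and RFE feature selection) might be sketched roughly as follows. The file name, the label column name, the RFE base estimator, and the number of selected features are assumptions for illustration, not values taken from the paper.

```python
# Rough sketch of the described pre-processing, assuming a CSV export of Edge-IIoTset.
import numpy as np
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split

DROP_COLS = ["tcp.payload", "tcp.options", "tcp.len", "tcp.flags.ack", "tcp.flags",
             "tcp.connection.rst", "tcp.connection.fin", "tcp.checksum", "tcp.ack_raw",
             "tcp.ack", "arp.dst.proto_ipv4", "ip.dst_host", "ip.src_host", "frame.time"]

df = pd.read_csv("Edge-IIoTset.csv")                      # placeholder file name
df = df.replace([np.inf, -np.inf], np.nan).dropna().drop_duplicates()
df = df.drop(columns=[c for c in DROP_COLS if c in df.columns])

y = df.pop("Attack_type")                                 # assumed label column name
X = pd.get_dummies(df)                                    # dummy-encode categorical features
X = (X - X.mean()) / X.std()                              # z-score normalization: (x - mu) / sigma

X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)  # oversample minority classes

# Recursive Feature Elimination with an assumed base estimator and feature count.
selector = RFE(RandomForestClassifier(n_estimators=50, random_state=42),
               n_features_to_select=30)
X_sel = selector.fit_transform(X_res, y_res)

X_train, X_test, y_train, y_test = train_test_split(
    X_sel, y_res, test_size=0.2, stratify=y_res, random_state=42)
```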
We investigated the system's efficiency for various client numbers while developing the federated model. Each client received a piece of training data from the deployment dataset that was chosen at random. To examine this potential loss in accuracy, we generated three federated models by sharing the entire training dataset among 3, 9, and 15 clients, and we compared them to a centralized model. Performance Metric The following detection metrics were considered during model evaluation using test data, where TP, FP, FN, and TN denote true positives, false positives, false negatives, and true negatives, respectively: • Precision: the ratio of correctly classified attacks to all instances predicted as attacks, calculated as Precision = TP / (TP + FP). • Recall: the ratio of correctly classified attacks to all actual attacks, determined as Recall = TP / (TP + FN). • F1-Score: the harmonic mean of Precision and Recall, calculated as F1 = 2 · (Precision · Recall) / (Precision + Recall). • True Positive Rate (TPR): a true positive state occurs when an activity is detected by the IDS as an attack and the activity truly represents an intrusion. TPR is sometimes also regarded as the detection rate and can be represented as TPR = TP / (TP + FN). • False Positive Rate (FPR): when the IDS detects an activity as an attack when it is actually benign or normal, the state is known as a false positive. The false positive rate can also be regarded as a false alarm rate and is calculated as FPR = FP / (FP + TN). Performance Evaluation This part contains the findings of the experiment based on centralized learning and our proposed FL-based model performance in intrusion detection using the Edge-IIoTset dataset. Intrusion Detection Using Centralized Methods To evaluate the effectiveness of the proposed model, we first applied two traditional centralized ML approaches for cyber-attack detection, namely, CNN and RNN. The values that were used for the various classifiers' parameters in our suggested approach are listed in Table 3. Table 4 shows the results of ML approaches for a centralized model in terms of Accuracy, Precision, Recall, and F1-score, which indicates how well the model detects benign and attack classes from the dataset. As shown in the table, the Accuracy and F1-Score values can reach as high as 94% and 93% for the RNN and CNN approaches, while the Precision and Recall values can reach as high as 95% (RNN) and 94% (CNN) for identifying benign and malicious traffic, respectively. Despite the fact that centralized models performed pretty well, they are nevertheless susceptible to Single Point of Failure concerns. Intrusion Detection using Federated Method We deployed our model for FL experiments with three client sets, K, with K = 3 (first set), K = 9 (second set), and K = 15 (third set). We utilized two cases to provide data to our various clients: • Independent and Identically Distributed (IID): the distribution of data across the dataset corresponds to the distribution of data for each client. • Non-Independent Identically Distributed (Non-IID): the distribution of data across the dataset is inconsistent with the distribution of data for each client. Table 5 compares the accuracy outcomes of the global models, the best clients, and the worst clients after the 1st and 50th FL rounds using both data distribution sets. As can be observed, the performance for all classes increased as the rounds increased. It is pretty common that in the case of IID, there is a smaller performance gap between the worst and best clients.
The difference, however, is always quite large for Non-IID because some clients only have a few classes. As an illustration, for CNN with K = 9, the difference between the best and worst clients is quite large at the 1st FL round, but it becomes smaller for IID as we get closer to the 50th round. However, the gap for Non-IID is still big. Additionally, it can be seen that the detection accuracy is fairly competitive with the centralized learnings. We can therefore conclude that our FL model is extremely efficient and ensures increased privacy. Table 5 lists the Best Client Accuracy, Worst Client Accuracy, and Global Model Accuracy as the letters B, W, and G, respectively. As shown in Figure 4, we have displayed the learning process vs. accuracy graph for both centralized and FL techniques. The gap between the centralized approach (both CNN and RNN) and the FL method is rather substantial at the beginning. However, the margin shrinks as we move on to higher rounds, and by the 50th round, the results are quite competitive. This demonstrates that our proposed FL method performs quite well in comparison to the centralized ML approaches and has further advantages that we will cover in the next sections. The ROC curves plot the True Positive Rate against the False Positive Rate, which can also be termed the Detection Rate and False Alarm Rate, respectively [32]. Given that all of the AUC (Area Under Curve) values fall between 0.91 and 0.95, we recognize that the performance of both ML approaches using our proposed FL method was quite satisfactory. We have also demonstrated the time consumption for different numbers of FL clients (3, 9, and 15) for the two different data distribution types (IID and Non-IID). As can be seen from the figure, the training time increases as the number of clients increases. It is also evident from the figure that the training process is faster for the CNN method in comparison to the RNN method. Additionally, it can be noticed that the different types of data distribution have no real significant impact on the time consumption, although in Non-IID cases, the time is slightly on the higher side. So, we can conclude that the training time for our proposed method varies depending only on the ML method and the number of clients used. Table 6 compares the efficacy of our work to that of comparable FL-based IDS approaches. The scope of the comparison covers the deployment year, datasets, ML classifiers, number of clients, and data distribution methods. As can be seen from the table, the proposed model is the only one that has addressed both IID and Non-IID data issues, and in the previous section, we have demonstrated the performance of both of these data types. Ours is also the only one that evaluated the performance on the latest dataset (Edge-IIoTset). Discussion Our proposed FL-based intrusion detection model is more effective for the following reasons. • By using FL instead of conventional ML methods, IIoT devices are able to provide data that is both more secure and requires less bandwidth for transmission. Less bandwidth is required as clients do not share whole data; only the learnings of their respective local models are shared. • The massive amounts of private and secret information are no longer accessible through a single server. This ensures the privacy of the users and the security of the data. • Due to the local representation of the models, devices are able to independently predict and recognize network anomalies even when they are disconnected. So, in case of any disconnection, local clients are still able to train their models and detect any kind of intrusion. • As the number of FL rounds progressed, we achieved an intrusion detection accuracy similar to that of centralized ML models. This is due to the fact that after each round, the models' performance gets enhanced by the learnings from all the client ends, and they perform as accurately as a centralized model. Conclusions and Future Scope In this paper, we proposed an IoT intrusion detection system based on federated ML to increase security and privacy. Our primary objective was to identify unwanted intrusions so that IoT networks could be protected. We performed our experiments on a recent dataset called Edge-IIoTset and ran experiments on both centralized and federated systems using two popular ML models called CNN and RNN. The experimental outcome demonstrated that with our proposed FL approach, we can achieve fairly competitive results in intrusion detection. In addition, we compared our technique to other FL-based IDS systems in both IID and Non-IID scenarios. The experiments presented in this paper illustrate its applicability and usefulness and have significant implications for the use of FL in the context of IoT networks.
In our future work, we intend to make the model more reliable when there might be malicious edge nodes on the network. Additionally, we will concentrate on a mechanism that uses an outlier detection filtering technique to prevent poisoning attempts that are injected gradually. The majority of IoT devices are capable of using several types of energy and processing power (CPU cycles per second). Hence, innovative FL protocols are required that offer criteria for choosing a group of local devices with enough resources. The devices must be chosen based on long-lasting backup power, sufficient memory, accurate data, and increased processing capability. If there are dishonest clients and servers, the usual FL method may potentially pose privacy issues. Therefore, further research is necessary on the issue of how to achieve a more reliable FL by removing all possible threats. Additionally, FL has some limitations that can affect the accuracy of the global model, such as devices that stop working in the middle of an operation, slow upload and model update times, clients that do not have much relevant data, etc. For future research, it is important to solve these problems, which make the global model much less accurate.
12,395.6
2023-01-30T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
MITES AS MODERN MODELS: ACAROLOGY IN THE 21ST CENTURY We present a literature survey and analysis of the profile of mites (Acari, exclusive of Ixodida) in recent literature and on the World Wide Web, and compare their prominence to that of spiders (Araneae). Despite having approximately the same number of described species, spiders outshine mites on the Web, although the study of mites (Acarology) is better represented than the study of spiders (Araneology). Broad searches of scientific literature imply that publications on mites exceed those on spiders by 2-3x; however, this dominance was reversed when a smaller number of journals with broad readerships and no taxonomic orientation (e.g., Nature, Science) were surveyed. This latter analysis revealed that the topical content of mite and spider papers in these general-science journals differs significantly. A troubling leveling-off of taxonomic publications on mites also was discovered. We conclude by suggesting some strategies that acarologists and editorial boards might follow in order to raise mites to their proper status as exemplary models for ecological and evolutionary research. INTRODUCTION A decade ago, when we wrote our book Mites: Ecology, Evolution and Behaviour (Walter and Proctor, 1999), it was with the goal of revealing to students, scientists, and laypeople the wonders of the acarological world. We were spurred on both by a love of mites and by our experiences in academia, where we had repeatedly encountered otherwise well-educated colleagues who could not understand why we found mites so fascinating. Yet these same people often accepted work on a related taxon, spiders, as appropriate vehicles for addressing questions in evolutionary biology and ecology. On this the 50th anniversary of Acarologia, the first journal devoted to the study of mites, and coincidentally the 10th anniversary of the publication of our book, we decided to make an assessment of how Acarology has progressed over the last 50 years. We hope that readers will find this paper informative, entertaining, and insightful, particularly for determining which paths Acarologia may wish to tread in the coming years. Mites, or at least the mites that we call "ticks", have been a part of human culture at least since Homer started singing of a parasite on Ulysses' dog nearly twenty-nine centuries ago, and the Acari have been the subject of serious scientific study for about two centuries (Krantz, 2009). But how serious is this study; or rather, how seriously is the discipline of Acarology perceived by society at the end of the first decade of the 21st Century? This time, rather than relying on personal experience, we take advantage of the electronic tools that have flourished since 1999, the internet and associated search engines and databases, to assess how Acarology has penetrated popular culture and the scientific literature. First we will estimate how well acarological topics are represented on the World Wide Web in comparison to other relevant disciplines and terms. We will follow this with a brief synopsis of how Acarology is presented on the web. Second, since spiders are fellow arachnids and have a similar number of described species, we put special emphasis on comparing and contrasting scientific publications on the Acari and Araneae. Where appropriate we compare the prominence of mites to that of spiders to determine if mites are on par with spiders as subjects for research published in some major scientific journals.
1960-1969 1970-1979 1980-1989 1990-1999 2000-2009 decade number of hits Acari(na) In these surveys we did not include Ixodida (ticks) in great detail, in part because of the historical separation between those who study this group and those interested in non-tick Acari (which is reflected in the paraphyletic phrase "mites and ticks"), and in part because of the problem with the word "tick" having many different meanings in English. This latter greatly affects the number of irrelevant returns from general searching on the Web, and to a lesser extent in journals as well (see comments in Results and Discussion). Mites and Acarology on the World Wide Web We used the Google TM Web Search (Google Canada, 2009) with default search settings to estimate the number of sites (hits) with terms or strings of words (in quotes) appropriate to our comparison. All of our searches were carried out in English, but with the Search Language unrestricted (http://www.google.com/preferences?hl=en). Safe Search Filtering was left at the default "moderate filtering" which excludes only "explicit images". Acarological Research in the Scientific Literature To estimate the amount of scientific literature that has been published on mites and spiders, we used two electronic databases available via the University of Alberta library. Biosis Previews ® (1926-2009) is a combination of Biological Abstracts ® and Biological Abstracts Reports, Reviews, Meetings ® and provides a general life sciences abstracting service with a very broad scope. Zoological Record (1864-2009) is the oldest and most comprehensive listing of animal taxonomic information. These databases are available through the ISI Web of Knowledge (2010) and the search engine allows Boolean operators such as 'or' and 'and' to be used to search for compound terms. When compiling totals, publications with unknown publication dates were deleted from totals. Additionally, we used the ISI Web of Knowledge (2010) to compare the 5-year journal citation Impact Factors of the 'high-profile' journal below. Mites versus Spiders in High-profile Journals The existence of a large number of publications on Acari may not accurately reflect the profile of mites in the scientific community. Are acarologists mainly publishing in taxon-delimited journals, and hence are talking mostly to other acarologists, or are they publishing in venues that are regularly read by a broad diversity of scientists? To answer this question, we narrowed our scope of search to a few highprofile journals that cover a wide range of disciplines and taxa, and compared the number of papers on 'mites' to those on 'spiders'. Although the 132 number of extant species of mites probably exceeds that of spiders by an order of magnitude, the number of described species of each taxon is very similar, between~40-50 thousand (Halliday et al. 2000, Chapman 2009). All else being equal, one might expect a similar number of publications on each taxon. We searched 7 journals (5 year ISI Impact Factor) for the time period from January 1999 to November 2009 for articles dealing with "mite(s)" or "spider(s)": Nature (31.434), Science (30.268), Naturwissenschaften (2.338), Proceedings of the Royal Society B (4.952), Proceedings of the National Academy of Science (10.228), Ecology (6.112) and Evolution (5.427). None of these publications is oriented to any particular taxon. 
Our rationale was that appearance in one of these widely cited journals indicated that a topic was considered (by editors, at least) to be of interest to a wide scientific audience rather than to a taxonspecific audience. At the low-impact end of our selection, Naturwissenschaften was included in order to have a representative journal based outside of the U.S. and U.K. Although its impact factor of 2.338 is relatively low compared to the other selected journals, Naturwissenschaften's IF is about twice the impact factor of the highest rated acarological journal. We checked each of the returned search items to check for relevancy (e.g., did "mite" refer to Acari or was it the second half of "ter-mite"?). We divided articles into several types: (1) primary research articles; (2) literature reviews; (3) 'editor's pick' articles in which there is an overview of a paper published in that issue; (4) 'journal club' articles in which a paper published in a different journal is highlighted (in this case, the identity of the other journal was not relevant); (5) book reviews. For each retrieved item we determined the following: the proportion of the item devoted to the mites/spiders (1 = minor, 2 = moderate, 3 = major, 4 = entire); number of species of that taxon (1 = single species, 2 = between 2-5 spp., 3 = 6-10 spp., 4 = more than 10 spp.); and whether the item included aspects of genetics, physiology, biochemistry, development, morphology, evolution, ecology, behaviour, agriculture, forestry, medicalveterinary applications, or applied materials sciences (0 = not included, 1 = included). For this last set of variables, each publication was scored for as many topics as were included. Data were analyzed in two ways. We used a Wilcoxon Signed Ranks test to compare numbers of 'mite' and 'spider' publications in the seven journals. Frequencies of occurrence of the different categories described above were compared between the two taxa using Chi-square tests. For these analyses, we used the statistical software package SPSS version 17.0, and included only 'primary research article' publications. We also compared 'mite' and 'spider' papers in multivariate space using the software package PATN 3.12 (http://www.patn.com.au/patn_v3.htm). For this analysis we restricted the publications to those primary research articles that were entirely devoted to mites or spiders (category 4). This was done in order to reduce the number of objects (publications) to allow particular tests to be done in PATN. The matrix consisted of the publication objects and their attributes: mites/spiders (0 or 1), number of species (1-4), aspects covered (0 or 1 for each of the categories listed above), year of publication. All but the year were used as intrinsic variables, and hence played a role in the construction of the ordination. A 3-D ordination was created using the semi-strong hybrid multidimensional scaling (SSH) algorithm (Belbin, 1989). Correlation of variables with the ordination was assessed using the Monte-Carlo Attributes in Ordination (MCAO) feature, and vectors significant at P<0.05 were plotted . We tested the hypothesis that papers focussed on mites were significantly different from those dealing with spiders using the Analysis of Similarity (ANOSIM) test. ANOSIM is analogous to ANOVA (Belbin, 1989). Mites and Acarology on the World Wide Web (WWW) Acarology, of course, is the study of mites; and thus, 'mite' would seem to be its least common denominator. 
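The journal comparison just described rests on two standard tests: a Wilcoxon signed-ranks test on the paired per-journal counts and chi-square tests on the topic frequencies. A minimal sketch of that analysis in Python is given below; the counts are hypothetical stand-ins (the original analysis was run in SPSS 17.0 and PATN, and the real numbers are in the figures), so only the structure of the computation is intended to be illustrative.

from scipy import stats

# Hypothetical counts of primary research articles per journal (mite, spider), ordered:
# Nature, Science, Naturwissenschaften, Proc. R. Soc. B, PNAS, Ecology, Evolution.
mite_counts   = [12,  9,  8, 14, 10, 35, 20]
spider_counts = [40, 32, 15, 48, 30, 60, 38]

# Paired, non-parametric comparison of 'mite' vs 'spider' publication numbers across journals.
w_stat, p_val = stats.wilcoxon(mite_counts, spider_counts)
print(f"Wilcoxon signed-ranks: W = {w_stat}, p = {p_val:.3f}")

# Chi-square test of whether one topic (e.g. 'behaviour') occurs equally often in mite and
# spider papers. Rows: mites, spiders; columns: topic present, topic absent (hypothetical).
contingency = [[15, 101],
               [120, 153]]
chi2, p2, dof, expected = stats.chi2_contingency(contingency)
print(f"Chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p2:.4f}")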
Using the search engine Google TM (all languages, moderate filtering) and the word 'mite' yielded about 12,400,000 hits (see Table 1). This is much less than some other 4-letter words for animals, e.g. 'duck' produces 58,800,000 hits, 'bull' 106,000,000, and 'tick' 28,900,000. However, all of these simple The two most technical terms for mites are Acari and Acarina. According to Google TM (Table 1), Acari is the more favoured term and is used about four times more frequently. Additionally, Acari is more than twice as commonly found on webpages as Araneae (although Araneae is more than 10 times as common as Ixodida [52,300] and three times as common as Ixodidae [183,000], the largest family of ticks). This somewhat balances the result that 'spider' is almost six times as common as 'mite'. In contrast, 'Aves' a class of mite habitats of some interest, appears about 20 times (25,300,000) more commonly than 'Acari'. It also is worth noting that a search of Google Books produces 3,097 hits for 'Acari' and '22,019' for 'Aves'. A final approach is to search for 'Acarology' itself. On 17 December 2009, Google TM yielded 132,000 hits, but on 29 December 2009 only 129,000 hits. The decline of about 2% is unexplained, but presumably relates to the algorithms used by the search engine and one hopes not a sudden decline in interest in the study of mites. As a discipline, Acarology is often nested within Entomology, and always within Zoology. Not surprisingly, Zoology, the largest discipline receives many more hits, but Entomology does rather well (51%) and Acarology does poorly against both. This time the spiders come to the rescue: Acarology is 29 times more commonly mentioned on web pages than its sister discipline Araneology. These trends carry over when searching for strings with "Department of . . . " too. In conclusion, although mites are well represented on the WWW (despite the exclusion of ticks), much of this results from pages of little immediate interest to acarologists. When more technical search terms are used, the study of mites appears to be poorly represented compared to other disciplines. However, a wealth of acarological informa- tion is available on the web. We have divided our brief survey into three categories and present examples in the tables below (Tables 2-4). General acarological sites including society and journal webpages that provide acarological information and links are presented in Table 2. Of special interest here is the webpage of Systematic and Applied Acarology which pioneered web-based publication of taxonomic Acarology, includes freely available Special Publications, and also hosts an Acarological Reprint Library (http://www.nhm.ac.uk/hosted_sites/acarology-/saas/e-library/index.html). Also, the Acarological Society of Japan has made its papers available for free download as pdfs. Sites that host keys to mite identification are presented in Table 3. These include a variety of formats, some of which require proprietary third-party software to run (see Walter and Winterton, 2007, for a review). However, others can be used by anyone with a (preferably high-speed) internet connection or the ability to download and use Adobe pdf files. In terms of future directions, the most relevant of these keys (Knee and Proctor 2006) is hosted by the Canadian Journal of Arthropod Identification -a journal devoted to the online publication of freely available interactive keys. Finally, sites with lists of mite faunas are presented in Table 4. 
Other than Joel Hallan's monumental compilation on the Arachnida of the world (Hallan, 2005), the fauna list sites seem to be dominated by Oribatida. Acarological Research in the Scientific Literature In order to get a better idea how mites were fairing as subjects for scientific research in comparison to spiders, we used two well known abstracting services to search for publications in scientific journals and books that used the terms Acari, Acarina, or mite compared to Araneae or spider. The first of these, Biosis Previews®, is the more general and recorded almost eighty thousand publications that referred to mites since 1926 (76,993). In contrast, spiders were mentioned in only 30.2% as many studies (23,249). A similar pattern emerged from Zoological Record, a primary taxonomic reference that ex- However, when the number of publications per decade over the last 50 years is considered (Fig. 1), a disturbing trend emerges in the Zoological Record re-sults: the number of papers being published on the taxonomy of mites appears to have leveled off. This result is similar to what Halliday et al. (2000) noted in the description of new species of mites at the end of the last century. No such trend is present in the spider taxonomic research -a strong positive slope is apparent -nor in total publications mentioning mites as shown in the slope for the Biosis Previews® results. Mites versus spiders in general journals Searches of the seven journals over the past decade retrieved 172 returns for "mite" and 456 for "spider"; however, after discarding non-relevant hits the counts were reduced to 116 and 273, respectively. In contrast to the general survey of literature described above, every one of the seven journals surveyed had more spider-related articles than mite articles (Fig. 2). With regard to primary research papers only, there was no significant difference between 'mite' and 'spider' articles based on either proportion of each article devoted to the target taxon (Fig. 3) or the number of species of the target taxon in each article (Fig. 4). These results were somewhat surprising, as we had expected that research involving mites would more often include them as a subset of larger assemblages (e.g., of soil microarthopods) or as symbionts of a taxon that was the major focus of the paper (e.g., nest mites of birds). We had also ex-pected that papers on spiders might more often be focussed on a single species, whereas mite papers would cover many acarine taxa. Although there was a trend towards this (Fig. 4), it was not significant. However, we did find a significant difference in the topics covered by 'mite' vs. 'spider' research articles (Fig. 5). Ecology, genetics and agriculture more often occurred as topics in papers involving mites, whereas behaviour, morphology and materials science occurred much more often in those involving spiders. For materials science, the structure of spider webs was the main theme, and there were no papers at all involving mites or mite silk. These differences were reflected in the ordination analysis (Fig. 6). 'Mite' papers and 'spider' papers fall into different regions of the ordination. This degree of clumping is much tighter than one would predict by chance (ANOSIM statistics: real f-ratio = 1.45, best of 100 randomizations = 1.06, P < 0.01). Many of the vectors that most strongly match the mite/spider divide (e.g., morphology, ecology, genetics, agriculture) are those that differed greatly between the taxa in Fig. 5. 
It might be argued that this clumping is an artifact of having included taxon as one of the intrinsic attributes in the ordination. However, when we re-ran the analysis with the influence of taxon removed, the clumping of 'mite' vs. 'spider' papers was still significant (real f-ratio = 1.095, best = 1.06, P < 0.01). (FIGURE 6: SSH ordination of primary research articles with a taxon-devotion level of 4 in seven target journals (see Methods); stress = 0.189. Mite papers are indicated by red circles and spider papers by black circles. Two pairs of the three axes are shown (2 and 3, and 1 and 3); the remaining pair had most of the mite papers hidden behind the spider papers, making interpretation difficult. Vectors plotted are those that were correlated with the ordination at P<0.05, and hence were important in creating the topology of the ordination. Vectors point towards areas in which they have higher values. In the case of the taxon vector, mite papers were given a value of 0 and spider papers a value of 1, therefore the vector points towards the cluster of black circles. ANOSIM statistics: real f-ratio = 1.45, best of 100 randomizations = 1.06, P < 0.01.) The disciplinary separation between those who study ticks and those who study non-tick acarines was strongly reflected in the journal survey: none of the 116 articles returned in the search for Acari* or mite* was about ticks. We quickly did a survey of those same seven journals for the same time period in order to determine how many additional papers included either "ixodid*" or "tick*". The returns were as follows, presented in the form of "(total/relevant)": Nature (256/40), Science (18/6), Naturwissenschaften (13/5), Proceedings of the Royal Society B (7/6), Proceedings of the National Academy of Science (27/27), Ecology (6/6), Evolution (3/3). Although no detailed analysis of content was done, the majority of the papers dealt with ticks as vectors, with a focus on disease or bacterial biology rather than on the biology of the ticks themselves. It is interesting to note that only 3 of the 93 relevant articles included "ixodid*"; it seems that ticks tend to be presented as taxonomy-free entities in these journals. CONCLUSIONS Although mites do have a significant presence on the World Wide Web, the results of our surveys indicate that Acarology as a science still has a long way to go to achieve parity with related disciplines in high-profile scientific publications. We note that many of the high-profile journal articles on spiders highlighted fascinating aspects of their behaviour and morphology: courtship behaviour, male ornamentation, male and female genitalic extravagances; maternal care and social behaviour; predatory behaviour and web structure. Mites can match spiders in all of these areas (Walter and Proctor, 1999; Krantz and Walter, 2009), perhaps with the exception of web diversity (but see Saito 2010), but acarologists have yet to convince a significant number of research scientists that this is so. We also find it unfortunate that, although the Acari most likely have an extant diversity many times that of spiders, the number of described species is similar. The traditional role of acarological journals has been to publish descriptions of new species, and this has been a strong point of the journal Acarologia. However, the apparent loss of momentum in the publication of taxonomic acarological papers apparent in Fig. 1 and in the description of new species noted by Halliday et al.
(2000) suggests that we are finding it difficult to keep up. Part of this is undoubtedly due to the small number of acarologists and the low regard in which basic taxonomic work can be held in many academic circles (see Walter and Winterton, 2007). Another aspect of this problem may be that journals willing to accept acarological taxonomy are approaching saturation. As Acarologia enters its second half-century, it will be interesting to see how our premier taxonomic journal responds. We offer two recent journal models as examples of ways that may be considered by Acarologia and other acarological journals -and by acarologists publishing in other journals. The first has already been discussed, the Canadian Journal of Arthropod Identification (CJAI). Perhaps the primary impediment to finding evolutionary biologists or ecologists who are willing to work with mites is the difficulty of identifying them. CJAI publishes keys on-line where they can be used by anyone interested to identify arthropods -and then use them in their research. A second model is the highly successful journal Zootaxa (2009). Zootaxa is the fastest and best refereed taxonomic journal that we have experienced. Quick reviews by knowledgeable specialists and rapid publication facilitates bringing taxonomic knowledge to other biologists (and also helps academic acarologists to publish and avoid perishing). Zootaxa encourages authors to purchase open-access publication options for their papers, which allows any user with access to the internet to download the paper. Finally, there is a technique that many journals have experimented with in recent years: a lead or forum article for each issue that is designed to grab the attention of a broad array of researchers. Perhaps if we make more of an effort to introduce outsiders into the exciting realm of mites, our science will flourish in the 21 st Century.
5,063.8
2010-04-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Refinement of a Deflection Basin Area Index Method for Rigid Pavement The accuracy of the prediction and the reliability of the deflection basin area index method for rigid pavement heavily depend on the sensor layout scheme and the calculation theory. A total of 154 groups of deflection data were generated by finite element software under different conditions, and the simulated results were in good agreement with the analytical solutions for thin plates on an elastic foundation. The accuracy of several types of deflection basin area index methods was assessed with this database of pavement deflection results. It is found that, apart from locally densifying the distal sensor layout, replacing the deflection value of a specific point with the average of all measured points in the deflection basin area index is also an effective measure to improve the back-calculation accuracy of pavement structural parameters, by up to 40%. Finally, a new deflection basin area index method was proposed based on the deflection database and regression analysis. Comparisons between the theoretical calculations of several models and the practical deflection data of an airport in northern China revealed that the newly proposed method performs better in both back-calculation accuracy and efficiency, which can provide a valuable guideline for practical engineering design. Introduction Sufficient load capacity is essential for an airport runway because of the considerable impact loads during aircraft landing. The deflection, i.e., the total deformation of the airport pavement under load, is a critical parameter reflecting the overall load-bearing capacity of pavement structures. The heavy weight deflectometer (HWD) test, a vital technology for the strength design and performance evaluation of pavement structures, has been widely adopted to determine the deflection value [1][2][3][4]. In general, the primary methods for back-calculating the structural parameters of pavements from deflection values can be classified as follows. Intelligent optimization methods, such as neural network and genetic algorithms, have been developed to find the optimal pavement structural parameters iteratively [5,6]. However, the accuracy of these methods heavily depends on the reliability of the algorithm and the calibration of some internal parameters [7,8]. Darter et al. [9] developed a back-calculation technique, the point-by-point fitting method, which determines the optimal solution by comparing the objective-function value of the calibration error with the actual deflection basin through an iterative approach. Nevertheless, this method suffers from low efficiency in the iterative process. Besides, Sun et al. [10] reported that the inherent information of the inert point on deflection curves remains constant regardless of variations in the resilient modulus of pavement slabs. Although the back-calculation of pavement structural parameters can be conducted once the specific inert point is obtained, the procedure for determining this point is relatively complicated [10,11]. Compared with these back-calculation methods, the derived index method of the deflection basin [12] has been widely recognized by researchers and engineers, mainly because it simplifies the back-calculation process. The method is also adopted by the Chinese code MH/T 5024-2009 [13] and the U.S. Federal Aviation Administration [14] for the back-calculation of pavement structural parameters.
This method simplifies the back-calculation process by constructing a deflection basin area index, and its implementations include the fitting-formula method, the back-calculation software method, and the graphic method. However, related studies [15,16] have revealed that normalizing the basin area index with only one assigned measurement point during the construction of the index causes inferior accuracy and higher variability. To incorporate the effect of different measuring points, Lin et al. [15] improved the back-calculation method by minimizing the calculated errors of an objective function, but the efficiency was significantly reduced. In short, either the calculation accuracy of these methods is not up to standard, or the calculation process is too complex, which is not conducive to direct engineering application. With this background, finite element (FE) simulation was first applied to generate deflection information under different working conditions. Then, an error analysis was systematically conducted for the existing deflection basin area index methods based on the assembled database, and a new, improved approach was developed in this study for practical engineering. By comparing with the measured data of an airport, the prediction accuracy of this method and of the other selected methods was evaluated. The results show that the improved deflection basin area index method has the advantages of simple calculation and high accuracy; in practical engineering applications it can save time while delivering fast and accurate results. Finite Element Simulation Since multiple variables cannot be varied simultaneously in practical engineering, the effect of different conditions on the structures cannot be comprehensively estimated from field tests alone. Moreover, the high cost of the test setup and the measurement strategy may have a great impact on the accuracy of test results. Over the past decades, the FE technique has become increasingly popular in academic research [17][18][19][20] because it can address problems that cannot be realized in the field. Thus, finite element software was adopted in this study to simulate the deflections of airport pavement structures under varied working conditions. Parameter studies were also conducted with the verified FE model for the subsequent theoretical analysis. Parameter Setting. The sectional size of the pavement slab was set as 30 m × 30 m to reduce the influence of the edge effect on the simulation, because the deflection value is independent of the slab size for positions more than 0.7 m away from the slab edge [21]. As per the Chinese code MH/T 5024-2009 [13], the basic parameters of the model were as follows: Poisson's ratio μ = 0.15; elastic modulus of the thin concrete plate, 32 GPa. Boundary Conditions and Mesh. A spring foundation was added at the bottom surface of the pavement slab, and the modulus of subgrade reaction was defined as k to simulate the equivalent reaction modulus of the different soil layers under the pavement in actual engineering. A circular uniform load (0.15 m in radius) was applied on the upper surface of the model. No constraints were arranged on the other four sides. A static load of 1.5 MPa was used to simulate the maximum impact load produced by the HWD test and to obtain the corresponding deflection values, following the literature [19].
A free tetrahedral mesh was chosen, and the mesh element size was smaller than the thickness of the pavement slab h. FE Model Results. One hundred and fifty-four groups of test data (see Table 1) were generated by uniformly extracting the deflection value at positions 0 cm, 20 cm, 30 cm, 45 cm, 60 cm, 90 cm, 120 cm, 150 cm, and 180 cm away from the load center of the model. In this process, the modulus of subgrade reaction k and the thickness of the pavement slab h were varied continuously. Figure 1 shows the layout of the measuring points mentioned above. Figure 2 shows the simulation results for a pavement slab 0.28 m in thickness with a modulus of subgrade reaction of 125 MN/m³. Furthermore, the value of h can be determined from the slab thicknesses suggested for FE simulation by Cheng et al. [19]. The empirical value of the subgrade reaction k can be taken from the Chinese code GB 50307-2012 [22], and the selected ranges of h and k are presented in Table 2. To facilitate the following discussion, each working condition is given a name composed of letters and numbers. The first letter and the subsequent number represent the thickness of the slab (unit: cm), and the second letter and the corresponding number stand for the modulus of subgrade reaction (unit: MN/m³). For example, H28K125 represents the working condition with a pavement thickness of 0.28 m and a modulus of subgrade reaction of 125 MN/m³. N represents the sensors in Figure 1. Winkler Foundation Model. Chen et al. [20] found that the Winkler foundation model performs better than the elastic half-space foundation model in calculating the structural responses of the pavement. In the Winkler foundation model it is assumed that the pressure p at one point on the foundation surface is proportional to the settlement w at the same point, and that p is independent of the pressure at surrounding points. The relationship between p and w can be written as p = k · w, which means that every point of the foundation is supported by a spring that works separately from the springs at other locations. The analytical equation for the deflection of a thin slab on a Winkler foundation under a uniformly distributed circular load is given by equation (1) [23], which provides a reliable standard for verifying the deflection results obtained from the FE model. In equation (1), w(r) represents the deflection value at a distance r from the load center (m); q is the circular uniform load (Pa); R is the radius of the loaded area (m); l is the radius of relative stiffness (m), which can be calculated from equation (2) according to Ioannides [12]; J_0 is the zero-order Bessel function; J_1 is the first-order Bessel function; r is the distance of a measurement point from the load center (m); and t is the integration variable. In equation (2), E represents the elastic modulus of the pavement slab (Pa) and μ represents Poisson's ratio. (Figures 1 and 2: circular load plate and pavement slab over the subgrade; arrangement and placement of the sensors.) It can be seen from Table 1 and Figure 3 that the difference between the simulated and theoretical values is rather small, and the deviations of the points within 20-150 cm of the load center are even lower than 5%.
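As a cross-check of the kind just described, equation (1) can also be evaluated numerically at the nine sensor offsets and compared with the FE output. The sketch below is a minimal Python implementation assuming the standard dense-liquid form of the radius of relative stiffness, l = [E h^3 / (12(1 − μ^2) k)]^(1/4), which is what equation (2) is understood to give; the parameter values are illustrative.

import numpy as np
from scipy.special import jv
from scipy.integrate import quad

E  = 32e9     # elastic modulus of the concrete slab, Pa (as per MH/T 5024-2009)
mu = 0.15     # Poisson's ratio
h  = 0.28     # slab thickness, m (working condition H28)
k  = 125e6    # modulus of subgrade reaction, N/m^3 (working condition K125)
q  = 1.5e6    # uniform contact pressure, Pa
R  = 0.15     # radius of the loaded circle, m

# Radius of relative stiffness (assumed standard form of equation (2))
l = (E * h**3 / (12 * (1 - mu**2) * k)) ** 0.25

def deflection(r):
    """Equation (1): w(r) = (qR/k) * (1/l) * integral of J0(rt/l) J1(Rt/l) / (1 + t^4) dt."""
    integrand = lambda t: jv(0, r * t / l) * jv(1, R * t / l) / (1.0 + t**4)
    integral, _ = quad(integrand, 0.0, np.inf, limit=200)
    return q * R / k * integral / l

offsets = [0.0, 0.20, 0.30, 0.45, 0.60, 0.90, 1.20, 1.50, 1.80]   # sensor positions, m
for r in offsets:
    print(f"r = {r:4.2f} m  ->  w = {deflection(r) * 1e6:7.1f} micrometres")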
Moreover, the deviation first decreases and then increases with increasing slab thickness, and it increases with the modulus of subgrade reaction. For slabs with a thickness of 0.24-0.40 m, the deviations are all lower than 10%. However, when the slab thickness exceeds or falls below this range, the deviation between the two methods gradually increases with increasing modulus of subgrade reaction. For example, when the thickness of the slab is 0.44 m, the maximum deviation of the measurement point at the load center is 10.96%, and the deviation of the measuring point 180 cm away from the load center is 11.82% on the slab with a thickness of 0.2 m. The reason is probably that the Winkler foundation model was originally developed on the basis of thin-slab theory, and a thick slab may be squeezed vertically at the loading point, resulting in the deviation. The accuracy of the predicted solution may also be degraded considerably for slabs with a relatively high modulus of subgrade reaction. Furthermore, the Winkler foundation model assumes no interaction between different points, whereas weak interactions exist in the FE model. When the slab is relatively thin, this weak interaction means that the cumulative deviation may introduce a significant error at the point farthest from the loading point as the modulus of subgrade reaction increases. Therefore, to ensure the accuracy of the verification and the rationality of the model parameters relevant to engineering applications, the suggested range of slab thickness is 0.24-0.40 m. General Description. Ioannides [12] illustrated that the deflection basin area index A_w (see equation (3)) is a geometric property of the deflection basin, which can be calculated by dividing the basin into several fine trapezoids, as shown in Figure 4. Then, A_w is normalized by the deflection value w of a designated measuring point, and can be applied to describe the geometric information of the deflection basin. In equation (3), s represents the distance between two adjacent measuring points (m); w is the deflection of the designated measuring point (m); and w_i is the deflection of measuring point i (m). Based on the theory of slabs on an elastic foundation, the deflection basin area index method is a back-calculation approach that determines the pavement structural parameters from the deflection values. The back-calculation process can be stated as follows. According to equations (1) and (2), the deflection value at a given point is determined by the quantity qR/k and the integral (1/l) ∫₀^∞ [J₀((r/l)t) J₁((R/l)t)/(1 + t⁴)] dt. However, it is difficult to calculate the modulus of subgrade reaction k and the radius of relative stiffness l simultaneously in practical engineering. To separate these two variables, the deflection basin area index A_w becomes a necessity, and the relationship between A_w and l can be established. Then, according to the expression for the deflection coefficient w(l), the equations for the modulus of subgrade reaction k and the elastic modulus of the pavement slab E are also available, as given by the following equations, where w(r_0) is the deflection value at a distance r_0 from the load center (m) and w(l) is the deflection coefficient, equal to (1/l) ∫₀^∞ [J₀((r/l)t) J₁((R/l)t)/(1 + t⁴)] dt. Method Introduction.
So far, the conventional deflection basin area index methods with satisfactory accuracy mainly include the MH/T 5024-2009 method [13], the Strategic Highway Research Program (SHRP) 4-in, SHRP 5-outer, US Air Force (USAF) 6-outer, and SHRP 7-in methods [24]. It should be noted that the number represents the number of measuring points, and the labels "in" and "outer" refer to near-side measuring points (measurement ranges close to the load center) and far-side measuring points (measurement ranges beyond the load center), respectively. Figure 5 shows the layout schemes of the measuring points of the methods mentioned above. (Figure 4: the area of the deflection basin is obtained as the sum of trapezoidal strips, S = S1 + S2 + S3 + S4 + S5.) Cheng et al. [19] analyzed the back-calculation accuracy of the five deflection basin area index methods mentioned above for pavement structural parameters and reported that the results based on the far-end measuring points were more accurate. Furthermore, it has also been observed that sufficient measuring points and a reasonable form of the best-fitting formula can significantly improve the prediction accuracy. In addition, because most of the existing methods cannot incorporate the deflection information of every measurement point, Lin et al. [15] developed a revised deflection basin area index method to overcome this deficiency, using the point-by-point fitting method [9] on top of the traditional deflection basin area index method. The results revealed that the new method can provide more reasonable back-calculation results. However, this method requires multiple iterations, which makes the calculation process quite complicated. Method Evaluation. Based on the database described above, the accuracy of the methods of MH/T 5024-2009 [13], SHRP 7-in [24], Cheng et al. [19], and Lin et al. [15] is systematically evaluated in this section. Nine groups of representative working conditions are compared. The results are presented in Figure 6 and Table 3. It should be noted that the calculated value of the modulus of subgrade reaction k is used to assess the accuracy. As illustrated in Figure 6, although the SHRP 7-in [24] and MH/T 5024-2009 [13] methods adopt similar layouts of the near-end measuring points, smaller back-calculation deviations are observed for the former. This is because the measuring points in the SHRP 7-in method [24] are arranged densely near the loading point, where the slope of the deflection basin curve changes significantly, so the shape of the deflection basin is fully captured, resulting in higher accuracy than MH/T 5024-2009 [13]. Since the far-end measuring points are used in the method of Cheng et al. [19] to conduct the back-calculation, the predicted errors decrease markedly for the reason stated before. For all of these methods except that of Lin et al. [15], the deflection value of a specific measuring point is used to normalize the deflection basin area index A_w. Thus, when the information at that specific measuring point is inaccurate, considerable errors in the predicted results may emerge, and the back-calculation process may become unstable as well. Moreover, if the deflection basin area index is normalized by the deflection value at different measuring points, multiple-solution problems also need to be considered. To avoid these potential problems, Lin et al.
[15] obtained the back-calculation results by minimizing the deviations as the objective function, which exhibits the best performance among all the selected methods, as shown in Figure 6. However, the low efficiency and complex calculations of this method mean that it cannot be applied directly in practical engineering. Proposed Method. To balance prediction accuracy and calculation complexity, seven locally densified far-end measuring points were adopted, at positions 30 cm, 45 cm, 60 cm, 90 cm, 120 cm, 150 cm, and 180 cm away from the load center, as shown in Figure 7. Besides, the average deflection w̄ of all the above points was applied directly to determine the deflection basin area index A_w, and the resulting modulus of subgrade reaction k was recalculated as well (equations (6)-(8)). The most evident advantage of this method is that the deflection information of all measuring points is taken into account, making the predicted results more accurate and stable and effectively avoiding the multiple-solution problem. Equation (6) reads A_w = [0.15/(2w̄)] · [w_30 + 2w_45 + 3w_60 + 4(w_90 + w_120 + w_150) + 2w_180], where w_j is the deflection of the measuring point j (m) and j represents its distance from the load center (cm). Based on the deflection results from the FE simulations under different working conditions (including different moduli of subgrade reaction k and different thicknesses of the pavement slab h), the expression for the radius of relative stiffness l was obtained by regression analysis [25] using equations (2) and (6), as presented in equation (9); next, the deflection coefficient w(l) was regressed through equations (8)-(9), as presented in equation (10). Figures 8 and 9 show that the regressed trends almost coincide with the simulation results, with R² equal to 0.9927 and 0.9995, respectively. It should be noted that, under all working conditions, the deflection results for slabs of a given thickness are represented by the same label, the number following the letter "H" referring to the thickness of the slab; X represents all working conditions for that plate thickness. Data Source. An airport in northern China was selected to obtain practical deflection results. The runway is 2200 m long and 45 m wide, including two vertical contact lanes and six parking aprons. The airport is located in a frozen-soil area, and the soil layers are topsoil/miscellaneous fill, fine sand, silty clay, and bedrock (moderately and strongly weathered granite) [26]. The runway has a severe frost-boiling problem due to the freeze-thaw effect and constant impact loads. Thus, an HWD deflection test was carried out on the existing pavement to assess the safety grade of the airport, as shown in Figure 10, covering all functional regions, as listed in Table 4. Verification Results. To verify the rationality of the proposed method, actual deflection data from each functional area of the airport were randomly selected to examine the back-calculation accuracy of the newly proposed approach and of three other methods: MH/T 5024-2009 [13], Cheng et al. [19], and Lin et al. [15]. Based on the measured deflections, the deflection basin area index A_w can be determined from equation (6). Then, the radius of relative stiffness l and the deflection coefficient w(l) are obtained from the proposed equations, and the modulus of subgrade reaction k and the elastic modulus of the pavement slab E can also be derived from equations (8) and (5), respectively.
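The complete back-calculation chain of the proposed method (measured deflections → A_w → l → w(l) → k and E) can be sketched as follows. The fitted relations of equations (9) and (10) are regression results whose coefficients are not quoted in the text, so the two functions below are hypothetical stand-ins for them; the final step also assumes that equation (5) is the usual inversion of the radius-of-relative-stiffness expression, E = 12(1 − μ^2) k l^4 / h^3.

import numpy as np

def basin_area_index(w):
    """Equation (6): trapezoidal area of the basin between 0.30 m and 1.80 m,
    normalised by the mean deflection of the seven far-end points."""
    w30, w45, w60, w90, w120, w150, w180 = w
    w_mean = np.mean(w)
    return 0.15 / (2 * w_mean) * (w30 + 2*w45 + 3*w60 + 4*(w90 + w120 + w150) + 2*w180)

def radius_of_relative_stiffness(A_w):
    return 0.6 * A_w            # hypothetical linear stand-in for the regression of eq. (9)

def deflection_coefficient(l):
    return 0.12 / (1.0 + (l / 0.8) ** 2)   # hypothetical stand-in for the regression of eq. (10)

q, R, mu, h = 1.5e6, 0.15, 0.15, 0.28                       # load and slab data as in the FE study
w_meas = np.array([124, 110, 98, 76, 58, 46, 38]) * 1e-6    # hypothetical deflections at 30-180 cm, m

A_w = basin_area_index(w_meas)
l   = radius_of_relative_stiffness(A_w)
wl  = deflection_coefficient(l)
k   = q * R * wl / np.mean(w_meas)          # equation (8), assumed analogous to (4) with the mean deflection
E   = 12 * (1 - mu**2) * k * l**4 / h**3    # equation (5), assuming the standard inversion for E
print(f"A_w = {A_w:.3f} m, l = {l:.2f} m, k = {k/1e6:.1f} MN/m^3, E = {E/1e9:.1f} GPa")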
Therefore, the theoretical deflection values calculated by equation (1) with the above structural parameters were compared with the actual measured values, and the difference between the two is shown in Figure 11 and Table 5. As shown in Figure 11, the methods of Cheng et al. [19] and MH/T 5024-2009 [13] give lower accuracy and unstable predictions, which indicates that the information of the specific measuring point adopted to normalize A_w may have a great impact on the back-calculation results in the conventional deflection basin area index methods. For the proposed method and the method of Lin et al. [15], more stable results are obtained because all measuring points are considered. Moreover, the new method proposed in this study exhibits better performance, with lower predicted errors and higher computational efficiency than the method of Lin et al. [15], which makes it more convenient for practical engineering application. (Figure 11: the deviation sum of the deflection for each method.) Conclusions FE modeling was adopted to simulate the pavement structure in this study, and a parametric study was carried out to develop a new deflection basin area index. The main conclusions can be drawn as follows: (1) By comparing the deflection values obtained from the FE simulation with the theoretical values calculated by the Winkler foundation model, increasing deviations are observed with increasing modulus of subgrade reaction when the pavement slab is either too thick or too thin. Thus, considering the accuracy of the FE method and the actual demands of practical engineering, the suitable range of slab thickness for the back-calculation of pavement structural parameters is suggested to be 0.24-0.4 m. (2) Based on the established database, the back-calculation results of several existing deflection basin area index methods reveal that increasing the number of local measuring points under the same layout scheme can improve the accuracy of the back-calculation results. Moreover, by using the average deflection of all measuring points instead of a specified point in the normalization procedure, the back-calculation becomes more stable and the multiple-solution problem can also be alleviated. (3) Based on the deflection database obtained by FE simulation under different working conditions, a refined deflection basin area index method is proposed. Actual measured data from an airport in northern China are applied to verify the rationality of the new method. Comparisons between the test results and the theoretical predictions illustrate that the new method possesses higher accuracy and efficiency, which may provide valuable guidance for practical engineering applications. Data Availability The data used in this paper are all from the finite element simulations and the engineering measurements mentioned in the study. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this study.
5,392.8
2021-01-01T00:00:00.000
[ "Engineering" ]
Negative Curvature Hollow-Core Optical Fiber Fei Yu and Jonathan C. Knight (Invited Paper) Abstract-The background, optical properties, and applications of low-loss negative curvature hollow-core fiber are reviewed. Data on spectral attenuation are collated and extended. Index Terms-Fiber optics, microstructured optical fiber. I. INTRODUCTION Negative curvature hollow core optical fiber (NC-HCF) is a novel hollow core optical fiber (HCF) which has emerged over the last few years (Fig. 1) [1]-[8]. NC-HCF is characterized by the inverted curvature of its core wall, and usually exhibits multiple spectral transmission bands of low attenuation. The simple structure of the fiber cladding allows flexible tailoring of the NC-HCF design and dimensions for specific wavelengths and applications [4], [5], [7], [9]. Since their first appearance, NC-HCFs have been applied in high power/ultrafast laser delivery [10]-[14] and several other applications [15]-[20]. This paper comprises eight sections: Section II consists of a brief history of NC-HCF, while the optics of NC-HCF is discussed in Section III. Sections IV to VI collate and summarize the guidance properties of NC-HCFs, including mode attenuation, bending loss, dispersion and nonlinearity. Section VII reviews some applications of NC-HCFs and Section VIII contains conclusions. II. A BRIEF HISTORY A long time ago, "hollow core optical waveguides" referred to long cylindrical tubes and rectangular stripes made of dielectric or even metal. In 1936, Carson et al. theoretically studied the potential of hollow core fiber made of metal for long-haul electromagnetic wave transmission [21]. Later, in 1964, Marcatili and Schmeltzer studied the feasibility of using dielectric HCF in telecommunications [22]. They demonstrated analytically that the trade-off between modal leaky loss and bending loss would fundamentally limit the application of dielectric hollow core fiber for long-haul optical signal transmission. Nevertheless, through advances in inner-surface coating techniques, HCF achieved great successes in some fields such as mid-infrared light transmission, particularly in CO2 laser delivery for industrial applications [23]. (Fig. 1: examples of NC-HCF designs: (a) [1]; (b) NC-HCF made of silica, with non-touching capillaries in the cladding [5]; (c) NC-HCF made of silica with ice-cream-cone-shaped capillaries in the cladding [3]; (d) NC-HCF made of silica with non-touching capillaries, where extra capillaries are added to reduce the coupling between the core and cladding modes [33].) By the end of the 20th century, the emergence of hollow core photonic bandgap fiber (HC-PBG) sped up the development of HCFs in both theory and applications enormously [24], [25]. On one hand, it brought in new concepts of light guidance in HCFs and boosted the development of fiber optics; on the other hand, it creatively introduced micro-structure into optical fiber design and inspired more novel designs of optical fibers. The state-of-the-art HC-PBGs already demonstrate extremely low transmission loss, comparable to that of commercial optical fibers [26], [27]. "Kagome" fiber was another important type of HCF, first reported in 2002 [28]. The Kagome fiber usually exhibits multiple transmission bands and overall covers a broader spectral range than HC-PBGs.
Numerical simulations show that the "Kagome" lattice supports no photonic bandgap which make the Kagome fiber distinctive [25], [29]. The appearance of NC-HCF can be traced back to the discovery of the importance of core wall shape in Kagome fiber in 2010 [30]. A Kagome fiber with negative curvature core boundary unexpectedly exhibited a lower attenuation than regular ones. A series of subsequent experiments confirmed the significance of core wall shape in the reduction of attenuation in such fibers [31]. In 2011, Pryamikov and his colleagues fabricated the first NC-HCF for 3 μm wavelength transmission [1]. A 63 cm long NC-NCF was fabricated and several transmission bands were observed from 1 to 4 μm. Afterwards, Kosolapov et al. extended the transmission window to 10.6 μm for CO 2 laser delivery by using chalcogenide glass [2]. In 2012, a silica NC-HCF with low loss transmission in the mid-infrared spectral region from 3 to 4 μm was fabricated with minimum attenuation of 34 dB/km [3]. This type of NC HCF is characterized by "ice-cream-cone" shaped cladding capillaries, which are formed during the fiber drawing by the balance between gas pressure in the holes pressure and the surface tension of fiber material. Such fiber was later successfully demonstrated to deliver high energy microsecond pulses at 2.94 μm for invasive surgical laser procedure proposal [10], [11]. Attenuation figures of 24.4 and 85 dB/km were subsequently recorded in NC-HCFs of similar structure at 2.4 and 4 μm wavelength, respectively [9]. After demonstration in the mid-infrared spectral region, low loss NC-HCFs for shorter wavelength transmission in the visible and near-infrared were developed [13], [14], [18]. Recently, NC-HCF with attenuation of 0.15 and 0.18 dB/m at 532 and 515 nm, respectively achieved single-mode, stable transmission of nanosecond and picosecond pulses with 0.57 mJ and 30 μJ [13]. 40 dB/km was measured at 1064 nm in NC-HCF which was used as part of ring cavity in a mode-locked Ytterbium fiber laser with 37-11 MHz repetition rate [18]. Early in 2013, Kolyadin et al. reported the first design of open boundary of core wall in NC-HCF. This type of NC-HCF has contactless capillary cladding, and demonstrated low loss transmission of light in the mid-infrared spectrum range from 2.5 to 7.9 μm [5]. In 2014, greatly reduced bending loss in similar fibers was reported by Belardi et al. [8], [32]. This design was soon applied to shorter wavelength transmission in the nearinfrared spectrum [33]. In 2013, THz guidance in NC-HCF made of polymethylmethacrylate was demonstrated by Setti et al. [4]. A further design variation of NC-HCF was proposed by Belardi and Knight in 2014 [6]. By adding extra anti-resonant elements in the cladding, numerical simulations predicted that the minimum loss of NC-HCFs could reach below 1 dB/km [6], [33]. III. MODELING AND ANALYSIS NC-HCF has a simple structure compared to that of HC-PBG and Kagome fiber [25], [29]. It has no periodic cladding, and does not possess a photonic bandgap, which explains the leaky nature of such fiber. The leaky nature and antiresonant guiding mechanism make it natural to compare NC-HCF with Kagome fiber. Both have multiple transmission bands and similar attenuation figures at comparable wavelengths [34], [35]. Indeed, seeing the cladding of a Kagome fiber can be reduced to just one single layer without significantly increasing attenuation [36] we suspect that the guidance in these two fiber designs is very similar. 
However, the simple geometry of NC-HCF belies the lack of a simple quantitative model of its optical performance. Currently the modeling of NC-HCFs depends heavily on numerical simulations. The Marcatili and Schmeltzer model, the anti-resonant reflecting optical waveguide (ARROW) model and the coupled-mode model are most commonly used to explain the guidance mechanism of NC-HCFs and other leaky HCFs. A. Marcatili and Schmeltzer's Model In 1964, Marcatili and Schmeltzer first studied analytically the mode properties of dielectric HCF [22]. In their pioneering work, they derived formulas for mode attenuation and bending loss, which revealed the basic properties of modes in HCFs. An idealized HCF consists of a circular core surrounded by an infinite, homogeneous, non-absorptive dielectric medium of higher refractive index than the core material (usually gas/vacuum). Due to this inverted refractive index contrast, total internal reflection cannot occur when light is incident from the core onto the interface between the core and cladding. The partial reflectivity at the core boundary implies an inevitable loss for light propagating in the core. The mode attenuation and the corresponding propagation constant are written as equations (1) and (2) [22]. Here u_νm is the mth zero of the Bessel function J_(ν−1), where ν and m are the azimuthal and radial mode numbers; λ is the wavelength; r is the core radius; n_core is the refractive index of the core medium; and V_ν is a constant determined by the cladding refractive index and the mode order [22]. Equations (1) and (2) are the most basic formulas for understanding modes in all leaky HCFs. As r/λ grows bigger, the attenuation of the modes is quickly reduced. For a mode of a specific order, a higher propagation constant is required to satisfy the transverse resonance condition in a bigger core. Such a larger core gives an increased glancing angle for light incident at the core boundary, which results in a higher Fresnel reflection. As a result, a mode in a larger core has a smaller attenuation. Marcatili and Schmeltzer's model fails when complex structures are introduced into the cladding of the HCF. The multiple reflections from the structured cladding can reduce or increase the attenuation of leaky modes by constructive/destructive interference, and they affect the dispersion of the modes too. B. ARROW Model The ARROW model was first proposed in 1986 to explain the enhanced confinement in a planar waveguide with a series of high-low index regions forming the cladding [37]. In 2002 it was used to explain the light transmission in HC-PBG [38]. Soon it was being applied to a range of micro-structured HCFs. The ARROW model is a 2-D model which can be extended to three dimensions. It approximates the cladding of HC-PBG as an array of high and low refractive index layers. Each higher refractive index layer can be considered as a Fabry-Perot resonator as in Fig. 2, which can enhance or decrease the confinement of light in the air core under resonant or antiresonant conditions. Assuming r/λ ≫ 1, we have β ≈ 2πn_1/λ according to (2). In Fig. 2, the resonance wavelength is calculated as in equation (3), where n_1 and n_2 are the low and high material refractive indices, respectively; d is the thickness of the high-index layer; and m is an integer representing the order of the resonance. At the resonance wavelengths, the core mode experiences enhanced attenuation by leaking through the high-index layer. At wavelengths away from the resonance, the reflectivity of the high-index layer increases and the attenuation of the leaky mode is reduced.
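For reference, the Marcatili and Schmeltzer attenuation and propagation constant and the ARROW resonance condition are usually written in the following standard forms; this is our transcription, consistent with the symbols defined above and with the scalings quoted in the text, rather than a verbatim copy of equations (1)-(3) of the cited papers.

% Marcatili--Schmeltzer leaky-mode attenuation and phase constant for a hollow core of radius r
\alpha_{\nu m} \simeq \left(\frac{u_{\nu m}}{2\pi}\right)^{2} \frac{\lambda^{2}}{r^{3}} \,\mathrm{Re}\, V_{\nu},
\qquad
\beta_{\nu m} \simeq \frac{2\pi n_{\mathrm{core}}}{\lambda}
\left[ 1 - \frac{1}{2} \left( \frac{u_{\nu m}\lambda}{2\pi n_{\mathrm{core}} r} \right)^{2} \right]

% ARROW resonance wavelengths of a high-index layer (index n_2, thickness d) embedded in a
% low-index background n_1; the transmission-band edges sit at these wavelengths
\lambda_{m} = \frac{2d}{m} \sqrt{n_{2}^{2} - n_{1}^{2}}, \qquad m = 1, 2, 3, \ldots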
Therefore, the spectral transmission of ARROW waveguide features multiband transmission, similar to that of a Fabry-Perot resonator. The band edges of the transmission are well defined by the resonance wavelengths. It is worth observing that the Bragg reflection cannot replace the ARROW model to explain HCFs [38]. Changing the pitch of the high index layers in the cladding barely shifts the band edge in simulations. This is also supported by the experimental demonstration of Kagome fiber. In Kagome fiber, it has been shown that reduction of the number of cladding layers does not significantly increase the fiber loss or shift the transmission band positions [36]. We conclude that the single-layer property of cladding plays a most important role when applying ARROW model to HCFs. The loss edge of transmission bands can be precisely predicted by ARROW when d/λ 1. At longer wavelengths, corresponding to the spectral range below the m = 1 resonance, the ARROW model is no longer useful. The ARROW model is a well-established model which fits well with experimental results. Although it can precisely predict the band edge of transmission, it cannot provide further insight into the band properties. C. Coupled-Mode Model The coupled-mode model applies coupled mode theory [39] to analyze the properties of HCFs based on Marcatili and Schmeltzer's model. In real HCFs, the cladding is no longer an infinite and homogenous medium but has a complex configuration of refractive index distribution. The cladding modes are a set of modes including both dielectric modes (see Fig. 3(a) top) localized in the higher index regions and leaky air modes (see Fig. 3(a) bottom) inside lower index regions [40], [41]. In the coupled-mode model, the properties of core mode are interpreted as results of the longitudinal coupling with those cladding modes. This method was successfully applied in analyzing the formation of bandgaps in HC-PBG [42]. By this method, inhibited coupling was proposed to explain the guidance in Kagome fiber [29]. In 2007, Argyros and his colleagues applied the coupled mode theory to the square lattice polymer HCF and quantitatively analyzed the formation of band edges of leaky HCF [43]. They pointed out that the absolute phase matching is not necessary to achieve an effective coupling between the cladding and core modes. Δβ ∼ 10 −4 m was found to be the threshold to estimate the transmission band edge, which matched well with the experimental measurement [43]. In 2012 Vincetti and Setti applied this method to NC-HCF and presented details of cladding mode features in NC-HCF [41]. Later, they used this method to demonstrate that the geometry of cladding elements was important to determine the confinement loss of HCFs. The polygonal shaped tube in the cladding adds extra loss due to the Fano-like coupling between the core and cladding modes [44]. This can be used to explain the different spectral features between NC-HCFs and Kagome fibers. D. Function of Core Wall Shape NC-HCFs are characterized by the negative curvature of the core wall, which has been numerically and experimentally demonstrated to effectively reduce the attenuation. Numerical simulations (Fig. 6) showed that in NC-HCF an increase of negative curvature influences the mode attenuation and even bending loss in a complex way [45]. It has been confirmed that a large curvature of the core wall (small radius of curvature) can help decrease the overlap of core mode field with fiber materials to 10 −4 . 
This has been experimentally and numerically demonstrated in silica NC-HCFs [9], [45] (further discussion can be found in Section IV-B). Despite recent efforts in analytical analysis [46], the function of the core wall shape is not yet clear. It appears that both the properties of the high-index and of the low-index cladding modes are affected by the curvature. IV. GUIDANCE PROPERTIES-MODE ATTENUATION A typical NC-HCF possesses multiple transmission bands, and the band edges are determined by the resonance wavelengths of the cladding as described by the ARROW model. To experimentally study the fiber attenuation, we use the standard cutback method [47]. In HCFs there is no cutoff condition for high order modes. All modes exist at the same time but with different attenuations. High order modes are seldom observed in propagation over long lengths because of their far higher attenuation and greater sensitivity to bending. In the low order bands of NC-HCF (determined by the low order resonance wavelengths of the cladding), because of the relatively small r/λ ratio, the output is usually composed of the fundamental mode, or at most a few modes. By controlling the cutback length and the residual fiber length, the cutback method can be used to obtain a reliable measurement of the fundamental mode attenuation. In 2013, Yu and Knight reported a systematic study of the mode attenuation in one transmission band between the first and second resonant wavelengths in NC-HCFs with ice-cream-cone shaped cladding [9]. 50 similar NC-HCFs drawn for different spectral ranges were fabricated and measured. Among them, the NC-HCFs with the minimum attenuation in their spectral ranges were selected for study [9]. Fig. 4 summarizes those minimum attenuations reported in [9] and the latest results for this type of NC-HCF [13], [18]. (Fig. 4 caption: measured data from [9] are shown as black triangles; updated data are shown as black squares [13], [18]; the dark blue, light blue and red solid lines represent fits of attenuation = A·λ^x to selected points, with resulting x of −2.09, −3.642 and −0.998, respectively, the dark blue curve fitting data points between 500 and 1040 nm, the light blue curve between 700 and 1300 nm, and the red curve between 1500 and 2500 nm; the orange dashed line is the total attenuation based on predictions of confinement losses from Comsol plus scaled measured absorptive loss; inset: simulated variation in fiber attenuation as the silica material absorption increases.) Table I lists detailed information on the NC-HCFs in Fig. 4, including the minimum attenuation values and their wavelengths, the spans of the first transmission bands, and the core diameters. (Table I caption: measured transmission windows of the first band, minimum attenuations with their wavelengths, and core diameters of the NC-HCFs in Fig. 4 are listed. *r/λ_m is the ratio of the core radius to the minimum-attenuation wavelength. **Measurement of the transmission window was limited by the light source intensity and the detector response of the monochromator.) The minimum attenuation of 53.8 dB/km at 687.4 nm and the majority of the core diameters are reported here for the first time. In Fig. 4, the attenuation is dominated by different factors in different spectral ranges. Those factors include material absorption at the longest wavelengths, leakage loss at mid-range wavelengths and "imperfection" loss at visible wavelengths and into the ultraviolet. A. Leakage Loss The leakage loss is the fundamental part of the mode attenuation of NC-HCFs. Compared to conventional fibers, NC-HCFs are profoundly different in this respect because they have no strictly confined modes. By increasing r/λ, the mode attenuation can be effectively reduced according to (1). Generally, an NC-HCF with a large core can be expected to have lower loss. According to (1), the leakage loss scales as α ∝ r^−1 if r/λ remains constant. This is a natural result of the dimensional scaling law of the electromagnetic field in HCFs [41].
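The cutback arithmetic used for the attenuation measurements described above is straightforward; the sketch below shows the standard calculation with invented power readings, not data from the cited measurements.

```python
# Standard cutback attenuation estimate with illustrative (made-up) readings.
import math

L_long, L_short = 10.0, 2.0         # fiber lengths before and after cutback, m
P_long, P_short = 42.0e-6, 55.0e-6  # detected output powers, W (assumed values)

# The output power grows when the fiber is shortened; the per-metre loss follows
# from the ratio of the two readings over the removed length.
alpha_dB_per_m = 10 * math.log10(P_short / P_long) / (L_long - L_short)
print("attenuation: %.3f dB/m  (= %.1f dB/km)" % (alpha_dB_per_m, 1e3 * alpha_dB_per_m))
```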
All the fibers in Fig. 4 were designed and intended to be fabricated similarly, and were expected to present similar structures scaled for the different wavelengths of operation. In Table I, r/λ_m varies from 12.7 to 16.0 and the average is 14.6. Those fibers present high similarity apart from minor exceptions. We should therefore expect the minimum attenuation to scale with wavelength as α ∝ λ^−1 (equivalently α ∝ r^−1). In Fig. 4, a fit to the data finds a scaling with wavelength of λ^−0.998 in the near-infrared spectrum from 1.2 to 3 μm, consistent with this expectation. At shorter and longer wavelengths, different dependences imply that loss mechanisms other than leakage loss start to dominate, which will be discussed in Sections IV-B and IV-C. In NC-HCF, reducing the core wall thickness is another important way to reduce the attenuation. Fig. 5 shows simulated attenuation spectra of NC-HCF for different core wall thicknesses [9]. Both axes are normalized by the fiber core dimension, where R is the core radius; neither material absorption nor material dispersion is considered in those simulations [9]. η is defined as the ratio of the inner to the outer diameter of the capillary forming the core wall; the bigger η is, the thinner the core wall. From Fig. 5, we observe that the leakage loss of NC-HCF is reduced with a thinner core wall. At the same time, the transmission window is wider, as predicted by the ARROW model. The leaky air modes in the hollow regions of the cladding have recently been numerically demonstrated to be another source of leakage loss [6], [32], [48]. The leakage loss in NC-HCFs can be further reduced, potentially reaching 1 dB/km or even less, by reducing the mode coupling between the core mode and the air modes of the cladding [6]. To reduce the coupling, extra capillaries or finer structure can be added into the cladding bores, shifting the air modes away from the propagation constant of the core and hence reducing the loss through this mechanism [6]. B. Material Absorption In HCFs, because light is transmitted through the hollow core, the fiber material only weakly influences the core mode properties, including the attenuation. The contribution of material absorption to the total loss in NC-HCFs arises from the mode overlap with the fiber material. The inset of Fig. 4 shows the simulated contribution to the mode attenuation in NC-HCF from the absorption of the cladding material. For a specific NC-HCF, the simulated mode attenuation increases linearly with material absorption, as expected for low absorption. In NC-HCFs, it has been experimentally demonstrated that the mode overlap can be as low as 10^−4 [9]. Numerical simulations agreed well with this experimental result (Fig. 6) [45]. Although the influence of the fiber material in NC-HCF is tiny, material loss can still dominate the mode attenuation when the material attenuation reaches thousands of dB/m. Fused silica and softer glasses are the most common materials for fiber fabrication.
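The wavelength-scaling fits quoted for Fig. 4 (attenuation = A·λ^x) amount to a linear fit on a log-log scale; the data points below are invented placeholders used only to show the procedure, not the measured values from the figure.

```python
# Fit attenuation = A * lambda**x on a log-log scale (illustrative data only).
import numpy as np

lam_nm  = np.array([1550.0, 1800.0, 2100.0, 2500.0])   # assumed wavelengths, nm
loss_dB = np.array([45.0,   38.0,   33.0,   28.0])     # assumed min. attenuation, dB/km

x, logA = np.polyfit(np.log(lam_nm), np.log(loss_dB), 1)
print("fitted exponent x = %.2f" % x)     # close to -1 where leakage loss dominates
print("prefactor A = %.3g" % np.exp(logA))
```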
Fused silica exhibits excellent transparency in the visible and near-infrared spectral regions, where the absorption of the fiber material can be ignored [49]. However, phonon absorption increases quickly at longer wavelengths, and the material absorption rises rapidly from tens of dB/m at 3 μm to tens of thousands of dB/m beyond 5 μm wavelength [49], [50]. Such high loss is the main factor limiting the performance of NC-HCFs at long wavelengths [3], [10], [11]. In Fig. 4, all NC-HCFs were made of F300 synthetic fused silica, for which a material absorption of 865 dB/m was measured at 4 μm wavelength [9]. The effect of this material absorption on the fiber loss was numerically estimated with Comsol to be around 7000 times less than the material absorption itself, because of the very low overlap of the guided mode with the glass at these long wavelengths. By scaling the measured material absorption by this factor and adding it to the leakage loss predicted by Comsol, the total fiber loss was predicted to increase rapidly with material absorption and to dominate the attenuation of NC-HCF in this spectral range. The measured minimum attenuation was 85 dB/km at 4 μm wavelength, roughly 10 000 times less than the measured silica absorption. Soft glasses, such as chalcogenides and fluorides, possess a much lower absorption in the mid-infrared region up to 10 μm and have been widely adopted as optical fiber materials for the mid-IR [23]. In 2000, the first use of soft glass to fabricate an HCF was demonstrated [51]. In 2010, more complex structured HC-PBGs were successfully fabricated with chalcogenide glass [52]. Theoretically, because of the low material absorption, NC-HCFs made of soft glasses should achieve much lower loss than those formed from silica at long wavelengths in the mid-infrared. However, measured losses were of the order of several dB/m or even worse [2]. Because of the very sensitive viscosity-temperature relation of soft-glass materials, it is technically challenging to obtain the uniform and delicate holey structures in HCFs that determine the optical properties of the fibers. Compared with the mature state of silica fabrication, the processing routes for purification and fiber drawing still need improvement to achieve the theoretical performance limits for these materials. C. "Imperfection" Loss In the fabrication of HCFs, it has been a challenge to scale down the fiber dimensions for shorter wavelengths through the near-infrared and into the visible range. As the fiber gets smaller, the balance between the surface tension and the pressure in the fiber cladding becomes unstable, causing more frequent deformation and leading to increased attenuation [53]. (Fig. 7 caption: SEM images of Fibers 5, 3 and 2, and an optical micrograph of Fiber 4, as labelled. Fibers 3 and 2 have lower attenuation than Fibers 4 and 5, respectively, despite operating at approximately comparable wavelengths. Degradation of the cladding structure, such as the slab-like fusion between capillaries (Fiber 4) and reduced boundary curvature (Fiber 5), may result in these higher losses. Fibers 3 and 2 were fabricated from the same preforms with fluorine-doped silica glass as the jacketing tube and using a smaller draw-down ratio; Fibers 5 and 4 were fabricated using a pure silica glass jacketing tube and a higher draw-down ratio.) This extra loss of an HCF arises from the "imperfection" of the fiber structure. In Fig. 4, a substantial increase of attenuation is found in Fibers 4, 5, and 6 (see Table I) at shorter wavelengths [9].
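The link between bulk material absorption and fiber loss through the modal overlap is simple arithmetic; the sketch below uses the 865 dB/m figure and the order-of-magnitude overlap of 10^-4 quoted above, with the 85 dB/km measurement as a consistency check.

```python
# Material-absorption contribution to NC-HCF loss at 4 um via the modal overlap.
alpha_bulk_dB_per_m = 865.0    # measured bulk silica (F300) absorption at 4 um [9]
overlap = 1.0e-4               # order of magnitude of the modal overlap with the glass

contribution_dB_per_km = alpha_bulk_dB_per_m * overlap * 1e3
print("absorption contribution: %.1f dB/km" % contribution_dB_per_km)
# ~86 dB/km, comparable to the 85 dB/km minimum attenuation measured at 4 um,
# i.e. material absorption dominates the fiber loss in this spectral range.
```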
They were drawn from the same group of preforms with dimensions scaled for different transmission windows. To overcome the "imperfection" degradation, a smaller draw-down ratio was used to fabricate Fibers 1, 2 and 3 (see Table I). Also, in the fabrication of Fibers 1, 2 and 3, fluorine-doped glass tubes rather than plain silica tubes were used to jacket the silica preform in the final draw to fiber. Compared with those drawn using a pure silica jacket, the fluorine-doped jacketed fibers exhibited much lower losses and less structural degradation, although their dimensions were no different from those of the silica-jacketed fibers. We attribute this to the reduced viscosity of the fluorine-doped material, which enables the fiber to be drawn under more favorable conditions. Fig. 7(a), (b) and (d) present SEM images of Fibers 5, 3 and 2, and Fig. 7(c) is an optical microscope image of Fiber 4. Fibers 5 (silica jacket) and 3 (fluorine-doped jacket) were both drawn for light transmission near 1 μm with similar core sizes. The transmission bands of Fibers 4 (silica jacket) and 2 (fluorine-doped jacket) also largely overlap between 750 and 850 nm. Fiber 3 exhibits a much lower attenuation than Fiber 5. The most significant structural difference is found in the core wall curvatures, and the same difference is found in the comparison between Fibers 4 and 2. In Fiber 4, the fusion between capillaries has changed from points into slabs. Such defects are most common in small NC-HCFs made for shorter wavelengths in the visible and near-infrared range. Extra "imperfection" loss was measured in Fiber 4 in comparison to Fiber 2. The degradation of the fiber structure, including the slab fusion in the cladding as well as flattened core wall curvatures, contributes to the increased attenuation at short wavelengths. In NC-HCF, there has been no experimental evidence relating the "imperfection" loss at short wavelengths to scattering due to surface roughness, but it may be worth exploring in the future whether such an effect is one factor limiting the attenuation performance of NC-HCFs at short wavelengths. Scattering loss due to core wall roughness has been identified as one of the factors limiting the ultimate attenuation of HC-PBG [27], [54]. Surface roughness of the core wall inevitably forms along an HCF because of the thermodynamics of the fabrication process; it is attributed to surface capillary waves frozen at the silica interface in the glass-transition temperature range [55]. Such microscopic fluctuations are usually under one nanometer over a micrometric scale, but this roughness is enough to dominate the loss limit of HCFs, especially at short wavelengths. The scattering effect causes the minimum attenuation to scale with wavelength as λ^−a, where a may vary from 2.5 to 3.5 according to the design of the HC-PBG [27], [56]. V. GUIDANCE PROPERTIES-BENDING LOSS Generally, bending loss in optical fiber can be categorized into macro-bending and micro-bending loss according to the loss mechanism [47]. Micro-bending refers to periodic or random deformations of the refractive-index distribution or of the geometry of an optical fiber along its length, which can cause energy to couple between the core modes and cladding modes if the deformations change sufficiently rapidly along the fiber axis [57]. The loss mechanism of micro-bending is essentially a scattering phenomenon, which is not of interest in this section. The bending effect here refers to macro-bending, which accounts for the extra loss associated with curved guidance when the fiber is bent.
The origin of the bending loss of NC-HCF can be generally categorized as true bending loss, high order core mode conversion, and enhanced radiative/absorptive loss. Kosolapov and colleagues numerically and experimentally demonstrated that at certain bending radii the phase matching condition can be satisfied between the core and cladding air modes, which results in extremely high losses [2]. Yu et al. found that the bending loss increased more rapidly at shorter wavelengths within the spectral transmission window [3]. This wavelength dependence was also experimentally observed in HC-PBG and all-solid PBG fibers [58], [59]. Setti et al. numerically found that the wavelength dependence of the bending effect was due to the shift of the band edge of the transmission window to longer wavelengths [4]. Belardi et al. found that an NC-HCF with an open core boundary exhibited much less bending sensitivity than the other NC-HCFs [8]. Jaworski et al. measured the relation between the bending loss and the bending radius in NC-HCFs [13], [14]. In their study, low bend losses were found in NC-HCFs with lower mode attenuations [13]. A. True Bend Loss In Marcatili and Schmeltzer's model, such loss can be determined by a perturbation correction to the mode attenuation of a straight fiber under the bending condition [22]; the resulting bending loss depends on the bending radius R and grows faster than the bending curvature. For a specific fiber, at one wavelength, α_νm ∝ R^−2. This bending loss dependence was also demonstrated by numerical simulations in NC-HCF [4] and matched the observations in measurements [3], [8], [13], [14]. According to this dependence, the bend loss can be traded off with leakage loss as a function of core size. In the fiber design, a large core is preferable to achieve lower transmission loss; however, it inevitably causes higher bending loss. B. High Order Core Mode Conversion In 1969, Marcatili pointed out that higher order mode conversion is the main bending loss mechanism in waveguides with a high refractive index core [60]. He demonstrated that, under multimode guidance, the loss originating from high order mode conversion is many orders of magnitude larger than the radiation loss of the fundamental mode caused by bending [60]. NC-HCF exhibits lossy multimode guidance properties, and high order mode conversion is a significant cause of high bending loss in NC-HCFs. More work is needed to investigate such effects in HCFs. C. Enhanced Radiative/Absorptive Loss The coupling between the core and cladding modes can be greatly enhanced at specific wavelengths under bending. On the one hand, the phase matching condition can be satisfied or nearly satisfied between the core mode and cladding modes in the bent fiber. This has been observed both numerically and experimentally [2], [4], [61]. Such resonances cause well-defined and significant loss peaks in the transmitted spectrum. On the other hand, bending also increases the overlap between the core mode and the cladding material. Using the method of conformal transformation [62], the equivalent refractive index distribution of an NC-HCF under bending pushes the core modes closer to one side of the core. At long wavelengths where the fiber material is absorptive, or at short wavelengths where the surface roughness of the core wall causes dramatic scattering loss, there may be an associated increase in observed bend loss. (Table II notes: *The bending loss of NC-HCFs was measured by bending a piece of NC-HCF by 180° with the two ends kept straight. **The fiber was bent into a coil and the bending loss was calculated by dividing the total bending-induced loss by the number of coils.)
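The R^-2 dependence discussed above gives a quick way to extrapolate a single bend-loss measurement to other bend radii; the reference values below are assumed placeholders, not figures from the cited studies, and the resonant coupling at specific radii mentioned above can add loss beyond this smooth trend.

```python
# Extrapolate macro-bending loss using the alpha ~ R**-2 dependence discussed above.
ref_radius_cm = 10.0      # bend radius of a hypothetical reference measurement, cm
ref_bend_loss_dB = 0.5    # assumed bend-induced loss at that radius, dB per 180-degree bend

for R in (5.0, 10.0, 20.0, 40.0):    # bend radii in cm
    loss = ref_bend_loss_dB * (ref_radius_cm / R) ** 2
    print("R = %4.0f cm  ->  ~%.2f dB per bend" % (R, loss))
# Halving the bend radius roughly quadruples the bend-induced loss in this model.
```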
VI. GUIDANCE PROPERTIES-DISPERSION AND NONLINEARITY In NC-HCF, the group velocity dispersion and nonlinearity are tiny, and the modal dispersion is relatively low. The material contribution to the overall dispersion and nonlinearity is very limited because of the reduced overlap between the core mode field and the cladding region. Although no direct measurements have been reported, the extremely low dispersion and nonlinearity of NC-HCFs have been demonstrated in experiments [13], [14], [18]. A 6 ps laser pulse at 1030 nm with nearly 92 μJ pulse energy was stretched to 8.7 ps after 8 m of propagation in Fiber 5 of Table I [14]. Such pulse stretching could only be caused by modal dispersion rather than group delay dispersion. No pulse stretching was found when delivering 6 ps laser pulses at 515 nm with 30 μJ pulse energy in Fiber 1 of Table II [13]. Because of their low dispersion and nonlinearity, NC-HCFs can provide appropriate solutions for high power and ultrafast laser delivery in fundamental scientific research and in industrial applications. A. Dispersion The modal group delay and group velocity dispersion can be calculated in Marcatili and Schmeltzer's model. According to (2), the group velocity dispersion should stay anomalous and extremely low over the whole spectral range if we assume the core medium is non-dispersive. By adding different types of gases and changing the gas pressure, the total dispersion of the core can be tuned over a certain range [63]. The flat and shallow dispersion slope changes rapidly at wavelengths near the band edges, where cladding modes couple with the core mode. As in HC-PBG, the strong coupling interaction at the band edges causes the group velocity dispersion across the transmission window to change from anomalous to normal towards shorter wavelengths [64]. Near the band edge, the magnitude of the group velocity dispersion rises rapidly up to hundreds of ps/(nm·km). Based on this anti-crossing coupling, a method of tailoring the dispersion of HC-PBG has been proposed by tuning the size of subsidiary bores around the core in the cladding [65], [66]. B. Nonlinearity The nonlinearity of NC-HCFs comes from the gas inside the core and, in part, from the cladding material. Air, for example, has a Kerr nonlinearity roughly three orders of magnitude lower than that of silica [67]. The cladding material contributes only weakly to the nonlinearity because of the limited overlap between the core mode and the cladding material. For NC-HCFs, the overlap between the core mode and the cladding material approaches 0.01%, so the residual nonlinearity from the fiber material can be expected to be negligible. As a result, the main source of nonlinearity is the air or gas filling the core, unless the gas pressure is very low. VII. APPLICATIONS NC-HCFs have already been successfully used for high power laser transmission at various wavelengths from the visible to the mid-infrared. They have also been demonstrated as a competitive candidate to replace gas cells in the study of gas-light interactions. A. Laser Transmission In the far-infrared spectrum, CO2 laser transmission at 10.6 μm was demonstrated with an NC-HCF made of As30Se50Te20 glass in 2011 [2]. Single-mode-like guidance was measured in less than 1 m of fiber with about 11 dB/m attenuation at 10.6 μm. When the fiber was bent to a radius of 30 cm, deformation of the mode distribution was observed [2].
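As a rough numerical check on the statement in Section VI-A that the waveguide dispersion implied by (2) is weakly anomalous and very small, the sketch below evaluates the group velocity dispersion of an idealized evacuated core analytically and then estimates the broadening of a 6 ps pulse over 8 m; the core radius is an assumption and gas dispersion is ignored.

```python
# Waveguide GVD of an idealized evacuated hollow core from Eq. (2):
# beta(omega) = omega/c - u11**2 * c / (2 * r**2 * omega)   (n_core = 1, gas ignored)
import numpy as np
from scipy.special import jn_zeros

c = 2.998e8                        # speed of light, m/s
u11 = jn_zeros(0, 1)[0]            # mode parameter of the HE11-like mode
r = 25e-6                          # assumed core radius, m
lam = 1030e-9                      # wavelength, m
omega = 2 * np.pi * c / lam

beta2 = -u11**2 * c / (r**2 * omega**3)    # analytic d^2(beta)/d(omega)^2, s^2/m
D = -2 * np.pi * c / lam**2 * beta2        # dispersion parameter, s/m^2
print("beta2 = %.0f fs^2/m" % (beta2 * 1e30))
print("D     = %.2f ps/(nm km)" % (D * 1e6))

# Broadening of a 6 ps (FWHM) Gaussian pulse after 8 m of such fiber:
T0 = 6e-12 / 1.665                         # 1/e half-width from the FWHM
L_disp = T0**2 / abs(beta2)                # dispersion length, m
broadening = np.sqrt(1 + (8.0 / L_disp)**2)
print("dispersion length ~ %.0f km; broadening factor over 8 m: %.6f" % (L_disp / 1e3, broadening))
# GVD alone is far too small to explain ps-scale stretching over metres,
# consistent with attributing the observed stretching to modal dispersion.
```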
In the mid-infrared spectrum, the delivery of high energy laser pulses at 2.94 μm, with energies up to 195 mJ (limited by the available laser power) at a pulse length of 225 μs, was successfully demonstrated through a silica NC-HCF in 2013 [10]. At 2.94 μm, high power Er:YAG lasers have been used for biological tissue ablation in medical surgery because of the high absorption coefficient of water molecules. In practical applications, a flexible guide from the laser source to the patient allows more freedom for surgeons. The NC-HCF developed for that experiment exhibited a low attenuation of 0.183 dB/m at 2.94 μm. Single-mode guidance was measured at the fiber output for fiber lengths from 6.8 down to 0.8 m. This fiber also presented a typically low numerical aperture (NA) of 0.03, which was determined by the large core size (94 μm in diameter). The small divergence of the laser beam allows a longer working distance, which adds more freedom in practical surgical applications. In the near-infrared, picosecond and nanosecond pulse delivery at 1030 and 1064 nm wavelengths was reported through a silica NC-HCF in 2013 [14]. Picosecond pulses with an average power above 36 W and energies of 92 μJ were delivered in the NC-HCF. No optical damage to the NC-HCF was found during nanosecond pulse delivery, and the fiber was expected to be capable of delivering at least 8 mJ in a 60 ns pulse before damage. An NA of 0.03 was also measured for this fiber. In the visible, delivery of high energy nanosecond and picosecond pulses in the green spectral region was also demonstrated successfully in 2015, aimed at precision micro-machining and marking of metals and glasses [13]. The fabricated NC-HCF had attenuations of 0.15 and 0.18 dB/m at 532 and 515 nm, respectively. It achieved transmission of 6 ps pulses at 515 nm with a maximum energy of 30 μJ and an average power of 12 W, which was more than double the record reported for Kagome fiber at this wavelength at that time [68]. The NC-HCF also provided a stable single-mode output with very low bend sensitivity. The measured results showed that no degradation of the temporal or spectral properties of the guided beam was detected. An NA of 0.038 was measured for this fiber. B. Gas Hollow Core Fiber Laser The gas hollow core fiber laser is a novel fiber laser which makes use of an atomic or molecular gas as a gain medium confined in a length of NC-HCF [69]. The advantages of using an HCF instead of a traditional gas cell in developing a gas laser are that the interaction length can be extended, low thresholds and high efficiencies can be expected, and fiber-based systems have great potential for integration and reduction in size. Compared with conventional solid fiber lasers, gas HCF lasers remove the nonlinear and damage limits which arise from the interaction between the light and solid material. Thermal effects, such as thermal lensing under high power operation, may finally be overcome in gas fiber lasers. A variety of atomic and molecular gases provides a broad spectral range of emission wavelengths, from the UV to the IR. Owing to the nature of atomic and molecular gas transitions, the gas HCF laser always presents a much narrower laser spectral bandwidth than solid fiber lasers. In 2013 a Raman NC-HCF laser was first reported [15]. Stimulated Raman scattering (SRS) at 1.9 μm was generated in a 6.5 m long H2-filled NC-HCF at 23 bar, pumped by a 1064 nm microchip laser. By proper design, a silica NC-HCF can guide both the Stokes and pump light with low loss in two different transmission bands.
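The Stokes wavelength and the notion of quantum conversion efficiency follow from simple photon-energy bookkeeping, sketched below; the ~4155 cm^-1 vibrational Raman shift of H2 is a literature value assumed here, and the example powers are hypothetical.

```python
# Photon-energy bookkeeping for the gas-filled NC-HCF laser examples.
def stokes_wavelength_nm(pump_nm, raman_shift_cm1):
    """First Stokes wavelength for a given pump wavelength and Raman shift."""
    pump_cm1 = 1e7 / pump_nm             # wavenumber of the pump, cm^-1
    return 1e7 / (pump_cm1 - raman_shift_cm1)

# H2 vibrational shift (~4155 cm^-1, assumed literature value), 1064 nm pump:
print("H2 Stokes: %.0f nm" % stokes_wavelength_nm(1064.0, 4155.0))   # ~1907 nm

def quantum_efficiency(p_out, p_pump, lam_out_nm, lam_pump_nm):
    """Fraction of pump photons converted, from measured output and pump powers."""
    return (p_out / p_pump) * (lam_out_nm / lam_pump_nm)

# Hypothetical example: 1.5 kW Stokes peak power from an assumed 5.5 kW pump peak.
print("example quantum efficiency: %.0f%%" %
      (100 * quantum_efficiency(1500.0, 5500.0, 1907.0, 1064.0)))

# For the 1.53 um -> 3.16 um acetylene laser, even perfect photon-per-photon
# conversion limits the power efficiency to lam_pump/lam_out:
print("quantum-defect power limit: %.0f%%" % (100 * 1530.0 / 3160.0))
```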
A peak power of 1500 W of pure vibrational Stokes SRS at 1907 nm was measured, with a corresponding quantum conversion efficiency above 48%. In 2014, Wang et al. reported that 3.1-3.2 μm mid-infrared emission was achieved in a single pass of a silica NC-HCF filled with acetylene gas, pumped by an amplified, modulated, narrowband, tunable 1.5 μm diode laser [17]. The maximum power conversion efficiency of the single-pass amplified spontaneous emission (ASE) was approximately 30% with respect to the absorbed pump power, in a 10.5 m length of fiber at a pressure of 0.7 mbar. The minimum pump laser energy required was less than 50 nJ. Based on Wang's experimental scheme, a 3.16 μm acetylene-filled silica NC-HCF laser was recently demonstrated [20]. A 101 m length of silica NC-HCF with extremely low loss at the lasing wavelength (25 dB/km) was added as a feedback fiber to form a ring cavity with the acetylene-filled gain fiber. By synchronously pumping with narrow-linewidth 1.53 μm pulses, pulsed lasing at 3.16 μm was finally obtained. The high damage threshold and low nonlinearity of NC-HCF suggest good potential for power scaling. Besides optically pumped lasers, traditional electrical gas discharge excitation is also being explored for gas NC-HCF lasers. Optical gain has been measured in NC-HCFs filled with helium-xenon gas mixtures excited by dc electrical discharge [16], [19]. VIII. REMARKS This paper describes the very rapid progress made over the last few years in developing new forms of hollow optical fiber, and gives an early indication of their potential applications. It is not comprehensive, and we hope and expect that it will rapidly become out of date with continuing progress in fiber design, fabrication and application. The work so far opens up a range of possible future directions, and we expect further rapid progress will lead to yet more potential applications. Nonetheless, progress so far has already shown that it is possible to design simple novel fibers which can outperform conventional optical fibers in many ways.
9,136.4
2016-03-01T00:00:00.000
[ "Physics", "Engineering" ]
Theodramatic Themes and Showtime in Nassim Soleimanpour’s White Rabbit Red Rabbit: This essay engages the experimental playwright Nassim Soleimanpour’s White Rabbit Red Rabbit alongside the theological dramatic theory of Hans Urs von Balthasar. Every Soleimanpour play can only happen once. Actors receive the script as they begin the show; any given actor must perform Soleimanpour’s drama as a cold reading unique in history. I propose “Showtime” to theorize this theatrical temporality, exemplified by White Rabbit Red Rabbit and shared by von Balthasar’s theology, on analogy to stage space. This article further examines the play’s themes of identity, self-sacrifice, free obedience, and writing about time through a “theodramatic structural analysis” keyed to von Balthasar. Soleimanpour expands Balthasarian theodramatics in unexpected and unintended directions. So too did the performance of White Rabbit Red Rabbit I attended in 2016 that featured Wayne Brady as the actor. This essay concludes with analysis of that performance and how it places this essay’s theodramatic structural analysis into contexts of race and the history of anti-Black racism in the United States. Introduction I performed as audience in a strange play one night in 2016, one where the actor held the script and we discovered its meaning together for the first time. White Rabbit Red Rabbit told its story about time, obedience, and death (Soleimanpour 2017). I witnessed what I took to be a religious drama, one where the faithful might feel compelled to kneel in deference to the real presence of God. Others, perhaps, saw a raucous improvised comedy on the edge between a piece of theatrical drama and performance art. I wonder whether Nassim Soleimanpour ever thinks about his play as a work of dramatic Christian theology? Playwright Nassim Soleimanpour both is and is not the actor's voice in White Rabbit Red Rabbit, on analogy to how God the Father both is and is not God the Son. Like all analogies, an appeal to trinitarian theology to explain an experimental play falls short. Soleimanpour's rabbits are not Easter bunnies. My aim, then, is not to explain but to think about Soleimanpour's play in the company of another sort of dramatic theologian, the Swiss-Catholic Hans Urs von Balthasar and his five-volume Theo-Drama (Balthasar 1988, 1990, 1992, 1994, 1998). White Rabbit Red Rabbit exemplifies what von Balthasar imagines to be drama's contribution to a Christian philosophical theology. This essay attempts a "theodramatic structural analysis" of Soleimanpour's play. For von Balthasar, theatrical drama aids investigation into the frameworks that undergird theological reasoning according to Christian religious symbols in response to God's self-revelation by participants in that real and ongoing drama. Drama foregrounds freedom, action, and presence, and these are also key theological themes for interpreting God's relationship to humans and the created world. But theatre also displays how philosophical theology might be presented to and shared by a mixed and public audience. If drama promises a structure (in Balthasarian terms, a "form") for seeing theology at play, then White Rabbit Red Rabbit demonstrates the importance of theatrical drama 1 There is an undeniable Eurocentrism to von Balthasar's dramatic theory as a "product of the Western world . . . although originally [the world-stage concept] arises from an awareness of the world which is at least as Asiatic as it is European.
Quite apart from the Greeks, countless other peoples have been acquainted with the cultic and mythic drama: Egypt, Babylon, China, Indonesia, and Japan with its Noh plays that survive to this day". (Balthasar 1988, p. 135). But von Balthasar contends that the exclusion of other religions and cultures reflects the finitude of a single reader enmeshed in a particular culture. He puts it explicitly in the "Foreword" to the first volume of the entire trilogy of which Theo-Drama is the middle part: "But the author's education has not allowed for such an expansion, and a superficial presentation of such material would have been dilettantism. May those qualified come to complete the present fragment" (Balthasar 1982, p. 11). The finite human can only interpret God from the standpoint of human finitude, and this includes social location and education as well as choices regarding the sorts of arts and cultures a theologian consumes. 2 Many Christian theologians have identified the productive resonance between theological interpretation and dramatic interpretation. (See, among many, Vanhoozer 2005Vanhoozer , 2014Vander Lugt and Hart 2015) For von Balthasar, "All theology is an interpretation of divine revelation. Thus, in its totality, it can only be hermeneutics" (Balthasar 1990, p. 91). 3 On the uniqueness of historical events in theatre history, see (Balthasar 1988, p. 301). 4 Repeatable historical singularity shows why von Balthasar's believes drama, an event that unfolds in time characterized by free action, must become a preferred mode of Christian theological interpretation. All the more so if there is "a biblical answer to the question" of human existence that might be "intelligible to human beings". The "human dramatic question" of existence receives God's "divine dramatic answer". God's definitive, unique, and singular historical action in and through the person of Jesus the Christ "is relevant in all ages". Von Balthasar coins the neologism "eph-hapax" from the Greek eph-(as in the "all over" quality to skin in "epidermis") and hapax ("once") to describe this universally applicable "unique answer to all instances of the question" posed by the drama of human existence. "Eph-hapax" alludes to the technical term from literary and biblical criticism: hapax legomenon, a word that appears only once in a given text. Ironically and poetically, von Balthasar's term "eph-hapax" appears three times in the five volumes of Theo-Drama but only in the single cited paragraph on (Balthasar 1988, p. 21) ("ephhapax" in the German, cf. Balthasar 1973, pp. 20-1). The paragraph clearly proceeds with the Bible in mind, but the three-fold singularity of "eph-hapax" in the text and its object of reference (i.e., the action of God in the Christ "most acute" when "Good Friday turns into Easter") perhaps also means to invoke von Balthasar's trinitarian Christology. the play's themes "in abstract", I will turn to the particular performance of the play I witnessed as a participant-observer and its productive frictions between history, text, performance, and interpreting community. White Rabbit Red Rabbit calls attention to theatrical drama's situatedness within and in response to a much wider story, including stories that actors, audiences, playwrights, and producers may never have intended to be told. It is best to include, however, a warning. Scholarship about contemporary theatrical drama risks spoiling the plot. 
For some, the "spoiler" is a kind of ruin to the fun of discovery that prematurely releases dramatic tension; for others, "spoilers" empower critique. 5 In the case of White Rabbit Red Rabbit, such "spoilers" necessarily transform the object of consideration for the uninitiated. To already know the contents of the play's next page changes the play's theatrical possibilities and player's theatrical choices. Indeed, Soleimanpour prohibits foreknowledge about that which is unrelentingly destined by the printed script: "Give the actor the instructions below 48 hours before their performance. DO NOT give them a copy of the play. Ask them not to see the play, nor to learn anything about it before" (Soleimanpour 2017). 6 Such rules do not govern the audience; it is certainly possible to see this play again and again, night after night. Soleimanpour explicitly directs this prohibition to the performer alone. But all first-time witnesses to White Rabbit Red Rabbit play along in their own role. In a qualified sense, everyone who does not already know the play's script shares the actor's experience of time. The end has already been written, printed, given, and held by the actor, but its contents are not yet fully revealed or realized. I contend that this structural dynamic makes the play evocative of Abrahamic religious temporality. The entire book of history has been written and remains known to God in God's providence, but the world's story unfolds with human freedom and under the author's divine command against divination and soothsaying. Obedience to tradition, too, becomes a motif in White Rabbit Red Rabbit. I dutifully commend my reader to follow Soleimanpour's instructions and consider watching or reading the play for the first time prior to continuing this essay to get a sense of the play without the "spoilers" necessary for my writing. Already, theatrical obedience and timing become complicated themes. Words, Presence, and Soleimanpour's Theatrical Style Theatrical words can transcend historical boundedness and political boundaries. 7 Soleimanpour plays with language as a recurring theme in order to transfigure the relations of history and relationships across space and time. The play speaks from pasts, presents, and futures as an autonomous text. Its script renders the actor into a prophet of a non-existent past, and its plot meditates on the very problem of con-scripting an actor to follow the play down this rabbit hole. To perform with a script is, for Soleimanpour, to be obedient to that script's mission. The actor issues the commands of the playwright: "I actually made someone make you do something. [ . . . ] What are your limits of OBEDIENCE?" (Soleimanpour 2017, p. 25). Theatre, therefore, demonstrates an analogous obedience to the one expected in response to the call of God. Obedience to God differs from obedience to a 5 Certain theatrical styles, such as Bertolt Brecht's Epic Theater, intentionally give away the plot so as to encourage critical and distanced reflection on its political and ethical meanings. 6 The first words of the printed script are "Instructions to the Producer/Presenter". Some pages of the print version, such as the one referenced here, are not numbered. Soleimanpour includes words in all capitals as a way to indicate something important for the actor to emphasize. All of the capitalization, boldface, and underlining in my quotations from Soleimanpour's script match the original. 
7 Soleimanpour is not alone amongst twenty-first-century playwrights who use theatre to cross religious, colonial, and militarized boundaries between the middle east and the north Atlantic, but Soleimanpour's experimental form sets his work apart. For example, Tony Kushner's Homebody/Kabul (Kushner 2004) approaches quite similar ideas about border-transgressing poetics thematically. Similarly, too, the multiple published versions of Kushner's play carry their own sense of an "unfinished" project. Kushner's Homebody/Kabul, however, deploys many of the expected conventions of a Broadway or West End production (e.g., memorization, naturalistic costuming and acting, scenes, a curtain call) missing from White Rabbit Red Rabbit. playwright's text (rarely is God so straightforward), yet a text speaks action into being without its own voice. Obedience to the text requires the actor's free choice to be obedient. To borrow Paul Ricoeur's phrase, the script is a "mute text" until the actor communicates on its behalf. 8 "This communication needs an INTERMEDIATE . . . the person who is called THE ACTOR" (Soleimanpour 2017, p. 24). Is the "I" spoken by a character co-identical with the ego of the actor or the playwright? Or are we already playing roles dictated to us by innumerable cultural scripts? Soleimanpour displays such roles in the playful animal pageants and rabbit parables of the script (Soleimanpour 2017, pp. 16-17, 38ff). Clear distinctions between what marks obedience to a "felt presence" in religious experience (a "call") or obedience to unspoken social rules can be difficult to draw (Dox 2016). 9 White Rabbit Red Rabbit posits a playwright responsible for the text now printed on the physical script. The playwright's body remains distinct and distant from the historical event of performance, but he is nonetheless given a present voice by the actor. Audiences hear Soleimanpour's "I" in the actor's voice. Accounts of easy co-identification between actor and writer further rupture in Soleimanpour's later work, Nassim, a heartfelt piece about language and home, where Soleimanpour plays himself alongside another unknowing and unknown actor. I want to linger for a moment with the comparison between White Rabbit Red Rabbit and Nassim to delineate some of Soleimanpour's theatrical techniques and style. Both plays are exercises in what Aida Rocci calls Soleimanpour's "manila envelope theatre" (Rocci 2017). Any given performance features an actor who does not know the play's script in advance. That is, both pieces feature a "COLD READING" (Soleimanpour 2017, p. 3) from printed scripts by actors with the help some of our era's ubiquitous telecommunications technology and some plot-consequential props. Nassim even includes live projections and photography as a part of the fun. As a genre, Soleimanpour's "manila envelope" dramas foreground their own materiality. Though the plot is metatheatrical and seems to be mostly an actor talking about actors talking, a Soleimanpour script cannot be dematerialized into memory or smoothed into the veneer of spontaneity in rehearsals. 10 Performances will be rife with mistakes and improvisations as easily understood to be innovations as they might be considered "glitches" in the theatrical ritual (Grimes 2014, p. 73). 11 Both Nassim and White Rabbit Red Rabbit invite the audience to participate, both on stage and off. Both Nassim and White Rabbit Red Rabbit collapse distance and estrangement by means of a shared and textually mediated experience. 
Both Nassim and White Rabbit Red Rabbit prompt us to consider Soleimanpour's biography as revelatory or, perhaps, at least interesting enough to merit a night's entertainment. The figure of the playwright is present for both plays, but in very different modes. The instructions to the producer of White Rabbit Red Rabbit requests (quite politely) that "it might be nice" if an "empty 8 "The text is mute. An asymmetric relation obtains between text and reader, in which only one of the partners speaks for the two. The text is like a musical score and the reader like the orchestra conductor who obeys the instructions of the notation" (Ricoeur 1976, p. 75). Here, Ricoeur is not being esoteric. Texts are mute because they lack the mouths to speak on their own behalf. Soleimanpour's theatrical script operates like Ricoeur's reference to a musical score: in order to speak the text/script/score must be played. 9 Donalee Dox's Reckoning with the Spirit in the Paradigm of Performance opens new ways to consider the spiritual knowledge imparted by "what cannot be seen in vernacular spiritual practices but is (for practitioners) nonetheless present" (Dox 2016, p. 148). Empiricist methodologies that require the confirmation of presence only through material and measurable proof create difficulties for performance studies interpretations. Dox calls performance the "permeable boundary between people's sense of an inner, spiritual life and the bodies acting in the materiality of culture" (Dox 2016, p. 60). For Dox, the materialist norms of the "performance paradigm" dismiss or explain away spiritual knowledges prior to serious investigation on practitioner's terms. 10 Soleimanpour's experimental approach-"for ME, this is not so much a PLAY, as an EXPERIMENT" (Soleimanpour 2017, p. 3)-exemplifies Larry D. Bouchard's "three overlapping sorts of metatheatre" (Bouchard 2020, p. 4). Both Soleimanpour's style and the White Rabbit Red Rabbit script foreground the theatricality of each performance (MT-1); the titular rabbit parable with its audience participants and animal pantomimes constitute a show within a show (MT-2); and the play's metaphors about obedience, suicide, and life (Soleimanpour 2017, p. 46ff) present the entire event as MT-3. 11 For Grimes, "ritual glitches" are noticeable and unintended disruptions to ritual action, like a "badly timed flyover of military helicopters" that can stop a public reading (Grimes 2014, p. 110). A ritual glitch calls attention to ritual as human activity capable of "failure" and open to criticism. seat in the front row" be reserved for the playwright. 12 The script calls repeated attention to the mobile "self" of the narrating actor and the script's writer. Do actors play Soleimanpour or themselves? It is not an Iranian dissident in 2010 who commands the scene but printed papers that construct a world and invite action. "Sometimes I get scared writing this play. I feel I'm designing a BIG GUN which will shoot somebody one day. Maybe even myself" (Soleimanpour 2017, p. 35). These pages and the risks inherent in their interpretation must be given over to someone else to read. Words and stories connect across real time and space. White Rabbit Red Rabbit instructs its audience members to send Soleimanpour an e-mail during the course of the play. In the print version<EMAIL_ADDRESS>still appears prominently in and around the text. Nassim, by contrast, calls for Soleimanpour himself to be there, a character in his autobiographical play. 
Both pieces therefore confront and build theatrical identities through materiality, confirming how human characters and non-human props "play" on a shared continuum in performance. Nassim even concludes with a totem of its performance history: a book filled with instant-print photographs taken at every performance. (Archaeologists who uncover this artifact can locate the performance I attended in Dublin by looking for an image with an airline blindfold, my "gift" to Nassim during the play.) 13 History and meaning are co-constructed between audience and actor. Where White Rabbit Red Rabbit concludes in silence and e-mail, Nassim concludes with a phone call in Farsi. In Soleimanpour's theatrical drama, communication becomes embodied and technologically mediated. Words, even if misunderstood, connect people through the things and experiences we share. Porous Boundaries of Stage Space and Showtime Soleimanpour's words (mediated through bodily movement, breath, speech, and material technologies) create the conditions of theatrical presence both spatially and temporally. The actor in Soleimanpour's play holds the pages of the script as a prop that is part of the show. The script has its own agency as a player in the drama. Scripts can be metaphors as well as physical objects. Soleimanpour's drama revels in this ambiguity. By asking audiences to give unplanned gifts (Nassim) or to use their smartphone to communicate in medias spectaculum (White Rabbit Red Rabbit), Soleimanpour points beyond the symbolic and literal perimeters of the stage space-so objects in the pocket of an audience member might become props, too. The script's words trigger present embodied actions that erase the boundary between active players and passive watchers. Soleimanpour's plays are full of invitations for the audience to become what Augusto Boal calls "spect-actors", simultaneous observers and participants in the theatrical event (Boal 1985). 14 During White Rabbit Red Rabbit every member of the audience assigns themselves a number and speaks it aloud; some numbers are called to play along with the script (Soleimanpour 2017, p. 2). The script therefore assigns roles to the members of the audience (one, two, three, and so on); sometimes and for some people, the script transforms those roles into missions. Number 5 always receives the crucial instructions to set the plot in motion: "I want you to choose a glass of water, take the vial and stir its contents into the chosen glass with the spoon. Then put the cap back on the vial. Go ahead. AND BE CAREFUL. DON'T SPILL ANYTHING" (Soleimanpour 2017, p. 5). These rules are written into the script as part of its dialogue. At other points, the script invites unspecified volunteers to take on a scripted role. 15 During the play's off-Broadway run in 2016, Nathan Lane joked about his distaste for audience participation, for him a theatrical taboo that "falls somewhere between incest and folk dancing" (Gioia 2016). Certainly, all conscious audiences participate in a theatrical performance; Soleimanpour even warns how "it is YOU, spectators, who ARE there. YOU are there. YOU are participating" (Soleimanpour 2017, p. 56). 16 Soleimanpour's script commands a violation of the spatial boundary between actor and audience, the 'sacred' distance that sets apart stage and seats in the house. The absent Soleimanpour, through the voice of the actor, calls these number-characters up to the stage to play along. The opening counting ritual concludes by musing on the question "Did you count me?" 
(Soleimanpour 2017, p. 3). Theatrical presence and participation need not be reduced to spatial proximity. But Soliemanpour's drama also blurs the temporal boundaries between actor, audience, and playwright in its conscious construction and subversion of what I call "Showtime". Theatre always calls time to mind. 17 It takes time to perform a story. 18 Rebecca Schneider explains, "Time is the stuffing of the stage-it's what actors, directors, and designers manipulate together" (Schneider 2014, p. 7). Showtime is that time set apart from other times by a theatrical event and during which a theatrical event occurs. 19 Showtime identifies the temporality of performance, the temporal dimension of the stage's space shared by performers and audience. One does not require a proscenium arch to make a stage, but the activity of performance, what Peter Brook calls the "act of theatre", brings a stage into being for some witness (Brook [1968 ] 2019). This is the difference between the scaffold that makes a platform in the front of any potentially empty auditorium and the performance that renders that platform into a stage for the show. In Shakespeare's famous speech, Jacques announces "All the world's a stage" thanks to its mere "players" with "entrances and exits" (Shakespeare 2006a, II.7). The boundaries of the world-stage, then, are not galactic wings or an oceanic apron but human parcels of passing time. Showtime is that which a showstopper disrupts but does not negate. Here, I distinguish "Showtime" from the description of a given performance's "run time": the show's duration as a length measurable by a clock. Speeches after a curtain call do not add to a play's run time, but they are an aspect of this performance's Showtime. Breaks for applause or laughter or lament constitute meaningful moments of a theatrical event. A subway car is not an architectural stage, but, in New York City, the (often unwelcome) announcement of "Showtime!" could transform mass transit into an acrobatic arena in the time between stops. Yet a play can twist time into knots, imagining morning sun after sunset or plunging a midday performance into midnight darkness. Hamlet reminds us how theatrical "time is out of joint" (Shakespeare 2006b, I.5). Showtime holds the strangely mutable and subjective experience of time's passing during performances. The same duration of time might carry a thick slowness for a dull play or a surprising lightness and speed during an exciting one. Showtime, therefore, refers at once to the "time of the play" (as in the drama's temporal settings and its performance histories) as well as the "play's time" (as in the theatrical event that occurs in time and with time). Just as a performance needs its stage space, a performance happens during Showtime. Stages are places set apart within a wider geography, so too Showtime sits apart from other times within wider histories. 20 Every performance of White Rabbit Red Rabbit remains singular thanks 16 Near the end of the play, Soleimanpour includes "PASSIVE" witnessing as a mode of participation for "my spectators", those numbered and present (Soleimanpour 2017, p. 56ff). 17 Many scholars have taken up the question of theatrical temporality. Time, after all, is a fundamental analytic category for drama and appears in Aristotle's Poetics as one of its "three unities of time, place and action" (cf. Wiles 2014, p. 55). Aristotlean time is not the only option. 
Maurya Wickstrom's Firey Temporalities in Theatre and Performance: The Initiation of History reviews how theatre's time can interrupt passive, "processional histories". Wickstrom tracks plays and performances like Soleimanpour's where conventional distinctions between past and present shift into the potentially emancipatory relationship between what has already been and what Walter Benjamin calls "a now" (Wickstrom 2018). 18 Theatre foregrounds the connection between Times and Narrative enumerated across (Ricoeur 1984). There can be no hard distinctions between reading theatrical drama and performing it. In many ways, "reading time" and time spent recalling a production expand to complicate the boundaries of Showtime. Encounters with theatrical drama-reading and seeing and remembering-always occur during some passage of time. 19 Anne Ubersfeld's semiotic approach to theatre and time begins its analysis by identifying how "theatrical time" can be understood as the relationship between the "two distinct temporalities" of theatrical phenomena: "the time it takes for a performance to be completed . . . and the time pertaining to the represented action" (Ubersfeld 1999, p. 126, emphasis original). 20 Performance, like play and ritual, sets itself apart in place and time from other phenomena. My analysis of performance incorporates the philosophy of play at its root. Consider how English language words for theatre show this essential link to the prohibition on foreknowledge and its always different actor, but the script also calls out the singularity of its temporal moment in history. 21 At one point, the actor demands that Number 6 announce the day of the week, the date, and the year of this performance (Soleimanpour 2017, p. 19). Soleimanpour uses this information to differentiate, but not sever, the time of the playwright from Showtime. Immediately after Number 6 provides the date, the actor says "The day I'm writing THIS part of the play is 25 April 2010. So you see how even MY TIME differs from yours" (Soleimanpour 2017, p. 20). The time of composition has been caught up into Showtime. Usually, Showtime would be a time of multiple citations. Showtime bridges the "gap" between the "liveness" of theatre and a given play's rehearsal and performance histories (Schneider 2014, pp. 68-69). But because every performance of White Rabbit Red Rabbit calls for a unique cast, the actor possesses no rehearsal record to recall and re-present. Indeed, this extemporaneous performance appears like a rehearsal with communal improvisation in the presence of the playwright's script. Showtime marks the time of communal endeavor. Soleimanpour foregrounds the "now" of Showtime in self-conscious awareness of the ongoing present moment. Showtime does not resolve Zeno's paradox, but it demarcates the finite experiences of beginning and ending. 22 White Rabbit Red Rabbit makes an interesting test case for Showtime precisely because it is not a piece of durational theatre that indexes the time of its own performance or responds to a specific moment of time. 23 Audiences might perceive some beginning and perceive some end as the fluid limits of Showtime. In this play, temporal limits echo in the spatial limits of the stage or the limited pages of the printed script. The audience sees a sign of Showtime's end as it approaches: the script's pages do not go on forever. Conventional theatrical drama marks the threshold of Showtime with the rituals of a curtain call: bows and applause. 
But, like so many theatrical experiments that unfold into the night-consider Richard Schechner and the Performance Group's Dionysus in 69 and its parade into the streets-White Rabbit Red Rabbit frustrates a clear moment of transition from Showtime to after in its conclusion (Performance Group 1970). Dead Ends: "You May Not Touch Him. You May Not Check His Health" 24 Showtime differentiates the porous temporal and spatial boundary between the event and the play's afterlife in conversation, worry, delight, confusion, and memory. The play concludes with an invitation for reflection in the presence of death's possibility. "Dead or alive, [the actor] will want to lie down on the stage for a time and think. About everything" (Soleimanpour 2017, p. 60). So, too, will the many spect-actors who depart from the hall. The play stops with the death of the actor who gives life to words. Showtime ends, somewhere, between the seats and the shuffle to the exit. The end of Showtime symbolizes mystery or a transcendent sacred in thin analogy to the moment of death. between play and performance. The phrase "a theatrical performance performed in a theater by theatrical performers" could just as easily be written as "a play played in a playhouse by players". "Play is distinct from 'ordinary' life both as to locality by and duration" (Huizinga [1944(Huizinga [ ] 1950). For Huizinga, playing makes and underlies representation in ritual and dramatic performance, culture, poetry, and art. Huizinga's theory informs Hans-Georg Gadamer's notion of play as a clue to the ontology of the work of art. (See Gadamer 1989, p. 101ff.) For a more recent application of play to the analysis of religion and theatre, see (Mason 2019). For Mason, "Playing creates being, in any way that the word being makes sense" (Mason 2019, p. 118). 21 The play highlights how time may be marked through differing religio-cultural calendars. Soleimanpour provides his birthdate both according to the Islamic-Solar Hijri calendar prominent in Iran ("Azar 19th, 1360") and Christian-Gregorian calendar used in most places where the play would be performed ("10 December 1981") (Soleimanpour 2017, p. 19). 22 Questions remain as to when one's experience of a play begins: when does the show start for me? When I see advertisements and this production first appears to my consciousness? When I buy my tickets and begin to anticipate the event as a kind of business transaction? Perhaps when I physically enter the venue or pose for a photo under the marquee? Or is it when I sit down and silence my electronic devices so to limit my distractions from the outside world and enter into the time of the play? These questions ask nothing about the preparation of the actors! Instead, Showtime refers to the overlapping time of performance shared between actor and audience. 23 For more on durational theatre in the context of theatrical temporalities, see the discussion of Karlheinz Stockhausen's Mittwoch aus Licht in (Wiles 2014, pp. 61-67). 24 (Soleimanpour 2017, p. 60). Soleimanpour makes this analogy explicit: the sending at the finale of White Rabbit Red Rabbit concludes this performance's unique Showtime with the instruction for a member of the audience to take the script as gift for future use (Soleimanpour 2017, p. 62). The White Rabbit, played by a member of the audience and now the one leading collective obedience to the script, establishes the last law: "After hearing 'the end,' everyone must leave the theatre" (Soleimanpour 2017, p. 60). 
There will be no time to confirm the impermanence of theatrical suicide. There will be no curtain call ritual to clap distance between Showtime and after, to give away numbers and responsibility. The symbolism of the possibly dying body on stage aligns with the departure of the audience, both "exits" in silence and doubt. The Showtime of Soleimanpour's play stops in death. One line of interpretation, interested in the ethics of causality, follows the play's focus on the question of the "gun" mechanism: who is responsible for the actor's "death"? Soleimanpour? The actor? The producers? The audience volunteers? The audience witnesses? Industrial capitalism? 25 The script calls forth the conditions for a suicide or homicide or accidental interpersonal violence as entertainment. Soleimanpour highlights how the conditions of this theatrical experiment and its scripts-preset props, authoritative instructions, social expectations-are no different from ordinary social life. Given circumstances might always be turned over to some risk of life and death. Such is the meaning of the titular white and red rabbit parable: ordinary obedience quickly escalates to extraordinary cruelty. But another line of interpretation goes down its own rabbit hole, resonating with what Kevin Hart calls the "dark gaze" onto the sacred in Maurice Blanchot's mystical atheism. Like Soleimanpour, Blanchot enacts a "displaced mysticism of writing [where] to write is to transform the instant into an imaginary space, to pass from a time in which death could occur to an endless interval of dying" (Hart 2004, p. 10). 26 White Rabbit Red Rabbit opens towards the sacred in its attention to death's uncompromising mystery. "What MATTERS is NOT KNOWING" (Soleimanpour 2017, p. 33). 27 The play ritualizes the mystical encounter with uncertainty and its "POSSIBILITY" (Soleimanpour 2017, pp. 32, 50). Drama proceeds in the subjunctive. The risk of death is both playful and existential; the performance of suicide requires both a theatrical choice and unrehearsed trust (perhaps even quasi-religious faith) in the harmlessness of the show's props. "This is a theatre, so it's VERY probably FAKE . . . right?" (Soleimanpour 2017, p. 30). The line's dramatic irony relies on an established theatrical tradition and faith in theatrical conventions and their moral code. But, like other avant-garde performance experiments, the play elides physical appearance with emotional reality. 28 The prop poison might well be a placebo, but drinking the potion nonetheless risks a credible threat of suicide. The theatrical choice to drink in obedience to the script could bring about all too real consequences. Who knows? This call for ritual action in the presence of mystery supports my claim to identify the play's structure as "religious" in a qualified and generalized sense. Rather than representation, David V. Mason locates the poetic and playful making of performativity-"poesis, not mimesis"-as theatre and 25 "I take full responsibility for creating the machine. But I give YOU the responsibility for using it. After all, no one puts the inventor of the gun on trial" (Soleimanpour 2017, p. 55). 
26 Hart further demonstrates that "Blanchot's thought of the neutral Outside contests the philosophy of neuter" tracks with how Hans Urs von Balthasar and other mid-century Catholic thinkers dismantled the reigning theological duplex ordo where 'pure nature apart from grace' proposes some "neutral, indeterminate being that is prior to the distinction between infinite and finite being, between God and creation" (10). Blanchot unequivocally rejects Christian revelation, but joins von Balthasar in resisting any urge to domesticate mystery. (See Hart 2004, pp. 48-49.) 27 Claire Marie Chambers offers the term "performance apophatics" to "signify the performative operation that traffics through the denial of denial, which can be felt in the restless dynamic of the unknowable that structures performance itself" (Chambers 2017, p. 10). Soleimanpour's emphatic "NOT KNOWING" calls for "critical unknowing" where "By cultivating learned ignorance, we might unself ourselves at the same time that we might unworld the world" (Chambers 2017, p. 261). Both performance apophatics and theatricality "insist that what is 'real' is not only the real, or that everything that is important or true is 'real'" (Chambers 2017, p. 259, emphasis original). 28 Soleimanpour's play opens ethical questions about integrity like those treated by (Bouchard 2011). Consider, for example, the moment in Dionysus in 69 where "the performance would pause until the actor playing Pentheus actually felt abused by the taunts of other cast members" (Bouchard 2011, p. 224). For a review of the religious underpinnings to American avant-garde theatre and connections to Gertrude Stein's influential views of theatrical time, see (Tanner-Kennedy 2020). religion's common root (Mason 2019, p. 156). 29 A performance of White Rabbit Red Rabbit may very well appear structurally indistinguishable from other "religious" rituals where a sacred text (be it the Bible, Vedas, Qur'an, Book of Common Prayer, or L. Ron Hubbard's Dianetics) prompts ritual obedience. Here is even time for a monetary collection (Soleimanpour 2017, p. 9). The text features two invocations of "god" (Soleimanpour 2017, pp. 13, 24) and one reference to the writer's face while writing, "straight as the devil's" (Soleimanpour 2017, p. 29). I intend to put Soleimanpour in conversation with a Christian framework, and there is one phrase that might be interpreted as a moment of recognizable revelation: "the [red] rabbit's ears have been EXPOSED. Oh my god!" (Soleimanpour 2017, p. 13). A theodramatic reading perhaps hears echoes of the Centurion who notices the Son of God exposed by crucifixion and earthquake (cf. Matt. 27:54) or "Doubting" Thomas' exclamation at the resurrected Christ's exposed wounds (cf. John 20:28). Further, the text invokes God's blessing-"MAY GOD SAVE YOU!" (Soleimanpour 2017, p. 24) on the volunteer notetaker who, by freely volunteering, now "is a red rabbit" (Soleimanpour 2017, p. 25). But Soleimanpour's most consequential use of something like Christian religious language happens only in the actor's speech just before handing the script to an audience volunteer and enacting their own theatrical-ritual death (Soleimanpour 2017, p. 55ff). The speech is the confession and pre-emptive absolution of the playwright, "Nassim Soleimanpour" whose full name appears twice (Soleimanpour 2017, pp. 55-56). The word "sin" appears twice as well to describe Soleimanpour's own guilty complicity in the actor's death (Soleimanpour 2017, p. 56). 
Soleimanpour's writing creates the conditions for the possibility of the actor's death, but it is present action in obedience to his words that might kill. An indictment of the audience interlaces with Soleimanpour's confession. He further argues how any "PASSIVE viewer of this suicide" will be "more of a sinner than me" (Soleimanpour 2017, p. 56). So who is guilty? The confession turns the question of identity back on the writer, whose voice we hear in and through the actor. Soleimanpour sounds similar to Blanchot by the end of the confession. The question of guilt and sin between author and actor asks about theatrical writing and about revelatory knowing. "In conclusion," the speech shifts into meta-reflection on the affective experience of writing as self-alienation. I feel what I'm writing is not my writing [ . . . ] some OTHER 'ME', lives INSIDE me, and THAT 'me' talks on my behalf-almost as someone to whom I have lent my body. Or maybe I'm reading from someone else's writing, or someone else, some OTHER ME, is loudly speaking ME . . . for YOU. (Soleimanpour 2017, p. 56) Here, Soleimanpour inverts what Blanchot calls "the Outside" and erasure of ego approached through writing; instead, some "OTHER" writer emerges from "INSIDE" like inspiration. 30 But where Blanchot emphasizes the spatial and temporal, the metaphors for writing in Soleimanpour's play are doubly theatrical. 31 Words come into being only through the "loan" of a body. The author loans a body to write; the actor loans a body to read. Soleimanpour's writing these words requires the same sort of kenotic self-surrender as the actor who speaks them. The moment can be depersonalized: the invisible author makes demands of the visible actor. The experience of writing White Rabbit Red Rabbit matches 29 See also the discussion of ways to pursue a correlation between religion and theatre in (Mason 2019, p. 1ff). In another context, Mason explains "The manner in which the theatrical avant-garde necessarily resembles religious doing comes from the way that performance sharpens this paradox [glossing what he earlier calls 'yearning for presence that proves never possible'] of being in the world" (Mason 2019, p. 59). 30 "If to write is to surrender to the interminable, the writer who consents to sustain writing's essence loses the power to say 'I.' And so he loses the power to make others say 'I'" in (Blanchot 1982, p. 27). "This is to say: one writes only if one reaches that instant which nevertheless one can only approach in the space opened by the movement of writing. To write, one has to write already. In this contradiction are situated the essence of writing, the snag in the experience, and inspiration's leap" (Blanchot 1982, p. 176). 31 Hart explains how "interval" and "space" both may plausibly translate Blanchot's espace (Hart 2004, p. 8). I add that both terms also carry theatrical resonance, e.g., "intermission" can also be called an "interval". its theatrical reading. The conclusion of the quasi-religious confession sees the identity of the invisible author ("ME") given over in the performance of the actor for the audience ("for YOU"). Soleimanpour seems to agree with Blanchot; "Perhaps it is sin" (Blanchot 1982, p. 175). The author and actor align: the written-loaned body offers itself as indifferent and obedient, gift and sacrifice. Both author and actor can now share "MY sin": "the secret of the red rabbit" (Soleimanpour 2017, p. 56). The actor is the author's "dear red rabbit" (Soleimanpour 2017, p. 
57) sent to perform death as a revelation. Seeing the Form Revelation anchors von Balthasar's theology. For von Balthasar, humans interpret the God who has revealed Godself dynamically through loving action in history. Theo-Drama occupies the middle panel of von Balthasar's great theological triptych. Each part, further divided into multiple volumes, correlates reflection on God's self-revelation according to philosophical transcendentals of being-Beauty in The Glory of the Lord, Goodness in Theo-Drama, and Truth in Theo-Logic. Each part develops an accompanying theological method-aesthetics, dramatics, and logic, respectively, for Beauty, Goodness, and Truth. The unity of the single project across its many disparate parts expresses the philosophical transcendental of Oneness. Von Balthasar's writing operates according to what Anne M. Carpenter identifies as a theo-poetic style: "what he means and how he means it are central concerns. The 'what' is theological truth, and the 'how' is a perplexing combination of theological and poetic language" (Carpenter 2015, p. 3, emphasis original). Theo-Drama speculates on God in the light of Goodness, which prompts considerations of God's action and the human position in its midst (drama) rather than God's appearance (aesthetics) or God's utterance (logic) (Balthasar 1988, p. 18). Good actions give freely. Theatre, in its presentation of the drama of human existence, provides analogous structures with which to think theologically: one needs to "play" Christian theology within the givenness of the world of theodramatic play. That is, von Balthasar's theodramatic approach demands the imaginative assent of the interpreter to God's initiative: doing theology is like doing improv. My scene partner (or a script) suggests some "given circumstances" and actors need to respond with actions that fit within that given world. My impersonation of a bunny making a big, steaming bowl of carrot soup will change rapidly when another actor replies "Yes, and we need to hide it from the hungry bears on the roof!" Without any rehearsal or hesitation, I become responsible to hop to it and play interpretive choices that work here and now with what I have been given. 32 Such acting-often surprising and funny-openly receives and inhabits the world that is given. The improvising actor co-creates the theatrical world by choosing to play along. So too for theodramatics: there can be "no external standpoint" outside the drama of God's action in history (Balthasar 1990, p. 54ff). God's drama "so overarches everything, from the beginning to the end, that there is no standpoint from which we could observe and portray events as if we were uninvolved narrators of an epic. [ . . . ] In this play, all the spectators must eventually become fellow actors, whether they wish to or not" (Balthasar 1990, p. 58). Even God's inner life, the Trinity, becomes the wider drama within which created history unfolds: "our play 'plays' in his play" (Balthasar 1988, p. 20). 33 Much has been written about von Balthasar's influence on contemporary Catholic and Christian theology, but less work has focused on his theological dramatic theory in dialogue with contemporary theatre and performance. 34 Some Balthasarian resonances with White Rabbit Red Rabbit may be already 32 For Konstantin Stanislavski "the circumstances, which for the dramatist are supposed, for us actors are imposed, they are a given. And so we have created the term Given Circumstances" in (Stanislavski 2008, p. 52), emphasis original. 
On the "yes and" rule in improv, see (Frost and Yarrow 2007, pp. 144, 219). For von Balthasar on Stanislavki and what is given to the actor, see (Balthasar 1988, p. 279); on the "extemporaneous play", see (Balthasar 1988, p. 179). 33 For a challenge to the coherence of von Balthasar's theological style, see (Kilby 2012, pp. 64-65). 34 Certainly, drama remains a keyword for Balthasar studies. The most substantial contribution on his dramatic theory remains the German language collection "Theodrama and Theatricality" (Kapp et al. 2000). For the importance of drama to von Balthasar's philosophy, see (Schindler 2004). Theological dramatic theory gives Todd Walatka room to find greater apparent, such as how the singular conceit of every performance of Soleimanpour's play offers a microcosm of the singularity of salvation history and the deadly high-stakes of free action. At the same time, a Balthasarian reading of the Iranian experimental playwright seems an odd, perhaps exploitative, choice. Soleimanpour does not identify as a Catholic, and the history of Iranian theatre includes far more influence from Islam than Catholicism. 35 As already mentioned, the play presents few overtly religious symbols. But I contend that the play's structure might be usefully interpreted in Balthasarian theodramatic terms. He gives us many theodramatic themes to choose, but I will restrict myself to the following five: "theatre of the world", dramatic roles, freedom, obedience, and sacrificial death. Theatrum mundi. Perhaps the most obvious connection between Theo-Drama and White Rabbit Red Rabbit regards its use of the image of the "world-stage" or "theatre of the world" image, familiar from medieval drama, Shakespeare, Calderon, and others. The first volume of Theo-Drama samples the development of the theatrum mundi image in an eclectic survey of European dramatic literatures. The stage uniquely presents the predicament of created being: "theatre-expressly seen as 'theatre of the world'-is an image that is substantially more than an image: it is a 'symbol of the world,' a mirror in which existence can directly behold itself" (Balthasar 1988, p. 249). 36 Theological dramatic theory proposes to interpret the entire history of creation as a performance on the world-stage on which God joins. The world-stage metaphor lends Theo-Drama what I would call a performative ontology: creation exists only insofar as it plays with and in God. For von Balthasar, the world-stage embraces the Christian theological vision of creatio continua (a theme where God continually creates the world, and so is present at every moment in its history). Such is also the temporality of performance foregrounded by White Rabbit Red Rabbit: "From now on we are ALL present" (Soleimanpour 2017, p. 2) when "the actor (me), the audience (you), and writer (me)" (Soleimanpour 2017, p. 1) come into contact. What matters is the event of performance, not the "mute" script; only in performance, during Showtime, can the actor and writer both be "me". The world-stage metaphor emphasizes the givenness and goodness of creation for Christian theology as well as the spectator-theologian's situation within the drama of history. Created time and God's eternity meet in action. Dramatic temporality offers von Balthasar language to name how the Christ enfolds created time into God's very life: God's eternal becoming as an event in what von Balthasar calls "supertime" (Balthasar 1998, p. 32). Dramatis personae. 
Theo-Drama concerns itself not only with dramatic stories and images but also with the phenomenon of theatrical performance. As such, von Balthasar also takes keen interest in human roleplaying and the various roles of the theatrical ensemble: author, actor, and director. 37 I have already mentioned the ways in which any interpretation of God's action emerges from fellow actors on the world-stage. God intervenes in human history by stepping onto the world-stage as its leading player. Jesus quite literally saves the show, and Theo-Drama provides tools to think through the Christ's roles. The actor finds identity in the mission of playing their role on stage; humans find their identity in their mission to be disciples of the Christ. "The closer a man comes to this identity, the more perfectly does he play his part" (Balthasar 1990, p. 14). Where social roles might become closed loops and traps, sending and mission actualize identity. So too, every "spectating" audience to Soleimanpour's play gets brought up into the event of its performance, in the image of the prototypical actor. Roles prepare for missions; Number 5 makes the poison drink, and everyone gets sent forth from the hall. To understand "Who am I?" requires freely acting the role that I am sent to play in the world. 38 compatibility between von Balthasar and liberationist themes about preference for the marginalized and concern for economic justice in (Walatka 2017). 35 (See Floor 2005.) In an e-mail interview, Soleimanpour avers, "I think I have stronger roots in Iranian Literature [than Ibsen or Beckett]" (Mapari 2017). 36 The phrase perhaps includes an uncited allusion to Hamlet's mirror held up to nature (Shakespeare 2006b, III.2) as well as the quoted reference to the title of Eugen Fink's Spiel als Weltsymbol (Fink [1960] 2016). 37 (Cf. Balthasar 1988, p. 481ff) on everyday roleplaying as found in dramaturgical psychology and sociology; von Balthasar begins this section by quoting (Goffman 1959). 38 This question organizes the section on the transition from role to mission in (Balthasar 1988, p. 493ff). At the same time, von Balthasar finds a trinitarian analogy in the logical procession of the theatrical roles of author, actor, and director. 39 Chronologically, too, the practice of a separate, off-stage director emerged rather late in theatre history. 40 But the logical procession of theatrical roles demonstrates by analogy points of Christian doctrine and its speculations on the eternal movement of the Trinity. None of the co-equally divine persons can be "older" than another, but their relationships might be logically ordered. I cannot offer a detailed summary of von Balthasar's trinitarian theology here as it is so central to his theological project; I will restrict my comments to the triad of author, actor, and director as an analogue to God the Father, God the Son, and God the Holy Spirit. 41 For von Balthasar, understanding the movement of these roles within God's triune life (processions) provides the clue as to how to understand their work on the world-stage (missions). 42 The invisible Author-Father acts as the first principle of theatrical movement that sends the visible Actor-Son into the world (Balthasar 1988, p. 279). Because God so loves the world, God the Father sends God the Son in a revelatory and free gift pro nobis, "for us". The procession-mission functions like the sending of the word from the author to the actor in Soleimanpour: "loudly speaking ME . . . for YOU" (56). 
43 But the Author-Father and Actor-Son share a common will aligned by the Director-Spirit. 44 The Director-Spirit proceeds from the ultimate unity of the Author-Father and Actor-Son and assures co-identity between the author and the actor in the performance event. Obviously, the analogical triad of author, actor, and director differs significantly from the Trinity of Father, Son, and Holy Spirit in Christian theology. 45 The former theatrical triad ordinarily implies three persons in three distinct people; the latter theological mystery remains three divine persons who are always already one God. In White Rabbit Red Rabbit, the author, actor, and director unite in the single performance of the play, revealed to and for the audience exclusively in the visible action of the actor "for us". Soleimanpour provides another "dramatic resource" for Christological-Trinitarian theodramatics beyond the stylings of theatrical naturalism. Freedom. Drama stages conflict, including a contest of wills. The dramatic tension at the center of Christian theology consists in the confrontation between divine and human freedom: God's absolute decision to be in faithful, covenant relationship with the world and the potential for a free human refusal of God's good gifts (Balthasar 1990, pp. 252-53, 301-2). 46 Soleimanpour makes similar space for rejection in his instructions to the actor: if an audience member refuses to play along, that role simply changes to another free volunteer, "But it is important to maintain the SAME NUMBER!" (Soleimanpour 2017). Freedom expresses itself through interacting roles. Von Balthasar's dramatic 39 The analysis of authors, actors, and directors appears on (Balthasar 1988, pp. 268-305). Note phrases throughout that resemble trinitarian theology, such as "This primacy of unity in the author is ontological" (Balthasar 1988, p. 269). 40 See (Balthasar 1988, p. 298n1), where von Balthasar shows his awareness of this chronological procession but chooses to leave it untreated. Similarly, von Balthasar does not theorize other members of the theatre company that stretch beyond the triad: designers, managers, dramaturgs, stagehands. 41 Von Balthasar's trinitarian imagery is always subtler and rarely so blatant as I here imply. Some moments are more explicit; see (Balthasar 1988, pp. 268-69, 280). I have elsewhere argued that one can map his theatrical triad from Theo-Drama's first volume directly onto the trinitarian theology that appears in volumes three and five. (See Gillespie 2019.) 42 The principle that the procession gives a clue to mission is Thomistic, and it allows von Balthasar to further analyze the coincidence of person and mission in Jesus the Christ. (See Scola 1995, p. 58.) 43 See also the way Soleimanpour muses about the "private world" of writing "something SIMILAR to a play" (Soleimanpour 2017, p. 3). Drama only becomes a play in its performance by the actor. 44 "Between the dramatic poet and the actor there yawns a gulf that can be bridged only by a third party who will take responsibility for the play's performance, for making it present here and now" (Balthasar 1988, p. 298). The director's role will be to integrate the author and actor: "its whole raison d'être consists in the way it mediates between them" (Balthasar 1988, pp. 298-99). 45 Difference, especially sexual difference, is a key theme in von Balthasar's theology. Trinitarian procession later becomes explicitly gendered in von Balthasar's Theo-Drama. 
Linn Marie Tonstad finds problems in von Balthasar's active-passive hierarchy that becomes his symbolically sexualized Trinity. For Tonstad, von Balthasar's theology is not only flawed in its construal of the hierarchical relationships between Trinitarian relations, but these missteps concretize in the potential divinization of (worldly) masculinity vis-à-vis the exclusive creatureliness of (worldly) femininity (Tonstad 2016, p. 45). 46 Theological dramatic theory gives space for the realization of a real encounter between divine-infinite and human-finite freedom: "we must assert that unconditional (divine) freedom in no way threatens the existence of conditional (creaturely) freedom, at whatever historical stage the latter may find itself-whether it is close to the former, alienated from it or coming back to its real self" (Balthasar 1990, p. 119). For a major discussion of these themes, see (Dalzell 2000). language shows this contest playing out across the drama of history to be freedom-in-relationship and not some mechanistic tragedy binding God and world to a pre-determined "fate" (Balthasar 1990, p. 196). Human freedom operates like the relational quality of the Trinity; true freedom makes room for others. 47 Therefore, God's will for the world is not like the "fate" of the Greek tragedies but, rather, a call to intimate fellowship, bolder action, and unique importance in the role for each person (Balthasar 1990, p. 296). But where ordinary social roles might become closed loops and traps, missions actualize identity in freedom. Dramatic language highlights the particularity and importance of each human response because freedom, understood theodramatically, reflects the act-quality of God's Triune inner life (Balthasar 1990, p. 256). Freedom becomes dramatic only in action, hence theodramatics help von Balthasar understand and interpret the God who is an eternal free act of love. Theatre-making requires similar room for free improvisation. A balance between infinite and finite freedom appears in how the actors in White Rabbit Red Rabbit make their own free choices that could never have been intended or fated by Soleimanpour's script. The play depends on the freedom for improvisation of its actors: "Honestly, I don't know WHAT this actor is doing" (Soleimanpour 2017, p. 3). The play's author also expresses something that "tastes like FREEDOM" from political and temporal fixedness: drama escapes ordinary finitude insofar as the play enjoys "timeless travel" through space and time "with no need for a passport" in ways the historical Soleimanpour could not (Soleimanpour 2017, p. 21). Obedience. Divine-infinite freedom calls for human-finite freedom's obedience. So too on stage. The free improvising of White Rabbit Red Rabbit's theatrical realization points to its structural and thematic emphasis on obedience. The play anticipates and requires obedience to the mission of the script. Disobeying the script stops the performance, even though Soleimanpour makes room for improvised jokes and commentary that divert from the printed word. To continue, the text must be freely obeyed. The instructions to the actor include how "You might think you want to add something. If so, that's fine. But tell the audience its yours" (Soleimanpour 2017). (This might be signaled with a physical choice: at the performance I saw, the actor raised a hand whenever deviating from Soleimanpour's script.) So too, von Balthasar's theology hinges on free obedience to mission. 
Obedience is "becoming transparent to one's mission" (Balthasar 1988, p. 289). In another sense, however, von Balthasar sees theatrical obedience to be reciprocal: "We must reject any suggestion that would make the actor into the author's servant and equally any that would degrade the author to the level of a mere cobbler of plays for the actor" (Balthasar 1988, p. 283). The Director-Spirit holds freedom and obedience together and works to make the performance interpretation relevant for the present audience. 48 The Actor-Son performs in perfect obedience to the will of the Author-Father. Freedom, then, finds expression in the kenotic obedience of the Christ or Soleimanpour's actors. Even bracketing theological overtones, Soleimanpour's play confounds the assumption that freedom and obedience are contradictions in terms. The improvisatory play of such red rabbits will be both free and obedient at the same time: "MAY GOD SAVE YOU!" (Soleimanpour 2017, p. 24). Sacrificial Death. As von Balthasar writes, "It follows quite naturally that if, obedient to his mission, a person goes out into a world that is not only ungodly but hostile to God, he will be led to the experience of Godforsakenness" (Balthasar 1988, p. 647). The Christ, the Actor-Son, plays a human script that ends in sacrificial death. Like the finale to White Rabbit Red Rabbit, obedience to the will of the Author-Father includes the real possibility of death: "infinite freedom appears on the stage in the form of Jesus Christ's 'lowliness' and 'obedience unto death'" (Balthasar 1990, p. 250). The final volume of the Theo-Drama makes much of the Christ's willingness to endure the Godforsakenness of 47 The finite freedom of existence consists in the ability to say "I am unique, but only by making room for countless others to be unique" (Balthasar 1990, p. 209). 48 The director guides the play like the Holy Spirit guides the modern church toward a "valid aggiornamento" (Balthasar 1988, p. 303). rejection on the cross, death, and in descent into hell. 49 The interpretive key for von Balthasar is always the Christ's cry of dereliction and abandonment: "my God, my God, why have your forsaken me" (cf. Psalm 22:1,Mark 15:34,Matt. 27:46). This "total self-giving" over into Godforsakenness by God the Son becomes a divine "super-death", a "radical 'kenosis'" that lets go without holding onto any remainder (Balthasar 1998, p. 84). 50 Theodrama always happens against the horizon of the final act, of sacrifice and death. For von Balthasar, the Christ's obedience unto death occurs without any consoling knowledge of future resurrection. The Christ, as the incarnate Son of God, freely surrenders divinity back to God in solidarity with creatures. Von Balthasar calls this the Christ's "laying up" of his "divine power and glory" with God the Father where he says "this concept only summarizes" the kenotic hymn in Philippians 2 (Balthasar 1998, p. 257). 51 Kenotic "laying up" permits von Balthasar to talk about the Christ's absent foreknowledge of the resurrection because "the Father's presence was so veiled that the Son experienced God-forsakenness" (Balthasar 1998, p. 257). The horror of crucifixion and abandonment and travel into unknown hellish territory would be truly human experiences. 52 Like the actor in White Rabbit Red Rabbit who obediently drinks a poisoned draught in deference to the will of the writer, so too does the Christ obediently defer to God the Father's will and drink from a cup that leads to death (cf. Matt. 
26:39, Mark 14:36, Luke 22:42). 53 Parallels abound between the obedience and suicide plot in White Rabbit Red Rabbit and the Passion narrative of the Christ's kenotic self-sacrifice. Theodramatics give language to the play's mysterious and risky encounters with the unknown. Like von Balthasar's Jesus, Soleimanpour's actors "lay up" theatrical foreknowledge in solidarity with the audience. The audience disperses like the disciples, newly formed "red rabbits". Soleimanpour's play ends on the Balthasarian "Holy Saturday", and, like von Balthasar, we can only speculate about the actor's inner experience of theatrical forsakenness and "super-death". Even the script suggests that the actor may want to linger and reflect for a moment in the liminal space at the end of the show. Whose Line Is It Anyway? With these themes in mind, I conclude with a brief analysis of the unique instance of White Rabbit Red Rabbit where I performed as a member of its audience. As part of a gift to a friend, I attended a performance of its off-Broadway run in 2016. 54 This run featured celebrities playing the role of the actor on stage. The contradiction of a singular event in an ongoing commercial run off-Broadway confirms the importance of the play's production history. A single "production" (that is, the same producers-Tom Kirdahy and Devlin Elliott-with the same scenic design and props) could be realized through as 49 Rejecting God is sinful, so the forsaking of the Son as sin by the Father could only be done by God. "This is the central mystery of the theodrama: God's heightened love provokes a heightened hatred that is as bottomless as love itself (John 15:25)" (Balthasar 1998, p. 285). 50 Critics, on both von Balthasar's right and on his left, have cautioned against such swift movement from divine love to trinitarian generation to kenosis to death to descent into hell and its theological and ethical implications, as in (Tonstad 2016, p. 38). 51 Here, von Balthasar, somewhat controversially, follows language drawn from Adrienne von Speyr's visionary writing. The confusing phrase "laying up"-presented in scare marks in both English and German-is a literal translation of "Hinterlegung", a word that carries a range of economic connotations: deposit, escrow, filing, lodgment. 52 Kenotic "laying up" also refers to the Christ's veiled God-consciousness (in Friedrich Schleiermacher's turn of phrase), including von Balthasar's position that Jesus the Christ did not have access to perfect self-knowledge of himself as the Son of God throughout the experience of the Passion. On Christ's knowledge and mission, see (Balthasar 1992, p. 149ff). 53 Theodramatic structures, whether in von Balthasar or Soleimanpour, are dangerous precisely in the ways they might be misread in praise of suicide. A poisoned cup also recalls the double suicide at the end of Romeo and Juliet, but kenotic self-sacrifice gives over for the sake of another. Romeo and Juliet's ending, some "reconciliation of the hostile families over the dead bodies of their children", receives no endorsement (Balthasar 1988, p. 472). A more theodramatic conclusion-and one that follows Soleimanpour's warning to audience-rabbits who mindlessly obey political and social pressures-includes the mission to act differently. Such a take on theatrical suicide appears at the end of Shakespeare's play in the version described in (VanZandt Collins 2020, p. 7). 54 Thanks to Jewelle Bickel for the tickets and to Justin E. 
Crisp with whom I saw the play and enjoyed much conversation that first formed the ideas in this essay. Meaning cascaded onto the play through free obedience to coincidences unimaginable to the writer. When the day's date was correctly given as 4 April 2016 by a member of the audience, a voice rapidly followed proclaiming something like "the anniversary of Martin Luther King Jr.'s assassination". The date changed the meaning of the play. The Blackness of the actor's body bestowed layers of significance onto Soleimanpour's text previously illegible (at least to me) as racialized. Should the colored rabbits of the title-white and red-point toward something about race? Do its scenes of playful cruelty to animals evoke systemic racism? Brady paused and took stock of the situation. The play (billed as a comedy) generates all sorts of what Ricoeur might call "surplus meaning" in its performance. Brady balanced a night of improvisational silliness with palpable reverence for the play's themes about the spectacle of death, self-sacrifice, playful freedom, and obedience. Following the production's established convention to signal when breaking from the script, Brady raised his hand to ask "Do you see what he's saying here?" The play never stopped being funny, but its meditations about the danger of absolute obedience took on specificity in surplus meanings created by the interpretive work of a famous Black actor in the United States guiding his room full of players on this anniversary. One section of Soleimanpour's script enumerates seventeen "ways to commit suicide", perhaps meant to be improvised (Soleimanpour 2017, pp. 46-47). The page can be elongated and played for laughs by miming each item as a theatrical prompt. In some ways, that suicide list in White Rabbit Red Rabbit (far from its only reference to self-sacrifice or self-caused death) recapitulates rehearsal improvisation games that heighten melodrama to the point of absurdity. Brady refused to play the suggestions in order to underscore the seriousness of the play's questions. (He made one exception for a slip of the tongue-I cannot remember exactly, perhaps from "hunger strike" to "hunger shark" (Soleimanpour 2017, p. 47)-and consented to improvise death by shark attack. Brady made a hilarious squeaking noise as he donned a mimed "swimming cap".) Brady placed no special emphasis on the phrases "hanging" or "provoking the police" in his performance (Soleimanpour's suicide options number seven and seventeen, respectively). But the spectacle of a Black man's death (even if it is "very probably FAKE") to entertain the (predominantly) white audience can turn the focus of theodramatic interpretation. The text connects this singular performance with the moment of composition, but the context of the memory of King's assassination and the persistence of anti-Black racism and violence in the United States situated the play's Showtime in a third crosscurrent of history. As James H. Cone puts it, "The lynching tree is the cross in America" (Cone 2011, p. 158). The play's freedom for interpretation beyond its textual boundaries finds in this actor, theatrically "poisoned" and sacrificed in obedience to the plot, an image of the lynched Christ. 55 Brady's performance brought, in M. Shawn Copeland's term, enfleshed theological meanings to rupture expectations about the play's theodramatic form (Copeland 2009). Performance generates meanings beyond what might be presumed to be intended. 
These include von Balthasar's presumptions about directors that impose the politics of the present onto drama (or theology). 56 "Now our creaturely becoming has a share in the ineffable 'becoming' of the Divine Being" (Balthasar 1998, p. 131). The play performs its own theatrical inverse of theodramatic reality: the play stages what one dares to hope is a virtual (Soleimanpour's "fake") manifestation of the actor's 55 Dolores Williams, by contrast, calls into question the notion of kenotic substitution of the Christ's death for the sake of the world as "surrogacy". Williams critiques the image of surrogacy that rhetorically connects the historical experience of Black women in the United States with Christian theologies of substitutionary atonement. (See Williams 1991, p. 9ff.) See also Williams's discussion of the rhetorical power of theological symbols in (Williams 1993). 56 "Here the director meets his hardest task: he must be committed enough to make the play relevant and at the same time civilized enough not to equate this here-and-now relevance with a narrow doctrine of society. The theatre is a political reality, in a lofty and noble sense, but it should not be misused for political party propaganda" (Balthasar 1988, p. 303, emphasis original). forsakenness by the playwright in potential self-sacrifice. Dramatic art operates in Soleimanpour to disclose reality in the drama of history. White Rabbit Red Rabbit makes present the threat that theatre's madness and violence might become real. But such is the threat of free obedience to any script. One never knows for sure until after making choices. The play concludes with an audience exiting while the actor lays and lies "dead". Brady remained still, and the room of audience-players departed in a reverential silence usually reserved for the sacred. Alejandro García-Rivera argues that theological aesthetics in the tradition of von Balthasar can become a means for "lifting up the lowly" (García-Rivera 1999, pp. 187-96). 57 Time and bodies confront Soleimanpour's text to tell complex and ambiguous stories: Iranian political oppression, anti-Blackness in the United States, a cosmic theodrama. White Rabbit Red Rabbit questions the closure of theatrical or theological interpretation in obedience to a singular vision of possibility. Brady's improvisation can be one way of enfleshing what Ashon T. Crawley calls "otherwise possibility" (Crawley 2017). 58 But the singularity of White Rabbit Red Rabbit carries with it the melancholy of a loss. Like von Balthasar's notion of the history of the world illumined by the fact of the Christ's incarnation, the play can only be played once by an actor without knowledge of what comes next. Now, with awareness and dramatic irony, the play opens for participation in infinite numbers of other credible interpretations. Players return to participate in yet another instantiation of its "eph-hapax" meaning. Anyone who has seen or read the play now can produce it with Soleimanpour's permission included in the script (Soleimanpour 2017, p. 55). 59 White Rabbit Red Rabbit aims for some real connection via the timelessness and spacelessness of words made present. Even after its performance, White Rabbit Red Rabbit gestures toward creaturely invitation to resurrected life as something akin to von Balthasar's theology of all creatures' ability to participate in the Christ's, the Actor-Son's death. Death's "non-time" has been engulfed by God's eternity. 
60 Perhaps it can be better put in the language of Blanchot's "feeling of lightness" of "the infinite opening up?" when writing the experience of a halted execution: "I know, I imagine that this unanalyzable feeling changed what there remained for him of existence. As if the death outside of him could only henceforth collide with the death in him. 'I am alive. No, you are dead'" (Blanchot 2000, pp. 8-9). Blanchot's phrase now applies to Brady (still alive and still working as an actor, at least at the time of my writing, still alive and still working on this essay) and to everyone else complicit in a production of White Rabbit Red Rabbit. So too is the confrontation between Blanchot's phrase and the work still left to do for every player in a social drama so tacitly complicit in anti-Black violence. White Rabbit Red Rabbit calls attention to present and active bodies, especially those whose surplus meanings fail to obey arbitrary genre expectations. The play's theodramatic Showtime displays religious and theatrical co-present and co-presence to be freeing even if already written: "I did not see you, but in a way, I met you. And I am happy. The end" (Soleimanpour 2017, p. 63). Funding: This research received no external funding. 57 An experience of the Beautiful-in community-lifts up the lowly because "The aesthetic sign 'calls' the heart to discern original Beauty so that it may orient itself towards a Beautiful end" (García-Rivera 1999, p. 190). 58 For Crawley, "enfleshment [is] distinct from embodiment . . . enfleshment is the movement to, the vibration of, liberation and this over and against embodiment that presumes a subject of theology, a subject of philosophy, a subject of history" (Crawley 2017, p. 6). 59 There can be no easy legal identification of the play's writer, called Nassim Soleimanpour, and the owner of the play's intellectual property, presumably the same Nassim Soleimanpour. The print version contradicts the play's text; its copyright page asserts "All rights whatsoever in this play are strictly reserved and application for performance etc. should be made . . . to Nassim Soleimanpour c/o Oberon Books. No performance may be given . . . and no alterations may be made . . . without the author's prior written consent". But is such consent not already contained within the play's text? 60 "In his Resurrection, Jesus has already taken the whole of transitory time (including life and death) with him into eternal life which was the source of his constant obedience to the Father's commission. This means he also recapitulated the 'non-time' of the dead. It also means that the Risen One does not live in some 'intermediate time' before the 'end of the world'" (Balthasar 1998, p. 128, with internal references to Adrienne von Speyr). Due diligence notes that while von Balthasar also appeals to the poetics of wideness, he more frequently invokes military-sexual metaphors for God's relationship to the world and its time. For example, "God intends not only to dominate creaturely time from above but to embed it, with all its created reality, in his eternal time" (Balthasar 1998, p. 127). Conflicts of Interest: The author declares no conflict of interest.
16,801.4
2020-09-29T00:00:00.000
[ "Art", "Philosophy" ]
Distributed Application Management in Heterogeneous GRIDS
Distributing an application on several machines is one of the key aspects of Grid-computing. In the last few years several groups have developed solutions for the communication problems that arise. However, users are still left on their own when it comes to the handling of a Grid-computer as soon as they face a mix of several Grid software environments on the target machines. This paper presents a tool which eases the handling of a Grid-computer by hiding some of the complexity of heterogeneous Grid-environments from the user.
INTRODUCTION
The Grid is generally seen as a concept for 'coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations' [5]. The original idea came from scientists, who were mainly interested in the scientific solution of their problem. Their jobs can be executed on any machine, respectively on any set of machines, to which they have access. Like in Metacomputing, which was a popular concept in the mid-90s [4], the distribution of jobs onto several machines is an important part of the Grid concept. The need for doing so mainly comes from applications which have a high demand for computational resources [10][2], or from the wish to increase throughput on the machines and reduce turnaround times. Distributing an MPI job onto several machines, however, imposes many problems on the user and the application. Most of the problems stem from the fact that the Grid is heterogeneous in several respects. This starts with different data representations on different machines, which require data conversion either in the MPI-library or in the application. Various processor speeds and differences in the available memory require the application to have a smart initial load distribution and/or dynamic load balancing. The differences in the communication characteristics between processes located on potentially different sites require distinct programming techniques for hiding the wide-area latency and dealing with the low bandwidth. The latter aspects have been handled in the last couple of years by several projects which focused on the development of MPI libraries for Grid environments, like MPICH-G2 [12], STAMPI [11], MPI_Connect [6] and PACX-MPI [9]. Another aspect of heterogeneity in the Grid has up to now not really been handled: the user also has to deal with several mechanisms for accessing a machine and starting a job on it. While some computing centers run ssh [17], rsh or similar protocols, Grid software environments like Globus [5], Legion [16] or UNICORE [1] are getting more common. There is a high probability that the user has to deal with a mix of these access mechanisms when distributing an application onto several hosts. In the following, this is the aspect of heterogeneity we mean when we refer to the Grid as heterogeneous. Besides troubling the user, this aspect of the Grid also affects the startup procedure of MPI libraries. Library developers tend to integrate their communication library and the startup procedure into a specific Grid software environment. For example, MPICH-G2 is part of the Globus software toolkit and uses the Globus mechanisms for authentication, authorization and communication. However, this implies that it is hard to execute an MPICH-G2 job on a machine where Globus is not installed. 
The default startup procedure of PACX-MPI requires just a login on each machine. In addition, further methods for starting PACX-MPI jobs are provided, which interface directly with several Grid computing environments. Together with the Globus group, an interface has been defined and implemented which allows a PACX-MPI job to use all Globus features mentioned in the previous section [22]. Furthermore, a plug-in for the UNICORE Grid-computing environment for PACX-MPI is currently under development. However, none of these solutions support the user if he has to deal with a heterogeneous Grid-computing environment. If Globus is installed on the first machine, UNICORE on the second one, and ssh is the only way to access the third machine, the user currently cannot use a uniform startup procedure with any of the libraries mentioned above. He might run a PACX-MPI job on all machines, but in this case he is not taking advantage of any of the features of the Grid software environments. This situation is not satisfactory. Our experience in the last years doing distributed simulations all over the world (see e.g. [13]) showed furthermore that, starting with a certain number of machines, it is not easy to handle a terminal for each machine. In particular, PACX-MPI requires some configuration files to be available on all hosts. When modifying the configuration of the metacomputer, the user has to update the configuration files of PACX-MPI on all hosts. While this is not a problem for two or three machines, it is an error-prone procedure for more. To solve the problems described, a tool called the 'Configuration Manager' is currently under development in the frame of the DAMIEN project [19], sponsored by the European Commission; it is designed to handle distributed MPI-applications in heterogeneous Grid-environments. The primary goal of the Configuration Manager is to ease the handling of resources and applications for PACX-MPI jobs. The structure of this paper is as follows: in the next section we describe PACX-MPI and the required configuration files for the library. Then, we present the goals, concept and implementation of the Configuration Manager. The following section gives some usage examples of the Configuration Manager. Finally, we summarize the paper and give a brief overview of the ongoing work in this area. 
PACX-MPI
This section describes the communication library PACX-MPI [9]. The focus is on the startup procedure and the configuration files of PACX-MPI, since these aspects are necessary for understanding the concept of the Configuration Manager. PACX-MPI is an implementation of the message passing standard MPI that is optimized for Meta- and Grid-computing environments. A key aspect of these environments is the difference in the characteristics of the communication layers regarding latency and bandwidth. If communication occurs between two processes within the same machine, the latency is usually in the range of microseconds, and the bandwidth in the range of tens to hundreds of MBytes/s. However, if the communication partners are distributed on different machines, latencies tend to be in the range of tens of milliseconds, and bandwidth drops to some MBits/s. PACX-MPI takes these characteristics into account by using two independent layers for handling communication between processes on the same machine (internal communication) and for handling communication between processes on different machines (external communication). Internal communications are handled by using the vendor MPI-library on each machine, since this is a highly efficient and portable communication method. For external communications two communication daemons have been introduced, which translate the MPI messages into TCP/IP messages or vice versa. Furthermore, PACX-MPI has optimized the high-level functions of the MPI standard, e.g. the collective operations [7] or the derived datatypes [8], for these environments. For more details, see e.g. [9]. When starting an MPI-application on several hosts simultaneously, PACX-MPI reads two configuration files, which answer the following questions:
• Which machines are participating in the current run?
• Which is the rank of this machine in the given configuration?
• Which network protocol is a machine using for communicating with each other host?
The first question is important for determining which machines have to be contacted at all. The second question deals with the problem of how to assign a global, unique rank to each process in the global configuration. This problem is solved in PACX-MPI by using, for each participating machine, a unique host-rank together with the local MPI-rank of each process. Finally, the third question clarifies the communication protocol between each pair of hosts and the attributes of the protocol. This information consists of a list of protocols supported by PACX-MPI (currently TCP/IP, a native ATM protocol and a secure protocol based on SSL), and some protocol-dependent attributes, e.g. the port number under which a machine can be reached for the TCP/IP protocol. This information is split into two configuration files: the hostfile, which contains the information regarding the participating machines, and the netfile, which deals with the network parameters of the current run. While the hostfile is mandatory for each run, the second configuration file is optional. In case the library does not find this file, it assumes that all machines are using the default networking protocol (TCP/IP) with default port-numbers. 
In the following, we would like to describe the configuration files using a simple example of three machines, called host1, host2 and host3, with 16 processors on the first two machines and 24 processors on the last one. While the first two machines are within the same institution and protected by a firewall, the third machine is located at a different site. Therefore, one can use standard TCP/IP between host1 and host2, while a secure protocol is required to the third machine. This configuration is also shown in figure [1]. In the most simple case, the according hostfile looks like the following:
host1 16
host2 16
host3 24
The hostfile consists of a list of the machines and the number of processes used for the application on each machine. The format is straightforward and based on the well-known machinefile of MPICH [15]. This information, however, is not enough for cluster-like machines (e.g. IBM RS6000 SP, PC-clusters), where each processor has its own IP-address and therefore the machine cannot be identified by a single name or IP-address. In this case, the format of the hostfile and the whole startup procedure is different: it contains the keyword Server, the name or the IP-address of the Startup-Server and the rank of the machine in the global configuration.
Server <Startup-Server> <host-rank>
The Startup-Server is an independent process. Its purpose is to collect relevant startup information, e.g. the IP-addresses of all PACX-MPI daemons, and redistribute it in a second step to all hosts. For the example described above, the second configuration file, the netfile, looks like the following:
BEGIN 1;
HOST=3, PROTOCOL=SSL;
END;
BEGIN host2;
HOST=3, PROTOCOL=SSL;
END;
BEGIN 3;
HOST=1, PROTOCOL=SSL;
HOST=2, PROTOCOL=SSL;
END;
The file contains one section for each participating machine. A section starts with a BEGIN statement and either the rank of the machine in the global configuration or the name of the host, and ends with an END statement. Inside each section, the network connections of this host to all other machines are described. To simplify the usage of this file, only those connections between hosts have to be described which do not use the default parameters. Therefore, the connection between host1 and host2 need not be described.
GOALS, CONCEPT AND IMPLEMENTATION OF THE CONFIGURATION MANAGER
To automate the handling of resources for a PACX-MPI job, the Configuration Manager has to provide the following basic functionality:
• Create the configuration files for PACX-MPI: Based on a GUI, the user describes the required resources for the current run (a sketch of this step is given below).
• Distribute the configuration files onto all machines: Since the configuration files of PACX-MPI have to be available on all machines, the Configuration Manager has to transfer the most recent version of these files to all machines.
• Start the application on all hosts: Among the information the user has to provide when specifying a machine in the Configuration Manager is a command that will be used to start the application on the given host. 
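As an illustration of the first item, the following minimal Java sketch shows how such configuration files could be generated from a list of host descriptions. The class and method names (HostEntry, PacxConfigWriter, writeHostfile, writeNetfile) are purely illustrative assumptions and are not taken from the DAMIEN sources; only the file layout follows the hostfile and netfile examples shown above, and the sketch is written against a current Java version rather than the Java of the original tool.

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative description of one machine: name, process count, host-rank and,
// optionally, non-default protocols towards other hosts (host-rank -> protocol).
class HostEntry {
    String name;                                   // e.g. "host3"
    int processes;                                 // e.g. 24
    int rank;                                      // host-rank in the global configuration
    Map<Integer, String> protocols = new HashMap<>();

    HostEntry(String name, int processes, int rank) {
        this.name = name; this.processes = processes; this.rank = rank;
    }
}

public class PacxConfigWriter {

    // Writes the simple hostfile format: one line "<name> <processes>" per machine.
    static void writeHostfile(List<HostEntry> hosts, String path) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(path))) {
            for (HostEntry h : hosts) {
                out.println(h.name + " " + h.processes);
            }
        }
    }

    // Writes one BEGIN/END section per machine, listing only non-default connections.
    static void writeNetfile(List<HostEntry> hosts, String path) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(path))) {
            for (HostEntry h : hosts) {
                if (h.protocols.isEmpty()) continue;   // default TCP/IP needs no section
                out.println("BEGIN " + h.rank + ";");
                for (Map.Entry<Integer, String> e : h.protocols.entrySet()) {
                    out.println("HOST=" + e.getKey() + ", PROTOCOL=" + e.getValue() + ";");
                }
                out.println("END;");
            }
        }
    }
}

For the three-machine example above, host3 would carry the protocol entries {1=SSL, 2=SSL} and host1 and host2 one entry {3=SSL} each, which reproduces the netfile shown earlier; the connection between host1 and host2 is simply left out.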
As described in the introduction, the heterogeneity of the Grid does not only consist of different types of machines with different data representations, but also of the methods by which the user can access the machines. Grid software like Globus, Legion and UNICORE is meanwhile well established. On the other hand, some machines will probably continue to be accessible only by ssh, rsh or similar methods. Therefore, another goal of the Configuration Manager is to hide the complexity of how to access each individual machine in every-day use. To hide the complexity of heterogeneous Grids from the user, the Configuration Manager requires the following functionality for each access mechanism:
• Authentication and authorization: this can be done either by a combination of username and password for ssh, or by a certificate for Globus and UNICORE.
• File transfer: this is required for sending the current versions of the PACX-MPI hostfile and netfile to all hosts. Furthermore, file transfer is necessary for executable staging, for transferring input files to the according machines, and for moving output files back onto local machines.
• Job submission: this is used to start each part of the distributed job on all machines.
The job submission can be further detailed into the following items:
• Name and path of the application: this has to be handled separately for each machine, since the requirement of having the same name and path for the application on all systems can often not be fulfilled (e.g. the name is <myappl>.<architecture>).
• Number of processors: again, this is machine- and run-specific information and has to be stored for each host separately.
• Generic MPI command: the command used to start an MPI application on a certain machine. In fact, the user often faces the problem that several MPI libraries with different MPI start commands (e.g. mpirun, mpiexec, poe) are installed on a machine.
The concept of the Configuration Manager foresees the possibility to store the settings for a certain machine. Typically, an end-user will have a limited number of machines which he uses for his distributed runs. This feature also allows him to store the same machine with different MPI implementations (e.g. <machinename>.<vendormpi> and <machinename>.<mpich1.2.3>). Implementation Issues The Configuration Manager is implemented in Java, following Model-View-Controller (MVC) principles. This concept separates input, processing and output in graphically oriented software. Therefore, a programmer using the MVC principles has to deal with three different components:
• the model, which represents the application object,
• the view, which realizes the representation of the model on the screen, and
• the controller, which defines the reaction to input from the user interfaces.
The current implementation of the Configuration Manager supports two different methods to save and open a configuration. In case the same user wants to reuse a configuration, it is convenient for him not to have to enter the passwords or specify the certificates for each machine every time. Therefore, the Configuration Manager encrypts all passwords and certificates when saving the configuration. Thus, the user only has to enter a single password when opening a configuration. The encryption functionality is handled using the Java Cryptography Extensions (JCE), with an implementation from Bouncy Castle [21].
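As an illustration of this password-protected save/open step, the sketch below shows an analogous approach in Python using the "cryptography" package (the actual tool uses Java's JCE with a Bouncy Castle provider, so this is not its implementation). The helper names are assumptions made for this example.

import base64, json, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _key_from_password(password: str, salt: bytes) -> bytes:
    # Derive a symmetric key from the single password the user enters on opening
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

def save_configuration(config: dict, password: str, path: str) -> None:
    """Encrypt the whole configuration (including stored passwords/certificates)."""
    salt = os.urandom(16)
    token = Fernet(_key_from_password(password, salt)).encrypt(json.dumps(config).encode())
    with open(path, "wb") as f:
        f.write(salt + token)

def load_configuration(password: str, path: str) -> dict:
    """Decrypt a previously saved configuration with the same password."""
    raw = open(path, "rb").read()
    salt, token = raw[:16], raw[16:]
    return json.loads(Fernet(_key_from_password(password, salt)).decrypt(token))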
However, it is sometimes desirable to pass a configuration on to another person. In this case, passwords and certificates should not be included in the configuration. The Configuration Manager supports this scenario by providing export/import functions. For the integration of the startup-server mechanism of PACX-MPI for handling cluster-like machines, an implementation of the startup-server in Java has been developed. When using this startup method of PACX-MPI, the GUI automatically changes the content of the hostfile. The user can then launch the startup-server, which is executed in a separate thread. Thus, the handling of the different formats of the PACX-MPI configuration files is nearly seamless for the end-user, improving the usability of PACX-MPI. An ssh-based Grid-terminal The first access mechanism implemented in the Configuration Manager was ssh. Due to its wide usage, ssh is still considered to be the most widespread method to access a high-performance computer. The implementation in the Configuration Manager is based on the Java ssh implementation of Mindbright [18]. Based on this library, a grid-terminal (gridterm) has been developed, which allows the user to execute a command simultaneously on one, a subset, or all hosts in the current configuration (see Figure 3). Stdout and stderr of all hosts are merged into a single window. This helps to avoid the situation mentioned in the introduction, where, beyond a certain number of machines, the user has problems managing regular terminals on all hosts. It is therefore a major improvement over a regular terminal for Grid environments. At the current stage, the Configuration Manager supports password-based ssh access to the different machines. Integration with UNICORE Currently ongoing work is the implementation of UNICORE as a second access mechanism. UNICORE (Uniform Interface to Computing Resources) is a software infrastructure with the goal of providing the user an easy way to submit jobs or chains of jobs to supercomputers. Furthermore, UNICORE supports the idea of specifying a multi-site job, where the first part of an application is executed on the first machine and the second part is executed on the second machine. In the frame of the UNICORE Plus project, the handling of Metacomputing jobs with PACX-MPI is already supported. To realize this, the plug-in mechanism of UNICORE is used for creating the PACX-MPI configuration files and for launching the jobs. However, as mentioned in the introduction, this mechanism requires UNICORE to be installed on all machines which are part of the Metacomputer. Therefore, another solution for handling mixed environments is currently being developed for the Configuration Manager.
Like ssh, UNICORE supports file transfer and the starting of a job on a machine. Two major differences to ssh are, however, important when integrating UNICORE into the Configuration Manager:
• UNICORE does currently not support interactive access to computing resources. While this is ongoing work in the frame of the Eurogrid project [14], it means that all jobs (even a simple ls command) are executed as batch jobs. Since the co-allocation of resources across several sites is still an unsolved problem, this also implies that it is currently up to the user to make sure that all resources are available at the same time.
• The access to UNICORE resources is based on X.509 certificates, not on passwords. Since certificates are also part of the ssh2 protocol, the Configuration Manager is currently being extended to support the management of certificates as well.
EXAMPLE OF USAGE SCENARIOS The Configuration Manager has been used in several demonstrations and scenarios. It is currently being tested under industrial conditions by the European Aeronautic Defense and Space Company (EADS) in the frame of the DAMIEN project. In this section, we give a brief description of another scenario presented in the frame of the DAMIEN project, where computational resources in Stuttgart/Germany, Rouen/France and Barcelona/Spain were connected for a distributed simulation between these sites. The machines used were an SGI Origin 2000 in both Rouen and Stuttgart and an IBM RS6000 SP in Barcelona. During the preparation of the demonstration, the network between the sites was continuously modified in order to improve the connection of each site to the new European high-speed backbone network Geant. Here, the Configuration Manager turned out to be extremely useful, since the launching of the application could be done within seconds. Therefore, an immediate response to the networking experts about the quality of the network and the impact of their modifications could be provided. The application used for the simulation was a flow solver called URANUS [3]. This code simulates the flow around a space vehicle during the reentry phase. In the final run, a simulation of the MIRKA vehicle was successfully demonstrated for a medium-size problem. Of importance for the Configuration Manager is one technical issue of URANUS, its directory structure. At compile time, a directory is created for each architecture. Thus, the executables are not located in the same directory on the IBM RS6000 SP and on the SGI Origin 2000. While the whole run could also have been done without the Configuration Manager, its usage eased the management of the application and provided an easy-to-use tool for Grid- and Metacomputing. CONCLUSIONS AND FUTURE PLANS In this paper we presented a tool which helps to ease the usage of distributed resources in Grid environments. The Configuration Manager creates the configuration files for PACX-MPI, transfers them to the participating machines, and starts the application on all hosts simultaneously. Furthermore, it hides parts of the complexity of the Grid from the user by supporting several mechanisms for reaching different machines. Currently implemented are ssh and (partially) UNICORE. This tool has proved its usability in several demonstrations and projects, and is currently being tested in an industrial environment.
A further aspect of the Configuration Manager has proved its usefulness beyond the submission of PACX-MPI jobs. The grid-terminal (gridterm) is an extension of a regular terminal which enables the simultaneous execution of commands and applications on several hosts in a Grid environment. Further plans for the tool involve the support of other Grid software environments such as Globus or Legion. Together with the already implemented mechanisms, we believe that we will then cover the most relevant software environments for Grid computing. In the frame of the DAMIEN project, PACX-MPI is being extended to handle Quality-of-Service parameters. The Configuration Manager will support the specification of these parameters as well. An omnipresent question currently is the convergence of all Grid tools toward the Open Grid Services Architecture (OGSA) [23], since most of the Grid software environments, e.g. Globus or UNICORE, have committed to converting toward this new architecture. In this concept, Grid computing is constructed out of several Grid Services with defined interfaces and semantics. These services are accessible through several protocols, e.g. the Simple Object Access Protocol (SOAP) [24]. The development and implementation of a tool like the Configuration Manager could potentially be eased by such a common architecture, since all operations could be formulated as requests to an abstract system, independent of the implementation underneath. However, such a common architecture would not fundamentally affect the Configuration Manager, for two reasons: first, none of these systems currently exceeds the status of pure demonstrations, and second, the argument that some systems will remain accessible only through protocols like ssh remains valid.
FIGURE 1: Example configuration consisting of three hosts.
FIGURE 2: Screenshot of the Configuration Manager.
FIGURE 3: Screenshot of the grid-terminal.
4,944.8
2002-12-17T00:00:00.000
[ "Computer Science" ]
Thermal Conductivity of Polyamide-6,6/Carbon Nanotube Composites: Effects of Tube Diameter and Polymer Linkage between Tubes Reverse nonequilibrium molecular dynamics simulations were done to quantify the effect of the inclusion of carbon nanotubes (CNTs) in the Polyamide-6,6 matrix on the enhancement in the thermal conductivity of polymer. Two types of systems were simulated; systems in which polymer chains were in contact with a single CNT, and those in which polymer chains were in contact with four CNTs, linked together via polymer linkers at different linkage fractions. In both cases, heat transfer in both perpendicular and parallel (to the CNT axis) directions were studied. To examine the effect of surface curvature (area) on the heat transfer between CNT and polymer, systems containing CNTs of various diameters were simulated. We found a large interfacial thermal resistance at the CNT-polymer boundary. The interfacial thermal resistance depends on the surface area of the CNT (lower resistances were seen at the interface of flatter CNTs) and is reduced by linking CNTs together via polymer chains, with the magnitude of the reduction depending on the linkage fraction. The thermal conductivity of polymer in the perpendicular direction depends on the surface proximity; it is lower at closer distances to the CNT surface and converges to the bulk value at distances as large as 2 nm. The chains at the interface of CNT conduct heat more in the parallel than in the perpendicular directions. The magnitude of this thermal conductivity anisotropy reduces with decreasing the CNT diameter and increasing the linkage fraction. Finally, microscopic parameters obtained from simulations were used to investigate macroscopic thermal conductivities of polymer nanocomposites within the framework of effective medium approximation. Introduction The low thermal conductivity (usually ranging from 0.1 to 0.5 W/(m·K) at room temperature) of polymers limits their use in many engineering applications [1]. For example, in polymer-based electronic systems a higher thermal conductivity of the order of 1 to 30 W/(m·K), is needed to dissipate the waste heat generated during the operation of device [2]. It is known that the addition of highly conductive nanofillers to polymers modifies their thermal/mechanical properties. Due to their excellent resistance to corrosion, light weight, and ease of processing, such polymer nanocomposites are regarded as the new paradigm for materials with diverse applications in electronic, automotive, and aerospace industries, as well as in energy devices [3,4]. In fact, significant alterations in the structural and dynamic properties of polymer occur at very low loadings (≈ 1-5%) of nanofillers, such as graphene nanoplatelets and carbon nanotubes (CNTs). The enormous interfacial area provided by the nanofillers has a large impact on the surrounding polymer matrix, extending to a few radii of gyration of the unperturbed chain. Among nanofillers, CNTs have attracted considerable attention as ideal fillers due to their high thermal conductivity (≈ 2000-6000 W/(m K) at room temperature). However, even though CNTs have high thermal conductivities, the polymer/CNT nanocomposites have substantially lower thermal conductivities than what would be expected based on a linear law of mixing [5]. In fact, the thermal resistance (Kapitza resistance) at the interface between CNT and polymer is a barrier against heat conduction from CNT to polymer. 
So far, a significant amount of literature on heat transport in polymer nanocomposites has been devoted to quantifying the thermal resistance at the polymer/filler interface [3][4][5][6][7][8]. In the case of CNT-based nanocomposites, experimental measurements show a moderate increase in the thermal conductivity of the polymer matrix (≈ 50-250%) at ≈ 7% of CNT loading [9,10]. In some cases, even the reports on the thermal resistance at the polymer/filler interface are controversial; the study by Bonnet et al. [11] showed no considerable improvement, and a study by Moisala et al. [12] showed a decrease in the thermal conductivity with the addition of CNTs. From a theoretical point of view, the effective medium approximation (EMA) [13] is used to estimate the macroscopic properties of composites by averaging the properties of their constituents. This theory is valid at low filler concentrations, and the estimates of thermal conductivity using this method are much higher than experimental measurements. Further progress in the theoretical estimation of heat transport in composites was made through the development of the acoustic mismatch [14] and diffuse mismatch [15] models. However, both models fail to accurately describe the interfacial phonon-scattering process. Complementary to experimental investigations and theoretical modeling, computer simulations have also been conducted to understand the microscopic picture of the thermal resistance at the interface. Early molecular dynamics (MD) simulation studies on heat transfer at solid-polymer junctions include the nonequilibrium molecular dynamics (NEMD) simulation of a CNT (5, 5)-octane system (octane as a model of the polymer) [5] and that of silicon-amorphous polyethylene [16] by Keblinski et al. Further simulation studies in this field address heat transfer at the junctions of solids with chemically more detailed polymers at the interface of CNTs or graphene sheets [17][18][19][20][21][22][23]. The results of these simulations have revealed a substantial thermal resistance at the polymer/solid interface, depending on the packing of the polymer and chain stretching at the interface, as well as on the functionalization of the CNT/graphene by polymer chains. Although significant information on the mechanism of heat transfer has been obtained in these simulations, there are still unresolved questions on the influence of the molecular nature of polymer/filler interactions on the heat resistance at the CNT/polymer interface. In this study, we investigated the structure and dynamics of Polyamide-6,6 (PA-6,6) at the interface of CNT (17, 0), CNT (10, 0), and CNT (6, 0) [24]. The aim of this work is to investigate the heat transfer at the interface of PA-6,6/CNTs. The effects of surface curvature, chain orientation, and the linkage of CNTs by polymer chains on the heat transfer will be discussed. Theory The reverse nonequilibrium molecular dynamics (RNEMD) technique [25][26][27] was used to calculate the thermal conductivity of PA-6,6 at the CNT interface. According to this method, the heat flux is imposed and the resulting force is measured. The heat transfer can be studied in the direction perpendicular or parallel to the CNT axis. The perpendicular heat flow can be studied by dividing the simulation box into a number of cylindrical shells around the CNT. The heat flow is artificially maintained between the CNT and the outermost cylindrical shell.
Because the energy is conserved, it flows back through the system (in the radial direction) via a physical transport mechanism. As a result, a temperature gradient develops in the system. A projection of the simulation box onto the plane normal to the CNT axis (the xy plane), which depicts the artificial heat transfer and the physical heat flow, is shown in Figure 1b. According to Fourier's law of heat conduction, the heat flux is connected to the force through Equation (1), where J(r) is the heat flux in the radial direction, dT/dr is the temperature gradient along the radial direction (the force), and λ⊥ is the coefficient of thermal conductivity in the perpendicular direction. In the RNEMD method, the non-physical heat transfer is performed by exchanging the velocities of atoms in the CNT and in the outermost cylindrical shell of Figure 1b. For this purpose, the Cartesian velocity components of the coldest atom in the hottest cylindrical shell (the C atom of the CNT with the smallest velocity) are exchanged with those of the hottest atom in the coldest cylindrical shell (the C atom in the outermost polymer shell with the largest velocity). In the steady state, the average energy flux is expressed by Equation (2), where l is the length of the CNT, r is the radius of the cylindrical shell around the CNT, m is the atomic mass, t is the duration of the simulation, and the subscripts hot and cold refer to the hot and cold regions, respectively. In Equation (2), Δε/t is the rate of energy transfer between the hot and cold shells, and the factor 2πrl is the surface area of the cylindrical shell, co-centered with the CNT, located at distance r from the CNT axis. From Equations (1) and (2), the thermal conductivity in the perpendicular direction, Equation (3), follows.
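Written out, these relations take the standard RNEMD form (a reconstruction consistent with the definitions above; the notation and prefactors of the original Equations (1)-(3) may differ slightly):

J(r) = -\lambda_{\perp}\,\frac{dT}{dr} \qquad \text{(Equation (1), Fourier's law)}

J(r) = \frac{1}{2\pi r l\, t}\sum_{\mathrm{transfers}}\frac{m}{2}\left(v_{\mathrm{hot}}^{2}-v_{\mathrm{cold}}^{2}\right) \qquad \text{(Equation (2))}

\lambda_{\perp} = -\,\frac{\sum_{\mathrm{transfers}}\frac{m}{2}\left(v_{\mathrm{hot}}^{2}-v_{\mathrm{cold}}^{2}\right)}{2\pi l\, t\,\left\langle dT/d\ln r\right\rangle} \qquad \text{(Equation (3))}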
To perform RNEMD simulations of heat transport in the direction parallel to the CNT axis, the velocity exchange is done in slabs perpendicular to the tube axis. In this case, J(r) in Equations (1) and (2) is replaced with J(z), assuming that the CNT axis is oriented along the z direction, and the factor 2πrl in the denominator of Equation (2) is replaced with l_x l_y (l_x and l_y being the dimensions of the simulation box along the x and y directions, respectively). Simulations We did two sets of simulations for calculating the thermal conductivity of the polymer-CNT nanocomposites. In the first set, the PA-6,6 chains, consisting of 6 chemical repeat units (see the structure in Figure 1a), were in contact with a single CNT of length 5.983 nm, coinciding with the dimension of the MD simulation box in the z direction. Here, the heat transfer in the perpendicular (to the tube axis) direction was studied by exchanging velocities of the C atoms of the CNT and those of the outermost cylindrical polymer shell. A snapshot of the simulation box describing the velocity exchange in the perpendicular direction is shown in Figure 1b. In the second set, the PA-6,6 chains were in contact with 4 CNTs, linked together by PA-6,6 linkers of fixed length. We have tabulated the details of the simulated systems in Table 1. To have the same volume fraction of CNT in the simulation box as in the case where the PA-6,6 chains were in contact with a single CNT, we doubled the size of the simulation box in the x and y directions. The structure of the linkers is close to that of the free PA-6,6 chain; only the terminal methyl and butyl groups of the PA-6,6 chains in Figure 1a are removed, and the terminal carbonyl C atoms are grafted to the CNTs. The linkers extending in each direction were grafted to nearly equidistant C atoms along the z direction, and the numbers of linkers connecting CNTs along the x and y directions were the same. The axes of the CNTs linked together via linkers were ≈ 7.3 nm apart (see Table 1). For systems with linked CNTs, the thermal conductivity in the perpendicular direction was studied by exchanging velocities between the CNTs. A snapshot of the simulation box, showing the hot and cold CNTs linked together with PA-6,6 linkers, is shown in Figure 1c. In both sets of simulations, the thermal conductivities in the direction parallel to the tube axis were also studied. To study the effect of surface curvature on the rate of heat transfer at the interface, three systems containing CNT (6, 0), CNT (10, 0), and CNT (17, 0), of diameters 0.475 nm, 0.786 nm, and 1.333 nm, respectively, were simulated in both sets. The force field for the CNTs was of the empirical Brenner type [28,29], and that for PA-6,6 was a flexible united-atom force field [30], which has been shown to predict the thermal conductivity of the polymer in close agreement with experiment. Equilibrium MD simulations were done at 350 K and 101.3 kPa over a time window of 20 ns to generate relaxed structures. Starting from the relaxed structures, RNEMD simulations were done at different (velocity) exchange periods. The simulation box was periodic in all dimensions. All simulations were done using the simulation package YASP [31]. The temperature and pressure were kept constant using a Berendsen thermostat and barostat [32]. The time constants for coupling the system to the thermostat and barostat were 0.2 ps and 5.0 ps, respectively. The nonbonded interactions were truncated at 0.90 nm, and neighbors were included if they were closer than 1.0 nm. The Coulombic interactions were calculated using the reaction-field method with an effective dielectric constant of 5.5 [33]. The time step was 1.0 fs, and during the simulation the trajectories of all atoms were recorded every 1 ps.
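For concreteness, the sketch below (Python/NumPy; not the YASP implementation, and the array names are assumptions) shows one RNEMD velocity-exchange move for the perpendicular set-up: the coldest C atom of the CNT (hot region) swaps Cartesian velocities with the hottest C atom of the outermost polymer shell (cold region). Because both atoms are carbon, swapping velocities between atoms of equal mass conserves the total energy and momentum of the pair.

import numpy as np

def rnemd_swap(vel, masses, hot_idx, cold_idx):
    """vel: (N, 3) velocities; masses: (N,); hot_idx/cold_idx: atom indices of the two regions.
    Returns the kinetic energy moved into the hot region by this swap."""
    ke = 0.5 * masses * np.sum(vel**2, axis=1)      # per-atom kinetic energy
    i_hot = hot_idx[np.argmin(ke[hot_idx])]         # coldest atom in the hot region (CNT)
    i_cold = cold_idx[np.argmax(ke[cold_idx])]      # hottest atom in the cold region (outer shell)
    vel[[i_hot, i_cold]] = vel[[i_cold, i_hot]]     # exchange Cartesian velocity components
    return ke[i_cold] - ke[i_hot]                   # contribution to the transferred energy Δε

Summing the returned values over a run and dividing by the elapsed time gives the exchange rate Δε/t that enters Equation (2).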
Polymer Structure at the Interface The microscopic structure of the polymer at the CNT interface can be depicted in terms of the density profiles in cylindrical shells around the tube. In Figure 2 we show the density profiles, normalized with the bulk density, ρ0 = 1100 kg/m3, for PA-6,6 monomers as a function of their centers-of-mass radial distances from the CNT surface. The polymer at the interface forms ordered layers, which extend to ≈ 2 nm from the CNT surface. The magnitude of the CNT surface effect on the polymer layering at the interface depends on the surface curvature (area). Better structured layers were formed at the interface of the larger-diameter CNT, i.e., CNT (17, 0). The monomers in close vicinity to the CNT (6, 0), whose diameter is very small, wrap around the tube, and their centers of mass fall inside the tube. In the following sections, we show that heat transfer at the interface depends on the formation of these ordered polymer layers. Temperature Profiles To examine the radial heat flow in the polymer, the simulation box was divided into a number of cylindrical shells (parallel to the CNT axis) of specified thickness. The velocity exchange was done between C atoms of the CNT (the first cylindrical shell) and those of the polymer in the outermost cylindrical shell (see Figure 1b). The shell thickness is taken as the distance over which the first polymer density-profile peak spans, i.e., 0.55 nm (see Figure 2). Averaging the temperature in each cylindrical shell, we plotted the temperature profiles as dT/dln(r), according to Equation (3), in Figure 3. A large temperature jump between the CNT and the polymer film in its vicinity can be seen, which is an indication of a large thermal resistance at the CNT/polymer boundary. The temperature jump is stronger in the case of smaller-diameter CNTs, suggesting weaker heat flow between smaller-diameter CNTs and the polymer.
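A hedged post-processing sketch of how profiles like those in Figure 3 translate into the quantities reported below is given here (Python/NumPy; all numerical values are invented placeholders chosen to be of the same order as the results quoted in the text, not simulation output): the perpendicular conductivity follows from the slope of T versus ln(r) via Equation (3), and the interfacial (Kapitza) resistance from the temperature jump between the CNT and the first polymer shell.

import numpy as np

d_shell = 0.55e-9                       # shell thickness / m (first density-profile peak)
l_cnt   = 5.983e-9                      # CNT length / m
r_cnt   = 1.333e-9 / 2                  # CNT (17, 0) radius / m
dE_dt   = 5.0e-8                        # imposed energy-exchange rate / W (placeholder)

r_shell = r_cnt + d_shell * (np.arange(1, 6) - 0.5)       # shell-centre radii / m
T_shell = np.array([353.0, 350.6, 348.9, 347.6, 346.6])   # shell temperatures / K (placeholder)
T_cnt   = 395.0                                           # CNT temperature / K (placeholder)

# Perpendicular conductivity of the polymer from the slope dT/dln(r), Equation (3)
slope = np.polyfit(np.log(r_shell), T_shell, 1)[0]        # dT/dln(r) in K
lam_perp = -dE_dt / (2.0 * np.pi * l_cnt * slope)         # W/(m K)

# Kapitza resistance from the jump at the CNT/first-layer interface, R = dT / J
r_int = r_cnt + d_shell / 2.0
J_int = dE_dt / (2.0 * np.pi * r_int * l_cnt)             # flux through the interface, W/m^2
R_kapitza = (T_cnt - T_shell[0]) / J_int                  # m^2 K / W

print(f"lambda_perp ~ {lam_perp:.2f} W/(m K), R_kapitza ~ {R_kapitza:.1e} m^2 K/W")

With these placeholder numbers the script returns roughly 0.25 W/(m K) and 3e-8 m^2 K/W, i.e., the order of magnitude discussed in the following sections.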
Perpendicular Thermal Conductivities To calculate the interfacial thermal conductivities, the energy was assumed to be transferred between the CNT and the first polymer layer, the position of which corresponds to the position of the maximum of the first density-profile peak (0.55 nm). Therefore, in Equation (2), the surface area corresponds to the surface area of a cylinder with radius r_CNT + 0.55/2, co-centered with the CNT. The thickness of the cylindrical shells is accordingly fixed at 0.55 nm. For heat transfer between polymer shells, the surface area corresponds to the surface area of a cylinder (co-centered with the CNT) with a radius corresponding to the average radii of the layers. We have shown the thermal conductivity of PA-6,6 chains at the interface of CNT (17, 0), CNT (10, 0), and CNT (6, 0) as a function of distance from the CNT surface in Figure 4. Note that the values of λ⊥ at d = 0.275 nm correspond to the interfacial thermal conductivities. The results in Figure 4 show that the perpendicular thermal conductivities at distances larger than 2 nm converge to the corresponding bulk value (λ0 = 0.27 W/(m·K)). At shorter distances, the thermal conductivity decreases with decreasing distance to the CNT surface. This is due to the formation of organized polymer layers at the CNT interface (see Figure 2). In such organized polymer layers, heat conduction in the perpendicular direction mostly takes place via intermolecular collisions. The thermal conductivity falls off suddenly right at the polymer/CNT boundary. The thermal (Kapitza) resistance can be calculated from the temperature jump at the interface and the known flux, according to the relation R_λ = ΔT/J (Equation (4)). Our calculated interfacial thermal resistances at the interfaces of PA-6,6 with CNT (17, 0), CNT (10, 0), and CNT (6, 0) correspond to 3 × 10-8 m2 K/W, 3.9 × 10-8 m2 K/W, and 4.7 × 10-8 m2 K/W, respectively. This shows that the interfacial thermal resistance depends on the surface area (diameter) of the CNT; larger resistances are seen at the interface of smaller-diameter CNTs. This result agrees with the findings of Bui et al. [34] on the higher Kapitza resistance at the interface of CNTs (with a diameter of 0.8 nm) compared to graphene sheets at the same volume fraction. As a graphene sheet can be regarded as the infinite-diameter limit of a CNT, it is reasonable to accept that the Kapitza resistance of a CNT depends inversely on its diameter (surface area). Parallel Thermal Conductivities To calculate the thermal conductivities in the direction parallel to the CNT axis, we divided the simulation box along the CNT axis (the z direction) into 20 slabs and performed a velocity exchange between the first and the 11th slab. The energy transfer can either be restricted to the exchange of velocities between identical atoms of the polymer chains in the two slabs, or it can also include the C atoms of the CNT in the exchange process.
Previous reports [21,23] in the literature show that both exchange methods lead to nearly identical results for the parallel thermal conductivity of the polymer. Here, we did an unrestricted velocity exchange between the two slabs. The temperature profiles for polymer and CNT for heat transfer in the parallel direction are shown in Figure 5. The results show that, while a clear temperature difference between the hottest and coldest regions of the box is observed for the polymer, the temperature along the CNT is nearly constant. This is the result of the thermal insulation of the CNT from the polymer, as already discussed in terms of the large interfacial thermal resistance. Because the CNT has a much higher thermal conductivity than the polymer, the temperature gradient in the CNT is very small. This fact also clearly indicates why the two aforementioned methods of velocity exchange (inclusion/exclusion of the CNT in/from the exchange process) give the same results for the parallel thermal conductivity of the polymer. Our calculated parallel thermal conductivities for PA-6,6 at the interface of CNT (6, 0), CNT (10, 0), and CNT (17, 0) were 0.3 W/(m·K), 0.31 W/(m·K), and 0.33 W/(m·K), respectively. A comparison of the parallel thermal conductivities with the corresponding bulk value, 0.27 W/(m·K), shows that heat transfer in the parallel direction is facilitated at the interface. This can be explained in terms of chain orientation at the interface. Unlike the bulk PA-6,6 sample, in which the chains adopt random orientations, the chains at the interface of the CNT are preferentially aligned along the tube axis [24]. We quantified the magnitude of chain ordering in the interphase by means of the second Legendre polynomial of the monomers' end-to-end vectors (Equation (5)).
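Equation (5) presumably takes the standard orientational order-parameter form (an assumption consistent with the limiting values quoted in the text):

P_2(d) = \tfrac{1}{2}\left( 3\,\big\langle (\mathbf{u}_1 \cdot \mathbf{u}_2)^{2} \big\rangle_{d} - 1 \right)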
Here, P2(d) is the second Legendre polynomial, u1 is a unit vector parallel to the CNT axis, u2 is the monomer's end-to-end unit vector, and d is the monomer's center-of-mass distance from the CNT surface. Parallel, random, and perpendicular orientations of the monomers' end-to-end vectors with respect to the CNT axis correspond to P2(d) = 1, 0, and -0.5, respectively. Figure 6 shows that there is a strong perturbation of the chain conformations in the interphase; the chain segments adopt orientations parallel to the CNT axis. The effect is more pronounced for polymer in contact with CNTs of larger surface areas, which explains why the parallel thermal conductivity increases with increasing CNT surface area. In the present simulations we were not able to calculate local parallel thermal conductivities, i.e., parallel thermal conductivities as a function of distance from the CNT surface; however, one can conclude that the local parallel thermal conductivities are larger at closer distances to the CNT surface. This explains the increase in the parallel thermal conductivities of polymer nanocomposites with an increase in the volume fraction of CNT. In such extended chains, heat transfer occurs through the vibration of atoms along the backbone (parallel to the CNT axis). Effect of CNT Linkage We performed RNEMD simulations to calculate the thermal conductivities in the parallel and perpendicular directions for systems in which the CNTs are linked together via linkers. The linker has the same structure as the PA-6,6 chain shown in Figure 1a; only the terminal methyl and butyl groups of the PA-6,6 chains were removed, and each terminal carbonyl C atom was grafted to the surface of a CNT. During the RNEMD simulations, velocities were exchanged between the linked CNTs (see Figure 1c). Temperature profiles were measured in co-centered cylindrical shells around the CNTs. The radius of the outer boundary of the outermost cylindrical shell, co-centered with a CNT, extends to 1/16 (l_x2 + l_y2). The shells at identical distances from the axes of both hot CNTs are equivalent; the same is true for the shells around the cold CNTs. Because of this symmetry, the temperatures in equivalent cylindrical shells were averaged. Finally, two temperature-profile curves were plotted (one as a function of distance from the axis of the hot CNT and another as a function of distance from the axis of the cold CNT), and the thermal conductivities at identical distances were averaged. Compared to the case in which free PA-6,6 chains were in contact with a single CNT, a smaller temperature jump between the CNT and its closest polymer layer was seen. The magnitude of the temperature change at the CNT/polymer interface depends on the linkage fraction, defined as the ratio of the number of linked C atoms on the surface of the CNT to the total number of C atoms in the CNT. Smaller temperature jumps were observed at higher linkage fractions. In other words, the interfacial thermal resistance (Kapitza resistance) depends on the linkage fraction. Figure 7 shows the Kapitza resistance at the interface of CNT (17, 0) as a function of the linkage fraction. The interfacial thermal resistance decreases with increasing linkage fraction. In fact, the linkers carry energy from the hot tube to the polymer matrix and/or from the polymer matrix to the cold tube. The effect is more pronounced at the interface of larger-diameter CNTs.
We have also calculated the perpendicular thermal conductivities for PA-6,6 chains at the interface of the linked CNTs. Figure 8 shows the perpendicular thermal conductivity of PA-6,6, normalized by the thermal conductivity of the bulk sample, at the interface of CNT (17, 0) as a typical example. The thermal conductivity of the polymer at the interface of the CNT increases with increasing distance from the CNT surface. At distances of ≈ 2 nm, the perpendicular thermal conductivity converges to the corresponding bulk value. Increasing the linkage fraction increases the perpendicular thermal conductivities at the interface. In other words, the stretched linkers, which act as good conductors, facilitate heat conduction in the polymer. More pronounced effects are seen at distances closer to the CNT (17, 0) surface. We also performed RNEMD simulations in the direction parallel to the tube axis. Figure 5 shows the corresponding temperature profiles for polymer and CNT. Compared to the results for the case where a single CNT exists in the simulation box, a larger temperature change along the CNT is seen. This means that the conductivity of a functionalized CNT is lower than that of a pure CNT, in agreement with previous reports in the literature [22,35]. In fact, functionalization introduces defects in the CNT, which act as scattering centers for phonon propagation along the CNT [35]. Moreover, the stretched polymer chains linking the tubes together carry part of the heat in the perpendicular direction. Another noticeable point in the temperature profiles in Figure 5 is the lower slope (temperature gradient) for the chains acting as linkers compared to the free chains at the interface of the linked CNTs. The stretched polymer chains linking the CNTs together have a higher thermal conductivity than the free chains. Due to the strong anisotropy of their conformations, heat transfer in such stretched chains proceeds through vibrations along the chain backbone. Penetration of linkers from one slab into the neighboring slabs facilitates heat transfer in the parallel direction. We have also examined the magnitude of the anisotropy of heat conduction between the parallel and perpendicular directions in systems containing linked CNTs. The thermal-conductivity anisotropy is reduced with an increase in the linkage fraction. Heat Conduction in Polymer Nanocomposites The calculated CNT-polymer interfacial thermal resistances can be used as input for calculating the thermal conductivities of polymer nanocomposites. Due to the limitations of MD simulations, we simulated short CNTs at low volume fractions. However, it is known that the thermal conductivity of CNTs depends strongly on their length [29]. To be able to calculate the thermal conductivity of polymer nanocomposites, we therefore combined our calculated thermal resistances with Nan's effective medium theory (EMA) [13]. This model calculates the thermal conductivity for random orientations of micron-sized CNTs in the polymer matrix using an effective medium approximation.
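For reference, the standard Nan-type effective-medium expressions for randomly oriented cylindrical fillers, to which Equations (6)-(9) correspond, can be written as follows (a reconstruction in terms of the quantities defined below; the grouping into the individual equation numbers of the original is assumed):

\Lambda = \lambda_0\,\frac{3 + f\,(\beta_x + \beta_z)}{3 - f\,\beta_x}, \qquad
\beta_x = \frac{2\,(\lambda_x^{c} - \lambda_0)}{\lambda_x^{c} + \lambda_0}, \qquad
\beta_z = \frac{\lambda_z^{c}}{\lambda_0} - 1,

\lambda_x^{c} = \frac{\lambda_{\mathrm{CNT}}}{1 + 2R_\lambda\,\lambda_{\mathrm{CNT}}/D}, \qquad
\lambda_z^{c} = \frac{\lambda_{\mathrm{CNT}}}{1 + 2R_\lambda\,\lambda_{\mathrm{CNT}}/l}.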
Here, Λ is the thermal conductivity of the composite for random orientations of CNTs, λ0 is the thermal conductivity of pure PA-6,6, f is the volume fraction of CNT, and βx and βz are the parameters defined through Equations (7)-(9), in which Rλ is the interfacial thermal resistance (Kapitza resistance), λCNT is the thermal conductivity of the CNT, and D and l are the diameter and length of the CNT, respectively. The parameters obtained from our simulations and used to calculate the macroscopic thermal conductivities are reported in Table 2.
Table 2. Parameters used in the effective medium approximation (EMA) for the prediction of the thermal conductivities of the PA-6,6/CNT nanocomposites.
It is worth mentioning that the volume fractions of CNT in our samples, calculated as the ratio of the volume occupied by the CNT to the volume of the simulation box, are ≈ 0.004, ≈ 0.01, and ≈ 0.03 for the nanocomposites containing CNT (6, 0), CNT (10, 0), and CNT (17, 0), respectively. Here, the volume of the CNT is calculated from its diameter and the maximum packing fraction of ordered cylinders (≈ 0.9). As the CNT thermal conductivity is size dependent, we calculated the thermal conductivity of an infinitely long CNT, λ∞,CNT, by calculating the thermal conductivities of CNTs of different lengths and extrapolating the graph of 1/λCNT vs. 1/l to zero. The results are shown in Table 1. Using the tabulated parameters in Table 1, we calculated the macroscopic thermal conductivity, Λ, for polymer nanocomposites with random orientations of CNTs. Figure 9 shows the thermal conductivities for experimentally relevant (micron-sized) CNTs in the polymer. We also compared the results of our EMA modeling with experimental data [9,36-39] in Figure 9. The EMA predictions for micron-sized CNTs are within the range of the experimental data [9,36-41]. It is worth mentioning that the results presented in Figure 9 are for a CNT/polymer nanocomposite with random orientations of CNTs.
Figure 9 (experimental data shown for comparison): [36], circles; Yu et al. [9], squares; Patti et al. [37], diamonds; Liu et al. [38], upward triangles; Deng et al. [39], downward triangles; Hong and Tai [40], crossed squares; and Song and Youn [41], crossed circles.
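A minimal numerical sketch of this effective-medium step is given below (Python; the parameter values are illustrative placeholders of the same order as those discussed in the text, not the tabulated Table 1/2 values).

def emt_random(f, lam_m, lam_cnt, R_k, D, L):
    """Nan-type effective conductivity for volume fraction f of randomly oriented
    nanotubes (diameter D, length L) with interfacial resistance R_k."""
    lam_x = lam_cnt / (1.0 + 2.0 * R_k * lam_cnt / D)   # transverse equivalent conductivity
    lam_z = lam_cnt / (1.0 + 2.0 * R_k * lam_cnt / L)   # longitudinal equivalent conductivity
    beta_x = 2.0 * (lam_x - lam_m) / (lam_x + lam_m)
    beta_z = lam_z / lam_m - 1.0
    return lam_m * (3.0 + f * (beta_x + beta_z)) / (3.0 - f * beta_x)

lam_m = 0.27              # W/(m K), bulk PA-6,6 (from the text)
lam_cnt = 3000.0          # W/(m K), illustrative CNT conductivity
R_k = 3.0e-8              # m^2 K/W, of the order reported for CNT (17, 0)
D, L = 1.333e-9, 1.0e-6   # m: CNT (17, 0) diameter and a micron-sized length

for f in (0.01, 0.03, 0.07):
    print(f"f = {f:.2f}: Lambda ~ {emt_random(f, lam_m, lam_cnt, R_k, D, L):.2f} W/(m K)")

With these placeholder inputs the composite conductivity rises from the neat-polymer value of 0.27 W/(m K) to roughly 0.3-0.6 W/(m K) over this loading range, i.e., the moderate enhancement discussed in the Introduction.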
Figure 10 shows the macroscopic thermal conductivities for samples of polymer-linked CNTs at linkage fractions ranging from 0 to ≈ 0.06; the results are compared with the experimental report of Li et al. [42] on the thermal conductivity of functionalized-CNT polymer nanocomposites. A comparison of the results for nanocomposites with linked CNTs with those in which free chains are in contact with randomly oriented CNTs shows that the linkage between CNTs strongly enhances the thermal conductivity of the nanocomposite with increasing CNT volume fraction. It is worth mentioning that in this case we have not taken into account the decrease in the thermal conductivity of the CNT caused by the defects introduced when chains are linked to its surface. For a sample of parallel-aligned CNTs in the polymer, the thermal conductivities of the nanocomposite in the parallel (Λ∥, Equation (11)) and perpendicular (Λ⊥) directions can be expressed within the same effective-medium framework (Equations (10)-(12)). The parallel and perpendicular thermal conductivities of the composites as a function of the volume fraction of CNT are shown in Figure 11. A decrease in the perpendicular thermal conductivity with an increase in the volume fraction of CNT was observed. This is understandable because, with an increase in the volume fraction of CNT, heat must pass through more CNT/polymer interfaces of high Kapitza resistance.
In a nanocomposite containing linked CNTs, the perpendicular thermal conductivity vs. CNT volume fraction curve has a less negative slope than that of a nanocomposite containing non-linked CNTs. This is due to the fact that the linkage of CNTs through polymer chains lowers the Kapitza resistance at the CNT/polymer boundary. Rationally, the perpendicular thermal conductivity of nanocomposite depends on the diameter of the CNT. Because of the larger Kapitza resistance of a smaller-diameter CNT; the thermal conductivity of a nanocomposite filled with a smaller-diameter CNT is lower that filled with larger-diameter CNTs. Contrary to perpendicular thermal conductivity, the thermal conductivity of the composite in the parallel direction grows abruptly with increasing the volume fraction of CNT. As Equation (11) shows, in this case the higher thermal conductivity of CNT compared to that of polymer matrix causes a sudden change in Λ with a small increase in the amount of CNT in the matrix. Our results on the increase of thermal conductivity in the parallel direction supports the experimental findings of Marconnet et al. [43]. According to their experimental data, at a CNT volume fraction of 0.15, the thermal conductivity in the parallel direction increases by a factor of 15. Our results show a much larger increase in parallel thermal conductivity than that experiment. However, it is worth mentioning that our results refer to samples of perfectly aligned CNTs in polymer, while in an experiment it is not possible to prepare such samples. A combination of both components of thermal conductivity (parallel and perpendicular) leads to an overall increase in the thermal conductivity of the nanocomposite with an increase in the CNT volume fraction (see Figure 9). thermal conductivity of CNT compared to that of polymer matrix causes a sudden change in Λ ∥ with a small increase in the amount of CNT in the matrix. Our results on the increase of thermal conductivity in the parallel direction supports the experimental findings of Marconnet et al. [43]. According to their experimental data, at a CNT volume fraction of 0.15, the thermal conductivity in the parallel direction increases by a factor of 15. Our results show a much larger increase in parallel thermal conductivity than that experiment. However, it is worth mentioning that our results refer to samples of perfectly aligned CNTs in polymer, while in an experiment it is not possible to prepare such samples. A combination of both components of thermal conductivity (parallel and perpendicular) leads to an overall increase in the thermal conductivity of the nanocomposite with an increase in the CNT volume fraction (see Figure 9). Conclusions Although the thermal conductivity of CNTs is very high, their inclusion in polymers does not lead to an enhancement in thermal conductivity, which might be expected based on the linear law of adding the thermal conductivities of both components. The reason is the existence of a large interfacial thermal resistance (Kapitza resistance) at the CNT-polymer boundary. We have shown that the Kapitza resistance depends on the diameter of CNT; it reduces with an increase in the diameter of the CNT. The thermal conductivity of PA-6,6 chains in the perpendicular (to the CNT axis) direction depends on the surface proximity. It is lower at distances closer to the CNT surface and converges to the thermal conductivity of neat polymer at distances as large as 2 nm from the CNT surface. 
Figure 11. Dependence of the perpendicular thermal conductivity, Λ⊥, of the nanocomposites on the volume fraction of CNTs. The full curves belong to non-linked CNTs, and the dashed curves, from bottom to top, represent the perpendicular thermal conductivities of a nanocomposite containing linked (17, 0) CNTs at linkage fractions of 0.021, 0.042, and 0.063. The inset shows the dependence of the parallel thermal conductivity, Λ∥, of the nanocomposites on the volume fraction of CNTs.

Conclusions

Although the thermal conductivity of CNTs is very high, their inclusion in polymers does not lead to the enhancement in thermal conductivity that might be expected from a linear law of mixtures of the thermal conductivities of the two components. The reason is the existence of a large interfacial thermal resistance (Kapitza resistance) at the CNT-polymer boundary. We have shown that the Kapitza resistance depends on the diameter of the CNT; it is reduced with an increase in the diameter of the CNT. The thermal conductivity of PA-6,6 chains in the direction perpendicular to the CNT axis depends on the proximity to the surface: it is lower at distances closer to the CNT surface and converges to the thermal conductivity of the neat polymer at distances as large as 2 nm from the CNT surface. This is due to the formation of organized polymer layers at the CNT interface, in which heat transfer in the perpendicular direction occurs mostly through molecular collisions. Owing to the large Kapitza resistance and the lower perpendicular thermal conductivity of the polymer at the interface, the thermal conductivity of nanocomposites containing aligned CNTs decreases in the perpendicular direction as the CNT volume fraction increases: raising the volume fraction of CNTs in such nanocomposites introduces more CNT boundaries and, hence, larger interfacial thermal resistances that hinder the heat flow in the perpendicular direction. On the other hand, chain orientation at the interface facilitates heat transfer in the parallel (along the CNT) direction. The chains at the CNT interface preferentially orient parallel to the CNT axis, and heat transfer in such extended chains occurs mainly through vibrations along the chain backbone, which is faster than the molecular-collision mechanism. In other words, in nanocomposite samples containing aligned CNTs, heat transfer in the parallel direction increases with increasing CNT volume fraction. As a combination of both factors, the inclusion of CNTs in polymers improves the thermal conductivity of the polymer. Modifying the interface by linking the CNTs together with linkers (polymer chains) substantially improves the heat transfer in polymer-CNT composites. The linkers substantially reduce the Kapitza resistance at the CNT-polymer boundary, facilitating heat transfer in the perpendicular direction. The magnitude of the decrease in the Kapitza resistance depends on the linkage fraction and the CNT diameter; it is reduced more at higher linkage fractions and with increasing CNT diameter.
Both factors, increasing the linkage fraction and increasing the CNT diameter, also facilitate heat transfer in the parallel direction. Therefore, compared with the neat polymer, heat conduction in samples of polymer-linked CNTs improves considerably (depending on the volume fraction of CNT and the linkage fraction). To establish a connection with experimental measurements, we employed microscopic parameters obtained from the simulations to calculate the macroscopic thermal conductivities of the polymer nanocomposites within the framework of the effective medium approximation [13]. Our calculations are in agreement with experimental measurements of the thermal conductivity of nanocomposites containing randomly oriented, micron-sized CNTs in polymers. The inclusion of linkage between CNTs further enhances the thermal conductivity of such nanocomposites. Moreover, it was shown that the thermal conductivity of nanocomposites containing aligned CNTs is reduced in the perpendicular direction and substantially increased in the parallel direction with increasing CNT volume fraction. It is worth mentioning that, although the simulation and modeling results in this work were obtained for CNT/PA-6,6 composites, the conclusions are valid for all CNT/polymer nanocomposites. This is because, compared with CNTs, all polymers have a very low thermal conductivity, and the large polymer-CNT interfacial thermal resistance (a consequence of the large surface-to-volume ratio) mainly controls the rate of heat transfer in all CNT/polymer nanocomposites.
High-Efficiency Super-Resolution FMCW Radar Algorithm Based on FFT Estimation This paper proposes a high-efficiency super-resolution frequency-modulated continuous-wave (FMCW) radar algorithm based on estimation by the fast Fourier transform (FFT). In FMCW radar systems, the maximum number of samples is generally determined by the maximum detectable distance. However, targets are often closer than the maximum detectable distance. In this case, even if the number of samples is reduced, the ranges of targets can be estimated without degrading the performance. Based on this property, the proposed algorithm adaptively selects the number of samples used as input to the super-resolution algorithm, depending on the ranges of targets coarsely estimated using the FFT. That is, the proposed algorithm employs the reduced number of samples implied by the FFT-estimated distance as input to the super-resolution algorithm, instead of the maximum number of samples set by the maximum detectable distance. By doing so, the proposed algorithm achieves performance similar to that of the conventional multiple signal classification (MUSIC) algorithm, a representative super-resolution algorithm, without performance degradation. Simulation results demonstrate the feasibility and performance improvement provided by the proposed algorithm; that is, the proposed algorithm achieves an average complexity reduction of 88% compared with the conventional MUSIC algorithm while achieving similar performance. Moreover, the improvement provided by the proposed algorithm was verified in practical conditions, as evidenced by our experimental results. Introduction Radar sensors are a subject of research in various fields, such as defense, space, and vehicles, given their robustness against several conditions, including wind, rain, fog, light, humidity, and temperature [1][2][3][4][5][6]. Ultra-wideband (UWB) radar systems, with their high resolution and high precision, have been in the spotlight as representative radar systems [3]. UWB radar systems employ a very narrow pulse width and thus require a very wide bandwidth [7,8], which is the reason for their high complexity. Hence, UWB radar systems are mainly used in fields that are less sensitive to cost, such as defense and space [9]. As the application range of radar gradually expands, radar systems with low cost and low complexity have been placed in the research spotlight. The continuous-wave (CW) radar is a representative low-complexity radar system [10,11]. CW radar systems use only the difference between the carrier frequencies at the transmitter and receiver to estimate the velocity of the target. Since it is only necessary to sample the sine-wave signal corresponding to the carrier-frequency difference, the signal is converted into a digital signal with little complexity burden. However, CW radar has the limitation that it cannot be used for various purposes because it cannot measure the distance to the target. As an alternative, studies on frequency-modulated continuous-wave (FMCW) radar systems have been reported [12][13][14][15][16][17][18]. FMCW radar systems are capable of estimating the range, Doppler, and angle of targets despite their low-cost, low-complexity hardware, as their signal processing is performed in a low frequency band after mixing. As the applications of radar sensors increase, FMCW radar technology is considered one of the most promising technologies.
For example, FMCW radar systems have been applied to surveillance applications [19][20][21]. In [19,20], the design and testing of an FMCW-radar-based sensing platform for transport systems was presented, and a better trade-off between power consumption and detectable range was provided. In [21], the authors proposed a solution for rapidly detecting moving targets by utilizing the subtraction of two FMCW chirp signals. In [22][23][24][25], it was noted that FMCW radar is one of the most promising techniques for non-contact monitoring of vital signs, such as heart and respiration rates. In [23], human detection was demonstrated by measuring the respiration pattern using FMCW radar and a deep learning algorithm. In [25], the authors presented a vital-sign monitoring system using a 120 GHz FMCW radar. In [26], a low-complexity FMCW radar algorithm was proposed by reducing the dimension of the 2D data. Meanwhile, FMCW radar systems have been utilized for vehicles [27][28][29]. In [27], a randomized switched-antenna-array FMCW radar was introduced for automotive applications, attempting to solve the delay-space coupling problem of traditional switched-antenna-array systems. In [28], a method was proposed to simultaneously detect and classify objects by using a deep learning model, specifically the you-only-look-once (YOLO) model, with pre-processed automotive FMCW radar signals. In FMCW radar systems, meanwhile, fast Fourier transform (FFT)-based estimators are widely employed [19]. In [16], a novel direction-of-arrival (DOA) estimation algorithm was proposed for FMCW radar systems. This algorithm virtually extends the number of arrays using simple multiplications. The FFT is employed in this algorithm, and thus its computational complexity is very low compared with other high-resolution algorithms. However, it does not provide a large resolution improvement, although the resolution it provides is higher than that of conventional FFT-based estimation algorithms. In [18], an algorithm employing only regions of interest within the total samples was proposed, in order to estimate the distance and velocity of targets while reducing redundant complexity. However, an improvement in resolution was not expected, as this algorithm was also based on the FFT. In other words, it is difficult for FFT-based estimators to distinguish between multiple adjacent targets. To overcome this disadvantage of FFT-based estimators in FMCW radar systems, several algorithms have been proposed. In [30][31][32][33][34][35][36][37][38][39], various super-resolution algorithms have been proposed, such as the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance technique (ESPRIT) algorithms. These algorithms employ eigenvalue decomposition (EVD) or singular-value decomposition (SVD) of the correlation matrix obtained from the received signal, in order to distinguish the signal and noise subspaces. The parameters corresponding to the desired signals are accurately estimated using the fact that the signal subspace and the noise subspace are orthogonal to each other. However, their computational complexity drastically increases as the number of input samples increases. Thus, these algorithms may not be suitable when the number of input samples is large. Oh et al.
[35] employed the inverse of the covariance matrix, instead of the EVD or SVD, to reduce the complexity of super-resolution algorithms. However, under a low signal-to-noise ratio (SNR), the performance of this algorithm was significantly degraded [36]. In [37], a low-complexity MUSIC algorithm for DOA estimation was proposed. This algorithm exploits the trade-off between the field of view (FOV) and the angular resolution, and thereby attempts to reduce the computational complexity of the MUSIC algorithm. In that case, however, DOA estimation is considered; thus, the number of inputs to the MUSIC algorithm is not large, as the maximum number of samples equals the number of array elements. To reduce the computational complexity while exploiting the high-resolution features of super-resolution-based estimators, the algorithm proposed in this paper reduces the number of samples used as input to the MUSIC algorithm, based on the beat frequency estimated by the FFT. In other words, in the proposed algorithm, the number of samples is set based on the distance estimated by the FFT, instead of the maximum detectable distance (as in the conventional MUSIC algorithm). Based on this reduced number of samples, the overall complexity of the proposed algorithm is decreased, by using only some of the samples of a given beat signal as the input to the MUSIC algorithm. Compared with [37], in the proposed algorithm the number of samples used as input to the MUSIC algorithm is determined in various ways according to the estimated distance, rather than by a single threshold condition. To this end, we mathematically show how many samples are required for the same performance, according to the ratio of the distance estimated by the FFT to the maximum detectable distance. Our simulation results confirm the improvement in performance produced by the proposed algorithm; that is, the proposed algorithm can achieve performance similar to the conventional MUSIC algorithm, despite its considerably lower complexity. Moreover, our experimental results verify that the proposed algorithm can operate well in a real environment. The remainder of this paper is structured as follows: in Section 2, we describe the system model considered in this study. Then, the proposed low-complexity MUSIC algorithm is described in Section 3. In Section 4, through simulations, the performance of the proposed algorithm is compared to that of the conventional MUSIC algorithm and their computational complexities are evaluated. In Section 5, the experimental setup is introduced and the experimental results are provided, which confirm the performance of the proposed algorithm in practical environments. Finally, we conclude this paper in Section 6. System Model for FMCW Radar Systems The system model of an FMCW radar system consisting of one transmitting (TX) antenna and one receiving (RX) antenna is considered in this section. The TX signal at the ith frame of the FMCW radar is denoted by x^(i)(t), which is transmitted from the TX antenna during N_F frames, as shown in Figure 1. The TX FMCW signal, x^(i)(t), is composed of a total of L ramp signals x_0(t) [17,18], where x_0(t) is a linear ramp (chirp) signal, f_c is the center frequency, and µ is the sweep rate of the ramp signal. Let B and T denote the system bandwidth and the time duration of the ramp signal x_0(t), respectively. Hence, µ is calculated as µ = B/T.
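As a small illustration of the ramp parameters just introduced, the Python sketch below builds the instantaneous frequency of one ramp, f_c + µt for 0 ≤ t < T, and checks that the sweep spans the bandwidth B. The numerical values are assumed, and treating f_c as the start-of-ramp frequency is a simplification; the paper's exact waveform expression is not reproduced here.

```python
import numpy as np

# Assumed ramp parameters (illustrative only; not the paper's exact values)
f_c = 24e9        # center frequency, Hz (treated here as the ramp start)
B   = 768e6       # sweep bandwidth, Hz
T   = 1e-3        # ramp duration, s
mu  = B / T       # sweep rate mu = B/T, Hz/s

t = np.linspace(0.0, T, 1001, endpoint=False)
f_inst = f_c + mu * t            # instantaneous frequency of one ramp x0(t)

print("sweep rate mu =", mu, "Hz/s")
print("swept band    =", (f_inst[-1] - f_inst[0]) / 1e6, "MHz (close to B)")
```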
The TX signal x^(i)(t) is reflected by the M targets and is then received by the RX antenna. Let r_l^(i)(t) denote the RX signal corresponding to the lth ramp signal. By mixing the RX signal with the TX signal, the beat signal y_l^(i)(t) is obtained [37]. After analog-to-digital conversion (ADC), the sampled beat signal y_l^(i)[n] is obtained. Here, the total number of samples is denoted by N_s and is given by N_s = ⌊f_s T⌋ (6), where f_s is the sampling frequency (i.e., f_s = 1/t_s) and ⌊·⌋ is the floor operator. From Equation (6), it can be seen that the total number of samples N_s is determined by the sampling frequency f_s and by T. In addition, f_s is determined by the maximum detectable distance d_max. Using the relation between f_s and d_max [40,41], the minimum sampling frequency f_s,min is obtained as in Equation (7), where c is the speed of the electromagnetic wave; the corresponding minimum number of samples, N_s,min, is given by Equation (8). For simplicity, the sampled beat signals are considered for only one frame; thus, the frame index is dropped and the quantities are written as f_b,m, τ_m, and η_m, respectively. By redefining the coefficient term a_m,l (i.e., a_m,l = ã_m,l), the sampled beat signal can be expressed simply as in [37]. To denote the variables effectively, the sampled beat signal is expressed in vector form as [37] y_l = H a_l + w, where (·)^T is the transpose operator, H ∈ C^(N_s×M) and a_l ∈ C^(M×1) are the matrix and vector corresponding to the beat signals η_m[n] and the amplitudes a_m,l, respectively, C^(N×L) and R^(N×L) denote N × L complex and real matrices, respectively, and w ∈ C^(N_s×1) is an AWGN vector. The beat-signal matrix H is composed of M beat-signal column vectors, and the lth amplitude vector a_l and the amplitude matrix A are defined accordingly. As the range of a target is estimated from the time delay τ_m, the range can be obtained in the FMCW radar system by estimating the beat frequency of η_m[n], which is denoted by f_b,m. Let d̂_m denote the estimated range, which is calculated from the time delay as d̂_m = c τ_m / 2. (14) By substituting τ_m = f_b,m/µ into Equation (14), d̂_m can be obtained by estimating f_b,m, such that d̂_m = c f_b,m / (2µ). (15) The range resolution, d_Δ, is inversely proportional to the TX waveform bandwidth B, as follows [42]: d_Δ = c/(2B). (16) By substituting Equation (8) into Equation (16), the range resolution d_Δ can also be expressed in terms of the number of samples (Equation (17)). In pulsed radar systems, which are among the representative radar systems, the bandwidth is the factor that determines the range resolution, so increasing T or N_s cannot improve the range resolution. From Equation (17), however, in order to improve the resolution performance in FMCW radar systems (i.e., to decrease d_Δ), an increase in the duration T of the FMCW TX signal or an increase in the number of samples N_s is required for a given f_s [40,41]. This implies that the computational complexity inevitably increases when improving the resolution performance. Proposed Super-Resolution Algorithm Using FFT-Estimated Ranges This section describes the proposed low-complexity super-resolution FMCW radar algorithm. Figure 2 shows a block diagram of the proposed algorithm. The proposed algorithm consists of two steps: first, simple clutter is rejected and the ranges are coarsely estimated using the FFT; second, the computational complexity of the super-resolution algorithm is reduced using the estimation results from the first step, thus reducing the number of samples used as input to the super-resolution algorithm. For this study, we employed the MUSIC algorithm, which is a representative super-resolution algorithm, for comparative purposes.
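To make the relations above concrete (beat frequency proportional to range, resolution set by the bandwidth), the following Python sketch simulates a sampled beat signal for two point targets and reads their ranges off the FFT peaks. The sum-of-complex-exponentials signal model follows the generic form implied by the text; the target ranges, amplitudes, noise level, and system parameters are assumed values, not those of the paper.

```python
import numpy as np

# Assumed FMCW parameters (illustrative only; not the paper's exact values).
c   = 3e8                      # propagation speed, m/s
B   = 768e6                    # sweep bandwidth, Hz
T   = 1e-3                     # ramp duration, s
mu  = B / T                    # sweep rate mu = B/T, Hz/s
f_s = 2e6                      # beat-signal sampling rate, Hz
N_s = int(np.floor(f_s * T))   # number of samples per ramp, Equation (6)

d_true = np.array([12.0, 30.0])        # assumed target ranges, m
f_beat = 2.0 * mu * d_true / c         # beat frequencies implied by d = c*f_b/(2*mu)

t   = np.arange(N_s) / f_s
rng = np.random.default_rng(0)
y   = sum(np.exp(2j * np.pi * fb * t) for fb in f_beat)                 # beat tones
y  += 0.1 * (rng.standard_normal(N_s) + 1j * rng.standard_normal(N_s))  # AWGN

# FFT range profile: each frequency bin maps to a range via d = c*f/(2*mu).
N_fft  = 2048
spec   = np.abs(np.fft.fft(y, N_fft))[: N_fft // 2]
d_axis = c * (np.arange(N_fft // 2) * f_s / N_fft) / (2.0 * mu)

def two_peaks(mag, guard=8):
    """Pick the two strongest well-separated bins (very simple peak search)."""
    m = mag.copy()
    i1 = int(np.argmax(m)); m[max(0, i1 - guard): i1 + guard + 1] = 0.0
    return sorted((i1, int(np.argmax(m))))

print("estimated ranges ~", [round(d_axis[i], 2) for i in two_peaks(spec)], "m")
print("range resolution c/(2B) =", round(c / (2.0 * B), 3), "m")
```

The last line prints the c/(2B) limit of Equation (16); targets separated by less than this cannot be distinguished by the FFT estimator, which motivates the MUSIC step described next.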
The details of each step are provided in the following subsections. Simple Clutter Rejection and Coarse Range Estimation Using FFT As mentioned above, the proposed algorithm performs simple clutter rejection and coarse range estimation using the FFT. To achieve simple clutter rejection, the proposed algorithm takes the difference between y_1st and y_2nd, which are the partial matrices composed of y_l for l = 0, 1, ..., L − 2 and l = 1, 2, ..., L − 1, respectively. Let y_Δ denote the difference between the two partial matrices; that is, y_Δ = y_1st − y_2nd. In general, clutter that does not move generates no Doppler, which means that its contribution to y_l is the same regardless of l; ignoring the noise component, only the signals corresponding to targets whose velocity is not zero therefore remain in y_Δ. Then, the FFT operation is performed on the sampled beat signal y_Δ in order to estimate the ranges of the targets. For convenience of notation, the lth ramp signal of y_Δ is denoted by y_Δ,l[n]. The kth FFT output of y_Δ,l[n] is represented by Y_l[k], where N is the size of the FFT, which is a power of 2. The FFT output can also be expressed in vector form as Y_l = D y_Δ,l, where D is an N × N matrix for the discrete Fourier transform operation and the length-N_s beat signal is zero-padded to length N. By peak detection of the magnitude of the FFT outputs, the estimated beat frequency f̂_b,m can be obtained. By substituting f̂_b,m into Equation (15), the estimated range d̂_FFT,m is obtained. Fine Range Estimation Using MUSIC As shown in Figure 2, after coarse range estimation using the FFT, fine range estimation is performed by the MUSIC algorithm, which can achieve a higher resolution than the FFT. The MUSIC algorithm achieves significantly higher resolution performance than the FFT by using the orthogonality between the noise and signal subspaces. Let y ∈ C^(N_s×L) denote the matrix form of the beat signal (i.e., y = [y_1, y_2, ..., y_L]), and let R denote the correlation matrix of y [18], where (·)^H is the Hermitian operator and R_i,j is the element at the ith row and jth column of R. By performing SVD on R, the signal and noise subspaces can be separated as [18] R = U Σ U^H, where U_M is the subspace of the signal (i.e., U_M = [u_1, u_2, ..., u_M]), U_−M corresponds to the subspace of the AWGN component (i.e., U_−M = [u_(M+1), u_(M+2), ..., u_(N_s)]), and Σ is a diagonal matrix based on the N_s eigenvalues (i.e., Σ = diag(λ_1, λ_2, ..., λ_(N_s)), where λ_p is the pth eigenvalue of R and diag(·) is the diagonal matrix operator). The ith eigenvalue is λ_i = ρ_i + σ_n² for the signal components and σ_n² otherwise, where ρ_i corresponds to the ith eigenvalue of the considered signal part and σ_n² corresponds to the noise variance. We employ the orthogonality between the steering vector h(f_b,m) and the noise subspace U_−M, as follows: h(f_b,m)^H U_−M ≈ 0. (23) Therefore, using Equation (23), the pseudo-spectrum of the MUSIC algorithm, P_MUSIC, is calculated as P_MUSIC(f_b) = 1 / (h(f_b)^H U_−M U_−M^H h(f_b)). (24) By using Equation (15) and the beat frequency estimated with high resolution from Equation (24), close targets that could not be distinguished by the FFT-based estimation can be successfully distinguished. In this procedure, the proposed algorithm performs the MUSIC algorithm on only a part of y_l, instead of on all samples of y_l. As shown in Equation (7), the sampling frequency f_s is determined by d_max.
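The Python sketch below walks through this two-step flow on synthetic data: ramp-to-ramp differencing for simple clutter rejection, an FFT for the coarse range, and a MUSIC pseudo-spectrum in the spirit of Equation (24) for the fine estimate. It is a generic MUSIC implementation written from the standard definitions rather than the authors' code; the scene (two closely spaced moving targets plus one static clutter reflection), the Doppler shifts, the noise level, and all other values are assumptions.

```python
import numpy as np

# Assumed parameters (illustrative only)
c, B, T = 3e8, 768e6, 1e-3
mu, f_s = B / T, 2e6
N_s, L, M = 256, 32, 2                   # samples per ramp (shortened), ramps, targets
d_targets = (12.0, 12.6)                 # two closely spaced targets, m
d_clutter = 25.0                         # one static clutter reflection, m
f_dop = (200.0, -150.0)                  # assumed per-target Doppler shifts, Hz

rng = np.random.default_rng(1)
t_fast = np.arange(N_s) / f_s

def steer(d):
    """Beat-frequency steering vector h(f_b) for a target at range d."""
    return np.exp(2j * np.pi * (2.0 * mu * d / c) * t_fast)

# Build L ramps: moving targets pick up a ramp-to-ramp Doppler phase, clutter does not.
Y = np.zeros((N_s, L), dtype=complex)
for l in range(L):
    Y[:, l] = sum(np.exp(2j * np.pi * fd * l * T) * steer(d)
                  for d, fd in zip(d_targets, f_dop))
    Y[:, l] += steer(d_clutter)                                   # static clutter
    Y[:, l] += 0.02 * (rng.standard_normal(N_s) + 1j * rng.standard_normal(N_s))

Y_d = Y[:, :-1] - Y[:, 1:]            # simple clutter rejection by ramp differencing

# Coarse range from the FFT of one clutter-rejected ramp
spec = np.abs(np.fft.fft(Y_d[:, 0], 1024))[:512]
d_axis = c * (np.arange(512) * f_s / 1024) / (2.0 * mu)
d_coarse = d_axis[int(np.argmax(spec))]
print("coarse FFT range ~", round(d_coarse, 2), "m (the two targets merge)")

# MUSIC pseudo-spectrum on the clutter-rejected snapshots
R = Y_d @ Y_d.conj().T / Y_d.shape[1]             # sample correlation matrix
w, V = np.linalg.eigh(R)                          # eigenvalues in ascending order
G = V[:, : N_s - M] @ V[:, : N_s - M].conj().T    # noise-subspace projector
d_scan = np.arange(d_coarse - 1.5, d_coarse + 1.5, 0.01)
P = np.array([1.0 / np.real(np.vdot(steer(d), G @ steer(d))) for d in d_scan])

i1 = int(np.argmax(P)); P2 = P.copy(); P2[max(0, i1 - 20): i1 + 21] = 0.0
i2 = int(np.argmax(P2))
print("MUSIC ranges ~", sorted(round(d_scan[i], 2) for i in (i1, i2)), "m")
```

Giving the two targets different assumed Doppler shifts keeps their contributions uncorrelated across ramps, so the sample correlation matrix exposes a rank-M signal subspace; fully coherent returns would need additional decorrelation steps that are not addressed here.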
However, as this is based on the worst case (i.e., d_max), f_s can be reduced if the target is not at d_max. The proposed algorithm employs a reduced sampling frequency f'_s based on d̂_FFT,m instead of d_max; the reduced sampling frequency is calculated as f'_s = (d̂_FFT,m / d_max) f_s. As d̂_FFT,m ≤ d_max in most cases, f'_s is smaller than f_s. In Equation (17), by setting the ratio of d̂_FFT,m to f'_s equal to the ratio of d_max to f_s, the proposed algorithm achieves the same range resolution d_Δ as the case based on d_max with f_s. As can be observed from the power spectral density (PSD) result using the FFT in Figure 4, the two targets were estimated as one target. Compared with the FFT, the conventional MUSIC algorithm and the proposed MUSIC algorithm could both estimate the two adjacent targets. In Figure 4b, the proposed MUSIC algorithm achieved estimation performance similar to the conventional MUSIC algorithm, despite using a reduced number of samples. Simulation Results Here, we discuss the results of our simulations, in order to verify the improvement in performance provided by the proposed super-resolution algorithm. For the simulations, the parameter f_0 and the maximum distance d_max were set to 24 GHz and 50 m, respectively. The number of targets was set to 2 (i.e., M = 2), and the ranges of the two targets, d_1 and d_2, were selected to be independent and uniformly distributed between 1 m and d_max. For the initial estimate, we performed a 1024-point FFT. To generate various sample sizes, the bandwidth B was set to 1.54 GHz, 768 MHz, and 384 MHz (leading to N_s = [1024, 512, 256]). To calculate the RMSE, we performed 10^4 simulation runs. As a measure of the performance difference between the conventional high-complexity algorithm and the proposed low-complexity algorithm, we calculated the root mean square error (RMSE) of the range estimates. Figure 5 shows the RMSE of the range estimations by the conventional and proposed MUSIC algorithms. In the low-SNR region (i.e., SNR = 0 dB), the RMSE of the proposed algorithm was about 4.5% higher than that of the conventional MUSIC algorithm. However, the RMSE results of the two algorithms were almost the same at higher SNR values, despite the significantly lower computational complexity of the proposed MUSIC algorithm. Complexity Comparison In this section, the computational complexities of the proposed and conventional MUSIC algorithms are analyzed and compared. To measure the complexity of each algorithm, we compared the required number of multiplications for the generation of the noise subspace and the SVD operation [43]. Let C_conventional and C_proposed denote the required number of multiplications in the conventional MUSIC algorithm and the proposed MUSIC algorithm, respectively. The complexity of the conventional algorithm, C_conventional, is evaluated with the full number of samples N_s. For the proposed MUSIC algorithm, the number of samples decreases with the initially estimated range d̂; however, for the initial range estimation, the FFT operation is additionally required. Hence, C_proposed accounts for both contributions, where N'_s is the reduced number of samples (i.e., N'_s = (d̂/d_max) × N_s). Figure 6 shows C_conventional and C_proposed with respect to the initial estimate of the range. In the worst case (i.e., d̂ = d_max), the complexity of the two algorithms was almost the same. However, as d̂ decreased relative to d_max, the complexity drastically decreased.
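To relate the sample reduction to the complexity figures reported next, the sketch below computes the reduced sample count N'_s = (d̂/d_max) N_s and a complexity ratio under the assumption that the dominant MUSIC cost grows with the cube of the input size (the SVD of the correlation matrix), plus an N log N term for the coarse FFT. This cost model and the numerical values are assumptions; the paper's exact operation counts C_conventional and C_proposed are not reproduced here.

```python
import numpy as np

d_max = 50.0      # maximum detectable distance, m
N_s   = 1024      # samples dictated by d_max (assumed)

def reduced_samples(d_hat):
    """N_s' = (d_hat / d_max) * N_s, as used by the proposed algorithm."""
    return max(1, int(np.ceil(d_hat / d_max * N_s)))

def complexity_reduction(d_hat, n_fft=1024):
    """Rough reduction assuming an O(N^3) SVD-dominated MUSIC cost plus an
    N log N FFT for the coarse stage (the cost model is an assumption)."""
    conventional = N_s ** 3
    proposed = reduced_samples(d_hat) ** 3 + n_fft * np.log2(n_fft)
    return 1.0 - proposed / conventional

for d_hat in (10.0, 20.0, 25.0, 50.0):
    print(f"d_hat = {d_hat:4.1f} m -> N_s' = {reduced_samples(d_hat):5d}, "
          f"reduction ~ {100 * complexity_reduction(d_hat):5.1f} %")
```

Under this crude model the reductions come out near 99.2%, 93.6%, and 87.5% for d̂ = 10, 20, and 25 m, close to the figures quoted below, which suggests that the cubic SVD term dominates the operation count.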
In the case d̂ = 10 m, the proposed algorithm achieved a 99.17% complexity reduction compared with the conventional algorithm. In addition, when d̂ = 20 m, the proposed algorithm achieved a 93.33% complexity reduction compared with the conventional algorithm. In the general case, as d̂ is smaller than d_max, the complexity of the proposed algorithm is expected to be significantly lower than that of the existing MUSIC algorithm. Assuming that the target distance is uniformly distributed between 1 m and d_max = 50 m, the average range is d_max/2; under these conditions, the complexity of the proposed algorithm was reduced by about 88% compared with the conventional algorithm. Experiments In this section, we describe the experiments we conducted using a real FMCW radar system, in order to verify the performance of the proposed MUSIC algorithm in a practical environment. First, we introduce the modules and equipment used in the experiment and their specifications; then, the measurement results are provided and analyzed. Figure 7a,b show photos of the actual structure of the front-end module (FEM). As shown in Figure 7a,b, the FEM was composed of two parts, a TX part and an RX part. Two TX antennas were located on the top of the FEM, with gains of 15 and 20 dBi. The azimuth and elevation angles of the RX antennas were 99.6° and 9.9°, respectively. A power amplifier (PA), a voltage-controlled oscillator (VCO), a phase-locked loop (PLL), a 20 MHz oscillator, and a micro-controller unit (MCU) were included in the TX part. The frequency synthesizer with the PLL was controlled by the MCU, and one of the two TX antennas was selected. The azimuth angles of the first and second TX antennas were 26° and 12°, respectively. The RX part was located at the bottom of the FEM. There were 8 RX antennas, and the distance between two adjacent RX antennas was half a wavelength. In the RX part, low-noise amplifiers (LNAs) were included for noise reduction. In addition, a mixer to obtain the beat signals and two kinds of filters, high-pass filters (HPFs) and low-pass filters (LPFs), were included. The amplifier (AMP) was used to amplify the weak signal, and the variable-gain amplifier (VGA) was used to control the gain according to the input. The RX signals from the RX antennas passed through the LNAs, which reduced the noise terms in the RX signals. Then, the output of the LNA was multiplied by the TX signal, synchronized by the PLL. The mixed signals were input to the (150 kHz) HPFs and then amplified by the AMPs. Finally, the outputs of the AMPs were passed through the (1.7 MHz) LPFs. Figure 8 shows a photo of the back-end module (BEM). As shown in Figure 8, a field-programmable gate array and a digital signal processor (DSP) were included in the BEM. The analog signal from the FEM was converted into a digital signal, with a 20 MHz sample rate, using an analog-to-digital converter. The converted signal was stored in two external 512-Mbyte memory banks for the DSP, called DDR2 SDRAMs. When the two DDR2 SDRAMs were filled with data, the stored data were moved to a personal computer (PC) using an Ethernet cable. Figure 9 shows the scenario and environment used for the experiment. As shown in Figure 9a,b, the targets were two people, who were d_1 and d_2 meters away from the radar, respectively. As mentioned above, the data of the sampled beat signal produced by the ADC were transmitted to the PC over the Ethernet cable.
By employing software installed for the experiment, as shown in Figure 9b, we easily set and selected the parameters, such as the sampling rate, number of ramps, and so on. Then, the performance of each algorithm was verified by applying the proposed algorithm and the conventional algorithm to the same data. Figure 10 shows the estimation results using FFT with the clutter rejection algorithm. The two targets were located at the same range of 3.5 m, and each angle was located at ±20 • . The two targets were not stationary objects but humans, and thus there is movement of the chest by breathing. Therefore, a Doppler change occurs due to the respiration of the targets and thus the clutters are easily canceled. In the case without the clutter rejection algorithm, we observed dominant peaks at 0 and 2.2 m, as shown in Figure 10. Hence, the ranges without clutter rejection were estimated as 1 m and 2.2 m. In contrast, according to the results of the clutter rejection algorithm, the dominant peaks at 0 and 2.2 m were removed. The result of the range estimation was almost identical to the actual range. Figures 11 and 12 show the results of the range estimation experiment using the proposed and conventional MUSIC algorithms. The simple clutter rejection algorithm was applied to both algorithms. In Figure 11, the ranges of the two targets, d 1 and d 2 , were 2.7 and 3.2 m, respectively. From these results, we observed that the two adjacent targets were properly distinguished by the proposed algorithm, despite its low complexity. Figure 12 shows the experimental results when the distance between the two targets was closer than that in Figure 11 (i.e., [d 1 , d 2 ] = [2.4 m, 2.6 m]). From these results, we found that two adjacent targets could be distinguished by the proposed algorithm, similarly to the conventional algorithm, even when the two targets were very close. Conclusions We constructed a low-complexity MUSIC algorithm based on the FFT-estimated beat frequency, and analyzed and compared the complexity of the proposed and conventional MUSIC algorithms. The proposed algorithm achieved a complexity reduction of 10 to 100 times, while producing similar performance to the conventional MUSIC algorithm. In addition, we experimentally confirmed the performance improvement provided by the proposed algorithm in a practical environment. Author Contributions: B.-s.K. proposed the idea for this paper, verified it through simulations, and wrote this paper; Y.J. and J.L. performed the experiments using the FMCW radar system and contributed to the construction of the FMCW radar system; S.K. discovered and verified the idea for this paper with B.-s.K. and edited the paper. All authors have read and agreed to the published version of the manuscript.
Measurement of E-Learners' Level of Interest in Online Course Using Support Vector Machine Motivations: E-learning is popular in today's era; most institutes have e-learning systems and also provide distance education. On the other hand, one of the main reasons for students' failures in academics is a lack of communication with teachers. Problem: This gap between students and teachers creates major flaws in academics; because of it, students cannot take an interest in an academic course. This study aims to measure the level of students' interest in an online course using a Support Vector Machine (SVM). Objectives: The main objective of this research is to measure students' interest level in online courses and classify e-learners on the basis of their level of interest in a course. Methods: The data are from the spring semester of 2019 from the Department of Computer Science and encompass 597 students and 39 diverse courses. The SVM technique has been used to filter the data and process the collected sequence data, in particular for data processing, clustering, classification, regression, visualization, and feature selection. Findings: Our system can detect the quantity of pages viewed per session. With this system, we identify the different levels of students together with their learning styles. Teachers can also use the system to improve themselves and may change their teaching methodology for their students. Application: A platform has been created for classifying students by their web-log interest in a particular course. Classifying e-learners on the basis of their interest in a particular course will help the teacher/e-teacher to change or improve the instructional strategy so as to develop the students' level of interest. Keywords: Machine Learning, Support Vector Machine, E-Learning, Web Logs, Clusters

Introduction Recently, online learning activity has shown much better results at the educational level, and it helps a lot during the learning process. If a student cannot reach the school, then he or she can easily take advantage of online learning. When experienced teachers want to teach every student, they can reach students through online learning. In this context, data mining techniques play an important role on the educational side; the overall process is also known as Knowledge Discovery in Databases (KDD), through which all students can easily approach their prospective knowledge and education. 1 As a growing field, data mining has shown many good results in recent years and is regarded as an emerging area of computer science. Moreover, machine learning (ML) is one of the strongest techniques for working automatically, and ML is playing an important role in the educational sector. Obviously, in the current era the education system depends upon technology, and every technology contributes an important part, such as online lectures, online quizzes, online tests, and online subject material. ML makes it easier to select a subject and take decisions. 2 On the other hand, it provides more facilities in the educational system. Many clustering algorithms have been used in ML for different purposes. Algorithms have their own efficiency and results; the SVM has been used here for clustering objects and text, and it provides better results with good accuracy. During ML, it helps to cluster a large number of records of students and teachers.
This study shows the student' interest during the curriculum study. After that, it helps the teacher can be able to work on that particular student and to give them perfect and proper counseling. A lot of work shows inside the LMS but in this system, it shows the level of student and helps a lot of students during the bachelor level of study. The main and perfect reason for this study is to provide a better quality of education. 3 Because education is our basic need and every student should know their expertise inside any field. The core benefits of this study are: • Data mining technique can be used in it for student and course evaluation. • The main purpose of this study is to use a data mining algorithm for new indexes and metrics. • These techniques can be easily adjustable with any LMS. • Such a user-friendly environment the visualize results and communicate with given data. 4 The organization of this study depends following aspects: section 1 introduction shows the basic background and the machine learning things, section 2 shows the keywords and core text of machine learning and educational things, section 3 shows the previous work of e-learning, section 4 shows the brief description of the work and steps. In the last section 5 shows the discussion and conclusion of the complete study. Educational Research Model Several advantages concerned with EDM via more educational research models, like practical works. At the beginning of the educational data source where all data sets are available and easily generated which makes EDM extremely feasible. In this regard, it shows all records of regular or non-regular students. Such important files of educational sets include in reliable learning tasks, in this regard student performance can be increased rapidly and maybe touch the high peak of educational level. 5 For measuring feasibility with environmental challenges on students' performance so the researcher can easily handle another paradigm of educational research. The rapid development of the educational system increasing day by day and sharing information on student performance can be affected with better results. In educational research, paradigm shows that the increasing rate of students learning the environment in the real world. A lot of the students take help from educational apps and software. Using those software students can take help and knowledge but some of the things will be still missing that students have not any ability to take their interest subject regarding their course. 6 After that student can give feedback regarding their performance and academic records. Machine Learning Application in AI is known as Machine learning which makes the system capable of to automatically grab, learn and improve with the experience with confusing the programming done for it. Machine learning mission is based on the development of programming in a computer which can access and later on utilize it for themselves for learning. Machine learning starts with data or observation like experiencing directly, giving instruction, so you look in patterns in data and it helps you to make the best judgment in the future depending on the giving example we have mentioned. 7,8 The mission is to give access to the computer learning automatically without any interference or disturbance for human and then act on it accordingly. Machine Learning Methods We categorized algorithms of ML as an unsupervised or supervised category. 
• Supervised machine learning algorithms apply what has been learned in the past to new, labeled data in order to predict future events. Starting from the analysis of a training data set, the learning algorithm produces an inferred function that can predict output values. After sufficient training, the system can provide targets for new inputs. The learning algorithm can also compare its output with the correct, intended output, find errors, and correct the model accordingly.
• Unsupervised machine learning is used when the training data are neither classified nor labeled. Unsupervised learning studies how a system can infer a function to describe a hidden structure from unlabeled data. The system does not figure out the right output, but it explores the data and can draw inferences from datasets to describe hidden structures in unlabeled data.
• Semi-supervised machine learning falls between supervised and unsupervised learning. It uses both labeled and unlabeled data for training; typically a small amount of data is labeled and a large amount is unlabeled. Semi-supervision is used when acquiring labels requires relevant and skilled resources, whereas unlabeled data can be obtained without additional resources.
• Reinforcement machine learning algorithms produce actions by interacting with the environment and discovering rewards or errors. Trial-and-error search and delayed reward are the characteristics of reinforcement learning. This method allows machines and software agents to automatically determine the ideal behavior within a specific context in order to maximize performance; the reward feedback given to the agent helps it learn which action was best.
ML makes it possible to analyze huge quantities of data. It generally delivers faster and better results, which can help to locate profitable opportunities or dangerous risks; however, it may also require more resources and time to train properly. Combining ML with cognitive and AI technologies can make it even more effective at processing large volumes of data. [9][10][11] The main function of ML is to learn a target function, so that it can work accurately on undefined and unseen data instances. 12,13 The learned target function, sometimes denoted (ho), is called a model. Figure 1 shows the model of the learning process. Given labeled training examples, the learning algorithm finds structures or patterns in the training data set. From there, it creates a model that generalizes well from that data. Normally, the learning procedure is exploratory; much of the time, the procedure is carried out numerous times using various learning algorithms and configurations. Support Vector Machine SVM is an effective algorithm for clustering, text classification, and decision making. It is related to other classification algorithms through the kernel trick, which is used for non-linear input spaces. It is important in many fields of technology and is used widely in applications such as face detection and the classification of emails, news articles, and web pages.
14,15 After that, it used mostly in text classification and handwriting recognition. SVM is a delighted algorithm and adequately simple concepts. The classifier discrete data points using a hyperplane with an enormous amount of margin. SVM is known as a discriminative classifier due to this reason. It finds the finest hyperplane, which helps in the new data point's classification. SVM could be handle smartly multiple continuous and categorical variables. To distinct different classes, it constructs a hyperplane in multidimensional space. It produces a huge hyperplane in an iterative mode that used to slash errors. The principle of SVM is to perceive a maximum marginal hyperplane (MMH) that superlative halve databases into classes. 16,17 In this case, the Figure 2 shows the classification of data. Literature Survey Some related work shows that the rapid development of student learning in their educational environment. In 18 has discussed the recognition system. In this system, the author has worked on a tool which recognizes the text and makes the proper arrangement of it. The author has introduced the algorithm that recognizes the text in the system. Similarly 19 has developed a system that categorizes the text with the help of SVM, in this system the author has worked on SVM for collecting better results. After that, the author has collected 67% result accuracy and categorizes five tags. Furthermore 20 has implemented a system during the students learning process. In this system, the author has implemented a web application and used that application in different schools. After that, the author collected the result which shows the student interest during the study. The author compared that Game-based learning is better than traditional learning methodology. Moreover 21 has used different and special data mining techniques for the recognition of learning styles on LMS. The author has collected experimental data. On the basis of their results, it shows the better improve technique of LMS. In this system, their technique shows the student interest according to teach them. In 22 has studied the data mining technique in E-Learning tools. The author has collected their results from teaching experiences belongs to the information system course. Some of the data mining techniques were taught inside the course which is part of that course. After that, the author has introduced the learning tool to analyse the data mining techniques. Some studied parts belong a case study with customer switching prediction. Similarly 23 has discussed the E-Learning objectives, methodologies and limits of E-Learning techniques. The author has focused on a different kinds of tools and techniques in methodology. Some of the issues have been discussed in particular learning, design, and other communication issues. Finally, the author has concluded and design a tool that allows the user to interact with it from anywhere and also mentioned the feedback of users with improved suggestions. Likewise 24 , has proposed a system which belongs to an online educational system. The author focused on campus wise data which can be collected from different sources of campuses. In this system, the author has collected student data from different campuses using the data mining technique. Correspondingly 25 has introduced the framework of smart cities where all data can be collected online. In this system, the author has worked on smart cities and analyzing big data. A huge amount of data can have handled with smart cities' approaches. 
Additionally 26 has studied the improved mode of E-Learning approaches and that system had efficiently worked on the educational environment. The author worked on the e-learning model and provides feedback from the audience. Moreover 27 has researched the common challenges which are included within their business environment are sophisticated and unstable. Therefore, people are interconnected with the same organization on the same network. The author used deep learning techniques inside his and collect their results. Furthermore 28 has developed a unique system where all data can be collected and stored online. After that student can easily choose their desired data with a unique identification. Mostly all students can easily ask any questions regarding their subjects. Methodology In this case, the main heading of the research is depending upon two main steps. After that, the proposed methodology works on the learning phase and the prediction phase. Where the learning phase is used for student data and their searching interest using web application. Because many of the students have different approaches and different interest levels. Interest level data can be collected from the student selection course. On the other hand, the second phase can be predicting the measurement of an interest course level. These data can be extracted from the local Web server of an LMS and instructors can easily evaluate the student's interest level. Logging the Data Such types of steps can show the student logging data on the platform of E-Learning. Inside the e-learning module, some fields have been recorded using a web server of E-Learning. It can record some fields of the E-Learning platform. Specifically, python used the special field for data saving inside the record which is Named Tupple. Inside the python Named Tupple can easily work and record the data of the student's course. Such of the python tornado web servers can be used for the configurations of Log files which are given below. In this case, much of the previous work has been shown inside the application and such requests can be stored on servers. After that, the admin can easily fetch the data from the server. Moreover, for the fetching of record two queries are working inside the application, on is Unique_ ID and another one is Sign (+), which easily indicates the first and second record of the user on each time of the request. Furthermore, inside the web application, some APIs have been used which are a direct concern with the webserver of an E-Learning application. During the collection of data, it is an independent platform of an E-Learning application. Data Pre-processing Information in the logbook has a noise like lost values, outliers, etc. The standard will be pre-processed so that the data mining analysis standard prepared. Definitely, the phase extracts stored information received from the first phase. Upper lever finding used and values at risk are taken away. This phase is not done by the eLearning platform and therefore merged with a variety of learning management systems. Logbook created during the last phase strained, hence contain the field below: • Course ID, Course identification • Session ID, Session Identification; Even if, the fields above stated have the facts for the whole eLearning procedure, further catalogues and metrics Table 1 offers in order to effectively assist the assessment of course practice. 
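Since the logging step stores each request with Python named tuples and the analysis later relies on per-session metrics such as pages per session, a minimal sketch of such a record type and metric is given below. The field set and line format are assumptions made for illustration; the text only names the Course ID and Session ID fields explicitly.

```python
from collections import namedtuple, Counter

# Assumed record layout; the study only names Course ID and Session ID explicitly.
LogRecord = namedtuple("LogRecord",
                       ["user_id", "course_id", "session_id", "page", "timestamp"])

def parse_line(line):
    """Parse one whitespace-separated log line into a LogRecord (assumed format)."""
    user_id, course_id, session_id, page, timestamp = line.split()
    return LogRecord(user_id, course_id, session_id, page, timestamp)

def pages_per_session(records):
    """Count how many pages were requested in each (user, session) pair."""
    return dict(Counter((r.user_id, r.session_id) for r in records))

raw = [
    "u001 C222 s1 /lecture1 2019-03-04T10:00:00",
    "u001 C222 s1 /quiz1    2019-03-04T10:12:30",
    "u002 C058 s7 /lecture2 2019-03-04T11:05:10",
]
records = [parse_line(l) for l in raw]
print(pages_per_session(records))   # e.g. {('u001', 's1'): 2, ('u002', 's7'): 1}
```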
Hence a lot of matrics are available on the web to analyze the E-Commerce and parallel matrices are missing for eLearning, which have been merged with the Model Learning Management system so the quality of learning management system is assessed and as well as procedure of learner's coordination. Dataset Description The results of various institutional changes and modifications that customs the Open E-Class and eLearning policy of the GUNET platform is an integrated Electronic Learning Management System. It follows the philosophy of open-source software and supports the Asynchronous eLearning service without restrictions and commitments. Access to the service is through the use of a simple web browser without the need for specialized technical knowledge. The information from the spring semester of 2019 from the Department of Computer Science and encompass 597 students and 39 diverse courses. The information is in American Standard Code for Information Interchange system (ASCII) (formerly intended for use with teletypes, and so the explanations are slightly cloudy and their use is recurrently not as proposed) and is acquired from the Python Required Fields. Preprocessing Logbook formed for prior step is saved, thus that one holds only the fields: As defined in the previous Table 1 student are categorized into their categories. On the basis of the categorization of a student, it can have clustered the student's interest using their activity. After that, we have collected 687 student's activity data. In this case, we have just shown 20 different course data from different semesters. During the clustering, it focused on assignment submission, quizzes, class discussion, Mid Term, and Final Term. Furthermore, Table 2 shows the different interest levels of students. During the different semester lot of the students have taken different interests in the subjects. In Table 2 we have just shown the five subjects with different interest levels in a different activity. Clustering Results The following pace discussing the recommended approach implicates the assembling of the information. In this period, we executed 2 distinctive assemblings. The initial one collection are courses centered on the suggested metrics termed at the prior pace. The next collection is the undergraduates affording to the courses they have incorporated. Therefore, the suggested approach be responsible for the mentor with perception not merely into the courses, on the other hand also for learners. Course Clustering In this section, we have clustered the course with a different class activity. During the course clustering data mining technique has been used and it works with python numerous student data can be taken for collecting better results. The SVM technique has been used to filter the data and collected sequence data especially data processing, clustering, classification, regression, visualization, and feature selection. The properties of SVM are based on Euclid's method consists in supposing a small set of naturally appealing axioms, and gathering many other propositions from these objectivities through two collections, subsequently, our aim was to isolate the 39 courses into extraordinary commotion and short commotion. Outcomes found that 9 (29%) of the courses had extraordinary action and 30 (71%) of the short action. Student Clustering In this section, we have clustered the students with their class enrollment and course ID. If a student visits the Web Application, then he should first signup their ID. 
After that, he visits their subject with different classified activities. In our system, their respected time can be calculated. Because we can have calculated their interest level with their selection. Figure 3 shows the traffic of students during accessing the web application and the SVM can filter the students with their selection and interest subject. Furthermore, it shows the timing when the student accesses their account and how much time he/she can appear on that particular subject. There is an insignificant cluster of 4 learners consistent to course "Data Structure and Algorithm" (Course ID = 222) which is actuality educated during the previous semester and merely a precise low number of learners ordinarily select and join it. The learner of this course is well-known in the graph (Figure 3) even previously relating the SVM Algorithm, subsequently, the common courses are not being shared within the students. Therefore, the graph illustration of the interactions of the learners previously the collecting can describe quarantined learners and assists the lecturer to determine definite collections of them. The user can also apply the SVM from the Bio Layout surroundings. Subsequently, the assembling is concluded, Bio Layout colors the modules of individually class contrarily in order to visually differentiate each collection ( Figure 4). Moreover, the 3D graph in the form of Bio-Layout. We implement the SVM approach on 687 students and that technique composed from the tornado log file. Students grouped as a pair of fours number can be seen "Professional Structures" are kept back unmoved producing one hard bunch. A sample explanation of this bunch (Cluster 18) is shown in the ( Figure 5). We can see the User ID for the respective learner, the Course ID and the quantity of received and disappearance connections for each node (received and leaving links shared operators that visit the same pages), although in this case some of the figures are impacts. Similarly, Figure 6 shows the detailed results about the cluster 18 because the cluster 18 shows the students record while attending the classes. During the different semesters of students can perform and attend their classes. In the programming fundamental course, students can take a good interest. After that Linear Algebra course and Calculus Math course, both are similar to the most same exercise. During the semester student can be able to take interest in a different level. Some students have an interest to solve assignments and course exercises. But some students have no interest in mathematics subject. Due to the lack of interest of student's teachers can be able to make a meeting with students and talk with them easily about the subject and their interest level. For showing our research on student's interest we have captured the limited snapshot and focused only on 5 subjects of different subjects. The additional sample is displayed in Figure 6 where the outcomes for Collection 18 are accessible. The learners of this collection have shared typically the concept (Course ID = 58) of course "Digital analysis of algorithm" and its accompanying lab course (Course ID = 39) that are educated in the third semester. The SVM cluster algorithm brings about to isolate these learns from the whole of 687 learners. Discussion and Conclusion We display that the recommended metrics be able to compromise an introductory course classification, which in seizure be able to be used as a response for assembling algorithms. 
These suggest specific actions to instructors so that they can easily update their course outlines and improve their usability. Moreover, using the SVM technique the courses can be easily clustered and separated according to their results. Many students can easily learn their own interest level, and can then pick the right subject and spend their time on it. Furthermore, the system can show the level of each student and also suggest that they complete the course within the available time. Explicitly, our system has the following advantages: custom clustering is used to detect different groups of courses and different groups of learners; the SVM clustering algorithm is stable, accessible, and practical for use in the field of eLearning; SVM clustering can effectively place courses into the same cluster based on the learners' course visits and can single out isolated courses; and the BioLayout visualization tool helps illustrate the clustering outcomes and allows the instructor to further explore the results in an interactive 3D environment. We collected feedback about the system from the instructors. The instructors were informed about the ranking outcomes, and most of them improved both the quality and the size of their learning material. They improved the quality by reorganizing the learning material in a consistent, categorized, and controlled manner, and they increased its size by embedding supplementary learning material. A significant conclusion from informing the instructors about our outcomes was that the ranking of the courses constitutes a significant motivation for the instructors to try to develop their learning material. Because of their mutual competition, they each want their courses to be highly ranked. However, a few instructors complained that the organization of their courses does not help them obtain high final scores in the ranking. A numerical assessment of the metrics of those courses at the initial stage, and again after an instructor's acquaintance with and use of the metrics, would offer valuable insights and help assess our system scientifically. We also plan to further automate the entire system, which is being developed as a plug-in tool that automates the log pre-processing and clustering steps. The tool will run periodically (every month) and will mail each instructor their course rank and recommendations, so that, over an extended period, the instructors are kept up to date automatically by mail about the quality of the content of their courses. Finally, our system can detect the number of pages per session. In this system we capture the different levels of students together with their learning styles. Teachers can also improve themselves with this system; they may adapt their teaching methodology to their students. Offline, it uses data mining procedures such as pre-processing, visualization, clustering, classification, regression and association to discover hidden data patterns. This is helpful for newcomer students as well as teachers.
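To make the course-activity clustering step above concrete, the following is a minimal Python sketch of separating courses into high- and low-activity groups from per-course activity counts. The course names and counts are hypothetical, and scikit-learn's k-means is used here as a simple stand-in for the SVM-based clustering carried out in BioLayout in the study.

# Minimal sketch: split courses into high- and low-activity groups.
# Course names and activity counts are hypothetical; the study used
# SVM-based clustering in BioLayout, so k-means is only a stand-in here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per course: [assignments, quizzes, discussion posts, midterm, final]
courses = {
    "CS101":   [120, 300, 80, 45, 44],
    "CS222":   [10, 25, 5, 4, 4],
    "MATH110": [60, 150, 20, 30, 29],
    "CS330":   [15, 40, 8, 6, 6],
}

X = StandardScaler().fit_transform(np.array(list(courses.values()), dtype=float))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# The cluster with the larger mean standardized activity is the "high activity" one.
high = max(range(2), key=lambda c: X[labels == c].mean())
for name, lab in zip(courses, labels):
    print(name, "high activity" if lab == high else "low activity")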
6,154.4
2019-10-20T00:00:00.000
[ "Computer Science", "Education" ]
Random Numbers Generated from Audio and Video Sources Random numbers are very useful in simulation, chaos theory, game theory, information theory, pattern recognition, probability theory, quantum mechanics, statistics, and statistical mechanics. Random numbers are especially helpful in cryptography. In this work, the proposed random number generators draw on white noise from audio and video (A/V) sources extracted from a high-resolution IPCAM, a WEBCAM, and MPEG-1 video files. The proposed generator applied to video sources from an IPCAM or WEBCAM with microphone acts as a true random number generator, and as a pseudorandom number generator when applied to video sources from an MPEG-1 video file. In addition, when applying the NIST SP 800-22 Rev.1a 15 statistical tests to the random numbers generated by the proposed generator, around 98% of the random numbers pass the 15 statistical tests. Furthermore, audio and video sources are easy to find; hence, the proposed generator is a qualified, convenient, and efficient random number generator. Introduction The security of access to Cloud Databases has become a significant current issue. Cryptography and network security are both fundamental components of accessing a Cloud Database safely; furthermore, random numbers play an essential role in cryptography and network security. Random numbers can be generated in two major ways: by true random number generators (TRNGs) and by pseudorandom number generators (PRNGs). TRNGs often generate random numbers from natural phenomena such as dice, coin flipping, flip-flop circuits, oscillators, electromagnetic waves, thermal noise, and atmospheric noise. As a result, the random numbers generated from TRNGs cannot be reproduced [1]. On the other hand, PRNGs often generate random numbers from mathematical functions, such as linear congruential generators, to simulate real randomness, which allows the sender and receiver to generate the same random numbers from PRNGs given the same initial value. Most people access Cloud storage via personal devices; hence, creating random number generator algorithms from video [2] that could be used on personal devices with limited computing capacity would be a significant invention. In addition, the results of [2] show that 98% of the random numbers generated from IPCAM and WEBCAM sources can pass the National Institute of Standards and Technology (NIST) SP 800-22 Rev.1a 15 statistical tests [3], but only 72.8% of the random numbers generated from a XiangSheng MPEG-1 video file can pass the 15 statistical tests. XiangSheng is a kind of comic dialogue that entertains the audience with ridiculous stories. Furthermore, once the dual-video sources algorithm is adopted on the XiangSheng MPEG-1 video file to obtain random numbers, the passing rate against the 15 statistical tests rises to 97%. Table 1 shows the comparison of the Video Random Number Generator (VRNG) and the Dual-Video Random Number Generator (DVRNG). The columns 2WEBCAM, 2IPCAM, and 2XiangSheng mean that the DVRNG came from two video sources of the respective type, and Pure Sound means the single-sound-source TRNG.
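To illustrate what passing one of the NIST SP 800-22 Rev.1a tests involves, below is a minimal Python sketch of the simplest of the 15 tests, the frequency (monobit) test. The bitstream here is a hypothetical stand-in for the generator's output, and the full suite applies 14 further tests beyond this one.

# Minimal sketch of the NIST SP 800-22 frequency (monobit) test.
# The input bits are a hypothetical stand-in; a real evaluation would use
# the bitstream produced by the generator under test.
import math
import random

def monobit_p_value(bits):
    # Map 0 -> -1 and 1 -> +1, sum, normalize, and compute the p-value.
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

bits = [random.getrandbits(1) for _ in range(100_000)]  # stand-in bitstream
p = monobit_p_value(bits)
print(f"p-value = {p:.4f}, pass = {p >= 0.01}")  # NIST uses a 0.01 significance level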
However, the passing rate of the random numbers generated from pure sound is almost zero, and using two video sources to generate random numbers is not convenient enough when compared with random numbers generated from the white noise of audio and video, i.e., the Audio and Video Random Number Generator (AVRNG for short). The AVRNG requires only a camera and a microphone to generate random numbers, which means that the AVRNG could be applied not only on personal computers but also on the increasingly widespread smartphones and tablet PCs. We review related works in Section 2, the main contribution of the proposed algorithm, the AVRNG with filter, is described in Section 3, and Section 4 is a conclusion followed by the references. PRNGs and TRNGs. A linear congruential random number generator [4] represents one of the best-known PRNGs and was first broken by Reeds [5] and then by Boyar [6]. Researchers have developed feedback shift registers since then [7]. In 2010, Debiao et al. proposed "A Random Number Generator Based on Isogenies Operations" [8]. They used the character of elliptic curves to generate random numbers, which is also a PRNG method that passes the NIST SP 800-22 Rev.1a 15 statistical tests. Wang and Yu proposed "A Block Encryption Algorithm Based on Dynamic Sequences of Multiple Chaotic Systems" in 2009 [9], in which the algorithm makes the pseudo-random sequence more concealed and noise-like and overcomes the periodicity problem. Furthermore, Wang et al. proposed a parameter perturbation method based on the good properties of the extended one-dimensional smooth map in 2011 [10], and Wang et al. proposed a series of random number generators based on chaotic systems from then on [11][12][13]. Wang et al. overcame the periodicity problem of random numbers and noted that it is hard to obtain truly random numbers; therefore, this work presents a method that can be used as both a PRNG and a TRNG. Intel first attempted to produce true random numbers by sampling thermal noise in 1999. But amplifying thermal noise consumes a lot of power; hence, Intel turned to making a random number generator based only on digital hardware in 2008 [14]. Besides, thermal noise and amplifier circuits are not suitable for common computer users. The most popular cryptographically sound random number generator, LavaRnd, was developed in 1996 by Landon Curt Noll, Simon Cooper, and Mel Pleasant. LavaRnd includes 3 stages: gathering digital chaos, randomizing the chaos with a digital blender, and outputting random data. Over a million people have grabbed random numbers from the LavaRnd website [15]. However, the digital blender part of LavaRnd has high computational complexity because it uses the SHA-1 hash function. In 2009, Tawfeeq proposed "A Random Number Generator Based on Single-Photon Avalanche Photodiode Dark Counts" [16], which produced nearly 50% 0's and 50% 1's. Yamanashi and Yoshikawa proposed "Superconductive Random Number Generator Using Thermal Noises in SFQ Circuits" [17], which used superconductive single-flux-quantum (SFQ) circuits and thermal noises to produce random numbers. Nevertheless, those two methods also require specific equipment and electronic circuit knowledge to produce random numbers. In 2012, Wang et al. proposed "A Novel True Random Number Generator Based on Mouse Movement and a One-Dimensional Chaotic Map" [18], which utilized one mouse coordinate as the length of an iteration segment of their true random numbers and the other coordinate as the initial value of this iteration segment. As a result, Wang et al.
made random numbers with a uniform distribution and an average passing rate of 68%. Alsultanny proposed another TRNG called "Random-bit Sequence Generation from Image Data" in 2008 [19]. Alsultanny used simple operations such as XOR to generate random bits from images, but only 94% of the generated random bits could pass at least four statistical tests [20], and only 79% passed all five tests, which does not seem feasible enough. Although Alsultanny's results are not good enough, this method did inspire us to generate random numbers from video sources. Tsai et al. proposed random numbers generated from video [2] and from the divergence of a scaling function [21] in 2009 and 2012. The result of [2] is shown in Table 1. In [2], qualified random numbers could not be produced from a single video or a single audio source; a further improvement that combines audio and video sources, while applying the NIST SP 800-22 Rev.1a 15 statistical tests [3] to verify the randomness, is proposed in this work. The above 15 statistical tests are commonly used for determining whether random numbers possess the specific characteristics that truly random numbers could be expected to exhibit. In addition, NIST issues the 140 Publication Series to coordinate the requirements and standards for cryptographic modules. FIPS Pub 140-1 and 140-2 were published in 1994 and 2004, respectively [22]. A revised draft of FIPS Pub 140-3, adding new security features that reflect recent advances in technology and security methods, was published in 2009 [23]. The FIPS Pub 140 series recommends some statistical tests, and the SP 800-90A series recommends random number generators using Deterministic Random Bit Generators and provides a validation system [24]. For other statistical tests please refer to [22][23][24][25]. The Proposed Scheme: AVRNG with Filter The random numbers that we generated from a single source cannot all pass the NIST SP 800-22 Rev.1a 15 statistical tests. Consequently, this work combines audio and video to develop AVRNGs in order to take advantage of the audio's influence. In our experiments, the AVRNG with a cartoon video source still cannot pass the NIST SP 800-22 Rev.1a 15 statistical tests because cartoon colors are artificial. The artificial color is almost the same within neighborhood areas such as an arm or a face. Hence, a coordinate threshold vt, a discard threshold th, and a new coordinate equation are introduced to avoid generating random numbers by capturing pixels in the same neighborhood area. AVRNG with Filter Algorithm. Let W and H denote the video frame's width and height, and let R, G, and B be the color values of red, green, and blue, respectively. We take only 8 to 15 frames of size 320 * 200 pixels per second to ensure randomness. The coordinate threshold vt is a filter: if the RGB difference between the current coordinate and the previous coordinate is less than vt, then another coordinate is adopted. vt could be set as half the variance of any image; thus, we set vt as half the variance of the video frame in the 3rd second. We set the discard threshold th = 100. If we cannot get an available RGB (x, y) coordinate within th attempts, then this frame is discarded and we go to the next frame. The algorithm is as follows. (a) Initial value: the initial value can be obtained in any way.
In the proposed method, we take the average value of the nine pixels around the center pixel of the first frame as the initial value. Let the coordinate of the center pixel be (x, y), and let the color of (x, y) be given by its R, G, and B values combined with bit shifts, where "≪" means bit shift. The initial value and the initial coordinate (x, y) are then computed from these values, where "&" means AND, "⊕" means XOR, and "%" means modular arithmetic. (b) Set thresholds: we set vt as half the variance of the video frame in the 3rd second, set the discard threshold th = 100, and initialize a counter to 0. (c)-(d) Get an available coordinate: if the RGB difference at the current coordinate is less than vt, another coordinate is adopted and the counter is increased by 1. If the counter exceeds th, then we discard this frame, move to the next frame, and run step (d) again. (e) Get one random bit: one random bit, bit[i], is obtained by combining the video and audio values, and a new coordinate is then computed; the "≪4" operation used in both expressions is very important, because if it is omitted the pass rate of the statistical tests drops rapidly to 10%-50%. (f) Go back to step (c) to get another random bit bit[i] until we obtain a random byte. Figure 1 presents the AVRNG flowchart. The next step is to prove that the random numbers generated by the AVRNG are of sufficient quality. For this, we apply the NIST SP 800-22 Rev.1a 15 statistical tests [3], issued in April 2010, to verify the randomness of the proposed AVRNG. Result of AVRNG with Filter. First of all, we briefly describe the materials used for the experiments. XiangSheng is the kind of comic dialogue, mentioned in Section 1, that entertains the audience with ridiculous stories. The cartoon, Spirited Away, is a Japanese animation produced in 2001. It is definitely difficult to produce qualified random numbers from a cartoon image because the original colors are already artificial. The pictures and results are shown in Figures 2, 3, and 4 and Tables 2 and 3. In the XiangSheng part of the test results, the minimum value of XiangSheng3 is 0.98, which is the best result of all XiangSheng test samples. We can see that, when adopting the AVRNG with filter, both XiangSheng and cartoon become good enough compared with the WEBCAM with microphone. XiangSheng0 to XiangSheng5 mean the XiangSheng film XORed with 0 to 5 sound points, respectively; similarly, Cartoon0 to Cartoon5 mean the cartoon film XORed with 0 to 5 sound points, respectively. The complete results are shown in Tables 2 and 3. In the cartoon part, the minimum value of Cartoon3 is 0.98, which is the best result of all cartoon test samples. The results show that we can generate qualified random numbers with the AVRNG with filter no matter whether the source is the WEBCAM, XiangSheng, or the cartoon. The experimental equipment is a Logitech Clear Chat Stereo headset microphone (100-10,000 Hz, sensitivity −62 dBV/uBar, −42 dBV/Pascal ±3 dB) [26], and the video devices are the 1.3-megapixel Logitech QuickCam Pro 4000 WEBCAM [27] and BlueEyes's IPCAM BE-1200 [28]. The computer used for the experiments is an Asus U45J: Intel i5-460M CPU (2.53 GHz) and 4 GB RAM, with Fedora 18 x64 as the OS.
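A loose Python sketch of the filtering logic in steps (b)-(e) is given below. Because the paper's exact coordinate-update and bit-extraction equations are not reproduced in the text above, the coordinate update and the bit extraction here are simplified placeholders; only the vt/th thresholding and the frame-discard behaviour follow the description.

# Loose sketch of the AVRNG "filter" logic from steps (b)-(e).
# The coordinate update and the bit extraction are simplified placeholders,
# since the paper's exact equations are not reproduced in the text above.
import numpy as np

def filtered_bit(frame, audio_byte, x, y, vt, th=100, seed=0):
    # frame: H x W x 3 array of RGB values; audio_byte: one byte of sound data.
    rng = np.random.default_rng(seed)
    h, w, _ = frame.shape
    prev = frame[y, x].astype(int)
    for _ in range(th):
        x, y = int(rng.integers(w)), int(rng.integers(h))   # placeholder coordinate update
        cur = frame[y, x].astype(int)
        if int(np.abs(cur - prev).sum()) >= vt:             # coordinate threshold vt
            video_byte = int(cur[0]) ^ int(cur[1]) ^ int(cur[2])
            return (video_byte ^ audio_byte) & 1, (x, y)    # one random bit
    return None, (x, y)                                     # discard this frame after th tries

frame = np.random.randint(0, 256, size=(200, 320, 3))       # stand-in 320 x 200 frame
bit, coord = filtered_bit(frame, audio_byte=0x5A, x=160, y=100, vt=30)
print(bit, coord)  # vt = 30 is hypothetical; the paper sets vt to half the frame variance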
Conclusions In this work, the proposed AVRNG with filter could generate random numbers from a WEBCAM with microphone (as a TRNG) or from a video file's frames with sound (as a PRNG). Furthermore, when the AVRNG adopts the filter, 98% of the random numbers generated from both the XiangSheng and the cartoon film could pass the 15 statistical tests. Moreover, the AVRNG with or without the filter takes almost the same time to generate 100,000 random bits; the result is shown in Table 4. The proposed random number generating method requires merely a WEBCAM and microphone, instead of complex equipment such as an electronic circuit or oscillator, to generate true random numbers. The results are principally based on a personal computer; consequently, transferring the proposed algorithm to personal devices such as tablet PCs or smartphones, so as to prove that these efforts can be widely applied, will be the next stage. The random numbers generated from this work could be used in evolutionary algorithms [29,30]. Hopefully, this work will evolve into an effective adjunctive decision-making tool.
3,017
2013-04-17T00:00:00.000
[ "Computer Science" ]
Meteorin-like levels are associated with active brown adipose tissue in early infancy Introduction Meteorin-like (METRNL) is a hormonal factor released by several tissues, including thermogenically active brown and beige adipose tissues. It exerts multiple beneficial effects on metabolic and cardiovascular systems in experimental models. However, the potential role of METRNL as a brown adipokine in humans has not been investigated previously, particularly in relation to the metabolic adaptations taking place in early life, when brown adipose tissue (BAT) is particularly abundant. Methods and materials METRNL levels, as well as body composition (DXA) and circulating endocrine-metabolic variables, were assessed longitudinally in a cohort of infants at birth, and at ages 4 and 12 months. BAT activity was measured by infrared thermography at age 12 months. METRNL levels were also determined cross-sectionally in adults; METRNL gene expression (qRT-PCR) was assessed in BAT and liver samples from neonates, and in adipose tissue and liver samples from adults. Simpson-Golabi-Behmel Syndrome (SGBS) adipose cells were thermogenically activated using cAMP, and METRNL gene expression and METRNL protein release were analysed. Results Serum METRNL levels were high at birth and declined across the first year of life, albeit remaining higher than in adulthood. At age 4 and 12 months, METRNL levels correlated positively with circulating C-X-C motif chemokine ligand 14 (CXCL14), a chemokine released by thermogenically active BAT, but not with parameters of adiposity or metabolic status. METRNL levels also correlated positively with infrared thermography-estimated posterior-cervical BAT activity in girls aged 12 months. Gene expression analysis indicated high levels of METRNL mRNA in neonatal BAT. Thermogenic stimulation of brown/beige adipocytes led to a significant increase in METRNL gene expression and METRNL protein release to the cell culture medium. Conclusion Circulating METRNL levels are high in the first year of life and correlate with indices of BAT activity and with levels of an established brown adipokine such as CXCL14. These data, together with the high expression of METRNL in neonatal BAT and in thermogenically-stimulated brown/beige adipocytes, suggest that METRNL is actively secreted by BAT and may be a circulating biomarker of BAT activity in early life. Introduction Meteorin-like protein (METRNL), also known as Meteorin-b, interleukin-41 and subfatin, is a recently identified hormone involved in metabolic regulation and considered a candidate biomarker of metabolic syndrome (1). In rodent models, METRNL is highly expressed in brown adipose tissue (BAT) upon thermogenic activation and also in skeletal muscle after exercise (2). METRNL was found to promote energy expenditure and glucose tolerance through the induction of alternatively activated macrophages at adipose depots and by promoting the browning of adipose tissue. Further research showed that peroxisome proliferator-activated receptor-g (PPARg) enhances the capacity of METRNL to antagonize insulin resistance in adipose tissue (3). METRNL also attenuates inflammation and insulin resistance in skeletal muscle via AMP-activated protein kinase and PPARd-dependent pathways (4,5). The beneficial effects of METRNL have been associated with innate immunity (6, 7), and protection against cardiac dysfunction (8,9).
In adult humans, METRNL levels are low in patients with obesity and diabetes and correlate negatively with glucose levels and markers of insulin resistance (10)(11)(12)(13). Metabolic and nutritional alterations in early postnatal life are not only relevant for health during infancy but may also contribute to the development of metabolic syndrome in later life. In recent years, the activity of thermogenic (brown/beige) adipose tissues in adult humans has gained attention as a protective factor against obesity, type 2 diabetes and cardiovascular disease (14). This is attributed to the capacity of BAT both to drain glucose and lipids for adaptive thermogenesis and to secrete adipokines with healthy effects on metabolism (15). However, despite the existing awareness that BAT size and activity are particularly relevant in infants (16), the pathophysiological consequences of distinct BAT activities early after birth have not been studied. The identification of BAT-derived adipokines in infants and their capacity to be used as biomarkers of metabolic health has also been scarcely undertaken, and only a few circulating molecules, such as bone morphogenetic protein-8B (BMP8B) and C-X-C motif chemokine ligand 14 (CXCL14), have respectively been associated with BAT activity in newborns and in one-year-old infants (17)(18)(19). Here we determined for the first time the circulating levels of METRNL across the first year of life and disclosed a significant association between this variable and the extent of BAT activity. Study population and ethics The primary study cohort consisted of 50 infants (27 girls and 23 boys) who were enrolled prenatally during the customary third trimester visit among Caucasian pregnant mothers consecutively seen in the outpatient clinics of Hospital Sant Joan de Deú and Hospital de Sant Boi - Parc Sanitari Sant Joan de Deú (Barcelona, Spain) (Supplementary Figure 1). These infants had previously participated in a longitudinal study assessing BAT activity and circulating levels of CXCL14 and BMP8B in the first year of life (18,19). Inclusion criteria were: maternally uncomplicated, singleton pregnancy with delivery at term (37-42 weeks), exclusive breastfeeding or formula-feeding in the first 4 months, postnatal follow-up completed (at 15 days, 4 and 12 months) and written informed consent. Exclusion criteria were maternal disease, alcohol or drug abuse, congenital malformations and complications at birth. Birth weight was not considered as an inclusion or exclusion criterion; accordingly, the study population included infants with a wide range of birth weight Z-scores (between −2.9 and +1.0). Circulating METRNL was exclusively measured in a subset of infants who had a spare serum sample available at birth (20 girls and 18 boys), and at age 4 and 12 months (26 girls and 16 boys). Serum METRNL was also measured in 30 mothers of those infants (age, 33.6 ± 0.9 years) during the third trimester of pregnancy (Supplementary Figure 1). In addition, serum METRNL concentrations were analyzed cross-sectionally in healthy adult women (N = 10; age, 38.7 ± 1.9 years). METRNL mRNA gene expression was assessed in dorso-interscapular BAT (N = 5) and liver (N = 6) post-mortem samples obtained on the occasion of autopsies (2-3 h after death) of Caucasian newborns with a gestational age of 28-36 weeks who survived, at most, 3 days post-partum, supplied by the Academy of Sciences of the Czech Republic as previously described (20) (Supplementary Table 1).
For comparison, METRNL mRNA gene expression was also determined in adult liver samples (obtained from hepatic biopsies performed when a hepatic tumor was suspected, with a negative ultimate result), deltoid muscle samples (from adult individuals who underwent skeletal muscle biopsy because of muscle complaint in whom skeletal muscle histology was thereafter normal) and subcutaneous adipose tissue samples (obtained from volunteers), as described (8,21). The study was approved by the Institutional Review Board of the University of Barcelona, Sant Joan de Deú University Hospital; all participating mothers signed the informed consent at recruitment. Clinical and endocrine-metabolic assessments Maternal data were retrieved from hospital clinical records. Gestational age was calculated according to the last menses and validated by first-trimester ultrasound. Weight and length of the newborns were measured immediately after delivery, and again at age 4 months and 12 months. Maternal venous samples were obtained during the third trimester of gestation, between week 28 and delivery. Neonatal blood samples were obtained at birth from the umbilical cord before placenta separation (22). At age 4 and 12 months, venous samples were obtained during the morning in the fasting state. Adult venous samples were also obtained after overnight fasting. The serum fraction of the samples was separated by centrifugation and stored at -80°C until analysis. Body composition and BAT activity assessment Body composition was assessed at age 15 days, 4 months and 12 months by dual-energy X-ray absorptiometry (DXA) with a Lunar Prodigy and Lunar software (version 3.4/3.5; Lunar Corp., Madison, WI, USA) adapted for infants (22). As previously described (18), BAT activity at age 12 months was estimated through the infrared thermography-based measurement of the skin temperature overlying BAT depots. The parameters assessed included the maximal temperature at the posterior cervical (T PCR ) and supraclavicular (T SCR ) regions, and the extent of active BAT in these regions (Area PCR and Area SCR ). Cell cultures of neonatal beige adipocytes Pre-adipocyte cells obtained post-mortem from a 3-month-old infant with Simpson Golabi Behmel Syndrome (SGBS cells) (23), capable to differentiate into adipocytes bearing a beige phenotype (24,25) were used. SGBS pre-adipocytes were maintained in Dulbecco's modified Eagle's (DMEM)/F12 medium, 10% fetal bovine serum (FBS). Beige adipogenic differentiation was initiated by incubating confluent cell cultures for 4 days in serum-free medium plus 20 nM insulin, 0.2 nM triiodothyronine, 100 nM cortisol, 25 nM dexamethasone, 500 µM 3-isobutyl-1-methylxanthine, and 2 µM rosiglitazone. Subsequently, cells were switched to DMEM/F12, 20 nM insulin, 0.2 nM triiodothyronine, and 100 nM cortisol and maintained for up to 10 days, when more than 90% cells have acquired differentiated adipocyte morphology. To induce thermogenic activation of adipocytes, differentiated cells were treated with 1mM dibutyril-cAMP for 24 hours. All cell culture reagents and drugs were from Sigma-Aldrich (St Louis, Missouri, USA). Cells were collected for RNA isolation and the cell culture medium, corresponding to 24 h before harvest, was also collected for measurement of METRNL levels. Statistics Statistical analyses were implemented in SPSS version 27.0 (SPSS software, IBM, Armonk, NY, USA), GraphPad Prism 5 (GraphPad Software, CA, USA) and R Project version 4.2.2 (RStudio, MA, USA). 
Results are shown as mean ± standard error of the mean (SEM). Variables with normal distribution were compared with two-tailed Student's t-test. Chi-square test was used to compare qualitative variables. Correlation and stepwise multi-regression analysis were used to study associations between circulating METRNL levels and the assessed variables; outliers were detected using a studentized residual outlier test and excluded from further analyses; this approach did not modify the statistical significance of any analysis. Covariance analysis was used to adjust for ponderal index and breastfeeding. A P-value < 0.05 was considered statistically significant. METRNL levels in the first year of life Supplementary Table 2 shows the longitudinal data from infants over the first year of life and from their mothers in late pregnancy in the cohort in which serum METRNL assessment was performed. As previously reported (18, 19), girls had less lean mass, higher levels of circulating CXCL14 and higher posterior BAT activity. When splitting METRNL levels by sex, or by type of early feeding, no differences were found at any study time; accordingly, the results were pooled. Circulating METRNL concentrations in infants at birth were higher than at the postnatal ages of 4 and 12 months, and higher than in non-pregnant women (Figure 1). Correlations between METRNL levels and clinical, endocrine-metabolic and body composition variables The associations between circulating levels of METRNL and anthropometric, adiposity-related, and endocrine-metabolic parameters, including some putative brown adipokines (CXCL14, BMP8B), throughout follow-up are shown in Supplementary Table 3. At birth, circulating METRNL levels were negatively related to abdominal fat only in girls (R= -0.678; P= 0.013, Supplementary Table 3). At age 4 months, circulating METRNL showed a strong positive correlation with circulating CXCL14 concentrations in the entire population (R= 0.648; P= 0.002) (Figure 2A). At age 12 months, METRNL concentrations were also positively correlated with CXCL14 levels (R= 0.693, P= 0.001) ( Figure 2B). This correlation was maintained when girls were analyzed separately (R= 0.698; P= 0.012). Correlations between METRNL concentrations and parameters of BAT activity BAT activity at the posterior cervical and supraclavicular regions at age 12 months was analyzed in a subset of infants of the study cohort by using infrared thermography-based procedures, as previously described (18). Correlations between circulating METRNL levels at all time points and indicators of BAT activity at age 12 months are summarized in Supplementary Table 4. Significant positive correlations between posterior-cervical BAT activity and METRNL levels at 4 months (R= 0.400; P= 0.047; Figure 3A) and at 12 months (R= 0.432; P= 0.006; Figure 3B) were disclosed. Separate analyses by sex revealed a significant correlation between BAT activity and METRNL levels at 12 months only in girls (R= 0.426; P= 0.004). METRNL expression in neonatal and adult tissues METRNL expression levels in human neonatal post-mortem samples were significantly higher in dorso-interscapular BAT than in the liver. In addition, METRNL expression in neonatal BAT was much higher than in adult adipose tissue, skeletal muscle, and liver ( Figure 4). Activation of neonatal brown/beige adipocytes leads to increased METRNL expression and secretion of METRNL Neonatal adipocytes differentiated into beige phenotype (SGBS cells) were thermogenically activated using cAMP (26). 
METRNL gene expression was dramatically induced, similarly to the thermogenic biomarker UCP1 (Figure 5A) and other marker genes of the brown/beige phenotype such as peroxisome proliferator-activated receptor-g coactivator-1a (PPARGC1A), iodothyronine 5'-deiodinase (DIO2) and BMP8B (Supplementary Figure 2). Moreover, thermogenic activation of the cells also induced a significant increase in the release of METRNL protein to the culture medium (Figure 5B). Discussion The present study is, to our knowledge, the first to have assessed the circulating concentrations of METRNL in human infants, and to have related METRNL levels to BAT activity. Our study demonstrates that circulating levels of METRNL, a novel adipokine that promotes browning of white adipocytes upon thermogenic stimulus (2), are high at birth as compared to adult values and decrease over the first year of human life, although remaining higher than in adults. These findings are in line with those recently reported on circulating BMP8B, another brown adipokine involved in thermoregulation and metabolic homeostasis, which shows a similar decreasing trend from birth to age 12 months (19). Moreover, our data also disclosed a high expression of METRNL in neonatal human BAT and in thermogenically activated neonatal brown/beige adipocytes, as well as an increased secretion of METRNL in thermogenically activated cells, which confirms the capacity of human neonatal brown adipocytes to secrete this adipokine. Altogether, these data highlight the elevated activity of BAT after birth, when the demands for thermogenesis and the risks of hypothermia can be especially high (16), and the concomitant production of BAT-secreted adipokines such as METRNL. Circulating METRNL concentrations displayed a positive association with posterior-cervical BAT activity, as well as with circulating levels of CXCL14, a chemokine secreted by active brown/beige adipose tissue (27). These data fit well with previous studies reporting a positive correlation between circulating CXCL14 levels and BAT activity in early life (18), and also between CXCL14 and METRNL expression in adult adipose tissue (28). Interestingly, CXCL14 and METRNL have emerged as circulating factors that modulate M2 macrophage activation, playing a role in brown/beige thermogenic regulation (2, 27). There is evidence of an interplay between BAT and skeletal muscle development in large mammalian species, which is characterized by a progressive decline in BAT after birth concomitant with skeletal muscle maturation, and this may affect the BAT and muscle secretome (29). In rodents, skeletal muscle is a relevant site of METRNL gene expression (2), whereas in adult humans METRNL expression is low (8) but induced after exercise (2, 30). Lack of availability of muscle samples, or of tissues other than BAT and liver, from neonates and young infants is a limitation of our study on METRNL expression, and thus we cannot exclude a role of muscle or other tissues in influencing systemic METRNL levels in early development. Given the developmental overlap between BAT and muscle (29), it cannot be excluded that the correlation between METRNL levels and BAT activity in early infancy is indeed an indirect reflection of METRNL release by muscle. FIGURE 2 Correlation between circulating Meteorin-like (METRNL) and C-X-C motif chemokine ligand 14 (CXCL14) at age 4 months (A) and 12 months (B). Grey dots correspond to girls and black dots represent boys. P values are adjusted for ponderal index and breastfeeding.
FIGURE 3 Correlation between the area of active brown adipose tissue at the posterior cervical region, as determined by infrared thermography (Area PCR), at age 12 months, and circulating Meteorin-like (METRNL) concentrations at age 4 months (A) and at age 12 months (B). Grey dots correspond to girls and black dots represent boys. P values are adjusted for ponderal index and breastfeeding. In any case, data retrieved from the transcriptomics database on muscle from infants in the first year of life and elderly adults do not indicate relevant differences in METRNL gene expression (31). Further studies would be required to establish the relative contribution of BAT and muscle to METRNL level changes in the first year of life. Although METRNL levels were not different between girls and boys, the above-mentioned associations of METRNL levels with BAT activity and CXCL14 levels at age 12 months were only maintained in girls. This finding may fit with the previously reported observation that BAT activity at that age is higher in girls than in boys (18) and with prior data reporting sex-based differences in the levels of other putative batokines such as CXCL14 and BMP8B (18, 19). There is extensive evidence of sex-based differences in BAT thermogenic activity due to direct and indirect hormonal mechanisms (32), and it is likely that sex-based differences occur also for the BAT secretome. The distinct prevalence of "classic brown" versus "beige" adipocytes at specific anatomical BAT depots in humans (33,34) may explain why circulating METRNL levels correlate with measures of posterior-cervical, but not supraclavicular, BAT activity. Differential secretory properties of brown-versus-beige cells have not yet been reported (even in experimental models), but a distinct capacity for METRNL secretion by different types of thermogenic adipocytes could account for the preferential association between METRNL levels and posterior-cervical BAT. On the other hand, high METRNL levels in early life, released by BAT and perhaps also by other tissues, may promote a "browned" phenotype in white adipose depots in infants, given the known effects of METRNL in inducing the browning of adipose tissue (8), which would be especially adaptive to the thermally challenging conditions occurring in early infancy. Circulating METRNL levels did not show significant correlations with systemic parameters of endocrine-metabolic status or adiposity in our cohort. This indicates that, although METRNL concentrations appear to be a potential indicator of the extent of BAT activity in one-year-old infants, they are poorly informative about their general endocrine-metabolic status. Possibly, the fact that our cohort involved apparently healthy children exhibiting no major differences in metabolic or adiposity parameters among individuals precluded the identification of meaningful associations. Along these lines, the only significant correlation was the negative association between METRNL levels and abdominal fat in girls, present only at birth.
Although this finding is reminiscent of the negative correlations between METRNL levels and visceral adiposity found in adults with obesity and/or type 2 diabetes (12, 13), the fact that it occurs only at birth indicates the need for future studies to explore a possible involvement of METRNL in the fat accretion occurring during late fetal development, something totally unknown to date. Our study has several limitations, among them the relatively low number of serum samples available for METRNL assessment, the lack of tissue samples for METRNL mRNA gene expression from infants of the studied cohort due to obvious ethical reasons, and the lack of follow-up beyond age 1 year. Moreover, the lack of availability of neonatal samples from additional tissues (e.g. muscle) for gene expression analysis limited our capacity to infer whether, in addition to BAT, other tissue sources may be relevant contributors to systemic METRNL levels in infants, as in rodent models. Moreover, high levels of METRNL in blood from pregnant mothers, which may be caused by the high METRNL gene expression in placenta [data accessible at GEO profile database; GEO accession GDS3113, symbol 197624 (35)], may influence the high levels of METRNL in neonates at birth. On the other hand, further studies would be particularly interesting to assess in early infancy the potential relationship of METRNL levels with those of other secreted factors for which there is experimental evidence of involvement in BAT development, such as fibroblast growth factor (FGF)-9 or FGF21 (36). It should be also mentioned that our data on BAT activity were obtained by infrared methodology, which is minimally invasive but does not allow to assess the actual BAT mass which would require water-fat magnetic resonance imaging or quantification of the proton density fat fraction using magnetic resonance imaging (MRI) (37,38). The strengths of our study include being the first assessment of METRNL levels in humans in early life and the coavailability of a large set of endocrine-metabolic, body composition and BAT activity data. In summary, early life is associated with higher levels of circulating METRNL. The progressive reduction of METRNL concentrations in the first year of life -albeit maintained above those in adults-might reflect overall changes in BAT activity during early development. In addition, circulating METRNL concentrations associate with BAT activity and with CXCL14 levels, particularly in girls, supporting a role for METRNL as a brown adipokine and novel biomarker for BAT activity in early life. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Ethics statement The studies involving human participants were reviewed and approved by Institutional Review Board of the University of Barcelona, Sant Joan de Deú University Hospital, Spain. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. Author contributions CG-B contributed to literature research, design of figures and tables, data collection, data analysis and interpretation. AN-G contributed to the analysis of circulating parameters and interpretation of data. TQ-L performed gene expression and cell culture-based studies. AL-B and FZ contributed to data interpretation, and reviewed/edited the manuscript. LI and FV contributed to study design, data interpretation, reviewed/edited the manuscript and wrote the manuscript. 
All authors contributed to the article and approved the submitted version.
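Referring back to the Statistics subsection of this study, the following is a minimal Python sketch of one way to obtain a correlation adjusted for covariates (here, ponderal index and breastfeeding), by correlating the residuals after regressing both variables of interest on the covariates. All values are hypothetical, and the study's own analyses were run in SPSS, GraphPad Prism and R rather than Python; residual-based partial correlation is only one common way to implement such an adjustment.

# Minimal sketch of a covariate-adjusted (partial) correlation: regress both
# variables of interest on the covariates, then correlate the residuals.
# All values below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40
ponderal_index = rng.normal(26, 2, n)          # covariate 1 (hypothetical)
breastfeeding = rng.integers(0, 2, n)          # covariate 2, 0/1 (hypothetical)
metrnl = rng.normal(100, 15, n)                # e.g. serum METRNL (hypothetical)
cxcl14 = 0.5 * metrnl + rng.normal(0, 10, n)   # e.g. CXCL14 (hypothetical)

def residuals(y, covariates):
    # Ordinary least squares of y on an intercept plus the covariates.
    X = np.column_stack([np.ones(len(y))] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

covs = [ponderal_index, breastfeeding]
r, p = stats.pearsonr(residuals(metrnl, covs), residuals(cxcl14, covs))
print(f"adjusted r = {r:.3f}, p = {p:.4f}")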
5,009.2
2023-03-02T00:00:00.000
[ "Biology", "Medicine" ]
PD-L1 siRNA Theranostics With a Dextran Nanoparticle Highlights the Importance of Nanoparticle Delivery for Effective Tumor PD-L1 Downregulation Purpose The inhibition of immune checkpoints such as programmed cell death ligand-1 (PD-L1/CD274) with antibodies is providing novel opportunities to expose cancer cells to the immune system. Antibody based checkpoint blockade can, however, result in serious autoimmune complications because normal tissues also express immune checkpoints. As sequence-specific gene-silencing agents, the availability of siRNA has significantly expanded the specificity and range of “druggable” targets making them promising agents for precision medicine in cancer. Here, we have demonstrated the ability of a novel biodegradable dextran based theranostic nanoparticle (NP) to deliver siRNA downregulating PD-L1 in tumors. Optical imaging highlighted the importance of NP delivery and accumulation in tumors to achieve effective downregulation with siRNA NPs, and demonstrated low delivery and accumulation in several PD-L1 expressing normal tissues. Methods The dextran scaffold was functionalized with small molecules containing amine groups through acetal bonds. The NP was decorated with a Cy5.5 NIR probe allowing visualization of NP delivery, accumulation, and biodistribution. MDA-MB-231 triple negative human breast cancer cells were inoculated orthotopically or subcutaneously to achieve differences in vascular delivery in the tumors. Molecular characterization of PD-L1 mRNA and protein expression in cancer cells and tumors was performed with qRT-PCR and immunoblot analysis. Results The PD-L1 siRNA dextran NPs effectively downregulated PD-L1 in MDA-MB-231 cells. We identified a significant correlation between NP delivery and accumulation, and the extent of PD-L1 downregulation, with in vivo imaging. The size of the NP of ~ 20 nm allowed delivery through leaky tumor vasculature but not through the vasculature of high PD-L1 expressing normal tissue such as the spleen and lungs. Conclusions Here we have demonstrated, for the first time, the feasibility of downregulating PD-L1 in tumors using siRNA delivered with a biodegradable dextran polymer that was decorated with an imaging reporter. Our data demonstrate the importance of tumor NP delivery and accumulation in achieving effective downregulation, highlighting the importance of imaging in siRNA NP delivery. Effective delivery of these siRNA carrying NPs in the tumor but not in normal tissues may mitigate some of the side-effects of immune checkpoint inhibitors by sparing PD-L1 inhibition in these tissues. INTRODUCTION The identification of immune checkpoints such as PD-L1 is providing exciting new advances in cancer treatments designed to block these checkpoints, exposing cancer cells to the immune system. Antibody based checkpoint blockade can, however, result in serious autoimmune complications such as vitiligo, colitis, and lupus (1). Furthermore, recent reports have identified additional roles of PD-L1 in cancer cells within intracellular compartments that are not accessible by antibodies (2). As sequence-specific gene-silencing agents, siRNA have significantly expanded the specificity and range of "druggable" targets making them promising agents for precision medicine in cancer (3). siRNAs are being actively investigated as molecular-based therapeutic strategies in clinical trials (4) in several diseases including lipid disorders (5), neurological diseases (6,7), cancer (8), and cardiovascular diseases (9). 
Several NPs have been developed for effective siRNA delivery (3). By decorating these NPs with an imaging reporter, it is possible to visualize the delivery and distribution of the NPs in the tumor for theranostics. Imaging these NPs allows an evaluation of the role of NP delivery in downregulation of the target gene. NPs of~20 nm in diameter extravasate into tumors through leaky tumor vasculature, but do not easily extravasate through normal vasculature (10)(11)(12). This is important for most tumors where specific receptors or antigens are not available for targeting (13). Effective delivery of these siRNA carrying NPs within the tumor, but not in normal tissues, would mitigate some of the side-effects of immune checkpoint inhibitors. We previously synthesized an imaging reporter labeled biodegradable dextran NP to use as an efficient cationic polymer carrier for siRNA delivery (14). As a homopolysaccharide of glucose, dextran has been used as a drug carrier in human applications due to its biodegradability, wide availability, and ease of modification (15). For electrostatic binding with siRNA, necessary amine functional groups are conjugated to the dextran platform through acetal bonds. Acetals are attractive for the release of therapeutic cargo through cleavage of the bond under acidic conditions that occur in cancer and inflammation, as well as within endocytosis compartments (16,17). When the NP was delivered within cancer cells, the acetal bond was cleaved under weak acid conditions that was clearly visualized through the use of multiple imaging reporters (14). The rapid cleavage and release of amine groups minimized the proinflammatory side effects of the positively charged amine groups making this cationic nanopolymer a useful carrier for siRNA delivery to downregulate gene expression. The siRNA is bound electrostatically to the amine groups. Transmission electronic microscopy (TEM) imaging identified a diameter of approximately 20 nm. Here we report, for the first time, on the use of this theranostic dextran NP to deliver PD-L1 siRNA in triple negative MDA-MB-231 human breast cancer xenografts. Tumors were inoculated orthotopically or subcutaneously as orthotopic tumors are better vascularized than subcutaneous tumors (18,19), allowing us to evaluate the role of NP delivery in downregulation of PD-L1. Optical imaging was used to visualize the delivery and biodistribution of the NP in vivo. Molecular characterization established the downregulation of PD-L1 message and protein in the tumors. Image analysis of the NP delivery showed a close association between NP delivery and downregulation of PD-L1, highlighting the importance of NP delivery in target downregulation, and the importance of noninvasive imaging in determining NP delivery to establish effectiveness of target gene downregulation. This modified dextran scaffold was mixed with siRNA in reduced serum medium (ThermoFisher, Waltham, MA, USA) for cell studies, or with PBS for in vivo studies, for 20 min immediately prior to adding to cell culture or prior to injecting into mice. All the siRNA dextran NPs contained a ratio of nitrogen atoms in one dextran molecule to phosphor atoms in one siRNA molecule (N/P ratio) equal to 15. Cell Culture Human breast cancer MDA-MB-231 cells were obtained from American Type Culture Collection (ATCC) (Manassas, VA, USA). Fetal bovine serum, penicillin, and streptomycin were from Invitrogen (Carlsbad, CA, USA). 
Cells were maintained in RPMI 1640 (Invitrogen, Grand Island, NY, USA) supplemented with 10% fetal bovine serum in a humidified incubator at 37°C/ 5% CO 2 . Cells were seeded at a density of 400,000 cells per dish in 60 mm dish (for qRT-PCR experiments) or 1,000,000 cells per dish in 100 mm dish (for immunoblots experiments) 24 h prior to the transfection experiment. Cells were incubated for 48 h in RPMI 1640 medium containing siRNA-PD-L1 dextran NPs (concentration of siRNA: 100 pmol/mL, N/P = 15). Cells were treated with NPs for 48 h, because this incubation period resulted in the most effective downregulation of the target genes. All transfections were carried out based on established protocols (20). Mouse Model and Tumor Implantation All in vivo studies were done in compliance with guidelines established by the Institutional Animal Care and Use Committee of the Johns Hopkins University. MDA-MB-231 human breast cancer cells (2 × 10 6 cells/mouse) were inoculated orthotopically in the mammary fat pad (n = 15) or subcutaneously (n = 10) in female severe combined immunodeficient (SCID) mice. Tumors were palpable within two to three weeks after implantation and reached a volume of approximately 300-400 mm 3 within four to five weeks, at which time they were used for the studies. In Vivo RNA Interference Experiments For biodistribution studies, MDA-MB-231 tumor bearing mice were injected intravenously with 200 µl of PD-L1 siRNA dextran NPs (PD-L1siRNA, 5 nmol/mouse/dose; dextran 2.5 mg/mouse/ dose, N/P = 15) through the tail vein. Two different protocols were tested. In group 1, mice received two doses of NPs 48 h apart and were sacrificed at 24 h after the second injection (n = 10, four orthotopic, six subcutaneous). In group 2, mice received two doses 48 h apart, but were sacrificed at 48 h after the second injection (n = 5, orthotopic). A group of 10 mice were injected with an equivalent volume of PBS and served as controls. In Vivo and Ex Vivo Optical Imaging Studies In vivo and ex vivo optical images were acquired with a Pearl ® Trilogy Small Animal Imaging System (LI-COR, Lincoln, NE). Delivery of the NPs was confirmed by imaging the mice in vivo at 48 h after the first injection and either at 24 h (group 1) or at 48 h (group 2) after the second injection. Mice were sacrificed and organs excised for ex vivo quantification. Excised tumors, kidneys, liver, spleen, heart, lungs, and muscle were imaged. Fluorescent intensities in regions of interest (ROIs) were quantified by using Living Image 4.5 software (Caliper, Hopkinton, MA). The tumors were sectioned into two to three slices of~1 mm thickness. Fluorescent signal was acquired from both sides of each slice, and the values acquired for each tumor were averaged. Signal intensities were normalized to the area of the ROI. RNA Isolation and Quantitative Reverse Transcription-PCR Total RNA was isolated from MDA-MB-231 cells grown in 60 mm dish or from frozen MDA-MB-231 tumor tissue by using QIAshredder and RNeasy Mini kit (Qiagen, Valencia, CA, USA) as per the manufacturer's protocol. cDNA was prepared using the iScript cDNA synthesis kit (Bio-Rad, Hercules, CA, USA). cDNA samples were diluted at 1:10 dilution and quantitative real-time PCR was performed using IQ SYBR Green supermix and gene specific primers in the iCycler real-time PCR detection system (Bio-Rad). All primers were designed using either Beacon designer software 7.8 (premier Biosoft, Palo Alto, CA, USA) or publicly available Primer3plus software. 
The expression of target RNA relative to the housekeeping gene hypoxanthine phosphoribosyltransferase 1 (HPRT1) was calculated based on the threshold cycle (Ct) as R = 2^(−ΔΔCt), where ΔCt = Ct of the target gene − Ct of HPRT1, and ΔΔCt = ΔCt of siRNA-treated cells/tumors − ΔCt of untreated cells/tumors. Protein Isolation and Immunoblots Total protein was extracted from MDA-MB-231 cells grown in a 100 mm dish or from frozen MDA-MB-231 tumor tissue by using 1x cracking buffer [100 mmol/L Tris (pH 6.7), 2% glycerol] containing a protease inhibitor (Sigma) at 1:200 dilution. Protein concentration was estimated using the Bradford Bio-Rad protein assay kit (Bio-Rad). Approximately 100 µg of total protein was used in each experiment. Expression levels of PD-L1 were determined by immunoblotting using a rabbit polyclonal antibody against human PD-L1 at 1:1,000 dilution (GeneTex, Irvine, CA). Monoclonal anti-GAPDH antibody (1:50,000 dilution, Sigma-Aldrich) was used as loading control. Proteins were visualized with HRP (horseradish peroxidase)-conjugated secondary antibodies using the SuperSignal West Pico Chemiluminescent substrate kit (Thermo Scientific). Statistical Analysis Statistical analyses were performed using GraphPad Prism 4 software (GraphPad Software, Inc., San Diego, CA, USA). To determine the statistical significance of the quantified data, an unpaired two-tailed Student's t-test was performed. P values ≤0.05 were considered significant unless otherwise stated. RESULTS Effective PD-L1 Downregulation in Cells Following Treatment With PD-L1 siRNA Dextran Nanoparticles MDA-MB-231 triple negative human breast cancer cells were treated with the dextran NP as a carrier for PD-L1 siRNA. Quantitative reverse transcription polymerase chain reaction (qRT-PCR) was performed to measure PD-L1 expression in untreated MDA-MB-231 cells, and in MDA-MB-231 cells treated with scrambled siRNA dextran NPs used as controls or treated with PD-L1 siRNA dextran NPs. Values were normalized to mRNA levels measured in untreated cells. Figure 1A shows changes in mRNA levels in PD-L1 siRNA dextran NP treated cells, compared to scrambled siRNA NP treated cells. Treatment with scrambled siRNA NPs did not alter PD-L1 mRNA levels. A significant decrease of ~50% in PD-L1 mRNA was detected in cells following treatment with PD-L1 siRNA dextran NPs compared to treatment with scrambled siRNA dextran NPs. To determine whether changes in mRNA translated to changes in PD-L1 protein levels, we analyzed proteins obtained from untreated cells, and cells treated with scrambled siRNA dextran NPs or with PD-L1 siRNA dextran NPs, by immunoblotting. As shown in Figure 1B, PD-L1 protein levels were also reduced in cells treated with PD-L1 siRNA dextran NPs. The in vivo study design is outlined in Figure 2A. Representative in vivo images obtained at 24 h or at 48 h following i.v. administration of the second dose are presented in Figure 2B and Figure 2D, respectively. As observed in these representative images, NPs were also detected in the liver in vivo. Corresponding representative ex vivo images obtained at 24 h after the second dose and at 48 h after the second dose are presented in Figure 2C and Figure 2E, respectively. Accumulation of NPs in subcutaneous tumors and different organs obtained at 24 h after the second dose is summarized for six tumors in Figure 4B. A significantly lower tumor retention and a lower tumor to muscle ratio, of approximately 2.3, were observed in subcutaneous tumors compared to orthotopic tumors. Compared to orthotopic tumors, NP accumulation in spleen, heart, lungs, and muscle was significantly lower (Figure 3).
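Referring back to the qRT-PCR quantification described in the methods above, the following is a minimal Python sketch of the 2^(−ΔΔCt) relative-expression calculation; the Ct values shown are hypothetical examples rather than measured data.

# Minimal sketch of relative expression by the 2^(-ddCt) method used for the
# qRT-PCR analysis above. The Ct values below are hypothetical examples.
def relative_expression(ct_target_treated, ct_hprt1_treated,
                        ct_target_control, ct_hprt1_control):
    d_ct_treated = ct_target_treated - ct_hprt1_treated
    d_ct_control = ct_target_control - ct_hprt1_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: PD-L1 siRNA NP treated vs untreated cells (hypothetical Ct values).
fold = relative_expression(ct_target_treated=26.0, ct_hprt1_treated=22.0,
                           ct_target_control=25.0, ct_hprt1_control=22.0)
print(f"PD-L1 expression relative to control: {fold:.2f}")  # 0.50, i.e. ~50% knockdown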
As anticipated, since NPs of this size are cleared by the reticuloendothelial system, we found significantly higher accumulation in the liver. Downregulation of PD-L1 in Tumors The importance of siRNA NP delivery and accumulation in the effectiveness of PD-L1 downregulation is highlighted in Figure 5 for mRNA and Figure 6 for protein expression. A significant correlation (R = -0.650, p = 0.009) between the fold-decrease of PD-L1 mRNA normalized to control values, and the tumor/ muscle fluorescence was observed as shown in Figure 5A demonstrating that tumors with higher NP delivery showed a greater reduction of PD-L1. This dependence was further confirmed when we separated the tumors into the highest 50% and lowest 50% tumor/muscle fluorescence groups. The highest 50% group consisted of seven orthotopic tumors, three from group 1 and four from group 2. The lowest 50% group consisted of six subcutaneous and two orthotopic tumor, all from group 1. We found a significant reduction in fold change of PD-L1 mRNA in the tumors with high tumor/muscle fluorescence compared to control tumors as shown in Figure 5B. In tumors with low tumor/muscle fluorescence, there was no significant difference in PD-L1 mRNA compared to control tumors. We next analyzed the relationship between PD-L1 protein expression and the NP delivery and accumulation in the tumor. PD-L1 proteins levels were represented by the PD-L1/GAPDH ratio measured in immunoblots. The accumulation of PD-L1 siRNA dextran NPs in the tumor was represented by the tumor/ muscle fluorescence ratio. As shown in Figure 6A, we observed that the decrease in PD-L1 protein levels in tumors directly correlated with the tumor/muscle fluorescence ratio (R = -0.718, p = 0.003). Additionally, we observed a significant correlation (R = 0.773, p < 0.001) between the PD-L1 protein levels and the mRNA levels as shown in Figure 6B, confirming that the effective decrease of mRNA translated to an effective decrease of protein in these tumors. DISCUSSION Our purpose in these studies was to demonstrate the ability of the dextran siRNA NPs to downregulate PD-L1 in tumors, and to highlight the importance of siRNA NP delivery and accumulation in achieving effective downregulation. We established that PD-L1 siRNA dextran NPs could downregulate PD-L1 in tumors, provided that effective NP delivery and accumulation were achieved. Based on the biodistribution studies we found that NP accumulation was significantly lower in spleen, heart, lungs, and muscle compared to the tumors. Because NPs of this size are cleared by the reticuloendothelial system (RES), we found significantly higher accumulation in the liver. Renal accumulation of the NPs was likely due to renal clearance of molecules (21). Antibody-based immunotherapies target normal tissues where the immune checkpoint is expressed along with the tumor. This leads to significant side-effects (22,23). As a result, novel approaches to targeting immune checkpoints using small molecules, peptides and macrocycles, are being actively explored (24,25). According to the Human Protein Atlas (26), in addition to cancer cells, PD-L1 is also expressed in healthy lungs, heart, colon, and spleen. Inhibiting PD-L1 in these organs can lead to immune-related pneumonitis, myocarditis, and colitis (27,28). The use of siRNA NPs that primarily accumulate in tumors but not in normal tissues would reduce side-effects associated with immune checkpoint inhibition in normal tissues. 
DISCUSSION Our purpose in these studies was to demonstrate the ability of the dextran siRNA NPs to downregulate PD-L1 in tumors, and to highlight the importance of siRNA NP delivery and accumulation in achieving effective downregulation. We established that PD-L1 siRNA dextran NPs could downregulate PD-L1 in tumors, provided that effective NP delivery and accumulation were achieved. Based on the biodistribution studies we found that NP accumulation was significantly lower in spleen, heart, lungs, and muscle compared to the tumors. Because NPs of this size are cleared by the reticuloendothelial system (RES), we found significantly higher accumulation in the liver. Renal accumulation of the NPs was likely due to renal clearance of molecules (21). Antibody-based immunotherapies target normal tissues where the immune checkpoint is expressed along with the tumor. This leads to significant side-effects (22,23). As a result, novel approaches to targeting immune checkpoints using small molecules, peptides and macrocycles are being actively explored (24,25). According to the Human Protein Atlas (26), in addition to cancer cells, PD-L1 is also expressed in healthy lungs, heart, colon, and spleen. Inhibiting PD-L1 in these organs can lead to immune-related pneumonitis, myocarditis, and colitis (27,28). The use of siRNA NPs that primarily accumulate in tumors but not in normal tissues would reduce the side-effects associated with immune checkpoint inhibition in normal tissues. In addition, the use of siRNA NPs provides the potential to combine multiple siRNAs directed toward different molecular pathways, including multiple immune checkpoints, within a single NP (3). This is especially significant as studies have shown the impact of tumor metabolism on PD-L1 levels (29)(30)(31) and the immune response (32)(33)(34). As a result, metabolic inhibitors of different pathways are being evaluated in clinical trials in combination with immune-checkpoint inhibitors, with promising outcomes (35). This creates the possibility of including siRNAs that downregulate enzymes in metabolic pathways in combination with immune checkpoint siRNA. In addition, PD-L1 has pro-oncogenic roles beyond its traditional functions in immunomodulation, making its downregulation important (36)(37)(38)(39)(40)(41). Several nanopolymers have been evaluated as molecular agents to deliver PD-L1 siRNA in tumors. A polymeric carrier consisting of disulfide-cross-linked polyethylenimine and dermatan sulfate was used to deliver PD-L1 siRNA in vivo in a mouse melanoma model (42). In another study, PLGA NPs simultaneously delivered PD-1 and PD-L1 siRNA, silencing these genes in cytotoxic T lymphocytes and tumor cells in a murine colon model (43). A polymer containing a poly-L-lysine-lipoic acid reduction-sensitive core and a tumor extracellular pH-stimulated shedding polyethylene glycol layer was used to co-deliver PD-L1 siRNA and doxorubicin in a melanoma model with promising results (44). Silencing the expression of PD-L1 in dendritic cells (DCs) and of PD-1 in T cells by siRNA-loaded chitosan-dextran sulfate nanoparticles was recently described (45). Ex vivo evaluation of the DC phenotypic and functional characteristics, and of the T-cell functions following tumor antigen recognition on DCs, showed that PD-L1-silenced DCs presented a potent immunotherapeutic approach in combination with PD-1 siRNA-loaded NPs. Here, we reported, for the first time, the use of a biodegradable dextran nanopolymer as an siRNA carrier to selectively downregulate PD-L1 in a xenograft model of triple-negative human breast cancer. With imaging, we demonstrated that the siRNA NPs successfully accumulated in tumors to downregulate PD-L1 expression. Cancer cells induce neovascularization by co-opting and remodeling existing vasculature (46), by stimulating angiogenesis and inducing sprouting of new blood capillaries from existing ones (47,48). This vasculature is chaotic and heterogeneous (49), contributing significantly to heterogeneities in NP delivery and accumulation. Our imaging data highlight the importance of being able to detect NP delivery within tumors, and the importance of NP delivery and accumulation in NP function. Integrating imaging into siRNA NP delivery will allow optimization of NP structure and tumor manipulation to improve delivery. One of the limitations of this study is the use of fluorescence imaging to detect NP accumulation in the tumor, as this is not translatable to human applications. This limitation can be overcome in future studies by decorating the NP with a radiolabel, or with an MR contrast agent, so that the biodistribution and delivery can be detected in deep-seated tumors and tissues with nuclear imaging or MRI. CONCLUSION Our data demonstrate that, while it is possible to significantly downregulate tumor PD-L1 with siRNA NPs, effective delivery is critically important in achieving effective downregulation. Imaging can play an important role in tracking effective delivery of siRNA NPs in vivo.
Strategies to improve siRNA NP delivery should continue to be an area of major emphasis. The siRNA NP strategy targeting PD-L1 presented here has several advantages over traditional antibody- or pharmacology-based therapies. It can be made tumor-specific to reduce side effects and can be multiplexed with multiple siRNAs targeting other pro-oncogenic pathways. Downregulation with siRNA can also reduce the pro-oncogenic roles of immune checkpoints, by limiting de novo synthesis. The dextran-based PD-L1 siRNA NP showed significant downregulation of PD-L1 expression in tumors where effective delivery was achieved. Because of its biocompatibility and synthesis reproducibility, this siRNA carrier has a clear path for translational applications to achieve effective PD-L1 or other siRNA delivery in patients. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The animal study was reviewed and approved by The Institutional Animal Care and Use Committee of the Johns Hopkins University. AUTHOR CONTRIBUTIONS All authors conceptualized and designed the study. ZC performed the dextran NP synthesis, purification, and characterization. BK, M-FP, YM, and JP-T collected and assembled the data. BK, M-FP, JP-T, YM, ZC, and ZB analyzed and interpreted the data. All authors contributed to the article and approved the submitted version.
4,720.8
2021-02-25T00:00:00.000
[ "Medicine", "Materials Science" ]
Prevalence and risk factors for cervical intraepithelial neoplasia in HIV-infected women in Salvador, Bahia, Brazil ABSTRACT CONTEXT AND OBJECTIVE: The human immunodeficiency virus (HIV) is frequently associated with high-grade intraepithelial neoplasia. Immunosuppression and high HIV viral load are the main risk factors for cervical intraepithelial neoplasia (CIN). The aim of this study was to determine the prevalence of CIN in HIV-infected women in Salvador, Bahia, Brazil, and to describe the risk factors in comparison with non-infected women. DESIGN AND SETTING: Cross-sectional study at the AIDS Reference Center of Bahia and the Gynecological Outpatient Clinic of Fundação Bahiana para o Desenvolvimento da Ciência, in Salvador, Bahia, Brazil. METHODS: Sixty-four HIV-infected women and 76 uninfected women from Salvador were enrolled between May 2006 and May 2007. Associations between CIN and presence of HIV infection, HIV viral load, proportion of T CD4+ lymphocytes and risk factors were evaluated. The independence of the risk factors was investigated using logistic regression. RESULTS: CIN was more prevalent among HIV-infected women than in the control group (26.6% versus 6.6%; P = 0.01). The odds ratio for CIN among HIV-infected women was 3.7 (95% confidence interval, CI: 1.23-11; P = 0.01), after adjusting for the following variables: age at first sexual intercourse, number of partners, number of deliveries and previous history of sexually transmitted disease. CONCLUSION: The prevalence of CIN among HIV-infected women was significantly higher than among women without HIV infection. HIV infection was the most important risk factor associated with the development of cervical lesions. INTRODUCTION It is estimated that over one million women worldwide currently have cervical cancer, mostly undiagnosed. Over the past three decades, cervical cancer rates have fallen in most of the developed world, probably as a result of screening and treatment programs. In contrast, rates in developing countries have risen or remained unchanged. Each year, at least 274,000 women die from invasive cervical cancer, mainly in developing countries, where access to screening services is limited. 1 Brazil has a continental size and marked regional inequalities. Cervical cancer is the third most common cancer among women in Brazil. In the state of Bahia, in the northeastern region of the country, the estimated risk is 13.55 cases per 100,000 women. 2 Infection by human papillomavirus (HPV) is the main cause of cervical intraepithelial neoplasia (CIN), which is a precursor lesion for cervical cancer. 3 In addition to HPV infection, the presence of cofactors such as young age at first sexual intercourse, large number of sexual partners and high-risk sexual behavior of the partner significantly increase the risk of CIN. 3 Long-term use of oral contraceptives, high parity and smoking are also established factors for the development of CIN and cervical cancer among HPV-infected women. 3,4 It is well documented that women infected by the human immunodeficiency virus (HIV) have a higher prevalence of HPV infection and CIN of the uterine cervix. HIV infection is frequently associated with higher grade cervical dysplasia. Among such patients, these lesions have a worse outcome and progress faster than in immunocompetent patients. Such lesions are difficult to treat, with a high recurrence rate. However, it remains unclear whether immune depletion is the only active mechanism connected with HIV infection and CIN. 5,6
There are few studies on the prevalence of CIN among HIV-infected women in Brazil, especially in Salvador, Bahia, a state that has sociodemographic characteristics similar to those of African countries. OBJECTIVE The aim of this study was to report on the prevalence of cervical cytological abnormalities among HIV-infected women and to describe the risk factors associated with CIN in this group in Salvador, Bahia. Study population and procedure Sixty-four HIV-infected women who were referred to the AIDS Reference Center of Bahia (Centro Especializado em Diagnóstico, Assistência e Pesquisa, CEDAP) and 76 women without HIV infection who visited the Gynecological Outpatient Clinic of the Bahia Foundation for Science Development (Fundação Bahiana para o Desenvolvimento da Ciência), in Salvador, Bahia, Brazil, between May 2006 and May 2007, were included in this study. The patients were invited to participate in the study when they came for a routine visit. The inclusion criteria were that the women should be older than 18 years of age, sexually active and serologically positive for HIV (for the HIV group) or negative (for the control group). The exclusion criteria were pregnancy or postpartum status, use of vaginally applied medication over the three days prior to cytological sample collection, sexual intercourse or recent douching over the 48 hours preceding the examination, or vaginal bleeding. The study was approved by the committee for the protection of human subjects of the Gonçalo Moniz Research Center (Centro de Pesquisa Gonçalo Muniz, CPqGM), Oswaldo Cruz Foundation (Fundação Instituto Oswaldo Cruz, FIOCRUZ), Bahia. All patients signed an informed consent form prior to admission. Specimen collection Specimens for Papanicolaou smears were collected from the ectocervix and endocervix using an Ayres spatula and cytobrush, respectively. Squamous cell abnormalities seen in the Papanicolaou smears were classified as low-grade or high-grade squamous intraepithelial lesions, in accordance with the Bethesda System. 7 Colposcopic examinations were performed on all of the women by the gynecologist. If a lesion was indicated by the colposcopy or cytology results, it was further evaluated by means of biopsies, which were examined and classified in accordance with the CIN system. 8 Prior to enrollment, all the women were properly tested for HIV. All the T CD4+ lymphocyte counts and the viral load values were obtained from medical records when the tests were carried out not more than six months prior to the visit. The T CD4+ lymphocyte counts were determined by means of flow cytometry and the HIV viral load by means of the polymerase chain reaction (PCR). Standardized demographic and clinical data were obtained by means of specific questionnaires.
Statistical analysis This was an analytical cross-sectional study with a control group. The cytological and histological samples from the HIV-infected women were compared with those of the control group, using t-tests for continuous variables and chi-square tests or Fisher's exact test for categorical variables. We examined the association between CIN and the presence of HIV and immunosuppressive status (T CD4+ lymphocytes < 500 cells/mm³), along with risk factors for CIN. Unadjusted odds ratios (ORs) were calculated to screen for inclusion in an initial multivariate model. Variables that exhibited at least a moderate association (P = 0.10) with the outcome in the presence of these design variables were considered for inclusion in the final models. The statistical analysis was performed using the SPSS software (Statistical Package for the Social Sciences), version 13.0. RESULTS The patients' mean ages were 30.4 ± 5.5 years for the HIV-infected women and 28.6 ± 5.9 for the uninfected patients. In the HIV group, 34 (56.3%) were treated with highly active antiretroviral therapy (HAART). The mean T CD4+ lymphocyte count was 644 ± 514 cells/mm³ and HIV viral load was 3.9 ± 4.3 log10 copies/ml (Table 1). The cytological smears differed significantly between the groups: squamous intraepithelial lesions (SIL) were more prevalent in HIV-infected patients (16 out of 64) than in the women without HIV infection (6 out of 76) (P = 0.01) (Table 2). CIN was found in 17 patients (26.6%) in the HIV-infected group and in five uninfected women (6.6%) (P = 0.01) (Table 3). Disagreement between cytology and histology was observed in relation to one HIV-infected patient who had normal cytology but presented CIN1 in the histological test, and in relation to one patient in the control group who had low-grade squamous intraepithelial lesion (LSIL), but her histology was normal. Among the HIV-infected women, the risk factors associated with CIN were young age at first sexual intercourse, large mean number of sexual partners, history of previous sexually transmitted diseases and high number of deliveries (Table 1). The odds ratio value for CIN among HIV-infected women was 3.7 (95% confidence interval, CI: 1.23-11; P = 0.01) after adjusting for the following variables: age at first sexual intercourse, number of partners, number of deliveries and history of sexually transmitted diseases (Table 4). In the group of HIV-positive women who underwent antiretroviral therapy, the frequency of CIN was 18.4%, while it was 34.6% in the untreated group, without statistical significance (P = 0.14). Moreover, when the HIV-infected patients were stratified according to the T CD4+ cell count, there were no significant differences in the CIN frequencies (Table 5). DISCUSSION Cervical and vaginal intraepithelial neoplasia occur frequently in immunodeficient women, especially those infected by HIV. 9,10 Recurrence of CIN2 and CIN3 following treatment consisting of large-loop excision of the transformation zone can reach up to 26% among HIV-infected women, compared with 0.6% among women without HIV infection. 11
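To make the adjusted odds-ratio estimation described in the Statistical analysis section concrete, the following is a hedged sketch in Python (the original analysis was performed in SPSS); the data frame is a synthetic stand-in and the variable names are hypothetical.

```python
# Sketch of adjusted odds-ratio estimation by logistic regression (illustrative only).
# One row per woman, with a binary CIN outcome and the covariates used for adjustment.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cin": rng.binomial(1, 0.17, 140),                 # outcome: CIN on histology
    "hiv": np.r_[np.ones(64), np.zeros(76)],           # HIV infection status (64 vs 76 women)
    "age_first_intercourse": rng.normal(17, 2, 140),
    "n_partners": rng.poisson(3, 140),
    "n_deliveries": rng.poisson(2, 140),
    "std_history": rng.binomial(1, 0.3, 140),
})

X = sm.add_constant(df[["hiv", "age_first_intercourse", "n_partners",
                        "n_deliveries", "std_history"]])
model = sm.Logit(df["cin"], X).fit(disp=0)

odds_ratios = np.exp(model.params)        # adjusted odds ratios (exponentiated coefficients)
conf_int = np.exp(model.conf_int())       # 95% confidence intervals
print(pd.concat([odds_ratios, conf_int], axis=1))
```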
This study confirmed that the prevalence of CIN was higher among HIV-infected women than among uninfected women in Salvador, Bahia, Brazil. The power of the sample size in this study was 84% and the alpha error was 0.05. Odds ratios (OR) with 95% confidence interval (95% CI) were used to evaluate the association between HIV infection and CIN. The outcome variable used for this calculation was the presence of cervical intraepithelial neoplasia. One in four (26.6%) of the HIV-infected women screened presented evidence of CIN. This proportion was similar to that described by Levi et al., who found CIN in 18% of their sample of HIV-infected women in São Paulo, Brazil. 5 In another cross-sectional study carried out in the city of Vitória, Brazil, the prevalence of HPV infection among HIV-infected women (56.3%) was higher than in uninfected controls (40.7%). Nevertheless, the prevalence of high-grade SIL was low (0.7%), and there was no difference between HIV-infected women and uninfected women. It was suggested that the low prevalence of high-grade SIL might be due to earlier access to healthcare and prompt diagnosis, thereby avoiding occurrences of high-grade SILs. 12 [15][16][17][18] The presence of high-risk HPV viral load may also be important in predicting high-grade CIN among women with atypical squamous cells or LSIL in their cervical smears. 19 In the presence of high-risk HPV subtypes, cytological abnormalities may be present in up to 44% of HIV-infected women. 20 In the present study, the HIV-infected women had their first sexual intercourse at an earlier age, a higher number of sexual partners and a higher prevalence of STDs. Nevertheless, in the multivariate analysis, HIV infection remained independently associated with CIN. Even in the absence of HIV infection, Silva et al., in Pernambuco, also described earlier age of first sexual intercourse, HPV type and smoking as risk factors for CIN. 4 Parham et al. found that among HIV-infected women, age, CD4+ cell count, and presence of any high-risk HPV type were significantly associated with abnormal cytological smears. In a multivariable logistic regression model, they suggested that the presence of high-risk HPV type was an independent predictor for abnormal cytology (adjusted OR: 12.4; 95% CI: 2.62-58.1; P = 0.02). 21 The association between CIN severity and HIV infection has been clearly demonstrated. 6,16,17 Moreover, the impairment of cell immune response observed during HIV infection is associated with inadequate clearance of HPV infection, which is one of the major etiological agents for CIN. In such patients, persistence of HPV infection is common, and infection by multiple HPV subtypes and spontaneous regression of low-grade lesions are rare. 18 The impact of highly active antiretroviral therapy (HAART) on the prognosis for SIL in HIV-infected women has been analyzed. Antiretroviral treatment reduces the risk of recurrence of cervical lesions, probably by restoring or preserving immune function. 19,20 Levi et al. observed that 31% of the patients with fewer than 200 cells/µl had abnormal cervical smears, in contrast with 13% of those with counts higher than 200 cells/µl. 5 In the present study, no relationship was observed between immunosuppression and lesion severity. The T CD4+ lymphocyte count was not statistically different between patients without CIN and patients with low or high-grade neoplasia. This finding is in accordance with other studies that did not find any association between the T CD4+ lymphocyte count and the severity of CIN. 22,23
Nevertheless, the majority of the HIV-infected patients enrolled in the present study were being treated with HAART and their cell immune response was preserved, with T CD4+ cell counts higher than 500 cells/mm³. However, the immune response against HPV also depends on innate immunity, including macrophages, natural killer cells and cytokine production, which may also be impaired during HIV infection. 24 Therefore, local cervical immunity, especially with regard to reduced numbers or function of dendritic cells, could explain the progression of cervical neoplasia. 25 CONCLUSIONS In summary, the prevalence of CIN in HIV-infected women was significantly higher than in women without HIV infection. The presence of HIV infection was the most important risk factor associated with the development of cervical lesions, probably because HIV patients are exposed to several risk factors associated with CIN. To prevent the lesions from progressing to invasive cancer, gynecological evaluation, cervical cytological tests and colposcopy should be considered to be essential examinations for HIV-infected women, even in the presence of higher T CD4+ cell counts. HIV-infected women should be prioritized in HPV screening programs. Adjusted variables: HIV infection, first sexual intercourse, number of sexual partners, STD history, number of deliveries. OR = odds ratio. P < 0.05. CI = confidence interval. Logistic regression. Table 2. Cytological smears from HIV-infected women and uninfected women. LSIL = low-grade intraepithelial lesion; HSIL = high-grade intraepithelial lesion. P < 0.05 from chi-square test or Fisher's test. Table 3. Histology results among HIV-infected women and uninfected women. Table 4. Adjusted odds ratio for risk of developing cervical intraepithelial neoplasia (CIN). Table 5. Frequencies of histological findings among HIV-infected women stratified according to T CD4+ lymphocyte counts. CIN = cervical intraepithelial neoplasia; SD = standard deviation. P < 0.05 from chi-square test or Fisher's test.
3,123.4
2010-07-01T00:00:00.000
[ "Medicine", "Biology" ]
Big data analytics and Mining for effective Visualization and trends forecasting of Crime data Big Data Analytics (BDA) is an effective procedure for investigating diverse models. In this paper, we apply BDA to crime data, and exploratory data analysis is used for visualization. Data mining is performed and deep learning techniques are utilized. Following statistical analysis and visualization, several interesting facts and patterns are observed in the crime data of San Francisco, Chicago and Philadelphia. The predictive results show that the Prophet model and the Keras stateful LSTM perform well, and better than conventional neural network models, where an appropriate length of the training data is found to be 3 years. These promising results will help police departments and law enforcement agencies to better understand crime and will provide additional insight that will enable them to track activities, predict the likelihood of incidents, deploy resources efficiently and improve the decision-making cycle. Through analyzing big data implementation cases, we sought to understand how big data analytics capabilities transform organizational practices, thereby producing potential benefits. In addition to conceptually defining four big data analytics capabilities, the model gives a strategic view of big data. Introduction In recent years, Big Data Analytics (BDA) has emerged as a growing way to analyze data and extract information and relationships in various application environments. Due to urban sprawl and population growth, crime plays a significant role in our society. However, these developments have also been accompanied by a rise in violent crime and incidents. To address these issues, social scientists, analysts, and law enforcement institutions have devoted much effort to understanding the patterns and characteristics of crime. When it comes to public access, there are many situations that require extensive access to data. As a result, new techniques and technologies need to be designed to study these heterogeneous data and to make them available in more places. Analysis of such large data allows us to better manage events, identify patterns, allocate resources and make fast decisions accordingly. This can also help by increasing our knowledge of past and present conditions, over time ensuring improved security/safety and a better standard of living, in addition to cultural expansion and financial growth. The rapid growth of cloud computing and data-collection technology, from commercial enterprises and research institutes to many governments and organizations, has created a tremendous volume and complexity of data collected and made available to the public. Extracting important statistics and gaining new insights into the patterns in those data sources has become increasingly valuable. BDA can effectively handle the demands of data that may be very large, unstructured, and arriving too quickly for standard methods. As a fast-growing and powerful practice, BDA can help organizations use their data and create new opportunities. In addition, BDA can be used to help organizations plan ahead with more dynamic operations, higher earnings and satisfied customers.
Big Data Big record is a subject that treats approaches to investigate, systematically extract facts from, or in any other case deal with records sets that might be too big or complicated to be handled via way of means of conventional records-processing utility software program. Data with many fields (columns) provide greater statistical strength, whilst records with better complexity (greater attributes or columns) can also additionally cause a better fake discovery rate. Big records evaluation demanding situations include shooting records, records storage, records evaluation, search, sharing, transfer, visualization, querying, updating, facts privacy and records source. The evaluation of huge records offers demanding situations in sampling, and for this reason formerly taking into consideration simplest observations and sampling. Therefore, huge records frequently consist of records with sizes that exceed the potential of conventional software program to procedure inside a suitable time and fee. The idea of huge records has been around for years; maximum corporations now apprehend that in the event that they seize all of the records that streams into their corporations, they are able to follow analytics and get the widespread fee from it. But even withinside the 1950s, a long time earlier than everybody uttered the term "huge records," corporations had been the usage of fundamental analytics (basic numbers in a spreadsheet that had been manually examined) to find insights and trends. The new advantages that huge records analytics brings to the table, however, are pace and efficiency. Whereas some years in the past an enterprise might have accumulated facts, run analytics and unearthed facts that might be used for destiny choices, nowadays that enterprise can discover insights for fast choices. The cap potential to paintings fasterand live agile -offers corporations an aggressive fact they didn't have earlier than. Cost reduction. Big records technology which includes Hadoop and cloud-primarily based totally analytics carries widespread price benefits in relation to storing big quantities of recordsplus they are able to discover greater green approaches of doing enterprise. Faster, higher selection making. With the velocity of Hadoop and in-reminiscence analytics, mixed with the cap potential to investigate new reasserts of records, corporations are cabin a position to investigate facts immediatelyand make choices primarily based totally on what they've learned. New merchandise and services. With the cap potential to gauge purchaser wishes and pleasure thru analytics comes the strength to present clients what they want. Davenport factors out that with huge records analytics, greater groups are growing new merchandise to fulfill clients' wishes. Data Mining Data mining is a manner of coming across styles in big records sets involving strategies on the intersection of the device getting to know, statistics, and systems. Data mining is an interdisciplinary subfield of laptop science and statistics with an ordinary purpose to extract facts (with wise strategies) from records set and remodel the facts right into an understandable shape for further use. Beside the uncooked assessment step, it additionally incorporates data set and records control angles, model and induction contemplations, intriguing quality measurements, intricacy thought, postpreparing of decided constructions, representation, and internet refreshing. 
The term "records mining" is a misnomer, due to the fact the purpose is the extraction of styles and information from big quantities of records, now no longer the extraction (mining) of records itself. It is also a buzzword and is regularly carried out to any shape of big-scale records or facts. The term "records mining" is a misnomer, due to the fact, the purpose is the extraction of styles and information from big quantities of records, now no longer the extraction (mining) of records itself. It is also a buzzword and is regularly carried out to any shape of big-scale records or facts processing (collection, extraction, warehousing, evaluation, and statistics) in addition to any software of laptop choice assist system, including 3 synthetic intelligence (e.g., device getting to know) and enterprise intelligence. The book Data mining: Practical device getting to know gear and strategies with Java (which covers the whole device getting to know the material) changed into at first to be named just Practical device getting to know, and the term records mining changed into most effective delivered for advertising and marketing reasons. Often the greater trendy terms (big scale) records evaluation and analytics-or, whilst referring to real strategies, synthetic intelligence and device getting to know-are greater appropriate. Data mining includes exploring and reading big blocks of facts to glean meaningful styles and trends. It may be utilized in quite a few ways, consisting of database advertising and marketing, credit score hazard control, fraud detection, junk mail Email filtering, or even to parent the sentiment or opinion of users. The records mining manner breaks down into 5 steps. First, companies gather records and cargo them into their records warehouses. Next, they keep and manipulate the records, both on in-residence servers or the cloud. Business analysts, control teams and facts generation experts get admission to the records and decide how they need to arrange it. Then, software program types the records primarily based totally on the person's results, and finally, the end-person affords the records in an easy-to-share format, consisting of a graph or table. Data Visualization Data visualization (frequently abbreviated information viz) is an interdisciplinary discipline that offers the image illustration of information. It is an especially green manner of speaking while the information is served as an example of a Time Series. From an academic factor of view, this illustration may be taken into consideration as a mapping among the authentic information (generally numerical) and image elements (as an example, traces or factors in a chart). The planning decides how the credits of those components territory in accordance with the data. In this light, a bar outline is a planning of the length of a bar to a meaning of a variable. Since the picture design of the planning can unfavorably affect the clearness of an outline planning is a middle competency of Data perception.Data visualization has its roots withinside the discipline of Statistics and is consequently normally taken into consideration by a department of Descriptive Statistics. However, due to the fact, each layout abilities and statistical and computing abilities are required to visualize efficiently, it's miles argued via way of means of a few authors that it's miles each an Art and a Science. 
To talk about statistics genuinely and successfully, information visualization makes use of statistical photos, plots, statistics photos and different tools. Numerical information can be encoded with the usage of dots, traces, or bars, to visually talk a quantitative message. Related Work In this paper City-scale traffic speed prediction provides significant data foundation for Intelligent Transportation System (ITS), which enriches commuters with up-to-date information about traffic condition. However, predicting on-road vehicle speed accurately is challenging, as the speed of vehicle on urban road is affected by various types of factors. These factors can be categorized into three main aspects, which are temporal, spatial, and other latent information. In this paper, we propose a novel spatio-temporal model named L-U-Net based on U-Net as well as Long Short-Term Memory (LSTM) architecture, and develop an effective speed prediction model, which is capable of forecasting city-scale traffic conditions. It is worth noting that our model can avoid the high complexity and uncertainty of subjective features extraction, and can be easily extended to solve other spatio-temporal prediction problems such as flow prediction. The experimental results demonstrate that the prediction model we proposed can forecast urban traffic speed effectively. A proposal of a novel spatio-temporal prediction model named L-U-Net by utilizing LSTM neural network combined with U-Net architecture. The model can not only capture features both in temporal and spatial dimension for traffic speed prediction, but also extract features without extensive features engineering. Our method can reduce the workload of feature engineering effectively, and we have demonstrated that it can predict traffic conditions in future well across the real dataset. In this paper E-Learning is a response to the new educational needs of society and an important development in Information and Communication Technologies (ICT) because it represents the future of the teaching and learning processes. However, this trend presents many challenges, such as the processing of online forums which generate a huge number of messages with an unordered structure and a great variety of topics. These forums provide an excellent platform for learning and connecting students of a subject but the difficulty of following and searching the vast volume of information that they generate may be counterproductive. [2] In this paper This paper presents an approach for the interactive visualization, exploration and interpretation of large multivariate time series. Interesting patterns in such datasets usually appear as periodic or recurrent behavior often caused by the interaction between variables. To identify such patterns, we summarize the data as conceptual states, modeling temporal dynamics as transitions between the states. This representation can visualize large datasets with potentially billions of examples. We extend the representation to multiple spatial granularities allowing the user to find patterns on multiple scales. [3] In this paper A major information examination empowered change model dependent on trainingbased view is created, which uncovers the causal connections among enormous information investigation capacities, IT-empowered change rehearses, advantage measurements and business esteems. This model was then tried in a medical care setting. 
By analyzing big data implementation cases, we sought to see how big data analytics capabilities change organizational practices, thereby creating expected benefits. In addition to conceptually characterizing four major big data analytics capabilities, the model offers an essential perspective on big data analysis. Three significant path-to-value chains were identified for healthcare organizations by applying the model, which gives practical insights for managers. Big data analysis has also been applied to forecasting tourism destination arrivals with a Vector Autoregression model. [4] Proposed Methodology Big Data Analytics (BDA) is becoming an emerging approach for analyzing data and extracting information and relations in a wide range of application areas. With regard to public policy, however, there are numerous difficulties in managing the large amount of available data. Therefore, new strategies and technologies should be devised to examine this heterogeneous and multi-sourced data. Big data analytics (BDA) is applied and focused in the arenas of data science and software engineering, covering the conception of big data in BDA, its analysis and the related difficulties in relating them, as well as research gaps and challenges of crime data mining. [5] In addition to that, this work provides knowledge about data mining for finding the patterns and trends in crime, to be used appropriately and to be of assistance for newcomers in the analysis of crime data mining. As a consequence, the management and analysis of such huge data are extremely difficult and complex. To improve the effectiveness of crime detection, it is important to choose the data mining techniques appropriately. Among the different data mining applications, particularly applications applied to address crime, the Apriori algorithm is used to locate effective association rules and to reduce the amount of processing time. [6] Furthermore, a few techniques have been developed to analyze the relationship between two itemsets more adequately, for example the mutual information concept, although that algorithm increased the amount of time required. Figure 1 shows the architecture diagram. Data Pre-Processing Before applying any algorithms on our datasets, a series of pre-processing steps is performed for data conditioning, as presented below. Time is discretized into several segments to allow time series forecasting of the general trend within the data. For some missing coordinate attributes in the Chicago and Philadelphia datasets, we imputed random values sampled from the non-missing values, computed their mean, and then replaced the missing ones [7]. The timestamp indicates the date and time of occurrence of every crime; we derived these attributes into the features Month (1-12), Day (1-31), Hour (0-23), and Minute (0-59). We also ignore a few features that are unneeded, such as the incident number. Visualization Considering the geographic nature of the crime incidents, the data set was used for data visualization, where crime incidents are clustered by their location attributes such as latitude and longitude. The blue marker represents the distribution of police stations in every city, while the round markers with numbers indicate crime hotspots and the related number of incidents.
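A minimal sketch of the pre-processing described above (timestamp discretization and coordinate imputation) is given below; the file name and column names are hypothetical, not those of the actual city datasets.

```python
# Minimal sketch of the pre-processing steps described above (hypothetical column names).
import numpy as np
import pandas as pd

df = pd.read_csv("crime_incidents.csv")           # assumed input file with a timestamp column

# Derive Month/Day/Hour/Minute features from the incident timestamp
ts = pd.to_datetime(df["timestamp"])
df["Month"], df["Day"] = ts.dt.month, ts.dt.day
df["Hour"], df["Minute"] = ts.dt.hour, ts.dt.minute

# Impute missing coordinates by sampling from the observed (non-missing) values
rng = np.random.default_rng(0)
for col in ["latitude", "longitude"]:
    missing = df[col].isna()
    observed = df.loc[~missing, col].to_numpy()
    df.loc[missing, col] = rng.choice(observed, size=missing.sum())

# Drop attributes that are not needed for modelling (e.g., the incident number)
df = df.drop(columns=["incident_num"], errors="ignore")
```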
Experimental Setup We have examined deep learning algorithms and time series forecasting models to predict crime trends. For performance evaluation, we used the Root Mean Square Error (RMSE) and the Spearman correlation. To train our models for forecasting trends, we first summed the number of crime incidents per day and then converted these data into a "tibbletime" format; afterwards, we split the data into training and testing sets, and for the tuning procedure we set 1 year of data as the validation set. We evaluated the performance of the forecasting models while varying the number of training years from 1 to 10, and the results are summarized in Figures 2-5: more training data do not necessarily lead to better results, yet too little training data also fails to deliver good results. The ideal time frame for crime trend forecasting is 3 years, where the RMSE is the lowest and the Spearman correlation is the highest. The results also showed that the Prophet model and the LSTM model performed better compared to standard neural network models; although the neural network appears to have a lower RMSE, the correlation between the predicted values and the actual ones is low. The visualization of the trends also supports this conclusion. Besides, we also evaluated the effects of some key parameters in the best two techniques, the Prophet and LSTM models. For the Prophet model, after fitting we can obtain the trends and seasonality of the dataset, but for the holiday components we need to enter the values manually. We summarized the top 10 dates with the most and the fewest crime incidents, respectively, and accordingly we set these 20 dates as holidays. Furthermore, we compared different changepoint ranges, referring to the proportion of history in which trend changepoints will be estimated. Conclusion In this paper, a series of state-of-the-art big data analytics and visualization techniques were used to analyze crime big data from three US cities, which allowed us to identify patterns and obtain trends. By investigating the Prophet model, a neural network model, and the deep learning algorithm LSTM, we found that both the Prophet model and the LSTM algorithm perform better compared to traditional neural network models. We also found the ideal length of the training sample to be 3 years, to accomplish the best forecast of trends with respect to the RMSE and Spearman correlation. Optimal parameters for the Prophet and the LSTM models were also determined. The additional results explained earlier will give new insights into crime trends and will help both police departments and law enforcement agencies in their decision making. In the future, we intend to finish our ongoing platform for general big data analytics, which will be capable of processing different types of data for a wide range of applications. We also plan to combine multivariate visualization, graph mining techniques and fine-grained spatial analysis to uncover more potential patterns and trends within these datasets. Also, we intend to conduct more realistic case studies to further assess the effectiveness and scalability of the various models in our framework.
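As an illustration of the forecasting and evaluation workflow described in the Experimental Setup above, the following is a hedged sketch, not the authors' code; the file name, column names, date threshold and parameter value are assumptions.

```python
# Illustrative sketch of daily crime-count forecasting with Prophet and evaluation by
# RMSE and Spearman correlation. File/column names and dates are hypothetical.
import pandas as pd
from prophet import Prophet
from scipy.stats import spearmanr
from sklearn.metrics import mean_squared_error

incidents = pd.read_csv("crime_incidents.csv", parse_dates=["timestamp"])
daily = (incidents["timestamp"].dt.floor("D").value_counts().sort_index()
         .rename_axis("ds").reset_index(name="y"))      # one row per day: date, incident count

train = daily[daily["ds"] < "2018-01-01"]   # e.g., a 3-year training window
test = daily[daily["ds"] >= "2018-01-01"]

model = Prophet(changepoint_range=0.9)      # changepoint range is one of the tuned parameters
model.fit(train)
forecast = model.predict(test[["ds"]])

rmse = mean_squared_error(test["y"], forecast["yhat"]) ** 0.5
rho, _ = spearmanr(test["y"], forecast["yhat"])
print(f"RMSE = {rmse:.1f}, Spearman rho = {rho:.2f}")
```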
4,326
2021-01-01T00:00:00.000
[ "Computer Science" ]
A Novel Deep Blue LE-Dominated HLCT Excited State Design Strategy and Material for OLED Deep blue luminescent materials play a crucial role in the organic light-emitting diodes (OLEDs). In this work, a novel deep blue molecule based on hybridized local and charge-transfer (HLCT) excited state was reported with the emission wavelength of 423 nm. The OLED based on this material achieved high maximum external quantum efficiency (EQE) of 4% with good color purity. The results revealed that the locally-excited (LE)-dominated HLCT excited state had obvious advantages in short wavelength and narrow spectrum emission. What is more, the experimental and theoretical combination was used to describe the excited state characteristic and to understand photophysical property. Introduction The organic light-emitting materials play an important role in flat-panel display, solid-state lighting, photodynamic therapy, and so on [1][2][3][4]. These efficient fluorescent molecules often have large π-conjugation plane which could be applied in biological probes and sensors because of good biocompatibility of organic compounds [5,6]. According to the spin-statistics rule, the 75% spin-forbidden triplet excitons will be wasted from the lowest triplet state (T 1 ) to the ground state (S 0 ) through non-radiative transitions [7][8][9][10]. In order to maximize the utilization of a triplet exciton, a large number of materials were reported including the thermally-activated delayed fluorescence (TADF) materials which were considered to be the most popular of a new generation organic light-emitting diode (OLED) materials [11][12][13]. However, the TADF can promote exciton utilizing efficiency (EUE) by forming a charge-transfer (CT) state, which will cause seriously red-shifted emission at the same time. Further, the separated highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) always depends on a twisted donor-acceptor structure or strong CT, leading to broadened electroluminescence (EL) spectra. Lately, Hatakeyama and co-workers reported the B,O-doped polycyclic aromatic molecules with a 1,4-oxaborine substructure identified as multi-resonance TADF (MR-TADF) materials [14][15][16][17][18]. The MR-TADF materials ensured the color purity of emissions under the premise of realizing TADF. However, all of these molecules suffered from an extremely serious loss of raw materials in the synthesis process, causing obstacles to industrial manufacturing. Targeting the above-mentioned problems, the HLCT mechanism had attracted enormous attention because of its advantages [19][20][21]. The HLCT state was hybridized by the LE and CT state when the non-adiabatic LE and CT states had small energy gaps which can cause big mixing coefficients [22]. According to Figure S1 and Table S1, the S 1 (3.8019 eV) was close to S 2 (3.9945 eV), indicating a small energy gap in the LE and CT state, which was of benefit to form HLCT. The small energy difference of S 1 -T 5 was tiny enough (∆E S1T5 was calculated to be 0.0768 eV). Based on the tiny ∆E S1T5 , a "hot exciton" channel can be constructed. With the help of the "hot exciton" channel, T 5 exciton could be converted to singlet via hRISC with no delayed lifetime of the excited state (less than 10 ns) [23]. Besides, the moderate overlap degree of "hole" and "particle" was different from the typical CT state or LE state. 
The HLCT materials had unequivocal design principles, aiming to promote the hybridization in different proportions between LE and CT states [24][25][26]. Ma and Hu et al. illustrated that the LE-state was responsible for the high radiative transition rate and high luminous efficiency, while the CT-state was provided with a small splitting and efficient RISC channel, a high PL efficiency and a high EUE at the same time. The HLCT state can achieve high EUE because of the RISC process occurring at high-lying energy levels [27]. As for LE-dominated HLCT, the large weight of the LE state always equals to small reorganization energy. The small reorganization energy represents the suppressed vibrational rotation, resulting in narrow spectrum and deep blue emission. In addition, the HLCT molecules have simple molecular construction, which decreases the difficulty of synthesis and paves the way to mass production. The 1H-phenanthro[9,10-d]imidazole (PI) is a classic group of HLCT benefiting from its bipolar character which has been systematically investigated by our group in recent years [28][29][30], but the narrow spectrum emission potential of PI is rarely involved. As we all know, the imidazole ring of PI is usually modified in two directions to modulate the new excited state. The substituent can be introduced into the system in a horizontal and vertical direction on the C-position and N-position, respectively. It is also noteworthy that the mild donor and acceptor are important factors to balance the LE and CT components. For example, the triphenylamine (TPA) and cyano group were selected because of the mild electron-donating/withdrawing ability and smaller steric hindrance. In 2015, TBPMCN achieved compatible coexistence between a high photoluminescence quantum yield (PLQY) and high EUE via a quasi-equivalent HLCT state, which was reported by our group [31]. In this work, N-TBPMCN was designed and synthesized, which has a similar structure with TBPMCN. The substituents in C-position and N-position of N-TBPMCN were changed to form LE-dominated HLCT (Scheme 1). As the isomer of TBPMCN, N-TBPMCN demonstrated a deep blue emission and narrowed full width at half maximum (FWHM). This work provided us with a novel molecular design for an LE-dominated HLCT excited state, which was beneficial to realize the electroluminescence with short wavelength and narrow spectrum emission.
Molecular Design The previous works had shown that the 1H-phenanthro[9,10-d]imidazole (PI) was an excellent candidate to form short wavelength emitting materials [32,33]. Especially for HLCT materials, the PI with electron-withdrawing sp2 N-atoms and electron-donating sp3 N-atoms was used as the bipolar main body with balanced CT and LE. The cyano group was a common electron acceptor to HLCT. Meanwhile, the cyano group with sp-hybridized N-atoms expanded the conjugation and enhanced the oscillator strength in a vertical direction. Based on these advantages, TBPMCN achieved efficient blue emission successfully because of the HLCT state. In this work, we optimized the combination mode of PI, TPA and cyano group to generate LE-dominated HLCT N-TBPMCN (Scheme 2) in order to realize good color purity and deep blue emission. The ground state geometry was optimized with the Gaussian 09 D.01 package
by the density functional theory (DFT) method at the M062X/6-31g (d, p) level [34]. According to the geometry of the ground state (Figure 1a), the twisting angle of the imidazole ring and the benzene ring in C-position was 25.4°. Because of the small twisting, the C-position can be used to introduce the LE state and increase oscillator strength. As for the N-position, the peripheral hydrogen atoms caused steric hindrance leading to a more twisted angle of 74.7°. Generally speaking, the introduction of the acceptor in the N-position was conducive to induce the CT component, because of the relatively large twisting. However, if we can use the electron donor TPA to replace the acceptor in the N-position, the LE component would be enhanced. In addition, the biphenyl group caused intensive conjugate effects, resulting in a portion of the LE state. The conjugated structure can neutralize the influence of the CT state to some extent. Figure 1: the NTO of the S1 excited state of N-TBPMCN. The molecular geometry of N-TBPMCN was optimized using the M062X/6-31g (d, p) method, and the excited state properties were calculated using the td-M062X/6-31g (d, p) method.
The natural transition orbital (NTO) was calculated with time-dependent DFT (TDDFT) using the TD-M062X/6-31g (d, p) method, which was used to describe the excited state properties, as shown in Figure 1b. In principle, the common CT state had a totally separated "hole" and "particle"; on the contrary, the LE demonstrated a positive coincident distribution of "hole" and "particle". On the one hand, the transition from TPA/biphenyl to TPA/biphenyl and the transition from PI to itself can be identified as an obvious LE feature. On the other hand, the benzonitrile played an important role in the D-A structure as electron acceptor, which induced the CT state to some degree. Besides, the transition from TPA/biphenyl to PI can also be observed, representing the CT component of an excited state. In the meantime, the N-TBPMCN had a higher S1 (3.9775 eV) than TBPMCN (3.8019 eV), and this high value indicated the LE-dominated HLCT state leading to a short wavelength and narrow spectrum emission. The S1 oscillator strength of N-TBPMCN was calculated to be 0.7572, which can be expected to possess high PLQY. Photophysical Characterizations and Excited State Properties The absorption and emission of N-TBPMCN were investigated in different solvents. The absorption spectra demonstrated b-band absorption of the benzene ring and HLCT absorption at around 254 nm and 351 nm, respectively. The emission presented a fine structure in hexane, and with increasing solvent polarity the fine structure disappeared, as shown in Figure 2a and Figure S2. In hexane, the emission wavelength was shorter than 400 nm with a PLQY of 55%. The maximum PLQY was achieved in ether (77%) because of the well-balanced LE and CT of the HLCT state. Further, in acetonitrile, the PLQY dropped down to 1% with the wavelength of 425 nm, indicating that a high-polarity environment can damage the hybridization of LE and CT. The solvation effect of N-TBPMCN was obviously distinct from TADF materials, because of its moderate degree of red-shift. The Lippert-Mataga solvatochromic model revealed the relationship of solvent polarity and Stokes shift, which was used as a common method to estimate the dipole moment of the excited state [35]. The Stokes shift can be calculated from the absorption and emission peaks in different solvents, as shown in Figure 2b, Figure S2 and Table S2.
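For reference, the Lippert-Mataga analysis referred to above is usually written in the following standard textbook form, given here for orientation rather than as the paper's own notation:

```latex
\nu_{a} - \nu_{f} = \frac{2\,(\mu_{e} - \mu_{g})^{2}}{h c\, a_{0}^{3}}\, f(\varepsilon, n) + \mathrm{const},
\qquad
f(\varepsilon, n) = \frac{\varepsilon - 1}{2\varepsilon + 1} - \frac{n^{2} - 1}{2 n^{2} + 1}
```

Here ν_a − ν_f is the Stokes shift, μ_e and μ_g are the excited- and ground-state dipole moments, a_0 is the Onsager cavity radius, and ε and n are the dielectric constant and refractive index of the solvent; plotting the Stokes shift against the orientation polarizability f(ε, n) yields μ_e − μ_g from the slope.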
The Lippert-Mataga solvatochromic model revealed the relationship between solvent polarity and Stokes shift and was used as a common method to estimate the dipole moment of the excited state [35]. The Stokes shifts were calculated from the absorption and emission peaks in the different solvents, as shown in Figure 2b, Figure S2 and Table S2. As in solution, the non-doped film of N-TBPMCN also emitted in the deep-blue region, with a wavelength of 426 nm and a measured lifetime of 8.5 ns (Figure 3a,b). For comparison, we chose N-TBPMCN as the guest material and polymethyl methacrylate (PMMA) as the host material to prepare doped films. The doped film showed a bluer emission, with a wavelength of 406 nm and a shorter lifetime of 2.3 ns (Figure 3a,b). Besides, the lifetimes of N-TBPMCN in the different solutions were all below 10 ns. These non-delayed lifetimes indicate an obvious HLCT excited-state feature, which ensures a fast radiative transition rate and benefits the high luminescence efficiency.
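As a rough illustration of how a Lippert-Mataga analysis like the one above is typically carried out, the sketch below fits the Stokes shift against the solvent orientation polarizability and converts the slope into a change of dipole moment; the solvent parameters and Stokes shifts are hypothetical placeholders (not the data of Table S2), and the Onsager cavity radius is an assumed value.

```python
import numpy as np

# Lippert-Mataga relation: Stokes shift (cm^-1) = (2 * dmu^2 / (h * c * a^3)) * f + const,
# where f(eps, n) is the solvent orientation polarizability, dmu the change of dipole
# moment on excitation and a the Onsager cavity radius (CGS units used below).
def orientation_polarizability(eps, n):
    return (eps - 1) / (2 * eps + 1) - (n**2 - 1) / (2 * n**2 + 1)

# Hypothetical placeholder data: (dielectric constant, refractive index, Stokes shift / cm^-1)
solvents = {
    "hexane":       (1.88, 1.375, 3200.0),
    "ether":        (4.34, 1.352, 4800.0),
    "THF":          (7.58, 1.407, 5600.0),
    "acetonitrile": (35.9, 1.344, 7400.0),
}

f = np.array([orientation_polarizability(e, n) for e, n, _ in solvents.values()])
stokes = np.array([s for _, _, s in solvents.values()])

slope, intercept = np.polyfit(f, stokes, 1)     # slope in cm^-1

h = 6.626e-27        # erg s
c = 2.998e10         # cm/s
a = 5.0e-8           # cm (assumed 5 angstrom Onsager radius)
dmu_esu = np.sqrt(slope * h * c * a**3 / 2.0)   # esu*cm
dmu_debye = dmu_esu / 1e-18                     # 1 D = 1e-18 esu*cm

print(f"slope = {slope:.0f} cm^-1, estimated dipole change ~ {dmu_debye:.1f} D")
```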
Thermal Properties and Electrochemical Properties
The thermal stability of a material is a significant factor affecting electroluminescence (EL) performance. The glass transition temperature (Tg) and the decomposition temperature at 5% weight loss (Td) were measured to be 218 °C and 475 °C by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA), respectively. As shown in Figure 4a and Table 1, the DSC and TGA results demonstrated the good thermal stability of N-TBPMCN, which can therefore be evaporated as an emitting layer during device fabrication.
The electrochemical properties reveal the carrier injection and transport behavior, which is important for designing the OLED device structure. The highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies were calculated from the electrochemical data according to Equation (1) [36,37]. According to Figure 4b and Table 1, the HOMO and LUMO were calculated to be −4.86 eV and −2.77 eV, respectively. Based on the HOMO and LUMO energy levels of N-TBPMCN, the OLED structure can be optimized for the best EL performance.
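As a general illustration of how such cyclic-voltammetry-based estimates are commonly made, a widely used empirical form referenced to the ferrocene/ferrocenium couple is shown below; this is an assumed, generic relation and not necessarily the exact Equation (1) used in this work.

```latex
% Assumed, commonly used ferrocene-referenced estimate (generic form, not necessarily Equation (1)):
E_{\mathrm{HOMO}} = -\bigl(E^{\mathrm{ox}}_{\mathrm{onset,\,vs\,Fc/Fc^{+}}} + 4.8\bigr)\,\mathrm{eV},
\qquad
E_{\mathrm{LUMO}} = -\bigl(E^{\mathrm{red}}_{\mathrm{onset,\,vs\,Fc/Fc^{+}}} + 4.8\bigr)\,\mathrm{eV}
```

When the reduction onset is not accessible, the LUMO is often estimated instead as E_LUMO = E_HOMO + E_g using the optical gap E_g; again, this reflects general practice rather than the authors' specific procedure.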
OLED Performance
Generally, it is necessary to optimize a proper OLED structure to improve device performance [38-42]. The optimal OLED structure was fabricated as ITO/HATCN (5 nm)/TAPC (25 nm)/TCTA (10 nm)/emitters (20 nm)/TPBI (35 nm)/LiF (8 nm)/Al (500 nm). All of these OLEDs had a relatively low turn-on voltage of 3.3 V, which is attributed to the suitable device structure with balanced carrier transport despite the different doping concentrations. However, the non-doped OLED of N-TBPMCN suffered from serious aggregation-caused quenching (ACQ) [43,44], leading to a low EQE of 1.4%. To alleviate the ACQ effect, doped OLEDs of N-TBPMCN were also fabricated with CBP as the host material in the emitting layer. As a result, CBP doped with N-TBPMCN at 15 wt% was the most appropriate doping ratio, and this doped OLED harvested the best performance: an EQE of 4.0%, an EL wavelength of 423 nm and an FWHM of 50 nm (Table 2). Obviously, the doped OLED demonstrated a shorter emission wavelength, a narrower FWHM and a higher EQE than the non-doped OLED. In Figures 3a and 5a, the PL spectrum of the N-TBPMCN film shows a greater similarity to the EL spectrum of the doped OLED than to that of the non-doped OLED (Figure S3a), indicating that the host material can effectively suppress excimer formation and concentration quenching.
According to previous reports, the deep-blue luminescent PPI and mTPA-PPI have structures similar to N-TBPMCN (shown in Figure S4) [26]. Compared with PPI (EQEmax = 1.86%) and mTPA-PPI (EQEmax = 3.33%), N-TBPMCN provided a higher EQE, demonstrating that the benzonitrile plays an important role in the D-A system. Although the maximum emission wavelength and FWHM were similar between N-TBPMCN (Figure 5a) and TBPMCN (Figure S5a) in non-doped OLEDs, the doped OLED of N-TBPMCN displayed a shorter wavelength and better color purity than that of TBPMCN (EL = 452 nm, FWHM = 93 nm, Figure S5b). These results reveal the advantages and potential of the LE-dominated HLCT state for high-efficiency, high-color-purity, deep-blue OLEDs.
Synthesis of Materials
All of the chemical reagents and solvents were purchased from Acros, Energy Chemical and Changchun Sanbang and were used directly without further purification.
The 1H NMR and 13C NMR spectra of the intermediate and target products are shown in Figure S6. The synthesis details are as follows:
4-[1-(4-Bromo-phenyl)-1H-phenanthro[9,10-d]imidazol-2-yl]-benzonitrile (N-BrPMCN)
4-Cyanobenzaldehyde (10 mmol, 1.3 g), 9,10-phenanthrenequinone (10 mmol, 2.1 g), 4-bromoaniline (40 mmol, 6.9 g) and ammonium acetate (50 mmol, 3.7 g) were added to a 250 mL round-bottom flask, and 20 mL of CH3COOH was used as the solvent. After degassing three times, the temperature was set to 120 °C and the mixture was stirred for 3 h. The mixture was then cooled down and the solid was separated by vacuum filtration. The filter cake was washed with CH3COOH and water in a 2:1 volume ratio to remove soluble impurities. After that, the filter cake was dissolved in CH2Cl2 and dried over a molecular sieve. The crude product was purified by silica column chromatography, eluting with CH2Cl2 and petroleum ether in a 1:1 volume ratio (Rf = 0.3 in CH2Cl2:petroleum ether = 1:1), and the pure white product was obtained.
N-BrPMCN (5 mmol, 2.4 g), 4-(diphenylamino)phenylboronic acid (7 mmol, 2.0 g) and K2CO3 (30 mmol, 4.2 g) were added to a mixed solvent of H2O (8 mL), THF (5 mL) and toluene (10 mL) and degassed; Pd(PPh3)4 (0.22 mmol, 253 mg) was then added under a nitrogen atmosphere as the catalyst, the temperature was set to 90 °C, and the mixture was stirred for 3 h. The reaction solution was cooled to room temperature and extracted with CH2Cl2, and the organic layer was dried with MgSO4. The crude product was purified by silica gel column chromatography, eluting with CH2Cl2 (Rf = 0.6 in CH2Cl2), and the pure yellow product was obtained (2.6 g, yield = 78%, 638.25 g/mol). m/z = 638.69 (M + H+).
Photophysical Measurements
The solutions were diluted to 1 × 10^−5 mol·L^−1 to ensure monodispersion of the molecules. All solutions were placed in a quartz cell, and solid samples were placed on a quartz plate, to ensure accuracy. A UV-3100 spectrophotometer was used to record the UV-vis absorption spectra, and the fluorescence measurements were carried out with an RF-5301PC. The PLQYs of the doped and non-doped films were measured using an Edinburgh FLS-980 with an integrating-sphere apparatus. An Edinburgh FLS-980 with an EPL-375 laser was the main instrument used to estimate lifetimes; the samples were placed on the quartz plate for these measurements. The total lifetimes of multi-component PL-decay curves were calculated as the amplitude-weighted average τ = Σi Ai·τi, where τ is the total lifetime, i runs over the lifetime components, τi is the lifetime of component i, and Ai is the proportion of each lifetime component.
Electrochemical Measurement
The cyclic voltammetry (CV) was measured with a BAS 100W Bioanalytical System using a three-electrode setup. A glassy carbon disk (Φ = 3 mm), a platinum wire and an Ag/Ag+ electrode were used as the working, auxiliary and reference electrodes, respectively. Ferrocenium/ferrocene was used as the redox couple. The solution was purged with nitrogen for 5 min to exclude oxygen for more accurate results.
OLED Fabrication and Performances
ITO-coated glass with a sheet resistance of 20 Ω per square was used as the OLED substrate. The ITO glass was washed ultrasonically with deionized water, isopropyl alcohol, acetone and chloroform. The OLEDs were fabricated by evaporation.
The organic layers were deposited at a rate of 0.03-0.1 nm/s, the LiF layer at 0.01 nm/s and the Al layer at 0.3 nm/s. A PR650 SpectraScan spectrometer was used to record the EL spectra, together with a model 2400 programmable voltage-current source for the electrical measurements.
Conclusions
In summary, a new structure, N-TBPMCN, was designed and synthesized as a deep-blue OLED emitter. The LE-dominated HLCT excited-state property was revealed by photophysical characterization and quantum chemical calculation. Compared with the isomer TBPMCN, the weight of the LE component of N-TBPMCN was enhanced by exchanging the positions of the TPA and cyano groups. The OLED of N-TBPMCN harvested shorter-wavelength emission and a narrower FWHM because of the advantages of the LE-dominated HLCT state. N-TBPMCN possessed good thermal and electrochemical properties, which are appropriate for electroluminescent OLEDs. Overall, in this work we reported a novel material with deep-blue emission and high color purity prepared via a simple synthetic route, which contributes to the molecular design of blue-emissive materials with better color purity.
Supplementary Materials: The following are available online. The energy landscape for the excited states of N-TBPMCN and TBPMCN (Figure S1 and Table S1); the ultraviolet-visible (UV-Visible) absorption and PL spectra (Figure S2 and Table S2); the performance of the N-TBPMCN and TBPMCN OLEDs (Figures S3 and S5); the structures of PPI and mTPA-PPI (Figure S4); the NMR spectroscopy of the key intermediate and the target compound (Figure S6); the data and explanation of the Lippert-Mataga solvatochromic model (Table S1 and Section S3.2); the relative PLQY in different solutions (Table S3 and Section S3.3); additional information on the experiments and measurements (Section S3.1).
Author Contributions: X.T. and J.S. contributed equally to this work. X.T. and J.S. performed the experiments and wrote the manuscript. X.T. and S.X. contributed significantly to the OLED fabrication and operation. X.T. and H.L. contributed to the photophysical analysis. S.X. and Y.G. performed the electrochemical characterization. S.Z. and B.Y. supervised the whole work. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
7,481.6
2021-07-28T00:00:00.000
[ "Physics", "Chemistry" ]
Identification of alanine aminotransferase 1 interaction network via iTRAQ-based proteomics in alternating migration, invasion, proliferation and apoptosis of HepG2 cells
Objective: To investigate the mechanism of alanine aminotransferase 1 (ALT1) in the progression of HCC, the differentially expressed proteins (DEPs) in the ALT1 interaction network were identified by targeted proteomic analysis. Methods: Wound healing and transwell assays were conducted to assess the effect of ALT1 on cellular migration and invasion. Cell Counting Kit-8 (CCK-8), colony formation, and flow cytometry assays were performed to identify alterations in proliferation and apoptosis. After coimmunoprecipitation processing, mass spectrometry with isobaric tags for relative and absolute quantitation was utilized to explore the protein interactions in ALT1-knockdown HepG2 cells. Results: The results showed that ALT1 knockdown inhibits the migration, invasion and proliferation of HepG2 cells and promotes apoptosis. A total of 116 DEPs were identified, and the bioinformatics analysis suggested that the ALT1-interacting proteins were primarily associated with cellular and metabolic processes. Knockdown of ALT1 in HepG2 cells reduced the expression of Ki67 and epithelial cell adhesion molecule (EP-CAM), while the expression of apoptosis-stimulating protein 2 of p53 (ASPP2) was increased significantly. Suppression of ALT1 and EP-CAM expression contributed to alterations in epithelial–mesenchymal transition (EMT)-associated markers and matrix metalloproteinases (MMPs). Additionally, inhibition of ALT1 and Ki67 also decreased the expression of apoptosis and proliferation factors. Furthermore, inhibition of ALT1 and ASPP2 also changed the expression of P53, which may be the signaling pathway by which ALT regulates these biological behaviors. Conclusions: This study indicated that the ALT1 protein interaction network is associated with the biological behaviors of HepG2 cells via the p53 signaling pathway.
contribute to the development of HCC [2]. The gender disparity in HCC incidence suggests that approximately 2-8 times as many men develop HCC as women, which may result from the association of sex hormones with the progression of HBV-induced HCC [3,4]. Tumor metastasis is usually coupled with the aggravation of liver cancer and higher mortality of HCC patients [5]. In addition, deregulated cell proliferation combined with suppressed apoptosis constitutes the minimal common form upon which all neoplastic evolution occurs [6]. Alanine transaminase (ALT) converts alanine into pyruvate in gluconeogenesis, and its two isoforms, ALT1 and ALT2, have different subcellular and tissue distributions [7]. ALT1 is found in the cytosol and is used as a biomarker for liver disease assessment [8]. Studies have suggested that increasing levels of serum ALT are closely related to the incidence of HCC in patients with HBV/HCV [9,10]. Another study illustrated that elevated serum ALT levels are also connected with the incidence of HCC regardless of hepatitis virus negativity [11]. While these previous studies have established the connection between serum ALT levels and HCC, the effect of cytoplasmic ALT1, before secretion into the serum, on the development and progression of HCC remains unclear. Isobaric tags for relative and absolute quantitation (iTRAQ) is an established technology for identifying DEPs and protein interactions [12].
Since it can label and analyze eight samples simultaneously, this method can effectively reduce the risk of error during repeated sample testing [13]. In this study, we aimed to use this method to evaluate the variations in the proteome of ALT1 siRNA-treated and control siRNA-treated samples to further determine the proteins participating in the metastasis, proliferation, and apoptosis of HCC.
Immunohistochemistry and tissue microarrays
Commercial tissue microarrays containing 50 cases of HCC tissues and matched adjacent non-tumor tissues (Catalog: LV1505, Alenabio Biotechnology, Xi'an, China) were obtained for immunohistochemical (IHC) analysis. The samples were deparaffinized with xylene, rehydrated with an ethanol gradient, and washed with double-distilled H2O [14]. The tissues were soaked in 3% H2O2 for 10 min to quench endogenous peroxidase activity and blocked with BSA for 30 min. The tissue samples were incubated with primary antibodies against ALT1 overnight at 4 °C. An EnVision+ System, HRP (DakoCytomation, Glostrup, Denmark) was used to detect the expression of ALT1 under 200× magnification [14].
Wound healing and transwell assays
HepG2 cells were transfected with ALT1 siRNA or negative control siRNA for the wound healing and transwell assays. After the cells had become confluent in 6-well plates for 48 h, scratches were made along a ruler with a 200 µl pipette tip, and the cellular debris was gently washed out with PBS. Migration was quantified by the change in the scratch area for up to 24 h under the microscope at 10× magnification. The transwell assays were performed with a Cell Migration and Cell Invasion Assay kit. The transfected cells were re-cultured in the upper chambers of 24-well plates with cell medium in the lower chamber. The 8-µm membranes separated the invading/migrating cells, and cell staining buffer and extraction buffer were used to stain and extract the cells beneath the membranes. Twenty minutes after staining, the number of cells that passed through the membrane with or without Matrigel was measured with CyQuant GR fluorescent dye (560 nm).
CCK-8
Transfected cells were plated in 96-well plates at a density of ~2 × 10^5 cells per well. 10 µl of CCK-8 was added every 12 h and incubated at 37 °C for 1 h. The colorimetric results were measured at 450 nm. Due to the toxicity of Lipofectamine® 2000 and CCK-8, we narrowed the test interval to every 12 h.
Colony formation assay
The transfected cells were digested with 0.05% trypsin and resuspended in cell culture medium. We replated these cells into 6-well plates and incubated them at 37 °C in a 5% CO2 incubator for two to three weeks. When the cells showed visible colonies, we stopped the culture, gently washed the cells twice, fixed them in 4% PFA for 15 min and stained them with crystal violet. Colonies containing ≥30 cells were counted. The colony formation efficiency was defined as the number of formed colonies/the number of seeded cells × 100%. Cells were imaged under a microscope and the stained area was calculated using ImageJ.
Flow cytometry
After 48 h of transfection, cells were centrifuged and resuspended in PBS, divided into labeled EP tubes, and centrifuged at 3000 rpm for 4 min. For cell apoptosis evaluation, the cell pellet was resuspended in PBS. For cell cycle assessment, the cell pellet was resuspended in 70% alcohol.
After adding 5 μL of annexin V-FITC and PI (eBiosciences) and incubating in the dark for 15 minutes, apoptosis was detected by flow cytometry (Beckman Coulter, CytoFLEX) and analyzed with FlowJo (Treestar, 10.0.7r2).
Protein collection, Co-IP and iTRAQ labeling
When the cells were approximately 80% confluent, HepG2 cells transfected with ALT1-knockdown siRNA or negative control siRNA were washed three times with PBS and lysed with 500 µl-1 ml of cell lysis buffer. 0.2-2 μg of the primary antibody was added for immunoprecipitation, and the lysates were incubated overnight at 4 °C with slow shaking. 40 µl of resuspended Protein A+G Agarose was added and shaken slowly at 4 °C for 1-3 hours. The samples were centrifuged at 2500 rpm for 5 minutes, and the pellets were washed 5 times with PBS. The supernatant was removed, the pellet was resuspended in 20-40 μl of 1X SDS-PAGE loading buffer, and the sample was spun to the bottom of the tube by brief high-speed centrifugation. The samples were boiled for SDS-PAGE electrophoresis. For the agarose group, samples were obtained by omitting the primary antibody. For the IgG group, samples were obtained by replacing the primary antibody with an IgG antibody. For iTRAQ detection, the eluted proteins were precipitated with acetone at −20 °C overnight. According to the iTRAQ manufacturer's protocol, the proteins were dissolved in dissolution buffer, denatured, cysteine-blocked, and then digested with trypsin. The protein samples were labeled as follows: negative control siRNA-transfected protein, 113, 114, and 115 tags; ALT1 siRNA-treated protein, 116, 117, and 118 tags. The labeled samples were mixed for further analysis.
Peptide fractionation
The collected labeled protein was dissolved in a solution of Pharmalyte (GE Healthcare Life Sciences, Little Chalfont, UK) and urea. The samples were rehydrated on pre-hydrated immobilized pH gradient (IPG) strips (pH 3-10) before isoelectric focusing on an IPGphor system at 68 kV/h. The peptides were extracted from the gels by incubation with acetonitrile and formic acid and purified on a C18 DSC-18 SPE column (Supelco, Sigma-Aldrich, Darmstadt, Germany). Finally, the peptides were vacuum-lyophilized and stored at −20 °C before mass spectrometry analysis.
Mass spectrometry (MS)
Mass spectrometry was performed on a TripleTOF 5600+ LC/MS system (AB Sciex LLC., Framingham, MA, USA). The prepared peptide samples were dissolved in a 2% acetonitrile solution and analyzed on an Eksigent NanoLC system (SCIEX). The solution was loaded on a C18 capture column (5 μm, 100 μm × 20 mm), and gradient elution was performed on a C18 analytical column (3 μm, 75 μm × 150 mm) with a 90 min gradient at a flow rate of 300 nL/min. For information-dependent acquisition (IDA), the MS spectrum was acquired with an ion accumulation time of 250 ms, and the MS spectra of 30 precursor ions were then acquired with an ion accumulation time of 50 ms. The MS1 spectra were collected in the range of 350-1500 m/z, and the MS2 spectra in the range of 100-1500 m/z. The precursor ion dynamic exclusion time was set to 15 s. We used the ProteinPilot™ (V4.5) search engine matched with the AB Sciex 5600 plus, which considered all possible modification types and included an automatic fault-tolerant matching function.
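Reporter-ion quantification in this labeling design comes down to comparing, for each protein, the 113-115 (negative control) and 116-118 (ALT1 siRNA) channel intensities. The sketch below shows such a ratio and log2 fold-change calculation with hypothetical intensities, not the study's data, and uses a simple illustrative cutoff rather than the acceptance criteria applied in the paper.

```python
import math

# Hypothetical iTRAQ reporter-ion intensities per protein (one MS run);
# channels 113-115 = negative control siRNA, 116-118 = ALT1 siRNA.
proteins = {
    "EPCAM": {"113": 9.1e4, "114": 8.7e4, "115": 9.4e4, "116": 4.2e4, "117": 4.6e4, "118": 3.9e4},
    "ASPP2": {"113": 1.1e4, "114": 1.3e4, "115": 1.2e4, "116": 2.6e4, "117": 2.4e4, "118": 2.9e4},
}

CONTROL, TREATED = ("113", "114", "115"), ("116", "117", "118")

for name, ch in proteins.items():
    ctrl = sum(ch[c] for c in CONTROL) / len(CONTROL)
    trt = sum(ch[c] for c in TREATED) / len(TREATED)
    log2fc = math.log2(trt / ctrl)          # knockdown vs. control
    direction = "up" if log2fc > 0 else "down"
    # Illustrative threshold only; the study's own acceptance criteria differ.
    flagged = abs(log2fc) >= 1.0
    print(f"{name}: log2 FC = {log2fc:+.2f} ({direction}), candidate DEP: {flagged}")
```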
For the identified proteins, we considered an Unused ProtScore ≥ 1.3 (a reliability level above 95%) and required each protein to contain at least one unique peptide to be counted as a trusted protein [15,16].
Bioinformatics analysis
All identified proteins were evaluated by a Gene Ontology analysis. The cellular components, biological processes, and molecular functions were identified by searching the PANTHER database (http://www.pantherdb.org/).
Western blot
The cells were collected and lysed after 48 h of transfection. The cell suspension was centrifuged at 12000 rpm for 5 min, and the supernatant was pooled. The protein concentration was measured with a BCA kit. Proteins were separated by SDS-PAGE (10%) and transferred to PVDF membranes. The membranes were then incubated with primary and secondary antibodies. Protein bands were visualized with a ChemiDoc MP imaging system (Bio-Rad Laboratories, Hercules, CA, USA).
Immunofluorescence (IF)
The cells were fixed with 4% paraformaldehyde for 10 min and washed with PBS. Cell membranes were permeabilized with 0.2% Triton X-100 and the cells were washed with PBS an additional 2 times. The cells were incubated with primary antibodies overnight at 4 °C, rewashed with PBS 3 times, and incubated with secondary antibodies at 37 °C in the dark for 1 h. The cells were washed again with PBS in the dark, the cell nuclei were stained with DAPI, and images were taken with a confocal laser scanning microscope (Nikon Corporation, Tokyo, Japan) at 400× magnification.
RT-qPCR
Total RNA was isolated from HepG2 cells with TRIzol reagent (Invitrogen; Thermo Fisher Scientific, Inc.) according to the manufacturer's protocol. Subsequently, cDNA was synthesized using a Reverse Transcription kit (Thermo Fisher Scientific, Inc.). Quantitative real-time PCR was performed with SYBR Premix Ex Taq (Takara, Dalian, China) using an ABI 7500 instrument (Applied Biosystems Inc). The products were analyzed by the 2^−ΔΔCt method [17].
Statistical analysis
All experiments were conducted at least three times, and quantitative variables are presented as the mean ± standard deviation (SD). The statistical analysis and graphs were produced with GraphPad Prism v8.0 (GraphPad Software, La Jolla, CA, USA). Student's t-test or a Mann-Whitney U test was used to analyze data between groups. In addition, the χ2 test was applied for qualitative variables. Statistical significance was defined as a p-value <0.05 for differences between the experimental and control groups. "*" implies a p-value <0.05, "**" a p-value <0.01, "***" a p-value <0.001 and "****" a p-value <0.0001; these symbols are marked above the histograms.
Data availability statements
The data generated in the present study may be requested from the corresponding author.
Overexpression of ALT1 in HCC tissues
The expression of ALT1 in tissue microarrays containing 50 cases of HCC tissues and 50 cases of matched non-tumor adjacent tissues was evaluated via IHC. Darker staining occurred in the HCC samples, while lighter staining occurred in the non-tumor adjacent tissues (Figure 1A). The assessment of the IHC scores demonstrated higher expression of ALT1 in HCC tissues (p<0.05) (Figure 1B).
Expression of ALT1 in cell lines
We conducted western blot analysis with untransfected L02 (normal liver) cells and the liver tumor cell lines Hep3B, HepG2, and Huh7. The expression of ALT1 in HepG2 cells was not only higher than in the other liver cancer cell lines, but also greater than in L02 (Figure 2A).
Therefore, HepG2 cells were utilized for the subsequent experiments.
Effects of ALT1 knockdown on the migration and invasion of HepG2 cells
Wound healing and transwell assays were performed to assess the effects of ALT1 knockdown on the migration and invasion of HepG2 cells. ALT1-specific siRNA suppressed ALT1 expression (Figure 2B). The results of the wound healing assay then demonstrated that ALT1 knockdown significantly inhibited the migration of HepG2 cells compared with the negative control group (Figure 2C). Additionally, we used a transwell assay to measure the migration and invasion abilities of HepG2 cells. The OD values indicated that the migration and invasion capabilities of HepG2 cells were significantly decreased by suppressing ALT1 (Figure 2D).
Effects of ALT1 knockdown on proliferation and apoptosis of HepG2 cells
The CCK-8 and colony formation assays showed vast differences between the ALT1-knockdown group and the negative control group, which implied that the proliferation of HepG2 cells was inhibited by suppressing ALT1 (Figure 3A, 3B). Additionally, flow cytometry suggested that the apoptosis rate of the ALT1-knockdown group was remarkably greater than that of the control group (Figure 4). The cell cycle analysis demonstrated that the number of cells in G1 and S phases decreased, while the number of cells in G2 phase increased significantly, suggesting that cell arrest occurred in G2 phase (Figure 5).
iTRAQ quantification of the ALT1 interactome
Supplementary Figure 1 details the process of iTRAQ labeling and detection. Before iTRAQ-based MS detection, the knockdown efficiency of ALT1 was analyzed, and the results showed suppressed expression of ALT1 compared with the control group (Figure 6A). A total of 116 DEPs were identified, consisting of 49 upregulated DEPs and 67 downregulated DEPs (Supplementary Table 1).
DAVID analysis of ALT1-interacting proteins
The cellular components (Figure 6B), biological processes (Figure 7A), and molecular functions (Figure 7B) were identified via PANTHER. Among the cellular components, the cell occupied the maximal portion with 18.8%, while the cell part ranked second at 18.79%. Among the biological processes, cellular processes accounted for the largest proportion (13.04%), followed by metabolic processes (11.36%). The molecular function analysis involved mainly binding (52.84%) and catalytic activity (26.49%).
Validation of the interaction between EP-CAM, KI67, ASPP2, and ALT1
Three main DEPs were selected for further evaluation. EP-CAM, KI67, and ASPP2 were captured by co-IP with ALT1 used as the bait protein, indicating a direct interaction between ALT1 and these three DEPs. Confocal microscopy was used to determine the subcellular co-localization of these three key DEPs with ALT1 (Figure 9).
Effects of ALT1 and EP-CAM knockdown on MMPs and EMT in HCC
Alterations in MMP levels and EMT are closely related to the migration and invasion of cancers. Western blot detection showed that the expression of MMP2 and MMP9 was suppressed when ALT1 and EP-CAM were knocked down (Figure 10A). Furthermore, RT-qPCR was used to verify the EMT-related markers E-cadherin, N-cadherin, Snail, and Twist. While the expression of E-cadherin mRNA increased, the expression of N-cadherin, Snail, and Twist was reduced (Figure 10B). These results suggested that after ALT1 interacted with EP-CAM, it participated in the migration and invasion of HepG2 cells by influencing EMT and MMPs.
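The RT-qPCR fold changes referenced above are computed with the 2^−ΔΔCt method cited in the Methods; a minimal sketch of that calculation is shown below, with Ct values that are hypothetical placeholders rather than measurements from this study.

```python
# Relative expression by the 2^(-ddCt) method: normalize the gene of interest to a
# reference gene within each sample, then compare the treated vs. control samples.
# All Ct values below are hypothetical placeholders.

def ddct_fold_change(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    dct_treated = ct_gene_treated - ct_ref_treated     # delta Ct, treated sample
    dct_control = ct_gene_control - ct_ref_control     # delta Ct, control sample
    ddct = dct_treated - dct_control                    # delta-delta Ct
    return 2 ** (-ddct)                                 # fold change vs. control

# Example: an EMT marker in ALT1-siRNA cells vs. negative-control cells,
# normalized to a housekeeping gene; the numbers are invented.
fc = ddct_fold_change(ct_gene_treated=26.8, ct_ref_treated=18.1,
                      ct_gene_control=25.2, ct_ref_control=18.0)
print(f"relative expression (treated / control): {fc:.2f}")
```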
Effects of ALT1 and KI67 knockdown on the expression of markers of proliferation and apoptosis
Cleaved caspase 3, Bax, and Bcl-2 are common markers associated with apoptosis, and P21 and CDK4 are common markers related to proliferation. When the expression of ALT1 and KI67 was inhibited, the expression of Bcl-2 and CDK4 decreased, while the expression of cleaved caspase 3, Bax, and P21 increased (Figure 11A, 11B). These findings suggest that ALT1 interacts with KI67 to participate in the proliferation and apoptosis of HepG2 cells.
Effects of ALT1 and ASPP2 knockdown on the P53 signaling pathway
The expression of ASPP2 activates P53 expression in many cancers, and the inhibition of ASPP2 suppresses the expression of P53. Based on the previous Western blot analysis, ALT1 knockdown upregulated the expression of ASPP2. We further inhibited ALT1 and found that the expression of P53 increased (Figure 12A). Figure 12B shows the band intensities of P53 and the control-group expression of ASPP2 and ALT1. The results suggested that ALT1 knockdown suppressed the progression and development of HepG2 cells by activating the P53 signaling pathway.
DISCUSSION
Previous research has suggested that enrichment of ALT2 in the breast can promote the development of breast cancer [8]. On the basis of this hypothesis, we analyzed the expression of ALT1 and found high expression levels in HCC tissues. However, few studies have reported its effects in HCC cells. When ALT1 expression was inhibited, the migration and invasion capabilities of HepG2 cells decreased. The CCK-8, colony formation, and flow cytometry assays suggested that ALT1 knockdown decreased proliferation and increased apoptosis in HepG2 cells. Co-IP and iTRAQ-based proteomics then identified the proteins interacting with ALT1. Based on the Western blot and RT-qPCR analyses, we concluded that ALT1 influenced EMT and MMPs by interacting with EP-CAM. We also concluded that ALT1 was involved in proliferation and apoptosis by interacting with Ki67. Finally, according to the ALT1-interacting protein ASPP2, which acts as a P53 activator, we suggested that knockdown of ALT1 suppressed the progression and development of HCC via the P53 signaling pathway.
EP-CAM is a 35-kDa transmembrane glycoprotein with cell-adhesion capability, which mediates Ca2+-independent adhesion between cells [18,19]. Additionally, EP-CAM has been applied in the detection of circulating tumor cells [20,21]. Previous studies have suggested that EP-CAM is associated with cell migration, invasion, signal transduction, and differentiation [22,23]. The overexpression of EP-CAM in breast cancer, ovarian cancer, and head and neck squamous cell cancer has shown a negative impact on cancer prognosis [24-26]. EP-CAM can strengthen tumor-initiating ability by influencing EMT, which is induced through N-glycosylation of EpCAM in breast cancer [27-29]. Furthermore, EP-CAM can regulate MMP2 and MMP9 in gastric cancer by activating the NF-κB signaling pathway [30]. Based on the above studies, we selected EP-CAM as the key DEP for determining whether ALT1 was involved in migration and invasion. KI67, a 380 kDa nuclear protein, is detected throughout the three phases of the cell cycle. Therefore, the expression of KI67 indicates the proliferation stage of cells [31,32]. Additionally, KI67 is overexpressed in many cancers and is considered a prognostic marker of cancer [33-35].
ASPP2, a haploinsufficient tumor suppressor and P53 activator, stimulates apoptosis through binding of its C-terminus to P53 [36,37]. The P53 signaling pathway is involved in many human cancers, but an increasing number of cancers have evolved to inactivate this common tumor suppressor pathway [38]. EMT is not only a vital biological process activated through the c-Met signaling pathway but also an essential initiation step in tumor migration and invasion [39]. In addition, MMPs are a series of major proteolytic enzymes, and they can regulate tumor metastasis by decomposing the extracellular matrix [39]. In this study, silencing ALT1 and EP-CAM expression in HepG2 cells contributed to alterations in the mRNA expression of EMT-related biomarkers, including E-cadherin, N-cadherin, Snail, and Twist, and to changes in MMP2 and MMP9.
However, some limitations remain. First, Western blotting identified the expression of ALT1, but the iTRAQ analysis did not detect it, which may have been caused by the low abundance of ALT1 before peptide detection. The results of the Western blot and RT-qPCR analyses of the DEPs interacting with ALT1 proved the reliability of the iTRAQ screening results. Additionally, few liver cancer cell lines were included in the current study. Although HepG2 cells are the most commonly used in drug metabolism and hepatotoxicity studies, and have higher proliferation rates than other liver cell lines, further studies are needed to examine the effect of ALT1 inhibition in other liver cell lines [40]. Finally, more research is required to further study how ALT1 interacts with the DEPs identified via iTRAQ.
CONCLUSIONS
In conclusion, the iTRAQ proteomics analysis identified 116 proteins that interact with ALT1. Three of the proteins that interact with ALT1 are closely associated with tumor biological behaviors. According to the functions of EP-CAM, KI67, and ASPP2, we concluded that ALT1 may bind with these proteins to regulate migration, invasion, proliferation, and apoptosis via the P53 signaling pathway in HCC and may influence EMT and MMPs to alter migration and invasion. However, its potential mechanism in proliferation and apoptosis remains unknown and requires further study.
4,888.8
2022-09-14T00:00:00.000
[ "Biology" ]
Building Supply Chain Resilience Capabilities during Pandemic Disruption
Supply chain resilience has been used to mitigate and deal with unexpected supply chain disruptions over the past decades. Sudden catastrophes such as the coronavirus disease epidemic have led to lockdowns globally and caused harsh economic consequences. The immediate global supply chain disruption initiated a sharp plunge in business activities and demand, as well as production interruptions. This unstable, challenging, and vulnerable market environment highlights the necessity of investigating supply chain resilience (SCRes), which has recently regained the attention of academics and practitioners from various industries. This study aims to identify the limitations and new developments in conceptualizing SCRes in the discipline of supply chain risk management. A total of 597 articles on the theme of SCRes were collected and screened using content analysis, and 87 papers were finally investigated by refining the research on SCRes into stages according to the pre-, in- and post-disruption phases. From this perspective, the different phases were included to assess and address the notable limitations in conceptualizing SCRes, which emphasizes the significance of the 'Growth' stage and the 'dynamic' perspective (not only returning to the original level after the disruption but also developing into a new, more desirable condition). The outcomes of the review indicate that SCRes is a significant dynamic capability for supply chains to prepare, adjust, respond, recover and grow (the last of which has been ignored by many scholars) before or after an unexpected interruption, including the outbreak of coronavirus disease (COVID-19).
Introduction
Progressively multifaceted supply networks, globalization, and external factors (e.g., force majeure, widespread diseases, and political interference) have repeatedly initiated supply chain disruptions during the last decade, and especially in recent years (Fan and Stevenson, 2018; Chen et al., 2019; Lechler et al., 2019; Spieske and Birkel, 2021), including the recent unpredictable global disaster. A worldwide pandemic is considered an unforeseeable event (Francis, 2020; Hilderink, 2020), such as the outbreak of the coronavirus disease announced by the World Health Organization (WHO) in 2020, which negatively affected global supply chains (Araz et al., 2020; Govindan et al., 2020; Francis, 2020; Ivanov and Dolgui, 2020; Ozdemir et al., 2022). Rough estimates (October 2020) showed that the coronavirus slowed global economic growth by between 4.5% and 6.0% in 2020, with a partial recovery of 2.5% to 5.2% expected by the end of 2021, depending on whether national authorities were able to control the diffusion of COVID-19 (Orlando et al., 2021). To mitigate the spread of COVID-19, governments imposed stricter border controls and implemented full nationwide lockdowns of borders as well as cities, which caused disruptions to international trade and supply chains all over the world (Kumar and Managi, 2020). Jackson et al. (2020) indicated that the depth and extent of this global economic downturn and the massive disruptions resulted in a decrease in international trade of 9.2% per year. According to the calculations of Fortune (2020), over 94% of the top 1000 companies were affected by this unexpected outbreak, since their supply chains are mostly lean and globalized in structure.
The pandemic is directly causing interference with supply and demand at the global and local levels (Ivanov, 2020), potentially leading to business discontinuity. Global supply chains that relied heavily or solely on factories in China for parts and materials were forced to halt production (e.g., the automotive and manufacturing industries) (Harbour, 2020). The supply chain disruption exposed unforeseen weaknesses: shortages in supply (Govindan et al., 2020; Ivanov, 2020; Francis, 2020; Pournader et al., 2020; Araz et al., 2020; El Baz and Ruel, 2021), a lack of reactivity to surges in demand, and production interruptions. Some supply chains, such as global healthcare supply chains, encountered a sharp increase in demand that supply could not cope with (Govindan et al., 2020). Therefore, on-time delivery of healthcare services and goods is extraordinarily important for customers who are at risk of infection or under curfew, lockdown, or quarantine. Under such circumstances, a supply chain that can keep performing and delivering products and services is characterized as resilient (Blackhurst et al., 2011). Supply chain resilience (SCRes) indicates the readiness and adaptation of an institution's supply chain to cope with sudden and unforeseen supply chain disruptions (Mubarik et al., 2021). In the recent literature, supply chain disruptions have gained attention in the context of the COVID-19 pandemic, which places SCRes in a central position of interest for scholars today (Hosseini et al., 2019; Ivanov and Dolgui, 2020; Reeves and Whitaker, 2020; Kumar and Managi, 2020; Dolgui et al., 2020; Silva et al., 2021). An increasing number of scholars have intensively researched SCRes in terms of preparing (readiness), responding, adjusting and growing before, during and after disruptions, identifying new developments in the domain of supply chain risk management (Hosseini et al., 2019; Alikhani et al., 2021; Spieske and Birkel, 2021). Much of the existing literature on SCRes takes a system-optimization view (Dixit, 2020; Govindan et al., 2020), which restricts SCRes to the level of system design. Supply chain disruptions caused by complex supply networks, globalization, or external effects, especially the recent outbreak of COVID-19, motivate this study to review the published literature to refine and assess the evolution of SCRes (Fan and Stevenson, 2018; Lechler et al., 2019; Araz et al., 2020) and to broaden the concept of SCRes. In this context, the purpose of this study is two-fold: (i) reviewing the current existing literature on the subject of SCRes to revisit and refine its conceptualization, and (ii) illustrating the significance of SCRes in supply chain risk management across disciplinary boundaries. Earlier studies determined that SCRes was directly related to the stages of readiness, response, and recovery in terms of disruptions. Hohenstein et al. (2015) illustrated that studies rarely focus on prospective growth after being interrupted. This phase of 'Growth' reflects the developmental nature of SCRes in moving to a new condition and improving competitive advantages (Ponis and Peck, 2004; Jüttner and Maklan, 2011; Pettit et al., 2013; Wieland and Wallenburg, 2013).
The conceptualization of SCRes has been extended from the competence to respond to and recover from disruptions (Rice and Caniato, 2003; Christopher and Peck, 2004) to comprise the capability of the supply chain to get ready for, prevent, adapt to, recover from, learn from, and grow after disruptions (Hohenstein et al., 2015; Datta, 2017; Spieske and Birkel, 2021). In this context, high SCRes covers not only the pre- and early-interruption phases but also better attainment in the later phases (Han et al., 2020; Reeves and Whitaker, 2020). Recently, Sawyerr and Harrison (2020) conceptualized SCRes as the capability to proactively plan and design the supply chain network to forestall unforeseen negative incidents (crises), adaptively respond to disruptions while preserving structural and functional operations, and advance to a robust post-incident state. Mubarik et al. (2021) assessed the resilience of a supply chain in terms of preparedness (the readiness of a supply chain against disruptions), response (the speed of a supply chain in coping with disruptions) and recovery (the ability of a supply chain to revive after disruptions) with respect to supply chain disturbances. The sustainable, long-term and dynamic perspective that includes the 'growth' phase has often been ignored, which may discourage firms from achieving competitive advantages in the later phases of SCRes (Hohenstein et al., 2015). Likewise, Spieske and Birkel (2021) further defined supply chain resilience as a framework consisting of four distinct phases: readiness (preparedness), response, recovery, and growth, in chronological order, as shown in Figure 2.1. Figure 1 summarizes these four separate phases in chronological order. In this study, these four stages are treated as the fundamental dimensions for conceptualizing SCRes to ensure theoretical validity.
Readiness
In the pre-disruption phase, as illustrated in Figure 2.1, readiness is the initial stage in which the supply chain plans and gets ready for sudden events to lessen its vulnerability to disruptions (Christopher and Peck, 2004; Jüttner and Maklan, 2011). The concept of 'readiness' to mitigate the negative effects of unexpected events was first proposed by Datta et al. (2007), and Ponomarov and Holcomb (2009) likewise emphasized readiness (the preparation dimension) for unpredicted events. Overall, this study defines readiness as all measures taken in the pre-disruption condition that are appropriate to diminish a disruption's probability, its damaging scope and its impact. Macdonald and Corsi (2013) termed all proactive and preparatory actions readiness, such as being prepared to implement emergency plans to aid managers in responding to unexpected events. Readiness indicates the abilities of a firm in recognizing, forecasting, and preventing disruptions, emergencies, and risks in the pre-disruption stage (Chowdhury and Quaddus, 2016). Chowdhury and Quaddus (2016) showed that this kind of capability (readiness) is crucial by framing dynamic regulations on the supply chain to mitigate disruptions. In line with Kandel et al. (2020), preparedness was described as an effective paradigm of operational activities for preventing, monitoring, and reacting to unanticipated, changeable, and adverse outbreaks.
Response
The next phase, after the disruption hits, is 'response'.
The concept of response was proposed by Rice and Caniato (2003), and ever since then it has been frequently valued and investigated as an essential component of SCRes and widely explored and emphasized in conceptualizing SCRes (Hohenstein et al., 2015). Responses comprise countermeasures that are executed directly after an outage is detected or encountered. When the disruption is experienced, immediate action should be taken to relieve and control the crisis, limit the ripple effects, and resume normal operations quickly in order to earn advantages in the market (Chen et al., 2019). Robustness and/or redundancy strategies, such as buffer stocks, rerouting plans, multi-sourcing and backup suppliers, have been adopted and highlighted as preventive solutions at the response stage in many studies (Tomlin, 2006; Singhal et al., 2011; Sawik, 2013; Gupta et al., 2015) to mitigate crises in the field of risk management. The ripple effect impacts the supply chain through multiple echelons, which promotes the escalation of the interruption (Hosseini and Barker, 2016; Ivanov, 2018; Li et al., 2018; Chen et al., 2019). Speed is of foremost importance at this stage in order to refrain from damaging consequences for the supply chain (Han et al., 2020). Clear recognition and quick response can prevent this effect, harvest more shares in the new and changing market, solidify and enhance the firm's status in the industry, and significantly reduce risk and improve operations against the shock (Ponomarov and Holcomb, 2009; Jüttner and Maklan, 2011; Al-Omoush et al., 2020). In an unstable environment such as the global market, firms that want to safely mitigate disruptions, or even capture bigger market shares, should respond to the crisis in a timely manner and allocate resources to update their competencies (Chowdhury and Quaddus, 2016). Such reconfiguration and renewal of capabilities gains competitive advantage for the winners and helps them to recover from unexpected crises (Kylaheiko and Sandstrom, 2007).
Recovery
After responding to the disruption, enterprises concentrate on developing the capability of recovering to the original level (Li et al., 2010; Hobbs, 2020). Appropriate and effective recovery strategies for handling disruption risks are pursued by most manufacturers producing essential items, especially under the conditions of demand spikes (Wu et al., 2020) and supply disruption (Harbour, 2020) inflicted by COVID-19. Chen et al. (2019) indicated that companies should concentrate on developing and executing contingency strategies that follow essential principles for recovering from disruptions, such as adapting effectively and minimizing the long-term impact, to preserve operations with stable resilience. These optimization approaches and mitigation strategies were investigated by several studies in the post-disruption stage of a resilient supply chain, focusing on safety stocks, decreased lead times, inventory level enhancement and optimized transportation routing (Sawik, 2013; Kristianto et al., 2014; Nguyen et al., 2021). Under ripple-effect conditions (Ivanov et al., 2015), the joint efforts of pre- and post-disruption tactics are essential to recovery policies and thus to measuring dynamic supply chain recovery performance (Nguyen et al., 2021).
Supply chain stabilization, dynamic adjustment of scarce-resource allocation and information sharing with local manufacturers have also been commonly suggested by scholars as important tactics in the recovery stage to ensure process continuity against production disruption (Sheffi and Rice, 2005; Ivanov et al., 2014, 2016, 2017; Gupta et al., 2015; Chang et al., 2019). Paul et al. (2020a) and Rahman et al. (2021) declared that multiple solutions could be adopted to mitigate manufacturing interruptions by increasing production, including recruiting additional operators, purchasing more operating facilities, and utilizing alternate shifts to assist recovery after a crisis such as COVID-19.
Growth
The final phase of SCRes is 'Growth'. Some of the primary studies about SCRes were limited to quantifying the level of a specific resilient supply chain by developing strategies in the aspects of preparedness, response, and recovery (Chowdhury and Quaddus, 2016; Ivanov et al., 2017; Graveline and Grmont, 2017; Hosseini et al., 2019). Likewise, numerous definitions of SCRes have emerged from these studies with notable limitations. Some studies under-examined what the enterprise could learn and how it could improve from the sudden disruption (Hohenstein et al., 2015; Han et al., 2020; Spieske and Birkel, 2021). Essentially, an increasing number of researchers are investigating the interactions of the several drivers and the competences developed to diminish the negative influences of disruptions (vulnerabilities), for supply chains' adaptation and growth after disruption (Pettit et al., 2010; Zhao et al., 2011; Scholten and Schilder, 2015; Ribeiro and Barbosa, 2018; Pettit et al., 2019; Alfarsi et al., 2020). Many researchers have developed the concept of SCRes and focused on how to survive disruptions as well as on adaptation and development (Zhang et al., 2011; Ivanov et al., 2014; Fiksel, 2015; Gabler et al., 2017; Pettit et al., 2019). Due to the growing attention to the 'Growth' stage, the definition of a resilient supply chain is now more encompassing.
Methodology
Content analysis refers to making inferences from any type of text to tell whether its production process is effective and trustworthy. To analyze the literature systematically and objectively in a quantitative way, content analysis was selected as the research method in this study. The aim of adopting content analysis to review the literature is to reveal implicit information, clarify and assess the essential primary facts and developing trends, and provide informed predictions for the development of supply chain resilience. For this study, papers were selected from English-language academic journals and conference articles published between 2000 and 2022; the review was concentrated on this single language. The databases used to systematically review the literature associated with supply chain resilience were Scopus, Science Direct Journal, and Google Scholar. Description of review results, descriptive analysis, thematic categorization, and specific industry application were the standards used when searching for articles. The reviewing process is shown in Table 1. After searching and screening the databases, 597 papers were accumulated after being confirmed for substance and relevance, and 87 papers were selected and investigated in the final stage.
Table 1. The review process:
1. Searching articles: searched the databases of Scopus, Science Direct Journal, and Google Scholar with keywords.
2. Screening: included both journal and peer-reviewed conference publications to illustrate the history of the topic's development and new findings.
3. Exclusion: restricted the subject areas of the database to supply chain management, supply chain risk management, economics, logistics, industrial engineering, social sciences, and decision science (198 papers).
4. Critical and comprehensive content selection: based on synthesis and comparisons, all articles remaining after screening were reviewed thoroughly, and papers not related to the objectives were excluded.
5. Final article assessment: decided on the articles for investigation (87 papers).
Evolution of SCRes
In the primary literature, the objective of strengthening SCRes is to let the supply chain swiftly recover from unforeseen supply chain disruptions and recapture its original performance, or even obtain an improvement afterwards (Sheffi and Rice, 2005; Ponomarov and Holcomb, 2009; Hohenstein et al., 2015; Alfarsi et al., 2020; Spieske and Birkel, 2021). In this study, four different stages of SCRes are discussed, with proactive and/or reactive competences to diminish the influence of unexpected disruptions. Likewise, scholars have indicated that the capabilities of SCRes in mitigating sudden disturbances and returning the supply chain to its former or an even better state may lead to competitive advantages (Kamalahmadi and Parast, 2016; Yu et al., 2019; Al-Hakimi et al., 2021). In this study, SCRes is considered a dynamic capability (Yu et al., 2019; Simonovic and Arunkumar, 2016) for supply chains to prepare, adjust, respond, recover and grow before or after a sudden interruption (e.g., the outbreak of coronavirus disease). The concept of 'resilience' can be found in ecological, socio-ecological, and physical systems, economics, organizational, network engineering, and disaster management research (Ponomarov and Holcomb, 2009). For example, ecologists defined resilience as the capability of living systems to absorb change and bounce back from a disturbance and/or changing conditions (e.g., Holling, 1973; Jia et al., 2020). In addition, material scientists examined the ways objects revert to their initial structures after being deformed, whereas psychologists and sociologists conceptualized resilience as the capability of individuals, organizations, or communities to handle outside pressures and interruptions arising from political, social, and/or environmental changes, and management scholars investigated the function of personal resilience in leadership (Adger, 2000; Bonanno, 2004). The late 1990s witnessed the emergence of the supply chain resilience concept, with a surge of progress in the early 2000s and widespread applications in the 2010s (Pettit et al., 2019). Themes chosen by scholars have concentrated on the sustained refinement of the conceptualizations and parameters/dimensions of SCRes (Chowdhury and Quaddus, 2017; Brusset and Teller, 2017), network constructions or topologies in building network resilience (Kim et al., 2015; Dixit, 2020; Tsolakis et al., 2021), the depth and breadth of resilience and additional practical evaluations (Chowdhury and Quaddus, 2017; Scheibe and Blackhurst, 2018; Pettit et al., 2019), as well as strategies for managing the resilience of a specific supply chain (Autry et al., 2013; Silva et al., 2021). For supply chain managers, successfully managing sudden disruptions depends on whether they choose a suitable strategy when designing their supply chain (Chowdhury and Quaddus, 2017).
Scholars have deployed the concept of dynamic supply chain capabilities to explain how supply chain partners mobilize processes beyond organizational boundaries to build and/or revise capacities in response to market turbulence (Defee and Fugate, 2010; Beske, 2012; Aslam et al., 2020). Instead of analyzing SCRes from the perspective of network design (Kim et al., 2015; Alikhani et al., 2021), this study defines SCRes from the perspective of dynamic capabilities, which provides a new angle for companies' supply chain risk management. Winners in overcoming disruptions can effectively resist unexpected shocks inside and outside the supply chain by building capabilities and shaping resilience to mitigate the impact on daily operations of interruptions caused by an unexpected global crisis (e.g., COVID-19). Golgeci and Ponomarov (2013) regarded SCRes as an effective approach for risk mitigation and management. SCRes has been conceptualized in several studies on companies responding to risk by enhancing their dynamic adaptabilities (Scholten et al., 2019; Pettit et al., 2019; Dolgui et al., 2020). Fiksel (2006) characterized a supply chain as resilient if the network can operate and distribute products and/or services under upheavals, disruptions, and unforeseen events. Building on Fiksel (2006) and Ponomarov and Holcomb (2009), Pettit et al. (2013) and Sawik (2013) defined the resilient supply chain in terms of its competence to withstand a turbulent environment. Extensive research has proposed SCRes as a dynamic capability (Sheffi and Rice, 2005; Ponomarov and Holcomb, 2009; Golgeci and Ponomarov, 2013) that allows the supply chain to rapidly and efficiently prepare for, respond to, adapt to, and recover from disruptions (Juttner and Maklan, 2011; Blackhurst et al., 2011; Chen et al., 2019). Likewise, Hohenstein et al. (2015) described SCRes as a multidimensional concept that involves the sub-abilities of a supply chain (abbreviated as "subject" in this study) in responding and adapting to uncertainties. In this study, definitions of SCRes are divided into three dimensions based on the research of Hosseini and Barker (2016), namely restorative capacity, adaptive capacity, and absorptive capacity. Absorptive capacity is the degree to which a subject can assimilate impacts from disruptive events via proactive planning for resilience (Cheng and Lu, 2017) or strategic exploitation in the pre-disruption stage; it can be regarded as the first line of defense. Vugrin et al. (2011) and Rose (2009) viewed this capability as being ordinary and endogenous to systems. Adaptive capacity is defined as the extent to which a subject can adapt in the post-disruption stage to minimize negative consequences for performance (Tukamuhabwa et al., 2015; Adobor, 2020); as a component of a temporary strategy after the interruption, it forms the second line of defense. Restorative capacity is described as the extent to which a subject can recover permanently from a disruption (Hosseini et al., 2019). Unlike adaptive capacity, restorative capacity can be considered the last line of defense because of its longer-term nature (Hosseini and Barker, 2016). Kochan and Nowicki (2018) argued that the primary authors mainly define SCRes as an ability. Since then, SCRes capabilities have been investigated as antecedents of SCRes that create competitive advantages and achieve higher performance against sudden interruptions in the new industry environment.
Fundamental aspects of resilience comprise preparedness and emergency management from the perspectives of pre-disruption (readiness/preparedness) and post-disruption (response, recovery with adaptation, and growth) (Ivanov et al., 2014; Pettit et al., 2019; Spieske and Birkel, 2021). Consistent with the existing literature, this study asserts that supply chain managers need to develop proper strategies according to the four phases of SCRes to strengthen both the proactive and reactive capabilities of enterprises to prepare, respond, reconfigure, adapt, learn, and grow around disruptive events.
Conclusions
Disruptions may occur at different stages of SCRes and at different nodes of the supply chain, and maintaining functions and performance requires continuous and creative operations by the supply chain's participants. After reviewing all the selected articles, it became apparent that the "Growth" phase has been ignored by most of the existing literature. Hence, this study refines SCRes according to its different stages (i.e., absorptive capacity in the "readiness" stage, adaptive capacity in the "response" stage, and restorative capacity in the "recovery" stage) and extends the dimensions from three to four by adding the resilient abilities of the "Growth" stage. Under the new circumstances of COVID-19, the challenges caused by global disruptions have forced corporations to respond swiftly and to innovate operational patterns in order to maintain a functioning business and keep their supply chains effective and efficient (Al-Omoush et al., 2020). The existing literature largely analyzes SCRes from the perspective of system optimization (Hosseini and Barker, 2016; Fraccascia et al., 2018; Dixit, 2020; Ivanov and Dolgui, 2020; Govindan et al., 2020), which restricts SCRes to the system design level and limits the improvement of resilient supply chains. From this, the contributions of this study emerge. The new perspective of conceptualizing SCRes as a set of dynamic capabilities motivates future researchers to explore under which conditions SCRes capabilities are more effective in mitigating disruptions. This study reorients the SCRes reviewing
5,266.8
2022-03-22T00:00:00.000
[ "Business", "Engineering", "Environmental Science" ]
Dead time influence on operating modes of transistor resonant inverter with pulse frequency modulation (PFM)
This study explores the impact of dead time on the operating modes of a transistor resonant inverter, depending on the ratio of the transistor switching frequency to the resonant frequency of the series-resonance circuit in the diagonal of the transistor bridge. On the basis of theoretical data and experimental results, a dead-time limitation relation has been proposed not only for a minimum value but also for a maximum value. This extends the range of operating modes with zero-voltage switching (ZVS).
INTRODUCTION
The need to introduce dead time in the control of the transistors of one leg in power electronic converter circuits is well known. This is the time by which the turn-on signal of each transistor is delayed with respect to the turn-off signal of the other. In some articles, the necessity of such a time is only mentioned [1], while in others the related processes in voltage source inverters, resonant inverters, and DC/DC converters are studied, and different ways for its definition and implementation are offered. In [2] the time is shown on the graphical time diagrams, but the processes are not studied. In [3], a DC/DC converter with a zero-voltage switching (ZVS) algorithm is considered, and the dead-time calculation is based on the input capacitance of the MOSFET and the processes associated with the gate charge during switching. With regard to transistor resonant inverters, one of which is explored in this article, the following systematization can be made. Some companies recommend a fixed setting by an external resistor of the integrated control circuit, providing the corresponding graphical dependencies [4, 5]. In [6] a fixed value equal to 330 is set. In most studies, dead time is determined based on the recharging of the output capacitances of the MOSFETs of one leg, and the formulas offered for calculating its minimum value are almost identical [7-10]. A similar formula is offered in [11], with the delay time of the drivers added. When examining a transistor resonant inverter for induction heating in [12], pulse frequency modulation (PFM) is used and dead time is introduced, but the processes related to it are not researched. In [13] this time is optimized based on the ratio between the resonance capacitance and the transistor drain-source capacitance, taking into account the maximum switching frequency. Manual tuning based on experimental loss measurement in the converter is suggested in [14]. The article [15] is interesting because, besides the minimum value limitation, it offers a maximum value limitation as well. There are adaptive dead-time definitions, for example [16], presenting an adaptive adjustment scheme in the range of 0 to 3.5, suitable for high-voltage applications. In [17] a method for compensating dead time, based on a complex form of the modulation signal, is presented. As a summary of all the above, it can be concluded that the existing studies consider the development of the processes in the power circuit when the dead time is insufficient in value. Quite often, however, designers choose a higher value than needed, to be safe against changes in the load and power transistor parameters. No publications are known on how the operating modes change in case the dead time has a much greater value.
The problem is that, if this time is much greater, the zero-voltage switching mode of the resonant inverter changes. This increases the losses in the transistors and reduces the efficiency. The aim of this study is to present research on the impact of a dead time with a significantly (2 to 3 times) higher value than needed on the operating modes. This is done with a transistor bridge resonant inverter, for example for induction heating applications. Based on the analysis, the authors propose a new solution: besides a minimum value, dead time should also be limited to a maximum value, and a mathematical dependence is derived. Part 2 presents general theoretical information. Part 3 presents the results of experimental research in a slightly unusual way: the theoretical time diagrams and the experimental oscillograms are shown next to each other. On the basis of the experimental results in Part 3, theoretical conclusions and recommendations for the choice of dead time are made in Part 4.
THEORETICAL DATA
Figure 1 shows a power circuit diagram of a bridge transistor resonant inverter designed for induction heating, with a constant voltage supply. Depending on the load, by equivalent transformations the circuit in the diagonal of the bridge can be turned into a series-resonance circuit for which the corresponding resonance condition is fulfilled. It is known that the natural resonance frequency of this circuit is determined by the usual dependence on the circuit elements and the damping ratio. There is another frequency for the inverter, the switching frequency of the transistors set by the inverter control system. The output power is adjusted via pulse frequency modulation (PFM), a change of the switching frequency while maintaining symmetric control of the diagonally connected transistors (50% duty cycle if dead time is neglected). Depending on the ratio between the actual resonance frequency of the circuit and the switching frequency, different operating modes are known, with a frequency below or above the resonance or in resonance mode, in which the current through the diagonal can be interrupted or continuous. By mathematical analysis, the respective mathematical dependencies can be obtained and graphical dependencies for the conduction times of the transistors and diodes in the circuit can be drawn, as is done in [18], without taking the dead time into account. The theoretical information presented here is completed in Part 4 with theoretical conclusions based on the experimental research. Deliberately, the recharging of the output capacitances of the transistors of one leg, which as is known happens at the beginning of the dead time, is ignored here. This is done to differentiate the regimes in terms of the conduction intervals of the semiconductor devices in the circuit.
EXPERIMENTAL RESEARCH
The oscillograms presented here are from experiments under the following conditions: inverter supply voltage 200 V, transistors used: IRF450. The capacitances of these transistors are as follows: output capacitance 600, feedback capacitance 240. The maximum value of the drain current assumed in the design and implementation of the resonant inverter is 1.5 A. When the load changes, this maximum value can reach 15 A, but the smaller value is taken for the design, as the dead time is inversely proportional to the maximum current value. The calculations based on these data, using the formulas from page 18 of [11] and formula (7) of [7], give approximately the same result for the minimum dead-time value of about 200 ns.
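The exact minimum dead-time expressions from [7] and [11] are not reproduced in the text above. As a rough, hedged sanity check of the quoted order of magnitude only, the sketch below assumes (as those references reportedly do) that the load current must recharge the output capacitances of both transistors of the leg across the supply voltage; the picofarad unit for the capacitance and the simple 2·Coss·U/I form are assumptions for illustration, not the published formulas.

```python
# Back-of-the-envelope estimate of the minimum dead time for one inverter leg.
# Assumption: both output capacitances must be charged/discharged across the
# supply voltage by the (design) maximum drain current during the dead time.

U_d   = 200.0      # supply voltage, V (value from the text)
C_oss = 600e-12    # output capacitance per transistor; 600 from the text, farad unit assumed (600 pF)
I_max = 1.5        # design maximum drain current, A (value from the text)

t_dead_min = 2.0 * C_oss * U_d / I_max   # time for I_max to swing both node capacitances by U_d
print(f"estimated minimum dead time: {t_dead_min * 1e9:.0f} ns")  # ~160 ns, same order as the ~200 ns quoted
```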
In the practical implementation, the dead-time value was fixed at 600 ns. For all the oscillograms shown below, the CH1 current is monitored with a current probe with a 1:1 ratio and the CH2 voltage with a voltage probe with a 10:1 ratio. One such case is shown in Figure 2(b); the same is the case at a ratio of 0.5, Figure 3(a) and Figure 3(b). The theoretical resonance mode, ideally without dead-time consideration, is illustrated with the time diagrams in Figure 4. It can be seen that the reverse diodes do not conduct, i.e., there is no return of energy from the load circuit back to the power supply. If the control system supports this theoretical mode, it provides maximum energy efficiency. When dead time is introduced in the resonance mode, immediately after the switch-off of the respective pair of diagonally connected transistors their reverse diodes conduct, Figure 5(a) and Figure 5(b). It turns out that this mode is similar to that shown in Figure 3, regardless of the difference in the ratio of the switching and resonance frequencies. The operating modes shown in Figure 6 and Figure 7 are not described in the literature and are due only to the dead time, although the switching frequency is higher than the resonance frequency. Typically, this mode of operation, in the absence of dead time, is expected to provide ZVS. The interesting feature in both modes is the alternation of conduction intervals of one pair of diagonally connected diodes with those of the other pair. In Figure 7, the duration of the two intervals is the same. As can be seen, neither of the modes provides ZVS. In the case shown in Figure 8, when the dead time coincides in duration with the conduction interval of a pair of reverse diodes before the switching of their respective transistors, ZVS is provided. This borderline case is used below to set the maximum dead-time limitation. It is considered because, at a further increase of the switching frequency, the reverse diode of the respective transistor always conducts during the dead time and ZVS is achieved.
THEORETICAL CONCLUSIONS
From what is shown in Part 3 it can be concluded that, apart from the minimum limit, the dead time must also have a maximum limit to provide a ZVS mode in a wider interval of frequency variation around the resonance value. It is sufficient to add to the recharging time of the output capacitances of the transistors [7, 11] the time for switching on the reverse diodes, i.e., to ensure that the dead time does not exceed the sum of these two times (4). The question is what the value of the diode turn-on time is. Regarding the switching time of the diodes used in DC/DC converters, there is a detailed study [19] showing that for the different types it ranges from 2 to 15 ns. In this case, however, it concerns the reverse diode of the MOSFET, which is actually a parasitic transistor in the structure. At the gate of the transistor there is no turn-on signal, as there is in synchronous rectifiers; this signal is provided only after the dead time. In this case, the published studies on synchronous rectifiers cannot be used. From this point of view, it can be said that in the manufacturers' documentation there are no precise data on the turn-on time of the reverse diode. For example, with regard to the IRF450 transistors used here, [20] states: "Intrinsic turn-on time is negligible. Turn-on speed is substantially controlled by LS + LD". For the sum of the inductances of the two terminals of the transistor, the value given is 6.1 nH.
Taking the above into account, the authors offer the following way of determining this time: before the MOSFET reverse diode starts to conduct, the drain-source capacitance of the transistor needs to be discharged by the current through the diagonal of the inverter bridge to a voltage of approximately 0.6 V. Therefore, it can be assumed that a voltage of this value is applied across the sum of the terminal inductances. Then the rate of current change through them and through the reverse diode will be di/dt = U/(LS + LD) (5), or, in the described case, di/dt = 0.6 V / 6.1 nH ≈ 0.1 A/ns (6). Therefore, at a known value of the current that must be reached through the reverse diode, the turn-on time follows directly. For example, for the case shown in Figure 8(b), for a current of 1 A the time is about 10 ns; at a current value of 10 A the time would be about 100 ns.
CONCLUSIONS
As a result of this study of the dead-time impact on the operating modes of a bridge transistor resonant inverter, an operating mode around the resonance frequency with consecutive conduction of the pairs of diagonally connected reverse MOSFET diodes, which has not been described until now, is established. A formula for the maximum limitation of the dead-time value is offered, in addition to the minimum value limitation known so far. An example of how to use it is shown as well.
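To make the worked example above concrete, the following sketch reproduces the arithmetic of Eqs. (4)-(6) in Python. The 0.6 V / 6.1 nH values and the example currents are those quoted in the text; the ~200 ns recharge time is the minimum dead-time figure from the preceding section, and the helper name diode_turn_on_time is ours, introduced only for illustration.

```python
# Sketch of the maximum dead-time limitation described above (Eqs. (4)-(6)).

U_F     = 0.6        # voltage across the terminal inductances while the body diode takes over, V
L_total = 6.1e-9     # L_S + L_D of the IRF450 package, H (value from [20])
di_dt   = U_F / L_total                    # Eq. (5): rate of current rise, ~0.1 A/ns

def diode_turn_on_time(i_diode):
    """Time needed for the reverse-diode current to reach i_diode (Eq. (6))."""
    return i_diode / di_dt

t_recharge = 200e-9  # output-capacitance recharging time (minimum dead time) quoted in the text, s

for i in (1.0, 10.0):
    t_don = diode_turn_on_time(i)
    t_dead_max = t_recharge + t_don        # Eq. (4): upper limit on the dead time
    print(f"I = {i:4.1f} A: diode turn-on ~{t_don*1e9:5.1f} ns, "
          f"maximum dead time ~{t_dead_max*1e9:5.1f} ns")
```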
2,664.2
2019-12-01T00:00:00.000
[ "Engineering", "Physics" ]
Seismicity in the Northern Rhine Area (1995–2018)
Since the mid-1990s, the local seismic network of the University of Cologne has produced digital seismograms. All data underwent daily routine processing. For this study, we re-processed data of almost a quarter century of seismicity in the Northern Rhine Area (NRA), including the Lower Rhine Embayment (LRE) and the Eifel Mountain region (EMR). This effort included refined discrimination between tectonic earthquakes, mine-induced events, and quarry blasts. While routine processing comprised the determination of local magnitude ML, in the course of this study source spectra-based estimates for moment magnitude MW were calculated for 1332 earthquakes. The resulting relation between ML and MW agrees well with the theory of an ML ∝ 1.5 MW dependency at magnitudes below 3. By applying the Gutenberg-Richter relation, the b-value for ML (0.82) was lower than that for MW (1.03). Fault plane solutions for 66 earthquakes confirm the previously published N118° E direction of maximum horizontal stress in the NRA. Comparison of the seismicity with recently published Global Positioning System-based deformation data of the crust shows that the largest seismic activity during the observation period in the LRE occurred in the region with the highest dilatation rates. The stress directions agree well with the trend of major faults, and declining seismicity from south to north correlates with decreasing strain rates. In the EMR, earthquakes concentrate at the fringes of the area with the largest uplift.
Introduction
The Northern Rhine Area (NRA) mainly comprises the region between the rivers Mosel, Rhine, and Maas (Fig. 1). Observation of its moderate seismicity started around 1905 with a station in the city of Aachen (Haußmann 1907). However, the main purpose of this station was to discriminate between natural tectonic earthquakes and mine tremors, as the mine owners wanted to avoid reimbursements for damages caused by natural earthquakes (Hinzen 2011). In 1906 Haußmann, a mine surveyor and professor at the Technical University in Aachen, ordered a 1.3-ton Wiechert seismometer from the Spindler and Heuer company in Göttingen. Because he could not go himself, he sent his assistant Ludger Mintrop to Emil Wiechert, the first professor of geophysics at the Göttingen University, to receive instructions for maintaining the seismometer. Consequently, the interests of Mintrop, now regarded as the "father of exploration seismology," shifted from surveying to seismology (Haußmann 1907). The original station, located in one of the mines, was moved to a surface location because of trouble with high subsurface humidity that affected the recording paper. In 1908, Mintrop (1909a, 1909b) moved to Bochum and installed a second seismic station. These two remained the only local stations operating until sometime during WW2. After the war, the Euskirchen earthquake of 1951 (Berg 1953), of magnitude 5.7 with damage of intensity VIII, triggered the interest of Martin Schwarzbach. At that time, Schwarzbach was director of the Geological Institute of the Universität zu Köln. He was eventually well known for his pioneering work on geology and climate (Schwarzbach 1963) but had no background as a seismologist. After applying for help from Wilhelm Hiller from Stuttgart, Schwarzbach established a local station in Bensberg (BNS), located 12 km east of the center of Cologne; the station rested on Devonian hard rock (Fig. 1).
This provided a more favorable place for seismic recording than the university campus near downtown Cologne on the eastern rim of the LRE, above 360 m of Tertiary and Quaternary sediments (Ahorner 2008). Ludwig Ahorner expanded the one-station enterprise into a network in the 1970s, referred to as the BENS network. Between 1995 and 2000, all BENS network stations were converted to digital recording, and in 2006, the number of permanent stations reached 43, including 20 accelerometer stations. Several scientific projects in the past explored different aspects of seismology in the NRA between Rhine, Maas, and Mosel, for which data from the permanent seismic stations provided a valuable database. From 1976 to 1982, the "Plateau Uplift" project employed an interdisciplinary approach to better understand the vertical movements of the Rhenish Shield, still in the era before the Global Positioning System (GPS) was available (Fuchs et al. 1983; Ahorner 1983). In the frame of the DECORP project in the 1980s, deep sounding seismic reflection profiles were measured throughout the Rhenish Massif (Meissner and Bortfeld 1990; DECORP 1991), the results improving the understanding of crustal structure and the distribution of seismic velocities. The PALEOSIS project (Camelbeeck and Meghraoui 1996; Camelbeeck et al. 2001) revealed evidence for the first time of strong surface-rupturing earthquakes during the Holocene in the LRE. The Eifel Plume Project, a large seismic tomography survey in 1997/1998, covered an area of approximately 400 × 250 km² centered on the Eifel volcanic fields with more than 200 permanent and temporary stations (Ritter et al. 2001). Results from the study strengthened the hypothesis of the existence of a mantle plume below the Eifel. Previously, in a tomographic study down to 70 km, Braun and Berckhemer (1993) had found an upper crustal low-velocity body under the Vogelsberg volcano. As part of the AGRIPPINA project, array techniques to use ambient noise for the exploration of soft sediments were advanced with data measured in the LRE (Scherbaum et al. 2003). Reamer and Hinzen (2004) transferred about 96,000 hand-written phase readings of earthquakes recorded between 1975 and 1995 with the BENS network into digital format and used the database to construct an optimized 1D velocity model and calibrate parameters for ML determination. Beginning in 1999, the routinely processed data of the BENS network were reported online (http://www.seismo.uni-koeln.de, last accessed August 2020); through this link, the current earthquake catalog is accessible. In this paper, we report on the reprocessing of earthquakes recorded between 1995 and 2018, which comprise all events of the BENS network for which digital seismograms are available until the end of the observation period. The reprocessing of earthquakes in the study area (50° to 52° N and 6.5° to 8.5° E) included (1) control and, if necessary, re-picking of phases, (2) relocation, (3) re-discrimination of blasts and mine tremors falsely registered as tectonic earthquakes, and (4) for the first time the determination of moment magnitude MW for all earthquake records where the signal-to-noise ratio (SNR) was sufficiently high, providing the data necessary to (5) update the magnitude-frequency relation for the study area. (6) Fault plane solutions were determined for events with more than ten unambiguous polarity readings.
(7) The seismicity results are then further discussed with respect to GPS-based data of crustal deformation, which recently became available through work by Kreemer et al. (2020).
The network
At the beginning of the observation period (1995), the BENS network consisted of seven short-period analog (FM tape) or semi-digital (PCM tape) recording stations. With the exception of one station (JUL) on the sediments in the LRE and inside the compound of an experimental nuclear facility, all other stations were located on the Rhenish Shield (Fig. 1). As noted, after 1995, all stations were converted to digital recording with AD converters originally designed by the Royal Observatory of Belgium (Snissaert 1992). Subsequent years saw the addition of several new short-period stations and one broadband station (DREG). In 2001, a dozen stations equipped with short-period sensors for surveying the open lignite pit mines west of Cologne were integrated into the BENS network. The locations of these "mining" stations changed from time to time, due to the advancing work face of the mines. In 2001, the recording hardware was updated to commercial 24-bit AD converters and industry-standard PCs with continuous recording in Nordic format (Havskov et al. 2020), which was still the status of the monitoring stations at the time of this contribution. This format was chosen for the acquisition software because all routine processing was done with the SeisAn software package (http://seis.geus.net/software/seisan/, last accessed July 2020). Seismic event timing is controlled by DCF77 clocks. The network reached its current size of 43 stations in 2006 when 20 accelerometer stations were added (Hinzen and Fleischer 2007). Most of these strong-motion stations are in the free field, placed on the soft sediments of the LRE, with sediment thicknesses changing from a few decameters to 1.3 km, and in close vicinity to the active faults. These stations provided good records of events with magnitudes down to ML 1 or even lower in case of small hypocentral distances. [Fig. 1 caption fragment: four accelerometer stations; topography after Farr et al. (2007); the simplified geologic information is based on the 1:200,000 geologic map of Germany (Zitzmann 2003) (Hinzen et al. 2012; Hinzen 2014).] The increase of the number of active stations after 2006 to 43 reduced the median inter-station distance within the network from 66 km (in 1995) to 44 km. Station locations and parameters are available at http://www.seismo.uni-koeln.de/station/netz.htm (last accessed July 2020). In addition to the BENS network (University of Cologne 2016), data from stations of neighboring networks are shown in Fig. 1. These include the Royal Observatory of Belgium (ORB) (Belgium 1985), the Koninklijk Nederlands Meteorologisch Instituut (KNMI 1993), Ruhr-Universität Bochum (RUB Germany 2007), the Federal Institute for Geosciences and Natural Resources (GRSN 1976), Erdbebendienst Südwest (LED and LER), and the Geologischer Dienst NRW (GD NRW). Data from these stations were regularly utilized to locate earthquakes; however, magnitudes in the following are based only on seismograms from the BENS network. Since 2003, regular meetings twice a year with colleagues from ORB and KNMI have helped to coordinate and improve cross-border seismic activities in the Rhine-Maas area and thereby continue the initiative of the EU project "Rapid Transfrontier Seismic Data Exchange Network", which was coordinated by the British Geological Survey between 1994 and 1997.
Reprocessing
Data reprocessing included visual inspection of all phase picks, control and, if necessary, re-picking of arrivals with unusual residuals, re-evaluation of event type (tectonic earthquake, mine-induced, explosion), re-localization, determination of ML, and (with sufficient SNR) determination of MW from distance-corrected displacement spectra of P- and/or S-phases. MW had previously been determined only for 39 selected earthquakes with magnitudes larger than 2 which occurred between 1975 and 2001 (Reamer and Hinzen 2004). The distance correction used for the ML determination in this study is the one for the NRA given by Reamer and Hinzen (2004), in which A is half the peak-to-peak Wood-Anderson maximum trace amplitude (with an amplification of 2080) and R the hypocentral distance.
Statistics
The raw catalog contained 14,782 events, of which 7778 were categorized as tectonic or mine-induced earthquakes within the study area between 50° N and 52° N and between 5.5° E and 8.5° E. After reprocessing, 3330 tectonic earthquakes remained in the list, and MW could be determined for 1332 of them. (For 16 earthquakes in 1995, no magnitude could be determined.) The numerous non-tectonic events include mine-induced events from the deep coal mines in the Ruhr district (e.g., Casten and Cete 1980; Hinzen 1982; Gibowicz et al. 1990; Bischoff et al. 2010), open-pit lignite mines west of Cologne (Ahorner and Schaefer 2002), and quarry blasts. Not all detected quarry blasts were included in the daily routine processing; only examples of special interest were analyzed, based on the subjective decision of the seismologist on duty. The number and the distribution of quarry blasts in the study area were examined by Hinzen and Pietsch (2000), who estimated that, at the time of the study, about 21,000 blasts per year (80 per working day) were fired, roughly 200 times the number of tectonic earthquakes. The sheer volume posed a particular challenge for the discrimination in routine processing. Figure 2 shows the number of earthquakes per year for the 24-year period starting in 1995. The effect of densification of the network, with the lowering of the detection threshold between 1995 and 2006, is indicated by the increase of detected tectonic events from 30 to more than 100 per year. Since 2010, the yearly number is around 200 earthquakes, with the increase primarily due to smaller earthquakes with magnitudes below ML 1.0 being detected. The high number of almost 500 earthquakes in 2011 is due to the aftershock sequence of the 14 February event near Nassau (Hinzen 2019). Further statistics on earthquake occurrence are shown in Fig. 3, which breaks down the number of events per hour of the day, per minute, and per weekday. Time is local time and takes into account the daylight saving time change in March and October of each year.
Earthquake map
The relocated epicenters of the dataset are shown on the map in Fig. 4; a complete list is available in Online Resource 1. Seismicity is concentrated in a NW-SE trending stripe which follows the direction of the maximum horizontal stress (N118° E) as was determined earlier from fault plane solutions (Hinzen 2003). Seismicity north of the Neuwied Basin in the Ahr Area and the Siebengebirge (Fig. 1) forms a ~20-km-wide stripe trending parallel to the Rhine River valley with a sharp drop of activity towards NE and SW. On the right side of the river the activity extends north up to 50.9° N.
Further north of this latitude, there are only two earthquakes in the dataset east of the Rhine River. Both occurred in 2015 within 20 min, with magnitudes of ML 1.9 and 1.6 at depths of 2-3 km. There were hints from the public who submitted macroseismic questionnaires that these events might have been associated with drilling activities in the area; however, this could not be confirmed. The main seismic activity in the LRE is located in the western part, with two stripes parallel to the Erft Fault System and the Rurrand Fault. The activity continues further north in the Netherlands, where the fault is designated as the Peel Boundary Fault, the location of the 1992 Roermond earthquake (ML 6), where seismic activity continued during the observation period. The highest concentration of microearthquakes was observed in the Neuwied Basin south of the Laacher See volcano (compare Fig. 1). As stated in Section 4, these events include tectonic earthquakes as well as earthquakes, some with hypocenters below the crust, which can be associated with ongoing volcanic activity in the East Eifel volcanic field (Hensch et al. 2019). For earthquakes with ten or more clearly identifiable polarity readings, fault plane solutions were made with a grid search. For 66 events, well-determined solutions were found and are listed in Online Resource 1; the mechanisms for 27 of these with ML ≥ 2.5 are shown in Fig. 4.
ML/MW relation
The use of a consistent magnitude scale is essential in many sectors of seismological research. The moment magnitude (Hanks and Kanamori 1979) has become a standard in the past decades. However, many (local) seismic networks, including BENS, do not determine MW on a routine basis, as ML determination proves adequate particularly for earthquakes with magnitudes below 5. But particularly in regions with low to moderate seismicity and scarce earthquakes of magnitude 5 and above, the lower end of the magnitude-frequency relation is important for the determination of seismic hazard. Because many empirical relations between ground motion and magnitude are now based on MW, a reliable relation between ML and MW is essential when such ground-motion models are applied. [Fig. 4 caption fragment: (Hinzen 2003). The graph on the right-hand side of the map shows the depth distribution of the earthquakes along a north-south profile; the background color indicates the P-wave velocity of the 1D model (Reamer and Hinzen 2004), and numbers below the scale give the P-wave velocity in kilometers per second at major discontinuities. Focal mechanisms are shown for 27 earthquakes with ML ≥ 2.5. Thin black lines link the focal spheres to the epicenters on the map. Black and white dots indicate the P and T axes, respectively. TEF Tegelen Fault, VIF Viersen Fault, PBF Peel Boundary Fault, WBF Western Border Faults, RRF Rurrand Fault, EFS Erft Fault System.] In the same way as the distance calibration functions for ML have to be determined individually for every region (Hutton and Boore 1987), there is no universal ML/MW relation that can be applied for all networks (Deichmann 2017). We used spectral analysis of P- and S-wave phases to determine source parameters, including the seismic moment, the corner frequency of the source spectrum, and the related moment magnitude. Model spectra following Brune (1970) were fitted to distance-corrected displacement spectra using the SeiSan (Havskov et al. 2020) spectral modeling tool for all traces with sufficient SNR.
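The concrete processing steps and the grid search used in this study are described in the next paragraph. As a rough, self-contained illustration of the final fitting step only, the sketch below fits a Brune-type source model to an already distance- and attenuation-corrected displacement spectrum by a simple grid search over the low-frequency plateau Ω0 and the corner frequency fc. The grid ranges, the log-misfit measure, and the function names are our own assumptions, not SeisAn's implementation, and the conversion of Ω0 into a seismic moment (which requires distance, radiation pattern, density, and wave speed) is deliberately omitted; moment_magnitude uses the standard MW definition for M0 in N·m.

```python
import numpy as np

def brune_model(f, omega0, fc):
    """Brune (1970) displacement source spectrum: flat at omega0 below fc, falling as f^-2 above."""
    return omega0 / (1.0 + (f / fc) ** 2)

def fit_brune(freqs, spectrum):
    """Grid search over plateau level omega0 and corner frequency fc (spectrum must be positive)."""
    best_misfit, best_omega0, best_fc = np.inf, None, None
    peak = spectrum.max()
    for omega0 in np.logspace(np.log10(peak) - 2, np.log10(peak) + 1, 60):
        for fc in np.logspace(np.log10(freqs[1]), np.log10(freqs[-1]), 60):
            misfit = np.mean((np.log10(spectrum) - np.log10(brune_model(freqs, omega0, fc))) ** 2)
            if misfit < best_misfit:
                best_misfit, best_omega0, best_fc = misfit, omega0, fc
    return best_omega0, best_fc

def moment_magnitude(M0):
    """Moment magnitude from the seismic moment M0 in N·m (standard definition)."""
    return (2.0 / 3.0) * (np.log10(M0) - 9.1)
```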
The main processing steps are (1) removal of the DC offset, (2) application of a cosine taper over 10% of the signal length at both ends, (3) FFT, (4) restitution of the instrument response, and (5) correction for attenuation along the travel path assuming a Q0 of 540 (Romanowicz and Mitchell 2007). To fit the model spectra to the observed data, a frequency band with suitable SNR is selected by comparison to a time window of equal length prior to the first arrival, and then a grid search of the low-frequency level of the spectrum and the corner frequency is used to find the best fit. The results of all fitting procedures were visually checked. The vertical component was used to analyze P-phases, and both horizontal components for the S-phase, if possible. A total of 1332 MW values was determined from 16,295 spectra. Both linear and quadratic relations have been used to parametrize empirical relations between ML and MW (e.g., Grünthal and Wahlström 2003; Grünthal et al. 2009; Allmann et al. 2010; Goertz-Allmann et al. 2011; Munafò et al. 2016; Deichmann 2017). For our dataset with −0.7 ≤ ML ≤ 4.6, both types of relations fit the data equally well (Fig. 5b; Eqs. (2) and (3)). [Fig. 5 caption fragment: the lines show relations, respectively, for earthquakes in Switzerland, and the dark green line the relation by Grünthal et al. (2009) for Central Europe. The dashed blue line gives the relation determined by Reamer and Hinzen (2004) for earthquakes in the Northern Rhine Area. All relations are shown within the magnitude range for which they were determined.]
Gutenberg-Richter model
Figure 6 shows two diagrams with the Gutenberg-Richter relation (Gutenberg and Richter 1956).
Discussion
Even though the routine processing of the data from the BENS network has been competently performed over the years, the reprocessing has shown that the overall data quality could still be improved. The occurrence time statistics from Fig. 3 can indicate whether a significant number of man-made events remains in the catalog. In contrast to naturally occurring tectonic earthquakes, man-made events are not uniformly distributed in time. In particular, quarry blasts are bound to working days (Monday to Friday) and are often fired at specific and regular times, often around midday and at the full hour (Hinzen and Pietsch 2000). Contamination with falsely classified quarry blasts would show up as skew in the histograms in Fig. 3. The clear trend of the histogram, with 170 earthquakes per hour at night times and less than 100 earthquakes in the middle of the day, is the reverse of the trend of the daily noise level. In the densely populated and highly industrialized study area (with an extensive highway and train network), there is a significant shift in the noise level affecting the frequency range of local earthquakes between 0.5 and 20 Hz; thus, the detection threshold increases from night to daytime, indicated by the boxplots of the magnitude distribution per hour overlaid in Fig. 3a. From 10 to 12 and 15 to 16 h local time, the average number of events per hour is 82. At 13 and 14 h, the number increases to 109 and 115, respectively. This excess of about 50 events in 2 h might at first be seen as a sign of misclassified blasts. However, the ML 4.3 Nassau earthquake on 14 February 2011 (Hinzen 2019) had a strong aftershock sequence, and 40 of these occurred between 13 and 15 h local time. Furthermore, the number of events per minute does not indicate larger values around the full hour, as would be the case with more blasts.
The largest number per day of the week was observed on Mondays with 511, but again 67 of these were aftershocks of the 2011 Nassau earthquake. Neglecting these aftershocks, the average number on a work day is 452 ± 14. The somewhat higher number of events on Sundays (491) and Saturdays (511) can be attributed to the lower noise levels on weekends. In Fig. 5, the relation between ML and MW for the NRA is compared to results of previous studies. The slightly different slope of the linear relation of 0.691, compared to the 0.722 derived by Reamer and Hinzen (2004) for the same region, can be attributed to the different magnitude ranges covered. Reamer and Hinzen (2004) used 39 events with magnitudes ML above 2.0. For most of those, only a limited number of phases could be used due to the few digital seismograms available from the study period 1975 to 2001, compared to 1332 events for this study with many more digital seismograms available. Grünthal et al. (2009) updated the work by Grünthal and Wahlström (2003) and proposed an ML/MW relation for Central Europe (Eq. (4)) based on 221 data pairs with original MW determinations mostly from Swiss moment tensor solutions. The data set on which Eq. (4) is based mainly contained earthquakes with MW above 2 and up to 6; only six events had magnitudes below 1. The relation by Goertz-Allmann et al. (2011) (Eq. (5)) agrees with our linear fit (Eq. (2)) at ML 2, but predicts slightly larger values at low magnitudes, with MW = 1.58 compared to 1.45 at ML = 1.0. Referring to the stochastic waveform models of Hanks and Boore (1984), Edwards et al. (2015) proposed to change the slope in relation (5) at ML smaller than 2 to a value of 2/3 (0.667) instead of 0.594 (Eq. (6); Deichmann 2017). In a recent paper, Munafò et al. (2016), by applying random vibration theory, showed that for small earthquakes with MW below 4, ML is proportional to the logarithm of the seismic moment, and the corresponding relationship between the magnitudes is MW = 2/3 ML + C′ (7), where C′ is a free constant individual to the region and data set. In a test with a data set from the Upper Tiber Valley (northern Apennines, Italy) of 1191 earthquakes, they determined a C′ of 1.15. Assuming a fixed slope of 2/3 for our data set of 1332 events in the NRA, the best fit is C′ = 0.802 (Fig. 5), i.e., MW = 2/3 ML + 0.802 (8), which predicts MW = 1.47 at ML = 1.0 compared to 1.45 with relations (2) and (3). Deichmann (2017) did a detailed study using model calculations and empirical analysis and, referring to Hanks and Boore (1984), Edwards et al. (2010, 2015), and Munafò et al. (2016), concluded that in an attenuating medium, for small earthquakes below some magnitude threshold, the 2/3 slope is almost irrefutable. The question of what the magnitude threshold is below which the 2/3 slope applies is not easy to answer. Deichmann (2017) found a gradual transition even in a rather homogeneous dataset of induced events from a limited source volume, all recorded at the same borehole station, and concluded that the magnitude threshold is clearly a function of the degree of attenuation along the travel path. For a dataset like ours, which averages the ML/MW relation over a comparatively large area of epicenters and uses data from some 40 stations, Deichmann (2017) places the threshold somewhere between MW 2 and 4, depending on the attenuation conditions. As our dataset contains only 21 events with ML above 3.0 but 929 with ML between 0.5 and 2.0, an unambiguous threshold cannot be stated; however, below ML 3.0, the application of Eq. (8) seems to be justified.
These differences between ML and MW at low magnitudes also affect the corresponding Gutenberg-Richter relations. For our dataset, the b-value for the ML relation (0.82) is lower than for the MW relation (1.03). Deichmann (2017) reported a similar trend for the earthquakes of the Swiss earthquake catalog, with a lower b-value for ML compared to MW; however, in that study, the MW values were converted from ML using the relation by Goertz-Allmann et al. (2011) with the modification by Edwards et al. (2015) (Eqs. (5) and (6)). For BENS, values of MW were directly determined from the seismic records; Fig. 5b shows, in addition to the directly determined MW values (black dots), the values that result when the measured ML data are converted to MW using Eq. (8) (red circles). Below MW = 2.5, the b-value of the converted data (1.02) is similar to the one obtained from measured data (1.03), but due to the influence of the few stronger earthquakes, a change in the b-value of the converted data to 1.11 occurs at magnitudes above 2.5. The largest measured MW is 4.3, but the corresponding converted value is only 3.8. As an additional check on the clustering of hypocenters, the double-difference (DD) method (Waldhauser 2001) was applied to the dataset of 51,209 differential arrival times. Limiting the search for clusters to events with a minimum of four links resulted in 824 hypocenters in 18 clusters. Figure 7 shows these earthquakes together with the rest of the dataset and the frequency-depth distribution of the earthquakes in nine selected areas. The three areas within the LRE, termed Erft, Rur, and Roermond, contain about 25% of all earthquakes in the database. Most events in the Erft cluster occurred at depths between 6 and 15 km, with a maximum depth of 26 km. The epicenters align in a pattern parallel to the Erft Fault System (Fig. 7) at a horizontal distance indicating a westward dip of the faults of about 50 to 55°. In the Rur cluster, more earthquakes are located at shallower depths between 4 and 14 km. A clear association with one of the faults in the Rur area is not easy because at depth the events can occur on either the westward-dipping Rurrand Fault or on one of the east-dipping Western Border Faults. For example, the fault plane solution of the 2002 Alsdorf earthquake (MW 4.3, depth 15.8 km) indicates a normal faulting mechanism on a 52° west-dipping plane which can be associated with the Rurrand Fault (Hinzen and Reamer 2007). The earthquakes in the Roermond cluster are in the vicinity of the strongest earthquake in the NRA in the past century, with MW 5.4, and linked to the Peel Boundary Fault (Fig. 7). The Voerendaal cluster in the Dutch province of Limburg shows a large percentage (46%) of hypocenters at shallow depths between 2 and 6 km. These earthquakes were part of small swarms of events between 2000 and 2001 (Dost et al. 2004). The proximity to former coal mines suggests the possibility that these shallow events may be causally associated with the rising water table after the abandonment of these mines (Sigaran-Loria and Slob 2019), which produces heterogeneous surface displacements detected by satellite radar interferometry (Caro Cuenca et al. 2013). The clusters in the Ardennes and in the West Eifel show hypocenters in the lower crust, almost reaching the Moho at 30 km. The high percentage of events at depths of 12 to 14 km in the West Eifel is due to two small swarms of microearthquakes in 2010 at this depth level.
More than 40% of all earthquakes are located within the Neuwied Basin cluster. While the majority of tectonic earthquakes in and around the basin have shallow hypocenters of less than 10 km, some events have been observed at depths reaching 40 km, well below the Moho discontinuity. Hensch et al. (2019), using data from temporary as well as permanent stations, showed that deep low-frequency microearthquakes below the Laacher See Volcano occurred in four distinct clusters between 10- and 40-km depth. The Laacher See is the caldera of the last eruption in the East Eifel volcanic field 12.9 kyr ago, which was fed by a shallow magma chamber at 5- to 8-km depth, erupting a total magma volume of 6.7 km³ (Zolitschka et al. 2000; Schmincke 2007, 2009). In the Taunus, the majority of hypocenters occur at depths shallower than 15 km. Some of the events form lineaments in the SW-NE direction, roughly parallel to faults in the Devonian bedrock. In the Ahr cluster, two bands of epicenters stretch NE-SW and NNE-SSW across the Rhine River. The northern part of these bands includes earthquakes in the Siebengebirge, where the last volcanic activity occurred in the Miocene, and north of it on the eastern side of the Rhine River. Hypocenters in this cluster are mainly shallow, around 2 to 5 km and between 10 and 14 km. Of the 66 fault plane solutions from the earthquakes with more than 10 unambiguous polarity readings (Online Resource 1 and Fig. 4), 30% have strike-slip character, while the others mainly conform to a normal faulting mechanism. Application of the linear inversion algorithm of Michael (1984) reveals a trend of the largest compressive stress in the NRA of N118° E. This value is identical to the maximum horizontal stress direction determined by Hinzen (2003). (Also see the rose diagram in Fig. 4; eight of the 66 fault plane solutions from this study were also part of the database used by Hinzen 2003.) In a recently published paper, Kreemer et al. (2020) used Global Positioning System (GPS) data to robustly image vertical land motion (VLM) and horizontal strain rates over most of intraplate Europe. They found a clear, spatially coherent positive VLM anomaly over a large area surrounding the Eifel volcanic fields, with a maximum uplift of ∼1 mm/year at the center (when corrected for glacial isostatic adjustment), and significant horizontal extension surrounded by a radial pattern of shortening. This combination strongly suggests a common dynamic cause (Kreemer et al. 2020). Their uplift model postulates a mantle plume at the bottom of the lithosphere. The location of the modeled plume agrees well with the findings by Ritter (2007) from the large-scale seismic experiment, the Eifel Plume project. The deformation field deduced by Kreemer et al. (2020) is based on continuous GPS data of more than 2100 stations operating across Europe for up to ~20 years. The GPS station density in the central part of Europe in the Kreemer et al. (2020) dataset includes more than 70 stations within our study area (Fig. 8), sufficient to compare the deformations with the seismicity recorded in the study area over the past quarter century. The dataset from Kreemer et al. (2020) contains horizontal and vertical velocities and deformation resolved on a 0.1° grid. From these data, we calculated the maximum shear strain rate (Eq. (9)), where λ1 and λ2 are the eigenvalues of the strain rate tensor with components ε̇xx, ε̇yy, and ε̇xy, and the directions of the maximum shear strain rate (Eq. (10)) (Hackl et al. 2009).
The trace of the tensor corresponds to the relative variation rate of the surface area (dilatation) (Hackl et al. 2009): δ = ε̇xx + ε̇yy (11). Figure 8 shows the seismicity with respect to the vertical land movement corrected for the far-field glacial isostatic adjustment signal (Kreemer et al. 2020) and the dilatation rate after Eq. (11) with the overlain horizontal velocities. [Fig. 8 caption: Major faults are shown in black, and the edge of the Rhenish Shield is indicated by white lines. (a) The colored contours show the vertical land movement determined from GPS measurements by Kreemer et al. (2020), corrected for the far-field glacial isostatic adjustment signal; triangles give the locations of volcanoes in the West and East Eifel volcanic fields (after Meyer 2013). (b) Same map area as in (a), with color-coded dilatation rates and vectors of horizontal velocities based on a model inferred from the GPS data by Kreemer et al. (2020).] The vertical movements show a clear maximum slightly above 1 mm/year in the center of the Eifel, between the West and East Eifel volcanic fields (Meyer 2013; Schmincke 2009). In the area of the largest vertical movements (around 1 mm/year), with its center at 50.3° N and 7.0° E, few earthquakes have been observed in the past 25 years. On the other hand, at the fringe of that area, where the gradients of vertical movement are large with respect to the center of the uplifted area, clusters of earthquakes exist (i.e., along the Rhine River valley and in the Neuwied Basin, in the Hunsrück, in the Ahr region extending to the Siebengebirge, and northwest of the West Eifel volcanic field). With the exception of a small swarm of earthquakes with magnitudes below 0.5, which occurred within a few days in 2010, few events were recorded in the West Eifel volcanic field. The largest dilatation rates are found at the western border of the LRE, with a maximum of 3.7 × 10⁻⁹/year at 50.7° N and 6.4° E. As shown in Fig. 8b, the horizontal velocities are practically zero at the center of the dilatation and increase with distance from it, pointing radially away. In the range of the largest dilatation lies the Rur cluster of earthquakes (Fig. 7), which also includes the 2002 Alsdorf earthquake (MW 4.3) (Hinzen 2005). In addition, increased seismicity was observed here in the two years following the 1992 MW 5.4 Roermond earthquake, which occurred 40 km northwest of the Alsdorf event (Hinzen and Reamer 2007). A similar pattern of horizontal extension in this area was noted by Camelbeeck and van Eck (1994) based on an analysis of 24 fault plane solutions.
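As a numerical illustration of the quantities defined in Eqs. (9)-(11), the following sketch computes the maximum shear strain rate, its direction, and the dilatation from the components of a 2-D strain rate tensor. Since Eqs. (9) and (10) are not reproduced in the extracted text, the sketch uses the standard definitions (half the difference of the principal strain rates, and a direction 45° from the principal axes), which may differ in detail from the exact expressions in Hackl et al. (2009); the dilatation follows Eq. (11) directly. The example values are arbitrary, chosen only to be of the same order as the 3.7 × 10⁻⁹/year maximum quoted above.

```python
import numpy as np

def strain_rate_invariants(exx, eyy, exy):
    """Principal values, maximum shear strain rate, dilatation, and shear direction
    of a 2-D strain rate tensor (assumed standard definitions, cf. Eqs. (9)-(11))."""
    E = np.array([[exx, exy],
                  [exy, eyy]])
    lam1, lam2 = sorted(np.linalg.eigvalsh(E), reverse=True)   # principal strain rates
    max_shear  = 0.5 * (lam1 - lam2)          # assumed form of Eq. (9)
    dilatation = exx + eyy                    # Eq. (11): trace of the tensor
    theta_principal = 0.5 * np.degrees(np.arctan2(2.0 * exy, exx - eyy))
    theta_shear     = theta_principal + 45.0  # assumed form of Eq. (10): 45 deg from principal axes
    return lam1, lam2, max_shear, dilatation, theta_shear

# arbitrary example values in units of 1/year
lam1, lam2, max_shear, dil, theta = strain_rate_invariants(2.0e-9, 1.5e-9, 0.5e-9)
print(f"max shear rate {max_shear:.2e}/yr, dilatation {dil:.2e}/yr, shear direction {theta:.1f} deg")
```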
The maximum shear strain rate (Eq. (9)) shown in Fig. 9a together with the seismicity reveals that the lowest shear strain values are found in the Eifel, south of the West Eifel volcanic field, and in the northern part of the LRE. The low shear strain in the Eifel may correlate with the low seismicity in this area. In addition to the shear strain values, the two (indistinguishable) directions of maximum shear (Eq. (10)) are shown in Fig. 9a. In addition to the directions at the grid points of the map (gray symbols), the directions along the major faults in the LRE and along some lineaments of the seismicity pattern within the Rhenish Shield are plotted (red symbols). The directions agree well with the trend of the faults; e.g., they align even with the bends in the trend of the Viersen Fault and the Erft Fault System (Fig. 9a). [Fig. 9 caption: (a) Color-coded maximum shear strain map based on a model inferred from the GPS data by Kreemer et al. (2020). The crosses give the direction of the maximum shear strain rate, with their size scaled to the shear strain rate amplitude. Gray crosses are shown at every third grid point (for better visibility). The red crosses show the directions along the Quaternary faults of the Lower Rhine Embayment and along some lineaments in the southern part of the map. Gray circles show epicenters of earthquakes. TEF Tegelen Fault, VIF Viersen Fault, PBF Peel Boundary Fault, WBF Western Border Faults, RRF Rurrand Fault, EFS Erft Fault System. (b) The shear strain rate amplitudes are plotted with respect to latitude along a N-S trending profile at 6° E starting at the southern tip of the LRE (indicated in the map by the green line); the red dots show the number of earthquakes in 5-km-wide horizontal stripes along the profile, overlapping by half the bin width.] The size of the symbols, scaled by the maximum shear strain, displays a clear decrease of this value from south to north in the LRE. To quantify this observation, in Fig. 9b the strain rate is plotted along a south-north trending profile at 6° E (shown in Fig. 9a). In addition, Fig. 9b shows the number of earthquakes along this profile in 5-km-wide east-west trending stripes, which overlap by half of the bin width, from the southern end of the LRE to the northern limit of the study area. The decrease of earthquake frequency in the LRE from south to north correlates well with the decrease in shear strain rate. The directions of shear are also found to be parallel to some lineaments in the seismicity pattern in the West Eifel and the Middle Rhine Area. In particular, the NW-SE trend of the limits of seismicity agrees well with the NW-SE shear direction (Fig. 9a).
Conclusions
The reprocessing of almost a quarter century of digital seismic data from the BENS network resulted in an updated earthquake catalog. Direct determination of moment magnitudes of 1332 small earthquakes supports the hypothesis ML ∝ 1.5 MW for magnitudes below 3. An updated Gutenberg-Richter relation with measured moment magnitudes (1.5 ≤ MW ≤ 4.3) reveals a higher b-value of 1.03 than the 0.83 b-value based on local magnitudes. Fault plane solutions confirm a maximum horizontal stress at N118° E. Comparing the seismicity with recently published GPS-based deformation data shows that most earthquakes in the Lower Rhine Embayment in the past two and a half decades occurred within an area corresponding to maximum crustal dilatation. The direction of shear strain agrees well with the trend of major normal faults of the LRE. The decrease in seismic activity from south to north correlates with the decrease in shear strain rate. In the Eifel Mountain Region, the area with the largest uplift rates (reaching 1 mm/year) shows few earthquakes; however, at the fringes of this area, where the gradient of vertical movement is large, numerous earthquakes occurred during the observation period. In the Middle Rhine Area, lineaments in the seismicity pattern are parallel to one of the maximum shear directions. The Northern Rhine Area has low to moderate seismicity. The 25 years of earthquake data from the BENS network do not include any spectacular damaging earthquakes. This deceptively low level of seismic hazard has led the public and authorities to repeatedly underestimate the earthquake risk in this densely populated and highly industrialized area. The damaging earthquakes of the past century and those known from the historical record signify the risk.
A recurrence of one of the surface-rupturing earthquakes evidenced by paleoseismic studies would result in major destruction in the study area. Therefore, continued and even intensified surveillance of the seismic activity in the NRA is essential for mitigating earthquake risk, in addition to monitoring the volcanic activity in the Eifel Region. Supplementary Information: The online version contains supplementary material available at https://doi.org/10.1007/s10950-020-09976-7. [Acknowledgement fragment: and RWE Power AG.] Open access funding was provided by project DEAL. Compliance with ethical standards: Conflict of interest: The authors declare that they have no conflict of interest.
LAB-QA2GO: A Free, Easy-to-Use Toolbox for the Quality Assessment of Magnetic Resonance Imaging Data

Image characteristics of magnetic resonance imaging (MRI) data (e.g., the signal-to-noise ratio, SNR) may change over the course of a study. To monitor these changes, a quality assurance (QA) protocol is necessary. QA can be realized both by performing regular phantom measurements and by controlling the human MRI datasets (e.g., noise detection in structural datasets or movement parameters in functional datasets). Several QA tools for the assessment of MRI data quality have been developed, and many of them are freely available. This allows in principle the flexible set-up of a QA protocol specifically adapted to the aims of one's own study. However, setting up and maintaining these tools takes substantial time, in particular since installation and operation often require a fair amount of technical knowledge. In this article we present a lightweight virtual machine, named LAB-QA2GO, which provides scripts for fully automated QA analyses of phantom and human datasets. The virtual machine is ready for analysis the first time it is started. With minimal configuration in the guided web interface, the first analysis can start within 10 min, and adaptation to local phantoms and needs is easily possible. The usability and scope of LAB-QA2GO are illustrated using a data set from the QA protocol of our lab. With LAB-QA2GO we hope to provide an easy-to-use toolbox that is able to calculate QA statistics without high effort.

INTRODUCTION
Over the last 30 years, magnetic resonance imaging (MRI) has become an important tool both in clinical diagnostics and in basic neuroscience research. Although modern MRI scanners generally provide data with high quality (i.e., high signal-to-noise ratio, good image homogeneity, high image contrast and minimal ghosting), image characteristics will inevitably change over the course of a study. They also differ between MRI scanners, making multicenter imaging studies particularly challenging (Vogelbacher et al., 2018). For longitudinal MRI studies, stable scanner performance is required not only over days and weeks but over years, for instance to differentiate between signal changes that are associated with the time course of a disease and those caused by alterations in the MRI scanner environment. Therefore, a comprehensive quality assurance (QA) protocol has to be implemented that monitors and possibly corrects scanner performance, defines benchmark characteristics and documents changes in scanner hardware and software (Glover et al., 2012). Furthermore, early-warning systems have to be established that indicate potential scanner malfunctions. The central idea of a QA protocol for MRI data is the regular assessment of image characteristics of an MRI phantom. Since the phantom delivers more stable data than living beings, it can be used to disentangle instrumental drifts from biological variations and pathological changes. Phantom data can be used to assess, for instance, geometric accuracy, contrast resolution, ghosting level, and spatial uniformity. Frequent and regular assessments of these values are needed to detect gradual and acute degradation of scanner performance. Many QA protocols additionally complement the assessment of phantom data with the analysis of human MRI datasets.
For functional imaging studies, in which functional signal changes are typically just a small fraction (∼1-5%) of the raw signal intensity, the assessment of the temporal stability of the acquired time series is particularly important, both within a session and between repeated measurements. The documented adherence to QA protocols has therefore become a key benchmark to evaluate the quality, impact and relevance of a study (Van Horn and Toga, 2009). Different QA protocols for MRI data are described in the literature, mostly in the context of large-scale multicenter studies [for an overview, see Van Horn and Toga (2009) and Glover et al. (2012)]. Depending on the specific questions and goals of a study, these protocols typically focused either on the quality assessment of structural MRI data (e.g., Gunter et al., 2009) or of functional MRI data. QA protocols were also developed for more specialized study designs, for instance in multimodal settings such as the combined acquisition of MRI with EEG (Ihalainen et al., 2015) or PET data (Kolb et al., 2012). Diverse MRI phantoms are used in these protocols, e.g., the phantom of the American College of Radiology (ACR) (ACR, 2005), the Eurospin test objects (Firbank et al., 2000) or gel phantoms proposed by the Function Biomedical Informatics Research Network (FBIRN) consortium. These phantoms were designed for specific purposes. Whereas, for instance, the ACR phantom is well suited for testing the system performance of an MRI scanner, the FBIRN phantom was primarily developed for fMRI studies. A wide array of QA algorithms is used to describe MR image characteristics, for instance the so-called "Glover parameters" applied in the FBIRN consortium [for an overview see Glover et al. (2012) and Vogelbacher et al. (2018)]. Many algorithms are freely available [see, e.g., C-MIND (Lee et al., 2014); CRNL (Chris Rorden's Neuropsychology Lab [CRNL], 2018); ARTRepair (Mazaika et al., 2009); C-PAC (Cameron et al., 2013)]. This allows in principle the flexible set-up of a QA protocol specifically adapted to the aims of one's own study. The installation of these routines, however, is often not straightforward. It typically requires a fair level of technical experience, e.g., to install additional image processing software packages or to handle the dependence of the QA tools on specific software versions or hardware requirements. For instance, some QA algorithms require the installation of standard image processing tools [e.g., Artifact Detection Tool (http://web.mit.edu/swg/software.htm); PCP Quality Assessment Protocol (Zarrar et al., 2015)], while others are integrated in different imaging tools [Mindcontrol (https://github.com/akeshavan/mindcontrol); BXH/XCEDE (Gadde et al., 2012)]. Some pipelines can be integrated into commercial programs, e.g., MATLAB [CANlab (https://canlab.github.io/); ARTRepair], or into large image processing systems [e.g., XNat (Marcus et al., 2007); C-MIND], some of which have their own QA routines. Other QA pipelines can only be used online, by registering a user account and uploading data to a server [e.g., LONI (Petrosyan et al., 2016)]. Commercial software tools [e.g., BrainVoyager (Goebel, 2012)] mostly have their own QA pipeline included. Docker-based QA pipeline tools also exist [e.g., MRIQC (Esteban et al., 2017)]. In 2009, we conducted a survey in 240 university hospitals and research institutes in Germany, Austria and Switzerland to investigate which kinds of QA protocols were routinely applied (data unpublished). The results show that in some centers a comprehensive QA protocol is established, but that in practice most researchers in the cognitive and clinical neurosciences have only a vague idea of the extent to which QA protocols are implemented in their studies and how to deal with potential temporal instabilities of the MRI system. To get started performing QA on MRI systems, we developed an easy-to-use QA tool which provides on the one hand a fully automated QA pipeline for MRI data (with a defined QA protocol), but is on the other hand easy to integrate into most imaging systems and does not require particular hardware specifications. In this article we present the main features of our QA tool, named LAB-QA2GO.
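As a concrete illustration of the kind of temporal-stability metric mentioned above (the FBIRN "Glover parameters" include, among others, the signal-to-fluctuation-noise ratio, SFNR), the following minimal Python sketch computes an SFNR estimate from a 4D phantom EPI series. It is not part of LAB-QA2GO; the file name, ROI size and the linear-detrending choice are illustrative assumptions.

```python
import numpy as np
import nibabel as nib  # any 4D NIfTI reader would do

def sfnr_from_epi(path, roi_half_width=10):
    """Estimate the SFNR of a phantom EPI series: voxel-wise temporal mean
    divided by the temporal SD of the linearly detrended signal, averaged
    over a central ROI (cf. the FBIRN/Glover QA parameters)."""
    data = nib.load(path).get_fdata()                    # shape (x, y, z, t)
    t = np.arange(data.shape[-1])

    # Remove a linear drift voxel-wise before estimating the fluctuation noise.
    design = np.vstack([np.ones_like(t), t]).T           # (t, 2)
    flat = data.reshape(-1, data.shape[-1]).T            # (t, voxels)
    beta, *_ = np.linalg.lstsq(design, flat, rcond=None)
    resid = flat - design @ beta

    mean_img = flat.mean(axis=0).reshape(data.shape[:3])
    std_img = resid.std(axis=0, ddof=1).reshape(data.shape[:3])
    sfnr_map = np.divide(mean_img, std_img,
                         out=np.zeros_like(mean_img), where=std_img > 0)

    # Average over a central square ROI on the middle slice.
    cx, cy, cz = (s // 2 for s in data.shape[:3])
    roi = sfnr_map[cx - roi_half_width:cx + roi_half_width,
                   cy - roi_half_width:cy + roi_half_width, cz]
    return float(roi.mean())

# Example (hypothetical file name):
# print(sfnr_from_epi("qa_gel_phantom_epi.nii.gz"))
```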
In the following, we give more information on the technical implementation of the LAB-QA2GO tool (see section "Technical Implementation of LAB-QA2GO"), present a possible application scenario ("center-specific QA") (see section "Application Scenario: Quality Assurance of an MRI Scanner") and conclude with an overall discussion (see section "Discussion").

TECHNICAL IMPLEMENTATION OF LAB-QA2GO
In this section, we describe the tool LAB-QA2GO (version 0.81, 23 March 2019) and its technical background, outline the different QA pipelines and describe the practical implementation of the QA analysis. These technical details are included as part of a manual in a MediaWiki (version 1.29.0; https://www.mediawiki.org/wiki/MediaWiki/de) that is part of the virtual machine. The MediaWiki can also serve for the documentation of the laboratory and/or study.

Technical Background
LAB-QA2GO is a virtual machine (VM). Virtual machines are common tools to virtualize a full system. The hypervisor for a virtual machine allocates its own set of resources to each VM of the host PC, so each VM is fully isolated. Because of this isolation, each VM has to update its own guest operating system. Another virtualization approach could have been based on Linux containers (e.g., Docker). Docker is a program that performs operating-system-level virtualization; this hypervisor uses the resources allocated to the host PC and isolates just the running processes, so only the software has to be updated to update all containers. For our tool we wanted a fully isolated system: fixed software versions, independent of the host PC, are more likely to guarantee the functionality of the tool. Due to the virtualization, the tool is already fully configured and easy to integrate in most hardware environments. All functions for running a QA analysis are installed and immediately ready for use. All additionally required software packages (e.g., FSL) are also preinstalled and preconfigured. Only a few configuration steps have to be performed to adapt the QA pipeline to one's own data. Additionally, we developed a user-friendly web interface to make the software easily accessible for inexperienced users. The VM can either be integrated into the local network environment to use the automatization steps, or it can be run as a stand-alone VM. With the stand-alone approach, the MRI data have to be transferred manually to the LAB-QA2GO tool. The results of the analysis are presented on the integrated web-based platform (Figure 1). The user can easily check the results from every workstation (if the network approach is chosen). We chose NeuroDebian (Halchenko and Hanke, 2012; version 8.0) as the operating system for the VM, as it provides a large collection of neuroscience software packages (e.g., Octave, MRIcron) and has a good standing in the neuroscience community. To keep the machine small, i.e., to limit the space required for the virtual drive, we included only the packages necessary for the QA routines in the initial setup, but users are free to add packages according to their needs. To avoid license fees, we opted to use only open source software. The full installation documentation can be found in the MediaWiki of the tool.
To provide a web-based, user-friendly interface, to present the results of the QA pipelines and to receive the data, the lightweight lighttpd web server (version 1.4.35) is used. The web-based interface can be accessed with any web browser (e.g., the web browser of the host or the guest system) using the IP address of the LAB-QA2GO tool. This web server needs little hard disk space, and all required features can easily be integrated. The downscaled Picture Archiving and Communication System (PACS) tool Conquest (version 1.4.17d) is used to receive and store the Digital Imaging and Communications in Medicine (DICOM) files. Furthermore, we installed PHP (version 5.6.29-0) to realize the user interface interaction. Python (version 2.7.9) scripts are used for the general scheduling, to move the data into the given folder structure, to start the data-specific QA scripts, to collect the results and to write the results into HTML files. The received DICOM files are converted into the Neuroimaging Informatics Technology Initiative (NIfTI) format using the dcm2nii tool [version: 4AUGUST2014 (Debian)]. To extract the DICOM header information, the tool dicom2 (version 1.9n) is used, which converts the DICOM header into an easily accessible and readable text file. For each QA routine a reference DICOM file can be uploaded, and a DICOM header check will be performed to ensure identical protocols (using pydicom version 1.2.0). To set up the DICOM header comparison, we read the DICOM header of an initial data set (which has to be uploaded to the LAB-QA2GO tool) and compare all follow-up data sets with this header. Here, we investigate a subset of the standard DICOM fields (i.e., orientation, number of slices, frequencies, timing, etc.) which will change if a different protocol is used. We do not compare DICOM fields that typically change between two measurements (e.g., patient name, acquisition time, study date, etc.). Any change in the relevant standard DICOM fields will be highlighted on the individual result page. A complete list of the compared DICOM header fields can be found in the openly available source code on GitHub. The QA routines were originally implemented in MATLAB (Hellerbach, 2013; Vogelbacher et al., 2018) and were adapted to GNU Octave (version 3.8.2) for LAB-QA2GO. The NeuroImaging Analysis Kit (NIAK) (version boss-0.1.3.0) was used for handling the NIfTI files, and graphs were plotted using matplotlib (version 1.4.2), a plotting library for Python. Finally, to process human MRI data we used the image processing tools of the FMRIB Software Library (FSL, version 5.0.9).
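The DICOM header consistency check described above can be sketched roughly as follows, using pydicom (which the tool also employs). The selection of fields and the file paths are illustrative assumptions; the complete field list used by LAB-QA2GO is documented in its source code on GitHub.

```python
import pydicom

# Assumed subset of protocol-relevant DICOM fields (illustrative only).
PROTOCOL_FIELDS = [
    "Rows", "Columns", "ImageOrientationPatient", "PixelSpacing",
    "SliceThickness", "RepetitionTime", "EchoTime", "ImagingFrequency",
]

def check_protocol(reference_path, followup_path, fields=PROTOCOL_FIELDS):
    """Compare protocol-relevant header fields of a follow-up DICOM file
    against a reference acquisition and report any deviations."""
    ref = pydicom.dcmread(reference_path, stop_before_pixels=True)
    new = pydicom.dcmread(followup_path, stop_before_pixels=True)

    deviations = {}
    for field in fields:
        ref_val = getattr(ref, field, None)
        new_val = getattr(new, field, None)
        if ref_val != new_val:
            deviations[field] = (ref_val, new_val)
    return deviations

# Example (hypothetical paths): a non-empty result would be highlighted
# on the individual result page.
# print(check_protocol("reference.dcm", "todays_measurement.dcm"))
```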
Motion Correction FMRIB's Linear Image Registration Tool (MCFLIRT) was used to compute movement parameters of fMRI data, and the Brain Extraction Tool (BET) to obtain a binary brain mask.

QA Pipelines for Phantom and for Human MRI Data
Although the main focus of the QA routines was on phantom datasets, we added a pipeline for human datasets (raw DICOM data from the MR scanner). To specify which analysis should be started, LAB-QA2GO uses unique identifiers to run either the human or the phantom QA pipeline.

For Human Data Analysis
We use movement parameters from fMRI and the noise level from structural MRI as easily interpretable QA parameters (Figure 3). The head movement parameters (translation and rotation) are calculated using FSL MCFLIRT and FSL FSLINFO with default settings, i.e., motion parameters relative to the middle image of the time series. Each parameter (pitch, roll, yaw, and movement in x, y, and z direction) is plotted for each time point in a graph (Figures 3A,B). Additionally, a histogram of the step width between two consecutive time points is generated to detect large movements between two time points (Figures 3C,D). For structural MRI data, a brain mask is first calculated by FSL's BET (using the default values). Subsequently, the basal noise of the image background (i.e., the area around the head) is detected. First, a region of interest (ROI) is defined in the corner of the three-dimensional image. Second, the mean of this ROI, scaled by an initial user-defined threshold multiplier, is used to mask the head in a first step. Third, for every axial and sagittal slice the edges of the scalp are detected using a differentiation algorithm between two images to create a binary mask of the head (Figure 3G). Fourth, this binary mask is multiplied with the original image to obtain the background of the head image. Fifth, a histogram of the intensity values contained in this background is generated (Figure 3). The calculated mask is saved to create images for the report. In addition, a basal SNR (bSNR) value is calculated as the quotient of the mean intensity in the brain mask and the standard deviation of the background signal. Each value is presented individually in the report, so that it is easy to see which parameter influenced the SNR value. These two methods should give the user an overview of the existing noise in the image. Both methods can be independently activated or deactivated by the user to run the QA routines individually.

Practical Implementation of QA Analyses in LAB-QA2GO
The LAB-QA2GO pipelines (phantom data, human data) are preconfigured, but require unique identifiers as part of the DICOM field "patient name" to distinguish between data sets, i.e., to determine which pipeline should be used for the analysis of a specific data set. Predefined identifiers are "Phantom", "ACR", and "GEL" in the field "patient name", but these can be adapted to local needs. These unique identifiers have to be entered on the configuration page (a web-based form) of the VM (Figure 4). The algorithm checks the field "patient name" of the DICOM header, so the unique identifier has to be part of the "patient name" and has to be set during the registration of the patient at the MR scanner. The MRI data are integrated into the VM either by sending them ("DICOM send", network configuration) or by providing them manually (directory browsing, stand-alone configuration). Using the network configuration, the user first has to register the IP address of the VM as a DICOM receiver in the PACS.
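The identifier-based selection of the QA pipeline described above can be sketched as follows. The identifier-to-pipeline mapping, the function names and the file path are illustrative placeholders, not the toolbox's actual code.

```python
import pydicom

# Illustrative mapping from unique identifiers to QA pipelines; in LAB-QA2GO
# these identifiers are set on the web-based configuration page.
PIPELINES = {
    "GEL": "gel_phantom_qa",
    "ACR": "acr_phantom_qa",
    "PHANTOM": "generic_phantom_qa",
}
DEFAULT_PIPELINE = "human_data_qa"

def select_pipeline(dicom_path):
    """Pick a QA pipeline based on the identifier embedded in 'patient name'."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    patient_name = str(getattr(ds, "PatientName", "")).upper()
    for identifier, pipeline in PIPELINES.items():
        if identifier in patient_name:        # partial match, as in the tool
            return pipeline
    return DEFAULT_PIPELINE

# Example (hypothetical file): a measurement registered as "QA_phantom_GEL"
# would be routed to the gel phantom analysis.
# print(select_pipeline("incoming/measurement_0001.dcm"))
```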
LAB-QA2GO runs the Conquest tool as the receiving process to receive the data from the local setup, i.e., the MRI scanner, the PACS, etc., and stores them in the VM. Using the stand-alone configuration, the user has to copy the data manually to the VM. This can be done using, e.g., a USB stick or a shared folder with the host system (provided by the virtualization software). In the stand-alone configuration, the VM can handle data in both DICOM and NIfTI format. The user has to specify the path to the data in the provided web interface and then simply press start. If the data are present as DICOM files, the DICOM send process is started to transfer the DICOM files to the Conquest tool and to run the same routine as described above. If the data are present in NIfTI format, the data are copied into the temporary folder and the same routine is started without converting the data. After the data are available in the LAB-QA2GO tool, the main script for the analysis is either started automatically at a chosen time point or can be started manually by pressing a button in the web interface. The data processing is visualized in Figure 5. First, the data are copied into a temporary folder. Data processing is performed on NIfTI-formatted data; if the data are in DICOM format, they are converted into NIfTI format using the dcm2nii tool. Second, the names of the NIfTI files are compared to the predefined unique identifiers. If the name of a NIfTI data set partly matches a predefined identifier, the corresponding QA routine is started (e.g., the gel phantom analysis; see section "QA Pipelines for Phantom and for Human MRI Data"). Third, after each calculation step, an HTML file for the analyzed dataset is generated. In this file, the results of the analysis are presented (e.g., the movement graphs for functional human datasets). In Figure 6, we show an exemplary result file for the analysis of gel phantom data. Furthermore, an overview page for each analysis type is generated or updated. On this overview page, the calculated parameters of all measurements of one data type are presented as a graph. An individual acceptance range, which is visible in the graph, can be defined using the configuration page. Additionally, all individual measurement result pages are linked at the bottom of the page for a detailed overview. Outliers (defined by either an automatically calculated or a self-defined acceptance range) are highlighted so that they can be detected easily.

APPLICATION SCENARIO: QUALITY ASSURANCE OF AN MRI SCANNER
There are many possible application scenarios for the LAB-QA2GO tool. It can be used, for instance, to assess the quality of MRI data sets acquired in specific neuroimaging studies (e.g., Frässle et al., 2016) or to compare MRI scanners in multicenter imaging studies (e.g., Vogelbacher et al., 2018). In this section we describe another application scenario, in which the LAB-QA2GO tool is used to assess the long-term performance of one MRI scanner ("center-specific QA"). We illustrate this scenario using data from our MRI lab at the University of Marburg. The aim of this QA is not to assess the quality of MRI data collected in a specific study, but to provide continuous information on the stability of the MRI scanner across studies.

Center-Specific QA Protocol
The assessment of MRI scanner stability at our MRI lab is based on regular measurements of both the ACR phantom and a gel phantom. The phantoms are measured at fixed time points.
The ACR phantom is measured every Monday and Friday, the gel phantom every Wednesday. All measurements are performed at 8 a.m., as the first measurement of the day. For calculating the QA statistics, the LAB-QA2GO tool is used in the network configuration. As unique identifiers (see section "Technical Implementation of LAB-QA2GO"), we determined that all phantom measurements must contain the keywords "phantom" and either "GEL" or "ACR" in the "patient name." If these keywords are detected by LAB-QA2GO, the processing pipelines for the gel phantom analysis or the ACR phantom analysis, respectively, are started automatically. In the following, we describe the phantoms and the MRI protocol in detail. We also present examples of how the QA protocol can be used to assess the stability of the MRI scanner.

Gel Phantom
The gel phantom is a 23.5-cm-long, 11.1-cm-diameter cylindrical plastic vessel (Rotilabo, Carl Roth GmbH + Co. KG, Karlsruhe, Germany) filled with a mixture of 62.5 g agar and 2000 ml distilled water. In contrast to the widely used water-filled phantoms, agar phantoms are more suitable for fMRI studies. On the one hand, their T2 values and magnetization transfer characteristics are more similar to brain tissue (Hellerbach, 2013). On the other hand, gel phantoms are less vulnerable to scanner vibrations and thus avoid a long settling time prior to data acquisition. For the gel phantom, we chose MR sequences that allow the temporal stability of the MRI data to be assessed. This stability is particularly important for fMRI studies, in which MRI scanners are typically operated close to their load limits. The MRI acquisition protocol consists of a localizer, a structural T1-weighted sequence, a T2*-weighted echo planar imaging (EPI) sequence, a diffusion tensor imaging (DTI) sequence, another fast T2*-weighted EPI sequence and, finally, the same T2*-weighted EPI sequence as at the beginning. The comparison of the quality of the first and the last EPI sequence in particular allows an assessment of the impact of a highly stressed MRI scanner on the imaging data. The MRI parameters of all sequences are listed in Table 1.

ACR Phantom
The ACR phantom is a commonly used phantom for QA. It uses a standardized imaging protocol with standardized MRI parameters (for an overview, see ACR, 2005, 2008). The protocol tests geometric accuracy, high-contrast spatial resolution, slice thickness accuracy, slice position accuracy, image intensity uniformity, percent-signal ghosting, and low-contrast object detectability.

Phantom Holder
Initially, both phantoms were manually aligned in the scanner and fixed using soft foam rubber pads. The alignment of the phantoms was evaluated by the radiographer performing the measurement and, if necessary, corrected using the localizer scan. To reduce the spatial variance related to different placements of the phantom in the scanner and to shorten the time-consuming alignment procedure, we developed a Styrofoam phantom holder (Figure 7). On the one hand, the phantom holder allowed a more time-efficient and standardized alignment of the phantoms within the scanner; the measurement volumes of subsequent MR sequences could be placed automatically in the center of the phantom. On the other hand, the variability of the QA statistics related to different phantom mountings was strongly reduced. This allowed a more sensitive assessment of MRI scanner stability (see Figure 8, left).
In Figure 8, we present selected QA data (from the gel phantom) collected over a period of 22 months (February 2015-December 2016) during the set-up of a large longitudinal imaging study (FOR 2107). The analysis of phantom data is able to show that changes in the QA protocol (such as the introduction of a phantom holder, Figure 8A), technical changes of a scanner (such as the replacement of the MRI gradient coil, Figure 8B) or changes in certain sequence parameters (such as adding the prescan normalization option, Figure 8C) impact many of the QA statistics in a variety of ways. It is also possible to use QA statistics to quantify the data quality of different MRI scanners (Figure 8D). In summary, this exemplary selection of data shows the importance of QA analyses for assessing the impact of external events on the MRI data. The normal ranges of many QA statistics, both in mean and in variance, change drastically whenever hardware or software settings are changed at a scanner.

FIGURE 8 | Selected QA statistics from gel phantom measurements. The data were collected over a duration of >1.5 years (February 2015-December 2016) during the set-up of a large longitudinal imaging study (FOR 2107). (A) After the implementation of a phantom holder (October 2015), the variance of many QA statistics was considerably reduced, as exemplarily shown for the signal-to-fluctuation-noise ratio (SFNR). This made it possible to detect outliers in future measurements (defined as four times the SD of the mean; red arrows). (B) In June 2016, the gradient coil had to be replaced. This had a major impact on the percent signal ghosting (PSG). (C) Changes in the MRI sequence, such as the introduction of the "prescan normalization" option that is used to correct for non-uniform receiver coil profiles prior to imaging, have a significant impact on the MRI data. This can be quantified using phantom data, as seen in the PSG. (D) Imaging data in the study were collected at two different scanners. The scanner characteristics can also be determined using QA statistics, as shown for the SFNR [for a detailed description of the QA statistics, see Vogelbacher et al. (2018)].

DISCUSSION
In this article, we described a tool, LAB-QA2GO, for the fully automatic quality assessment of MRI data. We developed two different types of QA analyses: a phantom and a human data QA pipeline. In its present implementation, LAB-QA2GO is able to run an automated QA analysis on data of ACR phantoms and gel phantoms. The ACR phantom is a widely used phantom for the QA of MRI data. It tests in particular spatial properties, e.g., geometric accuracy, high-contrast spatial resolution or slice thickness accuracy. The gel phantom is mainly used to assess the temporal stability of the MRI data. For the phantom data analysis, we used a wide array of previously described QA statistics (for an overview see, e.g., Glover et al., 2012). Although the main focus of the QA routines was the analysis of the phantom datasets, we additionally developed routines to analyze the quality of human datasets (without any pre-processing steps). LAB-QA2GO was developed in a modular fashion, making it easy to modify existing algorithms and to extend the QA analyses by adding self-designed routines. The tool is available for download on GitHub (https://github.com/vogelbac). License fees were avoided by using only open source software. LAB-QA2GO is ready to use in about 10 min; only a few configuration steps have to be performed. The tool does not need any further software or hardware requirements. LAB-QA2GO can receive MRI data either automatically ("network approach") or manually ("stand-alone approach"). After sending data to the LAB-QA2GO tool, the analysis of the MRI data is performed automatically. All results are presented in an easily readable and easy-to-interpret web-based format. The simple access via web browser allows user-friendly operation without any specific IT knowledge and keeps the maintenance of the tool minimal. Results are presented in both tabular and graphical form. By inspecting the graphics on the overview page, the user is able to detect outliers easily. Potential outliers are highlighted by a warning sign. In each overview graph, an acceptance range (green area) is visible. This area can be defined for each graph individually (except for the ACR phantom, because of the fixed acceptance values defined by the ACR protocol). To set up the acceptance range for a specific MRI scanner, we recommend some initial measurements to define the acceptance range. If a measurement is not in this range, this might indicate performance problems of the MRI scanner. Different QA protocols that assess MRI scanner stability are described in the literature, mostly designed for large-scale multicenter studies (for an overview see, e.g., Glover et al., 2012). Many of these protocols and the corresponding software tools are openly available. This allows in principle the flexible set-up of a QA protocol adapted for specific studies. The installation of these routines, however, is often not easy and typically requires a fair level of technical experience, e.g., to install additional image processing software or to deal with specific software versions or hardware requirements. LAB-QA2GO was therefore developed with the aim of creating an easily applicable QA tool. On the one hand it provides a fully automated QA pipeline, but on the other hand it is easy to install on most imaging systems. Therefore, we envision that the tool might be a tailor-made solution for users without a strong technical background or for MRI laboratories without the support of large core facilities. Moreover, it also gives experienced users a minimalistic tool to easily calculate QA statistics for specific studies. We outlined several possible application scenarios for the LAB-QA2GO tool. It can be used to assess the quality of MRI data sets acquired in small (with regard to sample size and study duration) neuroimaging studies, to standardize MRI scanners in multicenter imaging studies or to assess the long-term performance of MRI scanners. We illustrated the use of the tool by presenting data from a center-specific QA protocol. These data showed that it was possible to detect outliers (i.e., bad data quality at some time points), to standardize MRI scanner performance and to evaluate the impact of hardware and software adaptations (e.g., the installation of a new gradient coil). In the long run, the successful implementation of a QA protocol for imaging data does not only comprise the assessment of MRI data quality. QA has to be implemented on many different levels.
A comprehensive QA protocol also has to encompass technical issues (e.g., monitoring of the temporal stability of the MRI signal, in particular after hardware and software upgrades; use of a secure database infrastructure that can store, retrieve, and monitor all collected data; documentation of changes in the MRI environment, for instance with regard to scanner hardware and software updates) and should optimize management procedures (e.g., the careful coordination and division of labor, the actual data management, the long-term monitoring of measurement procedures, compliance with regulations on data anonymity, and the standardization of MRI measurement procedures). It also has to deal, especially at the beginning of a study, with the study design (e.g., the selection of functional MRI paradigms that yield robust and reliable activation, and the determination of the longitudinal reliability of the imaging measures). Nonetheless, the fully automatic quality assessment of MRI data constitutes an important part of any QA protocol for neuroimaging data. In the present version of the LAB-QA2GO toolbox, we used relatively simple metrics to characterize MRI scanner performance (e.g., Stöcker et al., 2005). Although these techniques were developed many years ago, they are still able to provide useful and easily accessible information for today's MRI scanners. They might, however, not be sufficient to characterize all aspects of modern MRI scanner hardware. Many MR scanners are by now equipped with phased-array coils and a number of amplifiers and multiplexers. Parallel imaging has also been available for many years, and multiband protocols are becoming more and more common. Small changes in a system's performance, e.g., slightly degraded coil elements or a decreased SNR of one amplifier, might therefore not be detected with these parameters. The QA metrics we have implemented so far should therefore not be considered as "ground truth." By now, more sophisticated QA metrics are available, especially for the assessment of modern MRI scanners with multi-channel coils and modern reconstruction methods (Dietrich et al., 2007, 2008; Robson et al., 2008; Goerner and Clarke, 2011; Ogura et al., 2012). Their usage would further increase the sensitivity of the QA metrics with respect to subtle hardware failures. Since our software is built in a modular and extensible way, we intend to include these QA techniques in future versions of our toolbox. In a future version of the tool, we will add more possibilities to locate the unique identifier in the data. We will also work on the automatic detection of the MR scanning parameters to start the corresponding QA protocol. With LAB-QA2GO we hope to provide an easy-to-use toolbox that is able to calculate QA statistics without high effort.

AUTHOR CONTRIBUTIONS
CV, AJ, and JS devised the project, the main conceptual ideas, and the proof outline. CV, MB, and PH realized the programming of the tool. VS created the gel phantom and helped in designing and producing the phantom holder.
Fractional unit-root tests allowing for a fractional frequency flexible Fourier form trend: predictability of Covid-19

In this study we propose a fractional frequency flexible Fourier form fractionally integrated ADF unit-root test, which combines fractional integration with a nonlinear trend in the form of the Fourier function. We provide the asymptotics of the newly proposed test and investigate its small-sample properties. Moreover, we show the best estimators for both the fractional frequency and the fractional difference operator for our newly proposed test. Finally, an empirical study demonstrates that not considering the structural break and fractional integration simultaneously in the testing process may lead to misleading results about the stochastic behavior of the Covid-19 pandemic.

Introduction
Forecasts of daily events make many decision-making processes more manageable. In time-series analysis, forecasts are generally made using the Box-Jenkins method. In the Box-Jenkins method, the prerequisite for making long-term forecasts or setting up an ARIMA model is the stationarity of the series under investigation. It is also essential to make long-term forecasts in the current Covid-19 outbreak. These forecasts contain crucial information to eliminate the uncertainties that may arise during the process. For example, forecasting the peak number of infected cases in the long term may give valuable information about the health care system. If these numbers can be accurately predicted, then intensive care unit bed capacities and other resources can be allocated efficiently. This vital information can also be used by the other sectors affected by the Covid-19 outbreak. Besides, long-term forecasting can also be made for other natural phenomena: reliable forecasts of earthquakes, meteorological events, biodiversity, and others are needed to manage disasters. The time-series literature describes covariance stationarity as a steady state in which the mean, variance, and covariance do not change over time. The stationarity of a stochastic difference equation is determined using the unit-root test of [1]. The test's basic principle is to check whether the parameter of the first-degree stochastic difference equation is statistically equal to 1 or not. If it is equal to 1, the series is a unit-root process, or simply not stationary. In this study, we need to add the dynamics of this natural outbreak to the [1] (henceforth, ADF) method to examine the outbreak's stochastic features and test its long-term predictability. If the epidemic's data generation process is incorporated correctly into the test methodology and the test indicates stationarity, then we can claim that the correct long-term forecast model has been achieved. It is recognized that the number of daily cases in outbreak models conforms to exponential function patterns. However, it is not easy to generalize the epidemic model to different functional designs, such as the second wave that may occur in later stages. This functional pattern will create a double exponential model or an even more complex functional form. The complexities that arise in this way can also decrease the effectiveness of long-term forecasts. We have used the Fourier function to overcome this problem, thereby providing a remarkable convergence to any functional form whose structure is uncertain. In the literature, many researchers have employed the Fourier function to capture smooth structural breaks with integer frequency.
Nevertheless, some studies have shown that this should be handled within a fractional frequency structure. In addition to the importance of using a low frequency, previous studies have also emphasized the problems of using cumulative frequencies in Fourier-type unit-root testing. A well-known problem associated with traditional unit-root tests is that the power of the test decreases if too many variables are added to the testing equation, which happens when cumulative frequencies are employed. So how can the Fourier function capture the short-term oscillations in the daily cases without using cumulative frequencies? On consecutive days of a pandemic, different dynamics or numbers of infected patients are detected. Temporary or permanent jumps are a prevailing dynamic of daily infected cases that the first-order difference equation cannot capture. The fractional difference equation employed recently in the literature can handle such dynamics. It has been observed that the number of daily cases exhibits the features of a fractional-order difference equation. After detrending the daily infected cases data with the Fourier function, the remaining series exhibits the features of a fractional first-order difference equation. Therefore, in the light of these explanations, a pretest of the long-term predictability of the number of daily Covid-19 cases must consider the fractional frequency Fourier functional form together with a fractional difference equation. Let us now turn to the methodology used in the paper and the literature available until now. Following the influential work of [1], testing the stationarity characteristics of variables has attracted a great deal of attention among researchers. This testing methodology can be broadly classified into three categories: linear unit-root tests; unit-root tests that permit a break in mean and/or trend (this can be termed time-dependent nonlinearity, or structural break (SB)); and, finally, unit-root tests that permit state-dependent nonlinearity. However, after the long-memory features of stochastic processes were recognized, fractionally integrated unit-root tests have attracted a great deal of attention in the recent literature. Therefore, in this study, we focus on combining unit-root tests that permit a structural break with fractional integration (FI). A typical exercise in most time-series investigations is to check whether the drift part of a series is correctly characterized as deterministic or stochastic. Naturally, the stochastic drift is considered a unit-root process, whereas the deterministic one is typically a time trend. It is generally concluded that traditional methods developed for fractionally integrated processes can produce a spurious FI response if applied to short-memory processes encompassing structural breaks. The reverse outcome is also well recognized: standard methods for identifying and dating breaks lead to spurious structural change, generally at the midpoint of the series, when in fact there is only fractional integration in the sample (see [2,3]). Therefore, fractional integration and structural breaks imply very different long- and medium-run dynamics, which makes it hard to discriminate between them. As also recommended in [4], these two approaches, FI and SB, are alternative methods to difference stationarity (D-ST) and trend stationarity (T-ST).
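To make this identification problem concrete, the following minimal simulation (with arbitrary illustrative parameter values, not taken from the paper) generates a short-memory AR(1) series around a smooth Fourier-type break and a genuinely long-memory ARFIMA(0, d, 0) series; both typically show the slowly decaying sample autocorrelations that are easily mistaken for one another.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
t = np.arange(1, T + 1)

def frac_noise(d, n, rng):
    """ARFIMA(0, d, 0) via the truncated MA(inf) expansion of (1 - L)^(-d)."""
    psi = np.ones(n)
    for j in range(1, n):
        psi[j] = psi[j - 1] * (d + j - 1) / j
    eps = rng.standard_normal(n)
    return np.array([psi[:k + 1][::-1] @ eps[:k + 1] for k in range(n)])

def acf(x, max_lag=40):
    """Sample autocorrelations at lags 1..max_lag."""
    x = x - x.mean()
    denom = x @ x
    return np.array([x[k:] @ x[:len(x) - k] / denom for k in range(1, max_lag + 1)])

# (i) AR(1) with rho = 0.5 around a smooth single-frequency Fourier break
break_trend = 3.0 * np.sin(2 * np.pi * 0.8 * t / T)
ar = np.zeros(T)
for i in range(1, T):
    ar[i] = 0.5 * ar[i - 1] + rng.standard_normal()
short_memory_with_break = break_trend + ar

# (ii) genuine long memory with d = 0.4
long_memory = frac_noise(0.4, T, rng)

print("ACF(1..5), AR(1)+smooth break:", acf(short_memory_with_break)[:5].round(2))
print("ACF(1..5), ARFIMA(0,0.4,0):   ", acf(long_memory)[:5].round(2))
```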
It is well known that, to avoid spurious parameter estimates and biases in time-series studies, the data must be differenced to make them stationary. Therefore, the decision on the optimal differencing is vital for obtaining correct information from the data under investigation. With these two alternatives introduced, we have to choose the correct differencing among difference stationarity (D-ST), trend stationarity (T-ST), fractional difference stationarity (FD-ST), and structural break stationarity (SB-ST). In addition, it is well documented that in the D-ST case memory is infinite and past shocks are perfectly remembered, whereas in the T-ST case memory is short and the autocorrelation function decays exponentially. [2] indicates that FI processes, i.e., the FD-ST case, establish an interesting alternative to this dichotomy, as they are capable of bridging the gap between these two possibilities. Therefore, an FD-ST process has long memory, but not as much as a D-ST process; for instance, d = 0.1 implies shorter memory than d = 0.9. These methods can fill the gap between the short-lasting effect of shocks in the T-ST model and their permanent effect in the D-ST model by providing transitional behaviors such as long memory and nonstationary mean reversion (see [2]). So finding the exact differencing order is vital to limit information losses. As mentioned above, fractional integration and structural breaks indicate very different medium- and long-run dynamics. Thus, it is hard to differentiate between them, so it is essential first to distinguish these two methodologies. To this end, we propose a procedure that combines these two methods in a simple but efficient way to identify the SB and FI processes correctly. Therefore, we can efficiently eliminate the problems explained in the paragraph above. The unit-root tests permitting a break in mean and/or trend include [5-8] and [9]. These studies have considered alternative trend models in unit-root testing and have concentrated on models with segmented linear trends and single or multiple breaks [10]. However, recent studies have proposed unit-root tests in which the alternative hypothesis is stationarity around a smoothly changing trend. [11] (LNV, hereafter) and [12] used logistic smooth trend functions that permit a smooth break in the deterministic trend of the data. [13] specified the nonlinear trend employing Chebyshev polynomials. Reference [14] employed trigonometric functions in Fourier form to describe possible smooth breaks in the data. Numerous problems were encountered with these types of unit-root tests. Nevertheless, the simplest and most accurate approach has been the Fourier function, which was used by [14-16] and [17]. The second strand of literature deals with the fractionally integrated unit-root test proposed by [18] (henceforth, DGM). They noted that both null hypotheses were rejected frequently in previous studies and concluded that many time series are not well characterized as either I(1) or I(0). Therefore, the class of fractionally integrated processes, denoted FI(d), has proved to be very suitable for capturing the persistence features of many long-memory processes (see [19,20], and [21]). Reference [18] pointed out the shortcomings of the alternative methodologies used and suggested a simple Wald-type test in the time domain with adequate power properties. As a by-product of its application, this test delivers knowledge about the values of d under the alternative hypothesis.
Therefore, this methodology is a generalization of the well-known Dickey-Fuller (D-F) test, which was originally developed for the case of I(1) versus I(0), to the more general case of FI(d_0) versus FI(d_1) with d_1 < d_0, and is thus denoted the fractional Dickey-Fuller (FD-F) test. The DGM test is based on the normalized OLS estimate, or on its t-ratio, of the parameter on Δ^{d_1} y_{t-1} in a regression of Δ^{d_0} y_t on Δ^{d_1} y_{t-1} and possibly some lags of Δ^{d_0} y_t. Since the alternative hypothesis is H_1: d < d_0, a pre-estimate of the order d is needed. DGM have shown that the choice of a T^{1/2}-consistent estimator of d in its appropriate range suffices to make the FD-F test feasible while preserving asymptotic normality. Reference [18] has highlighted the advantages of this testing procedure as follows. The first is that it generalizes the simple D-F framework, retaining its simplicity, for testing unit roots with a fractional difference operator. The second is that the proposed LM tests have a different structure than the traditional LM tests: the proposed LM test does not assume any known density for the errors, which makes it more robust than the traditional ones. The third is that, in the exact case where d_0 = 1, the FD-F method inherits the flexibility of the standard D-F test. This provides a natural framework for testing the I(1) null hypothesis against some interesting composite alternatives. According to [18], producing a fractional integration unit-root test that includes a structural break does not seem feasible with other FI unit-root tests; however, the flexible FD-F structure that they propose makes this task much easier and feasible. The final advantage is that [18] found good finite-sample properties relative to other competing tests. Following the third point in [18], we have extended this methodology to the structural break setting by using the method of [17]. As mentioned above, the [17] procedure employs trigonometric functions in Fourier form to describe presumable smooth breaks in the data. Numerous difficulties are encountered with structural-break types of unit-root tests; nevertheless, the easiest and most accurate one is the Fourier function used by [17], which extended it to the fractional frequency case. Therefore, [17] is another simple generalization of the ADF test, like the DGM test. Combining these two simple methodologies yields a more general and still simple set-up, without unnecessary details, for testing stationarity under a composite alternative hypothesis. The composite alternative is that the series under investigation is a fractionally integrated series around a smoothly changing trend. Other attempts have been made in the literature to combine these two methodologies (namely SB and FI) using different techniques. References [22] and [23], following [24] and [25], derived a Lagrange multiplier test in the time domain, and [26] and [3] have considered Wald-type tests for a unit-root null hypothesis against fractional integration following [18]. The traditional unit-root tests usually reject the null hypothesis when the actual process is a series that is fractionally integrated with d ∈ (0.5, 1). We will see later that such series are not stationary; therefore, the results of these studies become questionable. Moreover, it is well known that short-memory processes with level shifts display features that lead one to conclude that long memory is present in the data generating process (e.g., [23], among many others).
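A minimal sketch of the fractional difference operator underlying the FD-F regression is given below; it implements the truncated binomial expansion of (1 - L)^d and is an illustration of the operator itself, not the authors' code.

```python
import numpy as np

def frac_diff(y, d):
    """Apply the (truncated) fractional difference operator (1 - L)^d to y.

    The binomial weights follow the recursion
        pi_0 = 1,  pi_j = pi_{j-1} * (j - 1 - d) / j,
    so that Delta^d y_t = sum_{j=0}^{t} pi_j * y_{t-j}.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    pi = np.ones(n)
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    return np.array([pi[:t + 1] @ y[t::-1] for t in range(n)])

# Sanity checks: d = 1 reproduces the ordinary first difference (apart from
# the truncated initial observation), and d = 0 leaves the series unchanged.
y = np.cumsum(np.random.default_rng(1).standard_normal(200))
print(np.allclose(frac_diff(y, 1.0)[1:], np.diff(y)))   # True
print(np.allclose(frac_diff(y, 0.0), y))                # True
```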
On the other hand, it has also been recognized that long-memory processes cause the null hypothesis of no structural change to be rejected when traditional structural change tests are used (see [2,3,23], among many others). To overcome these problems in the SB-FI literature, and for the reasons mentioned earlier, we propose an SB-FI unit-root test in the form of a fractionally integrated series around a smoothly changing fractional frequency flexible Fourier form. This newly proposed methodology provides the following contributions:
1. The confusion between structural breaks and fractional integration, which we explained above, is resolved with the most appropriate methods.
2. The two-step methodology allows us to obtain the asymptotic distribution of the unit-root test easily.
3. It is shown that the Fourier function can represent the deterministic structure of the Covid-19 outbreak. The best optimization algorithm to be used with the fractional frequency Fourier function is also identified.
4. For fractional integration, a new estimator is proposed that minimizes information losses. It is also shown that predictions can be made with the least loss of information with this new estimator.
5. Finally, it is shown how to design the optimal forecast model for outbreaks by combining all of these methodologies.

The structure of the article is as follows. Section 2 presents the fractional frequency Fourier form fractionally integrated ADF test with its asymptotic distribution and provides an extensive simulation study of its small-sample properties. Section 3 discusses the various optimization algorithms that can be used for the fractional frequency estimation, along with the parametric and semi-parametric estimation of the difference operator d. Section 4 applies the FFFFF-FI-ADF test to pretest the long-term predictability of the Covid-19 cases. Section 5 is devoted to concluding remarks.

The methodology for the fractional frequency flexible Fourier form fractionally integrated ADF test: FFFFF-FI-ADF
In the introduction, we gave some basic ideas about the testing procedure. The main concern is simplicity in deriving the test and its asymptotics. Hence, we start with the Fourier approach, in which we detrend the series first and assume that the remaining part is a fractionally integrated stationary or nonstationary series. Unlike [15,16] and [17], this two-step approach provides a straightforward setting for obtaining the testing procedure and the asymptotic distribution of the proposed test statistics. Therefore, we start with the Fourier approach and include the fractionally integrated ADF test in the second step. References [16] and [17] consider an augmented Dickey-Fuller (DF) test regression, Eq. (1), in which ε_t is a stationary error term with variance σ² and ϕ(t) denotes the deterministic intercept and trend. Reference [16] claims that it is problematic to estimate Eq. (1) directly and study the unit-root hypothesis ψ = 1 without knowing the functional structure of ϕ(t). Following [14,16,27] and [17], we assume that ϕ(t) includes Fourier components, Eq. (2), where α_0, α_1, and α_2 are changing intercept parameters, T is the number of observations, and t is the trend term. The term k denotes the particular frequency, to be determined over a pre-given interval. The trigonometric components sin(2πkt/T) and cos(2πkt/T) are utilized to approximate smooth breaks.
If α_1 = α_2 = 0, then there are no smooth breaks. Through a grid-search method, [15,16] use k = k* to minimize the residual sum of squares (SSR) in Eq. (1), where k* indicates the value of k that achieves the minimum SSR. Besides, Becker et al. (2006) show that it is acceptable to set k = 1 or k = 2 to capture the substantial structural changes in the data. Using a data-driven technique, [14] set the maximum number of breaks to 5. Reference [15] further recommends the use of a low frequency to capture the smooth structural changes in the data. Reference [17] mentions the flexibility of the integer frequency, but argues that it has many drawbacks in estimating smooth trends (i.e., over-filtration, type II errors, etc.). Hence, we follow [17] and use the fractional version of the test in this paper. To this end, instead of searching for a single integer frequency k in Eq. (2), we search for the fractional frequency in Eq. (3), which is also employed in [14] and [15,16] for integer values. The largest frequency applied is k_max, and the search is carried out in increments of 0.1 (and smaller), which increases the accuracy of the fractional frequency search. The optimal fractional frequency is obtained at the point where the SSR is lowest. This optimization is carried out by applying the algorithm described above to Eq. (1). Moreover, we can also test for the fractional frequency Fourier trend by using an F-test, as proposed in [14] and [15,16]. The null hypothesis of a linear unit root is obtained when δ = 0, as suggested by [16]. The two-step testing process is as follows. In the first step of the two-stage procedure, the regression in Eq. (4) is run, where k_fr indicates the fractional Fourier frequency. This equation assumes that ω_t is a random walk process; after being demeaned or detrended, it can be used in the second step as ω̃_t, where u_t ~ i.i.d. N(0, σ²_u) and the initial condition ω̃_0 is a constant. Notice that this technique is asymptotically the same as the one-step procedure of [17]. As mentioned above, instead of considering the case of I(1) versus I(0), the more general case of FI(d_0) versus FI(d_1) with d_1 < d_0 can be used, following [18]. The DGM test is based on the normalized OLS estimate, or on its t-ratio, of the coefficient on Δ^{d_1} ω̃_{t-1} in a regression of Δ^{d_0} ω̃_t on Δ^{d_1} ω̃_{t-1} and possibly some lags of Δ^{d_0} ω̃_t. The definition of the FI(d) process that we implement is that of an (asymptotically) stationary process when d < 0.5 and that of a nonstationary (truncated) process when d > 0.5. For the asymptotic distribution when δ = 1, the two-step process is used with the demeaned and detrended series ω̃, where α̂_0, α̂_1, α̂_2 and λ̂ are the OLS estimators for the demeaned and detrended cases, respectively. Next, we build the fractional Fourier unit-root test by using the demeaned and detrended series ω̃_t in the second step. Although the D-F test is consistent against the fractional alternatives, its low power makes it an appropriate ground for studying new test procedures. Thus, we extend the regression models in (3) and (5) to test the null hypothesis that a series is FI(d_0) against the alternative that it is FI(d_1). The variable ω̃ is a unit-root process under the null hypothesis, but it constitutes a fractionally integrated stationary process under the alternative.
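The first step of the two-step procedure (choosing the fractional frequency by a grid search over the SSR and detrending by OLS) can be sketched as follows. The grid range, step size and regression layout are illustrative assumptions based on the description above, not the authors' exact implementation.

```python
import numpy as np

def fourier_detrend(y, k_grid=None):
    """Grid-search the fractional Fourier frequency k that minimises the SSR
    of an OLS regression of y on {1, t, sin(2*pi*k*t/T), cos(2*pi*k*t/T)} and
    return the detrended residuals (the omega-tilde series) plus k*."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    t = np.arange(1, T + 1)
    if k_grid is None:
        k_grid = np.arange(0.1, 5.0 + 1e-9, 0.1)   # fractional frequencies

    best = None
    for k in k_grid:
        X = np.column_stack([np.ones(T), t,
                             np.sin(2 * np.pi * k * t / T),
                             np.cos(2 * np.pi * k * t / T)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        ssr = resid @ resid
        if best is None or ssr < best[0]:
            best = (ssr, k, resid)
    _, k_star, omega_tilde = best
    return omega_tilde, k_star

# Example on simulated data with a smooth break at a non-integer frequency:
rng = np.random.default_rng(2)
T = 300
t = np.arange(1, T + 1)
y = 2.0 + 3.0 * np.sin(2 * np.pi * 1.3 * t / T) + rng.standard_normal(T)
resid, k_star = fourier_detrend(y)
print("estimated fractional frequency:", round(k_star, 1))   # close to 1.3
```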
Precisely, our suggestion is built upon testing for the statistical significance of β in the FI-DF equation (6), where ξ_t is an i.i.d.(0, σ²_ξ) I(0) process. Keep in mind that (6) is still an unbalanced regression, in which the dependent and independent variables are differenced with respect to their degrees of integration under the null and the alternative hypothesis, respectively. Assuming that u_t = ξ_t and β = 0 in (6), the ω̃_t series follows the process in (7), which implies that ω̃_t in (7) is FI(d_0). When β < 0, ω̃_t can be expressed as in (8), where ω̃_t is an FI(d_1) process. Using these arguments, we can write the normalized OLS estimated coefficient, or its t-ratio, as in the standard D-F testing methodology, for the case of a series that is fractionally integrated around a smoothly changing trend ϕ(t), i.e., FI(d_1).

The test and its asymptotic properties
Now we allow for d_0 = 1 and u_t = ξ_t in (7), where {ξ_t} is a sequence of zero-mean i.i.d. random variables with unknown variance σ²_ξ and finite fourth-order moment. The OLS estimator β̂_OLS and its t-ratio, t_FF, are given by their usual least-squares formulas. In order to obtain the asymptotic distribution of the t_FF (i = μ, τ) test, we need the subsequent results, where we let [rT], r ∈ [0, 1], be an integer close to rT; during the course of the derivation, ⇒ denotes weak convergence as T approaches ∞.

Proposition 1. We have assumed that the remaining part, i.e., the detrended series, is a fractionally integrated series. Thus, we preserve the notation of [18] hereafter to derive the asymptotics of the proposed test. Under an absolute summability condition, the corresponding linear processes are defined, where I(·) is an indicator function, and the stated convergence results hold, where →p denotes convergence in probability and ⇒ denotes weak convergence.

Lemma 2. Let ξ_t, z_t, z*_t, and g*_t be defined as in Lemma 1. Then the subsequent processes are martingale differences and satisfy the stated convergence results. When we impose the Fourier form on the FI process, the limits involve B(·), a standard Brownian motion, and B_d(·), W_d(·) or W_d(k_fr, r), standard fractional Brownian motions.

Based on Lemma 2, the subsequent two theorems derive the asymptotic distribution and prove the consistency of an appropriately standardized OLS estimator of β and of its t-ratio under the null hypothesis of I(1).

Theorem 1. Under the null hypothesis of a unit root, with ω̃_t a random walk, the asymptotic distribution of t_FF is expressed in terms of W_{i,-d_1}(k_fr, r), i = μ, τ, the demeaned and detrended standard fractional Brownian motions.

We can derive the asymptotics of the other cases, d_1 = 0.5 and 0.5 < d_1 < 1, in a similar fashion to the case 0 ≤ d_1 < 0.5. As pointed out before, since we follow a two-step approach, the other distributions are the same as in [18]. Therefore, we concentrate on the non-degenerate distribution of case 1 and give its distribution explicitly in Theorem 1.

Proof. The proof of Theorem 1 is given explicitly in Appendix A.

Apparently, the asymptotic distribution of the obtained test statistics under the null depends on the fractional Fourier frequency, k_fr, and the integration order, d_1, but it is invariant to the other parameters in the testing equation.
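For completeness, a rough numerical sketch of the second-step statistic follows: the demeaned or detrended series is fractionally differenced under the null and alternative orders, and the t-ratio of β from the unbalanced regression is computed. In practice d_1 would be replaced by a consistent pre-estimate, and the statistic would be compared with the tabulated (not standard normal) critical values; this is an illustration, not the authors' implementation.

```python
import numpy as np

def frac_diff(y, d):
    """Truncated fractional difference (1 - L)^d applied to y."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    pi = np.ones(n)
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    return np.array([pi[:t + 1] @ y[t::-1] for t in range(n)])

def fi_df_tstat(omega_tilde, d0=1.0, d1=0.4):
    """t-ratio of beta in a regression of Delta^{d0} w_t on Delta^{d1} w_{t-1}
    (no lag augmentation), i.e. the core of the FFFFF-FI-ADF second step."""
    lhs = frac_diff(omega_tilde, d0)[1:]          # Delta^{d0} w_t
    rhs = frac_diff(omega_tilde, d1)[:-1]         # Delta^{d1} w_{t-1}
    beta = (rhs @ lhs) / (rhs @ rhs)
    resid = lhs - beta * rhs
    sigma2 = resid @ resid / (len(lhs) - 1)
    se_beta = np.sqrt(sigma2 / (rhs @ rhs))
    return beta / se_beta

# Example: under the null (random walk), the statistic should not be
# significantly negative relative to the tabulated critical values.
rng = np.random.default_rng(3)
w = np.cumsum(rng.standard_normal(300))
print(round(fi_df_tstat(w), 2))
```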
The fractional frequency versions of the critical values are tabulated in Appendix B and for integer frequency the critical values tabulated in Tables 1-3 as follows: Small sample properties of the fractional frequency flexible Fourier form fractionally integrated ADF test FFFFFFI-ADF (FFFFF-FI-ADF) First, we will examine the small-sample size features of the test statistics. To assess the size of the test statistics, we investigate the following data generating process (DGP): Fig. 1 and suggest that the proposed test statistics have satisfactory size properties. As can be seen from Fig. 1, the newly proposed test exhibits good size properties similar to the previous tests [18,26,28]. Considering the scale next to the figure, the minimum and maximum size values are in the range of 0.02 and 0.08, respectively. Since the size analysis is performed for the 5 percent significance level, this scale indicates that the newly proposed test approaches the correct size value with a minimal error rate. As can be seen from the color spectrum given above, instead of the extreme values of yellow and dark blue, the size results were obtained with light green and blue intensity, and the real value of size was mostly 5%. Thus, in the light of Fig. 1 we can safely conclude that the newly proposed test has strong size properties. Therefore, we can proceed with the power analysis without any size adjustment. Now, we turn to the small-sample power properties of the proposed tests. We have done an extensive simulation study to see the proposed unit-root tests' power surface using the model Following [15,16], we set α 1 = (0, 3) and α 2 = (0, 5). The results presented in Fig. 2 suggest that the proposed FFFFF-FI-ADF test clearly exhibits a similar behavior to [18,26,28]. As it can be seen from Fig. 2, the power performance of the test is working well. The test's power increases with the time dimension T and decreases as the difference operator parameter varies from 0.1 to 0.9. Especially after 0.8, the power has started to decline Figure 2 Power properties of FFFFF-FI-ADF %5 nominal significance level from 1.00 to 0.2. and the lowest as 0.0. Towards 1, the color spectrum turns to yellow and black, while power weakens with blue and dark blue tones. As Fig. 2 shows, the high power of the test is justified with the abundance of yellow or black areas. Overall, this analysis proves that the test is powerful in capturing fractional integration data dynamics with a structural break. Furthermore, with these power analysis results we can distinguish between structural break and fractional integration because the detrended series in the first stage will not give a pseudo-integration order in the second stage. The method of estimations for Fourier fractional frequency and fractional integration parameter d 3.1 Estimation of fractional frequency for Fourier function The authors of [29] have conducted an extensive study analyzing the BFGS, BHHH, Genetic, Simplex and Grid Search (GS) algorithms in the estimation of the fractional frequency. They used the alternative hypothesis of the test [17] to evaluate the effects of using different algorithms on the parameter estimates. They have noticed that in the earlier studies, comparison of different optimization algorithm evaluation is commonly made on the critical value accuracy. Yet the Fourier unit-root test depends on the fractional frequency. Thus, the frequency is specified at first and then the critical values are acquired. 
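On the algorithmic side, a simplex (Nelder-Mead) minimization of the same SSR objective can be compared directly with a fine grid search, as sketched below. This reuses the hypothetical `fourier_ssr` helper from the earlier sketch and is not the authors' code; the starting value and grid step are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

# fourier_ssr(y, k) is the hypothetical helper defined in the earlier frequency-search sketch.

def best_frequency_simplex(y, k_start=1.0):
    """Nelder-Mead (simplex) minimization of the Fourier-regression SSR over the frequency."""
    def objective(k):
        kk = float(k[0])
        return fourier_ssr(y, kk) if kk > 0.05 else 1e30   # keep the search away from k <= 0
    res = minimize(objective, x0=[k_start], method="Nelder-Mead")
    return float(res.x[0]), float(res.fun)

def best_frequency_grid(y, k_max=5.0, step=0.01):
    """Fine grid search over fractional frequencies, for comparison with the simplex result."""
    grid = np.arange(step, k_max + step / 2, step)
    ssrs = [fourier_ssr(y, k) for k in grid]
    i = int(np.argmin(ssrs))
    return float(grid[i]), float(ssrs[i])

# On the same series the two searches should land on essentially the same k_fr,
# with the simplex typically needing far fewer SSR evaluations than the fine grid.
```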
Consequently, producing the critical values with a different optimization algorithm will not lead to different set of critical values. In our simulation study, the issues in [29] will be taken into consideration. In addition, since the subject to be examined should imitate the data generation process of the Covid-19 pandemic, the following model will be used: where T = 100. Subsequently, investigating different schemes of experiments, the authors of [29] have decided to use the SSR of the estimation results. The authors of [29] have classified the fractional frequency values that they obtain in terms of the stages of the pandemic. According to this classification, the fractional frequency was estimated to be between 0-0.75 in the early stages of the pandemic, 0.75-1.0 near the peak day, 1-1.25 in the second stage, and 1.5 around the plateau stage. We follow their study and use Eq. (25) to obtain Fig. 3 and Table 3. Like [29], we have found that the best estimation algorithms with nonlinear trends are simplex and genetic, which are indifferent in terms of SSR. As reported in [29], the second best approach appears to be the GS grid-search algorithm, while the third one is the derivative free methods of BHHH and BFGS. Consequently, following our results and the ones obtained in [29], we use the simplex algorithm for the estimation of the fractional frequency. Estimation of the fractional difference d In this study, we have used Andrews and [30] (henceforth, AG), [24]) (henceforth, RE) and [31]) (henceforth, GPH). The authors of [31] suggest a bias-reduced log-periodogram regression estimator,d r , of the long-memory parameter, d, that eliminates the first-and higher-order biases of the GPH estimator of [31]. The bias-reduced estimator is identical to the GPH estimator except that the pseudo-regression model that produces the GPH estimator contains as extra regressors the frequencies to the power 2k for k = 1, . . . , r where r is a particular positive integer. The bias decrease is acquired by the assumptions made on the spectrum solitary in the neighborhood of the zero frequency. The authors of [30] following [24] found that the asymptotic bias, variance, and mean-squared error (MSE) ofd r . These outcomes show that the bias ofd r goes to zero at a faster rate than that of the GPH. Therefore, the most suitable estimator for our FFFFF-FI-ADF test among these estimators is AG, which manages to catch the T 1/2 convergence and satisfies the unbiasedness property. There are other estimators which may be used in our study such as the [32]'s simple search algorithm. This algorithm depends on the SSR minimization and considers both the structural break and the estimation of d. However, since we are using the two-step procedure which considers the structural break and integration order separately, this procedure creates problems in our study. Despite this fact, we have tried the SSR approach in obtaining an estimate for the d parameter but found poor results with respect to the other estimators. 3 In the light of all these results, we propose a new estimator by using a simple search algorithm, which may be more suitable in our case and many other cases. In the interval d = [0, 1] plenty of different dynamics are available including d = 0, which corresponds to stationarity, 0 < d < 0.5, which gives difference stationarity, 0.5 ≤ d < 1.0, which refers to a nonstationary but mean-reverting process, and d = 1, which corresponds to a unit-root process. 
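As a point of reference for the estimators just discussed, the basic (non-bias-reduced) GPH log-periodogram regression can be sketched as follows. The bandwidth m = T^0.5 and the toy fractional-noise check are conventional illustrative choices, not the settings used in [30,31].

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Basic GPH log-periodogram estimate of the memory parameter d."""
    T = len(x)
    m = int(T ** power)                             # number of low Fourier frequencies used
    lam = 2 * np.pi * np.arange(1, m + 1) / T
    dft = np.fft.fft(x - np.mean(x))
    periodogram = np.abs(dft[1:m + 1]) ** 2 / (2 * np.pi * T)
    regressor = -np.log(4 * np.sin(lam / 2) ** 2)   # log I = c + d * regressor + error
    X = np.column_stack([np.ones(m), regressor])
    coef, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    return float(coef[1])

# toy check: fractional noise built from the (1 - L)^{-d} expansion weights
rng = np.random.default_rng(2)
d_true, n = 0.3, 2000
eps = rng.normal(size=n)
w = np.empty(n)
w[0] = 1.0
for j in range(1, n):
    w[j] = w[j - 1] * (j - 1 + d_true) / j
x = np.array([w[:t + 1][::-1] @ eps[:t + 1] for t in range(n)])
print("estimated d:", gph_estimate(x))
```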
In our case, instead of using a priori estimate of d, we estimate it simultaneously within the unit-root testing procedure. For this purpose, we utilize both a search algorithm and a simple bootstrap algorithm as follows. Step 1: Estimate y t = α 0 + ϕ 1 sin( 2π kt T ) + ϕ 1 cos( 2π kt T ) +ω t for the series under investigation by using the optimal k * fr and use the series thereby obtained in the second step estimation, Step 2: For a predetermined value of d 1 , starting from d 1 = 0.1, estimate the FFFFF-FI-ADF test value by running d 0ω t = β d 1ω tt-1 + u t . Also introduce lags of the dependent variable using the AIC or SIC, Step 3: Obtain critical values for this predetermined value of d 1 using 2000 centered residuals from step 2 and a simple bootstrap algorithm, Step 4: Use steps 2 and 3 to obtain the p-values of the test statistics for the series under consideration, Step 5: Repeat steps 2 to 4 using the interval d = (0, 1) and increments d = 0.001. Then obtain all available p-values in this range, Step 6: If collected p-values truncate the 0.1 significance level, then the first truncation will be the estimate,d 1 . If there is no such truncation, then select the minimum p-value for the estimatedd 1 parameter. As an example, we have obtained the estimates for Germany, Italy, Russia, Spain, Turkey, and US in Fig. 4. Therefore, using our new methodology, we can also provide the best procedure which leads to the minimum information loss. Let us now elaborate more on the information loss. One major drawback of differencing is that it leads to information loss. In the most extreme case, by taking the first-order difference, that is, with d = 1 we lose valuable information contained in a series. Taking differences is in some ways analogous to differentiation. Before taking the first-order derivative of the function we have information on its time path or the primitive function. By taking the first-order derivative of the series with respect to time, we gain information about the rate of change or growth of the series (or derived function) while passing this time path. If the subject we want to examine includes the information of the time path, a first-order derivative with respect to time will enable us to examine the series' growth relationship. In this sense, a researcher who wants to use the gross national product (GNP) of a country must consider its growth rate because GNP in levels is not stationary. As another ex- ample, suppose we want to forecast the temperature, but the temperature data is not stationary. In order to make long-term forecasts, the series analyzed should be stationary. Otherwise, the forecast error will grow so rapidly after the one step ahead forecast that it will not allow the long-term forecast to be possible. From a forecasting perspective, it may not be relevant for the researcher to predict the growth rate of GNP instead of its level. When we take the difference from a lower order, valuable data including the growth and time path of the series is retrieved. If the d parameter is close to 0, the series that we it conveys the growth rate of the series. On the other hand, when the difference is taken at order d = 0.5, an optimal mix of these two will be obtained. Figure 5 shows how the primitive function converges to the derived function as the order of differencing changes. It is obtained using the following data generation: y t = 10 + 0.8y t-1 + u t , u t ∼ iidN(0, 1), y 0 = 10. 
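The effect described here can be reproduced with the DGP quoted above: fractionally differencing the simulated AR(1) series at increasing d moves it away from the level ("time path") and toward the first difference ("growth"). The correlation diagnostic is an illustrative choice of mine, and `frac_diff` is the hypothetical binomial-weight helper from the earlier sketch.

```python
import numpy as np

# frac_diff(y, d) is the hypothetical binomial-weight helper from the earlier sketch.
rng = np.random.default_rng(3)
T = 500
u = rng.normal(size=T)
y = np.empty(T)
y[0] = 10.0
for t in range(1, T):
    y[t] = 10.0 + 0.8 * y[t - 1] + u[t]              # the AR(1) DGP quoted above

level = y[1:] - np.mean(y[1:])                        # the "time path" of the series
growth = np.diff(y)                                   # d = 1: pure rate-of-change information
for d in (0.1, 0.5, 0.9):
    z = frac_diff(y, d)[1:]
    print(f"d={d}: corr with level {np.corrcoef(z, level)[0, 1]:+.2f}, "
          f"corr with first difference {np.corrcoef(z, growth)[0, 1]:+.2f}")
```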
Figure 6 visualizes the isomorphism among series with difference orders as the difference operator converges to 1. While d = 0.1, the series still preserves almost all features of the original series or the time path; that is, it preserves the information about the original state of the series (primitive function) at the maximum level. However, when d = 0.5, the resultant series seem to resemble the series' rate of change, although still preserving some time path information. This information has important implications in the time-series econometrics literature. Suppose the series is stationary in the interval d = 0.1 -0.5. In that case, we can continue our work with the level of the series, i.e., time path, and obtain unbiased estimates with respect to this level information. Moreover, the traditional distribution theory is still valid while conducting regression analysis with this group of data or differenced series. But if d > 0.5, the series obtained no longer contains information about its time path, and we will have to comment on the rate of change. After this point, while the traditional asymptotic theory ceases to maintain its validity, we should also be careful about the different integration orders. 4 Empirical example In this section, the daily infected case forecasts of the Coronavirus (Covid-19) pandemic, which started as of 01/01/2020 and spread worldwide, will be performed. Since the Covid-19 epidemic is on the agenda, many empirical and theoretical studies were conducted on the subject. Empirical studies on the subject include in the literature [33][34][35][36][37], and [38]. In addition, studies close to the theoretical structure of this article are [39][40][41][42][43][44][45][46][47], and [48]. The Coronavirus daily infected case numbers are collected from the European Health Organization database for 204 countries. The newly proposed FFFFF-FI-ADF type of unit-root test and the one developed in [17] were applied to the existing data of these countries. We have investigated the fit of fractional and integer Fourier functions to the daily infected case series for some selected countries by using the SSR estimates and graphed them below in Fig. 7. 5 The countries with a longer time span that exhibit different dynamics were selected. As Fig. 7 shows, the FFFFF method gives better results than the IFFFF method for all selected countries using the SSR criterion. Thus, these results are not tabulated. It is clear from Fig. 7 and the SSR results that the FFFFF method captures better the dynamics of the daily infected cases due to its highly flexible structure. Therefore, the FFFFF methodology can be used in obtaining the long-run forecasts of the daily Covid-19 cases. Of course, to confirm this claim, it is necessary to look at the results of both tests, namely FFFFF-ADF and FFFFF-FI-ADF. As aforementioned, long-term forecasting is only possible with stationary data. Thus, the FFFFF methods must be used when pretesting the stationarity of the daily infected cases. Moreover, since the daily cases were found stationary using both the FFFFF-ADF and the newly proposed FFFFF-FI-ADF type of unit-root tests, a forecast model constructed for these daily cases must also include the FFFFF type of flexible function using fractional integration. Since it would take a lot of space to tabulate the unit-root test results for the entire dataset of 240 countries, we preferred to visualize them using a world map in Fig. 8. 
Countries with nonstationary and stationary daily cases were colored with blue and red tones, respectively. According to the FFFFF-ADF test, the daily infected cases of 124 (out of 240) countries were found to be stationary. When we estimated the fractional frequency of the Fourier functions for these countries, 91 countries' fractional frequencies were found in the interval 1.7 and 4.13. The high frequencies found in these countries can be attributed to the random oscillations caused by irregular testing, wrong protection measures adopted, and similar situations arising in these countries. In some countries extremely high case numbers are seen in one day, whereas the next day no tests are run, and no numbers are announced. This behavior of the health authorities leads to the irregular distribution of jump discontinuities. Despite these irregular oscillations, the fractional frequency Fourier function captures the unknown deterministic functional forms extremely well. Besides, fractional integration is also useful for capturing these random oscillations. Therefore, it is better to use the FFFFF-FI-ADF test in countries where the unit-root null could not be rejected. For this reason, we selected the ten countries with the highest daily case numbers that were not found stationary with the FFFFF-ADF test. As can be seen from Table 4, the FFFFF-FI-ADF test results demonstrate that the daily cases of all countries, except Russia and Spain, are fractionally integrated and stationary. When it comes to Russia and Spain, their Covid-19 cases are found to be fractionally integrated, mean-reverting but nonstationary. The AG method produced stationary test results for Brazil, France, Germany, and the UK. The GPH method, which is the closest method to the AG method, yielded similar results except for Germany and the UK. Be- sides, the RP method leads to stationarity test results for Brazil, Chile, France, and the UK. On the other hand, the newly proposed method seems to be the most efficient one when compared to these other methods. It rejects the null hypothesis of unit root for Brazil, Chile, France, Germany, Italy, Turkey, the UK, and the US. The fractional integration dynamics that the FFFFF-ADF could not represent were caught with different methods. The oscillations that we mentioned in the introduction part was modeled correctly with FI. In this sense, it will be beneficial to use the FFFFF-FI model to forecast Covid-19's long-term potentially infected number of cases. These efficient long-term forecasts will enable policy authorities to control the outbreak better. Moreover, we can also see that the method we have just proposed provided the lowest difference order estimates. This obtained lowest difference order allows us to perform the most accurate unit-root test with the lowest information loss. Conclusion In this study, we have proposed a fractional frequency flexible Fourier form fractionally integrated ADF test. By implementing an extensive simulation study, we have showed that the newly proposed test has good size and power properties. Moreover, we have demonstrated that the best estimators for our unit-root testing procedure are both fractional frequency and newly proposed fractional difference operator. The newly proposed fractional difference estimator has shown to be the best estimator with respect to the minimum information loss criteria. 
Finally, the empirical study has demonstrated that failing to consider structural breaks and fractional integration simultaneously in the testing process may lead to misleading conclusions about the stochastic behavior of the series under investigation. Therefore, our proposed FFFFF-FI-ADF test will help policy authorities manage natural disasters by providing an efficient method for pretesting the long-term predictability of such events. Moreover, the fractional frequency and fractional difference estimation methodologies given in Sect. 3 point to areas for future research. First, different functional forms could be used for the structural breaks. In this study, we showed that the fractional frequency fits the structure of the Covid-19 epidemic quite well; however, another functional form may be preferable for other data types. Furthermore, different methodologies may be developed for fractional difference estimation. Section 3.2 examined the meaning of fractional differencing and suggested an estimator that minimizes the information loss. The importance of differencing at different orders shows that new estimators and difference operators can be developed for various purposes in future studies. Appendix A This appendix provides the asymptotic distribution of the FFFFF-FI-ADF test statistics given in the text. For the detrended case similar arguments follow, so we skip the algebra. Using the results given above, under the null hypothesis we obtain the demeaned Brownian motions. We can then proceed with the fractionally integrated part in the second step. Under the null hypothesis that ω t is a random walk, applying Lemma 1 and Lemma 2 together with the results in [18] and [15], we obtain the limiting expressions stated in Theorem 1.
Protein Linewidth and Solvent Dynamics in Frozen Solution NMR Solid-state NMR of proteins in frozen aqueous solution is a potentially powerful technique in structural biology, especially if it is combined with dynamic nuclear polarization signal enhancement strategies. One concern regarding NMR studies of frozen solution protein samples at low temperatures is that they may have poor linewidths, thus preventing high-resolution studies. To learn more about how the solvent shell composition and temperature affects the protein linewidth, we recorded 1H, 2H, and 13C spectra of ubiquitin in frozen water and frozen glycerol-water solutions at different temperatures. We found that the 13C protein linewidths generally increase with decreasing temperature. This line broadening was found to be inhomogeneous and independent of proton decoupling. In pure water, we observe an abrupt line broadening with the freezing of the bulk solvent, followed by continuous line broadening at lower temperatures. In frozen glycerol-water, we did not observe an abrupt line broadening and the NMR lines were generally narrower than for pure water at the same temperature. 1H and 2H measurements characterizing the dynamics of water that is in exchange with the protein showed that the 13C line broadening is relatively independent from the arrest of isotropic water motions. Introduction Solid-state NMR on frozen protein solutions has a great potential for investigating protein folding, dynamical disorder, and soluble proteins that are too big to be investigated with liquid-state NMR spectroscopy [1]. For example, Tycko and coworkers studied the folding of HP35 using frozen glycerol-water solution NMR in the presence of guanidine hydrochloride [2] or using fast freezing methods [3]. They also demonstrated the ability of frozen solution solid-state NMR to investigate large biomolecular complexes by studying HIV proteins bound to an antibody and RNA [4,5]. Furthermore, frozen solution NMR of proteins and other biomolecules can be combined with dynamic nuclear polarization (DNP) a technique increasing the signal-to-noise (SN) of an NMR spectrum up to two orders of magnitude [6]. In fact, most of the biomolecular DNP done to date relies on frozen glycerol-water solutions, which serve as cryoprotectant and ensure the uniform distribution of the (bi)radicals [6]. However, solid-state NMR on frozen solutions comes at a price, since the linewidths observed for such samples, especially at the low temperatures of about 153 K used by Tycko and coworkers, and of about 100 K used for most DNP applications, are significantly larger than those observed in crystalline protein samples or other regular arrays of proteins [7]. We recently showed that it is possible to record relatively high resolution 13 C spectra of ubiquitin and a type III antifreeze protein in frozen solution samples at higher temperatures, i.e. just a few degrees below the freezing point of bulk aqueous solvent [8]. These results raised in our minds the question of the detailed dependence of the linewidths on temperature and composition of the solvent between freezing and 280uC, where the hydration shell directly surrounding the protein is progressively becoming solid. Such a study could give some insights into the origin of the observed line broadening in frozen solutions. Proteins in aqueous solution influence the properties of water in their direct vicinity. 
These water molecules, which form the hydration shell, reciprocally influence the structure and dynamic of the protein in solution [9][10][11]. Terahertz spectroscopy [12], neutron diffraction [13], and NMR [14][15][16][17][18] can be used to study the dynamics of the hydration shell and the interactions between the protein and its hydration water. Based on these methods and molecular dynamics simulations of protein-water system, it is believed that there is about 0.4 g of hydration water per gram protein and that this water is slightly denser and less dynamic than bulk water [14]. Several NMR studies suggested that a minimal protein hydration has not only a crucial influence on the dynamics of the protein, but also on the linewidth observed in solid-state NMR spectra of proteins, since the linewidth increases notably when the minimal hydration shell is removed [15,19]. Interestingly, the hydration water of proteins does not freeze at the same temperature as the bulk solvent, but is mobile and noncrystalline over a considerable temperature range below the bulk transition temperature, and undergoes a glass transition at lower temperatures. The experimentally observed solvent glass transition temperature depends critically on the timescale of the method used to probe the hydration water [20][21][22][23], consistent with a continuous retardation of the translation timescale with temperature. For example, differential scanning calorimetry (DSC) data, probing macroscopic time scales (1-100 s), indicates a glass transition in frozen protein solutions of approximately 200 K [21]. Neutron diffraction and NMR data probe faster timescales and observe the glass transition at considerably higher temperatures. 1 H NMR data indicate that the protein hydration shell of ubiquitin freezes and thereby escapes detection at about 223 K [24]. (This is confirmed in our NMR studies of the hydration shell of frozen ubiquitin and antifreeze protein solutions at 238 K [25]). Goddard and coworkers observed the complete arrest of the hydration water in rehydrated lyophilized protein at 170 K [18]. Since 13 C NMR linewidths of proteins in frozen solution are often reported to be broad at temperatures below the proteinsolvent glass transition, and relatively high-resolution 13 C spectra of proteins could be obtained above this transition, we set out to answer the following questions: How does the 13 C linewidth of a protein solution depend on temperature? What is the nature of the 13 C line broadening? What influence do solvent dynamics and aggregation state have on the 13 C linewidth? Is the 13 C linewidth influenced by the addition of glycerol? To learn more about the relation between solvent dynamics and 13 C linewidth, we present the following data on frozen ubiquitin solutions in the presence and absence of glycerol at temperatures down to 183 K. Carbon Linewidth How does temperature influence the 13 C linewidth of a protein in H 2 O below the freezing point of the bulk solvent, i.e. trapped in ice? To answer this question, we recorded 13 C spectra of ubiquitin at various temperatures. As can be seen from Figure 1, the linewidth of the spectra broadens in two stages. Noticeable broadening occurs with the freezing of the bulk water between 279 K and 264 K. As the temperature is further lowered from 264 K down to 183 K, the linewidth continuously broadens further. 
[Figure 1 caption: Spectra were recorded on a 750 MHz spectrometer, at 12 kHz MAS, using 128 acquisitions. Direct 13 C excitation by a 90° pulse was used at 279 K; a 1 H-13 C CP pulse sequence was used otherwise. The spectra at 279 K and 264 K were recorded with less sample than the others, explaining the reduced signal intensity. The spectra show that the 13 C lines broaden abruptly when the solution freezes between 279 K and 264 K; after that, the linewidth increases continuously as the temperature is lowered.] (To eliminate the possibility of a trivial effect on the decoupling performance, we re-optimized 1 H decoupling at the lowest temperature, confirming the same decoupling optimum.) We recorded 1D CP MAS spectra on perdeuterated ubiquitin at 256 K and 202 K, and these samples exhibited linewidths very similar to those of protonated ubiquitin (see Figure 2). The observed line broadening was, therefore, not the result of insufficient decoupling. Figure 3 shows two 2D DARR 13 C-13 C correlation spectra we recorded at 264 K (red) and 213 K (blue) to monitor the extent of line broadening in more detail. The two spectra overlap very well, giving little indication of chemical shift changes between the two temperatures. The well-resolved Cd1 of Ile23 exhibits, at about 0.5 ppm, one of the largest chemical shift changes observed (see also reference [8]). However, the resolution at 213 K is significantly worse than at 264 K, causing several cross peaks to merge at 213 K that were distinguishable at 264 K. The 1D slices shown below the 2D spectra confirm these results, and a dramatic line broadening could be observed, e.g., for the Ala28 Ca-Cb cross peak at 18.6 ppm. The Cb linewidth of this Ala28 increases from ~150 Hz (0.8 ppm) at 264 K to ~240 Hz (1.3 ppm) at 213 K. A similar extent of line broadening can be observed in Figure 2, where the isolated line of Ile23 Cd1 at 8.5 ppm broadens from ~100 Hz (0.5 ppm) at 279 K to ~165 Hz (0.9 ppm) at 264 K and to ~256 Hz (1.5 ppm) at 213 K. Is this line broadening inhomogeneous, and due to an increase in structural heterogeneity at lower temperatures, or homogeneous, and due to an increase in T 2 relaxation? To answer this question, we measured the temperature dependence of the average 13 C T 2 ' of ubiquitin in frozen solution using a simple Hahn echo pulse sequence. Here, T 2 ' is the transverse relaxation time measured by a Hahn echo pulse sequence under MAS and high-power 1 H decoupling, which includes the spin-spin relaxation T 2 , the 13 C-13 C J-coupling, and contributions from imperfect averaging of dipolar couplings. The data in Figure 4 (circles) show that 13 C T 2 ' increases with decreasing temperatures. In addition to this, Figure 4 shows the T 2 * calculated from the Cd1 line of Ile23 (diamonds). T 2 * is the total transverse relaxation time measured by the 13 C FID under 1 H decoupling, which includes T 2 ' and contributions from inhomogeneous line broadening, i.e., small variations in the isotropic chemical shift throughout the sample. At temperatures just below the freezing point of the solution, the T 2 ' of ~3.8 ms is only slightly longer than the T 2 * we calculated from the linewidth, and most of the linewidth we observe is, therefore, due to T 2 '. However, the line broadening we observed at lower temperatures cannot be explained by T 2 ' alone. Line broadening and T 2 ' seem to be uncorrelated, indicating that the line broadening we observe with lower temperatures is inhomogeneous.
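As a small numerical aside, the linewidth-to-T2* conversion used above is the usual Lorentzian relation T2* = 1/(π·FWHM). The sketch below simply recomputes the quoted Ile23 linewidths; the Lorentzian assumption is only approximate for the more Gaussian low-temperature lines, and the 188.6 MHz carbon frequency is the approximate value implied by a 750 MHz (1H) magnet.

```python
import math

def t2_star_from_fwhm(fwhm_hz):
    """Total transverse dephasing time (s) implied by a Lorentzian full width at half maximum."""
    return 1.0 / (math.pi * fwhm_hz)

def hz_to_ppm(fwhm_hz, carbon_larmor_mhz=188.6):
    """Convert a 13C linewidth in Hz to ppm (188.6 MHz ~ 13C frequency on a 750 MHz magnet)."""
    return fwhm_hz / carbon_larmor_mhz

# Ile23 Cd1 linewidths quoted in the text
for temp_k, width_hz in [(279, 100.0), (264, 165.0), (213, 256.0)]:
    print(f"{temp_k} K: {width_hz:.0f} Hz ~ {hz_to_ppm(width_hz):.2f} ppm, "
          f"T2* ~ {1e3 * t2_star_from_fwhm(width_hz):.1f} ms")
```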
The fact that the Cd1 line of Ile23 fit somewhat better to Lorentzian line shapes at high freezing temperatures and better to Gaussian line shapes at low temperatures, supports this finding (see Figure 5). Glycerol-water Solutions Many studies on frozen protein solutions are not done in frozen H 2 O, i.e. ice, but in mixtures of water and glycerol. In contrast to pure water, glycerol-water mixtures are known to form a glass at low temperatures, which proves to be advantageous, especially in the context of DNP experiments [6] and to prevent possible protein concentration and aggregation [26]. Therefore we asked: what is the influence of the additional glycerol on the 13 C linewidth of ubiquitin in frozen solution? To answer these questions, we recorded spectra of ubiquitin in frozen 1:1 glycerol-water mixtures. Figure 6 shows 1D 13 C CP MAS spectra recorded between 221 K and 186 K. Above 221 K the sample turned out to be lossy and gave low CP efficiencies, indicating that it was still relatively liquid at these temperatures. Similar to the data recorded in pure water, the 13 C linewidths of ubiquitin in glycerol-water are quite narrow right at the point where the viscosity of the bulk solvent slows the molecular tumbling of the protein into a range where the 1 H-13 C dipolar couplings are not averaged completely and the CP efficiency becomes comparable to crystalline samples. Lowering the temperature further leads to a continuous broadening of the protein linewidth. Furthermore, the glycerol becomes so rigid with lower temperatures that it is visible in the 13 C CP spectra. The intensity of the 13 C spectra we recorded of ubiquitin in glycerol-water solution is too low to observe the Ile23 line at 8.5 ppm, but already the three overlapping Cds of Ile3, 13, and 61 at 13.9 ppm have a linewidth of less than 100 Hz (0.5 ppm) at 221 K. This linewidth corresponds to the linewidth of ubiquitin in H 2 O solution above freezing. When the temperature is lowered, these lines broaden considerably in both samples and overlap with other lines making linewidth measurements difficult. Magnitude and Dynamics of Non-frozen Water To learn more about how the 13 C linewidth of ubiquitin in frozen solution is influenced by the dynamics of the surrounding water, we recorded 1 H and 2 H NMR spectra alongside the 13 C spectra shown in Figure 1. Figure 7 shows the temperature dependence of the intensity of liquid-like 1 H in a ubiquitin solution recorded at 750 MHz from temperature just below the freezing point down to about 200 K. The narrow 1 H line of the non-frozen water was separated from broader, static 1 H resonances using a sine bell window function and magnitude mode processing. The intensity of this narrow line reflects the amount of liquid water with a correlation time short enough to be detected in our experiment (much less than the inverse of the dipolar 1 H-1 H couplings, i.e. nanosecond or shorter). In these experiments, nonfrozen water becomes undetectable at about 240 K. We recorded similar data for the non-frozen water in a frozen ubiquitin D 2 O solution using a static probe on a 600 MHz spectrometer ( Figure 7). Also here we could observe a relatively narrow, sideband-free 2 H line for the non-frozen water until it got less intense and difficult to detect at about 240 K. This narrow, sideband free 2 H could not be detected in spectra of pure water and is also absent with some of the other solutes we tested. Only the broad 2 H tensor of ice is detectable in this case. 
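The narrow-line intensity just described can be turned into a hydration number with a back-of-the-envelope calculation like the one below. The signal fraction in the example is invented for illustration, and the conversion assumes roughly 1 g of water per ml of solution and that the narrow line contains only water protons.

```python
def hydration_water_per_gram_protein(narrow_fraction, protein_conc_g_per_ml=0.035):
    """Grams of non-frozen water per gram of protein implied by the fraction of the total
    water 1H signal that stays narrow below the bulk freezing point.

    Assumes ~1 g of water per ml of solution and that the narrow line contains only
    water protons; both are simplifications, and narrow_fraction is illustrative."""
    water_g_per_ml = 1.0
    return narrow_fraction * water_g_per_ml / protein_conc_g_per_ml

# e.g. if ~1.5% of the original water signal remains liquid-like after bulk freezing
print(hydration_water_per_gram_protein(0.015))   # ~0.43 g water per g protein
```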
To probe the 1 H and 2 H line broadening further, and to clarify the slower molecular tumbling of the water at low temperatures, we measured 2 H T 1 and calculated the corresponding molecular correlation times t C of water using the quadrupolar relaxation expression of [27]; the results are shown in Figure 8. Despite the ambiguity of two possible correlation times for each measured T 1 , it is clear that t C increases (to ~15 ns) and the isotropic 2 H line disappears when the condition t C ≪ 1/C q is no longer fulfilled [17]. The T 1 relaxation times we measured did not reach the theoretical T 1 minimum calculated from Equation 1 (dashed line in Figure 8a), which is often interpreted as evidence for complex motional models. [Figure 8 caption fragment: Since t C is expected to decrease at lower temperatures, the most likely t C s are shown with full circles; the equally likely t C s corresponding to the minimal T 1 are shown in gray.] Discussion We reproduced an abrupt broadening of the 13 C linewidth due to freezing of the solvent. Although this is not the major topic of the paper, it is interesting to discuss its origin. The abrupt broadening of the 13 C linewidth of ubiquitin in H 2 O when freezing the bulk solvent between 279 K and 264 K might result from the restriction of ubiquitin to the confined space of its hydration shell and therefore the cessation of large-scale molecular motions that average the inhomogeneous environment. Similar effects could be expected due to excluded volume effects resulting from the formation of large ice crystals and a resulting concentration of protein [26]. Another possible explanation for broadening at the freezing point of the bulk solvent could be that the freezing introduces structural heterogeneity via direct ice interactions, e.g., the different orientations of crystal planes around the protein and resulting variations in the size and shape of the remaining hydration shell. These effects might explain the small difference at 263 K between the measured T 2 ' and the T 2 * calculated from the 13 C linewidth (Figure 4). Our major emphasis was on the behavior of the NMR linewidths as a function of temperature below the freezing point of the solution. We observed a gradual broadening of the 13 C line when the temperature was lowered below the freezing point down to 183 K. It is interesting to compare this temperature profile to that of the solvent motions: the 2 H and 1 H lines of the non-frozen water undergo a more marked broadening and loss of detectability at about 240 K. Over this narrower temperature range, 240-260 K, where the solvent apparently undergoes a transition, we did not observe a more substantial 13 C linewidth change; the linewidth change for the protein was relatively smooth over a much broader temperature range. This observation suggests that there is no specific relation between solvent dynamics or averaging and linewidth over this range. The additional 13 C line broadening at 183 K as compared with 264 K is inhomogeneous. T 2 ' of the protein 13 C lines increases with lower temperatures over this range. The line broadening does not result from a decrease in T 2 but from an increase in local inhomogeneity. At high temperatures, it is possible that fast exchange among local microstates leads to narrow 13 C lines. At lower temperatures, the local conformations appear to become increasingly locked, leading to conformational differences throughout the sample that are distinct and long-lived on the NMR timescale (ms or slower), and distinct in terms of their isotropic chemical shift, and, therefore, line broadening is observed.
This also explains why the line shapes become less Lorentzian and increasingly Gaussian at lower temperatures: just below the freezing point of the solution the linewidth comes dominantly from the exponential T 2 ' decay of the 13 C magnetization, resulting in a Lorentzian line shape. At low temperatures the linewidth comes increasingly from a distribution of chemical shifts, resulting in a Gaussian line shape. The linewidth change can be thought of as local, relatively modest structural heterogeneity. The difference in linewidth of Ala28's Cb between 264 K and 213 K is ~0.5 ppm. In comparison, the average difference between an Ala Cb in an α-helical and an extended β-sheet conformation is about 3.5 ppm. The line broadening of 0.5 ppm therefore presumably corresponds to a change that is much less dramatic than a secondary structure change. The line broadening at low temperatures is not typically accompanied by a measurable change in average chemical shift; only a few isolated signals such as the Cd1 of Ile23 showed chemical shift changes on the order of 0.5 ppm. Interestingly, Ile23 also showed the maximum chemical shift change in an earlier study comparing solution with frozen solution shifts [8], suggesting that this is an unusually plastic site in the protein. Therefore, the 13 C line broadening does not reflect a change of secondary and tertiary structure of the protein but can be explained by the continuous arrest of rotational transitions of amino acid side chains that are lubricated by the presence of a solvent shell or, especially in the case of methyl rotations, are volume conserving and independent of solvent dynamics [28,29]. Maus and co-workers described 13 C line broadening at low temperatures due to the interference between the methyl group rotation and the 1 H decoupling field strength in a crystalline sample [30]. The fact that we observed comparable line broadening in protonated and perdeuterated samples of ubiquitin rules out the possibility that insufficient 1 H decoupling or interference between the methyl group rotation and the 1 H decoupling field strength could be the reason for the line broadening in our case. However, the freezing of methyl group rotation could play an important role for the 13 C linewidth at lower temperatures [31]. The glycerol-water solution has no liquid-solid phase transition and rigidifies at much lower temperatures compared to pure water. Therefore, ubiquitin in glycerol-water becomes accessible to dipolar CP MAS spectroscopy at lower temperatures than ubiquitin in pure water. The 13 C lines of the protein in glycerol-water mixtures become gradually broader as the temperature is lowered, analogously to the protein frozen in ice. However, at a given temperature the protein lines in glycerol-water mixtures are considerably narrower. We observed 13 C linewidths at 221 K in glycerol-water that are comparable to those in pure water at 264 K [32]. Between the freezing point of pure water and the glass transition of the glycerol-water solution (~160 K), the bulk glycerol-water solution is apparently more dynamic than pure water at the same temperature. It is very likely that this is also true for the water in the vicinity of the protein, leading to narrower lines at comparable temperatures. However, we were not able to distinguish the solvent at the protein surface from the bulk solvent with NMR in the case of the glycerol-water mixture.
It is curious that for glycerol-water solutions over the temperature range from 221 to 186 K, dipolar based NMR sequences that would normally require a solid sample, such as CP, perform well, despite the fact that the solvent has not undergone a glass transition. This is arguably analogous to the situation for crystalline protein samples that are typically bathed in a liquid medium between about 200 and 270 K, and give very high quality SSNMR spectra nevertheless, because they are arrested by protein-protein interactions. Mainz and coworkers previously observed high-resolution 13 C CP MAS spectra of a protein in 20% glycerol-water solutions at a sample temperature of 263 K; the ability to achieve a high quality solid-state NMR signal at this relatively high temperature was due to sedimentation of the protein complex, rather than a rotational correlation time slow enough to permit Hartman-Hahn cross polarization [33]. For ubiquitin the centrifugal forces of an MAS rotor are not sufficient to sediment the protein. Certainly a precipitate of the protein could be centrifuged but an arrest of the protein due to the solidification of the solvent is the more likely reason that dipolar based solid-state NMR techniques are applicable to our samples [33]. The temperature at which NMR of frozen solutions show the best resolution, in pure water and glycerol-water mixtures, is as close as possible to the bulk transition, or the point where the solution is rigid enough to allow dipolar CP to work. At lower temperatures the line broadening is sufficient to present practical problems for example by making the assignment of 2Ds (Figure 3) increasingly difficult. In this case selectively labeling or additional spectral dimensions become necessary [2,34]. The linewidths we observed in frozen ubiquitin solutions at all temperatures are well above those observed by Zech and coworkers in crystalline samples of ubiquitin under very similar conditions [35]. This difference may result from larger structural inhomogeneity in the frozen than in the crystalline state. It is worth noting, however, that the linewidth of crystalline proteins can also get larger when lowering the temperature as characterized in a recent study about the temperature dependence of the 13 C linewidth in crystalline SH3 [36]. Rapid freezing experiments have been proposed to obviate this problem, but Hu and coworkers showed that the speed of freezing has only a modest effect on the linewidth of natively folded protein in frozen solution [3] and a recent EPR study showed that the freezing speed has no effect on protein structure [26]. Thus, other solutions are needed to this important practical problem. We were able to observe a significant fraction of mobile water in ubiquitin in H 2 O below the freezing point of the bulk water. When expressed in a mass ratio of water per protein, we find that this ratio corresponds relatively well to the 0.4 g of hydration water per g of protein reported in the literature [14]. In a previous publication we measured 1 H -13 C correlation spectra that showed that this water is in exchange with the protein [25]. We therefore assign this non-frozen water to either the direct hydration shell of the protein or water that is in rapid exchange with the hydration shell. We observed that the intensity of the 1 H and 2 H lines of the non-frozen water in Figure 7 are much diminished at 240 K (as compared to 270 K). 
In contrast, Tompa and coworkers reported relatively constant 1 H line intensity between 273 and 228 K by analyzing the components of a static 1 H spectrum (wide-line NMR) of a very similar frozen ubiquitin solution. Interestingly, our intensity measurements correspond very well to the DSC data reported in the same study [24]. The rotation correlation time t C of the non-frozen water measured via the 2 H T 1 relaxation rate became longer when lowering the temperature of our frozen protein solution (see Figure 8a). Furthermore, the work of Vogel and coworkers indicated that these dynamics are also complex [17]. This continuous change in complex solvent dynamics underlies the glass transition, whose temperature depends partly on the measurement criterion or the timescale of the experimental method. Our observation of a strong broadening of the 1 H and 2 H non-frozen water lines around 240 K, suggests that at this temperature t C was in the order of 10 210 -10 28 s and the t C vv1 C q condition was not fulfilled any more. This timescale for water rotation corresponds very well to the correlation times reported by Vogel and coworkers, who used 2 H spectra and T 1 measurements to characterize the dynamics of supercooled water in wet protein samples. The T 1 data reported in that study fit best to a distribution of correlation times. Such a distribution of correlation times could also explain the discrepancy between the theoretical T 1 minimum and the minimal measured T 1 in our data (see Figure 8). Additional data at different magnetic fields would be needed to characterize this distribution in more detail. The 13 C linewidth of the protein behaves very differently from the 1 H and 2 H linewidth of the non-frozen water. The narrow lines at high temperatures are assumed to be the result of dynamic averaging over different conformations in the fast exchange limit. The broad lines at lower temperatures are the result of the reduction of these dynamics, freezing of multiple specific conformations, which leads to static disorder and inhomogeneous line broadening. The timescale and of the motions relevant for the 13 C linewidth are determined by the intermediate to slow exchange limit given by the heterogeneous 13 C chemicals shifts. Assuming a line broadening of about 100 Hz, the motions leading to line narrowing at high temperatures are expected to be much faster than 10 23 s, and at lower temperatures must be much slower than that timescale. The 13 C linewidth is, therefore, potentially sensitive to much slower motions than the 1 H and the static 2 H lines. From these data, the linewidth changes that we observe between 190 and 270 K are not likely due to the changes in solvent dynamics we observe via the 1 H and 2 H spectra of the non-frozen water. However, the fact that the 13 C line broadening we observe is continuous with temperature does not exclude the possibility that the change in linewidth is connected to other dynamical modes or timescales of the surrounding solvent. Bajaj and coworkers observed no significant 13 C line broadening in spectra of a dehydrated crystalline tripeptide down to temperatures of 90 K [37]. Linden and coworkers, on the other hand, observed very similar line 13 C line broadening in SH3 protein crystals that contain significant amounts of crystal water [36]. Side-chain rotamer motions, especially methyl rotations that could be responsible for the 13 C line broadening we observe have been implicated to be solvent independent [28]. 
On the other hand the motions of especially solvent accessible polar sidechains are thought to be facilitated by the presence of the solvent and arrest if the solvent shell is either absent or very viscous [18]. Dry protein samples are usually showing severe 13 C line broadening which disappears with the hydration of the protein [15,19]. This indicates that water plays a role for the slow dynamics governing the 13 C linewidth and has a function as lubricant of protein motions. The fact that ubiquitin does not make direct contact to the surrounding ice but rather has a glassy hydration shell at all temperatures [21] and the fact that non-ice forming glycerol solutions show similar line broadening effects than pure H 2 O solutions make it unlikely that the 13 C line broadening we observed is a result of sample heterogeneity due to spontaneous formation of crystalline ice at the protein surface. This model could be further tested in a number of ways. Extending the data presented in this manuscript to temperatures below 180 K would be desirable since they would answer the question whether the broadening of the 13 C lines continues at lower temperatures similar to crystalline proteins [36] or reaches a plateau when the timescale of the side-chain motions become of the order of milliseconds. Another interesting way of testing the abovementioned hypothesis is the measurement of dynamic order parameters via e.g. 1 H-13 C dipolar couplings that could identify the difference in dynamics of the protein in frozen solution above the protein-solvent glass transition and at very low temperatures. In summary, we have characterized the 13 C linewidth of ubiquitin and its dependence on non-frozen solvent dynamics in frozen pure H 2 O and glycerol-water solutions at different temperatures. In pure H 2 O solutions, after an abrupt line broadening due to the freezing of the bulk solvent, the protein 13 C linewidth broadens continuously when lowering the temperature. This line broadening is inhomogeneous and independent of 1 H decoupling as confirmed with Hahn echo experiments and perdeuterated samples, respectively. The 13 C linewidth of a protein in frozen solution does not follow the pronounced line broadening we observed of the hydration shell 1 H and 2 H line around 240 K. We hypothesize that the 13 C line broadening may be related to solvent motions other than those we observed using 1 H and 2 H NMR. In this picture, the protein 13 C linewidth depends first on the freezing point of the solvent, arresting the protein to a confined space and, after the bulk solvent is frozen, on the local dynamics of the protein and its hydration shell. The glycerol-water mixture does not undergo a sharp phase transition and we consequently did not observe an abrupt 13 C line broadening as in pure H 2 O. Generally, the 13 C linewidth in glycerol-water was smaller than in pure H 2 O at the same temperature but the linewidth increased when lowering the temperature analogous to the pure H 2 O solution. Sample Preparation Uniformly 13 C-15 N labeled ubiquitin was expressed and purified as described earlier [25]. The lyophilized protein was dissolved in deionized water and 1:1 v/v glycerol-water at final concentrations of 35 mg/ml and 32 mg/ml, respectively. NMR Solid-state 13 C and 1 H NMR spectra were recorded on a Bruker 750 MHz Advance spectrometer using a 4 mm double resonance probe operating at 12 kHz MAS. The sample temperatures were calibrated externally using Pb(NO 3 ) 2 [38]. 
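For reference, the lead nitrate calibration mentioned above exploits the strong, nearly linear temperature dependence of the 207Pb isotropic shift. The slope of roughly 0.75 ppm per kelvin and the example numbers below are commonly quoted, assumed values rather than figures taken from this paper or from ref. [38].

```python
def temperature_from_pb_shift(shift_ppm, shift_ref_ppm, temp_ref_k=293.0, slope_ppm_per_k=0.75):
    """Sample temperature (K) implied by the 207Pb shift of Pb(NO3)2 relative to a reference
    measurement at a known temperature; the ~0.75 ppm/K slope is an assumed, commonly quoted value."""
    return temp_ref_k + (shift_ppm - shift_ref_ppm) / slope_ppm_per_k

# example: a resonance that has moved 60 ppm to lower shift relative to a 293 K reference
print(temperature_from_pb_shift(shift_ppm=-60.0, shift_ref_ppm=0.0))   # ~213 K
```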
1 H 1D spectra and saturation recovery curves were recorded using hard pulses of 100 kHz. Protein 1D 13 C spectra in frozen solutions were recorded using 1 H-13 C CP experiments with a contact time of 1 ms and rf-field strengths of 50 and 62 kHz for 13 C and 1 H, respectively. Equivalent 13 C 1D spectra of ubiquitin solutions above the freezing point were recorded using direct excitation via a 90° pulse of 50 kHz. SPINAL-64 [39] decoupling with a 1 H rf-field strength of 100 kHz was used during all 13 C detection periods. The 13 C T 2 ' relaxation of ubiquitin and glycerol was measured with a CP step followed by a simple 13 C Hahn echo element and detection on 13 C. The 2D DARR spectra of Figure 3 were recorded with a mixing time of 20 ms. The spectral width was 50 kHz in both dimensions, and 1536 and 1024 points were acquired in the t 1 and t 2 dimensions, respectively. For each t 1 increment, 16 and 64 scans were acquired for the spectra at 213 K and 264 K, respectively. Deuterium spectra were recorded on a Varian 600 MHz InfinityPlus spectrometer using a static 2 H probe. T 1 was measured by saturation recovery followed by a quadrupolar echo with an echo delay of 23 ms. The T 1 of non-frozen water and of ice differ by several orders of magnitude, making it easy to separate the spectra of each component.
Cephalosporin as Potent Urease and Tyrosinase Inhibitor: Exploration through Enzyme Inhibition, Kinetic Mechanism, and Molecular Docking Studies In present study, eleven cephalosporin drugs were selected to explore their new medically important enzyme targets with inherited safety advantage. To this end, selected drugs with active ingredient, cefpodoxime proxetil, ceftazidime, cefepime, ceftriaxone sodium, cefaclor, cefotaxime sodium, cefixime trihydrate, cephalexin, cefadroxil, cephradine, and cefuroxime, were evaluated and found to have significant activity against urease (IC50 = 0.06 ± 0.004 to 0.37 ± 0.046 mM) and tyrosinase (IC50 = 0.01 ± 0.0005 to 0.12 ± 0.017 mM) enzymes. Urease activity was lower than standard thiourea; however, tyrosinase activity of all drugs outperforms (ranging 6 to 18 times) the positive control: hydroquinone (IC50 = 0.18 ± 0.02 mM). Moreover, the kinetic analysis of the most active drugs, ceftriaxone sodium and cefotaxime sodium, revealed that they bind irreversibly with both the enzymes; however, their mode of action was competitive for urease and mixed-type, preferentially competitive for tyrosinase enzyme. Like in vitro activity, ceftriaxone sodium and cefotaxime sodium docking analysis showed their considerable binding affinity and significant interactions with both urease and tyrosinase enzymes sufficient for downstream signaling responsible for observed enzyme inhibition in vitro, purposing them as potent candidates to control enzyme-rooted obstructions in future. Introduction The cephalosporins are common antibiotics prescribed in routine for broad range of infections. Lesser toxic and allergic threats along with wide action spectrum make them popular [1]. They possess β-lactam ringed structure similar to penicillin. This interferes with the synthesis of bacterial cell wall showing significant antibacterial properties. Guiseooe Brotzu, Italian scientist, isolated cephalosporin compounds from Cephalosporium acremonium cultures in 1948 [2]. They are classified generation wise, lower generations possess strong activity against gram-positive bacteria, and higher generations possess more activity against gram-negative bacteria; however, cefepime from fourth generation possesses both gram-positive activity (equivalent to first generation) and gram-negative activity (equivalent to third generation) [3]. Third generation cephalosporins are active against gram-negative rods, especially Enterobacter and multiple resistant strains. They are proven helpful in controlling hospital-acquired infections including bacteremia and pneumonia [2]. For present study, eleven drugs from cephalosporin class with single active compound, cefpodoxime proxetil, ceftazidime, cefepime, ceftriaxone sodium, cefaclor, cefotaxime sodium, cefixime trihydrate, cephalexin, cefadroxil, cephradine, and cefuroxime, were purchased aiming to explore their potential against biologically important two enzymes, urease and tyrosinase. Urease, a nickel-dependent thiol-rich metalloenzyme is responsible for ammonia and carbamate formation from urea [4]. It is usually present in bacteria, fungi, algae, plants, and invertebrates. It is also present in soil as a soil enzyme [5]. The important components of ureases for catalytic activity are Ni 2+ ions and the sulfhydryl group (especially the cysteinyl residues in the active site). 
An important virulence factor of many bacterial species including Klebsiella pneumoniae, Proteus mirabilis, Salmonella species, Staphylococcus species, and Ureaplasma urealyticum is their ureolytic activity. It is associated with pathogenesis of certain medical conditions, i.e., hepatic coma, pyelonephritis, urinary stone formation, and peptic ulceration [6,7]. Increased pH (up to 9.2) during hydrolyses of urea is observed [6]. Thus, urease activity helps bacteria to adjust pH allowing them to survive even in originally low pH of stomach causing stomach cancer and peptic ulcers during colonization [8]. Hence, urease inhibitors are the first-line strategy to control infections caused by urease-producing microorganisms. Tyrosinase, our second study enzyme, is associated with melanin synthesis responsible for hair and skin colour [9,10]. Melanin is formed from L-tyrosine conversion into 3,4dihydroxyphenylalanine (L-DOPA) which oxidizes to produce dopaquinone [11]. Thus, the tyrosinase enzyme regulates the melanin content which protects skin from UV radiations and sun burn. However, its overexpression results in hyperpigmentation causing dermatological disorders, i.e., melisma and age spots [12]. Moreover, neuromelanin in the brain and neurodegeneration are known to be linked with Parkinson's disease [13]. Tyrosinase induction produces reactive oxygen species known to cause neurotoxicity [14]. Thus, discovery of tyrosinase inhibitors is important for tyrosinase control and treatment of melanin-related skin complications [15,16]. Although many tyrosinase inhibitors are identified however, their toxic effects prohibit their commercialization, indicating the need to search new safe and effective alternatives. Thus, the focus of study is to use already existing safety proven drugs to explore their new therapeutic targets, an effective strategy which not only allow to maximize the use of drug's potential but also help to reduce evaluation time, cost, and risk of failure. Thus, eleven cephalosporin drugs were selected to evaluate their potential against two medically important enzymes. Later, kinetic study of two most potent drugs was executed and evaluated their kinetic parameters and inhibition constants to explore their mechanism of enzyme inhibition. Moreover, a plot among remaining enzyme activity versus various concentrations of respective enzymes in the presence of selected drugs was devised as determinant of reversible or irreversible behaviour of enzyme inhibition. Finally, docking study identifying the binding pattern of drug with enzyme which is important for observed enzyme inhibition was executed. Materials and Methods 2.1. Chemicals. Enzymes, mushroom tyrosinase, and urease were purchased from Sigma. Eleven drugs from cephalosporin class were purchased from local pharmacy, and their active ingredients were summarized in Table 1 and Figure S1 with formula [17][18][19][20][21][22][23][24][25][26][27]. To prepare stock, ground powder was weighted to directly dissolve in DMSO. All items were stored in recommended conditions with shelf life of safe use till all evaluations. Tyrosinase Inhibitory Assay. To evaluate tyrosinase inhibition, assay was performed as described previously [29]. Reaction was started by loading 140 μl of phosphate buffer (20 mM, pH 6.8), 20 μl of mushroom tyrosinase (30 U/ml), and 20 μl of test drug in 96-well plate. After 10 min incubation at RT, 20 μl (0.85 mM) L-DOPA (3,4-dihydroxyphenylalanine) was added and incubated again for 20 min at RT. 
Then, the OD at 475 nm was determined as a measure of dopachrome formation using a plate reader (BioTek, Elx 800). Kojic acid was used as the standard inhibitor for reference. For clear statistical analysis, experiments were performed twice in duplicate. The percentage inhibition was determined first, then the IC50 was calculated using Microsoft Excel, and the test drug results were compared with the standard. Study of Enzyme Kinetics. To evaluate the type of enzyme inhibition, a series of kinetic experiments was performed with the 2 most active drugs against both enzymes, urease and tyrosinase, following methods reported previously [29,30]. To this end, Lineweaver-Burk plots of 1/absorbance versus 1/[urea] and 1/absorbance versus 1/[L-DOPA] were constructed. In all kinetic studies, the drug concentrations (as indicated in the Lineweaver-Burk plots) and the respective substrates, urea in buffer (0.063 to 2 mM) for urease and L-DOPA (0.06 to 2 mM) for tyrosinase, were added, and the plates were incubated for 10 min at 37°C. The respective enzymes were then added to the plates, and the absorbance (at the same wavelengths as above) was monitored for 5 min at 1 min intervals. The Lineweaver-Burk plot showing the type of enzyme inhibition was drawn as the inverse of the velocities (1/V) versus the inverse of the substrate concentration, 1/[S] (mM⁻¹). The inhibition constant (Ki) was then evaluated both from the Dixon plot and from the Lineweaver-Burk plot, by a secondary replot of slope versus inhibitor concentration. Inhibition Mechanism of Potential Inhibitor. The inhibitory mechanism against both enzymes, urease and tyrosinase, was determined for the two most active drugs following Tahir et al. and Ali et al. [30,31]. To this end, a plot of remaining enzyme activity versus various concentrations of the respective enzymes, in the presence of the drug concentrations indicated in the graphs, was used to determine whether the enzyme inhibition was reversible or irreversible. 2.6. In Silico Study: Retrieval of Jack Bean Urease and Mushroom Tyrosinase from the PDB. The crystal structures of jack bean urease and mushroom tyrosinase were retrieved from the Protein Data Bank (PDB) under PDB IDs 4H9M and 2Y9X (http://www.rcsb.org/), respectively. Furthermore, energy minimization of the targets, their stereochemical properties, and the Ramachandran graphs and values of urease and mushroom tyrosinase were explored [32,33]. Moreover, to assess the architecture of the study proteins and the occurrence of α-helices, β-sheets, and coils, the tool VADAR 1.8 was used (http://vadar.wishartlab.com/). Results and Discussion In the present study, we selected eleven antibiotics from the cephalosporin family, aiming to maximize the use of their potential for multiple applications with the advantage of established safety and to root out new biological targets for them, namely the enzymes urease and tyrosinase, together with their possible inhibition mechanisms, eventually proposing an effective and safe alternative for the management of enzyme-associated medical complications. Notably, ceftriaxone sodium and cefotaxime sodium showed 18 and 6 times better tyrosinase inhibition than the standard hydroquinone. Enzymes play a key role in biological reactions and are therefore considered attractive targets in disease control [31,35]. Likewise, since tyrosinase is the rate-limiting player in the darkening of skin and fruits, its inhibition is desirable in both the cosmetics and food industries.
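As a concrete illustration of the percentage-inhibition and IC50 calculation described in the assay protocol, the following minimal Python sketch fits a standard four-parameter dose-response curve to plate-reader readings. The concentrations, absorbances, and the choice of a 4PL fit are illustrative assumptions, not values or procedures taken from this paper (which performed the calculation in Microsoft Excel).

```python
# Minimal sketch (not the authors' code): estimating percentage inhibition and IC50
# from plate-reader absorbances. Concentrations and readings below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def percent_inhibition(a_control, a_test):
    """Percentage inhibition relative to an uninhibited control absorbance."""
    return 100.0 * (a_control - a_test) / a_control

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Standard 4-parameter dose-response curve; conc and ic50 in the same units (mM)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** (-hill))

# Hypothetical drug concentrations (mM) and absorbances at 475 nm
conc = np.array([0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5])
a_control = 0.90                                   # dopachrome formation without inhibitor
a_test = np.array([0.84, 0.78, 0.66, 0.45, 0.30, 0.18, 0.10])

inhib = percent_inhibition(a_control, a_test)
popt, _ = curve_fit(four_param_logistic, conc, inhib,
                    p0=[0.0, 100.0, 0.05, 1.0], maxfev=10000)
print(f"Estimated IC50 ~ {popt[2]:.3f} mM")
```

In practice the fitted IC50 would then be compared against the value obtained for the reference inhibitor run on the same plate.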
Multiple depigmenting agents (inhibitors) such as arbutin [36], azelaic acid [37], retinoids [38], ascorbic acid derivatives [39], kojic acid [40], and hydroquinone [41] are known. However, unwanted side effects, including cytotoxicity, are observed with many well-known whitening agents such as hydroquinone and kojic acid, which limits their use. Interestingly, all tested drugs showed activity; moreover, cefotaxime sodium and ceftriaxone sodium showed a multifold better tyrosinase inhibitory effect than the standard hydroquinone. Thus, to understand the mechanism of the observed enzyme inhibition, a study of enzyme kinetics was performed. Mechanism of Urease Enzyme Kinetics. To understand the mechanism of urease inhibition, a series of kinetic experiments with the two most active drugs, cefotaxime sodium and ceftriaxone sodium, was performed, and the respective Lineweaver-Burk and Dixon plots were generated (Figures 1(a1) and 1(a2)). The Lineweaver-Burk plots, 1/V versus 1/[S], follow Michaelis-Menten kinetics and showed that both drugs behave as competitive inhibitors, since increasing their concentration produced a family of straight lines with a common intercept on the ordinate but with different slopes [42]. To obtain deeper insight, the binding affinities of the EI and ESI complexes were determined. The analysis revealed a competitive mode of urease inhibition (Figures 1(a1) and 1(a2)). The secondary replots of slope versus drug concentration and of intercept versus drug concentration yielded the EI dissociation constant (Ki) (Figures 1(b1) and 1(b2)) and the ESI dissociation constant (Ki') (Figures 1(c1) and 1(c2)). The Ki values for cefotaxime sodium and ceftriaxone sodium were calculated as 0.12 and 0.7 mM, respectively, by both the Dixon plot and the secondary replot of slope from the Lineweaver-Burk plot. The Ki' values, 30 mM (cefotaxime sodium) and 6 mM (ceftriaxone sodium), were determined from the secondary replot of intercept from the Lineweaver-Burk plot. The Ki values were smaller than the Ki' values, indicating stronger binding between enzyme and drug [43] and justifying the preferred competitive mode of inhibition. The Inhibitory Effect of Drugs on the Urea Hydrolysis Activity of Urease. To further establish whether the urease inhibition by ceftriaxone sodium and cefotaxime sodium is reversible or irreversible, experiments were performed as described in Materials and Methods. Plots of enzyme activity versus enzyme concentration (0.44, 0.88, 1.75, 3.5, 7, and 14 μg/ml) in the presence of the drugs produced a group of straight lines (Figure 2). These parallel straight lines with the same slopes indicate irreversible urease inhibition [44,45]. Thus, both drugs, ceftriaxone sodium and cefotaxime sodium, are shown to bind effectively at the urease active site and to inhibit it irreversibly. Mechanism of Tyrosinase Enzyme Kinetics. A corresponding set of kinetic experiments was performed for tyrosinase, and the respective Lineweaver-Burk plots were generated (Figures 3(a1) and 3(a2)). The evaluation showed that Vmax decreases, with a shift in Km, as the concentrations of ceftriaxone sodium and cefotaxime sodium increase, revealing their mixed-type inhibitory behaviour. This means that the drugs can interact with both the free enzyme (E) and the enzyme-substrate (ES) complex [46]. To obtain deeper insight, the binding affinities of the EI and ESI complexes were determined. The secondary replots for the EI dissociation constant (Ki) (Figures 3(b1) and 3(b2)) and the ESI dissociation constant (Ki') (Figures 3(c1) and 3(c2)) were extracted. The values of Ki and Ki' were calculated as 0.1 and 0.6 mM (ceftriaxone sodium) and 0.07 and 0.8 mM (cefotaxime sodium), respectively.
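The secondary-replot procedure used repeatedly above (Lineweaver-Burk fits at several inhibitor concentrations, then a replot of slopes against inhibitor concentration whose x-intercept gives -Ki) can be sketched as follows with synthetic data. The kinetic constants in the sketch are placeholders, not the measured values reported in this study.

```python
# Minimal sketch (synthetic data, not the paper's measurements): Lineweaver-Burk
# fits at several inhibitor concentrations and a secondary replot of slopes to get Ki.
import numpy as np

def lineweaver_burk_fit(s, v):
    """Linear fit of 1/v against 1/[S]; returns (slope, intercept)."""
    slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)
    return slope, intercept

# Substrate range used in the assays (mM) and synthetic competitive-inhibition rates
s = np.array([0.063, 0.125, 0.25, 0.5, 1.0, 2.0])
km_true, vmax_true, ki_true = 0.4, 1.0, 0.12          # assumed values for the demo
inhibitor_conc = np.array([0.0, 0.05, 0.1, 0.2])      # mM

slopes = []
for i_conc in inhibitor_conc:
    km_app = km_true * (1.0 + i_conc / ki_true)        # competitive: apparent Km rises
    v = vmax_true * s / (km_app + s)
    slopes.append(lineweaver_burk_fit(s, v)[0])

# Secondary replot: slope([I]) = (Km/Vmax) * (1 + [I]/Ki) is linear in [I];
# its x-intercept is -Ki, so Ki = intercept / slope of this line.
m, b = np.polyfit(inhibitor_conc, np.array(slopes), 1)
print(f"Ki estimated from secondary replot ~ {b / m:.3f} mM")
```

With noiseless synthetic data the replot recovers the assumed Ki exactly; with real plate data the same two linear fits are applied to the measured velocities.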
For tyrosinase as well, the Ki values were smaller than the Ki' values, indicating stronger binding between enzyme and drug [43] and a preferentially competitive character within the mixed-type mode of enzyme inhibition. The Inhibitory Effect of Drugs on the Diphenolase Activity of Tyrosinase. To explore the mechanism further and establish whether the tyrosinase inhibition is reversible or irreversible, the diphenolase activity was measured for both drugs, ceftriaxone sodium and cefotaxime sodium. Plots of enzyme activity versus enzyme concentration (0.44, 0.88, 1.75, 3.5, 7, and 14 μg/ml) in the presence of different concentrations of the drugs generated a family of straight lines (Figure 4). The parallel straight lines with the same slopes indicate an irreversible mode of enzyme inhibition [44,45]. Thus, as with urease, both of the most potent drugs, ceftriaxone sodium and cefotaxime sodium, were irreversible inhibitors of mushroom tyrosinase for the oxidation of L-DOPA. 3.5. Structural Assessment of Target Proteins. Jack bean urease has four domains with different numbers of residues. Of these, domain four is the most important because it contains the binding pocket and carries the catalytic activity. The structure consists of 27% α-helices, 31% β-sheets, and 41% coils. The Ramachandran plots showed 97.5% of residues in favored regions, indicating good precision of the phi (φ) and psi (ψ) angles among the coordinates of the jack bean urease structure (Figure S2). 3.6. Molecular Docking Analysis. The docked complexes of cefpodoxime proxetil, ceftazidime, cefepime, ceftriaxone sodium, cefaclor, cefotaxime sodium, cefixime trihydrate, cephalexin, cefadroxil, cephradine, and cefuroxime against the study enzymes were evaluated on the basis of minimum energy values (kcal/mol) and ligand interaction patterns. The docking energy values of the jack bean urease and mushroom tyrosinase docked complexes are tabulated in Table 1. 3.6.1. Binding Analyses of Drugs against Jack Bean Urease. Ceftriaxone sodium: the ligand-protein binding analyses showed that ceftriaxone sodium was confined in the active binding pocket of urease. Cefotaxime sodium: cefotaxime sodium was also found to be confined in the urease active region, as shown in Figures 5(c) and 5(d). The cefotaxime sodium-jack bean urease docked complex showed five hydrogen bonds that reflect the stability of the drug against the target protein. Two hydrogen atoms of cefotaxime sodium formed hydrogen bonds with CME592 with bond lengths of 2.44 Å and 1.82 Å, respectively. Another hydrogen bond was observed between a hydrogen atom and Ala440 with a bond length of 3.07 Å. Moreover, oxygen and nitrogen atoms of cefotaxime sodium also formed hydrogen bonds with His593 and His519 with bond distances of 2.81 Å and 3.05 Å, respectively. The other 2D depictions for urease are shown in Figure S3. The predicted results showed good correlation with published research data, which strengthens our work and its validity [47][48][49]. Binding Analyses of Drugs against Mushroom Tyrosinase. Ceftriaxone sodium: the binding analyses of ceftriaxone sodium showed that it was confined in the active binding pocket of tyrosinase (Figures 6(a) and 6(b)). The ceftriaxone sodium-mushroom tyrosinase docked complex showed 5 hydrogen bonds. An oxygen atom of ceftriaxone sodium forms a hydrogen bond with Asn81 with a bond length of 2.94 Å, and the nitrogen and oxygen atoms formed hydrogen bonds with His85 with bond lengths of 2.47 Å and 1.97 Å, respectively. Moreover, a hydrogen atom of ceftriaxone sodium formed a hydrogen bond with Glu322 with a bond length of 2.70 Å.
Similarly, another oxygen atom of ceftriaxone sodium forms a hydrogen bond with Val248 with a bond length of 2.70 Å. This agrees with previously reported findings [50][51][52]. Cefotaxime sodium: the ligand-protein binding analyses showed that cefotaxime sodium was confined in the active binding pocket of the target protein, as shown in Figures 6(c) and 6(d). In the cefotaxime sodium-mushroom tyrosinase docked complex, 2 hydrogen bonds were observed. An oxygen atom forms a hydrogen bond with His244 with a bond length of 2.97 Å, and the nitrogen atom of the compound forms a hydrogen bond with Glu322 with a bond length of 2.58 Å. The other 2D depictions for tyrosinase are shown in Figure S4. Our docking results show good correlation with published research, which strengthens our work and its validity [47,53]. The detailed interaction profiles of the drugs against urease and mushroom tyrosinase clearly depict the significance of the drugs for the enzyme activity. The binding pocket residues are the most important and active key players in the activation of signaling pathways [54]. In our predicted results, the drugs interact directly with active site residues of both urease and mushroom tyrosinase, which indicates that binding of the drugs may affect the activity of the enzymes and shows good correlation with the in vitro results. 3.7. Structure-Activity Relationship (SAR) Analysis. The SAR is the relationship between the chemical structure, with its different incorporated functional groups (Figure S1), and the biological activities against the different enzymes. Cefotaxime sodium, cefixime trihydrate, and cefpodoxime proxetil have basically the same skeleton with different functional groups. Similarly, the other drugs ceftriaxone sodium, cefepime, and ceftazidime correlate with each other in terms of basic structure. Cefaclor, cefadroxil, and cefuroxime are alike, while cephalexin resembles cephradine. The different drugs showed different inhibition behaviour and docking energy values. All the compounds have the potential to block the entry of substrate by binding to amino acid residues lying in the pocket domain. The enzyme/inhibitor complexes are stabilized by a number of different interactions such as H-bonding, π-sigma interactions, π-alkyl interactions, π-anion/cation sulphur interactions, polar interactions, stacking, and metal-ligand interactions. We discuss the binding mode of the two most active compounds (ceftriaxone sodium and cefotaxime sodium) and compare their interactions with the standard ligands. Figures 5 and 6 illustrate the relative positioning of ceftriaxone sodium and cefotaxime sodium in their most stable, minimal-energy conformation in the active site of the target. Different binding interactions were observed for ceftriaxone sodium and cefotaxime sodium owing to structural differences and the presence of an additional sodium citrate group in both drugs. Ceftriaxone sodium showed good inhibition values of 0.08 ± 0.004 and 0.01 ± 0.0005 mM with binding affinities of -7.90 and -8.40 kcal/mol, respectively, compared with the other selected drugs. Conclusion In the present study, eleven cephalosporin drugs with a single active ingredient were evaluated and found to inhibit both of the medically important enzymes, urease and tyrosinase, in vitro. All drugs outperformed the positive control, hydroquinone, in tyrosinase inhibition.
The kinetic analysis of the most active drugs, ceftriaxone sodium and cefotaxime sodium, revealed that they bind irreversibly to both enzymes; their mode of action was competitive for urease and mixed-type, preferentially competitive, for tyrosinase. In addition, the docking study showed their significant bonding with both the urease and tyrosinase enzymes, proposing them as potent candidates to control enzyme-rooted complications in the future. Data Availability The data used to support the findings of this study are included within the supplementary information file(s). Figure S1: chemical structures of the test drugs, showing the resemblance of their basic inner structure and the different functional groups at different locations. Figure S2: Ramachandran graphs of (a) jack bean urease and (b) mushroom tyrosinase. Figure S3: 2D docking of the test drugs with the urease enzyme. Figure S4: 2D docking of the test drugs with the tyrosinase enzyme. (Supplementary Materials)
4,279.8
2022-07-28T00:00:00.000
[ "Biology" ]
Analysis of Implicit Type Nonlinear Dynamical Problem of Impulsive Fractional Differential Equations We study the existence, uniqueness, and various kinds of Ulam–Hyers stability of the solutions to a nonlinear implicit type dynamical problem of impulsive fractional differential equations with nonlocal boundary conditions involving Caputo derivative. We develop conditions for uniqueness and existence by using the classical fixed point theorems such as Banach fixed point theorem and Krasnoselskii’s fixed point theorem. For stability, we utilized classical functional analysis. Also, an example is given to demonstrate our main theoretical results. Introduction Over the past few decades, differential equations of fractional order have got considerable attention from the researchers due to their significant applications in various disciplines of science and technology.Fractional derivatives introduce amazing instrument for the description of general properties of different materials and processes.This is the primary advantage of fractional derivatives in comparison with classical integer order models, in which such impacts are in fact ignored.The advantages of fractional derivatives become apparent in modeling mechanical and electrical properties of real materials, as well as in the description of properties of gases, liquids, rocks and in many other fields (see [1,2]).Since fractional order differential equations play important roles in modeling real world problems related to biology, viscoelasticity, physics, chemistry, control theory, economics, signal and image processing phenomenon, bioengineering, and so forth (for details, see [3][4][5][6][7]), it is investigated that fractional order differential equations model real world problems more accurately than differential equations of integer order.The area devoted to the study of existence and uniqueness of solutions to initial/boundary value problems for fractional order differential equations has been studied very well and plenty of papers are available on it in the literature.We refer the reader to few of them in [8][9][10][11][12][13][14] and the references therein.To model evolution process and phenomena which are experienced from sudden changes in their states, impulsive differential equations serve as a powerful mathematical tool to model them.In daily life, we observe several physical systems that suffer from impulsive behavior such as the pendulum clock action, heart function, mechanical systems subject to impacts, dynamic of system with automatic regulation, the maintenance of a species through periodic stocking or harvesting, the thrust impulse maneuver of a spacecraft, control of the satellite orbit, disturbances in cellular neural networks, fluctuations of economical systems, vibrations of percussive systems, and relaxational oscillations of the electromechanical systems.For details, see [15][16][17][18][19][20][21][22][23]. 
In some cases, nonlocal conditions are imposed instead of local conditions.It is sometimes better to impose nonlocal conditions since the measurements needed by a nonlocal condition may be more precise than the measurement given by a local condition in dynamical problems.Also, nonlocal boundary value problems have become an expeditiously 2 Complexity growing area of research.The study of this type of problems is driven not only by a theoretical interest, but also by the fact that several phenomena in physics, engineering, and life sciences can be modeled in this manner, for example, problems with feedback controls such as the steady-states of a thermostat, where a controller at one of its ends adds or removes heat.For more applications, see [24] and references therein.Due to the aforesaid significant behavior, we prefer to take nonlocal boundary conditions. The definitions of the fractional order derivative are not unique and there exist several definitions, including Grunwald-Letnikov [7], Riemann-Liouville [25], Weyl-Riesz [26], Erdlyi-Kober [27], and the Caputo [28] representation for fractional order derivative.In the Caputo case, the derivative of a constant is zero and we can define, properly, the initial conditions for the fractional differential equations which can be handled by using an analogy with the classical integer case.For these reasons, in this manuscript, we prefer to use the Caputo fractional derivative. Tian and Bai [29] studied the existence and uniqueness of the following nonlinear impulsive boundary value problem: where R → R are continuous, and with ( + ), ( + ), ( − ), ( − ) are the respective left and right limits of ( ) at = . In 1940, Ulam posed the following problem about the stability of functional equations: "Under what conditions does there exist an additive mapping near an approximately additive mapping?" (see [30]).In the following year, Hyers gave an answer to the problem of Ulam for additive functions defined on Banach spaces [31].Let B 1 , B 2 be two real Banach spaces and > 0. Then for each mapping : for all , ∈ B 1 , there is a unique additive mapping : That is why the name of this stability is Ulam-Hyers stability.Later on, Hyers results are extended by many mathematicians; for details, reader may see [32][33][34][35][36][37][38][39] and the reference therein.The mentioned stability analysis is extremely helpful in numerous applications, for example, numerical analysis and optimization, where it is very tough to find the exact solution of a nonlinear problem.We notice that Ulam-Hyers stability concept is quite significant in realistic problems in numerical analysis, biology, and economics.The aforementioned stability has very recently attracted the attention of researchers; we refer the reader to some papers in [40][41][42][43][44][45].Because fractional order system may have additional attractive feature over the integer order system, let us suppose the following example to show which one is more stable in the aforementioned (fractional order and integer order) systems. 
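Before outlining the structure of the paper, it may help to recall the classical Hyers theorem referred to above; a standard textbook formulation, in generic notation rather than this paper's own wording, is:

```latex
% Classical Hyers theorem (1941), standard formulation with generic notation:
Let $B_1, B_2$ be real Banach spaces and let $\varepsilon > 0$. If a mapping
$f : B_1 \to B_2$ satisfies
\[
  \lVert f(x+y) - f(x) - f(y) \rVert \le \varepsilon
  \qquad \text{for all } x, y \in B_1,
\]
then there exists a unique additive mapping $A : B_1 \to B_2$ such that
\[
  \lVert f(x) - A(x) \rVert \le \varepsilon \qquad \text{for all } x \in B_1,
\]
and $A$ can be obtained as $A(x) = \lim_{n \to \infty} 2^{-n} f(2^{n} x)$.
\]
```

The Ulam-Hyers stability notions used for problem (7) generalize this pattern from the additive functional equation to the impulsive fractional differential equation under study.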
The manuscript is structured as follows.In Section 2, we give some definitions, theorems, lemmas, and remarks.In Section 3, we built up some adequate conditions for the existence and uniqueness of solutions to the considered problem (7) through using fixed point theorems of Krasnoselski's and Banach contraction type.In Section 4, we establish applicable results under which the solution of the considered boundary value problem (7) satisfies the conditions of different kinds of Ulam-Hyers stability.The established results are illustrated by an example in Section 5. Background Materials and Auxiliary Results This section is devoted to some basic definitions, theorems, lemmas, and remarks which are useful in existence and stability results. Lemma 4 (see [11]).Let > 0, then the solution of the differential equation will be in the following form: where = [] + 1. Consider U be a convex closed and nonempty subset of Banach space X.Suppose F * , G * be the operators such that (ii) F * is compact and continuous and G * is contraction mapping. Then there is ∈ U such that = F * + G * . Theorem 7 ((Banach fixed point theorem) see [47]).Suppose S be a nonempty closed subset of a Banach space B. Definition 11 (see [48]).The considered problem ( 7) is generalized Ulam-Hyers-Rassias stable with respect to ∈ (I, R + ), if there is a constant ,, ∈ R + , such that, for every solution ∈ X of inequality (19), there is a unique solution ∈ X of the considered problem (7) with Remark 12.A function ∈ X is a solution of inequality (17), if there is a function ∈ X and a sequence , = 1, 2, . . ., (which depend on only), such that Remark 13.A function ∈ X is a solution of inequality (18), if there is a function ∈ X and a sequence , = 1, 2, . . ., (which depend on only), such that (19), if there is a function ∈ X and a sequence , = 1, 2, . . ., (which depend on only), such that Hence proof is completed. We define D : X → X by where Suppose that the following hold. ( The following result is based on Banach contraction theorem. Theorem 16.Under the assumptions ( 1 )-( 3 ) and if the considered problem (7) has a unique positive solution. Proof.Suppose , ∈ X and for every ∈ I, consider where which implies that D is contraction.Hence, the considered problem ( 7) has a unique positive solution. The following result is based on Krasnoselskii's fixed point theorem. Theorem 17. In addition to assumptions (𝐴 If then the considered problem (7) has at least one positive solution. Ulam Stability Results In this section, we built up some sufficient conditions under which problem (7) satisfies the assumptions of various kinds of Ulam-Hyers stability. Lemma 18. If ∈ X is the solution of inequality ( 17) and 1 < ≤ 2, then is the solution of the following inequality: Proof.Since is the solution of inequality (17), so in view of Remark 13, we have So the solution of (65) will be in the following form: where For convenience, we denote the sum of terms free of by ](), that is, Therefore, (66) becomes By using (i) of Remark 13, we get Theorem 19.Let assumptions ( 1 )-( 3 ) hold along with the condition Then problem (7) will be Ulam-Hyers stable. Proof.Suppose ∈ X be any solution of inequality ( 17) and let be the unique solution of the considered problem ( 7 where ( 8 ) Suppose a function ∈ (I, R + ), which is increasing.Then there is > 0, such that, for every ∈ I, the following integral inequality holds. Lemma 21. 
Let assumption ( 8 ) hold and suppose ∈ X is the solution of inequality (18), then will be the solution of the following integral inequality: Proof.From Lemma 18, we have (95) Hence, the considered problem ( 7) is Ulam-Hyers-Rassias stable. Example Consider the following implicit impulsive fractional differential equations with nonlocal boundary conditions.96) is Ulam-Hyers stable and by Remark 20, it will be generalized Ulam-Hyers stable.Also by demonstrating the conditions of Theorem 22 and Remark 23, it can be easily seen that the considered problem (96) is Ulam-Hyers-Rassias stable and generalized Ulam-Hyers-Rassias stable. Conclusion We have successfully built up some proper conditions for existence theory of implicit impulsive fractional order differential equations with nonlocal boundary conditions, by using different kinds of fixed point theorems which are stated in Section 2. The concerned theorems ensure the existence and uniqueness of solution.Further, we did settle some adequate conditions for different kinds of Ulam-Hyers stability (see Theorems 19 and 22 with Remarks 20 and 23, respectively) by using Definitions 8-11.The mentioned stability is rarely investigated for implicit impulsive fractional differential equations and also very important.Finally, we illustrated the main results by giving a suitable example.
2,363.2
2018-02-12T00:00:00.000
[ "Mathematics" ]
BLOCK ADJUSTMENT OF MULTISPECTRAL IMAGES WITHOUT GCP AIDED BY STEREO TLC IMAGES FOR ZY-3 SATELLITE Multispectral images are the main data source of optical satellite remote sensing application. Consistent geometric accuracy is the basis of image registration and fusion. But the multispectral images usually collected by the nadir imaging camera are of weak geometric intersection, which will lead to the rank defect of the adjustment equation when performing block adjustment (BA) directly, making the solution unstable. Thus, a planar BA aided by additional digital surface model (DSM) which can overcome this weak geometry is often used to improve the geometric accuracy and consistency of regional planar images. However, the inevitable elevation error and the indispensable ground control points (GCPs) make the method limited in practical application. In this paper, a new method aiming to the BA of planer multispectral images of ZY-3 satellite without use of GCPs is presented. This method introduces the constraint of stereo images to assist the BA of planar images. By configuring appropriate weights of different observations, the integrated optimization of positioning models of planar and stereo images can be achieved together. The effectiveness of the method was verified by the planar multispectral images and stereo three linear camera (TLC) images collected by ZY-3 satellite. The satisfactory results indicated the rationality and effectiveness of the presented method. INTRODUCTION The multispectral images of ZY-3 satellite contain four spectral bands (red, green, blue, and near-infrared) and have a spatial resolution of 5.8 m (Li, 2012). The approximate nadir pushbroom imaging collects the images with a swath width of 52km, thus, the maximum intersection angle of adjacent images is about 5.8 degrees, which is an extremely weak stereo geometry. Consistent geometric accuracy is the basis of subsequent application of multispectral images. Although ZY-3 satellite has been calibrated in orbit, the attitude error and some random influence on the satellite still cause significant accuracy differences between multispectral images and between multispectral and panchromatic images, attenuating the reliability in subsequent applications. Aiming to eliminating this accuracy differences and ensuring the geometric consistency among the ZY-3 images, we developed an effective block adjustment (BA) method to improve the geometric accuracies of multispectral and panchromatic images together. According to the intersection geometry of the images, BA method for satellite remote sensing images can be divided into stereo BA (Grodecki and Dial, 2003;Rottensteiner, et al., 2009) and planar BA (Teo, et al., 2010;Pi, et al., 2019). The former is developed for such images with sufficient intersection geometry (base-height ratio greater than 0.6 or intersection angle greater than 30 degrees), such as the three linear camera (TLC) images of ZY-3 satellite, the HR images of SPOT5 satellite; from which an adequate BA model with stereo measuring function could be established simply (Yang, et al., 2017). However, the method is not suitable for the case when satellite images are acquired in an approximately nadir viewing mode. In such case, the intersection * Corresponding author geometry among these images is relatively weak due to the characteristics of high orbit and narrow field of view. 
High resolution nadir images and multispectral images usually have such geometric characteristics, and these images with high spatial resolution and spectral resolution are the main data sources for the follow-up satellite remote sensing applications. The latter planar block adjustment is always adopted for the geometric process of these planar images, in which an available digital surface model (DSM) is usually used as elevation constraint to overcome the weak intersection geometry, suppressing the instability in calculation. However, a qualified DSM reference data is not always available readily in practical application, especially in those region blank of mapping. Some open DSM data at global scale, such as SRTM and ASTER, is restrained by limited sample size and geometric accuracy, and invalid in improving the BA accuracy sometimes, especially in those region of complex terrain. The reason is that in the planar BA, the elevation error will cause the accumulation of planar error in the block, resulting in the decline of the BA accuracy. Furthermore, due to the initial geometric positioning error of satellite images, it is necessary to use ground control points (GCPs) to ensure the registration of satellite image and DSM. Otherwise, the elevation error caused by the dislocation will still reduce the accuracy and reliability of BA. Therefore, this method is not suitable for the BA without use of GCPs and limits its application. In this paper, we present a new BA method for planar multispectral images without use of GCPs, inspired by the fact that a desirable stereo observing model with sufficient intersection can be derived from stereo satellite images. Through reasonable tie points (TPs) matching, the introduced stereo images can effectively improve the weak intersection of block, enabling stable and accurate estimation. In this method, we adopt the unified rational function model (RFM) as basic imaging model for both planar and stereo images, and establish the BA model by adding a correction model in the imagery space of RFM. On this basis, this paper describes a series of key technologies involved in this method, including establishment of BA model, weight setting of various observations and optimal estimation of unknown parameters. Compared with the traditional planar BA method, this method can not only overcome the weak intersection geometry of planar images, but also will not introduce elevation interpolation error since its elevation constraint directly comes from stereo satellite images. Additionally, the elimination of DSM makes it suitable for the BA without the use of GCPs and more practical. METHODOLOGY The bundle BA method is used to perform the mixed adjustment of multispectral planar images and TLC stereo images, and the RFM model of single image was used as the basic adjustment unit (Tao, et al., 2001). The tie points (TPs) were identified among the adjacent images automatically, and the virtual control points (VCPs) (Yang, et al., 2012) were generated to overcome the freedom of the block result from the lack of GCPs. According to the adjustment theory, the weight of the TPs, VCPs and elevation constraint were determined by prior knowledge and updating according to the posteriori accuracies. With the TPs, VCPs, original RPCs and estimated ground coordinates of TPs as inputs, the BA model then could be established, therefore the calculation of the BA parameter was conducted. The specific approach of the BA method without use of GCPs is shown in Figure. 1. 
Matching TPs Establishing modified normal equation Figure 1. Approach of the BA without use of GCPs Block adjustment model Geometric imaging model is the fundamental mathematic model in BA for optical satellite images, which establishes the relationship between each image point ( , ) ls and its objectspace counterpart ( , , ) B L H . The RFM, fitted from the rigorous physical model (Madani, 1999), is widely used in the geometric process of satellite images due to its simple and unified form. Therefore, we also adopt the RFM for the presented BA method. Through attaching a suitable error correction model ( , ) ls  into the image space of RFM, the basic BA model can be established, as in Eq.(1). The choice of error correction model depends on the estimation of remaining geometric error existing in the test images. The commonly used models are translation model, affine model and quadratic polynomial model. For the multispectral images and TLC images of ZY-3 satellite, the highorder geometric errors are compensated well by on-orbit geometric calibration in commissioning phase (Wang, et al., 2014;Zhang, et al., 2014), thus an affine model is usually sufficient to correct the remaining errors. The essence of BA without use of GCPs is the optimization based on a rank defect model. In general, supplementary constraints need to be developed to improve the BA model, and ensure the stable solution of BA. Here, we introduced the VCPs generated with the initial RPCs of images as weighted observations to improve the condition of BA model. The VCPs could be regarded as the GCPs with smaller weights. Therefore, the observations in our method include not only the TPs, but also the VCPs. By linearizing the adjustment model Eq. (1), we established the error equations for the VCP and TP, respectively. The difference between them is that the unknown parameters corresponding to the TP include not only the coefficients of the correction model, but also the corresponding ground point coordinates. As in Eq. (2) where t is the correction for the correction model coefficients, Weight strategy In BA, the contribution from each observation is controlled by the weight matrix. According to the classical least-square adjustment theory, each observation should be assigned to a reasonable weight according to its variance to ensure the optimal estimation of unknown parameters. However, in most cases, the accurate variance of observation is unavailable, and an initial weight is set empirically and roughly in terms of some prior knowledge. In the combined error equation Eq. (2), the observations involved in presented BA were divided into three groups, which are the VCPs of TLC images, VCPs of multispectral images and the TPs. The observations in different groups are independent of each other. Supposing that the observations in the same group have the same variance, thereby only three kind of weights need to be determined. The weights of the TPs could be estimated according to the matching accuracy by high-precision matching operators. In general, the matching accuracy of TPs among remote sensing images is better than one pixel, thus the initial weight of TP can be set to 1.0 directly. The weights of VCPs are difficult to determine due to the lack of prior information, which is critical for the optimal estimation of the BA parameters. If the weights are set too large, the power of the TPs will be weakened, resulting in relative error among images. 
Conversely, if the weights are set too small, the freedom of the adjustment parameters fails to be constrained desirably, leading to poor convergence and unstable estimation. For ZY-3 images, a great number of studies have demonstrated that the initial RPCs provided by the vendors have a geopositioning accuracy of about 20 m after geometric calibration (Tang, et al., 2012; Cao, et al., 2015); thus, the initial weights of the VCPs could be set to 0.0025 according to the relationship P = 1/σ² (where σ² is the variance) between weight and accuracy. However, this weight is not constant: with the iterative solution of the BA, the image accuracy continues to improve, and if the weight of the VCPs remains too small, it cannot play a role in constraining the deformation and error accumulation within the block, so it is necessary to update the weight according to the current imagery posteriori accuracy. It should be noted that these are only the initial weights of the observations, and the final weights need to be balanced according to the numbers of TPs and VCPs. The weight of the elevation constraint depends on the intersection condition of the TPs. When the maximum intersection angle of the corresponding light rays in a TP is greater than 30 degrees, it is considered to be a good stereo intersection, and the elevation constraint can be ignored. When the intersection is weak, a larger weight should be configured to ensure the stability of the elevation estimation. In this method, the elevation constraint weight of a TP with an intersection angle of less than 0.1 arc is set to 0.01, and that with a larger angle is set to 0.005. Estimation of block adjustment Based on the established error equation, we adopted the least squares method to calculate the adjustment parameters iteratively. For all TPs, the current RPC parameters were used to determine the corresponding ground 3D coordinates by forward intersection, and for the TPs of weak intersection, the ground 3D coordinates were estimated under the weighted elevation constraint. Then, the estimated coordinates were introduced into the BA model as the initial values to solve the unknown parameters. In the BA model, there are two kinds of unknown parameters (the coefficients of the correction model and the ground coordinates of the TPs) that need to be estimated, but they are not calculated together. In the least squares solution, the unknown ground coordinates, which are far more numerous, were eliminated to construct a modified normal equation containing only the parameters of the error model. In the solution of the modified equation, a ternary storage structure based on a sparse matrix was used to reduce the cost of storage and the complexity of data organization, and the conjugate gradient method was used to improve the efficiency of the solution. With the estimated parameters, the RPC models were updated and then the ground coordinates were re-estimated. The adjustment calculation is an iterative process performed until the difference between two successive results is less than a predefined tolerance criterion. Additionally, a stepwise solution strategy from low-order to high-order parameters was developed for the estimation of the correction model parameters. In the initial solution, when the weight of the VCPs is small, only the translation error is calculated, and then the other higher-order errors are further estimated as the weight is updated. The purpose of this is to prevent the deformation and error accumulation in the image block caused by the unequal distribution of TPs when the weight of the VCPs is small.
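A minimal sketch of the kind of weighted solve outlined above is given below. It assumes a generic sparse design matrix for the image-space correction parameters (after elimination of the ground coordinates) and the P = 1/σ² weighting quoted in the text; the dimensions, data, and structure are placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of one iteration of the weighted least-squares
# solve described above: TP and VCP observations with different weights, sparse normal
# equations, and a conjugate-gradient solution.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def solve_ba_step(A, residuals, is_vcp, sigma_tp_px=1.0, sigma_vcp_px=20.0):
    """One weighted LSQ step: returns corrections to the image-space correction parameters.

    A          : (n_obs x n_params) sparse design matrix (placeholder structure)
    residuals  : observed-minus-computed image residuals (pixels)
    is_vcp     : boolean mask marking virtual-control-point observations
    """
    # Weights follow P = 1 / sigma^2, as in the text (TPs ~1 px, VCPs ~20 px initially).
    w = np.where(is_vcp, 1.0 / sigma_vcp_px**2, 1.0 / sigma_tp_px**2)
    W = sp.diags(w)
    N = (A.T @ W @ A).tocsc()            # modified normal matrix, kept sparse
    rhs = A.T @ (W @ residuals)
    dx, info = cg(N, rhs, atol=1e-10)    # conjugate-gradient solve of N dx = rhs
    if info != 0:
        raise RuntimeError("CG did not converge")
    return dx

# Tiny synthetic example: 3 images x 6 affine parameters, 40 image observations
rng = np.random.default_rng(0)
A = sp.csr_matrix(rng.normal(size=(40, 18)))
residuals = rng.normal(scale=0.5, size=40)
is_vcp = rng.random(40) < 0.25
print(solve_ba_step(A, residuals, is_vcp)[:6])
```

In the full pipeline this step alternates with forward intersection of the TP ground coordinates and with updating of the VCP weights until the change between successive solutions falls below the tolerance.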
Description of dataset and test area We choose the Weifang in Shandong Province of China as the test area to verify the present BA. A total of 16 multispectral images with corresponding TLC stereo image pairs were used in this BA, and the corresponding RPC files were provided. These images were collected between from January 2014 to September 2018. The overlap along and across the track is greater than 15% and 50%, respectively. Based on the high precision matching algorithm (Temizel and Yardimci, 2011), a total of 6649 reliable TPs were identified, the distribution of these images and the identified TP over the test area is shown in Figure. Based on the identified TPs and generated VCPs, the BA of the multispectral images and stereo TLC images were achieved together under the condition without use of GCPs. All the BA calculations were achieved on a personal computer in one minute. Detailed information about the test area is listed in Table 1. Parameter Value Size of test area About 22,100 km 2 Number of images 64 Number of TPs 6649 Topographic relief About 400 m Table 1. Basic information of the experimental area. Accuracy assessment and analysis Since the BA was performed without the use of GCPs, which is unable to improve the absolute geo-positioning accuracy of images, we focused on the assessment of the relative geometric accuracy. The relative geometric accuracies of the images before and after BA were compared in this experiment. Two kinds of quantitative indexes were used to evaluate the relative geometric accuracy of the block. One is the statistical accuracy of the relative residual of the TPs on each image, and another is the relative geometric accuracy between adjacent images. The original RPCs and that refined using the estimated parameters were used to verify the accuracy before and after BA, respectively. The image relative residuals of a TP on an image is the bias between the measured and estimated image points. After determining the relative residuals of all TPs, the root mean square error (RMSE), mean value (Mean), and maximum error (Max) of the residuals could be counted, as listed in Table 2. Table 2. The statistics of the relative image residuals of TPs in the whole block before and after BA. Stage The overall relative error of the block before BA is about 13 pixels according to comprehensive RMSE. The large differences between the mean values and the maximum error indicate that the original geometric accuracies of the images are erratic. After the presented BA, the RMSE of the relative residuals was improved to better than 0.5 pixels, and the mean values of the residuals were almost zero, indicating the presented model and the solution strategy such as weight determination and stepwise estimation is reasonable. Additionally, the small deviations between mean values and maximum errors after BA shows that the images in the block have the consistent geometric accuracy, and further verify the effectiveness of the method. To further verify the relative geometric accuracy of BA, the splicing accuracies of each adjacent image pair before and after BA were also evaluated. The corresponding points identified in the overlapping area of adjacent images were used as checkpoints to assess the relative geometric accuracy. The root mean square (RMS) of the relative residuals for the checkpoints was used as the index to describe the relative geometric accuracy of each image pair. A total of 10 pairs of images distributed in the whole block were selected for the accuracy assessment. 
For each pair of images, the relative geometric accuracy between multispectral images (M-M), between panchromatic images (P-P), and between multispectral and panchromatic images (M-P) before and after BA were verified. The distribution of these splicing accuracies is shown in Figure. 3. It can be observed in Figure. 3 that the distribution of the relative geometric errors between multispectral images, between panchromatic images, and that between multispectral and panchromatic images all are disordered before BA, thus the initial geometric accuracy of the images in the block is extremely inconsistent. After the presented BA, the three kinds of splicing accuracies of the image pairs all are improved to better than 1 pixel, and the distribution of the accuracies is much more consistent, which indicate the effectiveness of the presented BA method. For an intuitive comparison of the improvement in relative geometric accuracy, we performed a vision comparison of splicing between multispectral images ( Figure. 4(a)), between multispectral and panchromatic images ( Figure. 4(b)), and between panchromatic images ( Figure. 4(c)), before and after BA. As shown in Figure. 4, in which improvement of the relative geometric accuracy is apparent. CONCLUSION A mixed BA method was presented in this paper aiming to the geometric correction of the planar multispectral images. With the supplement of stereo TLC images of ZY-3 satellite, we overcome the weak intersection geometry of the planar images by a weighted constraint BA model. Through a reasonable weighting strategy and a stepwise solution method, the optimal optimization of relative geometric accuracy among the multispectral images and that between the multispectral images and panchromatic images were achieved under the condition without use of GCPs and additional DSM. Based on our method, the BA of 64 multispectral and TLC images of ZY-3 satellite was performed. The relative geometric accuracy over the whole block is within a pixel, which meets the demand for seamless mosaic. The satisfactory experimental results indicate the rationality and effectiveness of this method.
4,347
2020-08-03T00:00:00.000
[ "Environmental Science", "Engineering" ]
Estimating rooftop solar technical potential across the US using a combination of GIS-based methods, lidar data, and statistical modeling We provide a detailed estimate of the technical potential of rooftop solar photovoltaic (PV) electricity generation throughout the contiguous United States. This national estimate is based on an analysis of select US cities that combines light detection and ranging (lidar) data with a validated analytical method for determining rooftop PV suitability employing geographic information systems. We use statistical models to extend this analysis to estimate the quantity and characteristics of roofs in areas not covered by lidar data. Finally, we model PV generation for all rooftops to yield technical potential estimates. At the national level, 8.13 billion m2 of suitable roof area could host 1118 GW of PV capacity, generating 1432 TWh of electricity per year. This would equate to 38.6% of the electricity that was sold in the contiguous United States in 2013. This estimate is substantially higher than a previous estimate made by the National Renewable Energy Laboratory. The difference can be attributed to increases in PV module power density, improved estimation of building suitability, higher estimates of total number of buildings, and improvements in PV performance simulation tools that previously tended to underestimate productivity. Also notable, the nationwide percentage of buildings suitable for at least some PV deployment is high—82% for buildings smaller than 5000 ft2 and over 99% for buildings larger than that. In most states, rooftop PV could enable small, mostly residential buildings to offset the majority of average household electricity consumption. Even in some states with a relatively poor solar resource, such as those in the Northeast, the residential sector has the potential to offset around 100% of its total electricity consumption with rooftop PV. Introduction How much energy could be generated if all suitable roof area in the United States had solar photovoltaics (PV)?This quantity is the technical potential of rooftop PV-an established reference point for renewable technologies that quantifies the generation available from a particular resource considering its availability and quality, the performance of the technology capturing the resource, and the physical area suitable for development (e.g.see Lopez et al 2012).It does not consider economics, growth potential, or gridintegration factors; thus it is neither an endorsement of particular deployment levels nor a prediction of expected deployment.Rather, it represents an upper limit on a technology's current potential generation. Various approaches are used for estimating rooftop PV potential, as summarized in Melius et al (2013) and Freitas et al (2015).These methods have been used to estimate the suitability of rooftops and PV technical potential for numerous cities and countries, such as San Diego (Anders and Bialek 2006) Yet nationwide analyses of US PV technical potential have been limited.Google's Project Sunroof is primarily a resource for building-level PV suitability information, but it has made several statements about overall suitability trends as well (Fehrenbacher 2017).In addition, the National Renewable Energy Laboratory (NREL) generated a national supply curve for rooftop PV (Denholm and Margolis 2008). 
This paper provides an updated analysis of the technical potential of rooftop PV across the entire United States.It extends the analysis described in a previ- Methods Our analysis of US rooftop PV technical potential has three stages.First, we characterize roof area sizes and orientations for a subset of US buildings for which we have detailed light detection and ranging (lidar) data.Second, we build statistical models to estimate the quantity and characteristics of roofs in areas not covered by lidar data.Finally, we model PV generation for all rooftops to yield technical potential estimates. Lidar data Our lidar data, from the US Department of Homeland Security's Homeland Security Infrastructure Program for 2006-2014, cover approximately 23% of US buildings3 .Margolis et al (2017) provide a detailed description of the initial geographic information system (GIS) processing of this data set, and discussions of trends observed in the areas covered by the lidar data.In brief, we define a set of shading, tilt, azimuth, and contiguous-roof-area criteria to determine what roof area is suitable for hosting rooftop PV.Roof area is considered unsuitable if facing northwest through northeast, tilting more steeply than 60 degrees, or enabling a PV system on the roof plane to produce less than 80% of the energy that would be produced by an unshaded system at the same location.A roof is considered suitable if it meets those criteria and has at least one contiguous plane with a projected horizontal footprint of 10 m 2 or greater. We apply these criteria to determine the quantity and characteristics of the roof area that could host PV for the 23% of US buildings covered by lidar data.To analyze trends by building size, we split the buildings into three sizes classes: small (less than 5000 ft 2 ), medium (5000-25 000 ft 2 ), and large (greater than 25 000 ft 2 ). Rooftop area and suitability modeling To extend our analysis to all buildings nationwide, we build a model to estimate the quantity and characteristics of rooftops not covered by lidar data.Gagnon et al (2016) provides a full mathematical description of our modeling, including validation calculations and additional analytical techniques.Here we describe the modeling approach briefly. Suitability modeling First, we estimate the number of buildings with at least one suitable roof plane.In our lidar data, greater than 99% of medium and large buildings have at least one suitable plane.Therefore, we simplify by assuming that all medium and large buildings in the United States have at least one suitable roof plane, and we use data from the Commercial Building Energy Consumption Survey for estimates of building counts at the census-division scale (CBECS) (EIA 2012). Because small buildings are more frequently unsuitable for rooftop PV, we build a model to estimate the fraction of suitable small buildings by US ZIP code.Using the observed trends in the lidar data, our predictive regression model leverages variables that are well correlated with PV suitability: locale description, census division, land cover classification percentage, and lidar coverage4 .When we check the model's predictions against buildings that have lidar data, about 60% of predictions are within 10% of the observed lidar values, and about 80% of the predictions are within 20% of the observed values. 
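The plane-level screening criteria listed earlier (azimuth, tilt, shading, and minimum contiguous area) can be expressed as a short filter. The sketch below uses a hypothetical data structure and the thresholds quoted in the text; the handling of boundary values and of flat roofs is a simplification chosen for illustration, not NREL's code.

```python
# Minimal sketch (hypothetical data structure, not NREL's code) of the plane-level
# screening criteria summarized above: discard planes facing northwest through
# northeast, tilted more than 60 degrees, shaded below 80% of unshaded output,
# or smaller than 10 m^2 of contiguous footprint.
from dataclasses import dataclass

@dataclass
class RoofPlane:
    azimuth_deg: float        # 0 = north, 180 = south
    tilt_deg: float
    shading_ratio: float      # annual output / unshaded output at the same location
    footprint_m2: float       # projected horizontal footprint of the plane

def plane_is_suitable(p: RoofPlane) -> bool:
    faces_northish = p.azimuth_deg <= 45.0 or p.azimuth_deg >= 315.0   # NW through NE
    return (not faces_northish
            and p.tilt_deg <= 60.0
            and p.shading_ratio >= 0.80
            and p.footprint_m2 >= 10.0)

def building_is_suitable(planes: list[RoofPlane]) -> bool:
    # A building counts as suitable if at least one plane passes all criteria.
    return any(plane_is_suitable(p) for p in planes)

planes = [RoofPlane(200.0, 25.0, 0.92, 18.0), RoofPlane(20.0, 30.0, 0.95, 40.0)]
print(building_is_suitable(planes))   # True: the south-facing plane qualifies
```

Applying such a filter to every lidar-derived plane yields the observed suitability fractions that the regression models then extrapolate to areas without lidar coverage.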
We also develop a regression model to predict the total number of small buildings in each ZIP code.The model inputs include building counts from the 2011 US Census American Community Survey (ACS), ZIP code population and population density, the northing of the ZIP code, and land cover classifications.About 50% of this model's predictions are within 10% of observed values, and about 70% of predictions are within 20% of observed values.The total numbers of small buildings are multiplied against the fractions that are suitable for PV to estimate the total number of suitable small buildings in each ZIP code. Roof plane area The process above estimates the number of suitable buildings but not the suitable roof area on those buildings.To estimate roof area, we use observations from the lidar data to characterize trends in the number and size of planes by building class. For each building size class, we take uniform random samples of buildings in the ZIP codes where lidar data are available.We generate exponential fits of plane-area trends for each building size class.We then sample from the suitable building count distributions described in section 2.2.1 to estimate the total size and number of suitable roof planes nationwide.Because of the resolution of the input data (see section 2.2.1), small building estimates are at the ZIP code level, whereas medium and large building estimates at the census-division scale are distributed to states by population weight; this method of distribution assumes that the total rooftop area of medium and large buildings correlates linearly with population at a sufficiently large geographic scale5 . Roof plane orientation Modeling the azimuth and tilt of roof planes is also important for modeling PV generation.We categorize each roof in our lidar-covered data set into one of 21 unique orientation bins, yielding probability trends for the roof characteristics of small, medium, and large buildings.For example, 18% of small building planes are south-facing with a tilt between 22 and 35 degrees.We then sample from these probability trends to estimate the characteristics of roof planes in areas without lidar coverage. PV capacity and generation modeling By combining the models of suitable building counts, roof plane numbers and sizes, and plane characteristics, we arrive at a prediction of the total amount and orientation of suitable roof area for all US buildings.Assuming that tilted roofs could hold PV modules at a 0.98 module-area-to-roof-area ratio and flat roofs at a 0.70 ratio, and multiplying the module area by the assumed power density of the modules, we obtain the PV capacity that could be installed on the suitable roof area6 .Finally, we simulate the energy-generation potential of this PV capacity via NREL's System Advisor Model (SAM). The solar resource and meteorological data used for this analysis are from the Typical Meteorological Year near major cities, the average distance from a ZIP code boundary to the nearest station is 9 km within our lidar data set. 
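The area-to-capacity conversion described above can be sketched as follows. The packing ratios come from the text, while the module power density is an assumed illustrative value (the paper's value is specified in table 1, which is not reproduced here); the final line is a consistency check against the paper's own national totals.

```python
# Minimal sketch (not the authors' code): converting suitable roof area into installed
# PV capacity with the packing ratios quoted above (0.98 for tilted planes, 0.70 for
# flat roofs). MODULE_W_PER_M2 is an assumed illustrative module power density.
MODULE_W_PER_M2 = 160.0          # assumed W per m^2 of module area (illustrative)

def installed_capacity_kw(suitable_area_m2: float, flat: bool) -> float:
    packing = 0.70 if flat else 0.98
    module_area = suitable_area_m2 * packing
    return module_area * MODULE_W_PER_M2 / 1000.0

# Example: a 30 m^2 tilted plane and a 200 m^2 flat commercial roof section
print(installed_capacity_kw(30.0, flat=False))    # ~4.7 kW
print(installed_capacity_kw(200.0, flat=True))    # ~22.4 kW

# Scale check against the paper's totals: 8.13 billion m^2 of suitable area hosting
# 1118 GW implies an average of roughly 137 W per m^2 of suitable roof area.
print(1118e9 / 8.13e9)
```

The resulting capacities, binned by the tilt/azimuth classes described above, are what the SAM simulations convert into annual generation.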
In addition to the geographic variation of solar resource, the equipment used and the design choices of the installer affect the technical performance of PV systems.We use a set of equipment and design assumptions that represent the average performance of PV systems installed in 2015 (table 1).To decrease the computational burden, similar planes are grouped into orientation bins and assigned the bins' midpoint tilt and azimuth values (figures 1 and 2).For example, any roof plane with a tilt between 34.8 and 47.4 degrees and an azimuth between 157.5 and 202.5 degrees is modeled with a tilt of 41.1 degrees and azimuth of 180 degrees.We use the PV performance values in SAM, in conjunction with the TMY3 solar resource and meteorological profiles, to estimate the annual electricity generation of the PV systems8 . Results and discussion Table 2 shows the total technical potential of rooftop PV in the contiguous United States.Although 74% of the total rooftop area on small buildings is unsuitable for PV deployment, the sheer number of buildings in this class gives small buildings the greatest technical potential-constituting approximately 65% of the total rooftop PV technical potential.The following subsections break the results down geographically to show regional trends. Small buildings This subsection presents results only for small buildings (with footprints less than 5000 ft 2 ).Because the national building stock contains about 78 million single-family households but only 3.2 million commercial buildings with a footprint of less than 5000 ft 2 , the results shown here can be interpreted as approximately representing trends in the residential sector. Figure 3 shows the percentage of small buildings that are suitable for PV by ZIP code9 .Suitability is broadly high-although suitability dips below 70% in a small proportion of ZIP codes, in no state are less than 72% of the total number of small buildings suitable for PV.The states with the highest percentages of suitable rooftops are Florida (90%) and Texas (89%).Across the contiguous United States, 82% of small buildings are suitable for PV.This value aligns closely with similar suitability numbers from Google's Project Sunroof, which states that 79% of all buildings are suitable for PV (Fehrenbacher 2017). Figure 3 also shows several regional trends in small building suitability.The highest densities of highsuitability ZIP codes are in southern California, Florida, Louisiana, and Texas.The percentage of suitable small buildings tends to be higher in regions without significant tree canopy coverage; for example, the relatively unforested southeast portion of Washington has a higher percentage of suitability compared with the heavily forested northwestern part of the state. Figure 3 should be interpreted with care, however.Developable area for rooftop PV is highly correlated geographically with population.Most potential for PV energy generation is condensed in the relatively small fraction of the country's land space that is developed.National maps such as the one shown in figure 3, therefore, can overemphasize the weight of rural regions if used to visually approximate a particular metric.Nonetheless, such maps can be useful for observing broad geographic trends.For example, the low suitability of northern Minnesota has little impact on the state's total technical potential, but it does illustrate the effect of heavy forestation on rooftop suitability. 
Figure 4 shows simulated annual electricity generation per small building at the ZIP-code level, based on our estimates of how much rooftop PV could be hosted. For comparison, figure 5 shows the simulated energy generation from generic hypothetical PV panels tilted at latitude, illustrating the varying intensity of the US solar resource. Broadly speaking, average small building production strongly correlates with the solar resource; however, there exists significant local variation driven by average household footprint and suitability. For example, the simulated average production in Florida is 12,100 kWh/year/building (130% of the national average), owing to an above-average solar resource, but it ranges from 5,300 kWh/year/building to 30,100 kWh/year/building on a ZIP-code level because of variation in suitability and building footprint. Differences in suitability can also drive differences in total productivity between regions with similar solar resources. For example, lower suitability in the South Atlantic states (see figure 3) leads to lower average small building productivity in those states compared with the Florida peninsula, despite a solar resource of similar quality. To help put this generation potential into context, figure 6 maps the average relative production of small buildings at the state level, a metric that we define as the annual rooftop PV generation of an average small building as a percentage of each state's average annual household electricity consumption. These results show that a relatively poor solar resource does not preclude the residential sector from offsetting a significant percentage of its electricity consumption via rooftop PV. An average small building in every New England state except Rhode Island could generate greater than 90% of the electricity consumed by an average household in the region. This result is driven primarily by the low average household consumption of 8,011 kWh/yr in the region (70% of the national average), which is due in part to high use of natural gas and oil for heating as well as relatively low summer cooling requirements. All buildings This subsection presents the total national installed capacity and generation potential estimates for rooftop PV on all buildings (small, medium, and large buildings combined). Because our model analyzes medium and large buildings by state, not ZIP code, the combined results are presented at the state level. Table 3 shows the potential installed PV capacity, rooftop area suitable for development, and annual generation (in terawatt-hours and as a percentage of total electricity sales in 2013) by state. Figures 7 and 8 map the potential generation results.
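For reference, the average relative production metric defined earlier in this subsection can be written compactly as follows; this is only a restatement of the definition in the text, with no additional assumptions:

```latex
\text{Relative production}_s \;=\; 100\% \times \frac{\overline{G}_s}{\overline{C}_s}
```

where \(\overline{G}_s\) is the simulated annual rooftop PV generation of an average small building in state \(s\) (kWh/yr) and \(\overline{C}_s\) is that state's average annual household electricity consumption (kWh/yr).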
The total technical potential of rooftop PV across all buildings in the contiguous United States is 1118 GW of installed capacity and 1432 TWh of annual energy generation, which equates to 39% of total national electricity sales. (As discussed in Gagnon et al (2016), this technical potential is notably greater than a previous NREL estimate (Denholm and Margolis 2008); the difference can be attributed to increases in module power density, improved estimation of building suitability, higher estimates of the total number of buildings, and improvements in PV performance simulation tools that previously tended to underestimate productivity.) Of individual states, California has the greatest potential to offset its electricity use: PV on its rooftops could generate 74% of the electricity sold by its utilities in 2013. A cluster of New England states could generate more than 45%, despite these states' below-average solar resource. Washington, with the lowest population-weighted solar resource in the contiguous United States, could still generate 27%. The six best-performing states, in terms of potential PV generation as a percent of total state sales, all have significantly below-average household consumption, suggesting the role an energy-efficient residential sector could play in achieving a high penetration of energy from rooftop PV. Wyoming has the lowest potential for offsetting statewide electricity sales with rooftop PV, at 14%, because it has the highest per-capita electricity sales of any state at 30.3 MWh/year/person (250% of the national average), driven by very high electricity use in the industrial sector (60% of retail electricity sales). Washington DC has the second-lowest potential to offset electricity sales, at 15%; lidar data indicate that this unique, almost entirely urban district has only 17.4 m² of suitable roof area per capita, which is much lower than the average of 24.9 m² per capita throughout the rest of the lidar-covered regions. Some states with below-average solar resource (such as Minnesota, Maine, New York, and South Dakota) have similar or even greater potential to offset total sales than states with higher-quality resource (such as Arizona and Texas). This highlights the observation that solar resource is only one of several factors that determine the offset potential. Florida can offset 47% of its total consumption, despite having an average household consumption of roughly 130% of the national average. This is largely explained by significantly below-average electricity consumption outside of the residential sector, which makes total per-capita state sales slightly lower than the national average, plus a high-quality solar resource and a high percentage of buildings suitable for PV. In contrast, the other South Atlantic states range from a potential 23%-35% of electricity offset, owing primarily to lower average suitability and higher per-capita electricity sales. Conclusions We present a detailed analysis of the technical potential of rooftop PV across the contiguous United States. The higher values we find supersede the values in a previous NREL estimate of rooftop PV technical potential (Denholm and Margolis 2008). The difference can be attributed to increases in module power density, improved estimation of building suitability, higher estimates of the total number of buildings, and improvements in PV performance simulation tools that previously tended to underestimate productivity.
We made several noteworthy observations about the technical potential of rooftop PV. The percentage of buildings suitable for at least some PV deployment is high: 82% for buildings smaller than 5000 ft² and over 99% for buildings larger than that. Additionally, a relatively poor solar resource does not preclude a region's residential sector from being able to generate a quantity of electricity on par with its consumption, as demonstrated in most of the Northeastern states. In addition to these trends, we summarize the technical potential estimates of rooftop PV for the contiguous United States: 8.13 billion m² of suitable roof area could host 1118 GW of PV capacity, generating 1432 TWh of electricity per year. This would equate to 38.6% of the electricity that was sold in the contiguous United States in 2013. These values help to inform conversations about the role that rooftop PV might play in future electricity generation portfolios in the United States. Figure 3. Percentage of small buildings with at least one roof plane suitable for PV in each ZIP code in the contiguous United States. Figure 4. Average rooftop PV production per small building at the ZIP-code level. Figure 7. Total annual energy generation potential from rooftop PV for all building sizes. See also a previous Environmental Research Letters article (Margolis et al 2017), which estimates the technical potential of rooftop PV systems in select US cities. For a more detailed discussion of the model described below, see Rooftop Solar Photovoltaic Technical Potential in the United States: A Detailed Assessment by Gagnon et al 2016. Table 1. Assumptions for PV performance simulations. Table 2. Total technical potential of rooftop PV in the contiguous United States. Table 3. Total estimated technical potential (all buildings) for rooftop PV by state.
4,548.6
2018-02-13T00:00:00.000
[ "Engineering", "Environmental Science" ]
Performance-based approach for movement artifact removal from electroencephalographic data recorded during locomotion The appreciation for the need to record electroencephalographic (EEG) signals from humans while walking has been steadily growing in recent years, particularly in relation to understanding gait disturbances. Movement artefacts (MA) in EEG signals originate from mechanical forces applied to the scalp electrodes, inducing small electrode movements relative to the scalp which, in turn, cause the recorded voltage to change irrespectively of cortical activity. These mechanical forces, and thus MA, may have various sources (e.g., ground reaction forces, head movements, etc.) that are inherent to daily activities, notably walking. In this paper we introduce a systematic, integrated methodology for removing MA from EEG signals recorded during treadmill (TM) and over-ground (OG) walking, as well as quantify the prevalence of MA in different locomotion settings. In our experiments, participants performed walking trials at various speeds both OG and on a TM while wearing a 32-channel EEG cap and a 3-axis accelerometer, placed on the forehead. Data preprocessing included separating the EEG signals into statistically independent additive components using independent component analysis (ICA). We observed an increase in electro-physiological signals (e.g., neck EMG activations for stabilizing the head during heel-strikes) as the walking speed increased. These artefact independent-components (ICs), while not originating from electrode movement, still exhibit a similar spectral pattern to the MA ICs–a peak at the stepping frequency. MA was identified and quantified in each component using a novel method that utilizes the participant’s stepping frequency, derived from a forehead-mounted accelerometer. We then benchmarked the EEG data by applying newly established metrics to quantify the success of our method in cleaning the data. The results indicate that our approach can be successfully applied to EEG data recorded during TM and OG walking, and is offered as a unified methodology for MA removal from EEG collected during gait trials. Introduction Surface electroencephalography (EEG) allows humanity a glimpse of our minds through the electrical output generated by the vast networks of neurons in our brains. The faint electrical signals arise from the joint activity of countless neurons, recorded using the EEG electrodes. Due to the delicate nature of the recorded signal it is easily overshadowed by various artefacts such as eye blinks, muscles activations, electromagnetic noise and movement artefacts [1,2]. The latter practically constraining the modern EEG to be recorded only during stationary settings. EEG signals recorded during gait activity reflect neural mechanisms associated with healthy or impaired leg movements [3]. In the past decade, EEG signals were recorded during treadmill (TM) walking [4][5][6][7], identifying a systematic modulation of EEG spectral amplitude during the gait cycle and coupling of EEG recordings and electromyography recorded from the lower limbs. Some of the studies involving EEG recordings during human locomotion addressed gait disturbances, for example, in persons with Parkinson's disease and in particular the debilitating phenomenon of freezing of gait [8][9][10][11][12]. 
Due to the dynamic nature of the aforementioned experiments, much effort is involved in studying and removing movement artefacts [13][14][15] , with some studies concluding that more sophisticated tools are needed to properly clean gait-related artifacts [2]. It also became apparent that data preprocessing is an important first step for MA removal because of the inherent complexity of the EEG data. However, currently there are no EEG MA studies that utilize an advanced preprocessing tool (i.e., PREP pipeline [16]) for improving the decomposition algorithm's (i.e., Independent Component Analysis (ICA)) performance. The field of ICA algorithms has also evolved, and today better decomposition algorithms are available [17]. The objective of this study was to remove EEG MA by developing a new framework that combines the most advanced algorithms in preprocessing and signal decomposition analysis with our own novel methodology for MA identification, and to test the results using recently published EEG benchmarking metrics [15]. Movement artefacts (MA) in EEG signals originate from mechanical forces applied on the scalp electrodes, inducing small electrode movements relative to the scalp which, in turn, cause the recorded voltage to change regardless of cortical activity. It was previously claimed that movement artifact should be removed in order to study electro-cortical activity during locomotion [13]. However, according to recent studies, EEG data recorded during walking is likely to contain substantial MA that cannot be removed using traditional signal processing methods [2]. Various methods have been proposed for the removal of MA from EEG signals. Gwin et al., [13] first removed an MA template from the stride-epoched data using a 20-stride moving average, and then applied independent component analysis (ICA), a source-separation algorithm, to further clean the data. Leutheuser et al., [17] later compared two ICA algorithms, the common InfoMax as well as AMICA (Adaptive-Mixture ICA), in terms of their performance in reduction of EEG artefacts, and found that the AMICA algorithm outperformed the Info-Max. Later, Onikura et al. [14] suggested an ICA based method to remove head-movement MA by high pass filtering components whose temporal correlation coefficient with a head accelerometer crossed a predefined threshold. Kline et al. [2] tested the use of stride-locked moving average subtraction and Daubechies wavelet transform to remove walking MA of various speeds. Automatic subspace reconstruction (ASR) [18] is another novel method for MA removal in which infected segments of the data are processed using baseline data and principal component analysis (PCA). While not all of the above methods were reported to successfully remove MA, they unanimously noted that caution should be exercised as not to remove neural data along with the MA, leading to subject specific thresholds, manual inspection, etc. In addition, they all utilized different benchmarking metrics for performance evaluation, an issue that was thoroughly addressed by Oliveira et al. [15] who described metrics for benchmarking EEG technologies during whole body motion. In this paper we primarily introduce an integrated, novel methodology for removing MA from EEG signals recorded during treadmill (TM) and over-ground walking, as well as inspect the EEG signal for the prevalence of such artefact in different locomotion settings. We tested our proposed approach using state of the art benchmarking metrics [15] and equipment. 
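As a concrete illustration of one of the earlier approaches mentioned above, the stride-locked artefact-template subtraction attributed to Gwin et al. can be sketched roughly as follows. This is a simplified re-implementation for illustration only, assuming heel-strike sample indices are available for one channel; the stride alignment, channel handling, and averaging details of the original method differ.

```python
import numpy as np

def remove_stride_template(eeg, heel_strikes, n_avg=20):
    """Subtract a moving-average gait-artefact template from a single EEG channel.

    eeg          : 1-D array, single-channel EEG
    heel_strikes : sample indices of heel strikes defining stride epochs
    n_avg        : number of neighbouring strides averaged into the template
    """
    cleaned = eeg.copy()
    epochs = [(heel_strikes[i], heel_strikes[i + 1]) for i in range(len(heel_strikes) - 1)]
    for i, (start, stop) in enumerate(epochs):
        length = stop - start
        lo, hi = max(0, i - n_avg // 2), min(len(epochs), i + n_avg // 2)
        template, count = np.zeros(length), 0
        for j in range(lo, hi):
            if j == i:
                continue
            s, e = epochs[j]
            seg = eeg[s:e]
            # Resample the neighbouring stride to the current stride length before averaging
            template += np.interp(np.linspace(0, 1, length), np.linspace(0, 1, len(seg)), seg)
            count += 1
        if count:
            cleaned[start:stop] -= template / count
    return cleaned
```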
The methodology, which incorporates different parts of past studies while adding novelty of its own, was aimed at finding a fine line between retaining as much neural data as possible and reducing MA. Participants The study included 5 young, healthy adult participants (mean age ± SD: 30.5 ± 5.31 years). All participants gave their written informed consent prior to entering the study. The experimental protocol was approved by the ethics institutional review board for experiments involving human participants in the Chaim Sheba Medical Center. Data collection; signal measurement, apparatus The participants were fitted with a 32-electrode wireless EEG system (eego sports 32 pro by ANT-Neuro, The Netherlands), which utilizes passive-wet electrodes arranged in the 10-20 system. Impedances were kept under 20 kOhm (cross-subject mean: 7.19 ± 3.77 kOhm), while channels with impedance of over 20 kOhm were excluded from the analysis. Using adhesive tape, an accelerometer (eego sports 32 pro by ANT-Neuro, The Netherlands) was placed on the midline of the participants' forehead in order to determine the mechanical forces that were applied on the EEG electrodes during locomotion. An instrumented dual-belt TM equipped with bi-lateral force plates was used (R-Mill, ForceLink, The Netherlands). EEG, accelerometer and ground reaction force (GRF) data were recorded simultaneously during walking at 1024, 1024 and 120 Hz, respectively. In addition, all trials were video-recorded. Experimental protocol The experiment began with the recording of a one-minute sitting baseline (BL), followed by one minute during which the participants were asked to nod their head back and forth at a comfortable frequency. The participants then performed six walking trials, each lasting two minutes. First, four trials were performed on a TM at increasing speeds (0.4, 0.8, 1.6 and 2.2 m/s) (Fig 1); these were followed by two over-ground (OG) trials, first at the participant's natural pace and then another at an elevated walking speed. OG trials were performed in a 24 m long corridor, and walking speeds for these trials were determined using the average walking time for completing a predefined 10 m segment of the corridor. Each participant's OG walking speeds were derived from video recordings and accelerometer data. Data preprocessing EEG data preprocessing and analysis was performed in MATLAB [The MathWorks Inc., Natick, MA] using EEGLAB [19], fitted with the PREP extension [16] and custom scripts, as follows: 1. As MA may interfere with channel rejection (CR), the latter was performed using PREP according to BL data only and then applied to the participant's full, continuous dataset. Criteria for CR were (parametric thresholds for the PREP GUI appear in parentheses): 1. Standard deviation (5); 2. High frequency noise (5). 2. The channel-rejected, continuous dataset was then processed by: 1. Data de-trending by high-pass filtering; 1 Hz cutoff frequency. 2. Line noise removal at 50 Hz and its harmonics using CleanLine. 3. Re-referencing of the signals to an average reference. 3. The continuous EEG data was cropped to trial-specific datasets and each dataset was processed by: 1. Running the AMICA algorithm [20]. 2. Removing EOG, EMG and other non-movement artefact components by visual inspection, comparing the components' spatial distribution, time course and spectrograms to typical artefact patterns as outlined in [1]. 3. Removal of artefacts is performed using EEGLAB's graphical user interface.
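The de-trending and re-referencing steps above can be illustrated with generic signal-processing tools. The sketch below is not the pipeline actually used (which relies on EEGLAB with the PREP extension and CleanLine); the filter order is an assumption made only for this example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detrend_and_reref(eeg, fs=1024.0, hp_cutoff=1.0, order=4):
    """High-pass filter (de-trend) and average-reference an (n_channels, n_samples) EEG array."""
    b, a = butter(order, hp_cutoff / (fs / 2.0), btype="highpass")
    filtered = filtfilt(b, a, eeg, axis=1)                    # zero-phase 1 Hz high-pass
    return filtered - filtered.mean(axis=0, keepdims=True)    # common average reference
```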
Independent component analysis, MA component identification and removal from the EEG data All remaining components and vertical accelerometer data were transformed to the spectral domain using the Fast Fourier transform (FFT). The average stepping frequency (ASF) of each trial was derived from the power spectrum of the accelerometer's vertical component using peak detection (Fig 2) and verified against the recorded video (i.e., counting steps in a time unit) and heel strike (HS) detection using the GRF data. The MA components have a unique spectral signature compared to their neural counterparts: a tall peak at the stepping frequency surrounded by relatively low amplitude. Since EEG data components usually have most of their spectral energy in the lower frequencies, a median is used to verify that indeed the MA independent components' (ICs) spectral signature is present. To assess the amount of MA, each IC was given an MA prevalence (MAP) score, calculated as MAP = (Power at ASF) / (Low frequency power median), where 'Power at ASF' denotes the component's spectral peak at the average stepping frequency, and 'Low frequency power median' is the median of the power spectral density in the 0-5 Hz band. The MAP score is thus the ratio between a component's power at the stepping frequency and the median of the spectral power in the 0-5 Hz band. Since components marked as MA are removed entirely, we wanted to make sure little to no neural data was removed along with them. After inspecting many MA and non-MA ICs' frequency spectra, we came to the conclusion that a ratio of 80 can serve as a classifying threshold, aiming to remove components containing mostly MA while retaining as much neural data as possible. A MAP score of 80 means that there is 80 times more spectral power at the average stepping frequency compared to its surroundings in the 0-5 Hz range (i.e., where most of the spectral power resides in non-MA components). Additionally, we reviewed the power spectra of the different components and discovered another pattern of MA component spectra. This pattern featured decaying sub-harmonics at 0.5 multiples of the stepping frequency (Fig 3) and is related to lateral sway while walking, in line with previous studies [21]. These MA components were removed as well. Validation In order to determine the prevalence of MA in the cleaned data and to avoid over-cleaning, the EEG data were benchmarked using two metrics. We utilized a metric described by Oliveira et al. called the walking/sitting (W/S) ratio [15]. The W/S ratio is calculated by dividing the spectral power in the 5-80 Hz band, containing the theta (5-8 Hz), alpha (9-13 Hz), beta (13-30 Hz) and gamma (30-80 Hz) brain oscillations, of a walking trial by the spectral power in the same band of the sitting BL. The W/S ratio captures changes in EEG spectral content related to movement, where a ratio that is larger than 1 suggests the existence of MA in the EEG data and a W/S ratio smaller than 1 may indicate the EEG data were over-cleaned. We note, however, that although the W/S ratios should ideally be 1, this would require the same continuous electro-cortical activity between seated and walking conditions. Previous studies indicated a power drop in the alpha (9-13 Hz) and beta (13-30 Hz) brain oscillations during motor activity when compared to a resting baseline [22], as well as a related drop in W/S ratios during locomotion in a different study [15]. These explain why W/S ratios that are only slightly below 1 are not necessarily an indicator of over-cleaning.
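A minimal re-implementation of the MAP score computation could look like the following. It assumes the IC time course and the accelerometer's vertical component are sampled at the same rate, and that the stepping frequency lies in an assumed 0.5-4 Hz search band; the threshold of 80 is the one quoted in the text.

```python
import numpy as np

def map_score(ic, accel_z, fs=1024.0):
    """MA-prevalence (MAP) score of one independent component."""
    # Average stepping frequency (ASF) from the accelerometer's spectral peak
    freqs = np.fft.rfftfreq(len(accel_z), d=1.0 / fs)
    acc_power = np.abs(np.fft.rfft(accel_z)) ** 2
    band = (freqs > 0.5) & (freqs < 4.0)                   # assumed stepping-frequency range
    asf = freqs[band][np.argmax(acc_power[band])]

    # Component power at the ASF versus the median power in the 0-5 Hz band
    ic_freqs = np.fft.rfftfreq(len(ic), d=1.0 / fs)
    ic_power = np.abs(np.fft.rfft(ic)) ** 2
    power_at_asf = ic_power[np.argmin(np.abs(ic_freqs - asf))]
    low_freq_median = np.median(ic_power[ic_freqs <= 5.0])
    return power_at_asf / low_freq_median                   # scores above 80 are flagged as MA
```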
We utilized the previously described drop in alpha and beta power to assess the physiological validity of our results by calculating the W/S ratio specifically for the alpha and beta bands during the different walking trials, before and after the proposed MA-removal methodology was applied. To compare our method to the current state-of-the-art in MA removal, we also cleaned the data using ASR and compared the results using the previously described benchmarking criteria. ASR applies Principal Component Analysis (PCA) to the EEG data in a moving window, decomposing it into subspaces which are compared to a clean BL segment. The subspaces that are identified as noisy are reconstructed using an un-mixing matrix which was derived from the BL using PCA. By doing so, ASR automatically eliminates eye-blinks, muscle and movement artefacts. We referred to data prior to MA removal, data whose MA was cleaned using our proposed method, and data pruned using ASR as 'Preprocessed', 'AMICA' and 'ASR', respectively. The results of these comparisons were statistically analyzed using a two-way ANOVA by ranks (related-samples Friedman's test) for comparing the different methods and walking speeds. Statistical analysis. Statistical analysis was performed with the IBM SPSS Statistics 21 software. The significance level for all tests was set to α = 0.05. In order to assess the effect of the MA induced by the increasing speeds as well as their effect on the EEG data, we analyzed the parameters obtained from the preprocessed-only, AMICA-pruned and ASR-pruned datasets. We evaluated each method's ability to remove MA from EEG data recorded during increasing walking speeds, using the W/S ratios as well as the alpha and beta bands' spectral power parameters. This was performed using a two-way ANOVA by ranks (related-samples Friedman's test) for comparing the different walking speeds within each method. Later, to assess the difference in performance between our proposed method and the current state-of-the-art (i.e., ASR), we performed a related-samples Wilcoxon signed rank test, directly comparing the two methods, with the walking speeds now serving as the within-subject variables. This was performed utilizing the 3 parameters described in the previous section. Results The faster a person walks, the more GRFs he/she will be subjected to (Fig 1, Plotnik et al, 2013 [23]), and hence more forces will be applied on the scalp electrodes. Thus, generally speaking, the faster one's walking speed, the more MA-infected the recorded EEG data will be, as demonstrated in Fig 1. As can also be seen in Fig 1, this is particularly true for the Cz electrode, which is the most prone to MA due to its location at the top of the scalp [2]. Cz electrode and accelerometer spectral comparison We begin by comparing the frequency spectra of the Cz electrode and of the accelerometer's vertical component in Fig 2. Presented are data from all 6 walking trials (TM and OG) of a typical participant (i.e., HU971), where the peak in the accelerometer's power spectrum corresponds to the participant's stepping frequency. It can be seen that (a) the spectral peak of the accelerometer matches the spectral peak in the Cz electrode (e.g., Fig 2, 2.2 m/s), (b) the spectral peak's amplitude in the EEG and accelerometer data increased along with the walking speed in the trials before MA removal, and (c) MA removal considerably reduced the aforementioned peak.
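A minimal sketch of the W/S-ratio computation used for benchmarking (for a single channel) is shown below; Welch's method and its parameters are choices made for this illustration and are not specified in the text. The Friedman and Wilcoxon tests mentioned above are available in standard statistics packages (e.g., scipy.stats.friedmanchisquare and scipy.stats.wilcoxon).

```python
import numpy as np
from scipy.signal import welch

def ws_ratio(walking, sitting, fs=1024.0, band=(5.0, 80.0)):
    """Walking/sitting spectral power ratio in the 5-80 Hz band for one channel."""
    def band_power(x):
        f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
        sel = (f >= band[0]) & (f <= band[1])
        return np.trapz(pxx[sel], f[sel])
    return band_power(walking) / band_power(sitting)   # ~1 for clean data; >1 suggests residual MA
```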
W/S ratios at various speeds and conditions As displayed in Fig 4, the W/S ratio for the trials prior to MA removal increased dramatically in conjunction with the walking speeds, while trials cleaned by our method and ASR displayed a ratio ranging between 0.87 and 1.25 across all trials While the data cleaned by AMICA and ASR are not affected by speed (χ 2 (5) = 1.2,p = 0.94; χ 2 (5) = 5.6,p = 0.34, respectively), the preprocessed-only data was clearly speed-dependent (χ 2 (5) = 22.94,p < 0.0001). For pairwise comparisons see Fig 4. Ratios >1 were encountered in the AMICA 2.2 m/s and OG-elevated trials (1.07 and 1.25, respectively) as well as the ASR 1.6 m/s and OG-regular trials (1.09 and 1.05, respectively). The W/S ratios rendered by ASR presented higher SD values when compared to our method, with the exception of OG-elevated. Note that although the MA manifests predominantly in the stepping frequency (i.e., below 5 Hz), it is clearly apparent in the 5-80 Hz band. Directly comparing our proposed method and ASR produced no statistically significant difference (Z = 6,p = 0.68). MA distribution across the scalp The scalp distribution maps portrays a more detailed picture of Fig 4 W/S ratios by spatially displaying the various electrodes' ratios as they are spatially spread across the scalp. Cross-subject scalp distribution maps of the W/S ratios across the 32 electrodes and in all trials are presented in Fig 5. Each electrode is represented by a dot while inter-electrode values were generated using spline interpolation. It can be seen how in the TM setting, the lowest speed (i.e., 0.4 m/s) presents practically no MA to begin with while right after, at 0.8 m/s, MA begin to appear and increase along walking speeds where in 2.2 m/s the W/S ratios values are well over 3 and the scalp distribution maps are saturated. Further, as a control, we applied the proposed methodology to artificial 'simulated walking noise' added to a baseline recording, and confirmed that after applying our method, W/S ratios dropped to 1.002 (see full description of this procedure in S3 File). The energies of the alpha and beta bands were normalized in reference to each participant's sitting baseline of the same band, in order to assess changes in alpha and beta power during locomotion. Data cleaned by our proposed method consistently exhibited the expected power drop in both the alpha and beta bands across all trials, with the exception of OG-elevated. ASR, in comparison, presented the beta band's power drop in most of the trials, but also displayed an unexpected increase in alpha power during the 1.6 m/s, 2.2 m/s and OG-elevated trials. Nevertheless, Friedman's tests yielded non-significant results (χ 2 (5) = 1,p = 0.96 for our method and χ 2 (5) = 5.57,p = 0.35 for ASR), indicating that these deviations are non-significant. Directly comparing our proposed method and ASR in the alpha and beta bands, no statistically significant difference was found (Z = 7;p = 0.89 and Z = 7,p = 0.686). Discussion In this article we have measured the effects of locomotion on recorded EEG data and proposed a novel methodology for the removal of the associated MA. Currently there are no EEG MA studies, that utilize an advanced integrated preprocessing tool (i.e., PREP pipeline), which greatly affects the decomposition algorithm's (i.e., AMICA) performance. Our procedure was [13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30] bands are presented in panels A and B, respectively. 
Ratios of the preprocessed-only, AMICA and ASR cleaned data are displayed in blue, orange and gray, respectively. The mean cross-subject regular and elevated OG walking speeds were 1.37 and 1.81 m/s, respectively. X and y axes depict the various trials and band's W/S ratio, respectively. For preprocessed data (blue), significance in the pairwise comparisons are indicated by asterisks (single asterisks denotes p<0.01 and double asterisks p<0.05). AMICA and ASR cleaned data showed no statistically significant power increase. https://doi.org/10.1371/journal.pone.0197153.g006 Movement artifact removal from EEG data recorded during locomotion successfully applied for TM and OG walking alike, and is offered as a unified methodology for MA removal from EEG collected during gait trials. In general, the proposed approach removes MA based on stepping frequency information. In order to substantiate this approach we began with presenting evidence pointing that there is, in fact, correlation between the frequency spectrum of the recorded, uncleaned, data from the EEG electrodes and the forces applied on the scalp during locomotion. Given the periodic nature of human locomotion, our analysis of the ICs' frequency spectra serves as a straightforward and powerful way to detect and remove locomotion-originated MA. We also tested our methodology by applying it to data from baseline recordings with added artificial noise (i.e., simulating MA) and demonstrated that those artificial artefacts were successfully removed (see Figure A in S3 File). Our approach offers a systematic way of evaluating MA through the analysis of spectral components. We propose a full 'step by step' approach from start to finish, as well as combine multiple state-of-the-art signal processing methods (i.e., PREP pipeline, AMICA) with our novel methodology for assessing and removing MA. Cz electrode and accelerometer spectral properties Since the Cz electrode is the most prone to MA, we decided to examine this electrode's power spectrum before and after MA removal to provide a worst-case scenario overview. Looking at the vertical accelerometer data (Fig 2, top row), it is apparent that the faster the participant walked, the more MA was induced to the EEG data (i.e., wider, taller spectral peaks at the stepping frequency; Fig 2, middle row). This can be observed by the increasing amplitude of the spectral peaks along with the walking speed. Additionally, it can be seen that OG walking leads to more MA compared to TM at similar speeds. Differential considerations on MA removal for TM and OG walking In the lowest speed, 0.4 m/s, MA was negligible (i.e., low MAP score) and therefore the power spectra of the preprocessed and MA-removed data are the same. The spectral peak in the stepping frequency grew as the TM's speed was increased, while the higher speeds proved to be harder to clean (i.e., more EMG and MA components detected). Additionally, OG walking trials were harder to clean than trials performed on the TM because of two reasons: (i) Higher GRFs while walking OG, compared to similar TM speeds, due to the TM's suspension dampening and (ii) The self-paced nature of OG walking results in small variations to the stepping frequency across the trial thus, providing a quasi-periodic signal that's harder for the AMICA algorithm to separate from the neural data. 
On par with other observations in this study, the Cz electrode frequency spectra show our proposed method was able to prune the data from MA at all TM walking speeds and provide a substantial improvement in TM running and OG locomotion. W/S ratios at various speeds and conditions The W/S ratio, as seen growing alongside the walking speed in Fig 4, offers an opportunity to examine the increase in spectral energy content in the brain-wave bands of EEG data recorded during locomotion. While at the lowest speed (i.e., 0.4 m/s) both pre- and post-MA-removal ratios are close to 1, an increase in the W/S ratio is already apparent at 0.8 m/s, indicating the presence of MA in the EEG data. This increase escalates sharply at higher speeds. The MA-pruned data, on the other hand, present ratios slightly below 1 for all walking speeds, both on the TM and OG, which is in line with previous studies [15] as well as studies that have demonstrated a reduction in alpha and beta power during motor activity [22]. It can also be observed that the W/S ratio's standard deviation increased alongside the walking speed. These increasing SDs are a result of small differences in setup between subjects (i.e., small variations in the <20 kOhm impedances, different weight, walking style, sole cushioning, etc.). These differences, while minuscule at lower speeds, increase and become strongly manifest as the walking speed increases, and come to an extreme at OG locomotion. This underscores that caution should be exercised when recording EEG during OG locomotion or TM running, due to the higher GRFs. We found no statistical difference in MA-removal performance between our method and the current state-of-the-art (i.e., ASR). Nevertheless, our suggested method presents lower inter-electrode variance (i.e., smaller error bars in Fig 4), which suggests a more homogeneous removal of MA across the scalp and thus cleaner EEG data. MA distribution across the scalp Looking at Fig 5, we see again how for the three TM-walking speeds, as well as the OG-regular trial, the proposed method is able to clean the data well, while at the TM-running (i.e., 2.2 m/s) and OG-elevated trials some MA is still present in the signal. In the latter conditions, the MA constitutes a very large portion of the data, so much so that the AMICA algorithm struggles to separate it from the neural data, resulting in mixed neural-MA components. This gives us an idea about the method's abilities and limitations in various walking conditions. Additionally, it can be seen how MA appears first (i.e., at lower speeds) mainly at the top of the scalp, in the area around the Cz electrode, and later spreads to the peripheral areas, consistent with prior studies [2]. Alpha and beta bands analysis during motor activity Past studies have demonstrated a link between motor activity and a reduction in the alpha and beta bands' power of the recorded EEG [22,24]. Moreover, a more prominent drop has been shown in the alpha band when compared to the beta band. This property can serve as a good physiological validity benchmark of the EEG data, both prior to and after the MA-removal process, as depicted in Fig 6. Looking at the preprocessed-only data, an increase is evident in both the alpha and beta bands' normalized spectral power.
ASR displayed better values compared to the preprocessed datasets, but on the other hand also presented an unexpected increase in power in the alpha band during TM trials at speeds of 1.6 m/s and above, as well as in the OG-elevated trial, which may originate from non-cortical artifacts. Due to the slighter change in the beta band's power during motor activity, its power drop is more susceptible to MA, as it diminishes at lower speeds than the drop in the alpha band's power (the power drop in the preprocessed data is diminished at 1.6 and 0.4 m/s for the alpha and beta bands, respectively). This delicate nature of the beta band also explains why it is the first, and only, band to display an increase in the MA-removed data (i.e., at the OG-elevated trial, which also displays the highest ratio after MA removal). The alpha and beta bands analysis shows how MA removal is important in order to reveal physiological changes in EEG that may be hidden beneath the MA. Moreover, our results demonstrate that MA removal is essential in order to detect physiological changes in EEG during locomotion. Increased electro-physiological signals during higher speeds An interesting observation we made using AMICA is how electro-physiological signals, such as the neck muscle activations separated by the algorithm, increased with the walking speed. We presume these increasing activations serve to stabilize the participant's head during heel strikes at the growing TM speeds. While these components do not originate from scalp electrode movement, and thus will probably be categorized as EMG artefacts and removed at the preprocessing stage, they exhibit a power spectrum similar to that of the MA: a peak at the stepping frequency. This serves as an example of how physiological systems (e.g., muscles, heart rate, etc.) are engaged more intensively when walking faster, possibly introducing electro-physiological (i.e., non-MA) noise to the recorded EEG. Thus, while our bodies engage physiological systems (e.g., muscles, heart rate, etc.) more intensively to walk faster, further care is needed to isolate the noise related to the elevated activity within the EEG signal and remove it, in order to obtain a signal that reflects primarily brain activity.
6,546.4
2018-05-16T00:00:00.000
[ "Biology", "Computer Science" ]
Role of the Skyrme tensor force in heavy-ion fusion We make use of the Skyrme effective nuclear interaction within the time-dependent Hartree-Fock framework to assess the effect of inclusion of the tensor terms of the Skyrme interaction on the fusion window of the 16O–16O reaction. We find that the lower fusion threshold, around the barrier, is quite insensitive to these details of the force, but the higher threshold, above which the nuclei pass through each other, changes by several MeV between different tensor parametrisations. The results suggest that eventually fusion properties may become part of the evaluation or fitting process for effective nuclear interactions. Introduction The Skyrme interaction was introduced in the 1950s [1] and has become the most widespread effective interaction used in mean-field calculations in nuclei. Its utility lies in part in the fact that it is designed as a kind of series expansion around a zero-range interaction; this gives a delta function in each term, though with spatial derivatives to explore the finite-range part of the nuclear interaction, just as a Taylor expansion of a function is able to converge on true values of that function away from the point of expansion through the use of derivatives. The delta functions afford great calculational simplifications, and also allow a straightforward transformation between the effective interaction picture and an energy density functional. As originally formulated, the Skyrme effective interaction featured tensor terms, reflecting their known importance in the underlying nucleon-nucleon interaction. The tensor terms had subsequently been widely neglected (as the fits made at the level of Hartree-Fock calculations were not very sensitive to the tensor terms). More recently, a renewed interest in the role of the tensor terms has arisen [2][3][4][5][6][7][8], as the ability to perform large-scale calculations of nuclear properties, in which the tensor term may show an effect, has become the norm. Various parametrisations of the Skyrme interaction in which tensor terms are active have been produced. We select a small sample for this work to show the variation of results. We use a standard time-dependent Hartree-Fock (TDHF) implementation, based on the Sky3D code [9], with all the extra time-even and time-odd terms, including those from the tensor force (see [7] and references therein). The code also works as a standard static Hartree-Fock code to initialise the time-dependent run with nuclear ground states obtained via a damped relaxation method [10]. We note here that the tensor force does not merely amend the coefficients at the level of the energy density functional, but adds further couplings between densities [8]. Further work awaits a longer subsequent publication, or can be found in the PhD thesis of one of us [12]. For more details of TDHF and its extensions, including its use in fusion reactions, the recent review by Simenel is a useful source [13]. The Skyrme tensor force We use the tensor terms as introduced by Skyrme [1], written in the notation of [2]. The tensor term contains two parameters to be fitted to data, t_e and t_o. Here the subscript e means even and o means odd, since the associated terms are respectively even and odd under the exchange of spatial coordinates. Fusion Windows Calculations were performed using our modified Sky3D code for the upper fusion thresholds at zero impact parameter, i.e.,
the highest energy at which two 16O nuclei fuse during head-on collisions, above which the nuclei pass through and undergo deep-inelastic excitation. We choose the forces SV-bas [14], as a sample non-tensor force (and the one which comes as the default with the Sky3D sample input files), and SLy5 [15], as a force whose time-odd terms have been explored in previous TDHF calculations [16], and which has independently seen the perturbative addition of tensor terms, fitted to single-particle energy systematics [11], while leaving the rest of the force unchanged. The forces T22, T24 and T26 [2] had the tensor terms included in the initial fit, and are part of a series of forces with systematic variation of the tensor properties. Here, the variation of the second numeral in the force name indicates that the p-n part of the tensor term in the effective interaction is systematically varying. The choice of the 16O + 16O reaction comes from its long history of use as a kind of standard system against which new techniques or forces are benchmarked [17]. Selected examples of such recent studies of 16O + 16O fusion reactions include the analysis of equilibration [18], a detailed microscopic study around the barrier [19], and a study of the effect of the time-odd couplings in the spin-orbit force on dissipation as a function of energy [20], also including tensor-force contributions to the spin-orbit part of the energy density functional [21]. Other recent applications of TDHF include a study of dynamic effects on potential barriers for heavier systems [22], and the implementation of a continuum-TDHF theory [23]. The results for the maximum fusing energy are given for this selection of forces in table 1. We label SLy5 (full) as the SLy5 force as originally conceived, with no tensor force, but with time-odd terms active where they arise from the original Skyrme parameters, with the exception of the (∇·s)² and s·∇²s terms, which may result in spin instabilities. It is seen that, at least in the case of SLy5, adding the tensor terms to the force decreases the upper fusion threshold. This means that the tensor terms in the effective interaction serve to decrease transfer of relative kinetic energy of the fragments into their internal energy during the reaction. If this can be explained in terms of the sign of the tensor terms, which were determined solely by the requirement to improve the single-particle energies [11], an important constraint for the tensor force would be found. Despite SV-bas and SLy5 both being fitted to ground-state properties of finite nuclei and to nuclear matter properties, a rather large difference in upper threshold energy is evident. The series of TIJ forces then shows a very wide range of energy differences for the upper threshold, despite all producing similar and reasonable ground-state properties. We should point out that the lower (barrier) thresholds for fusion are quite insensitive to the Skyrme parametrisation, at least to the level of around 1 MeV difference between forces, with the main effect being due to the Coulomb force.
Conclusions We have performed time-dependent Hartree-Fock calculations of fusion reactions between two 16O nuclei, using a range of different parametrisations of the effective Skyrme interaction. Each of the interactions produces rather similar ground states, in which neither time-odd nor tensor parts of the effective interaction are active, but the range of results in the upper fusion threshold is large. This highlights the as-yet unconstrained nature of the time-odd parts of the effective nuclear interaction. While systematic use of fusion calculations in fits of the time-odd parts of Skyrme forces remains computationally prohibitive, the variation found here suggests that fusion dynamics may form part of the physics input to future constraints on those parts of effective interactions which are not probed by ground-state fits. This can be added to the increasing body of work indicating that the time-odd contributions to the nuclear mean field are in need of constraining to observables [24][25][26]. Table 1. Fusion upper threshold energies for the 16O + 16O collision using various parametrisations of the Skyrme interaction. For references to interactions, see text. The energies are calculated to the nearest 1 MeV.
1,779.4
2015-01-01T00:00:00.000
[ "Physics" ]
Relational, Cooperative Information for IPv7 The implications of ambimorphic archetypes have been far-reaching and pervasive. After years of natural research into consistent hashing, we argue the simulation of public-private key pairs, which embodies the confirmed principles of theory. Such a hypothesis might seem perverse but is derived from known results. Our focus in this paper is not on whether the well-known knowledge-based algorithm for the emulation of checksums by Herbert Simon runs in Θ(n) time, but rather on exploring a semantic tool for harnessing telephony (Swale). Introduction Real-time technology and access points have garnered great interest from both leading analysts and security experts in the last several years.The notion that steganographers interact with virtual information is usually adamantly opposed.On a similar note, in fact, few security experts would disagree with the synthesis of rasterization, which embodies the unproven principles of robotics.However, 802.11b alone will not able to fulfill the need for mobile epistemologies.Our algorithm is copied from the principles of topologically mutually exclusive networking.We emphasize that our heuristic develops collaborative archetypes.Unfortunately, this method is rarely adamantly opposed [1].But, indeed, voice-over-IP and Web services have a long history of interfering in this manner.Our framework requests the location-identity split.Combined with signed communication, such a claim synthesizes an analysis of the location-identity split. To our knowledge, our work in this paper marks the first algorithm investigated specifically for Boolean logic.We emphasize that our system is in Co-NP.Two properties make this solution optimal: Swale manages access points, and also, we allow flip-flop gates to explore electronic configurations without the understanding of superpages.The drawback of this type of method, however, is that information retrieval systems [2] and the memory bus can agree to fix this riddle.Nevertheless, this method is entirely considered extensive.As a result, we verify not only that consistent hashing can be made scalable, unstable, and wireless, but that the same is true for B-trees.Here, we prove not only that forward-error correction and hierarchical databases are entirely incompatible, but that the same is true for link-level acknowledgements.Along these same lines, we view machine learning as following a cycle of four phases: deployment, provision, analysis, and evaluation.We view electrical engineering as following a cycle of four phases: allowance, evaluation, investigation, and construction.Combined with Lamport clocks, this discussion develops an analysis of B-trees.Although such a hypothesis is mostly a structured goal, it fell in line with our expectations.The rest of this paper is organized as follows.To begin with, we motivate the need for wide-area networks [2].Similarly, to realize this ambition, we better understand how the UNIVAC computer can be applied to the exploration of local-area networks.Along these same lines, we prove the development of linked lists.As a result, we conclude. Related Work A number of prior applications have developed the refinement of vacuum tubes, either for the development of randomized algorithms [3] or for the construction of Internet QoS [3].Lakshminarayanan S [4] presented the first known instance of highly-available modalities [1,[5][6][7].Finally, note that our methodology analyzes the synthesis of the Ethernet; thus, our method is impossible [8]. 
Erasure coding The exploration of write-ahead logging has been widely studied [9].Unfortunately, the complexity of their solution grows linearly as virtual epistemologies grows.New heterogeneous technology proposed by Thompson D [10] fails to address several key issues that our framework does fix [11].The original approach to this obstacle [12] was well-received; on the other hand, such a hypothesis did not completely fulfill this intent.A comprehensive survey [13] is available in this space.All of these methods conflict with our assumption that interrupts, and lambda calculus are significant [14,15].On the other hand, the complexity of their method grows sub linearly as the study of checksums grows. Read-write symmetries The concept of client-server modalities has been studied before in the literature [16].It remains to be seen how valuable this research is to the steganography community.A recent unpublished undergraduate dissertation described a similar idea for the emulation of Internet QoS [17][18][19].Without using reliable epistemologies, it is hard to imagine that wide-area networks and rasterization can agree to fix this quagmire.We had our method in mind before Taylor & Garcia published the recent seminal work on agents [20][21][22][23].A recent unpublished undergraduate dissertation [24,25] constructed a similar idea for the visualization of scatter/ gather I/O.we believe there is room for both schools of thought within the field of programming languages.A recent unpublished undergraduate dissertation [26] motivated a similar idea for the compelling unification of digital-to-analog converters and rasterization [20].We plan to adopt many of the ideas from this previous work in future versions of our algorithm. Congestion control Our solution is related to research into the study of spreadsheets, consistent hashing, and kernels.A litany of related work supports our use of the improvement of search [27].Swale is broadly related to work in the field of software engineering by Takahashi, but we view it from a new perspective: von neumann machines.Thus, comparisons to this work are idiotic.Finally, the heuristic of Bhabha et al. [16,[28][29][30] is a confirmed choice for byzantine fault tolerance [31]. 
Pervasive Symmetries Our heuristic relies on the essential model outlined in the recent infamous work by Zhao and Ito in the field of complexity theory. Continuing with this rationale, rather than caching the simulation of 64-bit architectures, our algorithm chooses to cache virtual epistemologies. Figure 1 depicts Swale's cooperative storage. This is a significant property of Swale. Similarly, the design for Swale consists of four independent components: Byzantine fault tolerance, web browsers, systems, and robots. Although statisticians often postulate the exact opposite, our algorithm depends on this property for correct behavior. We assume that each component of our application follows a Zipf-like distribution, independent of all other components. This seems to hold in most cases. Next, we believe that each component of our methodology synthesizes online algorithms, independent of all other components. Such a hypothesis might seem perverse but is derived from known results. Despite the results by Robert Tarjan [17], we can verify that cache coherence and superpages are always incompatible. We show a decision tree detailing the relationship between our application and extensible epistemologies in Figure 1. Further, we consider an algorithm consisting of n systems. This is an appropriate property of Swale. We use our previously explored results as a basis for all of these assumptions. Reality aside, we would like to study a framework for how Swale might behave in theory. This seems to hold in most cases. We show the methodology used by Swale in Figure 1. This may or may not actually hold in reality. Further, we believe that electronic technology can locate IPv7 [6,29,32] without needing to manage adaptive archetypes. Further, we estimate that each component of Swale locates the improvement of DHCP, independent of all other components. Therefore, the model that our application uses is solidly grounded in reality (Figure 2). Implementation Though many skeptics said it couldn't be done (most notably Clarke E), we describe a fully-working version of our framework. On a similar note, our algorithm is composed of a virtual machine monitor, a centralized logging facility, and a hacked operating system. Along these same lines, since our heuristic is built on the principles of hardware and architecture, optimizing the centralized logging facility was relatively straightforward. Despite the fact that we have not yet optimized for security, this should be simple once we finish coding the client-side library. The centralized logging facility contains about 91 semi-colons of SQL [21]. Since Swale will be able to be visualized to harness optimal configurations, coding the collection of shell scripts was relatively straightforward. Results Our performance analysis represents a valuable research contribution in and of itself. Our overall evaluation seeks to prove three hypotheses: a) that Internet QoS no longer toggles system design; b) that suffix trees no longer influence optical drive throughput; and finally c) that the Motorola bag telephone of yesteryear actually exhibits better distance than today's hardware. Unlike other authors, we have intentionally neglected to visualize flash-memory space. Further, we are grateful for mutually pipelined, DoS-ed 802.11 mesh networks; without them, we could not optimize for scalability simultaneously with usability constraints. Our evaluation strives to make these points clear.
Hardware and software configuration Though many elide important experimental details, we provide them here in gory detail. We scripted an encrypted emulation on MIT's decommissioned PDP 11s to quantify the work of Japanese gifted hacker M. Frans Kaashoek. Had we emulated our mobile telephones, as opposed to simulating them in courseware, we would have seen exaggerated results. To begin with, we added some ROM to DARPA's system. Further, we doubled the effective NV-RAM speed of our desktop machines. Along these same lines, we removed some NV-RAM from our desktop machines to discover our desktop machines. Configurations without this modification showed amplified average latency. Continuing with this rationale, we removed 3MB of flash-memory from our desktop machines. On a similar note, we doubled the ROM throughput of our millenium overlay network to measure the extremely encrypted behavior of collectively mutually exclusive models. Finally, we added 300Gb/s of Ethernet access to our mobile telephones to better understand our cooperative overlay network (Figure 3). Swale does not run on a commodity operating system but instead requires an opportunistically hacked version of Microsoft Windows Longhorn. We added support for our system as a fuzzy runtime applet. All software components were hand hex-edited using a standard toolchain built on J Dongarra's toolkit for opportunistically harnessing distributed effective response time. Continuing with this rationale, all software was linked using AT&T System V's compiler built on the Swedish toolkit for topologically enabling mutually exclusive PDP 11s (Figure 4). All of these techniques are of interesting historical significance; Garey M & Martin KH investigated an orthogonal setup in 1980. Experiments and results Our hardware and software modifications show that deploying Swale is one thing, but deploying it in the wild is a completely different story. With these considerations in mind, we ran four novel experiments. We discarded the results of some earlier experiments, notably when we compared response time on the GNU/Hurd, LeOS and Microsoft Windows 3.11 operating systems. Now for the climactic analysis of experiments (c) and (d) enumerated above. Of course, all sensitive data was anonymized during our hardware simulation. Second, the many discontinuities in the graphs point to muted average bandwidth introduced with our hardware upgrades. Note that Figure 4 shows the 10th-percentile and not median noisy effective NV-RAM speed. As shown in Figure 4, all four experiments call attention to Swale's average complexity. The key to Figure 4 is closing the feedback loop; Figure 4 shows how our heuristic's effective RAM throughput does not converge otherwise. The results come from only 9 trial runs and were not reproducible. Along these same lines, the many discontinuities in the graphs point to weakened expected throughput introduced with our hardware upgrades. Lastly, we discuss the second half of our experiments. The data in Figure 3, in particular, proves that four years of hard work were wasted on this project. The curve in Figure 4 should look familiar; it is better known as h(n) = log n. Third, we scarcely anticipated how precise our results were in this phase of the evaluation.
Conclusion Swale will surmount many of the grand challenges faced by today's leading analysts.We disconfirmed that despite the fact that the acclaimed homogeneous algorithm for the analysis of symmetric encryption [33] runs in Ω(logn) time, link-level acknowledgements and superblocks are often incompatible.To fix this grand challenge for the simulation of the memory bus, we presented a novel framework for the simulation of the memory bus.We validated that scalability in our methodology is not a quandary.Swale has set a precedent for wireless archetypes, and we expect that scholars will measure Swale for years to come.Swale has set a precedent for the exploration of write-back caches that made simulating and possibly synthesizing checksums a reality, and we expect that mathematicians will simulate Swale for years to come.Although such a claim might seem perverse, it is supported by related work in the field.The characteristics of Swale, in relation to those of more foremost methodologies, are particularly more confusing.We proved not only that Smalltalk can be made distributed, wearable, and mobile, but that the same is true for sensor networks.Next, our framework for harnessing interactive communication is famously bad.The practical unification of IPv6 and write-back caches is more unfortunate than ever, and Swale helps system administrators do just that. Figure 1 : Figure 1: A distributed tool for visualizing courseware. Figure 2 : Figure 2: An analysis of redundancy. Figure 3 : Figure 3: These results were obtained by Lampson B; we reproduce them here for clarity. a) We measured Web server and DHCP latency on our mobile telephones b) We deployed 95 Next Workstations across the Planet lab network, and tested our I/O automata accordingly c) We compared average throughput on the AT&T System V, Coyotos and LeOS operating systems; and d) We measured ROM throughput as a function of USB key speed on an Apple Newton. Figure 4 : Figure 4: The 10 th -percentile hit ratio of our framework, compared with the other algorithms.
2,972.2
2019-04-29T00:00:00.000
[ "Computer Science" ]
Analysis of steel reinforced functionally graded concrete beam cross sections Owing to continuously changing strength moduli properties, functionally graded concrete (FGC) has remarkable advantages over the traditional homogeneous concrete materials regarding cement optimization. Some researchers have studied mechanical behaviors and production methodologies. Problems arise as to how to incorporate the effects of the non-homogeneity of concrete strengths in the analysis for design. For a steel Reinforced Functionally Graded Concrete (RFGC) beam structure, the associated boundary conditions at both ends have to be at the neutral axis position after the occurrence of the presumed cracks. Because the neutral axis is no longer at the mid-plane of the beam crosssection, an iterative procedure has to be implemented. The procedure is somewhat complicated since the strength of the beam cross section has to be integrated due to the non-homogeneity in concrete strengths. This paper proposes an analytical procedure that is very straightforward and simple in concept, but accurate in designing the steel reinforced functionally graded concrete beam cross-sections. Introduction Most human-made materials such as concrete and ceramics are deliberately designed and manufactured with homogeneous properties.The homogeneity is effective to ensure the safety of a structure.However, sometimes it can lead to ineffectiveness in the use of natural resources since high-stress concentration problems only occur in certain parts of the structural element.On the contrary to the homogeneity assumed in the analyses and design, steel Reinforced Concrete (RC) structure elements in built structures are mostly found as graded concrete material [1][2].The non-homogeneous material property is inherent as a result of mixing, placing, consolidating and curing procedures, in addition to the segregation and accumulation of the aggregates during the mixing.Additionally, some workability factors such as bleeding and microcracking due to premature water evaporation also play a role in making the concrete material non-homogeneous. Functionally Graded Material (FGM) is a new kind of combining two or more materials where the essential properties are varied over a specified orientation to obtain some desired function abilities [3].In FGM compositions, two or more material properties are blended functionally to improve material performances.Bamboo is a kind of material in nature that shows a radial gradient of property as a result of the evolutionary process to adapt their living environmental conditions [4][5][6]. Studies on the FGC, however, are very limited.Attempts to manufacture [7][8] an FGC material face one challenging difficulty: creating a smooth transition between two different properties.Unless the continuous transition is not achieved, a laminated or composited material will be produced.The transition zones, where the stress concentrations occur, will degrade the quality of the FGC material.In [7][8], a variation in the cement content relative to volume is adapted to manufacture the FGC.The method developed in [7][8] was found useful in creating a graded material of FGC having both strength and stiffness properties varying through the depth of the concrete specimen. 
Experimentally and numerically [8][9][10][11], some studies on the effects of two concrete strengths gradation of FGC cylinder compressive strength specimens have reported that the ultimate strengths of the FGC were limited by the lowest concrete strength of the FGC and their rigidities are close to the highest compressive strength of the FGC mixture. However, FGC has not been implemented widely in construction projects.One major problem in implementing FGC is that there are no building codes available for analyzing and designing FGC elements in structures.The variation of concretes with different strengths and elastic moduli on an FGC element need an accurate method to estimate their strength and behavior.In this paper, we show a method and corresponding analysis to design a steel Reinforced Functionally Graded Concrete (RFGC) beam subjected to a bending moment.By using the presented method, the RFGC can be designed similarly with the conventional steel RC member.In the end, a study on price comparison is conducted to highlight the economic feasibility of the RFGC. Functionally graded concrete 2.1 FGM concept applied to the steel reinforced concrete beam Applying the FGM concept to the RC beam can be achieved by manufacturing the FGC reinforced by steel bars.Figure 1 depicts the resulting RFGC beam.By grading two different types of the concrete strengths throughout the thickness of the cross-section of a beam, a possible scheme to reduce the unnecessary concrete strength in the tension zone and to increase the necessary concrete strength in the compressive zone.Optimally, there will be no reduction of the beam's bending strength, and at the same time, this can potentially reduce the material prices.Similar ideas could lead to the enhancement of a wide range of other building components.The graded elastic modulus of the concrete beam shown in Fig. 1. and is defined by the following: where ( ) E y is the concrete elastic modulus at y ordinate; cb E is the concrete elastic modulus at the bottom of the beam cross-section; ct E is the concrete elastic modulus at the top of the beam cross-section; y is measured from the bottom of the beam cross-section; h is the height of the beam cross-section; and p is the gradient of the modulus variation. Allowable stress design method of RFGC The allowable stress design (ASD) code standard for designing an RC beam subjected to bending moment has been used for many years.For comparison purposes in this paper, we will examine how RFGC is analyzed in the ASD and then discuss the advantages of the RFGC in the design of an RFGC beam subjected to a bending moment.In the ASD, the following assumptions are made: 1.In the calculation of stresses at the FGC and steel bars at a section, the tensile strength of the assumed cracked concrete part below the natural axis is neglected.2. The dimension of the length of the beam is relatively long compared to the maximum dimension of the cross-section.Hence, the section remains plane and perpendicular to the neutral axis of the beam after the deformation.3. The material properties of steel and concrete are linear elastic. The kinematics of the RGFC beam cross-section under a bending moment are illustrated in Fig. 2. 
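A gradation law consistent with the symbols defined for Eq. (1) — E_cb at the bottom fibre, E_ct at the top fibre, the normalised height y/h, and the exponent p, with p = 0 treated later as the homogeneous case and p = 1, 2, 3 as linear, quadratic and cubic gradations — is the usual FGM power law E(y) = E_cb + (E_ct − E_cb)(y/h)^p. This form is an assumption, since the equation is not written out above. The short sketch below evaluates the assumed profile and the corresponding ASD modular ratio n(y) = E_s/E(y); the beam depth, concrete moduli and steel modulus are placeholder values, not those used in the paper.

```python
# Illustrative sketch (assumed power-law gradation, placeholder numbers).
import numpy as np

def graded_modulus(y, h, E_cb, E_ct, p):
    """Concrete elastic modulus at height y, measured from the bottom fibre (N/mm^2)."""
    return E_cb + (E_ct - E_cb) * (y / h) ** p

def modular_ratio(y, h, E_cb, E_ct, p, E_s=205_000.0):
    """ASD modular ratio n(y) = E_s / E(y); E_s is a placeholder steel modulus."""
    return E_s / graded_modulus(y, h, E_cb, E_ct, p)

if __name__ == "__main__":
    h = 500.0                                  # hypothetical beam depth (mm)
    E_cb, E_ct = 24_000.0, 33_000.0            # hypothetical moduli at bottom/top fibres
    y = np.linspace(0.0, h, 11)
    for p in (1, 2, 3):                        # linear, quadratic, cubic gradations
        E = graded_modulus(y, h, E_cb, E_ct, p)
        print(f"p={p}: E(bottom)={E[0]:.0f}, E(mid)={E[5]:.0f}, E(top)={E[-1]:.0f} N/mm^2")
```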
The compression forces consist of the uncracked concrete area and steel bar in compression, while the tensile force is only resisted by the steel bar in the cracked concrete area.The design of the beam cross-section is iterated by the calculation of balancing moment strengths controlled by the concrete and steel.In ASD, the modular ratio, n, as the function of y is defined as: The equilibrium state of the steel and concrete forces after neglecting the tension portion of concrete can be obtained from the stress distributions shown in Fig. 2, which can be expressed by the following: with ( ) where S is the total tensile force from the steel bars; S ′ is the total compressive force from the steel bars; C is the total compressive force from the uncracked concrete portion; , and ( ) ( ) The second moment of inertia of the beam cross-section then can be computed by: ( ) ( ) The stress of an arbitrary point can be calculated by: The arm length e of the couple between compressive and tensile forces of the beam cross-section can be determined from: Given the allowable stress of steel bar and concrete (compressive side), the bendingresistant moments of the RFGC beam cross-section can be determined from the lowest value between: ( ) where sr M and cr M are the allowable resisting moments of steel and concrete, respectively. Economic considerations: a price material comparison study In this study, the Japanese design standard for structural calculation of RC structures [12] is referred in the calculations of design and analysis.Only the material price of the concrete will be compared because the manufacturing process for the RFGC technologically is not available in the current time frame of development. Design of rectangular RC and RFGC beams Figure 3 shows two homogeneous (p = 0) RC beams of 69 MPa and 28 MPa concrete compressive strengths, namely Case-1 and Case-3, respectively.Case-2 in Fig. 3 shows an RFGC beam cross-section of functionally graded concrete compressive strengths which vary from 28 MPa at the bottom fiber and 69 MPa at the top fiber of the beam crosssection. In this study, the graded function in Eq. ( 1) of the concrete compressive strengths is selected to follow the degree of polynomial order of p = 1 (linear), 2 (quadratic) and 3 (cubic) variation of cases.The selection of higher order degree of the polynomial in this study is because in practice, it is not easy to manufacture or by chance find a smooth linear gradation of FGC beam.Instead, the quadratic or cubic functions are more likely to be found in real concrete structures. Cases for a comparison study of a rectangular RF/RFGC beams. The elastic moduli [12] can be evaluated from the compressive strength and weight density of the concrete as, ( ) where, c E is the elastic modulus of concrete in N/mm 2 ; c F is the compressive design strength of concrete in N/mm 2 is assigned to the steel bar. Volume fraction of FGC beam cross-section To calculate the price material of each case, the volume fraction of the respective material constituent in the FGC can be determined from the weight density distribution following Eq.( 1). The FGC weight density distribution can be expressed as: where ( ) c y γ is the weight density of concrete at y ordinate; cb γ is the weight density of concrete at the bottom of the beam cross-section; ct γ is the weight density of concrete at the top of the beam cross-section.By integrating Eq. 
( 16) along the depth of the FGC beam's cross-section, we can obtain the total volume per unit depthV of the FGC from: 0 1 ( ) where ( ) b y is the y-variable width of the FGC beam cross-section.The volume of a fraction of a rectangular section, where the width ( ) b y b = is a constant, of the FGC made of two concrete strength components, then can be determined by integrating Eq. ( 16) which results in: where , cb ct V V andV are the volumes of the corresponding compressive strength of concretes at the bottom fiber, top fiber and entire beam's cross-section, respectively. Price of material calculation Table 1 shows the average unit price of homogeneous concrete material and steel bar available in the construction material price standard in Japan at present (2018).As a base for comparison study, the high strength concrete of RC beam (Case-1) is designed to determine the appropriate bending moment for loading, its dimension of the cross-section and required reinforcing steel bars.The results are then used for designing the other cases to find the optimal beam cross-section dimensions and required reinforcing steel bars.The design is optimized by finding a suitable loading of bending moments that are close to Eqs. (13-14), the dimensions of the cross-section, and requirement of reinforcing steel bars within the allowable stress limit of concrete and steel bar materials.The allowable stress of concrete is determined from (Steel Grades Carbon Steel SD345).The results of the design and analysis of the RC and RFGC beam subjected to the bending moment are tabulated in Table 2.The beam cross-section dimensions (b × h) and required steel bars and the total material prices are also shown in Table 2. In each Case, the volume of the concrete is computed from the fraction volume given by Eq. ( 18) multiplied by the corresponding weight density and unit price shown in Table 1.The price of steel bars per unit length is calculated from the unit price shown in Table 1. Results and conclusion From Table 2, in general, Cases 2A-C are the RFGC beams with lower material prices compared to either the normal RC (Case-3) and high strength RC (Case-1).Apparently, the high strength RC (Case-1) has a higher material price compared to the normal RC (Case-3).In Fig. 4, the lowest material price is found at Case-2C where the gradation function of compressive strength is following the 3 rd degree of polynomial.The following conclusions can be drawn from the present case study: 1.The 3 rd degree of polynomial assumption, which is most likely found in the built structural members of the graded concrete distribution, was found to be the most effective combination for FGC material.Therefore, there is no need to produce the linear (p = 1) FGC which is, indeed, more difficult to manufacture.2. By using the RFGC (Case-2), the weight of the normal RC (Case-3) can be reduced by 41.8% less, and thereby, lightweight structure and more spaces can be gained in designing high-rise building.3. Given the material price comparison, the RFGC beams are more economical than both the normal and high strength RC beams. 
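If the local blend of the two base concretes is assumed to follow the same power-law gradation (y/h)^p used for the modulus and weight density — an interpretation of Eqs. (16)–(18), not a form stated explicitly above — then for a rectangular section of constant width the volume of the top-fibre concrete per unit length is b·h/(p + 1) and the remainder belongs to the bottom-fibre concrete. The sketch below computes these fractions and a per-metre concrete price under that assumption; the cross-section dimensions and unit prices are placeholders and do not reproduce Table 1.

```python
# Sketch under a stated mixing-rule assumption; all numbers are placeholders.
import numpy as np

def volume_fractions(b, h, p, n=1000):
    """Volumes per unit beam length (mm^3/mm) of bottom- and top-fibre concretes."""
    y = np.linspace(0.0, h, n + 1)
    f_top = (y / h) ** p                 # assumed local fraction of the top-fibre concrete
    V_ct = b * np.trapz(f_top, y)        # equals b*h/(p+1) for a constant width b
    V_cb = b * h - V_ct
    return V_cb, V_ct

def concrete_price_per_metre(b, h, p, price_cb, price_ct):
    """price_cb / price_ct are unit prices per m^3 (placeholders, not Table 1 values)."""
    V_cb, V_ct = volume_fractions(b, h, p)   # mm^3 per mm of beam length
    to_m3_per_m = 1e-9 * 1e3                 # mm^3/mm -> m^3/m
    return (V_cb * price_cb + V_ct * price_ct) * to_m3_per_m

if __name__ == "__main__":
    b, h = 300.0, 500.0                            # hypothetical cross-section (mm)
    price_28, price_69 = 15_000.0, 25_000.0        # hypothetical JPY per m^3
    for p in (1, 2, 3):
        cost = concrete_price_per_metre(b, h, p, price_28, price_69)
        print(f"p={p}: concrete price = {cost:,.0f} JPY per metre of beam")
```

With these placeholder prices the per-metre concrete cost falls as p increases from 1 to 3, which is qualitatively consistent with the conclusion above that the cubic gradation (Case-2C) is the most economical.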
Notation for the cracked-section equations: the distances to the steel layers are measured from the neutral axis of the cracked beam cross-section; σct is the stress of the concrete fibre at the top of the beam cross-section; hc is the height of the uncracked steel-reinforced FGC section measured from the neutral axis; ns and ns′ are the numbers of tensile and compressive steel bars; and the remaining symbols denote the areas of the uncracked concrete and of the i-th tensile and compressive steel bars, respectively. By substituting Eqs. (4)–(6) into Eq. (3), the neutral-axis position of the cracked beam cross-section can be calculated. γc is the weight density of concrete in kN/m³.
Table 1. Concrete and steel bar prices (2018 average market prices in Japan).
Table 2. Price comparison of RC and RFGC rectangular beams subjected to bending moment.
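The closed-form neutral-axis expression referred to above (Eqs. (3)–(6)) is not written out here, so the sketch below locates the neutral axis numerically under the same ASD assumptions stated earlier: plane sections, linear elasticity, and cracked concrete below the neutral axis neglected. The net axial force of the E(y)-weighted uncracked concrete block plus the steel layers is driven to zero by root finding; the power-law modulus profile, section geometry and bar layout are illustrative placeholders, and the Brent root-finding scheme is an implementation choice, not the authors' procedure.

```python
# Illustrative numerical sketch only; placeholder geometry, moduli and bar layout.
import numpy as np
from scipy.optimize import brentq

def E_concrete(y, h, E_cb, E_ct, p):
    return E_cb + (E_ct - E_cb) * (y / h) ** p        # assumed power-law gradation

def equilibrium_residual(y_na, b, h, E_cb, E_ct, p, E_s, steel):
    """Net axial force (per unit curvature) of the cracked transformed section.

    steel: list of (area_mm2, y_mm) tuples, y measured from the bottom fibre.
    Concrete below y_na is treated as cracked and carries no tension.
    """
    y = np.linspace(y_na, h, 400)
    concrete = np.trapz(b * E_concrete(y, h, E_cb, E_ct, p) * (y - y_na), y)
    bars = sum(A * E_s * (ys - y_na) for A, ys in steel)
    return concrete + bars

def neutral_axis(b, h, E_cb, E_ct, p, E_s, steel):
    return brentq(equilibrium_residual, 1e-3, h - 1e-3,
                  args=(b, h, E_cb, E_ct, p, E_s, steel))

if __name__ == "__main__":
    b, h = 300.0, 500.0
    E_cb, E_ct, E_s = 24_000.0, 33_000.0, 205_000.0    # N/mm^2 (placeholders)
    steel = [(4 * 387.1, 60.0),     # hypothetical tension bars near the bottom fibre
             (2 * 387.1, 440.0)]    # hypothetical compression bars near the top fibre
    for p in (1, 2, 3):
        y_na = neutral_axis(b, h, E_cb, E_ct, p, E_s, steel)
        print(f"p={p}: neutral axis at y = {y_na:.1f} mm from the bottom fibre")
```

Once the neutral axis is known, the same E(y)-weighted integral evaluated with (y − y_na)² gives the flexural rigidity of the cracked section, from which the fibre stresses and the allowable resisting moments of Eqs. (13)–(14) follow.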
3,130.4
2018-01-01T00:00:00.000
[ "Engineering" ]
Effect of pneumococcal conjugate vaccines and SARS-CoV-2 on antimicrobial resistance and the emergence of Streptococcus pneumoniae serotypes with reduced susceptibility in Spain, 2004–20: a national surveillance study Background Epidemiological studies are necessary to explore the effect of current pneumococcal conjugate vaccines (PCVs) against antibiotic resistance, including the rise of non-vaccine serotypes that are resistant to antibiotics. Hence, epidemiological changes in the antimicrobial pattern of Streptococcus pneumoniae before and during the first year of the COVID-19 pandemic were studied. Methods In this national surveillance study, we characterised the antimicrobial susceptibility to a panel of antibiotics in 3017 pneumococcal clinical isolates with reduced susceptibility to penicillin during 2004–20 in Spain. This study covered the early and late PCV7 periods; the early, middle, and late PCV13 periods; and the first year of the COVID-19 pandemic, to evaluate the contribution of PCVs and the pandemic to the emergence of non-vaccine serotypes associated with antibiotic resistance. Findings Serotypes included in PCV7 and PCV13 showed a decline after the introduction of PCVs in Spain. However, an increase in non-PCV13 serotypes (mainly 11A, 24F, and 23B) that were not susceptible to penicillin promptly appeared. A rise in the proportion of pneumococcal strains with reduced susceptibility to β-lactams and erythromycin was observed in 2020, coinciding with the emergence of SARS-CoV-2. Cefditoren was the β-lactam with the lowest minimum inhibitory concentration (MIC)50 or MIC90 values, and had the highest proportion of susceptible strains throughout 2004–20. Interpretation The increase in non-PCV13 serotypes associated with antibiotic resistance is concerning, especially the increase of penicillin resistance linked to serotypes 11A and 24F. The future use of PCVs with an increasingly broad spectrum (such as PCV20, which includes serotype 11A) could reduce the impact of antibiotic resistance for non-PCV13 serotypes. The use of antibiotics to prevent co-infections in patients with COVID-19 might have affected the increased proportion of pneumococcal-resistant strains. Cefotaxime as a parenteral option, and cefditoren as an oral choice, were the antibiotics with the highest activity against non-PCV20 serotypes. Funding The Spanish Ministry of Science and Innovation and Meiji-Pharma Spain. Translation For the Spanish translation of the abstract see Supplementary Materials section. Introduction Invasive pneumococcal disease and community-acquired bacterial pneumonia are infectious diseases of high priority for prevention as they are associated with high morbidity and mortality rates. 1,2 Streptococcus pneumoniae (also known as pneumococcus) is the most common cause of community-acquired bacterial pneumonia and is one of the most frequent causes of bacterial meningitis and sepsis. 1,2 Pneumococcal conjugate vaccine (PCV) use is the best prophylactic strategy to prevent invasive pneumococcal disease and community-acquired bacterial pneumonia in children, 2 although several clinical trials have also shown great effectiveness in the adult population. 3,4 In Spain, PCV7 was first used in 2001, although mainly in private practice, and vaccine coverage was less than 50% before 2006. 5 PCV10 was authorised for use in 2009, but promptly replaced by PCV13 in 2010. 
PCV13 was highly prescribed by paediatricians and in 2016 was included in the national immunisation schedule of the Spanish public health system, leading to high vaccine coverage rates. In adults, pneumococcal vaccine coverage rates are not made public, although in 2018 they were 22% for Spanish regions that used PCV13, and 26% for those that used pneumococcal polysaccharide vaccine (PPV)23. 5 A marked reduction in the incidence of invasive pneumococcal disease caused by PCV13 serotypes has been reported in Spain, not only in children but also in adults, owing to the herd-immunity effects of paediatric vaccination. 5 Another important benefit of using PCVs is their contribution to lowering the burden of antimicrobial resistance, by controlling serotypes that have reduced susceptibility. 6 However, an increase in non-PCV13 serotypes, mainly in adults, might jeopardise the effectiveness of this vaccine. 5,7 There have been constant increases in serotypes associated with antimicrobial resistance, with declines in susceptibility rates after the introduction of PCVs and increases in non-PCV serotypes after PCVs were implemented in the paediatric population. 8 In addition, the emergence of multidrug-resistant serotype 19A isolates was reported shortly after the introduction of PCV7 globally. 8 This occurrence is consistent with a 2011 report 9 that explored antimicrobial resistance rates in S pneumoniae globally, which showed that susceptibility rates had decreased throughout the years in particular regions. Therefore, we did a national longitudinal study to characterise the evolution of antibiotic susceptibility throughout 16 years (2004-20), with a special focus on third-generation oral cephalosporins, because in Spain these antibiotics are widely used to treat patients with pneumonia who have not been hospitalised. Another major goal of our study was to evaluate the contributions of PCV7, PCV13, and the COVID-19 pandemic to the emergence of non-vaccine serotypes that are associated with antibiotic resistance. Studies published in the past 2 years suggested that S pneumoniae could interact with SARS-CoV-2. 10,11 Vaccination with PCV13 has been associated with a reduced risk of COVID-19 diagnosis, hospitalisation, and mortality in patients infected by SARS-CoV-2. 10 Furthermore, pneumococcal carriage has been linked with impaired anti-SARS-CoV-2 immune responses, affecting mucosal IgA concentrations in individuals with mild or asymptomatic infection and the cellular memory responses in most patients who are infected. 11 Hence, vaccination using PCVs that reduces both the duration of and the number of people in the carrier state could preserve the immune response against SARS-CoV-2 and be the reason for a lowered risk of COVID-19. 10,11 Study design In this national surveillance study, we character ised 3017 clinical isolates that were non-susceptible to penicillin, which we received at the Spanish Pneumococcal Reference Lab ora tory (Madrid, Spain) in 2004-20. These isolates were from adult patients hospitalised with Research in context Evidence before this study We searched PubMed on Jan 2, 2021 for studies in children and adults published between Jan 1, 2004, and Dec 31, 2020, using the terms "invasive pneumococcal disease" and/or "serotypes", and "pneumococcal conjugate vaccines", and "antibiotic resistance", and "SARS-CoV-2", with no language restrictions. 
We screened the search results, which included populationbased studies and observational studies related to the epidemiology of invasive pneumococcal disease caused by antimicrobial-resistant strains affecting adults before and after the introduction of pneumococcal conjugate vaccines (PCVs). Overall, studies that included countries that had introduced PCVs in childhood immunisation programmes reported a reduction in overall invasive pneumococcal disease, and a decline in incidence by vaccine serotypes, including serotypes with antibiotic resistance. Herd-immunity protection in adults has been observed in countries with long-term use of PCVs in children, upholding the importance of indirect protection conferred by PCVs. The replacement of serotypes after PCV13 has been subject to geographical discrepancies globally. Added value of this study Different PCVs have been used since 2001 (with the introduction of PCV7, followed by PCV13) and it is important to determine the effect of these vaccines on the epidemiology of resistant strains. Such knowledge is especially important given the COVID-19 pandemic, in which many antibiotics have been prescribed at hospital and community level to prevent the potential risk of co-infection by bacterial pathogens, and the resulting threat of increased resistance is of concern. In this national-level longitudinal study in Spain, 2004-20, we evaluated the evolution of pneumococcal strains resistant to a range of antibiotics, including penicillin, amoxicillin, cefotaxime, erythromycin, levofloxacin, and third-generation oral cephalosporins such as cefixime, cefpodoxime, and cefotaxime. We also analysed the patterns of antibiotic resistance before and during the first year of the COVID-19 pandemic, to see if there were variations that might be attributable to the use of antibiotics to prevent co-infections in patients infected by SARS-CoV-2. Implications of all the available evidence The study shows a reduction in vaccine-serotypes displaying antibiotic resistance after the introduction of PCV7 and PCV13, confirming the importance of these vaccines in controlling the problem of antibiotic resistance. However, a rise in non-PCV13 serotypes that harbour resistance (including 11A, 24F, and 23B), has been observed in the past 5 years. Future vaccines that contain additional serotypes associated with antibiotic resistance will partly solve this problem by increasing potential coverage against some of these emerging non-vaccine serotypes. Our data suggest that the increased proportion of resistant strains during the first year of the COVID-19 pandemic should be taken into consideration regarding the use of antibiotics as a routine strategy to prevent bacterial co-infections, as such use could exacerbate the problem of antibiotic resistance. invasive pneumococcal disease or non-bacteraemic pneumococcal pneumonia. We did not include strains from adults with meningitis. We also analysed the effect of PCVs in the epidemiology of S pneumoniae strains with reduced susceptibility to penicillin assessed at different time periods. We compared 2019 (pre-COVID-19) and 2020 to analyse the effect of SARS-CoV-2 in the antimicrobial susceptibility of S pneumoniae (appendix 2, pp 2-3). 
We included around 500 clinical isolates that were nonsusceptible to penicillin and had a minimum inhibitory concentration (MIC) of at least 0·12 µg/mL from 2004 (early PCV7 period), 2008 (late PCV7 period), 2012 (early PCV13 period), 2016 (middle PCV13 period), 2019 (late PCV13 and pre-COVID-19 period), and 2020 (COVID-19 period; appendix 2, p 3). These strains were obtained using our collection programme at the Spanish Pneumococcal Reference Laboratory for strains from hospitals distributed throughout the entire country. To avoid possible bias, we did a random selection using the RAND function in Microsoft Excel (2016 [Windows]) from our database of collected strains to ensure a general distribution from around the country. This study was done as a public health investigation with internal approval from Instituto de Salud Carlos III (Madrid, Spain) for the characterisation of pneumococcal strains; as such, external ethics committee approval was not required. Antibiotic susceptibility was evaluated by the test diffusion method and the MIC values were determined by the agar dilution technique in accordance with the European Committee on Antimicrobial Susceptibility Testing (EUCAST) criteria, using EUCAST breakpoint recommendations for data interpretation. 8 For those antibiotics without a defined breakpoint by EUCAST or the Clinical and Laboratory Standards Institute such as cefixime and cefditoren, we used the same breakpoints as cefotaxime (appendix 2, p 4). Statistical analysis Statistical analysis was done by using a two-tailed Student's t-test (for two-group comparisons), and ANOVA followed by Dunnett's post-hoc test was used for multiple comparisons. The effect of vaccination against resistant serotypes was calculated by comparing the incidence rates of resistant strains during the different periods and calculating the incidence rate ratio (IRR) with 95% CI using Poisson regression models. The effect of SARS-CoV-2 in the rise of pneumococcal-resistant strains was measured using Fisher's exact test. GraphPad InStat (version 8.0; GraphPad Software, San Diego, CA, USA) was used for statistical analysis. Differences were considered significant if p<0·05 and highly significant if p<0·01 or p<0·001. Role of the funding source The funder of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. Results To characterise the evolution of the clinical isolates by their susceptibility pattern 2004-20, we used EUCAST criteria to categorise the strains as fully susceptible, susceptible with increased exposure, or resistant (figure 1; appendix 2, p 4). With cefotaxime, more than 40% of strains were susceptible with increased exposure, followed by (in order of declining percentages) cefixime, amoxicillin, cefditoren, cefpodoxime and erythromycin ( figure 1A). Among resistant strains, the antibiotic associated with the highest proportion of resistant strains was cefixime (>68%), followed by cefpodoxime (>50%), erythromycin (>50%), and amoxicillin (>33%; figure 1B). In addition, the antibiotics showing the lowest proportion of resistant strains during the study period were cefditoren (<0·4%), cefotaxime (<5%), penicillin (<6·5%), and levofloxacin (<7%; figure 1B). A decrease in the proportion of resistant strains was observed after the introduction of PCVs (ie, in the late PCV7 and early-to-middle PCV13 periods), suggesting that these vaccines were effective in controlling the emergence of resistant strains. 
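The incidence rate ratios and the 2019-versus-2020 comparison described above were obtained with Poisson regression and Fisher's exact test (the authors used GraphPad); the sketch below shows how equivalent estimates can be produced in Python. The counts are approximations reconstructed from the figures quoted in the text (roughly 500 isolates per year and penicillin resistance of about 3% in 2019 versus 6% in 2020), plus a hypothetical early-versus-late period split, so they illustrate the calculations rather than reproduce the study's raw data.

```python
# Illustrative re-implementation with approximate / hypothetical counts.
import numpy as np
import statsmodels.api as sm
from scipy.stats import fisher_exact

# --- Incidence rate ratio via a Poisson model (early vs late period, hypothetical counts)
resistant_cases = np.array([120, 45])      # resistant isolates in early vs late period
isolates_tested = np.array([500, 500])     # denominator (exposure) per period
period_late = np.array([0, 1])             # indicator: 0 = early, 1 = late

X = sm.add_constant(period_late)
model = sm.GLM(resistant_cases, X, family=sm.families.Poisson(),
               offset=np.log(isolates_tested)).fit()
irr = np.exp(model.params[1])
ci_low, ci_high = np.exp(model.conf_int()[1])
print(f"IRR (late vs early) = {irr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

# --- 2019 vs 2020 comparison with Fisher's exact test (counts back-calculated from ~3% vs ~6%)
table = [[15, 485],    # 2019: resistant, non-resistant
         [30, 470]]    # 2020: resistant, non-resistant
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```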
However, a trend towards a moderate increase in the proportion of resistant strains was observed for some antibiotics in the late PCV13 period (figure 1B). A comparison of the 2019 (pre-COVID-19) and 2020 (COVID-19) periods showed an increase (p<0·05) in the proportion of strains that were resistant to various antibiotics, such as penicillin (3% in 2019 vs 6% in 2020), amoxicillin (33% vs 36%), cefixime (68% vs 72%), cefpodoxime (50% vs 56%), and erythromycin (55% vs 59%), but showed no differences for cefditoren or levofloxacin (figure 1B). For cefotaxime, which is widely used in Spanish hospitals as a parenteral antibiotic against respiratory and systemic infection, we also found an increase in the proportion of strains with reduced susceptibility during the first COVID-19 pandemic year (42% in 2019 vs 48% in 2020; figure 1A). Cefditoren was the antibiotic showing the highest proportion of susceptible strains (>81%), followed by cefotaxime (>45%) and erythromycin (>37%; appendix 2, p 5). By contrast, cefixime, followed by cefpodoxime, had the lowest proportion of susceptible strains (appendix 2, p 5). We evaluated the contribution of pneumococcal vaccination (PCV7 until 2009 and PCV13 since 2010) to the national epidemiology of pneumococcal strains with reduced susceptibility to penicillin and either reduced susceptibility with increased exposure or resistance to erythromycin (figures 2, 3). The susceptibility with increased exposure or resistance of pneumococcal serotypes included in PCV7 and PCV13 decreased in the middle and late periods after the introduction of these vaccines (PCV7: IRR 0·31 [95% CI 0·26-0·38] for penicillin vs 0·35 [0·27-0·46] for erythromycin; PCV13: 0·37 [0·32-0·43] for penicillin vs 0·38 [0·31-0·47] for erythromycin). Serotype 14 accounted for the highest proportion of non-susceptible strains, showing a constant and steady trend in the last 5 years of the study period, especially for penicillin (figures 2 and 3). A reduction between 2004 and 2020 in PCV13 strains with susceptibility with increased exposure or resistance to penicillin (88% in 2004 vs 40% in 2020) and erythromycin (88% vs 46%) was obtained after the introduction of these PCVs, strengthening the importance of these vaccines in the fight against antibiotic resistance (figures 2 and 3). In the case of erythromycin, because we selected strains that had penicillin susceptibility with increased exposure or resistance, a limitation of our study was that we did not measure the effect of PCVs against strains that were fully susceptible to penicillin, but resistant to erythromycin. Also, an increase in non-susceptible strains belonging to serotype 19A was observed from 2008, coinciding with the late-PCV7 period (figures 2 and 3). Hence, the use of PCV13 enabled the control of serotype 19A strains that had reduced susceptibility to penicillin and Figure 2: Evolution of pneumococcal serotypes with reduced susceptibility to penicillin, 2004-20 The number and proportion of pneumococcal cases caused by serotypes covered by PCV7, additional serotypes covered by PCV13, or not included in PCV13 (the top five or five most frequent) that were either susceptible with increased exposure or resistant to penicillin. Reduced susceptibility was defined as a minimum inhibitory concentration of 0·12 µg/mL or higher. PCV=pneumococcal conjugate vaccine. erythromycin, although in the past 5 years, a situation of stability was observed for both penicillin and erythromycin (figures 2 and 3). 
We found an increase in non-PCV13 strains that had susceptibility with increased exposure or resistance since the introduction of both of these PCVs (12% in 2004 vs 54% in 2020 for erythromycin and 12% vs 60% for penicillin; figures 2 and 3). With non-PCV13 serotypes, we observed an increase of serotype 11A strains that were not susceptible to penicillin and an increase of serotype 24F strains that were not susceptible to penicillin or erythromycin (figures 2 and 3). With penicillin resistance, currently serotype 11A, followed by serotype 24F, are the two most frequent causes of pneumococcal disease caused by non-susceptible strains, and account for 30% of all cases associated with reduced susceptibility to penicillin. For simultaneous resistance to penicillin and erythromycin, epidemiol ogical data suggested that serotype 24F was responsible for 24% of all cases, with a secondary role for serotype 11A, as only 5% of cases caused by this serotype had resistance to both antibiotics. To evaluate the effect of PCVs and SARS-CoV-2 in the MIC values to β-lactams, we explored the evolution of MIC 50 and MIC 90 , analysing the three most prevalent PCV13 serotypes (19A, 14, and 19F) and non-PCV13 serotypes (11A, 24F, and 23B) associated with reduced susceptibility (ie, susceptibility with increased exposure or resistance; table). Among third-generation oral cephalosporins, cefixime had the highest MIC 50 and MIC 90 values, irrespective of the serotype, followed by cefpodoxime, whereas cefditoren was the most active cephalosporin showing the lowest MIC 50 or MIC 90 value, which was even lower than for cefotaxime-one of the most widely used parenteral cephalosporins against invasive pneumococcal disease (table). Overall, these MIC 50 or MIC 90 values indicated that cefotaxime, and cefditoren to a greater extent, were the β-lactam antibiotics with the highest activity against the most frequent serotypes with susceptibility with increased exposure or resistance to penicillin. Hence, our results showed that cefditoren achieved the lowest MIC values between 2004 and 2020, which was significant compared with each β-lactam antibiotic, including cefotaxime (p<0·001, two-tailed Student t-test) and even if multiple comparisons were done with oral cephalosporins such as cefixime and cefpodoxime (p<0·01 for one-way ANOVA followed by Dunnett´s post-hoc test). In addition, PCV13 serotypes (19A, 14, and 19F), and serotype 11A as a non-PCV13 serotype, had higher MIC 50 or MIC 90 values than all of these cephalosporins compared with serotypes 24F and 23B. For penicillin and amoxicillin, the three most frequent PCV13 serotypes (19A, 14, and 19F) had higher MIC 50 or MIC 90 values than the non-PCV13 serotypes 24F and 23B. However, serotype 11A (which is not included in PCV13 but is included in PCV20 and PPV23) was the serotype with the highest MIC 50 or MIC 90 values since 2008, being even higher than the three PCV13 serotypes studied (table). In terms of antibiotic resistance and SARS-CoV-2, we found an increase in the MIC 90 values to penicillin for serotype 11A, which changed the interpretation from susceptible with increased exposure to resistant. Hence, the MIC 90 value for serotype 11A increased from 2 µg/mL in 2016-19 to 4 µg/mL in 2020 (table). In this study, we explored the proportion of pneumococcal disease caused by strains with reduced susceptibility to different antibiotics that are potentially covered by different PCVs and PPV23 (figure 4). 
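MIC50 and MIC90 values are cited throughout the comparison above; by convention they are read from the ranked MIC distribution of the isolates as the lowest tested concentration that inhibits at least 50% and 90% of strains, respectively. The minimal sketch below applies that convention to a hypothetical set of MICs, which are not values from the study.

```python
# Minimal sketch with hypothetical MIC values (ug/mL).
import numpy as np

def mic_percentile(mics, q):
    """Lowest MIC that covers at least the fraction q of isolates."""
    mics = np.sort(np.asarray(mics, dtype=float))
    idx = int(np.ceil(q * len(mics))) - 1
    return mics[idx]

if __name__ == "__main__":
    mics = [0.06, 0.12, 0.12, 0.25, 0.25, 0.5, 0.5, 1, 1, 2, 2, 2, 4, 4, 8]
    print("MIC50 =", mic_percentile(mics, 0.50), "ug/mL")
    print("MIC90 =", mic_percentile(mics, 0.90), "ug/mL")
```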
During the late PCV7 and early PCV13 periods (2008-12), the majority of pneumococcal cases associated with reduced susceptibility were caused by PCV13 serotypes ( figure 4). Figure 3: Evolution of pneumococcal serotypes with reduced susceptibility to erythromycin among strains that are either susceptible with increased exposure or resistant to penicillin, 2004-20 The number and proportion of pneumococcal cases caused by serotypes covered by PCV7, additional serotypes covered by PCV13, or not included in PCV13 (the top five or five most frequent) that were either susceptible with increased exposure or resistant to penicillin or erythromycin. Reduced susceptibility was defined as a minimum inhibitory concentration of 0·5 µg/mL or higher. PCV=pneumococcal conjugate vaccine. Our results suggest that in comparison with PCV13 or PCV15, PCV20 would increase by up to 30% the potential coverage of cases by strains with reduced susceptibility to β-lactams ( figure 4). Overall, the use of PPV23, despite containing three more serotypes than PCV20, offered similar protection against resistant strains ( figure 4). From the antibiotic perspective, for cefditoren and cefotaxime (which were the cephalosporins showing the best antimicrobial activity), the use of PCV20 would prevent more than 92% of all cases produced by pneumococcal strains that have reduced susceptibility to those antibiotics ( figure 4). Discussion Antibiotic treatment with β-lactam antibiotics, inclu ding the use of third-generation cephalosporins, is one the first options for the management of pneumococcal infections. 13,14 A major threat in public health is the rise of resistant strains that can increase mortality rates by reducing the efficacy of antibiotic treatment. 15 The use of PCVs in children and adults has been shown to be an effective intervention to control the burden of invasive and non-invasive disease and a great measure to reduce the effect of antimicrobial resistance. 16,17 In this study, we analysed the evolution of antimicrobial resistance in S pneumoniae in strains not susceptible to penicillin, including the contribution of different PCVs, to ameliorate the problem of antibiotic resistance. One of the main mechanisms for reduced susceptibility to β-lactam antibiotics (including penicillins and cephalosporins) is the mutation in penicillin-binding proteins. 18 Our results showed that the cephalosporin with the highest activity in terms of MIC 50 or MIC 90 values was cefditoren, which showed the greatest proportion (>80%) of susceptible strains during 2004-20. These results are in agreement with previous reports 19,20 that suggested a marked activity of this cephalosporin against penicillin-resistant pneumococcal strains, because of its high affinity to penicillin-binding protein 2X (PBP2X). Owing to its high antimicrobial activity, the proportion of strains resistant to cefditoren in our study was extremely low (<0·4%), despite the long-term use of this oral antibiotic in Spain since 2004. 21 These results were substantially different to those for the other oral cephalosporins tested, which had far higher proportions of resistant strains (68% for cefixime and 50% for cefpodoxime). Cefditoren, followed by cefotaxime, were the cephalosporins with the highest activity against serotypes during the study period. 
This activity is important against respiratory infections, because cefditoren has a similar bacterial spectrum to cefotaxime or ceftriaxone, and can be used as an oral treatment against community-acquired bacterial pneumonia in patients who have not been hospitalised or after intravenous treatment with parenteral cephalosporins. [21][22][23] Another benefit of using cefditoren is that, because its intrinsic activity is higher than for other cephalosporins, it could help to reduce the length of hospital stay and thereby the risk of hospital-acquired infection by multidrug-resistant strains. 24 Levofloxacin was one of the antibiotics with the lowest proportion of resistant strains. This result was in agreement with a 2016 surveillance study 6 that compared different countries, which found that levofloxacin was one of the most active agents against multidrug-resistant pneumococcal strains. 6 Our data show the effectiveness of the different PCVs for controlling the dissemination of pneumococcal-resistant strains, and suggest that the use of these vaccines in national immunisation schedules is a cost-effective countermeasure to antibiotic resistance. 16 In the pre-PCV period, the majority of cases were caused by serotypes included in the vaccines and were associated with multidrug resistance. 8,15,25 Our results reinforce this relationship, but also show a clear benefit in reducing vaccine-serotypes after the use of PCV7 and PCV13, although for serotype 19A, a situation of stability was observed in the past 5 years. This stability is intriguing, because PCV13 was included in the national paediatric immunisation schedule, which has had high coverage rates since 2016 and which led to the expectation of a more profound effect. This plateau perhaps therefore indicates the maximum benefit that can be achieved after several years of use. However, the emergence in our study of nonvaccine serotypes that harbour antibiotic resistance shows that this issue is a global threat, given that many other countries have reported similar replacement in pneumococcal serotypes and lineages. 5,[25][26][27] The emergence of penicillin-resistant strains of serotype 11A is concerning from a pathogenesis perspective. This serotype contains a particular clone (ST6521 11A ) that has become one of the most prevalent among serotype 11A, with an increased ability to produce biofilms and invasive disease by very efficiently diverting the host immune response. 15 Hence, the profound potential of this serotype to produce infection might explain why serotype 11A was the serotype with the second highest fatality rate in a lethality study. 28 Another non-PCV13 serotype that has emerged is serotype 24F. This serotype is also alarming, because it displays resistance to penicillin and erythromycin, and its prevalence in the paediatric and adult population is increasing in various countries. 5 A limitation of our study is that from each year we selected around 500 strains with penicillin susceptibility with increased exposure or resistance, rather than all pneumococcal strains, and therefore our results might underestimate the potential effect of PCVs in reducing the burden of disease caused by resistant serotypes. We did not include paediatric strains and, although the majority of serotypes affecting children are similar to those in adults, 5 our results might not be generalisable to children. 
During the first year of the COVID-19 pandemic, generic use of antibiotics to avoid co-infections with bacterial pathogens might explain the increased pro portion of pneumococcal strains resistant to different antimicrobial drugs. 29 This idea is consistent with a clinical trial 30 published in 2021 that advised against the routine use of azithromycin in people with suspected COVID-19 in the community because it might exacerbate the antimicrobial resistance problem. The increased resistance to penicillin for serotype 11A in Spain during the COVID-19 pandemic is worrying and deserves further attention, because it changes the consideration from a serotype with reduced susceptibility to a serotype that is resistant, according to MIC 90 values in 2016-2019 and 2020. The introduction of newer PCVs with a broader spectrum of covered serotypes might help to resolve the problem of non-PCV13 serotypes with antibiotic resistance. The difference in the effect of PCV15 compared with PCV13 was minimal in terms of increased coverage against non-susceptible strains to antibiotics, whereas PCV20 markedly enhanced the potential coverage against non-susceptible strains, as PCV20 could prevent 92% of strains not susceptible to cefotaxime. In the context of using PCVs that have a higher spectrum, such as PCV20 (with the potential risk of replacement by non-vaccine serotypes after their implementation), the antibiotics with the highest activity against non-PCV20 strains were cefotaxime as a parenteral option, and cefditoren as an oral option. This finding might be important in helping to avoid the selection of resistant strains after massive use of this vaccine in the general population. For resistance to erythromycin, the potential coverage of PCV20 and PPV23 is more limited than for penicillin because they did not prevent cases caused by serotype 24F, which was the most frequent cause of infection associated with erythromycin resistance. Overall, our results support the potential of cefditoren as an oral administration option for pneumococcal disease, based on its high antimicrobial activity, and highlight the increase in non-PCV13 serotypes, especially serotype 11A, which can be further prevented by the use of PCV20 or PPV23. Contributors JY was responsible for the management of the epidemiological surveillance data. JY wrote the first draft of the paper. ML, JS, BLR, IDR, CPG, DL, MG, PC, FGC, MD, and JY provided technical support for the study. MG, PC, MD, and JY contributed to the study conception, design, data analysis, and interpretation. All authors contributed to the review of the different drafts, and approved all versions of the manuscript. All authors had full access to all the data in the study and had final responsibility for the decision to submit for publication. JS, MD, and JY accessed and verified all the data. Data sharing All data requests should be submitted to MD<EMAIL_ADDRESS>or JY (jyuste@isciii.es). Requests will be assessed for scientific rigour before being granted and a data-sharing agreement might be required.
6,265
2022-08-01T00:00:00.000
[ "Medicine", "Biology" ]
The First Approved “Deuterated” Drug: A Short Review of the Concept The first “deuterated” drug has recently been approved by the U.S. FDA (Food & Drug Administration). A “deuterated” drug is a drug in which the hydrogen atom in one or more of the carbon-hydrogen bonds in its chemical structure is replaced by deuterium (“heavy hydrogen”, a hydrogen isotope that has a neutron, i.e., one neutron instead of the usual no neutrons). A carbon-deuterium (C-D) bond is more stable in the body than a carbon-hydrogen (C-H) bond. If the deuterium is strategically located in a drug’s chemical structure, the extra stability of the bond will be more resistant to metabolic breakdown, and the duration of drug action will be prolonged. We review the general concept of deuterated drugs, historical examples of the classes of application, and the new approval. Introduction Although there are notable exceptions, the majority of drugs are administered over the course of several days, or even much longer.In such cases, the dose and dosage regimen needs to be designed such that the blood level (concentration) of the drug falls within the therapeutic "window", i.e., between too low (below the required threshold) and too high (into toxicity).Keeping the level constant over time is almost impossible, unless the drug is infused at a constant rate directly into a vein.Alternatively, a drug could be administered so frequently that the process mimics continuous infusion.Neither of these is very practical for most drugs or for most patients. A good design of a dosing regimen keeps the drug's blood level (surrogate for target organ drug concentration) within the therapeutic window as much as possible throughout the course of therapy.A key factor in this design is the drug's half-life (t 0.5 ), which is the time that it takes for a drug level (usually the blood level) to drop to 1/2 its initial value.Several factors contribute to a drug's half-life, but one of the most important for most drugs is the rate of its metabolism [1].Anything that slows a drug's rate of metabolism will increase its half-life.Drug metabolism involves the breaking of chemical bonds, so something that inhibits or delays the breaking of chemical bonds will increase the half-life, which usually translates into favorable features.We present the conceptual framework for this approach, and we discuss a few drugs as representative examples. Drug Metabolism, Duration of Action, and Compliance The typical profile of plasma level of a drug demonstrating 2 nd -order pharmacokinetics, the more common type [2], is shown in the upper panel of Figure 1.The ascending limb of each tracing follows administration of the drug (indicated by the arrows).The descending limb of each tracing occurs because of elimination of drug.For most current drugs, the major contributor to drug elimination is metabolism.Thus, if metabolism is prolonged, so is elimination.The resulting pharmacokinetic curves then look like the bottom panel in the figure.Note that the requirement for frequency of dosing is reduced.If the dotted line in the figure represents one day, then prolongation of metabolism translates to a dosing regimen of once-per-day instead of twice-per-day.Patient compliance is better when only once-per-day dosing is required [3] [4]. 
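The dosing argument above can be made concrete with a simple superposition calculation: if elimination is approximated as first order (a simplification adopted here only for illustration), the trough level just before each dose depends on the ratio of the dosing interval to the half-life, so doubling the half-life allows the interval to be doubled without the trough falling below the therapeutic threshold. The sketch below uses a one-compartment model with instantaneous absorption and placeholder numbers; it does not model any specific drug.

```python
# Illustrative one-compartment repeated-dose sketch; all values are placeholders.
import numpy as np

def trough_level(dose, t_half, interval_h, n_doses=20):
    """Plasma level just before the next dose, after n_doses at a fixed interval."""
    k = np.log(2) / t_half                       # first-order elimination rate constant
    times = np.arange(n_doses) * interval_h      # dosing times
    t_eval = n_doses * interval_h                # instant just before dose n+1
    return float(np.sum(dose * np.exp(-k * (t_eval - times))))

if __name__ == "__main__":
    dose, threshold = 1.0, 0.5                   # arbitrary units
    for t_half, interval in [(8.0, 12.0), (16.0, 24.0)]:
        c_min = trough_level(dose, t_half, interval)
        ok = "above" if c_min >= threshold else "below"
        print(f"t1/2={t_half:.0f} h, dosing every {interval:.0f} h: "
              f"steady-state trough ~ {c_min:.2f} ({ok} the hypothetical threshold)")
```

With these placeholder values, an 8 h half-life dosed every 12 h and a 16 h half-life dosed every 24 h give the same steady-state trough, which is the behaviour sketched in Figure 1.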
Chemical Reactions in Drug Metabolism Drugs are metabolized in two major ways: modification of their chemical structure (termed Phase 1), and linking to an endogenous substance (termed Phase 2).Both ways involve chemical reactions and both tend to decrease efficacy (e.g., by decreasing the goodness of "fit" to a receptor) and to increase elimination rate (e.g., by making the molecule more hydrophilic).All of these reactions involve the breaking of bonds and the making of new bonds.Most of these reactions are catalyzed by enzymes.The effectiveness of catalysis is determined by the specific atoms and the 3-dimensional shape of the drug molecule, and the complementary fit with the enzyme.An alteration in any one of these components will alter the efficiency of catalysis, and hence the rate of drug metabolism (Figure 2).Common chemical reactions involved in the metabolism of drugs include one or more "non-synthetic" types (e.g., involving oxidation, reduction, hydrolysis, cyclization, decyclization, or the addition of oxygen or removal of hydrogen) and "synthetic" types, which involve joining (conjugation) the drug molecule with an endogenous substance (such as glucuronic acid to form Breaking/Making Chemical Bonds during Drug Metabolism As a drug molecule is metabolized to one or more metabolite molecules, it proceeds to a lower-energy state.But it must pass through the transition state, which is at higher energy than is the parent drug molecule (Figure 3) [5].Enzymes facilitate the reaction by lowering the activation energy needed to get to the transition state.Carbon bonds are almost always involved, e.g., C-H, O-H, N-H, etc. Deuterium Deuterium is a stable isotope of hydrogen.In contrast to the more common hydrogen isotope on Earth (one proton and no neutron in the nucleus), the nucleus of deuterium consists of one proton and one neutron (Figure 4).It is not radioactive, as is tritium (one proton and two neutrons). Carbon-Deuterium Bonds The effect of deuterium is summarized by Blake et al. ( 1975): [6] The difference in mass between deuterium and hydrogen causes the vibrational frequencies of carbon, oxygen, and nitrogen bonds to deuterium to have lower frequencies than corresponding bonds to hydrogen.As a result, the chemical bonds involving [deuterium] will general be more stable than those of [hydrogen]. …The more stable deuterium bond requires a greater energy of activation to achieve the transition state; as a consequence, the rate of reaction involving a bond to deuterium is generally slower than that involving a bond to hydrogen. Thus, substitution of deuterium for hydrogen in a chemical bond can affect significantly the rate of bond cleavage and exert marked effects on the relative rates of chemical reactions.Large isotope effects on reaction rates are apparent where cleavage involves a bond to deuterium at the reaction site. An observable isotope effect will only be apparent, of course, where the breaking of a C-H or C-D bond is involved in the rate-determining step. Early Examples Efforts to modify the pharmacology of drugs by using deuterium date back at least several decades.As reviewed by Blake et al. (1975) [6], the effect of deuterium on the duration (metabolism) of a variety of classes of drugs have been investigated, including barbiturates, anesthetics, antibiotics, and others. Recent Approval The first U.S. 
FDA-approved deuterated drug is deutetrabenazine (AUSTEDO®; by Teva Pharmaceutical Industries) to treat the involuntary movements (chorea) of Huntington's disease.Tetrabenazine has been used clinically for many years as treatment for movement disorders.The molecule has two methoxy groups, which are targets of rapid metabolism to demethylated metabolites that subsequently undergo Phase 2 conjugation reactions.Deutatetrabenazine (Figure 5) [7] is metabolized at a slower rate than is non-deuterated tetrabenazine, which imparts less patient-to-patient variability and a dosing advantage to the deuterated drug. Perspective and Conclusions One or more well-placed deuterium isotope in place of hydrogen in a drug molecule can delay metabolism and extend a drug's half-life.The hoped-for result is a drug that can be administered less frequently, thereby enhancing patient compliance.Because the strategy is so attractive for new drugs and for revamping older ones, at least a dozen companies are pursuing the idea. However, the strategy does not always work, and the path to regulatory approval is not always shortened.In addition, there is question about patentability and other issues of IP (intellectual property) protection.Nevertheless, the approach promises to provide a new tool for drug development. Figure 1 . Figure 1.The effect of prolonged metabolism on pharmacokinetic curves.The upswing of each curve is a function of drug de-formulation, absorption and distribution.The downswing results from metabolism and elimination.If metabolism is prolonged, dosing (arrows) can be less frequent, e.g., once/d instead of twice/d if dotted line = 24 h.Modified from Wikimedia Commons. Figure 2 . Figure2.The rate of a drug-metabolism reaction depends on the rate of catalysis by an enzyme.The rate of catalysis is, in turn, a function of the complementarity of the 3-dimensional shape of the drug and the enzyme (protein).From Wikimedia commons. Figure 3 . Figure 3.A drug metabolism reaction proceeds through several steps before reaching the final product(s).The rate at which the reactants overcome the activation energy influences the rate of reaction.Therefore, if the chemical bonds in one molecule are stronger than corresponding bonds in another, the rate of metabolism of the first drug will be slower.Modified from Wikimedia Commons.
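The size of the rate effect described in the Blake et al. passage can be bounded with a standard zero-point-energy estimate. Using typical C–H and C–D stretching wavenumbers (general physical-organic textbook values, not figures from this review), the maximum primary kinetic isotope effect near room or body temperature is roughly

```latex
% Back-of-the-envelope primary kinetic isotope effect from zero-point energies
% (typical stretching wavenumbers; an estimate, not data from this review).
\frac{k_{\mathrm{H}}}{k_{\mathrm{D}}}
  \approx \exp\!\left(\frac{\Delta E_{\mathrm{ZPE}}}{k_{\mathrm{B}} T}\right)
  = \exp\!\left(\frac{\tfrac{1}{2}\,h c\,(\tilde{\nu}_{\mathrm{C{-}H}} - \tilde{\nu}_{\mathrm{C{-}D}})}{k_{\mathrm{B}} T}\right)
  \approx \exp\!\left(\frac{\tfrac{1}{2}\,h c \,(2900 - 2100)\ \mathrm{cm^{-1}}}{k_{\mathrm{B}}\,(298\ \mathrm{K})}\right)
  \approx 7
```

In other words, when cleavage of the labelled bond is rate-determining, a C–D bond can be broken up to several-fold more slowly than the corresponding C–H bond, which is the margin the deuteration strategy exploits to slow metabolism.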
1,929.2
2018-10-26T00:00:00.000
[ "Chemistry" ]
EFFECT OF DIFFERENT YARN COMBINATIONS ON AUXETIC PROPERTIES OF PLIED YARNS : This study presents the effects of a novel plied yarn structure consisting of different yarn components and yarn twist levels on the Poisson’s ratio and auxetic behavior of yarns. The plied yarn structures are formed with bulky and soft yarn components (helical plied yarn [HPY], braided yarn, and monofilament latex yarn) and stiff yarn components (such as high tenacity [HT] and polyvinyl chloride [PVC]-coated polyester yarns) to achieve auxetic behavior. Experimental results showed that as the level of yarn twist increased, the Poisson’s ratios and the tensile modulus values of the plied yarns decreased, but the elongation values increased. A negative Poisson’s ratio (NPR) was obtained in HT–latex and PVC–latex plied yarns with a low twist level. The plied yarns formed with braid–HPY and braid–braid components gave partial NPR under tension. A similar result was achieved for yarns with HT–latex and PVC–latex components. Since partial NPR was seen in novel plied yarns with braided and HPY components, it is concluded that yarns formed with bulky–bulky yarn components could give an auxetic performance under tension. Liu et al. [12] reported a novel interlaced-helical wrapping yarn with NPR and a stable structure. The differences in geometry and auxetic behavior between helical and interlaced-helical yarn structures were analyzed. Results showed that the interlaced-helical complex yarn has advantages over a helical wrap yarn with the same materials and initial wrap angle in terms of both structural stability and auxetic effect. The results also showed that all interlaced-helical wrap yarns with lower initial wrap angle and larger tensile modulus of the stiff yarns would produce a better auxetic effect [12]. Jiang and Hu [13] reported a new type of auxetic yarn made with circular braiding technology to overcome the yarn slippage problem in the conventional HAY structure. Results showed that the slippage problem was eliminated in the newly developed auxiliary yarn structure; moreover, early activation and a higher magnitude of NPR could be achieved. The results also showed that all structural parameters, including stiff yarn number and arrangement, core yarn, and elastic yarn diameter, influence the auxetic effect of the novel yarn structure [13]. A novel type of braided structure with NPR behavior was proposed in a study on the tubular braided structure. It was found that the NPR behavior could be achieved in a specially designed tubular braided structure with the use of a wrap yarn having a higher modulus than the braiding yarns and core yarn. It was shown that the effects of three structural parameters, such as the initial wrap angle, the initial braiding angle, and the braiding yarn diameter, influenced the tensile behaviors and the NPR effect of the auxetic braided structure. It was found that the wrap angle had more noticeable influences than the initial braiding angle and the braiding yarn diameter. It was concluded that an auxetic braided structure with a lower initial wrap angle, a higher initial braiding angle, and a larger braiding yarn diameter had a better auxetic performance [14]. In this study, the effects of a novel plied yarn structure, consisting of different yarn components and yarn twist levels, on the Poisson's ratio and auxetic behavior of yarns were examined. 
In particular, it was investigated whether the plied yarn structures formed by bulky yarn components, such as braid and helical plied yarn (HPY), showed changes in the Poisson's ratio under tension and a possible auxetic behavior.
Materials
In this study, the tension-related auxetic effect of novel plied yarns at different twist levels (100, 200, and 300 T/m) formed by yarn components with different structural properties has been evaluated. The rotating probe of the Officine Brustio test device (Figure 2) was used to twist the two component yarns together to form the novel plied yarn structure. The yarn components to be combined were attached between the jaws and twisted to give the specified number of twists per unit length. The structural properties and the tensile strength values of the component yarns are given in Tables 1 and 2, respectively. In Table 3, the codes of the novel plied yarns and the tensile strength properties at twist levels of 100, 200, and 300 T/m are given.
Tensile testing and calculation of Poisson's ratio
The tensile tests were carried out on a Shimadzu AG-X plus testing machine. The gage length and tensile test speed were set at 250 mm and 10 mm/min, respectively. To calculate the radial strain of the yarn structure for any given axial strain, the yarns were photographed at 50 times magnification using a digital microscope (INSIZE ISM-PRO) with a time interval of 5 sec (which corresponded to a 0.83 mm extension), until an elongation of 30 mm was obtained during the tensile testing. The setup of the testing system is presented in Figure 3. The average yarn diameter was calculated by software developed in MatLab, which takes the average of 1,200 readings of the diameter through the length of a 10 mm yarn. Moreover, for each yarn, this measurement was taken from both the middle and the bottom regions of the yarn held by the clamp of the device. It was seen that the yarn diameter values obtained from the two regions were close to each other. Based on this result, the yarn structure was evaluated as homogeneous. Considering that the yarn consists of a homogeneous and continuous structure, it can be said that the elongation reflects equally on each region of the yarn. The average values of the results obtained from the two regions were taken as the yarn diameter in the evaluations. In addition, the yarn diameters were measured manually with the help of the ImageJ program on the yarn images. Correlation coefficients between the yarn diameter values obtained from ImageJ and the results obtained from MatLab were calculated. Correlation coefficient values between the results obtained by the two measurement methods were in the range of 0.9966-0.9991. A digital microscope was utilized to capture images of the yarns at a 10 mm distance. Subsequently, a thresholding operation was applied to the images using MatLab to create binary images and obtain the outer edges of the yarns. A computer program developed to obtain the yarn diameter processed the binary image in the following order: the number of pixels was counted starting from one side of the yarn edge to the other side. This operation was applied for each row in the binary image along the yarn. Then, the average number of pixels between the outer edges was calculated. Finally, the average pixel value was converted to millimeter scale using the known image dimensions in millimeters to obtain the average yarn diameter. D0 and D represented the diameters at the initial and the stretched states of the yarns, respectively. The radial strain εr was calculated from Eq. (1). As the axial strain εa was directly provided by the tensile tester, the Poisson's ratio ν of the plied yarn structure was calculated using Eq. (2) [11,12]:
εr = (D − D0)/D0   (1)
ν = −εr/εa   (2)
The binary images of the yarns produced at different twist levels (50 times magnification) are presented under different tension values, such as the tension-free state, 5 sec (0.83 mm elongation), 10 sec (1.67 mm elongation), and 60 sec (10 mm elongation).
Assessment of the changes in Poisson's ratios of the plied yarns under different elongations
The Poisson's ratios for 10, 20, and 30 mm elongations depending on the different twist levels of the novel plied yarns are presented in Figures 4-6. In Figure 4, it was observed that for novel plied yarn structures containing a monofilament latex component, NPRs were obtained at a low twist level (100 T/m). This result was obtained when the yarns combined with latex, as in the D and E yarns, were stiff component yarns, such as HT and polyvinyl chloride (PVC)-coated polyester. While it was observed that the yarn structures formed with HT and PVC yarns with latex components had an NPR at 100 T/m, the Poisson's ratio became positive as the twist level was increased. In the latex-braid component yarn structure (G), NPR could not be obtained. However, it was seen that the G yarn with a low twist level gave a low Poisson's ratio at 10 mm elongation compared to other yarns. In Figure 4, for the D, E, and G yarn structures, it was seen that as the yarn twist level increased, the values of Poisson's ratio increased. However, in the other yarn groups (C, F, A, and B), as the twist level increased, the Poisson's ratio of the yarns decreased significantly. When the properties of the C, F, A, and B yarns were examined (Table 3), it was observed that these yarn groups have a bulky multifilament structure such as braid and HPY. Especially in the C yarn (braid-braid), it was found that the Poisson's ratio approached ≈ 0 at high twist levels such as 300 T/m. This result showed that the stiff and bulky components forming the yarn at high twist levels did not cause a significant change in the Poisson's ratio. Correlation coefficients between the yarn twist value and the tenacity, elongation, and Poisson's ratio of the yarns, calculated across the twist levels of the novel plied yarn structures consisting of different structural component yarns, are presented in Table 4. It was seen that as the twist level increased, the tenacity values of the yarns decreased (due to the negative correlation) and the elongation values increased (due to the positive correlation). It was also observed that as the twist level increased, the Poisson's ratios of the yarns decreased (due to the negative correlation). Correlation coefficients of the D and E yarns are not given in Table 4 since these yarns showed a tendency from an NPR to a positive value as the twist level increased. In Figures 4-6, when the Poisson's ratios at 10, 20, and 30 mm elongation were examined, it was observed that the Poisson's ratios decreased when the elongation value increased. Moreover, it was found that the plied yarns with braid components at a high twist level gave low Poisson's ratios under high elongation. The changes in yarn diameters and Poisson's ratios along the 30 mm extension distance of each yarn structure are presented in the following sections.
Evaluation of the change in yarn diameter and Poisson's ratios of the A-coded yarn structure
The changes in the average yarn diameter and the Poisson's ratio of the A-coded yarn (HPY-HT) depending on elongation are given in Figures 7 and 8. In Figure 7, it was observed that the average yarn diameter of the A-coded yarn formed with HPY-HT decreased slightly and linearly due to the tension. The changes in the average yarn diameters in yarns with 200 and 300 T/m twist levels were close to each other. Depending on the tension in the yarn with 100 T/m, the change in the yarn diameter was significant up to an elongation value of 10 mm, after which it had approximately the same tendency as the other yarns. In Figure 8, when the Poisson's ratios of the yarns were examined, it was observed that the yarn produced with a low twist level showed a significant change due to tension. The Poisson's ratios of the yarns produced with twist levels of 200 and 300 T/m slightly decreased due to the tension. Here, an NPR could not be obtained from a yarn structure produced by twisting together a bulky HPY yarn component and a stiff HT yarn. However, although the results did not yield an NPR for the HPY-HT plied yarn structure, it was observed that the yarn twist values used in such a yarn structure have a significant effect on the Poisson's ratio.
Evaluation of the change in yarn diameter and Poisson's ratio of the B-coded yarn structure
The changes in the average yarn diameter and Poisson's ratio of the B-coded yarn (HPY-braid) depending on elongation are given in Figures 11 and 12. Moreover, the structure and binary images under different tension values of the B-coded yarn produced at different twist levels are presented in Figures 13 and 14. In Figure 11, when the changes in the average yarn diameter of the B-coded yarn were examined, fluctuating changes were observed in the average yarn diameter, unlike the changes of the A-coded yarn. It was seen that these fluctuating changes in the yarn diameter were more pronounced in the yarn with a low twist level (100 T/m). As a remarkable case in this study, it could be said that the yarns that have fluctuating changes in the average yarn diameter could yield partial NPRs under tension. Normally, Poisson's ratios are calculated by taking the difference between the initial diameter of the yarn and the diameter under tension (Eqs (1) and (2)). Therefore, Poisson's ratios are obtained depending on the tensionless diameter value of the yarn. However, when the change of the yarn diameter was examined in Figure 11, it could be seen that for the 100 T/m yarn, the yarn diameter obtained at 9.17 mm (if taken as the initial yarn diameter value) was lower than the yarn diameter obtained in the elongation range of 10.00-14.17 mm. Similarly, the yarn diameter obtained at 23.33 mm (if taken as the initial yarn diameter value) was lower than the yarn diameter obtained in the elongation range of 24.17-30 mm. As a result of this, it could be said that the yarns that have a fluctuating change in average yarn diameter could yield partial NPRs under tension. The partial Poisson's ratios of the 100 T/m twisted B-coded yarns are given in Table 5. In Figure 12, when the changes in Poisson's ratios were examined, a positive Poisson's ratio was also obtained in the B-coded yarn structure. As observed in the A-coded yarn structure, it was found that the Poisson's ratios were low at high twist levels in the B-coded yarn structure.
Unlike the A-coded yarn, the Poisson's ratio of B-coded yarn (HPY-braid) with 100 T/m was lower. It was observed that the Poisson's ratio of B-coded yarn structure at 300 T/m twist level remained stable, as was found in the A-coded yarn. Evaluation of the change in yarn diameter and Poisson's ratios of the C-coded yarn structure The changes in the average yarn diameter and Poisson's ratio of C-coded yarn (braid-braid) depending on elongation are given in Figures 15 and 16. In addition, the structure and binary images under different tension values of C-coded yarn produced at different twist levels are presented in Figures 17 and 18. In Figure 15, when the changes in the average yarn diameter of C-coded yarn were examined, a fluctuating change was observed, as was found in the B-coded yarn. This fluctuating change in yarn diameter was more pronounced in the yarn with a low twist level (100 T/m). In Figure 15, when the change in yarn diameter was examined, it could be seen that for the 100 T/m yarn, the yarn diameter obtained at 5.00 mm (if the initial yarn diameter value was accepted) was lower than the yarn diameter obtained in the elongation range of 5.83-10.83 mm. Moreover, the yarn diameter obtained at 20.00 mm (if the initial yarn diameter value was accepted) was lower than the yarn diameter obtained in the elongation range of 20.83-30 mm. As a result of this, it could be said that the yarns that have fluctuating changes in the average yarn diameter could yield partial NPRs under tension. Partial Poisson's ratios of the 100 T/m twisted C-coded yarns are given in Table 6. In Figure 16, when the changes in the Poisson's ratios of the C-coded yarn consisting of braid-braid yarn components were examined, it was seen that the 200 and 300 T/m twist levels were in the same trend. Furthermore, it was found that for these yarns (unlike the A and B yarns), Poisson's ratios approximately equaling zero could be obtained under approximately 1 mm elongation. High Poisson's ratios were obtained at low twist value, as seen in the A-and B-coded yarns. In the yarns with 100 T/m twist level, it was observed that after an elongation value of 10 mm, the Poisson's ratios approach those of the yarn structures with 200 and 300 T/m twist levels. In Figure 16, it was observed in the 200 and 300 T/m twisted yarn structure with the braid-braid yarn component that the Poisson's ratios, which were ≈ 0 in the 1 mm elongation state, increased positively due to the elongation. Besides, this increase did not rise to very high values and remained at a Poisson's ratio of about +2. Evaluation of the change in yarn diameter and Poisson's ratios of the D-coded yarn structure The changes in the average yarn diameter and Poisson's ratio of the D-coded yarn (HT-latex) depending on elongation are given in Figures 19 and 20. In addition, the structure and binary images under different tension values of D-coded yarn produced at different twist levels are presented in Figures 21 and 22. In Figure 19, when the changes in the average yarn diameter of the D-coded yarn were examined, a fluctuating change was observed, as was found in the B-and C-coded yarns. It was seen that this fluctuating change in yarn diameter was more pronounced in yarns with a low twist level (100 T/m). In Figure 19, when the change of yarn diameter was examined, for the 100 T/m yarn, the yarn diameter obtained at 5.83 mm (if the initial yarn diameter value was accepted) was lower than the yarn diameter obtained in the elongation range of 6.67-16.67 mm. 
Moreover, the yarn diameter obtained at 18.33 mm (if the initial yarn diameter value was accepted) was lower than the yarn diameter obtained in the elongation range of 19.17-27.50 mm. Partial Poisson's ratios of the 100 T/m twisted C-coded yarns are given in Table 7. In Figure 20, when the changes in the Poisson's ratios of the yarn consisting of HT-latex components were examined, an NPR was obtained in the range of 7.50-13.33 mm elongation for the yarn with 100 T/m. Besides, it was seen that the Poisson's ratios remained at quite a low level (range: ≈ 0.0-0.5) at all twist levels of the yarn structure formed by these components along the 30 mm extension distance. It was observed that the change in Poisson's ratio due to tension was lower in a D-coded yarn structure with latex-HT compound than the yarn structures coded A, B, and C. Evaluation of the change in yarn diameter and Poisson's ratios of the E-coded yarn structure The changes in the average yarn diameter and Poisson's ratio of the E-coded yarn (latex-PVC) depending on elongation are given in Figures 23 and 24. Moreover, the structure and binary images under different tension values of the E-coded yarn produced at different twist levels are presented in Figures 25 and 26.. In Figure 23, when the changes in the average yarn diameter of E-coded yarn were examined, a fluctuating change was observed, as was found in the B-, C-, and D-coded yarns. It was seen that this fluctuating change in yarn diameter was more pronounced in the yarn with a low twist level (100 T/m). The E-coded yarn gave NPRs from the beginning up to 11.67 mm elongation value (Figure 24). When the partial Poisson's ratios were examined (Figure 23), it could be seen that for the 100 T/m yarn, the yarn diameter obtained at 20.83 mm (if the initial yarn diameter value was accepted) was lower than the yarn diameter obtained in the elongation range of 21.67-30.00 mm. Partial Poisson's ratios of the 100 T/m twisted C-coded yarn are given in Table 8. In Figure 24, when the changes in the Poisson's ratios of the yarn consisting of latex-PVC yarn components were analyzed, it was seen that the 100 T/m twisted yarn gave NPRs from the beginning up to 11.67 mm elongation value. Although the same result was observed in the yarn where the D-coded latex yarn component was used, an NPR was obtained after a certain tension in the D-coded yarn. In the E-coded yarn structure, the combination of the PVC-coated polyester yarn component, which had a harder structure than the HT yarn component used in the D yarn, with the latex yarn could have caused this result. Evaluation of the change in yarn diameter and Poisson's ratios of the F-coded yarn structure The changes in the average yarn diameter and Poisson's ratio of the F-coded yarn (braid-PVC) depending on elongation are given in Figures 27 and 28. Moreover, the structure and binary images under different tension values of the F-coded yarn produced at different twist levels are presented in Figures 29 and 30. In Figure 27, it was observed that the average yarn diameter of the F-coded yarn formed with the braid-PVC had a linear decreasing slope due to the tension, as was found in the A-coded (HPY-HT) yarn. In Figure 28, when the Poisson's ratios of the yarn consisting of braid-PVC components were examined, positive Poisson's ratios were obtained. The most stable yarn structure was observed for the 300 T/m twisted yarn. There was no significant change in the Poisson's ratios due to tension in this yarn. 
The most significant change in Poisson's ratio was seen in the 100 T/m twisted yarn. It was observed that an NPR could not be obtained from the yarn structure formed by braid-PVC (bulky-stiff) components. However, it was observed that the Poisson's ratios of the braid-PVC plied yarn at a high twist level (300 T/m) remained approximately constant and low (≈ +1.5) under tensile loading.
Evaluation of the change in yarn diameter and Poisson's ratios of the G-coded yarn structure
The changes in the average yarn diameter and Poisson's ratio of the G-coded yarn (braid-latex) depending on elongation are given in Figures 31 and 32. Moreover, the structure and binary images under different tension values of the G-coded yarn produced at different twist levels are also presented. In Figure 31, the average yarn diameter of the G-coded (braid-latex) yarn twisted with 200 and 300 T/m showed a linear decreasing slope due to the tension, as was observed in the A (HPY-HT) and F (braid-PVC) coded yarns. It was observed that there was a slight fluctuation in the 100 T/m twisted yarn; this tendency first decreased, then remained approximately constant at the same level, and then gave decreasing yarn diameter values. In Figure 32, when the Poisson's ratios of the G-coded yarn produced from braid-latex yarn components were examined, it was seen that stable Poisson's ratios were obtained at 200 and 300 T/m. It was also observed that the 200 and 300 T/m twisted yarns gave similar trends.
Conclusion
In this study, the changes in Poisson's ratios of novel plied yarns consisting of different component yarn structures and produced with different twist levels were examined. The aim was to investigate the Poisson's ratios, as well as the possible auxetic effects, of novel plied yarn structures formed by bulky/soft (HPY, braid, and latex) and stiff (HT and PVC) yarn components. The experimental results showed that as the twist level increased, the Poisson's ratios and the tenacity values of the novel plied yarns decreased, but the elongation values increased. NPR was observed in HT-latex and PVC-latex plied yarns with a low twist level. As a remarkable case in this study, it could be said that the yarns that have fluctuating changes in the average yarn diameter could yield partial NPRs under tension. Results showed that the plied yarn structures consisting of braid-HPY and braid-braid component yarns gave a partial NPR effect under tension. A similar result was achieved in yarns with HT-latex and PVC-latex components. Since the partial NPR effect was observed in novel plied yarns with braid and HPY components, it was concluded that yarns formed with bulky-bulky yarn components could give an auxetic performance under tension. Experimental results showed that plied yarn structures with fluctuating yarn diameters under tension could give partial NPR and exhibit auxetic behavior under tension. It was also found that the fluctuating changes in yarn diameter were more pronounced in yarns with a low twist level. The different structural architectural designs of the yarns examined in the study are based on the auxetic behavior of plied structures formed by the combination of bulky yarn groups and rigid yarn groups. Besides, the effect of yarn twist on this plied yarn structure was evaluated. In this study, the NPR effect could be obtained from structures consisting of latex and rigid structured yarn components. In order to evaluate whether a structure consisting of bulky yarn components, such as braid and HPY, could exhibit an auxetic performance under tension, it was planned to examine the tension-dependent auxetic performances of plied yarns composed of bulky-rigid and bulky-bulky yarn components. It was tested whether the yarn structure consisting of braid and HPY could also expand in the transverse direction under tension. As a result of the study, it was observed that the structure composed of bulky yarn components, such as braid and HPY, exhibits partial NPR behavior under tension. It could be envisaged that such plied yarns could be used in applications where they could exhibit an intermittent (fluctuating) auxetic property up to a certain elongation stage after a certain tension level. For this purpose, it is proposed that research should be continued to produce braid and HPY yarn structures with different structural parameters to obtain partial NPR.
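As a computational supplement to the procedure defined in Eqs. (1) and (2), the sketch below shows how the ordinary and "partial" Poisson's ratios could be derived from a series of diameter readings taken during a tensile test. The diameter trace, the gage length usage, and the local-reference rule for the partial ratios are illustrative assumptions, not the measured data behind Tables 5-8.

```python
import numpy as np

gage_length = 250.0                                  # mm, as in the tensile test setup
elongation = np.arange(0.0, 30.0 + 0.01, 0.83)       # mm, one reading every 5 s
# Hypothetical diameter trace (mm) with a mild fluctuation, stand-in for the MatLab output
D = 1.20 - 0.004 * elongation + 0.01 * np.sin(elongation / 3.0)

# Eq. (1): radial strain relative to the tension-free diameter D0
D0 = D[0]
eps_r = (D - D0) / D0
# Axial strain supplied by the tensile tester
eps_a = elongation / gage_length

# Eq. (2): Poisson's ratio (undefined at zero axial strain)
nu = np.full_like(eps_r, np.nan)
nu[1:] = -eps_r[1:] / eps_a[1:]

# "Partial" Poisson's ratios: re-reference each step to the previous reading, so a local
# increase in diameter over a step shows up as a locally negative (partial NPR) value
eps_r_step = np.diff(D) / D[:-1]
eps_a_step = np.diff(elongation) / gage_length
nu_partial = -eps_r_step / eps_a_step

print("conventional nu at 10/20/30 mm:",
      [round(nu[np.argmin(abs(elongation - e))], 3) for e in (10, 20, 30)])
print("elongation steps with partial NPR (mm):", elongation[1:][nu_partial < 0])
```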
5,905.2
2021-10-15T00:00:00.000
[ "Materials Science" ]
A Model to Approximate the Distribution of Rank Order Associations
The relationship between two sets of ranks can be evaluated by several coefficients of rank-order association. To judge the significance of an observed value of one of these statistics, we need a reliable procedure for determining the p-value of the test. In several works, the Student's t distribution has been suggested as being relevant for the description of the null distribution of many coefficients. In this article, we propose a new model of density function, the generalized Gaussian on a finite range, which can be used to model data exhibiting a symmetrical unimodal density with a bounded domain. Several simulations illustrate the advantages of this technique over conventional methods. This is particularly useful in the case where the number of ranks is larger than the threshold for which the exact null distribution is known, but lower than the threshold for which the asymptotic Gaussian approximation becomes valid.
Introduction
The extent of agreement between two rankings of n items, numbered from 1 to n, can be tested by using a non-parametric statistic of rank correlation in place of the Pearson product-moment correlation. The most well-known statistics of this type are the Spearman, Kendall and Gini coefficients, which will be denoted, respectively, as r1, r2 and r3. The former is the most often used measure in research in which the dependence is assumed monotonic but otherwise arbitrary. In comparison with r1 and r2, Gini's r3 seems to be applied rather rarely at present, although its characteristics are similar, and sometimes superior, to those of the other rank correlations. To judge the significance of an observed value of one of these statistics, say rh, we need the exact distribution of rh under the hypothesis of independence or, at least, a reliable procedure for determining the p-value of the test. Significance levels could, for example, be calculated using asymptotic methods. In this regard, the convergence to the Gaussian distribution renders its use legitimate in interpolating the p-values of rh, h = 1, 2, 3. Nonetheless, Olds (Olds, E. G., 1938) already states that a distribution with a finite range causes trouble at the tails when a Gaussian fit is attempted, and this is particularly relevant to studies in which we are interested in the tails. Kendall et al. (Kendall, et al., 1939) add that the Gaussian approximation is satisfactory for moderately large values of n, but for small values it is subject to the disadvantage inherent in any attempt to represent a distribution of finite range by one of infinite range, that is, the fit near the tails is not likely to be very good. On the other hand, rank correlation statistics lie in the interval from −1 to 1, and we think it is better for clarity to test them by using a theoretical curve with a bounded, rather than infinite, domain. In order to circumvent these difficulties, many researchers have looked for probability densities which are capable of fitting the distribution of rank correlations appropriately, including the Johnson S_B (Johnson, N-L., 1949) and the Tadikamalla-Johnson L_B (Tadikamalla & Johnson, N-L., 1994) systems. In particular, Pitman (Pitman, E. J. G., 1937) noted that the first four moments of the r1-distribution were similar to the first four moments of the symmetrical beta or Pearson type II distribution. Continuing this idea, Landenna et al.
(Landenna, G., 1989) proposed the symmetrical beta for the Gini coefficient r3, and Vittadini (Vittadini, G., 1996) suggested it for the Kendall coefficient r2. One key factor behind the wide diffusion of this model is the strict relationship between the symmetrical beta curve and the Student's t density function (Willink, 2009). This allows for the use of readily available tables and hence ensures computational convenience and simple checking of results. Our objective in this paper is to devise a new model for estimating the p-values of some rank association indices in the case where n is larger than the threshold for which the exact null distribution is known but lower than the value of n for which the Gaussian approximation becomes valid. The structure of the paper is as follows. In Section 2, we succinctly discuss the characteristics of r1, r2 and r3. A new density function, the generalized Gaussian on a finite range (GGFR), is introduced in Section 3, and the prediction of p-values is presented in Section 4. We conclude in Section 5.
Indices of Rank Order Association
The degree of monotone association between rankings can be measured by a rank index of association. The coefficients reported in Table 1 are in general use; in the formulas of Table 1, π is a permutation of order n and sign(x) takes the values −1, 0, +1 according to whether x is negative, zero or positive. We remark that the expressions presented in Table 1 are written in terms of the permutations π and η and their reverses. The larger rh is, ignoring the sign, the stronger the association between the rankings. All three indices can be interpreted as differences between the distance from perfect direct association and the distance from perfect inverse association. The null distribution of r1 for n = 19, . . ., 22 has been computed (Di Bucchianico, A., 2001) using the representation of its probability generating function as a permanent (a signless determinant) with monomial entries. See also Maciak (Maciak, W., 2009). It is interesting to note that the quantities S1 and S3 appearing in r1 and r3 can be expressed as a sum of parts, which allows the use of combinations of sub-permutations that significantly reduce the amount of computation required to build the exact distribution. See Otten (Otten, A., 1973) for a division of the permutations into two groups. Girone et al. (Girone, G., et al., 2010) went further by breaking up the permutations into four groups and executing a parallel processing scheme that, by the way, naturally fits Otten's proposal. Research to date has obtained the null distribution of the Spearman coefficient up to n = 26 (Gustavson, 2009) and that of the Gini coefficient up to n = 24. The same procedures cannot be applied to Kendall's S2, which, however, benefits from a recurrence relationship. See Panneton & Robillard (Panneton & Robillard, 1972). Under the null hypothesis of independent rankings, the distributions of r1, r2 and r3 are symmetrical and have support in [−1, 1]. All the odd moments are zero because of the symmetry. Furthermore, and this is essential in our paper, their variance and kurtosis are known as polynomials in n, as shown in Table 2, which reports the second and fourth moments of r1, r2 and r3, where kn = n mod 2. Both µ2(n) and µ4(n) are decreasing functions of n, with the values relative to r3 always intermediate between those of r1 and r2 (the former systematically greater than the latter). It can also be observed that, because of the presence of kn, the moments of r3 have an oscillating character due to the odd-even parity of n, that is, the number of items.
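To illustrate how p-values for these coefficients can be checked when the exact null distribution is not tabulated, the sketch below samples random permutations to build an empirical null distribution of Spearman's r1 and compares a tail probability with the familiar t and Gaussian approximations. The sample size, number of permutations, and observed coefficient are arbitrary choices made only for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 22            # number of ranks (between the tabulated and the asymptotic regimes)
r_obs = 0.55      # an observed Spearman coefficient to be tested
n_perm = 200_000  # Monte Carlo permutations

# Empirical null distribution of r1 = 1 - 6*sum(d^2)/(n*(n^2-1)) under independent rankings
base = np.arange(1, n + 1)
null_r1 = np.empty(n_perm)
for i in range(n_perm):
    d2 = np.sum((base - rng.permutation(base)) ** 2)
    null_r1[i] = 1.0 - 6.0 * d2 / (n * (n**2 - 1))

# Two-sided Monte Carlo p-value
p_mc = np.mean(np.abs(null_r1) >= abs(r_obs))

# Conventional t approximation: t = r * sqrt((n-2)/(1-r^2)) with n-2 degrees of freedom
t_stat = r_obs * np.sqrt((n - 2) / (1.0 - r_obs**2))
p_t = 2.0 * stats.t.sf(abs(t_stat), df=n - 2)

# Asymptotic Gaussian approximation using the exact null variance 1/(n-1)
p_norm = 2.0 * stats.norm.sf(abs(r_obs) * np.sqrt(n - 1))

print(f"Monte Carlo p-value : {p_mc:.5f}")
print(f"t approximation     : {p_t:.5f}")
print(f"Gaussian approx.    : {p_norm:.5f}")
```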
A New Model of Density Function
A good model should reproduce the characteristics of r1, r2 and r3 generally observed over the whole population of permutations. Specifically, the curves must be unimodal, symmetrical around zero, and bounded in the interval [−1, 1]; moreover, as the range widens, they must tend towards the Gaussian probability distribution. The usual procedure to determine theoretical probabilities and expected frequencies is to find a curve capable of providing all the required peculiarities cited above, and then to integrate for the probabilities over the given intervals. The main contribution of the present paper is to provide a theoretical explanation for the behavior of several coefficients of rank order association. Suppose that the relative variation of the probability density of the absolute value of the random variable, f(|r|), representing the rank order association is inversely proportional to 1 − f(|r|). To estimate the parameters of the GGFR, we follow the moment-matching method. In this regard, the second and fourth centered moments of the GGFR density are known functions of the parameter vector λ = (λ1, λ2) (Eq. (3)). The variance µ2(λ) increases for a higher λ1 or for a lower λ2, whereas the excess kurtosis γ2(λ) = µ4(λ)/σ^4(λ) − 3 rises to zero for a decreasing λ2 or diminishes to zero for an increasing λ1. Let us consider a loss function in which the lowest two even moments of rh,n (for n ranks) are matched to those of a GGFR density. In addition, the presence of a beta function depending on the unknown parameters can create difficulties in numerical stability. To increase the chances of getting a global solution in reasonable computational times, we executed a controlled random search (CRS) algorithm discussed, for example, in Conlon (Conlon, 1992) and Brachetti et al. (Brachetti, et al., 1997). See Amerise et al. (Amerise, 2015) for more details on the procedure used. In Table 3 we show the estimates of λ for a few values of n, with G(λ) < 0.1 × 10^−16 in each experiment. The rows σ2 and γ2 indicate, respectively, the variance and the excess kurtosis of the coefficients obtained on the basis of Table 2. All three rank correlations have a platykurtic null distribution, which is flatter than the Gaussian. This characteristic is more evident for Spearman's r1, whereas, from this point of view, Kendall's r2 is the closest to the Gaussian distribution. Generally, as n increases, the parameter estimates increase; moreover, the variance of the best-fitting density decreases and the associated excess kurtosis remains negative but tends to zero (which could be a symptom of asymptotic Gaussianity). The trend for Gini's cograduation has two branches, one for even and the other for odd parity of n. This alternating behavior is due to the strong effect of kn = n mod 2, which appears both in the expression and in the moments of r3.
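A numerical sketch of the moment-matching step is given below. Because the closed form of the GGFR density is not reproduced in the text, the code uses a truncated generalized-Gaussian shape f(r) ∝ exp(−(|r|/λ1)^λ2) on [−1, 1] as a plausible stand-in, normalizes it numerically, and fits λ1, λ2 so that its variance and excess kurtosis match target values; both the density form and the target moments are illustrative assumptions rather than the authors' exact specification (they use a controlled random search rather than the simplex method shown here).

```python
import numpy as np
from scipy import integrate, optimize

def moments(lam1, lam2, grid=np.linspace(-1.0, 1.0, 4001)):
    """Variance and excess kurtosis of f(r) ∝ exp(-(|r|/lam1)**lam2) truncated to [-1, 1]."""
    w = np.exp(-(np.abs(grid) / lam1) ** lam2)
    z = integrate.simpson(w, x=grid)                    # normalizing constant
    pdf = w / z
    m2 = integrate.simpson(grid**2 * pdf, x=grid)       # odd moments vanish by symmetry
    m4 = integrate.simpson(grid**4 * pdf, x=grid)
    return m2, m4 / m2**2 - 3.0

def loss(params, target_var, target_exkurt):
    lam1, lam2 = params
    if lam1 <= 0 or lam2 <= 0:
        return 1e6                                      # keep the search in the feasible region
    m2, g2 = moments(lam1, lam2)
    return (m2 - target_var) ** 2 + (g2 - target_exkurt) ** 2

# Illustrative targets in the spirit of Table 2: a small variance and a platykurtic shape
target_var, target_exkurt = 0.05, -0.60

res = optimize.minimize(loss, x0=[0.3, 2.0], args=(target_var, target_exkurt),
                        method="Nelder-Mead")
lam1, lam2 = res.x
print(f"fitted lambda1 = {lam1:.4f}, lambda2 = {lam2:.4f}, loss = {res.fun:.2e}")
print("achieved (variance, excess kurtosis):",
      tuple(round(v, 4) for v in moments(lam1, lam2)))
```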
Approximations of p-values
To compare the GGFR solution with the asymptotic method (Gaussian density) and the t-Student alternative for small samples, we consider both exact and fitted significance levels α of the test H0: rankings are independent against H1: rankings are dependent, using ρ1 (Spearman), ρ2 (Kendall), and ρ3 (Gini). Let r1, r2 and r3 indicate, respectively, the empirical values of the Spearman, Kendall and Gini rank order associations. The statistics involved in the t-Student approximation and those involved in the standard Gaussian approximation are the customary transformations of r1, r2 and r3 based on their null moments (Table 2).
Accuracy of Approximations
Iman & Conover (Iman & Conover, 1978) correctly observe that the discreteness of rank correlations often leads to situations where no critical region has exactly the size α. Rather, there will be a choice of using the next smaller exact size, called the conservative p-value (denoted by C α,h), or the next larger exact size, called the liberal p-value (denoted by L α,h). Let ρ α,h,C and ρ α,h,L be the quantiles of ρh, h = 1, 2, 3, corresponding to the probability levels C α,h and L α,h, respectively. The test of H0 against H1 above is conclusive if both the conservative and the liberal p-values lie on the same side with respect to the prefixed nominal level α. If C α,h < α < L α,h, then the test is unreliable. To investigate the accuracy of the proposed approximations, we examine a set of 500 nominal levels α = 0.0001, 0.0002, . . ., 0.0500. For each α we compute both the actual C α,h and the fitted C α,h,k conservative p-values, and we repeat the same calculation for the actual L α,h and the fitted L α,h,k liberal p-values. The fitted p-values are based on the GGFR (k = 1), Gaussian (k = 2), and t-Student (k = 3) probability densities. A summary of the results is presented in Table 4. The most notable figures are emphasized in bold font. For reasons of space, attention is focused on n = 19, . . ., 24, which are the largest values of n for which the exact null distribution is known for all three rank correlations. The quantity δ α,h,k, computed over the grid of nominal levels α ∈ {0.0001, . . ., 0.05}, gives the average distance between the lower and upper fitted and actual significance levels and is used to assess the quality of the approximation. High values of δ α,h,k indicate that approximations to the null distribution of the rank correlation h, based on model k, far exceed or under-run at least one exact threshold. The combined evaluation of more than one approximation may throw light on the correct significance of a test. With regard to r3, the p-values proposed by the GGFR are more conservative than those of the Gaussian and t-Student distributions.
Discussion and Conclusion
Over recent years, the number of ranks for which the exact null distribution is fully available has increased for many measures of monotone association. For problems involving a number of ranks that is not included in the existing software, though, it is necessary to resort to the omnipresent Gaussian approximations while awaiting faster and more economical computers. However, the Gaussian density can be misleading, particularly in the tails, which often are the most important part. In this paper, we have demonstrated the usefulness of the generalized Gaussian density with finite range (GGFR) for fitting the exact null distribution of three statistics which are routinely used for measuring the correlation between two rankings: the Spearman, Kendall and Gini coefficients. All that is required is that variance and kurtosis be known functions of the number of ranks. The performance of the GGFR is decidedly superior to that of the t-Student and Gaussian distributions, which are traditionally employed to estimate tail probabilities for the Spearman and the Gini coefficients. The situation regarding the Kendall coefficient is rather different: in this case, the Gaussian model achieves the best results. The improvement over conventional procedures (Gaussian and t-Student densities) does not appear impressive, but it touches on the distribution tails, which are the most interesting from a practical point of view. It must be added that the GGFR achieves the largest improvement in fitting the null distribution of Spearman's ρ, which is the best known and probably most used rank correlation coefficient.
Table 1. Rank association statistics (absence of ties is assumed). The coefficients in Table 1 vary within the range [−1, 1]; the extremes are achieved if and only if there is perfect association, negative or positive, for all pairs.
Table 5. Observed and predicted p-values.
3,372.6
2017-02-20T00:00:00.000
[ "Mathematics" ]
Explicit parallel co-simulation approach: analysis and improved coupling method based on H-infinity synthesis Co-simulation is widely used in the industry for the simulation of multidomain systems. Because the coupling variables cannot be communicated continuously, the co-simulation results can be unstable and inaccurate, especially when an explicit parallel approach is applied. To address this issue, new coupling methods to improve the stability and accuracy have been developed in recent years. However, the assessment of their performance is sometimes not straightforward or is even impossible owing to the case-dependent effect. The selection of the coupling method and its tuning cannot be performed before running the co-simulation, especially with a time-varying system. In this work, the co-simulation system is analyzed in the frequency domain as a sampled-data interconnection. Then a new coupling method based on the H-infinity synthesis is developed. The method intends to reconstruct the coupling variable by adding a compensator and smoother at the interface and to minimize the error from the sample-hold process. A convergence analysis in the frequency domain shows that the coupling error can be reduced in a wide frequency range, which implies good robustness. The new method is verified using two co-simulation cases. The first case is a dual mass–spring–damper system with random parameters and the second case is a co-simulation of a multibody dynamic (MBD) vehicle model and an electric power-assisted steering (EPAS) system model. Experimental results show that the method can improve the stability and accuracy, which enables a larger communication step to speed up the explicit parallel co-simulation. Introduction Co-simulation is widely used in the virtual development of multidomain systems. It brings about new opportunities and challenges in the simulation of a multibody dynamic (MBD) system interacting with other systems, e.g., the pantograph-catenary interaction [21], or the vehicle-track interaction [2,22]. Especially in automotive engineering, an MBD vehicle system always needs to interact with hydraulic, and electronic subsystems. The subsystem models from each domain are usually built in domain-specific software tools, shared as black boxes from suppliers and integrated using co-simulation for holistic system development. To integrate the different models in a unified format, a functional mock-up interface standard has been introduced in the academia and industry [8]. According to this standard, each model can be calculated as a single functional mock-up unit (FMU) and its input-output variables (i.e., coupling variables) are synchronized and communicated by a co-simulation master. As each local solver can adapt to the subsystem model, co-simulation is a computationally time efficient solution. A multicore distributed simulation of a combustion engine has been presented by Khaled [4]. The simulation is accelerated by partitioning different cylinder models involving discrete events. Andersson partitioned a race car model to achieve an accelerated parallel co-simulation [1]. Gallrein used a co-simulation of high-fidelity tyre models and an MBD vehicle model for real-time driver-in-the-loop application [15]. In a mono-simulation where the dynamic equations are solved together by a single solver, the simulation accuracy and stability depend on the time-stepping method. 
However, the cosimulation accuracy and stability are also related to the discrete communication between each subsystem, and this issue has been an active topic of research in the last decade [1,3,9,29,30]. An extensive state-of-the-art survey on the co-simulation of continuous, discrete, and hybrid systems was conducted by Gomes [16]. First, a co-simulation can be distinguished by the time-stepping method of the master, namely the explicit (non-iterative), semi-explicit, and implicit (iterative) co-simulation. In addition, the slave subsystems can be calculated in parallel (Jacobi scheme), sequential (Gauss-Seidel scheme), and iterative schemes. For a co-simulated mechanical system, the coupling configuration can be further distinguished by the algebraic constraint [29] and applied force [30] approaches. The applied force approach, in which the coupling variables are force-displacement (FD coupling) or displacement-displacement (DD coupling), is the preferred one because an algebraic loop can be avoided [9]. The explicit parallel co-simulation, i.e., where each subsystem model is simulated on its own in parallel and exchanges the coupling variables only at specified communication instants, can be easily implemented and is more common than the alternatives. In this approach, the master is not required to control an iterative process or a calculation sequence of the slaves. Besides, the slave model is not required to be controllable for rollback or observable for the internal states. This feature can be supported by most commercial software tools and black-box models for intellectual property protection. In general, the explicit parallel co-simulation has a reduced computational burden and a shorter elapsed time, which is more suitable for optimization and real-time applications, e.g., the hardware-in-loop simulation. However, it is well known that the explicit parallel co-simulation has drawbacks in accuracy and stability, because the input to each subsystem is unknown during the communication interval t (i.e., the macro-step) and needs to be approximated by some extrapolation methods. The simplest method is to keep the latest exchanged value during the macro-step, i.e., the zeroth-order hold (ZOH) method. The resulting approximation error, i.e., the coupling error, can be significant regarding the accuracy and stability. Unlike iterative approaches [29], explicit co-simulation cannot undo the step and recalculate the input. Therefore, to improve its result by coupling methods is challenging but highly needed in engineering. Busch [9] systematically analyzed extrapolation methods with Lagrange and Hermite polynomials containing first-order derivatives. It shows that higher-order extrapolation polynomials increase the error order, may stabilize or destabilize the system, and the performance varies with the system parameters. To predict the coupling variable, usually additional information about the system is needed. Andersson [1] used the partial derivatives with respect to the coupling variables for a linear correction. Another interesting concept is the energy-based coupling method. The rationale is that the inconsistent energy from the discrete communication can yield instability of the system, and thus, should be avoided. Benedikt [7] used the generalized energy in a macro-step to correct the coupling variables. Drenth [13] proposed a new sample-hold design to preserve the energy in a power bond. 
In these approaches, the coupling variables are actually corrected separately to preserve their product, i.e., the energy. However, if only the energy is conserved, the result might be still incorrect, as shown by González [17] and Wu [32]. González [18] developed an energy-leak monitoring framework, in which the dissipated energy inside the system is needed to correct the coupling variables. Rahikainen [25] took the residual energy as an indicator, using its linearity with the macro-step to verify the co-simulation accuracy and stability. In the aforementioned methods, the energy reference is usually calculated from the available results in a previous macro-step, which causes an inherent macro-step delay. Furthermore, some adaptive coupling methods are developed for complex systems. Sadjina [28] considered the residual energy as an error estimator to control a variable macro-step. Stettinger [31] developed a model-based coupling approach using extended Kalman filter and recursive least square algorithms, which are commonly used control techniques. Khaled [5] developed a context-based heuristic method to adapt the extrapolation polynomial. Peiret [23] used an adaptive reduced-order interface model to represent complex systems and generate approximated variables during the communication interval. From the point of view of the authors, some challenges still remain in the state-of-theart methods. First, the parameters of the aforementioned methods do not always have a straightforward or physical interpretation, which makes their tuning work less transparent. Second, the parameter values are not optimized due to the lack of an objective function and a reference system. Several parameters can be dependent and difficult to tune together to improve the performance. Third, the performance of the coupling method can be strongly case-dependent. The combination effect of different system dynamics and coupling configurations (e.g., DD and FD couplings) makes the performance assessment less intuitive and its generalization to a more complex engineering system even impossible [17,30]. In this work, we see the explicit parallel co-simulation in the frequency domain as a sampled-data interconnection. The objective is to focus on the coupling interface itself, which releases the complexity of subsystems. Some well-established control theorems are adopted to interpret the co-simulation problems. Furthermore, we design a new coupling method similarly as a signal reconstruction work, which relies on the H ∞ synthesis. This method intends to reduce the coupling error directly by minimizing its L 2 norm. This paper is organized as follows: a co-simulated system is formulated as a closedloop interconnection in Sect. 2. The stability is analyzed by the Nyquist stability criterion. Then the coupling method design is formulated as a H ∞ synthesis problem in Sect. 3 and solved by an optimization routine, followed by a convergence analysis and a parameter study. In Sect. 4, the new method is verified with a dual mass-spring-damper system and a real engineering case, which is a co-simulation of an MBD vehicle model and an electric power-assisted steering (EPAS) system model. The work is further discussed and concluded in Sect. 5. Analysis of co-simulated system In the first step, we present a basic co-simulated system as a sample-data system because of the common nature of discrete communication. Then we show how the co-simulation degrades in terms of error and stability. 
Closed-loop interconnection formulation
A basic parallel co-simulated system can be simplified as two weakly coupled subsystems. For ease of analysis, we assume that 1. the subsystems are linear time-invariant (LTI) with zero initial conditions; 2. the subsystems are coupled by a single input and a single output; 3. each subsystem can be accurately solved by an appropriate solver, so the integration error is minor compared to the coupling error [14,26]. Then we use the transfer functions Q1(s) and Q2(s) to represent the two subsystems, s denoting the Laplace domain. A non-feed-through subsystem [20] yields a strictly proper transfer function (i.e., the degree of the numerator polynomial is less than that of the denominator). A feed-through subsystem yields a proper transfer function. In parallel co-simulation, the input-output variables are communicated every macro-step Δt. This is similar to adding sample and hold devices to the continuous reference system. Thus, the system becomes a sampled-data closed-loop interconnection (Fig. 1(a)), which introduces error and stability issues [24]. The sampled input u*(t) is the product of the continuous input u(t) and a periodic impulse train, and its Laplace transform is known to be
u*(s) = (1/Δt) Σ_{k=−∞}^{+∞} u(s + jkω_s), where ω_s = 2π/Δt.
The continuous approximation ũ(s) during Δt is obtained by holding u*(s) with an extrapolation operator H(s), e.g., the ZOH method. Actually, H(s) can differ in each subsystem, but we assume that the same H(s) is applied throughout the interconnection. Two important characteristics of the co-simulation are then of concern: 1. the accuracy of the coupling method; 2. the stability and robustness of the co-simulation.
Analysis of the coupling error
The coupling error ξu(s) is the difference between the continuous input and its approximation, and it can be modeled as an input multiplicative disturbance (Fig. 1(b)). (Fig. 1: (a) the co-simulated system formulated as a closed-loop interconnection; (b) a truncated subsystem on one side, with the coupling error ξu acting as an input multiplicative disturbance.) When the sampling frequency ω_s = 2π/Δt is not sufficiently higher than the signal frequency ω, the high-frequency components can be mirrored into the low-frequency part, i.e., an aliasing effect occurs in the co-simulation [6]. In this circumstance, a severe low-frequency error is introduced and should be avoided in the first place. Engineers can select Δt according to the subsystem bandwidth or an estimation of frequency components from its standalone simulation. However, this requirement cannot guarantee the accuracy and stability of the co-simulation. The hold operator H(s) varies with the extrapolation degree k. For simplicity, we consider the zeroth-order hold H_zoh(s), first-order hold H_foh(s), and second-order hold H_soh(s) methods (k = 0, 1, 2, respectively); ξu(s) in combination with the different H(s) can be expanded as a Taylor series in Δt. When Δt is sufficiently small, ξu(s) can be approximated adequately by its low-frequency component [24]. Then a k-degree extrapolation method yields an error with an order of O(Δt^(k+1)). This might not be true if the high-frequency component is non-negligible. For LTI subsystems, the output error is the result of a linear mapping from the input error, has the same error order of O(Δt^(k+1)), and the convergence property is preserved. This is consistent with a time-domain analysis based on an LTI system [9]. Similarly, the state error ξx(s) is mapped from ξu(s) with a transfer function Qx(s), and thus it has the same order.
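To make the Jacobi-type explicit coupling and the effect of the macro-step concrete, the sketch below co-simulates a dual mass-spring-damper system with force-displacement coupling and a zeroth-order hold, and compares the result against a monolithic reference. The parameter values and the choice of solver are illustrative assumptions, not the configuration used in the paper's test cases.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: two grounded oscillators connected by a coupling spring/damper
m1, k1, c1 = 1.0, 100.0, 0.5
m2, k2, c2 = 1.0, 100.0, 0.5
kc, cc = 50.0, 0.2

def monolithic(t, y):
    x1, v1, x2, v2 = y
    fc = kc * (x2 - x1) + cc * (v2 - v1)          # coupling force
    return [v1, (-k1 * x1 - c1 * v1 + fc) / m1,
            v2, (-k2 * x2 - c2 * v2 - fc) / m2]

def cosimulate(dt_macro, t_end=2.0):
    """Explicit parallel (Jacobi) co-simulation with ZOH-held coupling variables."""
    y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 0.0])     # [x, v] of each subsystem
    fc_held = kc * (y2[0] - y1[0]) + cc * (y2[1] - y1[1])   # force sent to subsystem 1
    x1v1_held = y1.copy()                                   # x1, v1 sent to subsystem 2
    ts, x1s = [0.0], [y1[0]]
    for t0 in np.arange(0.0, t_end, dt_macro):
        # Subsystem 1 integrates with the held coupling force (values exchanged at t0)
        s1 = solve_ivp(lambda t, y: [y[1], (-k1 * y[0] - c1 * y[1] + fc_held) / m1],
                       (t0, t0 + dt_macro), y1, max_step=dt_macro / 20)
        # Subsystem 2 integrates with the held displacement/velocity of subsystem 1
        s2 = solve_ivp(lambda t, y: [y[1], (-k2 * y[0] - c2 * y[1]
                        - (kc * (y[0] - x1v1_held[0]) + cc * (y[1] - x1v1_held[1]))) / m2],
                       (t0, t0 + dt_macro), y2, max_step=dt_macro / 20)
        y1, y2 = s1.y[:, -1], s2.y[:, -1]
        # Coupling variables are exchanged only at the communication instant (ZOH)
        fc_held = kc * (y2[0] - y1[0]) + cc * (y2[1] - y1[1])
        x1v1_held = y1.copy()
        ts.append(t0 + dt_macro)
        x1s.append(y1[0])
    return np.array(ts), np.array(x1s)

ref = solve_ivp(monolithic, (0.0, 2.0), [1.0, 0.0, 0.0, 0.0], dense_output=True, max_step=1e-3)
for dt in (0.002, 0.01, 0.05):
    t, x1 = cosimulate(dt)
    err = np.max(np.abs(x1 - ref.sol(t)[0]))
    print(f"macro-step {dt:5.3f} s -> max coupling error in x1: {err:.4e}")
```

The error typically grows with the macro-step, and for a sufficiently large macro-step the explicit ZOH coupling may become unstable, which is the behavior analyzed in the frequency domain above.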
When the subsystems Q(s) and Qx(s) are underdamped and have fast dynamics, they become less robust to the disturbance ξu(s). The corresponding output y and state x can be more easily excited by its high-frequency components, and consequently, ripples may occur in the co-simulation.
Analysis of stability and robustness
In a stable co-simulation, the error ξy(s) is convergent. In other words, it will not propagate incrementally in the closed-loop interconnection. To derive ξy(s), an exogenous input vector ue should be added to excite both subsystems in the interconnection (Fig. 1(a)). The output of the two subsystems can then be written in terms of Q1, Q2 and ue, where the notation s is dropped for clarity and φ is the operator for the multiplicative disturbance. In the continuous nominal system, the error-free output is obtained in the same way, and ξy can be derived from the difference of the two expressions (Eq. (9)). The subsystems Q1, Q2 and the terms cascaded with φ are always stable. In addition, the nominal closed-loop system is stable. Therefore, the convergence of ξy is determined by −(1 + φ)²Q1Q2, i.e., the loop transfer function of the system. For a stable system, its loop transfer function should not encircle the point −1 + j0 in the complex plane as s ∈ (−j∞, +j∞), according to the Nyquist stability criterion. Besides this geometrical approach, two well-established control results can be used in the co-simulation problem. The first is a gain condition: the H∞ norm is the maximum gain of a single-input single-output system, or the maximum singular value of a multi-input multi-output system, and the system is stable if −(1 + φ)²Q1Q2 is bounded within a unit circle. Since the ZOH method does not amplify the system gain, it guarantees a stable co-simulation if the nominal system fulfills ||Q1Q2||∞ < 1 and no aliasing occurs. Furthermore, a system with a smaller loop gain −(1 + φ)²Q1Q2 has better rejection of the disturbance (i.e., the coupling error). This can be achieved by selecting a more robust coupling configuration [27]. In an FD coupling, applying the force variable to the stiffer side can also reduce the loop gain and make the co-simulation more stable and accurate. Examples can be seen in the vehicle-steering interaction [10] and the vehicle-track interaction [22]. Scaling down the coupling variables can also reduce the loop gain and enhance the stability. It gives incorrect simulation results but can be useful to obtain a stable initial setup. The second is a phase condition: the system is stable if −(1 + φ)²Q1Q2 has a phase angle in (−180°, 180°). However, the extrapolation method (4) always shows an ever-increasing phase delay at high frequency. This destroys the passivity of the subsystems Q1, Q2. Physically, additional energy flows into the interconnection, and if it is not sufficiently dissipated or stored, the system might become unstable. This provides an intuitive explanation of the physics of a co-simulated system. Herein, we can conclude that to improve the stability, the phase delay should be compensated or the loop gain should be reduced.
Improved coupling method by H∞ synthesis
From the foregoing analysis, the sample-hold process is the error source of the co-simulation. To reduce this error, a new coupling method is given next. It adds a compensator and a smoother at the coupling interface.
Formulation of the error system
The concept can be illustrated using an error system (Fig. 2), inspired by modern signal reconstruction work [33]. u(s) is a coupling variable from subsystem 1 to subsystem 2.
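The loop-gain condition above can be checked numerically before running a co-simulation. The sketch below builds two illustrative single-input single-output transfer functions Q1(s) and Q2(s) for a force-displacement coupling of two oscillators (with the coupling damper omitted so that Q2 stays proper), evaluates the loop transfer function Q1(jω)Q2(jω) on a frequency grid, and reports its peak magnitude; the coefficients are assumptions chosen only to demonstrate the check.

```python
import numpy as np

def freq_response(num, den, w):
    """Evaluate a transfer function num(s)/den(s) along s = j*w (highest-order coefficients first)."""
    jw = 1j * w
    return np.polyval(num, jw) / np.polyval(den, jw)

# Illustrative dual-oscillator force-displacement coupling:
# Q1(s) = X1/Fc : displacement of subsystem 1 per unit coupling force
# Q2(s) = Fc/X1 : coupling force returned by subsystem 2 (spring kc plus its own dynamics)
m1, c1, k1 = 1.0, 0.5, 100.0
m2, c2, k2 = 1.0, 0.5, 100.0
kc = 50.0

w = np.logspace(-1, 3, 5000)                        # rad/s
Q1 = freq_response([1.0], [m1, c1, k1], w)
Q2 = freq_response(np.polymul([kc], [m2, c2, k2]), [m2, c2, k2 + kc], w)
loop = Q1 * Q2                                      # nominal loop transfer function Q1*Q2

peak = np.max(np.abs(loop))
w_peak = w[np.argmax(np.abs(loop))]
print(f"peak loop gain |Q1(jw)Q2(jw)| = {peak:.2f} at {w_peak:.1f} rad/s")
if peak < 1.0:
    print("Loop gain below 1 everywhere: ZOH coupling should remain stable (no aliasing assumed).")
else:
    print("Loop gain exceeds 1 near resonance: the explicit coupling relies on phase/damping margins,"
          " so a smaller macro-step or a compensated interface may be required.")
```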
An appropriate coupling method should minimize ξu(s) over the entire frequency range or at least in the bandwidth of interest. A compensator K1(s) and a smoother K2(s) are added, respectively, to the output and input of the two subsystems. They can be calculated by different solvers and should be invariant with the integration step. Therefore, a continuous expression is taken. In addition, the sample-hold process H*(s) ≈ H(s)/Δt is simplified using a second-order Padé approximation. The problem is to find the best pair of K1(s), K2(s) to reduce ξu(s). In this method, we focus on the interface itself and exclude the subsystem dynamics, which can be quite complex or difficult to know. On the contrary, the sample-hold process is determined by Eq. (2), and it is invariant for a fixed macro-step Δt. The subsystem dynamics are implicitly incorporated through u(s).
Fig. 2 Formulation of the error system for one coupling variable.
In practice, the exact input u(s) is unspecified and not accessible, and consequently ξu(s) is unknown. It is apparent that the obvious solution of directly inverting the sample-hold process is not valid, because it is unstable and improper, and thus not implementable. Instead, the design objective can be formulated as a minimization of the L2 norm of the error, ||ξu||2. We denote the transfer function of the error system as Tue; then ||ξu||2 fulfills ||ξu||2 = ||Tue u||2 ≤ ||Tue||∞ ||u||2 (10), which means that ||ξu||2 is upper-bounded by ||Tue||∞ ||u||2. Therefore, the well-designed terms K1(s), K2(s) should give a minimal ||Tue||∞. ||Tue||∞ is by definition the worst-case energy gain. This implies that the concept intends to minimize the energy of the coupling error, which is similar to the energy-based concept. At this stage, the coupling design problem can be solved within the H∞ synthesis framework.
H∞ synthesis for the coupling design
To apply the H∞ synthesis, the error system (Fig. 2) needs to be reformulated into a generalized plant G connected with a controller K (Fig. 3). Wf is a weighting function added to the error system and will be explained later. The problem can be stated as follows. H∞ synthesis problem: given an LTI system G, find a feedback controller K such that the closed-loop system is stable and the closed-loop transfer function satisfies ||Tue||∞ < γ (11), where the scalar γ is the L2-gain performance to be minimized. The solution for the control law K is the correction term K1(s), which is always proper, and therefore implementable. In the aforementioned assumption, H*(s) is simplified using a Padé approximation. However, its high-frequency component (Eq. (3)) still exists in reality, which yields a large piecewise-constant input after H*(s). To address this issue, K2(s) is designed as a low-pass filter to smooth the input signal to the subsystem. The weighting function Wf(s) cascaded to the output ξu is another low-pass filter, and its purpose is to further reduce the error at low frequency. The introduction of Wf(s) is also necessary for a feasible solution. Because the worst-case ξu occurs at high frequency, a minimization of ||ξu||2 over the whole frequency range would largely distort the low-frequency component. Problem (11) can be readily solved using the Matlab Robust Control Toolbox. Given the scope of this journal, we provide the detailed procedure of the solution in Appendix A. In the optimization, a pole-placement constraint is given to bound the poles of Tue(s), and consequently, the poles of K1(s) [12]. The constraint is imposed for the purpose of implementation:
3 A generalized plant G and an undetermined controller K as an equivalence to the error system 1. K 1 (s) can be guaranteed to be stable with a specified solver. If its poles λ are in a discshaped region {λ ∈ C, |1 + hλ| < 1}, a forward Euler method with step h, and other methods, can be applied. 2. The fast mode of K 1 (s) can be removed to avoid a small integration step and a longer computation time. The pole-placement constraint mainly affects the fast modes, and thus it is more relevant to computation than to the accuracy. In summary, K 1 (s) is optimized according to the base terms K 2 (s), W f (s), and sample-hold process H * (s) ( Table 1). Therefore, their selection and effects are studied in the next section. Convergence analysis and parameter study The accuracy of the coupling method can be verified by the transfer behavior of T ue (s). In this analysis, the error ξ u with different coupling methods is approximated by the lowfrequency component (3). To show the convergence property with t , the error magnitudes are plotted versus a normalized frequency f n = ω t/2π similarly to [6], in both decibel and absolute scales (Fig. 4). In the decibel scale, the error order can be clearly observed from the slope of the error magnitude, and a higher-order ξ u converges faster by reducing t . In the absolute scale, it is apparent that a higher-degree extrapolation is more accurate with a low f n and a small t . However, ξ u is minor in this circumstance and the co-simulation problem might be less crucial. Meanwhile, a higher-degree extrapolation introduces a larger ξ u with a high f n and a big t , and the co-simulation problem becomes more critical. Therefore, a high-degree extrapolation is rarely employed for coupling in practice. A parameter study is taken to investigate how K 2 (s), W f (s), and H * (s) in H ∞ method influence its convergence property. First, the ZOH, FOH, and SOH methods are selected for H * (s). Actually, K 1 (s) is optimized accordingly to compensate H * (s), the resulting T ue (s) is very similar. This means that H * (s) is less important to ξ u , and the result is shown in Appendix B. Moreover, a general H ∞ synthesis gives a K 1 (s) with a same order as the generalized plant, so that a higher-degree H * (s) adds to the computation and implementation difficulty. As a consequence, H * (s) can be simply fixed with the ZOH method without a loss of accuracy improvement, and its only parameter is t . Similarly, a low-order smoother K 2 (s) is preferred. Thus, a second-order low-pass filter is taken to mitigate the sharp edges of the input signal, which can be incurred with a first-order filter. The key parameter is the cut-off frequency f K 2 . The tuning of f K 2 is very intuitive and it defines how smooth the input signal is desired. In a general setup, f K 2 can be specified with the Nyquist frequency 0.5/ t , because the main component of the signal should have a frequency lower than 0.5/ t to be sufficiently sampled. W f (s), which can have various orders and cut-off frequencies f W f , is important to the accuracy because it is the weighting of the optimization target. In the study, W f (s) are specified as first-order, second-order, and third-order Butterworth filters, and the corresponding . The H ∞ method with a higher-order W f yields an error that converges faster, and a lower limit occurs below f W f error magnitudes are compared in Fig. 4. It can be seen that W f (s) introduces an error with a same order. 
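The convergence behaviour discussed here can be illustrated for the bare hold itself. With the diagonal second-order Padé approximant of exp(−sΔt), the normalized ZOH hold becomes H*(s) ≈ 1/(1 + sΔt/2 + (sΔt)²/12), and the magnitude of the uncompensated coupling error 1 − H*(jω) can be plotted against the normalized frequency f n = ωΔt/2π. The sketch below is only a qualitative aid (the actual curves in Fig. 4 include the compensator, smoother, and weighting); the macro-step value is arbitrary.

```python
# Sketch: magnitude of the bare zero-order-hold reconstruction error versus the
# normalized frequency f_n = w*dt/(2*pi). Values are illustrative only.
import numpy as np

dt = 0.02                                   # hypothetical macro-step (s)
fn = np.logspace(-3, -0.5, 400)             # normalized frequency grid
w = 2 * np.pi * fn / dt

# Exact normalized hold H*(jw) = (1 - exp(-jw*dt)) / (jw*dt)
H_exact = (1 - np.exp(-1j * w * dt)) / (1j * w * dt)

# Second-order Pade approximation of exp(-s*dt) gives
# H*(s) ~ 1 / (1 + s*dt/2 + (s*dt)^2 / 12)
s = 1j * w
H_pade = 1.0 / (1 + s * dt / 2 + (s * dt) ** 2 / 12)

err_exact = np.abs(1 - H_exact)             # ZOH coupling-error magnitude
err_pade = np.abs(1 - H_pade)
# The two curves agree closely for f_n << 0.5, and both show the roughly
# first-order slope of the ZOH error discussed in the convergence analysis.
print(err_exact[:3], err_pade[:3])
```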
In this regard, the H ∞ method with a higher-order W f (s) behaves as a higherdegree extrapolation. Moreover, ξ u does not drop monolithically and it reaches a lower limit below f W f . From the point of view of the authors, this saturation is not a weakness of the method because the lower limit can be substantially small. In addition, t is lower-bounded by the solver integration step and cannot be arbitrarily small in reality. By lowering f W f , the lower limit can be reduced, but ξ u is amplified in high frequency (see Fig. 4, and when W f (s) is of third order, f w f reduces from 0.06/ t to 0.01/ t ). This implies a compromise between low-frequency and high-frequency accuracy. The lowfrequency accuracy weights more with a higher-order W f (s) and a smaller f W f . To achieve a good compromise, f W f can be specified, by trial and error, as 0.01/ t , 0.03/ t , 0.06/ t for the first-order, second-order, third-order W f (s), respectively. With the proposed specification, ξ u is reduced compared with other basic methods (Fig. 4). The reduction occurs in a wider frequency range, which implies that the method is robust and can well approximate an input with diverse frequency components. This feature is achieved by the worst-case minimization nature of the H ∞ synthesis method. In summary, the three base terms H * (s), K 2 (s), and W f (s) can be simplified without a loss of accuracy improvement. Only three key parameters need to be tuned and their effects are independent. f K 2 defines the input signal smoothness. W f (s) is relevant to the convergence property, and f W f defines the weights of the frequency components. Approximation by H ∞ method The H ∞ method is further experimented in the time domain to demonstrate how it works. We assume a sweep signal (e.g., a force/velocity variable) ranging from 0.001 to 30 Hz is communicated with a t of 10 ms. The performance of the method can be assessed by how well it reconstructs the reference input. According to the parameter study, K 2 (s) is a second-order filter with f K 2 = 0.5/ t = 50 Hz and W f (s) is a first-order filter with f W f = 0.01/ t = 1 Hz, and the base terms are specified as follows: (by second-order Padé approximation), The pole-placement constraint is defined as {λ ∈ C, |1 + 0.001λ| < 1}. The optimization takes 55 iterations and an elapsed time of 1.628 s to determine K 1 (s): which can be seen as a combination of terms with different orders and optimal weights. Alternatively, the smoother K 2 (s) can be specified with f K 2 = 30 Hz, which is the input bandwidth. A different K 1 (s) is synthesized accordingly: K 1 (s) = (s + 1682.9)(s + 6.2825)(s 2 + 600s + 1.2e05)(s 2 + 266.6s + 3.553e04) (s + 1470.1)(s + 6.2833)(s 2 + 1870s + 9.674e05)(s 2 + 617.6s + 5.111e05) . (14) The approximation results by the ZOH and H ∞ methods are shown in Fig. 5. The rebuilt signal is fairly close to the reference. In addition, the large piecewise constant signal is smoothed, which introduces a phase delay. Actually, this phase delay is already compensated by K 1 (s). In this experiment, a quite large t is taken to make the deviation more visible. Fig. 5 Comparison of input approximation. h ∞ unsmoothed is the compensated signal sent every t and h ∞ smoothed is the signal sent to the model after the smoother. 
With a stronger smoother (b), K 1 (s) amplifies the input magnitude more for compensation.
K 2 (s) with a lower f K 2 = 30 Hz makes the input signal smoother, and the compensator K 1 (s) increases the input magnitude more and adds more phase lead in advance. From another point of view, the H ∞ method works similarly to a correction-interpolation approach, given that a corrected value, instead of the exact value (as in the ZOH method), is communicated.
Case study
In this section, the H ∞ method is implemented in two co-simulation cases in which complete subsystem models are involved. The first case is a dual mass-spring-damper system, which is a classic benchmark problem in co-simulation. The second case is a co-simulation of a multibody vehicle model with a steering mechatronic system.
Co-simulation of a dual mass-spring-damper system
The dual mass-spring-damper system can be partitioned into two models with a single mass each (Fig. 6). Both models are solved by a forward Euler method with a step of 1 ms.
Fig. 6 The dual mass-spring-damper system is coupled by force and velocity
The coupling variables are the force F c = k c (x 1 − x 2 ) + d c (ẋ 1 − ẋ 2 ) and the velocity ẋ 2 , which corresponds to FD coupling, with the velocity exchanged instead of the displacement to avoid a differentiation error. For comparison, a mono-simulation reference and co-simulations with other coupling methods (ZOH, FOH, and SOH) are implemented. The macro-step is defined as Δt = 50 ms, and the H ∞ method is designed following the general setup: W f (s) is of first order with f W f = 0.01/Δt = 0.2 Hz, and K 2 (s) is of second order with f K 2 = 0.5/Δt = 10 Hz. A coupling method might perform well in a specific case but much worse in other cases. To avoid this case-dependent effect, the parameters are specified in a stochastic way as the uniformly distributed random variables in Table 2. The damping coefficients d 1 and d 2 are calculated to maintain the damping ratios ζ 1 = d 1 / √ m 1 k 1 and ζ 2 = d 2 / √ m 2 k 2 in the target range. Thus, it is possible to cover various cases such as stiff and non-stiff systems, overdamped and underdamped systems, and highly asymmetric systems. An external input at a given frequency may excite the system in a certain frequency range that makes one coupling method always win (see [25]). To avoid this, the system dynamics are examined via the impulse response. During a simulation of 5 s, two external force impulses of 1 N are applied on m 2 at the first and the fourth second. In total, 2000 random cases are
The other coupling methods have more unreliable cases, which might be due to the imprecision of a low-order approximation (ZOH) or the lack of robustness of a high-order method (SOH). The stability is examined by the simulation traces ofẋ 2 , F c . The impulse response of a stable LTI system should either converge monotonically (overdamped) or oscillate with a decay (underdamped). Otherwise, the system is unstable. The statistical results of the unstable case are presented in Table 4. In general, the stability deteriorates with the increase in extrapolation degree, and it is enhanced with the H ∞ method. Furthermore, four representative cases are shown in Fig. 7-Fig. 10. The system is highly underdamped in the first case (Fig. 7). Two masses oscillate after the impulse excitation. The SOH method is better than the lower-order coupling method. The second case is an overdamped system (Fig. 8), in which the SOH method introduces an oscillatory result. The H ∞ method yields a small ε nrms,ẋ 2 and a minimum ε nrms,Fc . In the third case (Fig. 9), the system is numerically stiff with small masses and large stiffness, and the mass ratio m 1 /m 2 is very small. The result is similar to the previous case in that the H ∞ method can approximate the coupling variable fairly well. The fourth case is also a stiff system (Fig. 10), but the mass ratio m 1 /m 2 is very large. This can introduce a severe instability problem, because the system loop gain is enlarged [10]. In this case, the co-simulation is stable only with the H ∞ method, and the error grows with the extrapolation degree. Even in a specific case, for one coupling method it is difficult to be the optimum for both coupling variables. Therefore, it is difficult to assess their performance with a complex system. To adapt the coupling method to the model might be a solution. However, this can be challenging in implementation and computation. Alternatively, the H ∞ method may address this issue with a fixed solution owing to its robustness, which has been verified in the convergence analysis and the statistic experiment. This is similar to the H ∞ control technique, which can control a complex, nonlinear, or even uncertain system with a robust linear control law. Co-simulation of an MBD vehicle model and an EPAS system model The second application case is a co-simulation of an MBD vehicle model and an EPAS system model. The vehicle model is composed of a vehicle body, four suspensions and wheels. One of the front suspension is presented in Fig. 11. The knuckle is constrained by five linkages so it moves up and down, and steers by the moving tie rods. The wheel rotation and forces are transmitted to the steering rack through the linkages, which are modeled as rigid bodies. The vehicle model is created using the multibody system library in Dymola and it has 36487 equations. It is computationally heavy owing to the calculation of largesize matrices and a DASSL solver is used. In a high-frequency maneuver, the maximum integration step is around 18 ms. In a low-frequency maneuver it can be 100 ms with much less Jacobian evaluations. The EPAS system model has 3 degrees of freedom (Fig. 11): the rotation of steering column δ s , EPAS motor δ m and the rack displacement x r : the forces F pinion and F assist can be calculated from the transmission ratios: F pinion = T pinion /i pinion , F assist = T belt /(i belt i bs ). 
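As a brief aside before completing the description of the EPAS model, the NRMS accuracy metric (15) used to score the benchmark above can be written compactly. The displayed formula is not reproduced in this text, so the sketch below assumes the conventional definition, i.e. the root-mean-square deviation from the mono-simulation reference normalized by the reference range, together with the threshold test (16); the traces are synthetic placeholders.

```python
# Sketch: NRMS error of a co-simulated coupling variable against a mono-simulation
# reference, assuming the conventional definition (RMS deviation / reference range).
import numpy as np

def nrms_error(x, x_ref):
    """Normalized root-mean-square error between a trace x and its reference."""
    x, x_ref = np.asarray(x, float), np.asarray(x_ref, float)
    rms = np.sqrt(np.mean((x - x_ref) ** 2))
    return rms / (x_ref.max() - x_ref.min())

def is_inaccurate(eps_force, eps_velocity, eta=0.05):
    """Count a run as inaccurate if either coupling variable exceeds the threshold."""
    return eps_force > eta or eps_velocity > eta

# Hypothetical traces, for illustration only.
t = np.linspace(0.0, 5.0, 5001)
x_ref = np.sin(2 * np.pi * 1.0 * t)
x_cosim = x_ref + 0.01 * np.random.default_rng(0).standard_normal(t.size)
print(nrms_error(x_cosim, x_ref))
```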
The belt drive and the ball screw mechanism generate a large inertia ratio and highly underdamped dynamics, which makes the model numerically stiff. The friction force F r friction (and similarly the friction torque T c friction ) is represented by the LuGre friction model,
ż = v − (σ 0 |v| / g(v)) z, F friction = σ 0 z + σ 1 ż + σ 2 v,
to capture the stick-slip effect, which further adds to the stiffness. Here v is the sliding velocity and z is the internal state. The bristle stiffness σ 0 and the micro-damping σ 1 produce a spring-like behavior at small displacements. σ 2 is the viscous friction coefficient. g(v) is a velocity-dependent term based on the Coulomb friction F c , the static friction F s , and the Stribeck velocity v s (g(v) has been simplified in this case). The parameter values are summarized in Table 5. To solve the EPAS system model, a fourth-order Runge-Kutta method with a step of 0.25 ms is employed in the FMU. The EPAS system model is further coupled with an electronic control unit (ECU) model, which is discrete with a step of 1 ms (Fig. 11). It is a black box from the supplier and comprises the control code that generates T m . More information on the model is provided in previous work by the authors [11]. An explicit parallel co-simulation is applied in this complex engineering case. The vehicle model and the EPAS model are coupled using the rack force F r and the rack speed ẋ r , with Δt 1 = 1 ms and Δt 2 = 20 ms. Here, the H ∞ method with the general setup is implemented inside the FMUs. Two steering tests are simulated, in which a steering torque T s with a magnitude of 2.5 Nm is applied. A low-frequency T s growing from 0 to 1 Hz is applied in the first test, and a high-frequency T s from 0 to 3 Hz is applied in the second test. The system states, i.e., the steering angle δ s , the rack speed ẋ r , the vehicle yaw rate, and the vehicle lateral velocity, are shown in Fig. 13 and Fig. 14. The vehicle states show fewer discrepancies due to their slow dynamics. The SOH method gives unstable results with large deviations. According to the NRMS error (Fig. 12), the H ∞ method is more accurate than the ZOH and FOH methods in both the low-frequency and the high-frequency cases. In the low-frequency case (Fig. 13), no significant error is incurred with any of the methods, and the accuracy improvement is somewhat saturated due to the inherent error from the discrete communication. In the high-frequency case (Fig. 14), the FOH method gives an oscillatory rack speed, whereas the H ∞ method shows both oscillation suppression and an accuracy improvement. Furthermore, the elapsed time has been reduced drastically in the co-simulation (Table 6) compared to the mono-simulation. The H ∞ method shows an elapsed time close to that of the basic coupling methods, because the additional workload is only the computation of the fixed compensator and smoother.
Conclusion
In this work, we reviewed the explicit parallel co-simulation approach in a new framework. Its analysis has been conducted in the frequency domain, and we have the following observations:
- The coupling method has a frequency-domain characteristic. Therefore, its performance depends on the system dynamics and also on the system input, which was discussed previously in the literature.
- Co-simulation stability is a closed-loop property, which is highly dependent on the input-output transfer behavior of each subsystem.
- There is no optimal coupling method in general. One should specify the possible frequency range of the coupling variable. Otherwise, it is expected that the coupling method can reduce the error in a wider frequency range.
Based on the new framework, a coupling method relying on the H ∞ synthesis is developed, which can fulfill the aforementioned needs. Despite its theoretical complexity, the implementation is not challenging as the H ∞ synthesis problem can be solved by Matlab functions. The limitation and unmet challenges are: -To add a compensator and a smoother at the interface can be easy for the engineers who prepare the subsystem models, but it might be challenging when the subsystems are unchangeable black boxes by the current standard. -The aliasing effect should be taken into account, but has been simplified in the current step. Nonetheless, the H ∞ method has shown a potential in accuracy improvement and robustness, which is much desired for complex systems but has not been addressed explicitly before. The approach might be also useful to optimize other existing coupling methods, if they can be formulated as a fixed-structure H ∞ synthesis problem. (19) then the generalized plant G added with W f (s) can be derived as Closing the loop with the undetermined controller the error system becomes T ue (s) = C cl (sI − A cl ) −1 B cl + D cl , where C cl = C 1 + D 12 D k C 2 D 12 C k , D cl = D 11 + D 12 D k D 21 . According to the bounded real lemma [12], problem (11) is equivalent to the existence of a positive definite matrix P 0 fulfilling the linear matrix inequality (LMI) condition: min γ subject to A T cl P + P A cl + C T cl C cl C T cl D cl + P B cl B T cl P + D T cl C cl D T cl D cl − γ I ≺ 0 and K 1 (s) can be determined with a feasible γ . Appendix B: Parameter study of H * (s) ZOH, FOH, and SOH methods can be applied in H * (s), and different K 1 (s) are synthesized accordingly. It can be seen that no significant change of ξ u occurs (Fig. 15), and only the high-frequency component increases with a higher-order approximation. In addition, the approximated H * (s) has an order of two with the ZOH method, four with the FOH method, and six with the SOH method. This results in a K 1 (s) with an order of five, seven, and nine, respectively, which needs an unnecessary effort in implementation and computation.
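For a fixed, already-synthesized closed loop (i.e., known A cl , B cl , C cl , D cl ), the bounded real lemma LMI above can be solved directly as a semidefinite program to compute the achievable γ. A minimal sketch with cvxpy is given below; the state-space data are a random stable placeholder rather than the paper's synthesized T ue (s), and note that in this 2×2 form of the lemma the minimized scalar bounds the squared gain, so the H ∞ norm is its square root. (For the synthesis step itself, where K is unknown, dedicated routines such as those in the Matlab Robust Control Toolbox are used, as stated above.)

```python
# Sketch: L2-gain analysis of a fixed closed-loop system via the bounded real
# lemma LMI of Appendix A, solved with cvxpy. The system below is a random
# stable placeholder, not the synthesized T_ue(s) from the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m, p = 4, 1, 1
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # shift eigenvalues to make A stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = np.zeros((p, m))

P = cp.Variable((n, n), symmetric=True)
gamma = cp.Variable(nonneg=True)

# LMI: [[A'P + PA + C'C,  PB + C'D],
#       [B'P + D'C,       D'D - gamma*I]] <= 0,   with P >= 0
M = cp.bmat([[A.T @ P + P @ A + C.T @ C, P @ B + C.T @ D],
             [B.T @ P + D.T @ C,         D.T @ D - gamma * np.eye(m)]])
constraints = [(M + M.T) / 2 << 0, P >> 0]   # symmetrize explicitly for the solver
prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve(solver=cp.SCS)

# In this form of the lemma, gamma bounds the squared gain, so report its root.
print("estimated H-infinity norm of the closed loop:", np.sqrt(gamma.value))
```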
9,623.6
2021-03-10T00:00:00.000
[ "Engineering", "Computer Science" ]
High-threshold quantum computing by fusing one-dimensional cluster states We propose a measurement-based model for fault-tolerant quantum computation that can be realised with one-dimensional cluster states and fusion measurements only; basic resources that are readily available with scalable photonic hardware. Our simulations demonstrate high thresholds compared with other measurement-based models realized with basic entangled resources and two-qubit fusion measurements. Its high tolerance to noise indicates that our practical construction offers a promising route to scalable quantum computing with quantum emitters and linear-optical elements. Introduction.Scalable quantum-computing architectures [1,2] are built on quantum error-correcting codes [3] that identify and correct for errors that quantum hardware experiences as logical algorithms progress.However, it remains difficult to produce an architecture with a sufficiently large number of high-quality qubits to complete long quantum computations reliably.To overcome this technological challenge, we should design bespoke quantum architectures that take advantage of the positive features of scalable physical hardware [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18].Ideally, we should exploit the native operations of the underlying quantum system to minimize the noise processes that will lead to computational errors. Quantum emitters are a promising tool to implement photonic architectures [19,20].They enable the deterministic generation of a variety of entangled states with even a single emitter [21,22], such as one-dimensional cluster states [21,23], via appropriate pulse sequences driving a spin-photon interface interleaved with photon emission.Quantum emitters have been demonstrated with several hardware platforms including quantum dots [24,25], atomic systems [26,27], superconducting circuits [28], and nitrogen-vacancy centers [29].Indeed, deterministic entanglement has recently been demonstrated between as many as 14 photons [27]. Measurement-based models for fault-tolerant quantum computing [30] are particularly well suited for photonic architectures.In this model we prepare entangled resources that can be produced by constant-depth circuits.Measuring these resources process quantum information and, at the same time, extracts syndrome data for error correction.We often consider realizing the resource states by entangling individual physical qubits [6,9] or small, constant-sized, entangled resources [7,12].These proposals place a significant demand on optical hardware to entangle the individual physical systems, using either unitary gates or fusion-based operations [31]. 
Here, we find that we can reduce the complexity of realizing entangled resources by taking advantage of the one-dimensional entangled states we can produce readily with quantum emitters.The remaining entangling op-erations that are needed to perform measurement-based fault-tolerant quantum computing are made at the readout stage using two-qubit fusion measurements [31].Central to our architecture is a specific resource state whose geometry is obtained by foliating [32,33] the Floquet color code [34,35]; an example of a dynamically driven code [36].These codes are of recent interest due to their high thresholds [37,38] and their implementation with weight-two parity measurements.Our numerical simulations demonstrate very high thresholds by refining our architecture.Specifically, we propose using a decorated or 'branched' one-dimensional cluster states to reduce the number of qubits we need to prepare and measure in the fault-tolerant system.In what follows we describe our construction before presenting numerical results. The foliated Floquet color code.In measurementbased fault-tolerant quantum computation we prepare a resource state that can be specified by a graph [23,39].A qubit is initialised in an eigenstate of the Pauli-X matrix on every node of the graph v. Controlled-phase gates are then applied to pairs of qubits whose corresponding nodes share an edge.The stabilizer group [40] of the resource state R is generated by elements R v = X v q∈∆v Z q for each node v, with ∆v the neighbourhood of v and X v , Z v the standard Pauli matrices acting on v.The resource is then measured, projecting it onto the state with stabilizer group M, generated by the commuting Pauli measurements we make.The measurements provide syndrome data S = R ∩ M that is used to identify errors [33,41,42].We find our measurement-based model by foliating [32,33] the Floquet color code [34,35].The resulting graph is shown in Fig. 1(a).We call our model the foliated Floquet color code (FFCC). Foliation is a method for mapping circuit-based models, that are realized with static qubits, onto the measurement-based picture.Roughly speaking, data qubits of the static model are replaced with linear cluster states, where the ordering of the qubits of these linear clusters can be regarded as a time-like direction in the static picture.Measurements that are made in the The resource state can be composed of one-dimensional cluster states, or 'chains', as in (c), shown by thick solid lines, and fusion measurements, marked by wavy red lines.The qubits of each chain are indexed with label τ .We can make variations of our construction with different input resources.We identify the qubits of the chain in (c) with those in (d-f) with their numerical indices.We obtain branched chains (d), up to local Clifford operations, by measuring white qubits of (c) in the Pauli-X basis.We can also produce long chains from small resource states by fusing the first and last qubit of linear chains of length = 4 for example, (e).We can also fuse the end points of short branched chains, (f), where we show a short branch of = 8 qubits. static picture are realized in the measurement-based picture by coupling additional 'check qubits' to the appropriate qubits of the linear clusters that model the time evolution of data qubits of the static model. We obtain a three-dimensional lattice by applying the foliation methods described in Ref. 
[33] to the Floquet Correlation surfaces in the foliated Floquet color code.(a) Shows the orientation correlation surfaces that lie orthogonal to a canonical spatial direction and the temporal direction in green and blue, respectively.We show the microscopic details of these operators in (b) and (c), respectively, with time-like axes, t, marked with arrows. color code [34,35], where data qubits of lie on vertices of a two-dimensional hexagonal lattice, shown at the base of Fig. 1(a).The graph describing R has two types of nodes; let us call them data nodes and check nodes.Data nodes have indices (q, t) where 1 ≤ q ≤ n index data qubits of the hexagonal lattice.The second index 1 ≤ t ≤ T denotes a temporal order.Data nodes (q, t) and (q, t + 1) share an edge for all q and t. The remaining edges to complete the graph for R connect data nodes to check nodes.The check nodes are associated with the edges of the hexagonal lattice.There are three types of edges, that are assigned colors red, green or blue [43].Specifically, we three-color the faces of the hexagonal lattice such that no two adjacent faces have the same color.Edges of the hexagonal lattice are then assigned the color of the two faces they connect.Let us also denote {v, w} = ∂e the pair of vertices of the hexagonal lattice that are connected by edge e.We have check nodes associated to edges of different colors at different times.Let us denote the check nodes with indices (e, t).We have check nodes associated only to the blue, green and red edges at times 3t + 1, 3t + 2 and 3t + 3, respectively.Every check node (e, t) of the appropriate color then shares and edge with the two data nodes (v, t) for both v ∈ ∂e.These check nodes are entangled to the data nodes to correspond precisely to the measurement sequence of the Floquet color code accord-ing to the foliation methods of Ref. [33].This completes the construction shown in Fig. 1(a). Up to lattice geometry, the model we have produced shares many features with the topological cluster-state model [41] where, for now, we assume we measure all of the qubits in the Pauli-X basis, i.e., we project the system onto the stabilizer group M = ±X v to obtain the code detectors that identify errors.The FFCC has local detectors S = R ∩ M on cells of the lattice.We show an example of a local detector in Fig. 1(a).Similiarly, the foliated code gives rise to correlation surfaces that propagate quantum information between input and output regions [2,41,44].We show examples of correlation surfaces in Fig. 2. Additionally, our model can be divided into two disjoint lattices of qubits; the primal and dual lattice, where detectors and correlation surfaces are supported on only one of the two disjoint lattices.The primal and dual lattices are distinguished with black and white vertices in Fig. 1(a) and Figs.2(b) and (c). The common features of our model with that in Ref. [41] mean that we can adopt the fault-tolerant gate set presented in Ref. [2] by reconfiguring our measurement pattern such that gates are performed by braiding different types of defect punctures, and by distilling magic states to complete a universal gate set.It may be interesting to consider adapting the methods of Ref. [9,33,45,46] to our lattice geometry for more general gate operations based on the braiding of twist [47] and corner [33,48] defects.It may also be interesting to investigate implementations of non-Clifford gates with this lattice geometry [49,50]. Quantum computing with fusion measurements. 
Let us now look for practical ways of preparing and measuring the resource state R.Recently Ref. [12] has shown that we can eliminate the difficulty of preparing a large entangled resource state by completing the preparation and readout of a resource state with probabilistic entangling Bell measurements, i.e. fusion operations [31].With an appropriate choice of fusion measurements, we find that we can decompose the graph shown in Fig 1(a) into a series of physical one-dimensional cluster states, that we call chains, and fusion measurements only, see Fig. 1(b).The qubits of the chain are indexed 0 ≤ τ ≤ 3T − 1, see Fig. 1(c) where a single chain is laid out independent of the three-dimensional construction.In the figure each chain is marked by bold lines of different colors where we see that every chain has three qubits at each time step t. It is helpful to bicolor the vertices of the hexagonal lattice black and white such that no two vertices of the same color share an edge.Likewise, data nodes (v, t) have the same color as vertex v of the hexagonal lattice.We identify all of the black qubits (v, 1) of the foliated system at t = 1 with the τ = 0 qubit of each chain.The next qubit of the chain with τ = 1 is identified with the unique edge qubit (e, 1) with v ∈ ∂e and then qubit τ = 2 is identified with (w, 1) = (v, 1) with w ∈ ∂e.The chain then progresses to the next level before repeating, where the τ = 3 element of the chain is identified with (w, 2).The progression continues ad infinitum.In general we have that the τ = 3t(3t + 1)-th qubit lies at qubit (v, t) ((e, t) with v ∈ ∂e), and the 3t + 2-th qubit of the chain lies at (w, t) and w ∈ ∂e with w = v.The next qubit in each chain lies at qubit (w, t + 1).Indices (v, t) and (w, t) are, respectively, black and white (white and black) for odd (even) values of t. By comparing Figs.1(a) and (b) we see the chains are organized such that many of the edges of the resource state graph are completed in the production of the chains.However, some entangling operations remain to be performed.We then complete the resource state with fusion operations.Up to local Clifford operations, we can interpret a successful fusion measurement as a controlled-phase gate, i.e. creation of an edge, followed by two single-qubit measurements in the Pauli-X basis.Fusion measurements shown in Fig. 1(b) therefore complete the entangling operations needed to produce the resource state and subsequently make the single-qubit measurements we need for readout.Specifically, we perform fusion measurements between black (white) data nodes (v, t) and (v, t + 1) at odd (even) values of t.Final read out in single-qubit bases different from Pauli-X, required for universal gate sets, can also be simply implemented by reconfiguring the linear optical fusion circuit used [7,12]. We consider variations where chains are replaced by other resources that may be more readily implemented.In Fig. 1(c) we show a single chain, where the data(check) nodes are marked black(white).As Fig. 1(b) shows, the check nodes are completely entangled to their neighbours ∂e when the chain is produced.We may therefore consider replacing the chain with a decorated chain that is the post-measurement state that would be obtained if the check nodes of a standard chain are measured in the Pauli-X basis.Up to local Clifford operations [39] we obtain the resource state with branches, as shown in Fig. 1(d).This state is readily prepared with quantum emitters, see Appendix A. 
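To make the resource states above concrete at a small scale, the stabilizer generators R v = X v ∏ q∈∆v Z q can be written out directly from the graph. The sketch below (Python with networkx, offered as an illustration rather than the paper's code) prints the generators for a short linear chain and for a path with single-qubit branches attached; the latter is only a stand-in for the decorated chains of Fig. 1(d), whose exact layout follows the construction described above.

```python
# Sketch: stabilizer generators R_v = X_v * prod_{q in N(v)} Z_q of small graph
# states. The "branched" graph is an illustrative path-with-leaves, not
# necessarily the exact decorated chain of Fig. 1(d).
import networkx as nx

def graph_state_stabilizers(G):
    """One Pauli string per node: X on the node itself, Z on its neighbours."""
    nodes = list(G.nodes)
    index = {v: i for i, v in enumerate(nodes)}
    stabilizers = []
    for v in nodes:
        pauli = ["I"] * len(nodes)
        pauli[index[v]] = "X"
        for q in G.neighbors(v):
            pauli[index[q]] = "Z"
        stabilizers.append("".join(pauli))
    return stabilizers

chain = nx.path_graph(5)                      # 5-qubit linear cluster state
branched = nx.path_graph(4)                   # backbone qubits 0..3
branched.add_edges_from([(1, 4), (2, 5)])     # leaves 4 and 5 as branches

for name, G in [("chain", chain), ("branched", branched)]:
    print(name)
    for s in graph_state_stabilizers(G):
        print("  ", s)
```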
We call the measurementbased model realized with these decorated chains the branched construction.The branched construction therefore neglects to produce check nodes and only instead construct the data nodes, such that the edge measurements are effectively already made. We might also look for variations where we produce the large resource states from smaller entangled resources.For example, fusion measurements between qubits of = 4-qubit linear cluster states, as shown Fig. 1(e), produce the chains we use in our construction.We can also interpolate between the short chain construction and the branched chain construction by connecting finite-size branched chains at their endpoints via fusion measurements.We show a short branched chain of length = 8 qubits in Fig. 1(f). Error correction.We can also adopt methods used for the topological cluster state to perform error-correction with the FFCC.We are interested in correcting Pauli errors as well as heralded qubit erasure occurring on the physical qubits of the system. Every single qubit supports exactly two detectors [35].It follows that, if a Pauli error occurs, two detectors are violated.We can therefore regard a single error as a string-segment with violated detectors at its end points [1,41].In general, multiple Pauli errors compound to make multiple strings of potentially greater length.We can correct the errors by finding pairs of violated detectors that are corrected by short string operators.Provided the proposed correction has an equal parity of errors supported on the correlation surface as the initial error, we declare the correction successful.We find nearby pairs of violated detectors using PyMatching [51]; an implementation of minimum-weight perfect matching [1]. We also adopt the methods presented in Ref. [52] for dealing with heralded erasure.If a qubit v is erased, we no longer have access to its two stabilizers S b and S c that support v. To deal with this, we neglect the erased qubit, and we replace these two supporting stabilizers with their product S b S c , thereby creating a super cell.In general we have to update the lattice to produce super cells for all the qubits that experience erasure.Error correction for Pauli errors on the updated lattice then proceeds in the same way using super cells on qubits that have not been erased.It is also important to find a correlation surface that contains no erased qubits.To do so, we multiply the correlation surface by detector operators to find a variation of the operator such that no qubits are erased, which can be readily achieved by Gaussian elimination [53].Otherwise, we consider error correction to fail. The error-correction methods described above are readily adapted for the fusion error model [12].In this model, a fusion erasure takes into account the erasure of measured qubits as well as the possible failure of the fusion measurement.The fusion error model also includes fusion measurement errors induced by Pauli errors on the fused qubits. Threshold estimates.We evaluate threshold error rates for a phenomenological fusion-based noise model as in Ref. [12] where erasures and errors occur independently for each fusion measurement and with a probability equal to the associated noise rate.Fusion networks are simulated with periodic boundaries so that we can check logical failures for the two distinct correlation surfaces shown on orthogonal planes in Fig. 2. 
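The decoding step described above can be exercised on a toy example. The sketch below uses PyMatching (assuming a recent version of its check-matrix interface) to decode independent bit-flips on a small ring-shaped check matrix in which every fault flips exactly two detectors, mirroring the string-like error structure described above; the actual FFCC matching matrix is not reproduced here. Erasures are handled by down-weighting the corresponding faults, which is a simplification of the super-cell treatment described above.

```python
# Sketch: MWPM decoding with PyMatching on a small ring-like check matrix that
# stands in for the FFCC matching matrix. Erased faults are made nearly free to
# match through, a simplification of the super-cell construction.
import numpy as np
import pymatching

n = 10
H = np.zeros((n, n), dtype=np.uint8)            # each fault flips two detectors
for q in range(n):
    H[q, q] = 1
    H[(q + 1) % n, q] = 1
L = np.ones(n, dtype=np.uint8)                  # a representative logical operator

rng = np.random.default_rng(7)
p_err, p_erase = 0.03, 0.05
error = (rng.random(n) < p_err).astype(np.uint8)
erased = rng.random(n) < p_erase
error ^= (erased & (rng.random(n) < 0.5)).astype(np.uint8)   # erased qubits are randomized

weights = np.where(erased, 1e-6, np.log((1 - p_err) / p_err))  # erased faults cost ~0
matching = pymatching.Matching(H, weights=weights)

syndrome = (H @ error) % 2
correction = matching.decode(syndrome)
residual = (error + correction) % 2
print("logical failure:", bool((L @ residual) % 2))
```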
Thresholds are evaluated by comparing the logical error rate of our decoder for different noise parameters and different lattice sizes. We evaluate logical failure rates with 10 4 Monte-Carlo samples (see Appendix C for details). We also report on an analysis for bare lattices in Appendix B. We show threshold error rates for different constructions in Fig. 3 for various rates of error and erasure that occur under fusion measurements.
Fig. 3 Solid lines show thresholds when constructing the FFCC lattice by fusing branched chains with length ∈ {4, 8, 14} and for the limit → ∞ where the length is much longer than the lattice unit cell size. For comparison, we also show the performance of the constructions from Ref. [12] using hexagonal and four-qubit star-shaped resource states by the dashed and dotted black lines, respectively.
The highest thresholds are obtained using the branched chain construction. The threshold interpolates between an erasure rate of ∼13.2% and an error rate of ∼1.5%. The thresholds we obtain outperform previous constructions based on hexagonal and star-shaped resource states used to produce Raussendorf lattice structures [12]. We reproduced the thresholds for these models; they are shown by dashed and dotted lines in Fig. 3. Such improvements manifest the advantages of constructing a model with a lower-valency lattice, a direct consequence of having only weight-two parity measurements, while still inheriting the good error-correction performance of the underlying Floquet color code. The thresholds for constructions that include fusing branched chains of constant size, as shown in Fig. 1(e-f), are also reported in Fig. 3 for different lengths ∈ {4, 8, 14}. Significant improvements over previous constructions can be observed already for chains with a moderate constant size of 14 qubits. Thresholds relative to erasure can also be significantly improved by biasing the fusion failures, as recently shown in Refs. [54,55]. In Appendix D we report how these techniques can enhance erasure thresholds also in our constructions.
Discussion. To summarize, we have demonstrated a practical architecture to realize fault-tolerant measurement-based quantum computation using one-dimensional entangled resources. Such resource states are readily and deterministically prepared with quantum emitters, avoiding the large overheads required for preparing entangled resource states via multiplexed probabilistic processes in all-optical approaches [56], and they can also be relevant for other physical systems with probabilistic entangling operations [57][58][59]. Furthermore, our numerical simulations demonstrate very high thresholds. We obtained these results focusing on phenomenological noise that models all noise sources with a single parameter. This allows us to compare our proposal with others already presented in the literature. In the future, it will be important to run simulations that consider more representative models of the noise sources that we anticipate in the laboratory. In Appendix E we discuss some of them in the context of preparing linear cluster states with quantum emitters. We argue that the high thresholds we have obtained are due to the resource-efficient and practical construction we have designed, which requires a relatively low number of fusion operations for its realization. Better models with higher noise tolerance could, perhaps, be obtained by finding more general lattice structures, see e.g. Refs. [60,61]. Ultimately, we may find more robust models by finding more general resource states that can be readily produced, for example, with interactions between multiple emitters [22,62].
Appendix A: Generation of branched chains via a single quantum emitter Here we describe how to generate branched chain clusters with a single quantum emitter.We consider the recursive procedure as described in Fig. A1 and its caption.The procedure describes the generation in the graph state picture, where the spin-controlled photon generation at the quantum emitter corresponds to adding a leaf, i.e. a single-edged node, to the node associated to the spin qubit.The qubit associated to the spin is shown in orange in the figure.We also use a graph transformation called local-complementation (LC) operation.The operation, that acts on a node of the graph, applies the complementary subgraph of its neighboring qubits.Notably, local complementation can be implemented via local Clifford gates [39].It, therefore, corresponds to single-qubit operations on the spin qubit and on the photons, which can be readily implemented deterministically.The procedure can thus be readily translated into a sequence of photon generation operations by the quantum emitter, that are interleaved with single-qubit gates on the spin, as well as local operations on the emitted photons. Finally, we note that, while the procedure generates branches with a single leaf per branch, it can be easily expanded to multiple leaves per branch.It is in fact sufficient to repeat the photon generation multiple times before doing the local complementation steps.Such states, which can be seen as a more hybrid version in between star graphs and linear chains, present nodes with higher valency, and could thus be investigated for the resourceefficient construction of more general lattices.Here we present threshold estimates for the FFCC model with periodic boundary conditions with an independent and identically distributed noise model for both qubit erasure and Pauli bit-flip errors.We consider both a weighted-and an unweighted-noise model where error rates are weighted according to the valency of the underlying graph state [60].We evaluate thresholds by measuring logical failure rates for varying system sizes and noise parameters.For each combination of parameters we perform 10 4 Monte-Carlo runs to estimate the logical error rate, see Appendix C for more details. In Fig. A2 we present thresholds for models undergoing an unweighted independent and identically distributed noise model, delimiting the region in parameter space where the model is below threshold.Considering bare lattices, the thresholds obtained with our model are lower than those of the topological cluster state model on the Raussendorf lattice [2,52], also reproduced in Fig. A2.These results are perhaps unsurprising, given that the FFCC has demonstrated a lower noise threshold compared to the surface code under a circuit-based noise model [35].Thresholds are improved when considering the lattice for the FFCC being built via branched chains.This is because we effectively remove the two-valent nodes from the lattice.Nevertheless, measurement-based quantum computation still demonstrates lower thresh- olds than those of the topological cluster-state model on the Raussendorf lattice.For completeness, we compare the logical error rate for different orientations of the correlation surface, with orientations that lie orthogonal to a canonical spatial direction, and the time-like direction.We observe that thresholds are the same for both types of correlation surface, see Figs. (b) and (c). 
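Returning to the generation procedure of Appendix A: the two graph-level primitives it relies on, adding a leaf to the spin's node (photon emission) and local complementation (implementable with local Cliffords), are easy to express directly on a graph. The sketch below shows them with networkx; the particular interleaving that yields a branched chain follows Fig. A1 and is only indicated schematically here.

```python
# Sketch of the graph-level primitives from Appendix A: adding a leaf (photon
# emission from the spin node) and local complementation (toggling every edge
# among a node's neighbours). The operation sequence below is illustrative; the
# exact recursion that produces a branched chain is given in Fig. A1.
from itertools import combinations
import networkx as nx

def emit_photon(G, spin, new_photon):
    """Photon emission adds a leaf node attached to the spin's graph node."""
    G.add_edge(spin, new_photon)

def local_complementation(G, v):
    """Toggle every edge between pairs of neighbours of v."""
    for a, b in combinations(list(G.neighbors(v)), 2):
        if G.has_edge(a, b):
            G.remove_edge(a, b)
        else:
            G.add_edge(a, b)

G = nx.Graph()
spin = "spin"
G.add_node(spin)
emit_photon(G, spin, 0)            # new leaf on the spin node
local_complementation(G, spin)     # LC operations reshape the graph locally
local_complementation(G, 0)
emit_photon(G, spin, 1)            # next photon extends the structure
print(list(G.edges()))
```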
We find then a discrepancy between the results obtained using the fusion-based noise model, and the independent and identically distributed noise model studied here.Unlike the fusion-based noise model, the independent and identically distributed noise model does not account for noise that accumulates in the construction of the lattice [60].The fusion-based description of measurement-based quantum computing [12] provides a better model, as it facilitates a direct description of probabilistically fusing small resource states with linear optics, and enables a simple treatment of failed fusion operations and qubit loss and errors when building faulttolerant structures. Finally, we study the weighted-independent and identically distributed noise model.This noise model partially accounts for the complexity of a lattice construction while still being platform-agnostic.Here, the error probability for each qubit in the lattice is multiplied by its valency [60]. The threshold estimates for the bare lattices using the weighted error model are shown in Fig. A3.The results obtained are qualitatively similar to those obtained in Fig. A2, where now the difference in performance between the FFCC lattice and the topological cluster state model on the Raussendorf lattice is smaller compared with the study using the unweighted phenomenological error model.This is another indication that, because the FFCC lattice has been designed for ease of practical construction rather than of the bare lattice itself, the closer the error model gets to practical implementations the better the performance of FFCC becomes, relative to other well-studied constructions. Appendix C: Simulation methods Here we provide details on the numerical implementation of noise models and decoders to estimate thresholds in both the simulations for measurement-based lattices and fusion-based constructions.The source code used is openly accessible on GitHub [63].The simulations and decoding can be performed in the same way when considering both bare lattices and fusion networks.In the first case, the decoding is performed using a matching matrix that describes the lattice qubits that contribute to each detector, while in the second case it instead describes the fusions that contribute to each detector in an analogous fashion.The phenomenological independent and identicallly distributed error models also have identical descriptions via these matching matrices: erasure and error for a fusion outcome has the same effect of erasure and error for a qubit.This means that simulating noise can be done in the same manner for both cases, it is only the interpretation of the results that differs. 
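The per-sample routine described above, i.e. sampling noise on the faults of a matching matrix, decoding with MWPM, and checking whether a logical error remains, can be written as a short loop. The sketch below runs it on a deliberately small stand-in (a ring of qubits with iid bit-flips decoded by PyMatching) rather than the FFCC matching matrix; erasure is omitted in the toy, and shot counts are reduced from the 10 4 samples used in the study.

```python
# Sketch of the Monte Carlo estimation of the logical failure rate: sample iid
# errors, decode via a matching matrix with PyMatching, and count residual
# logical errors. A toy ring code stands in for the FFCC matching matrix.
import numpy as np
import pymatching

def ring_code(n):
    """Cyclic repetition-code-like check matrix: each fault flips two detectors."""
    H = np.zeros((n, n), dtype=np.uint8)
    for q in range(n):
        H[q, q] = H[(q + 1) % n, q] = 1
    return H, np.ones(n, dtype=np.uint8)

def logical_rate(n, p_err, shots=2000, seed=0):
    H, logical = ring_code(n)
    matching = pymatching.Matching(H)
    rng = np.random.default_rng(seed)
    fails = 0
    for _ in range(shots):
        error = (rng.random(n) < p_err).astype(np.uint8)
        correction = matching.decode((H @ error) % 2)
        fails += int((logical @ ((error + correction) % 2)) % 2)
    return fails / shots

# Repeating the estimate for several noise values and system sizes gives the
# failure-rate curves from which a threshold crossing can be read off.
p0_err = 0.08
for x in (0.5, 0.8, 1.0, 1.2):
    print(x, {n: logical_rate(n, x * p0_err) for n in (8, 16, 32)})
```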
To estimate the thresholds we perform Monte-Carlo simulations where, for given values of noise parameters, in each sample we model qubit erasures and errors for each single-qubit measurement (fusion outcomes) in the primal lattice (fusion network) independently.Decoding is performed via a minimum-weight perfect matching(MWPM) decoder, which we implement with PyMatching [51].The MWPM decoder provides a correction, which we combine with the real error to determine if a logical error has occurred.For each combination of noise parameters, we perform 10 4 Monte-Carlo samples to estimate the logical error rate, and repeat it for various lattice sizes L ∈ {4, 8, 12} to identify thresholds.As the dual lattices and networks have an identical structure to the primal lattice for all the models we study, we focus on the primal lattice as the dual lattice will have the same performance. To calculate thresholds in the two-dimensional phase space with erasure and loss and identify the correctable regions, we perform linear scans of erasure and error values, obtained setting (p erase , p err ) = (xp 0 erase , xp 0 err ) and varying x.For each value of p 0 erase and p 0 err , a threshold is observed along the linear scan in the two-dimensional space, which corresponds to a single mark plotted in Figs.A2, 3 and A3.Repeating this procedure for various values of p 0 erase and p 0 err allows us to reconstruct the fault-tolerant regions.An example of this procedure for the fusion-based construction of the FFCC lattice using branched chains ( → ∞ case) is shown in Fig. A4.versa, can significantly improve the fault-tolerant threshold relative to photon loss.Such advantages arise from mechanisms analogous to the XZZX surface code constructions [11,17,64]: the idea is to exploit the bias to restrict errors in 2-dimensional cuts of the 3-dimensional code, where they can be decoded with higher success probability.Furthermore, Ref. [55] also showed that further improvements, by an additional factor ∼ 3 can be obtained by adding adaptivity in the failure bias choice.These advantages come without any additional hardware overhead compared to the unbiased case, although the adaptive case requires fast classical processing of the fu-sion outcomes, feedback, and reprogramming of the fusion circuits. In Refs.[54,55] the biasing techniques were demonstrated for the fusion constructions based on the Raussendorf lattice from Ref. [12].These techniques are readily adapted to our construction as well, as we show here.Analogously to Ref. [54], we consider passive biasing of fusion failures and show how it can readily be implemented with our model to improve loss thresholds.In Fig. A5a the configuration we use to bias the fusions is described, with an alternation of three layers with failure fully biased on the primal fusion outcomes (i.e.fusion failure always erases only the fusion outcome associated to the dual detectors) and three layers with failure fully biased on the dual fusion outcomes.In this way, in the low loss regime, both in the primal and dual lattices, erasure errors due to fusion failures are not transmitted between the blue and red detection cells (see Fig. A5b), resulting in the restriction of fusion erasures within twodimensional layers. In Fig. A5c we report the fault-tolerance thresholds relative to photon loss with biased and unbiased fusions for our construction with branched chains.Here, we consider a loss-only noise model, in analogy with Ref. 
[54], and a physical boosted fusion gate with failure probability p fail = 25% [65].A significant improvement, by more than a factor of two, is obtained for the loss threshold for the passive-bias case (0.52%) compared to no bias (0.24%).For the same noise model and physical fusions with failure probability p fail = 25%, Ref. [54] obtains a threshold for the fusion-based construction of the Raussendorf lattice from six-ring resource states can tolerate up to 0.37%, indicating the FFCC construction provides improved thresholds also in when biased fusions are utilized.These performance enhancements are summarised in Table A1. Adaptive approaches such as those investigated in Refs.[55,66] can then also be explored to further improve the loss threshold for our constructions, which we will investigate in future works. Appendix E: Noise sources in resource state generation Errors in the outcomes of fusion measurements can arise from a variety of physical processes, e.g.imperfect resource state preparation, photon distinguishability, etc.In the main text, for simplicity, we used a phenomenological model where all such noises are taken into account into a single parameter, the "fusion error rate", and fusion errors are assumed to be independent and identically distributed distributed.In general, physical errors will differ from such a model to some degree, e.g. by inserting correlations between separate fusions.We here qualitatively discuss errors in the generation of the resource states with quantum emitters to argue that, when considering more detailed error models, one-dimensional FIG.A6.Circuits associated with the graph state generation for star graphs (a) locally equivalent to GHZ states, chains (b), and branched chains (c) are shown, together with Pauli X and Z error probabilities for individual qubits and error correlation matrices between different qubits.In all plots, the spin qubit (shown in red in the circuits) has qubit index zero, while photonic qubits (shown in blue in the circuits) have indexes which follow their order of generation.All plots consider resource states a single spin qubit and 20 photonic qubits. cluster states are preferable compared to other types of resource states generatable with quantum emitters due to maintaining locality in the generated correlations.This observation was already presented for chains by Linder and Rudolph in Ref. [21], with a detailed error model for a quantum emitter based on a quantum dot.Here we extend a similar investigation to branched chains and GHZ-type (star graphs) states, which represent all possible classes of resource graph states (up to local transformations) generatable with a single quantum emitter [22].Fig. A6 shows the circuits that correspond to the generation of GHZ states, chains, and branched chains with a spin-photon interface in a quantum emitter Ref. 
[21].The spin-dependent photon generation corresponds to CNOT operations between the spin qubit and the photonic qubits initially prepared in the |0 state, and different types of states are obtained applying Hadarmad gates on the spin qubit (corresponding to π/2 pulses) between consecutive photon generations.To have a qualitative description, we simulated errors arising in the resource state generation by considering a circuit noise model for such circuits where T 1 and T 2 noise processes of the spin are simulated via a bit-and phase-flip channel adding an X and a Z Pauli error to the spin qubit between the generation of two consecutive photons independently and with probability p 1 and p 2 , respectively.Noisy gate operations on the spin are also simulated by applying a perfect operation followed by independent bit-and phaseflip channels, both adding a Pauli error with probability p gate .For simplicity, we considered p 1 = p 2 = p gate = p to describe the system with a single noise parameter p. In Fig. A6 we report the final probability to have a Pauli X and Z error on each qubit (where the spin is always labeled with index zero), as well as the correlation matrices for Pauli errors between qubits, obtained with the noise model described above.It can be observed that, while for GHZ-type states very non-local error correlations are present, for the one-dimensional resource states (chains and branched chains) correlations are always restricted between neighboring or next-neighboring photons, consistently with the observations in Ref. [21].This indicates that if fusions are applied locally, as in our constructions, such locality will be maintained when using chains and branched chains as resource states, so errors will not spread rapidly in the lattice.This is not necessarily the case for GHZ-states due to the arising non-local correlations, suggesting further advantages for one-dimensional resource states when considering more physical noise models.A full analysis of fault-tolerance performance for physical noise models specific to hardware platforms for quantum emitters will be investigated in future work. 12 FIG. 1 . FIG.1.Realizing the foliated Floquet color code with linear cluster states and fusion measurements.(a) The resource state R we construct is illustrated by graph.Detectors S ∈ S = R ∩ M are the product of Pauli-X terms at the boundaries of local cells.The vertical time-like axis is labeled t that indexes layers of the three-dimensional structure.(b) The resource state can be composed of one-dimensional cluster states, or 'chains', as in (c), shown by thick solid lines, and fusion measurements, marked by wavy red lines.The qubits of each chain are indexed with label τ .We can make variations of our construction with different input resources.We identify the qubits of the chain in (c) with those in (d-f) with their numerical indices.We obtain branched chains (d), up to local Clifford operations, by measuring white qubits of (c) in the Pauli-X basis.We can also produce long chains from small resource states by fusing the first and last qubit of linear chains of length = 4 for example, (e).We can also fuse the end points of short branched chains, (f), where we show a short branch of = 8 qubits. FIG. A1 . FIG. 
A1. Steps in the recursive procedure for the generation of branched chains with a quantum emitter. Each recursion adds a branched layer in the one-dimensional cluster. In each recursion, we (step 1) generate a new photon with the quantum emitter, which effectively adds a new leaf to the graph state attached to the spin node. Then (step 2) local operations on the spin and photons are performed to apply a local complementation (LC) operation on qubit 0 and (step 3) on qubit 1, which correspond to the graph transformations shown. Finally, (step 4) by generating a new photon we again have a branched chain with now an additional layer. Repeating these steps generates a branched chain of arbitrary length.
FIG. A2. Threshold error rates for the measurement-based model under an unweighted independent and identically distributed noise model for both Pauli errors and loss. The correctable regions in the phase diagram are shown for the FFCC lattice for time-like (Z direction, dark blue) and space-like (X direction, purple with crossed marks) correlation surfaces shown in Fig. 2, the branched FFCC lattice obtained transforming the FFCC via X-measurements on the (now virtual) two-valent nodes (dark green), and the topological cluster state model on the Raussendorf lattice [2], shown by a dashed line. In all lattices, the thresholds for time-like and space-like correlation surfaces present identical behavior, and are thus shown only for the FFCC lattice, in which they in fact overlap.
FIG. A4. Examples of threshold estimates along linear paths (p erase , p err ) = (xp 0 erase , xp 0 err ) in the two-dimensional erasure-error phase space. The data reported here are for the fusion-based construction of the FFCC lattice using branched chains ( → ∞ case), with each threshold representing a marker in the associated plot in Fig. 3.
TABLE A1. Summary of performances of different fusion-based fault-tolerant constructions in terms of thresholds.
8,157.8
2022-12-13T00:00:00.000
[ "Physics", "Computer Science" ]
Assessment in Vitro of Antibacterial Activity of a Manipulated Product, in Solution Form, Obtained from Dry Extract of Leaves of Psidium guajava L. The species Psidium guajava L. is a perennial shrub belonging to the Myrtaceae family; it is popularly known as guava, and its leaves are used therapeutically to treat various diseases. This study aims to evaluate the in vitro antibacterial activity of a manipulated product obtained from the dried extract of the leaves of P. guajava L. against standard ATCC bacteria and clinical isolates. The tests were conducted on the following bacterial samples: Staphylococcus aureus (ATCC 25923), Pseudomonas aeruginosa (ATCC 27883), Escherichia coli (ATCC 25922), Salmonella spp., Acinetobacter baumannii, Proteus mirabilis, Shigella flexneri, Staphylococcus epidermidis, Staphylococcus haemolyticus, Streptococcus agalactiae, and Streptococcus mutans. The assays included phytochemical screening of the ethanolic extract (EE) of P. guajava leaves, microbiological control and physical-chemical analysis of the product, and microbiological tests such as the agar diffusion (well) method, minimum inhibitory concentration (MIC), and minimum bactericidal concentration (MBC); an evaluation of the hemolytic capacity of the solution and an in vitro cytotoxic activity assay were also performed. The best result of the product in the agar diffusion method was obtained against Staphylococcus epidermidis, while the lowest MIC and MBC were obtained against Staphylococcus aureus (ATCC 25923). The product showed no hemolytic activity and no cytotoxic activity at the tested concentrations. According to the test results, the production of a pharmaceutical formulation derived from the dry extract of Psidium guajava appears feasible, since it showed considerable antibacterial activity. Introduction Because of the biodiversity present in the different biomes of Brazil, there is a growing demand for natural products by domestic and international pharmaceutical industries, which drives scientific research into natural drugs. A study can be more effective if the investigation covers the pharmacological potential of various species of a particular genus, guided by popular medicinal use (Duarte, 2006). According to the World Health Organization, a medicinal plant is "any plant which, in one or more of its organs, contains substances that can be used for therapeutic purposes or that are precursors of semisynthetic drugs" (World Health Organization [WHO], 1998). The importance of medicinal plants to the population, especially populations with low purchasing power, is well established. These populations find simple and accessible solutions in the tradition of using medicinal plants against various diseases (Vieira & Araújo, 2012). The major problem with this practice is the lack of guidance for much of the population that uses this type of treatment, so that in some cases there is an aggravation of the problem, poisoning, or drug interactions (Filho, 2009).
Due to the great accessibility and use of medicinal plant therapy in Brazil and worldwide, standardization of the control of such drugs is necessary. The plants need to be properly cultivated, collected, and identified, and must be free of foreign matter, parts of other plants, and inorganic or microbial contamination. Medicinal plants must meet certain quality standards in order to satisfy minimum criteria of efficacy and safety (Souza-Moreira, 2010). Since the introduction of antibiotics such as penicillin, treatment options for infectious processes have been becoming progressively narrower as resistance spreads. Members of the family Enterobacteriaceae, Pseudomonas, and Streptococcus are examples of antibiotic-resistant organisms, and one of the measures that might be taken to limit bacterial resistance would be the improvement or creation of antimicrobial agents (Neu, 1992). The World Health Organization report on global surveillance of antimicrobial resistance shows that resistance is no longer a forecast for the future; it is happening at the present time, around the world, and it is endangering the ability to treat common infections in communities and hospitals. Without urgent, coordinated action, the world is heading toward a post-antibiotic era, in which common infections that have been treatable for decades may kill again (WHO, 2014). Due to the problem of bacterial resistance to the antibiotics available on the market, further research on the study and evaluation of natural products with antimicrobial properties became necessary. Among the main tools in this context, ethnopharmacological studies stand out (Albuquerque & Hanazaki, 2006). Natural products have proven to be quite effective from the standpoint of antimicrobial activity (Cunico, 2004; Maia, 2009; Aresi, 2011). According to studies by Souza et al. (2007), the compounds primarily responsible for the antimicrobial activity of medicinal plants may be flavonoids and tannins. According to Schenkel et al. (2002), among the secondary metabolites produced by plants, saponins are one of the most prominent classes because of their wide distribution in the plant kingdom and their important biological activities, such as anti-inflammatory, antimicrobial, hemolytic, and antiviral activity. Saponins are compounds originating from the secondary metabolism of plants, usually found in the tissues most vulnerable to attack by fungi, bacteria, or predatory insects. These compounds act as a chemical barrier or as a protector within the plant's defense system (Lima, 2009). Saponin is a metabolite that is very present in Psidium guajava (Maia, 2009). Guava is native to tropical America and is now distributed in all tropical and subtropical regions of the world, including some temperate regions of Europe at altitudes of up to 1,200 m. There are over 90 varieties of guava, and most production occurs in Brazil, India, Colombia, Cuba, the United States, and Mexico (Maia, 2009). The species P. guajava L. has several therapeutic indications and is grown throughout Brazil and the world. The use of guava plant parts signals the importance of scientific research on this species, in order to evaluate its effects in therapy and to validate its use within standards for the rational use of medicinal plants. This study is distinctive in presenting a comparative approach between the action of the leaf extract and a pharmaceutical product derived from the same plant. Thus, this work was carried out in order to determine the antimicrobial activity of the extract of guava leaves (Psidium guajava L.)
and of the product derived from the dried extract of the leaves of this plant against a selection of microorganisms of clinical interest and standard ATCC strains. This may open up the possibility of treatment against some bacteria. Type of Study and Place of Research This work is a descriptive experimental study carried out in the Microbiology Laboratory of the Pharmacy building at UFMA (Federal University of Maranhão). Botanical Material The P. guajava L. used to prepare the EE is cataloged and identified in the Ático Seabra Herbarium of UFMA under number 1203. This botanical material was collected in São Luís/MA, at UFMA. Ethanolic Extract Obtaining The ethanolic extract (EE) of P. guajava L. (guava) leaves was obtained from the fresh plant by a cold maceration process. A concentration of 0.1 g/mL of ethanolic extract was obtained from 50 g of plant material and 450 mL of ethanol (99.5%). Initially, the leaves were duly selected, dried, ground, and packaged in a screw-cap glass container protected from light with aluminium foil for 14 days with the extractor solvent, which was agitated periodically. Thereafter, the mixture was filtered and the ethanolic extract obtained was packed in a suitable amber glass container. The dry weight obtained was 292.5 g. The preparation of the extract was carried out at the Pharmacy Microbiology Laboratory of the Federal University of Maranhão, where the antibacterial activity tests were also performed. Manipulated Product: Solution A product was manipulated from the dried extract: a hydroalcoholic solution (Pharmacopoeia, 2010). The hydroalcoholic solution was prepared with 20% alcohol, glycerin, propylene glycol, and water (q.s.), in addition to 1% of the dry extract of P. guajava L. leaves, and placed in an amber vial. A quantity of the base product was set aside for use as a control in the tests with the product (hydroalcoholic solution). Phytochemical Analysis of the EE of P. guajava Leaves Based on the results obtained and the comparison with data described in the literature, the presence of secondary metabolites of P. guajava is clear (Ilha, 2008; Okamoto, 2010). The results are classified as negative (-), weakly positive (+), moderately positive (++), and strongly positive (+++). Phytochemical analysis of the EE showed the presence of flavones, flavonoids, and xanthones, as well as condensed tannins and saponins classified as moderately positive, as shown in Table 1. Flavonoids are bioactive compounds of vegetable origin present in the human diet, and they exhibit many biological properties. Examples of flavonoid activities include the ability to modulate many enzymes and action on the vascular system, including anti-inflammatory action. Furthermore, they act in the reduction of atherosclerotic plaques, inhibition of platelet aggregation, promotion of vasodilation, hormonal action (especially of isoflavones), and significant antioxidant activity (Dovichi & Lajolo, 2011). Most plants carry tannins, which may be present in several parts of the plant, such as the roots, wood, bark, leaves, fruits, seeds, and sap. However, tannin content varies not only from one plant to another but also from one part to another of the same plant (Battestin et al., 2004).
Tannins have various applications, mainly related to their astringent properties. Among the functions attributed to them, we can highlight antidiarrheal, antiseptic, antimicrobial (Ilha et al., 2008), and antifungal effects (Monteiro et al., 2005). Furthermore, tannins are haemostatic and may serve as an antidote in cases of poisoning. In the healing of wounds, burns, and inflammation, tannins assist by forming a protective layer over the injured epithelial tissues, beneath which the healing process can occur naturally (Monteiro et al., 2005). Saponins are found in fruits, vegetables, nuts, seeds, stems, flowers, tea, wine, honey, and propolis. They form a foam (like soap) when mixed and stirred in an aqueous medium. The antimicrobial activity of saponins has been proven when extracted from the roots, stem bark, leaves, and wood of certain plants (Food Brazil, 2010). Analysis of the Manipulated Product The obtained product had good primary stability, with no phase separation or other instability in the tests performed, such as centrifugation. Regarding the sensory characteristics, the solution is dark brown, probably due to the color of the P. guajava leaf extract. Likewise, the characteristic odor of the plant remained, despite the other substances making up the product. Chemical and Physical Analysis and Microbiological Control of the Product The results of the microbiological control and of the chemical and physical analysis of the product (1% solution) are described in Tables 2 and 3, respectively. The results of the microbiological control indicate no contamination of the solution, since no microorganisms were detected in any of the three culture media used, from the presumptive test to the confirmatory test. The chemical and physical characteristics of the solution were analyzed to establish the primary stability of the product. In this analysis, the formulation was within the appropriate standards. The pH of the skin is around 4.9 to 5.9, and the pH of the solution was 5.1, indicating the possibility of a product for topical use. Screening: Agar Diffusion Method for the EE and the Solution The antimicrobial activity of the EE was observed in solid culture medium against some bacterial samples. The largest halo obtained from the EE, which remained close to that obtained with the solution, was against S. epidermidis. As noted in Table 4, the bacterial samples that showed larger halos for the solution were: A. baumannii, S. flexneri, S. mutans, S. haemolyticus, and S. epidermidis. As negative controls, we used the base product and the solvent of the ethanolic extract, ethyl alcohol 99.5%. As a positive control, tobramycin (10 mcg) was used. Minimum Inhibitory Concentration (MIC) of the EE and the Solution Concerning the EE, the lowest MIC found was 3.125 mg.mL-1 for the samples of ATCC S. aureus, A. baumannii, and S. epidermidis. For the bacteria S. agalactiae, ATCC P. aeruginosa, and ATCC E. coli, the MIC found was 6.25 mg.mL-1. In the case of the solution, the best MIC results were against ATCC S. aureus with 0.156 mg.mL-1, A. baumannii with 0.625 mg.mL-1, and S. epidermidis with 1.25 mg.mL-1. Future studies are needed to determine the MIC of the isolated active ingredients found in the leaves of P. guajava, since active ingredients in isolation may act at much lower concentrations. The MIC results are shown in Table 5. The EE of P.
guajava leaves showed weak inhibition compared with the solution against all tested bacterial samples, probably because it is a crude extract. The solution, in contrast, showed good antibacterial activity against the bacteria tested, both gram-positive and gram-negative. However, it is emphasized that there was better activity against gram-positive bacteria at lower concentrations, as for the standard ATCC S. aureus. Other authors have also reported the antimicrobial activity of P. guajava extract (Gonçalves, 2008; Okamoto, 2010). The efficacy of a medicinal plant may be due not to a single major active component, but to the combined action of different compounds originally present in the plant. There are studies in the literature on the synergistic effect between extracts and antibiotics, in which the clinical efficacy of these antibiotics is increased by the extracts, as with the extract of P. guajava, for example (Maia et al., 2009). This fact demonstrates the importance of future studies aimed at selecting a synergistic blend of active compounds with improved therapeutic properties. The antibacterial activity of plant extracts against Acinetobacter baumannii has been reported by Borges (2007), with flavonoids possibly responsible for the activity, as also mentioned by Lucarini (2009). Other metabolites that may be responsible for antibacterial activity are tannins (Scalbert, 1991) and saponins (Food Brazil, 2010). Minimum Bactericidal Concentration (MBC) of the EE and the Solution The best MBC results for the EE of P. guajava leaves were 1.56 mg.mL-1 for the S. epidermidis sample and 3.125 mg.mL-1 for the standard (ATCC) S. aureus sample, as shown in Table 6. For the solution, the best MBC was against the standard ATCC S. aureus bacterium, at 0.156 mg.mL-1; next come the MBC values for the clinical isolates of A. baumannii and S. epidermidis, at 1.25 mg.mL-1. This shows the importance of this product as a possible therapeutic option, since these bacteria are highly prevalent among the multidrug-resistant bacteria found in hospitals. A study by Andrade (2006) showed that, among multidrug-resistant bacteria found in the ICU, 19% were Staphylococcus aureus and 14.3% were A. baumannii. Nosocomial infections, which are mainly caused by S. epidermidis and S. aureus, are a serious problem in hospitals, occurring most frequently in intensive care units (ICUs) (Michelim et al., 2005). In the work of Michelim et al. (2005), 76.5% of 98 clinical isolates of S. epidermidis from ICU patients were shown to be multiresistant to antibiotics. Search for Hemolytic Activity in the Culture Supernatant (Microplate) In this study, in vitro hemolysis tests with sheep and horse blood were employed to evaluate the hemolytic activity of the solution derived from the dry extract of P. guajava leaves. The highest concentration of the solution used in the test was 5 mg.mL-1 and the lowest was 0.3125 mg.mL-1. Under these conditions, no positive result was observed, meaning that the product derived from the dry extract has no hemolytic activity at the concentrations tested. In addition, this test is indicative of cytotoxic activity, and this result is a good premise in this regard; however, it is necessary to carry out further tests.
In Vitro Cytotoxic Activity Assay Cytotoxicity is expressed as the concentration of the substance inhibiting cell growth by 50% (CD50). Three replicate plates were used to determine the cytotoxicity of each sample, and this test showed that the samples used had no cytotoxic activity. This is a good premise in this regard; however, it is necessary to carry out further tests. Phytochemical Study of the EE of P. guajava Leaves Qualitative phytochemical screening was performed according to Matos (2009). This test has the purpose of detecting the presence of classes of secondary metabolites. To identify the steroid and triterpene group, the Liebermann-Burchard reaction was used; for the class of phenols and tannins, the reaction with an alcoholic solution of ferric chloride was applied; for the classes of anthocyanins, proanthocyanidins, flavonoids, leucoanthocyanidins, catechins, flavonols, flavanones, and xanthones, the pH-change method was used; and the Dragendorff, Mayer, and Hager tests were used for alkaloids. For the reaction with ferric chloride, 3 to 5 drops of 1% aqueous ferric chloride solution were added to 1 mL of aqueous extract. The development of a blue color occurs in the presence of tannins, a green color in the presence of flavonoids, and a brown color in the presence of polyphenols. The methodology employed for the determination of polyphenols and total tannin content is based on the ability of phenolic compounds to react with metal salts, as in the case of FeCl3 in alkaline medium, forming blackish-colored solutions, and on the ability of tannins to precipitate proteins through the complexation of phenolic hydroxyl groups with the amine groups of amino acids (Couto et al., 2009). The Liebermann-Burchard reaction consists of evaporating 30 mL of EE in a water bath until dry. The residue is dissolved in 5 mL of chloroform and filtered. With the aid of a pipette, the following quantities of each fraction were transferred to three different test tubes: 0.1 mL, 0.5 mL, and 1.0 mL. The volume was then completed to 2 mL with chloroform. In a fume hood, 1 mL of acetic anhydride and 2 mL of concentrated sulfuric acid (H2SO4) were added. A change of the extract color to pink or blue indicates the presence of steroids or triterpenes with a carbonyl function (C=O) on carbon 3 and a double bond between carbons 5 and 6 of the structure; this color change is the most frequent, although there may be a change to green, which indicates a hydroxyl function (OH) on carbon 3 and a double bond between carbons 5 and 6. A yellow color indicates possible methylation (CH3) on carbon 14 (Machado, Nakashima, Silva, & Krüger, 2011). In this method, which consists of treating the sample in the presence of acetic anhydride and a few drops of sulfuric acid, dehydration and oxidation of the cyclopentanoperhydrophenanthrene ring system occur, forming an aromatic steroid, which is evidenced by the appearance of a blue-green color (Queiroz, 2009). For the alkaloid screening, 2 mL of EE were transferred to five test tubes and 2 drops of the following reagents were added: Mayer reagent (potassium tetraiodomercurate) and Dragendorff reagent (potassium tetraiodobismuthate). The formation of a white precipitate or a light white turbidity was observed for the Mayer reagent, and a brick-colored precipitate for the Dragendorff reagent. The tests are described below in Table 7 with their goals and principles.
Microbiological, Chemical and Physical Control of the Product The preliminary stability test was performed by centrifugation, viscosity, pH, sensory characteristics (color, odor, and appearance), and density (Brazilian Pharmacopoeia, 2010). To determine the pH, a digital pH meter was used, evaluating the potential difference between two electrodes immersed in the sample. The electrode was inserted directly into the solution (Brazilian Pharmacopoeia, 2010). The specific density was determined in a pycnometer coupled with a thermometer, previously weighed empty. The sample was inserted into the pycnometer, the temperature was adjusted to 20 °C, and the pycnometer was weighed again. The difference between the mass of the pycnometer with the sample and that of the empty pycnometer is the mass of the sample. The ratio between the sample mass and the mass of water, both at 20 °C, represents the specific density of the test sample (Brazilian Pharmacopoeia, 2010). The viscosity was measured on a Brookfield viscosimeter, which measures the viscosity of a pharmaceutical form by the force required to rotate the spindle in the liquid being tested (Brazilian Pharmacopoeia, 2010). Centrifugation was used as a primary stability parameter (Brazilian Pharmacopoeia, 2010). The sample was analyzed in order to identify gross features that indicate instability. Stability is indicated by the non-occurrence of phase separation, precipitation, and turbidity. The odor was examined by smell and the color by visual comparison under white light conditions. The microbiological control of the product was carried out in three steps, the first being a presumptive test, in which 1 mL of the solution was inoculated into a series of 3 tubes containing sodium lauryl sulfate broth. The tubes were then incubated at 36 °C for 24 to 48 hours. A confirmatory test for total coliforms was made by subculturing from each positive lauryl tube into tubes with brilliant green bile 2% lactose broth and incubating at 36 °C for 24 to 48 hours. The confirmatory test for fecal coliforms was made by subculturing from each positive brilliant green tube into a tube with EC broth and incubating at 45 °C for 24 to 48 h. After the period determined for each test, the observations were carried out (Brazilian Pharmacopoeia, 2010). Bacterial Strains Bacterial samples came from the Clinical Microbiology Laboratory at the Federal University of Maranhão. The tests were performed using standard microorganisms (ATCC - American Type Culture Collection) and clinical isolate samples. The standard bacterial samples are: Staphylococcus aureus (ATCC 25923), Pseudomonas aeruginosa (ATCC 27883), and Escherichia coli (ATCC 25922). The clinical isolate samples are: Salmonella, Acinetobacter baumannii, Proteus mirabilis, Shigella flexneri, Staphylococcus epidermidis, Staphylococcus haemolyticus, Streptococcus mutans, and Streptococcus agalactiae. Preparation of Bacterial Suspensions The microorganisms were initially reactivated from their original cultures and maintained in liquid BHI medium (Brain Heart Infusion) at 37 °C for 24 h. Subsequently, the samples were grown on Mueller Hinton plates at 37 °C for 18-24 hours. Isolated colonies were then resuspended in 3 mL of sterile saline (0.9% NaCl) until the turbidity was equivalent to the 0.5 McFarland standard (1.5 × 10^8 CFU/mL). Screening: Agar Diffusion Method for the EE and the Solution To carry out this test, the aforementioned bacterial samples listed in item 4.3 were used.
The antimicrobial potential of the hydroalcoholic solution and of the EE was initially evaluated by the diffusion technique in Mueller Hinton agar. The wells were identified, and 30 µL of the solution and of the EE were added with a pipette (CLSI, 2012). As a positive control, tobramycin (10 mcg) was used, and as negative controls, ethanol 99.5% and the product base. The material was then taken to the incubator at 35 °C for 24 hours. On the following day, the halos formed were measured in millimeters with a ruler. The results of the agar diffusion screening were obtained from the average of triplicate results. Minimum Inhibitory Concentration (MIC) of the EE and the Solution The MIC determination was performed using the macrodilution method (Phillips, 1991; Piddock, 1990; NCCLS, 2003; CLSI, 2007). For this test, each bacterial suspension was homogenized in BHI broth at a ratio of 1:1000 (v/v), which yielded a bacterial concentration of about 1-2 × 10^5 CFU/mL. Sterile glass vials were prepared with 5 mL of sterile two-fold serial dilutions of the EE and of the product. In addition, 100 µL of a 1% solution of 2,3,5-triphenyltetrazolium chloride (CTT) were added to each tube; bacterial growth turns this solution red. After dilution, 30 µL of each inoculum were transferred to the tubes. The negative control for the EE was 99.5% ethanol, and for the solution, the product base. The tubes were then homogenized with the aid of a vortex at low speed and incubated under the same conditions described above. The MIC was the lowest concentration of the EE or of the solution at which there was no bacterial growth. The MIC was determined after the experiment was run in triplicate. Minimum Bactericidal Concentration (MBC) of the EE and the Solution The tubes incubated for the determination of the MIC in liquid medium were used to determine the MBC (Phillips, 1991; Piddock, 1990; CLSI, 2007). An aliquot (1 mL) of each tube was inoculated onto Mueller Hinton agar plates, and these plates were subsequently incubated at 37 °C for 24 h. The MBC was considered to be the lowest concentration of the EE or of the solution at which there was no bacterial growth on the surface of the inoculated agar (99.9% microbial death). The MBC was determined after the experiment was run in triplicate. Search for Hemolytic Activity in the Culture Supernatant (Microplate) The hemolytic activity was assayed by incubating the extract with 1% erythrocytes (sheep and horse), washed 3 times with PBS (phosphate-buffered saline), pH 7.2, for 1 hour at 37 °C at the bottom of 96-well ELISA plates. The product was added in 5 wells, with the concentration varying from 5 to 0.156 mg.mL-1, against all erythrocytes. In the control well, the base product of the solution was added. As a positive control, water was used, and as a negative control, a buffer solution. The same procedure was also applied to the controls (Bhakdi et al., 1986). The hemolytic activity was expressed as the maximum concentration of extract and product that does not cause hemolysis. The hemolytic activity was determined after the experiment was run in triplicate. Cell Culture HeLa (human cervical carcinoma), A-549 (human lung carcinoma), HT-29 (human colon adenocarcinoma), and Vero (monkey kidney) cell lines were obtained from the American Type Culture Collection (ATCC). The cells were grown in RPMI 1640 supplemented with 10% fetal calf serum, 1% (w/v) glutamine, 100 U/mL penicillin, 100 μg/mL streptomycin, and 5 μg/mL amphotericin B.
Cells were cultured in a humidified atmosphere at 37 °C in 5% CO2. The macrophages were isolated from mice and kept according to the methodology described by Tseng et al. (2006). In Vitro Cytotoxic Activity Assay Cells were washed with phosphate-buffered saline (PBS) free of magnesium and calcium. After PBS decantation, cells were detached by addition of 0.025% trypsin-EDTA and PBS to a final volume of 50 mL and centrifuged. The pellet was suspended in 10 mL of medium to obtain a single-cell suspension. The density of viable cells was determined by Trypan blue exclusion in a hemocytometer, and the preparation was diluted with medium to yield previously determined optimal plating densities for the cells. Before the assay, 5 × 10^4 cells/well were seeded on 96-well plates and the suspension was incubated for 24 h at 37 °C to allow cell attachment. After 24 h, the cells were treated with the essential oil and terpenes. The oil was dissolved in ethanol and a series of doubling essential-oil dilutions was added to five replicate wells, over the range of 600-0.6 μg/mL, against all cell lines and macrophages. Terpenes were also dissolved in ethanol and tested in five replicate wells, but over the range of 200-0.2 μg/mL. The final concentration of ethanol in the culture medium was kept at 0.5% (v/v) to avoid solvent toxicity. The activities of the essential oil and terpenes were considered according to the survival of 50% or fewer cells after an exposure time of 72 h. The cell culture used as control received only 0.5% ethanol at the final concentration. Cytotoxicity was measured using the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay. After an exposure time of 72 h, the medium was removed and MTT assays were performed using the CellTiter kit (Promega Corp., USA). On a 96-well plate, 20 mL of MTT (5 mg/mL) in PBS was incubated with the cells for 2 h at 37 °C. After this period, the medium containing MTT was removed and 100 mL of acidified isopropanol (0.04 mol/L HCl) was added. The absorbance was measured at 570 nm using a microplate reader (Bio-Rad Laboratories, model 3550, USA). Cell viability was expressed with respect to the absorbance of the control wells, which was considered as 100%. Cytotoxicity is expressed as the concentration of the substance (essential oil and terpenes) inhibiting cell growth by 50% (CD50). Three replicate plates were used to determine the cytotoxicity of each sample (Stavri et al., 2005; Hou et al., 2006; Xiao et al., 2006). Statistical Analysis Data are reported as the mean ± SD for at least three replicates. Statistical analysis was performed using the Student t-test, with the significance level set at P < 0.05. Conclusion The study of the solution and of the EE of Psidium guajava L. leaves allowed the evaluation of the antibacterial activity against the different bacterial samples tested, both gram-negative and gram-positive. The comparative study of the activity of an EE of P. guajava L. leaves and of a product derived from the dry extract of the same plant opens up the possibility of innovation in the pharmaceutical sector, with the prospect of a new antibacterial pharmaceutical product. The lowest minimum inhibitory concentration (MIC) found for the hydroalcoholic solution was 0.156 mg.mL-1 against ATCC S. aureus, and the lowest minimum bactericidal concentration found was 0.156 mg.mL-1 against the same bacterium. It is noteworthy that the best results of the product were obtained against multiresistant bacteria usually prevalent in hospitals.
The tests indicated that a product formulation based on the natural extract is viable, and that the extract incorporated into a formulation may even show improved activity with regard to its antibacterial properties. Table 1. Phytochemical screening of the ethanolic extract of P. guajava leaves. Table 2. Microbiological control result of the solution: absent. Source: Prepared by the author (2015). Table 3. Result of the chemical and physical analysis of the solution. Table 4. Inhibitory activity of the ethanolic extract of P. guajava and of the solution against clinical isolates and reference strains of bacteria in the agar diffusion test. Table 5. In vitro minimum inhibitory concentration of the P. guajava ethanolic extract and of the solution against clinical isolates and reference strains of bacteria. a Ethanolic extract of P. guajava leaves (0.1 mg/mL); b Solution obtained from the dried extract of the leaves of P. guajava (0.01 mg/mL); c Negative control - product base. Source: Prepared by the author (2015). Table 6. Minimum bactericidal concentration of the ethanolic extract of P. guajava leaves and of the solution against reference strains and clinical isolates of bacteria. a Ethanolic extract of P. guajava leaves (0.1 mg/mL); b Solution obtained from the dried extract of the leaves of P. guajava (0.01 mg/mL); c Negative control - product base. Source: Prepared by the author (2015). Table 7. Classes of metabolites analyzed by phytochemical screening of the extract of P. guajava leaves.
6,799.8
2015-12-21T00:00:00.000
[ "Biology", "Medicine", "Agricultural And Food Sciences" ]
On a conjecture regarding the exponential reduced Sombor index of chemical trees∗ Let G be a graph and denote by d_u the degree of a vertex u of G. The sum of the numbers e^√((d_u−1)²+(d_v−1)²) over all edges uv of G is known as the exponential reduced Sombor index. A chemical tree is a tree with maximum degree at most 4. In this paper, a conjecture posed by Liu et al. [MATCH Commun. Math. Comput. Chem. 86 (2021) 729-753] is disproved and a corrected version of it is proved. Introduction Let G be a graph. The sets of edges and vertices of G are denoted by E(G) and V(G), respectively. For a vertex v ∈ V(G), the degree of v is denoted by d_G(v) (or simply by d_v if only one graph is under consideration). A vertex u ∈ V(G) is said to be a pendent vertex if d_u = 1. The degree set of G is the set of all distinct degrees of vertices of G. The set N_G(u) consists of the vertices of the graph G that are adjacent to the vertex u; the members of N_G(u) are known as neighbors of u. A chemical tree is a tree with maximum degree at most 4. The (chemical-)graph-theoretical terminology and notation used in this study without explanation here can be found in the books [1,2,11]. For the graph G, the Sombor index and the reduced Sombor index, abbreviated as SO and SO_red, respectively, are defined [5] as SO(G) = Σ_{uv ∈ E(G)} √(d_u² + d_v²) and SO_red(G) = Σ_{uv ∈ E(G)} √((d_u − 1)² + (d_v − 1)²). These degree-based graph invariants, introduced recently in [5], have attracted a lot of attention from researchers in a very short time, which has resulted in many publications; for example, see the review papers [4,9] and the papers listed therein. The following exponential version of the reduced Sombor index was considered in [10]: e^{SO_red}(G) = Σ_{uv ∈ E(G)} e^√((d_u−1)²+(d_v−1)²). Let n_i denote the number of vertices of degree i in the graph G. The cardinality of the set of edges joining the vertices of degrees i and j in the graph G is denoted by m_{i,j}. Denote by T_n the class of chemical trees of order n such that n_2 + n_3 ≤ 1 and m_{1,3} = m_{1,2} = 0. Deng et al. [3] proved that the members of the class T_n are the only trees possessing the maximum value of the reduced Sombor index for every n ≥ 11. Keeping in mind this result of Deng et al. [3], Liu et al. [10] posed the following conjecture concerning the exponential reduced Sombor index for chemical trees. Conjecture 1.1 was also discussed in [12] and was left open. In fact, there exist counterexamples to Conjecture 1.1; for instance, the trees T_1 and T_2 depicted in Figure 1 provide one. Figure 1: The trees T_1 and T_2 providing a counterexample to Conjecture 1.1. Proof of Theorem 1.1 If T is a chemical tree of order n with n ≥ 3, then Σ_{1 ≤ i ≤ 4, i ≠ j} m_{j,i} + 2 m_{j,j} = j · n_j for j = 1, 2, 3, 4. By solving the system of equations (2)-(4) for the unknowns m_{1,4}, m_{4,4}, n_1, n_2, n_3, n_4 and then inserting the values of m_{4,4} and m_{1,4} (these two values are well known, see for example [6]) in Equation (1), one gets Equation (5). We take Γ as in Equations (6) and (7); then, Equation (5) can be written as Equation (8). For any given integer n greater than 4, it is evident from Equation (8) that a tree T attains the greatest value of e^{SO_red} over the class of all chemical trees of order n if and only if T possesses the greatest value of Γ in the considered class. As a consequence, we consider Γ(T) instead of e^{SO_red}(T) in the next lemma. Concluding remarks Recently, Liu [7] reported some extremal results for the multiplicative Sombor index.
For a graph G, its multiplicative Sombor index and multiplicative reduced Sombor index are defined as the products, over all edges uv of G, of the quantities √(d_u² + d_v²) and √((d_u − 1)² + (d_v − 1)²), respectively. As expected, among all chemical trees of a fixed order n ≥ 11, the trees attaining the maximum (reduced) Sombor index (see [3]) are the same as the ones possessing the maximum multiplicative (reduced) Sombor index. Analogously to the definition of the exponential reduced Sombor index, the exponential Sombor index can be defined as e^{SO}(G) = Σ_{uv ∈ E(G)} e^√(d_u² + d_v²). Denote by T_n the class of chemical trees of order n such that n_2 + n_3 ≤ 1 and m_{3,4} + m_{2,4} ≤ 1. As expected, among all chemical trees of a fixed order n ≥ 7, the trees attaining the maximum exponential reduced Sombor index (see Theorem 1.1) are the same as the ones possessing the maximum exponential Sombor index. Theorem 3.2. For every n ≥ 7, the trees of the class T_n uniquely attain the maximum value of the exponential Sombor index among all chemical trees of a fixed order n. Because the proofs of Theorems 1.1, 3.1, and 3.2 are very similar to one another, we omit the proofs of Theorems 3.1 and 3.2.
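As a quick illustration of the quantities discussed above, the following sketch computes SO_red and the exponential reduced Sombor index of a graph given as an edge list, assuming the standard definitions recalled earlier; the example tree and the plain-Python representation are illustrative choices, not taken from the paper.

import math
from collections import defaultdict

def degrees(edges):
    """Vertex degrees of a simple graph given as a list of edges (u, v)."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def reduced_sombor(edges):
    """SO_red(G) = sum over edges uv of sqrt((d_u - 1)^2 + (d_v - 1)^2)."""
    deg = degrees(edges)
    return sum(math.sqrt((deg[u] - 1) ** 2 + (deg[v] - 1) ** 2) for u, v in edges)

def exp_reduced_sombor(edges):
    """Exponential reduced Sombor index: sum over edges uv of exp(sqrt((d_u-1)^2 + (d_v-1)^2))."""
    deg = degrees(edges)
    return sum(math.exp(math.sqrt((deg[u] - 1) ** 2 + (deg[v] - 1) ** 2)) for u, v in edges)

if __name__ == "__main__":
    # A small chemical tree (maximum degree 4): vertex 0 joined to 1..4,
    # with one extra pendent vertex 5 attached to vertex 1.
    tree = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 5)]
    print("SO_red   =", round(reduced_sombor(tree), 4))
    print("e^SO_red =", round(exp_reduced_sombor(tree), 4))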
1,215.2
2022-05-19T00:00:00.000
[ "Mathematics" ]
Topological features of spike trains in recurrent spiking neural networks that are trained to generate spatiotemporal patterns In this study, we focus on training recurrent spiking neural networks to generate spatiotemporal patterns in the form of closed two-dimensional trajectories. Spike trains in the trained networks are examined in terms of their dissimilarity using the Victor–Purpura distance. We apply algebraic topology methods to the matrices obtained by rank-ordering the entries of the distance matrices, specifically calculating the persistence barcodes and Betti curves. By comparing the features of different types of output patterns, we uncover the complex relations between low-dimensional target signals and the underlying multidimensional spike trains. Introduction The challenge of understanding how spatiotemporal patterns of neural activity give rise to various sensory, cognitive, and motor phenomena in nervous systems is a significant task in computational and cognitive neuroscience.A prominent paradigm for proposing hypotheses about potential mechanisms involves training recurrent neural networks on target functions, considering biological constraints and relating dynamic and structural features in the obtained networks to characteristics of inputs and outputs (Sussillo, 2014;Barak, 2017;Yang and Wang, 2020;Amunts et al., 2022;Maslennikov et al., 2022).This approach is in line with a more traditional research domain of finding dynamical mechanisms underlying various spatiotemporal patterns observed in the brain (traveling waves, oscillatory rhythms in different frequency domains, chaotic or disordered spike firing etc.) (Liu et al., 2022a,b;Yu et al., 2023a,b), which is highly interdisciplinary and borrows different approaches from physics and mathematics. An interdisciplinary approach has emerged at the intersection of computational neuroscience, machine learning, and non-linear dynamics.This approach considers similarities in time-dependent processes in biological brains and artificial neural networks as consequences of computations through population dynamics (Marblestone et al., 2016;Hassabis et al., 2017;Cichy and Kaiser, 2019;Vyas et al., 2020;Dubreuil et al., 2022;Ramezanian-Panahi et al., 2022).Works in this direction focus on training networks of rate neurons on cognitive-like and sensorimotor neuroscience-based tasks, revealing computational principles for completing target tasks in terms of dynamics, functional specialization of individual neurons, and coupling structure (Sussillo and Abbott, 2009;Mante et al., 2013;Sussillo and Barak, 2013;Abbott et al., 2016;Chaisangmongkon et al., 2017;Maslennikov andNekorkin, 2019, 2020;Maslennikov, 2021). 
Real neural networks differ from rate-based models primarily in that they produce sequences of action potentials, or spikes. To account for this important aspect, another class of neural networks, spiking networks, has been developed. On the one hand, they are more biologically realistic in producing firing patterns of a similar structure, allowing a more thorough comparison between artificial and biological spiking networks in their dynamics and in the structural mechanisms of their functioning (Eliasmith et al., 2012; Gilra and Gerstner, 2017; Kim et al., 2019; Lobo et al., 2020; Pugavko et al., 2020, 2023; Amunts et al., 2022). On the other hand, spiking networks are a next-generation class of neural networks that are capable of energy-efficient computations when run on specialized neuromorphic chips (Schuman et al., 2022). Although they can be obtained from conventional neural networks using conversion techniques, to take full advantage of them, one needs to use specific algorithms to train them (Demin and Nekhaev, 2018; Neftci et al., 2019; Tavanaei et al., 2019; Bellec et al., 2020; Dora and Kasabov, 2021). Spiking neural networks have demonstrated their capabilities in various applications, including the processing of signals of different modalities (Bing et al., 2018; Auge et al., 2021; Yamazaki et al., 2022), robotics (Lobov et al., 2020, 2021; Angelidis et al., 2021), and more generally brain-inspired artificial intelligence tasks and brain dynamics simulations (Zeng et al., 2023). As in the case of biological neural systems, artificial spiking networks are hard to interpret when they perform complex motor or cognitive-like tasks. While rate-based neural networks organize their dynamics along smooth manifolds, which can often be studied as projections onto low-dimensional subspaces, for spiking networks such a procedure generally cannot be carried out (Muratore et al., 2021; Cimeša et al., 2023; DePasquale et al., 2023). One promising approach to characterizing spiking patterns is the use of methods from algebraic topology. Tools such as persistent homology analysis have been used to relate spike patterns to the functions of neural networks (Dabaghian et al., 2012; Petri et al., 2014; Curto, 2017; Bardin et al., 2019; Santos et al., 2019; Sizemore et al., 2019; Naitzat et al., 2020; Billings et al., 2021; Guidolin et al., 2022), both biological and artificial, and more widely for studying topological aspects of dynamical systems (Maletić et al., 2016; Stolz et al., 2017; Salnikov et al., 2018; Myers et al., 2019). In this study, we explore topological features in spiking neural networks trained to generate low-dimensional target patterns. We study recurrent networks in the class of reservoir computers (Maass et al., 2002; Lukoševičius and Jaeger, 2009; Sussillo, 2014), where training only occurs at the output connections. After training, the networks produce spiking dynamics which underlie the generation of output patterns, and our goal is to study how topological features of the spike trains, expressed in terms of persistence barcodes and Betti curves, carry information about the output patterns. In Section 2, we present the system under study and the key findings of our work. Section 3 sums up the results, and Section 4 gives particular details of the model and methods.
Training recurrent spiking neural networks to generate target outputs We consider recurrent networks of spiking neurons trained to generate two-dimensional spatiotemporal signals and study how topological signatures of their spike patterns relate to the readout activity. The pipeline of our study is schematically presented in Figure 1. The neurons are randomly connected with sparse links whose weights are drawn from a Gaussian distribution and kept fixed. The structure of the links is determined by the adjacency matrix A. Two scalar outputs (which can be considered as one vector output) x̂(t) and ŷ(t) linearly read out the filtered spiking activity of the recurrent network via output weight vectors w_1 and w_2. The output signals also send feedback connections, given by the matrix U, to the recurrent neural network. While the feedback links are initialized and kept fixed like the recurrent ones, the output links are changed during training in order to minimize the error e(t) between the target pattern [x(t), y(t)] and the actual output signals x̂(t), ŷ(t), see Figure 1A. Such a training setting is a particular case of the reservoir computing paradigm, in which only the weights of the last layer are trained. In this study, training is performed with the FORCE method (see details in Section 4). The networks we study consist of leaky integrate-and-fire neurons with an absolute refractory period, and the output trajectories are chosen as closed polar curves, see details in Section 4. After training, the networks are capable of producing these two-dimensional signals, which can be treated as target motor patterns produced by spiking activity, see Figure 1B. Our purpose is to relate the spiking patterns of the trained neural networks to the target trajectories. The output signals are produced as weighted sums of the firing-rate activity, but the question is to what extent the detailed spike trains, not the averaged rates, are responsible for producing the target patterns. To answer this question, we measure how dissimilar individual spike trains are from each other. There are many correlation-based characteristics which make it possible to quantify the similarity between signals produced by neurons, but they do not capture the fine structure of spike timing. Here, we adopt the method proposed by Victor and Purpura and compute a special quantity, the Victor-Purpura (VP) distance, which treats a spike sequence as a point in some metric space. We calculate a matrix of VP distances in which each entry (i, j) quantifies how dissimilar, or distant, the spike trains produced by the i-th and j-th neurons are. After that, we transform the obtained distance matrices by rank-ordering their entries. We then apply a method of algebraic topology, persistent homology, to the latter matrix and obtain the so-called persistence barcodes and Betti curves, see Figure 1C. These topological signatures are detailed characteristics of the spike patterns responsible for the generation of the patterns under study, so we examine how the topological features of the spikes relate to the low-dimensional output signals. We applied the proposed pipeline to different types of teacher signals, but in order to conveniently visualize and easily understand the topological features of the target patterns themselves, we show the results for four closed polar curves having different numbers of holes, as shown in Figure 2. The figures in fact show the real outputs, which match the target signals almost perfectly, with a small noisy component resulting from the spiking nature of the network.
In terms of individual spiking activity, different neurons in the trained networks fire at various rates. Namely, within particular segments of the target pattern some neurons actively generate action potentials while others are silent and start to fire in subsequent segments of the pattern. The overall network activity can be characterized by the mean firing rate, defined as the average number of spikes per second and per neuron. Figures 3A, C, E, G show the evolving mean firing rates in the networks producing the four corresponding target patterns of Figure 2. Notably, for all cases the firing rate varies within the interval of 20-80 Hz, except for the simplest circle target pattern, where the firing rate varies within the narrow interval of 72-84 Hz. For target signals in the form of multi-petal roses, the firing rate increases and decreases following each petal, see Figures 3C, E, G. The corresponding spike rasterograms shown in Figures 3B, D, F, H indicate that the rises and falls of the mean firing rate are supported by the activity of different neurons. Therefore, although the output patterns are produced by filtered spikes, i.e., the instantaneous firing rates of the neurons, one cannot relate the rate activity to the properties of the output pattern in a direct way. Moreover, the temporal structure of the spikes, not only their rates, is responsible for generating output patterns of different forms. Distance matrices for spike trains in the trained neural network To compare different neurons in terms of dissimilarities between their spike trains, we apply the method proposed by Victor and Purpura (see Guidolin et al., 2022 and works cited therein). Namely, this method endows a pair of spike trains with a notion of distance. This is in contrast with the frequently used approach of quantifying pairs of neuronal responses by rate-based correlations. Spike trains of some finite length are considered as points in an abstract space in which a special metric rule is defined that assigns a non-negative number D_ij to each pair of points i, j. The Victor-Purpura (VP) distance has the basic properties required of a true metric; namely, it vanishes only for a pair of identical spike trains (D_ii = 0) and is positive otherwise (D_ij > 0, i ≠ j), it is symmetric (D_ij = D_ji), and it fulfills the triangle inequality (D_ik ≤ D_ij + D_jk). The VP distance between spike trains is defined as the minimum cost of transforming one spike train into the other via the addition or deletion of spikes, shifts of spike times, or changes in the neuron of origin of the spikes. Each modifying move is characterized by a cost q which controls the timescale for shifts of spikes. In general, there is a family of distances defined in this way which can capture the sensitivity to the neuron of origin of each spike. Here, we use the basic VP metric which assigns a cost q = 1 per unit time to move a spike (see details in Section 4). For each of the four target patterns illustrated here, we collect the spike trains
S^(i) = [t_1^(i), t_2^(i), ..., t_{s_i}^(i)], i = 1, ..., N, for the period of 1 s (the duration of the target generation). Then, we calculate the VP distances for each pair of spike trains and obtain the matrices D = [D_ij] shown in Figure 4. These matrices are symmetric and reflect the intricate temporal structure of the spike patterns supporting the corresponding output trajectories. Notably, even at this stage one can draw several qualitative conclusions about differences in the spike patterns relating to different target outputs. Despite the similar ranges over which the firing rate varies for all targets, as shown in Figures 3A, C, E, G, the matrices of VP distances for their spike trains show distinct differences. The simplest circle target corresponds to a matrix in which most entries take similar values in the middle of the range of possible distances (see Figure 4A). For multi-petal closed trajectories, the maximum distance becomes smaller as the number of holes in the output pattern increases, cf. Figures 4B-D. Moreover, polar roses with fewer petals require more neurons that produce less distant spike trains. To gain more insight into the intricate structure of the spike trains, following Giusti et al. (2015) and Guidolin et al. (2022), we transform the obtained matrices by rank-ordering their entries. Namely, given a matrix of VP distances D_ij with zeros on its main diagonal, we consider the entries of its above-diagonal part and replace them by natural numbers 0, 1, ... in ascending order of their value. In the resulting rank-ordered matrices (Figure 5), the neurons are sorted by their activity: the least active neurons, which have the largest indices, are the most similar to each other, see the upper-right parts of the matrices in Figure 5, and are far away from the most active neurons. The lower-left part of the rank-ordered matrices corresponds to the neurons which fire most actively during the task implementation and thus contribute most to the output patterns. Their spike trains show a highly complicated structure which depends on the target pattern. To characterize the structure of the relations between the core neurons, we take the 100 most active ones and study the topological features of the graph of their rank-ordered VP distances. Persistent homology of the rank-ordered matrix The most frequently used tool in topological data analysis is persistent homology. While this framework was initially developed for static data sets, many of its ideas have been adapted to studying time-varying dynamic data (Petri et al., 2014; Curto, 2017; Stolz et al., 2017; Myers et al., 2019; Santos et al., 2019). Homology refers to certain topological properties of data, whereas persistence reflects the properties which are maintained through multiple scales of the data.
Our set of neurons and the corresponding spike trains form a point cloud of vertices for which a notion of distance, collected in M, is defined. This set of vertices forms the zero-dimensional simplices, while the one-dimensional simplices are the edges between them. Imagine that each vertex is surrounded by a circle of radius ρ, and that this value is gradually increased starting from zero. If the circle centered at vertex i has radius ρ larger than the distance M_ij to vertex j, the pair of nodes i and j are considered coupled and form a one-dimensional simplex. Initially (ρ = 0), all the vertices are isolated; hence they form a set of zero-dimensional simplices and there are no one-dimensional simplices. When the radius becomes such that some pair of vertices becomes coupled, a new one-dimensional simplex appears while the zero-dimensional simplices corresponding to these vertices disappear. Such a gradual increase of the radius is called a filtration and can be presented in the form of persistence barcodes, as shown in Figure 6. Here, the parameter ρ indicates the radius of the circles surrounding each vertex, and the bars show the zero- and one-dimensional simplices: at which values of ρ they appear and at which they disappear. In the top subfigures, the number of initially existing zero-dimensional simplices is equal to the chosen number of most active neurons (100), and with increasing filtration parameter ρ the number of bars gradually decreases, finally leaving one remaining simplex corresponding to the connected component which contains all the vertices. In the bottom subfigures, the barcodes show the birth and death of one-dimensional simplices with increasing filtration parameter. Altogether, these persistence barcodes describe the topological signatures of the spike trains of the most active neurons, which contribute most to generating the particular target outputs. To summarize the filtration process, the number of persisting topological invariants at particular values of ρ is plotted in the form of Betti curves, shown in Figure 7. These summarizing curves show the course of emergence and disappearance of zero-dimensional (left column) and one-dimensional (right column) simplices in the point clouds formed by the spike trains of the most active neurons. The number of zero-dimensional simplices shows a distinct, monotonically decreasing dependence on the filtration parameter. The sharpest drop is observed for the two-petal target pattern (Figure 7C), while the one-, three-, and four-hole trajectories result in a smoother decrease; the threshold value of ρ, in turn, slightly increases around ρ = 0.4 with an increasing number of holes, cf. Figures 7A, E, G. The number of one-dimensional simplices shows a more intricate structure, where the maximum depends in a complex way on the features of the target output. The largest maximum relates to the two-petal polar rose and the smallest to the three-petal one, while the remaining targets lead to plots with similar maxima.
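The zero-dimensional part of such a filtration (the Betti-0 curve) can be reproduced with a plain union-find pass over a pairwise distance matrix, as in the minimal sketch below. This is only an illustration of the bookkeeping: the random example matrix and the grid of filtration values are arbitrary choices, and the one-dimensional features discussed above would additionally require a persistent homology library.

import numpy as np

def betti0(dist, rho):
    """Number of connected components of the graph whose edges are the pairs (i, j) with dist[i, j] <= rho."""
    n = dist.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if dist[i, j] <= rho:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return len({find(i) for i in range(n)})

def betti0_curve(dist, rhos):
    return [betti0(dist, r) for r in rhos]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((100, 2))
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    dist /= dist.max()  # normalize so the filtration parameter plays the role of rho in the text
    rhos = np.linspace(0, 1, 11)
    for rho, b0 in zip(rhos, betti0_curve(dist, rhos)):
        print(f"rho = {rho:.1f}  Betti-0 = {b0}")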
Comparing these figures with the corresponding barcodes in Figure 6 and the matrices in Figure 5, one concludes that the chosen target patterns, which have easily explainable forms in terms of topology, require spike trains that are characterized by topologically complex signatures. We found that there is no direct correspondence between the Betti numbers of the generated trajectories and the simplicial complexes built upon the spike trains. However, topological analysis according to the proposed pipeline allows us to extract valuable information about the coding principles of spikes at the level of precise firing timing and about the topological relations between the spike trains of different neurons. Discussion We applied algebraic topology methods, specifically persistent homology, to characterize the geometry of spike trains produced by recurrent neural networks trained to generate two-dimensional target trajectories. We considered several easily interpreted two-dimensional closed trajectories as the target patterns for training recurrent spiking networks. The FORCE method was used for supervised learning; it is a particular framework of reservoir computing in which weight modification occurs at the output layer while the recurrent connections are randomly initialized and kept fixed. In addition, the random feedback connections from the output provide an indirect low-rank perturbation to the recurrent matrix, thus creating a modified effective coupling architecture capable of producing the target patterns. The neural spike trains in the trained networks were considered as points in a metric space, with the distances between them calculated as cost-based Victor-Purpura quantities. We rank-ordered the measured distances and chose the one hundred most active neurons, for which we performed persistent homology analysis. We plotted persistence barcodes and Betti curves, which characterize how specific topological objects in the spiking data are preserved under continuous transformation. We found a complicated relation between the topological characteristics of the spike trains and those of the target patterns. The novelty of our study is that we apply persistent homology methods to spiking networks trained to autonomously generate planar output trajectories. Previously, such methods were mostly applied to neural networks performing navigation tasks and consisting of neurons which fire preferentially in particular locations in the environment (place fields). In those cases, one was able to find a one-to-one correspondence between the topological features of the environment and those of the spiking patterns. Our study is an attempt to establish regularities in a more general case, where the generated trajectories do not carry navigation information yet have a clear topological interpretation. Methods
Spiking neural network and target outputs
We consider a recurrent spiking neural network consisting of N leaky integrate-and-fire neurons whose activity is projected onto M scalar outputs, see Figure 1. The autonomous dynamics of the spiking network are described by the system given in (1) (Nicola and Clopath, 2017), where v_i is the membrane potential (voltage) of the i-th neuron, τ_m is the time constant of the voltage relaxation, v_rest is the resting voltage, I_bias is an input bias current (the default value in our numerical experiments is I_bias = 0), and a_ij are the weights describing the strength of the recurrent links. After the membrane potential reaches the threshold v_th, the neuron generates a spike and the voltage resets to v_0. During the absolute refractory period τ_r after the spike generation, the voltage remains constant at v_0, i.e., during this interval the neuron is unaffected by external stimulation. The coupling in (1) is implemented via the double exponential synaptic filter given by the dynamics of the variables r_i and h_i for the i-th neuron in (2), where τ_r and τ_d are the synaptic rise and decay time constants, respectively, and t_k^(i) is the moment of generation of the k-th spike by the i-th neuron. The coupling structure of the recurrent connections is described by the weight matrix A = [a_ij], whose elements are drawn from a Gaussian distribution with zero mean and standard deviation g(pN)^(-1/2), where p is the fraction of non-zero elements and g is a global coupling strength. The output is given by M readout units whose dynamics are determined by (3), where w_ki is the weight coefficient between the i-th neuron and the k-th output (resulting in the output matrix W = [w_ki]), and r_i(t) is the neural firing rate filtered according to Equation (2). The FORCE method requires that the output units send feedback links to the spiking neurons; their weights are stored in the N × M matrix U composed of the concatenated vectors u_k (k = 1, . . ., M), whose elements are drawn randomly from a uniform distribution between -q and q, where q is a feedback coupling strength. Therefore, the complete system taking into account the recurrent and feedback links is given by (4), where the effective coupling matrix Ω = A + UW^T = [ω_ij] determines the efficient topology shaped by the fixed recurrent and feedback links and the trained output weights. The goal of training is to modify the output weights w_ij in such a way that the linear readout (3) approximates the target signal: ẑ_k(t) ≈ z_k(t). In this study, we use two-dimensional target signals in the form of closed polar figures which have a clear geometrical interpretation, in order to study whether it is possible to relate distinct features of the output geometry with the hidden geometry of the spike patterns produced by (4). Namely, we illustrate our results with four target curves (5-8), whose governing equations in the (x, y)-plane describe (a) a circle, (b) a two-petal, (c) a three-petal, and (d) a four-petal polar rose.
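The network dynamics described above can be condensed into a minimal simulation sketch. This is an illustrative reconstruction, assuming the standard leaky integrate-and-fire form and the exponential-Euler update of the double-exponential filter from Nicola and Clopath (2017); parameter values are placeholders and the refractory period is omitted for brevity.

import numpy as np

N, dt = 2000, 5e-5                          # network size and time step (illustrative)
tau_m, v_rest, v_th, v_reset = 1e-2, -65.0, -40.0, -65.0
tau_rise, tau_decay, I_bias = 2e-3, 2e-2, 0.0
p, g = 0.1, 1.0

rng = np.random.default_rng(0)
# Sparse Gaussian recurrent weights with standard deviation g/sqrt(pN)
A = (rng.standard_normal((N, N)) * g / np.sqrt(p * N)) * (rng.random((N, N)) < p)

v = np.full(N, v_rest)
r = np.zeros(N)                              # filtered rates entering the readout
h = np.zeros(N)                              # auxiliary variable of the double-exponential filter

def step(v, r, h, omega):
    # One Euler step with effective coupling omega = A + U @ W.T (assumed LIF form)
    I_syn = omega @ r
    v = v + dt * (-(v - v_rest) + I_bias + I_syn) / tau_m
    spiked = v >= v_th
    v[spiked] = v_reset
    # Exponential-Euler update of the double-exponential synaptic filter
    r = r * np.exp(-dt / tau_decay) + h * dt
    h = h * np.exp(-dt / tau_rise) + spiked / (tau_rise * tau_decay)
    return v, r, h, spiked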
Force training
The output weights in the matrix W are trained according to the algorithm of first-order reduced and controlled error (FORCE) learning adapted to spiking neural networks, see Sussillo and Abbott (2009) and Nicola and Clopath (2017). The error e(t) between the teaching signal and the actual output is computed after each time interval Δt, as in (9). In addition, a running estimate of the inverse of the correlation matrix of the network rates, P, is computed as in (10), where the matrix P is initialized as I/α, in which I is the identity matrix and α is a learning rate parameter. Moreover, after each time period Δt, the matrix W is updated according to the rule (11), which is based on (9, 10). Initially, the elements of the output matrix W are equal to zero, and after each interval Δt they change according to the adaptation rule in Equation (11). Gradually, the values w_k approach stationary states. After that, the learning procedure stops, and we have a multidimensional dynamical system of a complex network with fixed weights. This supervisedly trained system is able to autonomously generate the target closed output trajectories. The core structure of the network defined by the adjacency matrix A remains the same after learning as before learning. The trained vectors w_k multiplied by the feedback vectors u_k introduce a low-rank perturbation to the coupling topology, and the corresponding network activity changes dramatically. Such a structural perturbation leads to a global disturbance in the phase space of the recurrent network.
Victor-Purpura distance for spike trains
Each spike train is considered to be a point in an abstract topological space. A spike train metric is defined according to a special rule which assigns a non-negative number to pairs of spike trains and expresses how dissimilar they are (Guidolin et al., 2022). We use the variant of the spike time VP distance which is parametrized by the cost quantity q in units of inverse time. To compute the VP distance, the spike trains are compared in terms of allowed elementary steps, which can be applied to one sequence of spike timings to obtain another one. The allowed steps and associated costs are as follows: (a) insertion of a spike with a cost of one, (b) deletion of a spike with a cost of one, and (c) shifting a spike by an amount of time t with a cost of qt. If q is very small, the metric becomes the simple spike count distance. If q is very large, all spike trains are far apart from each other unless they are nearly identical. For intermediate values of q, the distance between two spike trains is small if they have a similar number of spikes occurring at similar times. The motivation for this construction is that neurons which act like coincidence detectors should care about this metric. The value of q corresponds to the temporal precision 1/q of the coincidence detector. We calculate the VP distance using the scripts provided by the authors of this metric: http://www-users.med.cornell.edu/~jdvicto/metricdf.html, http://www-users.med.cornell.edu/~jdvicto/spkdm.html. Each matrix of VP distances D_ij is transformed via rank-ordering its entries, i.e., we replace the original entries in the above-diagonal part by the natural numbers 0, 1, . . . in ascending order of their value (Giusti et al., 2015).
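The following is a compact sketch (an illustration, not the authors' MATLAB scripts) of the cost-based Victor-Purpura distance and the rank-ordering and normalization step described here; the reordering of neurons by average firing rate is omitted.

import numpy as np

def victor_purpura(t1, t2, q):
    # Edit-distance dynamic program: insert/delete cost 1, shift cost q*|dt|
    n, m = len(t1), len(t2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)
    D[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,
                          D[i, j - 1] + 1,
                          D[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return D[n, m]

def rank_order(D):
    # Replace above-diagonal entries by their rank, symmetrize, normalize by N(N-1)/2
    N = D.shape[0]
    iu = np.triu_indices(N, k=1)
    ranks = np.empty(len(iu[0]))
    ranks[np.argsort(D[iu])] = np.arange(len(iu[0]))
    M = np.zeros_like(D)
    M[iu] = ranks / (N * (N - 1) / 2)
    return M + M.T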
The below-diagonal part of the rank-ordered matrix is obtained by the symmetric reflection of the above-diagonal part. After that, the entries are normalized by N(N - 1)/2 and re-indexed in descending order of the neural firing rates. Finally, we obtain the normalized rank-ordered matrix M = [M_ij], which contains non-linearly transformed VP distances while leaving their relative order unchanged.
Persistence barcodes and Betti curves
For the normalized rank-ordered matrices, we perform persistent homology analysis of the following form. The set of the 100 most active neurons with their spike trains is considered as vertices in an abstract space, where the distance between the i-th and j-th neurons is given by the entry M_ij. These vertices form zero-dimensional simplices (0-simplices), and we introduce a filtration parameter ρ which defines the radius of abstract circles centered at the vertices. With increasing ρ, two vertices i and j are considered coupled if M_ij ≤ ρ. The edge resulting from such a construction is a one-dimensional simplex (1-simplex). With increasing filtration parameter ρ, the numbers of 0-simplices and 1-simplices change, but they may remain unchanged (persist) over some intervals. This property is quantified by the so-called Betti numbers, which count the number of corresponding topological invariants at the current filtration scale. For example, the 0-th Betti number β_0(ρ) gives the number of connected components and the 1-st Betti number β_1(ρ) counts the number of one-dimensional simplices (edges). How a particular simplex (k) emerges and disappears is reflected in the persistence barcode, which consists of bars [ρ_b^(k), ρ_d^(k)] indicating the birth ρ_b and death ρ_d values of the filtration parameter for that simplex. The Betti curves β_0(ρ) and β_1(ρ) summarize this information, showing how the number of simplices of the corresponding dimensions varies with increasing filtration parameter.
FIGURE 1 Flowchart of the study. (A) Training a spiking neural network to generate a target trajectory at the output by the FORCE method. (B) After training, the self-sustained spiking patterns support the generation of the pattern of interest. (C) Spike trains are analyzed in several steps. First, the matrix D of the Victor-Purpura distances is calculated. Second, the matrix M is obtained by rank ordering the entries of D. Finally, we apply several approaches of algebraic topology; namely, we compute the persistent homology of the rank-ordered matrix, obtaining persistence barcodes and Betti curves which give topological signatures of the spiking patterns.
FIGURE 2 Examples of output trajectories [x(t), y(t)] produced by the trained recurrent spiking neural network: (from left to right) a circle, two-petal, three-petal, and four-petal polar roses.
FIGURE 3 Instant firing rate of the full network, averaged over a short time window, for the four different target outputs shown in Figure 2 (left column), and corresponding spike trains of randomly chosen neurons (right column). For the circle target output, the firing rate changes within a limited interval (A) and the corresponding spike train does not exhibit any distinct phases (B). For the target patterns in the form of polar roses (C, E, G), the network firing rate makes discernible rise-fall excursions for each petal of the output trajectory. The corresponding spike trains (D, F, H) contain the same number of distinct phases as the number of petals in the target pattern.
FIGURE 4 Matrices of the Victor-Purpura distance D = [D_ij] obtained for the spike trains underlying the generation of the four different target outputs in Figure 2: (A) a circle, (B) two-petal, (C) three-petal, and (D) four-petal polar roses. More distant neurons, shown by red entries, fire the most dissimilar spike trains, while less distant ones, given by blue entries, generate comparable spike patterns. The matrices show that different target patterns require a special organization of spike trains.
FIGURE 5 Matrices M = [M_ij] produced by rank ordering and normalization of the entries in the corresponding VP matrices D = [D_ij] shown in Figure 4, for the four different target outputs shown in Figure 2: (A) a circle, (B) two-petal, (C) three-petal, and (D) four-petal polar roses. The smaller entries (blue) of these matrices M correspond to the most dissimilar spike trains, while the larger ones (red) indicate the closest neurons in terms of VP distance. The neurons here are reordered based on the individual average firing rates; thus, units with a smaller index produce more spikes during trials than those with larger indices. The form of the matrices emphasizes that neurons which produce spikes with close firing rates are closer to each other than to those producing a greatly different number of spikes. However, different target patterns are characterized by individual signatures.
FIGURE 6 Barcodes showing the persistence of zero-dimensional and one-dimensional simplices for the most active neurons taken from the matrices shown in Figure 5 for the four different target outputs: (A) a circle, (B) two-petal, (C) three-petal, and (D) four-petal polar roses. Bars in the top subfigures show the existence of zero-dimensional simplices, and those in the bottom subfigures indicate the birth and death of one-dimensional simplices. These persistence barcodes reflect the complex topological structure of the spike trains produced by the neural networks performing particular output trajectories.
FIGURE 7 Betti curves for zero-dimensional (left column) and one-dimensional (right column) simplices associated with increasing filtration of the persistence barcodes shown in Figure 6 for the four different target outputs: (A, B) a circle, (C, D) two-petal, (E, F) three-petal, and (G, H) four-petal polar roses. Each curve indicates the number of simplices of a particular dimension with varying filtration parameter ρ.
Exploring essential oil-based bio-composites: molecular docking and in vitro analysis for oral bacterial biofilm inhibition
Oral bacterial biofilms are the main reason for the progression of resistance to antimicrobial agents, which may lead to severe conditions including periodontitis and gingivitis. Essential oil-based nanocomposites can be a promising treatment option. We investigated cardamom, cinnamon, and clove essential oils for their potential in the treatment of oral bacterial infections using in vitro and computational tools. A detailed analysis of the drug-likeness and physicochemical properties of all constituents was performed. Molecular docking studies revealed that the binding free energy of the Carbopol 940-eugenol complex was −2.0 kcal/mol, that of the Carbopol 940-anisaldehyde complex was −1.9 kcal/mol, and that of the Carbopol 940-eugenol-anisaldehyde complex was −3.4 kcal/mol. Molecular docking was also performed against the transcriptional regulator targets 2XCT, 1JIJ, 2Q0P, 4M81, and 3QPI. Eugenol, cinnamaldehyde, and cineol presented strong interactions with the targets. The essential oils were analyzed against Staphylococcus aureus and Staphylococcus epidermidis isolated from the oral cavity of diabetic patients. The cinnamon and clove essential oil combination presented significant minimum inhibitory concentrations (MICs) against S. epidermidis (0.0625/0.0312 mg/mL) and S. aureus (0.0156/0.0078 mg/mL). In the anti-quorum sensing activity, the cinnamon and clove oil combination presented moderate inhibition (8 mm) against Chromobacterium violaceum with substantial violacein inhibition (58% ± 1.2%). Likewise, significant biofilm inhibition was recorded for S. aureus (82.1% ± 0.21%) and S. epidermidis (84.2% ± 1.3%) with the combination. It was concluded that a clove and cinnamon essential oil-based formulation could be employed to prepare a stable nanocomposite, and that Carbopol 940 could be used as a compatible biopolymer.
Introduction
Oral bacterial infections in diabetic patients are fairly common (Shillitoe et al., 2012). Poor glycemic control facilitates increased and diversified microbial growth in the oral cavity (Xiao et al., 2020), which results in an imbalance in the oral microbiota (Al-Janabi, 2023). Gram-positive bacteria, including Staphylococcus aureus and Staphylococcus epidermidis, are the most common bacteria of the Staphylococcus genus responsible for infections in clinical settings (Balachander and Alexander, 2021). Although most Staphylococcus species are thought to be part of the natural flora, under specific conditions they can become opportunistic pathogens that can generate a variety of virulence factors (Tong et al., 2015). Various investigations have confirmed the occurrence of staphylococci in the oral flora; however, they are considered transient members of the oral microbiota (McCormack et al., 2015). S. aureus and S. epidermidis are mainly reported in older people (Murdoch et al., 2004), denture wearers (Tawara et al., 1996), or patients with periodontitis (Loberto et al., 2004). Both strains possess several virulence genes that may be present as distinct loci or as genetic elements (Cheung et al., 2021), including arginine catabolic mobile elements, which give the bacteria the ability to resist some heavy metals, particularly copper ions, and also make it easier for the bacteria to colonize the skin and mucous membranes (Al-Jabri et al., 2021). S. aureus and S.
epidermidis create an extracellular polymeric substance (EPS) that allows the bacteria to settle at the infection site and form a biofilm. EPS acts as a physical barrier against external stress and promotes the growth and maturation of microorganisms (Nguyen et al., 2020). Detachment, the last phase, releases single cells to encourage the spread of biofilm clusters to distant areas (Bertoglio et al., 2018). During the development of biofilm, different soluble factors are produced, including proteins, eDNA, exopolysaccharide, polysaccharide intercellular adhesin (PIA), carbohydrates, teichoic acids, and surfactants (Le et al., 2018). Cell-to-cell signaling by quorum sensing systems also plays an important role in virulent pathogens associated with biofilm formation (Kaur et al., 2021). The main challenge in the oral cavity is the development of bacterial biofilms that limit the permeability of drug moieties to the target site, thus leading to the development of antimicrobial resistance and treatment failure (Dagli and Dagli, 2014; Prestinaci et al., 2015). Thus, there is a great need to investigate new drug moieties to address this major health concern. Essential oils (EOs) are gaining the attention of researchers in medical science due to their significant biological activities, high penetration power, and low toxicity (Chouhan et al., 2017). Essential oils are hydrophobic and evaporate at room temperature (Dhifi et al., 2016). They are mainly comprised of low molecular weight compounds, including monoterpenes and phenolic compounds (Gheorghita et al., 2022). Since ancient times, EOs, including clove oil, have been used in oral hygiene as anti-inflammatory and antimicrobial agents (Saleem et al., 2023). The significant antimicrobial activities of EOs are mainly due to their interference with bacterial membranes owing to their hydrophobic nature, which affects cellular structures, as well as to efflux pump and enzymatic (β-lactamase) inhibition and strong antioxidant properties (Devi et al., 2010; Oliveira et al., 2022). Improved essential oil delivery at the target site can be efficiently achieved through advanced drug delivery systems, including nanocomposites (Guidotti-Takeuchi et al., 2022). These nanocomposites entrap the EO molecules and not only protect them from light but also limit their evaporation and offer efficient delivery at the target site (Joye et al., 2016). Bio-composite materials, including Carbopol 940, are advantageous for achieving efficient drug delivery goals with a high safety profile (Varaprasad et al., 2014). However, weak binding forces between the EOs and bio-composite materials may interfere with the overall formulation.
Modern technology has provided several tools for new drug discovery and development. The use of computational methods can be a valuable tool for predicting several features such as drug likeness, absorption, distribution, metabolism, and excretion (ADME) properties, bioavailability, and safety profiling. During drug design, scientists can make structural changes in molecules with the aim of modifying ADMET (ADME and toxicity) and bioactivity features (van de Waterbeemd and Gifford, 2003). In silico methods offer rapid, easy, and reliable predictions regarding drug-target interaction. Molecular docking is the most common computational structure-based drug design (SBDD) method and has been widely used since the early 1980s (Stanzione et al., 2021). It is the tool of choice when the three-dimensional (3D) structure of the protein target is available, and it mainly helps to understand and predict molecular recognition, both structurally (i.e., finding possible binding modes) and energetically (i.e., predicting binding affinity) (Stanzione et al., 2021). Molecular docking was originally designed to be performed between a small molecule (ligand) and a target macromolecule (protein) (Pawar and Rohane, 2021). However, in the last decade, there has been a growing interest in protein-protein docking, nucleic acid (DNA and RNA)-ligand docking, and nucleic acid-protein-ligand docking (Rahman et al., 2022). In formulation design, component interactions with polymers are considered very important because they can affect the permeability and bioavailability of drug molecules. Therefore, we aimed to investigate the components of an EO-based nanocomposite using molecular dynamics, docking, and in vitro analysis to determine the efficacy of the formulation.
Physicochemical in silico analysis
The drug likeness and ADMET analyses of the essential oils were performed using different online tools such as pkCSM ADMET and SwissADME (Douglas et al., 2015; Daina et al., 2017; Amin et al., 2020). The simplified molecular input line entry system (SMILES) notation was used to load the tested compounds into the input field of the abovementioned online computational tools, and data were generated. All results were recorded, and interpretations were framed accordingly.
Polymer docking
The structures of Carbopol 940, eugenol, and anisaldehyde were downloaded from PubChem. Energy minimization of all generated structures was carried out using YASARA Structure software (Karieger and Vriend, 2014). The structures of the nanocomposite components, including Carbopol 940, eugenol, and anisaldehyde, were considered as alternative receptors (host) and ligands (guest) to obtain a stable emulsion complex. AutoDock Vina v 4.2.6 was used for the molecular docking calculations in PyRx, in which the grid box was set to cover the entire component to ensure that all possible interactions with the system were searched (Dallakyan and Olsan, 2015). Discovery Studio Visualizer was used for the visualization and graphical representation of all complexes (Trott and Olson, 2010).
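As an illustration of how such a docking run can be scripted outside the PyRx interface, the following minimal sketch calls the AutoDock Vina executable through Python; the file names and grid-box values are hypothetical placeholders rather than the settings used in this study.

import subprocess

# Hypothetical input files prepared beforehand (e.g., with Open Babel/AutoDockTools)
receptor = "carbopol940.pdbqt"      # host structure, energy-minimized
ligand = "eugenol.pdbqt"            # guest molecule

# Placeholder grid box chosen large enough to cover the whole host molecule
cmd = [
    "vina",
    "--receptor", receptor,
    "--ligand", ligand,
    "--center_x", "0.0", "--center_y", "0.0", "--center_z", "0.0",
    "--size_x", "40", "--size_y", "40", "--size_z", "40",
    "--exhaustiveness", "8",
    "--out", "eugenol_docked.pdbqt",
]
subprocess.run(cmd, check=True)     # binding affinities are reported per docked pose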
Ligand docking (MD studies)
AutoDock Vina v 4.2.6 was used for molecular docking. The Protein Data Bank (PDB) was used to obtain the X-ray crystallographic structures of the transcriptional regulators 2Q0J, 3QP1, 1JIJ, 2XCT, and 4M8I in PDB format. These targets were further prepared for molecular docking in Discovery Studio 2.0 by removing water and H atoms and adding charges. The ligand 3D structures were collected from the PubChem database. Active pocket determination was performed using the CASTp 3.0 tool. Molecular docking was performed with AutoDock v 4.2.6 using the Lamarckian genetic algorithm. Both the ligands and the targets were further processed for torsions, Kollman charges, and other required settings. Finally, a command prompt was used to run the molecular docking. The best-docked molecules with the highest free binding energy [ΔG] were examined using LigPlot+, Accelrys DS Visualizer 2.0, and PyMOL. The generated poses were classified according to their root mean square deviation (RMSD) values (Qaiserani et al., 2021).
Minimum inhibitory concentration (MIC)
The minimum inhibitory concentration of the essential oils was determined by the broth dilution method with slight modification (Ullah et al., 2019). A 50-µL aliquot of nutrient broth was added to each well of a 96-well microplate, and 50 µL of the test sample was added with serial dilution. Finally, 50 µL of the test strain was added to each well, and the 96-well microplate was incubated for 24 h at 37 °C. Afterward, an aliquot of 40 μL of resazurin solution (0.015%) was added to each well and incubated for 1 h. The color changes in the 96-well microplates were recorded.
Anti-quorum sensing
The anti-quorum sensing activity of the test sample was determined using the biomarker strain C. violaceum (Amin et al., 2020). A 24 h-old culture of Chromobacterium violaceum (1/100 ratio) was streaked onto LB agar, and sterilized 6 mm filter paper discs were placed in the center of the Petri dishes. A 15 μL aliquot of the test sample was loaded onto the filter paper discs and allowed to dry for 30 min. The plates were placed in an incubator at 30 °C for 24 h. After 24 h, the zone of inhibition was measured, and the results were recorded.
Violacein quantification assay
The violacein quantification was performed by a standard procedure (Amin et al., 2020). Briefly, a 200 μL aliquot of C. violaceum (OD = 0.4 at 600 nm) along with 25 µL of the test sample was added to a 96-well microplate. The 96-well microplate was then incubated at 30 °C for 24 h. Then, the decrease in violacein pigment synthesis was quantified.
Results
Polymer docking
The relative binding free energies between the clove oil (eugenol), cinnamon oil (cinnamaldehyde), and gelling agent (Carbopol 940) molecules were determined using AutoDock Vina, as indicated in Table 1. The binding interactions are depicted in Figure 1. The molecular docking investigations revealed that the proposed formulations might be stable.
Drug likeness
The drug-likeness studies were performed using the Molinspiration tool following Lipinski's rule of five. The major constituents of the essential oils identified by gas chromatography-mass spectrometry (GC-MS) data (Rafey et al., 2021) were screened for drug likeness. It was evident from the computational data that all tested compounds showed good compliance with drug likeness according to Lipinski's rule (Table 3). Thus, all tested compounds could be drug candidates.
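As a concrete illustration of the SMILES-based drug-likeness screening described above, the following minimal sketch uses the open-source RDKit toolkit instead of the Molinspiration/SwissADME web tools employed in the study; the SMILES strings are standard literature structures for three major constituents and should be verified against PubChem before use.

from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

# Illustrative SMILES for three major EO constituents (verify against PubChem)
smiles = {
    "eugenol": "COc1cc(CC=C)ccc1O",
    "cinnamaldehyde": "O=C/C=C/c1ccccc1",
    "1,8-cineole": "CC12CCC(CC1)C(C)(C)O2",
}

for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    mw = Descriptors.MolWt(mol)
    logp = Crippen.MolLogP(mol)
    hbd = Lipinski.NumHDonors(mol)
    hba = Lipinski.NumHAcceptors(mol)
    tpsa = Descriptors.TPSA(mol)
    # Lipinski's rule of five: MW <= 500, logP <= 5, H-bond donors <= 5, acceptors <= 10
    violations = sum([mw > 500, logp > 5, hbd > 5, hba > 10])
    print(f"{name}: MW={mw:.1f}, logP={logp:.2f}, HBD={hbd}, HBA={hba}, "
          f"TPSA={tpsa:.1f}, Lipinski violations={violations}")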
Drug-likeness score
To further extend the scope of the investigation, the drug-likeness score was determined using Molsoft's chemical fingerprint model (Kamoutsis et al., 2021). The bioavailability was predicted using the Bioavailability Radar (SwissADME) and the Brain Or IntestinaL EstimateD permeation (BOILED-Egg) model (SwissADME) (Kamoutsis et al., 2021). Cinnamaldehyde presented a slight deviation from the standard value of the drug-likeness score (less than one), whereas all other molecules were within the permissible range (Figures 7-9). Likewise, all molecules were within the permissible limits of bioavailability (Figures 7-9) and could cross the blood-brain barrier (Figure 10).
ADMET analysis
The ADMET analysis was performed using the Molinspiration tool to determine the ADMET attributes of all tested compounds (Table 4). The results of the analysis revealed that all tested molecules were in agreement with the set parameters and could be utilized for oral formulations (Table 4).
Determination of MICs
The cinnamon, cardamom, and clove EOs were tested individually and then mixed in definite ratios and tested for the effects of the different combinations (Table 5). Individually, the clove EO showed the lowest MIC (0.024 mg/mL) against S. epidermidis, indicating the strongest inhibition, followed by the cinnamon EO (MIC 0.039 mg/mL). In the case of S. aureus, the cinnamon EO showed the highest inhibition (0.078 mg/mL), followed by the clove EO (0.097 mg/mL) (Table 6; Figure 11). Based on these findings, it was decided to further process only the clove, cinnamon, and cardamom oils to examine their combined effect. In the MIC assays of the diverse combinations (as explained above), combination F2, comprising cinnamon and clove EOs, was chosen for further activity because this mixture showed excellent inhibition of S. epidermidis (0.0625/0.0312 mg/mL) and S. aureus (0.0156/0.0078 mg/mL) (Table 7). It was concluded that the mixture of clove and cinnamon EOs may have a synergistic effect in the eradication of oral bacteria.
Discussion
Molecular docking is an important modeling approach that gives an idea about the interactions between a receptor (host) and a ligand (guest). This in silico method depicts the ligand binding sites and conformations within a host. Molecular docking simulation gives insight into the orientation of the drug in a binding site (called its "pose") and also gives an estimation of the binding affinity of the identified pose in the form of a scoring value (Ahmed et al., 2018). The AutoDock Vina algorithm uses a machine learning method that merges the advantages of knowledge-based potentials and empirical scoring functions to calculate the binding energy of a given ligand pose. Ligand docking of cinnamaldehyde, eugenol, and cineol was performed with the transcriptional regulator 2Q0J (Pseudomonas quinolone signal response protein PqsE), and the interactions were recorded. All ligands showed interaction with the target; however, in the case of eugenol, the maximum number of H-bonding interactions (3) was recorded with pose 2 (ΔG = −6.2 kJ mol−1). The participating amino acids were Asp73, His71, and Asp178, and the neighboring amino acids included Tyr72, His159, Leu193, Leu277, Ser273, His282, Ser285, and Phe195 (Table 2; Figure 2). It was concluded that both H-bonding and hydrophobic interactions participate in this case and may contribute towards antibiofilm potential. Another interesting interaction with 2Q0J was recorded in the case of cinnamaldehyde, which showed a better interaction with pose 2, with a binding energy of ΔG = −6.0 kJ mol−1. In this case, Ser273 and His282 showed H-bonding interactions, whereas hydrophobic interactions were recorded with Leu193, Glu182, His71, Asp73, Asp178, Leu277, and Phe195. In the case of the anti-quorum sensing regulator gene 3QP1, eugenol showed the best fit in the active pocket with pose 3 (ΔG = −5.0 kJ mol−1). The H-bonding was contributed by Trp111, Gly128, and Gla112, and other non-H-bonding interactions were contributed by neighboring amino acids, including Arg159, Gly158, Gly162, Arg163, Ser137, and Met110 (Table 2; Figure 3). Most importantly, in the case of cinnamaldehyde, only one H-bonding interaction was recorded, with Arg101, amounting to a low binding energy (ΔG = −4.0 kJ mol−1), and no interaction was seen in the case of cineol. This predicts that eugenol may have anti-quorum sensing activity. In the case of the transcriptional regulator 1JIJ (S. aureus tyrosyl-tRNA synthetase), no interaction was recorded with cinnamaldehyde or cineol, which indicates little or no inhibition potential of these compounds. In contrast, eugenol showed the best fit in the active pocket of the target site (1JIJ). The best fit occurred with pose 1 and had a low binding energy (ΔG = −3.1 kJ mol−1), with three H-bonding interactions comprising Arg158, Gly162, and His161, while the neighboring amino acid Arg88 was involved in hydrophobic interactions (Table 1; Figure 4).
FIGURE 6 Drug-likeness score and bioavailability radar prediction for eugenol.
FIGURE 7 Drug-likeness score and bioavailability radar prediction for cinnamaldehyde.
Docking of
compounds on the active pockets of the transcriptional regulator 2XCT (S. aureus topoisomerase-II DNA gyrase) revealed that all tested compounds had low levels of interaction, each comprising one H-bond interaction (Table 1; Figure 5). However, the binding energy of eugenol was significant (ΔG = −5.9 kJ mol−1) with pose 1, showing interactions with Val1268 and Met1113. In this case, hydrophobic interactions were seen with Asn1269, Arg1092, Gln1267, Ser1098, Phe1266, Phe1097, and Thr1220. In the case of the transcriptional regulator 4M8I (the bacterial cytoskeletal division protein filamentous temperature-sensitive mutant Z, FtsZ), the amino acid Thr102 contributed to H-bond formation, which shows a lesser participation of this compound in inhibition.
FIGURE Drug-likeness score and bioavailability radar prediction for cineol.
Molecular docking investigations were performed to investigate the stability of the major components of the formulation and the active essential oils. The binding free energies between Carbopol 940 (host) and the eugenol and anisaldehyde molecules (guests) estimated the strength of the interactions between them. Tighter interactions between the drug molecules and the gelling agent might lead to a stable emulsion and may result in a more sustained drug release profile than looser interactions (Hameed et al., 2020; Khan et al., 2021). It was also apparent from the binding free energies table that the emulgel form has a lower binding affinity than the co-ligand form. For example, the mono-ligand complexes, including the Carbopol 940-eugenol and Carbopol 940-anisaldehyde (−2.4 kcal/mol) complexes, were found to have less binding affinity than the Carbopol 940-eugenol-anisaldehyde complex (−3.4 kcal/mol). It was evident that the bio-composite Carbopol 940 and the EO components were compatible with each other and could offer a stable nanocomposite, as reported earlier (Lu et al., 2021). Furthermore, in the case of EOs, nanocomposite-based formulations are advantageous compared to conventional dosage forms because they limit EO evaporation and allow enhanced drug delivery (Varaprasad et al., 2014). The empirical range of drug-likeness scores is −1 to +1. In the case of cinnamaldehyde, the drug-likeness score is −1.54, which is out of range, whereas eugenol and cineol, with −0.74 and −1.04, are within range and thus fulfill the criteria of drug-likeness (Figure 6). The Bioavailability Radar is another helpful tool for quickly assessing the drug likeness of a molecule. The pink region reflects the best range for oral bioavailability for each particular property [Lipophilicity (LIPO): XLOGP3 between −0.7 and +5.0; Polarity (POLAR): topological polar surface area (TPSA) between 20 and 130; Molecular weight (SIZE): MW between 150 and 500 g/mol; Insolubility (INSOLU): log S not lower than −6; Flexibility (FLEX): no more than nine rotatable bonds; Saturation (INSATU): fraction of carbons in sp3 hybridization not less than 0.25]. Based on these criteria, INSATU for cinnamaldehyde, that is, the fraction of carbons in sp3 hybridization, violates the rule, and thus its values cross the pink area. In the case of eugenol and cineol, all parameters stay within limits and thus support good bioavailability, keeping in mind that eugenol shows a slight crossing of the pink area. In the log P method developed by Wildman and Crippen (WLOGP) vs.
TPSA reference, the BOILED-Egg model (Daina and Zoete, 2016) provides an assessment of passive gastrointestinal absorption (HIA) and brain penetration (BBB) as a function of the position of the molecule. The white zone denotes that passive absorption through the gastrointestinal system is likely, whereas the yellow region (yolk) indicates that brain penetration is likely. The yolk and white regions are not mutually exclusive. The points are additionally colored blue if the molecules are expected to be actively effluxed by P-gp (PGP+) and red if they are expected to be P-gp non-substrates (PGP−). Considering this interpretation, cinnamaldehyde, eugenol, and cineol are predicted to be PGP− and to have a high likelihood of brain penetration, as demonstrated by the BOILED-Egg model. The evaluation of the ADMET properties of a medicinal drug is becoming crucially important, and the use of computational techniques has made determining the ADMET characteristics of substances much easier. All the investigated compounds had TPSA values of less than 100, indicating good oral absorption or membrane permeability. Chemical absorption levels are often estimated using Caco-2 permeability, intestinal absorption (human), skin permeability, and P-glycoprotein substrate or inhibitor status. A tested molecule has a high Caco-2 permeability and is quickly absorbed when the Papp coefficient is larger than 8 × 10−6 cm/s and the predicted value is greater than 0.90. The Caco-2 permeability was high for all compounds. Anything less than 30% absorption in the human gut is termed inadequate absorption; in this case, all the substances tested exhibited a high absorption rate (>90%). Because a chemical with a log Kp > −2.5 has a relatively low skin permeability, cinnamaldehyde and cineol may have poor skin permeability, while eugenol and cineol may have excellent skin penetration. P-glycoprotein is a member of the ATP-binding transmembrane glycoprotein family [ATP-binding cassette (ABC)], which can excrete drugs from cells. Except for cineol, the ADMET data revealed that the compounds tested are neither substrates nor inhibitors of P-glycoprotein. Drug distribution in tissues is primarily characterized by the steady-state volume of distribution (VDss), the fraction unbound (human), CNS permeability, and blood-brain barrier membrane permeability (logBB). When VDss is less than 0.71 L kg−1 (log VDss < −0.15), the distribution volume is considered to be quite low. When VDss is larger than 2.81 L kg−1 (log VDss > 0.45), the distribution volume is considered to be high. The distribution volumes of our tested substances were low.
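To make the interpretation rules quoted in this section explicit, the following small helper (constructed here for illustration only) applies the stated thresholds to a compound's predicted values; the numbers passed in the example call are placeholders, not the study's pkCSM outputs.

def interpret_admet(tpsa, caco2, log_bb, log_ps, log_vdss):
    # Encodes the cut-offs quoted in the text (TPSA, Caco-2, logBB, logPS, log VDss)
    notes = []
    notes.append("TPSA < 100: good membrane permeability" if tpsa < 100
                 else "TPSA >= 100: weaker permeability expected")
    notes.append("high Caco-2 permeability" if caco2 > 0.90
                 else "low Caco-2 permeability")
    notes.append("crosses the BBB easily (logBB > 0.3)" if log_bb > 0.3
                 else "limited BBB permeation")
    notes.append("CNS penetrant (logPS > -3)" if log_ps > -3
                 else "unable to penetrate the CNS")
    if log_vdss < -0.15:
        notes.append("low distribution volume")
    elif log_vdss > 0.45:
        notes.append("high distribution volume")
    else:
        notes.append("intermediate distribution volume")
    return notes

# Example call with placeholder values
print(interpret_admet(tpsa=29.5, caco2=1.4, log_bb=0.4, log_ps=-2.5, log_vdss=0.2))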
In terms of permeability, compounds with logBB > 0.3 are thought to cross the blood-brain barrier with ease. Except for eugenol (logBB 0.185), the compounds tested have a logBB greater than 0.3, indicating that they may easily cross the blood-brain barrier. Because logPS < −3 for our tested compounds, they are unable to penetrate the CNS. Cytochrome P450 enzymes are very important for the metabolism of many drugs in the liver. This class comprises more than 50 enzymes, among them CYP1A2, 2C9, 2C19, 2D6, 2E1, and 3A4, with CYP3A4 and CYP2D6 playing a key role in drug metabolism (above 90%). Our results confirm that all tested compounds were not substrates for CYP3A4 and CYP2D6. All tested compounds were inhibitors of CYP1A2. Thus, our tested molecules may be metabolized in the liver. Among the tested molecules, cineol has the highest total clearance. Regarding safety profiling, eugenol showed mild AMES toxicity, and all compounds were predicted to be skin sensitizers. These findings clearly indicate that skin sensitization may occur; however, it mainly depends on the dose utilized and the degree of encapsulation in the final formulation. Based on the findings of our earlier investigations (Rafey et al., 2021), the cinnamon, cardamom, and clove EOs were tested individually and then mixed in defined ratios and tested for their combined effect in the different combinations. Individually, the clove EO showed the lowest MIC (0.024 mg/mL) against S. epidermidis, indicating the strongest inhibition, followed by the cinnamon EO (0.039 mg/mL). In the case of S. aureus, the cinnamon EO showed the highest inhibition (0.078 mg/mL), followed by the clove EO (0.097 mg/mL). Based on these findings, it was decided to further process only the clove, cinnamon, and cardamom EOs to examine their combined effect. In the MIC assays of the diverse combinations (as explained above), combination F2, comprising cinnamon and clove EOs, was chosen for further activities because it showed excellent inhibition of S. epidermidis (0.0625/0.0312 mg/mL) and S. aureus (0.0156/0.0078 mg/mL). It was concluded that the EOs of clove and cinnamon may have a synergistic effect in the eradication of oral bacteria. A possible reason for the promising activities may be the presence of eugenol and cinnamaldehyde, which have both bactericidal and bacteriostatic activities (Ali et al., 2005). Quorum sensing is one of the major mechanisms of bacterial biofilm formation; thus, inhibition of quorum sensing can demonstrate that a compound has antibiofilm features. Chromobacterium violaceum is a biomarker strain for bacterial quorum sensing. In this experiment, different concentrations of essential oils were tested for anti-quorum sensing activities, and diverse concentrations of clove and cinnamon EOs were analyzed for inhibition zones and violacein inhibition. The N2 formulation presented promising results and was processed further. The tested formulations were analyzed further for antibiofilm potential, and strong antibiofilm activity was recorded for N2 and N3 against the tested strains. This confirmed that the biofilm inhibition was due to quorum sensing inhibition (Qaisrani et al., 2021).
Conclusion
The molecular dynamics and docking studies revealed that the major components of the essential oils were compatible with each other and with the bio-composite polymer Carbopol 940. Further in silico characterization indicated that the nanocomposite components complied with the established parameters. Moreover, the combination of cinnamon and clove EOs showed significant antimicrobial, anti-quorum sensing, and antibiofilm activity against both the clinical oral strains S. aureus and S. epidermidis. These EOs are considered to have synergistic effects and will be considered for encapsulation in a nanocomposite dosage form. Thus, it was concluded that a cinnamon and clove EO-based nanocomposite with the Carbopol 940 formulation could be further processed for oral formulation and dental material development due to its strong stability and enhanced biological activities. Further investigations on the effect of copolymer addition, as well as clinical investigations, are suggested.
TABLE 1 Interaction analysis of diverse formulation systems.
TABLE 2 Docking analysis of major essential oil components.
TABLE 3 Lipinski's rule of five application data.
TABLE 4 ADMET properties of compounds.
TABLE 6 Minimum inhibitory concentrations of the individual essential oils.
TABLE 7 Minimum inhibitory concentrations of the different combinations of essential oils.
TABLE 11 Antibiofilm properties of essential oil combinations against S. aureus.
Design of Learning Activities using Rigorous Mathematical Thinking (RMT) Approach in Application of Derivatives
ABSTRACT Article history: Submitted: December 21, 2020; Final Revised: January 15, 2021; Accepted: January 20, 2021; Published Online: January 31, 2021. Learning design is one of the factors that support the learning process in order to achieve learning objectives in all subjects, including mathematics. Many approaches can be employed by a teacher in making a learning design, one of which is the rigorous mathematical thinking (RMT) approach. The RMT approach emphasizes students actively constructing their knowledge through the use of psychological tools and mediation. This article reports a set of learning activities designed through a developmental study using the RMT approach in the topic of application of derivatives. The participants of this research were twenty-six 11th grade students from a private secondary school. Data were collected through a written test and classroom observation. The research instruments were a student worksheet and an observation sheet. In the learning process, students use psychological tools to connect their previous knowledge to the material being studied. This enables students to construct their own knowledge more thoroughly. On the other hand, with the mediation carried out by the teacher, students can focus more, understand each part of the material well, and bridge conceptual errors. Based on the results of the study and the literature, the design of learning activities using the Rigorous Mathematical Thinking (RMT) approach on the application of derivatives can be an alternative for effective learning.
INTRODUCTION
The derivative concept and its application in real life is one of the essential mathematical topics for high school and college students. However, there are still many difficulties students have in understanding and solving problems related to derivatives. The difficulty is mostly because students are not able to understand the derivative concept well (Tall, 2011). Moreover, Hashemi et al. (2014) report that the weakness in making connections between the graphical and symbolic aspects of the concepts was a difficulty for most students in the learning of derivatives. It seems apparent that most students only memorize instructions and work algebraically from memory without relating the function graph as a visual representation. In fact, Van Garderen and Montague emphasize that visual representation by students has a positive correlation with mathematical problem solving (Van Garderen & Montague, 2003). Therefore, a learning design is needed that is able to facilitate students well in understanding the concept of derivatives and its application. One of the mathematical approaches that facilitates students in using visual representation for deep understanding is rigorous mathematical thinking (RMT). The RMT paradigm is based on two main theories: the theory of psychological tools from Vygotsky and the theory of mediated learning experience (MLE) from Feuerstein. RMT implies that cognitive processes are formed through the appropriation, internalization, and utilization of psychological tools through the application of MLE interactional dynamics (Kozulin & Kinard, 2008).
Furthermore, RMT can be interpreted as mental operations to acquire insights about patterns and relationships, to elaborate and organize these insights to form emerging conceptualizations and understandings, and then to transform and generalize these emerging conceptualizations and understandings into coherent, logically bound ideas and networks of ideas (Kinard & Kozulin, 2006). Therefore, the use of psychological tools in RMT allows students to be involved in designing visual representations as a method to organize and shape relationships based on data received through mathematical reasoning (Kozulin & Kinard, 2008). Hidayat et al. (2017) suggest that RMT emphasizes the mediation between teacher and student, which results in a good understanding of the material to be transformed further conceptually through interrelated ideas. Several studies related to RMT have been conducted. Some studies discuss RMT as a level of cognitive function and others discuss RMT as a learning approach. Research on RMT as a level of cognitive function has been carried out on various topics such as geometry (Nugraheni et al., 2018; Yunita et al., 2019), integers and fractions (Resmi & Caswita, 2020), and real analysis (Firmasari et al., 2019). As an approach, previous research shows positive results supporting students' improvement on particular topics in mathematics. For example, Hendrayana (2017) reported that the conceptual understanding of students who receive the RMT approach is better than that of those who receive direct learning. In addition, Hidayat et al. (2017) concluded that the RMT approach can also lead to better achievement in students' mathematical creative and critical thinking abilities than an expository approach. Previous studies have not explained qualitatively the learning design using the RMT approach, especially on the topic of the application of derivatives. Furthermore, studies on difficulties in learning derivatives have discussed weaknesses such as connecting graphical and symbolic aspects (Hashemi et al., 2014). However, they do not discuss in detail how graphical aspects can help students in solving problems related to derivatives. This study explains the relationship between RMT and the use of psychological tools to help students build knowledge through graphical and algebraic aspects at the same time. Therefore, the purpose of this study is to describe qualitatively the design of mathematics learning activities using the RMT approach in the topic of the application of derivatives, as well as to describe student responses. The results and findings of this study can be a reference for teachers in designing learning activities on the topic of application of derivatives.
THEORETICAL BACKGROUND
Learning activities
Learning is conceived as an active, constructive, collaborative, and context-bound activity (Kwakman, 2003). This activity aims to provide an effective experience for students in building their own knowledge and gaining a thorough understanding of the material discussed. Current theories assume that students learn best when they have the opportunity to actively construct their own knowledge (McLaughlin, 1997). In order for students to construct their own knowledge, the teacher should stimulate students to engage in a learning environment that focuses on active and constructive activities. Thus, the teacher must plan and organize learning activities that are good at facilitating students to gain a deep understanding of the material.
Theory of Psychological Tools
Mathematics-specific psychological tools were developed from Vygotsky's general psychological tools (1979). These tools are symbolic artifacts: signs, symbols, texts, formulae, and graphic-symbolic devices. Kinard and Kozulin note that mathematically specific psychological tools include symbols and codes, the number system, tables, the number line, and the Cartesian coordinate system (Kinard, 2007). Through the utilization of psychological tools, students can organize their knowledge well and support mathematical generalization and abstraction.
Theory of Mediated Learning Experience (MLE)
The theory of mediated learning was developed by Feuerstein in 1980. He suggested that a mediated learning experience (MLE) reflects the quality of interaction between the learner, the material, and the human mediator. There are three criteria of mediation which are important to meet for the best quality of interaction. These criteria are intentionality/reciprocity, transcendence, and mediation of meaning. Intentionality/reciprocity means that the mediator should attract and keep the student's attention and engagement with the task. Transcendence means that the mediator should establish the way for bridging students to relate their prior knowledge to the task. Mediation of meaning means that the mediator should make students understand the motivation behind each step of the learning activity and become self-directed learners. The primary role of MLE is to guide students in establishing the basis for efficient learning and problem-solving strategies (Kozulin & Kinard, 2008).
Rigorous Mathematical Thinking (RMT) as an approach in the Mathematical Classroom
RMT as an approach means the use of mediated learning experience (MLE) theory and psychological tools theory in learning activities. The RMT application focuses on mediation for students to be involved in the construction of mathematical concepts using three phases: cognitive development, content as process development, and cognitive conceptual construction practice. The cognitive development phase is the phase where the teacher mediates students to appropriate the structure of cognitive tasks with psychological tools based on previous experience. Content as process development is the phase where the teacher mediates students to discover and formulate mathematical patterns and relationships through the use of psychological tools to construct mathematical understanding. Cognitive conceptual construction practice is the phase where the teacher mediates students to practice the use of psychological tools to organize the use of cognitive functions to solve cognitive exercises. The RMT process reflects a quality of learning which aims to engage all learners in constructing mathematical conceptual learning with deep understanding (Kozulin & Kinard, 2008). When precise understanding is achieved, students become able not only to be involved in solving certain problems but also in reflective thinking.
METHOD
This study uses a quasi-experimental research method. However, this paper does not discuss the statistical results examining the effectiveness of the RMT approach, as reported in a previous study (Hidayat et al., 2017). Instead, this paper describes the learning design qualitatively. The learning design describes learning activities between students and teacher, supported by students' responses to the tasks corresponding to each of the RMT phases sequentially. The participants of this research were twenty-six 11th grade students from a private secondary school in Bandung.
The participants were selected by considering the heterogeneity of the class background in terms of gender and mathematical ability. Data were collected through a written test and classroom observation. The research instruments were a student worksheet and an observation sheet. Data were analyzed by comparing the actual activities experienced by the student participants with the activities expected within RMT activities theoretically. The discussion aims to describe student learning activities through the rigorous mathematical thinking (RMT) approach on the subject of the application of derivatives. The description of learning activities in this paper includes a description of learning with a rigorous mathematical thinking approach.
RESULTS AND DISCUSSION
Phase I Cognitive Development
In this phase, students were mediated to adjust cognitive tasks with psychological tools. First, the teacher mediates students by giving an explanation of the big picture of the material that will be studied. This aims to keep students intentionally involved in the learning process. Next, the teacher provides a worksheet that refers to the RMT approach to students who have formed small groups. The worksheets provided aim to enable students to understand the meaning of the derivative as the slope of the tangent line. The teacher monitors the needs of students in answering the worksheets given. Mediation of transcendence is carried out by the teacher in the form of questions and answers with students regarding matters relating to prerequisite abilities such as gradient formulas and the limit concept. Some students can immediately remember and respond well; however, others did not answer and tried to remember. The essence of this phase is that students are given experience to find the meaning of the derivative of a curve at a certain point with psychological tools through the tangent line. The student response above shows how students use their cognitive functions to determine the coordinates of points A and B (Figure 1a) and form the slope of the line through points A and B (Figure 1b). After that, the teacher gives mediation by asking whether this gradient will be the same as the gradient of the tangent line. Some students draw illustrations in the worksheet and answer with the conditions that are met when point B approaches point A (Figure 2a). Next, students represent this in the form of limits and connect it with the algebraic definition of the derivative that they learned at the beginning of the derivative chapter (Figure 2b). After that, students write a conjecture on the worksheet. There are differences in the conjectures of each group, most of which answer that the slope of the tangent line at a certain point is the result of substituting the abscissa of the point into the derivative of the function of the curve (Figure 3a). However, some of them cannot conclude completely (Figure 3b). Mediation of meaning is given by the teacher in terms of confirming that all students understand what the slope of the tangent line means and why and how it is related to the derivative. Moreover, the teacher highlighted the importance of understanding the slope of the tangent line to solve problems related to maxima and minima. The experience of students in understanding the relationship between the slope of the tangent line and derivative concepts is very valuable. This can provide a large scheme in the brain in which one idea is interrelated with another.
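As an illustration of the kind of reasoning mediated in this phase (a worked example constructed here for clarity, not taken from the study's worksheet), the slope of the secant line through A(a, f(a)) and B(a + h, f(a + h)) passes to the tangent slope in the limit:

\[
m_{AB} = \frac{f(a+h) - f(a)}{h}, \qquad
m_{\text{tangent}} = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h} = f'(a).
\]

For example, for \( f(x) = x^3 - 3x \) one gets \( f'(x) = 3x^2 - 3 \), so the tangent slope at \( x = 2 \) is \( f'(2) = 9 \); moreover \( f'(x) > 0 \) for \( |x| > 1 \) (the function is increasing), \( f'(x) < 0 \) for \( -1 < x < 1 \) (decreasing), and \( f'(\pm 1) = 0 \) marks the turning points, which anticipates the conjectures formulated in Phase II.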
Some references state that students' understanding of the slope of the tangent line is often instrumental, without knowing the meaning of its relationship with derivative concepts (Sahin et al., 2015). Therefore, providing direct experience to students using psychological tools in the form of Cartesian coordinates and graphs is very helpful in cognitive development for organizing the big idea of the relationship between the slope of the tangent line and the derivative concept. In addition, Mielicki & Wiley (2016) suggest that graphical representations, in this study the psychological tools, can help students informally where their understanding of the slope concept is lacking. This is because most students understand the slope material in relation to its graphic representation (Nagle et al., 2013).
Phase II Content as Process Development
In this phase, students are mediated to find and formulate patterns or relationships between cognitive tasks and psychological tools in the form of tangent lines to curves in the Cartesian coordinate system that were previously understood. The teacher mediates by giving scaffolding to students in finding the relationship and ensuring that each student is involved in the process. In Figure 4 it appears that students are given the experience of drawing the graph of a function and choosing a number of points, along with drawing the tangent line at each point, to observe and find a pattern or characteristic of increasing and decreasing functions (Figure 5). Figure 5. Students' response. After that, students write a conjecture to find out whether a function is decreasing or increasing. In this section, all students have the same conjecture, that a function is increasing when f'(x) > 0 and decreasing when f'(x) < 0 (Figure 6a), and some groups complete it with an illustration of the line graph as shown in Figure 6b. Some students answered that the function is neither increasing nor decreasing. Furthermore, the teacher informs them that the situation referred to as a turning point is one possibility in determining the local maximum and minimum values. Students' understanding in making the conjecture for increasing and decreasing functions is better when students make connections with the concept of the slope of the tangent line that they have understood. This happens because the rigorous mathematical thinking approach facilitates students in making connections and forming a deep understanding of relationships through the use of psychological tools. These connections are needed by students, as explained by Adu-Gyamfi et al. (2017) in their research: it is very important to know how students make connections, what kinds of connections are made, and how to change learning practices to help students make these connections in understanding mathematical concepts. If a mathematical concept is understood relationally, students may compartmentalize the big ideas related to the concept in their conceptual system (Sahin et al., 2015).
Phase III Cognitive Conceptual Construction Practice
In this phase, students are mediated to be able to use psychological tools to organize their cognitive functions in mathematical problem solving. Some mathematical questions are given in the worksheet to be worked on and discussed by each group. The following is an example of two student responses to the questions. Most students answer algebraically and complete the solution using graphs (Figure 7a). However, there are still students who think that the function has a maximum value (Figure 7b). This shows that some students still have difficulty in connecting the use of graphs in determining maximum-minimum concepts. (a) (b) Figure 8.
Figure 8. (a) Student uses psychological tools to formulate a solution; (b) student does not use psychological tools to solve the problem.

Figure 8a shows a student using psychological tools to understand the problem and to relate knowledge they already have in formulating a solution. On the other hand, Figure 8b shows a student trying to formulate a solution without using psychological tools. This student is visibly confused in formulating the solution because they assume that the known point is the tangent point. Students who have a deep understanding of tangent lines to curves and who can use psychological tools well are able to plan their problem solving. This is because a deep understanding of how the material is related allows students to develop problem-solving strategies fluently. This is in line with research by Fransisco (2013), which reports that high school students recognize mathematical understanding as a key aspect of knowing mathematics. In addition, the use of psychological tools in the form of Cartesian coordinates and graphs makes it easier for students to represent the problems they face visually and to organize their resolution. Conversely, students who cannot relate the material and who do not use psychological tools find it difficult to formulate a solution, because they lack visual representations of the objects involved and are therefore unable to formulate the solution. Visual representations can thus be another important way to designate or communicate mathematical concepts and promote a
3,904.6
2021-01-31T00:00:00.000
[ "Mathematics", "Education" ]
Levelling Profiles and a GPS Network to Monitor the Active Folding and Faulting Deformation in the Campo de Dalias (Betic Cordillera, Southeastern Spain) The Campo de Dalias is an area with relevant seismicity associated to the active tectonic deformations of the southern boundary of the Betic Cordillera. A non-permanent GPS network was installed to monitor, for the first time, the fault- and fold-related activity. In addition, two high precision levelling profiles were measured twice over a one-year period across the Balanegra Fault, one of the most active faults recognized in the area. The absence of significant movement of the main fault surface suggests seismogenic behaviour. The possible recurrence interval may be between 100 and 300 y. The repetitive GPS and high precision levelling monitoring of the fault surface during a long time period may help us to determine future fault behaviour with regard to the existence (or not) of a creep component, the accumulation of elastic deformation before faulting, and implications of the fold-fault relationship. Introduction The study of low rate deformation tectonic structures is of great relevance, because although they are related with high seismic risk, it is difficult to accurately determine the deformation rates. With most active tectonic studies focused on faults, the analysis of related fold development is poorly documented. The southeastern region of the Betic Cordillera (Figure 1), constituting one of the most seismically active areas within the Iberian Peninsula, is a prime target for such analysis. The current NNW-SSE convergence between the African and Eurasian plates mainly develops compressive structures such as large folds and strike-slip faults. Many NNW-SSE normal faults are coeval with compressive structures. The Campo de Dalias and surrounding areas (Figures 1 and 2) comprise one of the most interesting zones for a characterization of present deformation as well as the relationship between the different compressive and extensional structures, folds and faults. The aim of this contribution is to describe the geologic, geomorphologic and seismic features related with recent tectonic activity, and to present data from the non-permanent GPS network and the high precision levelling profiles installed in this region. Horizontal NNW-SSE shortening through plate convergence and NE-SW coeval extension could thereby be determined, and the vertical uplift or subsidence related with development of folds and faults might be elucidated as well. In quantifying the deformation of a region, the establishment of GPS networks and levelling profiles prove adequate [1][2][3]. The best option is to use a permanent network of GPS stations; however, when this is not possible because of financial constraints, a network of closely-spaced pillars and its periodic re-observation through campaigns spaced over time is a very good alternative [4,5]. High precision levelling allows us to determine accurate vertical slips in the studied area. This study establishes a regional GPS network ( Figure 2) and two levelling profiles across the Balanegra Fault, one of the faults showing most recent seismic activity in the zone (Figure 3). The benchmarks located in the hangingwall and footwall through the profiles permit assessment of its co-seismic activity with related elastic deformation or creep character, at least for the time period studied. 
The combination of these two geodetic methods in repeated surveys will enhance our understanding of the relationship between fault movement and folding [2,3]. This research ultimately intends to answer the following questions: Is the deformation of folds and faults simultaneous? Is the fold development a result of aseismic movement? Can we accurately determine the uplift and slip rates? Can we use these data in the future for geological hazard assessment?

Recent Folds and Faults

The Betic Cordilleras (Figure 1) have undergone a polyphase evolution involving alternate and sometimes simultaneous compressional and extensional deformations since the early Miocene. However, the most recent tectonics, since late Miocene times, can be characterised by mainly NNW-SSE compression as a result of the Eurasian and African plate convergence. These plates converge at a rate of approximately 5 mm/y [6,7]. Coeval with compression, ENE-WSW orthogonal extension takes place. The NNW-SSE compression mainly develops large open folds with an ENE-WSW trend in a basin and range landscape in the surrounding mountains [8] (Figure 1). In the studied region, the reliefs of the Sierra de Gador correspond to a broad antiform that folds late Miocene calcarenites [9]. To the south, the Campo de Dalias plain is a large open synform having a WSW-ENE axis and an adjacent anticline southwards that gently folds late Miocene up to Quaternary sediments. This geometry in depth is well established from geologic, geophysical and well data (Figure 2), and reveals a growth fold that continues southwards to the Alboran Sea. The offshore seismic profiles indicate that the most recent sediments are gently folded, so these folds might be active at present [9]. High-resolution sea floor imaging yields evidence of an active offshore rupture along a strand of the Carboneras Fault Zone in the Gulf of Almería, which suggests that these active and seismogenic faults may entail specific seismic hazards and tsunami potential [10]. The ENE-WSW coeval extension develops normal faults and tensional joints. The NW-SE normal faults also affect Quaternary sediments [11]. One of the most important faults within the area of study is the Balanegra Fault, featuring recent associated seismic activity. The Balanegra Fault zone comprises two main parallel faults along with other minor synthetic faults, producing a staircase morphology (Figure 3). The recent activity of these faults can also be inferred from geomorphological features, including evident topographic scarps in the landscape [12] (Figure 4). Furthermore, the coastline of the Campo de Dalías features some straight segments several kilometres long with a NW-SE trend that coincides with some of the principal faults, such as the Balanegra Fault, evidencing its recent activity. The difference in topographic height between the hangingwall and footwall of Quaternary normal faults reflects the minimum vertical throw. The vertical scarps have average heights of about 2 m, though they may reach 35 m (Loma del Viento Fault) or 38 m (Balanegra Fault), representing cumulative vertical throws.

Seismicity

The southern Betic Cordilleras are characterized by continuous shallow seismic activity of low to moderate magnitude, and less frequent large earthquakes that reveal the relatively high seismic hazard of this region. There are historical records of at least fifty destructive earthquakes, with detailed descriptions of the main shocks, surface breaks and related damage distribution [13,14].
The village of Adra and its surroundings near the Balanegra Fault have undergone long periods of seismic activity, particularly in the past two centuries. The major earthquakes and their intensities, as documented near the Balanegra Fault, are shown in Table 1. For example, in 1804, an intensity of IX (MSK) was registered [15]. The most intensive earthquake activity recorded recently in this region occurred from December 1994 to January 1995, including Mb (body wave magnitude) = 4.9 and Mb = 5.0 mainshocks (maximum intensity VII, MSK) [17]. Over the following three months, 350 events (Md (duration magnitude) ≥ 1.5) were recorded in the area, with most hypocentres at a depth between 0 and 12 km. The spatial distribution of seismicity defines an approximately NW-SE elongated region, limited to the North and South by the two mainshocks running parallel to the NW-SE Balanegra Fault ( Figure 5). The focal mechanisms of the last two mainshocks show fault-plane solutions involving normal faulting with an oblique slip component [18] (Figure 5). Uplift and Slip Rates Several geologic data underline a clear uplift in the region since the late Miocene that suggests active tectonic deformation. In Tortonian times, a major transgression in the Betic Cordillera largely covered the region, and no previous relief can be seen [19]. Afterwards, the development of large folds and local normal faults caused either uplifting or the sinking of marine sediments, thereby generating the present relief [19,20]. The later sediments were deposited simultaneous to the fold and fault processes. Previous research efforts [19] calculated a minimum average uplift rate of 0.028 cm/yr for the highest peak in Sierra de Gádor, considering the absence of any relief in Tortonian times. Near the study area, at the northeastern boundary of the Sierra de Gádor, the drainage network shows incision rates between 0.03 and 0.07 cm/yr over the last 245,000 years, confirming the regional uplift of the area [21]. Meanwhile, Pleistocene marine terraces develop in the littoral, providing information about recent uplift of the region. Near the Loma del Viento Fault a total of 16 marine terraces have been described [22], forming a staircase profile rising up to 82 m above the actual sea level. The uplift rate in the upthrown block of the Loma del Viento over the last 130 ka is 0.0046 cm/yr [22], but uplift varies throughout the region because of interaction with the development of the folds and faults. Other marine terrace ages obtained near the Balanegra Fault [23] provide uplift rates of 0.009-0.012 cm/yr. Table 2 shows the uplift rates computed by several authors in previous works. High Precision Levelling The quantification of recent vertical movements through the comparison of high precision levelling data is a widely used technique for the monitoring of active faults and seismic deformations, since it determines vertical changes with great accuracy. The method is based on the comparison of changes in height measured in the levelling line in different surveys. Comparison of the levelling data provides information about the amount of deformation accumulated around the main fractures within the area of interest in the time span considered. This study shows the results obtained for two levelling profiles across the Balanegra Fault (Almeria, Spain) in two field campaigns, in 2006 and 2007 (Figure 3). 
The Greenhouses Profile (Profile 1, Figure 3) is 800 metres long and stretches from benchmark 997 to benchmark 996, whereas the Old Guards Fortress Profile (Profile 2, Figure 3) has a length of 350 metres and runs from benchmark 993 to benchmark 991 (Figure 6). All benchmarks were embedded vertically in rock to guarantee their stability. The accuracy of levelling operations is dependent upon the quality of the instrument used and its adjustment [24]. A Leica DNA03 digital level, two 3-metre Invar bar-coded staffs, two sturdy steel-spiked base plates for a stable staff position on all types of terrain between benchmarks, and a tripod with fixed-length legs were used in the high-precision levelling campaigns. All data were saved automatically in the internal memory, so that they could easily be uploaded to a computer. The manufacturer guarantees a standard deviation of ±0.3 mm/km of double levelling, but there are many factors affecting the accuracy of readings in the development of a high-precision levelling profile. Therefore it is necessary to consider, on the one hand, the staff calibration and, on the other hand, the method of operation with the level. The measurements were taken automatically, free of the influence of the Earth's curvature, due to an option activated on the digital level. The collimation error was calculated by means of the Förstner method, and the level automatically corrects all the measurements as well. Each levelling run always started and finished with the same benchmark rod in order to avoid zero-point differences between rods. Every profile was observed back and forth. Allowable misclosures were computed as ±0.5 mm·√k, where k is the total length levelled in kilometres. The levelling method used was BFFB (backsight-foresight-foresight-backsight). The time interval between readings was kept as short as possible. Likewise, backsight and foresight distances were kept as close to one another as possible, to within some centimetres per set-up, meaning that the total difference throughout the line was less than 10 cm. A staff scale range of 0.5 m to 2.5 m was used. In order to prevent atmospheric refraction errors, the line of sight was at no point closer to the ground than 50 cm. Temperature changes during the observation campaigns were minor. The standard deviation σ_Δz (in mm) of the height difference between every two benchmarks is obtained by σ_Δz = σ_ISO-LEV·√k, where σ_ISO-LEV is the experimental standard deviation of a 1-km double-run levelling (ISO 17123-2, Levels) and k the total length levelled in kilometres. The standard deviation σ when two different campaigns are compared is computed by σ = √(σ_Δz,1² + σ_Δz,2²), i.e. √2·σ_Δz when both campaigns have the same precision. Two recent vertical movement profiles were constructed by comparing the benchmark height differences measured in the 2006 and 2007 campaigns, using the raw height differences between consecutive benchmarks obtained from the forward and the backward levelling data (Figure 7). In these profiles, the error bars are equal to two standard deviations with respect to the previous benchmark. As the error bars are larger than the vertical changes, the movements in this period are not significant in the two profiles.

GPS Network

For the purpose of quantifying deformation now occurring in the Sierra de Gádor and Balanegra Fault, the first non-permanent GPS network of this area was installed in 2006 (Figure 2). Ten sites form the geodetic network. Sites 900, 920, 930 and 940 are located in the Alpujarride Complex, in the Sierra de Gádor antiform area.
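As a quick numerical illustration of the error budget just described, the short script below applies the ±0.3 mm/km instrument figure, the √k propagation and the ±0.5·√k mm misclosure limit quoted above to the two profile lengths. The function names are ours, and the script is only a sketch of the bookkeeping, not the processing actually used in the campaigns.

```python
import math

SIGMA_ISO_LEV_MM = 0.3  # experimental std. dev. for 1 km of double-run levelling (mm)

def sigma_height_difference(k_km: float) -> float:
    """Standard deviation (mm) of a levelled height difference over k km."""
    return SIGMA_ISO_LEV_MM * math.sqrt(k_km)

def sigma_between_campaigns(k_km: float) -> float:
    """Standard deviation (mm) of the difference between two independent campaigns."""
    return math.sqrt(2.0) * sigma_height_difference(k_km)

def allowable_misclosure(k_km: float) -> float:
    """Allowable misclosure (mm) for a double-run line of length k km."""
    return 0.5 * math.sqrt(k_km)

for name, k in [("Greenhouses Profile", 0.80), ("Old Guards Fortress Profile", 0.35)]:
    print(f"{name} ({k:.2f} km): "
          f"sigma_dz = {sigma_height_difference(k):.2f} mm, "
          f"sigma(2006 vs 2007) = {sigma_between_campaigns(k):.2f} mm, "
          f"misclosure limit = +/- {allowable_misclosure(k):.2f} mm")
```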
Site 900 is atop the Sierra de Gádor mountain range, which corresponds to the antiform hinge, whereas sites 920 and 940 are on the southern limb. Sites 910 and 950 are located on high reliefs west of Sierra de Gádor. Between the two principal site groups, several NW-SE faults have been mapped and indicate recent activity, as Quaternary sediments are affected and fault scarps develop. Sites 960 and 990 are located near the coastline, in the hangingwall of the Balanegra Fault, whereas site 980 is in the footwall. Site 970 is in the Campo de Dalías area near the antiform hinge. The first survey was carried out in June 2006, tracking the GPS constellation throughout a five-day campaign, with 12-hour sessions over baselines with lengths from approximately 7 to 16 km. For data acquisition, nine dual-frequency carrier phase GPS GX1230 receivers with LEIAX1202 antennas were used. Each GPS site consists of a benchmark anchored to solid rock (limestones, conglomerates and metamorphic rocks). During measurements an aluminium tube of 0.5 m is screwed on, with the GPS antenna located at the top (Figure 8). Only site 980 is a classical pillar. Simultaneous recording was done for five days. The GPS data processing was done using the Bernese 5.0 software [25]. The basic observables are dual-frequency GPS carrier phase observations. They were preprocessed in a baseline-by-baseline mode using triple differences. The preprocessing related to receiver clock calibration, performed by code pseudoranges, and to detection and repair of cycle slips and removal of outliers, was carried out simultaneously for L1 and L2 data. During the final estimation, based on ionosphere-free double differences, a 10 degree elevation angle cutoff was used, and an elevation-dependent weighting scheme was applied. Precise ephemeris [26] and relative antenna phase centre variation files were used as recommended with ITRF2000. The a priori tropospheric refraction was modelled using the dry Niell [27] model; the remaining wet part was estimated hourly for all but one site, using the wet Niell mapping function [27] without a priori sigmas. Horizontal gradient parameters are estimated as well. Phase ambiguities which had been fixed to their integer values in a previous step are then introduced as known parameters. The strategy used to fix the ambiguities was QIF (Quasi Ionosphere-Free). To these results we applied an intermediate program to produce GPS baselines with their covariance matrices. Afterwards the NETGPS software [28] was used to obtain a minimally constrained network adjustment. NETGPS is a software package for analysing small and large networks established for high-precision engineering surveys and geodynamic deformation control. This program performs the adjustment of GPS networks from the approximate coordinates of the network sites and a set of baselines with the covariance matrices estimated from processing the GPS phase and pseudorange observations. NETGPS estimates the station coordinates, both Cartesian and ellipsoidal, and their covariance matrix, along with the external reliability according to the Baarda theory, by means of a classical least squares adjustment. Table 3 gives the parameters of the minimum constrained adjustment. χ²: theoretical χ² with "redundancy" degrees of freedom at the 99% confidence level; RMS: root mean square in mm; Smaj: semimajor axis of the 99% confidence ellipse in mm; Smin: semiminor axis at the 99% confidence level in mm; CI: 99% mean value of the confidence height interval in mm.
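To make the least-squares step concrete, the following is a minimal numerical sketch of a minimally constrained baseline adjustment in the spirit of what NETGPS performs. It is not the NETGPS algorithm or its data: the site numbers are reused only for readability, while the baseline vectors, sigmas and the simple diagonal weighting are invented for illustration (a real adjustment would use the full covariance matrices from the GPS processing and Baarda reliability testing).

```python
import numpy as np

# Hypothetical baselines (metres): (from_site, to_site, [dX, dY, dZ], sigma per component)
baselines = [
    ("900", "920", np.array([-4120.30, 6530.10,  -812.40]), 0.003),
    ("900", "930", np.array([ 2210.70, 9875.50, -1104.20]), 0.003),
    ("920", "930", np.array([ 6331.00, 3345.40,  -291.80]), 0.004),
]
fixed = {"900": np.zeros(3)}               # minimal constraint: one site held fixed
unknowns = ["920", "930"]
col = {s: 3 * i for i, s in enumerate(unknowns)}

A_blocks, reduced_obs, weights = [], [], []
for frm, to, d, sigma in baselines:
    A = np.zeros((3, 3 * len(unknowns)))
    rhs = d.astype(float).copy()           # observation equation: x_to - x_frm = d
    if to in col:
        A[:, col[to]:col[to] + 3] += np.eye(3)
    else:
        rhs -= fixed[to]
    if frm in col:
        A[:, col[frm]:col[frm] + 3] -= np.eye(3)
    else:
        rhs += fixed[frm]
    A_blocks.append(A)
    reduced_obs.append(rhs)
    weights.append(np.full(3, 1.0 / sigma ** 2))

A = np.vstack(A_blocks)
l = np.concatenate(reduced_obs)
W = np.diag(np.concatenate(weights))

N = A.T @ W @ A                            # normal equations
x_hat = np.linalg.solve(N, A.T @ W @ l)    # estimated coordinates relative to site 900
Q_xx = np.linalg.inv(N)                    # cofactor matrix of the unknowns
sigmas = np.sqrt(np.diag(Q_xx))

for s in unknowns:
    i = col[s]
    print(s, x_hat[i:i + 3].round(3), "+/-", sigmas[i:i + 3].round(4), "m")
```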
Table 4 shows the adjusted geodetic coordinates in ITRF2000 at epoch 2006.4. It is important to stress that station 940 was not included in the processing because of problems with data recording.

Discussion and Conclusions

There is substantial geological evidence of recent and active deformation by folding and faulting in the eastern Betic Cordillera, which currently undergoes the NNW-SSE convergence of the Eurasian and African plates while also being affected by orthogonal extension. The dense fracture network is responsible for the widespread seismic activity and likewise for its low to moderate magnitudes. If the fault length/magnitude relationships [29] and the maximum outcropping fault lengths are considered, earthquakes may reach up to magnitude 6. In any case, the presence of liquefaction structures provides evidence at least for the magnitude 5.5 events recorded [30]. In this framework, several faults like the Balanegra Fault have been shown to be the most active in view of their related seismicity and geological features. Altogether, they permit characterization of the activity of these low-rate tectonic structures in widespread deformation zones, which carry important related seismic hazards but are nonetheless more difficult to constrain than large, highly active faults. If we assume a relationship between the uplift rate and slip rate of the Balanegra Fault, it is possible to estimate the average recurrence interval. Firstly, a displacement of 2.7 cm per event of magnitude 5 is assumed for the Balanegra Fault as a function of the fault length/magnitude relationship [29]. The uplift rates obtained [23] near the Balanegra Fault range from 0.009 to 0.012 cm/yr, so the recurrence interval obtained is 225 to 300 years. This is coherent with historical data, which record major earthquakes near the Balanegra Fault with VIII to IX intensities taking place at time intervals of between 100 and 282 years. However, this is a rough approximation, because fault slips and recurrence times tend to be variable and do not follow a simple pattern. The establishment of a non-permanent GPS network and two levelling profiles already measured affords an initial opportunity to monitor the present deformation and the relationships between fault and fold development. It is important to elucidate the fault behaviour, establish the maximum amount of elastic deformation before earthquake triggering, and determine whether there is a creep slip component in addition to the seismic motion, in order to assess the recurrence interval more accurately. However, due to the low activity rate, a long time span is needed to obtain accurate results with the present-day resolution of geodetic techniques. If we consider the uplift rates obtained by previous authors (Table 2) in conjunction with the accuracy of the GPS method applied, a minimum time span of 1.5 to 21 years would be necessary to measure significant vertical displacement. For the Eurasian-African plate boundary as a whole, a regional convergence rate of 5 mm/yr [6] is detectable using the GPS method with a one-year interval between two surveys. However, if this rate is distributed along the broad deformation zone, a longer time span is needed to determine deformation in the precise study area, affected by one of the most prominent and active faults of the Cordillera.
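The recurrence estimate quoted above is simply the assumed per-event displacement divided by the uplift rate, taken here as a proxy for the long-term slip rate:

```latex
T \;\approx\; \frac{d_{\mathrm{event}}}{\dot{u}}
\;=\; \frac{2.7\ \mathrm{cm}}{0.012\ \mathrm{cm/yr}}
\ \text{to}\
\frac{2.7\ \mathrm{cm}}{0.009\ \mathrm{cm/yr}}
\;\approx\; 225\ \text{to}\ 300\ \mathrm{yr}.
```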
The effect of the horizontal ENE-WSW regional extension may be more easily determined, because the addition of several normal faults (cumulative heaves) could be more apparent and measurable than the effect of a single fault. Two levelling profile surveys have thus far been measured, over a time span of one year. The results do not show significant movements, at least in view of the method's accuracy. The relationships between moment magnitude and maximum displacement [29] indicate a 0.027 m slip for an earthquake of magnitude 5, which was the last important earthquake involving the Balanegra Fault. Such coseismic slip, or creep, can be detected with this levelling method. The absence of significant vertical movement between the two surveys would suggest a purely seismic character for the Balanegra Fault, evidencing an interseismic period of accumulation of elastic energy and no slip on the main fault surface. Yet its long-term activity is supported by geological features, and its seismic character is also constrained by the historical seismicity. In the future, levelling data acquired after a medium-magnitude earthquake would provide a good constraint for measuring the surface slip of this fault. Therefore, one of the conclusions to be underlined here is the seismic character of the Balanegra Fault. In the study of these low-activity tectonic structures, the advantages and disadvantages of the geodetic methods, GPS and levelling, become evident. The levelling method is a powerful tool with high accuracy, able to detect small vertical slips. It would appear to constitute a good method for monitoring small or localized geological structures like the Balanegra normal fault. The horizontal movements in this region of plate convergence are important, and they can be observed by GPS. Furthermore, in the case of large faults, both the elastic vertical and horizontal strain accumulated in the region between two main shocks can be detected using the GPS network. The study of low-rate active tectonic structures, such as the Balanegra Fault and the Sierra de Gador antiform, should be the main target of future hazard studies, because their seismogenic behaviour is capable of producing widely spaced, high-magnitude earthquakes. Their location right on the coastline also makes them decisive for constraining possible tsunamigenic activity. At present, however, a main concern regarding low-rate active folds and faults is that they require a very considerable time span to be studied, even with the highly accurate present-day geodetic techniques, a hindrance that must be overcome in the future for a better understanding of these regions and their related seismic hazard.
5,094.6
2010-04-08T00:00:00.000
[ "Environmental Science", "Geology" ]
Implementation and Visualization of Discrete Computational Geometry Using Database Managers

Database manager systems (DBMS) have traditionally been used for handling economic and alpha-numeric information related to business, government, education, health, urbanism, etc. Making 3D geometry and topology and their related algorithms the raw material and subject of DBMS is rare. Attempts have been made in the specific field of Geographic Information Systems (GIS). However, for historical and economic reasons, 3D geometry and topology were appended on top of 2D entities. The mechanism found is usually the addition of 3D information in the form of attributes, for example for vertical extrusions of 2D entities. The related algorithms usually handle the interrogations and constructs of GIS, even though some GIS systems contain 3D geometry and topology in the sense of Geometric Modelling. This manuscript presents the usage of DBMS graph capabilities for the approximation of hard-core computational geometry algorithms. This rarely used approach has the advantage of avoiding degenerate situations, at the price of lower precision. Tapping into the vast graph capabilities of DBMSs has the obvious advantage of large algorithm libraries, which in this case we apply to computational geometry. The lower geometric precision of graph-based DBMS algorithms does not hamper their application in problems of dimensional reduction, mesh parameterization and segmentation, etc. This manuscript is also attractive in that it illustrates the articulation of freeware display systems (e.g. JavaView) with DBMS for computational geometry applications.

Keywords: Geometry Databases, Topology Operations, Computational Geometry, Graph Databases, Triangular Manifolds

GLOSSARY
TIN: Triangulated Irregular Network, used in this work to express terrain topography.
DBMS: Database Manager System
GIS: Geographic Information System

I. INTRODUCTION

Algorithms and constructions in Computational Geometry are usually served by dedicated CAD or similar software or libraries. In the particular case of Piecewise Linear SHELLs, LOOPs, etc. (e.g. in triangular meshes), floating point computations are conducted to calculate discrete counterparts of smooth operators (normal vector, derivatives, geodesic curves, etc.). However, the use of database operations and interrogations on the 3D mesh graph to answer constructive or Boolean queries on the mesh is not frequent. As an example, geodesic curves between vertices v_p and v_q are usually calculated by using the fact that their acceleration is always parallel to the vector locally normal to the mesh; geodesic curves are rarely approximated by finding a shortest path between nodes v_p and v_q of the mesh graph. The usage of constructors and queries on the mesh graph has the advantage of avoiding degenerate cases (e.g. the geodesic hitting a mesh VERTEX or flowing along a triangle EDGE). The obvious disadvantage is that discrete graph operators are coarser approximations than the floating point computations on the already approximated PL mesh. However, these approximations are still good enough for mesh operations. As an example, the dimensionality reduction of IsoMap [20] can be used as a mesh parameterization tool, and the shortest-path approximation of geodesics in IsoMap is good enough to find a parameterization for quasi-developable meshes. Failures of IsoMap are not due to the coarse graph approximation of geodesic curves.
Instead, holes or the non-developable character of the mesh hinder IsoMap performance [15]. In addition to the attractiveness of discrete interrogations and constructions on a 3D mesh graph, there is the additional advantage that these operations are possible as database operations. For that purpose, the 3D mesh has to be structured as a database. Although this is an apparently straightforward task, the correct database declaration (e.g. in Oracle TM ) of separate Topology and Geometry of a 3D mesh is not a trivial one, due to a historical bias in favour of (a) 2D entities (node, polyline) and (b) the geographical TIN format for 3D meshes. The TIN format (i) only serves meshes which represent surfaces z = f(x, y), excluding all meshes that are the boundary ∂B of a solid body B, and (ii) assumes a connectivity (Topology) dictated by proximity of vertex projections on the XY plane and thus rejects the actual connectivity of the 3D mesh. In response to these opportunities, this manuscript shows the conjunction between database resources (e.g. Oracle TM Spatial, Graph and Point Cloud) and visualization tools (e.g. JavaView TM [11]) to implement discrete counterparts of computational geometry floating point constructs. The emphasis is set on the usage of, as far as possible, freeware tools (e.g. JavaView TM ).

A. Terminology

1) Topology and Geometry in Computational Geometry. In Computational Geometry, the term geometries refers to points p: R^0 → R^3, curves C: R^1 → R^3 and surfaces S: R^2 → R^3. The term topologies refers to the connectivity among the three basic entities VERTEX, EDGE, FACE, which are subsets of the geometries point, curve and surface, respectively. The connectivity among topologies is expressed in the terms BODY, LUMP, SHELL, FACE, LOOP, EDGE, VERTEX. The term Edge relates to an Oracle TM name, while the term EDGE relates to the classic solid modelling topology.

2) Topology and Geometry in Oracle TM . Oracle TM Spatial [3] calls the basic types point and polyline geometries. On the other hand, Oracle TM topologies may be either: (a) simple topological types (node, edge, face), (b) composite types built by using the simple topological types (node, edge, face), or (c) attributes of geometrical data (e.g. 'park' applied to a 2D polygonal region or 'street' applied to a polyline).

3) Topology and Geometry in Graphs. In graphs, the term topology refers to the connectivity among graph nodes. The term geometry is not directly expressed in graphs. A graph might have a geometric incarnation, but geometric information such as location, size, orientation, etc. is not usually considered inherent graph information. In this manuscript, we will in general refer to topology and geometry in the classic sense of Computational Geometry. In specific locations in which the context is explicit (e.g. Oracle TM ), a clear warning will be issued for the reader.

4) Manifold Condition. A set M ⊂ R^3 is a 2-manifold [18] if for each point p ∈ M there exists an ε ∈ R^+ such that, for every radius r with 0 < r < ε, B(p, r) ∩ M is homeomorphic to a 2-dimensional disk. Informally, this definition denotes a closed or watertight shell which presents no self-intersections. It indicates that all neighbourhoods of M are bijectively similar to, or deformable into, flat 0-thickness disks.
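As an illustration of how the watertight condition can be screened on a triangle mesh stored as plain tables, the sketch below checks that every EDGE is shared by exactly two FACEs. This is a necessary but not sufficient test for the 2-manifold definition above (it does not detect self-intersections or non-manifold vertex fans), and the function and variable names are ours:

```python
from collections import Counter

def closed_two_faces_per_edge(triangles):
    """Return True if every undirected edge is used by exactly two triangles."""
    edge_use = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_use[frozenset((u, v))] += 1
    return all(count == 2 for count in edge_use.values())

# The boundary of a tetrahedron is a closed (watertight) triangular SHELL.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(closed_two_faces_per_edge(tetra))        # True
print(closed_two_faces_per_edge(tetra[:-1]))   # False: removing a FACE leaves border EDGEs
```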
5) High Quality Triangular Mesh. This mesh contains quasi-equilateral triangles whose minimal EDGE length λ satisfies λ ≤ λ_min / 2, where λ_min is the smallest geometric detail of the original BODY boundary (skin of a solid) that its sample M is able to detect or preserve (Nyquist-Shannon Sampling Theorem [10], [16]).

This manuscript is organized as follows: section II examines the state of the art. Section III describes the methods that materialized our work, and section IV describes the results obtained. Section V concludes the manuscript and discusses possible new research directions.

II. STATE OF THE ART

A. Oracle TM Triangulated Irregular Network (TIN) Libraries. This section explores existing database libraries of Oracle TM (i.e. PL/SQL) in order to build a connection between 3D geometry viewers and spatial databases. It must be noticed that commercial software may be used as a viewer (e.g. MicroStation TM ), but it is usually expensive and has a different emphasis from our final goal. The freeware JavaView TM is used here as an alternative. The final goal is obviously to use the TIN representation to support 3D computational geometry constructs and related interrogation algorithms. The Oracle TM TIN library (SDO_TIN_PKG) was used to import the legal 2-manifold Cat model (Fig. 1(a)). A TIN data set is only able to import the geometry (i.e. vertex positions, Fig. 1(b)), and the result is an incorrect object representation (Fig. 1(c)), due to the fact that TIN does not accept importing connectivity information. Instead, TIN establishes an (x, y)-Delaunay vertex connectivity, derived from the proximity of vertices in the XY plane (i.e. ignoring the z coordinate). In addition, TIN only handles surfaces z = f(x, y) (i.e. having at most a unique z value for a given 2D location (x, y)). These limitations make TIN unsuitable for representing general triangular meshes. As the TIN representation collapses, any subsequent query or derived-object algorithm does not proceed. One concludes that, within a database environment, an alternative to the TIN formalism is required which allows for the explicit declaration of vertex connectivity. A possibility explored later in this manuscript is the Oracle TM Point Cloud (SDO_PC_PKG) data type. Fig. 1: (a) Cat model; (b) geometry alone; (c) wrong topology inferred from VERTEX closest neighbours according to distance projected onto the XY plane.

B. Oracle TM Database Topology and Geometry

In Oracle TM , a Node has an associated (x, y) pair. An Edge supports 2 or more Nodes. Edges do not include circular arcs. A Face is built from one or more Edge loops. Each Edge may contain the information of its incident Faces. There may be island entities (Edge loops, Edges, Nodes) inside a Face. As in a Wing Edge data structure [8], the concepts of NEXT LEFT, PREVIOUS LEFT, NEXT RIGHT, PREVIOUS RIGHT exist in Oracle TM . An indirect guarantee of the manifold condition exists in the fact that there may be at most one NEXT or PREVIOUS Edge for a given Edge, in both the right and left directions (e.g. there can be no 2 NEXT LEFT Edges for a given Edge). This is a 2D world, with a primigenial Face created (with id = -1), which is the universe in which all Nodes, Edges and Faces are contained. This topology has no geometry associated with it. A loop Edge is an Edge whose initial and final Node are the same. A loop Edge may have intermediate nodes. This means that a closed circuit may be represented by using one Edge.
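To make the adjacency bookkeeping concrete, the following is a minimal winged-edge style record of the kind the Oracle TM topology model mirrors with its NEXT/PREVIOUS LEFT and RIGHT references. The class layout and field names are ours and are not Oracle TM 's table schema; identifiers are plain integers standing in for database keys:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WingedEdge:
    edge_id: int
    start_vertex: int           # initial Node of the Edge
    end_vertex: int             # final Node of the Edge
    left_face: Optional[int]    # Face on the left when walking start -> end
    right_face: Optional[int]   # Face on the right
    next_left: Optional[int]    # at most one NEXT LEFT Edge (indirect manifold guarantee)
    prev_left: Optional[int]
    next_right: Optional[int]
    prev_right: Optional[int]

# Two consecutive Edges bounding the same left Face (id 10); references are edge_ids.
e1 = WingedEdge(1, 100, 101, left_face=10, right_face=11,
                next_left=2, prev_left=3, next_right=4, prev_right=5)
e2 = WingedEdge(2, 101, 102, left_face=10, right_face=12,
                next_left=3, prev_left=1, next_right=6, prev_right=7)
print(e1.next_left == e2.edge_id)   # True: e2 follows e1 along the LOOP of Face 10
```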
In Oracle TM , the name Topology Geometry corresponds to what Computer Aided Design and Manufacture call a feature. For example, a traffic roundabout [13] is a feature or topology geometry, made by a particular combination of Edges.

C. GIS Databases. Reference [17] examines the extension to 3D of the 2D geometry and topology primitives already present in GIS software and DBMS. An approach is proposed which amends the 2D polyline and XY-plane polygons to assume diverse z coordinates. The discussion also touches on the usage of CSG modelling applied to GIS entities. This reference does not discuss graph operators for 2D or 3D GIS. Notice that the triangle-based Boundary Representation (a) is general and sound in geometric and topological terms, (b) allows for the most sophisticated scientific simulations (since it is based on 0-, 1-, 2- and 3-simplexes, the basis of Finite Element Analysis) and (c) has direct application in visualization, additive manufacturing, calculation of mass properties, etc. Triangular B-Reps are naturally representable by graphs, thus accepting the tools present in the Graph libraries of DBMS. Reference [5] attempts the extension to 3D of stored 2D approximations of GIS objects by using object-relational databases. This reference proposes a relational DBMS equivalent of a flat-polygon-based Boundary Representation. However, the 3D model is not implemented. An example query of region proximity / intersection based on bounding boxes is presented. The long-term goal is to implement a 3D model of GIS systems, but it is not discussed in this reference. The reference does not use Graph operators, nor does it exploit the flexibility of 0-, 1-, 2-simplicial complexes for full 3D representation (i.e. quadrangular / triangular meshes). Reference [19] explores editing and visualization of 3D data in GIS databases. Visualization by commercial packages (e.g. MicroStation TM ) is possible, but it is economically expensive and computationally cumbersome. This reference cites VRML as a possible way of visualization. However, VRML does not solve the problem of editing, as it is usually a model-to-image unidirectional solution. The reference finds that 3D primitives are not supported by the DBMS. This limitation leads to exotic behaviour, such as the impossibility of handling vertical Faces (a problem already encountered in the TIN standard).

D. Graph-based Geometry Computations. Reference [6] is an example of continuous-domain computational algorithms which are approximated by a reasonable discrete graph counterpart. In this particular case, a curve is required in a continuous domain, on which a line integral of a penalty function is minimized; the anisotropic penalty function depends on the position and tangent vector of the curve. The authors show that such a problem has an approximate solution if the function and curve domains are represented by a graph. The optimal curve is the shortest path on the synthesized graph. Reference [20] presents a graph minimal-path variant for the calculation of geodesic curves in an m-dimensional manifold M embedded in R^n with n > m. This graph-based distance measured on the m-manifold is used to build an isometric map M → R^m. In the case n = 3 and m = 2, this dimensional reduction algorithm constitutes in fact a mesh parameterization [15]. Reference [9] surveys data structures used in GIS.
References [21] (Simplified Spatial Model, SSM) and [2] (Urban Data Model, UDM) appear as the earliest data structures that are topologically complete for expressing flat-face 3D BODYs and SHELLs. The representation uses the concepts of BODY, SHELL, FACE, LOOP, EDGE and VERTEX (possibly with other names). No mention is made of the usage of a DBMS to implement such structures. This survey expressly favours Object Oriented over Relational databases, and does not include a discussion of any concrete database. Reference [12] in the survey addresses the time stamps of the database to register the history of the sites and estates. However, no reference is made to actually using the database for the storage of full 3D topological / geometrical information.

E. Conclusions of Literature Review. The reviewed literature shows that, possibly for historic reasons, Geographic Information Systems consumed spatial services from databases. Therefore, the spatial topology and geometry were oriented to serve 2D versions of the features of a GIS system (roads, plots, parcels, rivers, lakes, etc.). Additional information (taxes, prices, people) was then attached to these features, and such information is naturally stored in a DBMS. Later, a drive towards 3D modelling in spatial databases emerged, but it collided with an already entrenched 2D structure. As an example, 3D topology cannot be stored in the TIN format, since TIN topology is dictated by 2D proximity. This absence of full 3D topological and geometrical representation in the DBMS domain is also visible in the absence of computational geometry algorithms actually mounted on DBMS-based geometry software. Our manuscript responds to this void, tapping into the DBMS Graph libraries, with the expectation that more CAD, CAM or Reverse Engineering software may actually be placed in a DBMS. According to the reviewed literature, we find that the following features of our manuscript are attractive for the reader: (a) replacement of the default TIN topology in Oracle TM Spatial by the actual mesh topology; this replacement is conducted here by using additional relation declarations of the Logical Network of the Oracle TM Spatial database. (b) Implementation of the Boundary Representation Wing-Edge data structure [8] by using the spatial database. (c) Replacement of floating-point operations on 3D meshes (e.g. geodesic curves) with discrete approximations (e.g. shortest path) of graph operations mounted on a spatial database. (d) Additional operations / constructs on the B-Rep graph mounted on spatial databases (e.g. minimal spanning tree). (e) Usage of JavaView TM for end-user interaction as either (1) a Master, which uses the services of a database server, or (2) a Visualization Server, subordinated to a Master database application.

A. Implementation. The present section reports the implementation and results of the module articulation as per Fig. 2. A standard database (e.g. Oracle TM ) appears as the foundation. On top of this database, an abstraction (Oracle TM Spatial) is built, in which the entities have geometric meaning. Likewise, the Graph abstraction is implemented (e.g. Oracle TM Graphs). Both Oracle TM Spatial and Oracle TM Graphs are provided by Oracle TM . On top of them, we built the abstraction of a Boundary Representation (B-Rep), which is a standard formalism for describing solids and 3D SHELLs. For the purposes of illustration, we have chosen to implement 2 different discrete approximations of geodesic curves on the 3D mesh.
We do not implement a floating point PL approach (briefly commented on in section I). Instead, we use the discrete approximation of geodesic curves made possible by the B-Rep on top of Oracle TM Spatial and Oracle TM Graphs.

B. Required Topology Constructors and Queries. The SW component structure shown in Fig. 2 supports the following functionality by a combination of the database, its functions and its additional SW layers. The interrogation initial_vertex(EDGE) → VERTEX returns the initial VERTEX of the given EDGE. The interrogation owner_edge(VERTEX) → EDGE returns any EDGE for which the given VERTEX is the head. Consistently, the interrogation incident_edges(VERTEX) → [EDGE] returns a CW- or CCW-ordered sequence of EDGEs incident to the given VERTEX. Fig. 3 represents 3 different B-Rep graph construction processes, with mixed success results, implemented for the present manuscript. A pre-condition is, of course, the existence of a 3D 2-manifold triangular mesh M. Although it is not a requisite for the 2-manifold condition, the mesh used is closed or watertight and thus contains no border. A necessary step to declare the 3D mesh M inside Oracle TM is the importation of the Geometry and Topology information.

C. B-Rep Database and Interface Architecture.

6) Construction of B-Rep Database and Graph. As established in the Literature Review section, declaring a Topology (i.e. connectivity) in the TIN standard is not possible, ending up in an illegal situation. The TIN option connects a VERTEX v with the neighbouring ones that are closest according to their separation from v projected on the XY plane. TIN assumes that M represents a surface z = f(x, y), with f: R^2 → R being a function. Therefore, TIN excludes, among others, meshes M which represent the boundary of a solid region. Typically, TIN allows only terrain-like digitisations. Fig. 1(a) shows the Cat data set. Fig. 1(b) shows the Geometry (point set) as imported and inserted in Oracle TM . Fig. 1(c) displays the defective Topology that Oracle TM TIN infers from the Geometry. Declaring an arbitrary Topology in Oracle TM implies ignoring the TIN standard and instead declaring the actual Topology via an Oracle TM Graph. The Geometry (point set) is still declared via Oracle TM Point Cloud. Our implementation compares two graphs: (a) with EDGEs having unitary weight, and (b) with EDGEs whose weight is their Euclidean length.

7) Graphical User Interface: JavaView TM . Fig. 4 displays an implementation of the Graphical User Interface, using JavaView TM . The user interacts with, and receives feedback from, the JavaView TM client. This client passes the queries to a Computational Geometry application. In this report, the particular case of the discrete approximation of geodesic curves on the mesh M is conducted. The application communicates the relevant interrogations to a Java/Database translator, which expresses the interrogations in terms of the Graph and Database operations. The database responds with the query / construct result, which is then translated into the data structures of the Computational Geometry application and into Java commands for human visualization.

D. Background on Mesh Geodesics. The purpose of this section is to give a short background on the Computational Geometry problem which serves as an example for our discrete approximation using the Oracle TM Graph and JavaView TM capabilities. In this case, we consider the problem of finding a discrete approximation for the geodesic curve that joins two points of a surface M.
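A minimal sketch of the graph-side computation described above, written in Python with networkx standing in for the Oracle TM Graph tables (the small mesh, its coordinates and the chosen vertex pair are toy stand-ins; the paper's own pipeline keeps this logic inside the DBMS). The script builds the EDGE graph of a triangular mesh with both weightings, approximates the geodesic between two VERTEXes by a shortest path under each, and also extracts the minimal cost spanning tree discussed later (Fig. 11):

```python
import math
import networkx as nx

# Toy stand-ins: vertex coordinates and triangles of a small mesh patch.
coords = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.2), 2: (0.5, 0.9, 0.1),
          3: (1.5, 0.9, 0.4), 4: (2.0, 0.0, 0.3)}
triangles = [(0, 1, 2), (1, 3, 2), (1, 4, 3)]

G = nx.Graph()
for a, b, c in triangles:
    for u, v in ((a, b), (b, c), (c, a)):
        G.add_edge(u, v, length=math.dist(coords[u], coords[v]), hops=1.0)

src, dst = 0, 4
# (a) minimal EDGE count (unitary weights) vs (b) minimal Euclidean length
path_hops = nx.shortest_path(G, src, dst, weight="hops")
path_length = nx.shortest_path(G, src, dst, weight="length")
geo_length = nx.shortest_path_length(G, src, dst, weight="length")
print("min-hop path:    ", path_hops)
print("min-length path: ", path_length, f"(approx. geodesic length {geo_length:.3f})")

# Minimal cost spanning tree over the same EDGE graph (cf. Fig. 11).
mst = nx.minimum_spanning_tree(G, weight="length")
print("MST edges:", sorted(mst.edges()))
```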
The geodesic curve remains on M and has the property of being straight within M, and thus uses a minimal length for joining these two points on M. For the purpose of completeness, we briefly discuss geodesic curves on a PL triangular mesh M; for interesting discussions of geodesics, see [34]. PL Geodesic: a PL path γ: [a, b] → M is a piecewise linear geodesic curve on M if and only if γ''(t) is parallel to the vector normal to M, n(γ(t)), for each t in the interval [a, b]. The geodesic curve on a surface joining two points of that surface is in general not unique (for both PL and smooth surfaces). There exist particular cases in which the geodesic hits a VERTEX or aligns itself with an EDGE. Notice that such cases do not appear if the geodesic is approximated by the shortest or lowest-cost paths between the two vertices in question. Fig. 8 shows two low-quality data set meshes M (Cat and Antelope). For each data set, 2 graph-geodesic approximations are shown: (a) by a path with minimal EDGE count, and (b) by a path with minimal distance. The two approximations render different paths. A reasonable expectation is that, as the triangulations improve in quality, the two paths will converge to one, and they will converge to the PL curve obtained with the floating point geodesic calculation. Fig. 9 shows a higher quality data set mesh M, called Vertebra. Fig. 9(a) shows illustrative paths from VERTEX v_0 to two VERTEXes v_1 and v_2. If the second vertex v_2 is chosen such that its geodesic distance to v_0 equals the distance from v_0 to v_1 (Eq. 3), one has the situation of Fig. 9(a). Notice that the colour texture in Fig. 9(a) encodes the geodesic distance measured from v_0. Notice also that the count E_M(γ) of minimal-path EDGEs does not represent a distance unless multiplied by the length of the EDGE (assumed constant in a High Quality Triangular Mesh, Table I). Homogeneous EDGE-length meshes are not a rarity, since tasks such as mesh segmentation and parameterization are very difficult with other meshes. Meshes with non-homogeneous EDGE length are convenient for data reduction, but are very counter-productive for mesh segmentation and parameterization. Fig. 10 displays several nearly iso-length paths (green colour) from an origin node in the Vertebra data set. As expected, the locus of the path end vertices approximates the geodesic iso-distance map of Fig. 9. Fig. 10 also shows iso-step paths, which are based on the number of edges traversed (assuming nearly constant edge length). As the triangulation improves in quality (small, constant-size EDGEs forming equilateral triangles), the iso-length and iso-step paths tend to be the same. The yellow strip represents a vertex set reached by paths which start from vertex v_0 and whose lengths lie in a given interval. Notice the (expected) resemblance with the iso-geodesic lengths of Fig. 9(a).

G. Minimal Cost Spanning Tree on M. Fig. 11 presents the Minimal Cost Spanning Tree for the Vertebra data set. This result is also achieved using the Oracle TM Point Cloud type (for the geometry data), supplemented with the Oracle TM Graph tables (for the topology or connectivity) and algorithm. The display is executed in JavaView TM .

V. CONCLUSIONS AND FUTURE WORK

This manuscript presented an articulation of database types and algorithms for the purpose of treating computational geometry and topology like other usual database information (client, payroll, assets, debt, address, etc.).
This work has relevance given that spatial databases, mostly related to GIS, historically deal with 2D entities (parcel, road, block, highway, bay) and incorporate 3D information only in an ad hoc manner. The manuscript remarks on the usefulness of database operators (e.g. graphs), given the fact that many interrogations and constructs in 3D Computational Geometry may be defined or approximated in terms of graph operations. A specific example is presented of the geodesic field on a manifold triangular mesh M, which can be approximated by shortest-path interrogations in a high-quality mesh M. High-quality meshes are compulsory for mesh segmentation and parameterization. Thus, the application pre-conditions for graph operations do not seem overly demanding. This manuscript presents the added value of tool articulation by including the application of JavaView TM to serve as an interface with the DBMS and with the Graphical User Interface. The fact that JavaView TM is open software, and that database managers may also be found as open software, indicates the possibility of a cost-effective articulation of open tools for large data set computational geometry.
5,616.8
2019-06-30T00:00:00.000
[ "Computer Science" ]
DIVERSITY OF ANGIOSPERM CLIMBER SPECIES IN POINT CALIMERE WILDLIFE AND BIRD SANCTUARY, TAMIL NADU

Climbers are currently understood to have a range of important ecological functions in forest dynamics. Climbers are already recognized as an important group for tropical biodiversity, playing a key role in ecosystem-level processes and providing resources for pollinators and dispersers. The present study is an attempt to document different climber species and their uses in Point Calimere Wildlife and Bird Sanctuary, Tamil Nadu, India. The present study recorded 53 herbaceous climbers and 21 lianas from all the forest types of Point Calimere Sanctuary, covering 25 families. Considering all climbers and lianas, 40 species are stem twiners, 2 species are branch twiners, 4 are spiny climbers, 19 species are tendril climbers and 8 species are hook climbers. Most of the lianas are distributed in scrub forests and

Methodology:- Field trips were carried out across the whole area of the Point Calimere Wildlife and Bird Sanctuary in various seasons. The climbers were collected and identified with the help of floras (Gamble and Fischer, 1915-1936; Mathew, 1981-1988; Daniel and Umamaheswari, 2001). Details such as name (family, plant name and local name), locality, date of collection, habit and habitat, uses, distribution and salient features such as association were recorded in an elaborate field book. The voucher specimens are housed in the Medicinal Plants Garden, CCRS, Mettur, Salem, Tamil Nadu. Information on nomenclature and family was taken from the online botanical database Tropicos (2017). For the uses and common names, Useful Plants of India (1986) and Yognarasiman (2000) were referred to.

Observation And Discussion:- The present study recorded 53 herbaceous climbers and 21 lianas from all the forest types of Point Calimere Sanctuary, covering 25 families. In dicotyledons, there are 23 families containing 60 genera and 71 species. In monocotyledons, there are 2 families containing 3 genera and 3 species. Considering all climbers (C) and lianas (L), 40 species are stem twiners, 2 species are branch twiners, 4 are spiny climbers, 19 species are tendril climbers and 8 species are hook climbers (Table 1). In Cardiospermum, the petiole is modified into tendrils, whereas in Cissus and Cyphostemma the axillary tips are modified into tendrils. In Passiflora, the branch and peduncle are modified into tendrils. In Strychnos minor, modified branchlet ends act as tendrils. Thorns act as hooks to climb over the support in Ziziphus oenoplia and Scutia myrtina. The inflorescence axis is modified into a hook in Aristolochia indica. Hugonia mystax is a straggling climber with spiral hooks. Some geophytic plants, such as Dolichos trilobus, have a root with a fascicle of 3-6 tuberous rootlets. Fleshy tubers are present in Trichosanthes tricuspidata, Cyphostemma setosum, Asparagus racemosus and Gloriosa superba. Two villages are located inside the study area: Kodikkadu in the north and Kodikkarai near the angular extreme of Point Calimere, connected by road. Jasminum sambac and Jasminum officinale are planted in the household gardens. Trichosanthes tricuspidata is common in well-drained soil. Citrullus colocynthis, Mucuna pruriens and Caesalpinia bonduc are common in the coastal vegetation. Ctenolepis garcinii, Rhynchosia minima, Lablab purpureus, Cardiospermum canescens and Ipomoea obscura form mats over other vegetation during the monsoon period.
Exotics

Biodiversity loss caused by invasive species may soon surpass the damage done by habitat destruction and fragmentation. Biological invasions are an important component of human-caused global environmental change. Invasive alien species are now a major focus of global conservation concern. Decisions need to be made on whether the benefits derived from the invasive spread of an alien species outweigh the reduced value of ecosystem services (Sudhakar Reddy et al., 2008). The present study recorded invasive species such as Ipomoea obscura, Ipomoea pes-tigridis, Clitoria ternatea and Passiflora foetida.

Economic importance

The people who dwell in the Point Calimere jungles are presently called "Seenthil Valayars". It is said that the name arose because these people are known to consume the stems of the climber Seenthil (Tinospora cordifolia). Mucuna pruriens seeds are edible after processing by the native forest dwellers. Lablab purpureus, Momordica charantia, Momordica dioica and Canavalia virosa fruits are used as vegetables. Dioscorea pentaphylla tubers are edible. The young leaves of Basella alba, Ipomoea obscura and Ipomoea marginata are used as spinach. Ziziphus oenoplia fruits are edible. India has about 265 climber species, of which 125 are woody and the rest are herbaceous. About 100 species are medicinal in nature (Chaudhuri, 2007). Climbers are widely used in traditional systems of medicine (Eilu and Bukenya-Ziraba, 2004). In the study area, 53 medicinal climbers are recorded, including Aganosma cymosa (useful in diseases of paraplegia, sciatica and neuralgia), Aristolochia indica (root and stem used as an antidote and anti-inflammatory), Azima tetracantha (juice of the leaves used to relieve cough and phthisis) and Cardiospermum. The congregation of birds at the Point Calimere Sanctuary depends on the forest canopy. The canopy of the scrub jungle is significantly matted by the lianas, and the lianas provide habitat and nesting sites for the migratory birds, which skirt Point Calimere on their route towards Sri Lanka. The loss of green cover would certainly and drastically damage bird life. Though it is a protected area, chemical companies and small-scale shrimp farms around the wetland have started to pose a threat to the biodiversity and ecosystem of the sanctuary. Strict environmental regulations should be imposed, and salt pan and other aquaculture practices, as well as unregulated economic activities around the sanctuary, should be prohibited. Such effective action will help maintain species diversity and composition and provide suitable breeding sites in the sanctuary.
1,257.2
2020-11-30T00:00:00.000
[ "Environmental Science", "Biology" ]
Transcranial Magnetic Stimulation as a Therapeutic Option for Neurologic and Psychiatric Illnesses In recent years, transcranial magnetic stimulation has become an area of interest in the field of neurosciences due to its ability to non-invasively induce sufficient electric current to depolarize superficial axons and networks in the cortex and can be used to explore brain functioning. Evidence shows that transcranial magnetic stimulation could be used as a diagnostic and therapeutic tool for various neurological and psychiatric illnesses. The aim of this review is to introduce the basics of this technology to the readers and to bring together an overview of some of its clinical applications investigated thus far. Introduction And Background The first electromagnetic experiment on conscious humans was performed by Merton and Morton in 1980s in which they electrically stimulated the motor cortex through the scalp using transcranial electrical stimulation (TES) [1]. An electric field was generated by placing the electrodes over the head that were connected to a high-capacity condenser and charged up to 2,000 volts. Unfortunately, through TES, only a fraction of current passed through the scalp while the rest spread between the electrodes that resulted in a painful contraction of the scalp muscles and caused significant pain and discomfort. A few years later, Barker and colleagues proposed a method of transcranial magnetic stimulation (TMS) that could replace TES. TMS used a magnetic field via a wire coil [2]. It was based on the principles of magnetic field induction as described by Faraday's law. In this technique, the investigators placed a coil near the scalp. A brief and rapidly changing high-intensity electrical current was then passed through it, which generated a powerful magnetic field capable of producing an electric current across the brain tissue. The induced current then depolarized nearby neural networks located beneath the coil and produced neurophysiological and behavioral effects [3]. Motor units are recruited from the smallest to the largest, according to the Henneman size principle [7]. In 1987, a study demonstrated that a motor unit recruited during minimal voluntary contraction was also that recruited by TMS of the motor cortex; it was also noted that the motor unit was recruited in the same order like that with voluntary contraction [8]. To determine motor cortical and corticospinal excitability, TMS uses MT and MEP, which are based on various physiological mechanisms. MT depends on the excitability of cortico-cortical axons and excitatory pathways to corticospinal neurons. Agents that block voltage-gated sodium channels and those that act on ionotropic non-N-methyl D-aspartate (non-NMDA) glutamate receptors, such as ketamine, affect the MT [9]. Neurotransmitters, such as gammaaminobutyric acid (GABA), acetylcholine, dopamine, serotonin, or norepinephrine, do not affect MT. MEP can also be blocked by agents that block sodium channels, such as volatile anesthetics [10]. The sodium channel inactivation leads to decreased action potential firing that, in turn, reduces calcium entry at the presynaptic terminal and finally synaptic transmission resulting in reduced MEP [11]. MEP amplitude was also found to be affected by the modulators of inhibitory and excitatory transmission in neuronal networks. The neurotransmitter modulators for GABA-A receptors depressed MEP while dopamine agonists and norepinephrine agonists raised the MEP. 
Hence, there is a difference between the physiologies of MT and MEP [12]. In 1990, Burke et al. compared descending volleys evoked by TES and TMS in the corticospinal tracts of animal models [13]. The corticospinal volleys evoked by TES recruited motor units in a similar pattern as those evoked in animals by electrical stimulation of the motor cortex that concluded the fact that TES stimulates cortical neurons in a plane vertical to the surface of the brain [14], whereas the recruitment pattern of TMS differed from that of TES, as demonstrated in the Tofts model. TMS preferentially activated cortical interneurons transmitting excitatory inputs to the pyramidal neurons. The electrical field generated in the brain by TMS depends on many physical and biological parameters, such as the frequency, intensity, and pattern of stimulation, the magnetic pulse waveform, orientation of the current lines evoked in the brain, orientation and shape of the coil, and excitable neural elements. TMS can deliver a monophasic pulse or biphasic pulses as discussed later in the article [4]. Apart from affecting the electrical activity of the brain, it is postulated that TMS might also affect the blood-brain barrier permeability [15], neuronal metabolism [16], and protein signaling and transcription [17]. Moreover, TMS might affect the brain via non-electromagnetic pathways as well (for example, the stimuli from sounds from the machine, pressure stimuli from overlying coils placed on the scalp, or even secondary afferent effects from peripheral nerve activation in the vicinity) [18]. This will require further research to determine its accuracy and effectivity and is beyond the scope of this article. Types of TMS pulses There are three main types of pulses ( Table 1). Types of pulse Definition Effect Single pulse Discharge of single pulses separated by a time interval of at least 4 sec to 8 sec, resulting in an individual effect constitutes single pulse-TMS. The induced effects can be quantified in the form of 1) motor evoked activity in primary motor areas; 2) evoked visual activity and visual precepts, such as phosphenes; 3) short-acting disturbances in cognitive tasks, such as changes in performance. Double pulse Also known as paired pulse-TMS, it consists of two pulses, i.e., discharge of a CS followed by a TS that are separated by an ISI. By using subthreshold CS and suprathreshold TS and using varying ISI from short (< 5 ms), intermediate (7 -15 ms), and long (50 -200 ms), the effect can be modulated from intracortical facilitation to inhibition. They can also be used to determine the presence or absence of connectivity and estimation of conduction time between two distinct cerebral sites Repetitive pulse Repetitive pulse-TMS consist of delivering any combination of more than two pulses with in-between time interval of 2 sec or less to generate different effect than that produced from the isolated pulse. It involves delivery of short bursts or trains of 3 -4 pulses at high frequency and of long periods of stimulation at a fixed frequency with or without interruption by stimulation free intervals in between charges. The effects can range from 1) online TMS effect: This results from direct and measurable interference with patterns of ongoing neuronal discharge at the time of stimulation; 2) offline TMS effect: This constitutes the lasting impact on the cerebral process that results from a previously administered pattern of repetitive stimulation. 
Types of Coils Depending upon their differences in size, dimensions, and electric field characteristics, various types of TMS coils have been developed, ranging from simple circular coils to the more complex figure-8 coil, double cone coil, the Hesed coil, the slinky coil, the cloverleaf coil, the 3-D differential coil, the C-core coil, the air-cooled coil, and the circular crown coil. General Characteristics of Coils During selection of TMS coils, the two major areas of interest related to electric field characteristics are the (a) focality and (b) depth of brain stimulation. Coils with larger dimensions have deeper penetration due to slower electric field attenuation but less focality, whereas smaller coils have better focality than larger coils in terms of activated brain volume. However, this advantage diminishes with increasing depth of the target region, such as the cortical lower limb region. Electric Field Characteristics of the Individual Coil Circular coil: This non-focal, ring-shaped coil induces an electric field that stimulates a broader region of the brain present beneath the coil perimeter [19]. Cloverleaf coil: This coil consists of four coils of nearly circular windings and better stimulates long fibers as compared to figure-8 coils [20]. Slinky coil: This coil is composed of multiple circular or rectangular loop windings joined together at one edge while fanned out from another to form a half toroid. It can achieve larger field magnitude and better focality at the cortex near the coil center [19][20]. Three-dimensional (3-D) differential coil: This coil consists of a small figure-8 coil with a third loop present perpendicular to its center and surrounded by two additional loops to limit the area of stimulation. It has the advantage of more focal stimulation when compared to figure-8 and slinky coils [19]. Double cone coil: This coil consists of two large adjacent circular windings fixed at an angle to each other. Compared to the figure-8 coil, the stimulation produced has deeper penetration but a less focal electric field [20]. Hesed (H) coil: With more complex winding patterns and larger dimensions compared to conventional TMS coils, the H coils can stimulate deeper brain regions more effectively because of their slower electric field decay at depth, but at the expense of decreased focality. High-permeability ferromagnetic cores have been introduced in recent designs with the hope of improving the focality, penetration, and electric field efficiency of these coils [19][20]. Other coil designs: The C-core coil, circular crown coil, the large halo coil, and MRI gradient coil designs with larger dimensions than conventional and H coils have also been under investigation for deeper TMS, with the expectation of slower electric field decay at the expense of reduced focality [19]. Obsessive-compulsive Disorder (OCD) OCD affects about 1% to 2% of the Western population. The obsessions are characterized by recurring thoughts that are difficult to prevent or control and could lead to irresistible and recurring behaviors, also known as compulsions. The compulsions are actions that are taken to negate or resolve the disruptive, anxiety-provoking, and disturbing thoughts. A familiar example is an individual who fears "contamination" and feels compelled to cleanse their hands with disinfectants repeatedly. The unwanted obsessive thoughts frequently disrupt personal relationships, work, sleep, and almost all other aspects of daily life. 
Up to 60% of patients with OCD do not respond to first-line treatment. In a meta-analysis of 20 single-or double-blind, randomized studies, researchers in China examined the efficacy of repetitive transcranial magnetic stimulation (rTMS) in 791 patients with OCD [21]. Overall, rTMS showed a large effect size on OCD symptoms compared with the sham condition immediately after treatment lasting for two to 12 weeks. High-(HF) and low-frequency (LF) rTMS had approximately equal efficacy, as did the treatment that targeted the supplementary motor area and the left, right, and bilateral dorsolateral prefrontal cortex. Larger effects were associated with not having comorbid major depression, lack of resistance to other therapies, and receiving 100% resting motor threshold intensity rTMS. Tilted coils produced larger effects than sham coils, indicating that sham coils have a larger placebo effect and possibly more effective blinding [21]. However, these therapeutic effects are short-term, and further metaanalyses are yet to show long-term therapeutic effects. Major Depression (MD) Major depression is diagnosed in the presence of five or more of the following symptoms for at least two consecutive weeks: depressed mood, loss of interest or pleasure in most or all activities, insomnia or hypersomnia, significant weight loss or weight gain or decrease or increase in appetite, psychomotor retardation or agitation, fatigue or low energy, decreased ability to concentrate or make decisions, thoughts of worthlessness, excessive or inappropriate guilt, and thoughts of death or suicidal ideation or a suicide attempt [21]. Many patients with unipolar MD do not respond to standard treatment with pharmacotherapy and psychotherapy [22]. Although electroconvulsive therapy (ECT) is more efficacious than repetitive TMS, patients may prefer repetitive TMS because it is better tolerated, and unlike ECT, it does not require general anesthesia and induction of seizures. In a study conducted by Eranti et al. in 2007 to compare the efficacy of ECT with rTMS, 46 patients with major depression referred for ECT were randomly assigned to either a 15-day course of rTMS of the left dorsolateral prefrontal cortex (N = 24) or to a standard course of ECT (N = 22) [23]. Results were calculated based on the score on the 17-item Hamilton Depression Rating Scale (HAM-D) and the percentage of patients with remissions (Hamilton score of ≤ 8) at the end of treatment and six months follow-up, which were the primary outcomes of the study. Secondary outcomes included mood self-ratings on the Beck Depression Inventory-II and visual analog mood scales, Brief Psychiatric Rating Scale (BPRS) score, self-reported and observerrated cognitive changes. HAM-D scores were calculated, also, at the end of the treatment and the six-month follow-up. It was found that the HAM-D scores at the end of treatment were significantly lower for ECT, with 13 patients (59.1%) achieving remission in the ECT group, while there were four (16.7%) in the rTMS group. However, at the six-month follow-up, the HAM-D scores did not differ between groups that were either treated with ECT or rTMS. Beck scale, visual analog mood scale, and BPRS scores, however, were lower for ECT at the end of treatment and remained lower after six months. Self-and observer-rated cognitive measures were similar in the two groups. Hence, rTMS was not as effective as ECT, and ECT was substantially more useful for the short-term treatment of depression but nearly equal in the long-term [23]. 
Meta-analyses have shown that HF-rTMS has antidepressant properties when compared with sham rTMS. Data from 29 RCTs were included, totaling 1,371 subjects with MD. Following approximately 13 sessions, 29.3% and 18.6% of subjects who received HF-rTMS were classified as responders and remitters, respectively. The results were compared with 10.4% and 5% of those who received sham rTMS (pooled odds ratio (OR): 3.3, p < 0.0001, with numbers needed to treat (NNT) of 6 and 8, respectively). Furthermore, it was found that HF-rTMS was equally effective as an augmentation strategy or as monotherapy for MD and when used among patients with primary unipolar MD or mixed with unipolar and bipolar MD [24]. Bipolar Disorder Bipolar disorder is a mood disorder that is characterized by episodes of mania, hypomania, and major depression [25]. The subtypes of bipolar disorder include bipolar type I and type II. Patients with bipolar I disorder experience manic episodes and nearly always experience major depressive and hypomanic episodes. Bipolar II disorder is marked by at least one hypomanic episode, at least one major depressive episode, and the absence of manic episodes. Additional information about the diagnosis of bipolar disorder is discussed separately. Many patients with acute bipolar major depression or mania do not respond to pharmacotherapy. Also, maintenance treatment with medications and psychotherapy often fails to prevent recurrent mood episodes. Patients unresponsive to standard treatment may be candidates for neuromodulation therapies [22]. In 1998, a study was done in which 16 patients completed a 14-day double-blind, controlled trial of right versus left prefrontal transcranial magnetic stimulation at 20 Hz (two-second duration per train, 20 trains per day for 10 treatment days) [26]. In 2014, a group of European experts was commissioned to establish guidelines on the therapeutic use of rTMS through the collection of data published up until March 2014. The data included the use of rTMS in various neurological and psychiatric conditions, such as movement disorders, stroke, multiple sclerosis, epilepsy, depression, anxiety disorders, obsessivecompulsive disorder, schizophrenia, and addiction, among many others. Despite some unavoidable inhomogeneities, the collected evidence was sufficient to accept with level A (definite efficacy) the analgesic effect of HF-rTMS on the primary motor cortex (M1) and the antidepressant effect of HF-rTMS on the left dorsolateral prefrontal cortex (DLPFC). A Level B recommendation (probable efficacy) was proposed for the antidepressant effect of LF-rTMS of the right DLPFC, HF-rTMS of the left DLPFC for the negative symptoms of schizophrenia, and LF-rTMS of contralesional M1 in chronic motor stroke. However, more research is needed to figure out how to optimize rTMS protocols and techniques that would give it relevance in routine clinical practice. Also, professionals carrying out rTMS protocols should undergo rigorous training that would specialize them into proper handling of the instruments used and would maximize the chances of success. Under these conditions, the therapeutic use of rTMS should be able to develop in the coming years [27]. 
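As a rough illustration of how the response and remission rates quoted above translate into the reported numbers needed to treat (NNT), the short sketch below redoes the arithmetic in Python. The rounding convention used here (rounding the NNT up to the next whole patient) is an assumption made for illustration and is not stated in the meta-analysis [24].

```python
import math

def nnt(active_rate: float, sham_rate: float) -> int:
    """Number needed to treat: 1 / absolute risk reduction, rounded up (assumed convention)."""
    arr = active_rate - sham_rate          # absolute risk reduction
    return math.ceil(1.0 / arr)

# Rates quoted above for HF-rTMS versus sham in major depression:
response_nnt = nnt(0.293, 0.104)   # 29.3% responders vs 10.4% with sham
remission_nnt = nnt(0.186, 0.050)  # 18.6% remitters vs 5% with sham

print(f"NNT for response:  {response_nnt}")   # expected: 6
print(f"NNT for remission: {remission_nnt}")  # expected: 8
```

With the quoted rates, this reproduces the NNT values of 6 for response and 8 for remission reported above.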
Schizophrenia Schizophrenia is diagnosed on the basis of its characteristic symptoms (positive symptoms, i.e., delusions, hallucinations, disorganized speech, or behavior, and/or negative symptoms, i.e., diminished emotional expression and lack of motivation) coupled with social and/or occupational dysfunction for at least six months in the absence of another diagnosis that would explain the presentation. Schizophrenia is considered to be the most debilitating of psychiatric illnesses, psychologically, socially, and financially. Although pharmacotherapy, especially antipsychotics, remains the mainstay in the acute treatment and maintenance of schizophrenia, an alternate treatment modality (such as TMS) is needed due to the limitations of antipsychotic agents. The literature of the last 15 years supports the view that TMS is a safe and efficacious means of treating the positive and negative symptoms of schizophrenia. The most notable body of evidence supports the reduction of auditory hallucinations by targeting LF-TMS stimuli to Wernicke's area in the left temporoparietal cortex [28]. This conclusion was not supported unanimously by all studies; some studies also showed negative results for the efficacy of TMS. Almost a quarter of patients with schizophrenia present with resistant auditory verbal hallucinations (AVHs), a phenomenon that may relate to activation of brain areas underlying speech perception. In 2005, Poulet et al. conducted a study comprising 10 right-handed schizophrenia patients with resistant AVH who received five days of active rTMS and five days of sham rTMS (2,000 stimulations per day at 90% of motor threshold) over the left temporoparietal cortex in a double-blind crossover design. AVHs were robustly improved (56%) by five days of active rTMS, whereas no variation was observed after sham treatments. Seven patients were responders to active treatment, five of whom maintained improvement for at least two months [29]. In 2007, Prikyl et al. conducted a study which concluded that augmentation with rTMS enabled patients to experience a significant decrease in the severity of negative symptoms. During the real rTMS treatment, a statistically significant reduction of negative symptoms was found: approximately a 29% reduction on the negative symptom subscale of the Positive and Negative Syndrome Scale (PANSS) and a 50% reduction on the Scale for the Assessment of Negative Symptoms (SANS) [30]. The effect of rTMS is noted to be greater when it is delivered at a stimulation frequency of 10 Hz and 110% of the motor threshold and targets the left DLPFC. Moreover, positive results are observed with a longer duration of treatment (at least three consecutive weeks) and a shorter duration of disease [31]. Although TMS showed promising results in a few studies, large-scale studies are required to establish its efficacy in patients with schizophrenia. Autism Spectrum Disorders Autism spectrum disorder (ASD) is a biologically based neuro-developmental disorder which is characterized by deficits in social communication and interaction and restricted, repetitive patterns of behavior, interests, and activities [25]. ASD is diagnosed clinically, based on the presence of key behavioral symptoms, but the underlying brain pathophysiology behind these symptoms is unknown. The treatment is mainly supportive, with a focus on early intensive behavioral interventions [32]. 
Single pulse TMS: In ASD, single pulse TMS has been used to probe baseline levels of corticospinal excitability and the modulation of corticospinal excitability in response to visually presented stimuli. Six independent studies have shown no difference in either motor threshold (the lowest intensity of stimulation required to induce an MEP) or the size of the MEP in response to a suprathreshold pulse of TMS between individuals with ASD and neurotypical individuals. The studies suggest that baseline M1 excitability is not affected in ASD [33]. LF-rTMS: In 2014, Sokhadze et al. studied whether 1 Hz rTMS improves electrocortical functional measures of information processing and behavioral responses in autism [34]. Post-TMS evaluations showed decreased irritability and hyperactivity on the Aberrant Behavior Checklist (ABC) and decreased stereotypic behaviors on the Repetitive Behavior Scale (RBS-R). Following the rTMS course, they found decreased amplitude and prolonged latency in the frontal and frontocentral N100, N200, and P300 (P3a) event-related potentials (ERPs), which are small voltages generated in the brain in response to specific events or stimuli; N100 to N300 are negative deflections after presentation of a stimulus, whereas P100 and P300 are positive wave deflections after the stimulus [35]. TMS resulted in an increase of P2d (P2a to targets minus P2a to non-targets) amplitude. ERP changes, along with increased centroparietal P100 and P300 (P3b) to targets, were indicative of more efficient processing of information post-TMS treatment. Enhanced information processing was also reflected in a lower error rate [34]. HF-rTMS: The only study in the published literature that included participants with intellectual disability was conducted by Paerai et al. in 2013, where high-frequency 8 Hz rTMS was applied to the left premotor cortex in children with ASD and intellectual disability [33]. The report, based on four studies with children with low-functioning autism, aimed at evaluating the effects of rTMS delivered on the left and right premotor cortices (PrMC) on eye-hand integration tasks and at defining the long-lasting effects of HF-rTMS. The study investigated the efficacy of high-frequency rTMS by comparing three kinds of treatments: HF-rTMS, a traditional eye-hand integration training, and both treatments combined. Results showed an increase in eye-hand integration performance only after HF-rTMS was delivered on the left PrMC (mean increased performance = +4.11; standard deviation (SD) = 2.61; in the remaining conditions, mean increased performance ranged from −0.11 to +1). It also showed that the effects of HF-rTMS in increasing eye-hand integration performance persisted for up to one hour after the end of the left-PrMC stimulation [33]. Based on these preliminary findings, further evaluations of the usefulness of HF-rTMS in the rehabilitation of children with autism are strongly recommended. Conversion Disorder Conversion disorder is described as neurological signs and symptoms, such as abnormal movements, seizures, or sensory symptoms, without an underlying neurological or medical cause. Symptoms usually follow a psychosocial or traumatic life experience. Subconscious psychological factors are judged to be associated with the symptoms because of a temporal relation between a psychosocial stressor and initiation or exacerbation of a symptom [25]. A variety of treatment options have been tried to treat conversion disorder. 
TMS as a therapy for the reversal of conversion disorder symptoms is now being applied to various patients with a promising future. For instance, in one experiment, four patients were treated over a period of five to 12 weeks with rTMS applied to the contralateral motor cortex [36]. In one patient, motor function was completely restored; two patients experienced a marked improvement correlating with rTMS treatment. Research in conversion disorder has been quite neglected. It is partly attributed to the fact that not many such patients report to the doctors. A larger sample size study is needed to understand this disorder further and experiment with various therapies to treat successfully. rTMS, in particular, is promising in that it has been successful as a treatment modality in other psychiatric illnesses, like depression. Eating Disorders The use of transcranial magnetic stimulation can aid in decreasing some of the behaviors and symptoms that are seen with bulimia, but for a short period. Various brain stimulation techniques have been studied for their effectiveness with eating disorders, and transcranial magnetic stimulation has had success. Bulimia nervosa is a psychiatric disorder which is characterized by cyclical binging overeating episodes characterized by a subjective loss of control, followed by extreme or inappropriate compensatory behaviors to expunge calories taken in during the binge episodes. These patients use laxatives, exhibit self-induced purging behavior, or exercise excessively. Anorexia may also include binging and purging, but its hallmark feature is the active maintenance of dangerously low body weight for age, sex, and developmental trajectory. Anorexia is coupled with a strong drive for thinness and intense fear of weight gain. Both bulimia and anorexia can include significant body image distortions [25]. Currently, family-based therapy for anorexia and cognitive behavioral therapy for bulimia is used along with anti-depressants. However, relapse rates are high in anorexia and bulimia. Because of the serious impact of eating disorders and incomplete efficacy of existing interventions, new treatment modalities are of significant interest and transcranial magnetic stimulation is no exception. Studies show that rTMS methods yield a statistically significant effect on eating disorders, though the source of such methodological variability remains uncertain (i.e., site of stimulation, stimulation technique, or other unknown factors) [37]. Further findings from a meta-analysis revealed that TMS techniques did not yield a significant effect in a singlesession format for actual food consumption [38]. However, the relatively limited number of studies and significant methodological heterogeneity in an assessment of observed dietary behavior makes the null effect somewhat challenging to interpret. Trials have shown a reduction in short-term binging episodes among bulimic patients and no difference in longterm binging. Concerning anorexia, the benefits of TMS have been significantly more unclear. However, in clinical trials, among the symptoms affected by TMS, anxiety and stress appeared to be most improved [37]. In summary, it is difficult to recommend excitatory rTMS as a definite treatment for bulimia or anorexia. Drug-resistant Epilepsy Epilepsy is marked by altered cortical excitability that affects an estimated one in 26 individuals over their lifetime. Only two-thirds of patients with epilepsy respond to antiepileptic drugs (AEDs). 
The remaining one-third of patients have drug-resistant epilepsy. These patients are at an increased risk of morbidity. Low-frequency TMS may play a role in the management of drug-resistant epilepsy, especially for patients with a type of epilepsy not suitable for surgical ablation. Various combinations of frequency, strength, and coil type are used to achieve varying degrees of transcranial magnetic stimulation. A meta-analysis of 12 studies investigating the use of low-frequency (≤ 1 Hz) rTMS for the treatment of drug-resistant epilepsy found a significant reduction in overall seizure frequency over an average follow-up period of six weeks. Also, rTMS was found to be more effective in patients with a mean age of < 21 years. In studies without individual participant data (IPD), targeted stimulation was associated with the strongest treatment effect. Similarly, in an analysis of IPD, a univariate analysis of coil type revealed that the use of a figure-8 coil was associated with a greater treatment response and that an extratemporal seizure focus predicts worse treatment outcomes [39]. The meta-analysis also identified a 30% reduction in seizure frequency following LF-rTMS for the treatment of refractory epilepsy. This effect is consistent with a previous meta-analysis by Hsu and colleagues that identified a 34% reduction in seizure frequency following rTMS treatment [40]. More research with large samples, using different pulse frequencies, is needed to establish whether TMS could be a therapeutic option for epilepsy. It should be noted that in the studies quoted, very few participants achieved full seizure remission; however, the vast majority of participants experienced a therapeutic reduction in seizure frequency. Multiple Sclerosis Multiple sclerosis (MS) is an autoimmune, chronic central nervous system disease of unknown etiology. It presents as an ongoing demyelinating, inflammatory, and degenerative process affecting both the grey and white matter of the brain and the spinal cord. It results in the accumulation, over the years, of disabling motor and cognitive handicaps affecting personal, professional, and social quality of life. Studies have revealed that repeated stimulation of a single neuron at low frequency produces long-lasting inhibition of cell-cell communication; conversely, repeated high-frequency stimulation can improve cell-cell communication [39]. Trains of rTMS pulses can induce modification of activity in the targeted brain region, which can last for minutes or even hours. Various disease-modifying drugs are used for the treatment of MS. TMS has no interaction with such drugs and can be used as an adjunct to manage motor and sensory symptoms of MS, such as pain, fatigue, and spasticity. Based on the recent guidelines, there are still no recommendations for the therapeutic use of repetitive TMS in MS patients. Burhan et al. reported better gait performance, as assessed by an electronic walkway system, after the application of rTMS in a patient with a four-year history of relapsing-remitting MS presenting with cognitive and gait abnormalities. They showed a shorter ambulation time (time elapsed between the first contact of the first and the last footfalls, measured in seconds) and faster gait velocity in response to three daily rTMS sessions, and an increase in cadence after one and three sessions [40]. Based on these findings, it cannot be concluded whether TMS could be considered a therapeutic measure for MS. 
These results were obtained through the application of various rTMS protocols over the motor cortex for improving spasticity and ambulation in MS patients, and they need to be confirmed by studies with larger sample sizes. Movement Disorders Recent studies suggest that repeated transcranial magnetic stimulation (TMS) improves functional movement disorders (FMDs), such as dystonia, tremor, myoclonus, and Parkinsonism, but the underlying mechanisms are unclear. The therapeutic efficacy of TMS in patients with FMDs is thought to be mainly due to a cognitive-behavioral effect rather than neuromodulation. The suprathreshold intensity of TMS might thus be an essential prerequisite for efficacy. During suprathreshold magnetic stimulation sessions, patients experience the unexpected stimulation-induced movement of their affected limbs. This may make the patient realize that his or her motor system is working properly and thereby allow the brain to "relearn" or "reprogram" a normal pattern of movement [41]. It is this relearning of the normal pattern of movement that contributes to the long-lasting therapeutic effect of TMS. Parkinson's disease (PD) has wide-ranging clinical features, and rTMS therapy has been tried for many aspects of PD. The underlying mechanism remains unclear; however, several possibilities have been proposed, such as endogenous dopamine release or restoration of network activity. Motor symptoms are a cardinal feature of PD, for which evidence suggests moderate efficacy of rTMS. HF-rTMS over M1, including less focal stimulation (e.g., leg and bilateral hand M1 rTMS) or over the DLPFC, and LF-rTMS over the supplementary motor area (SMA) were most favorable. rTMS is reportedly also effective for levodopa-induced dyskinesia (LID), which is caused by long-term administration of levodopa among PD patients. rTMS has also been tried for non-pharmacological treatment of non-motor symptoms of PD, including depression. A "weak recommendation" in favor of HF-rTMS of the left DLPFC is given for the treatment of depressive symptoms associated with PD. These are examples of the growing application of rTMS therapy for PD for symptoms other than the classic motor symptoms. As such, rTMS has the potential to become an important adjunctive treatment for PD. Well-designed large clinical trials are still needed to establish its utility in clinical settings [42]. Migraine Migraine is a type of chronic headache related to cortical excitability. TMS has been shown to activate or suppress cortical excitability. Because there are limited drugs that can improve the quality of life for people with migraine, TMS is a promising therapy. It can facilitate or inhibit the electrical activity of the cerebral cortex. Some existing randomized clinical trials (RCTs) reveal that TMS can relieve headache. Nonetheless, there are a few meta-analyses discussing the effect of TMS on migraine. One meta-analysis reported some studies in which HF-rTMS was effective and well tolerated for migraine prophylaxis [43]. In another study, no statistically significant difference between LF-rTMS and sham stimulation was found. When the effect of TMS on chronic migraine was evaluated, it was concluded that there was no statistically significant difference in effect between the active TMS group and the sham TMS group. In light of this, the hypothesis put forward was that TMS can change cortical excitability but needs more time to do so. 
Unfortunately, due to the small sample size, this conclusion was not definite. More well-designed RCTs are needed to confirm this conclusion. The reasons for the variability of TMS results across patients include not only the dose but also the side and location of stimulation, the type of coil, and the number of sessions. Nevertheless, at present, there is no common standard of TMS for migraine. For chronic migraine, TMS is even less effective, as the chronic pathologic changes in cortical excitability would require a longer duration of TMS to show any effect [43]. Fibromyalgia Fibromyalgia is the presence of widespread musculoskeletal pain that cannot be attributed to any identifiable cause. Evidence from recent studies indicates that the pain in fibromyalgia springs from central nervous system (CNS) augmentation of pain processing in the brain. Gracely and colleagues found that approximately 50% lower stimulus intensity was needed to evoke a pain response in patients with fibromyalgia as compared to healthy controls (P < 0.001) [44]. Against the background of this mechanism, TMS therapy targeted at the motor cortex has been employed with some success for the treatment of these patients, probably by modifying these pain centers in the brain. A study conducted in 2001 showed that unilateral rTMS of the motor cortex induces a long-lasting decrease in the widespread chronic pain experienced in fibromyalgia [45]. Another study conducted in 2010 suggested long-term improvement in quality of life (including fatigue, morning tiredness, general activity, walking, and sleep), which was reported to correlate directly with changes in intracortical inhibition [46]. However, a recent meta-analysis on the efficacy of rTMS for fibromyalgia showed mixed outcomes. rTMS improved quality of life among patients with fibromyalgia with a moderate effect size (pooled standardized mean difference (SMD) = −0.472, 95% CI = −0.80 to −0.14); it also showed a trend toward reducing pain intensity (SMD = −0.64, 95% confidence interval (CI) = −0.31 to 0.017) but did not change depressive symptoms [47]. In view of this, further investigations and trials using different frequencies of rTMS or combining it with other treatment modalities (such as serotonin, which is found to be low in the serum of patients with fibromyalgia) are needed to establish firmly whether rTMS is effective for treating pain in fibromyalgia. Tinnitus Tinnitus, also known as phantom auditory perception, describes the conscious perception of an acoustic sensation in the absence of a corresponding external stimulus [48]. The American Academy of Otolaryngology guidelines for the treatment of tinnitus include stress reduction, cognitive therapy, masking, and sleep improvement [49]. Given the neuronal and central auditory involvement in the pathophysiology of tinnitus, TMS therapy is emerging as a new treatment modality with promising results. A study showed that rTMS did produce changes in the auditory cortex. Moreover, it was found that only some patients, such as those with a shorter duration of tinnitus, normal hearing, and little or no sleep disturbance, showed a significant reduction in tinnitus after TMS [50]. Conclusions Research on TMS as a treatment option for various neurological and psychiatric illnesses has increased in recent years. The aim of this review was to shed light on the basics of TMS and to summarize its use in various diseases. 
Although TMS showed some efficacy in treating symptoms of various neurological and psychiatric illnesses, further investigation is required with large sample sizes and comparison with standard treatments. Based on these studies, TMS could also serve as a good diagnostic and research tool in the field of neurosciences.
8,145.2
2018-10-01T00:00:00.000
[ "Psychology", "Biology" ]
Primary Production: From Inorganic to Organic Carbon Primary production involves the formation of organic matter from inorganic carbon and nutrients. This requires external energy to provide the four electrons needed to reduce the carbon valence from four plus in inorganic carbon to near zero valence in organic matter. This energy can come from light or the oxidation of reduced compounds, and we use the terms photoautotrophy and chemo(litho)autotrophy, respectively. Total terrestrial and oceanic net primary production are each ~50-55 Pg C yr−1 (1 Pg = 1 Gt = 10^15 g; Field et al. 1998). Within the ocean, carbon fixation by oceanic phytoplankton (~47 Pg C yr−1) dominates over that by coastal phytoplankton (~6.5 Pg C yr−1; Dunne et al. 2007), benthic algae (~0.32 Pg C yr−1; Gattuso et al. 2006), marine macrophytes (~1 Pg C yr−1; Smith 1981) and chemo(litho)autotrophs (~0.4 and ~0.37 Pg C yr−1 in the water column and sediments, respectively; Middelburg 2011). Much of the chemolithoautotrophy is based on energy from organic matter recycling. Since photosynthesis by far dominates inorganic to organic carbon transfers, we will restrict this chapter to light-driven primary production. Gross primary production refers to total carbon fixation/oxygen production, while net production refers to growth of primary producers and is lessened by respiration of the primary producer. Net primary production is available for growth and metabolic costs of heterotrophs, and it is the process most relevant for biogeochemists and chemical oceanographers. For the time being, we present primary production as the formation of carbohydrates (CH2O) and ignore any complexities related to the formation of proteins, membranes and other cellular components (Chap. 6), because these require additional elements (nutrients). The overall photosynthetic reaction is then: CO2 + H2O + light energy → CH2O + O2. Primary Producers Primary producers in the ocean vary from µm-sized phytoplankton to m-sized mangrove trees. Phytoplankton refers to photoautotrophs in the water that are transported with the currents (although they may be slowly settling). Biological oceanographers usually divide plankton (all organisms in the water that go with the current) into size classes (Table 2.1). Most phytoplankton are in the pico, nano and microplankton range (0.2-200 µm). The prefixes pico and nano have little to do with their usual meaning in physics and chemistry. Their small size gives them a high surface-area-to-volume ratio, which is highly favourable for taking up nutrients from a dilute solution. Within these phytoplankton size classes there is high diversity in terms of species composition and ecological functioning. Both small cyanobacteria (Synechococcus and Prochlorococcus) and very small eukaryotes (e.g., Chlorophytes) contribute to the picoplankton. Microflagellates from various phytoplankton groups (Chlorophytes, Cryptophytes, Diatoms, Haptophytes) dominate the nanoplankton and differ in many aspects (cell wall, nutrient stoichiometry, pigments, number of flagella, life history, presence/absence of frustule). While phytoplankton communities can be described in terms of species, size classes or molecular biology data based partitioning units, they can also be divided into different functional types (diatoms because of their Si skeleton, coccolithophores with a CaCO3 skeleton, N2-fixers, etc.). Unfortunately, taxonomic, functional and size partitionings among phytoplankton groups are not necessarily consistent. 
A substantial fraction of the ocean floor in the coastal domain receives enough light energy to sustain growth of photoautotrophs. This includes not only intertidal areas, but also the subtidal. Small-sized photoautotrophs (microphytobenthos, including diatoms and cyanobacteria) are again the dominant primary producers, but macroalgae, seagrass, saltmarsh plants and mangrove trees contribute as well. Seagrasses, saltmarsh macrophytes and mangrove trees have structural components and specialised organs (roots and rhizomes) to tap into nutrient resources within the sediments. The Basics (For Individuals and Populations) Carbon fixation by (and growth of) primary producers will be discussed based on the master equation of Soetaert and Herman (2009): P = µ · B · f_lim(resources, environmental conditions) (2.1). This master equation simply states that production (P, mol/g per unit volume per unit time) is proportional to the biomass (B, mol/g per unit volume) of the primary producer, the actor, which has an intrinsic maximum growth rate µ (time−1) and is limited (0 < f_lim < 1) by either physical conditions (e.g., temperature, turbulence) or resources such as light, nutrients and dissolved inorganic carbon. This equation is simple and generic, and we will show below how it relates to phytoplankton global primary production estimates using remote sensing, to expressions used in numerical biogeochemical models and to exponential growth in the laboratory. Maximum Growth Rate (µ) Consider a primary producer in an experiment supplied with all the resources it needs and under ideal conditions, in other words the limitation function f_lim is equal to one and optimal growth occurs. Equation 2.1 then reduces to the change in B with time, or production P, being equal to µ·B (2.2). This is the well-known equation for exponential growth: B = B_0·e^(µt) (2.3), where B is the biomass at time t and B_0 is the initial biomass. Plotting the logarithm of biomass as a function of time then yields a slope corresponding to µ. Sometimes data are reported as the number of cell divisions (or doublings) per day: µ_d = (1/t)·log2(B/B_0). Maximum growth rates for phytoplankton typically vary from 0.1 to 4 d−1, implying doubling times (ln 2/µ) of a fraction of a day to one week. Figure 2.1a shows a typical example of exponential growth for maximum growth rates of 0.1 to 2 d−1. Exponential growth leads to rapid depletion of substrates and after some time, resources become limiting and phytoplankton enters into a stationary phase (Fig. 2.1b). Maximum growth rate depends on phytoplankton group and size. (Fig. 2.1 caption: a The increase in biomass during exponential growth with growth rates of 0.1, 0.5, 1 and 2 d−1. b Cell growth of the diatom Thalassiosira pseudonana is exponential (growth rate of 1.4 d−1) until nitrate is depleted, after which stationary growth occurs; data from Davidson et al. 1999.) Temperature Effect on Primary Production The temperature of a system provides a strong control on the functioning of organisms. Growth responses of populations to temperature are usually expressed by thermal tolerance curves, also known as reaction norms. Starting at low temperatures, growth initially increases linearly or exponentially up to a maximum at T_opt and then typically declines relatively more rapidly: i.e. the response curve is often skewed to the left. In other words, phytoplankton growing near its optimum temperature is more sensitive to warming than to cooling (Fig. 2.3). 
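As a minimal sketch of the exponential-growth expressions above (Eqs. 2.2-2.3), the code below evaluates B(t) = B_0·e^(µt) and the corresponding doubling times ln 2/µ for the range of growth rates quoted in the text; the initial biomass and time span are arbitrary illustration values.

```python
import numpy as np

B0 = 1.0                              # initial biomass (arbitrary units)
mu_values = [0.1, 0.5, 1.0, 2.0]      # maximum growth rates (d^-1), as in Fig. 2.1a
t = np.linspace(0.0, 5.0, 6)          # time in days

for mu in mu_values:
    B = B0 * np.exp(mu * t)           # exponential growth, Eq. (2.3)
    doubling_time = np.log(2) / mu    # doubling time = ln 2 / mu
    print(f"mu = {mu:4.1f} d^-1 | doubling time = {doubling_time:5.2f} d | "
          f"B(5 d) = {B[-1]:8.2f}")
```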
Although populations show distinct unimodal responses to temperature, mixed communities, and thus ecosystems, usually exhibit a smooth, monotonic increase best described by an exponential (µ = a·e^(bT), Fig. 2.4). The thermal response can then be described by an expression that combines this exponential envelope with the unimodal responses of individual populations, where a and b are empirical parameters describing the maximum envelope for the mixed community and T_opt and width describe the maximum growth rate and temperature range of individual populations. Eppley's (1972) envelope gives the maximum expected growth rate at a given temperature; individual populations attain lower growth rates at other temperatures, with the consequence that species replace each other (Fig. 2.4). This exponential temperature response of natural communities is usually expressed in terms of Q10 values or activation energies E_a, both rooted in chemical thermodynamics (van 't Hoff and Arrhenius equations). The temperature Q10 is normally defined as Q10 = (µ_T/µ_Ref)^(10/(T − T_Ref)), where µ_T and µ_Ref are the rate (e.g. growth) at temperature T and the reference temperature T_Ref (Celsius). (Fig. 2.4 caption: Phytoplankton growth rate (d−1) for a mixed community comprised of polar, temperate and tropical species; the mixed community response is based on Eppley (1972).) Q10 describes the factor by which the rate increases for a 10 °C increase in T and can be simplified in terms of the parameter b of the exponential increase: Q10 = e^(10b). Eppley's curve thus corresponds to a Q10 of 1.88. Typical Q10 values for biological processes are between 2 and 3. The Arrhenius equation is very similar and reads µ = A·e^(−E_a/(R·T)), where A is a pre-exponential factor (time−1), E_a is the activation energy (J mol−1), R is the universal gas constant (8.314 J mol−1 K−1) and T is the absolute temperature (K). Sometimes the universal gas constant R is replaced by the Boltzmann constant k (8.617 × 10−5 eV K−1) and then E_a is expressed in eV (energy per molecule) rather than J mol−1. For the temperature range of seawater, E_a and Q10 values are related via E_a = R·T·(T + 10)·ln(Q10)/10, where T is again given in degrees Kelvin. Eppley's Q10 of 1.88 corresponds to activation energies of about 0.47 eV or 45 kJ mol−1 at 20 °C. One should realize that this is the optimal community temperature response, i.e. with no other limiting factors. Apparent activation energies and Q10 values in the ocean are ~0.30 eV (29 kJ mol−1) and ~1.5, respectively, close to that of Rubisco (Edwards et al. 2016). Light Photosynthesis is a light-dependent reaction, and light intensity has a major impact on growth rates. The relationship between photosynthesis and irradiance is normally presented as a P versus E curve, where E refers to radiant energy (mol quanta m−2 s−1). Multiple equations have been presented to represent the photosynthesis to light relation, which differ in the number of parameters and whether or not they include the photo-inhibition effect at high light intensities or respiration of the autotroph. Photorespiration, the breakdown of photo-labile, intermediate carbon fixation products, is important in full-light exposed organisms, such as terrestrial plants, microphytobenthos and phytoplankton in the surface layer. Common simple limitation functions are the hyperbolic Monod model, f_lim(E) = E/(E + K_E), where f_lim(E) is the light limitation function (0 < f_lim(E) < 1) and K_E is a light-saturation parameter (typically 50-150 µmol quanta m−2 s−1 for marine phytoplankton), and the Steele (1962) model, f_lim(E) = (E/E_max)·e^(1 − E/E_max), where E_max is typically 50-300 µmol quanta m−2 s−1 for marine phytoplankton (Soetaert and Herman 2009). The Steele model represents both the initial increase and the subsequent decrease due to photo-inhibition with only one parameter (Fig. 2.5). The Webb et al. 
(1974) model is based on an exponential, P = P_max·(1 − e^(−a·E/P_max)), where P_max is the maximum rate at high light and a is the initial slope (the increase in P with E at low light intensity). This equation ignores photo-inhibition. Alternatively, one can use the two-parameter Platt et al. (1980) equation, in which an additional parameter b describes the onset of photo-inhibition. Figure 2.5 illustrates the light limitation functions or PE curves presented above. Nutrient Limitation Growing phytoplankton needs a steady supply of resources to maintain growth. Nutrient uptake and growth kinetics are usually described using Monod or Droop kinetics. The former is the simpler model and is normally used for steady-state conditions, while the Droop or internal quota model is preferred for transient conditions, e.g. in fluctuating environments. The equation for nutrient limitation following Monod kinetics is µ = µ_max·S/(S + K_µ), i.e. f_lim(S) = S/(S + K_µ), where S is the substrate concentration in the medium water, f_lim(S) is the nutrient limitation function, µ_max is the maximal growth rate, and K_µ is the half-saturation constant for growth. The Droop equation expresses growth rate as a function of the cellular quota (Q) of the limiting nutrient (Droop 1970): µ = µ'_max·(1 − Q_min/Q), where Q_min is the minimum cellular quota for growth. The maximum growth rate on substrate (µ_max) and the maximum growth rate in terms of cellular quota (µ'_max) are linked through Q_max, the maximum cellular quota attained as S increases. From Theory and Axenic Mono-Cultures to Mixed Communities in the Field Progress in theory, creativity in experimental design, and dedicated hard laboratory work have generated process-based understanding of phytoplankton growth in the laboratory. This body of knowledge has deepened our understanding and guided our modelling efforts and field observation strategies, but we need to make many assumptions before we can apply this mechanistic approach to the field. Let us return to our master Eq. (2.1): P = µ·B·f_lim(resources, environmental conditions). Ignoring environmental conditions, such as temperature, and substituting the simplest expressions introduced above, we arrive at P = µ_max·B·[E/(E + K_E)]·[S/(S + K_µ)] (2.15). This equation for primary production contains 6 terms that need to be quantified for the case of a single limiting nutrient and a single phytoplankton species. The light availability (E) and nutrient concentration (S) display spatial and temporal gradients in nature, and the maximum growth rate µ_max and the half-saturation dependences (K_E and K_µ) require experimental or laboratory studies. Does Diversity Matter or Not? One of the most critical restrictions on the use of mechanistic complex models is related to phytoplankton diversity. Hutchinson (1961) identified the paradox that phytoplankton is highly diverse, despite the limited range of resources they compete for, in direct contrast to the competitive exclusion principle (Hardin 1960). Seawater typically contains tens of different species of primary producers, many of which lack maximum growth data and known limitation functions. Accordingly, it is not feasible to simply apply Eq. 2.15 to individual species in the field and sum their contributions to obtain the primary production. Besides these theoretical arguments against the single-species approach, there are also empirical reasons. Primary production and its dependence on environmental conditions (nutrients, temperature, light) are normally quantified at the community level, in the absence of techniques to quantify species-specific primary production in natural waters. 
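The limitation functions introduced above are easy to encode directly. The sketch below implements the Monod and Steele light responses, the Webb et al. (1974) exponential form, and the Monod and Droop nutrient terms; the parameter values are purely illustrative (chosen within the ranges quoted in the text) and the function names are our own.

```python
import numpy as np

# --- Light limitation (P versus E curves), 0 < f_lim < 1 ---
def f_monod_light(E, K_E=100.0):
    """Hyperbolic (Monod) light limitation; K_E in umol quanta m-2 s-1."""
    return E / (E + K_E)

def f_steele(E, E_max=200.0):
    """Steele (1962): single-parameter curve including photo-inhibition."""
    return (E / E_max) * np.exp(1.0 - E / E_max)

def p_webb(E, P_max=1.0, alpha=0.02):
    """Webb et al. (1974): exponential saturation, no photo-inhibition."""
    return P_max * (1.0 - np.exp(-alpha * E / P_max))

# --- Nutrient limitation ---
def f_monod_nutrient(S, K_mu=1.0):
    """Monod limitation on the external substrate concentration S."""
    return S / (S + K_mu)

def mu_droop(Q, mu_inf=1.2, Q_min=0.05):
    """Droop (1970): growth as a function of the internal cellular quota Q."""
    return mu_inf * (1.0 - Q_min / Q)

E = np.array([10.0, 50.0, 200.0, 800.0])   # irradiance levels (umol quanta m-2 s-1)
print("Monod light :", np.round(f_monod_light(E), 2))
print("Steele      :", np.round(f_steele(E), 2))
print("Webb        :", np.round(p_webb(E), 2))
print("Monod nutrient, S = 0.5:", round(f_monod_nutrient(0.5), 2))
print("Droop growth,  Q = 0.10:", round(mu_droop(0.10), 2))
```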
This discrepancy between, on the one hand, mechanistic, single-species approaches in the laboratory and, on the other hand, quantification of community responses and activities is somewhat unfortunate (Box 2.2). Chl, the Biomass Proxy The biomass of the primary producer (B) is the second term in our master equation, and quantifying this term in natural systems is more difficult than one would initially anticipate. Particulate organic carbon (POC) concentrations (g C per unit volume) are a direct measure of phytoplankton biomass in laboratory settings with axenic cultures. However, in natural systems, the pool of particulate organic carbon comprises not only a mixture of phytoplankton species, each with its own maximum growth rate and temperature, light and nutrient dependence, but also a variable and sometimes dominating contribution of detritus (dead organic matter), bacteria and other heterotrophic organisms. It is for this reason that chlorophyll concentrations (Chl) are used as a proxy for living primary producer biomass. The rationale is that Chl is only produced by photosynthesizing organisms, degrades readily after death of the primary producers and can be measured relatively easily using a number of methods. Primary producer biomass (B) can then be calculated if one knows the C:Chl (or Chl:C) ratio of the phytoplankton. However, this ratio differs among species and depends on growth conditions, in particular light and nutrient availability (Cloern et al. 1995). Chl:C ratios vary from ~0.003 to ~0.055 (g Chl (g C)−1; Cloern et al. 1995), complicating the translation of phytoplankton growth into primary production. The very reason that Chl is such a good proxy for photosynthesizing organisms is also the reason why it is not well suited to partitioning biomass among different phytoplankton species: it is present in all primary producers harvesting light energy. Accessory and minor pigments, such as zeaxanthin and fucoxanthin, do, however, have some potential to resolve differences among phytoplankton groups, but not at the species level. Light Distribution The distribution and intensity of photosynthetically active radiation in seawater is governed by the intensity at the sea surface (E_0) and by scattering and absorption of light, with the result that light attenuates with depth. The decline of light intensity E with water depth z can be described by a simple differential equation expressing that a constant fraction of radiation is lost: dE/dz = −k_PAR·E, where the proportionality constant k_PAR is known as the extinction coefficient (m−1). Solving this equation using the radiation at the seawater-air interface (E_0) yields the well-known Lambert-Beer equation: E(z) = E_0·e^(−k_PAR·z) (2.17). The extinction coefficient k_PAR includes the absorption of radiation by water (k_w), by the pigments from various primary producers (k_Chl), by coloured dissolved organic matter (k_DOC), and by suspended particulate material (k_SPM). The light extinction coefficient of pure water (k_w ≈ 0.015-0.035 m−1) depends on the wavelength of light, with longer wavelengths (red) being absorbed more strongly than shorter wavelengths (blue); this is the cause of the blue appearance of clear water. The other light extinction components have a different wavelength dependence: the attenuation coefficients of dissolved organic matter (k_DOC; "gelbstoffe") and detritus (k_SPM) increase with shorter wavelength, while that of phytoplankton (k_Chl) varies depending on the species, i.e. 
the pigment composition of the primary producers (Kirk 1992; Falkowski and Raven 1997). Oceanographers often divide ocean waters into two classes with respect to light absorption: case 1 waters, in which phytoplankton (<0.2 mg Chl a m−3) and its debris add only to k_w, and case 2 waters, which have high pigment concentrations and light attenuation because of (terrestrially derived) dissolved organic carbon and suspended particulate matter. The overall light attenuation (k_PAR) in case 1 waters can be approximated by an empirical power function of the chlorophyll concentration (Morel 1988; Eq. 2.18), where Chl is in mg Chl a m−3. Other useful empirical relations link light attenuation (k_PAR) to the Secchi depth (z_Sec, m), the depth at which a white disk disappears visually: k_PAR = q/z_Sec (2.19), where q varies from 1.7 in case 1 waters to 1.4 in case 2 waters (Gattuso et al. 2006), and k_PAR = 0.4 + 1.09/z_Sec (2.20) for turbid estuarine waters (Cole and Cloern 1987). Light attenuation coefficients vary from 0.02 m−1 in oligotrophic waters, to 0.5 m−1 in coastal waters, to >2 m−1 in turbid waters. Light attenuation by water and phytoplankton dominates in the open ocean and on the shelf. In other coastal waters, including estuaries, phytoplankton and suspended particles dominate light attenuation, while light attenuation is primarily due to suspended particles in more turbid systems (Heip et al. 1995). The light attenuation governs the euphotic zone depth (z_eu, m), i.e., the depth where radiation is 1% of the incoming: ln 0.01 = −k_PAR·z_EU, or z_EU = 4.6/k_PAR (2.21). The euphotic zone is a key depth horizon in aquatic sciences because photosynthesis is largely limited to this zone. Moreover, the bottom of the euphotic zone is often used as a reference for export of organic matter. Euphotic zone depths vary from about 200 m in the oligotrophic ocean, to tens of meters in shelf systems, to meters in coastal waters and a few decimetres in turbid and/or eutrophic estuaries (Fig. 2.6). Factors Governing Primary Production Having presented the factors governing phytoplankton production in laboratory studies and the limitations in applying that knowledge to natural systems, we have all the ingredients to explore the factors governing the (depth) distribution and rate of primary production in natural ecosystems. (Fig. 2.6 caption: The euphotic zone is >200 m in the clearest ocean water, where phytoplankton is very low and light attenuation by water itself dominates. In most of the ocean, phytoplankton dominates light attenuation and euphotic zone depth scales with phytoplankton concentration. In estuaries and other turbid systems, dissolved organic matter and in particular suspended particles attenuate light and the euphotic zone narrows to less than one meter. Coastal systems and eutrophic parts of the ocean are in between. Case 1 and 2 oceanic waters are indicated. Light attenuation due to phytoplankton was modelled following Morel (1988; Eq. 2.18), while that due to suspended particles followed Cloern (1987), and euphotic depth was calculated as 4.6/k_PAR.) Depth Distribution of Primary Production Consider a system with a light profile following the Lambert-Beer equation (2.17), with E_0 = 10 mol m−2 d−1 and k_PAR = 0.1 m−1 (corresponding to a euphotic zone of 46 m), and a nutrient pattern as shown in Fig. 2.7. Nutrients are low in the upper 25 m (N = 0.1 µmol m−3) and then increase exponentially, with a depth coefficient of 0.1 m−1, to a maximum of 10 µmol m−3. 
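Before continuing with the worked example, the attenuation relations above can be checked with a few lines of code. The sketch below uses the Lambert-Beer profile, the Secchi-depth conversions and the euphotic-depth rule z_eu = 4.6/k_PAR; the chlorophyll-based attenuation is written here in the commonly used power-law form attributed to Morel (1988), k_PAR ≈ 0.121·Chl^0.428, which should be treated as an assumed parameterisation rather than a quotation from this chapter.

```python
import numpy as np

def k_par_from_chl(chl):
    """Assumed Morel (1988)-type power law for case 1 waters (chl in mg Chl a m-3)."""
    return 0.121 * chl ** 0.428

def k_par_from_secchi(z_sec, q=1.7):
    """k_PAR = q / z_Sec, with q ~1.7 in case 1 waters and ~1.4 in case 2 waters."""
    return q / z_sec

def k_par_estuarine(z_sec):
    """Turbid estuarine waters: k_PAR = 0.4 + 1.09 / z_Sec (Cole and Cloern 1987)."""
    return 0.4 + 1.09 / z_sec

def euphotic_depth(k_par):
    """Depth where light is 1% of surface radiation: z_eu = 4.6 / k_PAR."""
    return 4.6 / k_par

def light_profile(E0, k_par, z):
    """Lambert-Beer: E(z) = E0 * exp(-k_PAR * z)."""
    return E0 * np.exp(-k_par * z)

# Worked-example values from the text: E0 = 10 mol m-2 d-1, k_PAR = 0.1 m-1
E0, k_par = 10.0, 0.1
print("Euphotic depth:", euphotic_depth(k_par), "m")          # ~46 m, as stated
print("E at 25 m     :", round(light_profile(E0, k_par, 25.0), 2), "mol m-2 d-1")
print("k_PAR for 1 mg Chl m-3 :", round(k_par_from_chl(1.0), 3), "m-1")
print("k_PAR for 5 m Secchi   :", round(k_par_from_secchi(5.0), 2), "m-1")
print("Estuarine, 0.5 m Secchi:", round(k_par_estuarine(0.5), 2), "m-1")
```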
If we further assume (1) that physical mixing homogenizes phytoplankton biomass (B = constant), (2) that there is only one limiting nutrient (N), and (3) that light and nutrient limitation can be described by Monod relations with parameters K_E and K_N, then µ_max and B can be combined into a depth-independent maximal production P_m. The modelled P is then P(z) = P_m·[E(z)/(E(z) + K_E)]·[N(z)/(N(z) + K_N)] (2.22). Taking K_N and K_E values of 1, i.e. 10% of E_0 and of the maximum N at depth, and combining Eq. 2.22 with the light and nutrient profiles, we can then calculate the primary production as a function of depth (Fig. 2.7, green curve). Although these light and nutrient profiles and the model parameters K_N and K_E have been chosen arbitrarily, they are reasonable and generate a representative depth profile of primary production with a subsurface maximum, as observed in deep chlorophyll maxima (Fig. 2.8). In the upper 25 m, primary production is rather low because of nutrient limitation and declines slightly with depth because of light attenuation (Fig. 2.7). Primary production is optimal at depths between 25 and 40 m, i.e. where the nutricline and the lower part of the euphotic zone overlap. Primary production below 25 m is primarily light-limited, but accounts for about 75% of the depth-integrated primary production. Increasing surface-water nutrient concentrations or the phytoplankton affinity for nutrients (lowering K_N) would increase primary production in the top 25 m, but not so much at depth (Fig. 2.9a). Increasing the photosynthetic performance at low light levels (lowering K_E) would increase primary production at depth (Fig. 2.9b). Phytoplankton living in the surface ocean can thus optimize their performance by investing in nutrient acquisition, while those living in the subsurface would best optimize their light-harvesting machinery. This simple model explains why deep chlorophyll maxima occur in low-nutrient systems and why the depth distribution of primary production follows light in eutrophic systems (e.g. during early spring at the Bermuda Atlantic station, Fig. 2.8). Depth-Integrated Production The overall control of light on depth-integrated production underlies satellite-derived algorithms for primary production and coastal predictive equations. For ecosystem and biogeochemical studies, the focus is on net primary production, i.e. carbon fixation minus phytoplankton respiration, expressed per m2 and unit time. Behrenfeld and Falkowski (1997) showed that depth-integrated net primary production (P, g C m−2 yr−1) can be estimated as P = P_opt·Chl·z_eu·DL·f_lim(E) (2.23), where P_opt is the maximum daily photosynthesis rate (mg C (mg Chl)−1 h−1), z_eu is the euphotic zone depth, DL is day length (h), and f_lim(E) is a light limitation function. The similarity with our master Eq. (2.1) is evident when nutrient limitation and environmental conditions are ignored. Integrating Eq. (2.1) over depth to z_EU and over time from sunrise to sunset, we arrive at an expression identical to (2.23), with P_opt = µ_max, Chl = B, ∫0^z_EU dz = z_eu, and ∫sunrise^sunset dt = DL. Behrenfeld and Falkowski (1997) showed that 85% of the variance in global net primary production can be attributed to depth-integrated biomass (Chl × z_eu) and the maximal photosynthesis parameter P_opt, with other factors, such as differences in light limitation functions, depth distributions of phytoplankton biomass and day length (DL), being less important. 
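The depth profile discussed above follows directly from Eq. (2.22) once the light and nutrient profiles are written out. The sketch below reproduces the nominal run described in the text (E_0 = 10 mol m−2 d−1, k_PAR = 0.1 m−1, nutrients of 0.1 µmol m−3 in the upper 25 m increasing exponentially towards 10 µmol m−3 at depth, and K_E and K_N equal to 10% of the surface light and of the deep nutrient maximum); the exact saturating form used for the nutrient increase below 25 m is an assumption on our part, chosen only to mimic the description.

```python
import numpy as np

# Nominal run of the depth-resolved production model, Eq. (2.22)
E0, k_par = 10.0, 0.1          # surface irradiance (mol m-2 d-1), attenuation (m-1)
K_E, K_N = 1.0, 1.0            # half-saturation constants (10% of E0 and of deep N)
P_m = 1.0                      # depth-independent maximal production (arbitrary units)

z = np.arange(0.0, 100.0, 1.0)                 # depth grid (m)
E = E0 * np.exp(-k_par * z)                    # Lambert-Beer light profile

# Nutrient profile: 0.1 above 25 m, then increasing exponentially (coefficient 0.1 m-1)
# towards 10 umol m-3.  This saturating form is an assumed reading of the text.
N = np.where(z <= 25.0, 0.1, 10.0 - (10.0 - 0.1) * np.exp(-0.1 * (z - 25.0)))

P = P_m * (E / (E + K_E)) * (N / (N + K_N))    # Eq. (2.22)

z_opt = z[np.argmax(P)]
print(f"Production peaks at ~{z_opt:.0f} m depth")            # subsurface maximum
frac_deep = P[z > 25.0].sum() / P.sum()
print(f"Fraction of depth-integrated production below 25 m: {frac_deep:.2f}")
```

With these illustrative settings the maximum falls between 25 and 40 m and most of the integrated production occurs below 25 m, in line with the behaviour described in the text.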
Consequently, the most rudimentary model would be (Falkowski 1981):

P = ψ · Chl · z_eu · E_0 (2.25)

stating that net primary production (P) scales linearly with depth-integrated biomass (Chl × z_eu), incoming radiation (E_0) and an optimal photosynthetic parameter (ψ). Similar semi-empirical relations are often used in estuaries (Cole and Cloern 1987; Heip et al. 1995):

P = a + b · Chl · z_eu · E_0 (2.26)

where a and b are regression coefficients that are system specific.

[Fig. 2.9 caption fragment: Low half-saturation constants K_N imply high availability of nutrients for phytoplankton. b The impact of light-harvesting efficiency on primary production. A high affinity (low K_E) for light causes higher primary production at depth. The nominal run is presented in Fig. 2.7.]

Critical Depths

The overall governing role of light on primary production and phytoplankton dynamics also underlies the use of two critical depth horizons, often credited to Sverdrup (1953): the compensation depth (z_c) and the critical depth (z_cr). These were introduced to understand and predict spring blooms in the ocean. At the compensation depth (z_c), phytoplankton photosynthesis is balanced by community respiration (Fig. 2.10), i.e. it is the depth at which radiation falls to the level (E_c) at which photosynthesis by phytoplankton just compensates their respiration. This compensation depth should not be confused with the physics-governed mixed-layer depth (z_mld) or with the critical depth (z_cr), the depth at which primary production integrated through the water column and over the day equals the daily, water-column-integrated community losses of carbon (Sverdrup 1953; Fig. 2.10). These depths are pivotal to the formation of phytoplankton blooms in the ocean (Sverdrup 1953). If the mixed layer is deeper than the critical depth (z_cr), then phytoplankton spend too much time at low irradiance and carbon losses are not compensated by sufficient growth. Conversely, if the mixed layer is shallower than z_cr, phytoplankton communities can grow and blooms can develop. Assuming that carbon losses (R_0) are constant with depth, that there is no nutrient limitation, and that gross primary production is linearly related to radiation, which in turn depends exponentially on depth (Eq. 2.18), primary production is described by P = P_0 e^(−k_PAR z), where P_0 is the surface productivity. One eventually arrives at the following relation for Sverdrup's critical depth, z_cr:

(1 − e^(−k_PAR z_cr)) / (k_PAR z_cr) = R_0 / P_0 = E_c / E_0

where E_c is the radiation level at the compensation depth and R_0 is the depth-independent community respiration rate (Sverdrup 1953; Siegel et al. 2002). Clearly, light attenuation is a major factor, governing not only z_eu but also z_c and z_cr. The critical depth (z_cr) is usually 4 to 7 times greater than the euphotic zone depth (z_eu). The compensation depth (z_c) is typically 50-75% of the euphotic zone depth (Siegel et al. 2002; Sarmiento and Gruber 2006; Fig. 2.10). For simplicity, the compensation depth is often taken equal to the euphotic zone depth; this should be discouraged, because it implies that community respiration represents only 1% of maximal production. The depth of the euphotic zone (z_eu) is an optical depth governed by light attenuation and thus only indirectly impacted by phytoplankton via their effect on k_PAR, while the compensation depth depends on the community structure (algal physiology and the heterotrophic community). The Sverdrup critical depth model is simple, instructive and predictive: it can explain bloom initiation when mixed layers shoal and links it to sensible, quantifiable physical parameters.
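The critical-depth relation given above can be solved numerically; the sketch below does so for the example attenuation of 0.1 m−1. The ratio E_c/E_0 is not specified in the text, so the values used here are assumptions chosen to bracket a plausible range; under those assumptions the solver recovers critical depths of roughly 4-7 times the euphotic depth, consistent with the range quoted above.

```python
import numpy as np
from scipy.optimize import brentq

def critical_depth(k_par, ec_over_e0):
    """Solve (1 - exp(-k z)) / (k z) = Ec/E0 for z (m), the Sverdrup critical depth."""
    f = lambda z: (1.0 - np.exp(-k_par * z)) / (k_par * z) - ec_over_e0
    # bracket: the left-hand side tends to 1 near the surface and to 0 at great depth
    return brentq(f, 1e-3, 1e4)

k_par = 0.1                 # m-1, as in the example profile above
z_eu = 4.6 / k_par          # euphotic depth (Eq. 2.21)
for ratio in (0.03, 0.04, 0.05):   # assumed illustrative Ec/E0 values
    z_cr = critical_depth(k_par, ratio)
    print(f"Ec/E0={ratio:4.2f}: z_cr={z_cr:5.0f} m  (z_cr / z_eu = {z_cr / z_eu:3.1f})")
```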
However, it is sometimes difficult to apply because of inconsistencies and uncertainties in the parameterisation (phytoplankton vs. community respiration and other phytoplankton losses) and in the validity of the assumptions (no nutrient limitation, well-mixed layer). The critical depth concept was developed for deep waters, but a similar approach can be applied to shallow ecosystems. In shallow coastal systems, it is the relative importance of water depth and euphotic zone depth that governs (a) where production occurs and (b) whether phytoplankton biomass will increase or not. If water depth is less than the euphotic depth (z_eu), light reaches the seafloor and primary production by microbial photoautotrophs (microphytobenthos), as well as macroalgae and seagrasses, may occur. Gattuso et al. (2006) showed that this may happen over about 1/3 of the global coastal ocean. If water depth exceeds the euphotic zone depth by more than a factor of 4-7, then phytoplankton losses in the dark cannot be fully compensated by photosynthesis and phytoplankton communities will lose biomass (Cloern 1987; Heip et al. 1995). Vice versa, if water depth is less than 4-7 times z_eu, phytoplankton growth is maintained. Consequently, shallowing of ecosystems (e.g. water flowing over a tidal flat or the development of stratification) stimulates phytoplankton community growth, all other factors remaining equal, while deepening of water bodies will cause a decline. Moreover, in turbid systems where the light attenuation (k_PAR), and thus z_eu (Fig. 2.7), are governed by suspended particulate matter dynamics, phytoplankton communities may experience variable twilight conditions and have difficulty maintaining positive growth. Consequently, when turbid, nutrient-rich river and estuarine waters reach the sea, particles settle and the light climate improves, so phytoplankton blooms may develop and utilize the nutrients (Fig. 2.11).

Sverdrup's critical depth hypothesis is based on the assumption that phytoplankton biomass and phytoplankton losses are homogeneously distributed in the mixed layer. However, the mixed layer with uniform temperature, as used in Sverdrup's approach, does not necessarily coincide with the layer of active turbulent mixing in the ocean (Franks 2015). It is more realistic to represent phytoplankton biomass (B) as governed by the balance between production, respiration losses and transport by eddy diffusion and particle settling. Again, we assume that gross primary production is linearly related to radiation (which declines exponentially with depth); hence P = P_0 e^(−k_PAR z). Phytoplankton respiration loss is considered a first-order process, Loss = r B, with a first-order rate constant r. Under the assumption of steady state we then arrive at (see Box 1.1):

K_z d²B/dz² − w dB/dz − r B + P_0 e^(−k_PAR z) = 0

where K_z is the vertical eddy diffusion coefficient (m² s−1), w is the settling velocity (m s−1; positive downwards), and the other terms have been defined before. Considering a semi-infinite domain, i.e. dB/dz = 0 at large depth, and phytoplankton biomass B_0 at the air-water interface, we obtain the following solution:

B(z) = (B_0 − A) e^(λ z) + A e^(−k_PAR z), with A = P_0 / (r − w k_PAR − K_z k_PAR²) and λ = (w − √(w² + 4 K_z r)) / (2 K_z)

The second exponential term accounts for light-dependent production, while the first exponential comprises water-column mixing, phytoplankton settling and phytoplankton losses. To simplify matters, we assume that phytoplankton biomass is zero at the air-water interface.
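Before moving on, the simple depth rules for shallow systems discussed above (seafloor within the euphotic zone for benthic producers; water depth below roughly 4-7 times z_eu for net phytoplankton growth) can be written down directly. The depths, attenuation values and the critical factor of 5 in the sketch below are illustrative assumptions.

```python
def classify_site(water_depth, k_par, crit_factor=5.0):
    """Apply the z_eu-based rules of thumb; crit_factor ~4-7 (here 5, an assumed value)."""
    z_eu = 4.6 / k_par
    benthic_light = water_depth < z_eu                    # seafloor lies inside the euphotic zone
    net_plankton_growth = water_depth < crit_factor * z_eu
    return z_eu, benthic_light, net_plankton_growth

# Three hypothetical sites: clear shallow flat, turbid estuarine channel, clear shelf
for depth, k_par in [(5.0, 0.5), (30.0, 2.0), (200.0, 0.05)]:
    z_eu, benthic, plankton = classify_site(depth, k_par)
    print(f"depth={depth:5.0f} m, k_PAR={k_par:4.2f} m-1, z_eu={z_eu:5.1f} m -> "
          f"benthic production possible: {benthic}, phytoplankton growth maintained: {plankton}")
```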
The first and second terms then balance when λ = −k_PAR, i.e. when

K_z k_PAR² + w k_PAR = r

After re-arrangement to isolate the eddy diffusion coefficient, we obtain

K_z = (r − k_PAR w) / k_PAR² (2.30)

In other words, the vertical eddy diffusion coefficient K_z should be less than (r − k_PAR w)/k_PAR² for positive values of phytoplankton biomass (B). Huisman et al. (1999) presented a more elaborate model of phytoplankton growth in a turbulent environment, including a feedback between phytoplankton biomass and k_PAR. Through scaling and numerical analysis of a model without phytoplankton sinking (w = 0), they derived a relationship between the maximum turbulent mixing coefficient K_z and k_PAR: K_z ≈ 0.31 r/k_PAR². If we also ignore phytoplankton advection (w = 0 in Eq. 2.30), we obtain K_z < r/k_PAR², fully consistent with Huisman et al. (1999). The critical turbulence level for phytoplankton growth is thus inversely related to the square of the attenuation of light, with the phytoplankton loss rate as the scaling factor. For turbid systems such as estuaries and other coastal systems with high light attenuation (k_PAR), turbulent mixing must therefore be minimal to allow net growth, consistent with observations by Cloern (1991) that phytoplankton blooms develop during neap tide, when turbulent mixing intensity is lowest. Conversely, in clear, oligotrophic waters, light attenuation is limited and phytoplankton blooms can occur at relatively high mixing rates. Sinking phytoplankton (w > 0) lowers the numerator of Eq. 2.30 and thus lowers the critical turbulence level, while buoyant phytoplankton (w < 0) increases the maximal allowable turbulence, and thus the scope for phytoplankton growth.

Box 2.1: Phytoplankton size-based traits

The intrinsic maximum growth rate of phytoplankton varies with size (Fig. 2.2). Metabolic activity of organisms usually scales with size and, when expressed in terms of mass or volume (V), follows a simple power law:

rate = a V^b (2.31)

where b = −0.25 according to the metabolic theory of ecology (Brown et al. 2004). Accordingly, the smaller the organism, the higher the intrinsic maximum growth rate. This power-law relationship holds over orders of magnitude and across a wide range of organisms (autotrophs and heterotrophs, eukaryotes and prokaryotes; e.g., Fenchel 1973) and implies that smaller organisms have the highest intrinsic growth rates (Fig. 2.12). However, some cell components, such as the genome and membrane, are non-scalable, and consequently this power law appears to break down in the range of the nanoplankton (2-20 µm). There is a trade-off in the size dependence of physiological traits (Ward et al. 2017). Burmaster's (1979) equation can be used to illustrate this:

µ_size = µ_max h_size / (h_size + µ_max Q_min)

where the maximum growth rate for a certain size (µ_size) depends on the maximum nutrient uptake rate (h_size), the minimum cell quota (Q_min) and the theoretical maximum growth rate (µ_max). Maximum nutrient uptake and requirement per cell scale positively with cell size (Fig. 2.2b, dashed blue line), while theoretical maximum growth rates scale negatively (Fig. 2.2b, solid blue line). The result is an optimum in growth rate for phytoplankton in the nanoplankton range (Fig. 2.2b, black line). Very small picoplankton cells have a low intrinsic growth rate that increases with size because more volume becomes available for catalysis and synthesis. The intrinsic growth rate of microplankton cells decreases with increasing size, as in most organisms, for multiple reasons, including the increase in intracellular transport distances between cellular machineries (Marañón et al. 2013).
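The critical-turbulence criterion derived just before Box 2.1 is easy to evaluate. The sketch below uses an assumed loss rate of 0.1 d−1 and illustrative attenuation coefficients and sinking speeds; it is meant only to show how strongly the maximum permissible mixing drops as waters become more turbid, and how sinking and buoyancy shift it.

```python
def critical_kz(r, k_par, w=0.0):
    """Maximum vertical eddy diffusivity (m2 d-1) that still allows positive biomass (Eq. 2.30)."""
    return (r - k_par * w) / k_par ** 2

r = 0.1                                               # assumed phytoplankton loss rate (d-1)
for label, k_par in [("clear ocean", 0.05), ("coastal", 0.5), ("turbid estuary", 2.0)]:
    print(f"{label:14s} k_PAR={k_par:4.2f} m-1: Kz_crit={critical_kz(r, k_par):8.3f} m2 d-1")

# Sinking (w > 0, m d-1) lowers the critical turbulence; buoyancy (w < 0) raises it
print("clear ocean, sinking 1 m/d :", critical_kz(r, 0.05, w=1.0), "m2 d-1")
print("clear ocean, buoyant 1 m/d :", critical_kz(r, 0.05, w=-1.0), "m2 d-1")
```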
Box 2.2: Phytoplankton diversity, rate measurements and biogeochemical models The high number of different species in each water sample poses a challenge to link the species-specific growth parameters obtained in the laboratory with measurements of phytoplankton growth in the field and modelling of phytoplankton primary production for natural, mixed communities. Gross primary production is normally quantified by the production of oxygen, using either 18 O-labelling or the differential evolution of oxygen in light and dark. The most common technique for measuring primary production is the 14 C labelling technique, but this method provides a result in between gross and net photosynthesis, depending on the duration of the incubation. Both approaches quantify primary production for the total community, rather than for specific species. Biological oceanographers have developed methods to quantify group-specific primary production, based on dilution approaches or the incorporation of isotopically labelled bicarbonate into biomarker or flow-cytometry separated groups of organisms (Laws 2013). These group-specific primary production measurements can be compared more directly to laboratory data. Biogeochemical modellers have explored a number of strategies to incorporate differences among phytoplankton species into their ecosystem models; e.g. the plankton functional group approach and phytoplankton size or trait based approaches. The former approach is limited to a few plankton groups that are representative for certain biogeochemical fluxes (e.g. N 2fixers, diatoms, small and large phytoplankton, coccoliths; Sarmiento and Gruber 2006). The size-based approach makes use of the systematic relationships between phytoplankton size and activity (e.g. Fig. 2.2), but some processes do not scale in a simple way with size. Trait-and genome-based approaches are the most recent, and they consider emergent phenomena (Follows et al. 2007). These approaches are instructive and needed to further our understanding and predictive capabilities in times of global change, but they are so far difficult to link with observations in the field. Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
8,311.8
2019-01-01T00:00:00.000
[ "Geology" ]
Site-directed Mutagenesis of the 100-kDa Subunit (Vph1p) of the Yeast Vacuolar (H+)-ATPase* Vacuolar (H+)-ATPases (V-ATPases) are multisubunit complexes responsible for acidification of intracellular compartments in eukaryotic cells. V-ATPases possess a subunit of approximate molecular mass 100 kDa of unknown function that is composed of an amino-terminal hydrophilic domain and a carboxyl-terminal hydrophobic domain. To test whether the 100-kDa subunit plays a role in proton transport, site-directed mutagenesis of the VPH1 gene, which is one of two genes that encode this subunit in yeast, has been carried out in a strain lacking both endogenous genes. Ten charged and twelve polar residues located in the seven putative transmembrane helices in the COOH-terminal domain of the molecule were individually changed, and the effects on proton transport, ATPase activity, and assembly of the yeast V-ATPase were measured. Two mutations (R735L and Q634L), in transmembrane helix 6 and at the border of transmembrane helix 5, respectively, showed greatly reduced levels of the 100-kDa subunit in the vacuolar membrane, suggesting that these mutations affected stability of the 100-kDa subunit. Two mutations, D425N and K538A, in transmembrane helix 1 and at the border of transmembrane helix 3, respectively, showed reduced assembly of the V-ATPase, with the D425N mutation also reducing the activity of V-ATPase complexes that did assemble. Two mutations, H743A and K593A, in transmembrane helix 6 and at the border of transmembrane helix 4, respectively, together with E789Q in transmembrane helix 7, reduced proton transport and ATPase activity without preventing assembly of the V-ATPase complex, suggesting that these residues may participate in proton translocation or in coupling proton transport to ATP hydrolysis. The vacuolar (H+)-ATPases (V-ATPases) are a family of proton pumps responsible for acidification of intracellular compartments in eukaryotic cells (for reviews see Refs. 1-9). Among the compartments acidified by V-ATPases are clathrin-coated vesicles, endosomes, lysosomes, secretory vesicles, such as synaptic vesicles and chromaffin granules, and the central vacuoles of plants, Neurospora and yeast. Acidification of these compartments, in turn, plays an important role in such processes as receptor-mediated endocytosis, intracellular membrane traffic, protein processing and degradation, and coupled transport of small molecules. In yeast, acidification of the central vacuole is important both to maintain the activity of degradative enzymes and to drive uptake of solutes such as Ca2+ and amino acids (5). Although the V-ATPases are homologous to the F-ATPases (28-31), both in overall structure (10, 27) and in sequence homology of several of the subunits (11-13, 25, 32-38), no obvious structural homolog exists for the 100-kDa subunit in the F-ATPases. The 100-kDa subunit of the V-ATPase in yeast is encoded by two homologous genes, VPH1 (21) and STV1 (22). VPH1 encodes a 95-kDa protein, which possesses a hydrophilic amino-terminal domain of approximately 45 kDa and a carboxyl-terminal hydrophobic domain of approximately 50 kDa containing 6-7 putative transmembrane helices (21). STV1 encodes a 102-kDa protein that shares the same domain arrangement and is 54% identical in amino acid sequence with the product of the VPH1 gene (Vph1p) (22). Disruption of the VPH1 gene leads to somewhat reduced growth at neutral pH relative to acidic pH (21), and disruption of STV1 has no obvious phenotypic consequences (22). By contrast, disruption of both VPH1 and STV1 leads to the typical Vma− phenotype, including the inability to grow at neutral pH and hypersensitivity to Ca2+ (22).
These results suggest that the VPH1 and STV1 gene products are at least partially able to substitute for each other. The reason that yeast possess two genes encoding the 100-kDa subunit is uncertain, but the results available thus far suggests that V-ATPases possessing Vph1p and Stv1p may be targeted to different intracellular membranes (22). In order to determine whether the 100-kDa subunit plays a direct role in proton transport by the V-ATPase complex, site-directed mutagenesis of the VPH1 gene product has been carried out in a strain lacking endogenous Vph1p and Stv1p. EXPERIMENTAL PROCEDURES Materials and Strains-Zymolyase 100T was obtained from Seikagaku America, Inc. [ 35 S]Trans-label was purchased from ICN. Bafilomycin A 1 was a kind gift from Dr. Karlheinz Altendorf (University of Osnabruck). Leupeptin was from Boehringer Mannheim. 9-Amino-6chloro-2-methoxyacridine was from Molecular Probes, Inc. ATP, phenylmethylsulfonylfluoride, and most other chemicals were purchased from Sigma. Transformation-Yeast cells were transformed with the wild type plasmid (pRS316-VPH1), mutants or the vector pRS316 alone (as a negative control) using the lithium acetate procedure (41) and were selected on Ura Ϫ plates as described previously (42). The mutants were tested for growth on pH 7.5 or pH 5.5 YPD plates buffered with 50 mM KH 2 PO 4 and 50 mM succinic acid (43). Isolation of Vacuolar Membrane Vesicles-Vacuolar membrane vesicles were isolated using a modification of the protocol described by Kim et al. (44). Yeast were grown overnight at 30°C to 1 ϫ 10 7 cells/ml in 1 liter of selective medium. Cells were pelleted, washed once with water, and resuspended in 50 ml of 10 mM dithiothreitol, 100 mM Tris-HCl, pH 9.4. After incubation at 30°C for 15 min, cells were pelleted again, resuspended in 50 ml of YEPD medium containing 0.7 M sorbitol, 2 mM dithiothreitol, and 100 mM Tris-Mes, pH 7.5, and 5 mg of Zymolyase 100T and incubated at 30°C with gentle shaking for 90 min. The resulting spheroplasts were washed twice with ice-cold 1.2 M sorbitol, and pelleted at 3500 ϫ g for 10 min at 4°C. The pellet was resuspended in 40 ml of homogenization buffer (10% glycerol, 1.5% polyvinylpyrrolidone (M r 40,000), 0.25 mM MgCl 2 , 2 mg/ml bovine serum albumin, 50 mM Tris-ascorbate, pH 7.5, 1 mM phenylmethylsulfonyl fluoride, 1 mg/ml leupeptin), transferred to a Dounce glass homogenizer and subjected to 20 strokes with a tight fitting pestle. The homogenate was centrifuged at 3500 ϫ g for 15 min at 4°C, and the supernatant was transferred to a Ti 45 centrifuge tube and spun for 35 min at 100,000 ϫ g at 4°C. The pellets were resuspend in 8 ml of overlay medium, which contained 1.1 M glycerol, 2 mM dithiothreitol, 0.25 mM MgCl 2 , 2 mg/ml bovine serum albumin, and 5 mM Tris-Mes, pH 7.6, and homogenized by 10 strokes in a Dounce glass homogenizer using a tight fitting pestle. The homogenate was overlaid onto a one step 30-ml 10 -30% sucrose gradient and centrifuged for 2 h at 100,000 ϫ g in an SW-28 rotor. Material at the 10 -30% interface was collected, diluted 10-fold with overlay medium, and centrifuged at 100,000 ϫ g for 35 min at 4°C. The pellets were resuspend in 0.5-1 ml of overlay medium, quick frozen with liquid nitrogen, and stored at Ϫ80°C until used. Biochemical Characterization-ATPase activity was measured using a coupled spectrophotometric assay (45) with the modification of using 0.35 mM of NADH instead of 0.5 mM NADH. 
ATP-dependent proton transport was measured in transport buffer (25 mM Mes/Tris, pH 7.2, 5 mM MgCl 2 , and 25 mM KCl) using the fluorescence probe amino-6chloro-2-methoxyacridine as described previously (46) in the presence or the absence of 10 nM bafilomycin A 1 . Protein concentration was measured using the Lowry method (47). SDS-polyacrylamide gel electrophoresis was carried out as described by Laemmli (48). Silver staining was performed using the method of Oakley et al. (49). Western blots were probed with mouse monoclonal antibodies 8B1-F3 against the 69-kDa subunit, (from Molecular Probes, Inc.) or 10D7 against the 100-kDa subunit, (a generous gift from Dr. P.Kane), followed by horseradish peroxidase-conjugated secondary antibody (Bio-Rad). Blots were developed using a chemiluminescent detection method obtained from KPL. Quantitations were done using an IS-1000 Digital imaging system (Alpha Innotech Corporation). Immunoprecipitations were carried out as described (50), with the following modifications. Cells were grown overnight in supplemented minimal medium lacking methionine and then converted to spheroplasts by incubation for 20 min with 0.5 unit of zymolase/10 7 cells in SD-Met, 1.2 M sorbitol, 50 mM Tris-Mes, pH 7.5. Aliquots containing 5 ϫ 10 6 spheroplasts were then incubated with [ 35 S]Trans-label (50 Ci) for 60 min at 30°C. Spheroplasts were then pelleted, lysed in phosphatebuffered saline with C 12 E 9 and immunoprecipitated (50) using 7.5 g of purified 8B1-F3 antibody and protein A-Sepharose followed by SDSpolyacrylamide gel electrophoresis on 12% acrylamide gels and autoradiography as described (50). RESULTS It has previously been shown that deletion of genes encoding subunits of the yeast V-ATPase leads to a conditional lethal phenotype such that strains carrying such deletions are unable to grow at neutral pH but are able to grow at acidic pH (51). We have employed a strain in which both the VPH1 and STV1 genes encoding the 100-kDa subunit of the V-ATPase have been disrupted, leading to the typical Vma Ϫ phenotype (22). Expression of the VPH1 gene on a CEN plasmid in this strain leads to growth at pH 7.5. Twenty-two individual site-directed mutations were introduced into the VPH1 gene and the mutant proteins expressed in the double knock out strain. The residues selected for mutation all correspond to polar or charged residues located within the seven putative transmembrane helices in the COOH-terminal half of the 100-kDa subunit. These residues were selected in an effort to identify buried polar or charged amino acids that might directly contribute to the proton conduction pathway in the V-ATPase complex. Polar residues were changed to nonpolar residues of similar size (for example substituting alanine for serine or phenylalanine for tyrosine), whereas charged residues were replaced either by similar polar amino acids (i.e. glutamine for glutamic acid) or by nonpolar amino acids (usually alanine). It is anticipated that replacement of a polar or charged amino acid that directly participates in proton movement across the membrane will disrupt ATP-dependent proton transport. Of these 22 mutations, only three showed greatly reduced growth at neutral pH (E789Q, D425N, and R735L) (data not shown), with the latter two having an identical growth phenotype to the double knock out strain. 
Because as little as 20% of the wild type V-ATPase activity is sufficient to rescue the growth phenotype of a Vma Ϫ strain, 2 it was necessary to measure the vacuolar proton pumping and V-ATPase activity in each mutant strain to assess the effect of the mutation on V-ATPase activity. Fig. 1 shows the ATP-dependent proton transport and the bafilomycin-sensitive ATPase activity for isolated vacuoles purified from a ⌬vph1⌬stv1 strain expressing the pRS316 plasmid alone, pRS316 containing the wild type VPH1 gene, or the VPH1 gene bearing the indicated point mutations. Bafilomycin A 1 has been shown to be a specific inhibitor of the V-ATPases (52). As expected on the basis of the growth phenotype, three mutations (D425N, R735L, and E789Q) showed 20% or less of the wild type proton transport and V-ATPase activity, with both D425N and R735L having virtually no detectable activity. Four additional mutations (K538A, K593A, Q634L, and H743A) showed between 20 and 70% of wild type activity, with Q634L having the lowest activity of this group. The remaining mutants all displayed 70% or greater of wild type activity. None of the mutations resulted in significant uncoupling of proton transport from ATP hydrolysis (i.e. a greater loss of proton transport than ATPase activity), although one mutation (Q634L) did decrease ATPase activity to a somewhat greater extent than proton transport, suggesting a possible increase in coupling efficiency. The most obvious ways in which mutations in the 100-kDa subunit might lead to a decrease in V-ATPase activity is if these mutations resulted in a 100-kDa subunit that was unstable and rapidly degraded or that was unable to correctly assemble with the remaining subunits of the V-ATPase complex. To assess these possibilities, Western blot analysis was performed on isolated vacuoles using antibodies directed against Vph1p and the A subunit of the V-ATPase. It has previously been shown that disruption of the VPH1 gene leads to the inability of the V 1 domain (including the A subunit) to assemble onto the vacuolar membrane (21). Moreover, the absence of any of the V 1 subunits (with the exception of the 54-kDa subunit (14)) leads to loss of assembly of the entire V 1 domain onto the vacuolar membrane (53,54). Thus the pres-ence of the 100-kDa and A subunits on the vacuolar membrane provides a reasonable measure of V-ATPase assembly. Fig. 2 shows a Western blot carried out on vacuolar membranes from the wild type, deletion, and mutant strains using antibodies directed against either Vph1p or the A subunit, whereas Fig. 3 shows the results of quantitative analysis of the Western blot as well as the activity data on each mutant for comparison. As can be seen, two mutants (R735L and Q634L) showed greatly reduced levels of the 100-kDa subunit in the vacuolar membrane as well as greatly reduced levels of A subunit. These mutations thus appear to affect stability of the 100-kDa subunit. Two other mutations (D425N and K538A) show significantly reduced levels of A subunit associated with the vacuolar membrane with only slightly reduced levels of the 100-kDa subunit. These mutations thus appear to have an effect on assembly of the V-ATPase complex. It should be noted from Fig. 3 that the D425N mutant, while still having detectable levels of assembly as assessed by the presence of the A subunit, is completely devoid of proton transport and ATPase activity, suggesting that any V-ATPase that does assemble is inactive. 
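As a small illustration of how the activity comparisons above are expressed, the sketch below computes bafilomycin-sensitive activity as the difference between total activity and the activity remaining in 10 nM bafilomycin, normalizes it to wild type, and bins mutants into the three classes used in the text. The mutant numbers are invented placeholders, not data from the paper.

```python
def baf_sensitive(total, plus_baf):
    """Bafilomycin-sensitive ATPase activity = total activity minus activity remaining with bafilomycin."""
    return total - plus_baf

# Wild type: ~3.3 umol ATP/min/mg with ~80% of it bafilomycin-inhibitable (values quoted in the text)
wt = baf_sensitive(3.3, 0.66)

mutants = {                                   # hypothetical mutant measurements (umol/min/mg)
    "mutA": baf_sensitive(0.40, 0.30),
    "mutB": baf_sensitive(1.60, 0.40),
    "mutC": baf_sensitive(2.90, 0.55),
}

for name, act in mutants.items():
    pct = 100.0 * act / wt
    if pct <= 20:
        group = "strongly impaired (<=20% of WT)"
    elif pct < 70:
        group = "partially impaired (20-70% of WT)"
    else:
        group = "near wild type (>=70% of WT)"
    print(f"{name}: {pct:5.1f}% of WT -> {group}")
```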
This result suggests that Asp 425 may play a role in both assembly and activity of the V-ATPase. The remaining mutants have near normal levels of the 100-kDa and A subunits. In particular, the E789Q mutant shows wild type levels of both subunits while possessing less than 20% of the wild type levels of proton transport and ATPase activity. Similarly, examination of Fig. 3 1. Effect of Vph1p mutations on bafilomycin-sensitive ATPase activity and ATP-dependent proton transport in purified vacuolar membrane vesicles. ATPase activity and ATP-dependent proton transport were measured on aliquots of purified vacuolar membrane vesicles containing 5 g of protein as described under "Experimental Procedures." Activities are expressed relative to the ⌬vph1⌬stv1 strain expressing the pRS316 plasmid containing the wild type VPH1 gene (defined as 100%). The specific ATPase activity of these vacuolar membrane vesicles was 3.3 mol ATP/min/mg protein at saturating ATP and 37°C, with approximately 80% of the ATPase activity inhibitable by 10 nM bafilomycin. All of the ATP-dependent proton transport in vacuolar membrane vesicles isolated from cells expressing the wild type Vph1p was inhibitable by 10 nM bafilomycin. No ATP-dependent proton transport or bafilomycin sensitive-ATPase activity was observed in vacuolar membrane vesicles isolated form cells transformed with the vector alone. Each bar represents the average of two or three determinations made on two or three independent vacuolar membrane preparations, with the error corresponding to the standard deviation. WT, wild type. Vph1p Mutagenesis proton transport and ATPase activity as predicted on the basis of the amount of assembled V-ATPase. To test whether, for these mutations, the observed decrease in activity might be due to loss of one of the other V-ATPase subunits from the complex, the following experiment was performed. Cells were converted to spheroplasts and metabolically labeled with [ 35 S]Trans-label for 60 min at 30°C followed by cell lysis, detergent solubilization, and immunoprecipitation using the anti-A subunit antibody 8B1-F3 and protein A-Sepharose. As can be seen in Fig. 4, the complete complement of V 1 and V 0 subunits are immunoprecipitated from cells expressing wild type Vph1p, whereas only the V 1 subunits are immunoprecipitated in the vector control. Of the three mutants listed above, H743A and E789Q showed wild type patterns of immunoprecipitation, whereas K593A showed the normal pattern of subunits but with somewhat reduced levels of the V 0 subunits, as predicted on the basis of the data shown in Fig. 3. Thus, all three of these mutations appear to impair activity rather than assembly of the V-ATPase complex. It should be noted, however, that the absence of one of the smaller V-ATPase subunits (i.e. the VMA7 or VMA10 gene products) might not be detectable by this method and that no test for "correct" assembly of the V-ATPase complex has been performed. To further characterize these mutants, we have determined the K m for ATP and V max values as well as the pH optima for the wild type and for each of the mutants affecting activity as well as for Q634L. As can be seen from the data in Table I, no change greater than 40% was observed in K m values for any of the mutations tested, although the V max values are in good agreement with the activity data shown in Fig. 1. These data indicate that the observed decrease in activity in these mutants is not due to a greatly diminished affinity for ATP. 
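The Km and Vmax determinations referred to above amount to fitting bafilomycin-sensitive activity against ATP concentration with the Michaelis-Menten equation over the 0.5-10 mM range described in the Table I legend. A minimal sketch with synthetic rate data (not values from Table I) is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

atp = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 10.0])     # mM MgATP
rate = np.array([1.5, 2.1, 2.6, 2.95, 3.1, 3.2])    # umol ATP/min/mg, synthetic illustration data

(vmax, km), _ = curve_fit(michaelis_menten, atp, rate, p0=[3.0, 1.0])
print(f"Vmax = {vmax:.2f} umol ATP/min/mg, Km = {km:.2f} mM")
```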
Interestingly, two of the mutations did result in a very significant shift in the pH optima of the enzyme. Thus, whereas the optimum pH for the wild type V-ATPase was approximately 7.2, that for H743A was 8.2, whereas that for E789Q was 9.7. In addition, the activity at the pH optimum of these mutants was approximately 75% (for H743A) and 30% (for E789Q) of the activity at the optimum pH of the wild type. An important question with regard to the V-ATPases is the mechanism by which they translocate protons. For the bovine coated vesicle V-ATPase, the V 0 domain, which is responsible for proton translocation (58), is composed of four subunits of approximate molecular masses 100, 38, 19, and 17 kDa (subunit c) (57) that are present in a stoichiometry of 100 1 38 1 19 1 c 6 (27). Of these subunits, only the 17-kDa c subunit has been shown to directly participate in proton translocation by virtue of its reaction with dicyclohexylcarbodiimide (59). By analogy with the F-ATPase c subunit (60), dicyclohexylcarbodiimide reacts with a buried carboxyl group located near the middle of the last of four putative transmembrane helices. Although a low level of proton translocation has been reported for the isolated, reconstituted c subunit (61), optimal levels of dicyclohexylcarbodiimide-inhibitable proton transport are only observed when the complete complement of V 0 subunits are reassembled prior to reconstitution (58). The basis for difference in proton conduction properties of the native and reassembled V 0 domains remains uncertain. In the case of the F-ATPases, the F 0 domain of Escherichia coli is composed of three subunits of molecular masses 30 (a), 17 (b), and 8 kDa (c) that are present in a stoichiometry of a 1 b 2 c 10 -12 (62). Reassembly studies indicate that for F 0 , all three subunits are required to form a functional proton channel (63,64). Mutational analysis has demonstrated that the buried carboxyl group located in the second of two transmembrane helices of the c subunit is critical for proton translocation (60). Moreover, genetic analysis of the a subunit indicates the presence of several buried polar and charged residues, particularly in the last two transmembrane helices, which also play a critical role in proton transport (65,66). Thus, substitutions at Ser 206 , Arg 210 , and Glu 219 in the fifth putative transmembrane helix or His 245 in the sixth putative transmembrane helix of the a subunit significantly impair proton translocation through F 0 . These studies have led to a model in which the proton conduction pathway is composed of several residues of the a subunit, possibly arranged as an amphipathic helix, with the buried carboxylate of the c subunit providing a gate in the conduction pathway (60,65). By analogy with the F-ATPases, we would predict that the V-ATPases should have some homolog to the a subunit, which would serve a comparably important role in proton translocation. Of the V 0 subunits besides the c subunit (which contains no more buried polar or charged residues than the F 0 c subunit (38,25)), the 36-kDa subunit (Vma6p) contains no putative transmembrane helices (23,67). The sequence of the highly hydrophobic mammalian 19-kDa subunit has not been obtained; however, no yeast counterpart to this subunit has yet been identified. The remaining yeast V 0 subunit, the 13-kDa product of the VMA10 gene (24), resembles the product of the VMA6 gene in possessing no putative transmembrane helices. 
Thus the only yeast V 0 subunit that has been identified that might serve the role of the a subunit in the V-ATPase complex is the 100-kDa subunit. FIG. 2. Effect of Vph1p mutations on stability of the 100-kDa subunit and association of the 69-kDa A subunit with the vacuolar membrane. Vacuolar membrane vesicles (5 g) isolated from the ⌬vph1⌬stv1 strain expressing the wild type VPH1 gene, the VPH1 gene bearing the indicated mutations, or the vector alone were subjected to SDS-polyacrylamide gel electrophoresis on a 12% acrylamide gel followed by transfer to nitrocellulose and Western blot analysis using the monoclonal antibody 10D7 against Vph1p or 8B1-F3 against the 69-kDa A subunit as described under "Experimental Procedures." WT, wild type. Vph1p Mutagenesis The 100-kDa subunit is composed of an NH 2 -terminal hydrophilic domain of 45 kDa and a COOH-terminal hydrophobic domain of 55 kDa containing 6 -7 putative transmembrane helices (21,22,68). In addition to the two yeast 100-kDa subunit genes (VPH1 and STV1), three additional cDNAs from mouse, rat, and bovine have been cloned (68 -70), with 40 -95% amino acid identity observed between pairs of these sequences (22). Based upon hydropathy analysis (21), a tentative model for the folding of the 100-kDa subunit in the membrane is shown in Fig. 5. The amino-terminal hydrophilic domain has been placed on the lumenal side of the membrane based upon two observations. First, labeling of the coated vesicle 100-kDa subunit by membrane impermeant reagents is only observed after detergent permeabilization of the membrane (27), suggesting that the large hydrophilic domain is sequestered within the lumen of the coated vesicle. Second, proteolysis of the 100-kDa subunit by trypsin in intact coated vesicles results in cleavage at a site between transmembrane helices five and six. 3 Because coated vesicles are oriented with the cytoplasmic surface exposed, this places the loop between H5 and H6 on the cytoplasmic side of the membrane. Tracing of the polypeptide back to the amino terminus places the amino-terminal domain on the lumenal side of the membrane. There are other data, however, that make this assignment tentative. Thus protease treatment of intact yeast vacuoles leads to disappearance on Western blots of any band recognized by a polyclonal antiserum raised against a peptide located in the amino-terminal domain of Vph1p, suggesting that this epitope is exposed on the cyto-plasmic side of the vacuole. 4 Further work is thus necessary to resolve the actual orientation of the 100-kDa subunit in the membrane. Like the F 0 a subunit, the 100-kDa subunit possesses multiple polar and charged residues located within the putative transmembrane helices in the hydrophobic COOH-terminal domain. To test whether the 100-kDa subunit might be playing an analogous role to the a subunit in proton translocation, we carried out site-directed mutagenesis of 22 such polar and charged residues in this domain. The residues selected for mutagenesis (shown circled in Fig. 5) are all conserved between the available 100-kDa sequences from yeast, mouse, rat, and bovine. Interestingly, most of the mutations tested did not show any impairment of proton translocation. These residues are presumably not individually critical for proton translocation by the V-ATPase, although it is possible that replacement of several of these residues together might impair proton transport. 
Mutations at Arg 735 and Gln 634 in H6 and the border of H5, respectively, led to the nearly complete absence of the 100-kDa subunit in the vacuolar membrane, suggesting that the proteins containing these mutations were either unstable and rapidly degraded or else mistargeted to some other intracellular membrane. Given the results that suggest that the 100-kDa subunit possesses targeting information in yeast (22), the latter possibility should not be ruled out. However, the fact that for most integral membrane proteins in yeast targeting to the vacuole represents the default pathway (71) makes it more 3 I. Adachi and M. Forgac, unpublished data. 4 M. Manolson, unpublished observations. FIG. 3. Comparison of effects of mutations in Vph1p on bafilomycin-sensitive ATPase activity, ATP-dependent proton transport, and the presence of A subunit on isolated vacuolar membrane vesicles. Bafilomycin-sensitive ATPase activity and ATP-dependent proton transport were measured on vacuolar membrane vesicles (5 g of protein) as described in the legend to Fig. 1. Each bar represents the average of two determinations made on a single vacuolar preparation. An aliquot of the same preparation containing 5 g of protein was analyzed by Western blot using the antibody 8B1-F3 against the 69-kDa A subunit as described in the legend to Fig. 2, and the resultant blot was quantitated using an IS-1000 Digital Imaging System from Alpha Innotech Corporation. WT, wild type. Vph1p Mutagenesis likely that these mutations have affected stability of the 100-kDa subunit. Mutations at Asp 425 and Lys 538 in H1 and at the border of H3, on the other hand, did not prevent folding and targeting of the 100-kDa subunit to the vacuolar membrane but did interfere with proper assembly of the V-ATPase complex as demonstrated by the reduced level of the peripheral A subunit on the vacuolar membrane. Because these four mutations prevented the appearance of a V-ATPase complex in the vacuolar membrane, it is not possible to determine the role of the corresponding residues in proton transport. The mutations of greatest interest are those that inhibited proton transport and ATPase activity but did not have obvious effects on assembly or stability of the V-ATPase complex. The three residues that fell into this last category are Lys 593 at the border of H4, His 743 in H6, and Glu 789 in H7. For both K593A and H743A, proton transport and ATPase activity were only 50% of that predicted on the basis of the amount of assembly observed, whereas for E789Q, the V-ATPase complex was vir- FIG. 4. Effect of Vph1p mutations on V-ATPase assembly. Yeast cells (the ⌬vph1⌬stv1 strain) expressing the wild type VPH1 gene, the indicated mutations or the vector alone were grown overnight in methionine-free medium followed by conversion to spheroplasts and incubation with [ 35 S]Trans-label (50 Ci/5 ϫ 10 6 spheroplasts) for 60 min at 30°C. Spheroplasts were then pelleted and lysed in phosphate-buffered saline with C 12 E 9 , and the V-ATPase immunoprecipitated using the monoclonal antibody 8B1-F3 directed against the 69-kDa A subunit and protein A-Sepharose followed by SDS-polyacrylamide gel electrophoresis on a 12% acrylamide gel and autoradiography as described under "Experimental Procedures." The positions of the V-ATPase subunits are indicated and were confirmed by comparison with the migration of 14 C-labeled molecular mass standards. WT, wild type. 
The orientation of the 100-kDa subunit in the membrane is based upon labeling and proteolysis data obtained for the 100-kDa subunit of the bovine coated vesicle V-ATPase (see text). In particular, the position at which trypsin cleaves the bovine 100-kDa subunit from the cytoplasmic side of the membrane is indicated. Residues that were mutated in this study are circled. Two mutations (R735L and Q634L) affected stability of the 100-kDa subunit, two mutations (D425N and K538A) affected assembly of the V-ATPase complex, whereas three mutations (H743A, K593A, and E789Q) affected proton transport and ATPase activity of the V-ATPase complex. K593A, Q634L, H743A, and E789Q pH optima were determined by measurement of bafilomycin-sensitive ATPase activity in purified vacuolar membranes (5 g of protein) over the pH range of 5.5-10.5 in the assay mixture. K m for ATP and V max values were determined for bafilomycin-sensitive ATPase activity on purified vacuolar membranes (5 g of protein) over a range of ATP concentrations of 0.5-10 mM. MgCl 2 was varied such that the total Mg 2ϩ concentration was in all cases 1.0 mM higher than the ATP concentration. This ensured that most of the ATP was present as the MgATP complex but avoided the possibility of inhibition of activity due to high concentrations of free Mg 2ϩ . The V max values are reported relative to the wild type, which had a specific activity of 3.3 mol ATP/min/mg protein. The values shown are the average of two independent determinations (at five ATP concentrations each) on a single vacuolar membrane preparation for each mutant, with the errors corresponding to the average deviation from the mean. Vph1p Mutagenesis tually completely devoid of activity. In addition, both the H743A and E789Q mutations resulted in a significant change in the pH optimum of the enzyme. While it is difficult to assign a precise interpretation to these findings, they may reflect an alteration in the environment of residues whose protonation state is important for transport to occur. Interestingly, as with the F 0 a subunit, positively and negatively charged residues in the last two putative transmembrane helices appear to be important for activity, possibly serving to line a polar channel necessary to allow protons to gain access to the buried carboxyl group of the c subunit. These results suggest that the 100-kDa subunit may serve an analogous role to the a subunit in the V-ATPase complex. It is important to recognize, however, that further work will be required to determine whether the residues identified play a direct role in proton translocation or whether they serve some other function in the V-ATPase complex, such as coupling of proton transport to ATP hydrolysis. We have previously presented data that suggest that the 100-kDa subunit may also possess the binding site for the specific V-ATPase inhibitor bafilomycin A 1 (58). Thus, reconstituted V 0 or isolated 100-kDa subunit are both able to protect the intact V-ATPase from inhibition by bafilomycin. When the 100-kDa mutants constructed in the present study were tested for their sensitivity to 1 nM bafilomycin (a subsaturating concentration that inhibits 40 -50% of the V-ATPase activity in yeast vacuoles), no significant differences in bafilomycin sensitivity were observed. Obviously mutations that resulted in complete loss of activity (such as D425N and R735L) could not be tested for bafilomycin sensitivity, but none of the remaining residues appears to be critical for bafilomycin binding to the V-ATPase. 
It is possible that the bafilomycin binding site might reside on the soluble amino-terminal domain or that hydrophobic rather than hydrophilic residues in the integral domain are important in bafilomycin binding. Further studies will be necessary to resolve this question.
7,008.4
1996-09-13T00:00:00.000
[ "Biology" ]
Genetic parentage reveals the (un)natural history of Central Valley hatchery steelhead Abstract Populations composed of individuals descended from multiple distinct genetic lineages often feature significant differences in phenotypic frequencies. We considered hatchery production of steelhead, the migratory anadromous form of the salmonid species Oncorhynchus mykiss, and investigated how differences among genetic lineages and environmental variation impacted life history traits. We genotyped 23,670 steelhead returning to the four California Central Valley hatcheries over 9 years from 2011 to 2019, confidently assigning parentage to 13,576 individuals to determine age and date of spawning and rates of iteroparity and repeat spawning within each year. We found steelhead from different genetic lineages showed significant differences in adult life history traits despite inhabiting similar environments. Differences between coastal and Central Valley steelhead lineages contributed to significant differences in age at return, timing of spawning, and rates of iteroparity among programs. In addition, adaptive genomic variation associated with life history development in this species varied among hatchery programs and was associated with the age of steelhead spawners only in the coastal lineage population. Environmental variation likely contributed to variations in phenotypic patterns observed over time, as our study period spanned both a marine heatwave and a serious drought in California. Our results highlight evidence of a strong genetic component underlying known phenotypic differences in life history traits between two steelhead lineages. been transplanted by humans, how dynamic traits respond depends on the relative response of genetic variation developed in the previous environment to new environmental cues (Yamamichi, 2022). Steelhead, the migratory anadromous form of the salmonid species Oncorhynchus mykiss, exhibits variation in numerous life history traits.Steelhead undergo complex phenotypic, behavioral, and physiological modifications enabling migration from their natal streams to the ocean, where they mature for at least 1 year before returning to their natal streams to spawn.Unlike most other anadromous salmonid species that die following their first reproduction (Christie et al., 2018), O. mykiss individuals may survive through multiple reproductive events, and the frequency of iteroparity varies among populations.Populations often contain multiple life history strategies in dynamic proportions, with alternative ecotypes frequently interbreeding (Kendall et al., 2015;Ohms et al., 2013;Olsen et al., 2006;Satterthwaite et al., 2009;Sloat & Reeves, 2014).Both evolutionary and ecological mechanisms influence the life history expression of O. 
mykiss (Kendall et al., 2015;Pearse et al., 2019;Phillis et al., 2016), and important adaptive genomic variation has been identified for key migratory life history traits (Hess et al., 2016;Waples et al., 2022;Waters et al., 2021;Willis et al., 2020).In particular, one genetic region, a 55-Mb double-inversion on chromosome Omy05, features ancestral (A) and rearranged (R) variations that have been repeatedly associated with multiple traits, including egg and early juvenile development (Miller et al., 2012;Nichols et al., 2008;Sundin et al., 2005), juvenile growth (Rundio et al., 2021), age at spawning (Beulke et al., 2023), and sex-specific resident and anadromous migratory strategies (Arostegui et al., 2019;Pearse et al., 2014Pearse et al., , 2019)).Because of this association with multiple life history traits and population specific differences, the Omy05 inversion complex (hereafter "Omy05"), appears to influence the fast-slow development continuum, consistent with the important role of "pace of life" in animal life history development, age-specific mortality, and reproduction (Healy et al., 2019).Thus, comparisons of different genetic lineages inhabiting the same environment can help to elucidate the relative effects of adaptive genetic variation and environmental factors on the expression of life history traits. California is composed of multiple microhabitats with distinct environmental conditions and comprises the southern extent of the native range of O. mykiss (Abadía-Cardoso et al., 2016;Satterthwaite et al., 2010;Sogard et al., 2012).The construction of dams, which may form impassable barriers to spawning habitat and modify natural streamflows, has contributed to the decline of anadromous O. mykiss and other native fishes (He & Marcinkevage, 2017;Lindley et al., 2006).This decline prompted the creation and maintenance of hatchery populations to support anadromous fish in their native ranges.Anadromous salmonid hatcheries most often operate as semi-captive populations; juveniles are reared on-site and released to migrate and mature in the ocean before returning as adults that are manually spawned at the hatchery.In the California Central Valley (CCV), four hatchery programs rear and release steelhead in highly regulated watersheds below dams.Notably, one of them was founded with non-native steelhead of coastal California origin, which are known to be genetically distinct from CCV steelhead (Pearse & Garza, 2015).While regulation of CCV streams by dams homogenizes stream flows throughout the year (Sogard et al., 2012), the CCV displays higher temporal and spatial variation in stream flow and temperature, lower rainfall, higher summer temperatures, and high variation among watersheds (Satterthwaite et al., 2009;Sogard et al., 2012) as compared to coastal California habitats.Statedependent models predict different O. 
mykiss life history compositions among CCV rivers based on this high environmental variability (Satterthwaite et al., 2009;Sogard et al., 2012).Thus, the CCV study system provides an excellent opportunity to compare variation in life history traits among different genetic lineages now exposed to similar environmental conditions.Furthermore, California experienced an intense drought between 2012 and 2016 (Eschenroeder et al., 2022;Herbold et al., 2018), as well as a marine heatwave in 2015-2016 which severely impacted the ocean conditions experienced by salmonids (Di Lorenzo & Mantua, 2016;Free et al., 2023), providing an additional opportunity to investigate shifts in life history trait patterns in response to a sudden environmental change. In this study, we investigated life history variation in steelhead from the four hatchery populations in the CCV, three of which were founded from local sources, while the fourth was founded from a distinct coastal steelhead lineage (see "Study System" below).We nonlethally collected fin clips from every spawning steelhead from 2011 to 2019, including during the record-setting drought and marine heatwave events.These fin clips enabled both population genetic analysis and parentage-based tagging (PBT; Anderson & Garza, 2006), which has been successfully employed to understand and manage anadromous fish populations (Abadía-Cardoso et al., 2013;Evans et al., 2018;Horn et al., 2022).We reconstructed pedigrees with 13,576 parent-offspring trios and 19,043 unique adult steelhead, representing nearly all steelhead spawned at four CCV hatcheries over 9 years, spanning two to three generations. These data allowed us to describe temporal changes in patterns of iteroparity, age at spawning, migration (straying) of hatchery steelhead, and adaptive genetic variation associated with life history over almost a decade, highlighting the interaction between genetic and environmental factors influencing important life history traits. | Study system The CCV contains the Sacramento-San Joaquin River system, a highly impacted region that occupies the central part of California (Figure 1).This low-elevation area has warmer seasonal temperatures compared with northern O. mykiss habitats (Eschenroeder et al., 2022;McEwan, 2001).Landscape and hydrograph alterations from dams built over more than a century have reduced access to over 80% of previous salmonid spawning grounds and homogenized temperature and flow profiles, contributing to decreased numbers of anadromous steelhead (Eschenroeder et al., 2022;He & Marcinkevage, 2017;Lindley et al., 2006).Four hatchery programs produce steelhead in the CCV to mitigate these effects: Coleman National Fish Hatchery (CH), Feather River Hatchery (FRH), Mokelumne River Hatchery (MRH), and Nimbus Hatchery (NH; Figure 1).Situated on different tributaries of the Sacramento River, these four hatcheries capture and spawn returning adult steelhead, incubate the eggs, and rear and release hundreds of thousands of marked (adipose-fin removed) hatchery-produced juveniles each year (California HSRG, 2012). 
The steelhead spawned at CH, FRH, and MRH were all derived from local CCV populations ("CV-lineage") and are part of the California Central Valley Distinct Population Segment (DPS) that is protected as "threatened" under the US Endangered Species Act (NMFS, 2006(NMFS, , 2020)).While CH is genetically distinct, FRH and MRH broodstock were previously shown to be almost genetically identical due to increased transfers of FRH eggs from 2002 to 2007, when steelhead returns to MRH were low (Del Real et al., 2012;Pearse & Garza, 2015). Unlike the other three hatcheries, the NH broodstock was founded by the importation of eggs from coastal California steelhead populations beginning in the 1950s, shortly after the construction of Nimbus Dam (California HSRG, 2012).Consequently, NH steelhead are more genetically similar to coastal steelhead populations than to CV-lineage hatchery steelhead (Pearse & Garza, 2015). For this reason, steelhead from NH are not included in the CCV steelhead DPS and are managed as a "segregated" program that does not incorporate unmarked (natural-origin) fish (McEwan, 2001;NMFS, 2006NMFS, , 2020)).However, this does not prevent NH-origin steelhead from mating in the wild with each other or with listed CV-lineage steelhead.It is also possible that steelhead migrants from the CCV hatcheries could be spawned at Nimbus, although they are phenotypically distinct and efforts are made to visually identify and exclude them from the broodstock. Steelhead begin returning to CCV hatcheries in late October and continue through late March.Spawning is typically conducted between December and February, but varies among programs (Figure S1).Not all steelhead that return to a hatchery are spawned.All hatcheries attempt to exclude nonanadromous O. mykiss (freshwater resident rainbow trout) by spawning only fish larger than 16 inches (40.64 cm).Rarely, hatchery staff exclude some returning steelhead from spawning because they are phenotypically distinct (a notable case occurred in 2017 at NH when 166 returning fish were not spawned because they were phenotypically dissimilar from NH broodstock, with later genetic analyses confirming these fish were migrants from MRH).At all hatcheries, eggs are stripped from females and fertilized with milt from one or two males.At hatcheries with fewer than 250 returning female steelhead on average per season (MRH and NH), each female's eggs are divided between two males.At the two larger hatcheries (CH and FRH), there are more than 250 returning females on average, each mated with a single male.This practice is intended to mitigate reductions in effective population size (Ne) at the two smaller programs.Hatcheries also differ in how long postspawn steelhead are held before release, which can affect the frequency of repeat spawning within a single season (Fisher & Julienne, 2023).At all four hatcheries, juvenile steelhead are raised on-site through a year of life before being released either at the hatchery or downstream in the same river; however, size at release varies among the hatcheries.Juvenile steelhead from all four hatcheries are marked by the removal of their adipose fins shortly before release. This practice allows natural steelhead to be visually differentiated from hatchery-produced steelhead throughout their lifespan, as adipose fins do not regenerate when fully removed. 
| Sampling and DNA extraction Tissue samples were taken from each fish spawned at all four hatcheries from 2011 to 2019 and dried on blotting paper in ventilated coin envelopes. The date of spawning, phenotypically identified sex, length, and presence/absence of an adipose fin were recorded for each sample. DNA was extracted from dried fin tissue with QIAGEN DNeasy 96 Tissue Kits following the manufacturer's animal-tissue protocol using a BioRobot 3000 (QIAGEN Inc.). DNA was then diluted 1:2 in ddH2O prior to genotyping. | SNP loci, genotyping, and basic population genetics analysis Samples were genotyped with a panel of 96 biallelic SNP markers (Abadía-Cardoso et al., 2013), including a Y chromosome-linked marker to determine genetic sex (Brunelli et al., 2008). However, the marker composition of the panel varied slightly over time, with 92 loci genotyped across all years of the study; markers not typed across all years were removed from downstream analyses (Table S1). All individuals were genotyped using TaqMan assays (Applied Biosystems) on 96.96 Dynamic Genotyping Arrays with the EP1 Genotyping System (Fluidigm Corporation) following the manufacturer's protocols. Two negative controls were included in each array, and genotypes were called using SNP Genotyping Analysis software v. 3.1.1 (Fluidigm). To evaluate genotyping error rates for each SNP marker, we inferred parent-offspring trios using parentage analysis (see below) and estimated the minimum genotyping error rate expected to produce the Mendelian incompatibilities observed at each marker across the trios. Of the 23,670 genotyped samples, 83 yielded low-quality genotypes after the initial round of genotyping (indicated primarily by large fractions of missing genotypes). These samples were re-genotyped. Any individuals missing more than 10% of loci (fewer than 82 successful genotype calls) were identified and removed. We utilized the R package "strataG" (version 2.0.2; Archer et al., 2016) to calculate the mean expected and observed heterozygosity averaged over loci for all genotypes of recorded spawned steelhead for each hatchery, excluding individuals that returned to the hatchery but were not spawned. F_ST values between years across the study period, both within and between hatcheries, were calculated using strataG on a random subset of 300 broodstock from each hatchery. Next, to evaluate gene flow and population structure among programs, this subset of 1200 individuals (300 from each hatchery) was evaluated in the model-based clustering program STRUCTURE version 2.2 (Falush et al., 2003; Pritchard et al., 2000) with a hypothesized number of genetic groups of K = 2, 3, or 4. Finally, a principal component analysis was conducted on this subset of data to visualize relationships among hatchery programs.
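The heterozygosity summaries and the missing-genotype filter described above were computed with strataG in R; the short Python sketch below is only an illustration of the same quantities, assuming genotypes coded as 0/1/2 copies of a reference allele with NaN for missing calls (the function name and toy matrix are hypothetical, not part of the study's pipeline).

```python
import numpy as np

def filter_and_heterozygosity(geno, max_missing_frac=0.10):
    """geno: (n_individuals, n_loci) array of biallelic genotypes coded as
    0/1/2 copies of the reference allele, with np.nan for missing calls."""
    geno = np.asarray(geno, dtype=float)
    # Drop individuals missing more than 10% of loci (cf. the 82-call threshold).
    miss_frac = np.isnan(geno).mean(axis=1)
    kept = geno[miss_frac <= max_missing_frac]

    het_obs, het_exp = [], []
    for locus in kept.T:
        calls = locus[~np.isnan(locus)]
        if calls.size == 0:
            continue
        p = calls.mean() / 2.0               # reference-allele frequency
        het_obs.append(np.mean(calls == 1))  # observed heterozygote fraction
        het_exp.append(2.0 * p * (1.0 - p))  # Hardy-Weinberg expectation
    return kept, float(np.mean(het_obs)), float(np.mean(het_exp))

# Toy example: 4 fish x 3 loci (one fish exceeds the missingness threshold)
toy = np.array([[0, 1, 2], [1, 1, np.nan], [2, 1, 0], [0, 0, 1]], dtype=float)
kept, ho, he = filter_and_heterozygosity(toy)
print(kept.shape, round(ho, 3), round(he, 3))
```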
| Matching samples, repeat, and iteroparous spawners We refer to individuals that enter the hatchery and are spawned multiple times within a single year as "repeat spawners." These can be differentiated from iteroparous individuals that spawn in more than 1 year. Each time any fish is spawned at these hatcheries, a tissue sample is collected and assigned a unique sample ID. Therefore, the same individual may occur multiple times in our dataset with different sample IDs. In order to identify all unique sample IDs belonging to a single repeat spawning or iteroparous individual, we searched our genotype database for samples with identical or near-identical genotypes using the close_matching_samples() function from the R package "rubias" (Moran & Anderson, 2018). A preliminary analysis with a minimum of 80% of markers with matching genotypes provided a visualization of the distribution of numbers of matching genotypes (Figure S2), from which it was clear that pairs of samples from the same individual shared at least 95% of genotypes. Thus, we identified "clusters" of sample IDs that were from the same individual, including matches observed between different hatcheries, and determined the number of iteroparous and repeat spawners at each program overall, by year, and by sex. To handle cases where more than two sample IDs were from the same fish, we created a graph by defining edges between all pairs of sample IDs that were from the same individual, and then identified all the sample IDs associated with a single fish as members of a connected component in that graph using the R package "igraph" (Csardi & Nepusz, 2006). The significance of spawner sex and hatchery program on the type of multiple spawning event (iteroparity vs. repeat spawning) was determined using Kruskal-Wallis rank sum tests. | Pedigree reconstruction To infer the multigenerational pedigree, we conducted parentage analyses, separately for each spawn year and hatchery program included in the study period, using our package HatcheryPedAgree (https://github.com/eriqande/HatcheryPedAgree) to implement the SNP Program for Intergenerational Tagging (SNPPIT; Anderson, 2012). SNPPIT assigns each offspring to the most likely parent pair (yielding a parent-offspring trio) and calculates a false discovery rate (FDR) score for each offspring assignment to a pair of parents. Before pedigree reconstruction, we removed two markers (genetic sex and Omy_R04944). Based on a preliminary SNPPIT run, the loci Omy_128851-23 and Omy_131965-120 displayed an excess of Mendelian incompatibilities (over 2%) and were also removed from subsequent analyses. Final parent-offspring trios were assigned with 92 loci, assuming a genotyping error rate of 0.005 per gene copy (effectively 1% per locus).
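The matching-sample step described above relies on rubias::close_matching_samples() and igraph in R; as a rough illustration of the underlying idea, the Python sketch below groups samples that share at least 95% of their co-called genotypes into connected components with a simple union-find. All names are illustrative, and the quadratic pairwise loop is only suitable for small toy datasets, not for tens of thousands of samples.

```python
import numpy as np

def match_clusters(geno, sample_ids, min_share=0.95):
    """Group samples whose genotypes match at >= min_share of co-called loci,
    mimicking the close-matching / connected-component step (illustrative only)."""
    geno = np.asarray(geno, dtype=float)
    n = len(sample_ids)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):
        for j in range(i + 1, n):
            both = ~np.isnan(geno[i]) & ~np.isnan(geno[j])
            if both.sum() == 0:
                continue
            share = np.mean(geno[i][both] == geno[j][both])
            if share >= min_share:
                union(i, j)

    clusters = {}
    for i, sid in enumerate(sample_ids):
        clusters.setdefault(find(i), []).append(sid)
    # Clusters with more than one sample correspond to repeat/iteroparous spawners.
    return [ids for ids in clusters.values() if len(ids) > 1]
```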
For pedigree reconstruction, the sample IDs belonging to a single fish were all re-assigned to be the same as the sample ID associated with the most complete genotype for that fish.In cases where we identified one or more loci scored as different homozygotes among the multiple genotypes in a cluster of genotypes from a single individual, we removed that individual from the data set.To account for potential errors in metadata (i.e.spawn date and/or sex recorded by hatchery staff), we reconstructed pedigrees by using the results of two different SNPPIT runs: one (referred to as the "constrained" run) requiring parents to have the same recorded spawn dates and different recorded sexes, and the second (the "unconstrained" run), in which only spawning year was provided for SNPPIT to determine the possible pairings of parents.Potential parents included fish from all hatcheries, and the list of sample ID clusters was referenced while running SNPPIT to ensure iteroparous and repeat spawners were included in the appropriate potential parent pool based on their multiple spawn dates. Only trios for which the maximum a posteriori relationship was "parent-offspring trio" were considered, and only those with FDR ≤0.01 were retained as candidate parent-offspring trios.For most of the offspring, the constrained and unconstrained runs recovered the same parent-offspring trio assignments.Assignments that differed by SNPPIT run type were largely associated with errors in the metadata (incorrect sex or spawn date).We reconciled these disparities by inspecting the associated metadata and FDR scores of the trios in the two different SNPPIT runs, as described fully in the Results section. | Pedigree-based analysis Based on the final pedigrees, parentage assignments and percent of offspring assigned were determined for each parent hatchery, as well as by offspring hatchery, year of spawning, and year of return. The number of offspring per spawning event for females and males was determined, then grouped by type of spawning event (single, iteroparous, and repeat) to calculate both the mean observed reproductive success for each type of spawning event and the mean total observed reproductive success.Repeat and iteroparous spawners that were recorded as different sexes during different spawning events were removed. Offspring age was calculated for all trios by subtracting the parent spawn year (the year the offspring was born) from the offspring spawn year (the year the offspring returned and spawned).Finally, straying rates were calculated for each hatchery program across years.Strays were defined as fish that returned to spawn at a hatchery that differed from that where the parents were sampled. | Omy05 Two SNP loci in our panel (Omy_114448 and Omy_R04944) are located within the inversion complex on chromosome Omy05 (Pearse et al., 2019).Previous analyses have shown that, while Omy_R04944 is nearly perfectly associated with the inversion karyotype, the association of Omy_114448 with the inversion in CCV steelhead is imperfect (Pearse et al., 2014;Pearse & Garza, 2015).Thus, we used locus Omy_R04944 as an indicator of inversion karyotype for all analyses but excluded it from population genetics and pedigree reconstruction.Frequencies of Omy05 karyotypes were determined for each hatchery program to assess patterns related to O. 
mykiss life history in relation to age, sex, and spawn date.Allele frequencies and adherence to the Hardy-Weinberg Equilibrium were evaluated using the R package HardyWeinberg (Graffelman, 2015). Finally, the frequencies of Omy05 genotypes among returning offspring resulting from heterozygous matings (AR × AR) were determined by hatchery and by sex to test for deviations from the expected 1:2:1 genotype frequencies. | Samples Fin clips were collected from returning adult steelhead at all four hatcheries from 2011 to 2019, for a total of 23,670 samples (Table 1). The two largest programs, CH and FRH, yielded 9420 and 8218 fin clips, respectively, whereas 2510 samples were collected from MRH and 3522 from NH (Table 1).However, the number of returns at each hatchery varied across years, with all hatcheries experiencing decreases in 2015 and 2016 (Table 1). | Data preparation overview For samples that were re-genotyped due to low genotyping success, we retained the most complete genotype for those individuals, resulting in 23,670 unique individual genotypes at 92 loci. Setting a minimum of 82 nonmissing loci in the final dataset removed 422 samples (1.78%), leaving 23,248 for further analysis. Recorded phenotypic and genotypic sex were used to determine sex, with nine samples removed for missing both genetic and phenotypic sex. Identifying samples sharing a minimum of 95% matching genotypes revealed that 4119 fin clip samples could be assigned to 1925 clusters, each representing a single individual that had been sampled between two and seven times due to repeat spawning or iteroparity.The majority of these repeat spawning/iteroparous fish were spawned at FRH (53.74%), while 19.29% were spawned at CH, 16.25% were spawned at MRH, and 10.8% were spawned at NH. In 19 of the 1925 clusters, sex was not consistently recorded for the individual; these individuals were removed.Two samples were identified with mismatching homozygous loci in their cluster of genotypes and removed from further analysis, leaving 23,191 unique individuals for pedigree reconstruction. | Population genetics Estimates of heterozygosity were determined from 22,765 recorded spawned broodstock.Rates of heterozygosity at all programs fluctuated over time, but NH had higher estimated observed and expected heterozygosities than all three CV-lineage hatcheries in all years (Table S2).F ST and analysis with the program STRUCTURE showed CH, FRH, and MRH are most similar to each other, while NH has the most genetically distinct fish, consistent with their coastal origin (Figure 2; Figures S3 and S4; Table S3).Interannual genetic divergence was lower at CH and FRH, likely due to their larger effective population sizes (Table S3).Notably, MRH and FRH were most similar at the beginning of the study period but became more distinct over time (Figure S4; Table S4). 
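As a minimal illustration of the Hardy-Weinberg and 1:2:1 Mendelian checks described for Omy05 in the Methods above, the sketch below uses a chi-square goodness-of-fit test from SciPy; note that the study itself used the HardyWeinberg R package and an exact test, and the genotype counts shown here are hypothetical.

```python
from scipy.stats import chisquare

def hwe_chi2(n_AA, n_AR, n_RR):
    """Chi-square test of Hardy-Weinberg proportions for one biallelic locus
    (an approximation of the exact test used in the study)."""
    n = n_AA + n_AR + n_RR
    p = (2 * n_AA + n_AR) / (2 * n)          # frequency of the A haplotype
    expected = [n * p**2, 2 * n * p * (1 - p), n * (1 - p)**2]
    return chisquare([n_AA, n_AR, n_RR], f_exp=expected, ddof=1)

def mendelian_1_2_1(n_AA, n_AR, n_RR):
    """Test offspring of AR x AR matings against the expected 1:2:1 ratio."""
    n = n_AA + n_AR + n_RR
    return chisquare([n_AA, n_AR, n_RR], f_exp=[n / 4, n / 2, n / 4])

# Hypothetical counts, not taken from the paper
print(hwe_chi2(250, 180, 35))
print(mendelian_1_2_1(130, 280, 55))
```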
| Iteroparity and repeat spawning Kruskal-Wallis rank sum tests revealed that the rate of iteroparity and repeat spawning was significantly different across hatchery programs and between sexes (by hatchery: chi-square = 52.581, df = 4, p-value = 1.043e-10; by sex: chi-square = 27.212, df = 4, p-value = 1.801e-05). The overall rate of iteroparity was low to moderate (range = 6.3%-14.6%) at all three CV-lineage hatcheries and was strongly female-biased (88.5% of iteroparous spawners were female; Table 2; Table S5). In contrast, the coastal-lineage NH had a low proportion of both male and female iteroparous spawners (0.2%; male-biased; Table 2). FRH had the highest overall rate of repeat spawning (20.0%; Table 2), with a notable reduction from 2016 onward, consistent with changes in management practices (Table S6). The highest rate of repeat spawning within a single season occurred at MRH in 2013, at 50.77% (Table S6). The lowest proportion of repeat spawners was observed at CH (2.24%), in roughly equal numbers of males and females (Table 2). FIGURE 2 STRUCTURE (Falush et al., 2003; Pritchard et al., 2000) results from 1200 individuals (300 per program) for the hypothesized number of genetic groups K = 2, 3, 4. | Pedigree reconstruction The pedigrees inferred from the unconstrained SNPPIT runs, as well as the runs constrained by sex and spawn date, were each reconstructed from 23,191 unique individuals. The number of trios selected for the final pedigree based on matching spawning metadata and statistical requirements was as follows: 13,657 trios from the constrained run had a maximum a posteriori relationship of "parent-offspring trio" and an FDR ≤0.01. Over 93% (12,774) of these trios were also found to be statistically supported trio assignments in the unconstrained run, while 883 trios were discrepant between the constrained and unconstrained runs. Of these discrepant trios, 818 offspring did not have statistically supported assigned parents in the constrained run, likely due to errors in the metadata, but were assigned parents that met our confidence criteria in the unconstrained SNPPIT run and were therefore retained. Removing an additional 16 improbable trios with sex or spawning date conflicts left 13,576 trios with confident assignments. | Pedigree-based analysis The percentage of spawning steelhead confidently assigned to the pedigree varied by program and year. Table S7 provides details of parentage assignments, total offspring, and number of assignments by cohort and return year. Given the filtered parentage assignments, we calculated the age distribution among the spawners at each program. Returning steelhead spawned at two through six years of age, with fewer numbers of older fish (Table 3). Age structure varied among programs, with CV-lineage hatcheries featuring predominantly age-two steelhead and the coastal-lineage NH dominated by age-three steelhead (chi-squared = 65.25, df = 4, p-value = 2.282e-13; Figure 3; Table 3). Comparing age structure across the spawning season, with spawn dates grouped into equal 10-day bins, revealed striking differences in the spawn dates of age-two, -three, and -four spawners at NH, but not at the CV-lineage hatcheries. Mean and median spawn dates by age shifted earlier in the season with increasing age of spawner at NH, but not at the CV-lineage programs (chi-square = 44.673, df = 10, p-value = 2.491e-06; Table S8; Table S9). NH showed a clear shift in the relative proportion of ages as the spawning season progressed, with older fish returning earlier than younger fish (Figure 4).
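A compact sketch of how age at spawning can be derived from the pedigree (offspring spawn year minus parent spawn year) and compared across programs with a rank-based test is shown below; the column names and toy values are illustrative, not the study's actual data schema.

```python
import pandas as pd
from scipy.stats import kruskal

# Hypothetical trio table; column names are illustrative, not the study's schema.
trios = pd.DataFrame({
    "hatchery": ["CH", "CH", "CH", "FRH", "FRH", "NH", "NH", "NH", "MRH", "MRH"],
    "parent_spawn_year":    [2013, 2014, 2014, 2013, 2015, 2012, 2013, 2014, 2014, 2015],
    "offspring_spawn_year": [2015, 2016, 2017, 2015, 2017, 2015, 2016, 2017, 2016, 2018],
})
# Age at spawning = year the offspring returned and spawned minus its birth year
trios["age"] = trios["offspring_spawn_year"] - trios["parent_spawn_year"]

# Kruskal-Wallis rank-sum comparison of age at spawning across hatchery programs
groups = [g["age"].values for _, g in trios.groupby("hatchery")]
stat, p = kruskal(*groups)
print(trios["age"].tolist(), stat, p)
```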
Migration among hatcheries was rare but occurred between all programs (Table 4; Table S10). MRH had the highest straying rate due to one significant event, when 165 steelhead that were assigned to parents at MRH in 2015 returned to NH to spawn in 2017. This single event represented 56.66% of all observed straying (Table 4). | Omy05 associations The marker locus for Omy05 was successfully genotyped for all steelhead sampled from 2015 onward (N = 13,090), including 10,293 offspring among the 13,576 inferred trios. Frequencies of Omy05 genotypes were estimated among these individuals overall, by hatchery program, and by hatchery and sex (Table 5). Genotypes AA and AR were most common at all hatcheries, with RR occurring rarely, regardless of sex (Table 5). Using the Haldane exact test for Hardy-Weinberg equilibrium on the 13,090 genotypes, CH and MRH did not deviate from expected frequencies of Omy05 genotypes, while FRH and NH both had slight, but significant, heterozygote excess (Table 5). We identified 465 returning offspring resulting from AR × AR pairings across all hatcheries, with statistically significant deviations from expected Mendelian Omy05 genotype frequencies overall and for both males and females across all programs, reflecting an excess of heterozygotes and a deficit of RR homozygotes relative to expected Mendelian proportions (Table S11). Age at spawning was not associated with Omy05 genotype among CV-lineage broodstock, but older coastal-lineage steelhead from NH were more likely to have AA or AR genotypes, while younger fish had proportionally more RR genotypes (Figure 5; Table S12). FIGURE 3 Age structure by program for cohort (above) and return (below) years, with counts of steelhead per year above bars. Note that all fish in return year 2013 and return year 2017 are identified as 2-year-olds due to the beginning and ending of the study sampling period for parents in 2011 and offspring in 2019. | DISCUSSION We characterized patterns of variation for several important life history traits in the four steelhead hatchery programs present in the CCV from 2011 to 2019, which include two genetically distinct steelhead lineages. Despite inhabiting the CCV environment since the 1950s, we found that the coastal-origin steelhead at NH maintained genetic and phenotypic distinction from CV-origin hatchery steelhead, including for key life history traits. Adaptive genetic variation genotyped on chromosome Omy05 varied among populations and showed deviations from expected Mendelian proportions, as well as an association between age at spawning and Omy05 genotype that differed between the CV-lineage hatcheries and the coastal-origin steelhead at NH. Finally, we observed strong temporal variation in genetic and phenotypic patterns that coincided with major climatic events over the course of the study. | Distinct genetic lineages The initiation of NH's hatchery program with coastal-origin steelhead from the Eel and Mad Rivers created a natural experiment, providing the opportunity to evaluate phenotypic and genetic differences between the two lineages under shared environmental conditions. We also found differences among hatchery programs in the distribution of adaptive genetic variation. We investigated the O.
mykiss chromosome Omy05 chromosomal inversion that has been associated with adaptive growth and developmental traits, as well as migration strategy (Miller et al., 2012; Nichols et al., 2008; Pearse et al., 2014, 2019). NH steelhead appeared distinct from CCV fish in their association of Omy05 genotype with age at spawning, and followed a pattern similar to that observed in coastal steelhead in the Russian River (Beulke et al., 2023). Beulke et al. (2023) found a significant association between age at maturity and Omy05 genotype in males, with the R haplotype more frequent in younger spawners. In NH steelhead, AR and RR genotypes were also proportionally more frequent in age-two spawners than among older age classes for both sexes. Within all CCV steelhead hatchery programs, the RR genotype associated with expression of faster development and residency (Pearse et al., 2019; Rundio et al., 2021) was found at low frequencies, consistent with previous observations that southern O. mykiss populations below barriers to anadromy possess high frequencies of the A haplotype at Omy05, particularly in the CCV (Abadía-Cardoso et al., 2019; Eschenroeder et al., 2022; Leitwein et al., 2017; Pearse et al., 2019; Pearse & Campbell, 2018). In addition, Omy05 genotype frequencies significantly deviated from HWE in two programs (FRH and NH). Similarly, across all programs, there was a significant deviation from the expected 1:2:1 Mendelian ratio of Omy05 genotypes in returning offspring from matings between AR parents. Together, these patterns suggest that the phenotypic effects of Omy05 variation lead to genotype-specific disassortative mating, growth, or survival, independent of genetic lineage, supporting its role in the developmental "pace of life" (Healy et al., 2019). However, the specific mechanisms driving this selection remain unclear. Future work will investigate the genomic regions Greb1L/Rock1, vgll3, and six6, which have previously been significantly associated with adult migration timing and age at maturity in salmonids (Ayllon et al., 2015; Thompson et al., 2020; Waples et al., 2022; Waters et al., 2021; Willis et al., 2020, 2021). Because NH steelhead originate from a coastal lineage and are not included in the Central Valley Steelhead Distinct Population Segment, managing agencies in California prohibit interbreeding of NH broodstock with any returning unmarked or visually apparent CV-lineage steelhead, to prevent the introgression of coastal-adapted genetic architecture into CV-lineage gene pools (California HSRG, 2012). Our population genetic analyses confirm that introgression between programs occurs rarely, but our pedigree also identified a large straying event of 188 CV-lineage steelhead (most originating from MRH) to NH in 2017. However, these straying steelhead were visually distinguishable from NH broodstock, so they were genotyped and released without spawning. Conversely, we found low levels of straying from NH to the three CV-lineage hatchery programs across the study period, although NH broodstock are known to interbreed with wild steelhead in the lower American River (Abadía-Cardoso et al., 2019). The California Hatchery Scientific Review Group (HSRG) ultimately recommended replacing NH broodstock with steelhead suitable for the American River to decrease risks to natural populations, but it is unclear when this will be initiated (California HSRG, 2012; Fisher & Julienne, 2023; NMFS, 2014).
Genetic variation within CV-origin hatchery steelhead reflected differences in program management strategies, past movement of eggs between programs, and the accumulation of random genetic changes.Human management of spawning steelhead most strongly influenced repeat spawning, with the highest overall rates occurring in FRH.Our pedigree identified FRH steelhead with high rates of spawning multiple times within 1 year until 2016, after which changes in spawning protocols contributed to consistently low rates of repeat spawning (Table S6).We also found evidence of strong population genetic similarity between FRH and MRH, reflecting previous transport of eggs from FRH to MRH (California HSRG, 2012;Pearse & Garza, 2015).However, we also found that MRH became more differentiated from the CV-lineage programs over time, suggesting that genetic divergence rapidly accumulated after egg transport stopped in 2007.In contrast, we also observed a decrease in F ST values between CH and FRH.These small changes in population structure over time suggest genetic drift acting in local, differentiated pools with limited interbreeding. This may reflect the prevalence of alternative life history strategies in CV-lineage steelhead, including use of freshwater and brackish habitats in the Sacramento-San Joaquin delta rather than undergoing fully anadromous marine migrations (Abadía-Cardoso et al., 2019;Leitwein et al., 2017;Olsen et al., 2006;Pearse & Campbell, 2018), and highlights the importance of the portfolio effect in maintaining diverse life histories in salmonid populations (Carlson & Satterthwaite, 2011;Price et al., 2021). Steelhead exhibit plasticity in the number of lifetime reproductive events, with most individuals dying after first reproduction (semelparous), while some live to reproduce in multiple years (iteroparous), with both life history strategies maintained by fitness tradeoffs involving fecundity and mortality (Christie et al., 2018;Seamons & Quinn, 2010).For this reason, all four CV hatchery programs release steelhead after spawning to provide the opportunity for iteroparity (California HSRG, 2012).In our pedigree analyses, hatchery steelhead spawned either only once, more than once within a season (repeat spawning), or in more than one spawn year (iteroparity).Strikingly, NH steelhead differed significantly from CV-lineage populations when comparing the average number of observed lifetime spawning events.Lower rates of iteroparity occurred in NH steelhead overall (0.2%), which is consistent with previous estimates of iteroparity in coastal California steelhead populations (Abadía-Cardoso et al., 2013;Beulke et al., 2023).In contrast, rates of iteroparity were higher in the CCV hatchery program populations, with the highest overall rate occurring at MRH (14.58%) and an even higher rate among MRH females (27.6%).Thus, despite sharing a watershed in the CCV, NH hatchery steelhead possessed low rates of iteroparity, suggesting a strong genetic influence from their coastal lineage that has not been largely altered by current environmental conditions. 
| Environmental influence Plasticity in life history traits enables expression of more appropriate phenotypes based on environmental cues. The individual's response depends on the heritability of the conditional response threshold sensitivity, in addition to environmental conditions. The optimal phenotype best balances producing the largest number of offspring possible and maximizing their probability of surviving to spawn (Satterthwaite et al., 2009, 2010). Considering return timing and age at spawning, this decision depends on growth and successfully surviving emigration. Important environmental cues trigger genetically encoded thresholds that initiate phenotypic expression to optimize survival and reproduction in the local environment (Reid & Acker, 2022; Sogard et al., 2012; Sommer, 2020). Environmental cues, such as the difference between relative streamflows at release and on returning to spawn (release and return streamflow differentials), smolt release location, route complexity, and water-chemistry variation, significantly affect both growth rates and emigration survival, thus influencing steelhead life history selection (Kendall et al., 2015; Satterthwaite et al., 2009, 2010; Sturrock et al., 2019). Our results show that genetic differences in environmental threshold sensitivity may persist in novel environments for many generations. Temporal variation in CCV watersheds likely influenced the distribution of age at spawning in all four hatchery programs, though identifying specific environmental factors and their interactions with disruptions in genetic and phenotypic patterns remains challenging. The record-setting 2012-2016 drought in the California Central Valley reduced streamflows by an estimated 85%-90%, with a concordant increase in stream temperatures (Eschenroeder et al., 2022; Herbold et al., 2018). Simultaneously, a strong marine heatwave affected the West Coast of North America in 2014-2016, impacting many anadromous salmonid populations (Di Lorenzo & Mantua, 2016; Free et al., 2023). Sudden relief of the drought in 2017 coincided with higher proportions of age-two spawners across all programs (Herbold et al., 2018), seen most dramatically in the coastal-lineage steelhead at NH, where age-three spawners are typically the most abundant. Furthermore, a combination of NH's proximity to San Francisco Bay and the increased use of downstream smolt-release sites by all programs during the drought, followed by watershed-wide flooding in 2017, likely influenced the high proportion of steelhead released from MRH in 2015 that returned to spawn at NH in 2017 (Sturrock et al., 2019). These observations highlight the impacts of environmental variability as well as the underlying genetic basis of life history variation. However, because NH is the only hatchery in the CCV that supports coastal-lineage steelhead, it is unclear exactly how environmental factors differentially impact the expression of life history traits in these divergent lineages. FIGURE 1 Map of the California Central Valley showing locations of hatcheries producing steelhead in relation to San Francisco, CA, USA. | CONCLUSIONS The coexistence of multiple hatchery-managed steelhead lineages in the CCV provided the opportunity to investigate how different genetic lineages respond to similar environmental cues within a shared landscape. We found that coastal-lineage NH broodstock
have maintained both genetic and phenotypic differentiation compared with steelhead from CV-lineage programs, despite sharing a watershed for over 50 years. NH steelhead spawned at older ages, maintained lower rates of iteroparity, and showed evidence of novel phenotypic effects of the Omy05 genotype associated with age at spawning. We also observed temporal variation in patterns of life history variation within and among programs, consistent with patterns of climatic variation across years. Collectively, these results highlight the interplay between management practices and biological drivers leading to realized patterns of life history variation in hatchery programs. Our study provides clear evidence that different steelhead genetic lineages may respond differently to novel and changing environments, maintaining strong differences in phenotypic and adaptive genetic variation and life history traits over many generations. Age distribution was considered by hatchery program and spawn year, as well as by hatchery program and cohort year across years; including age distribution by hatchery program and cohort year ensures identification of any cohort effects that could influence patterns observed in age distribution by spawn year. Spawn dates were binned into 5-day units to group spawning steelhead by relative spawn timing, and counts of individuals of each age were noted across recorded lengths (mm) at spawning (size-at-age) by program. Six individuals returned to and spawned at more than one hatchery on different spawn dates. Two of the six individuals were spawned within the same year at different programs: in 2017, one fish spawned at NH and FRH, and in 2018, one fish spawned at CH and FRH. Three individuals were spawned at NH in 2011 and at CH in 2012, and one fish was spawned at MRH in 2018 and NH in 2019. Repeat spawning of the same fish multiple times within a year also varied among hatcheries and across years (Table 2). TABLE 2 Counts for single, iteroparous, and repeat spawning for each hatchery program overall, and by sex; percentages of each spawning type were calculated from the total count of individuals per program, or from the totals of females and males. TABLE 3 Counts and percent of steelhead at age at spawning by program and sex, with Kruskal-Wallis results for age-based comparisons. Spawn dates binned over 10-day intervals and separated by sex over all spawning seasons; sample sizes for sex and age classes for each binned spawn date are provided in Table S9. Total counts and frequency of Omy05 genotypes by program and by program and sex, with Hardy-Weinberg Equilibrium results included. Count and frequency of return types in all recorded returning steelhead; rates of return are provided both for fish returning to their origin program and for those that returned to a different hatchery (strays). FIGURE 5 Frequencies of Omy05 genotypes by program, sex, and age at spawning; sample sizes for each sex, age, and Omy05 genotype class are provided in Table S12.
8,890
2024-03-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Dielectric Spectroscopy of PP/MWCNT Nanocomposites: Relationship with Crystalline Structure and Injection Molding Condition In this paper, we study the correlation between the dielectric behavior of polypropylene/multi-walled carbon nanotube (PP/MWCNT) nanocomposites and the morphology with regard to the crystalline structure, nanofiller dispersion and injection molding conditions. As a result, in the range of the percolation threshold the dielectric behavior shifts to a more frequency-independent behavior, as the mold temperature increases. Moreover, the position further from the gate appears as the most conductive. This effect has been associated to a modification of the morphology of the MWCNT clusters induced by both the flow of the molten polymer during the processing phase and the variation of the crystalline structure, which is increasingly constituted by γ-phase as the mold temperature increases. The obtained results allow one to understand the effect of tuning the processing condition in the frequency-dependent electrical behavior of PP/MWCNT injection-molded nanocomposites, which can be successfully exploited for an advanced process/product design. Introduction In the last decades, scientists all over the world have paid attention to carbon nanotubes (CNTs), because of their outstanding electrical properties [1]. Moreover, thanks to their tubular shape with a very high aspect ratio, they can turn the insulating behavior of polymers into conductive, even at very low content [2]. This is particularly interesting because a low CNT fraction does not modify the mechanical performance of the pristine polymer in which they are embedded, differently from other carbon-based additives (e.g., carbon black or graphite), which usually lead to brittle materials, due to the considerable amount needed to significantly modify the electrical behavior [3][4][5][6][7][8]. It is widely known that the processing technique can influence the morphology of a polymer nanocomposite [9]. As for the injection molding of thermoplastic polymers, fibrous and lamellar fillers are typically oriented in the flux direction [10,11]. This was also demonstrated for CNT-based nanocomposites [12]. Several studies have shown that the variation of the processing condition of injection-molded components, manufactured with CNT-based nanocomposites, can lead to drastic modification of the electrical conductivity, especially when the CNT content is in the range of the electrical percolation [13][14][15][16][17][18]. Villmow et al. [16] studied injection-molded polycarbonate-based CNT nanocomposites, demonstrating that lowering the cooling rate and residence time seems to favor the attainment of higher electrical conductivity. Several researchers focused their attention to semi-crystalline polymers [17,18]. For instance, Ameli et al. [17] studied injection-molded PP/CNT nanocomposite foams, finding that an increase of the injection flow rate favored the electrical conductivity. The relationship between processing condition and resulting crystalline structure was studied by Von Baeckmann et al. [19], who found the PP γ-phase formation is favored with a reduction of the cooling rate. However, to the best of our knowledge, very few efforts have been conducted in order to correlate the crystalline structure of nanocomposites with electrically conductive nanofillers and their resulting electrical features [20][21][22][23]. 
In a previous paper by the same authors [24], the effect of the injection molding condition on the DC electric properties of PP/multi-walled carbon nanotube (MWCNT) nanocomposites was investigated, showing that rising up the temperature of the mold and the injection rate, a decrease of electric resistivity can be attained. This was connected to the formation of a crystalline structure based on a relevant fraction of γ-phase, and the resulting inter-cluster morphology of the conductive network based on MWCNT. Dielectric spectroscopy allows the study of the frequency-dependent AC electrical properties. It can be used as a valuable support for correlating the morphology of a polymer nanocomposite filled with conductive additives, with its electrical behavior, which may be masked under the DC approach [25][26][27][28][29]. In this paper, we report a study on the dielectric behavior of multi-walled carbon nanotubes (MWCNTs)/PP nanocomposites. As a result, the AC dielectric properties and the related electrical relaxations have been correlated to the morphology of the MWCNT clusters and the crystalline structure of PP, emphasizing the effect of the variation of the processing condition on the other characteristics. In particular, the study has highlighted the effect of rising the temperature of the mold for increasing the electrical conductivity and the mobility of the electric dipoles attributed to the polymer/tube interface, allowing the correlation with specific crystalline structures and MWCNT cluster morphology. Materials An injection molding grade PP (Moplen RP348R), produced by LyondellBasell (Rotterdam, The Netherlands), was selected as a matrix. As claimed by the manufacturer, it is a random copolymer, with a melt flow index equal to 25 g/10 min (230 • C, 2.16 kg). Multi-walled carbon nanotubes (NC7000) were purchased from Nanocyl (Sambreville, Belgium). As reported by the manufacturer, a catalytic carbon vapor deposition process was used for their production. Their average diameter was 10 nm and they were 1.5 µm long, with a surface area of 250-300 m 2 /g. The fraction of pure carbon in their composition is 90%. Processing Several MWCNT contents were selected to be added to PP (1, 2, 3, 4, 5, 6, 7 wt%), which have been homogeneously mixed by a co-rotating twin-screw extruder, Leistritz 27E (Nuremberg, Germany). The diameter (D) of the screw is 27 mm, with a length of 40D. A constant screw speed of 220 rpm was fixed. The temperature profile was set in the range of 190-200 • C. MWCNT have been added through a gravimetric dosing unit (Brabender FlexWall, Duisburg, Germany). An injection molding machine, Ferromatik K-Tec 200 with a diameter D of the screw of 50 mm, was used to process the nanocomposites. Samples with a rectangular shape (100 × 140 × 2 mm 3 in size) were prepared. The temperature of the mold was fixed at 25 • C and a flow rate of 70 cm 3 /s was used. These parameters were referred to as standard conditions hereinafter. Samples were also produced with two different mold temperatures (70 • C and 100 • C), in order to study its effect on the morphology and on the dielectric behavior of the nanocomposites. A temperature of the mold equal to 100 • C is typically considered very high for a polymeric matrix like PP and it has been obtained through the so-called Heat&Cool (H&C) process. A Vario-Therm control unit (HB-Therm, St. Gallen, Switzerland), kindly supplied by Nickerson Italia, was used to obtain this process. 
It consists of a dynamic heating of the surface of the mold, which is heated to a high temperature during the injection phase, followed by rapid cooling during the packing phase. This processing parameter was tuned for the nanocomposites containing 3 and 4 wt% MWCNTs. Throughout the paper, samples are referred to as "X wt%-MWCNT T-Y °C", with X indicating the MWCNT content and Y the mold temperature used. Characterization AC dielectric characterization was performed in the 10^1-10^6 Hz frequency range, using an HP 4284A precision LCR meter (Agilent, Santa Clara, USA). The test fixture used has an electrode surface area of 4.90 cm². All tests were performed by applying a voltage amplitude of 0.5 V at room temperature. The samples were positioned between two copper-plated electrodes and the complex impedance Z* = Z′ − iZ″ = |Z|e^(−iθ) was obtained by measuring the real part (Z′), imaginary part (Z″), modulus |Z| = √(Z′² + Z″²) and phase angle (θ) of the impedance as a function of frequency. Starting from these measurements, the bulk conductivity σ_AC was computed according to the equation σ_AC = (d/A)·Z′/(Z′² + Z″²), where A is the contact area and d is the thickness of the specimen. The complex permittivity ε* = ε′ − iε″ was calculated from the equation ε* = 1/(iωC₀Z*), with C₀ = ε₀A/d the empty-cell capacitance. The three positions analyzed over the surface of the sample are shown in Figure 1. Position 1, hereinafter referred to as P1, is the center position of the sample: all the results reported in this paper are based on experimental tests in this position, wherever not specified differently. Position 2, hereinafter referred to as P2, is the position closest to the gate, which represents the part first reached by the molten material. Position 3, hereinafter referred to as P3, is the position furthest from the gate. The morphology of the obtained nanocomposites has been studied according to two different approaches: one focused on the crystalline structure, the other on the morphology of the MWCNT clusters embedded in the polymer matrix. X-ray diffraction (XRD) and DSC were used to study the crystalline structure of PP. XRD analysis was performed using a Panalytical X'Pert PRO diffractometer (Cu Kα radiation, wavelength of 1.54187 Å; Malvern Panalytical, Malvern, UK), from 2° to 30° 2θ with a step rate of 0.026°/min.
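The sketch below illustrates how σ_AC and the complex permittivity can be obtained from the measured impedance spectra under the standard parallel-plate relations assumed in the reconstruction above; the function name and toy RC spectrum are illustrative, while the electrode area (4.90 cm²) and the 2 mm sample thickness follow the values reported in this section.

```python
import numpy as np

eps0 = 8.854e-12  # vacuum permittivity, F/m

def dielectric_from_impedance(freq_hz, z_real, z_imag, area_m2, thickness_m):
    """Bulk AC conductivity and complex permittivity from measured impedance,
    with Z* = Z' - iZ'' (illustrative implementation of the relations above)."""
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z = np.asarray(z_real) - 1j * np.asarray(z_imag)     # Z* = Z' - iZ''
    y = 1.0 / z                                           # complex admittance
    sigma_ac = y.real * thickness_m / area_m2             # sigma_AC = (d/A) Re(1/Z*)
    c0 = eps0 * area_m2 / thickness_m                     # empty-cell capacitance
    eps_star = 1.0 / (1j * omega * c0 * z)                # eps* = eps' - i eps''
    return sigma_ac, eps_star.real, -eps_star.imag

# Toy spectrum: a 1 Mohm resistor in parallel with a 50 pF capacitor
f = np.logspace(1, 6, 7)
zc = 1.0 / (1j * 2 * np.pi * f * 50e-12)
z = (1e6 * zc) / (1e6 + zc)
s, e1, e2 = dielectric_from_impedance(f, z.real, -z.imag, 4.90e-4, 2e-3)
print(s[0], e1[0], e2[0])
```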
Differential scanning calorimetry (DSC) was performed on a representative specimen cut from the whole cross-sectional area of the injection-molded sample. A Q800 instrument, from TA Instruments, was used. A single heating scan was set from 25 to 190 °C, with a heating rate of 10 °C/min. Field emission scanning electron microscopy (FESEM) was used to investigate the morphology of the MWCNT clusters in cross-sectional areas taken at different positions over the surface of the samples. Micrographs were taken by means of a Zeiss Merlin 4248 FESEM instrument (Carl Zeiss, Oberkochen, Germany). The specimens were cryo-fractured in liquid nitrogen and coated with a thin layer (<10 nm) of chromium before observation, using a Sputter Coater BAL-TEC SCD (Balzers, Liechtenstein). Results and Discussion 3.1. Dielectric Analysis 3.1.1. Effect of the MWCNT Content Dielectric properties were studied as a function of MWCNT content and the results are reported in Figure 2. Figure 2a shows the electrical conductivity (σ_AC) as a function of frequency (log-log plot). The electrical conductivity of neat PP increases with frequency in the whole frequency range, as expected for insulating materials [29]. The same behavior can be observed for the MWCNT nanocomposites with the lower nanofiller contents. Indeed, when the concentration of MWCNTs is lower than the percolation threshold, the nanocomposite acts as an insulator and its electrical conductivity is frequency-dependent because of the hopping and tunneling mechanisms [30]. As the frequency increases, the capacitive part, which can be associated with both the capacitive behavior of the polymer matrix and the contribution of the tube/polymer/tube structures, contributes an increasing conductance directly proportional to the frequency (iωC).
When the concentration of MWCNTs is higher than the percolation threshold, the nanocomposite exhibits a transition of the AC conductivity from a capacitive-like (frequency-dependent) to a resistive-like (frequency-independent) behavior at a certain cut-off frequency, called the characteristic frequency (f_c). The frequency-independent plateau in the AC conductivity can be found up to 70 Hz (f_c) for the 5 wt%-MWCNT nanocomposite. This value shifts to higher frequencies with increasing MWCNT content. The overall behavior of σ_AC can be described as the superposition of the capacitive and resistive responses, the latter being visible only when the possibility for electrons to pass through the MWCNT conductive network is not negligible. This aspect has already been deeply investigated by Monti et al. [29], who showed similar results for thermosetting MWCNT-based nanocomposites. Figure 2b shows the phase angle as a function of frequency for the studied materials. The phase angle is negative, without any significant variation, at MWCNT contents below the percolation threshold. Above the percolation threshold, the phase angle shows a peak that shifts to higher frequency with increasing MWCNT content. At the highest MWCNT contents, the phase angle tends to zero in the low-frequency region, partially hiding the aforementioned peak, corresponding to the current flowing almost exclusively through the conductive MWCNT network, which behaves as a resistive path. This is perfectly coherent with the physical theory, which describes the ohmic behavior of conductors with no phase angle between the electric impulse and the circuit response. Effect of the Processing Condition Figure 3 shows the dielectric results of the 3 and 4 wt% MWCNT-based nanocomposites manufactured at the three different mold temperatures (25, 70 and 100 °C). These contents were selected since they are in the lower part of the range of the electrical percolation threshold. As it is possible to observe, the increase of mold temperature leads to a shift of the dielectric behavior from a fully capacitive to an increasingly ohmic behavior. Indeed, from Figure 3a, it can be observed that σ_AC of the sample manufactured under standard conditions (already reported in Figure 2a) has the typical insulating behavior, with the conductivity continuously increasing with frequency. The conduction mechanism can be ascribed to the tunneling effect rather than to direct contact among the MWCNTs. However, in the case of the 4 wt% MWCNT content, with the increase of mold temperature, the conductivity becomes frequency-independent in the low-frequency region, indicating that a three-dimensional conductive network has been formed and the transport of electrons mainly occurs through it.
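One simple way to quantify the capacitive-to-resistive transition described above is to estimate the characteristic frequency f_c as the point where σ_AC departs from its low-frequency plateau; the sketch below implements this with an arbitrary 10% rise criterion, and both the threshold and the toy spectrum are assumptions, not the authors' procedure.

```python
import numpy as np

def characteristic_frequency(freq_hz, sigma_ac, rise_factor=1.10):
    """Estimate the cut-off frequency f_c as the lowest frequency at which
    sigma_AC exceeds its low-frequency plateau by a chosen factor (here 10%).
    The plateau is taken as the mean of the three lowest-frequency points."""
    freq = np.asarray(freq_hz, dtype=float)
    sigma = np.asarray(sigma_ac, dtype=float)
    order = np.argsort(freq)
    freq, sigma = freq[order], sigma[order]
    plateau = sigma[:3].mean()                     # low-frequency plateau estimate
    above = np.nonzero(sigma > rise_factor * plateau)[0]
    return freq[above[0]] if above.size else None  # None: no departure in range

# Toy percolated-like spectrum: flat at low frequency, then rising with frequency
f = np.logspace(1, 6, 51)
sig = 1e-4 + 2e-8 * f
print(characteristic_frequency(f, sig))
```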
The phase angle (Figure 3b) shows a peak at about 400 Hz in the 3 wt%-MWCNT T-100 °C nanocomposite, which cannot be seen at mold temperatures of 25 °C and 70 °C. In the 4 wt%-MWCNT samples, the peak is visible at all mold temperatures and it shifts to higher frequency with increasing mold temperature. The same results can be observed in the ε″ curves reported in Figure 3c, with the peak at about 250 Hz in the 3 wt%-MWCNT T-100 °C nanocomposite and at about 40, 250 and 3800 Hz in the 4 wt%-MWCNT T-25 °C, T-70 °C and T-100 °C nanocomposites, respectively. The presence of a peak in the phase angle and in the imaginary part of the permittivity curves can be associated with a dipole relaxation, whose characteristic time can be evaluated as the inverse of the peak frequency. In the 3 wt%-MWCNT samples, this peak is visible only at mold temperature 100 °C. This can be associated with a reorganization of the MWCNTs in the PP polymer matrix, induced by this processing condition, which allows the creation of a stronger interaction (detectable from this peak) between the insulating polymer and the MWCNTs [31]. For inhomogeneous materials, as in the case of the PP-MWCNT nanocomposite, this relaxation can be attributed to Maxwell-Wagner-Sillars (MWS) interfacial polarization [32,33]. In the case of the 4 wt%-MWCNT samples, the shift of the peak to higher frequency means that the relaxation mechanism is faster (lower relaxation time) as the mold temperature increases and that the electric dipoles associated with the interfacial polarization have a higher mobility. In the 4 wt%-MWCNT T-100 °C sample, the phase angle curve tends to zero in the low-frequency region, which corresponds to the current flowing almost entirely through the MWCNT network. Indeed, this conductive structure behaves as a resistive path. In the imaginary part of the permittivity curve, this conductive component masks the interfacial polarization, as is well described in the literature [34][35][36]. In order to make this clearer and to overcome this difficulty in evaluating the interfacial polarization, McCrum et al. [37] introduced the electric modulus formalism (M*), which is the inverse of the complex dielectric permittivity and is defined as M* = 1/ε* = ε′/(ε′² + ε″²) + i·ε″/(ε′² + ε″²) = M′ + iM″. M* describes the dynamic characteristics of the charge motion in conductors with regard to relaxation in an electric field [38][39][40]. The use of the electric modulus has remarkable advantages in the analysis of the bulk relaxation processes compared to other electrical functions, since by its definition it suppresses undesirable electrode polarization effects that might mask any other response stemming from the interior of the specimen. Recent examples of overcoming huge electrode polarization masking are given in [41][42][43]. Figure 3d shows M″ as a function of frequency. This graph allows a more in-depth analysis of the dielectric results and represents further compelling evidence of the presence of a MWS interfacial polarization in the polymer/tube/polymer structure, which is faster as the mold temperature increases.
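A short sketch of the electric modulus calculation and of extracting a relaxation time from the M″ peak is given below; the Debye-like toy spectrum is purely illustrative, and the relaxation time is taken here as the inverse of the peak frequency, as in the text (the angular-frequency convention 1/(2πf) is also common).

```python
import numpy as np

def electric_modulus(eps_real, eps_imag):
    """M* = 1/eps* split into real and imaginary parts, as in the formalism above."""
    eps_real = np.asarray(eps_real, dtype=float)
    eps_imag = np.asarray(eps_imag, dtype=float)
    denom = eps_real**2 + eps_imag**2
    return eps_real / denom, eps_imag / denom      # M', M''

def relaxation_time_from_peak(freq_hz, m_imag):
    """Relaxation time estimated from the frequency of the M'' peak."""
    f_peak = np.asarray(freq_hz)[np.argmax(m_imag)]
    return 1.0 / f_peak, f_peak                    # (tau, peak frequency)

# Toy Debye-like relaxation centred near 250 Hz (purely illustrative values)
f = np.logspace(1, 6, 200)
tau = 1.0 / (2 * np.pi * 250.0)
eps_r = 2.2 + 3.0 / (1 + (2 * np.pi * f * tau) ** 2)
eps_i = 3.0 * 2 * np.pi * f * tau / (1 + (2 * np.pi * f * tau) ** 2)
m1, m2 = electric_modulus(eps_r, eps_i)
print(relaxation_time_from_peak(f, m2))
```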
In particular, a barely visible peak is present in the 3 wt%-MWCNT T-25 °C and T-70 °C nanocomposites, while it was not detected in the ε″ and θ curves. Since it appears at about 60 and 100 Hz (while the one related to the 3 wt%-MWCNT T-100 °C nanocomposite is visible at 740 Hz), this further confirms the tendency of the interfacial polarization to become faster as the mold temperature increases. It is worth noting that the M″ curve of the 4 wt%-MWCNT T-100 °C nanocomposite shows two distinctive peaks, which could be associated with two different tube-polymer interphases. Position over the Sample It is well known that injection molding induces an inhomogeneity in the morphology of the produced component. This is due to the shear stress induced on the molten material by the flow during the mold-filling phase, which changes over the thickness and the surface of the mold cavity and the produced component [12]. In the case of polymers filled with conductive particles, this inhomogeneity in the morphology results in an equivalent inhomogeneity in the electrical behavior [3,14,30,41]. As an example, Cesano et al. [30] highlighted this effect through the thickness of PP-MWCNT nanocomposites, while Pötschke et al. [44] studied this phenomenon at several positions over the surface of polycarbonate-MWCNT nanocomposites. Figure 4 reports the results of the dielectric properties of the 3 and 4 wt%-MWCNT nanocomposites, manufactured at mold temperatures of 25 and 100 °C, as measured in the different positions studied over the surface of the sample. As it is possible to observe from Figure 4a,b, the AC electrical conductivity of the samples manufactured at mold temperature 25 °C, regardless of the MWCNT content and the tested position, shows the typical insulating trend, with the conductivity continuously increasing with frequency. As for the results obtained with mold temperature 100 °C, the 3 wt%-MWCNT sample shows a plateau in the low-frequency region in P3, which means that, in this position, at least a weak conductive MWCNT path is formed. The 4 wt%-MWCNT sample shows in all the positions a plateau in the low-frequency region, with a clear tendency of increasing conductivity when moving from the position closest to the gate to the position furthest from the gate. Therefore, it seems that the conductivity in P3 is always higher than in the other positions, whereas P2 represents in all cases the least conductive zone. P3 represents the area where the flux of the molten material reaches the end of the cavity. Therefore, the shear stress in this region is most likely lower compared with all the other positions, because the velocity of the flux rapidly goes to zero. As a consequence, this lower shear stress leads to a rearrangement of the MWCNT clusters and a PP crystalline structure which are more efficient in terms of the electrical conduction mechanism.
The ε″ and M″ curves reported in Figure 4c,d are related to the samples manufactured at mold temperatures 25 °C and 100 °C, respectively. They show the presence of a peak related to the interfacial polarization, which in the case of mold temperature 100 °C (Figure 4d) tends to shift to higher frequency as the tested position moves from the gate to the end of the cavity. This further confirms that when the conductive mechanism is more efficient, the MWS interfacial relaxation is faster. Crystalline Structure and MWCNT Cluster Morphology The relationship of the processing condition and the tested position over the surface of the sample with the morphology has been investigated following two different approaches, the former focused on the crystalline structure, the latter on the morphology of the MWCNT clusters embedded in the polymer matrix. XRD was performed in order to thoroughly understand how the injection molding process affects the crystalline structure of the produced samples, with the final aim of correlating it with the resulting MWCNT percolation network and the measured dielectric behavior. As reported in the literature, PP shows three polymorphic crystallographic forms: a monoclinic α-phase, a hexagonal β-phase and a triclinic γ-phase [45]. Each crystalline form presents its own characteristic peaks in the XRD pattern. In the reported XRD graph, the peaks at 2θ = 14.0°, 16.9°, 18.7°, 21.2°, 21.8° and 25.4° are related to the (110), (040), (120), (131), (041) and (060) crystalline planes of α-PP, respectively. The triclinic γ-phase corresponds to 2θ = 20.1°, which is associated with the (117) crystalline plane [21,45]. The γ-phase fraction (X_γ) can be estimated by the formula X_γ = h_γ/(h_γ + h_α), where h_γ and h_α are the peak heights of the two crystallographic planes (117) and (120), representative of the two phases [21,46,47]. The results related to the 3 wt%-MWCNT samples are reported in Table 1. Table 1. Fraction of γ-phase in the 3 wt% MWCNT nanocomposites manufactured at mold temperatures of 25 and 100 °C and in the different positions over the samples. Figure 5 reports the XRD spectra of the 3 wt% MWCNT nanocomposites produced at 25 °C and 100 °C mold temperature, as measured in the three different positions.
As it is possible to observe, an increase of the mold temperature results in an increase of the intensity of the (117) peak, indicating that the γ-phase is favored at slower cooling rates (achieved when higher mold temperatures are applied) [47]. This tendency is obtained in all the studied positions and is further confirmed by the results reported in Table 1. This outcome, together with the dielectric results, which show an increase of the electrical conductivity and a faster interfacial polarization mechanism at a slower cooling rate, demonstrates that the PP γ-phase leads to the formation of a more efficient MWCNT conductive system. Figure 6 shows the FESEM images of the global cross section of the 3 wt%-MWCNT T-100 °C sample, in P2 and P3. Similar results have been obtained at a mold temperature of 25 °C (not reported in this paper). In both cases, it can be observed that MWCNT clusters with different dimensions and shapes are evenly distributed in the whole section of the samples. In more detail, the MWCNT clusters are more elongated in the flux direction in P2, even in the core region. On the other hand, P3 is characterized by the presence of clusters with a more homogeneous shape, especially in the core region. Finally, Figure 7 and Table 2 report the DSC curves and the degree of crystallinity of the 3 wt%-MWCNT nanocomposites, manufactured with the mold temperature set at 25 and 100 °C, in the three studied positions. Observing the results reported in Table 2, it is possible to notice that the sample manufactured with a mold temperature of 100 °C shows a higher crystallinity than the sample manufactured with a mold temperature of 25 °C, with no significant difference related to the position over the surface of the sample. Moreover, as can be observed from the thermograms of Figure 7, a broad melting peak, formed by a second peak in the temperature range between 120 °C and 145 °C, partially hidden by the principal peak (at around 150 °C), is observable for the nanocomposites produced with a mold temperature of 100 °C. This second peak was ascribed to the melting of the PP γ-crystals by Zhu et al. [21]. Hence, the increasing broadness of this peak with the increment of the mold temperature is consistent with the results obtained from the XRD estimations, where an increase of Xγ in the 3 wt%-MWCNT samples occurs when identical processing conditions are applied.
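For reference, the degree of crystallinity reported from DSC data is commonly obtained by normalizing the measured melting enthalpy to the polymer fraction; the sketch below assumes the frequently quoted literature value of about 207 J/g for fully crystalline PP and a 3 wt% filler loading, since the paper does not state the constants it used.

```python
def crystallinity_from_dsc(delta_h_melt, filler_wt_frac=0.03, delta_h_100=207.0):
    """Degree of crystallinity (%) from DSC: Xc = dHm / ((1 - w_filler) * dH_100%) * 100.
    delta_h_melt  : measured melting enthalpy of the nanocomposite, J/g
    filler_wt_frac: MWCNT weight fraction (0.03 for the 3 wt% samples)
    delta_h_100   : melting enthalpy of fully crystalline PP, J/g (assumed literature value)
    """
    return 100.0 * delta_h_melt / ((1.0 - filler_wt_frac) * delta_h_100)

# Illustrative call with a made-up enthalpy (not a value from Table 2):
print(f"Xc = {crystallinity_from_dsc(85.0):.1f} %")
```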
Conclusions This paper reports a study on the correlation between the dielectric properties of injection-molded PP-MWCNT nanocomposites and the related morphology, focusing on how they are influenced by tuning the mold temperature of the injection molding process. The produced crystalline structure has been investigated through X-ray diffraction and DSC, while the MWCNT cluster morphology has been observed by FESEM microscopy. The AC electrical conductivity of the MWCNT-based nanocomposites shows less dependency on frequency in the low-frequency region at high MWCNT content, which is typically associated with a shift from a fully capacitive to a resistive behavior. This phenomenon was also confirmed by the phase angle curves, which tend to zero in the low-frequency region with increasing MWCNT content. The effect of the mold temperature was studied on the 3 and 4 wt% MWCNT nanocomposites and has shown a change of the dielectric behavior from a fully capacitive to a partially resistive behavior as the mold temperature increases. This indicates the formation of an MWCNT network, which allows the transport of electrons. Moreover, as the conductive network becomes more efficient, a peak appears in the phase angle and in the imaginary part of the permittivity curves. This peak can be associated with the Maxwell-Wagner-Sillars (MWS) interfacial polarization. The analysis described so far has been performed in the center of the sample. Tests were also performed at two other positions over the surface of the samples, one closer to the gate (P2) and one close to the end of the cavity (P3). The dielectric results of the samples manufactured at a mold temperature of 100 °C demonstrate that P3 is the most conductive zone, whereas P2 represents the least conductive zone. The morphological study performed by XRD and DSC has shown that an increase of the mold temperature results in an increase of the PP γ-phase crystalline structure. The correlation with the dielectric results indicates that this crystalline structure favors the formation of a more efficient MWCNT conductive network. The effect of the position over the surface of the sample on the morphology has also been studied by FESEM microscopy. As a result, P3 is characterized by the presence of clusters with a more homogeneous shape, especially in the core region, which can most likely be associated with the lower shear stress on the molten material in this region of the cavity, compared with all the other regions, due to the completion of the filling phase.
In conclusion, this paper has shown that tuning the injection molding processing conditions can drastically modify the frequency-dependent electrical behavior when the MWCNT content is in the range of the electrical percolation threshold. This has been associated with the selective formation of the PP γ-phase crystalline structure in the presence of a higher mold temperature. The achieved results broaden the awareness of the significance of advanced process design for electrically conductive injection-molded components. The in-depth understanding of the possibility of tuning the frequency-dependent electrical behavior, and of how the underlying crystalline structure can be assessed by XRD, can offer an essential step forward for the delineation of optimized processing procedures. Data Availability Statement: The data presented in this study are available on request from the corresponding authors. Conflicts of Interest: The authors declare no conflict of interest.
7,981.4
2021-02-01T00:00:00.000
[ "Materials Science" ]
Efficacy of metabolites of a Streptomyces strain (AS1) to control growth and mycotoxin production by Penicillium verrucosum, Fusarium verticillioides and Aspergillus fumigatus in culture The objectives of this study were to determine the efficacy of metabolites of a Streptomyces strain AS1 on (a) spore germination, (b) mycelial growth, (c) control of mycotoxins produced by Penicillium verrucosum (ochratoxin A, OTA), Fusarium verticillioides (fumonisins, FUMs) and Aspergillus fumigatus (gliotoxin) and (d) identify the predominant metabolites involved in control. Initial screening showed that the Streptomyces AS1 strain was able to inhibit the mycelial growth of the three species at a distance, due to the release of secondary metabolites. A macroscopic screening system showed that the overall Index of Dominance against all three toxigenic fungi was inhibition at a distance. Subsequent studies showed that the metabolite mixture from the Streptomyces AS1 strain was very effective at inhibiting conidial germination of P. verrucosum, but less so against conidia of A. fumigatus and F. verticillioides. The efficacy was confirmed in studies on a conducive semi-solid YES medium in BioScreen C assays. Using the BioScreen C and the criteria of Time to Detection (TTD) at an OD = 0.1 showed good efficacy against P. verrucosum when treated with the Streptomyces AS1 extract at 0.95 and 0.99 water activity (aw) when compared to the other two species tested, indicating good efficacy. The effective dose for 50% control of growth (ED50) at 0.95 and 0.99 aw were approx. 0.005 ng/ml and 0.15 μg/ml, respectively, with the minimum inhibitory concentration (MIC) at both aw levels requiring > 40 μg/ml. In addition, OTA production was completely inhibited by 2.5 μg/ml AS1 extract at both aw levels in the in vitro assays. Ten metabolites were identified with four of these being predominant in concentrations > 2 μg/g dry weight biomass. These were identified as valinomycin, cyclo(L-Pro-L-Tyr), cyclo(L-Pro-L-Val) and brevianamide F. Introduction There has been interest in the utilization of actively growing natural microorganisms for the competitive exclusion of toxigenic fungal species or by using their naturally produced metabolites for inhibiting the germination and growth of these pathogens that cause diseases of humans and contaminate food and feed (Dogi et al. 2013;Faheem et al. 2015;Guo et al. 2011). A key driver is the strict legislative limits, which exist in many countries for mycotoxins in a range of raw and processed staple commodities. Thus, minimization strategies are actively being sought to reduce potential exposure to spoilage and toxigenic moulds in food and feed chains. There is thus interest in the identification of natural compounds that may control or reduce mycotoxigenic mould colonization and toxin contamination of staple commodities. Food and feeds such as cereals, nuts and spices are commonly contaminated with mycotoxigenic fungi and mycotoxins that are a serious problem faced by many countries (Lee and Ryu 2017). This causes significant losses to producer countries when their exports are rejected because they do not meet the legislative limits, especially in Europe. The Rapid Alert System for Food and Feed shows that up to 30% of commodities imported into the EU are rejected because of mycotoxin contamination (RASFF 2018). Species from the genera Penicillium, Fusarium and Aspergillus are of greatest concern in terms of mycotoxin contamination of food and feed. 
A survey conducted in 2014 revealed that more than half of the worlds' regions were severely affected by mycotoxins including fumonisins (FUMs), deoxynivalenol (DON) and zearalenone (ZON) from Fusarium species which had increased when compared to 2013 (Kovalsky 2015). Moreover, the maximum and average ochratoxin A (OTA) concentration in samples from Europe in 2017 was the highest when compared to other regions (Biomin 2018). In a 4-year survey by Limay-Rios et al. (2017) of stored wheat in Canada, Penicillium verrucosum was commonly isolated, as well as contamination with OTA, ochratoxin B (OTB) and citrinin. Aspergillus fumigatus is an opportunistic fungal species, responsible for aspergillosis due to lung infection. Resistance to azoles has resulted in a number of virulent strains which have become difficult to control (Chowdhary et al. 2013), especially in immunocompromised age groups (Steinbach et al. 2012). A. fumigatus also produces the mycotoxin gliotoxin (GLI) (Kupfahl et al. 2008). Indeed, in Manchester, UK, azole-resistant strains were first detected in 1999 (Howard et al. 2009) and are now commonly found in Switzerland, USA, India and China (Hurst et al. 2017;Lockhart et al. 2011;Riat et al. 2018). However, at the present time, no regulations exist with regard to GLI exposure. Streptomyces species are gram-positive filamentous bacteria, which can grow in various ecosystems, including sea sponges (Han et al. 2009), soil (Nguyen et al. 2015), animal faeces (Wang et al. 2014) and termites (Zhang et al. 2013). They are able to produce both secondary metabolites (Wang et al. 2013;Zhang et al. 2013) and hydrolytic enzymes (Karthik et al. 2015;Nagpure and Gupta 2013) or potentially novel anti-microbial compounds (Yang et al. 2015;Yekkour et al. 2015;Shakeel et al. 2018). Some compounds have been shown to inhibit spore germination (Wang et al. 2013;Zhang et al. 2013) and mycelial growth of spoilage fungi (Nguyen et al. 2015). However, many studies have screened metabolites for efficacy only against mycelial growth with less emphasis on control of mycotoxin production or in relation to different interacting environmental conditions. There have thus been significant research efforts to screen, isolate and identify novel compounds with antifungal activities from Streptomyces strains. Previously, Sultan and Magan (2011) examined a Streptomyces strain (AS1) isolated from peanuts. This was found to be competitive and some extracts from the culture were found to be very effective at inhibiting Aspergillus flavus and aflatoxin B 1 production, both in vitro and in stored peanuts. However, this Streptomyces strain and its metabolites have not previously been screened against other spoilage toxigenic moulds or indeed against any human fungal diseasecausing pathogens or for control of toxin biosynthesis. In addition, the compounds responsible for the inhibition were not previously identified. The objectives of this study were to determine the efficacy of the Streptomyces AS1 metabolites on (a) spore germination and mycelial growth of P. verrucosum, Fusarium verticillioides and A. fumigatus, (b) efficacy for control of the production of OTA, FUMs and GLI and (c) to identify the major metabolites produced by the Streptomyces AS1 strain responsible for the control achieved. Bacterial strain A Streptomyces AS1 strain was obtained from the Applied Mycology Collection, Cranfield University. This strain was previously isolated from Egyptian peanuts by Dr. Y. Sultan (Sultan and Magan 2011). 
This strain was subsequently identified as Streptomyces parvus based on molecular analyses (99%; EU accession number: EU841619.1). Mycotoxigenic fungal strains Fusarium verticillioides was from the Applied Mycology Collection, Cranfield University. It was isolated from maize by Dr. N.I.P. Samsudin and molecularly identified (Samsudin et al. 2017) and Penicillium verrucosum (OTA11) and Aspergillus fumigatus (strain Mi538) were kindly provided by Dr. M. Olsen (National Food Administration, Uppsala, Sweden). These fungal strains have all previously been demonstrated to produce high titres of their respective toxins (Cairns-Fuller et al. 2005;Samsudin et al. 2017). Preparation of spore suspensions Streptomyces AS1 and test mycotoxigenic fungi A glycerol stock solution of Streptomyces AS1 was inoculated on half nutrient agar (½NA) and incubated at 25°C for 5 days or until sporulation had occurred. The colonies were flooded with sterile 10 ml of 0.1% (w/v) Tween-80/water solution and harvested by gently scraping the colony with a sterile spreader to release the spores and this was then transferred aseptically into a sterile 50-ml conical tube and the density adjusted to 1.0 at OD 600 or approximate 10 8 spores/ml. A. fumigatus and F. verticillioides were grown on malt extract agar (MEA; Oxoid Ltd) and P. verrucosum on potato dextrose agar (PDA, Oxoid Ltd) for 7 to 10 days or until sporulation occurred at 25 and 30°C (A. fumigatus only). Fungal spores were harvested by pouring 10 ml of sterile 0.1% Tween-80/water solution onto the agar surface containing the cells or spores and gently scraping with a surface-sterilized glass rod. The cell/spore suspensions were transferred into sterile 50-ml tubes, centrifuge at 2000 rpm for 2 min and the supernatants discarded. Fresh sterile 0.1% Tween-80/water was added. The concentration of fungal spores was counted using a haemocytometer (Thoma, Germany) and adjusted to approx. 10 6 /ml with sterile 0.1% Tween-80/water solution. Spore germination assays: media preparation, inoculation and incubation Molten cooled autoclaved ½NA was poured into Petri plates (90 mm ∅) in a sterile flow bench and allowed to solidify. The cooled media were overlaid with a sheet of sterile cellophane (8.5 cm diameter) carefully to avoid any air bubbles. A single colony of Streptomyces AS1 was streak plated on the ½NA previously overlaid with a sterile cellophane sheet and incubated at 25°C for 10 days. At different time intervals (days 2, 5 and 10), the cellophane layer with the Streptomyces AS1 biomass was carefully removed and 100 μl of spore suspension (10 6 spores/ml) of the test pathogens was spread plated onto the agar surface with a sterile glass spreader. These Petri plates were then incubated for 48 h. P. verrucosum and F. verticillioides were incubated at 25°C and A. fumigatus treatments at 30°C. After 24 and 48 h, two agar plugs (18 mm ∅) were taken randomly and placed on a glass slide. They were stained with lactophenol cotton blue, covered with a coverslip prior to microscopic examination. A total of 50 spores in each of four fields were counted in each replicate and the number recorded. Spores were considered germinated when the germ tube length was equal to or greater than the spore diameter (Magan 1988). The experiments were all carried out with three replicates per treatment and repeated once. 
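The adjustment of the harvested suspensions to roughly 10^6 spores/ml is a simple counting and dilution calculation; the sketch below is a generic haemocytometer example in which the chamber square volume and the counts are illustrative assumptions rather than values from this study.

```python
def spores_per_ml(mean_count_per_square, square_volume_ul):
    """Concentration from a haemocytometer count.
    mean_count_per_square: average spores counted per counting square
    square_volume_ul     : volume of one counting square in microlitres (chamber-specific)
    """
    return mean_count_per_square * 1000.0 / square_volume_ul

def dilution_volumes(stock_conc, target_conc, final_volume_ml):
    """C1*V1 = C2*V2: volume of stock and of diluent needed for the working suspension."""
    v_stock = target_conc * final_volume_ml / stock_conc
    return v_stock, final_volume_ml - v_stock

# Example with an assumed 0.004 uL counting square and a mean of 32 spores per square
stock = spores_per_ml(32, 0.004)                       # -> 8.0e6 spores/ml
v_stock, v_diluent = dilution_volumes(stock, 1e6, 10.0)
print(f"stock = {stock:.2e} spores/ml; mix {v_stock:.2f} ml stock "
      f"with {v_diluent:.2f} ml 0.1% Tween-80/water")
```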
Mycelial growth and mycotoxin assays Preparation of cell-free supernatant for growth and mycotoxin inhibition assays The Streptomyces AS1 spore suspension (200 μl) was inoculated into 200 ml ½ strength sterile nutrient broth (NB) and incubated at 30°C at 200 rpm for 4 days. After 4 days, the supernatant was separated from the mycelium by filtration using Whatman filter paper no. 4. Extraction of Streptomyces AS1 bioactive metabolites using ethyl acetate Bioactive metabolites from the Streptomyces AS1 supernatant were extracted three times with ethyl acetate (EA) (Sultan and Magan 2011). Approx. 900 ml of cell-free supernatant was mixed with 300 ml of EA in a separating funnel and after shaking for a few seconds, the mixture was left to separate into two layers. The EA layer (upper layer) containing the bioactive metabolites was collected and the extraction phase was repeated three times. The EA layers containing the bioactive metabolites were combined and the solvent was removed using a rotary evaporator at 38°C. The dried film was dissolved in DMSO under sterile conditions for performing the assays. Efficacy of Streptomyces AS1 EA extract against fungal pathogens using the BioScreen C turbidimetric assay Preparation of culture medium The culture medium used to inoculate the fungal pathogen was prepared as described by Medina et al. (2012). Water was used to prepare the medium and this was adjusted with glycerol to 0.99 and 0.95 water activity (a w ). The culture media used were semi-solid YES (yeast extract sucrose) for P. verrucosum and A. fumigatus and PD for F. verticillioides. Each culture medium contained the following: (1) YES: yeast extract 20 g/l, sucrose 150 g/l, MgSO 4 .H 2 O 0.5 g/l and 0.05% agar (w/v) and (2) PD: potato extract 4 g/l, dextrose 20 g/l and 0.03% agar. Both culture media were sterilized at 121°C for 15 min. Culture medium containing AS1 ethyl acetate extract and spore suspensions A mixture of 9800 μl culture medium, 100 μl of spore suspensions (final spore count 10 5 spores/ ml) and 100 μl of EA extract (final concentrations of 5 μg/ml, 10 μg/ml, 20 μg/ml, 30 μg/ml and 40 μg/ml; w/v) was prepared in sterile 25-ml universal bottles. A total of 300 μl of the mixture were loaded into 100-well microtitre plates and the density measured automatically at 600 nm every 30 min at 25°C for 7 days for P. verrucosum and F. verticillioides and at 37°C for A. fumigatus. This method was based on that developed for screening compounds for efficacy against filamentous fungi using the BioScreen C bioassay (Medina et al. 2012). Each set of experiments was carried out with ten replicates. The relative time to reach an absorbance value of 0.1 (time to detection; TTD-0.1) was then compared as an indication of growth rates. The raw data was analysed using Microsoft Excel to obtain the growth curve for the fungal species. The TTD of 0.1, minimum inhibitory concentrations (MIC) and IC 50 concentrations were calculated using the Lambert-Pearson model (Lambert and Pearson 2000). Agar medium containing EA extract and inoculation of fungal spores Approximately 100 μl of the different concentrations of the EA extract (0-40 μg/ml) was added aseptically into 10 ml sterile YES (for P. verrucosum and A. fumigatus) and PD (for F. verticillioides) agar (± 52°C) with different a w levels (0.99 and 0.95) and inverted several times before pouring into Petri plates (53 mm ∅). 
After the agar media had solidified, 100 μl of spore suspension (10 7 spores/ml) was spread plated using a surface-sterilized glass spreader and incubated at 25°C for P. verrucosum and F. verticillioides and 37°C for A. fumigatus for 7 days. After 7 days, the agar plugs were collected for mycotoxin analysis. Extraction and quantification of ochratoxin A, gliotoxin and fumonisins Ochratoxin A A total of five agar plugs of P. verrucosum after 7 days growth on YES medium were randomly collected across the colony using a sterile cork borer (number 5) and transferred into pre-weighed 2-ml tubes and re-weighed to obtain the weight of the agar plug. The extraction of ochratoxin A (OTA) from the agar plug was carried out using 1 ml of methanol. The samples were shaken at 200 rpm and 30°C in the dark for 1 h. Samples were then centrifuged at 15,000g for 5 min and the methanol containing OTA was filtered (nylon syringe filter, 0.22 μm pore size, Fisher) into amber and silanized HPLC vials (Agilent, UK) for analysis. The separation and quantification of OTA was done using an Agilent 1200 Series HPLC system (Agilent, UK) with a fluorescence detector. The separation was done at 25°C using a Poroshell 120 EC-C18 (4.6 mm × 100 mm, 2.7 μm) column fitted with a guard column (4 mm × 3 mm cartridge, P h e n o m e n e x , U S A ) . T h e m o b i l e p h a s e w a s water:acetonitrile:acetic acid (41:57:2, v/v/v). The flow rate and injection volume were 1 ml/min and 20 μl, respectively. The detection wavelength was 333 nm for excitation and 460 nm for emission. Different concentrations of OTA standards (0-400 ng/ml) were prepared by dissolving OTA standard (Sigma) solution in methanol (R 2 = 0.9994). The LOD and LOQ were 12.3 ng/ml and 41.1 ng/ml, respectively. Gliotoxin The agar plugs of the growing colonies were obtained as described previously for OTA. The extraction was carried out using chloroform (1 ml) for 1 h at 30°C and shaken at 200 rpm. After this, 800 μl chloroform containing gliotoxin (GLI) was transferred into a new 2-ml tube and dried overnight in a fume cupboard. The dried extract was dissolved in 700 μl of mobile phase solution (1% acetic acid:acetonitrile (75:25, v/v) and filtered into amber and silanized HPLC vials for analysis. The separation and quantification of GLI was done according to Alonso et al. (2016) with slight modification. The GLI concentration was measured using a reversed-phase HPLC system linked to a diode array detector (DAD). The separation was done at 25°C using a Zorbax Eclipse XDB-C18 (4.6 mm × 150 mm, 5.0 μm, Agilent, USA) column fitted with a guard column (4 mm × 3 mm cartridge, Phenomenex, USA). The mobile phase was acetic acid:water (1:99, v/v) (eluent A) and acetonitrile (eluent B). The flow rate, injection volume and detection wavelength were 1.3 ml/min, 50 μl and 268 nm respectively. The gradient programme was 25% B for 10 min followed by rapid increased to 100% in 1 min and this was held for 8 min before decreasing this to 25% in 2 min. Working GLI standard solution with a concentration range from 0 to 3000 ng/ml was prepared by diluting GLI standard (Sigma) in mobile phase (R 2 = 0.9999). The LOD and LOQ were 41 ng/ml and 138 ng/ml, respectively. Fumonisins Fumonisins B 1 and B 2 (FB 1 , FB 2 ) are the main mycotoxins in the suite produced by F. verticillioides. The agar plugs were extracted by adding 1 ml of acetonitrile:water (50:50, v/v) and shaking the mixture at 200 rpm for 1 h at 30°C. 
Acetonitrile:water (50:50, v/v) containing FB 1 and FB 2 was then filtered into amber and silanized HPLC vials for analysis. The separation and quantification of FBs was done using the HPLC-FLD system (Agilent, UK). The separation was done using a Zorbax Eclipse Plus C18 (4.6 mm × 150 mm, 3.5 μm) column + a guard column (security guard, 4 mm × 3 mm cartridge, Phenomenex, USA) at 30°C. The detection was at 335 nm for the excitation and 440 nm for the emission wavelength. The mobile phase was 50 mM NaH 2 PO 4 (pH 4.01):methanol (50:50, v/v) (eluent A) and acetonitrile:water (80:20) (eluent B). The injection volume was 15 μl with a flow rate of 1 ml/min. The gradient programme was 0% B for 5 min, increasing B to 50% in 1 min and held for 7.10 min before slowly increasing to 80% in 6.90 min. Before injecting the standards or samples, 10 μl of standards or samples was mixed with 5 μl of derivatization solution (OPA). The derivatization solution consisted of 1 ml ortho-phthaldialdehyde (40 mg of OPA in 1 ml absolute methanol), 5 ml of 0.1 M Na 2 B 4 O 7 .10H 2 O and 50 μl of 2-mercaptoethanol. The derivatization was carried out with an auto-derivatization programme in the HPLC system. Different concentrations of FB standards (0-5 μg/ml) were prepared by diluting FB standard solution (Sigma-Aldrich, USA) in acetonitrile:water (50:50, v/v) (R 2 = 0.9972 for FB 1 and 0.9959 for FB 2 ). The LOD and LOQ were 0.37 μg/ml and 1.21 μg/ml for FB 1 and 0.44 μg/ml and 1.48 μg/ml for FB 2 , respectively. Identification of bioactive compounds from Streptomyces AS1 EA extract Sample preparation, detection and quantification were performed as described by Malachová et al. (2014). Briefly, the extraction solvent (acetonitrile/water/acetic acid; 79/20/1) was added to dried ethyl acetate extract and after shaking and centrifugation, the extract was injected into LC-MS/MS equipped with a TurboV electrospray ionization (ESI) source. The Phenomenex C18-column (150 × 4.6 mm, 5 μm) fitted with a C18 security guard cartridge (4 × 3 mm) was used to separate the compounds. The mobile phases consisted of methanol/water/acetic acid with the ratio of 10/89/1 (v/v/v) for eluent A and 92/2/1 (v/v/v) for eluent B. Both eluents contain 5 mM ammonium acetate. Dual-culture assays Single colonies of the Streptomyces AS1 were inoculated onto ½NA as a 2-cm streak approximately 2 cm from the 9-cm Petri plate edge. After incubation at 25°C for 48 h, an amount of 5 μl of fungal spore suspension (10 6 spores/ml) was applied at a distance of 3-4 cm from the Streptomyces AS1. A. fumigatus assays were incubated at 30°C and the other assays at 25°C for 7 days. The inhibition was determined based on the fungal colony area and macroscopic interaction between the dual cultures with each colony given an individual numerical score. These were added up to obtain an overall index of dominance (I D ) as developed by Magan and Lacey (1984). Each interacting species was given an individual score based on the following numerical scores: 1:1-mutual intermingling, 2:2-mutual antagonism on contact, 3:3-mutual antagonism at a distance, 4:0-dominance of one species on contact and 5:0-dominance of one species over the other at a distance. All the experiments were done with three triplicates per treatment and repeated once. Statistical analysis Normal distribution of data was checked by the Shapiro-Wilk W test. 
The general influence of the antifungal extract on fungal growth and mycotoxin production was checked using one-way analysis of variance (ANOVA) for normally distributed data and the Kruskal-Wallis test (rank sums) for non-normally distributed data. Student's t test was further applied to compare the means for each treatment for normally distributed data, and the Wilcoxon method (nonparametric comparison) was used for non-normally distributed datasets. A significance level of p < 0.05 was used to compare treatments. JMP Pro (SAS Institute Inc., Cary, NC, USA) was used for these analyses. Results Streptomyces AS1 for control of fungal pathogens using dual-culture assays A significant reduction in the colony area (cm2) of all the toxigenic fungi occurred in the presence of the Streptomyces AS1 culture (Fig. 1). The inhibition was best against P. verrucosum (90%), followed by the A. fumigatus strain (Mi538; 59%) and F. verticillioides (51%). The interaction score between the Streptomyces AS1 and the isolates of all three species was 5:0, indicating dominance at a distance. The total ID, which was the sum of the individual scores, was thus 15:0 (AS1:mycotoxigenic strain). Indeed, the mycotoxigenic strains of all three species grew away from the Streptomyces AS1 strain. There was no increase in the colony area of the pathogens after day 3, especially for P. verrucosum. Identification of bioactive compounds from the Streptomyces AS1 strain Table 1 shows the major compounds extracted from the ethyl acetate fraction of the Streptomyces AS1 strain biomass. There were four major compounds present. These included valinomycin (150 μg/g dry biomass), cyclo(L-Pro-L-Tyr) (22 μg/g), cyclo(L-Pro-L-Val) (10 μg/g) and brevianamide F (3 μg/g). In addition, six very minor compounds common in some food matrices were identified, including rugulusovin. Effect of Streptomyces AS1 ethyl acetate extract on time to detection of P. verrucosum, A. fumigatus (Mi538) and F. verticillioides using the BioScreen assay method Figure 3 shows an example of the mean growth curves of P. verrucosum and F. verticillioides obtained after inoculation with different concentrations of the Streptomyces AS1 ethyl acetate extract at 0.95 aw over 7 days; each curve represents the mean of ten individual replicates. Figure 4 shows the effect of different concentrations of the AS1 ethyl acetate extract at two water activity levels (0.99 and 0.95 aw) on the time to detection (TTD) at an OD equal to 0.1 for P. verrucosum, A. fumigatus and F. verticillioides. Overall, the TTD of P. verrucosum was the highest at both aw levels, followed by A. fumigatus and F. verticillioides. For P. verrucosum, AS1 extract at 5 μg/ml resulted in a significant increase (p < 0.05) of the TTD at both aw levels, indicative of effective control of growth. At 0.99 aw, there was a significant increase in the TTD (p < 0.05) up to 30 μg/ml, after which the effect stabilized. For A. fumigatus, with freely available water (0.99 aw), the TTD was significantly increased up to a concentration of 30 μg/ml. At 0.95 aw, this was the case for concentrations up to 20 μg/ml. For F. verticillioides, the TTD was significantly increased (p < 0.05) with concentrations up to 20 μg/ml of AS1 extract at both aw levels.
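The TTD values discussed above amount to reading, for each well, the first time at which the OD trace crosses 0.1; a minimal sketch of that calculation on synthetic OD readings (not the BioScreen data) is shown below.

```python
import numpy as np

def time_to_detection(times_h, od, threshold=0.1):
    """First time (h) at which OD reaches the threshold, linearly interpolated
    between the readings bracketing the crossing; returns None if never reached."""
    times_h, od = np.asarray(times_h, dtype=float), np.asarray(od, dtype=float)
    above = np.nonzero(od >= threshold)[0]
    if above.size == 0:
        return None
    i = above[0]
    if i == 0:
        return float(times_h[0])
    t0, t1, y0, y1 = times_h[i - 1], times_h[i], od[i - 1], od[i]
    return float(t0 + (threshold - y0) * (t1 - t0) / (y1 - y0))

# Synthetic example: readings every 0.5 h for 7 days
t = np.arange(0, 168.5, 0.5)
od_control = 0.02 + 0.5 / (1 + np.exp(-(t - 40) / 6))   # untreated well
od_treated = 0.02 + 0.5 / (1 + np.exp(-(t - 90) / 6))   # well with extract (delayed growth)
print(time_to_detection(t, od_control), time_to_detection(t, od_treated))
```

A longer TTD in the treated well relative to the control is the signature of growth inhibition used throughout this section.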
The minimum inhibitory concentration (MIC) and the IC50 concentrations of the AS1 extracts were determined and are shown in Table 2. The MIC for growth of P. verrucosum, A. fumigatus (Mi538) and F. verticillioides at both aw levels was higher than the highest concentration tested (> 40 μg/ml). The IC50 for P. verrucosum suggested that it was the most sensitive species, with 0.15 μg/ml and 0.005 ng/ml at 0.99 and 0.95 aw, respectively. Efficacy of Streptomyces AS1 ethyl acetate extract on mycotoxin production The effect of the AS1 extract (Table 3) on mycotoxin production was examined. For P. verrucosum, complete control of OTA production was achieved, regardless of aw level, at the lowest concentration tested (5 μg/ml). For gliotoxin, there was relatively little control by the concentration range tested at both aw levels. Indeed, there appeared to be some stimulation at intermediate concentrations. Even at 40 μg/ml, there was no difference between the control and the treatment at both aw levels. For F. verticillioides, there was a significant reduction (p < 0.05) in FB1 and FB2 at both aw levels when compared with the control. AS1 concentrations of 5-40 μg/ml gave similar inhibition of FB1 at both aw levels and of FB2 at 0.95 aw. For FB2 at 0.99 aw, > 20 μg/ml of AS1 was needed to significantly (p < 0.05) inhibit production. For P. verrucosum, because no OTA was detected at 5 μg/ml AS1 extract, further studies with lower concentrations of the AS1 extract were carried out to identify more accurately the concentrations at which complete inhibition of OTA occurred. Figure 5 shows that there was a significant decrease in OTA production at 0.99 and 0.95 aw when the concentration was 1.25 μg/ml and complete inhibition at ≥ 2.5 μg/ml. However, at a very low concentration of 0.15 μg/ml and 0.99 aw, OTA production was stimulated when compared with the untreated control. Discussion In the present study, the Streptomyces AS1 strain produced compounds which could control the activity of food spoilage mycotoxigenic fungi and an opportunistic human pathogen. In colony-based interactions, the Streptomyces AS1 generally inhibited all three species tested, with an interaction score of 5:0 indicating dominance at a distance and the production of antifungal metabolites. Previous studies with Streptomyces species have been associated with the production of primary and secondary metabolites, including hydrolytic enzymes and antibiotic-like compounds (Prapagdee et al. 2008; Taechowisan et al. 2005). The effect of the mixture of Streptomyces AS1 metabolites on spore germination of the strains of P. verrucosum, F. verticillioides and A. fumigatus was dependent on the time frame of cultivation of the Streptomyces AS1 strain. With longer culturing times, more metabolites were produced, resulting in better efficacy, especially against P. verrucosum, for which none of the spores germinated under any of the conditions tested. In contrast, the mixture of compounds was not effective against A. fumigatus and F. verticillioides, although the metabolites did delay germination. Previously, complete inhibition of conidial germination of A. flavus spores by metabolites of Streptomyces AS1 had been reported (Sultan and Magan 2011). The efficacy of the mixture of the Streptomyces AS1 extract was further examined using a spectrophotometric turbidimetric rapid assay method based on the BioScreen C system.
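IC50-type estimates such as those in Table 2 can be illustrated with a simple dose-response fit; the sketch below uses a generic two-parameter log-logistic (Hill) curve rather than the Lambert-Pearson model actually used in the study, and the dose-response values are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, ic50, slope):
    """Two-parameter log-logistic curve: relative growth as a function of dose."""
    return 1.0 / (1.0 + (dose / ic50) ** slope)

# Illustrative dose-response data (relative growth vs extract concentration, ug/ml);
# these numbers are made up and are not the measurements behind Table 2.
dose = np.array([0.01, 0.1, 1.0, 5.0, 10.0, 20.0, 40.0])
rel_growth = np.array([0.97, 0.85, 0.55, 0.35, 0.25, 0.15, 0.08])

(ic50, slope), _ = curve_fit(hill, dose, rel_growth, p0=[1.0, 1.0],
                             bounds=([1e-6, 0.1], [100.0, 10.0]))
print(f"estimated IC50 ~ {ic50:.2f} ug/ml (Hill slope {slope:.2f})")
```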
The effects of different concentrations of metabolites from the AS1 strain and of water activity (aw) on the TTD for P. verrucosum, F. verticillioides and A. fumigatus and on mycotoxin production were studied. The TTD represents the initial growth phase of the fungi. The shorter the TTD, the less effective the mixture of metabolites was against fungal growth. Overall, the AS1 mixed extract was more effective in controlling the growth of P. verrucosum, even at low concentrations. This paralleled the findings on the inhibition of conidial spore germination of the strain of this species. More importantly, the mixed AS1 extract suppressed the production of OTA by P. verrucosum and also reduced the production of FB1 and FB2 by F. verticillioides at both aw levels examined. Indeed, OTA was not detected when 5 μg/ml of the AS1 extract was used. Thus, more detailed efficacy testing was done to determine the lowest concentrations which could be used to inhibit OTA production. Complete inhibition of OTA production was achieved at very low concentrations (2.5 μg/ml). Thus, the predominant metabolites produced by the Streptomyces AS1 were fungistatic against spore germination and growth of P. verrucosum; the IC50 concentrations were < 0.20 μg/ml and resulted in complete inhibition of OTA mycotoxin production at both aw levels tested. However, this did not occur with A. fumigatus, responsible for aspergillosis of the lungs, for which there was no control of gliotoxin production. Previous studies with other Streptomyces strains have reported little or only partial control of P. verrucosum growth (Nguyen et al. 2018; Paškevičius et al. 2014). However, those studies focused only on the control of mycelial growth and not on mycotoxin production. A previous study by Medina et al. (2007) found that compounds from another Streptomyces strain were effective in controlling Aspergillus carbonarius growth and OTA production. A total of 10 compounds were found to be present in the AS1 ethyl acetate extract. The major compound present was valinomycin (150 μg/g). Three others that may have contributed to the antifungal activity were cyclo(L-Pro-L-Tyr), cyclo(L-Pro-L-Val) and brevianamide F. Comparisons were made between the efficacy of the mixed AS1 extract and that of valinomycin and cyclo(L-Pro-L-Tyr), for which standards are available, tested alone and as a mixture. These were found to have no effect on growth of the target species when compared with the mixed AS1 metabolites (MohD Danial 2019). This suggests that the combined mixture has much better efficacy than the major individual compound or a mixture of the two main compounds alone. The production of these compounds by other Streptomyces species and by bacteria such as Bacillus sp. N strain, Pseudomonas aurantiaca and Cellulosimicrobium cellulans has been reported previously (Buedenbender et al. 2018; Gwee Kyo et al. 2011; Kumar et al. 2013; Li et al. 2006; Park and Shim 2014; Park et al. 2008; Wattana-Amorn et al. 2016). The other compounds found were mainly cyclic ionophores which were present in very low amounts and thus probably did not directly contribute to the antifungal activity. Previous studies have shown variable results in terms of efficacy depending on the target fungal genera or species, including Aspergillus, Fusarium, Penicillium, Rhizoctonia and Candida species (Gwee Kyo et al. 2011; Kumar et al. 2013; Park and Shim 2014; Park et al. 2008). Although brevianamide F was a relatively minor component, its potential role in antifungal activity has not been previously described.
In conclusion, the mixture of metabolites produced by the Streptomyces AS1 species was more effective in suppressing spore germination and mycelial growth of P. verrucosum than of F. verticillioides or A. fumigatus. A very low concentration of AS1 extract was able to inhibit mycelial growth of P. verrucosum by 50% although > 40 μg/ml of the mixture of compounds was needed for complete inhibition. The mixture of metabolites produced by the Streptomyces AS1 successfully inhibited OTA production by P. verrucosum completely at concentrations of 2.5 μg/ml. The potential for using the mixture of these metabolites now needs to be examined in situ in stored temperate cereals to examine the efficacy for control of colonization by P. verrucosum and OTA contamination during short-and medium-term storage.
7,177.8
2020-01-20T00:00:00.000
[ "Biology", "Chemistry", "Environmental Science" ]
A study on fluoroscopic images in exposure reduction techniques ― Focusing on the image quality of fluoroscopic images and exposure images Abstract The quality of present-day fluoroscopic images is sufficiently high for use as exposure images, depending on the environment in which the fluoroscopic images are recorded. In some facilities which use fluoroscopic images as exposure images, they are recorded with a radiological x-ray diagnostic device equipped with a fluoroscopic storage function. There are, however, cases where fluoroscopic images cannot be used as exposure images because the quality of the fluoroscopic image cannot be assured in the environment where the fluoroscopic images are recorded. This poses problems when stored fluoroscopic images are used in place of exposure images without any clearly established standard. In the present study, we establish that stored fluoroscopic images can be used as exposure images by using gray values obtained from profile curves. This study finds that replacement of exposure images with stored fluoroscopic images requires a gray value difference of 20.1 or higher between the background and the signal, using a 20 cm thick acrylic phantom (representing an adult abdomen) as the specific geometry. This suggests the conclusion that the gray value can be considered a useful index when using stored fluoroscopic images as exposure images. Exposure images are recorded to document the progress of the treatment, serving as the record used for treatment assessments of the blood vessel in the diseased area and as image data for repeated observations following the treatment. For this reason, the imaging dose is set higher than the fluoroscopy dose. 8 Images are also used in documenting that appropriate treatment has been performed. However, the quality of present-day fluoroscopic images is sufficiently high for use as exposure images, depending on the environment where the fluoroscopic images are recorded. 9 2.A | Definition of gray value Digital images are collected as numerical data that need to be converted to brightness values to be observed as images. The brightness is expressed as the gray level. With x-ray images that have 8 bits per pixel of information, values from 0 to 255 in gray scale are represented by gradations from dark (black) to light (white). The representation with numerical values using this gradation is termed the gray value. 10,11 This study uses the gray value to determine balloon expansion in coronary artery treatment, as depicted in fluoroscopic and exposure images obtained with a cardiovascular x-ray diagnostic device. 2.B | Equipment used For the cardiovascular x-ray diagnostic devices, we used the Allura 10/10 manufactured by Philips, the Innova 2100IQ manufactured by GE, and the Trinias B8 MiX package manufactured by Shimadzu Co., Ltd., Hamamatsu, Japan. For the video view, we used kada-View manufactured by Photron M&E Solutions Inc., Tokyo, Japan, and ImageJ was used to analyze the images. We used a 20 cm thick acrylic phantom as the subject, and the area dose meter installed in the manufacturers' equipment as the dosimeter. For visual and physical evaluations, we used the balloons for coronary artery treatment commonly used in clinical settings: 2.5/15 mm, 3.0/10 mm, 3.5/10 mm, and 4.0/10 mm (balloon diameter/balloon length) manufactured by Kaneka Corporation, and Iovelin 350 (Teva Takeda Pharma Ltd., Nagoya, Japan) as the contrast agent.
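The profile-curve analysis described above can be illustrated with a short sketch: take a line profile across the balloon region, estimate the background gray value from the profile ends, take the largest deviation as the signal, and compare the difference with the 20.1 threshold reported in this study. The synthetic profile and the background/signal conventions below are assumptions, not the exact ImageJ procedure used by the authors.

```python
import numpy as np

def profile_gray_difference(profile, background_margin=10):
    """Difference between the background gray value (median of the profile ends)
    and the signal gray value (largest deviation in the central region)."""
    profile = np.asarray(profile, dtype=float)
    background = np.median(np.r_[profile[:background_margin], profile[-background_margin:]])
    central = profile[background_margin:-background_margin]
    extremum = central[np.argmax(np.abs(central - background))]
    return abs(extremum - background)

def usable_as_exposure_image(profile, threshold=20.1):
    """Apply the criterion derived in this study (>= 20.1 gray value difference)."""
    return profile_gray_difference(profile) >= threshold

# Synthetic 8-bit profile: uniform phantom background around 128 with a darker balloon marker
profile = 128 + 2 * np.random.randn(200)
profile[90:110] -= 30          # simulated balloon marker dip
print(profile_gray_difference(profile), usable_as_exposure_image(profile))
```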
2.C | Dose settings of x-ray diagnostic devices for circulatory organs Using each of the tested devices, we measured the x-ray doses of fluoroscopic and exposure images at the patient irradiation reference point. Figure 1 shows the measurement geometry, and Table 1 shows the conditions of the exposure and fluoroscopic imaging in the Anterior-Posterior (AP) imaging direction. The settings summarized in Table 3 are intended to overcome the phenomenon in which rises in tube voltage lead to blurring of images, which is a characteristic of the Flat Panel Detector (FPD). The characteristic of the dose settings of device B is to control the x-ray dose with a large tube current. Taking device A, which has the lowest dose setting, as the standard, the dose received by patients with device B is about 1.6 times, and with device C about 2.2 times, higher than that of device A. Usually, when the dose rate is high, the image quality improves, but the evaluation of the balloons with small diameters in devices B and C, with the higher dose rates, was lower than in device A. This is because device A controls the image quality by controlling the x-ray dose. However, in devices B and C the signals of the balloons are flattened because these devices cut soft x-rays using an additional filter, and this may lower the visual evaluation due to poor control of the image quality with the digital filter, as shown in Table 3. CONCLUSIONS This study finds that replacement of exposure images with stored fluoroscopic images requires a gray value difference of 20.1 or higher between the background and the signal, using an acrylic phantom of 20 cm thickness (representing the abdomen of a human adult) as the specific geometry. This suggests the conclusion that the gray value can be considered a useful index when using stored fluoroscopic images as exposure images. ACKNOWLEDGMENTS We express our gratitude to the physicians of the Cardio Vascular Center and the radiological technologists of the Radiation Technology Division of Showa University for their cooperation in data collection and image evaluations in this study. CONFLICT OF INTEREST The authors declare no conflicts of interest associated with this manuscript.
1,265.6
2019-04-01T00:00:00.000
[ "Medicine", "Physics" ]
A Deep Coordination Graph Convolution Reinforcement Learning for Multi-Intelligent Vehicle Driving Policy With the growth of Internet of Things technology, the application of the Internet of Things has been popularized in the field of intelligent vehicles. Therefore, more artificial intelligence algorithms, especially DRL methods, are being more widely used in autonomous driving. A large number of deep reinforcement learning (RL) technologies were initially applied to the behavior planning module of single-vehicle autonomous driving. However, autonomous driving is an environment where multi-intelligent vehicles coexist, interact with each other, and dynamically change. In this environment, multiagent RL technology is one of the most promising technologies for solving the coordination behavior planning problem of multiple vehicles. However, research related to this topic is rare. This paper introduces a dynamic coordination graph (CG) convolution technology for the cooperative learning of multi-intelligent vehicles. This method dynamically constructs a CG model among multiple vehicles, effectively reducing the impact of unrelated intelligent vehicles and simplifying the learning process. The relationship between intelligent vehicles is refined using the attention mechanism, and the graph convolution RL technology is used to simulate the message-passing aggregation algorithm to maximize the local utility and obtain the maximum joint utility to guide coordination learning. Driving samples are used as training data, and the model guided by reward shaping is combined with the model of the free graph convolution RL method, which enables our proposed method to achieve high gradualness and improve its learning efficiency. In addition, as the graph convolutional RL algorithm shares parameters between agents, it can easily scale to large-scale multiagent systems, such as traffic environments. Finally, the proposed algorithm is tested and verified for the multivehicle cooperative lane-changing problem in a simulation environment for autonomous driving. Experimental results show that our proposed method has a better value function representation, in that it can learn better coordination driving policies than traditional dynamic coordination algorithms. Introduction Autonomous driving, regarded as a cognitive system, is composed of the following three main models: perception, planning, and control [1,2]. The various models of this cognitive system comprise many methods, each of them describing the subcomponents of those models and the interaction interfaces between the models [3]. The planning model can be divided into route planning, behavior planning, and motion planning [4]. An increasing amount of research has used artificial intelligence in recent years to solve the behavior planning problem of autonomous driving [5], especially since the deep reinforcement learning (DRL) method has achieved great success [6], and many researchers have applied DRL to such a problem [7]. Most of the studies have applied the single-agent reinforcement learning (RL) method to the autonomous driving environment [8][9][10]. For example, the classic deep Q-network (DQN) method is applied to automatic driving to solve the lane-changing problem [11], and the actor-critic method is applied to the behavioral decision-making problem of automatic driving [12]. Most of the work has focused on the behavior decision making of single-intelligent vehicles.
Although some of them have considered other road elements to predict their behavior [13], the goal had been to learn a decision-making method of single-intelligent vehicles, and the decision making had not considered the integrated coordinated decision making of multi-intelligent vehicles. However, the future scheme will likely involve intelligent transportation systems with multi-intelligent vehicles. Autonomous vehicles can obtain more information about other vehicles. If such vehicle types can coordinate with other intelligent vehicles; then, they can drive more safely and efficiently. For example, the interaction and coordination between intelligent vehicles and surrounding vehicles can help intelligent vehicles to understand road traffic information, the location of other vehicles, or the different behavior plans of other vehicles. Consequently, intelligent vehicles can make coordinated decisions that are conducive to the overall road situation and ensure a safer and more efficient intelligent transportation system. Multiagent DRLs typify a good method of solving the decision-making problem of multivehicle coordination driving [14]. However, the studies on multi-intelligent vehicle coordination methods at present are few. In the multi-intelligent vehicle environment, the multiagent RL (MARL) [15] method can be used to learn the coordination policy of intelligent vehicles. However, large numbers of intelligent vehicles and dynamically changing environments may both complicate the interaction relationship in the policy learning process. Consequently, simplifying the relationship between agents in the learning process has gradually become a vital research field [16]. In a general multiagent environment, predefined rules are usually used to abstract the relationship between agents [17]. However, with the increasing number of agents and the growing complexity of environments, accurately defining the relationship between agents only by using predesigned rules has become increasingly difficult [16]. Some researchers have used the soft attention mechanism to calculate the importance distribution of each agent to its neighboring agents [18]. Although the attention mechanism can be used to learn the interaction between agents in the graph convolutional RL method, the output value of the softmax function is a relative value [19]. Consequently, cooperative agents unnecessarily obtain important weights, and truly modeling the relationship between agents becomes impossible. In addition, the softmax function usually generates extremely small but nonzero probability values that are assigned to irrelevant agents, thus, weakening the degree of attention that should have been given to cooperative agents. Especially in multiintelligent vehicle environment, when the driving distance between vehicles is small and the vehicle density is large, as well as because the driving behavior of vehicles affects each other, the coordination between intelligent vehicles is particularly important for improving vehicle safety and traffic efficiency. For traffic environments, such as highways, our previous work proposed a multivehicle coordination method based on a dynamic collaboration graph [20]. We use the safety field model between vehicles to dynamically construct the cooperative relationship between vehicles and use the multiagent learning method to learn the decision-making strategy of multivehicle cooperative driving. 
However, in the process of learning, we used the variable elimination (VE) method to solve for the global utility, which requires some rules to be specified manually and is contrary to the purpose of agent self-learning. Therefore, we use a graph neural network combined with reinforcement learning, a method that autonomously discovers the collaborative utility. Based on the previous multivehicle coordination learning strategy decision [20], we have made the following contributions. (1) Further, the security field model is combined with the attention mechanism of the graph model and the graph neural network, and the security field model is used as the hard attention mechanism of the graph model to dynamically construct the collaborative relationship. (2) The attention mechanism is used to learn the interaction weights between the explicit CGs. (3) The graph convolution process is used to simulate the belief propagation algorithm and solve the overall maximum utility, which is subsequently used to guide the intelligent vehicle to learn the coordination policy. (4) Existing expert knowledge is used to initially discover the coordination rules between intelligent vehicles and, on this basis, to further learn coordination driving behavior policies. Moreover, to determine the effectiveness of the method, a set of scenarios involving 5, 8, and 11 vehicles is verified in a highway simulation environment. We conduct multivehicle training in an open-source simulation environment. Our method achieves higher safety rewards and driving speeds when multiple vehicles drive together, and it scales well. Related Work In studies about autonomous driving, many research institutions and scientific research teams have used artificial intelligence methods to enable intelligent vehicles to learn autonomously and to promote the intelligent development of autonomous vehicles [1]. Among them, RL is an unsupervised learning method that can learn a policy based on real-time feedback, and it is widely used in the field of intelligent vehicle driving policy learning [4]. The RL method treats the vehicle as an agent and learns driving policies through interaction with the environment [21]. The interaction process is a Markov decision process (MDP). Loiacono et al. [22] used traditional reinforcement learning to train autonomous vehicles to learn driving strategies in a simulation environment. Guo and Wu [23] used an approximate function combined with the policy gradient method to achieve good results in a racing game environment. In recent years, the DRL method that combines deep learning and RL has greatly promoted the application of RL in more complex driving environments. Some researchers combine driving rules with RL to train driving strategies [24]. Talpaert et al. [25] used DRL to learn in real-world simulation. In a simple autonomous driving scenario, Chae et al. [26] used the DRL method to make the autonomous vehicle learn how to brake. Belletti et al. [27] proposed a multiobjective vehicle merging strategy. Makantasis et al. [28] proposed a Q-mask DRL method to learn highway driving policies. In other studies, the DRL method was used to train autonomous vehicles to learn a safety policy in a variety of scenarios [29]. A hierarchical DRL framework was proposed to help vehicles focus on surrounding vehicles and learn a smooth driving policy [30]. Proximal policy optimization was applied to control autonomous driving and subsequently to actual vehicles [31].
The other studies [32][33][34] introduced important research aspects pertaining to DRL in autonomous driving. Behavior planning is one of the fields of greatest concern in automatic driving [35]. This aspect can make autonomous vehicles drive safely and efficiently. Much research on behavior planning has increasingly applied RL technology [36]. Alizadeh et al. [37] trained a DRL agent to control the lane-change policy of intelligent vehicles in a simulated environment. Chen et al. [30] designed a hierarchical DRL algorithm to learn the lane-changing behavior in a dense traffic environment. Wang et al. [38] proposed a Q-learning method for automatic lane changes in highway environments. Yuan et al. [39] used various excitation mechanisms to learn different lane-changing policies in highway environments. Wang et al. [40] proposed a Q-learning method based on dense microsimulation to learn lane changes on highways. Bey et al. [41] learned the tactical behavior planning of intelligent vehicles by predicting the characteristics of other vehicles. Sefati et al. [42] proposed an RL method to learn the tactical behavior planning of intelligent vehicles in urban scenes under uncertain conditions, in which the intentions of surrounding road users are taken into account. The above research shows that artificial intelligence methods, especially DRL, have been widely used in the field of automatic driving, but at present, there are few scenarios in which multivehicle cooperation is considered. These DRL methods mainly study the driving strategy of a single vehicle, while ignoring the interaction and coordination between multiple vehicles [4]. Obviously, the benefits of applying a single-agent learning method directly to a multivehicle environment may be limited. Other methods propose graph theory as an abstract model of vehicle interaction and formation [43], but they mainly focus on formation and signaling. Although some researchers have abstracted the cooperative relationship between multiple vehicles by using a cooperative graph, their graphs are only based on the relative position or the initialization sequence number of the vehicles [20,44]. In the process of learning, they could only consider the local joint utility while the individual utility is ignored, thus affecting the results of coordination learning. Markov Decision Processes and Reinforcement Learning. The nature of intelligent vehicle decision making is a random process that depends on the environment. Markov decision processes (MDPs) are an important stochastic model of sequential decision making [45] and form the theoretical basis of reinforcement learning algorithms. Define {X_n} (n = 0, 1, 2, ...) as a sequence of nonnegative integer random variables; for n ≥ 0, any nonnegative integer sequence i_0, i_1, ..., i_n and any j, the conditional probability P{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_1 = i_1, X_0 = i_0} is defined. {X_n} (n = 0, 1, 2, ...) is a discrete-time Markov chain if, for any i and j, P{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0} = P{X_{n+1} = j | X_n = i}, and it is homogeneous if P{X_{n+1} = j | X_n = i} = P{X_1 = j | X_0 = i}. The Markov chains considered here are homogeneous and have independent increments. We consider X as a random state whose transition function is independent of the history of states; this property is significant and should be satisfied when solving engineering problems. In this paper, the Markov chains are homogeneous chains.
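The homogeneous-chain assumption can be illustrated with a toy example: for a homogeneous chain the n-step transition probabilities p_ij^(n) are simply the entries of the n-th power of the one-step transition matrix. The three-state matrix below is an illustrative assumption, not a model from the paper.

```python
import numpy as np

# Toy homogeneous Markov chain over 3 abstract driving states (illustrative numbers only)
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.1, 0.3, 0.6],
])

def n_step_transition(P, n):
    """p_ij^(n) = P{X_n = j | X_0 = i}: for a homogeneous chain, the n-th matrix power."""
    return np.linalg.matrix_power(P, n)

print(n_step_transition(P, 1))   # one-step probabilities (the matrix itself)
print(n_step_transition(P, 5))   # five-step probabilities
```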
For a Markov chain $\{X_n\}\ (n = 0, 1, 2, \dots)$ with state space $S$ and states $i, j \in S$, the probability of moving from state $i$ to state $j$ in $n$ steps is $p_{ij}^{(n)} = P\{X_n = j \mid X_0 = i\}$; for $n = 1$ this reduces to the one-step transition probability. An MDP extends the Markov chain by describing the influence of actions on the state transitions. An MDP is defined as a 5-tuple $\{S, A, r, P, \eta\}$, where $S$ is a discrete or continuous state space, $A$ is a discrete or continuous action space, $r : S \times A \to \mathbb{R}$ is the reward function, $P$ is the state-transition model, and $\eta$ is the objective function to be optimized; the Markov property is satisfied for all $i, j \in S$, $a \in A$, and $n \ge 0$. Here $P(i, a, j)$ is the probability that state $i$ turns into state $j$ after action $a$ is performed, and $r(i, a, j)$ is the reward obtained when state $i$ turns into state $j$ after action $a$. The decision-making objective is the total expected return $\eta = E[\sum_{t=0}^{\infty} \gamma^t r_t]$, where $E(\cdot)$ denotes the mathematical expectation, $\gamma \in [0, 1)$ is the discount factor that reduces the value of rewards over time, and $r_t$ is the immediate reward obtained by performing an action in a state at time $t$. Reinforcement learning realizes the control optimization by optimizing either the value function or the policy. Let $S_t$ and $A_t$ be the state and action sets at time $t$, let $\pi_t$ be the policy at time $t$, and let $\pi = (\pi_0, \pi_1, \dots)$ be the MDP policy sequence, where each policy is a mapping $\pi : S \to A$; the decision objective can then be maximized from any initial state. The MDP state value function is $V^{\pi}(s) = E_{\pi}[\sum_{t=0}^{\infty} \gamma^t r_t \mid S_0 = s]$, where $E_{\pi}(\cdot)$ is the mathematical expectation under policy $\pi$; $V^{\pi}(s)$ is the total expected, discounted return obtained when starting in state $s$ and following $\pi$ thereafter, and the value function is introduced in order to optimize the policy. The MDP action value function is defined analogously as $Q^{\pi}(s, a) = E_{\pi}[\sum_{t=0}^{\infty} \gamma^t r_t \mid S_0 = s, A_0 = a]$. In our approach, the state-transition probabilities and the reward model are unknown, but the complete state information of each intelligent vehicle can be observed. The algorithm applied in this paper is Q-learning [46], which estimates the expected discounted future reward. At time $t$, the state $s$, the action $a$ taken in $s$, the reward $R(s, a)$, and the next state $s'$ are all observed, and the Q value is updated as $Q(s, a) \coloneqq Q(s, a) + \alpha\,[R(s, a) + \gamma \max_{a'} Q(s', a') - Q(s, a)]$, where $\alpha \in (0, 1)$ is the learning rate. Q-learning converges to the optimal value $Q^{*}(s, a)$ provided that all state-action pairs are visited under a reasonable exploration strategy.
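To make the update rule concrete, the following minimal sketch (our own illustration, not code from the paper) applies the tabular Q-learning update with an epsilon-greedy exploration strategy; the environment object and its reset/step interface are assumptions.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning sketch; `env` is assumed to expose reset() -> s and
    step(a) -> (s_next, reward, done), with integer states and actions."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy exploration over the current Q estimates
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # Q(s,a) <- Q(s,a) + alpha * [R + gamma * max_a' Q(s',a') - Q(s,a)]
            td_target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (td_target - Q[s, a])
            s = s_next
    return Q
```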
3.2. Single-Intelligent Vehicle MDP and RL. In this paper, we simulate motorway scenarios. Figure 1 shows the surroundings perceived by the intelligent vehicle and illustrates how the coordination graph is constructed. Taking intelligent vehicle no. 0 as an example, $d_1$ is the distance to the nearest vehicle ahead of vehicle no. 0 in the carriageway, with velocity $v_1$ and acceleration $a_1$; $d_2$ is the distance to the nearest vehicle behind vehicle no. 0 in the carriageway, with velocity $v_2$ and acceleration $a_2$; $d_3$ is the distance to the nearest vehicle ahead of vehicle no. 0 in the overtaking lane, with velocity $v_3$ and acceleration $a_3$; and $d_4$ is the distance to the nearest vehicle behind vehicle no. 0 in the overtaking lane, with velocity $v_4$ and acceleration $a_4$. The perceptions of the intelligent vehicle are used to represent its state, where $l$ indicates lane occupancy and takes the value 1 when the vehicle occupies the carriageway and 2 when it occupies the overtaking lane. A state space of too high a dimension leads to information redundancy and is not conducive to solving the problem. Following [20], and considering both the vehicle states and the driving surroundings, the residual reaction times during driving are taken as state variables: $t_1$ is the reaction time with respect to a decelerating vehicle ahead in the carriageway, and $d_{m1}$ is the corresponding minimum safety distance when the intelligent vehicle brakes with a deceleration of $-6$ m/s$^2$; $t_2$ and $d_{m2}$ are the analogous quantities for the vehicle behind in the carriageway; and $t_3$, $d_{m3}$, $t_4$, and $d_{m4}$ are the corresponding parameters for the overtaking lane. The state space then becomes $S = \{(l, t_1, t_2, t_3, t_4)\}$. To avoid overly frequent decisions, an intelligent vehicle that is between the two lanes keeps the decision made at the previous moment. The independent decision of an intelligent vehicle chooses from the following set of MDP actions: $a_1$, drive in the carriageway at the task velocity; $a_2$, accelerate in the carriageway up to the maximum safe velocity; $a_3$, drive in the carriageway at the minimum velocity limit; $a_4$, follow the vehicle ahead in the overtaking lane; $a_5$, drive at the task velocity in the overtaking lane; $a_6$, drive at the maximum velocity in the overtaking lane. The carriageway task velocity is randomly generated in the interval [60, 100] km/h, with a maximum velocity of 100 km/h; when there is no vehicle ahead, the vehicle drives at its task velocity. If a planned velocity $v_{plan}$ exists, it is computed with the method of [47], as summarized in Table 1. The following distance is the shortest distance required for the intelligent vehicle to come to a stop behind the vehicle ahead: for a vehicle on the carriageway, $d_{follow} = v_0^2/(-2 a_0) + 5$, where $a_0 = -6$ m/s$^2$ is the preset braking deceleration and 5 m is the vehicle length. The task velocity is updated every ten seconds, and the overtaking velocity in the overtaking lane is 110 km/h. During independent driving, the reward of Equation (4) gives a positive return when the distance to the other vehicles in the occupied lane is greater than 3 meters ($l = 1$ for the carriageway and, in the same way, $l = 2$ for the overtaking lane), a return of $-5$ in the case of a collision, and 0 otherwise. The MDP decision interval is 10 seconds. When the distances between vehicles lead to safety issues of mutual influence, the behaviors of the intelligent vehicles have to be coordinated on the basis of the residual reaction times, the shortest distances, and the perceived surrounding context, so that the vehicles take collaborative actions.
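For illustration only, the state and reward just described can be sketched as follows; the residual-reaction-time formula of Equation (3) is not reproduced in the text, so the helper below is a stand-in, and the positive reward magnitude is an assumed placeholder.

```python
def follow_distance(v0, a0=-6.0, vehicle_len=5.0):
    """Shortest following distance d_follow = v0**2 / (-2*a0) + vehicle_len,
    with the preset braking deceleration a0 = -6 m/s^2 quoted in the paper."""
    return v0 ** 2 / (-2.0 * a0) + vehicle_len

def residual_reaction_time(gap, v_rel, d_min):
    """Stand-in for the residual-reaction-time state variable: time margin
    before the gap shrinks to the minimum safe distance d_min.  The exact
    expression used in the paper is not recoverable from the extracted text."""
    closing = max(v_rel, 1e-3)           # closing speed, clipped to stay positive
    return max(gap - d_min, 0.0) / closing

def single_vehicle_state(lane, gaps, v_rels, d_mins):
    """State s = (l, t1, t2, t3, t4): lane occupancy plus the four residual
    reaction times (front/rear in carriageway and overtaking lane)."""
    times = [residual_reaction_time(g, v, d) for g, v, d in zip(gaps, v_rels, d_mins)]
    return (lane, *times)

def reward(collided, min_gap, safe_reward=1.0):
    """Reward sketch: -5 on collision, a positive reward (value assumed here)
    when the gap to other vehicles in the occupied lane exceeds 3 m, else 0."""
    if collided:
        return -5.0
    return safe_reward if min_gap > 3.0 else 0.0
```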
Multivehicle Relationship Representation in the CG Model. When multiple intelligent vehicles coexist in the same environment and learn their policies at the same time, the scenario can be regarded as a MARL problem. The instability problem arises because the multiple intelligent vehicles learn in the same environment, and teaching agents coordination policies in this nonstationary environment is extremely challenging, especially when the intelligent vehicles also have to deal with the incomplete information caused by communication constraints or local observability. Existing methods rarely focus on the coordination relationship between intelligent vehicles. In the CG [48] model, the environment of the multiple intelligent vehicles is represented as an undirected graph structure in which the points represent intelligent vehicles and the edges represent the coordination relationships between agents. This setting provides a modeling and theoretical basis for the agents to reach coordinated decisions. CGs are an effective way of addressing the abovementioned problems: a CG represents the global value function as a linear combination of local value functions and thereby reduces the influence of the number of intelligent vehicles on the computational complexity. This decomposition can be described with an undirected graph $G = (V, E)$ in which each node $i \in V$ represents an agent and each edge $(i, j) \in E$ indicates that the corresponding agents must make coordinated decisions. On the basis of the CG model representing the coordination relationships of the intelligent vehicles, belief propagation algorithms such as VE or max-sum [49] can be used to solve for the global maximum utility, which in turn guides the vehicles to learn coordination policies.
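As an illustration of this decomposition (our own sketch, not the paper's implementation), the global utility can be written as a sum of local edge payoffs; here a brute-force maximization over a tiny graph stands in for VE or max-sum message passing.

```python
from itertools import product
import numpy as np

def global_utility(joint_action, edge_q):
    """Global utility Q(a) = sum over edges (i, j) of Q_ij(a_i, a_j).
    `edge_q` maps an edge (i, j) to an (n_actions x n_actions) payoff array."""
    return sum(q[joint_action[i], joint_action[j]] for (i, j), q in edge_q.items())

def best_joint_action(n_agents, n_actions, edge_q):
    """Exhaustive maximisation of the factored utility (only viable for tiny
    graphs); in practice VE / max-sum message passing replaces this loop."""
    best, best_val = None, float("-inf")
    for joint in product(range(n_actions), repeat=n_agents):
        val = global_utility(joint, edge_q)
        if val > best_val:
            best, best_val = joint, val
    return best, best_val

# Example: three vehicles, two edges, two actions each.
edge_q = {(0, 1): np.array([[1.0, 0.0], [0.0, 2.0]]),
          (1, 2): np.array([[0.5, 1.5], [0.0, 0.5]])}
print(best_joint_action(3, 2, edge_q))
```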
Method. First, we use the dynamic CG model to represent the objects that need to cooperate with a vehicle. Our dynamic coordination model uses the DSF [50] as a danger-relationship representation between intelligent vehicles in order to dynamically construct a CG that captures the interaction relationships among the vehicles. On this basis, we further refine the interaction weights by using the attention mechanism. Then, we use graph convolution to simulate the belief propagation process and learn the driving policy. At the beginning of the training we use existing expert samples as a model to guide policy learning and determine the potential coordination policies contained in the existing rules; after learning a policy under the guidance of the expert samples, we continue to explore new coordination policies. The relationship between the models is shown in Figure 2. Dynamic CG Generation Model Based on the Safety Field. We take the DSF model of automated driving as the dynamic relationship generator of the intelligent vehicles and express the interaction relationships between intelligent vehicles as a graph model. Through the DSF, the risk relationship between vehicles can be dynamically calculated to identify which intelligent vehicles need to cooperate. With this method, the global policy learning problem can be simplified to a coordination-policy learning problem among several small groups of intelligent vehicles, and a simple abstraction of the relationship between intelligent vehicles can be realized. The DSF is a kind of "physical field" characterizing the influence of the various factors in the vehicle driving environment on the driving risk. As a physical quantity, the DSF is calculated from the dynamic changes of these factors during driving. This study investigates the scene of a two-lane highway where all of the vehicles are moving autonomous vehicles. As the vehicles are assumed to strictly abide by the traffic rules, we only consider the "kinetic energy field" and the "behavior field" between vehicles. Figure 3 shows the field-strength distribution of driving safety, from which the degree of interaction between vehicles can be judged directly. We define the vertex set and edge set to construct the CG denoted by $G(V, E)$. The vertex set is composed of the vehicle set $V = C$. Given a group of vehicles denoted by $C$, we check whether each vehicle pair needs to establish a coordination relationship according to the motion characteristics of the vehicles, so as to avoid possible collision accidents. A general scenario is used to illustrate in detail how the coordination relationship between two vehicles is analyzed. The scenario represents typical vehicle driving on a highway: vehicles 1 and 2 are running in the same lane (vehicle 1 is the leading vehicle and vehicle 2 is the lagging vehicle), their speeds are $v_1$ and $v_2$, and the following distance is $d$. In this scenario, the field strength affecting the driving safety of vehicle 1 is composed of the kinetic energy field formed by vehicle 2 and the behavior field formed by its driving style; the direction of the field strength of these two fields at vehicle 1 is opposite to the direction of $v_1$. In the formulas of the DSF model, $E^V_{21}$ is the kinetic field strength that vehicle 1 receives from vehicle 2, $E^D_{21}$ is the behavior field strength that vehicle 1 receives from vehicle 2, and $E^S_{21}$ is the total field strength acting on vehicle 1; the model constants are $G = 0.001$, $k_1 = 1$, $k_2 = 0.05$, $R_1 = R_2 = 1$, $M_1 = M_2 = 5000$ kg, and $D_{r1} = D_{r2} = 0.2$. According to Equation (6), when the distance between the two vehicles decreases and the relative speed increases, the force between the two vehicles grows and indicates a higher degree of driving danger. We set $F_{safe}$ to 360 N [32]. If $F_1 > F_{safe}$, the driving risk between the two vehicles is large, and a coordination relationship between them should be established. Through the DSF, we can then use the CG to express the intelligent vehicles that have interactive relationships. Coordination Relationship Based on the Attention Mechanism. In the process of intelligent vehicle driving, each intelligent vehicle in the region should play a different role in the decision making. The structure of the method is shown in Figure 4. The weight of each edge in the CG should also be different; therefore, we train an attention model to learn the importance weight of each edge in the CG. In this manner, the multiple intelligent vehicles can be organized as a graph structure in which an intelligent vehicle is only connected with the intelligent vehicles with which it needs to interact, and the weight on each edge describes the importance of the corresponding relationship. In our method, the dynamic CG represents the interaction between two intelligent vehicles, and the attention mechanism calculates the importance of the interaction between vehicles and refines the relationships between the intelligent vehicles in the graph model.
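Before the attention weights are learned, the hard edges of the CG come from the DSF criterion above. A minimal sketch of the dynamic edge construction follows; since the exact DSF expressions of Equations (5) and (6) are not reproduced here, the force function below is a placeholder that only preserves the stated monotonicity (larger force for smaller gaps and larger closing speeds).

```python
import itertools

F_SAFE = 360.0  # N, threshold above which two vehicles must coordinate (from the paper)

def interaction_force(gap, v_rel):
    """Placeholder for the DSF force: the exact expression is not recoverable
    from the extracted text, so this stand-in only grows as the gap shrinks
    and the closing speed grows (the scale factor is arbitrary)."""
    return 1000.0 * max(v_rel, 0.0) / max(gap, 1.0) ** 2

def build_coordination_graph(vehicles):
    """Dynamic CG construction: vehicles are nodes, and an edge (i, j) is added
    whenever the pairwise force exceeds F_SAFE.  `vehicles` is a list of dicts
    with 'pos' (longitudinal position, m) and 'v' (speed, m/s)."""
    edges = []
    for i, j in itertools.combinations(range(len(vehicles)), 2):
        gap = abs(vehicles[i]["pos"] - vehicles[j]["pos"])
        v_rel = abs(vehicles[i]["v"] - vehicles[j]["v"])
        if interaction_force(gap, v_rel) > F_SAFE:
            edges.append((i, j))
    return edges
```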
In the previous section, our dynamic graph model has used the DSF model to dynamically calculate and determine whether an interaction exists between any two intelligent vehicles as a means of preliminarily judging the relationship between agents. In this section, we use the attention mechanism to further determine the relationship weight. The attention mechanism is a widely used technology for improving the accuracy of a model, and it can effectively learn the relationship representation between entities. We take each intelligent vehicle as an entity and use the multihead dot product attention as a convolution kernel, as this approach can effectively calculate the coordination relationship between vehicles. For each vehicle i, we calculate the relationship between this vehicle and its k-neighboring vehicles. The input features of each intelligent vehicle are mapped onto the query, key, and value representation of each independent attention head. For the attention head m, the relationship between vehicle i and neighbor vehicle j is calculated as follows: where d k is the dimension of the key (k) vector for preventing the dot product of two vectors from becoming too large. For each attention head, as shown in Equation (8), the value representations of all input features are weighted and aggregated by the learned relationships between vehicles. The attention coefficient α ij further refines the relationship between agents in the graph model, but the input order of the features is ignored by the kernel. In our proposed scheme, the multihead attention mechanism allows the kernel to simultaneously focus on the different relation representation subspaces by reusing the attention mechanism as a means of further stabilizing the training. The attention mechanism is used in this study to derive the weight of the coordination relationship (edge) between the intelligent vehicle (nodes) in the multivehicle CG. Graph Convolutional Coordination RL. On the basis of the weighted CG model, we can learn the coordination policy. In traditional methods, belief propagation algorithms, such as VE or maximum sum, are used to solve the global utility problem, but the coordination function needs to be artificially defined in advance. The graph convolution method can function similarly to belief propagation on the graph by means of auto-learning, and it can aggregate messages from local to global to solve the joint utility [18]. In this study, we use the graph convolution method, an automatic learning belief propagation algorithm, to guide the policy learning of intelligent vehicles. In addition, the convolution kernel in the graph convolution network (GCN) [50] can further learn how to refine the relationship representation between agents and aggregate the contributions of neighboring agents with influences on the agents. GCN allows agents to adjust the focus according to the driving state of the vehicle, and it uses the superposition of multiple GCN layers to extract high-order relationship representations. The GCN can effectively capture the interaction between vehicles in a larger-scale domain to promote the coordination decision making among vehicles in a much larger range. For each intelligent vehicle, the generated state and relationship features are connected and inputted into the deep Q network. Then, the deep Q network selects the action to maximize the Q value and executes it through the exploration strategy. 
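The per-edge attention weights and the neighbor aggregation performed by each convolution layer can be sketched as follows; this is our own single-head illustration with assumed shapes, whereas the paper uses multi-head attention stacked over several GCN layers.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def attention_conv_layer(H, neighbors, Wq, Wk, Wv):
    """One attention-based graph-convolution step (single head for brevity).
    H[i] is the feature vector of vehicle i and neighbors[i] lists the CG
    neighbours of i (including i itself).  alpha_ij = softmax_j(q_i . k_j / sqrt(d_k)),
    and the new feature of i is the alpha-weighted sum of the neighbours' values."""
    d_k = Wk.shape[1]
    out = np.zeros((H.shape[0], Wv.shape[1]))
    for i, nbrs in enumerate(neighbors):
        q = H[i] @ Wq
        keys = np.stack([H[j] @ Wk for j in nbrs])
        vals = np.stack([H[j] @ Wv for j in nbrs])
        alpha = softmax(keys @ q / np.sqrt(d_k))   # relation weights on the edges of i
        out[i] = alpha @ vals
    return out
```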
Each intelligent vehicle calculates the loss gradient through the global Q value and reward value, and the global loss gradient is then applied to all intelligent vehicles. This approach allows an intelligent vehicle not only to focus on maximizing its own expected return but also to consider how its decision will affect the other intelligent vehicles; in this way, the intelligent vehicle can learn the coordination policy. In addition, each intelligent vehicle is connected through the state encodings of the nearby intelligent vehicles, which results in a much more stable environment from the perspective of a single intelligent vehicle. In the forward reasoning, the state and relation features are propagated through $L$ stacked GCN layers (each GCN layer is one graph neural network layer), and $Q(o_i^t)$ is the Q value of the final output. In the model for control based on graph-convolution reinforcement learning, the vehicles adopt the centralized-training and distributed-execution mode, and all vehicles share the weights. At each time step during training, tuples $(S, A, S', R, C)$ are stored in the experience replay buffer $B$. Then, we randomly draw a small batch of $s$ samples from $B$ and minimize the squared Q loss between the network prediction $Q(o_i, a_i, c_i; \omega)$ and the target $y_i = r_i + \gamma \max_{a'_i} Q(o'_i, a'_i, c_i; \omega')$, where $s_i \in S$ is the current state of intelligent vehicle $i$, $c_i$ is the adjacency matrix composed of the intelligent vehicle and its neighboring intelligent vehicles, $\gamma$ is the discount factor (the model is parameterized by $\omega$ and the target network by $\omega'$), and $R$ is the immediate reward of the intelligent vehicle. The Q-loss gradients of all intelligent vehicles are accumulated, and the parameters are updated. As each intelligent vehicle only needs information from its $k$ neighboring intelligent vehicles during the execution of an action, the total number of intelligent vehicles can be ignored. This allows the graph-convolution RL method to be easily scaled and applied to large-scale multiagent systems such as autonomous driving.
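A schematic of the shared TD target and accumulated Q loss is given below; array shapes and names are our own assumptions, and the network evaluation itself is left abstract.

```python
import numpy as np

def td_targets(rewards, next_q_values, gamma=0.99, dones=None):
    """y_i = r_i + gamma * max_a' Q_target(o'_i, a', c_i).
    `next_q_values` has shape (batch, n_agents, n_actions) and is assumed to
    come from the target network with parameters omega'."""
    max_next = next_q_values.max(axis=-1)
    if dones is not None:
        max_next = max_next * (1.0 - dones)
    return rewards + gamma * max_next

def joint_q_loss(q_values, actions, targets):
    """Mean squared TD error accumulated over all intelligent vehicles; the
    gradient is shared because every vehicle uses the same weights omega."""
    chosen = np.take_along_axis(q_values, actions[..., None], axis=-1).squeeze(-1)
    return float(np.mean((targets - chosen) ** 2))
```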
Model-Based Dynamic Graph Convolution RL. Although existing DRL performs well in many application scenarios, it still encounters serious learning-efficiency problems when faced with complex tasks, especially sequential decision making. DRL often consumes considerable computing resources to achieve satisfactory results, which is far from the efficiency of humans, and its blind trial and error in the early stage of learning greatly limits the learning efficiency of the agents. The use of prior knowledge or experience to improve algorithm performance is considered important for artificial intelligence; for example, imitation learning uses prior knowledge to directly guide each decision of RL, thus greatly speeding up policy learning. To accelerate the early learning, we use the idea of model-based RL and reward shaping [51] to pretrain the model, and we introduce the expert samples generated by other well-performing coordination algorithms as an additional reward value to guide the agent's decision making. In this manner, the learning efficiency of the model-free RL algorithm can be further improved. Although some experimental results have shown that this approach can significantly improve the learning efficiency of agents, implementing artificial expert rules as model constraints in complex environments is difficult; especially in the MARL scenario, coordinated decision making between agents can hardly be realized by establishing explicit expert rules. Most prior knowledge in complex scenes is instead contained in rich expert samples, such as human driving data in traffic environments or driving data generated by other well-performing algorithms. Therefore, we use an offline guidance method to steer the learning process of the model-free graph convolution RL with model constraints learned from the expert samples. In this manner, the intelligent vehicle can make full use of the "existing knowledge" and learns well in the early stage of training. In the later stage of learning, facing complex multiagent coordination tasks, exploratory learning is realized through the continuous trial and error of the graph convolution RL, which preserves the asymptotic performance and generalization ability of the algorithm. At the beginning of the training phase, we judge the similarity between the learned policy and the expert sample. If the chosen action is consistent with the policy given by the expert sample, the reward is adjusted to $r_i = r(s_t, a_t) + r_d(s_t, a_t; \varphi)$, where $r(s_t, a_t)$ is the reward under normal circumstances and $r_d(s_t, a_t; \varphi)$ is an additional reward used to encourage the current policy to act like the expert. The shaped reward is then fed back to the graph convolution RL and combined with the immediate reward from the environment. We extend this idea to the multivehicle environment, guiding the learning of the intelligent vehicles by taking well-performing coordinated driving sample data as the prior knowledge.
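A minimal sketch of the expert-guided reward shaping is shown below; the bonus value and its decay schedule are illustrative assumptions, not values taken from the paper.

```python
def shaped_reward(env_reward, agent_action, expert_action, bonus=0.5, step=0, decay=1e-3):
    """Reward shaping with expert samples: the environment reward r(s_t, a_t) is
    augmented by an extra term r_d when the agent's action matches the action
    recommended by the expert sample in the same state."""
    if expert_action is None or agent_action != expert_action:
        return env_reward
    # anneal the shaping term so that late training is driven by the environment alone
    return env_reward + bonus / (1.0 + decay * step)
```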
Experimental Results and Analysis. In the experimental environment of the highway, we used different methods to learn the driving policy of the vehicles. A total of 5000 rounds of training was used for all of the methods, and the results are averages over ten training runs. We evaluated the model-based (reward shaping) dynamic CG convolutional RL (MB-GCN) method guided by expert rules, the graph convolution RL based on the dynamic CG model of the driving safety field (DSF-GCN), and the plain graph convolutional RL (GCN). At the same time, we adjusted the linear ratio between the safety reward and the rapidity reward in the model to test the diversity of the developed driving policies; the model trained with an increased rapidity-reward ratio is denoted DSF-GCN2. To better contextualize the performance of the model, we also used the classic Mobil mobility model [52] and the expert rules [53], and the two CG methods (I-DCG and P-DCG) [20] were compared with our method. Figure 5(a) shows the learning curves of the different methods with respect to the average rewards. MB-GCN, DSF-GCN, and DSF-GCN2 converge to a higher average reward value and converge faster than all of the other models. As expected, independent learning, the mobility model, and the expert rules do not consider the coordination relationship between agents; they may reach wrong driving decisions and perform poorly. Because the mobility model and the expert rules have no ability to relearn, their final results are much worse than those of the other methods. Unexpectedly, the P-DCG method frequently selected the lane-changing (aggressive driving) decision because of its excessive pursuit of the speed reward; although this allowed the vehicle to reach a much higher driving speed, the driving safety of the vehicle was ignored. We also compared the effects of the different reward ratios on the diversity of policy learning. Although the differences between DSF-GCN and DSF-GCN2 are relatively small, the driving policies learned by DSF-GCN were more inclined toward safe driving and returned quickly to the driving lane on the premise of ensuring driving safety. DSF-GCN2, which had a larger rapidity reward, learned a more radical driving policy. Although this scheme could also ensure driving safety, its controlled autonomous vehicles tended to favor speed and eventually formed a stable vehicle formation in the overtaking lane. This is a good simulation of real-life human drivers: conservative drivers usually drive smoothly in the driving lanes and only change to the overtaking lane in emergency situations, whereas radical drivers tend to keep occupying the overtaking lane in order to drive fast. In testing the performance of the different algorithms, we report not only the curve of the final reward value but also a variety of indicators that evaluate the microscopic attributes of the vehicles; the accumulated speed variation was included as an important evaluation index. However, the independent learning method has no coordination mechanism and could not learn a coordination policy. Moreover, due to the dynamic changes in the environment caused by the decisions of the surrounding vehicles, this scheme was prone to selecting frequent lane-changing decisions. Relationship learning between intelligent vehicles helps them generate coordination policies, which is why all GCN-based methods are better than the independent learning method. However, the traditional GCN may rely too much on the reward that a single evaluation index brings to the vehicle, dropping the policy learning into a local optimum. DSF-GCN can develop more complex driving policies, as it does not fixate on conservative driving but appropriately increases its driving speed while ensuring driving safety; this can greatly help to improve traffic efficiency. To study the influence of vehicle density on the performance of the model, we conducted experiments in the autonomous-vehicle environment with different vehicle densities. As shown in Figures 6(a)-6(c), as the density of the vehicles increases, the differences between the various methods become more apparent. Among them, MB-GCN remains the method that obtains the highest reward value, which fully demonstrates the benefits of our method in terms of learning efficiency and learning effect. The increase in the number of vehicles causes an increase in environmental instability, which in turn greatly hinders the GCN when learning the relationships between agents. A good convergence effect can be achieved in the low-density five-vehicle environment. However, as the number of agents increases, although a quick convergence can still be attained, the final learning results indicate that the values can prematurely fall into a local optimum. Especially during traffic congestion, such a policy can hardly solve the coordinated decision making among the multiple vehicles. When the traffic density is low, by contrast, the simple relationships between vehicles are beneficial to the learning of the GCN.
In Figure 6(d), we show the results of using the previously trained model parameters directly in the 11-vehicle environment without retraining. It is worth noting that the MB-GCN method without retraining can still obtain the highest reward value, and its gap with respect to the retrained method is the smallest, which fully demonstrates the scalability of MB-GCN. Interestingly, the reward values of all retrained GCN methods are slightly higher; the vehicle speed is reduced and the number of lane changes is also significantly reduced, but the safety of the vehicles is not greatly affected. The reason is that in the low-density case the small number of vehicles makes collisions unlikely, so the learning of the driving decisions mainly focuses on how to accelerate through the overtaking lane, escape traffic congestion quickly, and obtain a better driving environment. As the vehicle density increases, however, vehicles need to travel longer to reach a better driving environment, and the increased collision probability makes them tend toward more conservative driving decisions, so the learned driving decisions favor avoiding collision accidents. Conclusion. We focus on promoting the coordination among multiple intelligent vehicles through relationship learning and propose a dynamic CG convolutional RL method that introduces model constraints. By combining the construction of a dynamic CG with the soft attention mechanism, the interference of irrelevant vehicles can be effectively removed, and the learning efficiency of the algorithm can be accelerated while maintaining good asymptotic performance. The vehicle can adapt to the dynamic changes of the underlying graph, and it can use the latent features of the relational kernel convolution to learn coordination policies from a gradually increasing receptive field. The centralized training allows the gradient of an intelligent vehicle to act not only on itself but also on the other intelligent vehicles in its receptive field; in this manner, the intelligent vehicles learn to coordinate. At the same time, well-performing driving samples are used as training data to combine the reward-guided model with the model-free graph convolution RL. This approach reduces the invalid exploration of the intelligent vehicles and guides them away from driving policies that merely reduce their own speed, in order to obtain a driving policy that balances safety and efficiency. In the scene of multivehicle cooperative driving on highways, the dynamic coordination-graph convolution RL performs significantly better than the existing methods. In the future, we will continue the research on multivehicle cooperative driving. At present, our research focuses on fully cooperative automated driving; in the next step, we will study human-machine hybrid multivehicle cooperative driving, in which it is difficult for an autonomous vehicle to drive safely and efficiently together with a human driver. Data Availability. The data used to support the findings of this study are included within the article.
9,977.8
2022-06-28T00:00:00.000
[ "Computer Science", "Engineering" ]
J/ ψ production as a function of charged-particle pseudorapidity density in p–Pb collisions at √ s NN = 5.02 TeV We report measurements of the inclusive J/ ψ yield and average transverse momentum as a function of charged-particle pseudorapidity density d N ch / d η in p–Pb collisions at √ s NN = 5.02 TeV with ALICE at the LHC. The observables are normalised to their corresponding averages in non-single diffractive events. An increase of the normalised J/ ψ yield with normalised d N ch / d η , measured at mid-rapidity, is observed at mid-rapidity and backward rapidity. At forward rapidity, a saturation of the relative yield is observed for high charged-particle multiplicities. The normalised average transverse momentum at forward and backward rapidities increases with multiplicity at low multiplicities and saturates beyond moderate multiplicities. In addition, the forward-to-backward nuclear modifi-cation factor ratio is also reported, showing an increasing suppression of J/ ψ production at forward rapidity with respect to backward rapidity for increasing charged-particle multiplicity. Introduction Quarkonium states, such as the J/ψ meson, are prominent probes of the deconfined state of matter, the Quark-Gluon Plasma (QGP), formed in high-energy heavy-ion collisions [1].A suppression of J/ψ production in nucleus-nucleus (AA) collisions with respect to that in proton-proton (pp) collisions has been observed by several experiments [2][3][4][5][6][7][8][9][10].A remarkable feature is that, for J/ψ production at low transverse momentum (p T ) at the Large Hadron Collider (LHC), the suppression is significantly smaller than that at lower energies [5,7,10].The measurements of J/ψ production in proton (deuteron)-nucleus collisions, where the formation of the QGP is not expected, are essential to quantify effects (often denoted "cold nuclear matter, CNM, effects"), present also in AA collisions but not associated to the QGP formation.At LHC energies, gluon shadowing/saturation is the most relevant effect which was expected to be quantified with measurements in p-Pb collisions [11,12].Furthermore, a novel effect, coherent energy loss in CNM (medium-induced gluon radiation), was proposed [13]. 
The measurements in d-Au collisions at the Relativistic Heavy Ion Collider (RHIC) have underlined the role of CNM effects in J/ψ production at √ s NN = 200 GeV [14][15][16].At the LHC, the first measurements of J/ψ production in minimum-bias p-Pb collisions at √ s NN = 5.02 TeV [17,18] showed that J/ψ production in p-Pb collisions is suppressed at forward rapidity with respect to the expectation from a superposition of nucleon-nucleon collisions.The data have been further analysed to provide more differential measurements and discussed in comparison with several theoretical models [19].A fair agreement is observed between data and models including nuclear shadowing [12] or saturation [20,21]; also models including a contribution from coherent energy loss in CNM [13] describe the data.These measurements are also relevant with respect to J/ψ production in Pb-Pb collisions at the LHC [5,7], currently understood to be strongly influenced by the presence of a deconfined medium.The measurements of ϒ production in minimum-bias p-Pb collisions at the LHC [22,23] are also consistent with predictions based on CNM effects.Recent measurements of the ψ(2S) state in p-Pb collisions have revealed a larger suppression than that measured for J/ψ production [24,25].Such an observation was not expected from the available predictions based on CNM effects. Concurrently, measurements of two-particle angular correlations in p-Pb collisions at the LHC [26][27][28][29][30][31][32] revealed for high-multiplicity events features that, in Pb-Pb collisions, have been interpreted as a result of the collective expansion of a hot and dense medium.Furthermore, the identified particle p T spectra [33] show features akin to those in Pb-Pb collisions, where models including collective flow, assuming local thermal equilibrium, agree with the data. The measurement of J/ψ production as a function of centrality in p-Pb collisions at the LHC [34] showed that the nuclear effects depend on centrality.ϒ production has been studied as a function of chargedparticle multiplicity in pp and p-Pb collisions by the CMS collaboration [35].The yields of ϒ mesons increase with multiplicity, while a decrease of the relative production of ϒ(2S) and ϒ(3S) with respect to ϒ(1S) is observed.The measurement of D-meson production as a function of event multiplicity in p-Pb collisions [36] exhibits features similar to those observed earlier in pp collisions, both for J/ψ [37] and D-meson [38] production. 
In this Letter measurements of the inclusive J/ψ yield and average transverse momentum as a function of charged-particle pseudorapidity density in p-Pb collisions at √ s NN = 5.02 TeV are presented.Performed in three ranges of rapidity for p T > 0 with the ALICE detector at the LHC, these measurements complement the studies of J/ψ and ψ(2S) production as a function of the event centrality estimated from the energy deposited in the Zero Degree Calorimeters (ZDC) [34,39].A measurement as a function of the charged-particle multiplicity does not require an interpretation of the event classes in terms of the collision geometry.Importantly, it enables the possibility to study rare events where collective-like effects may arise.The present data allow the investigation of events with very high multiplicities of charged particles, corresponding to less than 1% of the hadronic cross section and establish as well a connection to the recent measurements of D-meson production as a function of event multiplicity [36].A measurement of the forward-to-backward J/ψ nuclear modification factor ratio is also presented. Experiment and data sample The ALICE central barrel detectors are located in a solenoidal magnetic field of 0.5 T. The main tracking devices in this region are the Inner Tracking System (ITS), which consists of six layers of silicon detectors around the beam pipe, and the Time Projection Chamber (TPC), a large cylindrical gaseous detector providing tracking and particle identification via specific energy loss.Tracks are reconstructed in the active volume of the TPC within the pseudorapidity range |η| < 0.9 in the laboratory frame.The first two layers of the ITS (|η| < 2.0 and |η| < 1.4), the Silicon Pixel Detector (SPD), are used for the collision vertex determination and the charged-particle multiplicity measurement.The minimum-bias (MB) events are triggered requiring the coincidence of the two V0 scintillator arrays covering 2.8 < η < 5.1 and −3.7 < η < −1.7, respectively.The two neutron Zero Degree Calorimeters (ZDC), placed at 112.5 m on both sides of the interaction point, are used to reject electromagnetic interactions and beam-induced background.The muon spectrometer, covering −4 < η < −2.5, consists of a front absorber, a 3 T • m dipole magnet, ten tracking layers, and four trigger layers located behind an iron-wall filter.In addition to the MB trigger condition, the dimuon trigger requires the presence of two opposite-sign particles in the muon trigger chambers.The trigger comprises a minimum transverse momentum requirement of p T > 0.5 GeV/c at track level.The single-muon trigger efficiency curve is not sharp; the efficiency reaches a plateau value of ∼ 96% at p T ∼ 1.5 GeV/c.The ALICE detector is described in more detail in [40] and its performance is outlined in [41]. 
The results presented in this Letter are obtained with data recorded in 2013 in p-Pb collisions at √ s NN = 5.02 TeV.MB events are used for the J/ψ reconstruction in the dielectron channel at mid-rapidity.The dimuon-triggered data have been taken with two beam configurations, allowing the coverage of both forward and backward rapidity ranges.In the period when the dimuon-triggered data sample was collected, the MB interaction rate reached a maximum of 200 kHz, corresponding to a maximum pile-up probability of about 3%.The MB-triggered events used for the dielectron channel analysis were collected in one of the beam configurations at a lower interaction rate (about 10 kHz) and consequently had a smaller pile-up probability of 0.2%. Due to the asymmetry of the beam energy per nucleon in p-Pb collisions at the LHC, the nucleon-nucleon center-of-mass rapidity frame is shifted in rapidity by ∆y = 0.465 with respect to the laboratory frame in the direction of the proton beam.This leads to a rapidity coverage in the nucleon-nucleon center-of-mass system −1.37 < y cms < 0.43 for the MB events, while the coverage for the dimuon-triggered data for the two different beam configurations is −4.46 < y cms < −2.96 (muon spectrometer located in the Pb-going direction) and 2.03 < y cms < 3.53 (muon spectrometer located in the p-going direction).The integrated luminosities used in this analysis are 51.4±1.9 µb −1 (mid-rapidity), 5.01±0.19nb −1 (forward y) and 5.81±0.20 nb −1 (backward y). Charged-particle pseudorapidity density measurement The charged-particle pseudorapidity density dN ch /dη is measured at midrapidity, |η| < 1, and is based on the SPD information.Tracklets, i.e. track segments built from hit pairs in the two SPD layers, are used together with the interaction vertex position, which is also determined with the SPD information [42].Several quality criteria are applied to select only events with an accurate determination of the z coordinate of the vertex, z vtx .To ensure full SPD acceptance for the tracklet multiplicity N trk evaluation within |η| < 1, the condition |z vtx | < 10 cm is applied for the selection of the events. During the data taking period about 8% of the SPD channels were inactive, the exact fraction being time-dependent.The impact of the inactive channels of the SPD on the tracklet multiplicity measurement varies with z vtx .A z vtx -dependent correction factor is determined from data, as discussed in [37].This factor also takes into account the time-dependent variations of the fraction of inactive SPD channels.The correction factor is randomised on an event-by-event basis using a Poisson distribution in order to emulate the dispersion between the true charged-particle multiplicity and the measured tracklet multiplicities. 
The overall inefficiency, the production of secondary particles due to interactions in the detector material, particle decays and fake-tracklet reconstruction lead to a difference between the number of reconstructed tracklets and the true primary charged-particle multiplicity N ch (see details in [42]). Using events simulated with the DPMJET event generator [43], the correlation between the tracklet multiplicity (after the z vtx -correction), N corr trk , and the generated primary charged particles N ch is determined. The correction factor β to obtain the average dN ch /dη value corresponding to a given N corr trk bin is computed from a linear fit of the N corr trk - N ch correlation. The charged-particle pseudorapidity density value in each multiplicity bin is given relative to the event-averaged value and is calculated as: dN R ch /dη = (dN ch /dη) / ⟨dN ch /dη⟩ = β · ⟨N corr trk ⟩ / (∆η · ⟨dN ch /dη⟩), where ∆η = 2 and ⟨dN ch /dη⟩ is the charged-particle pseudorapidity density for non-single diffractive (NSD) collisions, which was measured to be ⟨dN ch /dη⟩ = 17.64 ± 0.01 (stat.) ± 0.68 (syst.) [42]. The resulting values for the multiplicity bins are summarised in Tables 1 and 2 for forward and mid-rapidity, respectively. For the data at backward rapidity, the values are well within the uncertainties of those at forward rapidity. The fraction of the MB cross section contained in each multiplicity bin (σ /σ MB , derived from the respective event counts in the multiplicity bins and the total number of MB events) is reported in Tables 1 and 2. The softest MB events, which lead to the absence of tracklets in |η| < 1, are not accounted for in this analysis. They correspond to 1.2% of σ MB for the MB-triggered events and 1.9% for the muon-triggered data; the difference is due to the different fraction of inactive channels in the SPD and affects, albeit in a negligible way, only our first multiplicity bin. The multiplicity selection in this analysis allows the data to be sampled in bins containing a small fraction of the MB cross section. Therefore, it gives the possibility to study J/ψ production in rare high-multiplicity events which were not accessible in the centrality-based analysis [34] (where the most-central event class corresponds to the range 2-10% in σ /σ MB ). J/ψ measurement For the J/ψ analysis at forward and backward rapidities, muon candidates are selected by requiring the reconstructed track in the muon chambers to match a track segment in the trigger chambers. Furthermore, the radial distance of the muon tracks with respect to the beam axis at the end of the front absorber is required to be between 17.6 and 89.5 cm. This criterion rejects tracks crossing the high-density part of the absorber, where the scattering and energy-loss effects are large. A selection on the muon pseudorapidity −4 < η < −2.5 is also applied to reject muons at the edges of the spectrometer's acceptance.
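As a simple illustration of the multiplicity axis defined above (not ALICE analysis code), the relative charged-particle density of a bin follows directly from the quantities quoted in the previous section; the bin-wise correction factor and corrected tracklet count are inputs.

```python
MEAN_DNCH_DETA = 17.64   # <dN_ch/deta> for NSD p-Pb collisions at 5.02 TeV [42]
DELTA_ETA = 2.0          # tracklets are counted in |eta| < 1

def relative_multiplicity(n_trk_corr, beta):
    """Relative charged-particle density for one multiplicity bin:
    dN_ch/deta / <dN_ch/deta> = beta * <N_trk^corr> / (delta_eta * <dN_ch/deta>),
    where beta comes from the DPMJET-based tracklet-to-primary correlation fit
    and n_trk_corr is the z_vtx-corrected tracklet count averaged in the bin."""
    return beta * n_trk_corr / (DELTA_ETA * MEAN_DNCH_DETA)
```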
When building the invariant mass distributions, each dimuon pair (of a given p T and y) is corrected by the detector acceptance times efficiency factor 1/(A×ε(p T ,y)).The A×ε(p T ,y) map is obtained from a particlegun Monte Carlo (MC) simulation based on GEANT 3 [44] and simulating the detector response as in [18].Since the A×ε-factor does not depend on the multiplicity for the event multiplicities relevant for the p-Pb analyses, the simulated events only contain a dimuon pair at the generator level.The simulations assume an unpolarised J/ψ production.The same reconstruction procedure and selection cuts are applied to MC events and to real data.The extraction of the J/ψ signal in the dimuon channel is performed via a fit to the A×ε-corrected opposite-sign (OS) dimuon invariant mass distributions obtained for p T < 15 GeV/c.The fitting procedure is similar to that used in a previous J/ψ analysis in p-Pb collisions [18].The distributions are fitted using a superposition of J/ψ and ψ(2S) signals and a background shape.The resonances are parameterized using a Crystal Ball function with asymmetric tails while for the background a gaussian with its width linearly varying with mass is used.In the present analysis, the parameters of the non-gaussian tails of the resonance shape are determined from fits of the MC J/ψ signal, and fixed in the data fitting procedure.Examples of fits of the A×ε-corrected dimuon invariant mass distributions for two selected bins, low and high multiplicities, are given in the left panel of Fig. 1. In the dielectron decay channel, electrons and positrons are reconstructed in the central barrel detectors by requiring a minimum of 70 out of maximally 159 track points in the TPC and a maximum value of 4 for the track fit χ 2 over the number of track points.Furthermore, only tracks with at least two associated hits in the ITS, one of them in the innermost layer, are accepted.This selection reduces the amount of electrons and positrons from photon conversions in the material of the detector beyond the first ITS layer.In addition a veto cut on topologically identified tracks from photon conversions is applied.The electron identification is achieved by the measurement of the energy deposition of the track in the TPC, which is required to be compatible with that expected for electrons within 3 standard deviations. 
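For illustration, the ingredients of the dimuon invariant-mass fit described above can be sketched as follows. The analysis uses a Crystal Ball shape with asymmetric tails, whereas the simpler one-sided form is shown here, and all parameter names are placeholders rather than the values used in the fit.

```python
import numpy as np

def crystal_ball(m, mu, sigma, alpha, n, norm=1.0):
    """One-sided Crystal Ball line shape: Gaussian core with a power-law
    low-mass tail (the analysis uses a variant with tails on both sides)."""
    t = (m - mu) / sigma
    a = np.abs(alpha)
    A = (n / a) ** n * np.exp(-0.5 * a * a)
    B = n / a - a
    core = np.exp(-0.5 * t * t)
    tail = A * np.power(np.maximum(B - t, 1e-12), -n)   # guard against invalid powers
    return norm * np.where(t > -a, core, tail)

def background(m, amp, mu, w0, w1):
    """Background model quoted in the text: a Gaussian whose width varies
    linearly with the invariant mass, width(m) = w0 + w1 * m."""
    width = w0 + w1 * m
    return amp * np.exp(-0.5 * ((m - mu) / width) ** 2)
```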
Tracks with specific energy loss being consistent with that of the pion or proton hypothesis within 3.5 standard deviations are rejected.These selection criteria are identical to those used in [34].Electrons and positrons are selected in the pseudorapidity range |η| < 0.9 and in the transverse momentum range The background in the OS invariant mass distribution is estimated with dielectron pairs formed with tracks from different events (mixed-event background).The background shape is normalised such that its integral over ranges of the invariant mass in the sidebands of the J/ψ mass peak equals the number of measured OS dielectron pairs in the same ranges (typical ranges used are [3.2,3.7] GeV/c 2 and [2.0, 2.5] GeV/c 2 ).The signal itself is extracted by counting the entries in the background-subtracted invariant mass distribution (the standard range used is [2.92, 3.16] GeV/c 2 ).Due to bremsstrahlung of the electron and positron in the detector material and radiative corrections of the decay vertex, the J/ψ signal shape has a tail towards lower invariant masses.The standard range for the signal extraction contains, according to MC simulations, about 69% of the J/ψ signal.The number of reconstructed J/ψ mesons and its statistical uncertainty are derived from the mean obtained when varying the counting window for the signal extraction and the invariant mass ranges used for the normalisation of the background.The variations that are taken into account are the same as in [34].Examples of the dielectron invariant mass distributions in data, for two selected analysis bins at low and high multiplicities, are given in the right panel of Fig. 1. The correction for the acceptance and efficiency of the raw yields is based on simulated p-Pb collisions with the HIJING event generator [45] with an injected J/ψ signal.The dielectron decay is simulated with the EVTGEN package [46] using PHOTOS [47,48] to describe the final state radiation.The production is assumed to be unpolarised as in the muon decay channel analysis.The propagation of the simulated particles is done by GEANT 3 [44] and a full simulation of the detector response is performed.The same reconstruction procedure and selection cuts are applied to MC events and to real data.The inclusive J/ψ yield per event is obtained in each multiplicity bin as N J/ψ = N corr J/ψ /N MB , where N corr J/ψ is the number of reconstructed J/ψ mesons corrected for the acceptance times efficiency factor.In the dimuon decay channel analysis, the number of MB events equivalent to the analysed dimuon sample (N MB ) in each multiplicity bin is obtained from the number of dimuon triggers (N DIMU ), through the normalisation factor of dimuon-triggered to MB-triggered events F 2µ/MB , as This factor is computed using two different methods, as discussed in [34].The J/ψ cross section values for minimum-bias events obtained in the dimuon channel at forward and backward rapidities, and in the dielectron channel at mid-rapidity are compatible with those presented in [18] and [19], respectively.The results presented here are provided relative to the yield in NSD events, dN J/ψ /dy .The event-averaged yield is normalised to the NSD event class; the normalisation uncertainty is 3.1% [42].In previous analyses, e.g.[19], the J/ψ yield was extracted in p T bins, and the resulting distribution was fitted to extract the p T value.The present analysis aims at studying effects that may arise at high charged-particle multiplicities, where the usual method is no longer suitable due to 
statistical limitations.The method presented here does not require to sample the data in p T bins, hence allowing the analysis in finer multiplicity bins.The extraction of the average transverse momentum of J/ψ mesons is done via a fit to the dimuon mean transverse momentum as a function of the invariant mass, p A correction for the acceptance times efficiency has to be applied when building these distributions.Hence, the contribution of each dimuon pair with a certain p T and y in a given invariant mass bin is weighted with the two-dimensional A×ε(p T ,y).In order to extract the J/ψ p T , the A×ε-corrected p µ + µ − T (m µ + µ − ) distributions, which are shown in Fig. 2, are fitted using the following functional shape: where α(m µ + µ − ) = S(m µ + µ − )/(S(m µ + µ − ) + B(m µ + µ − )); the signal (S) and background (B) dependence on the dimuon invariant mass is extracted from the corrected invariant mass spectrum fits mentioned above.The J/ψ and ψ(2S) average transverse momenta, p J/ψ T and p ψ(2S) T , respectively, are fit parameters assumed to be independent of the invariant mass, while the background one, p bkg T , is parameterized with a second order polynomial function.Note that, as for the yield extraction, the quantity p ψ(2S) T is not a measurement of the ψ(2S) mean transverse momentum, since the A×ε is obtained only from J/ψ signals in the simulation.The p T results presented here for bins in multiplicity are relative to the value obtained for inclusive events, p T MB [19]. Systematic uncertainties The systematic uncertainty of the overall average charged-particle pseudorapidity density was estimated to be 3.8% [42].This includes effects related to the uncertainties in the simulations, detector acceptance and event selection efficiency, and it is dominated by the normalisation to the NSD event class.Possible correlation between the average multiplicity and that evaluated in a given bin would lead to a partial cancellation of certain sources of uncertainty when computing the relative multiplicity.As a conservative estimate, the uncertainty on the relative multiplicity is considered to be equal to the uncertainty on the overall charged-particle pseudorapidity density. 
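Returning to the ⟨p T ⟩ extraction described at the start of this passage, the fitted functional shape can be sketched as below; the ψ(2S) term used in the analysis is omitted for brevity, and the signal and background mass dependences are assumed to be supplied by the invariant-mass fit.

```python
import numpy as np

def mean_pt_model(m, pt_jpsi, bkg_coeffs, signal, backgnd):
    """<p_T^{mumu}>(m) model used to extract the J/psi <p_T>:
    alpha(m) * pT_J/psi + (1 - alpha(m)) * pT_bkg(m), with
    alpha(m) = S(m) / (S(m) + B(m)) fixed from the invariant-mass fit and
    pT_bkg(m) parameterized as a second-order polynomial."""
    s, b = signal(m), backgnd(m)
    alpha = s / (s + b)
    pt_bkg = np.polyval(bkg_coeffs, m)      # c2*m^2 + c1*m + c0
    return alpha * pt_jpsi + (1.0 - alpha) * pt_bkg
```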
The influence of variations of the η distribution in the calculation of the β correction factors, is estimated from the difference between the average number of tracklets obtained in the data taken with the two different beam configurations.The corresponding uncertainty on the multiplicity determination amounts to 1%.The uncertainties arising from the fit procedure of the N corr trk -N ch correlation in simulated events, used to obtain the correction factors, are also included.This uncertainty ranges between 0.2% (at high multiplicity) and 2% (at low multiplicity).The event selection related to the vertex quality has a 1% effect on the average multiplicity in the lowest multiplicity bin and a negligible effect for the other bins.Due to the uncertainty on the determination of the multiplicity of the individual events, there could be a migration of events among the multiplicity bins (bin-flow).This bin-flow effect is determined by running the analysis several times with different seeds for the random factor of the multiplicity correction (bin-flow test).The bin-flow uncertainties are obtained from the dispersion of the average multiplicity values in the bin-flow tests for each multiplicity bin.Finally, the effect of pile-up is studied using a toy model that reproduces the main features of the multiplicity determination, and takes into account the mis-identification of multiple collisions in the same event.The contributions of bin-flow and pile-up to the measured multiplicities are found to be negligible for all the data sets (taken at different interaction rates).The bin-dependent uncertainty is added in quadrature to the 3.8% uncertainty of dN ch /dη , resulting in a systematic uncertainty of the relative charged-particle multiplicity of 4 − 4.5% depending on the multiplicity bin. The yields reported here are provided relative to the event-average yield and the uncertainties are estimated for this ratio.The systematic uncertainties related to trigger, tracking and matching efficiency are correlated between the multiplicity-differential and the integrated determinations.They cancel out to a large extent. In the dimuon analysis, a combined systematic uncertainty which includes the A×ε variations due to the uncertainty of the J/ψ p T and rapidity input distributions used in the simulation and multiplicity bin-flow effects is derived.Due to the multiplicity bin-flow, and the fact that the invariant mass and p µ + µ − T (m µ + µ − ) spectra are weighted by A×ε, these uncertainties can not be computed separately.The combined uncertainty is obtained from the r.m.s. of the relative yield values obtained running the analysis several times with different seeds for the random factor of the multiplicity correction.In addition the systematic uncertainty for the signal extraction is estimated as the r.m.s. 
of the results obtained using different fitting assumptions for a given bin-flow test.The fit procedure is varied by adopting a pseudo-gaussian function for the signal, a polynomial times an exponential function for the background and by using two additional fitting ranges.The uncertainties due to the determination of the parameters of the signal tails is estimated by using several sets of parameters from different MC simulations.The uncertainty related to the computation method of the relative F 2µ/MB is estimated considering the difference between the two available methods to measure the factor in multiplicity bins [34].The effect of the vertex quality selection is estimated from the difference of the obtained yields with and without this selection.Finally, in order to determine the pile-up effect on the measured yield in each multiplicity bin, the pile-up toy model is extended by including the production of J/ψ using as input the measured yields as a function of multiplicity.The difference between the measured and toy MC yields is taken as systematic uncertainty.All these effects are uncorrelated within a given multiplicity bin, hence they are added quadratically to obtain the systematic uncertainty of the relative yield in a multiplicity bin.Also, these systematic uncertainties are considered as uncorrelated between the different rapidity intervals.A summary of the maximum and minimum relative yield systematic uncertainties is shown in Tab. 3. In addition, the 3.1% uncertainty of the event-average yield normalisation to NSD, is reported separately. The systematic uncertainties are computed also for the absolute yields in multiplicity bins at forward and backward rapidities.The absolute yields are used to compute the ratio of the nuclear modification factors at forward and backward rapidities.The values of the uncertainties on the absolute yields are shown in parentheses in Tab. 3, when they are different from the ones obtained for the relative yield.In addition, for the absolute yield measurement, the muon tracking, trigger and matching efficiency uncertainties need to be taken into account [18].They amount to 4% (6%), 3% (3.4%) and 1% (1%) at forward (backward) rapidities.For the dielectron decay channel, the signal extraction uncertainty is derived based on the r.m.s.value of the different signal yield ratios obtained for the variations of the background and the signal integration window as in [34].The uncertainty is largest for the highest multiplicity bins.Since the p T distribution of J/ψ may depend on multiplicity, the unmeasured p T spectrum leads to a multiplicity-dependent uncertainty, determined as in [34].As explained in section 2, the pile-up contamination is very low and the induced uncertainty is negligible for all the multiplicity intervals.The uncertainty related to bin-flow is estimated with the same method as in the dimuon analysis.The total systematic uncertainty varies as a function of multiplicity between 4.5% and 13%, see Tab. 3. 
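Since the per-bin uncertainties discussed here are treated as uncorrelated and combined in quadrature, the combination step amounts to the following one-liner (the example values in the comment are placeholders, not the actual uncertainties).

```python
import math

def combine_in_quadrature(*uncertainties):
    """Combine uncorrelated (relative) uncertainties in quadrature,
    as done throughout this section for the per-bin systematic uncertainties."""
    return math.sqrt(sum(u * u for u in uncertainties))

# e.g. total = combine_in_quadrature(0.03, 0.02, 0.01)  # illustrative values only
```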
For the relative J/ψ p T , the effects of the uncertainty on the determination of A×ε, the p T extraction procedure and bin-flow are computed together, following the same procedure as for the relative yield. The p T extraction uncertainty is obtained from the dispersion of the results using different fit combinations, including variations of the invariant mass signal and background parameterisations, of the fitting range, and the use of a second-order polynomial times an exponential function for the p T of background dimuons. The effect of considering the J/ψ p T as independent of the invariant mass in the p µ+µ− T fits is found to be negligible. The impact of fixing the signal and background parameters during the fitting procedure is observed to be negligible as well. The events removed by the vertex quality selection do not have reconstructed J/ψ and therefore the p T remains unmodified. Finally, using a pile-up toy model, it is shown that the pile-up has no effect on the p T measurement, except for the two bins corresponding to the largest multiplicities. All these effects are considered as uncorrelated in a given multiplicity bin and hence their respective uncertainties are added quadratically to obtain the relative p T systematic uncertainty in each multiplicity bin. These systematic uncertainties are considered as uncorrelated between the different rapidity intervals. The uncertainties entering the relative p T measurement are reported in Tab. 4.

The dependence of the relative J/ψ yield on the relative charged-particle pseudorapidity density for three J/ψ rapidity ranges is presented in Fig. 3. An increase of the relative yield with charged-particle multiplicity is observed for all rapidity domains, with a similar behaviour at low multiplicities. At multiplicities beyond 1.5–2 times the event-average multiplicity, two different trends are observed. The relative yields at mid-rapidity and backward rapidity keep growing with the relative multiplicity in p-Pb collisions, similarly to the observation in pp collisions at 7 TeV [37]. At forward rapidity the trend is different: in this rapidity window a saturation of the relative yield sets in at high multiplicities. In the absence of theoretical model calculations, the cause of this observation is at present unclear. We recall that the Bjorken x ranges explored in the forward rapidity region are in the domain of shadowing/saturation, and that a variety of models [12,13,20,21,49] are fairly successful in describing the recent centrality-integrated and centrality-differential measurements of ALICE [19,34], which correspond, in terms of our relative multiplicities, to dN ch /dη / ⟨dN ch /dη⟩ of at most about 2.5. In Fig. 4 the relative yield at mid-rapidity is compared with the corresponding measurement for D mesons (average of the D 0 , D + and D *+ species) in the p T interval 2–4 GeV/c [36]; the two meson species exhibit similar patterns.
The nuclear modification factor for J/ψ production in p-Pb collisions (R pPb ) as a function of centrality was presented in [34]. The relationship between geometry-related quantities, which quantify the centrality of the collision, and experimental observables in p-Pb collisions may be subject to a selection bias [50] that needs care in interpretation. By taking the ratio of the nuclear modification factors at forward and backward rapidities as a function of multiplicity, the dependence on geometry-related quantities is avoided. The forward-to-backward nuclear modification factor ratio is defined as R FB = [Y pPb (forward) / Y pPb (backward)] × [σ pp (backward) / σ pp (forward)], i.e. the ratio of the absolute J/ψ yields measured in p-Pb collisions at forward and backward rapidity, corrected by the corresponding ratio of the pp reference cross sections. Since the average charged-particle multiplicities and their uncertainties are consistent with each other for the two sets of data, the values of R FB are shown versus the average value of the two in each multiplicity bin. Note that, in contrast to the nuclear modification factor measurement in [18], for the present measurement the rapidity ranges are not symmetric with respect to y cms = 0, in order to take advantage of all the signal yield and thus allow the study up to high multiplicities. The values of the reference pp cross section were obtained by means of an interpolation procedure using measurements at centre-of-mass energies of 2.76 and 7 TeV [51]. The resulting backward-to-forward ratio of J/ψ production cross sections in pp collisions is 0.691 ± 0.048, leading to a global uncertainty on the R FB measurement of 6.9%.

For the R FB ratio, the systematic uncertainties of the absolute yields in p-Pb collisions (Tab. 3) are considered as uncorrelated between forward and backward rapidities, and are therefore added in quadrature. The uncorrelated systematic uncertainties of the production cross sections in pp collisions are the same as a function of multiplicity, so they are added in quadrature to the global uncertainty (quadratic sum of the muon tracking, trigger and matching efficiency uncertainties) of the p-Pb data, resulting in a total relative uncertainty of 11%. The R FB ratio is shown as a function of the relative charged-particle pseudorapidity density in Fig. 5. In multiplicity-inclusive collisions for symmetric y ranges at forward and backward rapidities [18], R FB is smaller than unity and is described by theoretical models. The present measurement shows that the suppression of J/ψ production at forward rapidity with respect to backward rapidity increases significantly with charged-particle multiplicity, since R FB reaches values as low as 0.34 ± 0.06 (stat.) ± 0.05 (syst.). A forward-backward asymmetry can also be noticed for inclusive charged-particle production, studied in [50]. Even though the range of relative charged-particle multiplicities probed in that measurement is not as large as in the present measurement of J/ψ production, the apparent similarity of the trend seen in Fig. 5 to soft particle production is intriguing.
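To make the uncertainty treatment of R FB described above concrete, the short sketch below builds the ratio from per-bin yields and adds the uncorrelated forward and backward uncertainties in quadrature. All numerical values and array names are illustrative placeholders, not the published data.

import numpy as np

# Hypothetical per-multiplicity-bin inputs (placeholders, not the measured values)
y_fwd = np.array([1.10, 1.05, 0.95, 0.80])   # absolute J/psi yields, forward rapidity
y_bwd = np.array([1.00, 1.10, 1.30, 1.60])   # absolute J/psi yields, backward rapidity
s_fwd = np.array([0.06, 0.06, 0.08, 0.13])   # relative syst. uncertainties, forward
s_bwd = np.array([0.05, 0.05, 0.07, 0.12])   # relative syst. uncertainties, backward

sigma_pp_ratio = 0.691    # backward-to-forward pp cross-section ratio [51]
global_unc = 0.069        # 6.9% global uncertainty from the pp interpolation

r_fb = (y_fwd / y_bwd) * sigma_pp_ratio
# forward and backward uncertainties are treated as uncorrelated -> quadrature
r_fb_syst = r_fb * np.sqrt(s_fwd**2 + s_bwd**2)

for rb, db in zip(r_fb, r_fb_syst):
    print(f"R_FB = {rb:.2f} +/- {db:.2f} (syst.) +/- {100 * global_unc:.1f}% (global)")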
In Fig. 6 the relative p T of J/ψ mesons at backward and forward rapidity is shown as a function of the relative charged-particle pseudorapidity density. The results are similar at forward and backward rapidities. An increase of the relative p T with multiplicity is observed at low charged-particle multiplicity, but for multiplicities beyond 1.5 times the average multiplicity it saturates. For backward rapidity, the simultaneous increase of the yield and the saturation of the relative p T could be an indication of J/ψ production from an incoherent superposition of parton-parton interactions, as suggested by data on correlations of jet-like yields per trigger particle [32].

Fig. 6: Relative p T of J/ψ mesons for backward and forward rapidity as a function of the relative charged-particle pseudorapidity density, measured at mid-rapidity. The bars show the statistical uncertainties, and the boxes the systematic ones. The data for charged particles (h ± ) [52] are included for comparison. The latter are for |η cms | < 0.3 and with p T in the range 0.15 to 10 GeV/c, and have an additional normalisation uncertainty of 3.4%.

The p T broadening observed in the analysis of J/ψ production in p-Pb collisions as a function of centrality [34] is well described by initial- and final-state multiple scattering of partons within the nuclear medium [53]. The comparison of data to model calculations performed in [34] corresponds, in terms of relative multiplicities, to a range up to roughly dN ch /dη / ⟨dN ch /dη⟩ = 2.5. It remains to be seen whether such models can explain the saturation observed in the relative p T of the J/ψ mesons for events with higher multiplicities.

It is interesting to contrast the observed saturation of p T for J/ψ mesons with the monotonic increase of p T for charged hadrons (dominated by pion production) [54] with the multiplicity measured at mid-rapidity, also shown in Fig. 6. Note that the latter measurement is for particles in |η cms | < 0.3 and with p T in the range 0.15 to 10 GeV/c, and that it is relative to events with at least one particle in this kinematic range (for which ⟨N ch ⟩ = 11.9 ± 0.5 and ⟨p T ⟩ = 0.696 ± 0.024 GeV/c [54]). Although the different kinematic regions may play a role and care is needed in the interpretation, it is apparent that the two observables, characterised by rather different production mechanisms (and momentum transfer), exhibit different patterns in the multiplicity dependence of the average transverse momentum.

Conclusions

Measurements of the relative J/ψ yield and average transverse momentum as a function of the relative charged-particle pseudorapidity density in p-Pb collisions at the LHC at √s NN = 5.02 TeV have been presented in this letter. The measurements were performed with ALICE in three ranges of rapidity. The charged-particle multiplicity was measured at mid-rapidity; multiplicities up to 4 times the value of NSD events were reached, corresponding to rare events of less than 1% of the total hadronic interaction cross section. An increase of the relative J/ψ yield with the relative multiplicity is observed, with a trend towards saturation at high multiplicity for the forward rapidity (proton-going direction). For the J/ψ data at mid-rapidity, a comparison to corresponding measurements of D-meson yields is performed, revealing similar patterns for the two meson species. At forward and backward rapidities, the relative average transverse momenta exhibit a saturation above moderate values of relative multiplicity.
The present data are expected to constitute a stringent test for theoretical models of J/ψ production in p-Pb collisions and help to understand the effects associated with the production of a deconfined medium in Pb-Pb collisions. Fig. 1 : Fig.1: Opposite-sign invariant mass distributions of selected muon (left panel, for the forward rapidity) and electron (right panel) pairs, for selected multiplicity bins.In the left panel, the distributions are corrected for A×ε.The curves show the fit functions for signal, background and combined signal with background (see text for details).In the right panel, the background is evaluated with the event-mixing technique, and the overlaid signal is obtained from Monte Carlo (see text for details). Fig. 2 : Fig.2: Average transverse momentum of opposite-sign muon pairs as a function of the invariant mass at forward rapidity, for two multiplicity bins.The curves are fits of the background and combined signal and background (see text). Fig. 3 : Fig.3: Relative yield of inclusive J/ψ mesons, measured in three rapidity regions, as a function of relative charged-particle pseudorapidity, measured at mid-rapidity.The error bars show the statistical uncertainties, and the boxes the systematic ones.The dashed line is the first diagonal, plotted to guide the eye. Fig. 4 : Fig. 4: Relative yield of inclusive J/ψ mesons as a function of relative charged-particle pseudorapidity density, measured at mid-rapidity, in comparison to D mesons (average of D 0 , D + , and D * + species), for the p T interval 2-4 GeV/c [36].The error bars show the statistical uncertainties, and the boxes the systematic ones (additional systematic uncertainties due to the b feed-down contributions and the event normalisation are not shown for the D mesons). Fig. 5 : Fig. 5: R FB of inclusive J/ψ in p-Pb collisions at √ s NN = 5.02 TeV as a function of relative charged-particle pseudorapidity density, measured at mid-rapidity.The red box around unity represents the global uncertainty.The error bars show the statistical uncertainties, and the boxes the systematic ones. Table 1 : Average charged-particle pseudorapidity density values (absolute and relative) in each multiplicity bin, obtained from N corr trk measured in the range |η| < 1.The values correspond to the data sample used for the forward rapidity analysis.Only systematic uncertainties are shown since the statistical ones are negligible.The fraction of the MB cross section for each multiplicity bin is also indicated. Table 2 : As Table Table 3 : The relative systematic uncertainties of the relative J/ψ yield measurement in the three rapidity ranges.The values in parentheses correspond to the absolute yield measurement when different from the relative ones.The ranges represent the minimum and maximum values of the uncertainties over the multiplicity bins.For the vertex quality selection, the uncertainties marked with * refer only to the lowest-multiplicity bin; for all other bins the value is 0.3%.The trigger, tracking and matching efficiency uncertainties are not listed in the table. Table 4 : The systematic uncertainties for the relative p T measurement at forward and backward rapidities.The values represent the minimum and maximum values of the uncertainties over the multiplicity bins.The uncertainties marked with * only refer to the two highest-multiplicity bins.
8,653.4
2018-01-01T00:00:00.000
[ "Physics" ]
An Artificial Intelligence-Based Bio-Medical Stroke Prediction and Analytical System Using a Machine Learning Approach

Stroke-related disabilities can have a major negative effect on the economic well-being of the person. When left untreated, a stroke can be fatal. According to the findings of this study, people who have had strokes generally have abnormal biosignals. Patients will be able to obtain prompt therapy in this manner if they are carefully monitored: their biosignals will be precisely assessed and real-time analysis will be performed. By contrast, most stroke diagnosis and prediction systems rely on image analysis technologies such as CT or MRI, which are not only expensive but also hard to use. In this study, we develop a machine learning algorithm for the prediction of stroke in the brain, and this prediction is carried out from real-time samples of electromyography (EMG) data. The study uses synthetic samples for training the support vector machine (SVM) classifier, and the testing is then conducted on real-time samples. To improve the accuracy of prediction, the samples are generated using the data augmentation principle, which supports training with vast data. A simulation is conducted to test the efficacy of the model, and the results show that the proposed classifier achieves a higher rate of classification accuracy than the existing methods. Furthermore, the rates of precision, recall, and F-measure are higher in the proposed SVM than in other methods.

Introduction and Related Works

The 4th Industrial Revolution has arrived, bringing with it a wide range of businesses and research fields, and huge opportunities as well as substantial challenges. These might be thought of as two sides of the same coin. Because of the role they play in the Fourth Industrial Revolution, artificial intelligence, big data, the Internet of Things (IoT), and cloud computing have become popular topics of discussion in the healthcare business [1][2][3][4]. As a result of the Internet of Things (IoT), medical devices and facilities can now send and receive biosignals, medical records, and even genetic data over the Internet. The rapid ageing of the world population will lead to an increase in the prevalence of chronic diseases as well as an increase in the cost of providing medical care [5]. As a means of preparation, national healthcare systems are shifting their emphasis away from the treatment of illness and disease and toward the promotion of overall health and well-being. The different types of health data that can readily be analysed are shown in Figure 1: (i) big data such as personal health records (PHRs), (ii) electronic medical records (EMRs), and (iii) genomic information. The classifier is the block used to classify the different dimensions of the image; it was very useful for identifying the different stroke locations and the depth of the stroke. Even though enormous volumes of medical data have been gathered and stored over the years, the data have not yet been utilised to their full potential. Combining the technologies of big data and artificial intelligence (AI) enables the development of novel intelligent medical solutions, as shown in Figure 2: (i) precision healthcare services and (ii) predictive healthcare services are both examples of what is conceivable. However, it is currently difficult to derive relevant insights from multiple types of healthcare data by merging them.
Because of recent developments in computer infrastructure and the appearance of multiple AI frameworks in information and communication technology (ICT), AI-based digital healthcare analysis has recently become more sophisticated and feasible. A new approach called smart healthcare allows for the remote management of people's health by utilising ICT and large amounts of medical data [6]. According to the World Health Organization, the leading causes of death worldwide in 2016 were malignant neoplasms (often known as cancer) and heart disease. In 2016, diseases related to strokes were responsible for 5.7 million fatalities, making them the third greatest cause of death overall. In that year, many people were affected by stroke, and stroke was a critical factor in the increase of the death ratio [7]. A stroke results in the death of brain cells. It is more common for elderly people to suffer from strokes, which can result in a variety of symptoms, including hemiplegia, slurred speech, and loss of consciousness, in addition to various forms of brain damage. Because of these, adults are at risk of severe disability and possibly death. If an impending stroke can be recognised or predicted in its early stages, it may be feasible to significantly mitigate its effects [8][9][10]. Several risk factors for stroke have been established through a great number of investigations and clinical trials. Tobacco use, high blood pressure, diabetes, and obesity are all preventable risk factors for stroke that can be managed and treated to lower the risk of having a stroke. Both medical and personal efforts, as well as immediate research and steps at the national level, are required to prepare for a forecast increase in stroke disorders as a result of the worldwide trend toward an older population [11]. People who think they are having a stroke should go to the hospital immediately, be examined by a stroke doctor, have an X-ray of the brain taken, and receive anticoagulation as soon as possible. They should not feel that they are too old for treatment. Treatment given within three hours of stroke onset is most effective; treatment given up to 4.5 hours after onset can still be beneficial. Studies are testing this, because the effectiveness of treatment given after that window is unclear. Data from studies are needed to determine whether the benefit of anticoagulation therapy outweighs the risk of bleeding in patients with mild stroke. It may be difficult to anticipate stroke symptoms or outbreaks using risk factors, because the definitions of the various risk variables and the methods for correlating the likelihood of various diseases occurring are variable. When it comes to determining the prognosis of an illness such as stroke, placing all of one's faith in the risk factors alone presents several challenges. The Framingham Heart Study proposed a methodology for predicting the risk of stroke based on a prospective cohort study of cardiovascular illness [12,13]. Especially if this treatment is given within three hours of stroke onset, older people benefit as much as younger people. Adding aspirin to anticoagulants increases the risk of bleeding, so it should be avoided. Further analysis of individual data factors, such as pretreatment CT scans of patients' brains and different treatment options, rather than aggregate data, will reveal more.
It is difficult to recognise a stroke and the accompanying brain damage early on, due to the wide variety of symptoms and categories that are associated with a stroke. It is a risk factor, the severity of which depends on the outcomes of prior medical exams. It is difficult to apply this to the multiple symptoms and prognoses of an elderly person (patient) before the onset of a stroke, because such scores indicate the predicted possibility of disease developing in the far distant future, approximately five to ten years away. People who are forced to work long hours can avoid problems if they exercise and eat healthy food in between their working hours. The researchers came to this conclusion by analysing data on age, smoking and working hours from more than 143,000 people; that is, 1,224 people who worked more than 10 hours a day for more than 10 years had suffered a stroke. A well-planned diet, regular exercise, quitting smoking, and eating the right amount of food can make a huge difference in people's health. Artificial neural networks, also known as ANNs, have been utilised in several studies for the diagnosis or prediction of strokes. Singh et al. [14] concluded that the ANN is capable of identifying people who are at risk of having a stroke. In that particular study, the backpropagation method was utilised to improve the accuracy of both prediction and diagnosis. Because of the artificial neural network (ANN), it was feasible to predict the risk of having a stroke using only 300 pieces of experimental data. The inquiry led to the creation of a model that is accurate 95.33% of the time for patients suffering from a stroke. In this particular instance, however, the primary focus is placed entirely on the precision of the forecast, which makes it challenging to analyse the underlying operational concept in finer detail. The researchers in [15] looked at how well CT scans and clinical factors could predict the risk of cerebral haemorrhage occurring during the freezing treatment of ischemic stroke patients. By analysing CT images of 116 patients suffering from ischemic stroke, SVMs were successful in identifying nine out of sixteen patients with symptomatic cerebral bleeding. The authors of [16] employed the kernel function of the SVM to update the parameter values of a prediction model for stroke risk. The variables that contribute to the risk of stroke were analysed. By making use of the RBF kernel function, this model was able to accomplish a satisfactory degree of accuracy. In contrast to early detection or the prediction of pre-occurrence symptoms, the research that was conducted using SVMs focused on predicting severity and prognosis after an episode had already taken place. This system has several flaws, one of which is that its working principles are opaque; this is because all it does is improve the precision of traditional stroke predictions. Because it relies on the findings of individual clinical diagnostic tests and CT scans, this method is unable to detect and forecast the pre-symptoms of stroke disorders based on real-time biosignals or life logs. In addition, it is not possible to determine whether or not a stroke disorder will occur. As a consequence of this, additional research, as well as clinical trials, is required to recognise strokes in their earliest stages. To address these limitations, Yu et al. [17] conducted an investigation and released research that made use of data mining techniques.
According to that research, a decision tree algorithm-based data mining classification approach was applied to automatically classify and interpret the results of the NIHSS in terms of the severity of the condition being measured. In addition, a fresh approach to the semantic interpretation of stroke severity was produced by carefully analysing the rules on the principle of motion. These rules offer further data for the C4.5 decision tree, and their analysis led to the development of a fresh approach. Because of the decision tree approach, the predictive model algorithm can only provide a limited interpretation of the data; as a result of this limitation, the predictive model is constrained. Another study to predict the risk of stroke was conducted in [18], which looked at more than 50 potential risk variables. In this study, accurate stroke prediction was achieved with the application of data mining techniques such as the K-nearest neighbours algorithm and the C4.5 decision tree. The approach, like others used in prior studies, is not appropriate for use in everyday life to detect and forecast the pre-symptoms of stroke. The processing and interpretation of time-series data are a common application of recurrent neural networks, also known as RNNs, in the field of deep learning. RNNs use cyclic neural connections as a component of the current learning process; these networks are founded on the results of prior phases of the learning process. Because the outputs of recurrent neural network structures (RNNs) include information about the outcomes of calculations that have been performed in the past, RNNs can acquire knowledge of sequential data. Models such as Long Short-Term Memory (LSTM), which is also a class of RNN, can manage the challenge of computing and reducing values when wrong values are sent to the neural network layer. This allows these models to overcome the structural defects that are present in the basic RNN. The LSTM was initially proposed in [19], in the year 1997. Within these neural networks, the sigmoid and tanh layers, in conjunction with the cell states, input gates, forget gates, and output gates, are responsible for generating vector output values at each of these gates. The cells of the LSTM are responsible for learning how to recognise and protect essential inputs through the input gates. The cells that make up the LSTM also learn how to remove them whenever it becomes necessary to do so. An EHR risk factor analysis has recently been used to predict cerebrovascular diseases with LSTM-based models; this represents a new research trend in the field of LSTM research. In particular, Chantamit-o-pas and Goyal [20] discovered that the LSTM algorithm is the best at predicting any cerebrovascular disease or stroke by using ICD-10 codes and other pertinent risk factor patterns from EHRs. A method for predicting the transition from an ischemic stroke to a hemorrhagic one (HDM) was proposed in [21] using the LSTM model. It was determined that diffusion- and perfusion-weighted magnetic resonance images were necessary to create the LSTM network topology. A comparative analysis of 155 patients with acute stroke who participated in clinical trials revealed an accuracy rate of 89.4%. Even though these studies have shown that the LSTM can predict strokes, they have still relied on data such as EHRs.
Because no study of this kind has been conducted in this field, it is not possible to forecast or evaluate the likelihood of a stroke by analysing real-time biosignals generated by everyday activities such as walking or driving. A unique strategy based on real-time biosignals is required as an alternative to the existing traditional methods currently used to predict strokes. Because of the artificial neural network (ANN), it was feasible to predict the risk of having a stroke using only 300 pieces of experimental data. The inquiry led to the creation of a model that is accurate 95.33% of the time for patients suffering from a stroke. In this particular instance, however, the primary focus is placed entirely on the precision of the forecast, which makes it challenging to analyse the underlying operational concept in finer detail. The goal of this research is to build a machine learning system capable of predicting brain strokes. This forecast is based on information derived from real-time EMG data. The support vector machine (SVM) classifier is trained with simulated data first and then evaluated with real-world data.

The Proposed Method

In this study, we develop a machine learning algorithm for the prediction of stroke in the brain, and this prediction is carried out from real-time samples of electromyography (EMG) data, as illustrated in Figure 3. The study uses synthetic samples for training the support vector machine (SVM) classifier, and the testing is then conducted on real-time samples. To improve the accuracy of prediction, the samples are generated using the data augmentation principle, which supports training with vast data. Most strokes are caused by a blood clot in an artery in the brain. Timely treatment with thrombolytic drugs can help restore blood flow before a large stroke occurs and improve recovery after a stroke. However, death can occur from severe bleeding in the brain caused by anticoagulants. The classifiers used here are known as support vector machines (SVMs). Support vector machines can be modified to conduct nonlinear classification and, with the addition of a few extra techniques, they can perform classification over an arbitrary number of different classes. By utilising a support vector machine, it is possible to generate a hyperplane between two different sets of vectors that can be differentiated linearly. In particular, the hyperplane of interest is the one that maximises its distance to the closest points of the two clusters (the margin). When classifying a testing point, the side of the calculated hyperplane on which the point lies is taken into consideration. It is possible to train support vector machines to function in other contexts in addition to the two linearly separable classes for which this straightforward approach is effective. Nonlinearly Separable Classes. If the two classes cannot be linearly separated (no hyperplane can be generated that perfectly splits the classes on either side), then this method cannot be used directly. In this scenario, a soft margin must be constructed that splits the data into two distinct classes almost exactly. During this step of the process, cost functions are determined. The most significant component of this cost function is referred to as the hinge loss function. If a point is located on the appropriate side of the hyperplane, then the value of this function is equal to zero.
However, if the point is located on the incorrect side of the hyperplane, then the value of this function is proportional to its distance from the hyperplane. The hinge loss function is averaged over all of the data points, plus a parameter that defines the trade-off between increasing the number of points that are on the correct side of the hyperplane and maximising the smallest distance of the correct points from the hyperplane. By minimising this cost function, it is possible to generate a classification hyperplane with respectable results. This method performs very similarly to the classic SVM algorithm in situations where the input can be linearly separated, and it continues to function admirably in situations where the input cannot be linearly separated. There are two primary methods for classifying a large number of classes when using support vector machines; both of these methods function by establishing several binary-classification support vector machines and obtaining a consensus vote among them to perform the classification. In the first approach, a support vector machine (SVM) is trained for each class. When utilising this SVM, the selected class is considered to be one class, while all the other data points are considered to be the other class. To put it another way, each sub-SVM is responsible for determining whether or not a particular point belongs to the class in question or to another class. The class to which a point is assigned is determined by the sub-classifier with the strongest response. In the alternative approach, a classifier is trained for each possible pair of classes. When classifying points, we look at which class is supported by the greatest number of pairwise classifiers. Performing Nonlinear Classification. Some data sets cannot be split into two categories by any hyperplane in their original space. In this scenario, it does not matter which linear SVM algorithm is utilised; the result will be the same: an inaccurate classification of the data. In this case, the data can be mapped into a higher-dimensional feature space, which enables the classes to be differentiated from one another straightforwardly; this is the kernel trick. When mapping points in two dimensions, one can make use of a Gaussian function, which positions points closer to the centre of the data higher on the map than points further away from the centre. Data Augmentation. The capacity of data augmentation to improve photographs can be traced back to its earliest applications, such as horizontal flipping, colour space augmentation, and random cropping. Using these adjustments, which encode many of the invariances discussed earlier, is one way to relieve some of the challenges associated with image recognition. In this review, several different types of augmentation techniques, including geometric, colour space, kernel, mixing, random erasing, feature-space, adversarial training, GAN-based (where GAN stands for generative adversarial network), neural style transfer, and meta-learning approaches, were identified as areas for improvement.
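Before turning to the individual augmentation methods, the overall pipeline described so far, a soft-margin, kernelised SVM trained on augmented synthetic samples and evaluated on real samples, can be illustrated with the minimal sketch below. It uses scikit-learn; the arrays, the jitter-based augmentation and the train/test split are illustrative placeholders, not the authors' actual data or code.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

def augment(features, labels, copies=5, noise=0.05):
    """Simple augmentation: jitter each feature vector with Gaussian noise
    to produce synthetic training samples (a stand-in for the paper's
    augmentation step)."""
    X = [features]
    for _ in range(copies):
        X.append(features + rng.normal(0.0, noise, size=features.shape))
    return np.vstack(X), np.tile(labels, copies + 1)

# Hypothetical EMG feature vectors (rows = windows, columns = features)
X_real = rng.normal(size=(200, 8))
y_real = rng.integers(0, 2, 200)

# Train on augmented synthetic data built from one half of the samples
X_syn, y_syn = augment(X_real[:100], y_real[:100])
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_syn, y_syn)

# Evaluate on the held-out "real" samples
print(classification_report(y_real[100:], model.predict(X_real[100:])))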
In this section, we will explain how each method of enhancement functions, report the findings of our trials, and discuss some of the restrictions imposed by the enhancements. Geometric Transformations. This section goes over a number of different image processing functions, including geometric transformations. One way to categorise these enhancements is based on how straightforward it is to put them into action. A comprehensive understanding of these changes is required to lay a solid foundation for additional research into the various ways of data augmentation. Within the context of this discussion, we will also talk about the many different geometric augmentations in terms of their safety. When evaluating the safety of data augmentation methods, one must consider the likelihood that the label will continue to exist in its original form after the change. It may be possible, with the use of a non-label-preserving transformation, to improve the capability of the model to offer a response indicating that it does not have confidence in its prediction. On the other hand, this would require post-augmentation labelling that is more precise. If the image label is [0.5 0.5] after a modification that does not preserve the label, the model may be able to generate predictions with a higher level of confidence. However, the process of constructing enhanced labels for every non-safe data augmentation is both time-consuming and expensive. Because it is difficult to produce revised labels for data that have been updated, an augmentation needs to be regarded as safe before it can be implemented. Because augmentation policies need to be tailored to each domain, it might be challenging to generalise them. In image processing, there is not a single function that does not, at some point, result in a transformation that modifies the labels. This demonstrates how challenging it can be to build generalizable augmentation rules, due to the data-specific nature of their construction. This consideration is necessary for the geometric augmentations that are detailed later. Flipping. The horizontal axis is the one that is typically flipped, as opposed to the vertical axis, which is the more uncommon choice. Including this augmentation in datasets such as CIFAR-10 and ImageNet is one of the easiest and most effective ways to improve their accuracy. It is not a label-preserving transformation on datasets involving text recognition, such as MNIST or SVHN, and therefore should not be used on them. Colour Space. Tensors are the common format for the recording of the dimensions of digital image data, which include height, breadth, and colour channels. Increasing the vibrancy of individual colour channels is another strategy that can easily be put into action. A straightforward adjustment of a picture's colour can be accomplished by isolating a single colour channel, such as R, G, or B. Quickly converting an image into its representation in a particular colour channel can be accomplished by isolating the matrix of one colour channel and adding zero matrices for the other channels. Performing matrix operations on the image's RGB values is all that is required to either increase or decrease the brightness of an image. Histograms of colours are employed in the process of developing increasingly intricate colour augmentations. Changing the intensity values of these histograms is the method used by photo-editing software to make adjustments to the lighting. Cropping.
When processing picture data that has mixed height and width dimensions, cropping an image's centre patch might be a valuable step to take as part of the processing workflow. A translation can also be imitated by employing random cropping, which generates an effect that is analogous to that of a translation. However, random cropping will result in a reduction of the image dimensions, for example from (256,256) to (224,224) or (120,120); translations, by contrast, keep the image's spatial dimensions intact. Depending on the cropping threshold that was used, this modification may or may not preserve the labels of the original data. Rotation. To execute rotation augmentations, the image is rotated to the right or left about an axis, by an angle ranging from 1 degree to 359 degrees. Here, the degree refers to the angular extent of the rotation over the image dimensions; this was very helpful for identifying the stressful locations of the patients. As a consequence, the safety provided by rotational enhancements is heavily reliant on the rotation degree parameter. Applications such as MNIST that require digit recognition do not keep the original data label after the transformation if the rotation degree is large. Translation. To eliminate any positional bias in the data, images can be shifted to the left or right, moved up or down, or shifted in any of these four directions. If all of the images in a collection are centred, for instance, a face-recognition model would have to be validated using photographs that are centred to the same degree. Depending on the direction in which the original image is translated, it is possible to fill the space that is left behind either with a fixed value, such as 0s or 255s, or with random or Gaussian noise; one of these alternatives may be preferable to the other. The spatial dimensions of the post-augmentation image can be preserved with the help of padding. Noise Injection. In most cases, a Gaussian distribution of random values is used in the noise injection process. The training of CNNs to learn more robust characteristics can be helped by increasing the amount of noise that is present in the images. Using geometric modifications to reduce biases in training data is a sensible way to proceed. The distribution of the training data could differ from the distribution of the testing data in a variety of ways. If there are any placement biases in the dataset, such as in a facial recognition dataset where all of the faces are exactly centred, this is a good technique to use. Geometric transformations are helpful not only because of their ability to counteract the effects of positional biases but also because of how straightforward it is to put them into practice. Because there are so many libraries to choose from, basic image-editing techniques such as horizontal flipping and rotation can be accomplished with relative ease. As a consequence of the alteration of the geometry, additional resources, including memory, additional time for training, and additional costs associated with computation, are required. To verify that the image label has not been changed in any way, it is necessary to manually analyse any geometric changes that have been made, such as translation or arbitrary cropping. Last but not least, in many of the application disciplines that have been addressed, such as medical image analysis, the biases that differentiate training data from testing data are more complicated than positional and translational changes. As a direct consequence of this, the application window for geometric transformations is quite limited. A sketch of these basic operations is given below.
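The flipping, cropping, rotation, translation and noise-injection operations described above are simple array manipulations. The sketch below illustrates them with NumPy and SciPy on a hypothetical image array; it is an illustrative sketch under those assumptions, not the augmentation code used in the study.

import numpy as np
from scipy.ndimage import rotate, shift

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256, 3)).astype(np.float32)  # placeholder image

def hflip(x):                          # horizontal flip (label-preserving for most photos)
    return x[:, ::-1, :]

def center_crop(x, size=224):          # reduces spatial dimensions, e.g. 256 -> 224
    h, w = x.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return x[top:top + size, left:left + size, :]

def random_rotation(x, max_deg=30):    # safety depends on the rotation degree
    return rotate(x, rng.uniform(-max_deg, max_deg), reshape=False, mode="nearest")

def random_translation(x, max_px=20):  # the padding value fills the vacated space
    dy, dx = rng.integers(-max_px, max_px + 1, size=2)
    return shift(x, (dy, dx, 0), mode="constant", cval=0.0)

def gaussian_noise(x, sigma=10.0):     # noise injection
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0, 255)

augmented = [f(img) for f in (hflip, center_crop, random_rotation,
                              random_translation, gaussian_noise)]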
Colour Space Transformations. The image data are encoded using three stacked matrices, each of which has a height and width matching those of the image. Each RGB colour value is represented by its own entry in these matrices. When it comes to image recognition, one of the most common obstacles is bias in the illumination. It is simple to understand why colour space alterations, sometimes referred to as photometric transformations, are effective. Looping through the images in question is all that is required to swiftly adjust the pixel values of photographs that are either too bright or too dark. A more straightforward method for manipulating the colour space is to splice off the various RGB colour matrices. In a transformation, pixel values can also be constrained to fit within a range with a minimum value and a maximum value. The inherent colour representation of digital photos makes a wide variety of improvements to the quality of the images possible. Alterations to the colour space can also be generated through the use of image-editing software. A colour histogram is a graphical representation of how the pixel values of an image are distributed across each of the RGB colour channels. Increasing the vibrancy of individual colour channels is another strategy that can easily be put into action. One way to categorise these enhancements is based on how straightforward it is to put them into action. By adjusting the values of this histogram, it is possible to apply various filters to an image. This study, which was conducted using data alone, revealed that self-employed people, chief secretaries, managers, etc., do not suffer from stroke even if they work long hours, while those who work long hours at irregular times and at night are severely affected.

Results and Discussion

In this section, the process of monitoring and collecting biosignal data to verify the proposed AI-based stroke sickness prediction system is described in detail. The data from a real-time electromyogram are the primary biosignal that is utilised. An electromyogram, or EMG, is a diagnostic tool that can be used to determine the speed of nerve conduction or to record electrical activity within the muscle itself. An EMG does this by using electrodes to apply electrical stimulation to a muscle or nerve. According to the findings of several studies that utilised EMG, there is a slight imbalance in the body both before and after a stroke, as well as imbalances in gait and locomotion. In this study, we investigate biosignal abnormalities as well as gait issues, both of which have been linked to an increased risk of stroke. Our stroke rehabilitation group consisted of patients aged 70 and older who had received a stroke diagnosis during the previous 30 days. Three hundred patients in the rehabilitation division fulfilled our requirements. Signals such as electrocardiography, electromyography, speech recordings, electroencephalography, and foot pressure were among the many that were collected for processing. Before any biosignal data were acquired, the sensors were put through a series of rigorous tests to guarantee that they were in proper working order.
Data were also collected from a total of 300 patients, none of whom had suffered a stroke, all of whom were considered to be in the normal group. Each patient was put through a variety of exercises that included standing, walking, sitting, raising their arms, and even sleeping, to simulate actions that people do regularly. Before beginning the actual measurement procedure in each situation, one practice run was provided for each individual to ensure accurate results. The first values that were collected could have been influenced by human noise caused by a subject's tension or discomfort; as a result, these values were not included as experimental data. Repeatedly exerting senior citizens in trials is a waste of their time, which is why the most recent measurement methods were left out. Through the use of the Bluetooth protocol, each piece of biosignal data was transmitted instantly and directly to the main server. The biosignal data from the gateway are sent to the server that accumulates and predicts biosignals via the Wi-Fi connection protocol. A medical doctor oversaw the entirety of the measurement experiment, and a fresh set of measurements was taken whenever any of the collected biosignal data were corrupted or destroyed. A four-byte voltage value per sample was used for the EMG data transmitted from each of the four EMG sites. To detect neuromuscular abnormalities and problems with balance, the electrical activity in the muscles, as measured by the EMG biosignals, is used; this activity can also be considered a muscle reflex. In this particular work, EMG biosignals are utilised to construct a machine learning model for stroke prediction. The characteristics are derived from the raw muscle data of the biceps and gastrocnemius muscles. Variables obtained from the raw EMG data were incorporated into tests of prediction models based on machine learning as well as into multidimensional analyses. When extracting characteristics, data points were obtained by dividing the raw data into 0.1-second units; since the variability of muscle movement was deemed acceptable at an EMG sampling rate of 1500 Hz, this division was used. The procedures described in this research were utilised in the collection of the data. In our investigation, we relied entirely on the electromyographic (EMG) biosignals that were captured during our scenario, which encompassed activities such as standing, walking, stretching one arm, and even sleeping. Only the data regarding walking were analysed, even though a wide variety of experimental data existed. Tables 1 and 2 show the classification accuracy on the training and testing sets. This section is focused on the findings related to the classification of elderly stroke patients and non-stroke patients using the LSTM, which is based on the recurrent neural network (RNN). The development and evaluation of prediction models were based on the data collected from 271 stroke patients and 271 healthy individuals, respectively. After being randomly separated from the data sets used for learning, the data that were not put to use for learning were turned into test sets. Train/test ratios of 70/30 and 80/20 were used for the experiment and the analysis, respectively; these ratios were used to divide the data. In Figures 4 to 7, the results achieved using the proposed method with several metrics are discussed.
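The feature-extraction step described above, raw EMG sampled at 1500 Hz and split into 0.1-second units, can be illustrated with the short sketch below. The window-level features (mean absolute value, RMS, zero crossings) are common EMG descriptors chosen here for illustration and are not necessarily the exact variables used by the authors.

import numpy as np

FS = 1500          # sampling rate in Hz, as stated in the text
WIN = FS // 10     # 0.1-second windows -> 150 samples per window

def emg_features(signal):
    """Split a raw EMG trace into 0.1 s windows and compute simple
    per-window features (an illustrative choice of descriptors)."""
    n_windows = len(signal) // WIN
    feats = []
    for i in range(n_windows):
        w = signal[i * WIN:(i + 1) * WIN]
        mav = np.mean(np.abs(w))                 # mean absolute value
        rms = np.sqrt(np.mean(w ** 2))           # root mean square
        zc = np.sum(np.diff(np.sign(w)) != 0)    # zero crossings
        feats.append((mav, rms, zc))
    return np.array(feats)

# Hypothetical 10-second recording from one of the four EMG sites
raw = np.random.default_rng(1).normal(size=10 * FS)
X = emg_features(raw)      # shape: (100 windows, 3 features)
print(X.shape)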
By collecting EMG biosignals while the subject was walking, a prediction accuracy for stroke disorders of 90.38% was reached using the machine learning algorithm random forest, and 98.958% was obtained using the deep learning LSTM method. In the course of this investigation, EMG healthcare devices were utilised to gather real-time data at a frequency of 1500 Hz from a total of four different sites, including the left and right biceps femoris as well as the gastrocnemius muscles. Because it enables medical professionals and hospitals to take preventative measures for the early detection and diagnosis of stroke diseases in patients who are in the risk group, using these real-time biosignals to analyse and forecast stroke illnesses is a wise strategy. The model that was developed by machine learning and deep learning should be thoroughly analysed by medical staff together with electronic medical records, clinical data, emergency blood tests and MRI information, and other relevant data to ensure accurate stroke illness analysis and forecasts. This will allow for better patient care. As a consequence, rather than placing all of their faith in the AI-based model that was built as a result of this body of work, researchers ought to incorporate studies that make use of medical expertise and clinical experimental data to forecast stroke disease. Because the experiments in this study used EMG biosignals only while walking, the prediction of multimodal biosignals such as EEG and ECG should also be taken into consideration. The ultimate goal was to ensure that intensive research into the early diagnosis and prognosis of strokes and other chronic diseases, such as diabetes and heart disease, would be carried out using biosignals as the primary method. Using the experimental findings from this study that are based on EMG, medical practitioners can effectively foresee and identify stroke illnesses. Furthermore, the preliminary findings suggest that AI approaches can be utilised to assist in the diagnosis of diseases purely in everyday life.

Conclusion

The purpose of this study is to construct a machine learning system capable of predicting strokes in the brain using real-time EMG data. The SVM classifier is trained using simulated data and is then tested using actual data. The samples are generated by applying the notion of data augmentation, which makes it easier to train with a substantial quantity of data. When the simulation is run to check whether the model performs as anticipated, the results show that the proposed classifier is more accurate than the approaches that are currently being used. At the cutoff point, the proposed model achieved 91.77% accuracy, 90.28% precision, 91.44% recall, and a 91.12% F1-score. The proposed model attained a higher level compared with the other models. Furthermore, the SVM precision, recall, and F-measure are all higher than those of the competing approaches.

Data Availability

The data used to support the findings of this study are included within the article and can be obtained from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
8,493.8
2022-10-12T00:00:00.000
[ "Computer Science" ]
Chinese Language Theory Textbook: Challenges and Solutions

Proposed is a project (or set of principles) for a new version of the textbook of theoretical Chinese grammar, based on the Predicative Concept of Language and continuing the 2005-2006 edition. The book would be more systematic and will consider in more detail the non-equivalence of the language levels in Chinese and in Russian; the Topic-prominent typology of Chinese will be considered in more depth, as well as the variety of regional variants of Chinese; the impossibility of assimilating the Topic-prominent grammar to that of the inflectional Subject-prominent languages will be systematically argued.

Introduction

Discussed are the basic principles of a textbook of the theory of the Chinese language for universities, first of all in Russia, where «Linguistics» (equivalent to the «Foreign language» specialties in other countries) is the most common specialty (along with «Oriental Studies» and «Pedagogical Education»). In Russia, considerable attention is paid to theoretical disciplines, and although their number is constantly decreasing, it is still possible to offer them as electives. First of all, this is «Theoretical Grammar», usually lectured during one term or an academic year in the 3rd or 4th bachelor's year. The main problem is the understanding of the essence of the Chinese language and the explanation of the language's typology. In Soviet textbooks after 1950, Chinese was rigorously adapted to the grammar of Russian. In Western countries, a descriptive approach prevailed. After the 1980s the market has gradually been filled with textbooks published in the PRC, where the theory is either absent, or applied minimally (the same descriptive approach), or reduced to formulas or "trees" (the "generative" approach), or interpreted as a «slightly changed» «European» theory (simply originating in the universal grammars of the 17th century). In Russia, in 1960-1980, there was a move towards a radical revision of the theory of both general and eastern linguistics, but since the 1990s, and even more so in the 2000s, there has been a "rollback": ever more often there are appeals to return to a fairly simple and consistent "middle school view" close to elementary Russian grammar with all its categories. As for our two-volume «Theoretical Grammar of the Chinese Language» (Kurdyumov 2014), recommended by the Ministry of Education, its introduction and numerous discussions have shown that 1) there is still a lack of understanding of the specifics of Chinese (an isolating and topic-prominent language), 2) teachers of language practice do not seek theoretical generalizations, and 3) students in Russia (mostly accustomed to the format of the Unified Graduate Exam, "EGE") are used to simple grammar standards and are unwilling to give them up. The material of the Chinese language and the author's more than 30 years of pedagogical experience show that the study (both practical and theoretical) would be more successful if one applied the principles previously outlined by Yuen-ren Chao (1968) and Li and Thompson (1976, 1981), and also relied on the principles of the Predicational Concept of Language (Kurdyumov 2013): it is necessary to consistently explain to students how Chinese differs from European languages.
So the main emphasis should be on 1) the topic-prominent typology of the Chinese language, 2) the non-importance of the categories of actor and action, 3) the significance of the language level system as opposed to the "European" one, 4) the positionality of the parts of speech, 5) micro-syntax as the basis for the formation of lexical units (not "words"), 6) the significance of units higher than the clause, and 7) the possible changes (with all their contradictions) in Chinese towards the agglutinating type. In addition, the "Chinese language" as a system should be considered as a whole, including "dialects" and the classical literary language wenyan, i.e. examples from these areas are valid and meaningful (if provided with explanations).

Purpose and objectives of the study

The main purpose: continuing the tradition of the 2005-2006 edition, to create a comprehensive textbook of Chinese language theory, to define the basic principles of its improvement, and to determine how to convey its typological specifics to the students. Objectives: to show the linguistic principles on which the textbook of the theory should be based, and how the essence of the Chinese language can determine the structure and content of the textbook. The structure can / should be the following: a review of the achievements of Chinese linguistics in Russia, in the West and in the Chinese-speaking regions; the latest achievements in the genealogy of the Chinese language; the latest achievements in the typology of the Chinese language; Chinese "dialects" and "regional variants"; the language level system and its fundamental differences from the "European" one; the (topic-prominent) syntax of the Chinese language; micro-syntax as an analogue and an opposite of "word formation"; the problem and the system of the (positional) parts of speech; a description of specific categories; and a complete index of function words, classifiers and interjections. A significant role should be played by the student's understanding of his own identity as a critical researcher who knows well the theoretical principles of the organization of language and at the same time is "outside" it (Zheltukhina 2016), and by a culture-based approach to study (Tareva 2017).

Literature review

There are, of course, no guidelines as to how theory textbooks should be built: there are (only) programs and curricula that, if prepared in good faith, are based on the existing textbooks. So in most cases the author can follow the existing patterns (the books on the theory of Chinese, the textbooks of the theory of other languages). The existing ones do not satisfy us as a whole: in Russia they are archaic and based on the stereotypes of the 1950s; in Western countries they are often oriented to the more practical demands of the audience; those published in the Chinese-speaking regions are secondary to Western-language descriptions and stereotypes. Our own Grammar (Kurdyumov 2014), published only in one volume (instead of two), also followed the previous samples, and both volumes were about morphology (parts of speech).

The most famous works. Dragunov's book (1952) was well known, although its first part was written in Leningrad before World War II (and published later: Dragunov 1962). The best known is the so-called "morphological part" (1952). The grammar was written in the last years of Stalin's life and during the Soviet debates on linguistics (overcoming Marrism, i.e. the (class-based) "Japhetic theory" that replaced linguistics in 1924-1950).
Dragunov was forced to add a lengthy (more than 30 pages) preface criticizing the views of another famous sinologist, professor Oshanin (later, in the 1980s, editor-in-chief of the monumental dictionary). Dragunov's grammar is constructed as a consistent proof of the existence of a stable system of (vocabulary-based) parts of speech in Chinese. Main achievements: the basic positions of Noun and Predicate are proposed (the predicate includes verbs and (qualitative) adjectives), which the author deduces from syntactic compatibility. Most examples were taken from "parallel Chinese", i.e. the Dungan language (heavily influenced by Russian). To overcome the problem of the instability of parts of speech (the same unit can act differently in different positions, for example, noun → relative adjective → qualitative adjective → verb, without affixation), Dragunov proposed the concept of "basic" and "derivative" values of parts of speech: 建設 jiànshè 'to build' (verb) as the basic one, and 'constructing / a construction' (gerund / noun) as a derivative one. From our point of view, it is practically impossible to distinguish the "basic" and "non-basic" values: the parts of speech "travel" (over different "distances"), and the spoken language baihua, as well as archaic wenyan, continues to remain isolating. In addition, Dragunov was forced to follow an ideological dogma: "There should be everything as in Russian: Chinese is normal". Solntseva & Solntsev (1978) follow the same path: the main aim is to prove the "normality" of Chinese and its similarity to Russian. The authors proposed the concept of a "zero" form of a word, like the "zero flexions" in Russian, and also the so-called "conversional homonymy". "Conversional homonymy" was previously argued by Smirnitsky (1957) to describe the parts of speech in English (where lexical units also "travel"): if a word goes into another part of speech, then it becomes another word. From our point of view, Chinese speakers do not differentiate the lexical meanings of units in different positions, and the "conversional homonymy" itself is nothing more than an ingenious method for solving the "eternal drama" of Soviet sinology: parts of speech in Chinese. While Dragunov's grammar (taking into account its two parts: 1952 & 1962) was relatively complete, Solntsev and Solntseva focused only on the morphology. Speaking about the typology of the Chinese language, they came to the rather exotic idea that Chinese is "isolating in type and agglutinating in technique", and that all languages, unlike the classes of Humboldt and Schleicher, can be divided only into two types: "isolating" and "non-isolating" (obviously repeating Schleicher's ideas about analytical and synthetic languages). It is also impossible to borrow ideas from N.N. Korotkov's work (1968), which deals with the same issue (parts of speech): trying to prove their existence, the author constantly arrived at contradictory conclusions, constantly contradicting himself. Gorelov's grammar (1982) was excellent in its coverage of literary language material. It described in detail the types of Chinese syntactic and morphological structures, with only one peculiarity: it copied middle-school rules and descriptions, for which Gorelov was criticized even in the 1980s.
Nevertheless, creating our own grammar already in the 2000s, we tried to adhere to the structural principle and sequence of Gorelov's: the morphology, the syntactic types, an index of function words, a refusal to analyze lexical structures, an excessive number of examples. The grammar of Tan Aoshuang (2002) is written in line with the "Moscow Semantic School" and can be viewed more as a monograph, based on the descriptivist principle of "minimal theory", as well as on a similar principle (quite widespread in Chinese linguistics): do not strive for global generalizations and do not proceed from philosophical ones, but strive to reveal some "small" / "tasty" hidden aspects of the grammatical structures that are incomprehensible to the European reader. Of course, it was an interesting work with well-reasoned approaches, but it is more a presentation of a methodology than of (general) results that could be explained to Russian students. Of the grammars written in English, the ones to be mentioned (for us) are those of Yuen Ren Chao (Chao 1968) and Li & Thompson (1981). These books are significant because they first proposed a methodology for analyzing Chinese as topic-prominent: Chao put forward the very idea that what is usually called a "Subject" is really a Topic (i.e. the described, characterized part of the syntaxeme), and the so-called "Predicate" (in reality, the Comment), unlike in European languages, does not express the idea of "actor and action" (which is absolutely familiar to a native speaker of Russian) (Kotsik 2017, Kotcik 2018). Li & Thompson proposed formal criteria for finding the Topic and Comment, as well as the idea of a new, four-stage typology of languages (in synchrony and diachrony), where languages move, alternating between the Topic-prominent and Subject-prominent types (also Li & Thompson 1976). Both textbooks are detailed and explain well the nuances of the grammar of the "Beijing" type of the Chinese language. Nevertheless, they are designed for American students and the specifics of studying in the United States; the books are replete with theoretical simplifications and a tendency towards descriptive methods (not stressing the priority of theory). As for the grammars published in the Mainland and Taiwan, they are basically guided by the categories of classical European grammars, and the authors do not strive for theoretical generalizations. In some cases, such grammars simply reproduce the categories of the English language (Tang 2016); in the best cases, they can present many new features of Chinese and so can be used as (marvelous) reference books (Xiandai 1984). From the point of view of a Russian reader, where some sections could be divided into only a few main points, they are really subdivided into too many subcategories. In addition, such books lack an extremely significant comparative component, very important for a non-native speaker: comparisons with the students' native language (in this case, Russian) and explanations of the reasons for certain "dissimilar phenomena". Methodology We believe that Chinese, as a language of non-inflectional typology, can be an excellent key for explaining many controversial issues in general linguistics, including, for example, those in English, which is extremely analytic and, in fact, close to isolating.
The methodological basis of our textbooks is the Predicational Concept of Language (Kurdyumov 2013), with the following basic principles: all languages have everything, but only in different proportions; following Li & Thompson and Yuen Ren Chao, languages may be divided (at least) into Topic- and Subject-prominent and, therefore, Chinese cannot be described by the rules of Russian / European languages; the Topic and Comment, originally proposed on the basis of Chinese, are quite universal categories and suitable for creating a new theory of general linguistics (Simatova 2019). In the frame of such a theory, language in general can be considered neither a product nor a static system, but a motion, a "flow", a set of processes of generation and perception, the "key" points of which are Topic-Comment structures. Language can be modeled and explained as a multidimensional (2 axes, at least) dynamic system, where levels are "vertical" (with the clause, as a primary communicatively autonomous unit, in the center), and the processes of generation and perception are "horizontal". When explaining the structure of sentences, it should be explained that there are no subjects and predicates in Chinese, since they (in Russian, English, etc.) convey the "actor-action" idea: in Chinese, for the most part, no action "comes from" anything; sentences like "Time (itself) is running fast" or "The phone is (is lying) on the table" (as in Russian) are impossible; most "actions" are nothing else but descriptions of a situation / state, or of their changes. At the same time, even when really "someone is doing something", there cannot be agreement between the noun and the verb in the two parts of a sentence: so, anyway, there are Topic and Comment. That is why colloquial Chinese avoids formalized passive constructions (which causes another difficulty: Russian students interpret sentences like 茶碗打破了 Cháwǎn dǎpò-le as "The teacup is broken", that is, it broke "by itself" as in Russian, which is unacceptable for a Chinese speaker). In addition, we proceed from the unity, but at the same time the differences, inside the phenomenon called "Chinese language" (with "dialects", as they are usually described in Russia and Mainland China, which are treated as "separate languages" in Western Sinology). The textbook should describe not only the "Beijing version", but also the regional features (Mainland / Taiwan: Putonghua and Guoyu) and, if possible, take into account the features of the "dialects" (and provide examples). In addition, the literary written language wenyan should be described and constantly mentioned in comparisons. Results We believe that Chinese, as a language of non-inflectional typology, can be an excellent key for explaining many controversial issues in general linguistics, including, for example, those in English, which is extremely analytic and, in fact, close to isolating. The methodological basis of our textbooks is the Predicational Concept of Language (Kurdyumov 2013), with the following basic principles: all languages have everything, but only in different proportions; following Li & Thompson and Yuen Ren Chao, languages may be divided (at least) into Topic- and Subject-prominent and, therefore, Chinese cannot be described by the rules of Russian / European languages; the Topic and Comment, originally proposed on the basis of Chinese, are quite universal categories and suitable for creating a new theory of general linguistics.
In the frame of such a theory, language in general can be considered neither a product nor a static system, but a motion, a "flow", a set of processes of generation and perception, the "key" points of which are Topic-Comment structures. Language can be modeled and explained as a multidimensional (2 axes, at least) dynamic system, where levels are "vertical" (with the clause, as a primary communicatively autonomous unit, in the center), and the processes of generation and perception are "horizontal". When explaining the structure of sentences, it should be explained that there are no subjects and predicates in Chinese, since they (in Russian, English, etc.) convey the "actor-action" idea: in Chinese, for the most part, no action "comes from" anything; sentences like "Time (itself) is running fast" or "The phone is (is lying) on the table" (as in Russian) are impossible; most "actions" are nothing else but descriptions of a situation / state, or of their changes. At the same time, even when really "someone is doing something", there cannot be agreement between the noun and the verb in the two parts of a sentence: so, anyway, there are Topic and Comment. That is why colloquial Chinese avoids formalized passive constructions (which causes another difficulty: Russian students interpret sentences like 茶碗打破了 Cháwǎn dǎpò-le as "The teacup is broken", that is, it broke "by itself" as in Russian, which is unacceptable for a Chinese speaker). In addition, we proceed from the unity, but at the same time the differences, inside the phenomenon called "Chinese language" (with "dialects", as they are usually described in Russia and Mainland China, which are treated as "separate languages" in Western Sinology). The textbook should describe not only the "Beijing version", but also the regional features (Mainland / Taiwan: Putonghua and Guoyu) and, if possible, take into account the features of the "dialects" (and provide examples). In addition, the literary written language wenyan should be described and constantly mentioned in comparisons. Discussions Taking into account the "Chinese fever" in Russia, when many students are eager to learn Chinese in order to become translators or teachers of Chinese, the statements outlined above have been repeatedly discussed and tested. In 2016-2020, the author headed the working group that determined the principles of the school graduate unified exam (EGE) in the Chinese language, and many theoretical provisions were included in its competency rubricator. On April 27, 2020, a discussion took place at the Russian State Humanitarian University, where the author's concept was once again discussed. In contrast to our concept, opponents believe that the basic SVO order should be observed, as opposed to Topic-Comment structures. Our teaching experience, many years of work as a translator, and constant communication with native speakers in mainland China and Taiwan nevertheless show the basic nature of Topic-Comment structures and the possibility of a direct correspondence between the "isolating" and the "topic-prominent" type of language. Conclusion The textbook based on the principles proposed above, despite their "avant-garde nature" (for 30 years already), should serve the formation of a linguist-sinologist who clearly understands the typological differences between languages and the differences in the worldview / mentality and discourses of existence of their speakers.
Chinese, and more broadly isolating languages, are typologically different from "Western" ones, while the "contradictions" are not a goal in themselves, but only a way of forming the student's holistic picture of language in general and, in the future, a new theory of general linguistics, embodied in new textbooks.
4,373.6
2021-05-31T00:00:00.000
[ "Linguistics" ]
Physical modelling techniques for the dynamical characterization and sound synthesis of historical bells Capable of maintaining their characteristics practically intact over the centuries, bells are musical instruments able to provide important and unique data for the study of musicology and archaeology, essential to understand past manufacturing and tuning techniques. In this research we present a multidisciplinary approach based on both direct and reverse engineering processes for the dynamical characterization and sound synthesis of historical bells, which has proven particularly useful to extract and preserve important information for Cultural Heritage. It allows the assessment of a bell's 3D morphology, sound properties, and casting and tuning techniques over time. The accuracy and usefulness of the developed techniques are illustrated for three historical bells, including the oldest recognized bell in Portugal, dated 1287, and two eighteenth-century bells from the Mafra National Palace carillons (Portugal). The proposed approach combines non-invasive, up-to-date imaging technology with modelling and computational techniques from vibration analysis, and can be summarized in the following steps: (1) For the diagnosis of existing bells, a precise assessment of the bell geometry is achieved through 3D scanning technologies, used for the field measurement and reconstruction of a 3D geometry model of each bell; (2) To assess the modal properties of the bells, for any given (at the design stage) or measured geometry, a finite element model is built to compute the significant frequencies of the bell partials, and the corresponding modal masses and modeshapes. In the case of existing bells, comparison of the computed modes with those obtained from vibrational data, through experimental modal identification, enables the validation (or otherwise correction) of the finite element model; (3) Using the computed or experimentally identified modes, time-domain dynamical responses can be synthesized for any conceivable bell, providing realistic sounds for any given clapper and impact location. Although this study primarily aimed to better understand the morphology and sounds of historical bells to inform their conservation/preservation, this technique can also be applied to modern instruments, either existing or at design stages. To a larger extent, it presents strong potential for applications in the bell industry, namely for restoration and re-tuning, as well as in virtual museology. Introduction The existence of bells dates back to the Bronze Age, ca. 2nd millennium BC [1], and their morphology has taken many forms over time, depending on aspects such as culture, function, knowledge and technology. Intrinsically related to the history, arts and science of every culture, bells carry important and unique cultural heritage value. Through the creative efforts of gifted crafters, primitive bells gradually became capable musical objects fulfilling cultural criteria for music performance. If not physically altered, bells maintain their characteristics over time. Their morphology and tuning can be taken as representative of where the instrument was built or located, providing a testimony of the casting and tuning techniques of the time they were cast. Apart from shape or size, all bells follow the same operation and physical principles.
Their sound results from the vibrational response to a clapper excitation and depends essentially on three main aspects: (1) the bell 3D geometry, (2) the bell material and (3) the excitation conditions (e.g. clapper mass and geometry, impact velocity and contact location). These principles have captured the attention of bell-founders and scientists for centuries. However, only from the 1980s, with the significant growth of technology and computational power, have substantial steps been made in the physical modelling and simulation of musical instruments' dynamical behaviour. Using computational and experimental techniques, essential information has been obtained regarding the vibrational modes of bells [2], the influence of the bell geometry on the bell sound [3], the analysis of bell material [4][5][6], the dynamical responses of bells to an excitation [7,8], as well as the development of physics-based sound synthesis techniques [9,10]. Several studies have included 3D analysis of bell vibrations [11][12][13][14]. However, as far as we know, previous research on bell sound and vibrational features has not yet included precise 3D geometrical data, which can only be obtained through cutting-edge 3D imaging technologies, such as structured-light scanning, a technique based on the distortion of known projected light patterns. In this paper we propose a general physical modelling approach, which takes into account all the underlying physical principles of the bell functioning, in order to characterize its dynamical behaviour and generate realistic synthesized sounds for virtually any bell excited by a clapper. A significant aspect of this research is the use of 3D imaging technologies, namely structured-light scanning, for obtaining the bell geometry, which, combined with finite element computations [15] and modal methods, allows a highly accurate approximation of the dynamical behaviour of these musical instruments. Ultimately, this approach allows a morphological and dynamical characterization of bells, thus contributing to the preservation of important information on the bell sound properties. The techniques used throughout this work are presented in detail, and their usefulness and effectiveness are illustrated by results from several case studies on historical bells. After providing an overview of the research workflow ("Research workflow" section), the three main stages of this approach are presented in a thorough manner. The "Geometry assessment" section presents the techniques for the measurement of the bell 3D geometry. After describing the 3D imaging technology used, we compare two metrology methods used to infer the full 3D geometry of a thirteenth-century bell. The "Modal analysis" section presents the developed modal computation techniques for assessing the modal parameters of bells. Two different modal analysis approaches were used and their effectiveness is illustrated for an eighteenth-century bell from the Witlockx carillon at the Mafra National Palace (MNP). Finally, the "Sound synthesis" section addresses the dynamical system formulation and time-domain numerical simulations, in order to perform the sound synthesis of a bell excited by a clapper. To validate our approach, we compare the simulated and measured bell responses for two differently sized bells from the Witlockx carillon at the MNP (see "Illustrative results" section). Additionally, several simulations on these bells are used to illustrate the power and potentialities of the proposed approach.
Research aims The main aim of this research is to develop a multidisciplinary approach which combines up-to-date imaging technology with physical modelling techniques for the dynamical characterization and sound synthesis of historical bells. This methodology allows access to the bell morphology and the significant modal parameters of bells, thus providing and preserving information on sound qualities such as the bell tuning, the effects of asymmetry and ornamentation on the warble [16] and the partials' decay times. Finally, the developed sound synthesis approach allows a set of realistic and comprehensive simulations of the vibrational responses of bells of any size and shape to be performed, either existing or at the design stage. The developed methodology has potential to be applied in many fields, including bell design, bell restoration, re-tuning, virtual museology and Cultural Heritage preservation. Research workflow A flowchart of the global research strategy applied in this study is presented in Fig. 1 and is based on three stages: (1) geometry assessment, (2) modal analysis and (3) sound synthesis. Stage 1-geometry assessment A precise assessment of a bell profile is a challenging issue even for modern bell-founders. The usage of calipers, plaster moulds, or laser beams are some of the currently used methodologies for that purpose. In the specific case of carillon bells, given their variable sizes and their frequently hard-to-access locations, these techniques can be painstaking, unreliable, and in some cases unfeasible. Moreover, because of unpredictable aspects inherent to the bell casting process, the bell geometry varies not only along its height but also slightly along its radius. These radius variations translate into symmetry breaks with important consequences for the bell sound. If bells were perfectly axi-symmetric, their normal modes would occur in degenerate pairs with identical frequencies and possibly distorted modeshapes. However, because of the mentioned symmetry imperfections, modal pairs become non-degenerate, with slightly different frequencies. This causes beats in the radiated sound, a phenomenon known as warble among campanologists [16]. For these reasons, in order to achieve a realistic physical model of the structure, an accurate and detailed assessment of the full 3D geometry of the bell, in the order of tenths of a millimetre, is essential. Today's 3D imaging technologies combine refined computational methods of in-depth image acquisition with processing algorithms, providing fast, precise and versatile tools with numerous advantages for the assessment of complex geometries. In this work we use a high-resolution, portable 3D scanner to assess the full 3D geometry of bells. Because many musically important aspects of the bell sound are related to fine details of the bell geometry, this tool can provide comprehensive ways to characterize and describe historical bells, as well as give information on casting and tuning techniques. Stage 2-modal analysis In the second part of the work, the objective is to assess the modal parameters relevant for the dynamical characterization and sound synthesis of bells (i.e. modal frequencies, modal damping values and modeshapes). Despite the existence of a wide literature on bells, there is still a lack of research addressing the full assessment of these parameters, notwithstanding their significance.
The identification of the ratios between the first five partials, at least, is of crucial importance for assessing the bell tuning quality, as bell-founders commonly agree that they should fall into the specific ratios of 0.5 (hum-note), 1 (prime), 1.2 (tierce), 1.5 (fifth), 2 (nominal) [17,18]. On the other hand, as mentioned above, an accurate characterization of close modal pairs can contribute to a more efficient control of the warble. Moreover, the knowledge of modal damping provides important information on the energy dissipation processes, allowing an objective quantification of the partials' decay rate [19]. Finally, through the modeshapes, we can examine the spatial vibration patterns associated with each modal frequency, thus assessing relevant spatial information on the nature of the respective modes, which can be particularly useful to correct the tuning of bells through local structural modifications. In practice, these parameters can be assessed either experimentally or numerically. In this work both approaches are considered and can be used individually or combined, depending on the available information. Numerical approach Numerically, the bell modal parameters are assessed in this work through the finite element method (FEM). The basic idea of this technique is to obtain a numerical model that approximates the dynamical behaviour of the structure by using the fundamental equations of dynamics. The continuous structure is discretized into a mesh constituted by a finite number of elements with mass and stiffness properties, which are interconnected at points called nodes. Ultimately, by taking into consideration the continuity relations between neighbouring elements and the conditions of the structure fixation (boundary conditions), the modal parameters of the structure can be computed by solving an eigenproblem. This method has been widely used in the field of campanology for analysing bell modes [15], for exploring new bell designs, including major-third bells [20], as well as for several other purposes in the bell-founding industry. However, apart from recent studies [21,22], reverse engineering techniques combining 3D imaging technologies with FEM for the study of bells are still scarce in the literature. In this work we explore this powerful combination to achieve a virtual model as close as possible to the real bell. This constitutes a more accurate and efficient approach, which incorporates several steps ranging from the bell geometry (using 3D scans) to the model parameter computations (using FEM). A clear advantage of using FEM for the modal computations comes from the fact that one can easily assess the modal frequencies and the 3D modeshapes of the structure. However, despite being quite straightforward and capable of providing very satisfying results, this method still has some limitations. From the bell 3D geometry, its frequency ratios can be understood by assuming a generic bell bronze. However, for an accurate assessment of the modal frequency values, a precise knowledge of the material properties (mass density, Young's modulus and Poisson coefficient) is needed. Also, the computation of the modal damping values is still a challenging issue. These limitations can be overcome, in the case of existing bells, through the use of experimental techniques. Experimental approach The musically important modal parameters (modal frequencies, modal damping values and modeshapes) can also be assessed experimentally, from vibrational measurements.
To that end, several experimental approaches have been proposed, which can be divided into frequency-domain and time-domain identification techniques [23]. In this work we use a recently developed experimental technique [24], based on a robust time-domain modal identification algorithm, the Eigensystem Realization Algorithm (ERA) [25], which has proven suitable for assessing the modal parameters of complex structures with close frequencies. The basic idea is to adjust the parameters of a mathematical model of the bell response (in this case a time-domain impulsive response) to the vibrational measurements obtained experimentally on real bells. From the parameters that best fit the measurements, the bell modal parameters can be identified, including the modal damping values. Despite the referred advantages, the experimental approach cannot be applied in the case of non-existent or non-functional bells. Besides, performing a full vibratory assessment of a bell is a painstaking process, which implies considerable work in practice. A detailed presentation of both the numerical and experimental methods used in this work is given in the "Modal analysis" section. Stage 3-sound synthesis During the last decades, research in the field of music acoustics has provided a set of techniques for simulating the real behaviour of musical instruments. Generally speaking, sound synthesis methods can be divided into two groups: abstract and physical [26]. The fundamental difference between these two categories is the absence (abstract synthesis) or existence (physical synthesis) of an underlying modelling of the instrument's physical behaviour. Despite a higher computational complexity, physics-based sound synthesis methods allow physically meaningful computations to be performed by simple changes in the model parameters. Sound synthesis of musical instruments has been investigated extensively and significant efforts have been made to simulate their dynamic responses to an excitation, namely for wind instruments [27], string instruments [28] and percussion instruments [29]. A thorough treatment for the specific case of bells can be found in Roozen-Kroon [7]. In this work, based on the computed and experimentally identified modal parameters, a physics-based synthesis technique capable of simulating the time-domain dynamical responses of a bell to an excitation is developed. The idea is to obtain a physical representation of the musical instrument through a set of partial differential equations. In the case of a bell excited by a clapper, this includes a formulation of the bell structural dynamics, the clapper dynamics and the bell/clapper interaction force. Finally, from the physical modelling, a time-step integration algorithm is used to calculate the temporal responses of the bell to a clapper excitation. A particular feature of the bell sound is that the perceived notes, although resulting from actual modal frequencies, often arise as virtual pitches. This phenomenon, known as the strike note [30], relates to physical and subjective aspects which are still a matter of debate among campanologists. Although these aspects are beyond the scope of this study, it is worth noting that the strike note is a psychoacoustic effect that results from the behaviour of the auditory system when reacting to the physical modes and, as such, is intrinsically taken into account in this work. As a result, realistic sounds for any conceivable bell and clapper excitation conditions can be generated, including from archaeological artefacts.
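As a bare-bones illustration of the modal/additive idea underlying this stage (ahead of the detailed formulation given later), the following sketch simply sums a few exponentially decaying sinusoids; the prime frequency, partial amplitudes and decay times are hypothetical placeholders, not values identified in this study.

```python
# Hypothetical additive sketch of a bell-like tone: a sum of decaying sinusoids
# whose frequencies follow the ideal partial ratios 0.5, 1, 1.2, 1.5, 2.
import numpy as np
from scipy.io import wavfile

fs = 44100                                    # sampling rate [Hz]
t = np.arange(0, 6.0, 1.0 / fs)               # 6 seconds of sound

prime = 500.0                                 # hypothetical prime frequency [Hz]
ratios = np.array([0.5, 1.0, 1.2, 1.5, 2.0])  # hum, prime, tierce, fifth, nominal
amps   = np.array([0.6, 1.0, 0.8, 0.4, 0.9])  # placeholder amplitudes
decays = np.array([8.0, 4.0, 2.5, 1.5, 1.0])  # placeholder decay times [s]

sound = sum(a * np.exp(-t / tau) * np.sin(2 * np.pi * prime * r * t)
            for a, tau, r in zip(amps, decays, ratios))
sound /= np.max(np.abs(sound))                # normalise to [-1, 1]
wavfile.write("bell_sketch.wav", fs, (sound * 32767).astype(np.int16))
```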
A detailed description of this approach is presented in the "Sound synthesis" section. Methodology As previously mentioned, in order to obtain a precise assessment of the full 3D, nearly axi-symmetrical geometry of a bell, we use 3D scanning techniques. In this work we opted for structured-light scanning, a non-invasive, contactless method that infers the unknown 3D geometry of a surface from the distortion of known projected light patterns [31]. This technique is rapid and allows the 3D geometry of bells to be assessed in situ, independently of their sizes, even in hard-to-access locations, as is often the case in bell towers. We use a portable, high-resolution scanner (model Artec Eva), equipped with a structured-light pattern generator and two cameras, capable of capturing and simultaneously processing up to two million geometrical points per second, with an accuracy of up to 0.5 mm. The bell is scanned in several parts by capturing 3D frames in a point-and-shoot manner (see Fig. 2a). The device is swept along the bell's inner and outer surfaces and the data is stored in the form of sets of point clouds containing the geometrical information. Figure 2a shows the scanning of a thirteenth-century bell with the Artec Eva scanner and Fig. 2b shows the real-time monitoring of the scanning process. The scanning is followed by post-processing of the gathered data using dedicated software (Artec Studio 12). During this process the separate scans are cleaned and aligned to obtain the full 3D geometry of the bell. Finally, the bell 3D geometrical information can be exported as a point cloud or a surface polygon mesh to a CAD file. Applicable in situ to objects from a couple of centimetres to several meters, this is a versatile, rapid and non-invasive technique which has proven to be effective for scanning bells and has been tested successfully by the authors in different scenarios, as presented throughout this paper. Illustrative results To illustrate the effectiveness of the scanning technique, we present results from a case study on a thirteenth-century bell (see Fig. 2). Dated 1287, this bell was found during an archaeological excavation next to the church of S. Pedro de Coruche, Portugal, and is considered the oldest known bell in Portugal. Figure 3a and b show photos of the bell from S. Pedro de Coruche, first as it was found, in pieces, and then after cleaning and restoration. The bell parts were assembled by means of an epoxy glue. The bell has a diameter of 0.22 m at its mouth and a height of 0.18 m to its shoulder, with a total weight of 5.6 kg. Given its delicate condition, the bell was scanned at the museum in Coruche and the post-processing was performed at the laboratory, in a total time of one day. The scanning results are shown in Fig. 3c. On the left, we can see the colour-rendered scan of the outer and inner surfaces of the bell. Since objects are inevitably subject to transformations over time, due to corrosion or sometimes even to accidents, the storage of the textured 3D geometry of the bell can be of great relevance for Cultural Heritage preservation purposes. At the center we can see the same geometry without colour. This representation, together with the coloured one, can provide a complementary view for analysing some details of the bell geometry, highlighting for instance geometry imperfections and blisters from the corrosion of the bell bronze [32].
Finally, on the right, we can see a simplified polygon mesh of the bell geometry, which can be readily imported into FEM software. For comparative purposes we present below the results from a previous work by Debut et al. [9], in which a different approach was used for assessing the 3D geometry of the same bell. Without the possibility of using a scanner at the time, a series of geometrical measurements were made, including diameters and thicknesses of the bell, in a total of 936 points. A mesh with 36 points regularly spaced azimuthally at 26 height locations was defined and the bell was placed upside-down on a rotating support. Every ten degrees, absolute position measurements were performed using a single-point optical displacement transducer, and a slide gauge was used to assess the thickness of the bell wall at each point (see Fig. 4). (Fig. 4 caption: Setup for the bell geometry description. a Outer surface measurements; b horizontal thickness measurements [9].) The 3D coordinates of each point of the mesh were subsequently computed from the geometrical measurements. Because of its complexity, the geometry of the bell crown could not be assessed through this technique. The complete process took approximately 4 weeks (a detailed explanation can be found in Debut et al. [9]). Figure 5 shows the bell geometry assessed through this approach. When compared with the results from Fig. 3c, the advantages of the scanner are clear. Its capacity to capture millions of geometrical reference points results in a much more detailed geometrical representation of the object than the laser/gauge technique. Moreover, the analysis is much faster. In Fig. 6a and b, measurements stemming from the scanner (in green) are superimposed with the ones from the laser/gauge technique (in red). The superposition of the profile measurements (Fig. 6a) highlights the differences between the two approaches. We can see that, apart from the head, the bell radii from the laser/gauge technique are, in this case, smaller than the ones obtained with the scanner. This holds for both the outer and the inner surface. In Fig. 6b the same results can be observed from a different perspective. The radii of the outer and inner surfaces of the bell are alternately represented at four different heights. The two larger circles correspond to the bell soundbow (at 10 mm height) and the two smaller circles (184 mm height) correspond to the point where the laser could no longer be used, because of limitations due to the angle of the bell profile at the shoulder. As we can see, the green circles (measurements from the scanner) are consistently slightly larger than the red ones. The same can be observed from the bell profile. In Fig. 6c and d, the differences between the two 3D geometries are represented. The geometries obtained through both approaches were superimposed, taking the scanner as the reference, and the distances between them were computed. In the figure, we can see the geometry of the outer (Fig. 6c) and inner (Fig. 6d) profile of the bell assessed using the laser/gauge technique. The distances in millimeters to the reference geometry are represented through the colour map. This representation allows one to visualize what is inside (in blue) and what is outside (in red) of the volume circumscribed by the reference geometry surface. In white are represented the parts where both meshes nearly coincide.
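The deviation maps of Fig. 6c, d can be illustrated, in simplified form, by nearest-neighbour distances between two point clouds; the sketch below is only an illustration of the comparison idea (the study itself relied on dedicated mesh-processing software), and the file names are hypothetical. Note that these distances are unsigned, whereas the colour maps also distinguish inside from outside, which requires surface normals.

```python
# Sketch: deviation of one measured bell geometry from a reference geometry,
# via nearest-neighbour distances between two point clouds (hypothetical files).
import numpy as np
from scipy.spatial import cKDTree

reference = np.loadtxt("bell_scanner_points.xyz")   # N x 3 scanner point cloud [mm]
measured  = np.loadtxt("bell_gauge_points.xyz")     # M x 3 laser/gauge point cloud [mm]

tree = cKDTree(reference)
dist, _ = tree.query(measured)   # distance of each measured point to the closest reference point

print(f"mean deviation  : {dist.mean():.2f} mm")
print(f"95th percentile : {np.percentile(dist, 95):.2f} mm")
print(f"max deviation   : {dist.max():.2f} mm")
```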
Figure 6c shows that, apart from the bell head, in general the measurements of the outside surface with the laser/gauge technique lie inside the volume created by the surface measured using the scanner. Figure 6d shows the inner surface, where, apart from the lip and top plate, the bell walls from the laser/gauge technique "come out" of the scanner measurements. Ultimately, this means that the bell represented through the laser/gauge technique was, in this case, slightly smaller than the one stemming from the scanner. From the histograms we can also see that, despite a general good agreement between both approaches, significant differences of up to 1.5 mm can be found. In conclusion, a comparison between the scanner and the laser/gauge technique demonstrates an improvement of more than one order of magnitude in dimensional precision and an enhancement of spatial coverage by several orders of magnitude. Additionally, only the scanner allows the assessment of the bell crown, a crucial element for the computation of the full volume of the bell. Apart from morphological information, this is also extremely useful for assessing material properties such as the mass density of the bell. Another advantage is that the geometrical data can be obtained in a fraction of the time using the scanner, and the bell geometry can be readily exported as CAD files to input into FEM software. Finite elements modelling As previously mentioned, through the finite element method the modal parameters are computed based on a physical model of the bell. In a first step, the geometry of the bell needs to be inserted into a finite element package. In our work, we import it as a CAD file with the bell surface geometry stemming from the 3D scanning methodology described in the previous section. This constitutes a significant aspect of this work, as the accuracy of the physical model is strongly dependent on the accuracy of the geometrical data. From the surface geometry, a 3D volumetric mesh is generated to create a FEM model. Mass and stiffness properties can then be defined for each element of the mesh, based on bell bronze material properties such as the Young's modulus, the density and the Poisson's ratio. Taking into consideration the continuity relations between neighbouring elements and the conditions of the structure fixation (boundary conditions), a physical model of the bell can then be obtained. Mathematically, most structural dynamics problems of a linear mechanical system may be described by the finite element equation of motion M ÿ(t) + K y(t) = 0 (1), where M and K are the global inertia and stiffness matrices of the system (built by assembling the elementary mass and stiffness matrices that describe the physical properties of each element of the mesh), and y(t) is the vector of physical displacements at locations r [33]. The solution of Eq. (1) can be assumed in the harmonic form y(t) = ϕ e^(iωt) (2), where ϕ is the vector of modal amplitudes of each node of the mesh in every direction of motion. From this assumption, Eq. (1) becomes the generalized eigenvalue problem with the classic formulation (K − ω^2 M) ϕ = 0 (3), from which the system modal frequencies f_n = ω_n / 2π and the respective modeshapes ϕ_n can thus be obtained. It is worth noting that the convergence of the FEM solutions towards the real bell behaviour depends on the mesh parameters, namely the type and number of elements, which need to be sufficiently detailed to represent the structure. The choice of these settings is problem-dependent.
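As an illustration of the eigenvalue problem above, the sketch below solves (K − ω^2 M) ϕ = 0 with a dense generalized eigensolver; the two 3 × 3 matrices are toy placeholders standing in for the assembled FEM mass and stiffness matrices of the bell.

```python
# Minimal sketch of the modal eigenvalue problem (K - w^2 M) phi = 0.
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.5, 1.0])                     # toy mass matrix [kg]
K = np.array([[ 4.0e6, -1.5e6,  0.0   ],
              [-1.5e6,  3.0e6, -1.0e6 ],
              [ 0.0,   -1.0e6,  2.0e6 ]])        # toy stiffness matrix [N/m]

lam, phi = eigh(K, M)                            # generalized symmetric eigenproblem K phi = lam M phi
f_n = np.sqrt(lam) / (2.0 * np.pi)               # modal frequencies [Hz]
print("modal frequencies [Hz]:", np.round(f_n, 1))
print("modeshapes (columns):\n", np.round(phi, 3))
```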
Based on preliminary convergence tests, we use in this work solid elements and take advantage of the spatial resolution of the scan data to obtain a highly detailed mesh, thus enhancing the approximation of the solutions. For results see the "Illustrative results" section. Experimental modal identification Experimental procedure The bell's responses to an excitation are collected at several points along the bell rim. A mesh of 32 points equally spaced around the bell's mouth is defined and an instrumented impact hammer is used for the excitation at each location. (Fig. 6 caption: a Superposition of the thirteenth-century bell profile measurements assessed through the 3D scanning technique (in green) and the profile obtained through the measurements with the optical displacement transducer and the slide gauge (in red). b Inner and outer surfaces of the bell at four different heights (scanner in green; optical displacement transducer and slide gauge in red). c, d Colour representation of the distances in millimeters between the 3D geometries assessed through the scanning technique and through the laser/gauge technique.) Attention needs to be paid when selecting the impact hammer configuration to guarantee an adequate amount of energy over the frequency range of interest. In our case, an impact hammer (Brüel&Kjaer type 8202), instrumented with a force transducer (Brüel&Kjaer type 8200) and equipped with several tips of different stiffnesses, is used for the small and mid-range bells. For heavier bells, a larger hammer, capable of exciting a lower frequency range, was designed by the authors. The corresponding vibrational radial responses are measured with three piezo-electric accelerometers (Brüel&Kjaer type 4375), positioned at the asymmetric angular coordinates 0°, 23° and 146° along the bell rim. This minimizes the probability of all three being located at nodal points of the same mode of vibration, thus ensuring that a maximum number of modes is detected and identified. Through this strategy the modeshapes at the bell's mouth can also be assessed, constituting a useful criterion for the modal identification process. Signals are acquired and converted into digital form using an acquisition board (SigLab/Spectral Dynamics, model 20-42). To comply with the Shannon theorem, a sampling frequency of Fs = 51200 Hz is used, as well as anti-aliasing filters. Signals are 12 s long, resulting in a frequency resolution Δf = 0.08 Hz, and allowing accurate estimates of the bell dissipation properties, since the decay times are of the same order as the duration of the acquisition. Identification of the modal parameters As previously mentioned, a software tool was developed at the laboratory, based on a polyreference multi-modal identification algorithm, which uses the time-domain responses for the modal identification. From the acquired time signals, the transfer functions H_ij(ω) = Ẍ_i(ω) / F_j(ω) are computed through the Fourier transforms of the excitation force F_j(ω) and of the accelerations Ẍ_i(ω) measured at the mesh locations i and j. By computing the inverse Fourier transform of H_ij(ω), the corresponding impulse responses h_ij(t) are obtained, from which the modal parameters are finally extracted using the ERA algorithm.
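A minimal sketch of that signal path (FRF by spectral division, impulse response by inverse FFT) is given below; the force and acceleration arrays are synthetic placeholders standing in for the measured hammer and accelerometer signals, and a small regularisation is added to keep the toy division well-behaved (in practice, averaged H1/H2 estimators would typically be used).

```python
# Sketch: transfer function H_ij(w) = A_i(w)/F_j(w) and impulse response h_ij(t)
# from a (synthetic) hammer force and accelerometer signal.
import numpy as np

fs = 51200                        # sampling frequency [Hz], as in the measurements
T = 12.0                          # acquisition length [s]
t = np.arange(0, T, 1.0 / fs)

# Placeholder signals standing in for measured data
force = np.zeros_like(t)
force[: int(0.0005 * fs)] = 1.0                            # ~0.5 ms impact pulse
accel = np.exp(-3.0 * t) * np.sin(2 * np.pi * 780.0 * t)   # toy decaying response

F = np.fft.rfft(force)
A = np.fft.rfft(accel)
H = A * np.conj(F) / (np.abs(F) ** 2 + 1e-12)   # regularised spectral division
h = np.fft.irfft(H, n=len(t))                   # impulse response fed to the ERA identification
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)       # frequency axis (resolution 1/T ≈ 0.08 Hz)
```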
Based on a mathematical parametric description of the system's dynamical behaviour, the algorithm adjusts the modal parameters to best fit the measured signals, assuming a sum of damped modal responses of the form h_ij(t) = Σ_{n=1..N} A_n^{ij} e^(−ω_n ζ_n t) sin(ω_n √(1 − ζ_n²) t) (4), where ω_n = 2π f_n and ζ_n are the modal frequency and damping values of mode n, and A_n^{ij} is the respective modal participation factor, related to the modeshape values of ϕ_n at the excitation and response locations. The modal parameters ω_n and ζ_n are extracted from a set of measured impulse responses organized in the form of a generalized Hankel matrix, while the modeshapes are obtained by adjusting, in the least-squares sense, a set of model responses to the corresponding measured signals. A sensitive issue of the identification process is determining the order N of the mathematical model, due to the polluting noise in the measured signals, which perturbs all identification algorithms. To overcome this problem, the number of modes of the model must be oversized, leading to the identification of some modes which are unphysical. The difficulty is then to select the physically relevant modes from the modes identified with the oversized model. For that purpose, we rely on a stability diagram, which allows an evaluation of the modal convergence as the order of the model grows, as well as on the identified modeshapes. A detailed presentation of the developed ERA method applied to bell modal identification is given in Debut et al. [24]. Illustrative results The results from the numerical and experimental approaches are now presented and compared through the modal analysis of an eighteenth-century bell from the Witlockx carillon of the Mafra National Palace (see Fig. 7). The bell has a diameter of 0.2 m at its mouth and a height of 0.16 m to its shoulder, with a total weight of 8.4 kg. Here, for the numerical approach, the bell geometry was scanned as described in the "Geometry assessment" section and then used as input for the finite element software. In Fig. 7b, on the left, the polygon surface mesh from the 3D scan is presented, where the bell inner and outer profiles can be recognized. At the center, the achievable level of detail is illustrated through the fully rendered model. The finite element software Cast3M [34] was used for constructing the bell physical model and for performing the computations. A volumetric mesh consisting of 29,070 solid elements (4-node tetrahedra) was generated (Fig. 7b, on the right). For the material properties, the bronze density was deduced from the measured mass and the computed volume of the bell, and the Young's modulus was adjusted until the first modal frequencies from the FEM computations reproduced the experimentally identified results. Finally, values of ρ = 8660 kg/m³ and E = 90 GPa were used for the density and Young's modulus, respectively, and a typical bronze value of 0.34 was assumed for the Poisson's ratio. The computations were performed assuming free boundary conditions, as if the bell were suspended. The FEM computation results for the first five lower partials are presented in Table 1. As previously mentioned, normal modes appear in pairs because of the symmetry properties of the bell structure. One of the FEM advantages is the possibility of computing the full 3D modeshapes.
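When reading such a table of partials, two quantities discussed earlier are of interest: the internal tuning ratios relative to the prime (targets 0.5, 1, 1.2, 1.5, 2) and the beat (warble) rate between the two members of each modal pair. A small bookkeeping sketch, using hypothetical pair frequencies rather than the values of Table 1, might look as follows.

```python
# Tuning ratios and warble (beat) rates from a list of modal pairs.
# The frequencies below are hypothetical placeholders, not Table 1 values.
modal_pairs_hz = [(392.1, 393.0),    # hum pair
                  (781.5, 783.9),    # prime pair
                  (947.2, 949.8),    # tierce pair
                  (1178.0, 1181.6),  # fifth pair
                  (1569.4, 1573.1)]  # nominal pair

targets = [0.5, 1.0, 1.2, 1.5, 2.0]          # hum, prime, tierce, fifth, nominal
prime = sum(modal_pairs_hz[1]) / 2.0         # mean frequency of the prime pair

for (f1, f2), target in zip(modal_pairs_hz, targets):
    mean_f = 0.5 * (f1 + f2)
    ratio = mean_f / prime                   # internal tuning ratio relative to the prime
    warble_hz = abs(f2 - f1)                 # beat rate heard as warble
    print(f"partial at {mean_f:7.1f} Hz  ratio {ratio:5.3f} "
          f"(target {target:4.2f})  warble {warble_hz:4.2f} Hz")
```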
Table 1 shows, on the left, a general view of the modeshapes of each modal pair (X, Y view) and, on the right, the relative orthogonal position of the modal pairs, represented in the azimuthal plane relative to the bell mouth (X, Z view). Dark blue represents the parts where the bell movement is null and the gradually warmer colours correspond to areas where the displacement is larger, with red representing the maximum displacement. The computed modal frequencies are also presented in the table. The differences between modal pairs highlight the effects of the symmetry imperfections of the bell geometry captured by the scanner. As previously mentioned, this translates into beats in the bell sound at a rate equal to the frequency difference between modal pairs. In this way, a quantification of the warble can be obtained, which is a relevant indicator of the musical quality of bells [16]. The internal tuning of an individual bell can also be analyzed from the modal frequencies. One may notice that the frequency ratios are far from the typical targets of 0.5, 1, 1.2, 1.5 and 2, in particular for the three highest modes. This can be explained by the fact that, for smaller bells, these high-frequency modes die away very quickly once the bell is struck by the clapper, which makes them very difficult to tune. On the other hand, the higher partials are situated in a frequency range where tuning differences are not easily perceptible by the ear and, for this reason, their tuning becomes of less importance for the bell sound. The bell modal parameters were also assessed experimentally, through the approach presented in the "Modal analysis" section. The experimentally identified modal parameters are shown in Table 1. The good agreement of both approaches, with relative differences between modal frequencies of less than 0.5%, validates the FEM methodology and gives us confidence in the effectiveness of the developed techniques. Through the experimental approach it was also possible to identify the modal damping values, an important parameter, namely to account for the energy dissipation phenomena in the subsequent modelling in Stage 3. The identified modal damping values are also presented in the table. Finally, through the combination of the numerical and experimental approaches, the complete set of modal parameters required for the bell characterization and for the time-domain dynamical simulations in Stage 3 was successfully assessed. Methodology The proposed sound synthesis approach is divided into two main steps. First, the physical modelling of a bell impacted by a clapper is presented. This includes the formulation of the bell dynamics, the clapper dynamics and the non-linear interaction between bell and clapper. In a second step, the time-domain simulations of the bell responses to a given clapper excitation are computed. A detailed description of this approach is presented below and several results stemming from parametric computations are illustrated. Bell dynamics We now turn to the bell dynamics formulation. After a modal discretization of the bell dynamics in terms of its modal properties, its physical motion y(t) at a generic position r = (r, θ, z) can be described as y(r, t) = Σ_{n=1..N} ϕ_n(r) q_n(t) (5), where ϕ_n(r) are the modeshapes and q_n(t) the respective modal amplitudes in every direction of motion.
The partial differential equation of the bell motion is then replaced by a set of N ordinary second-order modal equations, which govern the time-dependent modal amplitudes, written in matrix form as M q̈(t) + C q̇(t) + K q(t) = F(t) (6), where M, C and K are the diagonal matrices of the modal masses, modal damping values and modal stiffnesses. The modal forces F(t) are obtained by projecting the external forces f(t) on the modeshapes ϕ_n of the modal basis, as F_n(t) = ∫ ϕ_n(r) · f(r, t) dr (7). This modal approach is particularly suitable for the objective of the work, as it provides a physical model with a reduced number of degrees of freedom, and consequently requires less computational effort for the sound synthesis. Clapper dynamics The clapper is modelled as a rigid body of mass m_c, moving perpendicularly to the bell wall, neglecting any elastic deformation after the bell/clapper contact. Its normal (radial) motion z(t) is thus represented by the dynamic equation m_c z̈(t) = f_R(r_0, t) (8), where f_R(r_0, t) is the radial component of the applied force. Bell/clapper interaction The bell/clapper interaction for the excitation must then be formulated. The Hertz theory of elasticity [35] has shown to be particularly suitable for this purpose, describing the local interaction force f_c(t) during the contact time as f_c(t) = K_c [z(t) − y_R(r_0, t)]^b (9), where z(t) is the normal clapper motion, y_R(r_0, t) is the normal motion of the bell at the localized radial impact excitation at location r = r_0, b is a contact parameter equal to 3/2 for a sphere in contact with an elastic half-space, and K_c is a contact stiffness coefficient which depends on the geometric and elastic properties of the two solids. In the modal framework, we can write the interaction force as f_c(t) = K_c [z(t) − Σ_{n=1..N} ϕ_n^R(r_0) q_n(t)]^b (10), where ϕ_n^R(r_0) is the radial component of the modeshape ϕ_n(r_0). Despite some limitations, the non-linearity of the model (through the power parameter b) allows an adequate reproduction of the significant spectral changes observed in bell sounds when a bell is struck with different impact forces [36]. Finally, from Eqs. (5)-(7), (9) and (10), the modal and physical responses of a bell to a given clapper excitation can be computed. In a last step, the time-domain responses of the bell to a given clapper excitation are computed and sound files are generated. Time-domain dynamical simulations The system motion is computed by time-step integration of the modal equations presented in the "Sound synthesis" section. For that purpose we use the velocity-Verlet scheme [37], which has proven to provide accurate enough results for vibro-impacting systems [29,38]. Assuming initial conditions for the positions, velocities and accelerations of the clapper and bell at a given time t_n, this explicit algorithm yields these quantities at the following time step t_{n+1}. Illustrative results We now present the results of several parametric computations to explore the potentialities of the proposed approach. Simulations on two historical Witlockx bells from the Mafra National Palace are used to illustrate and discuss different musical effects. Witlockx small bell Concerning the eighteenth-century bell from the Witlockx carillon of the MNP, described in the "Illustrative results" section above, for comparative purposes we present below the bell sound recorded with a microphone and the corresponding synthesised sound stemming from the proposed sound synthesis approach. The modal parameters used for the sound synthesis were assessed by combining both the numerical and experimental approaches, as described in the "Modal analysis" section.
For the recorded sound, the bell was struck at the soundbow (0.03 m height from the rim) and the microphone was placed at a 1 m distance from the rim. Regarding the simulations, both the geometrical and material properties of the bell and clapper were taken into account, as well as the excitation conditions, assuming a clapper with mass m_c = 0.3 kg and ball radius r_c = 0.015 m, and an impact velocity of v_0 = 0.1 m/s at a height h_c = 0.03 m from the rim. Numerically, one major difficulty arises from the short duration of the contact time, usually in the range 0.2-0.5 ms, which also causes the excitation of a wide frequency range. After convergence tests, a time step in the order of 4 × 10^-6 s and a very extensive modal basis, in the order of 110 modes, were used in this work to obtain realistic simulations. Figures 8a and b show the spectrograms of the measured and synthesised sounds, respectively. On the spectrograms it is clear that a large number of partials are excited when the sound starts, dying away at different rates. The highest frequencies decay very quickly and the lower ones slowly emerge from the bright initial sound until only the hum-note remains. Apart from the noise, which is absent in the simulations for obvious reasons, a good agreement can be verified between the two spectrograms. A close look at the higher frequencies reveals some minor differences, namely in the decay rates and the partial amplitudes. A possible reason for this is the fact that the modal damping values were only identified for a limited number of modes, with extrapolated values used for the non-identified higher modes. Besides, in practice, it is also very difficult to assess the precise impact location, which leads to some differences. Additional file 1 and Additional file 2 correspond to the measured and synthesised sounds, respectively. The results illustrate the level of realism achievable by the sound synthesis, thus validating our proposed approach. Figure 8c shows the simulated bell/clapper interaction force and the corresponding bell and clapper motions. At the top of the figure we can see the simulated impact force assuming an initial velocity v_0 = 0.1 m/s at height h_c = 0.03 m. A short contact time of around 3 ms agrees with typical real impacts, as observed experimentally. Also, note that the impact force is symmetrical, as it should be. In the middle, in blue, we can see the displacement of the clapper, penetrating the bell wall at first, until it starts to move in the opposite direction at half the contact time, finally distancing itself from the bell at the end of the contact time. At the bottom, in black, the bell displacement is represented. After the initial contact, the bell wall is pushed by the clapper, both moving with the same orientation, until the bell wall starts to move back towards its equilibrium position, initiating a vibratory movement. Overall, these results show that the quasi-static Hertz contact model is well suited for modelling the dynamic bell/clapper interaction. In Fig. 8d, two parametric computations simulating the response of the bell as a function of the impact velocity are presented. Due to the non-linear behaviour of the bell/clapper interaction, a large impact velocity translates into a short contact time, thus being more effective in exciting higher frequencies, ultimately resulting in a brighter sound. For comparative purposes, the acceleration signals were normalized relative to their energy contents.
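As an illustration of the kind of time-stepping loop described in the methodology above, the sketch below integrates a handful of damped modal oscillators coupled to a rigid clapper through a Hertz-type contact law, using a velocity-Verlet-type scheme (damping evaluated with a predicted velocity). All modal data, clapper parameters and the contact stiffness are hypothetical placeholders; the actual simulations use the full identified/computed modal basis (around 110 modes) and a much longer duration.

```python
# Sketch: time-domain simulation of a bell (few modal oscillators) struck by a
# rigid clapper, with a Hertz-type contact law and a velocity-Verlet-type scheme.
import numpy as np

# Hypothetical modal data (frequencies, damping ratios, modal masses and radial
# modeshape values at the impact point).
f_n    = np.array([392.0, 782.0, 948.0, 1179.0, 1570.0])   # [Hz]
zeta_n = np.array([4e-4, 3e-4, 3e-4, 2e-4, 2e-4])          # modal damping ratios
m_n    = np.array([1.2, 0.9, 0.8, 0.7, 0.6])               # modal masses [kg]
phi_R  = np.array([0.9, 0.7, 0.5, 0.4, 0.3])               # radial modeshape at impact point

w_n = 2 * np.pi * f_n
k_n = m_n * w_n**2                    # modal stiffnesses
c_n = 2 * m_n * w_n * zeta_n          # modal damping coefficients

m_c, K_c, b = 0.3, 1e10, 1.5          # clapper mass [kg], contact stiffness, Hertz exponent (assumed)
dt, T = 4e-6, 0.2                     # time step [s] and (short) simulated duration [s]
steps = int(T / dt)

q, qd = np.zeros_like(f_n), np.zeros_like(f_n)   # modal displacements / velocities
z, zd = 0.0, 0.1                                 # clapper just touching the wall, moving inward at 0.1 m/s
out = np.empty(steps)                            # bell radial displacement history at the impact point

def contact_force(z, q):
    # Hertz-type law: force only while the clapper penetrates the bell wall
    delta = z - phi_R @ q
    return K_c * delta**b if delta > 0.0 else 0.0

fc = contact_force(z, q)
qdd = (phi_R * fc - c_n * qd - k_n * q) / m_n
zdd = -fc / m_c

for i in range(steps):
    # positions with current accelerations
    q += qd * dt + 0.5 * qdd * dt**2
    z += zd * dt + 0.5 * zdd * dt**2
    # new force and accelerations (damping from predicted velocities)
    fc = contact_force(z, q)
    qd_pred = qd + qdd * dt
    qdd_new = (phi_R * fc - c_n * qd_pred - k_n * q) / m_n
    zdd_new = -fc / m_c
    # velocities with averaged accelerations
    qd += 0.5 * (qdd + qdd_new) * dt
    zd += 0.5 * (zdd + zdd_new) * dt
    qdd, zdd = qdd_new, zdd_new
    out[i] = phi_R @ q        # response that would be post-processed into a sound file
```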
Witlockx large bell To illustrate the versatility of the full methodology, this subsection presents the use of the above-mentioned procedures for the case of a large Witlockx bell from the same carillon of the Mafra National Palace. The bell has a diameter of 0.89 m at its mouth, a height h = 0.69 m to its shoulder, and a total mass of 416 kg. In this case, as for the small bell, the experimental and numerical techniques presented in the "Modal analysis" section could be combined and the complete set of modal parameters could be assessed. It is worth noting that there was a time lapse between the experimental modal identification and the bell metrology. Given the deteriorating condition of the carillon supporting structure, by the time the scan was made the bell was already supported and surrounded by scaffolding for safety reasons. Together with the hammers and cables of the automatic playing system, this left very little room to work. Thus, extra scanning and post-processing work was required in this case, as all these features and nearby structures needed to be removed from the scans. In Fig. 9a we can see the bell scan result. Aspects such as text and ornamentation highlight the level of detail achievable with this metrology technique, even in such difficult conditions. In Fig. 9b the effect of the impact location on the bell vibratory responses is illustrated. It is important to notice that when a clapper strikes the bell in the vicinity of a node of a normal mode, the corresponding partial is significantly suppressed in the sound of the bell. From left to right, five similar impacts were simulated at the respective heights h_c = 0.2 h, 0.4 h, 0.6 h, 0.8 h and 0.95 h, with h = 0.69 m being the bell height. Clear changes in the spectral responses are seen as the impact location varies from the soundbow to the bell head. The spectrograms show a clear variation in the amplitudes of the several modes depending on the striking point. This illustrates that the level of excitation of the various partials can be selected by striking the bell at different locations. Depending on the clapper excitation point, its energy is distributed differently among the bell modes, thus leading to sounds with different energy levels along the frequency range. As the excitation moves towards the bell head, higher-order modes generally become more excited whilst the amplitudes of the lower modes decrease, leading to a gradual change of the timbral characteristics. This is illustrated in Additional file 3, where five sounds can be heard. We start by hearing a well-balanced sound (strike at the soundbow) which progressively becomes brighter as the impact location rises vertically. Another important aspect for the bell sound is the choice of the clapper material. Figure 9c shows the simulated spectrograms of the bell responses varying only the contact stiffness parameter (K_c = 10^10 N/m and K_c = 10^11 N/m, respectively). The spectrogram on the right shows that with a harder clapper the higher frequencies are excited more efficiently, resulting in a brighter sound. A softer clapper (spectrogram on the left) acts as a low-pass filter, resulting in a "sweeter" sound. Finally, in Fig. 9d, the computed spectra of the bell accelerations for different clapper masses are presented. On the left, the simulated bell responses to a lighter (in red) and a heavier (in blue) clapper, with 1.38 kg and 13.8 kg respectively, are superimposed.
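The dependence of the partial excitation on the strike location can be illustrated with a toy calculation; the sinusoidal modeshape profile used here is a crude stand-in (the actual radial modeshape values would come from the finite-element model or the experimental identification), so only the qualitative trend is meaningful.

```python
import numpy as np

H = 0.69                                    # bell height [m]
heights = np.array([0.2, 0.4, 0.6, 0.8, 0.95]) * H
n_modes = 6

def phi_of_height(h, n):
    """Crude stand-in for the radial modeshape value of mode n at height h."""
    return np.sin((n + 1) * np.pi * h / H)

for h in heights:
    phi = np.array([phi_of_height(h, n) for n in range(n_modes)])
    # For a short impulse, the energy fed into mode n scales with phi_n(r0)^2,
    # so a strike near a node of mode n (phi_n ~ 0) suppresses that partial.
    weights = phi**2 / np.sum(phi**2)
    print(f"h_c = {h:.2f} m -> relative modal excitation:", np.round(weights, 3))
```

Returning to the clapper-mass comparison of Fig. 9d: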
Spectra of the bell velocity at the contact location (normalized to their energy) are presented to emphasize the difference in their frequency contents. As we can see, lighter clappers are able to excite a larger number of modes, whilst heavier clappers are more effective at exciting the lower ones. This can be explained by the increase in contact time with larger mass, which restricts the energy transfer to the higher modes. The sound simulations considering a large (13.8 kg) and a small (1.38 kg) clapper corresponding to Fig. 9d are illustrated in Additional file 4 and Additional file 5, respectively. Conclusions In this paper we present a general methodology for the dynamical characterization and sound synthesis of bells, comprising three main stages. In the first stage, 3D imaging techniques are used to assess the geometry. In the second stage, numerical and experimental modal analysis techniques are combined to assess the significant modal parameters of bells. Finally, based on the computed modal parameters, the approach includes a physics-based synthesis technique capable of simulating the time-domain dynamical responses of a bell to an excitation. For each stage, the effectiveness and usefulness of each technique are illustrated through the results obtained for three historical bells. One of the strengths of this methodology is the use of leading-edge imaging technologies to overcome the challenging issue of obtaining the full 3D geometry of a bell. Given that many musically important aspects are strongly related to fine details of the bell geometry, this information is essential for a finite-element model and leads to a notable improvement in the modal computation results. In addition, the combination of finite-element analysis with experimental modal identification enables the full assessment of the significant modal parameters of bells. It is worth noting that, given the high accuracy of the developed techniques, significant information on the bell musical qualities can be obtained, namely the bell tuning, the effects of asymmetry and ornamentation on the warble, and the decay times of the partials. Finally, the developed sound synthesis approach makes it possible to perform realistic and comprehensive simulations of the bell vibrational responses. This includes the formulation of the bell dynamics, the clapper dynamics, and the bell/clapper interaction forces. Despite the satisfactory results, some limitations can still be pointed out, namely in the sound synthesis approach. The local contact model used remains a crude representation of the real interaction dynamics. However, it proved to be a plausible, pragmatic approach for representing the bell/clapper interaction. Another aspect is that the sound synthesis approach used still lacks the delicate aspect of the sound
11,285.4
2021-12-01T00:00:00.000
[ "Physics" ]
Hall and Ion Slip Effects on Mixed Convection Flow of Eyring-Powell Nanofluid over a Stretching Surface The purpose of this research is to inspect the mixed convection flow of Eyring-Powell nanofluid over a linearly stretching sheet through a porous medium with Cattaneo–Christov heat and mass flux model in the presence of Hall and ion slip, permeability, and Joule heating effects. Proper similarity transforms yield coupled nonlinear differential systems, which are solved using the spectral relaxation method (SRM). The story audits show that the present research problem has not been studied until this point. Efficiency of numerous parameters on velocity, temperature, and concentration curves is exposed graphically. Likewise, the numerical values of skin friction coefficients, local Nusselt, and Sherwood numbers are computed and tabulated for some physical parameters. It is manifested that fluid velocities, skin friction coefficients, local Nusselt, and Sherwood numbers promote with the larger values of Eyring-Powell fluid parameter ε. It is also noticed that primary velocity promotes with larger values of mixed convection parameter λ, Hall parameter βe, and ion slip parameter βi, while the opposite condition is observed for secondary velocity, temperature, and concentration. Furthermore, comparative surveys between the previously distributed writing and the current information are made for explicit cases, which are examined to be in a marvelous understanding. Introduction Mixed convection streams emerge in numerous transport processes both in nature and in engineering applications. They play a vital role, for instance, in air limit layer streams, heat exchangers, atomic reactors, solar collectors, and in electronic hardware. Such procedures happen when the impacts of buoyancy forces in constrained convection or the impacts of the constrained stream in natural convection become noteworthy. The interaction of natural and constrained convection is particularly articulated in circumstances where the constrained stream velocity is low as well as the temperature contrasts are huge. This stream is likewise an important kind of stream showing up in numerous mechanical procedures, for example, assembling and extraction of polymer and elastic sheets, paper creation, wire drawing and glass-fiber creation, dissolve turning, and consistent throwing. Moreover, mixed convection in permeable media has numerous applications, for example, food handling and storage, metallurgy, geophysical framework, fibrous insulation, and underground removal of atomic waste. Then again, thermal conductivity of the ordinary heat transport liquids, for instance, water, oil, and ethylene glycol are exceptionally low. Thus, expanding thermal conductivity of the traditional liquids prompts improves the heat transport of these liquids. As of late, nanofluids are presented with upgraded thermal conductivity. A nanofluid is a suspension of nanoparticles with normal sizes beneath 100 nm in the base liquids. Due to the upgraded thermal conductivity, nanofluids are proposed for some mechanical applications, for example, transportation, atomic reactors, and nourishment. In this manner, numerous ongoing analysts have generous enthusiasm for the mixed convection stream of nanofluid over an extending sheet in a permeable medium due to their impressive use in the modern and innovative applications. Accordingly, Ameen et al. 
[1] investigated a 3D turning stream of carbon nanotubes (CNTs) over a permeable stretchable sheet for warmth and mass exchange with thought of kerosene oil as a base fluid and announced that both velocity fields delineate expanding conduct for enormous value of the CNT nanoparticle. Besides, Rasool et al. [2] examined a numerical investigation of the MHD Williamson nanofluid flow maintained to flow through permeable medium bounded by a nonlinearly stretching flat surface and reported that an enhancement in the values of magnetic parameter augments the drag force, whereas a decrement in the drag force can be observed as the values of Weissenberg number enhances. Also, Ibrahim and Anbessa [3] examined the mixed convection stream of a Maxwell nanofluid over a growing surface in a penetrable medium utilizing the spectral relaxation method (SRM) and found that a climb in Deborah number decreases both the stream and transverse velocity profiles, while the backwards design is seen with enlargement in the mixed convection parameter. Furthermore, Waini et al. [4] examined the consistent mixed convection stream along a vertical surface implanted in a permeable medium with hybrid nanoparticles utilizing the bvp4c solver in Matlab programming. They considered both assisting and opposing streams and found that there exist dual solutions for the case of opposing stream. Further, numerous other ongoing works have been done around there as found in references [4][5][6][7][8][9][10][11][12]. Investigation of nonNewtonian liquid has incredible significance because of its numerous industrial and engineering applications. Specifically, these liquids are used in the material preparing, concoction and atomic ventures, bioengineering, oil repository designing, polymeric fluids, and groceries. A few liquids like paints, paper mash, shampoos, ketchup, fruit purée, slurries, certain oils, and polymer arrangements are instances of nonNewtonian liquids. NonNewtonian liquids are logically perplexed when compared with Newtonian liquids due to nonlinear association among the stress and strain rate. Various models have been proposed in the writing for the examination of nonNewtonian fluids; however, not a sole model is built up that shows all properties of nonNewtonian liquids. In general, nonNewtonian fluids are mainly characterized into three kinds: to be explicit differential, rate, and integral kinds. Among the nonNewtonian liquid models, the Eyring-Powell model has accomplished exceptional consideration of the analysts because of its distinctive attributes in current science. This could be derived from kinetic theory of liquids rather than from its empirical relation. Likewise, the Eyring-Powell model decreases to Newtonian liquid qualities for low and high shear rate. Warmth and mass exchange in the Eyring-Powell liquid model assumes a significant role in the procedures which include formation and spread of haze, plotting of concoction handling instrumentation, environmental contamination, drying of permeable slides, raised oil recuperation, warm protection, and underground energy transport. Khan et al. 
[13] explored blended convection stream of the Eyring-Powell liquid model with variable viscosity and convective limit conditions over a slanted extending sheet utilizing the homotopy analysis method (HAM) and revealed that velocity profile increments by expanding liquid parameter M and Grashof numbers Gm and Gr, while velocity profile diminishes by expanding variable viscosity parameter A, liquid parameter k, and magnetic field parameter Ha. Moreover, Khan et al. [14] examined warm disper-sion and dissemination thermo impacts on the wobbly flow of electrically conducting Eyring-Powell liquid over an oscillatory extending sheet with convective limit conditions utilizing HAM and found that the bigger values of Eyring-Powell liquid parameter improve the amplitude of velocity and limit the layer thickness. However, inverse impacts are seen in temperature and concentration profiles. Furthermore, Ishaq et al. [15] explored two dimensional nanofluid film stream of Eyring-Powell liquid with variable warmth transmission in the presence of MHD on a shaky permeable extending sheet and announced that porosity parameter diminishes the movement of the fluid movies, and enlarging the nanoparticle concentration effectively expands the rubbing characteristic of Eyring-Powell nanofluid. In recent times, various researches have been made in the field of Eyring-Powell nanofluids that can be found in references [16][17][18][19][20][21][22][23]. Heat transport in the presence of strong magnetic field is significant in different parts of MHD power generation, nanotechnological processing, nuclear energy systems exploiting fluid metals, and blood stream control. Further, Hall and ion slip impacts become noteworthy in strong magnetic fields and can impressively influence the current density in hydromagnetic heat transport. Most recently, Krishna and Chamkha [24] explored Hall and ion slip impacts on the MHD free convective pivoting stream of nanofluids in a permeable medium past a moving vertical semiboundless level plate and announced the velocity increments with Hall and ion slip parameters. Furthermore, Rani et al. [25] explored Hall and ion slip impacts on the MHD natural convective pivoting stream of Ag-water-based nanofluid past a semiboundless penetrable moving plate with consistent warmth source utilizing the perturbation method. They found that in the limit layer area, liquid velocity diminishes with the expanding values of rotation parameter, magnetic field parameter, and suction parameter, while it increments with the expanding values of Hall and ion slip parameters. Further examinations identified with the MHD stream issues with Hall and ion slip impacts can be found in references [26][27][28][29][30][31]. The temperature differences between two distinct bodies cause heat transport device. It assumes a significant task in the creation of energy, cooling of atomic reactors, and biomedical applications such as medication targeting and heat conduction in tissues. Fourier [32] was the spearheading individual who at the outset portrayed the marvels of heat transport, which is the parabolic energy equation for temperature field and has disadvantage that initial interruption is felt right away all through the entire medium. Later on, so as to control this limitation, Cattaneo [33] improved the Fourier law of heat conduction by including the thermal relaxation term which causes heat transportation as thermal waves with limited speed. 
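For reference, the two heat-flux laws contrasted here are usually written as follows; this is the standard textbook form with thermal relaxation time λ_E, not an equation reproduced from the present paper.

```latex
% Fourier's law (parabolic energy equation, instantaneous response):
\mathbf{q} = -k\,\nabla T
% Maxwell--Cattaneo law with thermal relaxation time \lambda_E (heat propagates as waves of finite speed):
\mathbf{q} + \lambda_E\,\frac{\partial \mathbf{q}}{\partial t} = -k\,\nabla T
```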
Furthermore, Christov [34] utilized Oldroyd's upper convected derivative instead of time derivative so as to achieve the material-invariant detailing. This new model is named as the Cattaneo-Christov heat flux model. After the spearheading work of Christov [34], a number of ongoing investigations in this setting are given in the references [35][36][37][38][39][40][41][42][43][44][45]. The aforesaid writing surveys affirmed that no endeavor has been made at this point to investigate the MHD mixed 2 Advances in Mathematical Physics convection flow of Eyring-Powell nanofluid past a vertical linearly extending sheet through a porous medium in the presence of the Cattaneo-Christov heat and mass flux model with the consolidated effects of Hall and ion slip, porosity, and Joule heating. With this perspective, the current correspondence aims to fill this gap inside the current writing. Consequently, the fundamental object of the present work is to examine the impact of Hall and ion slip on the mixed convection stream of Eyring-Powell nanofluid past a vertical linearly extending sheet through a penetrable medium in the presence of Joule heating with the Cattaneo-Christov heat and mass transition model. Numerical clarifications were accomplished employing the spectral relaxation method [3,[46][47][48]. The effects of embedding parameters on the velocity, temperature, and concentration fields are analyzed graphically. Moreover, numerical estimations of skin friction coefficients, local Nusselt, and Sherwood numbers for some pertinent parameters are computed and tabulated. Mathematical Formulations Considering a laminar, steady, viscous, and incompressible mixed convection stream of an electrically conducting Eyring-Powell nanofluid past a vertical linearly extending sheet through a porous medium with velocity U w ðxÞ = ax (where a is a positive constant) towardx-axis, the Cattaneo-Chirstov heat and mass flux model are utilized to explore the heat and mass transfer qualities. The surface temperature T w and concentration C w are assumed consistent at the extending surface. As y extends to infinity, the encompassing values of temperature and concentration are, respectively, T ∞ and C ∞ : Besides, a strong uniform magnetic field B 0 is applied in the direction along y − axis as shown in Figure 1. Because of this strong magnetic field force, the electrically conducting liquid has the Hall and ion slip impacts which produces a crossflow in the z-direction, and consequently, the stream becomes three dimensional. The x-axis is along the course of movement of the surface, y-axis is perpendicular to the surface, and z-axis is normal to the xyplane. The induced magnetic field can be overlooked conversely with the applied magnetic field on the assumption that the stream is steady, and the magnetic Reynolds number is very little. The extra stress tensor τ ij of the Eyring-Powell liquid model is characterized as [15,21]: where μ is the dynamic viscosity, and β and b are the material fluid parameters of the Eyring-Powell fluid model such that Generalized Ohm's law with Hall and ion slip impacts is given by [20,23] where J = ðJ x , J y , J z Þ is the current density vector, Ε is the intensity vector of the electric field, V is the velocity vector, Β is the magnetic field, ω e is the cyclotron frequency, and τ e is the electrical collision time. 
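A commonly quoted form of this generalized Ohm's law, with β_e = ω_e τ_e the Hall parameter and β_i the ion-slip parameter, is sketched below; sign conventions and the electron-pressure term vary between references, so this should be read as indicative rather than as the exact equation used in the paper.

```latex
% Generalized Ohm's law with Hall and ion-slip currents (one common textbook form):
\mathbf{J}
 + \frac{\beta_e}{B_0}\,(\mathbf{J}\times\mathbf{B})
 - \frac{\beta_e\,\beta_i}{B_0^{2}}\,\big[(\mathbf{J}\times\mathbf{B})\times\mathbf{B}\big]
 = \sigma\,\big(\mathbf{E} + \mathbf{V}\times\mathbf{B}\big)
```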
The governing equations are subject to the corresponding boundary conditions [3,36]; here u, v, and w are the velocity components along the x-, y-, and z-directions, respectively, υ is the kinematic viscosity, ρ is the density of the fluid, g is the acceleration due to gravity, β_e is the Hall parameter, β_i is the ion-slip parameter, α_e = 1 + β_e β_i is a constant, B_T is the coefficient of thermal expansion, B_C is the solutal coefficient of expansion, κ is the permeability of the porous medium, T and C are the fluid temperature and concentration, respectively, γ_T and γ_C are the heat and mass flux relaxation times, respectively, C_p is the specific heat capacity, τ is the ratio of the effective heat capacity of the nanoparticles to the heat capacity of the fluid, i.e., τ = (ρC)_p/(ρC)_f, D_B is the Brownian diffusion coefficient, and D_T is the thermophoresis diffusion coefficient. The governing equations can be transformed into coupled nonlinear ordinary differential equations using the following similarity transformations [3,36], where η is the similarity variable and ψ is the stream function. Equation (4) is identically satisfied, and equations (5)-(9) become the dimensionless equations (12)-(15), with the transformed boundary conditions given in (16). The dimensionless parameters appearing in equations (12)-(15) are defined accordingly. The skin friction coefficients C_fx and C_gz, the local Nusselt number Nu_x, and the Sherwood number Sh_x are the physical quantities of interest, where τ_wx and τ_wz are the wall shear stresses in the x and z directions, respectively, q_w is the surface heat flux, and j_m is the surface mass flux. From equations (10), (11), (17), and (18), we obtain the corresponding dimensionless expressions. Method of Solution Equations (12)-(15) subject to the boundary conditions (16) are solved using the spectral relaxation method. The spectral relaxation algorithm uses the idea of the Gauss-Seidel method to decouple the system of governing equations (12)-(15). The scheme is constructed by evaluating the linear terms at the current iteration level r + 1 and the nonlinear terms at the previous iteration level r. The Chebyshev pseudospectral technique is used to solve the decoupled equations. For details of the method, interested readers may refer to [3,46-48]. Hence, to apply the SRM, we start by reducing the order of the momentum equation (12) from third to second order by introducing the change of variable f′ = h, so that f″ = h′ and f‴ = h″. Accordingly, equations (12)-(15) become equations (20)-(25), together with the corresponding transformed boundary conditions. Applying the SRM to equations (20)-(25), we obtain the iteration scheme and its transformed boundary conditions. Applying the Chebyshev spectral collocation method to equations (26)-(30), we obtain the resulting matrix equations, where A_2 = D and B_2 = h_{r+1}. Here, D = 2D/η_∞, where D is the Chebyshev differentiation matrix (Trefethen [50]), η_∞ is a finite length chosen large enough that the boundary condition at infinity can be imposed there, and I and diag[.]
are the identity and diagonal matrices of order ðN + 1Þ × ðN + 1Þ, N is the Results and Discussions The nonlinear differential equations (12)-(15) with the limit conditions (16) have been explained numerically via the spectral relaxation method, and consequences of the numerical calculations for the velocity, temperature, and concentration profiles as well as skin friction coefficients, local Nusselt number, and Sherwood number have been obtained for different input parameters and presented through graphs and tables. Also, we contrasted our outcomes and those of the current writing as appeared in Table 1, and the outcomes give 9 Advances in Mathematical Physics viscosity can be varied as the Eyring-Powell fluid parameter varies resulting in an increasing viscous force as a result decreasing its velocity up to some maximum value, and then the velocity starts to increase due to decrease in viscosity. The optimum value is obtained around the point ≈1. Figures 6-9 divulge the impact of magnetic field parameter M on primary velocity, secondary velocity, temperature, and concentration profiles, respectively. The transverse magnetite field which is applied vertical to the direction of flow provides a resistive force known as the Lorentz force. Physically, the Lorentz force opposes the flow of nanofluid; hence, the velocity diminishes. Moreover, this force tends to hinder the movement of the liquid in the boundary layer. This assists with declining the primary velocity profile and upgrades the secondary velocity, thermal, and concentration profiles as M increments as appeared in Figures 6-9. It is additionally seen that an augmentation in M prompts a lower in primary velocity boundary layer thickness and superior in thermal and concentration boundary layer thicknesses, though the secondary velocity boundary layer thickness diminishes near the surface and upsurges far off from the surface. Figures 10-13 illustrate the impact of the Hall parameter β e on primary velocity, secondary velocity, temperature, and concentration profiles, respectively. It is seen that the primary velocity profile increments with an expansion in β e , though the contrary condition is watched for secondary velocity, thermal, and concentration profiles. Besides, as the Hall parameter β e enlarges, the primary velocity boundary layer thickness upgrades, and thermal and concentration boundary layer thicknesses lessen, while the secondary velocity boundary layer thickness improves near the surface and diminishes distant from the surface. Physically, the consideration of Hall parameter decreases the efficient conductivity and subsequently drops the magnetic resistive force. Thus, the Joule heating impact is condensed, and quantity of heat 10 Advances in Mathematical Physics owing to ohmic dissipation can be abridged by utilizing partially ionized fluid presented to magnetic field. The comparable conduct is seen with expanding ion slip parameter β i as revealed in Figures 14-17. Physically, the expansion in β i improves the efficient conductivity and therefore diminishes the damping force on the dimensionless velocity, and hence the dimensionless velocity expands. Figures 18-21 portray the impact of blended convection parameter λ on primary velocity, secondary velocity, temperature, and concentration profiles, respectively. It is deduced that as λ upgrades, the primary velocity and secondary velocity profiles ascend. In contrast, the temperature and concentration profiles reduce. 
These outcomes physically hold since upgrade in blended convection parameter causes upsurge in buoyancy forces which speed up fluid motion. Figures 22-25 outline the impact of penetrability parameter K on the primary velocity, secondary velocity, thermal, and concentration distributions, respectively. It is obvious from the figures that as K augments, both primary and secondary velocity profiles decline while the opposite condition is watched for thermal and concentration profiles. Physically, the porousness builds obstruction of the permeable medium which will in general, diminish the fluid velocity. It is normal that an expansion in the porousness of the permeable medium prompts the ascent in the flow of fluid through it. When the outlets of the permeable medium become big, the obstruction of the medium might be ignored. Figure 26 proves the impact of the thermal relaxation parameter γ 1 on temperature distribution. It uncovers that the fluid temperature and energy boundary layer decrease with expanding value of γ 1 . Physically, thermal relaxation time is the time required by the fluid particles to move heat energy to its neighboring particles. Subsequently, as γ 1 ascends, the material particles need additional time to transfer heat to its adjoining particles, and this prompts less Figure 27. Figure 28 depicts the impact of Eckert number Ec on the temperature profile. It is seen that the temperature profile upsurges with an expansion in Ec. Physically, by expanding the Eckert number, the heat energy is hoarded in the fluid as a result of the frictional or drag forces. Accordingly, the fluid temperature field enhances. Figure 29 demonstrates the effect of Lewis number Le on the concentration profile ϕðηÞ. By expanding Le, the concentration and the boundary layer thicknesses decline. This is because of the way that the mass transfer rate enhances by expanding Le which improves the volume fraction of nanoparticles. Table 2 specifies the impact of physical parameters on the skin fraction coefficients, local Nusselt, and Sherwood numbers. The primary skin friction coefficient − ffiffiffiffiffi Re p C f x augments as magnetic parameter M , Eyring-Powell fluid parameter ε, thermal relaxation parameter γ 1 , concentration relaxation parameter γ 2 , and permeability parameter K rise and decreases for increasing values of mixed convection parameter λ, Hall parameter β e , ion slip parameter β i , and Eckert number Ec. Also, it is observed that the secondary skin friction coefficient ffiffiffiffiffi Re p C gz amplifies as magnetic parameter M, mixed convection parameter λ, and Eyring-Powell fluid parameter ε increase and declines as Hall parameter β e , ion slip parameter β i , thermal relaxation parameter γ 1 , concentration relaxation parameter γ 2 , and permeability parameter K increase, while it remains constant as the Eckert number Ec enhances. Similarly, both the local Nusselt number -θ ′ ð0Þ and Sherwood number -ϕ ′ ð0Þ enhance for larger values of mixed convection parameter λ, Hall parameter β e , ion slip parameter β i , and Eyring-Powell fluid parameter ε and decrease for larger values of magnetic parameter M and permeability parameter K. 
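To make the spectral relaxation/Chebyshev collocation machinery referenced in the "Method of Solution" section concrete, a minimal sketch on a stand-in model problem (Crane's stretching-sheet flow, which has the exact solution f′ = e^(−η)) is given below; it is not the authors' code, and the coupled system (20)-(30) of the paper is replaced here by a single momentum equation.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto points on [-1, 1]
    (Trefethen, Spectral Methods in MATLAB)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Model problem:  f''' + f f'' - (f')^2 = 0,  f(0)=0, f'(0)=1, f'(inf)=0  (exact f' = exp(-eta)).
# With h = f', the SRM lags the nonlinear coefficients at the previous iterate:
#   h_{r+1}'' + f_r h_{r+1}' - h_r h_{r+1} = 0, then f_{r+1}' = h_{r+1} with f_{r+1}(0) = 0.
N, eta_inf = 80, 15.0
D, x = cheb(N)
D = 2.0 * D / eta_inf                       # rescale the derivative to the domain [0, eta_inf]
eta = eta_inf * (x + 1.0) / 2.0             # eta[0] = eta_inf (far field), eta[-1] = 0 (wall)
h = np.exp(-0.5 * eta)                      # rough initial guess satisfying the boundary conditions
f = np.zeros_like(h)

for r in range(60):
    A = D @ D + np.diag(f) @ D - np.diag(h)          # linear terms at r+1, nonlinear terms lagged at r
    b = np.zeros(N + 1)
    A[0, :] = 0.0;  A[0, 0] = 1.0;  b[0] = 0.0       # h(eta_inf) = 0
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 1.0     # h(0) = 1
    h = np.linalg.solve(A, b)
    Af, bf = D.copy(), h.copy()
    Af[-1, :] = 0.0; Af[-1, -1] = 1.0; bf[-1] = 0.0  # integrate f' = h with f(0) = 0
    f = np.linalg.solve(Af, bf)

print("max |h - exp(-eta)| =", np.max(np.abs(h - np.exp(-eta))))
```

Returning to the parametric trends summarized in Table 2: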
Further, opposite behaviors of the local Nusselt number -θ′(0) and the local Sherwood number -ϕ′(0) are observed for increasing values of the thermal relaxation parameter γ_1, the concentration relaxation parameter γ_2, and the Eckert number Ec. Conclusions This paper examines the mixed convection flow of an Eyring-Powell nanofluid with the Cattaneo-Christov heat and mass flux model over a linearly stretching sheet through a permeable medium. The impacts of Hall and ion slip, permeability, and Joule heating are also considered. The governing nonlinear partial differential equations are transformed into coupled nonlinear ordinary differential equations using similarity transformations. These nonlinear ordinary differential equations subject to the boundary conditions are solved numerically by means of the SRM. Numerical results are presented for the different parameters of interest and discussed with the assistance of graphs and tables. The study has significant applications in material processing technology. From the presented investigation, the following conclusions were drawn:
5,178.6
2020-09-09T00:00:00.000
[ "Physics" ]
Solvable Quantum Grassmann Matrices We explore systems with a large number of fermionic degrees of freedom subject to non-local interactions. We study both vector and matrix-like models with quartic interactions. The exact thermal partition function is expressed in terms of an ordinary bosonic integral, which has an eigenvalue repulsion term in the matrix case. We calculate real time correlations at finite temperature and analyze the thermal phase structure. When possible, calculations are performed in both the original Hilbert space as well as the bosonic picture, and the exact map between the two is explained. At large $N$, there is a phase transition to a highly entropic high temperature phase from a low temperature low entropy phase. Thermal two-point functions decay in time in the high temperature phase. Introduction In this paper we are interested in the physics of a large number of non-locally interacting fermionic degrees of freedom. We will study quantum mechanical fermions with a vector-like index structure as well as a matrix-like index structure. Part of the motivation comes from recent investigation of similar systems [1,2,3,4,5,6,7,8] which may play a useful role in understanding certain problems in black hole physics. Another motivation comes from recent investigations on the emergence of bosonic matrix models from discrete systems [9,10,11,12,13]. Moreover, we are interested in how the original fermionic degrees of freedom and Hilbert space might be encoded (if at all) in those of the bosonic matrix. 1 Our goal is to study tractable systems with rich interactions. Unlike the purely fermionic models of [2], our models do not have any quenched disorder (see also [14,15,16,17]). Instead, their complexity stems from the matrix-like interactions. Given a matrix structure, we might hope, eventually, to relate our systems to some gravitational description for a new class of models. Here, we take some small steps in this direction by carefully analyzing and solving such systems with quartic interactions. Our approach is to express the systems in terms of auxiliary bosonic variables allowing us to use standard large N techniques. When possible, we carry forward the analysis in both the fermionic Hilbert space picture as well as the bosonic path integral picture and map physical questions from one picture to another. The bosonic systems exhibit a (0 + 1)-dimensional emergent gauge field and the corresponding (broken) gauge symmetry is crucial in our ability to solve the systems. Part of our treatment is in some regard a (0 + 1)-dimensional analogue of the analysis in [18,19]. Fermionic correlation functions are related to the calculation of certain Wilson line operators of the gauge field. At the end, for both matrix and vector models, we are able to express the exact finite N thermal partition function as an ordinary integral. In the matrix case, this integral is a matrix-like integral with a modified Vandermonde term that consists of the sinh or sin of the difference in eigenvalues rather than the difference eigenvalues themselves. Such eigenvalue integrals appear in the study of unitary random matrix models as well as Chern-Simons theories. At large N the systems exhibit a non-trivial phase structure which is naturally characterized, in the matrix case, by the connectivity of the eigenvalue distribution, somewhat similar to the situation encountered in [12]. The structure of the paper goes as follows: we begin by analyzing the vector model in the early sections. 
We show how the fermionic thermal partition function, initially expressed as a path integral over Grassmann valued functions of Euclidean time, can be expressed in terms of an ordinary bosonic integral over a single real variable. We analyze both the thermal phase structure and fermionic correlation functions. The latter are expressed in terms of the expectation values of non-local in time variables, resembling Wilson line operators of an emergent gauge field. We show, at large N , a transition from a low entropy phase to one with entropy extensive in N . These results are generalized to the matrix case in the latter sections, where the structure is significantly more intricate. The thermal partition function now reduces to a matrix integral, which as mentioned, contains a modified Vandermonde interaction among the eigenvalues. We end discussing the thermal phase structure and thermal correlation functions of the matrix model at large N . At large N , we find that the real time two-point function decays significantly faster in the matrix case than the vector case. In appendices A and B we discuss certain generalizations of the models studied in the main body. Fermionic vector model In this section we consider N complex Grassmann degrees of freedom {ψ I ,ψ I } interacting via a quartic interaction. The index I = 1, 2, . . . , N is a U (N ) index, under which the ψ I transform in the fundamental and theψ I in the anti-fundamental representation. The thermal partition function of our model is given by: We have a periodic Euclidean time coordinate τ ∼ τ + β, such that our integration variables obey ψ I (τ + β) = −ψ I (τ ). The fermion variables are dimensionless and the parameter γ is a positive number with units of τ . (In appendix B we carry over our results for the vector model to the case with negative γ.) Using a Hubbard-Stratanovich transformation the Euclidean action can be written as with λ(τ ) a periodic function of τ with units τ −1 . The only dimensionless quantity, other than N , is γ/β. It grows with increasing temperature. We set β = 1 unless otherwise stated. Exact evaluation of path integral In this subsection we evaluate the fermionic path integral exactly. Going back to (2.2), evaluation of the fermionic path integral yields: The functional determinant has a local invariance [11]: This can be viewed as a time-reparameterization of an einbein λ(τ ), and it is an exact symmetry of a system of non-interacting fermions. Due to the above invariance, the functional determinant will only depend on the Matsubara zero-mode of λ(τ ), namely where the Green function G n,m contains the diagonal component of the λ n,m : andλ has Toeplitz form:λ We can expand the logarithm on the right hand side of (2.5) in a matrix Taylor expansion: (2.8) The zeroth order piece, using the standard Euler infinite product formula, can be written as 2 The first non-trivial contribution inλ to (2.8) is quadratic and involves n∈Z 1 (2πi(n + 1/2) + λ 0 ) 10) The vanishing of the above expression implies that no derivatives of λ(τ ) are generated to leading order. It is not hard to check that this behavior continues to higher orders. Thus, the path integral (2.3) in a thermal frequency basis becomes Performing the integration over the non-zero modes, we arrive at: where we have fixed the normalization constant in (2.11) by demanding that in the infinite temperature limit we get the Hilbert space dimension Z = 2 N . 
We see that the partition function can be reduced to a simple integral over λ 0 (the unique quantity invariant under (2.4) that can be constructed out of λ(τ )). The integral over λ 0 can be explicitly done. Reinstating β, we find: 13) where C N n are the binomial coefficients. From Z[β] we can read off the spectrum and degeneracies: (2.14) Note that the spectrum is symmetric under n → (N − n). 2 The infinite constant is absorbed in the normalization factor N in (2.3). Hilbert space picture Can we understand the above result from the point of view of the Hilbert space? Upon quantizing the system, we impose anti-commutation relations {ψ I , ψ J } = δ IJ . If we define an empty state |0 as annihilated by all the operators ψ I , then the full set of states is given by acting with any amount ofψ I on |0 . This gives 2 N states. The Hamiltonian of the system is given by: where we have fixed the normal ordering constants such that the spectrum matches that of (2.14). The spectrum is non-positive for γ > 0. The doubly degenerate ground state has energy E g = −N/(16γ) . Since the operatorN = ψ I ψ I commutes with the Hamiltonian, we can organize the spectrum in terms of eigenstates of thê N operator, which counts the number ofψ I hitting |0 . For exampleNψ 1ψ2 |0 = 2ψ 1ψ2 |0 . Thus, states with nψ's acting on |0 have an energy E n given in (2.14) and a degeneracy given by the binomial coefficients d n = C N n . As expected, if we sum over all the degeneracies with find: n C N n = 2 N . At large N the spectrum peaks sharply about n = N/2, with d N/2 ∼ 2 N +1 / √ 2πN , consisting of states with E N/2 = 0. Thermodynamic Properties From the thermal partition function we can study various thermodynamic properties. At very low temperatures the entropy is O(1) while the energy goes as O(N ), and the free energy F = −T log Z = E −T S is dominated by the energy piece. As we increase the temperature, the energy and entropy contributions begin to compete since the number of states begins to grow exponentially in N . More precisely, at large N the degeneracies behave as d n ≈ C N N/2 e −(N −2n) 2 /(2N ) . When d n = e Sn ≈ e βEn , which at large N gives β ≈ 8γ, we expect a transition. Above this temperature, the entropy become O(N ) and dominates over the energy. The transition becomes increasingly sharp as we increase N . We show a numerical example in figure 1. Note that in the high temperature phase, the entropy of the system is extensive in the number of degrees of freedom. as a function of β. We have taken N = 500 and γ = 1. Notice the transition occurring near β = 8γ. Correlation functions of the vector model In this section we discuss the correlation functions of the model in both real and Euclidean time. We do this both in the path integral picture and the corresponding Hilbert space picture. Fermionic Hilbert space picture To compute the thermal correlator in the Fermionic Fock space picture, recall that we can organize the Hilbert space in terms of number eigenstates: |n, I n =ψ I 1 . . .ψ In |0 , m, I|n, I = δ m,n δ I,I . The real time two-point function in the thermal ensemble is explicitly given by: The (N − 1) in C N −1 n comes from the reduction in the number of states in |n, I n when hit with the operatorψ B . We note that the Green function (3.2) satisfies At low temperatures, the two-point function oscillates with a frequency given by we expect to see recurrent patterns in the two-point function for time separations of the order t ∼ N γ. 
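The thermodynamics quoted above can be checked numerically from the spectrum and degeneracies. The closed form E_n = −(N − 2n)²/(16γN) used below is our reconstruction: it matches the stated ground-state energy E_g = −N/(16γ), the symmetry under n → N − n, the vanishing energy at n = N/2, and the transition at β ≈ 8γ, but it is not written out explicitly in the text, so it should be treated as an assumption.

```python
import numpy as np
from scipy.special import gammaln

def thermo(beta, N=500, gamma=1.0):
    """Energy and entropy from Z = sum_n C(N, n) exp(-beta E_n),
    with the (reconstructed) spectrum E_n = -(N - 2n)^2 / (16 gamma N)."""
    n = np.arange(N + 1)
    E = -(N - 2 * n) ** 2 / (16.0 * gamma * N)
    logdeg = gammaln(N + 1) - gammaln(n + 1) - gammaln(N - n + 1)   # log C(N, n)
    w = logdeg - beta * E
    logZ = np.max(w) + np.log(np.sum(np.exp(w - np.max(w))))        # log-sum-exp for stability
    p = np.exp(w - logZ)                                            # Boltzmann weights
    E_avg = np.sum(p * E)
    S = logZ + beta * E_avg                                         # S = log Z + beta <E>
    return E_avg, S

for beta in (2.0, 6.0, 8.0, 10.0, 20.0):
    E_avg, S = thermo(beta)
    print(f"beta = {beta:5.1f}   E/N = {E_avg/500:+.4f}   S/N = {S/500:.4f}")
# With N = 500 and gamma = 1, the entropy per fermion drops from ~log 2 to ~0 near beta ~ 8*gamma,
# reproducing the transition visible in figure 1.
```

Returning to the recurrent oscillations of the two-point function described above: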
In figure 2 we display a plot of G β (t) exhibiting this behavior. Path integral expression The generating function for Euclidean fermion correlation functions is given by: Integrating out the fermions we get Consequently, the exact Euclidean time two-point function is given by: The differential operator: obeys the following equation: For 0 < τ, τ < β, we find: At this point, we must fix the remaining constant. This follows from the fermionic nature of the correlator under a shift in β, i.e. G(τ 1 + β, τ 2 ) = −G(τ 1 , τ 2 ). We thus have: Notice that G(τ, τ ) is now only invariant under the transformations (2.4) that leave the end points unchanged. Recalling the interpretation of λ(τ ) as an einbein, we can view G(τ, τ ) as a gravitational Wilson line. Since G(τ, τ ) can be expressed as the exponential of a local integral of λ(τ ), we are now in a position to evaluate the path integral. Upon evaluation of the functional determinant, a cancellation occurs between c + and one of the powers of the cosh in (2.12), and we can express the path integral as: Performing the Gaussian integrals pointwise, and reinstating β we find for 0 <τ < β: The normalization constant N has been fixed by imposing G AB β (0) = 1/2. This agrees precisely with (3.2) upon Wick rotating to Euclidean time t → −iτ . Large N approximation We would like to see how much of the exact structure previously uncovered is contained in a large N approximation. It is convenient to express the partition function in terms of thermal Fourier modes and reinstate β. From (2.9) and (2.11) we get: (3.13) The large N saddle point equations for λ(τ ) are: amounting to a constant solution for λ(τ ). For low temperature one finds three saddles, which to leading order in the large β limit are: λ 0 = 0 and λ 0 = ±1/4γ, the one at the origin being subdominant. As we increase the temperature and reach β = 8γ the λ 0 = 0 saddles coalesce to the origin. Recall that β = 8γ was the temperature at which the free energy exhibited a thermal transition. For large temperatures, β < 8γ, a single saddle is found corresponding to λ(τ ) = 0. Low temperatures In the Hilbert space picture, the dominant behavior in the β → ∞ limit comes from the vacuum and its first excited state. From (3.2) we get in the large N limit which shows a single oscillatory behavior in time with frequency given by the energy difference of the first excited state with respect to the vacuum. In the large N limit this is given by ∆E = 1/4γ . This behavior is recovered, in the path integral description, from the saddles at with ω p = 2π(p + 1/2)/β and p, q ∈ Z. We can expand the Green function as: To leading order in N we therefore keep the 1 in (3.18). In the small temperature limit we transform back to τ replacing p → β dω and we also have log cosh βλ 0 2 ≈ β|λ 0 | 2 . Evaluating (3.17) at the saddles we get which coincides with (3.16) when Wick rotating to real time τ → it. Notice that τ > 0 (τ < 0) implies that only the λ 0 = +1/4γ (λ 0 = −1/4γ) saddle contributes to the ω contour integral. High temperatures Consider first the Hilbert space picture. The high temperature limit of (3.2) is dominated by the n ≈ N/2 terms due to the high degeneracy of states. At large N we can and defining the variable x = −(N − (2n + 1)))/N we get: Again, the normalization constant c was fixed by demanding that G AB β (0) = 1/2. In the second line, we have kept only the leading in N terms in the exponent. 
Interestingly, the correlation functions decay to exponentially small values after a time of order t ∼ N γ(γ − β/8). This is in spite of the correlator not exhibiting an initial exponential decay of the form e −t/β , characteristic of thermal systems. As mentioned below (3.3), we expect the approximation to fail and find recurrences when all the oscillating factors near n ≈ N/2 are in phase. This happens for t ∼ 4πN γ. So, the recurrences in the vector model occur much more frequently than for a strongly coupled (chaotic) system [27], where they might be separated by exponential in N (or even super-exponential) time scales. These features suggest that the system has a flavor of integrability. The approximate result (3.20) agrees well with the exact answer (3.2) for times t 4πN γ. We now consider the path integral picture. From (3.6) we have The auxiliary parameter, a, is introduced to make the exponent local in τ . The approximation in the second line corresponds to taking the β → 0 limit. In this limit λ 0 1 and we can approximate the log cosh λ 0 /2 by a quadratic approximation. We can now perform the Gaussian integral (3.22) as was done in (3.12), getting where again we have fixed the normalization such that G AB β (0) = 1/2 at τ = 0. This coincides with (3.20) upon Wick rotating. Note that in the large N approximation used above, we have gone being the leading saddle point approximation for which λ(τ ) = 0, leading to a constant correlation function in τ . The τ -dependence in (3.22) arises from the next to leading (Gaussian) correction about the large N saddle point. As a final note, we should mention that at β = 8γ, the correlator exhibits a transition between the high temperature decaying correlator and the low temperature oscillatory one. At large N and β = 8γ, we must consider the log cosh λ 0 /2 term beyond the quadratic approximation. For instance, keeping only the quartic term we are able to approximate the correlator at β = 8γ. In real time, it takes the form: (3.23) The above agrees well with numerics at large N , as seen in figure 3. It falls to small values after t ∼ N 1/4 β, which is parametrically faster than the high temperature case. Our large N analysis is generically not sensitive to the presence of recurrences. Indeed, for large times (3.22) will receive significant corrections from the terms in (3.18) that were discarded. This requires us to go beyond the saddle point approximation. These are responsible in restoring the fine structure of the correlation function. 3 Alternative view of the low energy sector Here, we provide an alternative view to the fermion determinant independence of the non-zero modes of λ(τ ) (2.4) and describe the theory's effective dynamics at low temperature. This approach will be useful when we analyze the fermionic matrix models in what follows. At low temperatures, we can insert (3.26) into (2.3) and take λ 0 to be localized at its saddle value λ 0 = 1/4γ. We arrive at an effective low temperature theory: where D φ(τ ) runs over the space of non-constant periodic functions φ(τ ). Thus we have at low energy a degree of freedom φ(τ ), with an ordinary kinetic term. Following the discussion of [25,26], this can be viewed as a low energy hydrodynamic mode (3.28) which evaluates to: Once again, we find agreement with the zero temperature correlator of the fermionic theory: where as before the fermions are anti-periodic in Euclidean time. 
It is convenient to consider γ > 0 such that the quartic term has the opposite sign from the vector case previously studied. At finite N , our expressions will be analytic functions of γ such that we can analytically continue γ over the complex plane. As before, we set β = 1 unless otherwise specified, in the end everything will depend on the dimensionless combination γ/β. Ai ψ iBψBj ψ jA , plus normal ordering terms which will be quadratic. The models are considerably more intricate than their vector counterpart and we will analyze them entirely in terms of the corresponding bosonic path integrals. Effective action Prior to integrating out the Grassmann matrix ψ iA (τ ) we have the following Euclidean action: We wish to understand the effective action of M ij (τ ) obtained upon integrating out the ψ iA (τ ). The partition function becomes: where N is a normalization constant which we will fix shortly. Note that both the measure over M ij (τ ) and the functional determinant are invariant under the gauge symmetry: where U ij (τ ) is a τ -dependent unitary matrix. The above gauge symmetry implies that the determinant is a function of the Polyakov loop: where P denotes path ordering. However, the quadratic in M ij part of the action resembles a mass term of the gauge field and is hence not gauge invariant. Fortunately, one can take advantage of the gauge symmetry in the following sense. We can introduce a Hubbard-Stratanovich field Λ ij (τ ) such that: Note that the M ij dependent piece of the integrand is simply the generating function of correlation functions of M ij , now viewed as a U (N ) gauge field in a theory of free fermionic matrices. We can analyze this piece more carefully: Under a change of variables, one can write the Hermitean matrix M ij as: Here µ = diag(µ 1 , . . . , µ N ) is a diagonal matrix with τ -independent elements and U ij (τ ) ∈ U (N ). We can thus express (4.7) as [24]: where [DU ] is the U (N ) Haar measure. We now make the transformation Λ = UΛU † , leaving the Λ-measure invariant. The µ i integral acquires chemical potentialsλ i ≡ tr h iΛ (τ ) for each eigenvalue 4 , giving rise to a dressed partition function. All of the U dependence appears in the dτ iU † UΛ piece. This piece is independent of the constant part ofΛ ij (τ ), which we can separate out of the trΛ 2 (τ ) term. Performing theΛ ij (τ ) integral, we obtain: (4.10) The term tr UU † 2 is analogous to the dτφ(τ ) 2 in the vector case. It is decoupled from the eigenvalue integral. The normalization constant is given by: From the above expression, we can also obtain the case forγ = −γ > 0 by analytic continuation of µ i → iµ i : where now the normalization constant becomes: In this case, what was a unitary matrix U ij in (4.10) now becomes an element of the group generated by anti-Hermitean matrices. It is of interest to note that the eigenvalue repulsion is due to a modified version of the usual Vandermonde, that involves the sin or sinh of the difference in eigenvalues. This is characteristic of gauge theories at finite temperature [24]. It is useful to consider some simple examples, to ensure that the integrals defined above indeed give a positive definite spectrum. Consider the case with L = 1 and N = 2, i.e. 2 × 1 fermionic matrices. Here the Hilbert space is 4 = 2 2×1 dimensional. An explicit evaluation of the integral (4.12) gives: (4.14) For L = 2 and N = 2 we find: The states add up to 16 = 2 2×2 , which is the correct dimension of the Hilbert space. 
For L = 2 and N = 3 we find: The states add up to 64 = 2 3×2 , which is the correct dimension of the Hilbert space. Summary Thus we see that much of the structure of the vector model is carried forward to the matrix model. Instead of an ordinary integral over a single variable, we now have a partition function given by an ordinary matrix integral. Instead of a large N saddle point value for a the single variable, we will find large N eigenvalue distributions. Finally, instead of a single low energy field φ(τ ), we now have a low energy unitary matrix U ij (τ ). The spectral information about the Hilbert space of the fermionic model is subsumed entirely into the structure of the eigenvalue integral, rather than the low energy fluctuations of U ij (τ ). We now explore the thermal phase structure and correlations of the fermionic matrix model. Thermal phase structure and correlations In this section we discuss the thermal phase structure of the matrix model. We will consider the case withγ > 0. To analyze this we read from (4.10) the potential acting on the eigenvalues: As before, we fix units where β = 1 and study the partition function as a function of α ≡ L/N , which we keep fixed in the large N limit, andγ. High and low temperature limits We can develop a schematic idea of how the eigenvalue distribution should look. The minima of the potential acting on the eigenvalues depends on the temperature. As in the vector model, at low temperatures there are two minima. The eigenvalues will accumulate near these two minima, and will repel each other by the Vandermonde interactions. There will be many such distributions that are solutions. For instance, all N eigenvalues might be located in either minimum or one on the left and all (N − 1) others on the right and so on. Eigenvalues in one minimum can thermally tunnel to the other. As we increase the temperature, the double well profile is lost and instead there is a single minimum at the origin. Consequently the eigenvalues will be distributed around a single minimum at high temperatures. High temperatures In theγ → ∞ limit the Gaussian part of the potential acting on the µ i dominates. Thus, the partition function receives a large contribution from small values of µ i and we can approximate our partition function by expanding the cosh µ i /2 in (5.1) to quadratic order. We obtain the following Gaussian matrix integral: where Q is defined in (4.13). This matrix integral has appeared in studies of Chern-Simons theory on an S 3 [28,29]. 5 The eigenvalue distribution is connected (single cut) and centered around the origin. Its explicit form is given by: Low temperatures At low temperatures, i.e. in theγ → 0 limit, the eigenvalue potential develops two minima around µ i = ±1/4γ. Around these, the eigenvalue potential is approximated From the result for the eigenvalue distribution (5.3) we can read the width of the distribution. Consider the case where all eigenvalues are located around one of the minima. We have: µ ± 1 4γ ∈ −2 cosh −1 e g/2 , 2 cosh −1 e g/2 , g = 1 2αγ . (5.5) In theγ → 0 limit we can approximate cosh −1 e g/2 ≈ log 2 + g/2. Note that for α > 2/(1 − 8γ log 2), the eigenvalue distribution has compact support away from µ = 0. For α < 2/(1 − 8γ log 2) we cannot have an eigenvalue distribution with compact support away from the origin. More generally however, due to the repulsion of eigenvalues the lowest energy configuration will favor the distribution of eigenvalues evenly among the two minima. 
Whether the eigenvalue distributions are disconnected will depend on α. A small α broadens the potential, enhances the effect of repulsion and connects the two distributions. For parametrically large α the eigenvalues peak sharply about µ = ±1/4γ. In summary, the global phase structure goes as follows. At large enough temperatures there is a single cut eigenvalue distribution located near the origin. At low temperatures, the eigenvalue distribution senses two minima in the eigenvalue potential, and may be connected (for large α) or disconnected (for small α). It is interesting to note the similarity of our phase structure to the one studied recently in [12]. We will present a detailed analysis of the phase structure in future work. Matrix Correlation functions Finally, we would like to briefly discuss the correlation functions of ψ iA (τ ). Following the discussion for the vector case, we must invert the differential operator G −1 ij (τ, τ ) = (δ(τ − τ ) ∂ τ + δ(τ − τ )M ij (τ )). Using a parallel argument to the non-matrix case, we find: We see that the inverse operator is naturally expressed in terms of Wilson lines. Note At large N we calculate: with δτ ≡ (τ − τ ). Explicitly, we must calculate two pieces. One comes from the U ij sector: (5.9) The above piece is analogous to the contribution coming from the quadratic φ(τ ) action, as in (3.28). At large Lγ, The dominant part of the dynamics comes from the eigenvalue piece, which in the large N limit, we can express as an eigenvalue density integral: For high temperatures, the above integral can be computed numerically using the single cut eigenvalue distribution (5.3). For Lorentzian times we take δτ → iδt. In figure 4 we show an example G(δt). We can give an analytic approximation of G(δτ ) at high enough temperatures, or equivalentlyγ large. There, the eigenvalue distribution is close to Wigner's semi-circle law, so we can approximate (5.10) by using ρ(y) ≈ (2πt) −1 4t − y 2 . Recalling that t −1 = 2α(γ − 1/8) implies that t is a small number at high temperatures. Moreover, the range of y is O( √ t) and thus to leading order in small t we find: where I n (z) is the modified Bessel function of the first kind. This function agrees well with the numerical result. For δt α (γ − 1/8), we find it decays as G(δt) ∼ δt −3/2 . This is a significantly faster decay than the vector model at high temperatures, which only experiences a sharp decay after times of order δt ∼ 4 2N γ(γ − 1/8). A similar calculation will hold for the low temperature (sub-dominant) saddle where all eigenvalues are gathered around a single minimum and t = 1/(2αγ). Higher point functions will be given by computing expectation values of products of Wilson lines. We hope to analyze of the matrix correlators in the β = 8γ in future work. A More general potentials In this appendix, we express the partition function of a vector theory with general potential V (ψ I ψ I ) as a simple integral. One begins by introducing a δ-functional: δ(Λ(τ ) −ψ I (τ )ψ I (τ )) = Dλ(τ )e i dτ λ(τ )(Λ(τ )−ψ I (τ )ψ I (τ )) . (A.1) Consequently, we can express the partition function of the fermionic theory as: As in the main text, det (∂ τ + iλ(τ )) will only depend on the zero Fourier-mode of λ(τ ). Consequently, upon performing the path integral over the non-zero modes of λ(τ ), the term i dτ λ(τ )Λ(τ ) in the effective action will produce δ-functions for the Λ n with n = 0. Hence we will remain with an integral over λ 0 and Λ 0 . 
We can write the partition function: The bosonic path integral becomes: There is no analogue of the large N low and high temperature phases. The large N saddle point equation is: The above equation has many solutions, but the dominant saddle lives at λ 0 = 0. As we lower the temperature, we must include an increasing number of saddles to obtain a good approximation. Correlation functions of the fermions are given in the bosonic picture by expectation values of the following non-local operator: The correlation function exhibits the Gaussian decay observed for the γ > 0 model discussed in the main text, except that it now does so for all temperatures.
7,236.6
2016-12-12T00:00:00.000
[ "Physics" ]
A Distributed Conjugate Gradient Online Learning Method over Networks In a distributed online optimization problem with a convex constrained set over an undirected multiagent network, the local objective functions are convex and vary over time. Most of the existing methods used to solve this problem are based on the fastest gradient descent method. However, the convergence speed of these methods is decreased with an increase in the number of iterations. To accelerate the convergence speed of the algorithm, we present a distributed online conjugate gradient algorithm, different from a gradient method, in which the search directions are a set of vectors that are conjugated to each other and the step sizes are obtained through an accurate line search. We analyzed the convergence of the algorithm theoretically and obtained a regret bound of O ( �� T √ ) , where T is the number of iterations. Finally, numerical experiments conducted on a sensor network demonstrate the performance of the proposed algorithm. Introduction Distributed optimization has received considerable interest in science and engineering, which can be applied in numerous fields such as distributed tracking and localization [1], multiagent coordination [2], distributed estimations using sensor networks [3][4][5], and machine learning [6]. Such problems can be modeled to minimize or maximize the summation of some of the local convex functions, and these local functions only use a local computation and communication in a distributed manner. With an increase in the network size and data volume, more effective distributed algorithms have become a hot research topic. In recent years, many scholars have proposed various distributed optimization algorithms to solve such problems [7][8][9][10][11][12][13][14][15]. Most of the existing algorithms assume that the cost function at each agent is fixed. However, in practical problems, the environment of an agent is uncertain and the cost function of each agent changes over time, requiring us to solve such problems through an online setting. To be more precise, in a distributed online optimization, the cost function of each agent changes during each step, and with t iterations, before a decision is given, the cost function for each agent is unknown; only when we obtain a decision from a constrained set, we can obtain the information of the cost function. In addition, we also obtain a loss at the same time. Such loss reflects the error in the cost of the objective function between the current decision point and the best decision in hindsight, which we call regret. Regret is an important criterion in evaluating a distributed online algorithm. A well-performing distributed online optimization algorithm should decrease the average total regret approach to zero over time. Because an online distributed optimization algorithm is more consistent with that used for practical problems, many scholars have conducted numerous studies and some effective algorithms have been proposed [16][17][18][19][20][21][22][23][24]. Yan et al. [20] introduced a distributed autonomous online learning algorithm, namely, a subgradient descent method using a projection. When the local objective functions are strongly convex and convex, the regret bounds of the proposed algorithm are obtained, respectively. 
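For readers unfamiliar with the terminology, the cumulative regret being bounded in these works takes the following standard form; the notation below is ours, and the indexing conventions differ slightly from paper to paper.

```latex
% Cumulative regret of agent i over T rounds (standard form; notation assumed):
% x_i(t) is agent i's decision at round t, f_{t,j} the loss revealed to agent j,
% and the comparator is the best fixed decision in hindsight.
R_T(i) \;=\; \sum_{t=1}^{T}\sum_{j=1}^{n} f_{t,j}\bigl(x_i(t)\bigr)
        \;-\; \min_{x \in X}\,\sum_{t=1}^{T}\sum_{j=1}^{n} f_{t,j}(x),
\qquad \text{sublinear regret: } \lim_{T\to\infty}\frac{R_T(i)}{T}=0 .
```

A bound of O(√T), as obtained later for the proposed algorithm, is sublinear in this sense.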
e authors in [22] introduced an online distributed push-sum algorithm in which the search direction is a negative subgradient in each iteration, which achieves regret O((log(T)) 2 ) when the local function is strongly convex. For a time-varying directed network problem, Zhu et al. [25,26] proposed a distributed online optimization algorithm. During each iteration, the negative subgradient is randomly selected as the search direction. e authors in [27] presented a distributed online algorithm based on a primal-dual dynamic mirror descent for a problem with time-varying coupling inequality constraints and obtained a dynamic regret bound. e authors in [28] proposed a distributed online conditional gradient algorithm for a constrained distributed online optimization problem in the Internet of ings. e existing distributed online optimization algorithm based on the gradient method is simple to calculate and requires little storage; however, to ensure the convergence of the algorithm, the iterative step length usually needs to decrease with an increase in the number of iterations, which will lead to a zigzag path at the end of the algorithm. at is, the algorithm will carry out multiple iterations in the same direction or approximate direction, which greatly increases the computational time of the algorithm. e conjugate gradient algorithm also has the advantages of simple calculations and guaranteed convergence under certain conditions [29][30][31] but differs from the gradient method in that the search direction of the conjugate gradient algorithm is a group of conjugate or approximately conjugate vectors, and during the later stage of the algorithm, there are no additional repeated iterations in the same or approximate direction. us, the convergence of the conjugate gradient method is generally faster than that of the gradient descent method. In particular, for an objective functional quadratic, the conjugate gradient method has a quadratic termination. Based on these advantages, the conjugate gradient method has been used to solve numerous centralized offline optimization problems [32][33][34][35]. According to the existing literature, however, the conjugate gradient method has not been applied to distributed online optimization problems. To fill in this gap, herein, we present a distributed online conjugate gradient algorithm. ere are two main contributions provided by the present study. First, a new algorithm for a distributed online constrained convex optimization problem, namely, a distributed online conjugate gradient algorithm, is proposed. In our algorithm, a set of conjugate directions is used to replace the gradient directions used in a traditional gradient descent method, and the step size is obtained through an accurate linear search, thus effectively avoiding the slow convergence speed of a traditional gradient descent algorithm during the later stage. Second, we provide a careful analysis of the convergence of the proposed algorithm and obtain the square root of the regret bound. e remainder of this paper is organized as follows: in Section 2, we first briefly introduce the distributed online optimization model, followed by some necessary mathematical preliminaries and assumptions used in this study. We also provide a detailed statement of our algorithm in Section 3 and an analysis of the convergence of the algorithm in Section 4. e simulation results of our algorithm are then presented in Section 5. Finally, we provide some concluding remarks in Section 6. 
In addition, further detailed proofs of some of the lemmas applied can be found in the Appendix. Preliminaries In this section, we provide a brief background on the distributed online optimization and the conjugate gradient method. At the same time, some constructs used in this study and some relevant assumptions regarding our analysis are provided. Distributed Online Optimization. Consider a network system with multiple agents; in this network, each agent i is associated with a convex function f ti (x): R n ⟶ R. All agents aim to solve the following general consensus problem cooperatively: subject to x ∈ X. (1) During each round t ∈ {1, . . ., T}, the ith agent is required to generate a decision point x i (t) from a convex compact set X. en, the adversary replies to each agent's decision with a cost function f ti (x): X ⟶ R, and each agent has a loss of f ti (x i (t)) simultaneously. e communication between agents is specified by a graph G � (V, E), where V � 1, . . . , n { } is the vertex set and E ⊂ V × V is the edge set. Each agent i can only communicate with its immediate neighbors N(i) � j ∈ Vu(i, j) ∈ E . e goal of the agents is to seek a sequence of decision points x i (t) , i ∈ V such that the cumulative regret with respect to each agent i regarding any fixed decision x * ∈ X in hindsight is sublinear in T, that is, lim T⟶∞ R T /T � 0. Conjugate Gradient Method. For the following optimization problem, where f(x) is a quadratic continuous differentiable. e iterative form of the conjugate gradient (CG) method is usually designed as where x(k) is the point from the kth iteration, α k > 0 is the step length, and the search direction d k is defined as in which, g k is the gradient of the objective function at the current iterate point x(k), β k ∈ R is a scalar, and the different definitions of β k represent different methods of a conjugate gradient [27]. Well-known conjugate gradient methods include the Polak-Ribiere-Polyak (PRP) method and the 2 Complexity Fletcher-Reeves (FR) method. In this study, we define the parameter β k using the PRP method, the specific form of which is as follows: Gilbert and Nocedal [36] proved that if the parameter β k is appropriately bounded in magnitude, the CG method can converge globally. erefore, the CG method satisfies the sufficient descent condition under this hypothesis. To analyze the convergence of our algorithm, we provide the bound of the conjugate gradient as follows. Lemma 1 (see [37]). Let f(x) be a quadratic continuous difference convex function, and ∇ 2 f(x) be a Hessian matrix of the function. For any x ∈ R n , when there exist two positive numbers m and M such that Taking an initial point x (1) ∈ C, where x k , d k , and β k are all defined using the PRP method, in which g k � ∇f(x k ). Some Constructs and Assumptions. e following assumptions are given throughout this paper: (i) Each cost function f ti (x) is a convex and twice continuous differentiable L-Lipschitz on the convex set X. (ii) e set X is compact and convex, and 0 ∈ X, 0 denotes a vector with all entries equal to zero. (iii) e Euclidean diameter of X is bounded by R. As the Lipschitz condition in (i) implies, for any x ∈ X and any gradient g i , we have the following: Where ‖ · ‖ * : � sup ‖u‖≤1 〈·, u〉 denotes the dual norm. e next definition is used throughout this paper. Definition 1 (see [38]). Let f(x) be a function difference on an open set C⊆R n , and let X be a convex subset of C. en, for all (x 0 , x) ∈ X × X. 
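To make the conjugate-gradient recursion and the PRP parameter described above concrete, here is a minimal sketch that minimizes a strictly convex quadratic with an exact line search; the test function, starting point and tolerance are our own choices, not taken from the paper.

```python
import numpy as np

# Illustrative quadratic f(x) = 0.5 * x^T A x - b^T x with A positive definite.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
g = grad(x)
d = -g                                                # first search direction: steepest descent
for k in range(50):
    alpha = -(g @ d) / (d @ A @ d)                    # exact line search for a quadratic
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-10:
        break
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))    # PRP parameter, truncated at zero
    d = -g_new + beta * d                             # new conjugate direction
    g = g_new

print(f"converged after {k + 1} iterations:", x, " exact:", np.linalg.solve(A, b))
```

On an n-dimensional quadratic the method terminates in at most n such steps, which is the quadratic-termination property mentioned above.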
Now, we give an important inequality in [39] that is often used in optimization problems. Let f(x) be a first-order continuous differentiable function on the set R n , whose first derivative satisfies the Lipschitz condition, and thus ∀x, y ∈ R n , where L is the Lipschitz constant and ‖·‖ denotes the European norm. Distributed Online Conjugate Gradient Algorithm For the distributed online optimization problem (1), each locally cost function f ti (x) satisfies the assumptions in Section 2. e network topology relationship among agents is specified by an undirected graph Each agent i can only communicate with its immediate neighbors. e adjacency matrix of the undirected graph is a doubly (1), we present a distributed online conjugate gradient algorithm. After giving a decision x i (t) ∈ X based on the current information, we can obtain the cost function f ti (x) and compute the gradient g i (t) � ∇f ti (x(t)). We can then calculate the value of β i (t) using the gradients in the current iteration point x i (t) and the previous iteration point , computed using a Gram-Schmidt conjugate of the gradients in the current iteration point x i (t) and the previous search direction d i (t − 1), can be constructed. If the parameter is β i (t) ≤ 0, we then obtain the new search direction d i (t) � − g i (t), which is equivalent to restarting the distributed online conjugate gradient algorithm in the direction of the steepest descent. e iteration step length α i (t) can be obtained through an exact line search, and the next iteration point x i (t + 1) can be obtained using the conjugate direction vector d i (t) and step α i (t). e specific algorithm is summarized in Algorithm 1. Here, we define the projection function used in this algorithm as follows: Regret Bound Analysis To analyze the regret bound for D-OCG, we provide some preliminary remarks and a few definitions. Using Algorithm 1, we can determine the following: Now, we define Complexity and from the evolution of z i (t + 1), we can obtain Now, the main results in our paper can be stated. Theorem 1. e sequences of x i (t) and z i (t) generated by Algorithm 1 are given for all , and we thus have the cumulative regret owing to the action of agent i , s where λ � max 1≤i≤n, 1≤t≤T {λ i (t)}, b and D are two nonnegative constants, M and m are as defined in Lemma 1, n is the number of agents, and σ 2 (P) is the second largest eigenvalue of the adjacency matrix P. From eorem 1, we obtain a regret bound of the proposed algorithm under the local convexity, which is sublinear to T, i.e., the regret bound of the D-OCG algorithm can approach zero as the value of T increases, where T is the number of iterations. It is evident that the value of the regret bound is related to the upper bound L of the gradient of the local objective functions and the diameter R of the constraint set X. By Lemma 1, we know that the regret bound is also related to the Hessian matrix of the local objective functions. Moreover, the value of the regret bound is also related to the scale and topology of the network. To prove eorem 1, we now present the following lemmas. Lemma 2. For any i ∈ V and x * ∈ X, we can obtain the following inequality: Proof. 
Based on assumption (i), the function f t (x i (t)) is L-Lipschitz continuous on the convex set X, that is, and thus By contrast, (1) Input: convex set X, maximum round number T (2) Initialize: e adversary reveals f ti , ∀i ∈ V (5) Compute the gradients g i (t) ∈ zf ti (x i (t)), ∀i ∈ V (6) Compute: 4 Complexity Combining equations (19) and (20), the proof of Lemma 2 is completed. Now, we prove that the last term of inequality (20) has a particular bound. □ Lemma 3. For any i ∈ V and x * ∈ X, en, based on assumption (i), we know that ‖g i (t)‖ ≤ L. We can then obtain the following: Summing for t � 1, . . ., T for the average of 〈g i (t), x i (t) − x * 〉, the following is obtained: e proof of Lemma 3 is completed. Now, we turn our attention to the following term: According to the definition of the conjugate gradient, we give the bound of equation (25) in Lemma 4. Lemma 4. For any i ∈ V and β i (t) ≤ b (where b is a nonnegative constant), the following bound holds: Proof. Based on the definitions of d i (t) and d(t), the lefthand side of the above inequality can be split into two: us, we prove that the first term in equation (27) has a bound. For any function f(x), we know that where dom f is the domain of the function f(x). erefore, for the function we can obtain for any x * ∈ X, that is, so Based on the definition of the conjugate function [40] and the updates for z(t), we have the following: Because α(t) is a nonincreasing sequence, based on the definition of the conjugate function φ * α (z), we can obtain for and thus we obtain the following: According to the inequality (11), we know that A detailed proof of equation (36) is provided in Appendix A. e following inequality is then established: Summing both sides of the above inequation from t � 1 to T, we obtain the following: rough equations (33) and (38), we can write the following: We then analyze the bound on the second term in inequality (27). Because β i (t) � max 1≤i≤n {0, β i (t) PRP }, and β i (t) ≤ b, we then analyze the following two situations. □ then, e conclusion therefore clearly holds. Because the set X is closed and φ(x) is strongly convex (for the definition of strongly convex, see [40]), the set described above is compact. By contrast, we know that 〈z, x〉 is differentiable in z, and the supremum is unique, and thus we can obtain the following: ∇φ * en, we derive the next two equations through Taylor expansion: and thus (53) and therefore us, Summing both sides of the above inequation from t � 1 to T, we obtain and combining equations (46)-(57), we obtain the following: rough equations (27), (39), and (58), we finalize the proof of Lemma 4. Lemma 5. (α-Lipschitz continuity of the projections). For any pair A detailed proof of this Lemma can be seen in Appendix B. Now, we focus on an analysis of a key result concerning regret, i.e., ‖x i (t) − y(t)‖ in Lemma 6. Lemma 6. For all i ∈ V and t ∈{0, . . ., T}, the following inequality is true: Proof. 
Because x i (t) and y(t) are the projections of z i (t) and z(t) onto the set X, through Lemma 5, we have the following: Now, considering the evolution of sequence {z i (t)} in Algorithm 1, we obtain the following: Because p ij is an element of a doubly stochastic matrix, n i�1 p ij � 1, then we have Based on Algorithm 1 and the definition of z(t), we can determine that z(1) � 0, and thus In addition, we can obtain and based on the definition for the 1 norm of the vector (see [41]), To obtain a more specific bound of equation (46), we introduce a useful property of a stochastic matrix as follows [12]: where P t− r− 1 denotes the (t − r − 1)-th power of matrix P, e i is the ith basis coordinate of an n-dimensional space R n , 1 denotes a vector with all entries equal to 1, and σ 2 (P) is the second largest eigenvalue of stochastic matrices P and σ 2 (P) ≤ 1, through which we obtain the following inequality: Combining equations (63), (68), and (70) yields the following: us, we complete the proof of Lemma 6 Now, we can provide a brief proof of eorem 1. □ Proof of eorem 1. Combining lemmata 2-6 yields the following regret bound: By equation (71) and based on α(t) � (λ/ � t √ ), φ(x * ) ≤ R 2 , φ * α(T) (z(T)) ≤ D, and ‖d(t − 1)‖ 2 * ≤ ((M + m) 2 / m 2 )L 2 , we can obtain the conclusion to eorem 1. Simulation Experiments To verify the performance of the D-OCG, we consider a problem of a distributed sensor network [18], which has n sensors and aims at the estimation of a random vector x ∈ X � x ∈ R d | ‖x‖ 2 2 ≤ x 2 max . In this network, at each time t ∈ {1, 2, . . ., T}, each sensor i receives an observation vector v ti : R d ⟶ R m , in which the vector v ti is time-varying owing to the effect of the observed noise. Assume that each sensor i has a linear model ϕ i (x) � A i x, where A i is the observation matrix of sensor i, and A i ∈ R m×d and ‖A i ‖ 1 ≤ ϕ max . e local cost function in sensor i is defined as f ti (x) � (1/2)‖v ti − A i x‖ 2 2 , where v ti � A i x + η ti , in which η ti is white noise. e mathematical model of this problem is subject to x ∈ X. (73) In an offline case, the cost function in each sensor i is fixed, and because we can know all information of the cost function in advance, the centralized optimal estimate for this problem can be obtained by In a practical problem, the characteristics of the white noise may be unknown, or some sensors might not work properly for a particular reason, and we therefore need to find an estimate for vector x using a distributed online algorithm. Here, we set d � 1 and A i � 1/2, and sensor i observes v ti � a ti x + b ti , where a ti ∼U(0, 1) and b ti ∼ U(− (1/4), 1/4) (in which x ∼ U(a, b) indicates a random vector x uniformly distributed on (0, 1)). en, the cost function for sensor i at each time t is given by We verified the performance of the proposed algorithm based on the following three aspects: (1) First, we determined how the number of nodes in the network affects the performance of the D-OCG. We can see from Figure 1 that the average regret decreases slowly when we increase the number of nodes, and the algorithm is convergent on different scaled networks. When n � 1, the problem is equivalent to a centralized optimization problem, and our distributed optimization algorithm can reach the same effect as the centralized algorithm. (2) We then checked how the network topology influences the performance of the D-OCG. We implemented the algorithm on three types of graphs with nine nodes. 
In a complete graph, each node is connected to the remaining nodes, that is, all nodes can exchange information with each other. In a cycle graph, each node is only connected to two nodes directly adjacent to it. e connectivity of a Watts-Strogatz graph is between the complete graph and the cycle graph. From Figure 2, it can be seen that a better connectivity can lead to a slightly faster convergence. (3) We next compared our algorithm with the class algorithm D-OGD in [20]. e parameters used in these two algorithms are based on their theoretical proofs. e network topology relationship among nodes is complete, whereas for nodes n � 9, the step size is α(t) � 1/ � t √ . As shown in Figure 3, the convergence speed of the two algorithms is initially close, but with an increase in the number of iterations, the D-OCG converges faster than the D-OGD, which fully reflects the excellent performance of the proposed algorithm. Complexity 9 Conclusion We proposed a distributed online conjugate gradient algorithm to solve the distributed optimization problem with a convex constraint in a network. With this algorithm, the conjugate gradient is used to replace the gradient or subgradient in a traditional gradient decent method. Because the search direction is mutually conjugated throughout the entire algorithm iteration process, we can remove the disadvantage that a slow convergence has in the later stage of a gradient decent. We also presented a detailed analysis of the convergence for the proposed algorithm and obtained a regret bound for the optimization problem. e regret bound has a sublinear convergence. We applied the proposed algorithm (D-OCG) to a distributed sensor estimation problem. e numerical results show that our algorithm is feasible and effective, and under the same assumptions, the D-OCG has a better convergence rate than the traditional D-OGD gradient method.
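A schematic implementation of the experiment just described can clarify the moving parts. The sketch below follows the consensus-then-conjugate-step pattern of Algorithm 1 (mix an auxiliary state with neighbours through a doubly stochastic matrix, move along the PRP conjugate direction with step λ/√t, project onto the constraint set), applied to the scalar sensor model v_ti = a_ti x + b_ti with a_ti ~ U(0, 1) and b_ti ~ U(−1/4, 1/4). The exact update order, line search and projection map of Algorithm 1 are only partially legible above, so every detail here should be read as an assumption rather than a faithful reproduction; the bound b on β_i(t) mirrors the boundedness assumption used in the regret analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 9, 3000                      # sensors and rounds
x_true, x_max = 0.3, 1.0            # parameter to estimate and constraint radius
lam, b_cap = 1.0, 1.0               # step scale lambda and bound b on beta_i(t)
P = np.full((n, n), 1.0 / n)        # complete graph, doubly stochastic weights

x = np.zeros(n)                     # decisions x_i(t)
z = np.zeros(n)                     # auxiliary (pre-projection) states z_i(t)
d = np.zeros(n)                     # conjugate directions d_i(t)
g_old = np.ones(n)                  # previous gradients (arbitrary non-zero start)
regret = 0.0

for t in range(1, T + 1):
    a = rng.uniform(0.0, 1.0, n)
    v = a * x_true + rng.uniform(-0.25, 0.25, n)       # observations v_ti = a_ti x + b_ti
    # loss of agent 0's current decision, measured against the true parameter (stand-in comparator)
    regret += np.sum(0.5 * (v - a * x[0]) ** 2 - 0.5 * (v - a * x_true) ** 2)
    g = -a * (v - a * x)                               # gradient of f_ti(x) = 0.5 (v_ti - a_ti x)^2
    beta = np.clip(g * (g - g_old) / np.maximum(g_old ** 2, 1e-12), 0.0, b_cap)  # PRP, kept in [0, b]
    d = -g + beta * d
    z = P @ z + (lam / np.sqrt(t)) * d                 # consensus mixing + step along d_i(t)
    x = np.clip(z, -x_max, x_max)                      # projection onto X = {|x| <= x_max}
    g_old = g

print(f"final estimates: {np.round(x, 3)}")
print(f"average regret per round (agent 0): {regret / T:.4f}")
```

With the complete-graph weights all agents' estimates coincide almost immediately; replacing P with the weight matrix of a cycle or Watts-Strogatz graph is the change needed to reproduce the topology comparison described above.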
5,485.2
2020-03-11T00:00:00.000
[ "Computer Science" ]
The Scales of Gravitational Lensing After exactly a century since the formulation of the general theory of relativity, the phenomenon of gravitational lensing is still an extremely powerful method for investigating in astrophysics and cosmology. Indeed, it is adopted to study the distribution of the stellar component in the Milky Way, to study dark matter and dark energy on very large scales and even to discover exoplanets. Moreover, thanks to technological developments, it will allow the measure of the physical parameters (mass, angular momentum and electric charge) of supermassive black holes in the center of ours and nearby galaxies. Introduction In 1911, while he was still involved in the development of the general theory of relativity (subsequently published in 1916), Einstein made the first calculation of light deflection by the Sun [1]. He correctly understood that a massive body may act as a gravitational lens deflecting light rays passing close to the body surface. However, his calculation, based on Newtonian mechanics, gave a deflection angle wrong by a factor of two. On 14 October 1913, Einstein wrote to Hale, the renowned astronomer, inquiring whether it was possible to measure a deflection angle of about 0.84 toward the Sun. The answer was negative, but Einstein did not give up, and when, in 1915, he made the calculation again using the general theory of relativity, he found the right value φ = 2r s /b (where r s = 2GM/c 2 is the Schwarzschild radius and b is the light rays' impact parameter) that corresponds to an angle of about 1.75 in the case of the Sun. That result was resoundingly confirmed during the Solar eclipse of 1919 [2]. In 1924, Chwolson [3] considered the particular case when the source, the lens and the observer are aligned and noticed the possibility of observing a luminous ring when a far source undergoes the lensing effect by a massive star. In 1936, after the insistence of Rudi Mandl, Einstein published a paper on science [4] describing the gravitational lensing effect of one star on another, the formation of the luminous ring, today called the Einstein ring, and giving the expression for the source amplification. However, Einstein considered this effect exceedingly curious and useless, since in his opinion, there was no hope to actually observe it. On this issue, however, Einstein was wrong: he underestimated technological progress and did not foresee the motivations that today induce one to widely use the gravitational lensing phenomenon. Indeed, Zwicky promptly understood that galaxies were gravitational lenses more powerful than stars and might give rise to images with a detectable angular separation. In two letters More generally (see for details [11]), the light deflection between the two-dimensional position of the source θ S and the position of the image θ is given by the lens mapping equation: where φ = 2D LS Φ 2D N /(D S c 2 ) is the so-called lensing potential and Φ 2D N is the two-dimensional Newtonian projected gravitational potential of the lens. We also note, in turn, that the ratio D LS /D S depends on the redshift of the source and the lens, as well as on the cosmological parameters Ω M = ρ M /ρ c and Ω Λ = ρ Λ /ρ c , being ρ c = 3H 2 0 /(8πG), ρ M and ρ Λ the critical, the matter and the dark energy densities, respectively. 
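The 1.75 arcsecond figure quoted above follows directly from φ = 2r_s/b evaluated with the light ray grazing the solar limb (b = R_⊙); here is a quick numerical check with rounded constants.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8                 # SI units
M_sun, R_sun = 1.989e30, 6.96e8           # solar mass [kg] and radius [m]

r_s = 2.0 * G * M_sun / c**2              # Schwarzschild radius of the Sun
phi = 2.0 * r_s / R_sun                   # deflection angle for impact parameter b = R_sun
print(f"r_s = {r_s / 1e3:.2f} km,  deflection = {np.degrees(phi) * 3600:.2f} arcsec")
```

Dropping the factor of two recovers the Newtonian estimate Einstein first obtained.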
The transformation above is thus a mapping from the source plane to the image plane, and the Jacobian J of the transformation is given by: where the commas are the partial derivatives with respect to the two components of θ. Here, κ is the convergence, which turns out to be equal to Σ/2Σ cr , where: is the critical surface density, γ = (γ 1 , γ 2 ) is the shear and A is the magnification matrix. Thus, the previous equations define the convergence and shear as second derivatives of the potential, i.e., From the above discussion, it is clear that gravitational lensing may allow one to probe the total mass distribution within the lens system, which reproduces the observed image configurations and distortions. This, in turn, may allow one to constrain the cosmological parameters, although this is a second order effect. Strong Lensing Quasars are the brightest astronomical objects, visible even at a distance of billions of parsecs. After the identification of the first quasar in 1963 [12], these objects remained a mystery for quite a long time, but today, we know that they are powered by mass accretion on a supermassive black hole, with a mass billions of times that of the Sun. The first strong gravitational lens, discovered in 1979, was indeed linked to a quasar (QSO 0957+561 [7]), and although the phenomenon was expected on theoretical grounds, it left the astronomers astonished. The existence of two objects separated by about 6 and characterized by an identical spectrum led to the conclusion that they were the doubled image of the same quasar, clearly showing that Zwicky was perfectly right and that galaxies may act as gravitational lenses. Afterwards, also the lens galaxy was identified, and it was established that its dynamical mass, responsible for the light deflection, was at least ten-times larger than the visible mass. This double quasar was also the first object for which the time delay (about 420 days) between the two images [13], due to the different paths of the photons forming the two images, has been measured. This has also allowed obtaining an independent estimate of the lens galaxy dynamical mass. Observations can also show four images of the same quasar, as in the case of the so-called Einstein Cross, or when the lens and the source are closely aligned, one can observe the Einstein ring, e.g., in the case of MG1654-1346 [14]. The macroscopic effect of multiple images' formation is generally called strong lensing, which also consists of the formation of arcs, as those clearly visible in the deep sky field images by the Sloan Digital Sky Survey (SDSS; see, e.g., [15]). The sources of strong lensing events are often quasars, galaxies, galaxy clusters and supernovae, whereas the lenses are usually galaxies or galaxy clusters. The image separation is generally larger than a few tenths of an arcsec, often up to a few arcsecs. Over the years, many strong lensing events have been found in deep surveys of the sky, such as the CLASS [16], the Sloan ACS [17], the SDSS, one of the most successful surveys in the history of astronomy (see, e.g., [18] and references therein), the SQLS (the Sloan Digital Sky Survey for Quasar Lens Search) [19], and so on. Strong gravitational lensing is nowadays a powerful tool for investigation in astrophysics and cosmology (see, e.g. [20,21]). 
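For orientation on the numbers involved, the sketch below evaluates the critical surface density (with the usual convention Σ_cr = c²D_S/(4πG D_L D_LS)) and the Einstein angle of a point lens; the distances and the 10¹² M_⊙ lens mass are illustrative placeholders, not values from the text.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
Mpc, pc, M_sun = 3.086e22, 3.086e16, 1.989e30
arcsec = np.pi / (180.0 * 3600.0)

# Illustrative angular-diameter distances (placeholders)
D_L, D_S, D_LS = 1000 * Mpc, 2000 * Mpc, 1400 * Mpc

Sigma_cr = c**2 * D_S / (4.0 * np.pi * G * D_L * D_LS)      # critical surface density
print(f"Sigma_cr ~ {Sigma_cr:.1f} kg/m^2 ~ {Sigma_cr * pc**2 / M_sun:.0f} M_sun/pc^2")

M = 1e12 * M_sun                                            # assumed galaxy-scale lens mass
theta_E = np.sqrt(4.0 * G * M * D_LS / (c**2 * D_L * D_S))  # point-lens Einstein angle
print(f"theta_E ~ {theta_E / arcsec:.1f} arcsec")
```

An image separation of order 2θ_E, i.e. a few arcseconds, is exactly the regime of the strong-lensing systems discussed above.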
As already mentioned in the previous section, strong lensing gives a unique opportunity to measure the dynamical mass of the lens object using, for example, the mass estimator M(< R E ) = πΣ cr θ 2 E , which directly gives the mass within R E , using Equation (5) in this regime. The result is that masses obtained in this way are almost always larger than the visible mass of the lensing object, showing that galaxy and galaxy cluster masses are dominated by dark matter. In any case, accurately constraining the mass distribution of the lens system (e.g., a galaxy cluster) is a generally degenerate problem, in the sense that there are several mass distributions that can fit the observables; thus, the best way to solve it is to use multiple images (see, e.g., [22]). Another important application of strong lensing is the study of dark matter halo substructures. Indeed, sometimes flux ratio anomalies in the lensed quasar images are detected (see, e.g., [23,24]), and while smooth mass models of the lensing galaxy may generally explain the observed image positions, the prediction of such models of the corresponding fluxes is frequently violated. Especially in the radio band observations, since the quasar radio emitting region is quite large, the observed radio flux anomalies are explained as being due to the presence of substructures of about 10 6 − 10 8 M along the line of sight. After some controversy regarding whether ΛCDM (cold dark matter plus Cosmological Constant) simulations predict enough dark matter substructures to account for the observations (for example, in [25], some indication is found of an excess of massive galaxy satellites), more recent analysis, taking also into account the uncertainty in the lens system ellipticity, finds results consistent with those predicted by the standard cosmological model [26,27]. However, at present, the list of multiply-imaged quasars observed in the radio and mid-IR bands is quite short, and further observational and theoretical work would be very helpful in this respect. Another indication of dark matter halo substructures comes from detailed analysis of galaxy-galaxy lensing. Although the results obtained are generally consistent with ΛCDM simulations, more data should be analyzed in order to get strong constraints [28,29]. Strongly lensed quasars have been observed to show a certain variability of one image with respect to the others. This can be often attributed to microlensing (see Section 5) by the stars throughout the lens galaxy. This effect, and in particular its variation with respect to the wavelength, has provided an opportunity to study in detail the central engine of the source quasar, and the magnitude of the microlensing variability has allowed astrophysicists to constrain the stellar density in the lens galaxy [30][31][32]. Strong gravitational lensing may be used as a natural telescope that magnifies dim galaxies, making them easier to be studied in detail. For this reason, mass concentrations, like galaxies and clusters of galaxies, can be effectively used as cosmic telescopes to study faint sources that would not be possible to detect in the absence of gravitational lensing (see, e.g., [33,34]). At present, there is also an event of a high magnified supernova multiply imaged and also seen exploding again, being lensed by a galaxy in the cluster MACS J1149.6+2223 [35,36]. 
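The Einstein-radius mass estimator quoted at the beginning of this passage is a surface density times the area enclosed by the Einstein ring, so the angle must be converted into a physical radius, R_E = D_L θ_E, before the product is taken. A rough evaluation with the same placeholder distances as in the previous sketch and an assumed θ_E = 2 arcsec:

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
Mpc, kpc, M_sun = 3.086e22, 3.086e19, 1.989e30
arcsec = np.pi / (180.0 * 3600.0)

D_L, D_S, D_LS = 1000 * Mpc, 2000 * Mpc, 1400 * Mpc   # illustrative distances
theta_E = 2.0 * arcsec                                # assumed Einstein angle

Sigma_cr = c**2 * D_S / (4.0 * np.pi * G * D_L * D_LS)
R_E = D_L * theta_E                                   # physical Einstein radius in the lens plane
M_E = np.pi * Sigma_cr * R_E**2                       # projected mass enclosed within R_E
print(f"R_E ~ {R_E / kpc:.1f} kpc,  M(<R_E) ~ {M_E / M_sun:.1e} M_sun")
```

Comparing such a projected mass with the luminous mass inside the same aperture is what reveals the dark matter dominance discussed above.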
The ultimate goal of strong lensing is not only to get information on the large-scale structure of the Universe, but also to constrain the cosmological parameters. For instance, analyzing the time delay among the lensed source images, it is possible to estimate also the value of the Hubble constant H 0 . Indeed, the time delay is given by the difference of the light paths from the images and is inversely proportional to H 0 , as first understood by Refsdal [37] (see also the review in [38]). At present, one of the most accurate measurement of the Hubble constant using a gravitational lens is provided in [39]. There is also a project (COSMOGRAIL) particularly devoted to the time delay measurements of doubly-or multiply-lensed quasars (see [40] and the references therein). Moreover, the measure of both the frequency of occurrence and the redshift of multiple images in deep sky surveys may allow one to constrain the values of Ω M and Ω Λ in an independent way with respect to other methods, such as those coming from SN Ia or the CMB (Cosmic Microwave Background) power spectrum. Weak Lensing In addition to the macroscopic deformations discussed in the previous section, in the deep field surveys of the sky, also arclets (i.e., single distorted images with an elliptical shape) and weakly distorted images of galaxies, with an almost invisible individual elongation, have been detected. This effect is known as weak lensing and is playing an increasingly important role in cosmology. The weak lensing's main feature is the shape deformation of background galaxies, whose light crosses a mass distribution (e.g., a galaxy or a galaxy cluster) that acts as a gravitational lens. Actually, as discussed in Section 2, gravitational lensing gives rise to two distinct effects on a source image: convergence, which is isotropic, and shear, which is anisotropic. In the weak lensing regime, the observer makes use of the shear, that is the image deformation (sometimes related to the galaxy orientation), while the convergence effect is not used, since the intrinsic luminosity and the size of the lensed objects are unknown. For a complete and in-depth review on the basics of weak gravitational lensing, with full mathematical details of all the most important concepts, we refer the reader to [41]. The first weak lensing event was detected in 1990 as statistical tangential alignment of galaxies behind massive clusters [42], but only in 2000, coherent galaxy distortions were measured in blind fields, showing the existence of the cosmic shear (see, e.g., [43,44]). Here, we remark that the weak lensing cannot be measured by a single galaxy, but its observation relies on the statistical analysis of the shape and alignment of a large number of galaxies in a certain direction. Therefore, the game is to measure the galaxy ellipticities and orientations and to relate them to the surface mass density distribution of the lens system (generally a galaxy cluster placed in between). There are at least two major issues in weak lensing studies, one mainly relying on the theory, the other one on observations: the former concerns finding the best way to reconstruct the intervening mass distribution from the shear field γ = (γ 1 , γ 2 ), the latter with looking for the best way to determine the true ellipticity of a faint galaxy, which is smeared out by the instrumental point spread function (PSF). To solve these issues, several approaches have been proposed, which can be distinguished into two broad families: direct and inverse methods. 
On the theoretical side, the direct approaches are: the integral method, which consists of expressing the projected mass density distribution as the convolution of γ by a kernel (see, e.g., [45]), and the local inversion method, which instead starts from the gradient of γ (see, e.g., [46] and the references therein). The inverse approaches work on the lensing potential φ (see Equation 3), and they include the use of the maximum likelihood [47,48] or the maximum entropy methods [49] to determine the most likely projected mass distribution that reproduces the shear field. The inverse methods are particularly useful since they make it possible to quantify the errors in the resultant lensing mass estimates, as, for instance, errors deriving from the assumption of a spherical mass model when fitting a non-spherical system [50,51]. The inverse methods allow one also to derive constraints from external observations, such as X-ray data on galaxy clusters' strong lensing or CMB lensing. In particular, one can compare mass measurements from weak lensing and X-ray observations for large samples of galaxy clusters [52]. In this respect, [53] used a large sample of nearby clusters with good weak lensing and X-ray measurements to investigate the agreement between mass estimates based on weak lensing and X-ray data, as well as studied the potential sources of errors in both methods. Moreover, a combination of weak lensing and CMB data may provide powerful constraints on the cosmological parameters, especially on the Hubble constant H 0 , the amplitude of fluctuations σ 8 and the matter cosmic density Ω m [54,55]. We also mention, in this respect, that one way to determine the fluid-mechanical properties of dark energy, characterized by its sound speed and its viscosity apart from its equation of state, is to combine Planck data with galaxy clustering and weak lensing observations by Euclid, yielding one percent sensitivity on the dark energy sound speed and viscosity [56] (see the end of this section). On the observational side, the first priority is to use a telescope with a wide field of view, appropriate to probe the large-scale structure distribution at least of a galaxy cluster. On the other hand, it is also necessary to minimize the source of noise in the determination of the ellipticity of very faint galaxies, so that the best-seeing conditions for a ground-based telescope or, better, a space-based instrument, are extremely useful. Very promising results have been obtained with the weak lensing technique so far, as, for example, the best measure, until today, of the existence and distribution of dark matter within the famous Bullet cluster [57] (actually constituted by a pair of galaxy clusters observed in the act of colliding). Astronomers found that the shocked plasma was almost entirely in the region between the two clusters, separated from the galaxies. However, weak lensing observations showed that the mass was largely concentrated around the galaxies themselves, and this enabled a clear, independent measurement of the amount of dark matter. With the major aim to map, through the weak lensing effect, the mass distribution in the Universe and the dark energy contribution by measuring the shape and redshift of billions of very far away galaxies (for a review, see [58]), the European Space Agency (ESA) is planning to launch the Euclid satellite in the near future. Also ground-based telescopes will allow one to detect an enormous number of weak and strong lensing events. 
An example is given by the LSST (Large Synoptic Survey Telescope) project, located on the Cerro Pachón ridge in north-central Chile, which will become operative in 2022. Its 8.4-meter telescope uses a special three-mirror design, creating an exceptionally wide field of view, and has the ability to survey the entire sky in only three nights. The effective number density n eff of weak lensing galaxies (which is a measure of the statistical power of a weak lensing survey) that will be discovered by LSST is, conservatively, in the range of 18-24 arcmin −2 (see Table 4 in [59]). The very large (about 1.5 × 10 4 square degrees) and deep survey of the sky that will be performed by Euclid will allow astrophysicists to address fundamental questions in physics and cosmology about the nature and the properties of dark matter and dark energy, as well as in the physics of the early Universe and the initial conditions that provided the seeds for the formation of cosmic structure. Before closing this section, we also mention that strong systematics may be present in weak lensing surveys. For example, the intrinsic alignment of background sources may mimic to an extent the effects of shear and may contaminate the weak lensing signal. However, these systematics may be controlled if also the galaxy redshifts are acquired, and this fully removes the unknown intrinsic alignment errors from weak lensing detections (for further details, see [60,61]). Microlensing Let us consider now the microlensing scale of the lensing phenomenon that occurs when θ E is smaller than the typical telescope angular resolution, as in the case of stars lensing the light from background stars (for a review on gravitational microlensing and its astrophysical applications, we refer to, e.g., [62]). As is clear from the discussion in Section 2, by solving the lens Equation (1), one can determine the angular positions of the primary (I 1 ) and secondary (I 2 ) images. In Figure 2, these positions are shown for four different values of the impact parameter θ S in the case of a point-like source. If the source and the lens are aligned (first panel on the left), the circular symmetry of the problem leads to the formation of a luminous annulus having radius θ E around the lens position. Otherwise, increasing the θ S value, the secondary image gets closer to the lens position, while the primary image drifts apart from it, and in the limit of θ S θ E , the microlensing phenomenon tends to disappear. However, observing multiple images during a microlensing event is practically impossible with the present technology. For instance, in the case in which the phenomenon is maximized, corresponding to the perfect alignment, for a star in the galactic bulge (about 8 kpc away), one has ∆θ = 2θ E 1 µas, which is well below the angular resolving power, even of the Hubble Space Telescope (about 43 mas at 500 nm); see, e.g., http://www.coseti.org/9008-065.htm. When a source is microlensed, its images do not have the same luminosity; therefore, the observer receives a total flux (or magnitude) different from that of the unlensed source. The flux difference can be described very simply in terms of the light magnification and the law of the conservation of the specific intensity I, which represents the energy, with frequency in the range dν crossing the surface dA during the time interval dt in the solid angle dΩ around the direction orthogonal to the surface. 
Indeed, the light specific intensity turns out to be conserved in the absence of absorption phenomena, interstellar scattering or Doppler shifts. This is also a consequence of Liouville's theorem, which claims that the density of states in the phase space remains constant if the interacting forces are non-collisional (and gravitation fulfills this condition due to its weak coupling constant), and the propagating medium is approximately transparent (as is the case for interstellar space). This effect can produce a magnification or a de-magnification of the images of an extended light source (see Figure 3). If the image is magnified, it means that it certainly subtends a wider angle with respect to that subtended by the source in the absence of the lens. In microlensing, the source disk size should not be neglected in general. Within the framework of the finite source approximation for a source with flux F S and assuming θ E ≤ θ S , one can show that the magnification A of an image at angular position θ is given by (1 − θ 4 E /θ 4 ) −1 . As a consequence, the observed flux corresponds to F = AF S . Of course, when the source star disk gradually moves away from the line of sight, the magnification decreases, and the unlensed F S flux is then recovered. As already anticipated, the observer cannot see, in the microlensing case, well-separated images, but, instead, detects a single image made by the overlapping of the primary and the secondary images. In this case, one can easily obtain the classical magnification factor A by summing up the individual magnifications, i.e., A = (u 2 + 2)/ u 2 (u 2 + 4), where u = θ S /θ E is the impact factor. If there is a relative movement between the lens and the source, u changes with time, and a standard Paczyǹski curve [8] does emerge. An important role in gravitational microlensing is played by the caustics, the geometric loci of the points belonging to the lens plane where the light magnification of a point-like source becomes infinite, and by the corresponding critical curves in the source plane. In the case of a single lens, the caustic is a point coinciding with the lens position; therefore, the magnification diverges when the impact parameter approaches zero. However, real sources are not point-like, so we always have finite magnifications that can be calculated by an average procedure: where A(y) is the point-like source magnification, I(y) is the brightness profile of the stellar disk (the limb darkening profile) and the integral is extended over the source star disk. Observations show that about half of all stars are in binary systems, and moreover, thousands of exoplanets are being discovered around their host stars by different techniques and instruments. Therefore, it is worth considering binary and multiple systems as lenses in microlensing observations. In this case, the lens equation, obviously, becomes more complicated, but it can still be solved by numerical methods in order to obtain the magnification map where caustics take on distinctive shapes depending on the specific geometry of the system. In Figure 4, we show the magnification map and the resulting light curve for a simulated microlensing event due to a binary lens with mass ratio q = M 1 /M 2 0.01 (e.g., a solar mass as the primary component and a Jupiter-like planet as the secondary one). In these cases, the resulting light curve may be rather different with respect to the typical Paczyǹski one, depending on the system parameters. 
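The quantities above can be tied together in a few lines: the Einstein angle of a solar-mass lens roughly halfway to a bulge source (a milliarcsecond-scale quantity, hence unresolved by current instruments) and the resulting Paczyński light curve A(u(t)) for a rectilinear source trajectory. All event parameters are assumed for illustration.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
kpc, M_sun = 3.086e19, 1.989e30
mas = np.pi / (180.0 * 3600.0 * 1e3)

# Einstein angle for an assumed solar-mass lens at 4 kpc, source in the bulge at 8 kpc
M, D_L, D_S = 1.0 * M_sun, 4.0 * kpc, 8.0 * kpc
theta_E = np.sqrt(4.0 * G * M * (D_S - D_L) / (c**2 * D_L * D_S))
print(f"theta_E ~ {theta_E / mas:.2f} mas  (image separation ~ 2 theta_E)")

def A(u):
    """Point-source point-lens magnification, A = (u^2 + 2) / (u * sqrt(u^2 + 4))."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

u0, t0, tE = 0.1, 0.0, 20.0                      # impact parameter, peak time, Einstein time [days]
for t in np.linspace(-40.0, 40.0, 9):
    u = np.hypot(u0, (t - t0) / tE)              # rectilinear trajectory, u(t)^2 = u0^2 + ((t - t0)/tE)^2
    print(f"t = {t:+6.1f} d   A = {A(u):6.3f}")
```

This symmetric, achromatic bell shape is the Paczyński curve; planetary or binary companions perturb it through the caustic structure described above.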
The study of these anomalies in the microlensing light curves behavior is becoming more and more important nowadays, since it allows one to estimate some of the parameters of the lensing system (see, e.g., [63]). The main advantage of this technique, compared to the other methods adopted by the exoplanets hunters (e.g., radial velocity, direct imaging, transits), is the possibility to detect even very small planets orbiting their own star at enormous distances from Earth. It also allows one to discover the so-called free-floating planets (FFPs), otherwise hardly detectable [64]. By studying the PA99-N2 microlensing event, detected in 1999 by the French-British collaboration POINT-AGAPE [66], Ingrosso et al. [65] revealed in 2009 that the anomaly observed was compatible with the presence of a super-Jupiter with a mass of 5M J around a star lying in the Andromeda galaxy (see Figure 5), thus finding the first putative exoplanet in another galaxy. Astrometric Microlensing During an ongoing microlensing event, the centroid of the multiple images and the source star positions move in the lens plane giving rise to a phenomenon known as astrometric microlensing (see, e.g., [67,68] and the references therein). In the simplest case of a point-like lens, for a source at angular distance θ S , the position θ of the images with respect to the lens can be obtained by solving the lens Equation (1). Since the Einstein radius R E = D L θ E defines the scale length on the lens plane, the lens equation reads: where d S and d are the linear distances, in the lens plane, of the source and images from the gravitational lens, respectively. Moreover, using the dimensionless source-lens distances u = θ S /θ E andũ = θ/θ E , the previous relation can be further simplified as: Denoting with u + and u − the solutions of this equation, one notes that, in the lens plane, the + image resides always outside the circular ring centered on the lens position with radius equal to the Einstein angle, while the − image is always within the ring. As the source-lens distance increases, the + image approaches the source position, while the − one (becoming fainter) moves towards the lens location. For a source moving in the lens plane with transverse velocity v ⊥ directed along the ξ axis (η is perpendicular to it), the projected coordinates of the source result in being: where t E = R E /v ⊥ and u 0 is the impact parameter (in this case, lying on the η axis). Since u 2 = ξ 2 + η 2 is time dependent, the two images move in the lens plane during the gravitational lensing event. By weighting the + and − image position with the associated magnification [69], one gets: Finally, the observable is defined as the displacement of the centroid with respect to the source, Note that the centroid shift may be viewed as a vector: with components along the axes: Here, we remind that all of the angular quantities are given in units of the Einstein angle θ E , which, for a source at distance D S D L , results in being: which fixes the scale of the phenomenon. It is straightforward to show (see [69]) that during a microlensing event, the centroid shift ∆ traces (in the ∆ ξ , ∆ η plane) an ellipse centered in the point (0, b). The ellipse semi-major axis a (along ∆ η ) and semi-minor axis b (along ∆ ξ ) are: Then, for u 0 → ∞, the ellipse becomes a circle with radius 1/(2u 0 ), while it degenerates into a straight line of length 1/ √ 2 for u 0 approaching zero. 
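The ellipse just described can be verified numerically. In the sketch below the centroid shift is taken to have magnitude u/(u² + 2) in units of θ_E and to point along the instantaneous source-lens separation (the standard point-lens result, assumed here), which reproduces the circle and straight-line limits quoted above; the axis labels follow our own parametrization of the trajectory, so the identification with (∆_ξ, ∆_η) is ours.

```python
import numpy as np

u0 = 0.5                                     # assumed impact parameter (units of theta_E)
xi = np.linspace(-6.0, 6.0, 2001)            # source trajectory coordinate along the motion
eta = np.full_like(xi, u0)
u = np.hypot(xi, eta)

delta = u / (u**2 + 2.0)                     # centroid shift magnitude, units of theta_E
d_xi, d_eta = delta * xi / u, delta * eta / u

a = 1.0 / (2.0 * np.sqrt(u0**2 + 2.0))       # semi-axis along the direction of motion
b = u0 / (2.0 * (u0**2 + 2.0))               # semi-axis along the impact-parameter direction
resid = (d_xi / a) ** 2 + ((d_eta - b) / b) ** 2 - 1.0   # ellipse centred at (0, b)
print(f"a = {a:.4f}, b = {b:.4f}, max |ellipse residual| = {np.max(np.abs(resid)):.1e}")
```

In the u_0 → 0 limit a → 1/(2√2) and b → 0, so the ellipse degenerates into the straight segment of length 1/√2 noted above, while for large u_0 both semi-axes tend to 1/(2u_0) and the trajectory becomes the circle.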
Note also that Equation (16) implies: so that by measuring a and b, one can determine the event impact parameter u 0 . As observed in [67], ∆ falls more slowly than the magnification, implying that the centroid shift may be an interesting observable also for large source-lens distances, i.e., far from the light curve peak. In fact, in astrometric microlensing, the threshold impact parameter u th (i.e., the value of the impact parameter that gives an astrometric centroid signal larger than a certain quantity δ th ) is given by u th = T obs v ⊥ /(δ th D L ), where T obs is the observing time and v ⊥ the relative velocity of the source with respect to the lens. For example, the Gaia satellite should reach an astrometric precision σ G 300 µas (for objects with visual magnitude 20) in five years of observation [70]. Then, assuming a threshold centroid shift δ th σ G , one has u th 60 for a lens at a distance of 0.1 kpc and transverse velocity v ⊥ 100 km s −1 . For comparison, the threshold impact parameter for a ground-based photometric observation is 1. Consequently, the cross-section for an astrometric microlensing measurement is much larger than the photometric one, since it scales as u 2 th . Hence, in the absence of finite-source and blending effects, by measuring a and b, one can directly estimate the impact parameter u 0 . A further advantage of the astrometric microlensing is that some events can be predicted in advance [71]. In fact, by studying in detail the characteristics of stars with large proper motions, Proft et al. [72] identified tens of candidates to measure astrometric microlensing by the Gaia satellite, an European Space Agency (ESA) mission that will perform photometry, spectroscopy and high precision astrometry (see [70]). Polarization and Orbital Motion Effects in Microlensing Events Gravitational microlensing observations may also offer a unique tool to study the atmospheres of far away stars by detecting a characteristic polarization signal [73]. In fact, it is well known that the light received from stars is linearly polarized by the photon scattering occurring in the stellar atmospheres. The mechanism is particularly effective for the hot stars (of A or B type) that have a free electron atmosphere giving rise to a polarization degree increasing from the center to the stellar limb [74]. By a minor extent, polarization may be also induced in main sequence F or G stars by the scattering of star light off atoms/molecules and in evolved, cool giant stars by photon scattering on dust grains contained in their extended envelopes. Following the approach in [74], the polarization P in the direction making an angle χ = arccos(µ) with the normal to the star surface is P(µ) = [I r (µ) − I l (µ)]/[I r (µ) + I l (µ)], where I l (µ) is the intensity in the plane containing the line of sight and the normal, and I r (µ) is the intensity in the direction perpendicular to this plane. Here, µ = 1 − (r/R) 2 , where r is the distance of a star disk element from the center and R the star radius, and we are assuming that light propagates in the direction r × l. For isolated stars, a polarization signal has been measured only for the Sun for which, due to the distance, the projected disk is spatially resolved. Instead, when a star is significantly far away and can be considered as point-like, only the polarization P averaged over the stellar disk can be measured, and usually P = 0, since the flux from each stellar disk element is the same. 
A net polarization of the light appears if a suitable asymmetry in the stellar disk is present (caused by, e.g., eclipses, tidal distortions, stellar spots, fast rotation, magnetic fields). In the microlensing context, the polarization arises since different regions of the source star disk are magnified differently during the event. Indeed, during an ongoing microlensing event, the gravitational lens scans the disk of the background star, giving rise not only to a time-dependent light magnification, but also to a time-dependent polarization. This effect (see also [75]) is particularly relevant in the microlensing events where: (1) the magnification turns out to be significant; (2) the source star radius and the lens impact parameter are comparable; (3) the source star is a red giant, characterized by a rather low surface temperature (T ≤ 3000 K), around which the formation of dust grains is possible. This occurs beyond the distance R h from the star center at which the gas temperature in the stellar wind becomes lower than the grain sublimation temperature ( 1400 K). The intensity of the expected polarization signal relies on the dust grain optical depth τ and can reach values of 0.1%-1%, which could be reasonably observed using, for example, the ESO VLTtelescope (see [76]). In Figure 6, we show some typical polarization curves, expected in bypass (continuous curves) and transit events (dashed curves), in which the lens trajectory approaches or passes through the source regions where the dust grains are present. In Figure 7, the distribution of the peak polarization values (given in percent) as a function of the intrinsic source star color index (V − I) int (i.e., the de-reddened color of the unlensed source star) is shown for a sample of OGLE-type microlensing events generated by a synthetic stellar catalog simulating the bulge stellar population. As one can see, red giants with (V − I) int ≤ 3, which corresponds to the events inside the regions delimited by dashed lines, have P max ≤ 1 percent values. These are the typical events observed by the OGLE-III microlensing campaign. There are, however, a few events with 1 ≤ P max ≤ 10 percent, characterized by (V − I) int ≥ 3, corresponding to source stars in the AGB phase. These stars, which are rather rare in the galactic bulge, have not been sources of microlensing events observed in the OGLE-III campaign, but they are expected to exist in the galactic bulge. In this respect, the significant increase in the event rate by the forthcoming generation of microlensing surveys towards the galactic bulge, both ground-based, like KMTNet [77], and space-based, like EUCLID [78] and WFIRST [79], opens the possibility to develop an alert system able to trigger polarization measurements in ongoing microlensing events. Another way to study the atmosphere of the source star is to analyze the amplification curve and look for dips and peaks, typically due to the presence of stellar spots on the photosphere of the star [80,81]. These features may be easily confused, however, with the signatures of a binary lensing system. When the source star has a relevant rotation motion during the lensing event, there is the possibility to really detect the stellar spots on the source's surface and to estimate the rotation period of the star [82]. A new generation of networks of telescopes dedicated to microlensing surveys, like KMTNet [83], will provide high-precision and high-cadence photometry that will enable us to observe spots on the source's surface. 
We remark that also multicolor observations of the event would help to disentangle the aforementioned degeneracy, as the ratio between the brightness of the spot and the surrounding photosphere strongly depends on the frequency of the observation. It has been shown that stellar spots can be detected also through polarimetric observations of microlensing caustic-crossing events [84]. Under certain circumstances, binary lens systems are characterized by the close-wide degeneracy: if the two objects are separated by a projected distance s or 1/s, the resulting caustics have the same structure, and also, the observed light curves will appear the same [85,86]. This happens, for example, in systems with small mass ratio q, like planetary systems [87]. It is possible to resolve this degeneracy in the case of short-period binary lenses, the so-called rapidly rotating lenses, as the orbital motion induces repeating features in the amplification curve that can be exploited to estimate important physical parameters of the lensing systems, including the orbital period, the projected separation and the mass [88,89]. Retro-Lensing: Measuring the Black Hole Features Gravitational lensing at the scales considered in the previous sections can be treated in the weak gravitational field approximation of the general theory of relativity, since in those cases, photons are deflected by very small angles. This is not the case when one considers black holes, for which it may happen that photons get very close to the event horizon of these compact objects. Black holes are relatively simple objects. The no-hair theorem postulates that they are completely described by only three parameters: mass, angular momentum (generally indicated by the spin parameter a) and electric charge; any other information (for which hair is a metaphor) disappears behind the event horizon, and it is therefore inaccessible to external observers. Depending on the values of these parameters, black holes can be classified into Schwarzschild black holes (non-rotating and non-charged), Kerr black holes (rotating and non-charged), Reissner-Nordström black holes (non-rotating and charged) and Kerr-Newman black holes (rotating and charged). Even though they appear so simple, black holes are mathematically complicated to describe (see, e.g., [90]). Nowadays, we know that black holes are placed at the center of the majority of galaxies, active or not, and in many binary systems emitting X-rays. Moreover, they are the engine of gamma-ray bursts (GRBs) and play an essential role in better understanding stellar evolution, galaxy formation and evolution, jets and, in the end, the nature of space and time. One goal astrophysicists have been pursuing for a long time is to probe the immediate vicinity of a black hole with an angular resolution as close as possible to the size of the event horizon. This kind of observations would give a new opportunity to study strong gravitational fields, and as we will see at the end of this section, we think we are very close to reaching this goal. How do we measure the mass, angular momentum and electric charge of a black hole? One possibility, rich with interesting consequences, was suggested by Holz and Wheeler [91], who considered a phenomenon that was already known to be possible around black holes. They used the Sun as the source of light rays and a black hole far from the solar system. As shown in Figure 8, some photons would have the right impact parameter to turn around the black hole and come back to Earth. 
Other photons, with a slightly smaller impact parameter, can even rotate twice around the black hole, and so on. A series of concentric rings should then appear if the observer, the Sun and the black hole are perfectly aligned. The two authors also suggested carrying out a survey and looking for concentric rings in the sky in order to discover black holes. Unfortunately, there are two problems with this idea. First, it is unlikely that the Sun, Earth and a black hole are perfectly aligned, and in any case, Earth moves around the Sun, so that the alignment can occur only for a short time interval. The second and most important problem is that the retro-image of the Sun is so dim that even using the Hubble Space Telescope (HST), only a black hole with a mass larger than 10 M⊙ within 0.01 pc of the Earth could be observed with the proposed technique. Moreover, we already know that such an object cannot be so close to the solar system without causing observable perturbations in the planetary orbits. A better approach to test the idea proposed by Holz and Wheeler is to consider a well-known supermassive black hole and a bright star around it. Of course, the brighter the source star, the brighter the retro-image will be. Some of us [92] soon proposed to consider retro-lensing around the black hole at the galactic center, and in particular, the retro-lensing image of the closest star orbiting around it. Indeed, it is known that at the center of our galaxy there is a supermassive black hole, with mass about (4.2 ± 0.2) × 10^6 M⊙, identified by studying the orbits of several bright stars orbiting around it (see [93,94] and the references therein). A method to determine the mass and the angular momentum of this black hole could then be to measure the periastron or apoastron shifts of some of the stars orbiting around it. Another method to estimate the black hole spin a is based on the analysis of the quasi-periodic oscillations towards Sgr A*. Recently, the analysis of data in the X-ray and IR bands has allowed some astrophysicists to find that a = 0.65 ± 0.05 [94,95]. However, there is a drawback in this approach: the periastron and apoastron shifts of orbits depend not only on the black hole parameters, but also on how stars are distributed around the black hole and on the mass density profile of the dark matter possibly present in the region surrounding the black hole. It is possible to understand the difficulty of the measurement by noting that the difference between the periastron shift of the S2 star (the closest one to the black hole at the center of our galaxy) induced by a Schwarzschild black hole and that induced by a Kerr black hole with spin parameter a = 1 (and the same mass as the Schwarzschild one) is only about 10 µas (for the dependence of the periastron shift on the black hole spin orientation, see [96]). Then, even if one succeeded in measuring the periastron shift of the closest star to the central black hole, it would be unlikely that the black hole angular momentum could be derived from it. The goal could anyway be achieved by measuring the periastron shifts of many stars orbiting around the center of the Galaxy. The measurement of the periastron shift could, in turn, also give an estimate of the parameters of the dark matter concentration expected to lie towards the Sgr A* region [97], as well as allow tests of different modifications of the general theory of relativity [98-100] (see also [101] for the constraints on R^n theories from Solar System data). However, this is anything but easy [102].
An important step forward in this direction has been provided recently by near-infrared astrometric observations of many stars around Sgr A* with a precision of about 170 µas in position and 0.07 mas yr^−1 in velocity [103]. A further improvement, hopefully in the near future, would make possible the direct detection of relativistic effects in the orbits of stars around the central black hole. Retro-lensing images of bright stars lensed by the black hole at the galactic center might give an alternative method to estimate the Sgr A* black hole parameters. Even though it is in general difficult to calculate retro-lensing images, since this requires integrating the light trajectories with high precision, these calculations can be done numerically not only for a Schwarzschild black hole, but also for Kerr and Reissner-Nordström black holes. As discussed in several papers (see, e.g., [104] and the references therein), one finds that the shape of the retro-lensing image depends on the black hole spin (see Figure 9), so that, in principle, a single sufficiently precise observation of the retro-lensing image of a star could allow one to unambiguously estimate the parameters of the black hole in Sgr A*. It is possible to show that the electric charge of a Reissner-Nordström black hole can also be obtained [105]. In fact, although the formation of a Reissner-Nordström black hole may be problematic, charged black holes are objects of intensive investigation, and the black hole charge can be estimated by using the size of the retro-lensing images that could be revealed by future astrometric missions. The shape of the retro-lensing (or shadow) image depends in fact also on the electric charge of the black hole, and it becomes smaller as the electric charge increases. The mirage size difference between the extremally charged black hole and the Schwarzschild black hole case is about 30%, and in the case of the black hole in Sgr A*, the typical shadow angular sizes are about 52 µas for the Schwarzschild case and about 40 µas for a maximally charged Reissner-Nordström black hole. Therefore, a charged black hole could, in principle, be distinguished from a Schwarzschild black hole with RADIOASTRON, at least if its charge is close to the maximal value. We also mention that the black hole spin gives rise to chromatic effects as well (while for non-rotating lenses, the gravitational lensing effect is always achromatic), making one side of the image bluer than the other [104]. Can we really hope to observe these retro-lensing images towards Sgr A*? Despite what one could think, we are not so far from this goal. The successor of the Hubble Space Telescope, the James Webb Space Telescope (JWST), scheduled for launch in October 2018, has the sensitivity to observe the retro-lensing image of the S2 star produced by the black hole at the galactic center with an exposure time of about thirty hours. In Figure 10, we show the magnification (upper panel) and the magnitude (bottom panel) light curves (in K band) of the retro-lensing image of the S2 star produced by the black hole at the galactic center (see also [106]). Unfortunately, JWST does not have the angular resolution necessary to provide information about the shape of the retro-lensing image. The required angular resolution could be reached with the next generation of radio interferometers.
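As a quick consistency check on the quoted figures, the angular diameter of the Schwarzschild shadow is 2√27 GM/(c²D). The short sketch below evaluates it for the mass quoted above, taking a Galactic-center distance of about 8.3 kpc, which is an assumption not stated in this passage; the result comes out close to the ~52 µas mentioned here.

```python
import math

# Physical constants (SI)
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
M_sun = 1.989e30     # kg
pc = 3.0857e16       # m

# Assumed values: the mass is the one quoted in the text; the distance to the
# Galactic center (~8.3 kpc) is an assumption, not stated in this passage.
M = 4.2e6 * M_sun
D = 8.3e3 * pc

# Angular diameter of the photon-capture "shadow" of a Schwarzschild black hole:
# theta = 2 * sqrt(27) * G M / (c^2 D)
theta_rad = 2.0 * math.sqrt(27.0) * G * M / (c**2 * D)
theta_muas = theta_rad / (math.pi / 180.0 / 3600.0 / 1e6)  # radians -> microarcseconds

print(f"Schwarzschild shadow diameter for Sgr A*: {theta_muas:.0f} microarcseconds")
```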
In fact, the diameter of the retro-lensing image around the central black hole should be about 30 µas, and already in 2008, Doeleman and his collaborators [107] managed to achieve an angular resolution of about 37 µas, very close to the required one, by combining different radio telescopes interferometrically with a baseline of about 4500 km. Progress in this field is so fast that it is not hard to believe this aim can be reached in the near future by, e.g., the EHT (Event Horizon Telescope) project, or by the planned Russian space observatory, Millimetron (the Spectrum-M project), or by combined observations with different interferometers, such as the Very Large Array (VLA) and ALMA (Atacama Large Millimeter Array). (Figure 10, lower panel: light curve in K-band magnitude of the two retro-lensing images, adapted from [106]; the standard interstellar absorption coefficient towards the Galactic center has been assumed.) Conclusions In the paper, we have discussed the various scales at which gravitational lensing manifests itself and which may lead us to obtain valuable information about a great variety of astronomical issues, ranging from the star distribution in the Milky Way, the study of stellar atmospheres and the discovery of exoplanets in the Milky Way and in nearby galaxies, to the study of far away galaxies, galaxy clusters and black holes. Gravitational lensing, in particular in the strong and weak lensing regime, may also allow scientists to answer, in the near future, fundamental questions in cosmology related to the nature of dark matter, why the Universe is accelerating and what is the nature of the source responsible for the acceleration, which physicists refer to as dark energy.
10,966.2
2016-03-14T00:00:00.000
[ "Physics" ]
Impacts of Climate Change on Food Production and On the Agricultural Environment Despite the enormous advances in our ability to manage the natural world, we have reached the 21st century in awesome ignorance of what is likely to unfold in terms of both the climate changes and the human activities that affect the environment and the responses of the Earth to these stimuli. Globally the prospects of increasing the gross cultivated area are limited by the decease of economically attractive sites for large-scale irrigation and drainage projects. Therefore, increase in food production will necessarily rely on a more accurate application of the crop water requirements on the one hand, and modernization and improvement of irrigation and drainage systems on the other hand. These issues have to be analysed in light of the expected impacts of climate change and environmental sustainability. The present Editorial analyses the relevant aspects of these issues in light of the need to increase food production and for sustainable agricultural environment. Introduction Irrigated agriculture is expected to play a major role in reaching the broader development objectives of achieving food security and improvements in the quality of life, while conserving the environment, in both the developed and developing countries. Especially as we are faced with the prospect of global population growth from almost 6 billion today to at least 8 billion by 2025 [1]. In this context, the prospects of increasing the gross cultivated area, in both the developed and developing countries, are limited by the dwindling number of economically attractive sites for new large-scale irrigation and drainage projects. Therefore, any increase in agricultural production will necessarily rely largely on a more accurate estimation of crop water requirements on the one hand, and on major improvements in the operation, management and performance of existing irrigation and drainage systems, on the other. Concerning agricultural development, most of the world's 270 million ha of irrigated land and 130 million ha of rainfed land with drainage facilities were developed on a step-by-step basis over the centuries. In many of the systems structures have aged or are deteriorating. Added to this, the systems have to withstand the pressures of changing needs, demands and social and economic evolution. Consequently, the infrastructure in most irrigated and drained areas needs to be renewed or even replaced and thus redesigned and rebuilt, in order to achieve improved sustainable production. This process depends on a number of common and well-coordinated f a c t o r s , such as new and advanced technology, environmental protection, institutional strengthening, economic and financial assessment, research thrust and human resource development. All the above factors and constraints compel decision-makers to review the strengths and weaknesses of current trends in irrigation and drainage and rethink technology, institutional and financial patterns, research thrust and manpower policy, so that service levels and system efficiency can be improved in a sustainable manner [2]. Food Production and Agricultural Environment Over the last forty years, the irrigation has been a major contributor to the growth of food and fiber supply for a global population that has more than doubled, from 3 to over 6 billion people. Global irrigated area increased by around 2% a year in the 1960s and 1970s, slowing down to around 1% in the 1980s, and lower still in the 1990s. 
Between 1965 and 1995, the world's irrigated land grew from 150 to 260 million ha. Nowadays it is increasing at a very slow rate because of the significant slowdown in new investments, combined with the loss of irrigated areas due to salinization and urban encroachment. Notwithstanding these achievements, today the majority of agricultural land (1.1 billion ha) still has no water management system. In this context, it is expected that 90% of the increase in food production will have to come from existing cultivated land and only 10% from conversion from other uses. In the rainfed areas with no water management systems, some improvements can be achieved with water harvesting and watershed management. However, in no way can the cultivated area with no water management contribute significantly to the required increase in food production. For this reason, the share of irrigated and drained areas in food production will have to increase. This can be achieved either by installing irrigation or drainage facilities in the areas without a system or by improving and modernizing existing systems. The International Commission on Irrigation and Drainage (ICID) estimates that within the next 25 years, this process may result in a shift of the contribution to the total food production to around 30% for the areas with no water management system, 50% for the areas with an irrigation system and 20% for the rainfed areas with a drainage system [3]. Climate Change Scenarios Scenarios are "internally-consistent pictures of a plausible future climate" [4]. These scenarios can be classified into three groups:
- hypothetical scenarios;
- climate scenarios based on General Circulation Models (GCMs);
- scenarios based on the reconstruction of warm periods in the past (paleo-climatic reconstruction).
The plethora of literature on this topic has recently been summarized by the Intergovernmental Panel on Climate Change [5]. The scenarios of the second group have been widely utilized to reconstruct seasonal conditions of the change in temperature, precipitation and potential evapotranspiration at basin scale over the next century. GCMs are complex three-dimensional computer-based models of the atmospheric circulation, which provide details of changes in regional climates for any part of the Earth. Until recently, the standard approach has been to run the model with a nominal "pre-industrial" atmospheric carbon dioxide (CO2) concentration (the control run) and then to rerun the model with doubled (or sometimes quadrupled) CO2 (the perturbed run). This approach is known as "the equilibrium response prediction". The more recent and advanced GCMs are nowadays able to take into account the gradual increase in the CO2 concentration through the perturbed run. However, current results are not sufficiently reliable. Planning and Design of Irrigation and Drainage Systems under Climate Change Uncertainties as to how the climate will change and how irrigation and drainage systems will have to adapt to these changes are challenges that planners and designers will have to cope with. In view of these uncertainties, planners and designers need guidance as to when the prospect of climate change should be embodied and factored into the planning and design process.
If climate change is recognized as a major planning issue (first step), the second step in the process would consist of predicting the impacts of climate change on the region's irrigated or drained area. The third step involves the formulation of alternative plans, consisting of a system of structural and/or non-structural measures and hedging strategies that address, among other concerns, the projected consequences of climate change. Non-structural measures that might be considered include modification of management practices and of regulatory and pricing policies. Evaluation of the alternatives, in the fourth step, would be based on the most likely conditions expected to exist in the future with and without the plan [6]. The final step in the process involves comparing the alternatives and selecting a recommended development plan. The main factors that might influence the worth of incorporating climate change into the analysis are the level of planning (local, national, international), the reliability of GCMs, the hydrologic conditions, and the time horizon of the plan or life of the project [7-9].
- Most of the world's irrigation and drainage facilities were developed on a step-by-step basis over the centuries and were designed for a long life (50 years or more), on the assumption that climatic conditions would not change in the future. This will not be so in the years to come, due to global warming and the greenhouse effect. Therefore, engineers and decision-makers need to systematically review planning principles, design criteria, operating rules, contingency plans and water management policies.
- Possible impacts of climate variability that may affect planning principles and design criteria include changes in temperature, precipitation and runoff patterns, sea level rise, and flooding of coastal irrigated and rainfed lands.
- Uncertainties as to how the climate will change and how irrigation and drainage systems will have to adapt to these changes are issues that water authorities are compelled to cope with. The challenge is to identify short-term strategies to face long-term uncertainties.
- The planning and design process needs to be sufficiently flexible to incorporate consideration of and responses to many possible climate impacts. The main factors that will influence the worth of incorporating climate change into the process are the level of planning, the reliability of the forecasting models, the hydrological conditions and the time horizon of the plan or the life of the project.
- The development of a comprehensive approach that integrates all these factors into irrigation and drainage project selection requires further research on the processes governing climate change, the impacts of increased atmospheric carbon dioxide on vegetation and runoff, the effect of climate variables on water demand for irrigation and the impacts of climate on infrastructure performance.
2,116.8
2020-12-21T00:00:00.000
[ "Environmental Science", "Economics" ]
Differentiating between Affine and Perspective-Based Models for the Geometry of Visual Space Based on Judgments of the Interior Angles of Squares This paper attempts to differentiate between two models of visual space. One model suggests that visual space is a simple affine transformation of physical space. The other proposes that it is a transformation of physical space via the laws of perspective. The present paper reports two experiments in which participants are asked to judge the size of the interior angles of squares at five different distances from the participant. The perspective-based model predicts that the angles within each square on the side nearest to the participant should seem smaller than those on the far side. The simple affine model under our conditions predicts that the perceived size of the angles of each square should remain 90°. Results of both experiments were most consistent with the perspective-based model. The angles of each square on the near side were estimated to be significantly smaller than the angles on the far side for all five squares in both experiments. In addition, the sum of the estimated size of the four angles of each square declined with increasing distance from the participant to the square and was less than 360° for all but the nearest square. Introduction Space perception is one of the oldest and most deeply investigated areas of perceptual psychology. The vast majority of empirical research on space perception is unidimensional in nature. For example, one set of studies might look at size perception of only frontally oriented targets, another looks at only flat targets, oriented in-depth, and yet another only at egocentric distance judgments. In addition, for over a century researchers have examined the effects of judgment method, instructions, and the meaning given to concepts like size and size constancy. (See [1,2] for meta-analyses of work concerning the direct estimation of spatial metrics, size-constancy research, and the role of instructions.) Only a relatively small subset of space perception research has attempted to describe the geometry of visual space as a whole. Researchers have suggested a number of geometries for visual space ( [3] provides a review). For example, Gibson's [4,5] doctrine of realism implies that visual space should be strictly Euclidean because our perceptions ought to match Euclidean physical reality. Thomas Reid [6] suggested that visual space follows a spherical geometry; a result supported by some recent philosophers ( [7] (Chapter 3), provides a review and interpretation). Hoffman and Dodwell [8,9] believe it displays the properties of a Lie transformation group. Drösler [10][11][12] thought it was a Cayley-Klein geometry. Visual Space as an Affine Transformation One recent model for visual space treats it as the product of a simple affine transformation of physical space. The in-depth dimension of visual space is perceptually compressed (or occasionally expanded) compared to the frontal dimension of visual space (and compared with physical space). However, after the transformation, visual space is still thought to be essentially Euclidean; parallel lines remain parallel and collinearity is preserved. For example, Wagner [33,44] pounded stakes randomly in a large, flat, grassy field, and asked observers to judge distances between pairs of stakes and angles formed by stake triplets using four different psychophysical methods. 
He found that stimulus orientation strongly affected judgments in the same way for all judgment methods. For stimuli the same physical distance apart, between-stake distances receding away from the observer in depth were seen to be half as large on average as those oriented frontally with respect to the observer. Angle judgments showed a similar pattern. For angles of the same physical size, those whose open ends faced either toward or away from the observer appeared to expand perceptually, while those whose open ends faced to the right or left (the observer looks across the legs of the angle) appeared to contract perceptually. Accordingly, the judged angle is larger than the physical angle for angles that face toward or away from the observer, while the judged angle is smaller than the physical angle for angles facing off to the side. Wagner [33,44] applied 12 candidate metrics to describe these data and found that two of them fit judgments much better than the others. The first was a simple Affine Contraction model. In this model, the observer is placed at the origin of a Euclidean plane, with the x-axis corresponding to the left-right frontal dimension and the y-axis corresponding to the observer's in-depth dimension. According to the model, the frontal dimension is accurately perceived, while the in-depth dimension is perceptually compressed. After the transformation, the space is still Euclidean. This leads to the following formula to describe the relationship between perceived distance, s′, and the physical coordinates of the two end points (x1, y1) and (x2, y2): s′ = √[(x1 − x2)² + (c(y1 − y2))²] (1) where c reflects the degree of compression of the in-depth dimension of visual space. Wagner found that all judgment methods displayed very similar amounts of compression, and on average c = 0.45. In other words, a physical stimulus oriented in-depth seemed to be less than half as large as the same physical stimulus oriented frontally. Using a formula from Riemannian geometry, the same model was applied to the angle data, and it yielded a similar degree of compression across all methods that corresponded closely to the value of c obtained with distance judgments. For angle judgments, on average c = 0.48. Wagner and Feldman [45] extended this work to three dimensions, under both light and dark viewing conditions. (See [3] for details.) Under full-cue conditions, the compression parameter averaged c = 0.52 for distance judgments and c = 0.62 for angle judgments. The degree of compression was even more extreme under reduced cue conditions, with c = 0.35 for distance judgments and c = 0.32 for angle judgments. Wagner discussed a second model, called the Vector Contraction model. In this model, distances in visual space can be decomposed into frontal and in-depth components and the in-depth component is compressed in visual space. Unlike the simple Affine Contraction model, parallels are not preserved. The Vector Contraction model produced similar degrees of compression and fit the data slightly better than the Affine Contraction model. In subsequent years, numerous papers have suggested that visual space displays an affine transformed structure [3,46-63]. However, these studies show that visual space is not as simple as the affine model suggests since the amount of compression in the in-depth dimension of visual space varies considerably from one study to another.
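To make the size of this compression concrete, the short sketch below evaluates Equation (1) for two stake pairs with the same physical separation, one frontal and one in depth, using the average value c = 0.45 reported above; the 10 m separation and the coordinate values are arbitrary illustrative choices.

```python
import math

def perceived_distance(p1, p2, c=0.45):
    """Affine Contraction model, Equation (1): the in-depth (y) difference is
    scaled by the compression parameter c before taking the Euclidean distance."""
    x1, y1 = p1
    x2, y2 = p2
    return math.hypot(x1 - x2, c * (y1 - y2))

# Two stake pairs, each 10 m apart: one oriented frontally (along x),
# one receding in depth (along y) from the same starting point.
frontal = perceived_distance((0.0, 5.0), (10.0, 5.0))
in_depth = perceived_distance((0.0, 5.0), (0.0, 15.0))

print(f"frontal interval : {frontal:.2f} perceived units")   # 10.00
print(f"in-depth interval: {in_depth:.2f} perceived units")  # 4.50
print(f"in-depth / frontal ratio: {in_depth / frontal:.2f}") # 0.45, i.e., less than half
```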
The critical variable determining the degree of compression in the in-depth dimension appears to be distance. Wagner and Gambino [34] performed a meta-analysis of past studies that examined this phenomenon and concluded that, for stimuli nearer than 1 m away, the in-depth dimension of visual space appears to expand relative to the frontal dimension (c > 1); however, for stimuli more than 1 m away from the observer, the in-depth dimension of visual space is compressed relative to the frontal dimension (c < 1). The compression parameter quickly declines as distance to the stimulus increases, but the rate of change slows beyond 7 m from the observer, reaching an apparent asymptote at about c = 0.5 for more distant stimuli. In addition, the compression parameter for visual space is smaller under monocular and reduced-cue conditions. Wagner and Gambino also confirmed this pattern directly through an experiment that systematically measured the size of the compression parameter as a function of distance to the stimulus. Wagner and Gambino suggested that the pattern of compression in the in-depth dimension as a function of distance is similar to the ratio of in-depth to frontal visual angles of stimuli, but is not as extreme as this ratio, implying that observers are incapable of fully ignoring size information provided by cues to depth. If the compression parameter varies as a function of distance, Equation (1) no longer corresponds to an affine-transformed space since length proportions of line intervals and parallelism of physical space will generally no longer be preserved after the transformation. However, since the actual function relating compression to distance is not well defined, it is not possible to make concrete predictions about the exact structure of visual space based upon this more complicated model. On the other hand, a simple affine model (with a fixed compression parameter) makes clear experimental predictions in the current experimental context. Thus, to simplify the comparison with perspective-based models, the current work contrasts the predictions of the simple affine model with those of a perspective-based approach. (Having said this, although we are focusing on the simple affine model in the paper, we believe the specific experimental layout used in the present experiment should produce the same experimental predictions for the model with variable compression as the simple affine model.) The present study looks at judgments of the interior angles of squares. A simple affine model makes a number of predictions about these judgments. First of all, most affine models assume that visual space is Euclidean after the transformation; parallels are preserved, as is collinearity. The sum of the perceived angles of a square should be 2π or 360 • . Secondly, if the sides of a physical square were parallel to the frontal (x-axis) and in-depth (y-axis) axes of visual space, then the affine transformation should simply transform the perceived square into a rectangle. The ratio of the in-depth to frontal dimensions of the perceived rectangle would depend on the distance to the object from the observer, but at all distances the perceived object should look rectangular. For this reason, in an affine transformed space, each of the interior angles of the rectangle should continue to be perceived to be 90 • . (Of course, this prediction is only possible for squares with sides oriented parallel to the observer's frontal and medial planes. 
If the squares were oriented from any other perspective, the affine model would predict changes in apparent angle. That is why we orient the squares as we do.) In addition, the affine model predicts that, for a given rectangle, the angles nearer to the observer should seem perceptually equal to the angles that are more distant from the observer. As previously mentioned, Wagner [33,44] and Wagner and Feldman [45] had two models for visual space that worked, the Affine Contraction model and the Vector Contraction model. Wagner could not differentiate between these models with his data. Since the two models seem so similar, the Vector Contraction model has received little attention. However, the two models predict different perceptual responses to the aforementioned square. The Vector Contraction model predicts that the near angles of the square would be perceptually compressed, that is, be less than 90 • , and that the far angles of the square would expand, that is, be greater than 90 • . The four interior angles of the quadrilateral would still sum up to 360 • (under the assumption of a Euclidean Vector transformation). Since the Vector Contraction model makes predictions similar to perspective-based models in the current experimental context, the present work focuses on the clear contrast between the predictions of the simple affine model and those of the perspective-based approach. Linear Perspective-Based Models for Visual Space Recently, the philosopher Hatfield [41][42][43] has advanced a theory for visual space deriving from his careful phenomenological observations. Hatfield began from a simple phenomenological observation: straight stretches of railway tracks, sidewalks, streets, and hallways appear to converge with distance. This commonplace observation had not been well described by some of the major visual theorists. In different ways, both Gibson [4] and Rock ([64] (pp. 562-563); [65] (pp. 339-349); [66] (pp. 254-265)) assimilated this phenomenon to the perspective projection of a three-dimensional scene onto a two-dimensional plane, that is, to ordinary linear perspective. A geometrical structure created by ordinary linear perspective does not, however, fit the phenomenal experience of looking down the railway tracks, sidewalk, street, or hallway. For although the tracks (and the edges of the sidewalk, etc.) appear to converge, they do not appear to be located in a two-dimensional plane that is perpindicular to the line of sight (and located somewhere in front of the observer), as specified by ordinary linear perspective. Rather, the tracks converge as they run away in depth. The narrower and narrower tracks are phenomenally experienced as being farther and farther away. Thus, a geometrical structure is needed that both converges and includes phenomenal depth, or the third dimension. Hatfield cast about for existing models that would account for this fact. He was not attracted to non-Euclidean models, as these often assume, as did Luneburg, that the non-Euclidean physical space is of constant curvature, which may be a desirable assumption for descriptions of physical space but need not and may not obtain for visual space (see also [3] (pp. 31-36)). He was not attracted to affine theories. Such models have the disadvantage of not preserving visual direction. That is, as space is compressed, visual directions from the observer should become more widely separated (subtend a wider angle in a horizontal plane at eye-level) than the corresponding physical directions. 
But visual direction is perceived with great accuracy. Although it can be slightly modified by eye occlusion, eye position, and ocular dominance, it is generally consistent with Hering's law of visual direction [67]. Hatfield also noted that, phenomenally, the floor in the hallway and the sidewalk in front of the observer appear to rise with distance. (This had been noted by the philosopher Slomann [68] and more recently by the psychologists Ooi and He [69].) Hatfield sought a geometrical structure that would capture the convergence of railway tracks in depth and the rising of the floor, while preserving visual direction. Hatfield [41] devised a model that is related to linear perspective. Visual space is contracted in relation to physical space along the lines of sight, as illustrated in Figure 1. The diagram represents a horizontal plane that bisects the eyes of an upright observer located at P. The outer solid lines represent the physical walls in a rectangular hallway. The lines from P to points C-J are lines of sight. The dotted lines AC_Φ and BH_Φ represent the location of the walls as experienced in visual space, and the dotted lines C_Φ-D_Φ and so on represent perpendicular cross-sections. The walls appear to converge in depth (and the floor to rise, if we take the diagram to represent a sagittal or median plane). Visual direction is preserved. The phenomenal space is contracted with respect to physical space. This contraction is greater in the in-depth dimension than it is for planes situated perpendicular to the line of sight; that is, the phenomenal counterpart to segment EG is more compressed than is the counterpart to segment GH. The model is related to linear perspective in that physical locations are projected along lines of sight. It is a three-dimensional to (contracted) three-dimensional projection. A two-dimensional perspective projection is obtained by considering the scene as projected onto a plane that cuts the lines of sight EP, FP, etc. perpendicularly with respect to the central line of sight, PJ. In the perspective projection onto the plane, the sides of the hallway would converge, but this convergence is contained in the plane of projection and so does not recede in depth. Considering these observations suggests that for moderate and far distances (more than about 1 m away from the observer), visual experience does not exhibit phenomenal size constancy. The word "phenomenal" in phenomenal size constancy means the sizes that objects appear to have, rather than the sizes that they are judged to have when observers are asked to estimate objective physical size. As Boring [70] noted long ago, our experience of objects at a distance is paradoxical. On the one hand, in appearance, objects at a distance look smaller than the same objects near at hand. On the other hand, we can often tell from how they look what their objective sizes are (to some degree of accuracy). Railway tracks appear phenomenally to converge, but we also believe that they are parallel. The present investigation focuses on spatial appearances, rather than on judgments taken under instructions to report the objective physical properties of things. The situation in which an observer would experience full phenomenal constancy is also represented in Figure 1. If an observer experienced full phenomenal size constancy, the walls of the hallway would appear where they are, physically. That is, phenomenal segment E_Φ G_Φ would appear at the location where physical segment EG is found.
Accordingly, there would be no phenomenal contraction of the hallway and its sides would not seem to converge (and similarly for the railway tracks). Hatfield observed that, phenomenally, contraction is a regular and pervasive feature of human visual experience. The contraction shown in Figure 1 accords with the size-distance invariance hypothesis (SDIH). This hypothesis, which is well confirmed empirically [2,3], is represented in the figure under the assumption that phenomenal visual angles coincide with physical visual angles (that is, that visual direction is accurately perceived). The SDIH does not by itself predict what size an object will appear to have. Rather, it specifies that the apparent size of an object is a product of the object's registered or perceived visual angle and its registered or perceived distance. The terms registered and perceived distinguish values that may be (a) nonconsciously registered and processed by the visual system as opposed to (b) being phenomenally available in consciousness [71]. If distance and visual angle are accurately registered, then full phenomenal constancy results. If distance is under-represented, then a contracted visual space occurs. The more distance is under-represented, the greater the contraction. This relation is also found in Wagner's Vector Contraction Model as mentioned above. Three basic principles can be deduced from Hatfield's Perspective-Based Model. First, lines parallel to the y-axis (the depth dimension) of physical space appear to converge at a vanishing point at a finite apparent distance (v) away from the observer in visual space. Secondly, visual direction is preserved; that is, the angular deviation of a point from straight ahead in visual space is the same as in physical space. Thirdly, after the transformation, which is a form of perspectival projection along straight lines, visual space is Euclidean and follows the geometry of the size-distance invariance hypothesis. In the present context, this means that (as in Figure 1) squares in physical space would, in visual space, contract to trapezoids whose angles sum to 360°. This latter assumption is not required by the theory, but it fit Hatfield's intuition and made the theory more mathematically tractable. We retain this assumption for now. Based upon these three assumptions, it is possible to derive equations that show how coordinates in physical space transform into coordinates in visual space. (See Appendix A for this derivation.) For a point in physical space with coordinates (x, y, z) (width, depth, height) the coordinates in visual space (x′, y′, z′) after the perspective transformation are determined by the following equations:
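In a form consistent with the three principles just listed, and with Gilinsky's size-distance relation discussed below (perceived distance d′ = vD/(v + D)), the transformation can be written as

x′ = vx/(v + D)  (2)
y′ = vy/(v + D)  (3)
z′ = vz/(v + D)  (4)

where D = √(x² + y² + z²) is the physical distance of the point from the observer and v is the apparent distance of the vanishing point.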
Some readers may notice that these equations are not new. They closely correspond to the equations Gilinsky [40] introduced for perceived size and distance in her important Psychological Review paper. Gilinsky derived her theory from a portion of Luneburg's hyperbolic geometry model. Fry [72] and Smith [73] soon questioned whether her theory really corresponded to Luneburg's. They observed that Gilinsky's theory really assumed that visual space was Euclidean with zero curvature, rather than Luneburg's assumption of constant negative curvature. In fact, Gilinsky derives her equations in a second way based on the laws of perspective, and this derivation clearly assumes that visual space is Euclidean. Recently, Ooi and He [69] have revived interest in Gilinsky's work. They show that Gilinsky's theory should be associated with misperception of ground surface slant. As Equation (4) implies, vertical surfaces should also appear to converge to the vanishing point, and this should correspond to a perception that the ground is slanted upward. Hatfield [42,43] also made this prediction from his model. Ooi and He empirically confirmed their predictions about ground slope perception. As they predicted, the degree of error depended on the height of the observer, since a taller observer has a more distant vanishing point. The degrees of ground slope misperception are well predicted by the equations presented above. Erkelens [35-39] has also proposed a perspective-based model for visual space. Erkelens generalizes Gilinsky's model and uses a single parameter, a variable vanishing point, to account for numerous phenomena such as Hillebrand's [16] and Blumenfeld's [17] parallel alleys, and findings on the relationship between estimated size and distance. Hatfield, Erkelens, and Gilinsky's models have the odd feature of positing a bounded space that is simultaneously Euclidean. The space is bounded because visual space has a maximum distance, v. However, unlike hyperbolic or spherical spaces, which are bounded spaces by their very nature, Euclidean spaces are generally not bounded. A second thing to note is the similarity between the effects of Equation (1) with a variable compression parameter and those of Equations (2)-(4). Both predict a systematic decline in perceived size as a function of distance from the observer. A perspective transformation would affect the perceived shape (appearance) of an object that is physically square (as seen along a ground plane in front of a standing observer).
A physical square medially centered in an in-depth plane should look like an isosceles trapezoid, with the side of the figure nearest to the observer seeming longer than the side farthest away. Assuming that visual space is Euclidean after the transformation, the interior angles of this trapezoid should sum up to 2π or 360°; however, the interior angles closer to the observer should seem perceptually smaller than the interior angles that are farther away. If the space is Euclidean, each of the near angles should be smaller than 90° and each of the far angles should seem greater than 90°. Under these assumptions, if squares were placed on the ground in front of a standing observer, the observer would look down at the nearer squares and see the figure in a more frontal orientation, while distant squares would be more oriented in depth. For this reason, a perspective-based transformation would have a greater effect on the perceived shape of distant figures than near ones, and the effects of perspective on the perceived size of distant squares should be more pronounced than for near ones. Mathematical simulations using Excel confirm this conclusion. In the simulation, squares placed on the ground at various distances from the observer were transformed into visual space using Equations (2)-(4), and the interior angles of the squares after the transformation were determined. In the simulation, for all squares, the interior angles closer to the observer within a given square were smaller (less than 90°) than the interior angles at the more distant end of the square (greater than 90°). The interior angles of near squares did not differ from 90° as much as the interior angles of more distant squares. The deviation of distant angles from 90° was greatest when the vanishing point distance v used in the simulation was small. (A minimal numerical version of this simulation is sketched below.) The Present Experiments The present work presents two experiments that test the predictions of the simple affine and perspective-based models for visual space, including the assumption that the transformations are Euclidean. Standing observers were asked to judge the size of the interior angles of five squares placed on the ground at various distances away and centered about the median plane. The two models of visual space make different predictions about these judgments. Simple affine models predict that the angles should still sum up to 360°, since they assume that visual space has a contracted Euclidean structure after the transformation. If the sides of the square are parallel to the frontal and in-depth dimensions relative to the observer, the affine model predicts that the 90° corner angles should remain 90° after the square is perceptually transformed into a rectangle. (Once again, this prediction is only true under the assumptions given; if the squares were viewed from some oblique angle, we would expect the size of the angles to change.) Perspective-based models also predict that the four angles of the square should sum up to 360°, under the assumption that visual space is Euclidean after the transformation. However, the angles formed with the side nearer to the observer should be slightly smaller than 90°, and the angles formed with the side farther from the observer should be slightly greater than 90°. The prediction that near angles should seem smaller than far angles differentiates perspective-based models from simple affine transformation models. The Vector Contraction model makes predictions similar to the perspective-based models.
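A minimal numerical version of that simulation can be written in a few lines of Python rather than Excel. It uses the form of Equations (2)-(4) given earlier; the vanishing-point distance (v = 20 m), the eye height (1.6 m), the square side (0.5 m) and the square positions are illustrative assumptions, not the values used in the simulation reported above. With these choices, the near angles come out a few tenths of a degree below 90° and the far angles the same amount above, and the deviation grows as the square is placed farther away, in line with the behavior described.

```python
import numpy as np

V = 20.0          # assumed apparent distance of the vanishing point (m)
EYE_HEIGHT = 1.6  # assumed eye height of a standing observer (m)

def to_visual_space(p, v=V):
    """Perspective transformation in the form of Equations (2)-(4):
    a point at physical distance D from the eye is pulled in along its
    line of sight to perceived distance v*D/(v + D)."""
    p = np.asarray(p, dtype=float)
    D = np.linalg.norm(p)
    return p * v / (v + D)

def interior_angle(corner, neighbor1, neighbor2):
    """Interior angle (degrees) at `corner`, formed by its two adjacent corners."""
    a = neighbor1 - corner
    b = neighbor2 - corner
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

side = 0.5  # square side length (m)
for near_edge in [0.5, 1.5, 2.5, 4.0, 8.0]:  # distance from observer to near edge (m)
    # Corners of a square lying on the ground, centered on the median plane.
    # Coordinates are (x = frontal, y = in-depth, z = height), origin at the eye.
    near_l = np.array([-side / 2, near_edge, -EYE_HEIGHT])
    near_r = np.array([+side / 2, near_edge, -EYE_HEIGHT])
    far_l = np.array([-side / 2, near_edge + side, -EYE_HEIGHT])
    far_r = np.array([+side / 2, near_edge + side, -EYE_HEIGHT])

    nl, nr, fl, fr = (to_visual_space(p) for p in (near_l, near_r, far_l, far_r))
    near_angle = interior_angle(nl, nr, fl)  # angle at the near-left corner
    far_angle = interior_angle(fl, fr, nl)   # angle at the far-left corner
    print(f"near edge at {near_edge:4.1f} m: "
          f"near angle = {near_angle:5.2f} deg, far angle = {far_angle:5.2f} deg")
```

Smaller assumed values of v exaggerate the effect, matching the statement above that the deviation from 90° is greatest when the vanishing-point distance is small.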
Finally, if the angles summed up to more than 360°, this would be evidence that the transformation is non-Euclidean with positive curvature (elliptic). A sum of less than 360° suggests a negative curvature (hyperbolic). Past research on spatial perception of size, distance, angle, and area has used four different instruction types to communicate the meaning of various target metric properties to participants: objective, perspective, apparent, and projective instruction types. Objective instructions ask the observer to match or estimate the actual physical size of the stimulus; if they respond accurately, all angles should be reported as 90° for the stimuli described. Perspective instructions ask observers to take into account the laws of perspective when making judgments; they might be reminded that railroad tracks appear to converge to a vanishing point and instructed to correct for this apparent shrinkage. Apparent instructions ask observers to alter a comparison until it subjectively "appears" or "looks" to be the same size as the standard. Projective instructions require observers to ignore depth information altogether and estimate the amount of their visual field subtended by the stimulus; observers might be asked to take an "artist's eye view" in which, comparing their visual experience to a mental canvas, they estimate how much of this canvas the stimulus covers. Wagner [1] performed a meta-analysis of 413 data sets of metric judgments from the direct estimation literature and subsequently performed a meta-analysis of 119 data sets from the size constancy literature [2], finding each time that apparent and objective instruction types were the most commonly used. The present research collects estimates of the sizes of angles under apparent size instructions. These instructions most closely correspond to the subjective visual space that is our target. Hatfield [43] and Granrud [74], for example, have argued that objective instructions add a cognitive component to judgments, while apparent instructions more closely correspond to phenomenal experience. Experiment 1 Experiment 1 represents our initial investigation of the perceived size of the four angles in each of five squares. Method Participants. Thirty undergraduate students participated. The Institutional Review Board approved the experiment. All participants gave their informed consent. They were allowed to withdraw at any time, but none did. Materials and experimental layout. Five squares were laid out on the floor of a corridor using 2.54 cm wide silver duct tape. The sides of each square were 0.5 m in length (external dimensions). Participants stood just behind a line on the floor. The squares were placed at various distances from the participant, one behind the other, with their centers aligned with the participant's line of sight (which was perpendicular to the line on the floor). They were oriented with two sides parallel to the participant's frontal plane and the remaining two sides parallel to the participant's median plane (oriented in-depth relative to the observer). The center of the near edge of the first square was 0.5 m from the participant, the second was 1.5 m away, the third was 2.5 m away, the fourth was 4 m away, and the fifth was 8 m away. Angle estimates were obtained using a "compass" modeled after Proffitt's [75] "visual matching device" that he used for slant estimation. A piece of cardboard was cut into a circle and half was painted black, the other half yellow.
Another piece of cardboard was cut in the shape of a half circle with a raised tab on one end of the arc. The two pieces were attached to each other loosely enough to allow the half circle to be rotated until the desired amount of yellow was showing, representing the participant's perceived angle size. A circular protractor on the back of this device allowed the experimenter to record the numeral value of the angle estimate. Participants never saw the back of the compass or the protractor, and so received no feedback about their estimates. Procedure. Each of the 30 participants performed the experiment individually. They stood just behind the line facing the five squares on the floor. Each was given the compass and told that he or she needed to adjust it in order to match the apparent size of all 20 interior angles of the squares. Half began with the nearest square, half with the most distant square. Participants made estimates working clockwise around the interior angles of each square and then went on to estimate the angles of the next square. They held the device in front of them with the wedge specifying the angle in their frontal plane. Sometimes they moved the compass slightly to the left or right of the median plane, however. After each adjustment, the experimenter retrieved the compass from the observer and used the protractor on the back to record the adjusted angle. Participants were not told the numerical value of their estimate. The compass was reset to 0 • to begin the next judgment. Participants were given instructions to judge the apparent rather than objective size of each angle. The exact instructions were: You see five squares in front of you at different distances away. Although we all know that the interior angles for each corner of the square are physically 90 degrees, they might or might not appear or "look" that size subjectively. Please adjust this compass until it matches the apparent size of each of the interior angles of the squares. Start with the bottom right angle of the nearest [or most distant] square and go clockwise around the square, and then proceed to the next square. In between each adjustment, I will take the compass from you and record your answer. Please don't turn the compass over, since we want you to rely on your subjective impressions, not the numbers on the back. Remember, we don't want you to report the actual physical size of the angle, but we want you to tell us how large each angle looks or appears. After completing their estimates, participants were thanked and debriefed about the purpose of the experiment. Results The present experiment examined estimates of the size of the interior angles of squares as a function of distance to each square and the location of angles within a square. The dependent variable was the average angle estimate and the three independent variables were the distance from the participant to the square (which we will call distance), whether an angle within a square was on the side nearer to the observer or the side more distant (which we will call near vs. far location), and whether the angle was on the right or left side of the square relative to the observer (which we will call right vs. left location). To test the effects of these independent variables on angle estimates, a three-way, repeated-measures Analysis of Variance was performed. 
Although the mean angle estimate was somewhat larger for angles on the right side of the squares (M = 86.17) than the left (M = 85.03), this difference was not significant, F(1,29) = 3.29, p = 0.08, ηp² = 0.102. In addition, the left vs. right location variable did not interact significantly with either distance to squares or near vs. far location within a square and need not be considered further. There were significant main effects on angle estimates for both distance to the square and near vs. far location. Table 1 shows means and standard errors for angle estimates as a function of distance both overall and separately for near vs. far location. Figure 2 also illustrates these effects. The figure shows the mean angle estimate as a function of distance to each square and near vs. far location of the angle within a square. Notice that mean angle estimates are always smaller for near angles than for angles at the far end of a square for all distances, a result that is consistent with the predictions of Hatfield, Erkelens, and Gilinsky's perspective models. Also notice that the deviation of angle estimates from 90° increases with increasing distance to the square. However, contrary to what any Euclidean model would predict, both near and far angle estimates are less than 90° on average, and hence sum to less than 360°. In addition, the table shows that the standard error of the estimates increases with distance, indicating that judgment precision goes down as distance to the square increases.
The repeated measures ANOVA revealed that the mean angle estimate was significantly greater for near squares and smaller for more distant squares, F(4,116) = 4.21, p = 0.003, ηp² = 0.13. In addition, mean angle estimates for angles near to the observer (M = 83.71) were significantly smaller on average than those more distant from the observer (M = 87.43) within the same square, F(1,29) = 10.00, p = 0.004, ηp² = 0.26. On the other hand, the interaction effect between distance and near vs. far location on angle estimates was not significant, F(4,116) = 0.34, p > 0.05. Euclidean geometry would predict that the angles of a quadrilateral should always sum up to 360°, while a sum of less than 360° would be consistent with a geometry of negative curvature. To examine these predictions, each participant's angle estimates for a given square were summed. Figure 3 shows the average sum of the angle estimates as a function of distance to the square. Notice that the sum of the angle estimates is always less than 360° and that this sum decreases significantly as a function of distance from the participant to the square, F(4,116) = 4.21, p = 0.003, ηp² = 0.13. Discussion The results of this experiment showed that the apparent size of the interior angles of squares does not match their physical size. In particular, interior angles nearer to the observer are perceived to be consistently and significantly smaller than angles farther away from the observer within the same square. In addition, the estimated apparent size of the interior angles significantly decreases as physical distance from the observer to the square increases. Finally, the sum of the angle estimates across the four interior angles of each square is consistently less than the 360° predicted by Euclidean geometry. The sum of the angles deviates increasingly from the Euclidean prediction as the distance from the observer to the square increases. However, Experiment 1 suffers from procedural issues that call for improvement.
First of all, only one judgment was collected for each angle and all estimates were based only on ascending adjustments that began with the compass set at 0°. Using multiple judgments per angle would allow for more precision in estimating perceived angular size. More importantly, the use of only ascending trials could have resulted in systematic response bias (either errors of expectation or errors of habituation) that could result in general under- or over-estimation of angular size. This could influence the interpretation of the data. Some might argue that visual space could still be Euclidean, but this procedure led to an error of expectation where judgments systematically undershoot the true perceptual value. If this were true, however, it should have produced a specific pattern in the data. If the space were Euclidean with a constant error, we would still expect that a decline in the estimated size of the near angles in a square should be linked to a corresponding increase in the estimated size of the more distant angles within the square. This pattern is contrary to what we found with our data. To redress possible response bias and to increase precision in angle estimates, Experiment 2 will use both ascending and descending adjustment trials. Another weakness of Experiment 1 is that observers were asked simply to estimate the "apparent size" of each angle. The undergraduates in this study may have estimated apparent angle size accurately, or they may have interpreted "apparent size" to mean something different. For example, the projective size on the retina of the squares and the angles that compose them will decline as distance increases and the angle of regard becomes less frontal. If participants interpreted the perceived size of an angle as being how much of the retina the angle takes up, their estimates would produce a pattern of results similar to what we report here. This ambiguity in the meaning of apparent size has a long history. Phenomenologists such as Hatfield [43] and empirical researchers such as Granrud [74] find that apparent length or area declines with increasing distance in accordance with perspective-based models. However, past research using apparent size instructions has not always been consistent with this pattern. Wagner [2,3] performed a meta-analysis of 125 size constancy data sets from published works over the last century. This analysis found that apparent size instructions led to accurate size constancy on average, with as many studies reporting underconstancy as those reporting overconstancy. Wagner [2] suggests that apparent size instructions might produce a range of results that average out to constancy because observers may interpret the instructions in various ways, some interpreting the instructions as requesting projective size, some perspective size, and some objective size. To make estimates less ambiguous, Experiment 2 uses instructions that ask observers to base their estimates on the apparent number of degrees that each angle spans. Directing participants to report the apparent number of degrees invites them to focus on this specific aspect of how an angle appears, whereas just asking them to report its "apparent size" might allow them to base their judgments on other aspects of the stimulus. Experiment 2 aims to clarify the meaning of the participants' judgments and to determine if the phenomena revealed in Experiment 1 are replicable. Method Participants. The thirty participants ranged from 18 to 30 years old.
Participation was voluntary, and all participants gave their informed consent. Materials and experimental layout. As in Experiment 1, five squares were laid out on the floor of a corridor using 2.54 cm wide silver duct tape. The sides of each square were 0.5 m in length (external dimensions). Participants stood just behind a line on the floor. The squares were at various distances from the participant one behind the other, with their centers aligned with the participant's line of sight. They were oriented with two sides parallel to the participant's frontal plane and the remaining two sides parallel to the participant's median plane (oriented in-depth relative to the observer). The center of the near edge of the first square was 0.5 m from the participant, the second was 1.5 m away, the third was 2.5 m away, the fourth was 4 m away, and the fifth was 8 m away. Angle estimates were obtained using the same "compass" modeled after Proffitt's [75] "visual matching device." As before, participants received no feedback about the size of their estimates. Procedure. Each of the 30 participants performed the experiment individually. They stood just behind the line facing the five squares on the floor. Each was given the compass and told that he or she needed to adjust it in order to match the apparent size of all 20 interior angles of the squares. Half began their estimation with the nearest square, half with the most distant square. They made estimates working clockwise around the interior angles of each square and then went on to estimate the angles of the next square. The participants held the device in front of them with the wedge specifying the angle in their frontal plane. Sometimes they moved the compass slightly to the left or right of the median plane, however. After each adjustment, the experimenter retrieved the compass from the observer and used the protractor on the back to record the estimate. Participants were not told the numerical value of their estimate. For each angle, participants performed both ascending trials, where the compass was reset to 0° prior to making the estimate, and descending trials, where the compass was reset to 180° prior to making the judgment. Half of the participants performed the ascending trial before the descending trial, and half performed the descending trial first. Participants were given instructions to judge the apparent angular size of the angle. The exact instructions were: You see five squares in front of you at different distances away. Although we all know that the interior angles for each corner of the square are physically 90 degrees, they might or might not appear or "look" that angular size subjectively. Please adjust this compass until it matches the apparent number of degrees of each of the interior angles of the squares. For each angle we will have you do two adjustments, one where the compass angle is initially zero and must be adjusted outward and one where the compass angle is initially 180° and must be adjusted inward. In between each adjustment, I will take the compass from you and record your estimate. Please don't turn the compass over to see the numbers, since we want you to rely on your subjective impressions, not the numbers on the back. Start with the bottom right angle of the nearest [or farthest] square and go clockwise around the square, and then proceed to the next square.
Remember, we don't want you to report the actual physical number of degrees of each angle nor the area occupied by the angle, but we want you to tell us how many degrees each angle looks or appears to span. After completing their estimates, participants were thanked and debriefed about the purpose of the experiment. Results The dependent variable was the average angle estimate and the four independent variables were the distance from the participant to the square (which we will call distance), whether an angle within a square was on the side of the square nearer to the observer or the side more distant (which we will call near vs. far location), whether the angle was on the right or left side of the square relative to the observer (which we will call right vs. left location), and whether the trial involved an ascending or descending judgment (which we will call trial). To test the effects of these independent variables on angle estimates, a four-way, repeated-measures Analysis of Variance was performed. Experiment 2 provided more precise and powerful results than the initial experiment, and more effects proved to be significant. In this experiment, both right vs. left location and trial significantly affected judgments. Once again, the mean angle estimate was somewhat larger for angles on the right side of the squares (M = 86.84, std. error = 0.83) than the left (M = 85.50, std. error = 0.70), but in the second experiment this difference in means proved to be significant, F(1,29) = 14.37, p = 0.001, ηp² = 0.33. In addition, ascending trials produced significantly smaller mean angle estimates (M = 85.22, std. error = 0.80) than descending trials (M = 87.12, std. error = 0.70), F(1,29) = 127.05, p < 0.001, ηp² = 0.81. Thus, the data show a strong error of expectation effect. In addition, there were several significant higher-order interaction effects. For example, there were significant three-way interactions between distance, left vs. right, and trial, F(4,116) = 4.04, p = 0.004, ηp² = 0.12, and between near vs. far, left vs. right, and trial, F(1,29) = 7.08, p = 0.01, ηp² = 0.19. These interactions do not bear on this study's main hypotheses and so will not be discussed further. Of more theoretical importance, there were significant main effects on angle estimates for both distance to the square and near vs. far location within a square. Table 2 shows means and standard errors for angle estimates as a function of distance both overall and separately for near vs. far location. Figure 4 also illustrates these effects. The figure shows the mean angle estimate as a function of distance to each square and near vs. far location of the angle within a square. Once again, mean angle estimates are always smaller for near angles within a square than for angles at the far end of a square for all distances, a result that is consistent with the predictions of Hatfield, Erkelens, and Gilinsky's perspective-based models. Also notice that angle estimates decline with increasing distance to the square. In addition, the table shows that the standard error of the estimates tends to increase with distance, indicating that judgment precision goes down as distance to the square increases. Standard errors also tend to be larger for near angles within each square than for far angles, indicating greater precision for far angle estimates than near angle estimates.
The repeated measures ANOVA (using the conservative Greenhouse-Geisser correction for a significant Mauchly Test of Sphericity) revealed that the mean angle estimate was significantly greater for near squares and smaller for more distant squares, F(2.36,68.56) = 18.09, p < 0.001, ηp² = 0.38. Sidak post hoc tests showed that all angle estimates differ significantly from each other as a function of distance except that angle estimates for the second square (1.5 m from the observer) do not differ significantly from those of the first (0.5 m) or third (2.5 m) squares. In addition, within the same square, mean angle estimates for angles near to the observer (M = 84.15) were significantly smaller on average than those more distant from the observer (M = 88.18), F(1,29) = 74.27, p < 0.001, ηp² = 0.72. Note that this result has very large F statistics and effect sizes. Unlike the first experiment, the interaction effect between distance and near vs. far location on angle estimates was significant (using the Greenhouse-Geisser correction for a significant Mauchly Test of Sphericity), F(2.68,77.63) = 6.51, p = 0.001, ηp² = 0.18. This small interaction effect reveals that the difference between mean near and far estimates grows somewhat with increasing distance from the observer. Once again, Euclidean geometry would predict that the sum of the angles of a quadrilateral should always sum up to 360°, while a geometry of negative curvature would predict that the angles should sum up to less than 360°. To examine these predictions, each participant's angle estimates for a given square were summed. Figure 5 shows the average sum of the angle estimates as a function of distance to the square.
Notice that the sum of the angle estimates is slightly greater than 360° for the nearest square but less than 360° for all others and that this sum decreases significantly as a function of distance from the participant to the square, F(2.36,68.56) = 18.09, p < 0.001, ηp² = 0.38. This result is consistent with a slight positive curvature to visual space very near to the observer and increasingly negative curvature with increasing distance. This pattern of change in curvature is strikingly similar to a number of previous works [76][77][78][79]. One early reader of this paper suggested an alternate interpretation of our data. The reader suggested that location of angles within the square was unimportant and that judgments of "near" and "far" angles do not differ beyond the effects of differences in their egocentric distances. In other words, the reader suggested that angle estimates simply decrease with increasing distance from the observer, and whether the angle is located on the near or far sides of the square does not matter above and beyond this effect. However, this alternative explanation is not consistent with the data.
Since the far angles are 0.5 m farther from the observer than the near angles, this alternative explanation would predict that far angle estimates should be smaller than near angle estimates, but the data show the opposite trend. Far angle estimates are consistently larger than near angle estimates. In fact, plotting mean angle estimates as a function of distance to the angle itself (instead of to the near edge of the square) would result in the top line of Figure 4 moving to the right 0.5 m. Instead of the difference in response to near vs. far angles going away, this way of displaying the data actually magnifies the differences in response to the different angle locations in a square. Discussion The results of Experiment 2 are largely consistent with those of Experiment 1. Near angles within a square are consistently estimated to be significantly smaller than far angles at all distances from the observer, in accord with perspective-based models once again. The sum of the angle estimates of a square declined with increasing distance once more. However, in the second experiment, the sum of the angles was slightly greater than 360° for the nearest square, but smaller than 360° for all other squares. Given that there was a small but significant error of expectation in Experiment 2, it is possible that the estimates of the first study that relied only on ascending judgments might be slightly too low. So, even this small discrepancy in the data is understandable. The inclusion of multiple judgments for each angle and the use of instructions that clarified that observers should judge the apparent number of degrees of each angle had a second beneficial effect. The second experiment produced more powerful results and impressive magnitudes of effect. The two experiments taken together show that we have discovered a reliable and replicable phenomenon. Near angles within a square are reliably smaller than far angles, the judged size of angles reliably decreases with increasing distance, and the sum of the angle estimates within a square is reliably lower than the Euclidean value of 360° for all but the nearest stimuli. General Discussion Both of these experiments showed that the apparent size of the interior angles of squares does not match their physical size. In particular, interior angles nearest to the observer are perceived to be consistently and significantly smaller than angles farther away from the observer within the same square. In addition, the estimated apparent size of the interior angles significantly decreases as physical distance from the observer to the square increases. Finally, the sum of the angle estimates across the four interior angles of each square is consistently less than the 360° predicted by Euclidean geometry.
The sum of the angles deviates increasingly from the Euclidean prediction as the distance from the observer to the square increases. These results can be compared to the predictions of the simple affine and perspective-based models described earlier. The data are consistent with some of these predictions; however, neither of the models can account for all of the data. The simple affine model is least consistent with our data. Affine models would predict that the perceived sum of the angles of the squares should remain 360° and that the interior angles of squares in the specific orientation we present should continue to be perceived as being 90° after the affine transformation. Angles nearer to the observer should be perceived as the same size as angles farther from the observer within a square. None of these predictions is supported by our data. The sum of the estimated size of the angles of a square is almost always less than 360°, individual angles are estimated to be almost always less than 90°, and near angles within a square are perceived as smaller than far angles. The simple affine model fails, in part, because it clings to a number of simplifying assumptions. It describes visual space using a Cartesian coordinate system and posits that the y-dimension (in-depth) of this space is perceptually compressed by the same amount no matter how far to the left or right of straight ahead a stimulus is. Yet, the visual experience of an individual is probably more akin to a polar coordinate system. Two stimuli with the same x-coordinate and differing y-coordinates are only fully seen in-depth with respect to the observer when the x-coordinate is zero, and as the x-coordinate increasingly deviates from zero, the two stimuli in this example are seen at increasingly frontal orientations relative to the observer. If visual space is truly compressed in the in-depth dimension relative to the observer, the compression at a given distance away should be conceived of as taking place along a circle centered at the observer (or perhaps more accurately, the compression takes place along the visual horopter), not in terms of arbitrary Cartesian coordinates. Accordingly, consideration should be given to Wagner's [3,33] Vector Contraction Model, which conceives of compression in these polar terms. The Vector Contraction Model predicts that the near angles of a square should be seen as smaller than the far ones; however, it would still be inconsistent with our data in predicting that the far angles should be greater than 90°. In the present study, Hatfield, Erkelens, and Gilinsky's perspective-based models have more success than the simple affine model in accounting for our data. Perspective-based models predict that the near angles within a square should be perceived to be smaller than the more distant angles within a square, just as our data show. In addition, perspective-based models predict that, under our stimulus conditions, the near angles should deviate perceptually more from 90° for far squares than for near ones, just as our data show. However, past perspective-based models have made the simplifying assumption that visual space is Euclidean after the perspective transformation. This implies that the sum of the angles of a quadrilateral should sum to 360° and that the angles most distant from the observer within a square should be seen as being greater than 90° and that they should seem increasingly large with increasing physical distance to the square.
These latter predictions are inconsistent with our data. Thus it would appear that proponents of perspective-based models are largely successful, but should question the simplifying assumption that visual space is Euclidean. The sum of the interior angles of the squares was almost always less than 360° at all distances from the observer. In addition, the sum of the angles declined with increasing distance from the observer. These observations are consistent with past work that shows that the curvature of visual space might not be constant. Ivry and Cohen [77] and Koenderink, van Doorn, and Lappin [78,79] have presented evidence that the curvature of visual space is not constant and that its curvature becomes increasingly negative with increasing distance from the observer. Our data support the idea that visual space may have increasingly negative curvature with increasing distance from the observer under the specific conditions of our study. Our results, otherwise consistent with perspective-based models, thus contribute converging confirmation that visual space exhibits negative curvature under a variety of conditions. Alternatively, our initial technique may not have eliminated all possible response biases. Angle estimates may decline with distance simply because the projective size of stimuli shrinks as distance from the observer increases; so, observers may think the angles shrink along with the stimuli's projective size. Such cognitive shrinkage would be more likely with a numeric estimation task than the matching task used here. To avoid this potential problem, the second experiment explicitly used instructions emphasizing that observers should base their estimates on the apparent number of degrees that each angle spans. In summary, the perspective-based model is better supported by our results than the affine model. However, consistent with some previous findings, perspective-based models may need to incorporate non-Euclidean transformations of physical space into their descriptions. Only systematic parametric work that describes the metric of visual space throughout the visual field will be able to offer a closer specification of the geometry or geometries of visual space in relation to various instructions and stimulus conditions. Author Contributions: M.W. and G.H. were responsible for conceptualization, visualization, methodology and models, and writing and revising this research. M.W. was responsible for data curation, formal analysis, validation, project administration, provision of resources and funding, and supervision of this research. K.C. and A.N.M. conducted the empirical part of the investigation. Conflicts of Interest: The authors declare no conflicts of interest. Appendix A This gives us the following initial relations: From Equation (A1), From Equations (A2) and (A3) we have This in turn leads to Combining Equations (A3) and (A8) yields Now, it's time to add the third dimension. If you look at the lines of perspective from the side, you see a geometrically equivalent diagram where z (height) takes the place of x (width). Thus, perceived height (z′) is found by substituting z for x into Equation (A9) to yield Now, in the situation we have described, x, y, and z can be thought of as defining coordinates of a point in a Cartesian space. So, the equations we have developed can be thought of as describing the coordinates in visual space that correspond to a location in physical space.
That is, for a position, P, in physical space defined by the coordinates x, y, z and a vanishing distance v, the coordinates of the point in visual space (P′) corresponding to P would be given by the expressions for x′, y′, and z′ above. In truth, we have assumed Euclidean geometry to get to this point, since we have used Euclidean trigonometry to do the derivation. Keeping with this assumption, the distance between the two points in visual space (x′1, y′1, z′1) and (x′2, y′2, z′2) that correspond to two points in physical space (x1, y1, z1) and (x2, y2, z2) would be d′ = √[(x′1 − x′2)² + (y′1 − y′2)² + (z′1 − z′2)²] (A14)
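As a concrete illustration of how the mapped coordinates can be used, the sketch below applies a perspective-style contraction to the four corners of one of the floor squares and computes the interior angles and their sum in the transformed space. Because Equations (A11)-(A13) are not reproduced above, the specific contraction factor v/(v + y) used here is an assumption for illustration only; the distance function implements Equation (A14). Consistent with the perspective-based prediction discussed in the General Discussion, this mapping yields near angles below 90°, far angles above 90°, and an angle sum of exactly 360°.

```python
import numpy as np

def to_visual(p, v):
    """Map a physical point p = (x, y, z) into visual-space coordinates.

    ASSUMPTION: each coordinate is contracted by v / (v + y), where y is the
    in-depth coordinate and v the vanishing distance; this stands in for
    Equations (A11)-(A13), which are not reproduced in the text above.
    """
    x, y, z = p
    s = v / (v + y)
    return np.array([x * s, y * s, z * s])

def dist(a, b):
    # Equation (A14): Euclidean distance between two visual-space points
    return np.sqrt(np.sum((a - b) ** 2))

def interior_angle(corner, p1, p2):
    """Angle (degrees) at `corner` between the edges corner->p1 and corner->p2."""
    u, w = p1 - corner, p2 - corner
    cos_a = np.dot(u, w) / (dist(corner, p1) * dist(corner, p2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# A 0.5 m square lying in the ground plane (z = 0), near edge 2.5 m away,
# centred on the line of sight; observer eye height is ignored for simplicity.
near_left, near_right = np.array([-0.25, 2.5, 0.0]), np.array([0.25, 2.5, 0.0])
far_left, far_right = np.array([-0.25, 3.0, 0.0]), np.array([0.25, 3.0, 0.0])
v = 20.0  # assumed vanishing distance (metres)

nl, nr, fl, fr = (to_visual(p, v) for p in (near_left, near_right, far_left, far_right))
angles = [interior_angle(nl, nr, fl), interior_angle(nr, nl, fr),
          interior_angle(fl, fr, nl), interior_angle(fr, fl, nr)]
print([round(a, 2) for a in angles], "sum =", round(sum(angles), 2))
```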
15,922.2
2018-06-01T00:00:00.000
[ "Mathematics" ]
BmDJ-1 Is a Key Regulator of Oxidative Modification in the Development of the Silkworm, Bombyx mori We cloned cDNA for the Bombyx mori DJ-1 protein (BmDJ-1) from the brains of larvae. BmDJ-1 is composed of 190 amino acids and encoded by 672 nucleotides. Northern blot analysis showed that BmDJ-1 is transcribed as a 756-bp mRNA and has one isoform. Reverse transcriptase (RT)-PCR experiments revealed that BmDJ-1 was present in the brain, fatbody, Malpighian tubule, ovary and testis but present in only low amounts in the silkgland and hemocyte of day 4 fifth instar larvae. Immunological analysis demonstrated the presence of BmDJ-1 in the brain, midgut, fatbody, Malpighian tubule, testis and ovary from the larvae to the adult. We found that BmDJ-1 has a unique expression pattern through the fifth instar larval to adult developmental stage. We assessed the anti-oxidative function of BmDJ-1 using rotenone (ROT) in day 3 fifth instar larvae. Administration of ROT to day 3 fifth instar larvae, together with exogenous (BmNPV-BmDJ-1 infection for 4 days in advance) BmDJ-1, produced significantly lower 24-h mortality in BmDJ-1 groups than in the control. 2D-PAGE revealed an isoelectric point (pI) shift to an acidic form for BmDJ-1 in BmN4 cells upon ROT stimulus. Among the factors examined for their effects on expression level of BmDJ-1 in the hemolymph, nitric oxide (NO) concentration was identified based on dramatic developmental stage-dependent changes. Administration of isosorbide dinitrate (ISDN), which is an NO donor, to BmN4 cells produced increased expression of BmDJ-1 compared to the control. These results suggest that BmDJ-1 might control oxidative stress in the cell due to NO and serve as a development modulation factor in B. mori. Introduction The protein DJ-1 is ubiquitously expressed in cells and it is highly conserved across a wide variety of organisms, showing moderate sequence identity with heat shock protein 31 (HSP31) chaperones and ThiJ/PfpI cysteine proteases [1]. Mutated forms of DJ-1 are known to cause early onset autosomal recessive juvenile Parkinson's disease (PD), and many studies have demonstrated a neuro-protective role of DJ-1. DJ-1, which is encoded by PARK7, is a multi-functional protein that plays roles in chaperoning, RNA-binding, SUMOylation, apoptosis, and protease activity [2]. Additionally, DJ-1 is induced by oxidative modification and is rapidly oxidized at position Cys 106 [3]. Oxidative modification leads to mitochondrial damage in cultured cells exposed to 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), 6-hydroxydopamine (6-OHDA), paraquat (PQ), and rotenone (ROT), which inhibit complex I of the mitochondrial electron transfer chain [4]. These compounds enhance production of reactive oxygen species (ROS) and reduce production of ATP, resulting in mitochondrial dysfunction [5]. DJ-1 seems to directly scavenge free radicals from mitochondria in response to these oxidative stresses. MPTP, 6-OHDA, PQ, and ROT are used to produce PD models in rats and Drosophila and to analyze the pathology of PD [6,7]. DJ-1 has a dimer structure, and the L166P mutation produces structural perturbation that causes the protein to be ubiquitinated and susceptible to degradation by the 26S proteasome, significantly reducing its half-life in vivo [8,9]. L166P DJ-1 forms unstable dimers with disrupted protein folding and function [10]. The C106A mutation results in a loss-of-function of DJ-1 protease and chaperone activity [11,12].
However, the precise pathology due to mutations and species-specific biological functions of DJ-1 remain unclear. The silkworm, Bombyx mori, a Lepidopteran insect, has been utilized as a model system for basic science research because of its well-characterized genome, availability of various genetic mutants, and the development of transgenic, RNAi, and microarray technologies [13,14,15,16,17]. The complete silkworm genome has approximately 18,510 genes, including a substantial number of mammalian orthologs [13,16]. In the present study, we cloned the silkworm B. mori DJ-1 ortholog (BmDJ-1), clarified its expression pattern during development, and examined its anti-oxidative function. BmDJ-1 is a newly identified member of the DJ-1 family and is a growth-associated protein that is altered with development in B. mori. Molecular cloning of BmDJ-1 Amplifying BmDJ-1 by RT-PCR with 5′ RACE using gene-specific primers from B. mori larval brain cDNA produced an 86-bp product. The Kozak consensus sequence AAAATGAAG [18] was found to be present at the site of translation initiation determined using NetStart software [19]. Therefore, we determined that the cDNA encoded a putative 5′-untranslated sequence of 95 bp, an ATG start site, and an open reading frame (ORF) at position 96 extending to position 668. The deduced ORF of BmDJ-1 was composed of 672 nucleotides comprising 190 amino acids, had a molecular weight of 20,113 Da, and a putative isoelectric point (pI) of 5.15. The nucleotide sequence reported in this paper has been submitted to the GenBank/DDBJ SAKURA data bank under Accession No. AB281053. A computer search of the SMART database (http://smart.embl-heidelberg.de/) revealed that BmDJ-1 contained a DJ-1_PfpI domain at position 31T-173T. We identified the location of the BmDJ-1 gene in scaffold 2995719-2998746 of chromosome 23 at the splitting of 5 blocks by linkage mapping 28 chromosomes by SNP markers [20]. BmDJ-1 mRNA is expressed in various tissues in fifth instar larvae Northern blot analysis revealed that there is a single transcription product for BmDJ-1 with a size of 756 bp (Fig. 2A). Specificity of antibody against BmDJ-1 We examined the utility of anti-BmDJ-1 antibodies raised against the recombinant Xpress-tagged BmDJ-1 to identify BmDJ-1. Anti-BmDJ-1 antibody reacted with both recombinant BmDJ-1 protein as a 25-kDa band and BmDJ-1 in the cell and tissue lysate from B. mori as a 20-kDa band. In contrast, BmDJ-1 antibodies did not recognize recombinant carotenoid binding protein (CBP) from B. mori tagged with GST [21] or HEK 293 cell lysate (Fig. 3, lanes 1 and 4). The molecular weight of the recombinant BmDJ-1 protein (Fig. 3, lanes 2 and 3) was slightly greater than the endogenous BmDJ-1 protein (Fig. 3, lanes 5 and 6), excluding the possibility of non-specific binding to the Xpress tag. Identification of developmental stage and tissue-specific expression patterns of BmDJ-1 by immunoblotting Distribution of BmDJ-1 expression by developmental stage and tissue is shown in Figure 4. Whole body expression is roughly equal for all larval instars, pupae, and adults (Fig. 4A and S1A, lanes 1-7). Moreover, equal amounts of BmDJ-1 are found in the brains of fifth instar larvae, pupae, and adults, but it is slightly increased in larvae (Fig. 4B and S1B, lanes 8-10). To determine the distribution pattern of BmDJ-1, we studied tissues (midgut, fatbody, Malpighian tubule, ovary, and testis; Fig. 4C and S1C) from day 0 fifth instar larvae to adults by immunoblotting.
BmDJ-1 was expressed in the larval through adult developmental stages in these tissues, but expression was low in day 1 pupae (fatbody, Malpighian tubule and ovary; Fig. 4C, panels b, c, e, lane 14). Expression levels increased with pupal stadium from day 0 to 4 (Fig. 4C, panels f-h) and high levels of BmDJ-1 expression were also identified in the testis during these developmental stages (Fig. 4C, panel q). Therefore, BmDJ-1 showed a unique day-to-day expression pattern from day 0 fifth instar larvae to the adult developmental stages. The pI of BmDJ-1 shifted acidic by ROT stimulation Treatment of BmN4 cells with 50 µM ROT produced a shift in the pI to acidic, as shown on 2D-PAGE and immunoblotting (Fig. 5). BmDJ-1 overexpression in larvae causes resistance to ROT ROT was used to produce an oxidative stress in order to examine the effect of exogenous BmDJ-1 protein. We determined the lethal dose (LD) of ROT for day 3 fifth instar larvae of 10.1 µg/g (LD50; 95% CI, 6.02-17.4) (Fig. 6). Based on computer simulations of reactivity using SAS software, we determined the optimal ROT concentration for further testing of the protective effects of BmDJ-1 to be 20 µg/g. We also confirmed virus-derived BmDJ-1 expression levels in the fatbodies of several insects after 1 day (24 h) and 4 days (day 4 fifth instar larvae) by RT-PCR and after 4 days by immunoblotting. Expression of blank-vector recombinant virus was detected as a 300-bp band, while virus-derived BmDJ-1 expression was detected as an 850-bp band. The BmDJ-1 expression was absent at 24 h but was detected at 96 h (Fig. 7A). Virus-derived BmDJ-1 protein was expressed at about 2-fold greater levels than in non-infected groups after 4 days, and BmDJ-1 protein expression in blank virus-infected control groups was significantly decreased (Fig. 7B and S2). Expression of BmDJ-1 and NO concentration The expression pattern of BmDJ-1 was tissue-specific, reflecting the unique responses to oxidative stress. We examined some factors that might affect expression. NO concentration in the hemolymph was found to fluctuate from the fifth instar larva to adult (Fig. 8A), with high levels for day 0 and 6 fifth instar larvae and adults and gradually increasing NO concentration in the pupal stages. To test whether NO affects the expression of BmDJ-1, BmN4 cells were treated with 100 µM ISDN as an NO donor for 16 h. BmDJ-1 was detected in each sample by SDS-PAGE and immunoblotting; both the NO concentration (Fig. 8B) and BmDJ-1 expression (Fig. 8C and S3) increased compared to the control (0.1% ethanol). Discussion Throughout its evolutionary history, DJ-1 shows a highly conserved amino acid sequence. Characterization of the B. mori variant, BmDJ-1, by cDNA cloning from the brains of the fifth instar larvae shows the presence of Cys and Leu, which are key residues for the function of DJ-1. On a phylogenetic tree of DJ-1 proteins, two orthologs of D. melanogaster, DJ-1a and DJ-1b, and BmDJ-1 were placed in distinct clusters. D. melanogaster DJ-1a is most highly expressed in the testis from the pupal stages to adult, and DJ-1b is expressed in almost all tissues from embryo to adult. Loss-of-function DJ-1b mutant flies are sensitive to oxidative modification from H2O2 and paraquat, although the role played by DJ-1a remains unclear [22]. Thus, these two D. melanogaster DJ-1s appear to have distinct functions. In contrast, BmDJ-1 exists as a single isoform based on the single 756-bp transcript and only one band for BmDJ-1 on northern blot assay.
The EST database (SilkBase; http://morus.ab.a.u-tokyo.ac.jp/cgi-bin/index.cgi) shows two distinct EST clones (data not shown). While BmDJ-1 may exist as several kinds of splice variants, this could not be clarified in this study. BmDJ-1 demonstrated resistance to oxidative stress by ROT DJ-1 has been reported to play a role in anti-oxidative stress by several independent groups. We confirmed that BmDJ-1 changes to an acidic form that is affected by ROT treatment in BmN4 cells (Fig. 5), indicating a response to oxidative stress. In exogenous tests of BmDJ-1 with ROT, the mortality rate of individuals with BmDJ-1 is significantly decreased in the presence of ROT treatment, while the control groups remain extremely sensitive. It has been reported that the start of protein synthesis for BmNPV is 24 h after infection and that the protein expression level peaks at 96 h [23]. Endogenous protein synthesis stopped at 24 h. Immunoblotting at 96 h showed that virus-derived BmDJ-1 protein expression was significantly increased and BmDJ-1 protein expression in the virus-infected control group was significantly decreased. Our findings of virus-derived BmDJ-1 expression after 96 h corroborate those results and suggest that BmDJ-1 overexpression improves the survival of silkworm larvae treated with ROT. BmDJ-1 expression controlled with NO BmDJ-1 showed a tissue-specific expression pattern that indicates unique responses to oxidative stress. We found that NO was an oxidative stressor in B. mori that could be modulated by BmDJ-1. The BmDJ-1 expression pattern in tissues in this study suggested that BmDJ-1 expression correlates to the hemolymph NO concentration, which showed day-to-day fluctuation from fifth instar larvae to adult (Figs. 4C and 8A). Moreover, the expression of BmDJ-1 was increased and the pI shifted acidic due to exposure to an NO donor (data not shown). These results showed that BmDJ-1 was oxidized and its expression was regulated by NO. Choi et al. [24] reported that the nitric oxide synthase (NOS) gene in B. mori shows the highest expression in the Malpighian tubule in day 7 fifth instar larvae, suggesting that NO might be related to B. mori metamorphosis. Inoue et al. [25] reported that administration of ISDN, an NO donor, to the beetle Homoderus mellyi Parry rapidly accelerates pupation. Conversely, the administration of carnitine, which suppresses apoptosis of cells in larval beetles, extended the larval developmental period and generated huge adult beetles. These observations implicate NO in the mechanism of metamorphosis as an apoptosis initiator, though the underlying process remains unclear. In B. mori, apoptosis is the principal mechanism for dynamic remodeling of the body structure during metamorphosis. Apoptosis mainly occurs during the pupal developmental period, during which the restructuring produces the adult body. In our observation, the increased expression levels of BmDJ-1 occur in the pupal developmental stage, which coincides with apoptosis and the apparent melting of the body. BmDJ-1 might be involved in the elimination of NO in metamorphosis. Although the DJ-1 protein acts as a controller of caspase activation and alters its own expression level in the apoptotic pathway [26], we cannot determine a direct relationship between BmDJ-1 and NO generation in metamorphosis based on these experiments. In future studies, we will investigate whether BmDJ-1 directly regulates NO in metamorphosis.
Ethics statement The study protocol for the experimental use of the animals was approved by the Ethics Committee of Meiji Pharmaceutical University (Approval ID 2004). Insects The hybrid strain Kinshu × Showa was supplied by Ueda-Sha Co. Ltd., Nagano, Japan. Individuals were reared on the artificial diet Silkmate 2S (NOSAN, Tsukuba, Japan) and kept at 25 °C on a 12 h light/12 h dark daily cycle. Molecular cloning of BmDJ-1 We first searched the B. mori expressed sequence tag (EST) database on KAIKOBLAST (kaikoblast.dna.affrc.go.jp) using the Drosophila melanogaster DJ-1 alpha (NM_137072) or beta (NM_143568) sequence as a query, and identified the EST clone NRPG1136, which did not overlap the 5′ end of the coding region of the B. mori DJ-1 gene (BmDJ-1). The entire coding sequence was determined using total RNA extracted from the brains of day 3 fifth instar larvae by an RNeasy mini kit (Qiagen, Valencia, CA, USA). DNase-treated total RNA was processed for cDNA synthesis using oligo(dT)12-18 primers and SuperScript II reverse transcriptase (Invitrogen), and cDNA was amplified by PCR using Pfu Turbo DNA polymerase (Stratagene, La Jolla, CA, USA) and the primers 5′-TCAAGAACAATGAGCAAGTCTGCG-3′ and 5′-TAATATTAGTACTGCGAGATTAAC-3′. The amplified products were cloned into a cloning vector p3T (MoBiTec, Göttingen, Germany). The purified vectors were processed for sequencing by the dideoxynucleotide chain termination method on an ABI PRISM 3100 Genetic Analyzer (Applied Biosystems, Tokyo, Japan). The cDNA clone, NRPG1136, was provided by the National Bioresource Project (MEXT, Japan). 5′-Rapid Amplification of cDNA ends The 5′-terminal cDNA ends were amplified using the SMART RACE cDNA Amplification kit (Clontech, Mountain View, CA, USA) according to the supplier's instructions with primers 5′-GCCAGCTAGAGTAACTGTTACCCC-3′ and 5′-AGTCACTTGCCTTGAGCACAGCAC-3′. The amplified products were cloned into a p3T vector for sequencing. Recombinant protein The ORFs of BmDJ-1 were amplified by PCR using PfuTurbo DNA polymerase and primers 5′-AGCAAGTCTGCGTTAGTGAT-3′ and 5′-TTAGTACTGCGAGATTAACA-3′. Products were cloned into a prokaryotic expression vector pTrcHis-TOPO with a TOPO TA cloning kit (Invitrogen) and expressed in E. coli as fusion proteins with N-terminal Xpress tags. The nucleotide sequence was confirmed by sequencing. Recombinant BmDJ-1 expressed in E. coli was purified with HIS-Select spin columns (Sigma, St. Louis, MO, USA) according to methods described previously [27]. A recombinant β-galactosidase (LacZ) fragment tagged with Xpress included in the TOPO TA cloning kit was used as a negative control. Immunology The antibody for immunoblotting was raised in Japanese white rabbits by subcutaneous injection of the recombinant BmDJ-1 and Ribi adjuvant system (Corixa Co., Hamilton, MT, USA) mixture. The serum was stored at −80 °C. Immunoblotting To identify the presence of BmDJ-1 in different tissues and cells, protein samples (5 µg) were separated on SDS-PAGE, transferred to nitrocellulose membranes using the method of Towbin et al. [28], and immunoblotted using rabbit anti-BmDJ-1 antibody and goat anti-rabbit IgG-conjugated horseradish peroxidase (HRP). The membranes were developed using a chemiluminescent substrate (Pierce, Rockford, IL, USA). The tissue distribution of BmDJ-1 was determined for the midgut, fatbody, Malpighian tubule, testis, and ovary from day 0 fifth instar larvae, pupae, and adults.
Each tissue sample was run on the same gel, which was also loaded with 20 ng of recombinant Xpress-tagged BmDJ-1. The distribution of BmDJ-1 from first to fifth instar larvae, pupae and adult on the whole body and brain of larvae, pupae and adults was also determined. All tissues were homogenized in RIPA lysis buffer composed of 50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1% Nonidet P40, 0.5% sodium deoxycholate, 0.1% SDS, and a cocktail of protease inhibitors (Sigma), followed by centrifugation at 10,000× g for 15 min. The protein concentration was determined by a Bradford assay kit (Pierce). Samples of supernatant (5 µg of protein) were separated by SDS-PAGE, transferred to nitrocellulose membranes, and immunoblotted with anti-BmDJ-1 antibody following the procedure described above. Northern blot analysis Total RNA derived from the ovaries of day 4 fifth instar larvae was used. Total RNA (12 µg) was separated on a 1.5% agarose-6% formaldehyde gel and transferred to a nylon membrane. DIG-labeled probes were synthesized using the PCR DIG probe synthesis kit (Roche Diagnostics, Mannheim, Germany) according to the supplier's instructions with the primers 5′-CATTTGTGCTGCTTCCATAGCGTT-3′ and 5′-CATTCCCTTTTCGACTTGATCGGC-3′. After pre-hybridization, the membranes were hybridized with the DIG-labeled probes at 54 °C overnight. The specific reaction was visualized on Kodak X-OMAT AR X-ray films by the DIG chemiluminescence detection kit (Roche Diagnostics). RT-PCR Total RNA derived from the brain, midgut, fatbody, Malpighian tubule, testis, ovary, and hemocyte of day 4 fifth instar larvae was DNase-treated and processed for cDNA synthesis using oligo(dT)12-18 primers and SuperScript II reverse transcriptase (Invitrogen). cDNA was amplified by PCR using Taq DNA polymerase (Qiagen) and the primers 5′-CATTTGTGCTGCTTCCATAGCGTT-3′ and 5′-CATTCCCTTTTCGACTTGATCGGC-3′. Amplification was carried out for 30 cycles of denaturing for 40 s at 94 °C, annealing for 40 s at 50 °C and extension for 90 s at 72 °C. Amplified PCR products were separated by agarose gel, stained with ethidium bromide, and visualized under UV light. Transfer plasmid and generation of recombinant virus The ORF sequence of BmDJ-1 was amplified by PCR from brain cDNA as described above, with the primers 5′-GGGGTACCCCATGAGCAAGTCTGCGTTAGTGAT-3′ and 5′-GGAATTCCAATATTAGTACTGCGAGATTAAC-3′. The amplified region was digested with EcoRI and KpnI and cloned into the baculovirus transfer pBK283 vector. Blank pBK283 vector was used as a control. For generating recombinant BmNPV, we used a Bom-EX kit (NOSAN) according to the supplier's instructions. The recombinant BmNPV nucleotide sequence was confirmed by sequencing using the primers 5′-ACTGTCGACAAGCTCTGTCC-3′ and 5′-ACAACGCACAGAATCTAACGC-3′. Purified recombinant virus was titrated by plaque assay, and high titer stocks (2 × 10⁷ pfu/ml) were used for infecting larvae. Determination of LD50 of day 4 fifth instar larvae by ROT stimulation To determine the LD50 of day 3 fifth instar larvae by ROT (Sigma) stimulation, we injected ROT intrahemocoelically into larvae weighing 3.5 to 4.0 g using a disposable syringe (Terumo, Tokyo, Japan) with a 30G needle. ROT was dissolved in DMSO (prepared immediately before use and stored in the dark) at 0, 1.25, 2.5, 5.0, 10, 20, 40, and 80 µg/g and injected into larvae in a volume of 10 µl/g body weight.
The number of dead silkworms after 24 h was counted and the mortality rate (%) = (X/Y) × 100 was calculated, where X = dead larvae in the group and Y = total larvae in the group. The mortality rates were analyzed with Probit analyses [29] using the Probit Analysis option in the SAS 8.2 software package (SAS Institute Japan Ltd., Tokyo, Japan) to calculate the LD50. Overexpression of BmDJ-1 in larvae and exposure to ROT oxidative stimuli A 50 µl aliquot of BmNPV-BmDJ-1 or BmNPV-blank-vector (1 × 10⁵ pfu/larva) was injected intrahemocoelically into day 0 fifth instar larvae using a disposable syringe (Terumo) with a 30G needle. Blank-vector recombinant virus was injected as a control. After rearing for 4 days on an artificial diet, larvae were examined for overexpression of BmDJ-1 to assess protection from oxidative stress due to ROT. Virus-derived BmDJ-1 expression level was measured in the dissected fatbodies of several insects after 4 days (day 3 fifth instar) by immunoblotting. We surmised the ROT dose that would be most effective in the experimental model with exogenous BmDJ-1 based on a report of the administration of exogenous DJ-1 [30]. ROT, prepared at 20 µg/g (LD70), was injected into three groups of 10 to 20 larvae in a volume of 10 µl/g body weight. The number of dead silkworms after 24 h was counted and the mortality rate (%) was calculated. Data were analyzed with the multiple comparison test followed by the Cochran-Armitage test for dose-response relationship and Steel's (non-parametric) multiple comparison test. P < 0.05 was considered significant. All statistical analyses were carried out using SAS system 8.2 software. Three trials were performed in each experiment. BmN4 cells treated with ROT, two-dimensional (2D) gel electrophoresis, and detection of BmDJ-1 BmN4 cells (2 × 10⁶) were grown on 6-well Falcon plates (BD Biosciences, Franklin Lakes, NJ, USA) and washed twice with PBS followed by 3 h of treatment with TC-100 medium containing ROT (50 µM) dissolved in 0.1% DMSO or 0.1% DMSO as a control in the dark. To prepare total protein extracts for two-dimensional (2D) gel electrophoretic analysis, the cells were sonicated in rehydration buffer comprising 8 M urea, 2% CHAPS, 0.5% carrier ampholytes at pH 3-10, 20 mM dithiothreitol, 0.002% bromophenol blue, and a cocktail of protease inhibitors. Urea-soluble proteins were separated by isoelectric focusing (IEF) using the ZOOM IPGRunner system loaded with an immobilized pH 3-10 gradient strip (Invitrogen), as described previously [16]. After the first dimension of IEF, the protein was separated in the second dimension on a 4-12% NuPAGE polyacrylamide gel (Invitrogen). For detection of BmDJ-1, the gel was transferred to a polyvinylidene difluoride (PVDF) membrane for immunoblotting. All incubation steps were carried out at 25 °C in the dark. Three trials were performed for each experiment. Collection of samples and measurement of NO levels Hemolymph (250 µl) was collected from day 0 fifth instar larvae, pupae and adults, or from medium to measure the concentration of NO. To remove proteins, samples were mixed with methanol (2:1 by volume), followed by centrifugation at 10,000× g for 20 min, and NO levels in the supernatants were measured using an NOx analyzer (ENO-20; Eicom, Kyoto, Japan), according to the manual.
BmN4 cell treatment with ISDN and detection of BmDJ-1 BmN4 cells (2 × 10⁶) were grown on 6-well Falcon plates (BD Biosciences) and washed twice with PBS, followed by 16 h of treatment with TC-100 medium containing 100 µM isosorbide dinitrate (ISDN; prepared immediately prior to use and kept in the dark) dissolved in 0.1% ethanol, or with 0.1% ethanol alone as a control. Total protein extracts were prepared for immunoblotting and culture medium was collected for NO analysis. Statistical analysis was performed using Student's t-test. P < 0.05 was considered significant. All statistical analyses were carried out using SAS system 8.2 software. Three trials were performed for each experiment.
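As a small companion sketch, the Student's t-test used for the NO measurements could be reproduced along the following lines; the values are hypothetical placeholders and scipy is used here instead of SAS.

```python
"""Minimal sketch of a Student's t-test on hypothetical NO measurements (three trials)."""
import numpy as np
from scipy import stats

no_isdn = np.array([12.1, 13.4, 11.8])      # hypothetical NO levels, ISDN-treated medium
no_ctrl = np.array([8.2, 7.9, 8.8])         # hypothetical NO levels, ethanol control

t, p = stats.ttest_ind(no_isdn, no_ctrl)
print(f"t = {t:.2f}, P = {p:.3f}; significant at P < 0.05: {p < 0.05}")
```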
Identifying phase-varying periodic behaviour in conservative nonlinear systems Nonlinear normal modes (NNMs) are a widely used tool for studying nonlinear mechanical systems. The most commonly observed NNMs are synchronous (i.e. single-mode, in-phase and anti-phase NNMs). Additionally, asynchronous NNMs in the form of out-of-unison motion, where the underlying linear modes have a phase difference of 90°, have also been observed. This paper extends these concepts to consider general asynchronous NNMs, where the modes exhibit a phase difference that is not necessarily equal to 90°. A single-mass, 2 d.f. model is firstly used to demonstrate that the out-of-unison NNMs evolve to general asynchronous NNMs with the breaking of the geometrically orthogonal structure of the system. Analytical analysis further reveals that, along with the breaking of the orthogonality, the out-of-unison NNM branches evolve into branches which exhibit amplitude-dependent phase relationships. These NNM branches are introduced here and termed phase-varying backbone curves. To explore this further, a model of a cable, with a support near one end, is used to demonstrate the existence of phase-varying backbone curves (and corresponding general asynchronous NNMs) in a common engineering structure. Introduction Nonlinearities in mechanical structures can cause a wide variety of complex dynamic phenomena, such as modal interactions, localization, bifurcations and instability [1][2][3]. As such, identifying the existence of these phenomena, and addressing the difficulties they pose to the design, performance analysis and prediction of the behaviour of nonlinear systems can be challenging. For example, cables can exhibit complex nonlinear behaviours [4][5][6][7]; this renders cable-supported structures, e.g. cable-stayed bridges and floating offshore systems, susceptible to unwanted vibrations during operation [8,9]. To understand these nonlinear phenomena, linear theory is often insufficient, or even invalid, since the nonlinear behaviours can significantly differ from the linearized ones. In this context, extending linear theory to account for nonlinear behaviours is needed. To address this, the concept of a nonlinear normal mode (NNM) was defined by Rosenberg [10][11][12] as an in-unison, or synchronous, periodic resonance for a conservative nonlinear system. It requires that the displacements of components all reach their extreme values and pass through equilibrium points simultaneously during periodic resonances. As such, these NNMs may be represented in terms of their initial displacements, where the initial velocities are set to zero. Such synchronous NNMs include single-mode, in-phase and anti-phase responses, which have been observed in a variety of nonlinear systems, e.g. the two-mass oscillators [13][14][15], beam structures [16,17], rotor systems [18] and cable structures [19]. As well as synchronous (single-mode, in-phase and anti-phase) resonances, nonlinear systems can exhibit asynchronous resonances, where the displacements of components do not reach their extreme values and pass through equilibrium points simultaneously while remaining periodic. To account for such asynchronous resonances, an extension to Rosenberg's definition was proposed in [20,21], where an NNM is defined as a (non-necessarily synchronous) periodic response of the conservative system; this definition of an NNM is considered throughout this paper. 
One example of an asynchronous NNM is the whirling motion observed in cable structures [6,19] and rotor systems [18]. This whirling motion is an out-of-unison response in which one coordinate reaches an extremum while another passes through the equilibrium point. As such, for a two-mode system, this motion may be represented by the initial displacement of one coordinate and the initial velocity of the other (while the respective initial velocity and displacement are simultaneously zero). Considering the phase of the underlying coordinates, in-phase and anti-phase motions are characterized by a phase difference of 0° or 180°, while out-of-unison motion is characterized by a phase difference of ±90°. Besides this special case of out-of-unison motion, to the best of our knowledge, a more general asynchronous NNM, characterized by a general phase relationship (i.e. non-necessarily 90° out-of-phase), has not been identified in the literature. If such general asynchronous NNMs exist, they represent a large family of nonlinear responses that may affect the performance of the nonlinear systems, and may potentially be exploited. Such behaviours may be considered to be more complex than synchronous or out-of-unison motions, as they may include responses where no velocities or displacements are simultaneously zero. Their existence also indicates that the phase relationships between modal coordinates are crucial parameters to be determined when computing the NNMs. Here, we hypothesize and then demonstrate the existence of general asynchronous NNMs using the simple motivating example of a two-mode 1 single-mass oscillator. In addition to this being a new solution type, this observation highlights the need to consider phase as a free variable (rather than constrained to 0, ±90° or 180°) when searching for NNMs using analytical techniques, and using numerical approaches that rely on systematic investigation of the initial conditions. Section 2 first revisits the concept of synchronous and asynchronous NNMs, distinguished by the phase relationships between the modal coordinates of a two-mode system with 1 : 1 internal resonance. A numerical method is then used to demonstrate that the hypothesized general asynchronous NNMs can exist for the single-mass oscillator. As with the cable structure and in-line two-mass oscillator studied in [19], numerical results show that the single-mass model possesses out-of-unison NNMs when it has a geometrically orthogonal layout. With the breaking of the orthogonal configuration, the out-of-unison NNMs can evolve to more general asynchronous cases, where the phase difference is neither 0°, 180° nor ±90°. This demonstrates that such motions may exist in a nonlinear mechanical structure. Based on this finding, in §3, an analytical technique is used to further quantify the characteristics of asynchronous NNMs in the single-mass model. Analytical phase relationships of the backbone curves, i.e. the branches of NNMs, verify the results found in §2, and further reveal that, with the breaking of orthogonal configurations, the out-of-unison backbone curve evolves to one that consists of asynchronous NNMs whose phase relationships vary along the backbone curve. This class of backbone curve is defined here as a phase-varying backbone curve, and represents the loci of general asynchronous NNMs.
Using insights obtained from the single-mass model, the existence of phase-varying backbone curves in a cable model (a common mechanical structure that exhibits out-of-unison backbone curves [19]) is then considered in §4. A reduced-order cable model is first derived and verified, using an existing analytical model [8]. The addition of a support near the cable root is then considered-this resembles the engineering practice of installing external devices to suppress vibrations [6,22,23]. This support breaks the orthogonal configuration of the cable and, as with the single-mass model, causes the out-of-unison (i.e. whirling) motions to evolve into general asynchronous motions on a phase-varying backbone curve. Finally, conclusions are presented in §5. 2. Nonlinear normal modes of a two-mode system with 1 : 1 internal resonance In this section, the NNMs of a two-mode system with 1 : 1 internal resonance are first revisited, where an NNM is defined as a periodic response of a conservative system [20,21]. NNMs can be divided into synchronous and asynchronous solutions. Examples of the synchronous responses are shown in figure 1a,b, where the lines represent the oscillations over time in linear modal space (q 1 and q 2 denote the first and second linear modal coordinates, respectively). The extrema of q 1 and q 2 are marked by circles and crosses, respectively. Figure 1a represents the simplest type of NNM solution-single-mode responses that contain contributions from only q 1 or q 2 . Figure 1b shows the synchronous mixed-mode NNMs that, in contrast to the singlemode NNMs in figure 1a, arise from modal interactions and consist of contributions from both linear modal coordinates, q 1 and q 2 . For synchronous mixed-mode NNMs, the modal coordinates reach extrema and equilibrium points simultaneously, as shown in figure 1b. Such NNMs can be characterized using the phase relationship, θ d , between the fundamental components of q 1 and q 2 , which can be either in-phase (θ d = 0) or anti-phase (θ d = π ); note that this phase relationship is undefined for the single-mode cases in figure 1a. They can also be represented by their initial Figure 2. A schematic diagram of a single-mass two-mode system. A mass, with mass value m, has x and y denoting horizontal and vertical in-plane displacements, respectively. This mass is grounded by two horizontal springs with coefficients k 1 and k 3 , and unstretched lengths L 1 and L 3 . Another spring, with coefficient k 2 and unstretched length L 2 , grounds the mass, with angle δ representing the angle between k 1 and k 2 . The orthogonal case is shown here where δ = 90 • (Online version in colour.) modal conditions at extrema-non-zero displacements with zero velocities. These synchronous cases can be seen in a variety of nonlinear systems; see [13][14][15][16][17][18][19]. For asynchronous NNMs, an example is shown in figure 1c where one modal coordinate reaches an extremum when the other passes through the equilibrium point. In this case, the linear modal coordinates have ±π/2 out-of-phase relationships (the '+' and '−' signs here denote the clockwise and anticlockwise motions, respectively). Such motions can also be characterized by the initial conditions at extrema-a non-zero displacement for one coordinate with a nonzero velocity for the other, or vice versa. This class of NNM is termed an out-of-unison NNM in [19], and includes, for example, whirling motions of cables and rotor systems [18,19]. 
The commonly observed NNM motions can, therefore, be categorized as single-mode (figure 1a, where θ d is undefined), in-phase and anti-phase (figure 1b, θ d = 0, π , respectively) and out-ofunison (figure 1c, θ d = ±π/2). It seems logical, therefore, to pose the question: Can other phase relationships exist between the linear modes of NNM responses? Such an NNM is depicted in figure 1d, and corresponds to a general asynchronous response where the phase relationship between linear modal coordinates is θ d out-of-phase (to differentiate it from these previously discussed cases, θ d = 0, π , ±π/2). Unlike the synchronous and out-of-unison NNMs, this NNM represents responses where displacements and velocities cannot simultaneously be zero. To the best of our knowledge, this more general asynchronous NNM has not been identified in the literature. To explore the existence of this general asynchronous NNM, and further characterize its features, a two-mode single-mass system, schematically shown in figure 2, is firstly considered. This example system consists of one mass, with mass value m, and has displacements x and y, denoting horizontal and vertical in-plane motions, respectively. This mass is grounded by three linear springs with coefficients k 1 , k 2 and k 3 , and with unstretched lengths L 1 , L 2 and L 3 , respectively. At equilibrium, all the springs are unstretched and springs k 1 and k 3 are lying in the x-direction, while the angle between k 1 and k 2 is denoted δ (when δ = 90 • , spring k 2 is orthogonal to springs k 1 and k 3 ). Such a system can exhibit nonlinear behaviours because of geometric nonlinearity. To investigate the nonlinear dynamic behaviours, the equations of motion can be obtained via the Euler-Lagrange equations For details of the derivation, readers can refer to appendix A. Using this two-mode model, we investigate one potential mechanism, i.e. the breaking of the orthogonality, 2 that may lead to the existence of the general asynchronous resonance. This is achieved by comparing the backbone curves, i.e. branches of NNMs, for the orthogonal and non-orthogonal cases. These backbone curves are computed using the numerical continuation software COCO [24], and hence no analytical approximation is required for the results shown in this section. (a) NNMs of the system with an orthogonal configuration First, we consider the orthogonal case, where the two-mode single-mass system has m = 1, k 1 = k 3 = 0.5, k 2 = 1.005, L 1 = L 2 = L 3 = 1 and δ = 90 • . The backbone curves of this system are shown in figure 3 in the projection of the response frequency, Ω, against the absolute displacement amplitude of the mass, X 2 + Y 2 , where X and Y are the maximum amplitudes of displacements x and y, respectively. In this region, there are two single-mode backbone curves Figure 4. Backbone curves obtained via numerical continuation for the two-mode system in figure 2 with a non-orthogonal configuration. The backbone curves are shown as solid curves in the projection of the response frequency Ω against absolute displacement of the mass, √ X 2 + Y 2 , for a system with m = 1, k 1 = k 3 = 0.5, k 2 = 1.005, L 1 = L 2 = L 3 = 1 and δ = 89.5 • ; and the branch point bifurcation is denoted as a solid dot, labelled 'BP2' . Four embedded plots, in the projection of modal coordinates, q 1 (t) against q 2 (t), represent the responses of NNMs on the corresponding backbone curves. 
The extreme displacement values of modal coordinates q 1 (t) and q 2 (t) are marked by circles and crosses, respectively, in these embedded plots. Arrows in the embedded plot linked to S ±v 1 denote clockwise and anticlockwise motions. For comparison, the backbone curves for the orthogonal case in figure 3 are shown as grey dash-dotted curves with branch point bifurcations denoted as open dots. (Online version in colour.) S 1 and S 2 ; 3 and two mixed-mode backbone curves, S + 1 and S − 1 , bifurcating from S 1 through the branch point bifurcation 'BP1' (the subscripts of S + 1 and S − 1 indicate the backbone curve from which they bifurcate, in this case from S 1 ). Note that, because of the symmetry of the configuration, S + 1 and S − 1 are superimposed in this projection. The NNMs on these single-mode and mixed-mode backbone curves exhibit synchronous resonances-see the time-parameterized responses of NNMs on these backbone curves in the embedded plots, which are analogous to those shown in figure 1a,b, respectively. Besides these synchronous backbone curves, asynchronous backbone curves, i.e. the out-of-unison backbone curves, S ±90 1 , bifurcate from S 1 through the branch point bifurcation 'BP2'. The embedded plot, near S ±90 1 in figure 3, describes the NNMs on S ±90 1 : q 1 reaches its extreme value when q 2 has a zero value and vice versa, and the arrows on the curve describe the clockwise motion (θ d = +π/2) and anticlockwise motion (θ d = −π/2), respectively. The out-of-unison NNMs in this plot are similar to those shown in figure 1c, and the out-of-unison motions reported in [19]. (b) Nonlinear normal modes of the system with a non-orthogonal configuration With δ perturbed away from 90 • , the orthogonality of the system is broken. The effect of breaking the orthogonality on the backbone curves is shown in figure 4 for the case where δ = 89.5 • , while other parameters remain unchanged. For comparison, backbone curves for the orthogonal case are also presented using dash-dotted grey curves in this figure. It can be seen that the branch point bifurcation, 'BP1', splits to generate one primary in-phase backbone curve, S + 1 , and one isolated anti-phase backbone curve, S − 1 [26]. The contribution of the first linear modal coordinate, q 1 , to the single-mode backbone curve, S 2 , increases from 0 and leads to an in-phase backbone curve, S + 2 . These three mixed-mode backbone curves still consist of synchronous NNMs-see the embedded plots of time-parameterized responses. The other bifurcation point, 'BP2', remains, and connects the anti-phase backbone curve S − 1 to S ±v 1 . Backbone curves, S ±v 1 , can be seen as evolutions from the out-of-unison backbone curves, S ±90 1 , with the breaking of orthogonality. It is shown in the embedded plot linked to the S ±v 1 curves that, similar to the out-of-unison backbone curves, the NNMs on S ±v 1 also exhibit asynchronous responses, where the arrows denote the clockwise motion (+θ d ) and anticlockwise motion (−θ d ), respectively; however, the phase relationship between the two modal coordinates, θ d , are not ±π/2, but instead are similar to the asynchronous ones shown in figure 1d. This is highlighted by the dots and crosses, shown in the embedded plots, which illustrate that the extrema and equilibria are reached at different times. This demonstrates that NNMs with a general asynchronous motion can exist. 
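As a rough illustration of the quantity that distinguishes these NNM types, the sketch below estimates the phase difference θd between the fundamental components of two modal coordinates from a simulated response; the 2 d.f. system and its coefficients are generic placeholders rather than the model of figure 2, and a true NNM would additionally require an initial condition lying on a periodic orbit (found, e.g., by continuation or shooting).

```python
"""Hedged sketch: extracting the modal phase difference theta_d from a simulated response."""
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, w1=1.0, w2=1.2, a=0.3, b=0.1):
    q1, q2, v1, v2 = z
    # generic conservative restoring forces with cubic self- and cross-coupling terms
    f1 = w1**2 * q1 + a * q1**3 + b * q1 * q2**2
    f2 = w2**2 * q2 + a * q2**3 + b * q1**2 * q2
    return [v1, v2, -f1, -f2]

# integrate from an arbitrary initial condition (non-zero displacements and one velocity)
T, n = 200.0, 2**14
sol = solve_ivp(rhs, (0, T), [0.5, 0.3, 0.0, 0.4],
                t_eval=np.linspace(0, T, n), rtol=1e-9, atol=1e-12)
q1, q2 = sol.y[0], sol.y[1]

# fundamental components from the FFT; spectral leakage makes this a rough estimate
Q1, Q2 = np.fft.rfft(q1), np.fft.rfft(q2)
k = np.argmax(np.abs(Q1[1:])) + 1                 # dominant (fundamental) bin, skipping DC
theta_d = np.angle(Q1[k]) - np.angle(Q2[k])       # phase by which q2 lags q1 at that frequency
theta_d = (theta_d + np.pi) % (2 * np.pi) - np.pi
print(f"phase difference at the fundamental: {np.degrees(theta_d):.1f} deg")
```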
The phase relationships, θ d , of NNMs on these backbone curves show amplitude-dependent characteristics; in other words, the phase relationships of NNMs vary along the backbone curves. Hence they are termed phase-varying backbone curves, denoted with the superscript '±v'. This will be discussed in detail in the next section. In this section, the periodic responses, i.e. the NNMs, of a two-mode system with 1 : 1 internal resonance were firstly reviewed, emphasizing the less studied asynchronous NNMs. A specific example of such asynchronous NNMs is the out-of-unison NNM, studied in [19], where the modal coordinates have a ±π/2 phase difference. To explore the existence of a more general case, where the NNM has a phase difference θ d ≠ 0, π, ±π/2 between linear modal coordinates, a simple two-mode system, shown in figure 2, has been considered. We found that the breaking of orthogonality can transform the out-of-unison NNMs to the more general asynchronous ones. In the next section, analytical studies are carried out to further study the dynamic characteristics of the asynchronous NNMs. Analytical analysis of the asynchronous backbone curves In this section, using the harmonic balance technique, the backbone curves of the single-mass system are found analytically and further used to characterize the asynchronous responses. To simplify this analytical study, the full model, described by equation (2.1a), is first expanded to a polynomial one using a Maclaurin expansion, and then truncated by retaining nonlinear terms up to the cubic order. The obtained equations of motion in linear modal space are

q̈₁ + ωₙ₁²q₁ + 3Ξ₁q₁² + 2Ξ₂q₁q₂ + Ξ₃q₂² + 4Ψ₁q₁³ + 3Ψ₂q₁²q₂ + 2Ψ₃q₁q₂² + Ψ₄q₂³ = 0, (3.1a)
q̈₂ + ωₙ₂²q₂ + Ξ₂q₁² + 2Ξ₃q₁q₂ + 3Ξ₄q₂² + Ψ₂q₁³ + 2Ψ₃q₁²q₂ + 3Ψ₄q₁q₂² + 4Ψ₅q₂³ = 0, (3.1b)

where ω n1 and ω n2 denote the first and second linear natural frequencies, respectively, and where Ξ 1 , Ξ 2 , . . . , Ξ 4 and Ψ 1 , Ψ 2 , . . . , Ψ 5 are quadratic and cubic nonlinear coefficients, respectively. For details of this derivation, see appendix A. To apply the harmonic balance method, 4 it is assumed that the modal displacements may be approximated by a single harmonic, expressions (3.2), whose response frequencies for the two modes are equal, i.e. ω r1 = ω r2 = Ω, which accounts for the 1 : 1 internal resonance. Following the procedure described in [16]-with the substitution of expressions (3.2) into the equations of motion (3.1) and the non-resonant terms removed-one can obtain the time-independent equations (3.3). These equations can then be used to compute the backbone curves of the two-mode system depicted in figure 2. Note that the quadratic terms, present in equations (3.1), do not lead to 1 : 1 internally resonant components [27], hence they do not appear in equations (3.3). (a) Backbone curves for systems with orthogonal configurations For orthogonal configurations of the single-mass system, one can find that Ψ 2 = Ψ 4 = 0-see appendix A for details. This further reduces equations (3.3) to equations (3.4). There are six different sets of solutions to equations (3.4), representing the backbone curves of the system. The solution U 1 = U 2 = 0 is trivial, corresponding to the case where the system has no motion. Besides this trivial case, there are two single-mode solutions: one is the backbone curve, S 1 , with U 2 = 0 and U 1 ≠ 0; the other is the backbone curve, S 2 , which has U 1 = 0 and U 2 ≠ 0. Their amplitude-frequency relationships follow from equations (3.4a) and (3.4b) with the corresponding amplitude set to zero. The NNMs on these single-mode branches, as discussed previously, are schematically shown in figure 1a.
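The harmonic-balance step described above can be sketched symbolically as follows; the cubic-truncated equations use the conservative coefficient pattern written out in equations (3.1), the quadratic terms are omitted because they do not contribute at 1:1 resonance, and the script only illustrates the projection procedure rather than reproducing the paper's algebra.

```python
"""Symbolic sketch of the single-harmonic balance projection (cf. equations (3.1)-(3.3)).

Setting the four printed projections to zero gives the time-independent equations.
"""
import sympy as sp

t = sp.symbols('t', real=True)
W, U1, U2, th = sp.symbols('Omega U_1 U_2 theta_d', positive=True)
w1, w2 = sp.symbols('omega_n1 omega_n2', positive=True)
P1, P2, P3, P4, P5 = sp.symbols('Psi_1:6')

# single-harmonic ansatz: q1 taken as the phase reference, q2 lagging by theta_d
q1 = U1 * sp.cos(W * t)
q2 = U2 * sp.cos(W * t - th)

# residuals of the cubic-truncated modal equations (conservative coefficient pattern)
r1 = sp.diff(q1, t, 2) + w1**2*q1 + 4*P1*q1**3 + 3*P2*q1**2*q2 + 2*P3*q1*q2**2 + P4*q2**3
r2 = sp.diff(q2, t, 2) + w2**2*q2 + P2*q1**3 + 2*P3*q1**2*q2 + 3*P4*q1*q2**2 + 4*P5*q2**3

def project(residual, basis):
    """Fourier coefficient of the residual on the given basis over one period."""
    return sp.simplify(sp.integrate(residual * basis, (t, 0, 2*sp.pi/W)) * W / sp.pi)

for r in (r1, r2):
    for basis in (sp.cos(W*t), sp.sin(W*t)):
        sp.pprint(sp.trigsimp(project(r, basis)))   # resonant cos/sin projections
```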
Cases where U 1 ≠ 0 and U 2 ≠ 0 are related to mixed-mode backbone curves, hence the phase relationship (equation (3.4c)) between the two linear modal coordinates must be determined before finding their amplitude-frequency relationships. Two sets of phase relationships can be found to satisfy equation (3.4c). One requires sin(θ d ) = 0, i.e. θ d = nπ (where n ∈ Z), which corresponds to the in-phase and anti-phase cases. With sin(θ d ) = 0 substituted into equations (3.4), the amplitude-frequency relationships for the in-phase and anti-phase backbone curves can be obtained. Like the single-mode backbone curves, NNMs on these mixed-mode, in-phase and anti-phase backbone curves relate to synchronous responses, schematically shown in figure 1b. Note that S ± 2 does not exist for the system considered in figure 3. The other phase relationship, satisfying equation (3.4c), has cos(θ d ) = 0, i.e. θ d = (2n + 1)π/2. This solution branch represents the out-of-unison backbone curves, S ±90, whose amplitude-frequency relationships are given in equations (3.8). NNMs on these backbone curves exhibit ±π/2 out-of-phase asynchronous resonances, whose responses are schematically shown in figure 1c. Again, we note that S ±90 2 does not exist for the system considered in figure 3. (b) Backbone curves for systems with non-orthogonal configurations For non-orthogonal configurations of the single-mass system, the nonlinear coefficients, Ψ 2 and Ψ 4 , are not necessarily equal to 0. With non-zero values of Ψ 2 and Ψ 4 , the single-mode solutions (U 2 = 0 and U 1 ≠ 0, or U 1 = 0 and U 2 ≠ 0) can no longer be achieved in equations (3.3). Hence, in this case, only mixed-mode backbone curves can be found. Similar to the orthogonal case, the phase relationship needs to be determined first by considering equation (3.3c). This can be satisfied when sin(θ d ) = 0, i.e. θ d = nπ, which corresponds to the in-phase and anti-phase backbone curves S ± 1 and S ± 2 , similar to those discussed previously. The amplitude-frequency relationships are obtained by rearranging equations (3.3a) and (3.3b), in terms of p = cos(nπ). For even n, p = +1 and this indicates the in-phase backbone curves S + 1 and S + 2 ; whereas for odd n, p = −1, representing the anti-phase backbone curves S − 1 and S − 2 . Besides the in-phase and anti-phase cases, the other phase relationship relates to a zero value of the bracketed terms in equation (3.3c), which can be rearranged as expression (3.10). This expression indicates that the phase relationship (θ d ) is dependent upon the amplitude, suggesting that the phase difference between the modal coordinates varies along the backbone curve. Here, we term the asynchronous NNM branch with an amplitude-dependent phase relationship between modal coordinates a phase-varying backbone curve. To find the expressions for this phase-varying backbone curve, the phase relationship (3.10) is substituted into equations (3.3a) and (3.3b); after some algebraic manipulation, one obtains the amplitude-frequency relationships (3.11). As previously discussed, an orthogonal configuration leads to Ψ 2 = 0 and Ψ 4 = 0. Substituting these into expressions (3.11), the amplitude-frequency relationships describing phase-varying backbone curves reduce to those describing the out-of-unison backbone curves in equations (3.8). Furthermore, with Ψ 2 = 0 and Ψ 4 = 0, the phase relationship described by expression (3.10) reduces to cos(θ d ) = 0. This phase relationship is again identical to that for the out-of-unison backbone curves.
This therefore indicates that the phase-varying backbone curve is an evolution from the out-of-unison backbone curve with the orthogonality breaking, through the phase-amplitude coupling, described by equation ( Figure 5b shows the phase relationships on these backbone curves in the projection of the response frequency, Ω, against the phase difference, θ d , between the two fundamental components of modal coordinates, q 1 and q 2 . For the orthogonal case, as expected, different NNMs on any one backbone curve share a fixed phase relationship between q 1 and q 2 , indicated by the dash-dotted straight lines in figure 5b. For the non-orthogonal case, the backbone curves, S + 1 , S − 1 and S + 2 , have fixed phase relationships; however, the phase relationships of backbone curves, S ±v 1 , vary with frequency. One branch of these phase-varying backbone curves, S +v 1 , has a phase relationship varying from θ d = π (on the branch point bifurcation on S − 1 ) to θ d = π/2, with the decrease in response frequency (along with the increase in displacement amplitude-see figure 5a). The embedded plots, labelled (i), (ii), . . . , (vi), present the time-parameterized responses of a selection of NNMs on S +v 1 . It can be seen that the NNMs evolve from an anti-phase NNM (θ d = π on the branch point bifurcation) towards a clockwise out-of-unison NNM (θ d = π/2). The other phase-varying backbone, S −v 1 , shows similar behaviours, except for having NNMs exhibiting anticlockwise motions. In this section, the harmonic balance technique has been used to find the analytical expressions of backbone curves for the single-mass system with orthogonal and non-orthogonal configurations. Analytical analysis shows that the general asynchronous backbone curve, discussed in §2, has an amplitude-dependent phase relationship between the linear modal coordinates. This backbone curve is termed a phase-varying backbone curve, and it can be seen as an evolution from the out-of-unison backbone curve through the breaking of the orthogonality. The existence of such backbone curves indicates that phase relationships between modal coordinates are crucial parameters to be determined in finding NNMs, a key implication of which is when applying the harmonic balance method numerically to compute NNMs. In the next section, numerical analysis is carried out to investigate the existence of phase-varying backbone curves in a cable model. Phase-varying backbone curves for a cable model In this section, phase-varying behaviour is investigated using a horizontal cable, taut between two fixed end points. An additional elastic support connects the cable to the ground, near one of the fixed ends, as shown in figure 6. The dynamics of the cable system are modelled based on a lumped-mass approach, similar to the method in [28]. A brief description is given here for completeness. The model is formulated by discretizing the cable into n identical elastic elements, connected in series between n + 1 nodes. The two end nodes are fixed, resulting in a total of 3(n − 1) degrees of freedom in three-dimensional space. The mass of the cable is equally distributed between the elements, and for each element half of its mass is lumped on either end. The elements are assumed to be undamped and linearly elastic. The cable has an unstretched length of L 0 , uniform density ρ, Young's modulus E, and a constant cross-section of diameter d. 
Axial stress is assumed to be uniformly distributed over the cross-sectional area, and a static axial pre-tension with a horizontal component T is applied at both cable ends. The forces acting on the cable are due to gravity and elasticity, while viscous and aerodynamic effects are neglected. An additional undamped, linearly elastic element is attached to the cable at a position z s along its span. This element lies within the cross-sectional (x-y) plane, at an angle δ from the horizontal. It has a length l, stiffness k, and is unstretched when the system is at equilibrium. A 2 d.f. nonlinear reduced-order model of the cable system, which captures its salient dynamic behaviour near the first two natural frequencies, is obtained using a force-based indirect reduction method [29]. This involves a projection of the equations of motion of the full model onto a 2 d.f. reduced basis. The reduction/projection basis consists of the first out-of-plane and the first inplane transverse mass-normalized linear modeshapes of the cable about its equilibrium position. As such, the equations of motion of the reduced-order model can be written as where f 1 and f 2 are the nonlinear restoring forces. For linear elastic finite-element models with geometric nonlinearities, the forcing functions typically take the form of quadratic and cubic polynomials [29][30][31], i.e. and f 2 (q 1 , q 2 ) = Ξ 2 q 2 1 + 2Ξ 3 q 1 q 2 + 3Ξ 4 q 2 2 + Ψ 2 q 3 1 + 2Ψ 3 q 2 1 q 2 + 3Ψ 4 q 1 q 2 2 + 4Ψ 5 q 3 2 . (4.2b) Note that linear dependencies are imposed on the coefficients in equations (4.2), such that the energy in the system is conserved [31,32], similar to equations (3.1). The linear properties in equations (4.1) can be obtained directly through an eigenanalysis of the full system. However, the coefficients of the nonlinear terms in equations (4.2) are computed in a non-intrusive manner, using a set of static solutions of the lumped-mass model. 5 The static solutions are obtained by applying a set of prescribed static loads and computing the corresponding displacements. The selected loading cases consist of scaled linear combinations of the retained modes. For each load case, the computed static displacement of the full system is then projected onto the reduced modal space. Finally, the coefficients of the nonlinear terms in equations (4.2) are estimated through regression analysis in a least-squares manner, using the modal force-modal displacement dataset. A validation of the developed lumped-mass cable model, and of the corresponding reduced-order model obtained using the indirect reduction method described above, can be found in appendix B. We now consider a 40-element, 117 d.f. cable model with the following physical parameters: L 0 = 1.5 m, d = 5 mm, ρ = 3000 kg m −3 , E = 200 GPa. The system is subjected to a static pre-tension with a horizontal component T = 100 N, and is additionally constrained by an elastic element with the following properties: l = 0.2 m, k = 10 5 N m −1 and z s = 0.15 m. Two additional support layouts are considered-one corresponds to the case when the spring is aligned with the y-axis, i.e. when δ = 90 • , and this is denoted as the orthogonal case; the other relates to the case when δ = 60 • , and this is denoted as the non-orthogonal case. The estimated parameters of either model can be found in table 1. Table 1. Values of the estimated parameters of the reduced-order model for the orthogonal case and the non-orthogonal case. curves for the single-mass system discussed in §3. 
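To make the non-intrusive coefficient-identification step concrete, here is a minimal least-squares sketch; since equation (4.2a) is not reproduced in the text, f1 is assumed to take the conservative form paired with equation (4.2b), and the modal displacement/force samples are synthetic placeholders standing in for the projected static solutions of the full cable model.

```python
"""Least-squares identification of the nonlinear modal coefficients of equations (4.2)."""
import numpy as np

def design_matrices(q1, q2):
    """Monomial regressors multiplying [Xi_1..Xi_4, Psi_1..Psi_5] in f1 and f2."""
    z = np.zeros_like(q1)
    A1 = np.column_stack([3*q1**2, 2*q1*q2, q2**2, z,
                          4*q1**3, 3*q1**2*q2, 2*q1*q2**2, q2**3, z])
    A2 = np.column_stack([z, q1**2, 2*q1*q2, 3*q2**2,
                          z, q1**3, 2*q1**2*q2, 3*q1*q2**2, 4*q2**3])
    return A1, A2

rng = np.random.default_rng(0)
q1 = rng.uniform(-0.05, 0.05, 200)           # modal displacements from the static load cases
q2 = rng.uniform(-0.05, 0.05, 200)

# Synthetic "true" coefficients; in practice f1, f2 come from the full-model static solutions.
true = np.array([0.5, -0.2, 0.1, 0.3, 2.0, -0.5, 0.8, 0.4, 1.5])
A1, A2 = design_matrices(q1, q2)
f1, f2 = A1 @ true, A2 @ true

A = np.vstack([A1, A2])                      # stack the two modal force components
b = np.concatenate([f1, f2])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
print("coefficients recovered:", np.allclose(coeffs, true))   # Xi_1..Xi_4, Psi_1..Psi_5
```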
A selection of NNMs on S −90 1 , i.e. NNMs marked by '+' signs in figure 7a,b, are presented in figure 7c,d, where the time-parameterized responses are shown in modal coordinates and physical coordinates, respectively. Because of the variation in the tension of a cable during oscillation, a non-resonant q 2 component arises from the nonlinear quadratic terms in expressions (4.2). This leads to a shift of the extrema q 1 (t) (marked by circles in figure 7c,d) along the backbone curve [25]. Nonetheless, an anticlockwise out-of-unison (θ d = −90 • ) phase relationship between q 1 and q 2 can still be seen, similar to the one in figure 1c. Likewise, similar behaviours can be expected for the other out-of-unison backbone curve, S +90 1 , except for having clockwise motions. Figure 8a shows the backbone curves for the non-orthogonal case. The corresponding phase relationships on the backbone curves are shown in figure 8b. With δ perturbed away from 90 • , it leads the single-mode backbone curves, S 1 and S 2 , to mixed-mode backbone curves, S + 1 and S − 2 , respectively. The out-of-unison backbone curves, S ±90 1 , like these for the single-mass system discussed in previous sections, evolve into phase-varying backbone curves, S ±v 1 , and bifurcate from the in-phase mixed-mode backbone curve, S + 1 . The phase-varying backbone curves show that phase relationships evolve from θ d = 0 (in-phase motions at the bifurcation) to θ d ≈ ±π/2 (nearly out-of-unison motions), as depicted in figure 8b. One example of this evolution of motions is presented in figure 8c, which describes the modal motions of NNMs marked with '+' signs Conclusion NNMs represent important tools in the analysis of nonlinear phenomena. This paper has investigated the less commonly studied asynchronous NNMs. It has been shown that out-ofunison NNMs (characterized by a ±90 • phase relationship between modal coordinates) are a special case of a more general, and previously unreported, NNM solution set where the phase relationship may assume any value. The existence of this general asynchronous NNM was first explored using a single-mass model via numerical analysis and it was demonstrated that the breaking of orthogonality causes the out-of-unison NNMs to evolve into general asynchronous NNMs (i.e. the phase difference between the modes evolves from ±90 • to a general phase difference). An analytical method was then used to find the expressions of backbone curves, i.e. branches of NNMs, of the single-mass model. The analytical phase relationship between modal coordinates revealed that, along with the breaking of orthogonality, the out-of-unison backbone curves evolve to ones on which the phase relationships exhibit amplitude-dependent characteristics. This newly identified NNM branch is defined as a phase-varying backbone curve. The existence of phase-varying backbone curves was then investigated in a cable model, through the attachment of a near-cable-end support. This support has the effect of breaking the orthogonal geometry of the cable and, as with the single-mass example, cause the out-of-unison (whirling) motions to evolve into general asynchronous motions. The existence of phase-varying backbone curves, and the accompanying general asynchronous NNMs, represents a new set of nonlinear phenomena in mechanical systems. 
These NNMs may represent significant responses in nonlinear systems that may be critical in understanding their performance (such as the evolution from whirling in a cable with non-orthogonal geometry) or that may be exploited to improve the performance of such systems. The existence of such NNMs also indicates that phase relationships between modal coordinates are crucial parameters to be determined when computing nonlinear responses, and should be carefully considered when computing nonlinear responses. These motions may be considered more complex than the more commonly observed synchronous or out-of-unison motions, as their displacement and velocity coordinates are never simultaneously zero. A key implication is that, when using the harmonic balance technique to compute the NNMs, the phase relationships, being the critical parameters describing the nonlinear phenomena (similar to the role of the harmonic amplitudes), should be seen as unknowns to be determined. Acknowledgements. We gratefully acknowledge the financial support of the EPSRC and CSC. Appendix A. Obtaining the truncated model of the single-mass system Using L i to denote the stretch or compression of springs k i , the Lagrangian of the one-mass two-mode system, schematically shown in figure 2, can be written as where m is the mass value; k 1 , k 2 and k 3 are the coefficients of linear springs with lengths L 1 , L 2 and L 3 , respectively; and δ denotes the angle between springs k 1 and k 2 . Applying the Euler-Lagrange equation, the equations of motion can be obtained as and This full model can then be expanded as polynomial equations using Maclaurin expansion, and further simplified by retaining nonlinear terms up to cubic orders. In this way, the equations of motion can be written as where M and K are the mass and linear stiffness matrices, respectively; N x is a vector of nonlinear terms; and x is a vector representing physical displacements. They can be written as M = m 0 0 m , K = k 1 + k 2 cos 2 (δ) + k 3 k 2 sin(δ) cos(δ) k 2 sin(δ) cos(δ) k 2 sin 2 (δ) (A 4) and N x = 3β 1 x 2 + 2β 2 xy + β 3 y 2 + 4γ 1 x 3 + 3γ 2 x 2 y + 2γ 3 xy 2 + γ 4 y 3 β 2 x 2 + 2β 3 xy + 3β 4 y 2 + γ 2 x 3 + 2γ 3 x 2 y + 3γ 4 xy 2 + 4γ 5 y 3 , x = where the coefficients of nonlinear terms, β 1 , β 2 , . . . , β 4 , γ 1 , γ 2 , . . . , γ 5 , are 2L 2 2 and γ 5 = cos 2 (δ)L 2 This underlying linear model can be used for linear modal analysis to find the modal parameters, allowing the truncated system (A 3) to be translated into linear modal space. This is achieved by using substitution x = Φq, where Φ is the modeshape matrix and q is a vector of linear modal coordinates, written as where the first and second columns of Φ denote the first and second linear modeshapes, respectively. After applying the linear modal transform, the equations of motion can be written and and where ω ni denotes the ith linear natural frequency, and nonlinear modal coefficients are 2 22 , 3 22 , Appendix B. Validation of lumped-mass cable model and reduced-order model The lumped-mass discretization approach as well as the subsequent reduction method described in §4 are validated by comparing the backbone curves of a reduced-order model with those obtained using an analytically derived dynamic model of a small-sag cable system [8]. The analytical model in [8] is applicable to highly stressed cables with a small weight-to-tension ratio, such that axial modal motions can be neglected. The cable considered for validation purposes has Table 2. 
Comparison between the values of the estimated model parameters, using the analytical cable model in [8], and the 2 d.f. reduced-order model (ROM) via the reduction method described in §4. the same properties as that described in §4, but the applied axial pre-tension is increased from T = 100 N to T = 200 N, in order to satisfy the aforementioned requirement. A two-mode model based on [8] was previously studied in [19], where it was shown that the whirling motion of a cable corresponds to out-of-unison resonance between its first out-of-plane and first in-plane transverse modes. The results obtained using the corresponding reduced-order model are in close quantitative and qualitative agreement with these observations, as shown in figure 9. The corresponding parameters in the equations of motion (4.1), computed using the reduction method described in §4, as well as those obtained using the analytical model, are shown in table 2.
Hybrid Attention Based Residual Network for Pansharpening : Pansharpening aims at fusing the rich spectral information of multispectral(MS) images and the spatial details of panchromatic(PAN) images to generate a fused image with both high resolutions. In general, the existing pansharpening methods suffer from the problems of spectral distortion and lack of spatial detail information, which might prevent the accuracy computation for ground object identification. To alleviate these problems, we propose a Hybrid Attention mechanism-based Residual Neural Network(HARNN) . In the proposed network, we develop an encoder attention module in the feature extraction part to better utilize the spectral and spatial features of MS and PAN images. Furthermore, the fusion attention module is designed to alleviate spectral distortion and improve contour details of the fused image. A series of ablation and contrast experiments are conducted on GF-1 and GF-2 datasets. The fusion results with less distorted pixels and more spatial details demonstrate that HARNN can implement the pansharpening task effectively, which outperforms the state-of-the-art algorithms. Introduction Remote sensing technology has played an important role in economic, political, military and other fields since the successful launch of the first human-made earth resources satellite. With the development of remote sensing technology, existing remote sensing satellites are able to obtain images with higher and higher spatial, temporal and spectral resolution [1]. However, due to the restrictions of technical conditions and hardware limitations [2], optical remote sensing satellites can only provide high-resolution PAN images and low-resolution MS images. PAN image has only one spectral channel, which means it cannot express RGB colors, and on the contrary, MS image carries high expression ability of color [3][4][5][6]. Therefore, the fusion of high spatial resolution of PAN and high spectral resolution of MS, called pansharpening, is proposed and proven to be an effective method. The existing pansharpening methods can be roughly divided into traditional fusion algorithms [7][8][9] and deep learning based fusion algorithms [10,11]. As the focus of this paper, deep learning based methods have been developed to refine the spatial resolution via substituting components [12,13], or transforming features into another vector space [14]. Although these previous works have achieved fusion accuracy to some extent, the extraction of spectral and spatial features from MS and PAN images could be further promoted to improve the spatial resolution and alleviate spectral distortion. The spectral distortions are caused by the large numerical difference between pixel values of MS and PAN images, since the surface features have discrepant value in different spectral bands. As shown in Figure 1b, which is generated by PNN, if there are spectral distortion regions in the fused image, the identification and analysis of ground objects will be affected, such as the identification of rivers, which mainly relies on color expression and spectral features of the fused image. As for the problem that the spatial resolution is not high enough [12], the accuracy of target segmentation will also be influenced by the indistinct edges of buildings and arable lands. In addition, the problem of high computational complexity leads to high hardware requirements and high time consumption [14]. 
To handle the problems of spectral distortion and low spatial resolution, we propose HARNN for the pansharpening task. The proposed method is based on ResNet with a novel hybrid attention mechanism. The network takes MS and PAN images through two feature extraction branches so as to better extract spectral and spatial detail information from both inputs. In order to extract multi-scale features from remote sensing images obtained by different satellites and to reduce the training complexity of the network, the extracted features are downsampled once [15,16] through a convolutional operation and then rescaled to the original resolution after being fused. Moreover, to ease the problem of spectral distortion and improve the spatial resolution of the fused image, an encoder attention module and a hybrid attention module are designed as parts of the network. Finally, we conduct extensive experiments on two remote sensing image datasets collected by the Gaofen-1 (GF-1) and Gaofen-2 (GF-2) satellites, and the experimental results demonstrate that the proposed method achieves promising results compared with other state-of-the-art methods. Specifically, the main contributions of this paper include: 1. A feature extraction network is designed with two branches, including an encoder attention module, to extract the spectral correlation between MS image channels and advanced texture features from PAN images; 2. A hybrid attention mechanism with truncation normalization is proposed in the feature fusion network to alleviate the problem of spectral distortion and to improve the spatial resolution simultaneously; 3. Extensive experiments are conducted to verify the effectiveness of the attention mechanism in the proposed method, which could provide a comparative baseline for related research work. The rest of this paper is organized as follows. Section 2 reviews the related work in the pansharpening field. Section 3 describes the proposed network architecture, the loss function, as well as the hybrid attention mechanism. Section 4 introduces the experimental datasets and the evaluation metrics, and presents the experimental results of different pansharpening methods. Finally, the overall conclusion of this paper is summarized in Section 5. Related Work Existing pansharpening methods fall into two main categories: traditional fusion algorithms and deep learning (DL) based methods, among which the former are divided into component substitution (CS) algorithms, multi-resolution analysis (MRA) algorithms, and sparse representation (SR) based algorithms [3][4][5][6]. Traditional Algorithms One of the most popular families of pansharpening methods is the CS-based algorithms. The basic idea of CS is to extract the spectral and spatial information of MS images by applying a pre-determined transformation [3,17]. The spatial component is then replaced with the high-resolution part generated from the PAN image, and the final result is constructed by the inverse operation. Some representative CS-based methods include the Principal Component Analysis (PCA) [7], Gram-Schmidt (GS) [18], and Intensity-Hue-Saturation (IHS) [8] algorithms, and improved IHS algorithms such as Adaptive IHS (AIHS) [19] and Nonlinear IHS (NIHS) [20]. These color-transformation-based methods are popular because of their fast transformation process and the high spatial resolution of the fused images [21].
However, due to the direct substitution of components of CS methods, though retaining the spatial details of PAN, the difference in scales of pixel values of PAN and MS images leads to spectral distortion and color deviation [8]. In 2004, Benz U C applied MRA in remote sensing data analysis for the first time [22]. Since then, many MRA-based pansharpening methods such as Wavelet Transform (WT) [23], Discrete Wavelet Transform (DWT) [24], and Laplacian pyramid method [25] have been proposed to solve the problems above. Different from CS methods, these algorithms extract the high frequency information such as spatial details of PAN and inject it into MS images through multi-resolution transformations to reconstruct a fused image of high resolution. Since MRA methods only leverage the high-frequency details of the PAN image, the consistency can be maintained in terms of color characteristics. For example, the DWT method [24] decomposes the origin PAN and MS images into high and low frequency components, and performs inverse transformation after fusing these components at different resolutions. Despite the high spectral resolution, these MRA-based methods have the disadvantage of ignoring edge information, which results in the low spatial resolution of the fused image. Apart from CS and MRA based approaches, the SR based algorithms such as [26] are designed to improve the spectral and spatial resolution of the fusion result at the same time. These SR-based algorithms use high and low resolution dictionaries to sparsely represent MS and PAN images. The maximum fusion rule is adopted to partially replace the coefficients with the sparse representation coefficients of the PAN image. Then the spatial details of sparse representation coefficients are injected into the MS image. Finally, through image reconstruction, the fused image is obtained [27]. Compared to mentioned methods, the SR-based methods alleviate spectral distortion and increase the spatial detail information, but suffer from the excessive sharpening [28]. Deep Learning Based Algorithms In recent years, DL-based methods have been introduced into image processing fields such as image fusion [29,30], object segmentation [31,32], and video recognition [33,34], and they have been proven to be effective. On the issue of pansharpening, the DL-based methods are mainly divided into two categories: single-branch neural network and dualbranch neural network [35]. As for the single-branch architecture, the PAN image is considered as another spectral band of MS image and concatenated into it. Then, the composite image is delivered into CNN modules as one input, and transformed to a higher resolution version. For example, Zhong et al. [36] proposed to combine the GS algorithm and the SRCNN model [37,38] in the super-resolution domain and perform GS transform on high resolution MS and PAN images. However, the fusion results of GS-SRCNN still suffer from spectral distortion and lack of spatial details. Creatively, Masi et al. [10] proposed the PNN model based on convolutional neural network (CNN) [39] for the first time, which is composed of three convolution layers using kernel size (9,5,5). To improve the fusion accuracy, this same lab introduced nonlinear radiation index into PNN [40], yet the fused images still contained spectral distorted pixels and unclear edges, which implies the limitation of single-branch networks. Furthermore, to improve the full-resolution performance, Vitale et al. 
[41] proposed to introduce a perceptual loss (PL) into the network training process for pansharpening. By introducing this additional loss term, the training phase is optimized and the visual perception ability of the CNN is promoted. With regard to dual-branch networks, the MS and PAN images are generally processed by two feature extraction networks separately. The extracted features are then fused through a fusion network, and finally the high-resolution image is generated. Compared to single-branch architectures, dual-branch networks are able to better extract spectral and spatial features from MS and PAN images, respectively, without influencing the spectral correlation of the MS image. Gaetano et al. proposed a two-branch deep fusion network called RSIFNN [42] in 2018. They considered that the redundant information between MS and PAN images leads to a residual mask, and treated the entire network as a residual unit. The predicted mask was then superimposed on the original MS image to obtain the fusion result, but it also suffered from spectral distortion and blurry contours. Furthermore, another two-branch fusion network named PSGAN [11] was proposed, in which the concept of Generative Adversarial Networks (GAN) [43] was introduced into the pansharpening task and the fusion accuracy was improved by the introduction of discriminators. In addition, the concept of the residual module [44] was introduced into a two-branch pansharpening network by Liu et al. [12] to make better use of the feature extraction and fusion ability of deep neural networks, but its fusion results still contain some spectrally abnormal pixels which affect the fusion quality. Besides, attention modules have been adopted in recently proposed networks to improve the ability of feature extraction and have proved to be effective in the pansharpening task [45]. As a consequence, the residual module is adopted in the proposed network, and a hybrid attention module is designed to further enhance the spatial and spectral resolution of the fused image, which will be discussed in the following section. Proposed Methods For the sake of brevity and clarity, the notations in Table 1 will be used in subsequent sections to describe the proposed network in detail. Network Architecture We propose a hybrid attention mechanism based dual-branch residual convolutional neural network called HARNN, which consists of a feature extraction network, a feature fusion network and an image reconstruction network. The feature extraction part in HARNN is split into MS and PAN feature extraction branches. Figure 2 illustrates the semantic framework of the DL-based pansharpening network, which is composed of three parts highlighted by red, green and blue boxes. In the feature extraction part (red box), the original MS image is blurred using a Gaussian blur function, downsampled by a factor of 4, and then rescaled to the initial resolution through bicubic interpolation, resulting in the blurred image M↓. In addition, the original PAN image is also downsampled to the same resolution as the MS image, denoted P↓, according to the Wald protocol. After these image preprocessing steps, M↓ and P↓ are sent into the feature extraction branches, and the corresponding spectral and spatial features can be denoted as F_l = φ(w_l × F_(l−1) + b_l), where w_l and b_l represent the weight and bias vectors of the lth layer of the network, × stands for the convolution operation, φ is the activation function, and F_l denotes the feature map after the convolution and activation operations.
Compared to single-branch feature extraction network and super-resolution-based pansharpening network, this dual-branch network has improved performance for making better use of the spatial information contained in MS and PAN and eliminating the redundant information between different bands of MS image. When extracting features fromM ↓, the result feature maps represent the spatial characteristics of each channel of MS image on a two-dimensional plane, as well as the spectral correlation between these channels. Correspondingly, the outline and texture information are better extracted while mining spatial features from P ↓, which means that the extracted features contain spatial and spectral information from both of the images. Subsequently, as the network architecture of HARNN shown in Figure 2, the feature maps extracted from two branches are fused via feature fusion network, in which the feature maps are downsampled by 2 times by pooling layer to get features with scale invariance and concatenated in the channel dimension before sent into network. After being processed by several residual blocks and attention modules, the fused feature maps can be represented as: where F M and F P denotes the feature maps extracted from respective feature extraction branch, ⊕ represents channel-wise concatenation, Θ and φ are the convolution and activation operation, respectively. By using this concept of Fusion instead of Detail Injection, the characteristics of CNN are efficiently utilized and the high-level abstract features are better extracted via deep network. Unfortunately, according to equation(2) mentioned above, it is inevitable that the gradient of deep network will disappear or explode during the process of back propagation, which can be denoted as: where f n and Loss represents the activation function and the error calculated in the nth layer, and ∆ω is the derivative when passing Loss back into the ith layer which could be quite large or close to zero. Hence the architecture of ResNet is adopted in HARNN to solve this problem. In residual blocks, the mapping between inputs and residuals are learned by network via skip connection, so Loss can be be propagated directly to the lower layers without intermediate calculation. In the proposed model, the pre-activated residual block [44] without batch normalization is introduced to prevent damaging the contrast information of original images. Furthermore, in order to enhance spatial resolution and alleviate spectral distortion of the pansharpening result, the combined loss function and hybrid attention module is also adopted in the proposed network and will be discussed in detail in subsequent subsections. Loss Function On the basis of a reasonable network architecture, the selection of loss function also affects the pansharpening results. In some pansharpening papers [10,46], MSE is selected as the single loss function for its faster converge rate. However, it is sensitive to outliers because it calculates the sum of squares of the errors, which causes that the loss value cannot reflect the overall error of the fused image. On the contrary, MAE is more robust to outliers, but with slower convergence rate. Inspired by [47], we make a compromise and adapt the weighed combination of MSE and MAE as loss function in order to achieve the balance of the converge rate and robustness. 
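A minimal PyTorch sketch of such a weighted MAE+MSE objective is given below; the 10:1 MAE-to-MSE weighting anticipates the ratio reported in the next subsection, and any further normalization used in the actual implementation is not reproduced here.

```python
"""Minimal PyTorch sketch of a weighted MAE + MSE training loss."""
import torch
import torch.nn as nn

class CombinedLoss(nn.Module):
    def __init__(self, w_mae: float = 10.0, w_mse: float = 1.0):
        super().__init__()
        self.w_mae, self.w_mse = w_mae, w_mse
        self.mae, self.mse = nn.L1Loss(), nn.MSELoss()

    def forward(self, fused: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        # MAE is robust to outlying (e.g. spectrally distorted) pixels; MSE speeds up convergence
        return self.w_mae * self.mae(fused, reference) + self.w_mse * self.mse(fused, reference)

# usage: loss = CombinedLoss()(network(ms_lr, pan), reference_ms)
```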
The combined loss function can be defined as a weighted sum of MAE and MSE. In the contrast experiment, we found that the network achieves the best performance when MAE and MSE are weighted in a proportion of 10:1 on our dataset. Accordingly, we chose this proportion to boost the fusion accuracy. Hybrid Attention Mechanism The hybrid attention mechanism is introduced in detail in this section, and the schematic diagram of the attention module is shown in Figure 3. The core concept of the hybrid attention mechanism is inspired by the idea of depthwise separable convolution proposed in [48,49], in which the standard convolution is divided into a depthwise convolution and a pointwise convolution, so that the number of parameters and the amount of computation decrease significantly without compromising fusion accuracy. In the proposed network, the concept of depthwise convolution is applied to the selection of image spatial features and constitutes the spatial part of the hybrid attention module. In addition, the proposed attention mechanism consists of an encoder attention module and a fusion attention module. The encoder attention module is designed to extract implicit information in the feature extraction network, and the fusion attention module is designed to select more informative features. Suppose the feature maps F fused through the feature fusion network have N channels (N filters in the last convolution layer); they are then divided into N groups, so that each group consists of only one feature map. Inside each group, a DepthwiseConv2D layer is applied to extract spatial features such as outlines and texture information in two dimensions. After being processed by the spatial attention block, the feature maps are converted to weighted ones with strengthened texture features, which are able to contribute more to the fusion result. In the spatial attention computation, F_i denotes the ith group of feature maps, Θ_i and φ_i represent the depthwise convolution and activation function of the ith layer, and F_s,i stands for the ith weighted feature map after the spatial attention module. As the other part of the hybrid attention mechanism, the channel attention module also plays an important role in optimizing the fusion results by screening more informative feature maps. Unlike spatial attention, the channel attention module operates on the complete set of feature maps, summing all the convolution results of each feature map. This process is essentially equivalent to performing a Fourier transform on the feature maps by the convolution kernels, if we consider that the transformation from features to features in another dimension is similar to the transformation from the time domain to the frequency domain. After applying channel attention to F, the eigencomponents of the different convolution kernels are represented, which reflects the contribution of each feature map to the fusion accuracy and is denoted as F_c,i. In order to obtain the mask of the hybrid attention module, the sigmoid function σ(·) is applied to the final feature maps obtained by concatenating F_s,i and F_c,i. After multiplying the mask with F, the weights of the feature maps F are reassigned in both the spatial and the channel dimensions. To sum up, the hybrid attention mechanism extracts the informative parts from the fused feature maps, including spatial features that are conducive to enhancing the texture expression of the fusion results, as well as the feature maps that contribute more than the others.
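The following PyTorch sketch shows one way a hybrid (spatial plus channel) attention block of this kind could be assembled; the exact kernel sizes, pointwise layers and concatenation scheme of HARNN are not fully specified in the text, so the layout below is an assumption rather than the paper's implementation.

```python
"""Schematic PyTorch sketch of a hybrid (spatial + channel) attention block."""
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # spatial branch: one depthwise 3x3 convolution per feature map (groups=channels)
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels)
        # channel branch: global average pooling followed by a pointwise layer
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # combine both attention maps into a single sigmoid mask
        self.mask = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        f_s = self.spatial(f)                          # per-channel spatial weighting
        f_c = self.channel(f).expand_as(f)             # per-channel importance, broadcast spatially
        m = self.mask(torch.cat([f_s, f_c], dim=1))    # hybrid mask in [0, 1]
        return f * m                                   # reweight the fused feature maps

# usage: out = HybridAttention(64)(torch.randn(1, 64, 32, 32))
```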
By introducing the hybrid attention mechanism into the proposed model, the problems of spectral distortion and lack of texture details can be alleviated. The ablation study of this mechanism, which verifies its effectiveness, is presented in the following section. Results and Discussion In this section, we design and conduct three groups of experiments to verify the following assumptions: 1. The encoder attention and fusion attention modules can improve pansharpening accuracy. 2. The residual hybrid attention mechanism is able to alleviate the problem of spectral distortion. 3. The proposed network outperforms other state-of-the-art pansharpening methods in spatial resolution. The details of the experiments are as follows. Datasets To evaluate the aforementioned objectives, the following experiments were implemented on datasets of MS and PAN images obtained by the Gaofen-2 (GF-2) and Gaofen-1 (GF-1) satellites, of which the MS images consist of four bands (Red, Green, Blue and Near-Infrared) and have image sizes of 6000×6000 and 4500×4500, respectively. In order to enhance the diversity of the data and improve the fusion accuracy of the model for various ground objects, we selected images covering different landforms, including urban buildings, roads, fields, mountains and rivers, etc. The Ground Sample Distance (GSD), i.e., the real distance between two adjacent pixels in the image, is used to describe the spatial resolution of remote sensing images. The GSDs of the MS and PAN images of the GF-1 satellite are 8 m and 2 m, respectively. Correspondingly, the GF-2 remote sensing images have GSDs of 3.24 m and 0.81 m, a resolution roughly twice as high as that of the GF-1 images. The detailed information on the corresponding spectral wavelengths is listed in Table 2, and these two datasets cover landscapes of mountains, settlements and vegetation. To improve the training efficiency of HARNN and obtain multiscale features, we performed the downsampling operation on the original images according to the Wald protocol [50] (the original MS images were used as reference images), and cut them into 64 × 64 small tiles with an overlap ratio of 0.125. After preprocessing the images, these 58,581 samples were split 8:2, with a randomly selected 80% used for model training and the remaining part used for validation. The experiments in this section were carried out on a remote server running Ubuntu 18.04.4 LTS. To improve computational efficiency, a multi-GPU training strategy was adopted with four NVIDIA RTX2080Ti GPUs, realized by a data-parallel method that allocates batches of training data to different GPUs. The total batch size was set to 32, the initial learning rate was set to 0.0001, and the total number of iterations was 30k. Comparison Methods and Evaluation Indices In this section, we select six widely used pansharpening methods, including the traditional CS-based method PCA [7], the MRA-based method Wavelet [9], and four recently proposed state-of-the-art DL-based methods, i.e., PNN [10], SRCNN [37], RSIFNN [42] and TFCNN [15]. To comprehensively verify the effectiveness of the two-branch feature extraction network, we choose two single-branch networks, PNN and SRCNN, and two dual-branch networks, RSIFNN and TFCNN, for comparison.
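To make the preprocessing described above concrete, here is a small NumPy sketch of Wald-protocol-style tile extraction. The downsampling factor and the block-average resampling filter are assumptions; the paper only states that it downsamples according to the Wald protocol and cuts 64×64 tiles with an overlap ratio of 0.125.

```python
import numpy as np

def extract_tiles(img, tile=64, overlap=0.125):
    """Cut an image (H, W, C) into tile x tile patches with the given overlap ratio."""
    stride = int(tile * (1 - overlap))          # 64 * 0.875 = 56 px between tile origins
    tiles = []
    for r in range(0, img.shape[0] - tile + 1, stride):
        for c in range(0, img.shape[1] - tile + 1, stride):
            tiles.append(img[r:r + tile, c:c + tile])
    return np.stack(tiles)

def downsample(img, factor=4):
    """Crude block-average downsampling, used here as a stand-in for the Wald protocol."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))
```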
In order to evaluate and analyze the performance of the fusion algorithms in all aspects, we selected nine widely used evaluation indices for the pansharpening task, which can be divided into referenced and no-reference indices according to whether or not they are calculated using reference images. Referenced indices include the relative dimensionless global error in synthesis (ERGAS), the universal image quality index (UIQI), the Q metric, the correlation coefficient (CC), the spectral angle mapper (SAM), the structural similarity index (SSIM) and the peak signal-to-noise ratio (PSNR). Among them, ERGAS, UIQI and Q are metrics that describe the overall quality of the fused image, CC and SAM are spectral quality metrics, and SSIM denotes structural similarity. As for PSNR, it represents the ratio of valid information to noise and is calculated from the ratio between the image dynamic range (the difference between the maximum and minimum pixel values) and the MSE of the two images. According to the Wald protocol [50], the referenced indicators can be calculated with the reference MS image, but cannot be used in the real-data experiments. Correspondingly, since the no-reference indices are calculated using only the images before and after fusion, the spectral distortion index D_λ and the spatial distortion index D_s are used for both the downsampled-data and the real-data experiments. Comparison of the Efficiency of Different Attention Strategies To verify the effectiveness of the encoder and hybrid attention strategies of the proposed method, we compared the qualitative and quantitative performance of models adopting different attention strategies on the downsampled GF-2 dataset of Henan province, China. The Plain network refers to the network that has the same structure as HARNN, but in which neither the encoder attention modules nor the hybrid attention modules are adopted. The Plain network is utilized as the baseline of this series of comparison experiments. The second network introduces encoder attention modules into its feature fusion network without the hybrid attention module, so it is called the Encoder network. Correspondingly, the third, Fusion, network only adopts hybrid attention without encoder attention modules, and the Proposed network uses the two modules simultaneously. Figure 4 demonstrates the fusion results of the mentioned networks on a downsampled GF-2 image of Henan province, China. Figure 4a,b are the blurred MS image and the downsampled PAN image to be fused. Figure 4c is the original MS image, and Figure 4d-g are the fusion results of the networks adopting the different attention strategies introduced above. Compared with the low-resolution and reference images, it is obvious that the proposed network, which has both the encoder attention and hybrid attention modules, produces the most informative spatial features and the most accurate color expression, which represents high spectral resolution. Furthermore, as shown in the lower left corner of the tiles in Figure 4, the bright yellow buildings are blurry in the results of the Plain, Encoder and Fusion networks, and the edge of the playground is not as clear as in the Proposed network, which implies the effectiveness of the encoder and hybrid attention modules in improving spatial resolution without introducing spectral distortion. To further verify the accuracy of the attention mechanism, the quantitative analysis results obtained on the downsampled GF-2 image of Henan province, China are listed in Table 3, which reports the average result of five groups of experiments. In Table 3, the best performance for each index, such as ERGAS,
CC and SSIM, is labeled in bold for convenience. As shown in the table, the proposed network adopting both encoder and hybrid attention modules outperforms the other control methods in almost all of the metrics, which is consistent with the qualitative results in Figure 4. In conclusion, the proposed hybrid attention mechanism is shown to be effective in improving spatial resolution while keeping high spectral resolution. The comparative figures and metric values demonstrate the effectiveness of the proposed network in the pansharpening task. In the following experiments, we compare the fusion results of the proposed network with some state-of-the-art pansharpening methods to further verify its efficiency. Comparison of Spectral Distortion In order to validate the ability of HARNN to alleviate spectral distortion, a series of experiments is designed as follows. These experiments are conducted on the simulated GF-2 dataset of Guangzhou Province, China, which is downsampled by a factor of four according to the Wald protocol [50], with the original MS images utilized as the reference for the network. Figure 5 presents the qualitative results of this set of experiments, where the lower right corner displays the zoomed details of the rooftop; these false-color images are generated using the red, green and blue channels of the fused MS images. Figure 5a-c shows the downsampled MS image, the downsampled PAN image and the original MS image, respectively, and Figure 5d-j represents the fusion results of all of the pansharpening methods mentioned before, including the CS-based method PCA, the MRA-based method Wavelet and several DL-based pansharpening models proposed in recent years. Spatially, the fusion results of Wavelet and SRCNN suffer from a lack of spatial information and contour details, and the improvement in spatial resolution is not effective compared to the downsampled MS images. PCA and the other DL-based methods successfully extract and fuse the spatial features into the final result, which is shown in the clear outline of the architecture. Spectrally, it is obvious that PCA has a color deviation, where the result is lighter in color than the other images and has obvious synthetic traces. In the results of RSIFNN, SRCNN, TFCNN and PNN, there are several spectrally distorted pixels visible on the bright rooftop, and these pixels look darker than the reference ones, whose pixel values are close to 255. In contrast, the proposed method shows better effectiveness than the former methods, both spatially and spectrally, as its fusion result has clear texture details in the circular area of the rooftop and no obvious distorted pixels. To further verify the effectiveness of the proposed method in alleviating spectral distortion, we calculate the number of distorted pixels in these tiles by comparing the pixel values with those of the reference image and counting the pixels whose differences across the four channels are greater than 50. Table 4 lists the quantitative statistical results of the distorted pixels of the seven zoomed tiles of Figure 5d-j, with the 60×60 size of these enlarged tiles used to calculate the percentage. As shown in Table 4, PCA has the most distorted pixels, with a percentage of more than 40%, and this confirms that PCA has weak spectral fusion ability even though its spatial resolution is relatively high. SRCNN, PNN and Wavelet all have more than 10% distorted pixels, and compared to RSIFNN and TFCNN, the proposed method has fewer abnormal pixels, less than 1%.
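The distorted-pixel statistic used above can be sketched as follows; whether a pixel must exceed the threshold in all four channels or in any channel is not fully specified in the text, so the all-channel variant chosen here is an assumption.

```python
import numpy as np

def distorted_pixel_ratio(reference, fused, threshold=50):
    """Fraction of pixels whose per-channel absolute differences all exceed the threshold.

    reference, fused: arrays of shape (H, W, 4) with comparable radiometric scaling.
    """
    diff = np.abs(reference.astype(np.int32) - fused.astype(np.int32))
    distorted = np.all(diff > threshold, axis=-1)      # (H, W) boolean mask
    return float(distorted.mean())

# Example: ratio for a 60x60 zoomed tile, as reported in Table 4.
# ratio = distorted_pixel_ratio(ref_tile, fused_tile)
```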
Therefore, the percentage calculated according to the image pixel values further verifies that the proposed method has the ability to alleviate spectral distortion and preserve the spatial details of the image. The result shown in Figure 6f has low spatial and spectral resolution. In contrast, HARNN shows better effectiveness than the former methods both spatially and spectrally, as its fusion result has clear texture details in the circular area of the rooftop and no obvious distorted pixels. Spectrally, almost all of these DL-based methods suffer from spectral distortion, which is reflected in the abnormal green pixels of the fused images, and the proposed method performs the best among them. Through qualitative analysis, it can be found that PNN has the most severe distortion, and the fused image of the proposed method contains the fewest abnormal pixels. Table 5 lists the number of distorted pixels of the zoomed tiles of Figure 6d-j, where the size of these enlarged tiles is 80×80. As shown in Table 5, PCA has 100% distorted pixels and PNN has 68.88%, which can also be observed in the figures. TFCNN contains less than 1% abnormal pixels, and HARNN has only 0.02% of them, which verifies the high spectral fusion ability of the proposed method. To summarize, the residual hybrid attention mechanism is verified to be able to alleviate spectral distortion on both the GF-1 and GF-2 datasets. There are fewer distorted pixels in the fusion results of HARNN, which can be observed in Figures 5 and 6 and in the statistical results in the tables above. Comparison of Spatial Resolution In order to further verify the effectiveness of HARNN in improving spatial resolution, we conduct this series of experiments on both downsampled and real datasets; since there is no reference image in the real dataset, we can only assess those fusion results via qualitative analysis. Figure 7 shows the comparison results on a downsampled GF-2 dataset of Qinghai Province, China, where the detail tiles are enlarged to 50×50 and shown in the lower left corner. PCA has clear contour and edge details of the shadow area but suffers from slight color deviation, while RSIFNN and TFCNN perform well spectrally and are much clearer than Wavelet, SRCNN and PNN in terms of the expression of detailed information. Compared with all of these methods, the proposed method has the best comprehensive performance in spectral and spatial characteristics. For example, it has a clear contour of the building shadow, and the window on the house is also well sharpened. Table 6 lists the quantitative evaluation results of this set of experiments, with the average metric values of 25 different sample tiles of downsampled GF-2 images, where the best result is marked in bold. As shown in Table 6, the proposed model HARNN outperforms the other methods in all referenced indices such as ERGAS, Q, UIQI, SSIM and so on, while TFCNN has the best result in D_λ and RSIFNN has the highest value of D_s. Compared with the recently proposed method TFCNN, the ERGAS value of the proposed method is improved by 2.59%, and it is 50.56% better than PNN, which verifies the effectiveness of the proposed method in improving the overall quality of the fused image. In addition, the proposed method HARNN has a SAM value of 0.0407, which is 4.67% better than TFCNN and improved by 30.71% compared with RSIFNN. As for the no-reference metrics, the proposed method does not perform the best, but still has competitive performance.
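As an example of how the spectral-quality indices quoted above are computed, below is a minimal NumPy sketch of the spectral angle mapper (SAM). Whether the result is reported in radians or degrees varies between implementations and is an assumption here.

```python
import numpy as np

def sam(reference, fused, eps=1e-12):
    """Mean spectral angle (radians) between reference and fused images of shape (H, W, B)."""
    ref = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    fus = fused.reshape(-1, fused.shape[-1]).astype(np.float64)
    dot = np.sum(ref * fus, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(fus, axis=1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```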
By comparing the evaluation values of all of the pansharpening methods, it can be observed that the proposed method is superior to the other methods in global image quality and in spectral and spatial similarity, which is important for the pansharpening task. To ensure the integrity and comprehensiveness of the experiments, the computation and time complexity of the DL-based methods are measured and listed in Table 7. Despite their weaker pansharpening effectiveness, SRCNN and PNN have the lowest time consumption, 308 µs/step and 384 µs/step, because of their fewer network parameters. Correspondingly, the proposed HARNN method consumes 13 ms to process one step, which is inferior to the other comparison algorithms and needs to be improved in the future. In addition, the training loss, validation loss and accuracy curves of the model training process on the GF-1 dataset are also recorded and presented in Figure 8. As shown in Figure 8a, the combined-loss (as discussed in Section 3.2) curves of the DL-based methods are presented in different colors, where the proposed HARNN method is shown by the blue lines. The loss values of these five DL-based algorithms converge to a stable small value in fewer than 5 epochs. In general, the fewer parameters a model contains, the faster its loss converges. However, the proposed HARNN achieves better training and validation loss than TFCNN, despite having many more parameters. Similarly, the same trend is reflected in Figure 8b, which shows the accuracy curves of these methods. Within 5 epochs, all of these algorithms are able to reach an accuracy close to 1, with TFCNN training slightly more slowly than the others. To sum up, the proposed HARNN only presents a modest performance in time consumption and loss convergence, and needs to be further optimized. In addition to the downsampled experiments, we also design a set of experiments on real data, which predicts high-resolution fused images without a reference. Figure 9 shows the fusion results on the real GF-2 dataset of Beijing, China, and the detail tiles are enlarged to 50×50 and shown in the red boxes in the lower left corner. It is clearly observed that the fusion results of Wavelet, SRCNN, RSIFNN and PNN are not improved in spatial resolution and are still as blurry as the original MS image. The PCA-fused image has as much spatial detail information as the original PAN images, but still has the problem of spectral deviation, which is reflected in the obvious difference between the color of the image and that of the original image. Compared with TFCNN, the proposed method is clearer in the building contours and has fewer blurred color blocks in the pink part of the building. In conclusion, as shown in Figures 7-9, we verify that the proposed network HARNN performs well in improving the spatial resolution of the fused images, both on the downsampled data and on the real data. The qualitative results present the high spatial resolution of the fused images, and the quantitative evaluation results in the above tables confirm the better performance of HARNN compared with other traditional and state-of-the-art methods. Conclusions In this paper, we propose a hybrid attention mechanism based network (HARNN) for the pansharpening task, which is shown to have the ability to alleviate the problem of spectral distortion and to sharpen the edge contours of the fused image.
The main backbone of the proposed network is based on ResNet; given the MS and PAN images as network inputs, we design a dual-branch feature extraction network to extract spectral and spatial features from the two inputs, respectively. To further improve the efficiency of the fusion network, the proposed network leverages the hybrid attention mechanism, which selects the more valuable spectral and spatial features from the extracted ones and helps address the problems mentioned above. In order to evaluate the performance of our proposed method, we conduct extensive experiments on the downsampled and real datasets of the GF-1 and GF-2 satellites, and the experimental results demonstrate that the proposed method achieves competitive fusion results, which further proves the effectiveness of the designed network. In addition, the time consumption and loss convergence experiments illustrate the shortcomings of HARNN, whose computational complexity should be reduced. In future work, we will focus on exploring the extraction and utilization of multiscale features based on the current deep convolutional network, work on reducing the complexity of the network, and conduct more classification experiments based on the fused images to verify the applicability of this method.
8,474.2
2021-05-18T00:00:00.000
[ "Computer Science" ]
Thermodynamics of multi-sublattice battery active materials: from an extended regular solution theory to a phase-field model of LiMnyFe1-yPO4 Phase separation during the lithiation of redox-active materials is a critical factor affecting battery performance, including energy density, charging rates, and cycle life. Accurate physical descriptions of these materials are necessary for understanding underlying lithiation mechanisms, performance limitations, and optimizing energy storage devices. This work presents an extended regular solution model that captures mutual interactions between sublattices of multi-sublattice battery materials, typically synthesized by metal substitution. We apply the model to phospho-olivine materials and demonstrate its quantitative accuracy in predicting the composition-dependent redox shift of the plateaus of LiMnyFe1-yPO4 (LFMP), LiCoyFe1-yPO4 (LFCP), and LiCoxMnyFe1-x-yPO4 (LFMCP), as well as their phase separation behavior. Furthermore, we develop a phase-field model of LFMP that consistently matches experimental data and identifies LiMn0.4Fe0.6PO4 as a superior composition that favors a solid solution phase transition, making it ideal for high-power applications. Introduction Li-ion batteries are fundamental to the upcoming transition towards sustainable energy production, electric mobility, and energy storage 1 . Although the early storage requirements were satisfied by active materials such as graphite and LiCoO2 2,3 , higher energy densities, sustainability, cheaper elements, and improved safety require developing more sophisticated battery active materials. Li-ion battery electrode materials also have other emerging applications 4 , such as electrochromic displays 5 , ion-tunable electrocatalysis 6 , resistive switching memory [7][8][9] , water desalination and purification 10 , and lithium extraction from brines 11,12 . In all of these applications, the design space for electrode materials with various desired properties has hardly been explored. Blending or modifying existing electrode materials is a promising method to improve properties, which is gaining attention, albeit with limited theoretical guidance. While the anode materials are moving towards silicon 13 , silicon/graphite composites, or Li-metal 14 , cathode development is running behind, with most advancements focusing on substituting cobalt in layered oxide materials with Ni, Mn, or Al, developing LiNixMnyCo1-x-yO2 (NMC) and LiNixAlyCo1-x-yO2 (NCA) cathodes 15,16 . These approaches show the advantages of modifying the composition of an existing cathode with well-established lithiation mechanisms to reduce its cost and environmental impact and to improve energy density and cycle life. Applying the same approach to LiFePO4 (LFP), a phospho-olivine material introduced by Goodenough and co-workers in 1997 17 , which has advantages over the layered oxides in lower cost and toxicity with greater stability and recyclability 18 , various partial or complete substitutions of Fe with Mn, Co, and Ni have been attempted. Higher redox potentials and similar specific capacities can be obtained, improving the overall energy density [19][20][21] while sustaining decent diffusivity and cycle life. Currently, LiMnyFe1-yPO4 (LFMP) [21][22][23] exhibits the most promising characteristics and is rapidly being incorporated into commercial batteries. Therefore, it is crucial to gain a deep understanding of the basic physics of LFMP through modeling.
Other materials in the same family, such as LiCoyFe1-yPO4 (LFCP) [24][25][26][27][28][29] and LiCoxMnyFe1-x-yPO4 (LFMCP) 19,20,30 , also display intriguing properties and merit further investigation. First-principles calculations struggle to provide a complete picture of the underlying mechanisms, in part due to the heavy impact of the practical choice of the pseudopotentials on the predicted redox potential 31,32 and partly because the use of Monte Carlo simulations aided by cluster expansion 33 prevents an understanding of the behavior of the material in a realistic battery system at finite temperature. In order to develop an accurate thermodynamic description, it was essential to model the behavior of individual particles using phase-field methods, which generalize the Cahn-Hilliard formalism [55][56][57][58][59][60][61] for driven electrochemical systems 34,48,52 . This approach has led to realistic diffusion and reaction models for materials such as graphite 62,63 , anatase TiO2 45 , LTO 46 , LCO 9 and LFP [64][65][66][67][68] , showing excellent agreement with experiments, guiding researchers to properly understand the reasons for various peculiar behaviors occurring in phase-separating materials and helping companies in the optimization of these kinds of batteries. Recently, phase-field modeling of LFP has succeeded in reproducing a vast dataset of operando x-ray images of nanoparticles cycling at different rates pixel by pixel 69 , while learning the two-phase free-energy landscape, the reaction kinetics of coupled ion-electron transfer 70 , and the heterogeneity of surface reactivity, correlated with variations in carbon coating thickness. An open-source code, MPET (Multiphase Porous Electrode Theory) 48 , has been developed to facilitate the implementation of these models for specific cells and control algorithms 71 . With its modular design, users can quickly incorporate phase-field models of the studied material within a porous electrode theory framework, providing insights into both the individual-particle and the collective system responses. In this study, we applied a thermodynamics-based approach to investigate the impact of composition on the performance of phospho-olivine materials. We extended the regular-solution theory 61,72 , originally applied to single-lattice LFP 34,73 , to consider the presence of multiple sublattices for the intercalated species, which have distinct properties and redox potentials. Our theory reveals how interactions between sublattices explain the composition dependence of the redox potential and the phase transition behavior. By applying the theory to a phase-field simulation of an LFMP half-cell, we gain fundamental insights into the optimal transition metal ratio, which demonstrates the possibility of using mean-field phase-field models to design active materials. Theory The mathematical modeling of a closed thermodynamic system starts by defining the Helmholtz free energy F, given by F = U − TS 61,72 . To determine the properties of a solid solution, we need the temperature T, a function for the entropy S, linked to all possible configurations the system can have, and the internal energy U, representing interactions between the particles of the system. The regular solution model, which is equivalent to a mean-field lattice-gas model with pair interactions, offers an elegant and straightforward way to describe solid solutions, and it was implemented successfully in various battery active materials 34,40,[44][45][46]48,73,74 .
However, its application is mainly limited to materials where the intercalated species encounter one lattice type, such as LFP or LTO 46 , or two non-interacting sublattices, such as TiO2 45 . Phase-field models of staging phase transitions in graphite have been developed with multiple, periodically interacting crystal layers 40 , but the parameters must be fitted to experimental data to describe the complex phase diagram of the material 62,63,75 . To summarize this mean-field theory, we can start by considering an active material containing lattice sites that can host intercalating species (e.g., lithium), whose relative concentration c̃ is defined as the ratio between the number of intercalated species and the number of available sites. The pair interaction energies ε enter through the coefficient Ω, which represents the mixing enthalpy and determines whether the material will favor phase separation during (de)intercalation. One way to obtain the parameters introduced above is by performing first-principles calculations, attaining the energies of the structure at different fractions of intercalation, and subsequently determining the mixing energy and the redox potential 76 . Moreover, it is possible to measure the chemical potential experimentally during close-to-equilibrium (de)intercalation; fitting the voltage hysteresis gap will then capture the difference between the local minima and maxima of the Ω-dependent chemical potential 44,77 . Multi-sublattice model Aiming to describe a general multi-sublattice material in which the lattice sites are mixed uniformly, the usual regular solution model must be modified to account for the various interactions that the intercalated species can have in the material. We consider a lattice composed of intercalating sites, divided into sublattices, of which a fraction x_j belongs to the sublattice j. Assuming that the lattice sites of different types are fixed in space after the synthesis of the material, the entropy for the sublattice j, having an occupation c̃_j (the ratio of occupied to available sites of that sublattice), is calculated using the solid-solution approach, resulting in the usual ideal-mixing expression for each sublattice. The internal energy of the general system with different sublattices is computed using a mean-field pair-interaction approach, so that the effect of the sublattice interactions arises naturally. Using the same logic behind the regular solution model, we can start from the scheme represented in Figure 1, in which, for clarity, only two types of sites are considered. It shows how an intercalated particle occupying a lattice site of one type will interact with the other occupied sites and the vacancies of the same type, but also with the vacancies and the occupied sites of the other type, leading to 9 different interaction energies for a 2-sublattice material. The evaluation of the interaction energies ε_ij between different site types in such complex systems is challenging and would involve atomistic quantum mechanical computations. As a first approximation, we calculate them as an average of the same-site interactions, ε_ij = (ε_ii + ε_jj)/2. Assuming that the site-site interactions ε are not influenced by the ratio between the compounds, the total internal energy is then obtained by considering that an atom or a vacancy in the sublattice i, having a fixed number of close neighbors, will interact with the neighbors belonging to sublattice i with an energy depending on their occupation state (atom-atom, atom-vacancy or vacancy-vacancy), but also with the neighbors belonging to sublattice j with the corresponding cross-sublattice energies. Applying this concept and considering all the possible combinations, the internal energy follows, with a mixing coefficient Ω_ij that can be calculated from the pair interaction energies and that, for a single sublattice, coincides with Ω = ε_av − ε_aa/2 − ε_vv/2, the value for the single-lattice structure where only one kind of interaction is present.
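For orientation, the single-sublattice regular solution expressions that this construction extends can be written out explicitly. The notation below (occupancy c̃, interaction parameter Ω, standard chemical potential μ^Θ) is a standard textbook form given as a reference point, not a reproduction of the paper's own Eq. 2:

```latex
\frac{g(\tilde{c})}{N_s} = \Omega\,\tilde{c}(1-\tilde{c})
  + k_B T\left[\tilde{c}\ln\tilde{c} + (1-\tilde{c})\ln(1-\tilde{c})\right]
  + \mu^{\Theta}\tilde{c},
\qquad
\mu(\tilde{c}) = \mu^{\Theta} + \Omega\,(1-2\tilde{c}) + k_B T\,\ln\frac{\tilde{c}}{1-\tilde{c}} .
```

The chemical potential is non-monotonic, and the material phase separates, exactly when Ω exceeds 2 k_BT, which is the criterion used repeatedly below.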
Equation 2 links the properties of the original materials, which are summarized in Ω_1, Ω_2, …, Ω_n, to the properties of the mixed compound. We can thus conclude that this one equation describes the system in all its possible compositions, becoming a powerful tool for alloy engineering. Distinguishing now between the absolute energy dependence on concentration (first two terms in Eq. 2) and the enthalpy of mixing (last term in Eq. 2), the free energy of the system can be rewritten in terms of the standard chemical potentials, where μ_j^Θ is the standard chemical potential for the sublattice j, which is correlated with the standard half-cell potential V_j^Θ of the reacting sublattice so that, in the case of Li-ion batteries, V_j^Θ = −μ_j^Θ/e. Once the regular solution theory has been reformulated for a multi-sublattice active material, it is crucial to analyze the analytical solution to gain insight into the impact of various alloying elements on the material's behavior before obtaining the free-energy functional needed to construct a comprehensive phase-field model. To predict the behavior of the system, it is necessary to build a multi-dimensional energy space and follow the concentration path that minimizes the energy required to transform the system from a completely deintercalated state, where c̃_1 = c̃_2 = … = 0, to a fully occupied system where c̃_1 = c̃_2 = … = 1. In this way, it is possible to numerically obtain a solution for the chemical potential, and from it a voltage curve for a homogeneous single-particle system can be obtained. If the difference in standard chemical potential between the lattice sites is significant compared to Ω, we will observe a series of redox plateaus in the voltage curve. Conversely, a more complicated energy path will be followed (see supplementary information, section 2). Limiting ourselves to the first case, we can analytically calculate the chemical potential, and so the voltage curve, of the various plateaus. This simple formulation allows us to capture analytically how the system behaves depending on the compositions x_1, x_2, …, x_n and the known factors Ω_1, Ω_2, …, Ω_n. In fact, considering an intercalation process, we can deduce that if μ_1^Θ < μ_{2,3,…}^Θ the intercalated species will initially sit in the lattice sites here defined as "1", keeping c̃_{2,3,…} = 0, and then, once c̃_1 ≈ 1, the second lattice sites will react, and so on. Therefore, the effective standard chemical potentials measured are those given in equation 7. From equation 7, we can conclude that, due to the enthalpic contributions of the surrounding sublattices, the effective chemical potential, and so the measured redox potential, will be decreased by a value corresponding to the sum of the pair mixing energy coefficients, weighted by their corresponding stoichiometry. For the second plateau, we instead expect that, since c̃_1 ≈ 1, the redox potential will be increased by a term x_1Ω_12 and reduced by a term ∑_{j≥3} x_jΩ_2j. Moreover, it is worth noticing how the effective enthalpic interaction Ω_j,eff = x_jΩ_j is now a function of the stoichiometry of the compound. In the case of phase-separating materials, this will impact the voltage hysteresis gap and the overall phase separation behavior with respect to the pure original lattice. Knowing that if Ω_j,eff < 2k_BT no phase separation will occur during the plateau of the species j, we can directly calculate the compositions that assure a solid-solution transition, x_j < 2k_BT/Ω_j, depending directly on temperature and on the mixing energy of the original compound.
Application to Phospho-olivine cathodes The available literature and the need for a reliable electrochemical model make phospho-olivine cathodes an excellent candidate for a critical verification of the developed theory, by testing its implications for the redox potential shift and the order of the phase transition. We focus on the most promising iron substitutions in LiFePO4 (LFP), by Mn and Co. Cyclic voltammetry and voltage curve data from the literature can provide the necessary information to implement in the model. For Ω_FP and μ_FP^Θ, we can rely on previous regular solution models for LFP 41 , where Ω_FP = 4.63 (in units of k_BT) and μ_FP^Θ = −3.422 eV. However, for LiMnPO4 (LMP) and LiCoPO4 (LCP), no previously developed models are available, so we must obtain Ω_MP = 7.44 and μ_MP^Θ = −4.09 eV from fitting the voltage curve from Tasaduk et al. 20 and the cyclic voltammetry of Kobayashi et al. 78 , respectively. Obtaining values for LiCoPO4 presents particular challenges due to its instability with the current electrolyte 24 , limiting the available data. Nevertheless, we note that the peak separation of the cyclic voltammetry in the work of Jalkanen et al. 79 is close to that of LFP. Since the peak separation is proportional to the voltage gap due to phase separation, we can assume that Ω_CP ≈ Ω_FP and that μ_CP^Θ = −4.78 eV can be determined from the peak midpoint. Fig. 2 Comparison between the calculated (lines) and the measured (empty dots) redox potentials at various Mn and Co substitutions (y). The green line is the redox potential shift of the Co plateau in LiCoyFe1-yPO4, while the dark orange is its counterpart in the Fe plateau. The blue line is the redox potential shift in LiMnyFe1-yPO4, and the light orange one is the redox shift in the corresponding Fe plateau. Starting by analyzing the shift in redox potential, we can compare the midpoints of the cyclic voltammetry results at different Fe substitutions from references [57] and [59] with the redox shift obtained from the theory. The expected dependence of the redox potential on the Mn content for the plateaus in LiMnyFe1-yPO4 then follows directly from the effective chemical potentials derived above. Making the same calculations for LFCP, we can see in Figure 2 how the theory predicts the experimental values without any fitting parameters. A strong indication of the validity of the theory is the quantitative prediction of the shift in the Fe plateau, which scales with the substitution level through the average enthalpic interaction and clearly does not depend on the original redox potential; this average interaction is different for the cases of Co and Mn substitution. For the LiCoPO4 case, we could have been more precise by considering the double redox plateau present, seemingly linked to a staging behavior 28,29 . At present, we chose to neglect this effect, assuming it would have minimal impact on the pair interaction energies; therefore, the chemical potential of LCP can be approximately modeled in the same fashion as that of LFP and LMP. These results establish a strong foundation for the mean-field theory and offer a clear explanation of how the redox shift can be attributed to the interaction between surrounding lattice sites, in agreement with the Monte Carlo calculations of Malik et al. 33 , in which the shift in redox potential was attributed to the effect of the pair interaction energies. It is remarkable that the straightforward assumption of averaging the two interaction energies, as demonstrated in Eq. 3, continues to hold true, even when the phenomenon is rooted in atomistic behavior.
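To illustrate how the solid-solution criterion and the fitted parameters above can be combined, here is a small Python sketch. It takes the interaction parameters quoted in the text (in units of k_BT) at face value and ignores cross-sublattice averaging and coherency-strain corrections, so the printed numbers should be read as rough illustrative estimates rather than the paper's results.

```python
# Interaction parameters in units of k_B*T, as quoted in the text.
OMEGA = {"FePO4 (LFP)": 4.63, "MnPO4 (LMP)": 7.44}

def solid_solution_limit(omega_kT):
    """Largest sublattice fraction x_j for which x_j * Omega_j < 2 k_B*T,
    i.e. for which the corresponding plateau is expected to stay a solid solution."""
    return 2.0 / omega_kT

for name, omega in OMEGA.items():
    x_max = solid_solution_limit(omega)
    print(f"{name}: solid solution expected for sublattice fraction below ~{x_max:.2f}")
```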
The predicted equilibrium phase transition behavior aligns with the available data in the literature. The experiments of Jalkanen 79 and Kobayashi 78 show the CV peak separation of the corresponding plateau to be linearly dependent on the composition of the olivine material. Further, the works of Ravnsbaek 80-82 and Strobridge 29 analyze the operando XRD profiles of different compositions of LFMP and LFCP, exposing the absence of phase separation for the corresponding sublattice if the composition is below the one calculated from the model. Therefore, the mathematical theory and the associated phase diagram can become tools for the practical engineering of these alloys. For example, they enable the selection of a composition range in which the material behaves as a complete solid solution (red triangle), at least from an equilibrium thermodynamics perspective. We expect a wider zone (light red) in which the real system can behave as a solid solution due to the stabilizing effects of coherency strain 41 , auto-inhibitory intercalation reactions 52 , and the relation between the particle dimensions and the phase separation front 29,83 . Within the solid solution region, it is also possible to select the composition that minimizes the Co content (orange curve for LFMCP in Figure 3). The complete solid-solution behavior is confirmed by experiments on both LiMn1/3Co1/3Fe1/3PO4 30 and LiMn0.3Co0.2Fe0.5PO4 84 , in which the systems show a monotonically decreasing voltage curve. These conclusions come directly from the analytic application of the extended regular solution theory and the consistent calculation of the interaction parameters Ω, without the need for ab-initio simulations. The mean-field model also helps in the phenomenological description of the system. The dilution of a sublattice, and its consequent reduction in first neighbors, weakens the attractive interaction between the intercalated atoms, allowing the entropic contribution to take over and leading to a solid solution. Since this effect is subtle, a temperature change will also lead to different behavior (see supplementary information, section 2). This conclusion differs from the one reported by Malik et al. 33 , in which the disappearance of the phase separation for certain compositions was attributed to the reduction in the Li composition difference between the initial and the final state. Finally, the combination of the predictions for the redox potential shift and the phase transition naturally leads to the possibility of calculating the voltage curves for every composition, including LFMP, LFCP, and LFMCP, in good accordance with the previously cited experimental results. Phase field modeling Having demonstrated a correspondence between experimental data and the analytical solution of the model, it is now interesting to investigate other factors that may come into play when the system is out of equilibrium. This can be accomplished by employing a complete phase-field simulation, which takes into account factors such as coherency strain and gradient penalties. Beginning with a simulation of a single particle, the behavior will be observed during both charging and discharging, with the aim of gaining insight into the collective dynamics that arise at various compositions. Given the wealth of available experimental data in the literature and the potential for commercial applications 85 , we have narrowed our focus to LFMP simulations. We implement our model in the open-source code MPET 48 , freely available in its GitHub repository.
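The qualitative single-particle behavior discussed in the following paragraphs can be reproduced with a very small Cahn-Hilliard-type relaxation. The sketch below is not the MPET implementation; it uses arbitrary non-dimensional parameters (Ω in units of k_BT, unit mobility, periodic boundaries, no coherency strain or reaction kinetics), but it illustrates how an interaction parameter above or below 2 k_BT separates phase-separating from solid-solution behavior.

```python
import numpy as np

def relax(c0, omega, kappa=1e-2, dx=0.05, dt=1e-6, steps=300_000):
    """Explicit 1-D Cahn-Hilliard relaxation of a regular-solution particle.

    mu = ln(c/(1-c)) + omega*(1-2c) - kappa*c''   (energies in units of k_B*T)
    dc/dt = d^2(mu)/dx^2                          (constant mobility, periodic BCs)
    The small explicit time step is needed for numerical stability.
    """
    c = c0.copy()
    for _ in range(steps):
        lap_c = (np.roll(c, 1) + np.roll(c, -1) - 2.0 * c) / dx**2
        mu = np.log(c / (1.0 - c)) + omega * (1.0 - 2.0 * c) - kappa * lap_c
        lap_mu = (np.roll(mu, 1) + np.roll(mu, -1) - 2.0 * mu) / dx**2
        # The clip keeps the log argument safely away from 0 and 1 during the transient.
        c = np.clip(c + dt * lap_mu, 0.01, 0.99)
    return c

rng = np.random.default_rng(0)
c_init = 0.5 + 0.01 * rng.standard_normal(40)
c_separated = relax(c_init, omega=5.0)   # Omega > 2 k_BT: spinodal decomposition
c_uniform   = relax(c_init, omega=1.5)   # Omega < 2 k_BT: remains a near-uniform solid solution
```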
The complete set of equations and the parameters of the phase-field simulations can be found in section 1 of the supplementary information, alongside a comparison of the simulated and measured voltage curves at equilibrium. Simulation results Single particle simulations Fig. 4 Evolution of the normalized Li concentration inside a 150 nm particle upon lithiation. The blue lines correspond to the Mn plateau, the orange lines to the Fe plateau. The labels show the average composition of the particle. The single-particle simulations in equilibrium (C/1000) (Fig. 4) show how modifying the enthalpic contribution at different compositions affects the system. Given the fraction y of Mn in LiMnyFe1-yPO4, for the cases of y = 0.6 and y = 0.8 we obtain an effective interaction below 2 k_BT at ambient temperature, so that the particle, as expected from the analytical calculations, behaves as a solid solution during the (de)intercalation of the Fe plateau and phase separates when in the Mn plateau. In contrast, for intermediate values of y within the range of 0.2 to 0.4, although the effective interaction energies of both sublattices exceed the critical threshold of 2 k_BT for phase separation, the coherency strain provides a stabilizing effect that leads to the transformation of the particle into a solid solution. The insertion direction in our one-dimensional model is the one in which the coherency strain is minimum, which coincides with the preferential direction for phase separation 41 . This implies that the observed solid-solution behavior will remain consistent when considering a three-dimensional particle. This claim requires further experimental verification, keeping in mind that the composition where we observe suppression of the phase separation may slightly differ from the one observed in simulations, due to the documented sensitivity of the calculated coherency strain values to ab-initio simulation parameters 86,87 . However, the single-particle simulations rationalize how the composition LiMn0.4Fe0.6PO4, with its probable solid-solution behavior, can limit the problems due to the measured low Li diffusivity 88 . Porous electrode simulations We applied our single-particle model in a porous electrode system to explore the collective many-particle behavior in a real cathode. In particular, it is interesting to simulate the effect of the composition on the possible suppression of phase separation, already known for LFP, and on the lithium distribution along the depth. We distinguish the phenomena by performing two sets of simulations at different Mn contents: the first consists of simulating the charge and discharge process of a thin electrode at C/10 to assess the collective dynamics, and the second involves a 1C discharge of a thicker electrode to focus on the transport limitations. All the simulations are done on an ensemble of 400 particles with a lognormal size distribution. The computed evolution of the concentration during the C/10 cycle was collected in a probability distribution and converted to a normalized volume expansion (see supplementary information, section 1) to better compare the result with the work of Ravnsbaek et al. 81 . The results, shown in Figure 5, not only strongly agree with the work of Ravnsbaek et al. 81 , but they also offer a thermodynamically consistent explanation of those observations. Both in experiments and in simulations, a bimodal volume distribution is present in the plateaus where the theory predicts phase separation.
Moreover, it is essential to state that, considering the cases of LiMn0.2Fe0.8PO4 and LiMn0.4Fe0.6PO4 where, as discussed in the previous paragraph, we predict a solid-solution transition for the single particle, the collective dynamics are dominated by the non-monotonic shape of the chemical potential and the resulting concentration-dependent exchange current density 52 . Since we are close to equilibrium conditions (C/10), we can conclude that the origin of this bimodal distribution lies in the inter-particle separation (mosaic lithiation), in which the smaller particles are more lithiated than the bigger ones, as also observed for LFP 97 . Focusing on the asymmetry between charge and discharge, for LiMn0.4Fe0.6PO4, both in simulations and in experiments, the phase separation of the Fe plateau is present only during charging. From this observation, Ravnsbaek et al. 80 suggested that the intrinsic order of the phase transition in LiMn0.4Fe0.6PO4 depends on the direction of the transition and attributed the reason to coherency strain effects. Our consistent thermodynamic model enables us to reinterpret these conclusions, offering a physical explanation for the observed experimental behavior. Our simulations indicate the significance of the collective auto-inhibitory and auto-catalytic behavior upon lithiation and delithiation, respectively 52 . Due to the asymmetric concentration dependence of the exchange current density, upon delithiation an enhanced particle-by-particle reaction is observed, while during lithiation the inter-particle separation is suppressed, as previously observed in LFP 97 and NMC 54 porous electrodes. While this phenomenon is only observed when cycling LFP at high rates, it is instead already present at C/10 in the Fe plateau of LFMP due to the low effective Ω. The only observed mismatch in Figure 5 occurs in the discharge case of LiMn0.2Fe0.8PO4 and can be attributed to various phenomena not included in the model, such as a possible metastable phase for the Fe plateau 98,99 , the effect of particle size 83,100 , or a non-linear dependence of the volume on the Li concentration. Finally, it is expected that already at 1C, in the case of a thin cathode, in the range 0.2 < y < 0.4, the inter-particle separation is completely suppressed during lithiation (see supplementary information, section 2), even if low intra-particle diffusivity may lead to different experimental results. To complete the picture, we studied the effect of the composition on the lithium depth profile during a 1C discharge of a commercial-like cathode. The simulated half-cell has a cathode thickness of 80 µm and a porosity of 30%; the transport limitations of Li in the electrolyte are therefore no longer negligible. Due to the absence of a nucleation barrier, solid-solution materials tend to lithiate relatively uniformly along the depth, even in the case of strong transport limitation. On the other hand, phase-separating materials show steeper gradients in the Li concentration 101,102 . The mixed phase-separation/solid-solution behavior of LFMP makes it an exciting candidate to evaluate this effect, since the different compositions affect the phase transition. As shown in Figure 7, if LiMn0.8Fe0.2PO4 is used, the Mn plateau will show an inhomogeneous redox activity, typical of phase-separating materials, while the last 20% of the discharge, corresponding to the Fe plateau, will have a uniform reaction along the depth.
Similar considerations can be made for LiMn0.4Fe0.6PO4, in which a small but visible peak in redox activity can be seen in both plateaus due to the non-monotonicity of the chemical potential, which is still present even if the single particle is not phase separating. We can thus conclude that the Li distribution along the depth of the electrode at different states of charge is severely affected by the concentration of Mn. Taking LFP as a reference, it is clear that the change in the degree of phase separation helps guarantee uniform lithiation along the depth, and the case of LiMn0.4Fe0.6PO4 is the one that most favors homogeneity. This result significantly impacts the cycle life, thanks to the possibility of reducing current hot spots, and offers a new route for a composition-based optimization of a commercial electrode. Conclusions In this study, we expanded the regular solution theory to explain and predict the behavior of phospho-olivine cathodes. The inclusion of multiple sublattices and their interactions provided an elegant explanation for the shift in redox potential and the phase separation behavior. The mean-field theory formalization offers an intuitive understanding of these phenomena, which can enable research on new active materials. This approach can serve as a valuable alternative to computationally extensive ab-initio calculations, delivering clear insights starting from simple concepts instead. Our phenomenological description of the mathematical derivation demonstrates that the redox shift is due to the interactions with the non-reacting sublattice. Furthermore, we found that a redox plateau that previously showed phase separation can transform into a solid solution. This transformation occurs due to the reduced number of closest neighbors within the same sublattice, which lowers the effective interactions of the intercalated species. The application of our model to well-studied materials such as LFMP, LFCP, and LFMCP and their possible compositions shows how quantitative and accurate this theory is, even if the examined system is considered complex. The subsequent application in a phase-field framework was able to reproduce and explain various experimental results whose interpretations were incomplete and lacked mathematical support. The firm conclusion about the absence of phase separation in low-Mn-content LFMP is still to be confirmed experimentally. However, the proposed mechanism to explain the operando XRD peak shift sheds light on the importance of considering multi-particle behavior when experimenting with a collective system such as a half-cell. Finally, this model strongly indicates the optimal composition for a high-power cathode, showing how LiMn0.4Fe0.6PO4 may be an excellent candidate thanks to its solid-solution behavior and low transport-induced inhomogeneity. To verify this claim, further experiments are necessary, and a 2-dimensional model, able to capture the known transport limitation in the particle, is also advised. The interplay between the concentrations of the two sublattices may play an essential role in explaining the out-of-equilibrium behavior, opening the route for an optimization of the (dis)charging procedure to exploit these effects 103 . We expect that the new theory may be applied to other popular active materials such as LiMn1.5Ni0.5O4 (LMNO) or the various compositions of NMC, explaining the effect of metal ratios on performance.
To do so, it will be necessary to consider the structural modifications occurring in the spinel or the layered structure that, at the moment, are not taken into account in the theory. It is finally hoped that overcoming these limitations may expand the domain of this theory in such a way that, like particle dimension, porosity, and thickness, the composition of the materials can also be included among the parameters to optimize when a battery is designed, improving cycle life and energy efficiency. Code and Data availability MPET is available as a Bitbucket repository: https://bitbucket.org/bazantgroup/mpet/src/master/ but, at the moment, it does not contain the LFMP model in the master branch. The reader is thus invited to visit the GitHub repository of the code to explore the different branches: https://github.com/TRI-AMDD/mpet.git. The open-source nature of the code also makes it possible to reproduce the described simulations using the parameters described in Table 1 of the method section of the supplementary information.
6,980.4
2023-08-21T00:00:00.000
[ "Materials Science", "Physics" ]
Eigendamage: an eigendeformation model for the variational approximation of cohesive fracture—a one-dimensional case study We study an approximation scheme for a variational theory of cohesive fracture in a one-dimensional setting. Here, the energy functional is approximated by a family of functionals depending on a small parameter 0 < ε ≪ 1 and on two fields: the elastic part of the displacement field and an eigendeformation field that describes the inelastic response of the material beyond the elastic regime. We measure the inelastic contributions of the latter in terms of a non-local energy functional. Our main result shows that, as ε → 0, the approximate functionals Γ-converge to a cohesive zone model. Introduction A tension test on a bar will typically show that small deformations are completely reversible (elastic regime) while large deformations lead to complete failure (fracture regime). Only for very brittle materials one observes a sharp transition between these two regimes (brittle fracture). By way of contrast, many materials exhibit an intermediate cohesive zone (damage regime) in which plastic flow occurs and a body shows gradually increasing damage before eventual rupture (ductile fracture). Variational models have been extremely successfully applied to problems in fracture mechanics, cf., e.g., [2,7,8,18] and the references therein including, in particular, the seminal contribution of Francfort and Marigo [23]. Here energy functionals are considered that act on deformations in the class of functions of bounded variation (or deformation). The derivatives of these functions are merely measures, and the singular part of such a measure is directly related to the inelastic behavior of the bar. The resulting variational problems are of free discontinuity type, allowing for solutions with jump discontinuities (macroscopic cracks). Moreover, within a damage regime the strain can contain a diffuse singular part describing continuous deformations beyond the elastic regime that can be related to the occurrence of microcracks, see, e.g., [1,12,19,21,22,24,25,35]. In fact, when considering variational problems with stored energy functions of linear growth at infinity and surface energy contributions that scale linearly for small crack openings, all these contributions to the total strain interact, cp. [6], which renders the problem challenging, both from a theoretical and a computational point of view.
As free discontinuity problems are of great interest not only in fracture mechanics but also in image processing, several approximation schemes have been proposed with the aim to devise efficient numerical approaches to simulations. Most notably, the Ambrosio-Tortorelli approximation [3,4] has triggered a still continuing interest in phase field models in which a second field (the 'phase field') is introduced that can be interpreted as a damage indicator and whose value influences the elastic response of the material. With a particular focus on cohesive zone damage models we refer to, e.g., [13,[15][16][17]26]. A small parameter ε is introduced in such models that corresponds to an intrinsic length scale over which sharp interfaces of the phase field variable are smeared out. A different approach has been initiated by Braides and Dal Maso for the Mumford-Shah functional, and then extended to various generalized settings in, e.g., [8,11,14,[27][28][29][30]32], which involves a non-local approximation of the original field u in terms of convolution kernels with intrinsic length scale ε ≪ 1. Our main motivation comes from the Eigenfracture approach to brittle materials that has been developed in [37] and further considered in [33,34,36]. Our main aim is to extend this model to a ductile fracture regime with a significant damage zone. The variables of the model are the deformation field u_ε and an eigendeformation field g_ε, which induces a decomposition of the strain u′_ε = (u′_ε − g_ε) + g_ε into an elastic and an inelastic part, the latter describing deformation modes that cost no local elastic energy. (We refer to [31] for more details on the concept of eigendeformations to describe inelastic deformations and, in particular, plastic deformations.) The energy associated with the formation and increase of damage is accordingly modeled in terms of a non-local functional acting on g_ε, which replaces the non-local contribution defined in terms of a simple ε-neighborhood of the crack set in the original Eigenfracture model by a more general (and softer) convolution approximation. We would like to point out that our set-up thus introduces a novel modeling aspect to damage functionals. Instead of an explicit dependence of the stored energy function on the damage as encoded in a phase field, in our model the constitutive laws, i.e., the linear elastic energy |·|² and the fracture contribution f (see below), remain unchanged. An increase of damage is rather related to a transition from the elastic deformation field to the eigendeformation field. In particular, plastic deformations at the onset of the inelastic regime need not immediately lead to softening of the material. With respect to non-local convolution approximation schemes of the deformation field u, we remark that in our model such non-local contributions need to be evaluated only near the support of g_ε but not on purely elastic regions.
In the present contribution we focus on the one-dimensional case. In this setting our analysis will benefit from the corresponding study [29] of Lussardi and Vitali for the pure convolution functionals. Indeed, we will follow along the same path in order to adapt and extend their methods to our two-field set-up. There are, however, a number of notable differences in our analysis which lead us, also in view of later extensions to higher dimensions [5], to provide a self-contained account of our results. A main difficulty stems from the fact that there is no pre-assigned functional relation between the eigendeformation fields g_ε and the strain fields u′_ε. Rather, these quantities are merely 'coupled by regularity' in the sense that u′_ε − g_ε ∈ L². As the limit of g_ε needs to be studied in a rather weak space, this leads to technical difficulties when transferring asymptotic properties from u_ε to g_ε. Our results also constitute the first step towards higher-dimensional models. In particular, the case of antiplane shear will be addressed in a forthcoming contribution [5]. Here the lack of a direct relation between g_ε and ∇u_ε, and hence the absence of an underlying gradient structure, will pose severe additional challenges. Outline We start by describing the setting of the problem and by stating the main results in Section 2. In Section 3 we recall some facts on functions of bounded variation and the flat topology. Section 4 is devoted to a compactness result. The Γ-lower limit for the eigendamage model is established in Section 5. To this end, we first derive the estimate from below of the jump part, subsequently the estimate from below of the volume term and the Cantor term, and the proof of the Γ-lim inf inequality is then completed by combining the previous results. In Section 6 we then address the estimate from above of the Γ-upper limit. Finally, in Section 7 the asymptotic behavior of the minimal energies with respect to the eigendeformation variable is studied. Setting of the problem and main result Suppose that a beam occupies the region (a, b) with 0 < a < b < ∞ and that a displacement u : (a, b) → R affects the beam. As cohesive energy associated with u we shall consider where c_0 is a fixed positive constant, and ψ, f : [0, ∞) → [0, ∞) are functions defined via The main ingredients in the energy (1) are a volume term, depending on the strain of the beam u′ and corresponding to the stored energy, a surface term, depending on the crack opening [u] := u(·+) − u(·−) on the jump set J_u and modeling the energy caused by cracks, and finally a diffuse damage term, depending on the Cantor derivative D^c u and corresponding to the energy caused by microcracks. The natural function space in which to study such functionals in one dimension is the space BV(a, b) of functions of bounded variation on (a, b). Notice that the distributional derivative of each function u ∈ BV(a, b) allows for a decomposition Du = u′L¹ + D^s u into the absolutely continuous and the singular part with respect to the Lebesgue measure, and the singular part D^s u = [u]H⁰⌊J_u + D^c u in turn into the jump part and the Cantor part, which we have used in (1). We consider both models with an a priori bound ∥u∥_{L^∞(a,b)} ≤ K, K < ∞, and unrestricted models with K = ∞.
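The displayed definitions of the energy (1) and of ψ, f did not survive extraction. Purely as a hedged reconstruction from the verbal description above and from the pointwise minimization carried out below (an illustrative sketch of the expected structure, not the paper's verbatim formula), the cohesive energy should take a form such as

$$F(u) \;=\; \int_a^b \psi\bigl(|u'|\bigr)\,dx \;+\; \sum_{x\in J_u} f\bigl(|[u](x)|\bigr) \;+\; c_0\,|D^c u|(a,b), \qquad u\in BV(a,b),$$

with ψ arising from the elastic density and the damage weight by pointwise minimization,

$$\psi(t) \;=\; \min_{s\ge 0}\bigl\{(t-s)^2 + c_0\,s\bigr\} \;=\; \begin{cases} t^2, & t\le c_0/2,\\[2pt] c_0\,t - c_0^2/4, & t> c_0/2,\end{cases}$$

attained at the soft-threshold value s* = (t − c_0/2)_+. The properties of f actually used later in the text are that it is non-negative, increasing, subadditive and satisfies f(t) ≤ c_0 t.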
We next introduce a functional depending on two fields ) with a non-local approximation of the the second variable γ, given as with ε > 0, I ε (x) := (x − ε, x + ε), and either K > 0 a fixed constant or K = ∞.We notice that E ε (u, γ) can only be finite if γ is absolutely continuous with respect to the Lebesgue measure, with density in L 1 (a, b).In this case u ′ represents the strain of the beam and γ is intended to compensate u ′ in regions where u ′ is above a certain strain level.Hence, u ′ −g is the elastic strain of the material, while γ describes the deformation of the material beyond the elastic regime, indicating that a permanent deformation is exhibited if In what follows, we are interested into the asymptotic behavior of the functionals {E ε } ε>0 as ε ց 0 (in the sense of Γ-convergence).Focussing first on the case K < ∞, it will be described by the energy functional E which for (u, γ) ∈ L 1 (a, b) × M(a, b) is defined as Let us notice that for a finite energy E(u, γ), the displacement field u and the eigendeformation field γ need to be linked in a very particular way.The singular part γ s of the measure γ with respect to the Lebesgue measure needs to coincide with the singular part D s u of the distributional derivative of u.The absolutely continuous part gL 1 of γ instead is not completely determined by the function u, but only the integrability restric- By a pointwise minimization of the integrand, the minimizer g * is explicitly given as For later purposes we notice that the eigendeformation field γ is completely described in terms of the function u as Moreover, the corresponding energy functional E(u, γ opt ) reduces to a one-field functional depending only on the displacement u ∈ BV (a, b), which under the additional restriction is precisely given by the energy F (u) introduced in (1). In order to state our Γ-convergence result we need to endow L 1 (a, b) × M(a, b) with a topology.A natural choice for the first component is the strong topology on L 1 (a, b).One appropriate choice for the second component is the flat topology, that is the norm topology on the dual of the space of Lipschitz continuous functions with compact support, while an alternative choice is the topology induced by suitable negative W −1,q -Sobolev norms, see Section 3 for more details.Our main result is the following: Theorem 2.1.Let L 1 (a, b) be equipped with the strong topology and M(a, b) be equipped with the flat topology.Assume K < ∞.Then the family The associated compactness result is stated in Theorem 4.2, where we in fact establish for the second variable convergence in W −1,q (a, b) for all 1 < q < ∞.Therefore, we obtain as a direct consequence of Theorem 2.1 also Γ-convergence of {E ε } ε>0 to E in L 1 (a, b) × M(a, b), when M(a, b) is equipped with the stronger topology of convergence in W −1,q (a, b) for some 1 < q < ∞. Remark 2.2.Our result can be seen as a two-field extension of the setting considered in [29].Indeed, introducing the constraints g = u ′ , respectively γ = Du, one is lead to functionals F ε (u) = E ε (u, u ′ L 1 ) and F (u) = E(u, Du) depending on u only.In this case the Γ-convergence of the sequence {F ε } ε to F has been obtained in [29]. The unrestricted problem is in fact strongly related.Indeed, even for K = ∞ an energy bound implies L ∞ bounds away from an asymptotically small exceptional set.The complement of the exceptional set can be chosen as a union of a bounded number of intervals, concentrating on the points of a finite partition a = x 0 < x 1 < . . 
.< x m = b of (a, b) in the limit ε → 0 such that {u ε } ε and {g ε } ε converge with respect to the L 1 norm, respectively the flat norm, locally on (a, b) \ {x 1 , . . ., x m−1 }.On the exceptional set, however, u ′ ε and g ε can assume extremely large values, spoiling their convergence even in a weak distributional sense.As a result, large jumps can develop in the limit and parts of u ε may elapse to ±∞.In order to account for such a possibility we consider limiting functions taking values in R = R∪{−∞, +∞}.More precisely, let P = (x 0 , . . ., x m ) : a = x 0 < x 1 < . . .< x m = b, m ∈ N and consider BV ∞,P (a, b) as consisting of functions u : (a, b) → R of the form with α i ∈ {−∞, 0, +∞}, i = 1, . . ., m, (x 0 , . . ., x m ) ∈ P and w ∈ BV (a, b).We denote the part where u is finite by We say that (u The corresponding compactness result for K = ∞ with respect to this particular convergence is stated in Theorem 4.3. Remark 2.4.In fact, E(u, γ) can be finite only if the restriction of u to F(u) is a BV function and not merely an element of GBV (F(u)).In particular, if E(u, γ) < ∞ and u ∈ L 1 (a, b), then u ∈ BV (a, b). Remark 2.5.Our methods can easily be adapted to obtain an alternative asymptotic description by considering renormalized functionals: From the above partition P one can derive a coarser one (whose members are finite unions of intervals) so that on each such set one has an L ∞ bound on u ε modulo a single additive constant and the mutual distance of u ε on two different sets diverges.This allows for an asymptotic description also of those parts that escape to infinity. Remark 2.6.Our results remain true if restricted to preassigned boundary values u As for a bounded energy sequence parts of the jump set could accumulate at the boundary {a, b}, a usual way to implement boundary conditions is to consider E ua,u b ε and E ua,u b on an extended interval (a−η, b+η), with η > 0 fixed, which are defined as E ε and E, respectively, before but with the additional constraints Indeed, the Γ-lim inf inequality and the compactness property are direct consequences of the case with free boundaries.The Γ-lim sup inequality for K < ∞ follows from the observation that the recovery sequence constructed in Proposition 6.1 indeed satisfies u ε = u near {a, b} and from Remark 6.4.The case K = ∞ is a direct consequence as there the recovery sequence is built as in the case K < ∞ near {a, b} since u a , u b ∈ R. Remark 2.7.Our results can also be adapted to general continuous stored energy functions W leading to a general non-quadratic bulk contribution b a W (u ′ − g) dx, whenever W satisfies a p-growth condition of the form c|r| p − C ≤ W (r) ≤ C|r| p + C for suitable constants c, C > 0 and some p ∈ (1, ∞), and for convenience we also assume that min W = W (0) = 0.The first term in the limiting functional is then replaced by b a W * * (u ′ − g) dx, respectively, F (u) W * * (u ′ − g) dx, where W * * is the convex envelope of W .In fact, making use of the estimate W ≥ W * * , compactness and the Γ-lim inf inequality follow exactly as before with W * * instead of | • | 2 by taking account of the obvious adaptions such as replacing L 2 by L p and SBV 2 by SBV p .The Γ-lim sup inequality requires an extra relaxation step, which is detailed in Remark 6.2, and is otherwise straightforward as well. 
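To make Remark 2.7 concrete, a standard example of a non-convex stored energy with p-growth (here p = 4) and its convex envelope is the double well (this illustration is my own, not taken from the paper):

$$W(r) = (r^2-1)^2, \qquad W^{**}(r) = \begin{cases} 0, & |r|\le 1,\\ (r^2-1)^2, & |r|>1,\end{cases}$$

so that in the relaxed bulk term $\int_{F(u)} W^{**}(u'-g)\,dx$ the two wells at r = ±1 are replaced by the flat convexification in between.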
For completeness we also give the corresponding approximation results for the minimal energies with respect to the second variable γ, which are defined as and Ẽ(u) := inf for u ∈ L 1 (a, b) and u ∈ L 0 ((a, b), R), respectively.As a direct consequence of the previous Γ-convergence result we obtain: Corollary 2.8.The family { Ẽε } ε>0 Γ-converges to Ẽ, on L 1 (a, b) equipped with the strong topology if K < ∞ and with respect to convergence a.e. on L 0 ((a, b), R) if K = ∞. Preliminaries In this section, we recall some basics on BV -functions, for simplicity on a one-dimensional interval (a, b) ⊂ R, and convergence of measures.where |Du|(a, b) is the total variation of Du.We here collect some basic facts from [2] for functions of bounded variation, which are relevant for our paper.We recall the notions of weak- * and strict convergence for sequences {u n } n in BV (a, b), which are useful for compactness properties and approximation arguments, respectively.We say that {u n } n converges weakly- * to u ∈ BV (a, b), denoted by We notice that every weakly- * converging sequence in BV (a, b) is norm-bounded by Banach-Steinhaus, while every norm-bounded sequence in BV (a, b) contains a weakly- * converging subsequence (see [2,Theorem 3.23]).We further say that b) with respect to the strict topology (see [2, Theorem 3.9]). We next discuss approximate continuity and discontinuity properties of a function u ∈ L 1 loc (a, b).We say that u has an approximate limit We denote by S u the set, where this condition fails, and call it the approximate discontinuity set of u.It is L 1 -negligible, and u coincides L 1 -a.e. in (a, b) \ S u with u.Furthermore, we say that u has an approximate jump point at x ∈ S u if there exist (unique) We denote by J u the set of approximate jump points and call it the jump set of u.Notice that u(x+) and u(x−) can be considered as one-sided limits from the right and from the left, respectively.For u ∈ BV (a, b) the set J u coincides with S u and is at most countable.It is also convenient to work with the precise representative According to the Radon-Nikodým theorem the measure derivative Du = D a uL 1 + D s u can be decomposed into the absolutely continuous and the singular part with respect to the Lebesgue measure L 1 .We then define the jump and the Cantor part of Du as From the identifications D a u = u ′ L 1 with the approximate gradient u ′ for the absolutely continuous part and We can actually decompose the function u as for an absolutely continuous function u a ∈ W 1,1 (a, b) with Du a = D a u, a jump function u j with Du j = D j u, and a (continuous) Cantor function u c with Du c = D c u (notice that these functions are determined uniquely up to additive constants).Thus, the decomposition of Du is recovered from a corresponding decomposition of the function itself (which for BV -functions defined on open subsets of R d with d > 1 in general is not possible). 
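A simple example illustrating all three parts of this decomposition on (0, 1) (the choice of functions is my own, added for illustration):

$$u(x) = x + \chi_{(1/2,1)}(x) + C(x), \qquad Du = \mathcal{L}^1\llcorner(0,1) + \delta_{1/2} + DC,$$

where C is the Cantor–Vitali function: the absolutely continuous part has density u′ ≡ 1, the jump part is the unit Dirac mass at x = 1/2 (so J_u = {1/2} with [u](1/2) = 1), and DC is a nontrivial Cantor part, carried by the Cantor set and singular with respect to both the Lebesgue measure and the counting measure.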
We finally mention the subspace SBV (a, b) of special functions of bounded variation, which contains all functions u ∈ BV (a, b) with D c u = 0.In addition, we define Convergence in negative Sobolev spaces and in the flat topology.The negative Sobolev spaces W −1,q (a, b) with 1 < q ≤ ∞ are defined as usual as the dual spaces of denoting the conjugate exponent to q with 1 q + 1 q ′ = 1), and correspondingly the norm is defined via the duality pairing as for every T ∈ W −1,q (a, b).Consequently, the spaces W −1,q (a, b) with 1 < q < ∞ are reflexive and separable.For later purposes, we mention two specific situations. then, by the Hölder inequality and the continuous embedding (with constant C ′ ), we obtain T Dv , T w ∈ W −1,q (a, b) with Because of the continuous and dense embedding W 1,1 0 (a, b) ⊂ C 0 (a, b), the negative Sobolev norms can actually be considered on the space M(a, b) of all finite Radon measures on (a, b), for which the duality pairing reads as If we allow q ′ = ∞ in this expression, we obtain the flat norm Here we have the inequalities for all functions v ∈ BV (a, b) and w ∈ L 1 (a, b).Let us still notice that due to Schauder's theorem and the compact embedding W 1,∞ 0 (a, b) ⋐ C 0 (a, b), the flat topology metrizes weak- * convergence of (uniformly bounded) measures.Therefore, we have the following relations for the convergence of measure with respect to convergence in W −1,q (a, b), the flat norm and in the weak- * sense. Lemma 3.1 (on convergence of measures).For a measure µ ∈ M(a, b) and a sequence {µ n } n∈N of measures in M(a, b), we have: Compactness In this section we establish a compactness result for sequences in L 1 (a, b) × M(a, b) with bounded energy E ε .This result together with a Γ-convergence result implies the convergence of minimizers and the corresponding minimum values.In order to bound suitable norms of the two fields in terms of the energy, we first prove the following technical lemma: |g| dt dx. Proof.We proceed analogously to [29, proof of Lemma 4 The application of [10, Lemma 4.2] (with η = 2ε), which is a consequence of the mean value theorem for integrals, shows that holds for a suitable x ε ∈ R. By non-negativity of f , the choice of the cut-off function φ ε and the definition of G ε , we hence have If we now set then the claim follows directly from (10), after rewriting the right-hand side via the definition of f as We can now address the aforementioned compactness results. , b) for all 1 < q < ∞ and in particular in the flat norm.Moreover, there holds γ s = D s u and u ′ − g ∈ L 2 (a, b). Proof.We first observe from the finiteness of E ε (u ε , γ ε ) that we necessarily have ) and consequently contains a subsequence, which converges weakly- * in W −1,∞ (a, b) to some γ ∈ W −1,∞ (a, b).We next study the convergence of the sequence {u ε } ε .To this end, we consider the function v ε defined by As a consequence of Lemma 4.1 and the definition of the energy E ε , we deduce i.e., that #J vε is bounded independently of ε.By the Cauchy-Schwarz inequality and again by Lemma 4.1, we additionally have . By the Rellich-Kondrachov theorem, there exists a subsequence that converges in L q (a, b) to some u ∈ BV (a, b) with u L ∞ (a,b) ≤ K, for any 1 ≤ q < ∞.To identify u as the limit in L q (a, b) of (the same subsequence of) It remains to show convergence of (the subsequence of) {γ ε } ε in W −1,q (a, b) for any 1 < q < ∞ and the claimed relations between γ and Du.In view of (8), we have with α i ∈ {−∞, 0, +∞}, i = 1, . . 
., m and w ∈ BV ((a, b); R), and a measure γ ∈ M(a, b) such that, up to subsequences, {u ε } ε converges to u a.e. and in Proof.We define G ′ ε exactly as in Lemma 4.1 and denote by J 1 , . . ., J mε the connected components of {I ε (x α ) : x α ∈ G ′ ε }.We then notice that the arguments in the preceding proof show that for some constant C we have m ε ≤ C and while L 1 ((a, b) \ (J 1 ∪ . . .∪ J mε )) ≤ Cε.We set x ε,i = sup J i and α ε,i = − J i u ε dx, i = 1, . . ., m ε .Note that (11) implies Passing to a subsequence we may assume that m ε = m for some m independent of ε and ) such that {x 0 , . . ., x m } = {x 0 , x1 , . . ., x m} (with x0 := a and, by construction, x m = b) and set ) and the uniform bounds in (11) and (12) (12).In this case we define w ∈ BV (x i−1 , x i ) as the weak* limit in BV loc Remark 4.6.In principle, with the compactness result of Theorem 4.2 at hand, we could infer our Γ-convergence result in Theorem 2.1 from known results, by separate considerations of the elastic and inelastic contributions.To this end, given . This allows to express as the sum of the standard Dirichlet energy for W ε and a non-local energy involving only G ε considered by Lussardi and Vitali in [29].However, the understanding of the coupling between u ε and γ ε is essential for the extension to higher dimensions addressed in [5], which cannot be traced back to the one-dimensional case via the slicing technique. For this reason, we prefer to give a self-contained proof of Theorem 2.1. 5 Estimate from below of the Γ-lower limit Next, we start with the proof of the Γ-lim inf inequality.Except for the very last paragraph we assume K < ∞ in the whole section.To show this inequality it is useful to introduce the localized version of the functionals {E ε } ε and E. They are defined for (u, γ) ) and every open subset A of (a, b) via and We denote the Γ-lower and Γ-upper limit of {E ε } ε by respectively.For the Γ-lower limit we also need a localized version, for which we adapt the notation and write E ′ (•, •, A) for every open subset A of (a, b).(i) The properties that the set functions A → E ε (u, γ, A) are increasing and superadditive (on disjoint sets) for each ε carry immediately over to A → E ′ (u, γ, A).In this section we prove the Γ-lim inf inequality, where in view of Remark 4.5 it is sufficient to consider (u, γ) ∈ BV (a, b) × M(a, b) with u L ∞ (a,b) ≤ K, γ = D s u + gL 1 and u ′ − g ∈ L 2 (a, b).The basic idea is to derive three separate estimates for the jump part, the volume term and the Cantor term, respectively, and to infer the desired estimate then from a combination of these estimates by means of measure theory. Estimate from below of the jump term Proof. Step 1: For every x ∈ J u ∩ A we have Since only finite energy approximations are of interest, we consider a sequence . By definition of (u(x+), u(x−)) and since u ε → u in measure, we readily find points x− ε ∈ (x − δ, x) and x+ ε ∈ (x, x + δ) such that, for sufficiently small ε, (also cp.[29, Lemma 5.1]).Using the monotonicity of A → E ε (u ε , γ ε , A) and applying the estimate (10) with (a, b) replaced by (x − 2δ, x + 2δ) on an associated grid of points Ḡε , we obtain from the subadditivity and non-negativity of f for all ε < δ (notice the inclusion For the argument on the right-hand side, we observe from the Cauchy-Schwarz inequality, the inequalities in (14) and from the energy bound . 
Using once again the fact that f is increasing, we can continue to estimate (15) from below via Now, passing to the lim inf as ε → 0 and then letting δ → 0 + , we obtain (13). Step 2. For an arbitrary M ∈ N with M ≤ #(J u ∩ A) we select a set {x 1 , . . ., x M } containing M points of J u ∩ A and pairwise disjoint open intervals I 1 , . . ., I M in A such that x i ∈ I i for all i = 1, . . ., M .First, we apply the monotonicity and superadditivity of E ′ (u, γ, • ) (see Remark 5.1) and then the estimate of Step 1 for I i instead of A. This yields Since M is arbitrary and J u is at most countable, the claim of the proposition follows. Estimate from below of the volume and Cantor terms We basically follow the idea of Lussardi and Vitali from [29, Lemma 4.3 and Lemma 4.4].We start by proving that approximation sequences in W 1,1 (a, b)× M(a, b) can be modified in such a way that in the limit we additionally have the optimal L ∞ -estimate. Proof.We follow the outline of the proof for [29, Lemma 4.3], which, however, needs some modifications due to the additional variable γ.In what follows, we may assume that the precise representatives of u and u ε for each ε are considered.The function u can be decomposed as u a + u j + u c , see (7), where u j is a jump function with jump discontinuities at any point of J u and where u a + u c is uniformly continuous in (a, b).We set and first claim that for every n ∈ N there exists δ n ∈ (0, 1 n ] such that In fact, there are only finitely many points x1 , . . ., xm(n) in J u that have to be excluded to deduce that By choosing δ n > 0 sufficiently small we can guarantee ( 16) by (18) and the definition of σ provided that each interval of length δ n contains at most one xj , and we can further ensure (17), by the uniform continuity of u a + u c .We then consider a partition P n of (a, b), i.e., (where the dependence of the points on n is not written explicitly) such that the mesh size is less than δ n , i.e., x i+1 − x i < δ n for all i ∈ {0, . . ., k}, x i / ∈ J u and u ε (x i ) → u(x i ) as ε → 0 for every i ∈ {1, . . ., k}, which is possible by the pointwise convergence u ε → u a.e. in (a, b).Since by construction of δ n at most one of the points x1 , . . ., xm(n) ∈ J u , where a large jump of u j occurs, may belong to the interval [x i , x i+1 ], we necessarily have |u j (x) − u j (y)| < 1 n for y = x i or y = x i+1 such that as a consequence of ( 17) there holds for all x ∈ [x i , x i+1 ] and every i ∈ {1, . . ., k − 1}.After having fixed the partitions P n , we can now start with the construction of the sequence {ū ε } ε .Since u ε → u in measure and u ε (x i ) → u(x i ) for every i ∈ {1, . . ., k}, we can fix a "level" εn for each n ∈ N such that for every i = 1, . . ., k and all ε < εn . Notice that we can choose {ε n } n strictly decreasing and such that εn → 0 + as n → ∞. For ε > ε1 we then set ūε := u ε and γε := γ ε .Otherwise, if ε ∈ (0, ε1 ], we first determine the unique n = n(ε) ∈ N with ε ∈ (ε n+1 , εn ].On the first and the last interval of P n we set ūε equal to u ε (x 1 ) and u ε (x k ), respectively.On an arbitrary interior interval [α, β] of the form [x i , x i+1 ] for some i ∈ {1, . . ., k − 1} we define, after assuming without loss of generality for every i ∈ {1, . . 
., k} and u ε ∈ W 1,1 (a, b), we clearly have ūε ∈ W 1,1 (a, b).Moreover, the L ∞ bound on u ε with constant K directly transfers to ūε .We next study the asymptotic behavior of the sequence {ū ε } ε .From the definition of ūε and with (21), we observe for all ε ≤ ε1 ūε which, via ( 16) and ( 17), implies for all x ∈ [x i , x i+1 ] and every i ∈ {1, . . ., k − 1}.Since the latter estimate is also satisfied for the first and the last interval of the partition, we then infer from n = n(ε) → ∞ as ε → 0 the estimate lim sup In order to show the convergence claims of {ū ε } ε , we again consider an arbitrary interior interval [α, β] of the partition P n .If we denote the pointwise projection of as we know u(x) ∈ [u ε (α) − 3/n, u ε (β) + 3/n] due to (19) and (21).Since the length of the first and last interval of P n vanish in the limit n → ∞ and hence for ε → 0, this implies the pointwise convergence of {ū ε } ε to u a.e. on (a, b).In addition, as {ū ε } ε is bounded in L ∞ (a, b), convergence in L q (a, b) for all 1 ≤ q < ∞ follows from the dominated convergence theorem.For later purposes, we notice from the definition of ūε and the previous inclusion for u that for every ε we have We next define the sequence {γ ε } ε in M(a, b) by setting for every ε ḡε (x) := g ε (x) if u ε (x) = ūε (x), 0 otherwise, and γε := ḡε L 1 .Since there holds ū′ In view of ( 9) and the Cauchy-Schwarz inequality we then notice for If we consider the limit ε → 0 on the right-hand side, the first term disappears, since we have the convergence ūε − u ε → 0 in L 1 (a, b), while the second term disappears by (22) combined with (20) and the fact that the length of the intervals in P n vanish for ε → 0. Therefore, we have γε − γ ε → 0 in the flat norm.Since by assumption there holds γ ε → γ in the flat norm, we conclude that we also have γε → γ in the flat norm, which completes the proof of the lemma. For a localization procedure, we further need the following statement on the relation between E ′ (u, γ, I) and the Γ-lower limit E ′ (u I , γ I ), where I ⊂ (a, b) is an open interval, u I is the extension of u| I to (a, b) with inner traces and γ I is the restriction of γ to I. and γ I := γ I there holds Proof.We proceed analogously to the proof of [29,Lemma 4.4].By definition of E ′ as the Γ-lower limit of {E ε } ε , there exists a sequence Without loss of generality, we may also assume pointwise convergence u ε → u a.e. in (a, b).For an arbitrary η ∈ (0, (β − α)/2) we then pick points α η ∈ (α, α + η) and β η ∈ (β − η, β) such that on the one hand and on the other hand For I η := (α η , β η ) ⊂ (α, β) we now consider the functions (u Iη , γ Iη ) and the sequence We next observe u Iη → u I in L 1 (a, b) and γ Iη → γ I in the flat norm as η → 0, from (24), respectively, Lemma 3.1 since γ Iη * ⇀ γ I in M(a, b) by dominated convergence (as we have pointwise convergence ½ Iη → ½ I on (a, b)).By the lower semicontinuity of E ′ we then arrive at the claim Now, we finally turn to the estimate from below for the volume and the Cantor terms. Proposition 5.5.Let A be an open subset of (a, b).For every Proof. 
Step 1: With σ := sup x∈Ju |u(x+) − u(x−)|, there holds the preliminary estimate By definition of E ′ as the Γ-lower limit of {E ε } ε , there exists a sequence After assuming without loss of generality E ′ (u, γ) < ∞ and passing to a subsequence (not relabeled) and a possible modification of the sequence via Lemma 5.3 we may further suppose for a positive constant C 0 as well as lim sup Let η > 0 be fixed.We may assume that u ε − u L ∞ (a,b) ≤ σ + η holds for all ε.Analogously as in the proof of Lemma 5.3 (cf.( 16) and ( 17)), there exists δ η > 0 such that |u(x) − u(y)| < σ + η for all x, y ∈ (a, b) with |x − y| < δ η .Thus, there holds for all such ε.Next, we apply Lemma 4.1 with u = u ε and γ = γ ε .In this way we find a uniform grid G ε in the interval (a, b) with grid size 2ε such that and then extended to (a, b) by the constant values ṽε (a + * ) and ṽε (b − * ), respectively, for all ε.As ṽε is bounded with ṽε L ∞ (a,b) ≤ K and coincides with u ε on the set {I ε (x α ) : by ( 26) and ( 28), we observe ṽε → u in L q (a, b) for all 1 ≤ q < ∞.Moreover, we have ṽε provided that ε is sufficiently small, i.e., 4ε < δ η (such that ( 27) is satisfied).Since the number of jumps of ṽε is bounded via (28) and the definition of E ε by we end up with the estimate for the size of D s ṽε in terms of the energy for all ε.In order to show that {γ ε + D s ṽε } ε is an approximating sequence of γ, we first notice from the definition of ṽε and γε = gε L 1 in ( 29) and ( 32), with ṽ′ With (9) and the Cauchy-Schwarz inequality we can then continue to estimate We now study the terms on the right-hand side.From ( 26) and the definition of Together with (30) and taking into account also the strong convergences u ε → u and ṽε → u in L 1 (a, b), we then arrive at For the latter conclusion we have also used the fact that 28) and the estimate (31) (recall also the bound (26) on the energies). After having discussed the convergence properties of the sequence {(ṽ ε , γε )} ε , we can finally turn to the proof of the estimate (25).From the definition of E ε we obtain via ( 28) and ( 31) By the choice of the sequence {(u ε , γ ε )} ε with (26) it follows that 33) and the lower semicontinuity of the total variation with respect to weak- * convergence.By the arbitrariness of η > 0 we conclude from the previous inequality the desired estimate (25). Conclusion and proof of the Γ-lim inf inequalities we have proved so far in Propositions 5.2 and 5.5 the following lower bounds for the volume, the Cantor and the jump part: for every open subset A of (a, b).These are now combined to prove the estimate from below of the Γ-lower limit which shows the first part of Theorem 2.1. Next, we define The Γ-lim inf in the case K = ∞ is a direct consequence of Theorem 5.6. Proof.Assuming without loss of generality E ε (u ε , γ ε ) < C 0 for a positive constant C 0 , by Theorem 4.3 we have that u ∈ BV ∞,P (a, b) and there is a partition a = x 0 < x 1 . . .< x m = b such that J u ⊂ {x 1 , . . ., x m−1 } ∪ F(u) and for From Remark 4.4 and Theorem 5.6 (applied for a suitable K) we then get The assertion follows in the limit δ ց 0 from the monotone convergence theorem. 6 Estimate from above of the Γ-upper limit We now turn to the estimate from above of the Γ-upper limit E ′′ .Except for the very last paragraph we assume K < ∞ in the whole section.We again restrict ourselves to pairs (u, , the jump set is finite, i.e., J u = {x 1 , . . 
., x N −1 } for some N ∈ N, and we may further assume by the Sobolev embedding theorem that u is a piecewise continuous function with one-sided limits u(x±) for all x ∈ (a, b).Thus, γ is of the form with g ∈ L 2 (a, b).Let x 0 = a and x N = b.We then choose ε small enough such that We first define u ε ∈ W 1,1 (a, b) nearby the jumps of u by linear interpolation via (see the figure below).By construction, we have u ε L ∞ (a,b) ≤ K for all ε, and taking advantage of the fact that u is only modified on the intervals (x i − ε 2 − 2ε, x i + ε 2 + 2ε) for i ∈ {1, . . ., N − 1}, we also have We next define γ ε ∈ M(a, b) as γ ε = g ε L 1 , where g ε ∈ L 2 (a, b) is given by Therefore, we observe from the definition of u ′ ε that for every function ϕ This shows the convergence of {γ ε } ε to γ in the flat norm.It only remains to establish the energy estimate.From the construction of (u ε , γ ε ), we clearly have Due to the monotonicity of f and f (t) ≤ c 0 t for all t ≥ 0, we estimate the non-local energy term by With the continuity of u outside of the jump set J u we can pass to the limit ε → 0 on the right-hand side.In this way, we finally arrive at Remark 6.2.For a general stored energy function W as described in Remark 2.7 the above argument can be augmented with a standard relaxation step by adding to u ε a function So also in this case we have We next address the modification of γ.We extend the absolutely continuous part g outside of (a, b) by 0 and set g a,h := g * ψ h for all h > 0. Then we have g a,h ∈ C ∞ (R) for all h > 0 and Now, we set , ( 36), ( 37) and (38), implies lim sup This does not yet show (34), since u h L ∞ (a,b) ≤ K might not be satisfied for all h > 0. We resolve this problem in two steps.With u L ∞ (a,b) ≤ K and u h → u in L 1 (a, b), we can fix a sequence {η h } h in R + with η h → 0 + as h → 0 and We next define the truncated versions ũh (x) := min{max{u h (x), −K − η h }, K + η h } for all h > 0. We then have ũh → u in L 1 (a, b), Dũ h → Du in the flat norm and, in addition, also ũh L ∞ (a,b) ≤ K + η h for all h > 0. Correspondingly, we set γh := D j ũh + g h ½ {ũ h =u h } L 1 for all h > 0. By using (9) and by applying subsequently the Cauchy-Schwarz inequality, we get If we pass to the limit h → 0 on the right-hand side, the first term vanishes because of the uniform boundedness of u where the last two terms on the right-hand side vanish as h → ∞ by construction. Again, the case K = ∞ is a direct consequence. Γ-convergence for the minimal energies with respect to γ We finally prove the Γ-convergence result in Corollary 2.8 for the minimal energies with respect to the second variable γ, i.e., we consider the energies Ẽε and Ẽ from ( 5) and ( 6), respectively.We only treat the case K < ∞, the necessary modifications for K = ∞ are straightforward.Notice that, as a direct consequence of the fact that the function g * from (3) solves the optimization problem in (2), for every u ∈ BV (a, b) there holds Ẽ(u) = E(u, D s u + g * L 1 ) = E(u, γ opt ). (41) For completeness we state also the corresponding compactness result.We first show the Γ-lim inf inequality.We consider an arbitrary sequence {u ε } ε in L 1 (a, b) with u ε → u in L 1 (a, b), for which we may assume Ẽε (u ε ) ≤ C 0 for some positive constant C 0 and all ε.We then select a low energy sequence {γ ε } ε in M(a, b) with E ε (u ε , γ ε ) ≤ Ẽε (u ε ) + ε for every ε > 0. 
By passing to a subsequence if necessary, we may assume that lim inf a, b) is required.A particularly interesting choice of g for a given function u ∈ BV (a, b) constitutes the unique minimizer g * of the optimization problem to minimize b a |u ′ − g| 2 dx + b a c 0 |g| dx among all g ∈ L 1 (a, b). e. on (a − η, a) and u = u b a.e. on (b, b + η) and γ ((a − η, a) ∪ (b, b + η)) = 0.So if u does not satisfy the given boundary values on (a, b) in the limiting problem, this leads to an extra energy cost: Functions of bounded variation.A function u ∈ L 1 (a, b) is said to belong to the space BV (a, b) of functions of bounded variation if its distributional derivative is a finite Radon measure, i.e, if the integration-by-parts formula b a uϕ ′ dx = − b a ϕ dDu for every ϕ ∈ C 1 c (a, b) is valid for a (unique) measure Du ∈ M(a, b).The space BV (a, b) is a Banach space endowed with the norm u BV (a,b) := u L 1 (a,b) + |Du|(a, b), for the jump part (see [2, Theorem 3.83 and formula (3.90)]) we arrive at the decomposition , b) and thus, via Lemma 3.1, also in the flat norm.This shows γ = Du−wL 1 ∈ M(a, b), which in turn, by the Radon-Nikodým theorem, yields γ s = D s u and w = u ′ − g ∈ L 2 (a, b). Theorem is bounded independently of I by the uniform energy bound, we have indeed w ′ − g ∈ L 2 (a, b). Remark 4 . 4 . For later use we notice that Lemma 4.1 and the proof of Theorem 4.3 show that, under the assumptions of Theorem 4.3, for any open A ⋐ (a, b) \ {x 1 , . . ., x m−1 } Remark 4 . 5 . According to the compactness results of Theorems 4.2 and 4.3 we may restrict ourselves to pairs (u, γ)∈ BV (a, b) × M(a, b) with u L ∞ (a,b) ≤ K and (u, γ) ∈ BV ∞,P (a, b) × M(a, b), respectively, and such that the measure Du − γ, is absolutely continuous with respect to the Lebesgue measure with density in L 2 (F(u)).The statements of Theorems 2.1 and 2.3 are trivial otherwise. ( ii) A direct consequence of (i) is that lower bounds for A → E ′ (u, γ, A) transfer from intervals in (a, b) to arbitrary open subsets of (a, b), i.e., if for a positive Borel measure λ an estimate of the form E ′ (u, γ, A) ≥ λ(A) holds for all intervals A ⊂ (a, b), then the estimate actually holds for any open subset A ⊂ (a, b), cp.[29, Remark 4.6] for a similar statement. Proposition 5 . 2 . Let A be an open subset of (a, b).For every (u, γ) ∈ BV (a, b)×M(a, b) with γ = D s u + gL 1 and u ′ − g ∈ L 2 (a, b) we have the flat norm and Lemma 3.1, we then conclude γε + D s ṽε → γ in the flat norm and γε + D s ṽε * ⇀ γ in M(a, b). 1 → ε | dx + c 0 |D s ṽε |(a, b) ≥ b a |u ′ − g| 2 dx + c 0 b a |g| dx + c 0 |D s u|(a, b) ≥ b a |u ′ − g| 2 dx + c 0 b a |g| dx + c 0 |D c u|(a, b).Let us comment on the second-last inequality.For the first term we first deduce from the boundedness of the sequence {u ′ ε − g ε } ε in L 2 (a, b) combined with the convergences u ′ ε L Du and γ ε → γ = D s u + gL 1 in the flat norm that u ′ ε − g ε ⇀ u ′ − g in L 2 (a, b) and then employ the lower semicontinuity of the L 2 -norm with respect to weak convergence in L 2 (a, b).For the second and third term we use the weak- * convergence γε + D s ṽε * ⇀ γ = D s u + gL 1 in M(a, b) from ( Proposition 6 . 1 . 
γ) ∈ BV (a, b) × M(a, b) with u L ∞ (a,b) ≤ K, γ = D s u + gL 1 and u ′ − g ∈ L 2 (a, b) since the estimates are trivial otherwise.We first show the result for the particular case u ∈ SBV 2 (a, b) and then deduce the general result by approximation.For every (u, γ) ∈ SBV 2 (a, b) × M(a, b) with u L ∞ (a,b) ≤ K, γ = D s u + gL 1 and u ′ − g ∈ L 2 (a, b) we have Fig. : Fig.: Construction of the recovery sequence for a piecewise affine function with jump discontinuities. Corollary 7 . 1 ( Compactness of the minimal energies with respect to γ).Let {u ε } ε be a sequence in L 1 (a, b) with Ẽε (u ε ) ≤ C 0 for a positive constant C 0 and all ε > 0. There exists a function u∈ BV (a, b) with u L ∞ (a,b) ≤ K such that, up to a subsequence, {u ε } ε converges to u in L 1 (a, b).Proof.We choose a low energy sequence{γ ε } ε in M(a, b) with E ε (u ε , γ ε ) ≤ Ẽε (u ε ) + 1 for all ε.Since there holds E ε (u ε , γ ε ) ≤ C 0 + 1 for all ε, according to Theorem 4.2 there exists a function u ∈ BV (a, b) with u L ∞ (a,b) ≤ K such that u ε → u in L 1 (a, b).Proof of Corollary 2.8.It is again sufficient to establish the Γ-lim inf inequality and the Γ-lim sup-inequality only for u ∈ BV (a, b) with u L ∞ (a,b) ≤ K since the estimates are trivial otherwise. e. and γ ε → γ in the flat norm locally in F(u) on the complement of a finite set, i.e., γ ε A → γ A on each open set A ⋐ F(u) \ {x 1 , . . ., x m−1 } for some (x 0 , . . ., x m ) ∈ P. With this notion of convergence we have:
12,602.8
2021-06-11T00:00:00.000
[ "Mathematics" ]
Linearly-invariant families and generalized Meixner–Pollaczek polynomials The extremal functions f₀(z) realizing the maxima of some functionals (e.g. max |a₃| and max arg f′(z)) within the so-called universal linearly invariant family U_α (in the sense of Pommerenke [10]) have such a form that f′₀(z) looks similar to the generating function for the Meixner–Pollaczek (MP) polynomials [2], [8]. This fact gives motivation for the definition and study of the generalized Meixner–Pollaczek (GMP) polynomials $P_n^\lambda(x;\theta,\psi)$ of a real variable x as coefficients of $$G^\lambda(x;\theta,\psi;z) = \frac{1}{(1-ze^{i\theta})^{\lambda-ix}(1-ze^{i\psi})^{\lambda+ix}} = \sum_{n=0}^{\infty} P_n^\lambda(x;\theta,\psi)\,z^n, \qquad |z|<1,$$ where the parameters λ, θ, ψ satisfy the conditions: λ > 0, θ ∈ (0, π), ψ ∈ R. In the case ψ = −θ we have the well-known (MP) polynomials. The cases ψ = π − θ and ψ = π + θ lead to new sets of polynomials, which we call quasi-Meixner–Pollaczek polynomials and strongly symmetric Meixner–Pollaczek polynomials. If x = 0, then we have an obvious generalization of the Gegenbauer polynomials. The properties of (GMP) polynomials as well as of some families of holomorphic functions in |z| < 1 defined by the Stieltjes-integral formula, where the function $zG^\lambda(x;\theta,\psi;z)$ is a kernel, will be discussed. 1. Linearly-invariant families of holomorphic functions (1.1) $f(z) = z + a_2 z^2 + \dots$, z ∈ D. 2010 Mathematics Subject Classification. 30C45, 30C70, 42C05, 33C45. The properties of (GMP) polynomials as well as of some families of holomorphic functions in |z| < 1 defined by the Stieltjes-integral formula, where the function $zG^\lambda(x;\theta,\psi;z)$ is a kernel, will be discussed. A family M of holomorphic functions of the form (1.1) is linearly-invariant if it satisfies two conditions: (a) f′(z) ≠ 0 for any z in D (local univalence), (b) for any linear fractional transformation $\varphi(z) = e^{i\theta}\,\frac{z+a}{1+\bar a z}$, a, z ∈ D, θ ∈ R, of D onto itself, the function The order of the linearly-invariant family M is defined as The universal invariant family U_α is defined as It is well known that α ≥ 1 and U₁ ≡ S^c = the class of convex univalent functions in D, and the familiar class S of all univalent functions is strictly included in U₂. Moreover, for every α > 1, the class U_α contains functions which are infinitely valent in D [10], for example: Another example of such a function was presented in [15]: for which Functions of the form (1.2) appear to be extremal for the long-standing problems: max recently solved by Starkov [14], [15], who proved that the extremal function for max |a₃| is of the form (1.2) with t₁ = θ, t₂ = −θ, where However, the extremal function 15]). We see that the extremal function for max_{f∈U_α} |a₃| has a special form leading to (MP) polynomials, but the extremal function for max_{f∈U_α} |arg f′(z)| leads to (GMP) polynomials, defined below. (ii) The Cauchy product for the power series and (iiii) Putting (x + i) and (x − i) instead of x into the generating function (2.1), we find that Differentiation of the generating function (2.1) with respect to z and comparison of the coefficients at z^{n−1} yields: which together with (2.6) gives (2.5). The first four polynomials P_n^λ are given by the formulas: Corollary 1. Corollary 3.
(i) The (QMP) polynomials Q λ n = Q λ n (x; θ) satisfy the threeterm recurrence relation: are given by the formula: (iiii) The polynomials y(x) = Q λ n (x; θ) satisfy the following difference equation The polynomials S λ n = S λ n (x; θ) are given by the formula: (iii) The polynomials S λ n = S λ n (x; θ) have the hypergeometric representation (iiii) The polynomials y(x) = S λ n (x; θ) satisfy the following difference equation Theorem 2.2.The polynomials Q λ n (x; θ) are orthogonal on (−∞, +∞) with the weight and In the proof we use the following lemmas. Lemma 4. For arbitrary polynomial Proof.Using hypergeometric representation for Q λ n (x; θ) we can write (4 cos 2 θ) Using the well-known formula: Therefore by (2.7) Using hypergeometric representation for Q λ n (x; θ) we can write where which ends the proof after some obvious simplifications. Remark 1.In the case x = 0 we can obtain "more pleasant" sets of "polynomials": for which one can prove the following. In special case τ = 0, λ = 1, i.e.T = T(1, 0), we are able to find explicitly the radius of local univalence and the radius of univalence of T which differ from the corresponding values in the class T = T(1, 0). The same remarks concern also the sets of polynomials S 0 (x, θ) = lim where µ is a probability measure on ∆ = (0, π) × R.
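Since most of the displayed formulas in this extract were lost, the GMP polynomials can be regenerated symbolically from the generating function quoted in the abstract. The following is an illustrative sketch (SymPy; the variable names and the helper gmp are my own choices, not code from the paper):

```python
import sympy as sp

z, x = sp.symbols('z x')
lam, theta, psi = sp.symbols('lambda theta psi', real=True)

# Generating function G^lambda(x; theta, psi; z) from the abstract
G = ((1 - z*sp.exp(sp.I*theta))**(-(lam - sp.I*x))
     * (1 - z*sp.exp(sp.I*psi))**(-(lam + sp.I*x)))

def gmp(n):
    """n-th GMP polynomial: the n-th Taylor coefficient of G at z = 0."""
    return sp.simplify(sp.diff(G, z, n).subs(z, 0) / sp.factorial(n))

# P_1^lambda(x; theta, psi) = (lambda - i x) e^{i theta} + (lambda + i x) e^{i psi}
print(gmp(1))

# Classical Meixner-Pollaczek case psi = -theta:
# expected 2*(lambda*cos(theta) + x*sin(theta))
print(sp.simplify(sp.expand(gmp(1).subs(psi, -theta)).rewrite(sp.cos)))
```

Taking successive coefficients in this way recovers the first few polynomials whose explicit formulas are missing above, and substituting ψ = π − θ or ψ = π + θ gives the quasi- and strongly symmetric variants.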
1,188.6
2013-06-01T00:00:00.000
[ "Mathematics" ]
Experimental ion mobility measurements in Xe-CH4 Data on ion mobility are important for improving the performance of large volume gaseous detectors. In the present work, the method, experimental setup and results for the ion mobility measurements in Xe-CH₄ mixtures are presented. The results for this mixture show the presence of two distinct groups of ions. The nature of the ions depends on the mixture ratio, since they originate from both Xe and CH₄. The results presented here were obtained for low reduced electric fields, E/N, 10–25 Td (2.4–6.1 kV·cm⁻¹·bar⁻¹), at low pressure (8 Torr, 10.6 mbar), and at room temperature. Introduction Measuring the mobility of ions in gases is relevant in several areas from physics to chemistry, e.g. in the modelling of gaseous radiation detectors and in the understanding of pulse shape formation [1][2][3], and also in IMS (Ion Mobility Spectrometry), a technique used for the detection of narcotics and explosives [4]. One of these examples is the so-called Transition Radiation Detectors (TRDs), used for particle identification at high momenta [5,6]. The choice of the gas mixture for such detectors is determined by several parameters such as high electron/ion velocity and low electron diffusion, which are of key importance [3] as they influence the rate capability and signal formation of TRDs of the Multi-Wire Proportional Chamber (MWPC) type [7]. Xenon (Xe) is considered to be the best choice as the main gas, while the choice of the best quencher is not unanimous [3]. One effective quenching gas is methane (CH₄) but, due to its flammability, its usage is limited [3]. Still, xenon-methane (Xe-CH₄) mixtures are used in high energy physics experiments such as DØ [8,9], HERMES [10] and the PHENIX TRDs [11,12], making it important to have detailed information on the transport properties of ions in these gas mixtures. The experimental setup used in the present work (described in detail in [13]) allows the measurement of ion mobility in gas mixtures. Initially designed for high-pressure operation, it was converted into a low-pressure gas system. Lowering the operation pressure provided a wider scope of application and more detailed information on the fundamental processes involved in ion transport, and also allowed the inherent operation cost to be reduced. Still, the results have been consistently in accordance with data obtained at higher pressure [14]. Ion mobility Under a weak and uniform electric field a group of ions will eventually reach a steady state. Under these conditions, the average velocity of this group of ions, also known as the drift velocity v_d, is proportional to the electric field intensity [4]: v_d = K E, where K is the mobility of the ions, expressed in units of cm²·V⁻¹·s⁻¹, and E the intensity of the drift electric field. The ion mobility, K, is normally expressed in terms of the reduced mobility K₀ = K (N/N₀), with N the gas number density and N₀ the Loschmidt number (N₀ = 2.68678 × 10¹⁹ cm⁻³ for 273.15 K and 101.325 kPa according to NIST [26]). The mobility values are commonly presented as a function of the reduced electric field E/N in units of Townsend (1 Td = 10⁻¹⁷ V·cm²). Langevin's theory According to Langevin's theory [27], one limiting value of the mobility is reached when the electrostatic hard-core repulsion becomes negligible compared to the neutral polarization effect [28].
This limit is given by the following equation, where α is the neutral polarisability in cubic angstroms (α = 4.044 Å³ for Xe [29] and α = 2.62 ± 0.01 Å³ for CH₄ [30]¹) and µ is the ion-neutral reduced mass in unified atomic mass units. Although the Langevin limit applies rigorously to real ion-neutral systems only in the double limit of low E/N and low temperature, it still predicts the low-field mobility at room temperature with relatively good accuracy [28], which is the case in our experimental conditions. Although generally accepted, Langevin theory has some known limitations in its application, namely with ions that undergo resonant charge transfer, where it fails to provide correct values for the ion's mobility [14]. Blanc's law In binary gaseous mixtures Blanc's law has proven to be most useful when determining the ions' mobility. According to this law the reduced mobility of the ion in the binary mixture, K_mix, can be expressed as follows: 1/K_mix = f₁/K_g1 + f₂/K_g2, where K_g1 and K_g2 are the reduced mobilities of that same ion in an atmosphere of 100% of gas #1 and #2, respectively, and f₁ and f₂ are the molar fractions of each gas in the binary mixture [31]. ¹ Fundamental information on molecular polarizabilities. Method and experimental setup The mobility measurements presented in this study were obtained using the experimental system described in [13]. A UV flash lamp with a frequency of 10 Hz emits photons that impinge on a 250 nm thick CsI film deposited on top of a GEM that is inside a gas vessel. The photoelectrons released from the CsI film are guided through the GEM holes, ionizing the gas molecules encountered along their paths. While the electrons are collected at the bottom GEM electrode, the cations formed will drift across a uniform electric field region towards a double grid; the first one acts as a Frisch grid while the second one, at ground voltage, collects the ions. A pre-amplifier is used to convert the charge collected into a voltage signal, and the time spectra are recorded in a digital oscilloscope. After the background subtraction from the signal, Gaussian curves are fitted to the time of arrival spectra, from which the peak centroids are obtained. Since the peak centroids correspond to the average drift time of the ions along a known fixed distance (4.273 cm), the drift velocity and mobility can then be calculated. One important feature of the system is the capability of controlling the voltage across the GEM (V_GEM), which limits the maximum energy gained by the electrons as they move across the GEM holes, narrowing the variety of possible primary ions produced. Identifying the primary ions makes it possible to pinpoint secondary reaction paths that lead to the identification of the detected ions. Since impurities play an important role in the ions' mobility, before each experiment the vessel was vacuum pumped down to pressures of 10⁻⁶ to 10⁻⁷ Torr and a strict gas filling procedure was carried out. No measurement was considered until the signal stabilised, and all measurements were done within a 2–3 minute time interval to ensure minimal contamination of the gas mixture, mainly due to outgassing processes. The method described, together with knowledge of the dissociation channels, product distributions and rate constants, represents a valid, although elaborate, solution to the ion identification problem, which has been providing correct and consistent results for several gas mixtures.
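As a concrete illustration of the reduction described above (peak-centroid drift time over the fixed 4.273 cm gap → drift velocity → reduced mobility), the following short script shows the arithmetic; the numerical values of drift time, field and pressure are hypothetical placeholders, not measured data from this work:

```python
# Reduced-mobility calculation from a measured drift time (illustrative values only)
N0 = 2.68678e19      # Loschmidt number, cm^-3 (273.15 K, 101.325 kPa)
TORR_TO_PA = 133.322
KB = 1.380649e-23    # Boltzmann constant, J/K

d_cm   = 4.273       # drift distance from the setup description, cm
t_s    = 1.0e-3      # hypothetical peak-centroid drift time, s
E_Vcm  = 10.0        # hypothetical drift field, V/cm
p_torr = 8.0         # working pressure, Torr
T_K    = 293.0       # room temperature, K

v_d = d_cm / t_s                                  # drift velocity, cm/s
K   = v_d / E_Vcm                                 # mobility, cm^2 V^-1 s^-1
N   = p_torr * TORR_TO_PA / (KB * T_K) * 1e-6     # gas number density, cm^-3
K0  = K * N / N0                                  # reduced mobility, cm^2 V^-1 s^-1
E_over_N_Td = E_Vcm / N / 1e-17                   # reduced field, Td (1 Td = 1e-17 V cm^2)

print(f"v_d = {v_d:.3g} cm/s, K0 = {K0:.3g} cm^2/(V s), E/N = {E_over_N_Td:.3g} Td")
```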
Results and discussion The mobility of the ions produced in Xe-CH₄ mixtures has been measured for different reduced electric fields E/N (from 10 Td up to 25 Td), at 8 Torr pressure and at room temperature (293 K). The range of the reduced electric field values used to determine the ions' mobility is limited by two distinct factors: one is the electric discharges that occur at high E/N values; the other is the observed deterioration of the time of arrival spectra for very low values of E/N (below 5 Td or 1.2 kV·cm⁻¹·bar⁻¹), which has been attributed to collisions between the ions and impurity molecules. Previous work on the mobilities and ionization processes of Xe [13] and CH₄ [22] in their parent gases has already been performed in our group. The range of E/N values considered in this work is within the validity conditions of Blanc's law, that is, in the region of low E/N [32,33]. Xenon (Xe) Regarding the pure xenon (Xe) case, only one peak is observed for an electron impact energy of about 20 eV, using a reduced electric field of 15 Td and a pressure of 8 Torr at room temperature. The ion responsible for the peak observed is the Xe dimer ion (Xe₂⁺). While the atomic ion (Xe⁺) is a direct result of electron impact ionization [34], Xe₂⁺ is the result of the following reaction: Xe⁺ + 2Xe → Xe₂⁺ + Xe. Methane (CH₄) In pure CH₄, two peaks were observed and reported in a previous work [17]. These two peaks were identified as corresponding to CH₅⁺ (peak with higher mobility) and to a 2-carbon ion group (C₂Hₙ⁺) plus C₃H₇⁺ (peak with lower mobility), which result from reactions involving the primary ions and CH₄ molecules. Other authors have observed three peaks under the same conditions, as already discussed in [17]. These primary ions can be found in table 1, where we summarize the possible reactions due to electron impact in CH₄ for electron energies of 20 eV, together with the respective cross sections, appearance energies and the product distribution. The probabilities presented were obtained using the cross sections for CH₄ primary ionization products and the CH₄ total cross section provided in [38], which allowed us to infer the product distribution of the primary ionization. Table 2 presents a summary of the chemical reactions, their product distribution, and respective reaction rates for the reactions between the primary ions displayed in table 1 and CH₄ molecules, at room temperature [40]. The reactions presented in tables 1 and 2 corroborate the explanation of the results obtained for pure CH₄, justifying the attribution of the most intense peak to CH₅⁺ (lighter, thus with higher mobility) and the smaller one to a group of ions which includes C₂H₄⁺, C₂H₅⁺ and C₃H₇⁺ (heavier, thus with lower mobility). Xe-CH₄ mixture In xenon-methane (Xe-CH₄) mixtures, two distinct groups of ions are observed for all mixture compositions studied, from pure Xe to pure CH₄, as can be seen in figure 1, where the drift spectra for several Xe-CH₄ mixtures (10%, 50%, 70% and 95% of Xe) are displayed, at 8 Torr, 293 K and 15 Td with a V_GEM of 20 V. The ions responsible for the several peaks were found to depend on the mixture ratio, suggesting that they originate from both CH₄ and Xe.
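Since the Langevin polarization limit (eq. (1.3)) is invoked below to explain the shift of the peaks with the ion–neutral reduced mass, its standard form is recalled here; the displayed equation was lost in extraction, so this is the usual literature version with the numerical prefactor quoted only approximately:

$$K_{\mathrm{pol}} \approx \frac{13.9}{\sqrt{\alpha\,\mu}}\ \ \mathrm{cm^2\,V^{-1}\,s^{-1}},$$

with α the neutral polarizability in Å³ and μ the ion–neutral reduced mass in unified atomic mass units, so that heavier neutrals (larger μ, as when Xe replaces CH₄) give lower mobilities; see [27, 28] for the exact prefactor.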
Looking at figure 1, there are two relevant aspects, both a result of increasing the Xe concentration in the mixture: one is the decrease in mobility of the different ions observed, and the other is the change of the dominant ion species present, which can be identified by a decrease in the area of the group of ions with higher mobility and an increase in the peak area of the second group of ions, indicating that the faster group of ions is generated from CH₄ molecules while the second, slower group originates from Xe atoms. As for the shift of the peaks towards higher drift times in the drift spectrum (decreasing ion mobility) with increasing Xe concentration, it can be explained by the lower CH₄ mass when compared to that of the Xe atom, which implies a lower reduced mass (µ in the Langevin limit eq. (1.3)) in ion–neutral collisions, thus a lower mobility. From the experimental results it is possible to conclude that the ions observed depend on the relative abundance of the gases. Starting from pure CH₄ and up to 5% Xe, only two peaks are observed. The ions responsible for these peaks are the two groups identified in pure CH₄: CH₅⁺ (the peak with higher mobility) and C₃H₇⁺ (the most intense peak). As mentioned previously, CH₅⁺ originates from CH₄⁺ through the reaction CH₄⁺ + CH₄ → CH₅⁺ + CH₃, whose probability may be further enhanced by the charge transfer of Xe⁺ to CH₄. C₃H₇⁺ comes from a two-step reaction. The first is the production of C₂H₅⁺, an intermediary product of CH₄, which will further react with CH₄ leading to the formation of C₃H₇⁺: C₂H₅⁺ + CH₄ → C₃H₇⁺ + H₂. Increasing the Xe concentration from 5% up to 20% causes the appearance of a new peak, as can be observed in figure 1 for 10% Xe, with lower mobility (at the right side of the C₃H₇⁺ peak). This peak is clearly related to the availability of Xe, since its area increases with Xe concentration. Between 5% and 20% Xe, reaction (3.1) is very slow, and is not completed during the drift time of the group of ions. So, Xe⁺ is expected to be the ion responsible for the appearance of this peak. Once formed, and since the ionization energy of Xe⁺ is lower than that of the ions originating from CH₄, it is expected that Xe⁺ will not transfer its charge to CH₄ (see tables 3 and 1). Between 20% and 80% Xe, in addition to the already mentioned peaks, another one starts to appear at the left side of the peak attributed to C₃H₇⁺. The ion considered to be responsible for this peak is C₂H₅⁺, which has lower mass and thus is expected to have higher mobility. This ion results from the longer reaction time for the formation of C₃H₇⁺ (eq. (3.4)) at lower CH₄ abundance, allowing C₂H₅⁺ ions to reach the collecting grid. Considering the Xe ions' group, the increasing concentration of Xe in the mixture will lead to the formation of Xe₂⁺, which will replace Xe⁺. Finally, for Xe concentrations above 80% and up to pure Xe, only one peak is observed which, from the evolution observed, was attributed to Xe₂⁺, leading to higher diffusion. Another relevant feature that can also be observed in figure 1 is the variation of the FWHM, which is seen to increase with Xe concentration, and that can be related to the higher Xe mass.
The evolution of the proportion of the peaks observed (from CH₄ and Xe ions) with the mixture composition can be explained by analysing the total cross sections for electron impact ionization of Xe and CH₄. From tables 3 and 1 it can be seen that, at an electron energy of about 20 eV, the ionization probability for Xe is about 1.5 times higher than for CH₄. It is thus expected that, even at lower Xe concentrations (down to 40% of Xe), Xe ions will still be preferentially produced. Although energetically favoured, references to the charge transfer between CH₄⁺ and Xe (1) were not found in the literature. The only related reference found was to a charge transfer from a doublet state of Xe⁺ to CH₄ (2) [42]. Nevertheless, the prevalence of the charge transfer reaction represented by (1) would reinforce the experimental observations made, corroborating the presence and abundance of Xe ions at low Xe concentrations. From 100% down to 20% Xe, in figure 2, the experimental values obtained for the several ions identified follow Blanc's law, with the exception of those for CH₅⁺ and Xe₂⁺, which deviate from it. In the CH₅⁺ case, this can indicate the increasing presence of CH₄⁺ instead of CH₅⁺ due to the longer reaction time for the formation of CH₅⁺ for Xe percentages above 20%. Since CH₄⁺ is likely to be affected by resonant charge transfer, this can explain the measured mobility values below those expected from Blanc's law. As for Xe₂⁺, whose mobility is seen to be lower than expected from Blanc's law, one possible explanation for this behaviour is the influence of its predecessor ion, Xe⁺, whose mobility is affected by resonant charge transfer, a phenomenon not accounted for in Blanc's law. The ion mobility values measured were seen to vary with the relative abundance of the gases, but no significant variation of the mobility was observed in the range of E/N (10-25 Td) studied. Table 4 summarizes the results obtained. Conclusion In the present work we measured the reduced mobility of ions produced by electron impact in Xe-CH₄ mixtures at a pressure of 8 Torr, low reduced electric fields (10–25 Td) and different mixture ratios. The experimental results show that, for the range of concentrations studied, two different groups of ions were identified, one belonging to ions from CH₄ (CH₅⁺, C₂H₅⁺ and C₃H₇⁺) and the other to Xe ions (Xe⁺ and Xe₂⁺). The presence, abundance and mobility of the different ions present were seen to vary over the range of mixtures studied. Increasing the CH₄ concentration in the mixture resulted in a higher mobility of the ions observed, with the behaviour roughly following Blanc's law through the entire range for the different ions proposed. It is our intention to extend the work on ion mobility using different mixtures of known interest (for the applications already referred to) such as Ar-CF₄, Ar-CF₄-isobutane, Ne-CF₄ and Xe-CF₄.
Semaphorin 4D Induces an Imbalance of Th17/Treg Cells by Activating the Aryl Hydrocarbon Receptor in Ankylosing Spondylitis Objectives Semaphorin 4D (Sema4D) is constitutively expressed on T cells and osteoclasts, and regulates T cell proliferation and bone remodeling. In addition, several studies have shown that Sema4D is involved in the pathogenesis of autoimmunity. We undertook this study to investigate the mechanism by which Sema4D affects the pathogenic progress of ankylosing spondylitis (AS). Methods Soluble Sema4D (sSema4D) levels in serum were analyzed by enzyme-linked immunosorbent assay. The cell surface levels and transcripts of Sema4D were evaluated in CD4+ and CD19+ cells from AS patients and healthy individuals. The mRNA expression levels were assessed by quantitative polymerase chain reaction (qPCR). The proportions of Treg cells and IL-17-producing T cells (Th17 cells) differentiated from CD4+ T cells were analyzed by flow cytometry. The aryl hydrocarbon receptor (AhR) agonistic effect of Sema4D was assessed by analyzing the activation of downstream signaling pathways and target genes using luciferase and EROD assays. Results Levels of sSema4D were elevated in serum from AS patients, and markers of clinical features were correlated with serum sSema4D levels. Sema4D facilitated CD4+ T cell proliferation and Th17 cell differentiation and inhibited Treg cell differentiation by enhancing RORγt expression and reducing Foxp3 expression, with increased expression and secretion of IL-17 and IL-22. It induced the expression and activity of the AhR target gene CYP1A1 and XRE reporter activity via interaction with CD72. Conclusion These findings indicate that Sema4D, as a potent activator of T cells in the immune response, contributes to the inflammation of AS by inducing an imbalance in Th17 and Treg cell populations in an AhR-dependent manner, suggesting that it is a crucial participant in AS pathogenesis. INTRODUCTION Ankylosing spondylitis (AS) is a chronic autoinflammatory rheumatic disease characterized by immunoinflammatory responses and abnormal bone remodeling. Features such as inflammatory cytokine production, syndesmophytes, erosions and osteoporosis are manifestations of progressive AS (1,2). Accumulating findings have demonstrated that T helper 17 (Th17) cells, a subgroup of CD4+ T cells, secrete interleukin (IL)-17, enhance proinflammatory effects and aggravate AS. Conversely, regulatory T (Treg) cells contribute critically to protection against pathogenic T cell responses and the maintenance of dominant immune homeostasis (3). Recent studies indicate that AS is significantly associated with the number of peripheral blood Th17 cells (4). Increasing numbers of reports have verified that the ratio of Th17 to Treg cells is elevated in AS patients, suggesting that the balance between Th17 and Treg cells may be a crucial factor for AS development (4,5). In addition, inflammatory cytokines, immune cells and osteocytes might be responsible for bone remodeling in AS. The pathophysiology of AS is one of abnormal bone metabolism characterized by pathological loss of trabecular bone in the center of the vertebral bodies accompanied by new bone formation in the cortical regions of the vertebrae (6,7). To date, the exact pathogenesis of immune activation and abnormal bone remodeling in AS remains obscure.
To allow AS patients to attain true disease remission, seeking another pivotal molecular player that promotes immune activation and bone loss in AS is an indispensable task. Accumulating evidence has shown that the aryl hydrocarbon receptor (AhR) (8,9), a ligand-dependent transcription factor, is crucial for eliciting the responses and developing the immune function of Th17 and Treg cells, performing a particularly critical function in autoimmune diseases such as rheumatoid arthritis (RA). Recent studies have clarified the function of AhR in regulating Th17 and Treg cell differentiation. Depending on the type of ligand, activation of AhR can induce Th17 or Treg cell differentiation, leading to aggravation of proinflammatory or immunosuppressive responses, respectively, in autoimmune diseases (10-12). AhR activation by its high-affinity ligand 6-formylindolo[3,2-b]carbazole (FICZ) may induce Th17 cells and promote IL-17 production (13). However, the effect of AhR on Th17 and Treg cell differentiation in AS has not been well studied. The semaphorins are a family of more than 20 proteins that are implicated in cell-to-cell communication and can be divided into eight main classes (14). Notably, recent research on semaphorins demonstrated that these proteins are critical for the immune response and bone metabolism (15,16). In the central nervous system, semaphorin 4D (Sema4D) was originally identified as an axon guidance molecule, but in the immune system, it was originally identified as a T cell activation marker that is constitutively expressed on T cells and regulates T cell priming (17,18). Sema4D has been shown to be implicated in the development of rheumatoid arthritis (19). Interestingly, Sema4D was recently identified in the bone microenvironment and demonstrated to be a product of osteoclasts that acts on osteoblasts to inhibit bone formation, thereby disrupting bone homeostasis in favor of resorption. Knockdown of Sema4D prevents bone loss, suggesting that Sema4D may be a new and potentially effective target for drugs that promote bone formation (20,21). Immune and bone metabolism abnormalities have critical roles in the progression of AS, suggesting that Sema4D might exacerbate AS. However, the involvement of Sema4D in the pathogenesis of AS has not yet been verified. Based on the previously reported role of Sema4D, we hypothesized that Sema4D may be involved in regulating Th17 and Treg cell differentiation through AhR activation, causing an inflammatory response and inducing the bone remodeling process in AS patients. Although extensive studies have addressed the physiological and pathological effects of Sema4D on many autoimmune diseases, the potential role of Sema4D in immunoregulation and bone remodeling in AS pathogenesis has not yet been reported. Thus, the present study was conducted to clarify the potential role of Sema4D in AS patients and to explore the underlying mechanism by which Sema4D could enhance the differentiation of proinflammatory Th17 cells and suppress the differentiation of anti-inflammatory Treg cells through activation of AhR, which requires the Sema4D-CD72 interaction. Study Subjects Serum samples from 56 patients with AS and 43 age- and sex-matched healthy controls were collected with informed consent in accordance with the principles of the Declaration of Helsinki for research involving human subjects and with approval from the local ethics committee of Nanjing Medical University.
The AS patients fulfilled the 1984 modified New York criteria for the diagnosis of AS and were naïve to systemic treatment (22,23). No subject had other systemic diseases, autoimmune diseases, active infectious processes, a known history of bone fractures in the previous 24 months, a history of neurological cognitive disease, or a history of osteoporosis. To compare sema4D expression levels before and after anti-TNF-α treatment, we monitored the expression of sema4D in 20 patients who were treated with a TNF-α blocker after enrollment in the present study. Patient serum specimens were stored at −80 • C until analysis as described below. PBMCs from healthy donors and patients with AS were isolated by discontinuous density gradient centrifugation, washed twice in sterile phosphate-buffered saline (PBS) and resuspended at a concentration of 1 × 10 6 cells/ml PBS. Clinical and Laboratory Assessment We recorded the disease duration, BMI, sex, age, and extraarticular manifestations of AS patients. The disease activity of AS was assessed by the CRP level and Bath AS disease activity index (BASDAI). BAP (San Diego, CA, United States), osteocalcin (Nordic Biosciences, Herlev, Denmark), TRACP-5b and C-terminal cross-linking telopeptide of type I collagen (CTX) (Nordic Biosciences) were evaluated as markers of bone turnover. Serum BAP and TRACP-5b levels were measured by enzyme-linked immunosorbent assay (ELISA). Osteocalcin and CTX levels were measured by electrochemiluminescence immunoassays (ECLIAs) (24). The intraassay and interassay coefficients of variation were less than 9% for all parameters. The assay was performed according to the manufacturer's instructions. Complete blood counts were obtained, routine biochemical analyses were conducted, and HLA-B27 was evaluated. Serum Sema4D Levels and Cytokine Analysis The serum concentrations of Sema4D were determined using commercially available ELISA kits (MyBio-Source, San Diego, CA, United States). Assessment was performed according to the manufacturer's instructions . The levels of soluble IL-17, IL-22 and IL-10 were measured using ELISA kits specific for each cytokine (R&D Systems, Minneapolis, MN, United States). All measurements were performed in triplicate. CD4 + T Cell Purification and Stimulation PBMCs were isolated from blood samples freshly obtained from AS patients and healthy controls in lymphocyte separation medium (San Diego, CA, United States) according to the manufacturer's instructions. Isolation of CD4 + T cells from PBMCs was performed by magnetic cell separation. MACS CD4 microbeads (Miltenyi Biotec, Auburn, CA, United States) were incubated with PBMCs and applied to a MidiMACS separation column (Miltenyi Biotec, Auburn, CA, United States) according to the manufacturer's instructions. The purity of the isolated CD4 + T cells was determined by flow cytometry to be >95% for each population. The purified CD4 + T cells were seeded in 96-well plates at 1 × 10 6 cells per well and were then stimulated with or without soluble human Sema4D-Fc fusion protein (PeproTech, Rocky Hill, NJ, United States), anti-Plexin B2 (PLXNB2) antibody, anti-Plexin B1 (PLXNB1) antibody or anti-CD72 ligation antibody (BU40; Santa Cruz Biotechnology, United States). To assay Th17 cell differentiation in vitro (25), cells were induced to differentiate into Th17 cells with anti-CD3 (2 µg/ml, plate-bound) and anti-CD28 (2 µg/ml, soluble) antibodies. 
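The serum Sema4D, cytokine, and bone-marker levels described above are obtained from ELISA (or ECLIA) plate readouts converted through a standard curve; the kits' exact quantification procedures are not given here, so the following is only a generic sketch of a four-parameter logistic (4PL) back-calculation with invented calibrator values.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL standard curve: a = response at zero concentration, d = response at saturation,
    c = inflection point (EC50-like), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibrator concentrations (ng/ml) and optical densities, not kit values
std_conc = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)
std_od = np.array([0.08, 0.15, 0.27, 0.50, 0.90, 1.45, 1.95])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 8.0, 2.2], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL curve to back-calculate a sample concentration from its OD."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.7, *params))  # concentration estimate for a sample reading OD = 0.7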
For blocking assays, cells were cocultured with 10 ng/ml Sema4D-Fc and 10 ng/ml anti-Sema4D antibody or isotype-matched control IgG for 48 h. The concentrations of human IL-10, IL-22 and IL-17 in culture supernatants were determined by ELISA. At the end of the stimulation period, cells were collected and analyzed by flow cytometry. Quantitative RT-PCR analysis (qRT-PCR) was performed as described above. Flow Cytometric Analysis CD4+ T cells were harvested before and after stimulation. Cell surface markers were stained with the indicated labeled antibodies against the indicated cell surface antigens. Cells were prepared in heparinized tubes by Ficoll-Paque density gradient centrifugation and were then analyzed on a FACSCanto (Invitrogen, Carlsbad, CA, United States) using FlowJo software (Tree Star) according to the manufacturer's instructions. The following antibodies were used for flow cytometry to analyze the cell types and cytokine production: PE-CD4, Foxp3-APC, CD25-PE and FITC-IL-17A (BioLegend, CA, United States). FITC-, PE- and APC-labeled mouse IgG antibodies were utilized as isotype controls (BioLegend, CA, United States). Proliferation Assay For the proliferation assay, isolated CD4+ T cells were labeled with a CellTrace CFSE Cell Proliferation Kit (Invitrogen, Carlsbad, CA, United States) at a final concentration of 4 µM. CFSE-labeled CD4+ T cells were incubated under the described conditions. A total of 1 × 10^6 CFSE-labeled T cells were seeded into a flat-bottom 96-well plate. Soluble anti-Sema4D (see above), soluble anti-CD72 (BioLegend, San Diego, CA, United States), or matched isotype antibodies were added as indicated. T cell proliferation was recorded after 3 and 5 days based on CFSE dilution as measured using flow cytometry. Western Blot Assay Cells were collected after induction, and cell lysate was prepared from 1 × 10^7 cells. The Western blot assay was performed according to the manufacturer's protocols. Transfection CD4+ T cells were transfected with siAhR for 24 h using Lipofectamine 2000 according to the manufacturer's protocols. SiGENOME RISC-free Control siRNA was used as the control. The cells were then rinsed and exposed to 10 ng/ml Sema4D in fresh media for 24 h (26). Ethoxyresorufin-O-Deethylase (EROD) Activity EL-4 cells were plated into 6-well plates at a density of 1 × 10^6 cells/ml; stimulated with Sema4D, anti-Plexin B1 antibody, anti-Plexin B2 antibody, anti-CD72, CH223191, and FICZ either alone or in combination; and incubated for 24 h. The supernatant was then collected. Then, CYP1A1 activity was measured with an EROD enzyme assay, as previously described (28). Fluorescence intensity was detected by using a FL600 plate reader (Biotek, Winooski, VT, United States), with excitation at 530 nm and emission at 590 nm. Statistical Analysis Statistical significance was calculated using SPSS 20.0. The nonparametric Mann-Whitney U test was used for comparisons between 2 groups, and comparisons among 3 groups were performed using the Kruskal-Wallis test followed by the Mann-Whitney U test. Correlation analysis was performed using the Pearson correlation test. For all statistical analyses, p values of less than 0.05 were considered significant. Sema4D Levels Were Significantly Elevated in Patients With AS Recently, the effects of Sema4D on the immune system have been shown to play critical roles in diverse pathological processes in many chronic inflammatory diseases, such as RA.
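Before turning to the results, here is a minimal sketch of the comparisons described under Statistical Analysis above; the study used SPSS 20.0, so scipy is used purely for illustration, and all numbers are hypothetical.

import numpy as np
from scipy import stats

# Hypothetical serum sSema4D values (ng/ml) for illustration only
as_patients = np.array([60.2, 71.5, 66.3, 58.9, 73.0, 64.8])
controls = np.array([25.1, 28.4, 22.9, 30.2, 27.5])

# Two-group comparison: nonparametric Mann-Whitney U test
u_stat, p_two_groups = stats.mannwhitneyu(as_patients, controls, alternative="two-sided")

# Three-group comparison: Kruskal-Wallis, followed by pairwise Mann-Whitney tests
group3 = np.array([40.0, 45.3, 38.7, 42.1, 47.9])
h_stat, p_three_groups = stats.kruskal(as_patients, controls, group3)

# Correlation between sSema4D and a clinical score (e.g. BASDAI): Pearson's r
basdai = np.array([4.1, 5.6, 4.9, 3.8, 6.0, 4.5])
r, p_corr = stats.pearsonr(as_patients, basdai)

print(p_two_groups, p_three_groups, r, p_corr)  # p < 0.05 treated as significant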
However, the specific role of Sema4D in modulating immune inflammation in AS has not yet been elucidated. To investigate the pathologic implications of Sema4D in patients with AS, we measured serum concentrations of Sema4D in AS patients and healthy controls. As shown in Figure 1A, serum concentrations of soluble Sema4D were significantly higher in AS patients than in healthy controls (mean ± SD 66.7 ± 6.9 ng/ml vs 26.8 ± 3.9 ng/ml; p < 0.01). Importantly, serum Sema4D levels were significantly decreased after 2 months of treatment with a TNF-α blocker. Previous reports showed that the levels of IL-17 in AS patients were significantly higher but those of sclerostin were significantly lower than the corresponding levels in controls. The present results are consistent with the previous reports. The clinical data and baseline demographic informations of participants are presented in Table 1. Serum concentrations of soluble Sema4D were significantly correlated with Bath AS disease activity index (BASDAI), CRP and serum IL-17 levels (r = 0.422 and p < 0.001 for BASDAI; r = 0.330 and p < 0.001 for CRP; r = 0.561 and p < 0.001 for IL-17) (Figures 1B-D). However, no significant correlation was found between the levels of soluble Sema4D and sclerostin (not shown). Correlation Between the Serum Sema4D Levels and Serum Bone Remodeling Marker Levels Accumulating data have identified Sema4D as a product of osteoclasts, which skew bone homeostasis toward resorption and are involved in the bone remodeling process (21,29). To determine whether the serum Sema4D concentration is correlated with serum bone remodeling marker levels in AS patients, we measured the serum levels of bone remodeling markers. The serum levels of BAP, TRAP 5b, CTX-I and OC in AS patients were significantly elevated compared with those in healthy controls (1123 ± 98.7 vs 877.9 ± 49.5 pg/ml, p < 0.001; 1869 ± 169.3 vs 1672 ± 112.8 pg/L, p < 0.001; 1.92 ± 0.45 vs 1.2 ± 0.23 ng/ml, p < 0.01 and 14.2 ± 4.7 vs 13.1 ± 5.3, p = 0.057, respectively). More importantly, significant positive correlations were found between the serum Sema4D concentration and serum levels of the bone turnover markers CTX-I, TRAP 5b and BAP (r = 0.347, p < 0.001; r = 0.433, p < 0.001; r = 0.293, p = 0.009, respectively) (Figures 1E-G), whereas no correlations were found between the serum Sema4D concentration and serum levels of the bone formation marker OCN. Additionally, no significant correlations were found between the serum Sema4D concentration and age or BMI. These data suggested that Sema4D plays a critical role in bone resorption and might be involved in the pathogenesis of bone loss in AS patients. Sema4D Expression in the CD4 + T Cells of AS Patients Sema4D has been reported to exhibit higher expression levels on T cells than on other lymphocytes, such as B cells, in RA patients. In addition, Sema4D expression is further enhanced after cellular activation (30,31). However, the role of Sema4D in the pathogenesis of AS has not been defined. PBMCs were isolated from AS patients and healthy controls, and Sema4D expression was detected in PBMCs from AS patients and healthy controls by flow cytometry. In healthy controls, cell surface membrane-bound Sema4D was abundantly expressed on CD4 + T cells but was expressed at lower levels on CD19 + B lymphocytes. By contrast, cell surface expression of Sema4D was significantly downregulated on CD4 + T cells from AS patients compared with those from healthy donors ( Figure 1H). 
Interestingly, qRT-PCR revealed that the mRNA expression of Sema4D in CD4+ T cells was elevated in AS patients (Figure 1J), suggesting that the increased serum levels of soluble Sema4D and the reduced levels of cell surface membrane-bound Sema4D on CD4+ T cells in patients with AS were due to shedding of Sema4D from the surface of activated T cells. As Sema4D expression was significantly enhanced in CD4+ T cells isolated from PBMCs of AS patients, we then analyzed the expression levels of other cytokines and transcription factors in CD4+ T cells isolated from PBMCs of AS patients. Our data revealed that compared with those of healthy controls, CD4+ T cells isolated from PBMCs of AS patients exhibited increased levels of IL-17A and RORγt mRNA but a decreased level of Foxp3 mRNA (Figure 1K). Sema4D Promotes CD4+ T Cell Proliferation and Th17 Cell Differentiation and Inhibits Treg Cell Differentiation Previous reports have suggested that soluble Sema4D exerts multiple effects on CD4+ T cells in several diseases (10,11). Thus, we speculated that Sema4D may also affect CD4+ T cells in AS patients. CD4+ T cells were harvested from PBMCs of AS patients and healthy donors, stained with CFSE, and cocultured with or without soluble human Sema4D-Fc in the presence of an anti-Sema4D or isotype control antibody. After 3 days, the proliferation rates and proportions of T cell subsets were assessed by flow cytometry. Notably, after stimulation with Sema4D, CD4+ T cells from AS patients showed significant proliferation (Figures 2A,B), which was markedly suppressed by adding soluble anti-Sema4D. To further explore the effect of Sema4D on CD4+ T cell subsets, namely Th17 and Treg cell differentiation, we used qRT-PCR to measure the mRNA expression levels of transcription factors. The expression of RORγt, a transcription factor specifically promoting Th17 cell differentiation, was significantly increased in CD4+ T cells from both AS patients and healthy controls after stimulation with Sema4D (Figure 2E). Notably, we found that the expression of Foxp3, which is critical for the function and differentiation of Treg cells, was decreased in CD4+ T cells from both AS patients and healthy controls after stimulation with Sema4D (Figure 3E). Taken together, these data implied that soluble Sema4D alone can directly activate CD4+ T cell proliferation and Th17 cell differentiation even in the absence of other stimulatory cytokines. Due to the increased expression of RORγt and decreased expression of Foxp3, we next investigated whether soluble Sema4D increases Th17 cell numbers and decreases Treg cell numbers. We used flow cytometry to measure the proportions of Treg cells and Th17 cells and found that the proportion of Th17 cells was increased and that of Treg cells was decreased among the CD4+ T cell populations from both healthy controls and AS patients after Sema4D stimulation (Figures 2C,D, 3A,B). Furthermore, we measured IL-17 and Foxp3 expression in CD4+ T cells from AS patients after stimulation with Sema4D, and found that Sema4D enhanced IL-17 expression and reduced Foxp3 expression (Figure 4K). Subsequently, we examined the protein levels of molecules implicated in Th17 cell differentiation (IL-22 and IL-17) and Treg cell differentiation (IL-10) in CD4+ T cells from AS patients and healthy controls following treatment with Sema4D. Sema4D significantly elevated the protein levels of IL-17 and IL-22 (Figures 2F,G) and reduced the protein level of IL-10 (Figures 3C,D,F).
The AhR Pathway Is Involved in Sema4D-Mediated Th17 Cell and Treg Cell Differentiation Recently, an increasing number of reports have indicated that AhR plays a critical role in the regulation of Th17 and Treg cell differentiation (12,32). Thus, to investigate whether AhR contributes to the Sema4D-mediated balance of Th17/Treg cells, CD4+ T cells were transfected with siAhR in the presence of Sema4D under Th17- or Treg-polarizing conditions, and the expression levels of AhR and the AhR target gene CYP1A1 were measured. Of note, Sema4D could enhance CYP1A1 activity and AhR expression under Th17- or Treg-polarizing conditions. Interestingly, knockdown of AhR reduced CYP1A1 activity and expression (Figures 4A-C). Next, we found that AhR knockdown markedly diminished the enhancing effect of Sema4D on IL-17 and RORγt mRNA expression (Figures 4D,E) and increased IL-10 and Foxp3 mRNA expression (Figures 4F,G). Furthermore, our results showed that the proportion of Th17 cells was reduced and that of Treg cells was elevated among CD4+ T cells from healthy individuals and AS patients after stimulation with Sema4D plus siAhR compared with stimulation with Sema4D alone (Figures 4H,J). All these data suggested that the Sema4D-promoted expression of CYP1A1 might be dependent on AhR pathway activation. To further evaluate the effect of Sema4D on the AhR pathway, we explored the activation of the AhR pathway by Sema4D in EL-4 cells treated with anti-CD3/CD28 antibodies. The results showed that Sema4D enhanced the protein and mRNA expression of CYP1A1, as well as the enzymatic activity of CYP1A1, under Th17 or Treg cell polarization conditions (Figures 5A-C), indicating that Sema4D might mediate AhR pathway activation in lymphocytes. Subsequently, we performed a blocking assay using siAhR and the AhR antagonist CH223191. The AhR antagonist or siAhR decreased the Sema4D-induced CYP1A1 enzymatic activity and the mRNA and protein expression of CYP1A1 (Figures 5D-F). These results suggested that the Sema4D-mediated effects on CYP1A1 were dependent on AhR. To further confirm that the Sema4D-mediated CYP1A1 activity was attributable to an increase in XRE (AhR-dependent reporter gene) reporter activity, we transfected EL-4 cells with an XRE-driven luciferase reporter gene and siAhR. Sema4D was shown to promote reporter gene expression through AhR activation, and siAhR almost completely reversed the Sema4D-promoted reporter gene expression (Figure 5G). These findings revealed that Sema4D could significantly promote AhR binding to a specific XRE sequence and that AhR is a key factor in Sema4D-mediated Th17 cell and Treg cell differentiation. Sema4D-CD72 Interaction Is Required for Cytokine Production and AhR Activation As the Th17-related cytokines IL-17 and IL-22 play a critical role in the pathogenesis of AS, we evaluated the effect of Sema4D on the cytokine production profile of T cells. Culture supernatants were collected after CD4+ T cells were incubated with or without Sema4D. IL-17 and IL-22 cytokine levels were markedly elevated after Sema4D stimulation (Figures 2F,G). The effects of Sema4D are believed to be regulated via three receptors: Plexin B1, Plexin B2, and CD72 (32-34).
To determine which receptor is involved in the stimulatory effect of Sema4D observed in CD4 + T cells, we first used qRT-PCR to evaluate the transcription of Plexin B1, Plexin B2, and CD72 in CD4 + T cells derived from patients with AS and healthy subjects. After Sema4D stimulation, the mRNA levels of CD72 were evidently upregulated compared to those of Plexin B2 and Plexin B1 ( Figure 6A). Then, we used flow cytometry to measure the cell surface expression of the CD72 protein on CD4 + T cells derived from AS patients and found that after Sema4D stimulation, CD72 expression was increased on CD4 + T cells derived from both AS patients and healthy controls (Figure 6B). To further address the role of CD72 in Sema4D-mediated cytokine production and AhR activation, blocking assays were conducted with anti-Plexin B1, anti-Plexin B2 or anti-CD72 antibodies. CD4 + T cells were stimulated with Sema4D or with anti-plexin B1, anti-plexin B2 or anti-CD72 antibodies, and cytokine production was assessed. Sema4D-induced cytokine secretion was significantly suppressed in CD4 + T cells treated with the anti-CD72 antibody compared with control cells in the presence of either the anti-plexin B1 or anti-plexin B2 antibody (Figures 6C,D). Furthermore, incubation with the anti-CD72 antibody markedly attenuated Sema4D-induced XRE-dependent luciferase reporter gene expression and the enzymatic activity and mRNA expression of CYP1A1 (Figures 6E-G). These findings suggested that Sema4D may induce cytokine secretion and AhR activation in CD4 + T cells via interactions with the CD72 receptor. These observations further reinforced the possibility that CD72 may be the major receptor controlling the effect of Sema4D signaling on the proliferation and differentiation of CD4 + T cells in AS patients. DISCUSSION Immunological inflammation has been recognized as the critical factor in AS pathogenesis, and disrupted immune homeostasis is closely related to the occurrence of immune pathologies (34). Previous work demonstrated that skewing of the Th17/Treg balance toward Th17 polarization plays a pivotal role in AS (35). Recent work has shown that Sema4D is expressed in immune cells and osteoclasts from RA patients and plays a critical role in RA pathogenesis (19,29). Consistent with these findings, we found in this study that the serum Sema4D concentration was significantly increased in the peripheral blood of AS patients. These data suggested that Sema4D might be involved in the pathogenesis of AS. Although recent reports have shown that serum Sema4D concentrations are elevated in other disease states and animal models, the role of Sema4D in AS patients has not yet been elucidated. Our results highlight the pathological significance of Sema4D in AS pathogenesis, suggesting basic research and clinical implications. Sema4D is abundantly expressed on T cells and weakly on B cells. and can be induced to shedding from the cell surface by matrix metalloproteases to become soluble Sema4D (sSema4D). Both membrane-bound Sema4D (m Sema4D) and sSema4D have important immune regulatory functions that promote immune cell activation and responses (36,37). Sema4D exerts essential functions in T cell priming and activation. In line with these findings, Our data verified the expression and role of sSema4D in regulating immune responses in the pathogenesis of AS. Previous studies have indicated that soluble Sema4D levels are elevated in RA patients and that high soluble Sema4D levels are related to clinical and biological markers of RA (29). 
Therefore, we investigated the role of Sema4D in the pathogenesis of AS. In the current study, we confirmed that the serum concentration of Sema4D was significantly increased in patients with AS and positively correlated with markers of AS disease activity, such as the CRP level and BASDAI score. [Displaced figure legend fragment: AS patients (n = 20) treated with anti-Plexin B1, anti-Plexin B2 or anti-CD72 antibody (10 µg/ml each) in the presence of soluble Sema4D (10 ng/ml), measured by ELISA; (E) CYP1A1 mRNA expression in peripheral blood CD4+ cells from 20 AS patients and 15 healthy controls under the same antibody treatments; (F,G) XRE-driven luciferase reporter activity and CYP1A1 enzymatic (EROD) activity in EL-4 cells treated with anti-CD3/CD28 and the indicated antibodies in the presence of soluble Sema4D (10 ng/ml) for 48 h; data are means ± SEM, representative of three independent experiments; ** p < 0.01; N.S., not significant.] In addition, we found that the soluble Sema4D concentration changed before and after treatment of AS with a TNF-α blocker, and that the decrease in the concentration of soluble Sema4D in AS patients after successful treatment was associated with lower clinical disease activity. Our data suggest that Sema4D is an additional clinical disease marker for the evaluation of AS activation status. Recent reports have shown that the expression of Sema4D is elevated in osteoclasts and induces bone resorption by activating osteoclasts (38). In the current study, we measured the serum levels of several bone markers to explore whether bone formation or bone resorption are affected by Sema4D in AS patients. The levels of the resorption indexes CTX and TRAP 5b, which indicate osteoclast activity, were significantly elevated and correlated with the serum Sema4D concentration in AS patients. However, only a weak correlation was observed between the serum Sema4D concentration and the levels of the bone formation markers osteocalcin and BAP. Collectively, these results suggest that Sema4D enhances bone resorption in AS patients. Further work will be needed to ascertain the exact role of Sema4D in bone loss in AS. Immune inflammation in AS is classically characterized by activation of inflammatory pathways in both the innate and adaptive immune responses. The manifestations of this immune inflammation are driven by T cells, especially CD4+ T cells (35). Significant increases in Th17 cells and significant decreases in Treg cells in the peripheral blood of AS patients were found to be responsible for the pathogenesis of AS (39). As recent reports indicated that Sema4D is expressed mainly in T cells (40), we assessed the cell surface expression of Sema4D by flow cytometry and the mRNA expression of Sema4D by qRT-PCR. Intriguingly, in CD4+ T cells from AS patients, the cell surface expression of Sema4D was significantly reduced, whereas the mRNA expression of Sema4D was elevated. This relative reduction in the cell surface level of Sema4D is reported to be attributable to shedding of Sema4D from the surface of CD4+ T cells.
Collectively, our results indicated that the reduction in cell surface membrane-bound Sema4D is crucially implicated in the development of AS and prompted us to investigate the effect of Sema4D on CD4+ T cell proliferation and differentiation. Consistent with previous data (39), our data confirmed that the proportion of Th17 cells is significantly increased in the PBMCs of AS patients. We investigated the function of Sema4D during CD4+ T cell proliferation. After stimulation with Sema4D, CD4+ T cells exhibited significant proliferation in a culture system of purified CD4+ T cells from AS patients. However, this proliferation was significantly suppressed by the addition of soluble anti-Sema4D. Similar to FICZ, Sema4D remarkably boosted both the Th17 cell frequency and cytokine secretion in AS patients, and the level of secreted IL-17 and the mRNA expression levels of IL-17 and RORγt in CD4+ T cells from AS patients were markedly elevated after treatment with soluble Sema4D. In addition, Tregs are essential for suppressing excessive autoimmune responses, thereby maintaining immune homeostasis. Our findings indicated that Sema4D can suppress Foxp3 and IL-10 expression, suggesting a role of Sema4D in the functional impairment of Treg cells in AS patients. These data revealed that Sema4D might mediate the pathogenesis of AS by inducing T cell proliferation, regulating Th17 and Treg cell differentiation and reinforcing Th17 cell function. In addition, it is also interesting that the percentage of CD4+CD25−Foxp3+ cells increased in AS patients after Sema4D stimulation. However, Yang and colleagues reported that the Treg phenotype and functional suppressive activity are scarce among CD4+CD25−Foxp3+ cells in lupus patients, indicating that not all CD4+Foxp3+ T cells have a protective suppressive function. Also, it has been reported that CD4+CD25−Foxp3+ T cells functionally promote the proliferation and differentiation of CD4+ T cells into Th17 cells, which might sustain chronic inflammation (41-43). Therefore, further study will be needed to explore the exact role of the Sema4D-mediated elevation of CD4+CD25−Foxp3+ cells in AS. Taken together, the current findings further illuminate the immunoregulatory mechanism by which Sema4D causes the Th17/Treg imbalance. Recently, AhR has attracted researchers' attention because of its widespread expression in immune cells, such as certain subtypes of T cells, including Th17 and Treg cells. AhR is involved in critical immunoregulatory functions (44). In many autoimmune diseases, AhR activation is the crucial factor regulating the differentiation of Th17 and Treg cells (45). Thus, we speculated that Sema4D regulates Th17 and Treg cell differentiation in a manner dependent on the activation of AhR to accelerate AS development. Here, we found that Sema4D enhanced the expression of AhR and CYP1A1 (the AhR target gene) in AS patients. Notably, pretreatment of Sema4D-exposed cells with an AhR antagonist or knockdown of AhR significantly decreased the expression of CYP1A1 and suppressed the regulatory effect of Sema4D on Th17 and Treg differentiation. Moreover, Sema4D potentiated CYP1A1 enzymatic activity via the AhR pathway. Here, we conducted further detailed analysis and verified that the Sema4D-induced CYP1A1 activity was attributable to increased expression of an AhR-dependent reporter gene. These results further corroborated the hypothesis that Sema4D can induce AhR pathway activation to modulate Th17 and Treg differentiation in AS patients.
Sema4D has been reported to function as a ligand and to interact with three different receptors: Plexin-B1, Plexin-B2, and CD72 (46). To determine which receptor is involved in the Sema4D-induced activation of AhR, we first measured the expression levels of PlexinB1, PlexinB2, and CD72 in activated T cells and found that CD72 is the main Sema4D receptor in CD4 + T cells from AS patients. It should be noted that treatment with an anti-CD72 antibody significantly suppressed CYP1A1 mRNA expression and XRE reporter activity in CD4 + T cells in the presence of Sema4D. Moreover, cytokine secretion was suppressed. These data implied that the promotive effects of Sema4D on AhR activation and cytokine production are dependent on its interaction with the CD72 receptor. CONCLUSION This work is the first to report the role of Sema4D in AS patients. Our results showed that the serum concentrations of secreted Sema4D were significantly higher in AS patients than in individuals without AS and that Sema4D is a potential biomarker for AS disease. Furthermore, our studies aimed to reveal the mechanism by which increased Sema4D levels play a role in regulating the immune response in AS patients. Therefore, further studies should also address the questions of whether sSema4D represents a useful biomarker for evaluating the immune response in patients with AS and whether it could serve as a potential therapeutic target for AS treatment. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ supplementary material. ETHICS STATEMENT The participants gave their written informed consent and the Regional Ethics Committee at Nanjing Medical University approved the study. AUTHOR CONTRIBUTIONS JX conceived and designed the study, analyzed the data, and planned the experiment. JX, ZW, and WW performed in vitro experiments and participated in its design and coordination, and drafted the manuscript. All authors read and approved the final manuscript. FUNDING This work was supported by grants from the National Natural Science Foundation of China (30600560). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Comparing Redox and Intracellular Signalling Responses to Cold Plasma in Wound Healing and Cancer Cold plasma (CP) is an ionised gas containing excited molecules and ions, radicals, and free electrons, and which emits electric fields and UV radiation. CP is potently antimicrobial, and can be applied safely to biological tissue, birthing the field of plasma medicine. Reactive oxygen and nitrogen species (RONS) produced by CP affect biological processes directly or indirectly via the modification of cellular lipids, proteins, DNA, and intracellular signalling pathways. CP can be applied at lower levels for oxidative eustress to activate cell proliferation, motility, migration, and antioxidant production in normal cells, mainly potentiated by the unfolded protein response, the nuclear factor-erythroid factor 2-related factor 2 (Nrf2)-activated antioxidant response element, and the phosphoinositide 3-kinase/protein kinase B (PI3K/Akt) pathway, which also activates nuclear factor-kappa B (NFκB). At higher CP exposures, inactivation, apoptosis, and autophagy of malignant cells can occur via the degradation of the PI3K/Akt and mitogen-activated protein kinase (MAPK)-dependent and -independent activation of the master tumour suppressor p53, leading to caspase-mediated cell death. These opposing responses validate a hormesis approach to plasma medicine. Clinical applications of CP are becoming increasingly realised in wound healing, while clinical effectiveness in tumours is currently coming to light. This review will outline advances in plasma medicine and compare the main redox and intracellular signalling responses to CP in wound healing and cancer. Introduction Plasma is an ionised gaseous mixture of excited ions, free radicals, and electrons that emits an electromagnetic field and ultraviolet light.Thermal plasma, existing in the universe as stars, fire, lightning, and auroras, uses energy in the form of heat to strip gases of electrons to create plasma in thermal equilibrium.In contrast, cold atmospheric-pressure plasma, referred to simply as "cold plasma" (CP) from here, is a partially ionised plasma, where intense electric fields (kHz-MHz range) are applied to low temperature gas (noble gas, air, O 2 , and N 2 , a mixture of either dry or humidified) to liberate electrons at high energy, and hence temperature (10 4 -10 5 • C), leading to inelastic and elastic electron impact collisions that partially ionise the gas with a degree of ionization of 0.001% or lower [1,2].The heavy gas particles are unaffected by the electric field and remain cool, while the much smaller and fewer highly kinetic and hot electrons that impact and ionise the gas do little to change the heavy gas particle temperature, meaning that the CP is in a non-equilibrium thermodynamic state and the overall resulting temperature can be below 40 • C [3].As a result, CP is tissue-tolerable, usually up to several minutes of continuous application before significant tissue heating occurs, and enables diverse medical applications of CP [3].This also means that when the electric field (i.e., power) is disabled, the high-energy electrons are rapidly quenched by the gas and plasma ceases to form, making the use in medicine highly controllable.Therefore, CP has been tested in a variety of medical applications to birth the still evolving field of plasma medicine. 
There is a vast diversity in CP reactor design.In plasma medicine, CP is most commonly produced using corona, microwave, gliding arc, dielectric barrier discharge (DBD), and piezoelectric direct discharge (PDD) CP.The DBD consists of two parallel electrodes, of which one electrode is of a dielectric material (glass, quartz, ceramics, rubber, plastic, enamel, and teflon) [4].Biological tissue is also dielectric and can act as the second electrode in medical applications of DBD plasmas [5].The corona discharge CP is a very weak discharge with a low electron density in the plasma plume that forms around the tip of one electrode (usually fitted as a wire or needle) [6,7].On the other hand, the gliding arc plasma is a quasi-thermal plasma with a much higher electron and ion density at the smallest distance (1-5 mm) between two diverging electrodes [8][9][10].The gliding arc plasma is then propelled by high gas flow or buoyancy, 'gliding' along the diverging electrodes, which increases the volume of the plasma, decreasing the discharge intensity to a lower temperature (generally < 80 • C) non-equilibrium state before a new gliding arc is discharged [8,9].More recently, the PDD configuration has entered plasma medicine, where the discharge originates directly from a resonant piezoelectric transformer that can transform voltages greater than 1000-fold [11].The main advantage of the PDD is that the compact solid state piezoelectric transformer requires little power, which significantly miniaturizes the CP device for more versatile use [11].The designs of these CP devices are more comprehensively reviewed elsewhere [2,4,10,11].For a point target application of CP within millimetres, plasma jets are produced using one of the aforementioned CP discharge types, which are then propelled into a narrow jet by a secondary non-ionised gas.For treatment of a larger area, DBD, corona, and gliding arc discharges can cover larger areas; these plasmas have chaotically shifting discharges that create "dead zones" causing inefficient plasma coverage.Meanwhile, multiple plasma jets can be constructed in an array for simultaneous treatment of a larger area [12,13].This plasma jet array can have a relatively consistent and uniform plasma profile over the target area and can use jet-jet interactions to amplify and produce unique plasma plumes by adjusting the discharge gas flow rate [12][13][14][15][16]. Consequently, many findings from CP technology studies may be specific to particular CP designs.Therefore, it is important to note that some findings discussed in this review should not be generalised to all CP designs. 
The medical use of CP is generally centred around the milieu of reactive oxygen and nitrogen species (RONS) that are formed [17]. For direct CP exposure, highly reactive and short-lived RONS, including hydroxyl (•OH), hydroperoxyl (•OOH), superoxide (O2•−) and nitric oxide (NO•) radicals, oxygen (O) and nitrogen (N) atoms, singlet oxygen (1O2), and hydrogen (H+), hydroxyl (OH−), and peroxynitrite (ONOO−) ions are abundantly present [18-20]. These highly reactive species have a profound effect on cell membrane structure, porosity, hydrophobicity, and fluidity, mainly through lipid and protein oxidation, which compromises the function of the cell and also affects downstream intracellular signalling [21]. When applied for longer exposure times, the RONS produced can cause potent redox imbalances that hinder cell proliferation or lead to destruction of cells, which is beneficial to prevent tumour regrowth [22-24]. In contrast, lower CP exposure times can promote cell proliferation and motility/migration, activate inflammatory signalling pathways, and stimulate antioxidant production in healthy skin and immune cells, which is an essential response in wound healing and tissue regeneration [2,25]. The difference in the application of CP between wound healing and cancer killing revolves around 'hormesis' [21], where "low doses" of CP can be harmful when applied to malignant tissue, but can also promote positive wound healing effects in healthy tissue conducive to tissue regeneration. Here, it must also be recognised that "low" and "high" dose with respect to CP exposure in this review do not refer to a quantitative dose of CP, but instead refer to the use of CP on hormesis principles. The first attempts at defining a standardised "plasma dosage" were performed by varying the CP treatment duration, voltages, flow rate, and feed gas composition, culminating in an expression for CP dosage as D~QVt (D: entire plasma dose applied to the cells, Q: discharge gas flow rate, V: output voltage, and t: treatment time) [26]. Unfortunately, this definition of plasma dose is overly simplistic; it only accounts for a few linear factors, while several other factors are absent, including nonlinear factors that make plasma dosage determination complicated [27]. This led to the equivalent total oxidation potential (ETOP), which is based on the oxidation potential of the reactive species in the plasma discharge, the UV, the electric fields, and the interactions between these factors [28]. However, there are still various limitations to the ETOP, with refinement needed before there is a possibility of implementation and standardisation [2].
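Read literally, the D~QVt expression above is a simple proportionality between a handful of linear factors. The sketch below only illustrates how those factors combine; the proportionality constant and the units are not defined by the cited work, so the returned value is in arbitrary units and should not be read as a real dosimetric quantity.

def plasma_dose(flow_rate_slm: float, voltage_kv: float, time_s: float, k: float = 1.0) -> float:
    """Naive dose estimate D ~ Q * V * t with an undefined proportionality constant k.
    Ignores the nonlinear factors (feed-gas chemistry, gap distance, duty cycle, target type)
    that the text notes make a true plasma dose much harder to define."""
    return k * flow_rate_slm * voltage_kv * time_s

# Doubling either the exposure time or the output voltage doubles this naive dose estimate:
print(plasma_dose(2.0, 10.0, 30.0), plasma_dose(2.0, 10.0, 60.0))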
Plasma-activated liquids (PAL), most commonly plasma-activated water (PAW) and plasma-activated medium (PAM), are an indirect method of applying CP. PAL are produced either by direct discharge of CP into the liquid (e.g., water, media), or by positioning CP discharges in the gas phase above the liquid to facilitate reactions at the gas-liquid interface [18,29]. While PAL is usually applied several minutes to up to several months after initial CP activation, the oxidative potential of CP is "stored" in the liquid for later use. Hence, several long-lived secondary RONS are present instead of highly reactive and short-lived RONS, including ozone (O3), nitrate/nitric acid (NO3−/HNO3), nitrite/nitrous acid (NO2−/HNO2), and hydrogen peroxide (H2O2) [2,20,30]. In particular, H2O2, NO3−, and NO2− accumulate linearly in PAL as a function of CP treatment duration and energy density discharged per liquid volume; post-plasma activation, NO2− and H2O2 decay while NO3− continues to form under acidic conditions [31,32]. These molecules also decrease the pH of the liquid, creating an antimicrobial environment capable of effectively killing bacteria [33-36] and eliciting various redox-dependent cellular signalling responses in eukaryotic and cancerous cells. Additionally, PAL have been shown to exhibit the desired biological effects when used after the short-lived species have degraded post-plasma activation, indicating that PAL could be storable for future use when sealed and stored at an appropriate temperature [18,37-39]. Consequently, PAL have the advantages of being potentially storable and transportable, with no risk of heating tissue, and are more feasible for use in busy clinics.
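As a rough, assumption-laden illustration of the PAL chemistry summarised above (roughly linear accumulation of H2O2, NO2− and NO3− with treatment time, followed by post-activation decay of H2O2 and NO2−), a toy kinetic model might look like the following; every rate constant is invented for illustration and is not a measured value.

import math

def pal_species(treatment_s: float, storage_s: float,
                k_form: float, k_decay: float) -> float:
    """Concentration (arbitrary units) of one plasma-generated species:
    linear formation during CP treatment, first-order decay during storage."""
    c_end_of_treatment = k_form * treatment_s
    return c_end_of_treatment * math.exp(-k_decay * storage_s)

# H2O2 and NO2- decay after activation; NO3- is simplified here as stable (k_decay = 0),
# although the text notes it actually keeps forming post-activation under acidic conditions.
for name, k_form, k_decay in [("H2O2", 0.5, 1e-4), ("NO2-", 0.8, 2e-4), ("NO3-", 1.0, 0.0)]:
    print(name, pal_species(treatment_s=300, storage_s=3600, k_form=k_form, k_decay=k_decay))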
Plasma Medicine in the Clinic Plasma medicine, both CP and PAL, have diverse applications in various biomedical fields, including broad-spectrum antimicrobial therapy, anticancer therapy, acute and chronic wounds (e.g., diabetic foot ulcers, haemostasis) therapy, and dentistry [2,18,30,40,41,[43][44][45][46][47][48][49][50][51][52].Several studies have examined the use of CP for disinfection of burn injuries [53], as well as for CP activation of hydrogels to enable the delivery of antimicrobial drugs or CP-generated RONS [54,55].Currently, there are a few Conformité Européenneor (CE; European Conformity)-certified class IIa/b CP medical devices, including the kINPen ® MED plasma jet (Neoplas Med, Greifswald, Germany), SteriPlas ® microwave plasma torch (Adtec, Twickenham, UK), PlasmaDerm ® DBD device (CINOGY, Duderstadt, Germany), Plasma Care ® (Terraplasma Medical, Garching, Germany), and CPT ® cube/CPT ® patch DBD device (ColdPlasmaTech, Greifswald, Germany).To date, a number of clinical case studies have demonstrated benefits of CP therapy together with standard of care for difficult-to-treat wounds, including flap donor sites with exposed tendons [56] and complex wounds following cranio-maxillo-facial surgery [57].These studies highlighted CP as the reliable, minimally invasive, easy-to-apply chairside treatment option resulting in effective healing without adverse events, which may be particularly useful for patients with impaired healing conditions due to the presence of systemic or local risk factors [56,57].However, further studies are required to validate these observations in randomised controlled trials (RCTs) with larger patient populations and in comparison to control groups.Currently, clinical plasma medicine is constrained to wound healing, but clinical studies on cancer are rapidly emerging.Three hallmarks of clinical CP application to wounds include (1) antimicrobial efficacy, (2) modulation of redox signalling with subsequent stimulation of cell proliferation and migration properties via the release of growth factors, and (3) increased tissue oxygenation and micro-circulation, which is important during healing and angiogenesis.The two most studied areas of CP medical applications include wound healing and cancer; however, the absence of standardized guidelines across different CP treatments impedes the current reproducibility and effective assessment of CP treatment efficacy. Wound Healing In clinical wound management, plasma medicine and CP devices have been approved for therapeutic use because CP simultaneously has broad-spectrum antimicrobial activity, promotes wound healing, and regulates tissue inflammation [40,44,58].This could potentially aid in treating chronic non-healing wounds, including diabetic foot ulcers, arterial and venous leg ulcers, and pressure injuries.These patients have an impaired wound healing response and immunodeficiency, resulting in high susceptibility to the development of chronic and recurrent bacterial and fungal clinical infections [59,60], which significantly increases the risk of amputation [61,62].In Australia alone, over 420,000 people are affected by a chronic wound at any time, with health-care related costs exceeding AUD 3.5 billion, approximately 2% of national health care expenditure [63].There is a desperate clinical need for novel therapeutic options, possibly using CP as adjuvant therapy to current standards in wound management.To address this, RCTs have shown CP in combination with best-practice care led to better outcomes. 
In contrast to the above-mentioned case studies on complex wounds, the latest metaanalysis of CP therapy showed no significant improvement in wound healing (five studies, 148 participants) nor reduction in infections (four studies, 91 participants) in chronic wounds.However, exposure to CP also resulted in no adverse effects, suggesting relative safety in current protocols [64].Since then, recent RCT has shown the efficacy of CP therapy in improving wound healing.Four RCTs, (i) 37 non-diabetic patients with non-healing wounds, (ii) 78 patients with mixed wound aetiology, and (iii-iv) 44 and 45 diabetic patients with chronic wounds, showed that wound closure was significantly accelerated, with improvements to wound-specific patient-reported metrics of quality of life and pain [47,48,65,66].Another RCT in 44 diabetic patients revealed a greater reduction in systemic inflammatory cytokines and chemokines, including interleukin-1 (IL1), -8 (IL8), interferon-gamma (IFNγ), and tumour necrosis factor-alpha (TNFα) in the CP therapy group [49].Most recently, preliminary results from the multicentre RCT Plasma On Chronic Wounds for Epidermal Regeneration (POWER) study on wound healing of uninfected, lower leg wounds in 47 patients showed that CP treatment resulted in ≥60% and ≥90% wound closure in 28% and 16% of participants, respectively, while standard wound therapy alone led to ≥40% wound reduction in 18% of subjects [67].No adverse effects were observed, and antibiotic use was lower in the CP treatment group.The study is ongoing and expected to be completed by the end of 2024 (German Clinical Trials Register: DRKS00019943).Clinical studies on CP technology for wound healing are summarised in Table 1. Another important aspect of the effects of CP on wound healing is the pro-angiogenic signalling of the wound bed to revascularize newly formed granulation tissue.To date, studies have shown that local vascular responses to CP include the promotion of platelet activation, aggregation, and fibrin polymerisation to accelerate blood coagulation and aid healing [68].Short (5 min) CP exposure on intact skin and wounds has been demonstrated to increase tissue oxygenation and flow through localised promotion of endogenously produced NO (NO from CP cannot permeate the epidermis), a potent paracrine vasodilator and inflammatory regulator, to improve vascular circulation for over 1 h after CP therapy is stopped [69][70][71][72].Later, (i) one study of 20 patients with diverse vascular morbidities, (ii) two studies with 30 combined healthy volunteers, and (iii) 20 male patients with diabetes-related chronic leg ulcers found a significant increase in cutaneous capillary circulation and subsequent blood and tissue O 2 saturation after treatment with the PlasmaDerm DBD CP device [73][74][75].Although not confirmed in clinical trials, CP was suggested to result in phosphorylation (activation) of endothelial NO synthase (eNOS) and increased NO production.In murine wounds, this was shown to subsequently result in increased levels of vascular endothelial growth factor (VEGF) release and platelet-derived growth factor receptor-beta (PDGFRβ) expression to promote angiogenesis and tumour growth factor-beta 1 (TGFβ1) paracrine stimulation of collagen I deposition and wound re-epithelialization [76,77].This supports the hypothesis that the paracrine-stimulating effect of CP therapy on subcutaneous vascular and epidermal tissues promotes wound healing and tissue regeneration.Future RCTs are imperative to optimise CP 
treatment duration and efficacy in developing personalised therapies. Understanding the exact mechanism underlying the effect of CP treatment on wound healing and investigating the combined impact of CP and the traditional standard of care is fundamental for enhancing and refining the efficacy of CP in wound management [78]. The potentially high cost of CP devices and treatments may limit accessibility for some patients in developing countries, while longitudinal studies should help assess the long-term effects and sustainability of CP treatment as adjuvant therapy in wound management. [Table 1 fragment — columns: CP Device; Ref.; Study Design; Population and Treatment Groups; Primary Results; first listed device: SteriPlas® plasma torch.] Cancer CP cancer therapy, or "plasma oncology", has mostly gone through limited phase I clinical trials, with some phase II trials still underway (Table 2). Studies to date have demonstrated a potential role of CP in cancer treatment, including cancer remission, with a focus on the effects of CP-derived RONS on cancer cells, myeloid cells, immunogenic cancer cell death, and the tumour response to an altered tumour microenvironment [81]. A first-in-human trial in six patients with infected ulcerous squamous cell carcinoma (SCC), who received thrice-weekly applications of CP therapy every other week, showed large patient-specific palliation against microbial infection [81]. More recently, 20 patients with stage IV solid tumours underwent CP therapy at the surgical margins following tumour resection to eliminate tumour regrowth [82]. No adverse effects or tumour recurrence were observed in patients with microscopic positive margins up to 32 months of follow-up [82]. Although primary cultures of tumour cells from 9/10 patients showed significantly reduced viability due to CP treatment in vitro, the apoptosis and oxidative stress pathways responded differently between patients [82], likely due to the variety of cancer types between patients. If the utility of CP is cancer type-dependent, more animal and clinical trials involving different malignant cancer types and tumour microenvironments are required to elucidate the real potential of CP in plasma oncology. Recently, a phase II clinical trial in 63 patients with mild-moderate cervical intraepithelial neoplasia (CIN) tested a single 30 s CP application for prospective cervical cancer prevention compared to a control group of 287 participants that accounts for spontaneous remission [85]. A significantly higher remission rate was observed in CP-treated patients after both 3- and 6-month follow-up compared to controls, with fairly tolerable pain and discomfort [85]. RONS-mediated suppression of Akt (protein kinase B) phosphorylation (activation) and heat shock protein 27 (HSP27), with increased p53 phosphorylation/p53-binding protein expression promoting caspase 3/7-driven apoptosis, has been indicated as a potential mechanism in a number of in vitro studies [84,86]. Unfortunately, the device used can rapidly (<5 s) heat tissue to damaging and painful temperatures (~80 °C) if not applied with constant uniform motion [84,86], unlike the other CE-certified CP devices, and hence may not technically be considered a CP device. Additionally, clinical characteristics and histological analysis between groups were significantly different due to the non-randomised trial design, making scientific conclusions highly tenuous. CP has also been used to treat late-stage head and neck SCC tumours, which experienced up to 80% reduction in tumour surface, but unfortunately, in these clinical case studies, tumour
growth relapsed [81]. On reflection by the investigators, this indicated that CP treatment alone may not be sufficient for the management of advanced head and neck tumours, but that it is a good adjunct therapy with the potential to improve patients' quality of life (decreased infection load, enhanced social interaction) [87].

Several factors impede clinical trials of plasma oncology. First, plasma medicine has been limited to solid tumours (currently excluding lymphomas and blood cancers), which are heterogeneous in classification, subtyping, and metastases [88,89]. Although CP has shown efficacy against osteosarcoma in vitro and in vivo, direct CP treatment is extremely invasive, and the use of less-invasive injectable PAL requires further research before clinical translation [90]. Fortuitously, CP treatment of blood from leukaemia patients has shown evidence that cancerous cells can be killed without adversely affecting haematological profiles [91]. This demonstrates that a hormesis approach, balancing the killing of malignant cells against the preservation of healthy blood cells and plasma components (blood plasma, not physical plasma), may be viable, enabling the first steps towards clinical trials in non-solid cancers. Secondly, it is unethical to trial CP without radiotherapy or chemotherapy, which can be patient-specific, making assessment of the CP effect after accounting for interactions with other therapies extremely difficult. Third, some cancer patients may become desperate and join trials for the chance to be included in the treatment arm [92], introducing significant selection bias that may mask the true effectiveness of CP. This is not just theoretical, and has been reported during a phase II trial in CIN patients, where the "urgent desire to have children, where great fear and/or psychological stress" compelled participants to join the study [84]. Similarly, studies might also intentionally forgo randomisation to allow for "maximum patient autonomy and voluntariness", which strongly influences the clinical characteristics between groups [85]. While these factors are justified on humane grounds, they make RCT design for plasma medicine in cancer considerably more difficult.

Given the relative infancy of plasma medicine and the advanced age of participants who usually qualify for recruitment in these studies, little is known about the long-term consequences of plasma medicine. One study with a five-year follow-up of five participants from a previous RCT found no long-term complications to overall health [83]. Although encouraging, this post hoc selection of participants for follow-up and the small sample size are extremely prone to systematic selection bias, which prohibits inference. Hence, prospective follow-up over longer time periods is required to confirm long-term treatment effects.
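As a side note on how such response-rate differences are typically assessed, the sketch below shows a minimal two-proportion z-test in Python. The counts used are hypothetical placeholders, not data from the trials cited above, and a real analysis would additionally need to address the confounding and selection-bias issues just discussed.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical counts, NOT taken from the cited trials:
# 40/63 remissions in a CP-treated arm vs. 140/287 in an untreated control arm.
pa, pb, z, p = two_proportion_z(40, 63, 140, 287)
print(f"CP arm: {pa:.2%}, control: {pb:.2%}, z = {z:.2f}, p = {p:.3f}")
```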
Cold Plasma on Redox-Mediated Modification of Cell Components

Mammalian cells are enveloped in a fluid phospholipid bilayer with opposite-facing hydrophilic headgroups that contains cholesterol and proteins underneath the glycocalyx to form the cell membrane. The cell membrane allows nutrients, waste, and signalling molecules to be transported into and out of the cell. This transport is passive for hydrophobic and low-charge-density molecules, or active via protein channels or endo/exocytosis. Phospholipids like phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, and sphingomyelin bind to long-chain fatty acids containing double bonds that are prone to oxidation [93]. Several studies have used phospholipid liposomes as models of mammalian cell membranes to show that short-lived RONS from CP, particularly O, O3, O2•−, and NO• radicals, and ONOO− are strong oxidisers of fatty acid chains [94] that increase membrane porosity, consequently reducing membrane stability and increasing permeability [95-99], which can exacerbate further RONS entry directly into the cell. Furthermore, CP-derived RONS can form secondary lipid hydroperoxyl radicals (R-OO•) that can propagate damage to other cell compartments [94], leading to further damage after CP exposure has ceased [99]. Commonly, lipid peroxides, e.g., malondialdehyde and 4-hydroxy-2-nonenal (HNE), and isoprostane levels are good indicators of cellular redox stress [94], while HNE also promotes antioxidant response signalling [100,101]. Cholesterol also reacts with RONS to form cytotoxic oxidation products like 7α,β-hydroxy-, 7α-hydroxyperoxy-, 7-oxo-, and 5,6-epoxycholesterol, and cholesterol modification by CP has been inferred [102].

Cancer cells have shown increased susceptibility to RONS (and hence CP) compared to non-malignant cells. This is because cancer cells have higher basal redox levels than normal cells due to increased metabolism, proliferation, dysfunctional mitochondrial activity, and low inducible antioxidant capacity, which makes them more sensitive to redox stress [103]. Susceptibility to redox stress is further exacerbated by aquaporin overexpression [104], which increases H2O2 permeability and consequent lipid peroxidation [105], and by lower membrane cholesterol levels, which increase the permeability of ROS (O2•−, H2O2) through cell membranes [106]. Furthermore, there is a strong inverse correlation between membrane cholesterol levels and sensitivity to CP [107], and cholesterol content can reduce membrane disruption by CP in a multilamellar liposomal model [99]. It is hoped that these factors together enable a "goldilocks zone" of CP treatment for the simultaneous killing of cancer tissue while promoting healing and angiogenesis in healthy tissue.
In addition to directly reacting with membrane lipids and causing antioxidant (glutathione; GSH) depletion, singlet oxygen (1O2, from CP and the secondary decomposition of long-lived RONS in PAL) can deactivate catalase, amplifying H2O2 accumulation via NADPH oxidase (NOX)/superoxide dismutase (SOD) and mitochondria-dependent apoptosis via caspase 9/3 [108]. Cell metabolism (NADPH reduction) correlates with a higher tolerance to CP treatment [107]. Other indirect evidence supports this claim. For example, pyruvate protects cancer cells in PAM in vitro by scavenging H2O2 [109]. Pyruvate is an essential carbon source for cancer cell metabolism and is present at high concentrations in some malignant tumour cells due to the "Warburg effect" driving preferential metabolism towards the lactate pathway [110]. This suggests that cellular metabolism confers protection to cancer cells against CP, but more studies are required to investigate the effect of metabolic pathways in cancerous and non-malignant cells.

CP-derived RONS also readily react with protein side-chains, particularly the aromatic groups of tryptophan, tyrosine, and histidine, amine groups in lysine and arginine, cysteine thiol (-SH), and methionine sulfhydryl groups [111]. H2O2 oxidises several types of protein residues as a redox switch to signal gene up/downregulation, proliferation, migration, and metabolism [112]. Additionally, the lipid oxidation products mentioned earlier can also oxidise protein cysteine residues, resulting in lipid-protein adduct formation [113]. ONOO− can also spontaneously decompose into potent •OH and NO2• radicals that react with protein tyrosine residues to form tyrosine radicals, inactivating proteins by forming irreversible 3-nitrotyrosine or dityrosine (protein-protein) adducts, or propagating lipid peroxidation [114]. ONOO− in CP and PAM is a potent driver of caspase-mediated apoptosis in malignant cells [108,115-117]. This could also occur in healthy cells with "higher" CP exposure, if it overcomes their antioxidant capacity.

GSH is the most abundant cellular antioxidant. The thiol moiety makes GSH an effective scavenger of RONS and oxidised proteins through S-glutathionylation (protein thiol-GSH adduct formation) to protect cells against redox stress [118]. In fact, cysteine supplementation before exposure to CP nearly completely negated biological responses and prevented glutathione disulfide (GSSG) formation in skin keratinocytes (HaCaT) [119]. On the other hand, these protein modifications in excess lead to protein inactivation by denaturation, cleavage, and aggregation, and can cause endoplasmic reticulum stress (ERS), initiating the unfolded protein response (UPR) [120]. Lipid (hydro)peroxides can also adduct to genomic DNA, impairing replication and transcription and leading to cell apoptosis [121]. Hence, these secondary byproducts of CP are advantageous in treating cancer, but need to be avoided in healthy cells. For in-depth reading on CP-derived RONS and cell protein and lipid modifications, comprehensive reviews are available [21,94,111].
DNA is highly reactive towards •OH and ONOO−, leading to genotoxic nucleoside -OH adduct radicals and oxidation products, sugar oxidation causing strand breaks, and DNA-protein crosslinking [122]. Cells exposed to CP experience DNA oxidation (8-oxoguanine; 8-oxoG, or 8-hydroxy-2′-deoxyguanosine; 8-OHdG) due to ROS and attempt to repair the damage by expressing 8-oxoG DNA glycosylase (OGG1), poly(ADP-ribose) polymerase (PARP), and phosphorylated histone H2AX (γH2AX) [123-125]. 8-oxoG was also formed in murine xenograft tumours treated with CP, accompanied by PARP activating p53/caspase 3-mediated apoptosis in vivo [125]. Furthermore, CP can have an epigenetic effect on cells, as shown in a breast cancer cell line in which CP widely altered gene methylation, including inhibiting heat shock cognate B (HSCB) and phosphoribosyl pyrophosphate synthetase 1 (PRPS1) oncogene expression [126]. Obviously, genotoxic effects of CP would be harmful and possibly mutagenic if inflicted on healthy tissue. Fortunately, no elevated mutation rate due to CP exposure has been observed in HaCaT cells, fibroblasts, or lymphocytes in vitro using various CP devices [127-130]. Furthermore, no neoplastic lesions, signs of tumour growth, or tumour markers were found in 84 immunocompromised mice 350 days after receiving 14 consecutive daily CP treatments [131], nor was DNA damage (γH2AX) observed in ex vivo CP-exposed human skin samples [132]. Although no tumour recurrence or adverse effects in proximal normal tissue were observed in a phase I clinical study with 32 months of follow-up [82], long-term data on the mutagenic safety of CP devices are still needed.

Cellular Responses to Cold Plasma through Redox-Responsive Intracellular Signalling Pathways

Redox signalling and homeostasis are ubiquitous to all life on Earth, involving a complex and dynamic cell-dependent system that acts as both sensor and effector of cellular environmental changes to coordinate responses to stimuli (including during tissue regeneration, wound infection, or cancer). CP and PAL are potent sources of RONS that directly affect cellular redox homeostasis. However, redox responses can also be secondary, tertiary, etc., to the initial CP exposure [2,21]. Essentially, all therapeutic applications of CP fall within the field of applied redox biology [25]. This has been exemplified by the scavenging of CP/PAL-derived RONS with the antioxidant N-acetylcysteine (NAC), which uniformly reduced or ablated DNA damage, cell cycle arrest, ERS, autophagy, and apoptosis in response to CP/PAL in several in vitro cancer studies and prevented wound healing and antioxidant responses in normal cells [116,123,124, ...].
Cells can either be directly treated with CP while in media or indirectly treated with PAL, usually PAM or PAW. This leads to the accumulation of RONS, which dissolve from the plasma/gas phase into the liquid phase or are generated by a complex series of secondary reactions (Figure 1) [29,154-156]. In cases of direct CP treatment of media containing cells, shorter-lived, highly reactive oxidant species would also be pertinent in redox signalling. However, RONS such as NO• and •OH are too reactive to diffuse further than a µm range in the liquid phase, but may form again through the decomposition of ONOO− [29,31,32]. Hence, the most commonly measured RONS are aqueous H2O2, NO2−, and NO3−, used as markers of RONS generation due to their long lifetimes. Research to date has found that ultraviolet photons play a negligible role in eukaryotic cell responses to CP, with the CP-derived UV dose being an order of magnitude below the minimal erythema dose and the UV not even reaching living skin cells [157]. However, CP-derived UV photons have also been shown to generate minor amounts of atomic H+, •OH, and NO• through photolysis of other RONS [35,158], which experiments with antioxidants cannot discriminate.

Epithelial cells are the most affected by CP, and elicit a host of molecular responses over time. Generally, oxidative stress leads to protein unfolding and the cessation of protein synthesis (ERS/UPR), cell cycle arrest (G2/M phase), mitochondrial dysregulation, and post-translational modifications (predominantly phosphorylation via kinases) in the first few hours after CP exposure [159]. Cell cycle arrest and protein synthesis cessation/unfolding are instated quickly to preserve cell survival early after exposure, while DNA repair occurs later. The nuclear factor erythroid-2-related factor 2 (Nrf2) antioxidant response to oxidative stress is the most prominent cell response, occurring immediately after and for up to several days following CP exposure to condition cells against redox stress, eventually restoring cell function and proliferation [159]. In this way, light CP exposure can promote faster healing through redox eustress, a level of redox imbalance that conditions cells to an altered (usually oxidative) redox state while returning to redox homeostasis. CP can also elicit autophagy, the cellular process of breaking down damaged components and organelles in the lysosome [160], and eventually apoptosis in cancer cells by amplifying redox stress [109,117,149,150]. Several distinct protein signalling pathways regulate these processes and can cross-communicate to affect cell survival and proliferation [161]. For example, CP activated extracellular signal-regulated kinase 1/2 (ERK1/2) mitogen-activated protein kinases (MAPK) and Akt, with the latter leading to significant activation of nuclear factor-kappa B (NFκB) phosphorylation to promote cell proliferation [162]. Interestingly, nearly the opposite occurs in cancer cells. CP caused apoptosis by simultaneously inhibiting Akt/mammalian target of rapamycin (mTOR) while activating the Jun N-terminal kinase (JNK) and p38 MAPK signalling pathways, leading to autophagy and p53-mediated activation of proapoptotic caspases, while the inactivation of Akt also inhibited NFκB to reduce tumour cell proliferation [163,164]. CP application in cancer therapy commonly leads to cell cycle arrest among different cancer types, and eventually autophagy and apoptosis [165]. These two examples demonstrate the hormesis approach to killing malignant
cells while also promoting the healing of healthy tissue. This section will discuss modulations of several cell signalling pathways affected by CP and PAL in the context of wound healing and cancer.
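As a rough, order-of-magnitude illustration of the claim above that short-lived radicals such as •OH and NO• cannot diffuse much beyond the µm range while H2O2 travels much further, the sketch below applies the standard mean-squared-displacement estimate L ≈ √(4Dτ). The diffusion coefficients and effective lifetimes are assumed, literature-typical values chosen only for illustration, not figures taken from the cited studies.

```python
from math import sqrt

def diffusion_length_um(D_m2_per_s, lifetime_s):
    """Root-mean-square diffusion distance L = sqrt(4*D*tau), returned in micrometres."""
    return sqrt(4 * D_m2_per_s * lifetime_s) * 1e6

# Assumed (order-of-magnitude) aqueous diffusion coefficients (m^2/s) and lifetimes (s):
species = {
    "•OH":  (2.3e-9, 1e-9),   # ~ns lifetime -> nanometre-scale range
    "NO•":  (2.2e-9, 1e-6),   # sub-µs effective lifetime near cellular scavengers
    "H2O2": (1.4e-9, 1.0),    # seconds or longer -> tens of micrometres and beyond
}
for name, (D, tau) in species.items():
    print(f"{name:>5}: L ≈ {diffusion_length_um(D, tau):.3g} µm")
```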
Clear evidence shows that redox signalling is essential to wound healing [169]. Chronic wounds occur in parallel with chronic redox stress, which amplifies inflammation and apoptosis and impedes neovascularisation [169]. As a consequence, therapeutic treatments that modulate the redox status of interstitial tissue and cells are coming into focus to remedy chronic wounds [169], including CP induction of antioxidant response mechanisms via Nrf2 signalling (Figure 2). When exposed to CP, keratinocytes (HaCaT) increased HO1, catalase, NQO1, GST, and Gclc/m transcription via Nrf2 activation and translocation, concurrent with the release of the proangiogenic chemokine VEGFA, the growth factors heparin-binding epithelial growth factor (HBEGF) and colony-stimulating factor 2 (CSF2), prostaglandin-endoperoxide synthase 2 (PTGS2), and the inflammatory cytokine interleukin 6 (IL6), which promote re-epithelialisation and wound repair [170-173]. While Nrf2 dissociates from Keap1, Keap1 also alters cytoskeletal arrangement, regulating E-cadherin and F-actin filamentation, which is critical during tissue regeneration [174,175]. In conjunction with reducing connexin 43 (Cx43) expression to relax cell-cell focal adhesion in skin cells [174], CP increases keratinocyte motility to re-epithelialise wounds. The role of Keap1 in the response to CP has so far been limited to its roles as a redox switch for Nrf2 and as a cytoskeletal junction protein. Yet, Keap1 is involved in several regulatory roles in the cell, ranging from protein degradation and promoting NFκB p65 translocation for cell survival, to cell cycle progression and p62-mediated autophagy [176]. Nevertheless, there is growing momentum towards studying Keap1 beyond its canonical function as an inhibitor of Nrf2, but this is yet to be explored with regard to CP treatment.

When translated into mouse studies, direct CP treatment of acute full-thickness wounds accelerated wound healing, neutrophil and macrophage infiltration, and TNFα, TGFβ, and IL-1β transcription in vivo, and promoted HO1 and NQO1 transcription in dermal fibroblasts and epidermal keratinocytes via Nrf2 ex vivo [174,177]. THP1 monocytes also activate Nrf2 and upregulate HO1 transcription in response to CP exposure [178,179]. Studies to date have also shown that RONS, like ONOO−, can activate Nrf2 indirectly via the PI3K/Akt pathway [180,181], and upon overexposing keratinocytes to CP at levels that significantly damage lipids, proteins, and DNA, Nrf2 is suppressed as a result of Akt degradation [124]. Similarly, inhibiting JNK (but not ERK or p38) attenuated CP-mediated HO-1 induction [138], indicating that CP-derived RONS may cross-activate Nrf2 through JNK, although the mechanism for this effect is yet to be determined.
HO1, an integral product of Nrf2 activity, catabolises haem into biliverdin, releasing Fe2+ and carbon monoxide (CO), with biliverdin being further degraded to bilirubin by biliverdin reductase [182]. Bilirubin, CO, and the ferritin upregulation driven by Fe2+ protect cells through RONS scavenging to inhibit apoptosis and inflammation [183]. This, in concert with increased GSH synthesis, seems to support Akt activation, promoting cell survival by inducing Nrf2/ARE and inhibiting p53 and Bax/Bcl2-dependent, caspase-mediated apoptosis. Moreover, p53 performs a biphasic function in Nrf2 modulation; low p53 activity enhances Nrf2 expression to foster cell survival, whereas high p53 activity suppresses Nrf2 to promote cell death [184]. In this sense, the p53/Nrf2 axis acts as a focal point in deciding the fate of the cell, and may be a mechanism by which CP can be applied therapeutically on a hormesis basis.

In many cancer cells, Keap1 and Nrf2 gene mutations constitutively increase their activity [185], acting as a counterbalance to promote survival in the higher endogenous RONS environment resulting from malignant metabolic and proliferative activity [186]. Inhibiting HO1 activity or silencing the Nrf2/HO1 genes results in significantly greater apoptosis induced by CP [138]. Therefore, Nrf2/HO1 inhibition and CP adjunct therapy could synergise as a cancer therapy, similar to HO1 inhibition improving conventional chemo/radiotherapy and reducing tumour growth [187], or the synergistic activity of CP with chemo/radiotherapy in vitro and in vivo [139,145,188-194]. Unfortunately, the absence of clinically safe HO1 inhibitors currently precludes testing, but this may be grounds for future clinical research. Simultaneous treatment with CP and pharmacological inhibition of Trx also synergistically enhanced caspase 3-dependent apoptosis, with some degree of lipid peroxidation-dependent ferroptosis (an iron-dependent form of programmed cell death linked to cytotoxic lipid peroxide accumulation), in glioblastoma cancer cells and slowed tumour growth in glioblastoma-bearing mice [145]. As glioblastomas are incurable, manifest as the most aggressive and deadly tumours of the brain, and are rapidly growing in incidence [195], CP therapy could have a massive impact on improving the survival and prognosis of these patients.
Recently, it was observed that CP activates Nrf2 concurrently with BTB and CNC homolog 1 (Bach1) in THP1 cells [179], an antagonist of Nrf2 that competitively binds sMAF proteins as negative feedback on the ARE [196]. This is in contrast to previous work showing that CP enhances Nrf2 and suppresses Bach1 transcription in HaCaT cells [170]. The reason for these contrasting cellular responses remains unclear. Considering that Bach1 is degraded in cells under oxidative stress [197], that Nrf2 activity may paradoxically lead to the upregulation of Bach1 in cancer cells [198], and that Bach1 is an important oncogene that drives tumour metastasis [198,199], the response of Bach1 to CP in normal versus cancer cells should be investigated. Overall, Nrf2-ARE activation following CP treatment is integral to restoring redox homeostasis and cell survival through its interplay with p53, JNK, and Akt. Additionally, Nrf2 may only be half the story, as the canonical Nrf2 inhibitor, Keap1, also signals cytoskeletal protein rearrangement and cell junction proteins to promote wound healing. However, an increasing body of research shows that Keap1 also influences cell fate decisions [176], which should be further investigated as it relates to CP.

ERS and UPR Signalling Pathway

Under redox imbalance, cells experience endoplasmic reticulum (ER) stress (ERS), causing elevated protein misfolding that impairs cell function. In an attempt to counteract this, the ER releases calcium to signal a general suppression of protein synthesis, while upregulating a suite of chaperones that repair protein misfolding as the unfolded protein response (UPR) to restore homeostasis [120]. There are three arms of the UPR that signal cells to increase their protein folding capability: (i) inositol-requiring protein 1α (IRE1α), which uniquely splices X-box binding protein 1 (Xbp1) mRNA to express the Xbp1 transcription factor, (ii) protein kinase RNA-like ER kinase (PERK), which activates the eukaryotic initiation factor 2-alpha (eIF2α), and (iii) activating transcription factor 6 (ATF6), which is cleaved in the Golgi apparatus to translocate to the nucleus [120]. All three arms promote chaperones and the recovery of ER biosynthetic activity to restore protein synthesis, while IRE1α and ATF6 also signal inflammatory responses. However, only prolonged PERK activation can decide cell fate via C/EBP-homologous protein (CHOP), in two ways. If the ER recovers, CHOP acts as negative feedback on PERK activity by promoting GADD34 and constitutive repressor of eIF2α phosphorylation (CReP) binding to protein phosphatase 1 (PP1), which dephosphorylates (deactivates) eIF2α [200,201]; if ERS goes unresolved, prolonged CHOP activation triggers apoptosis [202]. One of the integral ERS-associated proteins that signals ERS and the initiation of the UPR is glucose-regulated protein 78 (GRP78). GRP78 is expressed on the ER membrane as an ERS sensor bound to PERK, IRE1α, and ATF6, and acts as a chaperone that binds misfolded proteins [203]. Therefore, the UPR can be enacted by both normal and malignant cells in an attempt to restore cellular homeostasis, including in response to redox dysregulation.
Subsequently, the induction of the UPR in response to CP was confirmed in airway epithelial cells, revealing significant upregulation of a number of ERS-associated proteins, including GRP78 [159]. Importantly, apoptosis was absent at lower CP exposure times in these studies [159,171]. While these studies have shown that CP can induce ERS/UPR, CP has also been shown to remediate cells already experiencing ERS. In a chemically induced atopic dermatitis model in vitro and in vivo, elevated inflammation (upregulated TNFα, IL1β, and C-C motif ligand 2 (CCL2), with decreased anti-inflammatory IL10), ERS/UPR, GRP78 expression, and eventually CHOP-mediated apoptosis were observed, all of which were curtailed by CP treatment via the induction of HO1 [204]. HO1 induction as part of the response to CP indicates the role of Nrf2 in conditioning the cell to restore redox homeostasis and cellular function [177]. Altogether, these results show that inducing moderate ERS with CP may activate the UPR in a pro-survival response to redox eustress, while other cell signalling pathways promote wound healing (Figure 3).

ChaC GSH-specific γ-glutamyl cyclotransferase 1 (CHAC1) degrades GSH, and thus plays a role in the redox balance of the cell [205]; it is also overexpressed upon prolonged activation of the PERK/IRE1α arms of the UPR [206]. CHAC1 expression was not induced in keratinocytes treated with PAM activated for only 20 s with CP, but was transcriptionally induced up to 5-fold by PAM activated for 180 s [171]. Therefore, the lack of cell death and the absence of CHAC1 induction at lower CP activation times showed PAM to be safe on skin cells [171]. On the other hand, while the ERS/UPR may be exploited with CP to promote survival in healthy cells, prolonged redox dysregulation in cancer cells may also be achievable through aberrant activation of the UPR. CHAC1 is expressed in breast and ovarian cancer cells and is associated with significantly higher mortality in breast and ovarian cancer patients, implicating a possible role as a metastatic factor in these cancers [207,208]. Furthermore, CHAC1 was among the most upregulated genes in SCC and glioblastoma cells (>16- and 25-fold, respectively) following treatment with PAM [209,210], an induction sizably larger than in keratinocytes [171], indicating an intense degree of redox stress aggravating the UPR in cancer cells that led to apoptosis [209]. In summary, while slightly elevated CHAC1 expression may increase the malignancy of cancer cells [207,208], overexpressing CHAC1 with CP may kill cancer cells via prolonged PERK/IRE1α activation [209]. Activation of the PERK/eIF2α and IRE1α/Xbp1 arms of the UPR, upregulation of GRP78, and eventually CHOP expression leading to apoptosis have also been observed in colorectal and melanoma cancer cells exposed to CP [151,211]. Neuroblastoma cells exposed to CP also activated eIF2α, leading to stress granule formation (protein-RNA complexes that interrupt mRNA translation), in line with a PERK-mediated UPR [212]. To date, ATF6 activation has not shown involvement in CP-induced UPR, and it was not activated in melanoma cells [211]. The ERS/UPR is predominantly a result of the elevated redox stress caused by CP, as NAC was able to prevent the UPR [151]. Unfortunately, all evidence of CP inducing ERS and the UPR to date is in vitro, while only one study showed that CP could remediate already present ERS and UPR in a mouse atopic dermatitis-like in vivo model [204]. The relevance of the UPR has not yet been confirmed in vivo with CP treatment of normal tissue,
wounds, or malignant tissue.

Figure 3 (caption, partial). Activating transcription factor 6 (ATF6) activity has not been observed (indicated by "?"). Lower CP exposure allows cells to recover, while higher CP exposure can cause prolonged C/EBP-homologous protein (CHOP) activity, leading to apoptosis.
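For readers unfamiliar with how fold-change values such as the roughly 5-fold CHAC1 induction in keratinocytes versus the >16- to 25-fold induction in cancer cells discussed above are usually derived, the sketch below shows the common 2^-ΔΔCt (Livak) calculation for qPCR data. All Ct values are invented for illustration and are not taken from the cited studies.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt (Livak) method."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalise to a reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values (CHAC1 vs. a housekeeping gene), not from the cited studies:
keratinocyte_fc = fold_change_ddct(24.0, 18.0, 26.3, 18.0)   # ≈ 5-fold induction
cancer_cell_fc  = fold_change_ddct(22.0, 18.0, 26.6, 18.0)   # ≈ 24-fold induction
print(f"Keratinocytes: {keratinocyte_fc:.1f}-fold, cancer cells: {cancer_cell_fc:.1f}-fold")
```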
ERK1/2, JNK, and p38 MAPK Pathways

The mitogen-activated protein kinases (MAPK) are a family of serine/threonine kinases categorised into three canonical signalling cascades: the extracellular signal-regulated kinases (ERK1/2), the c-Jun N-terminal kinase (JNK), and p38 MAPK. The role of MAPK in cells is complex and influences diverse cellular functions. The ERK1/2 signalling cascade generally promotes pro-survival actions, like regulating the cell cycle in response to stress, but also proliferation [213], whereas the JNK and p38 pathways usually balance cell fate decisions regarding autophagy and apoptosis [214]. However, long-term induction of these pathways also leads to terminal cell death. For instance, JNK and ERK1/2 can both propagate caspase-mediated apoptosis and autophagy, while JNK and p38 activate p53 to enact intrinsic (mitochondria-mediated) and extrinsic (Fas death receptor-mediated) apoptosis [24]. Therefore, the complex dynamics of MAPK activity allow these pathways both to promote the survival and activation of healthy cells and to kill malignant cells when applied through a hormesis approach.

MAPK Inducing Cancer Cell Death

MAPK signalling, on its own and through cross-communication with PI3K (another oncogenic pathway), is among the most prominent drivers of cancer malignancy, promoting cancer cell survival and proliferation [185,215]. In particular, MAPK is frequently mutated in cancer cells, which leads to constitutive MAPK amplification from the nascent pre-malignant stage through to tumour progression and metastasis [216]. Accordingly, this makes modulating MAPK a promising prospect for anticancer therapy. As a result, MAPK signalling alterations in cancer cells and tumours have been well characterised, and have often been shown to lead to impaired invasiveness, cell cycle arrest, and eventual cell death.
Exposure of cervical cancer cells (HeLa) to CP reduced ERK1/2 and JNK activity, while JNK and p38 activity was reduced in ovarian cancer cells exposed to PAM, leading to significantly diminished invasiveness (cell migration) and matrix metalloproteinase 9 (MMP9) activity [147,217]. However, more lethal exposures to CP markedly activate JNK and p38 in several cancer cell types, including HeLa cells and neuroblastoma, prostate cancer, lung carcinoma, hepatoma, breast cancer, and colorectal adenocarcinoma cell lines, leading to cell cycle arrest, after which the proapoptotic Bcl2-family proteins Bax and Bak initiate mitochondria-dependent cell death via cytochrome c release and caspase 9/3-driven apoptosis [39,116,123,142,164]. In several cancer cell types with high gasdermin E (GSDME) expression, CP treatment also led to pyroptosis [142], a highly inflammatory form of programmed necrosis caused by gasdermin (including GSDME) cleavage by caspase 3, leading to gasdermin pore formation that perforates the cell membrane [218-221]. ERK1/2 expression and phosphorylation in cancer cells can also be significantly downregulated by CP [144,222,223]. Exposure of thyroid papillary cancer cells to CP led to a dismantled cytoskeletal orientation and decreased MMP2/9 activity that impaired cell invasiveness, which was not observed in normal thyroid cells [222]. In addition to inactivating pro-apoptotic factors, Bcl2 also inactivates beclin 1, an essential protein for coordinating the formation of the early autophagosome [224]. Concurrently, CP and PAL can also increase ERK1/2 activity in cancer cells, leading to LC3 activity, which, in coordination with JNK inhibiting Bcl2 to release beclin 1 and upregulating autophagy-related protein 5 (ATG5), leads to the development of the nascent autophagosome [149,188,225]. CP also upregulates the expression of pro-autophagic sestrin 2 via JNK to initiate the extrinsic apoptosis pathway [148,226]. Autophagy is essentially a survival mechanism, whereby cells can survive if autophagy resolves the damage. However, because cancer cells are more susceptible to RONS-induced damage, this tends to lead to caspase 3-driven apoptosis. Therefore, the susceptibility of malignant and healthy cells to RONS is also affected by MAPK signalling, as summarised in Figure 4.
Downregulation of ERK1/2 and upregulation of the JNK pathway by CP/PAL are usually accompanied by decreased PI3K/Akt/mTOR pathway and downstream NFκB activity in cancer cells, inhibiting proliferation and promoting caspase 9/3-dependent apoptosis [144,163,210]. ONOO− has been implicated as a major causative RONS, due to the high detection of its biomarker 3-nitrotyrosine in affected cells [115]. The PI3K/Akt pathway is an inhibitor of p53 [227], which is antagonistic to the p53-stimulating effect of the p38 and JNK MAPK pathways [228,229]. Taken together, CP induces apoptosis through the inverse regulation of PI3K/Akt, p38, and JNK, amplifying p53 expression and signalling. Furthermore, JNK activation of caspases 9/3, suppression of ERK1/2 and PI3K/Akt, and mitochondrial dysfunction are the result of insufficient antioxidant capacity to repair the damage caused by CP-derived RONS [108,230,231]. This is clearly evidenced by the frequent findings that MAPK activation, together with downstream cancer cell invasiveness, apoptosis/pyroptosis, and autophagy, is ablated by preconditioning cells with the RONS scavenger NAC [116,123,142,144,145,147-149,152]. H2O2 may be the major culprit, given that selective H2O2 quenching also prevented cell cycle arrest and death [145,164], in addition to the increased sensitivity of cancer cells to H2O2 due to elevated aquaporin expression and lower membrane cholesterol content [104,107], as described earlier.
Various preclinical mouse models of tumour progression have been used to test the efficacy of CP as an antitumour therapy [147,152,223,232]. A reduced rate of glioblastoma and ovarian tumour progression was observed when the respective cancer cells were pretreated with PAM/CP prior to xenografting, with immunohistochemical analysis showing increased p38, JNK, and caspase 3 activation in CP-treated tumours [147,232]. However, pre-treating cancer cells with CP in vitro before xenografting lacks clinical relevance. In two other studies, BALB/c mice with lung cancer and glioblastoma cell xenograft tumours were treated with CP daily for 10 and 20 days, respectively [152,223]. Glioblastoma tumour growth was significantly decelerated by CP treatment, with immunohistological analysis showing extensive DNA damage, NOX3 expression (indicating endogenous H2O2 production), and caspase 3 activation, implicating JNK and p38 activity, but tumour growth still persisted [152]. In contrast, 10 days of CP treatment trended towards reducing tumour progression, but the effect was not significant [223]. Regardless, both of these studies showed that CP induced JNK and p38 activity to slow tumour growth, but that CP therapy alone was not effective enough to halt tumour progression. Fortunately, mouse xenografted tumour models have shown that combination therapy of CP/PAL and chemotherapeutic drugs can synergistically reduce tumour growth rate and even lead to tumour size reduction [139,145,191,193,194]. However, more animal models that test adjunct CP treatment with chemotherapy against progressive tumours are warranted to determine parameters for CP-augmented chemotherapies before progressing to clinical trials.

MAPK in Wound Healing and Inflammation

The hormesis principle applied to CP treatment of healthy cells to promote healing potentiates a different profile of MAPK activity than that observed in destroying cancer cells. Exposure of keratinocytes to CP has led to increased gene expression of MAPK and MAPK kinases (MEK), with downstream expression and release of IL6, IL7, IL8, IL3 receptor (IL3R), IL4R, and IL6R in keratinocytes [171,233], which collectively promote the inflammatory response, leukocyte recruitment, monocyte activation, and differentiation into macrophages [234]. Growth factors, including granulocyte-macrophage colony-stimulating factor (GMCSF), HBEGF, and VEGF, were also upregulated [171], and these factors demonstrate that CP indirectly stimulates proliferation, migration, and MAPK activation through the autocrine actions of growth factors. Shorter CP exposure times (20 s) marginally activated MAPK, but cytotoxic exposures (180 s) markedly activated all arms of MAPK signalling, upregulating HSP27 to arrest the cell cycle and prolonging the stimulation of p53 to induce apoptosis [233]. Upregulation of HSP27 likely acts as a negative feedback mechanism to delay apoptosis [235], as HSP27 is known to prevent cytochrome c-dependent activation of caspase 3 [236], but CP exposure in this experiment was too harsh to prevent apoptosis [233].
Similar results were also observed in THP1 monocytes in vitro, which upregulated JNK and p38 phosphorylation, as well as HSP27, in response to PAM [237]. Interestingly, leukaemia (Jurkat) lymphocytes are far more sensitive to PAM, showing significantly stronger JNK and p38 phosphorylation and caspase 3-mediated apoptosis with low HSP27 detection [237]. For isolated human monocytes, PAM formed from 30-60 s of CP exposure stimulated monocytes through MEK-ERK1/2 [238], concurrent with elevated proinflammatory IL8 release [239]. Conversely, 3-6 min of CP exposure induced phosphorylation of the ERK1/2 and JNK pathways, leading to caspase 3 activation, with JNK activation dependent on the H2O2 concentration in PAM [238]. Therefore, the data to date suggest that low CP exposure times activate monocytes and have been shown to promote the proinflammatory M1 (antitumour) differentiation of macrophages [240]. Conversely, high CP exposure times may kill immune cells that would otherwise help modulate inflammation, which would not be beneficial in chronically inflamed wounds.

Compared to the antitumour testing of CP and PAL in animal models, there is a paucity of animal experiments translating CP-induced effects on MAPK into wound healing. What has been found is that CP and PAW treatment significantly accelerated wound healing in Sprague Dawley rats with full-thickness excisional wounds, culminating in >90% wound closure approximately two to four days earlier than in untreated wounds [153,241], with PAW-treated wounds reaching full closure 7 days faster than untreated wounds [242]. The CP-treated wounds exhibited significantly greater ERK phosphorylation, promoting proliferation, and N-cadherin expression with lower E-cadherin expression [241], which are crucial steps for detaching cell-cell junctions to promote cell migration and re-epithelialise wounds [243]. This is reciprocated by findings that PAW increased integrin β1/5 expression and the phosphorylation of FAK and paxillin in keratinocytes, indicating that PAW promotes keratinocyte migration and proliferation in vitro and in vivo [153].
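Wound-closure outcomes like those above are typically reported as percent closure relative to the initial wound area measured by planimetry. The short sketch below shows that calculation with invented area measurements, purely to illustrate the metric rather than to reproduce the cited data.

```python
def percent_closure(initial_area_mm2, area_mm2):
    """Percent wound closure relative to the day-0 wound area."""
    return 100.0 * (initial_area_mm2 - area_mm2) / initial_area_mm2

# Hypothetical planimetry measurements (mm^2) for one wound per group, not real data:
days       = [0, 3, 7, 10, 14]
untreated  = [100, 85, 60, 35, 12]
cp_treated = [100, 70, 35, 10, 0]

for day, u, t in zip(days, untreated, cp_treated):
    print(f"day {day:>2}: untreated {percent_closure(100, u):5.1f}% | "
          f"CP-treated {percent_closure(100, t):5.1f}% closed")
```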
Neutrophils are an integral first responder in host defence against pathogenic microbes, but have received little attention regarding their response to CP. One study found that direct CP treatment of isolated neutrophils elicited the release of neutrophil extracellular traps (NET) and NOX/O2•−-dependent NETosis [244]. NETs are the release of histone-conjugated genomic DNA bound to proteases into the extracellular space from neutrophils to trap microbes, and this leads to NETosis, a form of programmed cell death specialised to neutrophils [245]. Like CP treatment, NET formation has been linked to the activation of p38 and ERK1/2 MAPK [246]. The role of CP-induced NETs/NETosis in enhancing host defence is likely negligible, as CP itself is a broad-spectrum antimicrobial [40,41]. Then again, in chronic wounds, which have high neutrophil abundance and dysregulated inflammation, CP may induce pathological NET release in the wound that can antagonise other wound healing processes by exacerbating prolonged inflammation [247]. However, this is still controversial, and further mechanistic studies are needed to validate this finding. Currently, the clinical relevance of NETs/NETosis to wound healing, whether due to CP or not, is also still unclear. Additionally, NETosis has been linked to increased tumourigenesis, and inhibitors of NET release are being trialled in conjunction with other cancer therapies (reviewed in [248]), but NET formation in cancer as a result of CP treatment has not been investigated. Therefore, the possibility of NETosis due to CP treatment of tumours during cancer treatment should be investigated in future animal and clinical trials.

PI3K/Akt Pathway

The phosphoinositide 3-kinase (PI3K)/protein kinase B (Akt) signalling cascade orchestrates diverse regulatory functions in cell survival, disease pathogenesis, angiogenesis, and tumorigenic processes, and it is therefore an important pathway for both the induction of wound healing and cancer intervention [216,249]. Healing of acute wounds normally occurs with basally elevated Akt/mTOR activity to promote cell growth, migration, and angiogenesis to help revascularise the new tissue [250,251]. In contrast, chronic wounds, including diabetic foot ulcers (DFU), have been shown to exhibit impaired PI3K/Akt/mTOR signalling, which impairs cell survival and reduces the release of growth factors that would otherwise stimulate the surrounding tissue to heal the wound [252].
PAM has demonstrably promoted keratinocyte proliferation in vitro and in vivo via the promotion of Wnt/β-catenin signalling and PI3K/Akt/mTOR [143,253,254]. Akt inhibits glycogen synthase kinase 3β (GSK3β), which prevents β-catenin degradation and allows its nuclear translocation to promote cyclin D expression and the consequent G1/S proliferative cell cycle phase in keratinocytes [254]. Evidently, a hormesis approach to CP applies here, as higher CP-derived RONS (H2O2 and possibly NO) exposures then reduced PI3K/Akt, ERK1/2, and β-catenin signalling back to untreated levels [143]. Another axis is shown whereby CP-induced PI3K/Akt activation leads to downstream ERK1/2 and NFκB activation to also promote cell survival and proliferation, likely involving CP-derived NO [143,162,253]. PI3K/Akt signalling also leads to NO synthase activation, which increases endogenous NO production [255], followed by activation of protein kinase G and downstream ERK1/2 to promote proliferation [256]. Therefore, not just CP-derived NO, but also the stimulation of endogenous sources of NO is involved in promoting survival, proliferation, and angiogenesis, which are critical to wound healing.

Akt can also dampen pro-apoptotic p53 and Bax expression, leading to the upregulation of anti-apoptotic Bcl2 and increased Nrf2 activity to improve cell survival [177]. Skin-derived mesothelial cells exposed to PAM also had higher Akt expression and phosphorylation with dampened p53 activity to promote survival, while fibroblasts died through p53-dependent apoptosis [257]. Furthermore, CP treatment of skin cells also results in the release of epidermal growth factor (EGF) and keratinocyte growth factor (KGF), which promote autocrine skin cell proliferation, elevate MMP2/9 activity for ECM remodelling, and release VEGF to recruit endothelial cells to aid in angiogenesis [258]. Stimulation of PI3K/Akt by CP is summarised in Figure 5.

Suppressing PI3K/Akt Signalling in Cancer Cells

The PI3K/Akt signalling pathway is commonly overexpressed through genetic alterations in cancer cells, leading to metastases [249]. In normal cells, active phosphatase and tensin homolog (PTEN) inhibits PI3K/Akt activation, whereas many types of cancers are PTEN-deficient, leading to aberrant PI3K/Akt signalling [249,259]. The use of CP in the destruction of cancer cells has been shown to occur with increased Akt degradation due to redox stress [109,117,144,149,150,191].
The first study in head and neck cancer (human SCC15 and mouse SCC7) cells found that CP increased apoptosis, concurrent with reduced Akt phosphorylation and expression through MUL1-encoded E3 ligase-mediated ubiquitination [150]. Furthermore, tumour growth in a mouse model of head and neck cancer was slowed and eventually stopped by CP treatment, with increased MUL1 expression and decreased Akt phosphorylation confirmed immunohistochemically [150]. MUL1 expression can also stimulate NFκB activity to inhibit apoptosis, yet this is interpreted as a short-term effect in which cells eventually succumb to apoptosis during prolonged stress [260]. Reduced Akt/mTOR expression has also been linked to reduced hypoxia-inducible factor 1α (HIF1α) expression, dampening proliferation [117]. Despite this, direct CP treatment of glioblastoma cells attenuated proliferation and decreased total Akt protein expression, but increased Akt and ERK1/2 phosphorylation 24 h after exposure [261]. Although Akt degradation occurred in both situations, the differences in Akt activation observed between cell lines in these studies may be due to the PTEN status of the cancer cells. The U-87 glioblastoma cell line, which showed elevated Akt phosphorylation in response to CP, is a PTEN mutant, while the LN-18 cell line is PTEN wild-type and did not show elevated Akt phosphorylation [261]. In view of this, the gene status of PTEN (among other tumour suppressor-related genes) may be a factor that affects CP therapy against cancers.

In addition to targeting PI3K/Akt in cancer, the signal transducer and activator of transcription 3 (STAT3) pathway counterbalances PI3K/Akt/mTOR signalling via PTEN and promotes tumourigenic proliferation, invasion, migration, and angiogenesis in most cancers [262]. When Akt is inhibited in PTEN-deficient cancer cells, there can be a large compensatory increase in STAT3 signalling [263]. Therefore, anticancer therapeutic strategies that inhibit both PI3K/Akt and STAT3, particularly in PTEN-deficient cancers, could be more effective in halting tumorigenesis [263]. CP and PAM at lethal exposures also led to the deactivation of STAT3, which, together with the degradation of Akt signalling, halted tumour growth and promoted caspase 3-driven apoptosis in osteosarcoma and pancreatic cancer cells [109,149,191].
PAM has also demonstrated efficacy in destroying cancer cells [140,264]. PAM-treated human hepatoma (HepG2) cells had significantly decreased expression of Akt and mTOR, which led to decreased p62 (an inhibitor of autophagy) and increased beclin 1 and LC3-II expression to promote autophagosome formation [264]. The autophagosome formation, autophagy, and degradation of Akt/mTOR were all ablated by the simultaneous addition of catalase and SOD, implicating extracellular O2•−/H2O2 as the causative PAM-derived ROS [264]. PAM also induced the expression of PTEN, and consequently reduced Akt phosphorylation and deactivated NFκB. However, it also sensitised cancer cells to the TNF-related apoptosis-inducing ligand (TRAIL) and greatly enhanced caspase 8/3-driven apoptosis [140]. Mechanistically, the increased PTEN activity was likely due to reduced expression of miRNA425, which inhibits PTEN translation [140,265]. TRAIL binds to death receptors to induce apoptosis and is highly selective towards cancer cells, but aberrant PI3K/Akt pathway activity, common in cancer cells [249], can also lead to TRAIL resistance [266]. The importance of PTEN is accentuated by findings that PTEN deficiency increases the resistance of cancer cells to TRAIL [266]. Therefore, these studies suggest that CP could restore TRAIL as a therapeutic option in these cancers, although this is yet to be demonstrated.

Whereas JNK and p38 MAPK promote p53 activity, the PI3K/Akt pathway inhibits p53 [227]. Consequently, aberrant Akt activity promotes cancer cell survival through the inhibition of p53 [177,227]. Treating oral SCC with PAM has led to enriched p53 expression and, consequently, p21-mediated cell cycle arrest, inhibition of angiogenesis, DNA repair, and the mTOR (PI3K/Akt) pathways, and eventually caspase 8/9/3-driven apoptosis [209,267]. The ataxia telangiectasia mutated (ATM) pathway was also implicated in PAM- and direct CP-induced p53 activity [233,267], and, although not cancer-related, PAM selectively induced ATM expression in Mycobacterium tuberculosis-infected macrophages [268]. Additionally, the ATM pathway is associated with activity in a diverse array of other cell signalling pathways, including PI3K/Akt/mTOR [269]. Despite this, the response of the ATM pathway to CP and its relevance to the anticancer effects of CP are still unclear. The multitude of consequences of deactivating PI3K/Akt signalling in cancer cells through cytotoxic CP exposure is summarised in Figure 6.
Discussion and Perspectives

As illustrated in this review, CP-derived RONS can be used to precondition normal cells against redox stress and to promote pro-survival and wound-healing responses by exploiting the acute activation of Nrf2/ARE and UPR mechanisms [159,170-172,174,177]. Furthermore, "high" CP exposure can kill malignant cells through CHOP-dependent apoptosis as a result of prolonged UPR activity [151,211,212] and the inhibition of Nrf2/ARE via Akt degradation and p53 [124,184]. Additionally, CP treatment can differentially modulate the MAPK pathways depending on the exposure time and whether the cells are malignant or non-malignant [39,123,147,164,217,233]. Similarly, CP has been shown to modulate PI3K/Akt in a bimodal fashion, with lower CP exposure times transiently activating Akt to promote survival, proliferation, and angiogenesis during tissue repair [143,253], while higher CP exposure times, causing severe redox stress, lead to heavily suppressed or completely ablated Akt expression and activity, apoptosis, and autophagy [109,117,144,149,150,191]. The interactions between these intracellular signalling pathways and their effects on cell fate in response to CP exposure are summarised in Figure 7.
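The bimodal, hormesis-like behaviour summarised here, with low CP exposure stimulating pro-survival signalling and high exposure suppressing it, can be caricatured with a simple biphasic dose-response function. The functional form and parameter values below are illustrative assumptions only, not a model fitted to any of the cited data.

```python
def biphasic_response(dose, stim_amplitude=1.5, stim_ec50=0.5, inhib_ic50=3.0, hill=2.0):
    """Toy hormetic curve: low-dose stimulation multiplied by high-dose inhibition.
    Returns a relative response (1.0 = untreated baseline)."""
    stimulation = 1.0 + stim_amplitude * dose / (stim_ec50 + dose)
    inhibition = 1.0 / (1.0 + (dose / inhib_ic50) ** hill)
    return stimulation * inhibition

# Print a crude text profile: response rises above baseline at low dose, falls at high dose.
for dose in [0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]:
    response = biphasic_response(dose)
    print(f"dose {dose:>4}: {response:.2f}  " + "#" * int(20 * response))
```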
Direct Comparisons between Malignant Cells and between Malignant and Non-Malignant Cells and Tissue
Current evidence shows that the most well-characterised responses to CP are related to anticancer treatment, bringing a wealth of knowledge on how CP causes antiproliferative, pro-apoptotic, and pro-autophagic responses in cancer cells. Several studies have directly compared malignant and normal human cell responses to CP and PAL under constant or very similar cell culture conditions [84,115,116,140,144,164,226,229,232,261,270-274]. Studies have even compared the responses of multiple cell lines of the same cancer type, or of different cancers, using different conditions for each cell line to investigate CP sensitivity [39,142,189,267]. Although useful insight may still be garnered, comparing responses between cell types, particularly under different culture media conditions, is problematic and may lead to inaccurate conclusions. For instance, the selectivity of indirect CP treatment for inducing apoptosis in malignant but not healthy cells was only observed when cells were kept in their respective recommended media; the selectivity was significantly diminished when both were cultured in the same medium [271]. Similarly, comparing results from similar studies can be problematic if the studies used different CP sources, making any differences largely attributable to the CP configuration. Accordingly, selectivity of CP and PAL can only be established after careful control of cell culture conditions between malignant and non-malignant cells. For this reason, only studies that compared malignant and healthy cell responses to CP under comparable cell culture and CP conditions are discussed here.

Skin keratinocytes, namely HaCaT, are the most utilised reference cell line for comparison with malignant cells. One recent study compared the cellular responses of HaCaT with SCC cells following indirect CP treatment under the same culture conditions and found drastically different responses. Compared to SCC cells, only HaCaT were shown to have upregulated expression of Nrf2-related proteins (NQO1, thioredoxin transmembrane protein 2, and GST M3) following indirect CP treatment [272]. Furthermore, HaCaT demonstrated increased cell motility and increased expression of the adhesion-related proteins angio-associated migratory cell protein (AAMP), Rho-associated coiled-coil containing protein kinase 2 (ROCK2), cortactin-binding protein 2 (CTTNBP2), cilia- and flagella-associated protein 20 (CFA20), and integrin-linked kinase-associated serine/threonine phosphatase 2C (ILKAP; which implicates β-catenin/Wnt signalling that affects cell migration [275]) following indirect CP treatment. In contrast, the same proteins were inversely regulated in SCC (HNO97) cells [272]. This was supported by other studies comparing HaCaT to melanoma cell lines and human brain cancer (U87) cells to normal astrocytes, showing that direct CP selectively killed cancerous, but not healthy, cells through intracellular ROS inducing autophagy or p38- or JNK-dependent caspase 3-driven apoptosis [26,226,232,274]. PAM also upregulated hyaluronan synthase 3 (HAS3) expression, which stimulated hyaluronan production in HaCaT cells and promotes cell motility, adhesion, and growth [276], but not in A431 epidermoid squamous carcinoma cells [277]. Osteogenic responses to CP associated with p38 MAPK activation and GST antioxidant activity were also more pronounced in periodontal bone-derived stem cells than in bone-marrow-derived mesenchymal stem cells under the same conditions
[278]. Evidently, there is strong in vitro support showing that CP can elicit healing activities in normal cells while cancer cells are either deactivated or killed. While these data are exciting, more mechanistic studies are required to confirm the suggested cancer selectivity of indirect or direct CP and PAL treatment on a cancer-specific basis.

Compared to normal prostate epithelial cells, prostate cancer cells showed more pronounced JNK activation and additionally activated p38 to augment apoptosis [164]. Interestingly, quenching single RONS during CP exposure did not affect cancer cell cycle arrest, but quenching H2O2 or O3 returned cell cycles to control levels in normal prostate epithelial cells [164]. Similarly, ERK1/2-mediated autophagic responses (increased bax/bcl2 ratio and LC3 expression) to CP treatment in murine melanoma cells were not observed in normal cells [188]. Altogether, these results support the paradigm that cancer cells are more susceptible to redox stress than normal cells during CP treatment. SCC tissue and proximal healthy tissue also showed different responses to direct CP ex vivo, with higher levels of cytochrome c, higher IL10, TNFα, and IFNγ release, and lower IL22 release from tumour tissue [272], indicating significantly increased stress and apoptosis in malignant tissues. While these findings are promising, further studies are required to facilitate the translation of understanding from in vitro findings to animal and clinical testing.

Non-cancerous abnormal and healthy cells have also been compared following CP treatment. For instance, CP responses have been compared between normal primary fibroblasts and fibroblasts from keloids: benign, hyperproliferative, progressive fibrotic skin lesions characterised by overaccumulation of extracellular matrix proteins like collagen, with scarring that can be both physically and psychologically distressing for sufferers [279]. Interestingly, direct CP exposure did not kill either fibroblast type, but it inhibited cell migration in keloid fibroblasts, associated with reduced collagen expression and suppressed ERK, Akt, and STAT3 phosphorylation, while normal fibroblast migration and Akt phosphorylation were enhanced [280]. Even though this is currently the only in vitro investigation into treating keloid with CP, an assessor-blinded, self-controlled trial of CP therapy for keloid was recently completed [281]. Eighteen participants had one randomised side of a keloid scar treated with a DBD CP device twice weekly for five weeks while the other side was left untreated, with a 30-day follow-up conducted after the last treatment [281]. The colour, pigmentation, rubor, texture, and volume all steadily decreased after successive CP treatments, indicating keloid recession, with only mild scarring observed in one patient that quickly resolved [281]. Therefore, CP therapy could be a management strategy for abnormal tissue conditions. Unfortunately, as a keloid scar covering <5 cm2 was treated for 5-15 min twice weekly to achieve only a marginal improvement over several weeks [281], the limitations of CP therapy will become apparent in cases when keloids (or other conditions) cover larger areas. CP has also shown efficacy against psoriasis, an inflammatory skin condition with abnormal keratinocyte activity exhibiting aberrant MAPK, PI3K/Akt, STAT3, and NFκB pathway activity [282]. Although the molecular pathway responses to CP in psoriasis are not well characterised, CP inhibited STAT3 activation and significantly remedied inflammatory and epidermal hypertrophic
pathologies in a psoriasis mouse model [283,284].

The Hormesis Principle in Plasma Medicine
The mechanisms for utilising CP for therapeutic intervention can be divided into two categories based on the hormesis principle. The first is to induce oxidative eustress in tissue using "low" CP exposure to regulate local inflammatory signalling, bolster cellular redox signalling, and promote cell DNA/protein repair and antioxidant response mechanisms in healthy tissue. This promotes cell proliferation, leukocyte activation, angiogenesis, and tissue remodelling to accelerate healing. The second is applied to kill cancer cells using "high" CP exposure to overwhelm the antioxidant and DNA repair mechanisms mentioned above, inhibit proliferation, and promote apoptosis and autophagy. A hormesis framework for CP dosage in regard to wound healing, cancer therapy, and the respective modulation of cell signalling is represented in Figure 8.

What if cancer cells were instead stimulated and normal cells were lethally exposed to CP? Evidence to date suggests that comparatively "low" doses of CP against cancer cells could be pro-malignant, while "high" doses of CP on wounds may slow down wound healing. The latter has been observed in full-thickness excisional wounds in rats, whereby 1 min of CP exposure accelerated wound healing, while there was no difference between 3- or 5-min exposures and untreated wounds [241]. Correspondingly, lower CP exposure times stimulated tumour spheroid growth, until higher (≥120 s) exposure reduced spheroid growth again [191]. Additionally, osteosarcoma tumour growth was significantly elevated by PAL formed by 5 min of CP activation, compared with a slight decrease in tumour growth seen with PAL produced by 10 min of CP exposure [191]. A significant external factor for the effectiveness of PAL was also the growth dimensions of the cancer cells: 2D monolayers were significantly more susceptible to CP/PAL than 3D in vitro spheroids and tumours [191]. Therefore, there is a clear danger that must be recognised, namely, that CP exposures or PAL formulations that kill in vitro cultures of cancer cells may not be potent enough to impede tumour growth, or worse, may enhance tumourigenesis.

Limitations and Challenges in Plasma Medicine
There are several challenges that hinder understanding of the biochemical and molecular pathways involved in the cellular response to CP. First, primary exposure to direct and indirect CP therapy usually lasts only seconds to several minutes, while downstream intracellular effects have typically been investigated only hours to days after primary exposure has ceased in most studies to date. Even so, what occurs in cells and tissues during active CP exposure is still largely unexplored. Difficulties in investigating biological responses on these short time scales have also been reported; this issue is not unique to plasma medicine but is highly relevant to the redox biology field. However, performing experiments using methods that will not interfere with RONS (or with electric fields, radiation, and free electrons in the case of direct CP) is a significant technical challenge.
Secondly, research into cell signalling pathways has brought a wealth of knowledge on a variety of therapeutic targets for plasma medicine. However, a large majority of this research is based on in vitro cell culture and on comparing malignant to non-malignant cells. Some ex vivo experiments have provided limited direct clinical relevance to the tissue-level response to CP therapy [132,174,177,272], but ex vivo experiments are often short in duration, are maintained in media that change the biological environment, and prohibit the longer-term experiments of CP therapy that are relevant to real application of CP in animals and human subjects, which could span weeks to months. Currently, in vivo investigations of the effects of CP on cell signalling pathways are well characterised in cancer models but scarce in wound healing. Notably, only one animal model each has linked accelerated wound healing to ERK1/2 and to Akt during CP treatment [241,254], with no other study to our knowledge supporting or contrasting these findings. No translation of the effect of CP inducing the UPR during early stages of wound healing has been performed either. Finally, there is a distinct lack of chronic wound models in animal research on CP therapy in wound healing that accurately replicate chronic wounds in humans [285].

Particularly problematic is trying to translate in vitro findings from PAL treatments, including PAM, that act as surrogates for direct tissue exposure to CP. The effects of CP on media components are compounded by the different media compositions between studies, which can drastically affect the RONS profile in the liquid [286] and the consequent downstream biological effects of CP in healthy and malignant cells [25,109,271]. Additionally, water itself is significantly cytotoxic in vitro owing to its hypotonicity, so PAW is usually diluted to half or quarter "concentration" in basal or growth media. This can be a major issue, as media can act as a pH buffer, partially or significantly scavenge the RONS of interest, and produce unknown or overlooked biologically active products, compromising the interpretation of the experimental results. For example, as stated earlier, the presence of pyruvate reduced the cytotoxicity of PAM towards SaOS-2 human osteosarcoma cells through phosphorylation of ERK1/2, GSK3β, AMPK, and c-JUN and dephosphorylation of HSP60, which was primarily attributed to pyruvate scavenging H2O2 [109]. As pyruvate is a common media supplement used as a carbon source for cell metabolism, especially in cancer malignancy [110], it is essential to consider its presence in the context of plasma medicine experiments. This consideration should be applied to other substrates/metabolites where possible. Therefore, experimental designs generally require additional control of variables, such as preferring serum-free medium, excluding redox-reactive supplements, and including exogenous RONS scavengers as additional treatment groups to validate the presence of particular RONS. As it stands, interpretations for many of these experimental designs are limited to reasoning along the lines that PAM or PAW "contains RONS that may react directly with cells or respond to biologically active secondary redox products to induce/promote/suppress/ablate a cell response".
Concluding Remarks
At the cellular level, the effects of CP on Nrf2, UPR, PI3K/Akt, and MAPK signalling, their interplay, and their downstream relationship with other tumour suppressor molecules like p53 and NFκB are well characterised. Translation to in vivo relevance is also emerging. Regardless, more research is needed into the role of Keap1 in the response to CP beyond canonical Nrf2 inhibition.

As exemplified by the treatment of keloids and psoriasis, the initial phenotype of the cells, beyond whether the cell is normal or cancerous, may be a significant factor that influences the response to CP exposure. Therefore, other non-cancerous lesions and conditions that are caused by an adverse cell phenotype could be viable targets for CP therapy.

CP has already shown a lack of genotoxicity in the short term when applied to wound healing while still being potently antimicrobial to prevent and treat wound infections. Additionally, CP has potential as an adjunct therapy to synergise with chemo/radiotherapy to treat cancer. Furthermore, CP adjunct therapy may provide opportunities for cancer patients to receive lower chemotherapy doses to better control side effects and thus improve quality of life in sufferers. Ideally, CP therapy could be applied to kill tumour cells at the margins of tumour resections while also stimulating healing of the proximal healthy tissue to close wounds, following the principles of hormesis and the bystander effect. However, this dual effect, if possible, faces tremendous challenges. In particular, quantifying plasma "dose" will become increasingly important as clinical trials in plasma medicine mature into routine clinical use, and more research is needed to standardise reporting and clinical guidelines. Further RCTs will help unravel the mechanisms underpinning CP effects on wound healing and cancer, with the hope of developing more personalised CP treatments with high efficacy and minimal side effects. Additionally, the long-term genotoxic potential of CP technology must be continuously monitored to keep pace with progress in the field.

Figure 1. Chemical reactions of cold plasma (CP)-derived reactive oxygen and nitrogen species (RONS) at the gas-liquid interface and through diffusion in the liquid phase. The depths at which reactions occur are indicated on the vertical axis, with important and negligible reactions denoted by solid and dashed lines, respectively. Large orange (ROS) and green (RNS) vertical bars with "diffuse" arrows indicate diffusion as the dominant factor in the movement of RONS in the depth of the liquid. Redistributed from [29] (CC BY 4.0).
Figure 2. CP-derived RONS promote wound healing by dissociating Kelch-like ECH-associated protein 1 (Keap1)/nuclear factor erythroid-2 related factor 2 (Nrf2) to activate the antioxidant response element (ARE), rearranging cytoskeletal architecture to promote cell motility, recruiting inflammatory cells, and autocrine signalling to promote cell survival, proliferation, and angiogenesis. The Jun N-terminal kinase (JNK) pathway has been shown to potentiate Nrf2 activation, but the mechanisms are still unclear (indicated by "?"), while Akt decreases p53 expression to promote Nrf2. Keap1 activity may also go beyond the canonical function of inhibiting Nrf2, but these functions are greyed out to indicate the unexplored nature of these potential responses to CP.

Figure 3. CP exposure causes endoplasmic reticulum stress (ERS) due to redox stress and activates the inositol-requiring protein 1α (IRE1α) and protein kinase RNA-like ER kinase (PERK) arms of the UPR. Activating transcription factor 6 (ATF6) activity has not been observed (indicated by "?"). Lower CP exposure allows cells to recover, while higher CP exposure can cause prolonged C/EBP-homologous protein (CHOP) activity, leading to apoptosis.

Figure 4. CP-derived RONS promote cancer cell autophagy and death by activating the mitogen-activated protein kinase (MAPK) family signalling pathways. RONS activating the JNK and extracellular signal-related kinase 1/2 (ERK1/2) pathways leads to autophagosome formation and autophagy, while JNK and p38 also potentiate apoptosis via both the intrinsic and extrinsic pathways of p53. Cancer cells that highly express gasdermin E (GSDME) also undergo pyroptosis.

Figure 5. Phosphoinositide 3-kinase (PI3K)/Akt/mammalian target of rapamycin (mTOR) pathway stimulation by CP leads to multifactorial stimulation of cell survival and proliferation by indirectly

Figure 7. Overview of the effect of CP on cell signalling pathway activity, interactions between pathways, and cell fate decisions. The black cross indicates degradation of Akt. Black and red arrows indicate the direction of activation, being (↑) upregulated and (↑↑) prolonged or excessive upregulation.
Figure 8. Conceptual hormesis response curves for cancer (red) and normal (green) cells, underpinned by the different responses in cell signalling pathway regulation between redox stress and eustress. The red and green text boxes represent cancer and normal cells, respectively. Arrows indicate the direction of activation, being (↑) upregulation, (↓) downregulation, and (↑↑) prolonged upregulation.

Table 1. Phase I and II prospective clinical trials of CP technology for the promotion of wound healing.

Table 2. Phase I and II prospective clinical trials of CP technology against cancers and tumours.
20,374.2
2024-05-01T00:00:00.000
[ "Medicine", "Environmental Science", "Chemistry" ]
Children’s experiences of engaging with ICT in learning EFL: A case study from Saudi Arabia
Within the last decade, there has been a push for the use of ICT-based education tools across all levels. However, there has been a lack of research on how ICT interventions affect students' ability to learn. For young children, the focus needs to be on improving the experience of learning and boosting their learning skills. This paper looks at how an ICT intervention affects Saudi preschool children's experience of learning English as a Foreign Language (EFL). Data were collected using English test scores and student observation. The findings of this paper indicate that ICT can be useful in teaching EFL, but it needs to be used with the abilities of the students in mind.

Introduction
In the 2000s, the Saudi government started to look for ways to make the country modern and competitive (Alnofai, 2014). In a globalised world, knowledge of English was seen as one of the key ingredients for success, for both organisations and individuals (Faruk, 2014). Within the last two decades, Saudi Arabia has been looking to invest significantly in education, especially in modern and scientific education. Aligning the Saudi education system with the global system is one of the key objectives in this regard. Teaching English as a foreign language (EFL) at an early stage in child education is seen as a critical component of this education reform plan (Alnofai, 2014). Saudi Arabians now view English as the one universal language of modernization, sciences, and high economic status, making English a requirement for the labor market (Faruk, 2014). However, the current state of teaching EFL is seen as poor due to reasons such as poorly trained teachers (Alharbi, 2015), the use of methods such as Grammar-Translation, the use of Arabic as the language of instruction, and the use of deductive methods (Assalahi, 2013). The use of modern technology is seen as one of the solutions to improve the quality of learning EFL in Saudi Arabia (Alharbi, 2015). Alfahad (2009) conducted a study on Saudi Arabian students and concluded that technology is likely to enhance communication skills and change students from passive learners to more communicative learners.

Several studies have confirmed that the use of ICT can be useful in language teaching and learning (Hubbard, 2013; Kean, Embi, and Yunus, 2012; Jung, 2006). The use of ICT helps individuals develop their creative abilities, problem-solving skills, and critical thinking ability (Hubbard, 2013; Kean et al., 2012; Klimova and Semradova, 2012). ICT also gives unrestricted access to a global repository of information and knowledge, which has become even easier with the ubiquity of communication media such as smartphones.

This study focuses on the use of ICT in the learning of EFL in Saudi Arabia. The study is anchored in the recent attempts of the Saudi government to introduce English as a subject in the curriculum of elementary education, attesting to the increasing importance of teaching EFL to younger students.

Theoretical overview
According to Piaget's constructivist theory, child-determined exploration and guided discovery, as opposed to direct teaching, serve as the basis for learning. In addition, Ciampa (2012, pp.
3-4) asserted that "the constructivist goals of learner control, autonomy support, choice, active problem-solving, and use of relevant and authentic texts in beginning reading instruction are preferred to explicit, teacher-directed instruction." Thus, Piaget's constructivist theory underpins the stated benefits of ICT use in educational settings. In particular, it is reflected in Weber's (2011, p. 565) claim that ICT provides "better access, more control, and greater freedom for e-learners", as well as in Sharples et al.'s (2009) view that children need to cultivate a reflective 'metacognitive awareness' (meaning an understanding of their own learning processes) of their own creative and safe engagement with ICT.

Similarly, Vygotsky's (1978) Social Constructivism Theory posits that social aspects or factors such as friendships and a sense of togetherness are important to learning. Social interaction plays a key role in the development of cognitive skills (Chung, 2012). Chung (2012) explained that Vygotsky (1978) subsequently developed the 'Zone of Proximal Development' (ZPD) theory to further illuminate the reason behind the positive effect of social interaction on an individual's cognitive development.

ICT tools are viewed as devices that enhance social interaction and child play when applied in educational settings (Gahwaji, 2011). Plowman and Stephen (2005) also found that there is a positive relationship between play and learning, highlighting the advantages of introducing children to ICT at an early age. These findings strongly support the assumptions behind Piaget's Constructivist Theory as well as Vygotsky's (1978) Social Constructivism Theory.

Communicative Language Teaching (CLT) is focused on achieving holistic communicative competence, including linguistic and adequate grammar skills. It focuses on aspects such as fluency and contextual meaningfulness and aims to consider language in functional terms. It thus suggests that productive use is more critical than accurate use of language (Brown, 2007). This view is also supported by the Discourse Approach (Cots, 1996), which suggests that promoting discourse-based teaching can help students acquire strong communication skills. It thus supports the use of real-life materials for teaching language. The aim is not only to improve linguistic abilities but also communicative and pragmatic competence. This means engaging in a discussion in a group, taking turns to listen and to talk based on what has been heard. This is likely to help students learn and apply a foreign language (Celce-Murcia and Olshtain, 2005).

ICT and teaching EFL
Several authors have defined ICT differently, but for the purpose of this research, the definition given by Plowman and Stephen (2005, p. 147) is adopted. They defined ICT as various "audiovisual resources, 'smart' toys […] remote control devices, photocopiers, telephones, fax machines, televisions, and computers, […] toys that simulate appliances such as mobile phones, laptops, cash registers, microwave ovens, and barcode readers as well as computers […]."

Within the context of education in general, ICT has been considered capable of supporting basic education (Sanyal, 2001). With particular reference to childhood education, there appears to be a consensus amongst various stakeholders such as policy makers, practitioners, academics, and parents with regard to the positive relationship between play and learning, highlighting the advantages of introducing children to ICT at an early age (Plowman and Stephen, 2005).
The extant literature that delves into the usefulness of ICT in early education is of critical importance to the proposed study, since this particular strand of literature can help illuminate the ways in which ICT can be used to facilitate the learning of EFL by young children. For instance, extant literature has documented the utility of ICT in enhancing young children's learning and their experiences with peers (NAEYC, 1996). In the same vein, the immense potential of using computers to benefit young children's mental development (Haugland and Wright, 1997) as well as the positive effects of using interactive teaching programs on preschool children's literacy development (Gahwaji, 2011) have also been demonstrated in prior studies. Gahwaji (2011, p. 101) recommended the use of ICT that "allows children to decide pace and direction, and contains sound, voice, and music", as well as ICT that features "open-ended learning tasks with animated routines and directions that can be paused and resumed, or halted, and swift feedback to children in order to nurture their interest."

The use of ICT in educational settings has been described as engaging, enabling, and transformative (Clark et al., 2009; Prensky, 2010). The use of ICT in childhood education has been found to enhance both personalisation and collaboration, providing tools and experiences that can help children improve their social and independent learning (O'Hara, 2008). In the early years of education, technology has been found very useful in creating an 'enabling environment' founded on communication and interaction (O'Hara, 2008). This relates to Montessori's (2009, p. 6) assertion that child development occurs most effectively within environments conducive to 'exploration, communication and manipulation'. While developing practical skills with technology, children will also need to cultivate a reflective 'metacognitive awareness' (meaning an understanding of their own learning processes) of their own creative and safe engagement with ICT (Sharples et al., 2009). This concept has been defined as 'e-confidence' and is a key concern for teachers when planning learning experiences involving ICT. A framework of possibilities for using ICT in early education has been developed by the National College of School Leadership (Blows, 2009). This matrix involves a progressive scale of 'e-words', which describe the increasing utility of ICT in transforming learning and in developing children's thinking skills (Blows, 2009). The integration of ICT into the Early Years must be supported by a comprehensive school e-safety policy (Byron, 2008).

From the constructivist point of view, learning is construed as a process of sense-making and meaning-making of the world. It involves knowledge construction by the pupils through the experiences that they have, by relating their own experiences to what they already know, and through the guidance that teachers are able to offer them (Department for Education and Skills, 2004). Pupil collaboration, experimentation, reflection, and analysis are encouraged through the use of well-suited ICT tools. As a result, pupils become "more independent, active and responsible learners" (Hennessy, Deaney and Ruthven, 2005, p. 2). Hence, ICT is increasingly being acknowledged as a tool that supports student learning (Pera, 2013).
Several researchers have looked at using technology for teaching foreign languages. Yun (2014) supports the use of technology in teaching foreign languages, especially through movies and documentaries, which use everyday language and are therefore useful means of teaching authentic speech. Li and Brand (2009) supported the use of songs and music for similar reasons. Ching and Fook (2013) found that graphic literature magnifies critical thinking skills if implemented in language curricula and model lesson plans.

Research framework
ICT is useful in developing metacognitive capabilities, which eventually lead to core learning-related competencies and, consequently, improvement in language skills. Theories such as CLT, MI theory, and the Discourse Approach suggest that ICT is useful in developing students' metacognitive skills such as creativity, problem solving, communication, and self-development. This leads to improvement in core competencies; for children, three core competencies are identified in this research:
- Task competency: can the student complete a given task, i.e., is he/she aware of what needs to be done?
- Process competency: does the student understand the process that needs to be followed?
- Personal competency: does the student have personal skills such as being cooperative, working in a team, and focusing on the task?
In terms of a child's ability to learn a foreign language, five key measures were identified: listening, speaking, reading, writing, and reading stories. These were identified on the basis that:
- English as a foreign language means that the child has just started to learn English; children cannot, therefore, be expected to have advanced English language skills.
- These five are the key skills that most programs teaching EFL focus on.
- These are the skills that children learning EFL are tested on in the Saudi Arabian education system. This means new forms of evaluation were not required, which minimised the possibility of errors in evaluation.

Method
This research adopts a mixed methods approach involving a single Saudi Arabian preschool setting in Riyadh. The methods employed include English language testing and classroom observations.

Children's tests (pre-test and post-test) are effective ways of ascertaining the effects of the intervention being investigated (in this case, the effects of the use of ICT tools on the children's learning of English). There may be differences in the way the students engage with the ICT which would not be visible from a purely quantitative view, differences which might explain certain patterns in the observed results. This can be captured through observation. Observation is an effective way of "finding out what people do in particular contexts, the routines and interactional patterns of their everyday lives" (Darlington and Scott, 2002:74).

Data collection
The setting of the study was 'Here to Grow', a private school located in the central urban part of Al Riyadh. The school uses a combination of the American curriculum and the Arabic National curriculum. The data collection methods used in this study were: (1) children's tests; and (2) non-participant observation.
1. Children's Tests: The children's tests were used to investigate the impact of the usage of ICT tools on their learning of English and subsequently provide support for, or evidence against, the aforementioned learning theories. The children's tests included the following: (1) an English Language Test, which was administered at the beginning of the second term of the school year, specifically on 25 January 2015, for children belonging to both Group 1 (G1) and Group 2 (G2); and (2) an English Assessment Test, which was administered at half term, specifically on 22 March 2015, and at the end of the term, specifically on 10 May 2015. The English Assessment Test is a standard, in-house test which is actually a grading/rating sheet completed by the teacher. It is used by the school to evaluate the English language skills of pupils from the start of the school year up to the end of the school year. It has five columns: the first four columns correspond to the four grading periods of the entire school year, while the fifth column corresponds to the qualities of the pupils that are being rated by the teacher. The English Language Test measures pupils' skills relating to the following core areas: listening and speaking, reading and writing, and reading stories. The English Assessment Test, on the other hand, was administered after the ICT intervention. The children were tested three times during the study in order to determine the effect of the ICT intervention. Test scores of each child were obtained for all tests.

2. Non-Participant Observation: The present study used non-participant observation to record changes in the metacognitive, core, and language skills of children. All G1 members were initially given access to the tablets and the apps with the guidance and instruction of the teachers, whilst no G2 members were provided with tablets. Hence, G2 served as the control group. Then there was a switching-over point, after which G1 members were no longer given access to the tablets. Hence, G1, which was the experimental group at the beginning of the term, now became the control group. At this point, all G2 members were given access to the tablets, with the guidance and instruction of the teachers, and hence became the experimental group.

Ethics
The ethical considerations relevant to the present study are centered on the following issues: (1) the use of human participants in the study; (2) privacy for the participants and the confidentiality of the information that they have provided; (3) data protection; (4) the requirement for informed consent of the participants; and (5) vulnerable populations.

To address the first ethical issue, protection of participants from any form of physical or psychological danger during their participation in the study was ensured. This was done by explaining to the teachers and parents of the children-participants what their participation would entail. It was made sure that this information was clearly understood by the teachers and parents of the children-participants. Furthermore, participants (teachers) and parents of the children-participants were debriefed after the study was conducted. All participants (and their parents, in the case of the children-participants) were given the opportunity to decide whether or not to continue with their participation in this study. They were given the prerogative to withdraw at any stage of the investigation, and they were informed of their right to do so.
To address the issue of confidentiality of information about the participants and their respective responses, no further contact with the participants was made after the administration of the research instruments, which include the following: (1) the Preschool Evaluation Test and the English Assessment Test; and (2) the Non-Participant Observation for Children Participants Schedule. In addition, the confidentiality and anonymity of all participants was upheld throughout the research study.

To address the issue of data protection, the collection, storage, disclosure, and use of research data obtained from the participants strictly and fully complied with the Data Protection Act of 1998, which places obligations relevant "to fair and lawful data collection and processing" (Matwyshyn, 2009:238). This was done through the inclusion of the following clauses in the Parents of the Children-Participant Debriefing Form: (a) "The information your child provided will only be used for the research, and will not be disclosed to any third party, except as part of the research findings"; and (b) "The data your child provided will be kept, and all collected data were stored and password-protected on the author's personal computer."

Data analysis
Data collected for the children's tests, which consisted of test scores for the English Language Test and the English Assessment Test, were analysed using a series of multivariate analyses of variance using the general linear model program in Microsoft Excel. Data collected from the observation of children participants, which consisted of field notes, were coded and manually analysed. According to Brophy, Snooks and Griffiths (2008, p. 136), for less structured observation such as open observation, data "can be coded and analyzed in the same way as the texts of interview transcripts - carefully relating the codes to the evaluation aims and objectives." Test results for both Group A and Group B indicate that the pre- and post-intervention mean test scores for both groups are statistically different, with the post-intervention scores higher than the pre-intervention scores.

Quantitative analysis of test results
Next, the scores across the five learning dimensions were compared. The data indicate that for Group A, scores across four dimensions improved as a result of the ICT intervention, while the score for writing remained the same. However, the change in score for 'reading stories' was not statistically significant. For Group B, scores across three dimensions improved, while the score for 'listening' remained the same and the score for 'reading stories' actually declined after the intervention. However, only the rises in scores for writing and reading were found to be statistically significant.

Qualitative analysis of observation data
In total, 1071 observation items were recorded. These items were categorised according to the different themes to which they belonged. The table indicates that most of the observations related to creativity, followed by autonomy and personal competency. In fact, in some cases these themes were found to overlap with each other. For autonomy, the most common observation was that after the first session most of the students were able to look for certain pieces of information online without assistance. In addition, many of the students were also able to look for additional relevant information.
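The paired pre-/post-intervention comparison described in the data analysis above (and reported as Table 1) can be reproduced with any standard statistics package. The sketch below is a hedged illustration in Python rather than the authors' actual Excel procedure; the score arrays are hypothetical placeholders, and scipy's paired-samples t-test stands in for Excel's "t-Test: Paired Two Sample for Means".

```python
# Illustrative sketch (not the study's original Excel workflow): a paired
# comparison of pre- vs. post-intervention English test scores for one group.
# The score values below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

# Hypothetical per-child total scores (same children, two time points).
pre_scores = np.array([12, 15, 9, 14, 11, 13, 10, 16, 12, 14], dtype=float)
post_scores = np.array([14, 17, 11, 15, 13, 16, 12, 18, 13, 15], dtype=float)

# Paired (dependent-samples) t-test: tests whether the mean of the
# per-child differences (post - pre) is zero.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

mean_diff = float(np.mean(post_scores - pre_scores))
print(f"mean improvement = {mean_diff:.2f}")
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A small p-value (e.g., < 0.05) would indicate that post-intervention scores
# differ significantly from pre-intervention scores for that group.
```

The same paired comparison can be run separately for each of the five learning dimensions (listening, speaking, reading, writing, reading stories) to mirror the per-dimension results reported above.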
ICT and metacognitive competencies
One of the key objectives of ICT is to improve metacognitive competencies such as autonomy, creativity, and the problem-solving capabilities of students. Development of these metacognitive competencies is critical not only for English language learning but also for the overall academic development of students.

Communication: Student observation found that interpersonal communication between students decreased as a result of the use of ICT. When they had no iPad, they were seen chatting, forming groups, and engaging in activities. However, once they were handed iPads, most of the students seemed too absorbed in working on the iPad. The level of communication between the students decreased substantially while using the iPad. While in other group exercises students were seen playing with each other, during the use of the iPad this aspect was missing. Out of the 17 students in Group A, only 9 instances of communication were noted during the first hour when students were given iPads. In the case of Group B, which consisted of 13 students, the number of such instances was slightly higher at 11, but the quality of that communication was poor in the sense that the students only spoke briefly. This indicates that iPad-based learning may lead to more individualised behaviour.

Self-development: Use of the iPads did lead to some degree of self-improvement, but this was not significant enough to suggest that the use of iPads will definitely lead to improvement in self-development. There was a decline in team working and cooperation. However, students were able to stay calm and focused on the task. Students' persistence in learning about particular topics improved significantly after the introduction of the iPad. Students continued to look for interesting topics, and there was a degree of persistence in learning.

Creativity: The creativity of the individuals seemed to improve as a result of the use of the iPad. Individuals were seen as more creative; for instance, one of the students, who was unsure whether to choose a lion or a tiger as his favourite animal, ended up finding information on the liger (a hybrid between a lion and a tiger) and instantly claimed it as his favourite animal. He searched for a liger based on his imagination, as he had never heard about a liger before. Even his teacher was surprised, because the teacher had never heard of a liger before either. Originality is one of the key aspects of the creative metacognitive competency. There were several instances when students were seen searching for things that they had imagined in their minds. For example, one of the students was seen looking for "sky steps", i.e., stairs which go up to the sky, and another student was seen looking for a "fly boy", i.e., a boy who could fly. These concepts were not linked to their studies but rather emerged from their curious minds. Many of these ideas were completely original, and some were influenced by things that they had seen or experienced. This indicates that the use of the iPad did improve the metacognitive competency of creativity in the students.

Problem solving: The problem-solving ability of students did seem to improve as a result of the use of the iPad. Teachers gave many fun exercises for the children to do, and it was noticed that with time their performance in problem solving improved. The scores in the apps were recorded, and in 26 of the 30 cases the performance of students improved with the use of the iPad. These exercises included some maths problems and some puzzles.
Autonomy: Use of the iPad improved autonomy, as witnessed in the observations. Students required instructions on how to use the iPads in the first two lessons, but by the third lesson students already knew how to use them and showed little interest in the instructions. The iPads allowed the students an environment of self-control, and this was evident in the observations. Individuals were interested in using the iPads, but they were pursuing their own interests, such as looking for information on the special powers of superheroes and then role-playing as superheroes.

ICT and Core competencies
Personal competency: Personal competency refers to self-conceptualisation, autonomy, and self-direction. Personal competency is essential for students to develop their overall learning capabilities. Student observation revealed that the personal competencies of students somewhat improved, but not all personal competencies improved as a result of the use of the iPad. For example, students exhibited greater autonomy as a result of the use of the iPad, but their communication, cooperation, and team-working skills were affected negatively. There were some instances of team working where students were seen helping each other in using the iPads, but this was purposive and did not turn into social interaction.

Task competency: One of the key aspects of learning is understanding the task. The task competency of students showed marked improvement as a result of the use of the iPads. By the third lesson using the iPads, most of the students in Group A were able to perform their tasks as instructed. Similar trends were seen in the case of Group B, with most of the students adapting to the task by lesson 3. Also, when the teacher made a slight alteration to the task, some of the students were able to complete the altered tasks without the need for additional assistance. Some other students were able to complete the tasks by following the individuals who understood what to do. What was interesting was that even the students who misunderstood the task were able to complete it like the previous ones. For this exercise, tasks were arranged in three levels of difficulty based on their complexity. Most of the students were adept at completing the tasks of level 1 complexity, but only a few were able to complete the level 2 complexity tasks by themselves. Some were able to complete them with help, while some others misunderstood them. Almost all of the students were able to complete the task, even if incorrectly. The results indicate that most of the students understood the task from start to finish and were able to complete the cycle of the task, leading to some form of output, albeit a wrong one.
Process competency: In order to test process competency, one test was developed. Students were asked to complete processes of different levels of complexity. The results indicate that the process competency of some of the students improved as a result of the use of the iPad, but the results were not significant enough to suggest that continuous use of the iPad will lead to improvement in the process competency of students. The results indicate inconsistency in process competency as a result of the use of the iPad. It is possible that with time and continuous use of the iPad the process competency of students may improve, but based on the results it is difficult to say that any such improvements can be attributed to the use of the iPad. It is understandable that when individuals repeat certain processes they get used to them and are able to follow them. This test was aimed at understanding whether the process competency of individuals improved as a result of the use of the iPad, while eliminating the possibility of any improvement being due to individuals simply getting used to the complex processes.

ICT and Language competencies
Listening: Use of the iPad seemed to improve the students' ability to listen. Out of the 101 instances recorded for listening, in 53 instances students heard the word correctly, and in another 22 instances the students remembered the word almost correctly. After listening, they were able to write it down accurately, albeit with some spelling mistakes.

Speaking: The speaking ability of students seemed to improve significantly, but the observer noticed that the students found it difficult to understand the accent of the speaker in their iPad apps, whereas they could comfortably understand the teacher. Nevertheless, their speaking ability seemed to improve as a result of using the iPads, especially after multimedia-based sessions.

Writing: There were instances which suggest that the use of iPads may help in improving pupils' writing ability. After the use of the iPad, more students were seen trying to construct the spelling of difficult words based on their own constructive abilities. They would break the words down into smaller chunks and then create the spelling by combining those chunks. There was also a noticeable change in their ability to create longer sentences.

Reading: The ability to read showed noticeable improvement. Most of the students were able to read the sentences correctly by the third lesson using the iPad. This is also reflected in their improved reading score.

Reading stories: Surprisingly, while the reading score showed improvement, the score for reading stories did not improve and in some cases even declined. This was surprising because the students seemed interested in listening to the stories. The only explanation for this was the selection of stories, which were either not interesting enough or too long for the students to read.
Conclusion
The findings of this research indicate that the use of ICT may help in improving the learning of EFL for young children in Saudi Arabia, but this, on its own, may not be a sufficient measure to achieve such improvement. In particular, some contextualisation of ICT modules is required to reduce the effort that teachers and students have to invest in utilising ICT for language learning. It was seen that the use of ICT negatively affects some personal skills, and this is one of the key issues that many researchers have raised in the past as well. This challenge can be overcome by designing ICT-based lessons in a way that facilitates, rather than hinders, personal skills development, especially in terms of cooperation, collaboration, and team working.

Table 1. t-Test: paired two-sample for means for Group A.
Table 3. Comparison of mean scores for each individual item before and after the ICT intervention.
Table 4. Frequency of different themes in the observation data.
Table 5. Results of task competency observations.
Table 6. Results of process competency observations.
6,423.2
2017-10-01T00:00:00.000
[ "Education", "Computer Science" ]
Noise-Resistant Demosaicing with Deep Image Prior Network and Random RGBW Color Filter Array
In this paper, we propose a deep-image-prior-based demosaicing method for a random RGBW color filter array (CFA). The color reconstruction from the random RGBW CFA is performed by the deep image prior network, which uses only the RGBW CFA image as the training data. To our knowledge, this work is a first attempt to reconstruct the color image with a neural network using only a single RGBW CFA image in the training. Due to the White pixels in the RGBW CFA, more light is transmitted through the CFA than in the case of the conventional RGB CFA. As the image sensor can detect more light, the signal-to-noise ratio (SNR) increases, and the proposed demosaicing method can reconstruct the color image with a higher visual quality than other existing demosaicing methods, especially in the presence of noise. We propose a loss function that can train the deep image prior (DIP) network to reconstruct the colors from the White pixels as well as from the red, green, and blue pixels in the RGBW CFA. Apart from using the DIP network, no additional complex reconstruction algorithms are required for the demosaicing. The proposed demosaicing method becomes useful in situations when noise becomes a major problem, for example, in low-light conditions. Experimental results show the validity of the proposed method for joint demosaicing and denoising.

Introduction
Current digital imaging systems often consist of monochromatic image sensors equipped with color filter arrays to capture color information. Color filters are required because the photosensors respond to the intensity of the light and cannot distinguish color information. Therefore, a small filter is coated in front of each pixel so that it filters the light by wavelength range to obtain the specific color information for that pixel. For example, the Bayer CFA pattern [1] passes only one of the primary colors (Red, Green, and Blue) at each pixel, where the ratio of the numbers of the R, G, and B pixels in this pattern is 1:2:1. The other two missing colors have to be reconstructed by demosaicing algorithms [2-11]. However, the problem of low resolution arises with the Bayer CFA because information about two color components is lost at each pixel. Moreover, as much of the light is absorbed by the color filters, the proportion of the light energy to the noise energy decreases, which results in a decrease of the signal-to-noise ratio (SNR). Therefore, the reconstructed (demosaiced) color image becomes noisy, especially in low-illumination environments when the light energy is low. The noise problem deepens as the resolution of mobile photos in smartphones increases, since the smaller image sensor pixels receive less light, which causes a loss of detail and relatively high noise levels. Therefore, to allow more of the incident light to be detected rather than absorbed, CFAs using secondary colors (Cyan, Yellow, and Magenta) have been proposed in various forms, including the CYGM (Cyan, Yellow, Green, Magenta), the CMY (Cyan, Magenta, and Yellow), and the CMYW (Cyan, Magenta, Yellow, and White) CFAs [12,13].
Other demosaicing methods use the RGBW (Red, Green, Blue, and White) CFA, which is a CFA that contains 'White' pixels that let light of all colors pass through [14,15]. In demosaicing methods using the RGBW CFA, there is a tradeoff between the number of R, G, and B pixels and the number of W pixels: a higher number of W pixels results in a larger sensitivity of the sensor, which makes it more robust to noise, but it also degrades the color resolution, as white pixels do not contain color information.

In this paper, we propose a deep image prior (DIP)-based demosaicing method that can reconstruct colors from a random RGBW CFA without the use of any complex reconstruction algorithm other than the training of the DIP with the single CFA image. To train the DIP network, we propose a new loss function that is tailored for the demosaicing of the random RGBW CFA. The loss function differs from those used in existing DIP-based image restoration problems in that, for the white pixels, it imposes a linear constraint that the generated color components should obey instead of specifying the values of the color components directly. This is the first time that the DIP is used for the demosaicing of a CFA which contains white pixels.

Color Filter Arrays in Digital Imaging Systems
Current digital imaging systems often consist of monochrome image sensors that are overlaid with color filter arrays (CFAs) to capture color information. The most commonly used CFA is the Bayer CFA, which consists of Red, Green, and Blue color filters so that only one color component can be obtained at each pixel location, as shown in Figure 1. Let Ω be the two-dimensional spatial domain of the image, and let I_orig[k] denote the three-channel original color image at pixel k ∈ Ω. The sensed CFA image I_s is modeled as

I_s[k] = c[k]^T I_orig[k], k ∈ Ω, (1)

where c[k] is the color sampling vector of the CFA at pixel k. For the Bayer CFA, the components of c[k] are defined as

c[k] = [1, 0, 0]^T for k ∈ S_R, [0, 1, 0]^T for k ∈ S_G, [0, 0, 1]^T for k ∈ S_B, (2)

where S_R, S_G, and S_B denote the sets of the R, G, and B pixels, respectively. The problem of demosaicing is to restore I_orig from I_s, which is an ill-posed problem since I_orig is a three-channel image, while the sensed image I_s is a one-channel monochrome image. Therefore, additional constraints have to be imposed to solve the demosaicing problem, such as the assumption that adjacent pixels have similar colors. By applying this assumption, conventional demosaicing methods interpolate the missing two color components from the spatially adjacent CFA data.

RGBW Color Filter Array
In [16], the proportions of the R, G, and B components in the white pixel are computed based on the assumption that a linear relationship exists between the white component and the three primary color components (red, green, and blue). Based on this assumption, the components of c[k] for the RGBW CFA, which contains R, G, B, and W (white) pixels, can be defined as

c[k] = [1, 0, 0]^T for k ∈ S_R, [0, 1, 0]^T for k ∈ S_G, [0, 0, 1]^T for k ∈ S_B, [α_R, α_G, α_B]^T for k ∈ S_W, (3)

where S_W denotes the set of the white pixels, and α_R, α_G, and α_B are the proportions of the R, G, and B components, respectively, in the white pixel. The values of α_R, α_G, and α_B differ for different sensors. For example, in [16], the authors calculated α_R, α_G, and α_B by optimization, obtaining α_R = 0.2936, α_G = 0.4905, and α_B = 0.2159 for a typical RGBW sensor, i.e., the VEML6040 sensor developed by the Vishay company [17]. In this paper, we use the same values of α_R, α_G, and α_B as calculated in [16], but we emphasize that the proposed method works with any values of α_R, α_G, and α_B.
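The sensing model in (1)-(3) lends itself to a compact simulation. The sketch below is an illustrative, assumption-laden example rather than code from the paper: it builds the per-pixel vectors c[k] for an arbitrary R/G/B/W layout (the proposed random pattern is described in a later section) and forms the one-channel measurement I_s; the α values are the ones quoted above from [16], and the label map and image are synthetic.

```python
# Hedged sketch of the sensing model I_s[k] = c[k]^T I_orig[k] from (1)-(3).
# All arrays are synthetic; the CFA layout here is an arbitrary illustration.
import numpy as np

ALPHA = np.array([0.2936, 0.4905, 0.2159])   # [alpha_R, alpha_G, alpha_B] from [16]

def sampling_vectors(labels):
    """Map a per-pixel label map (0=R, 1=G, 2=B, 3=W) to c[k] vectors."""
    lut = np.array([[1.0, 0.0, 0.0],          # R pixel: keeps only the red value
                    [0.0, 1.0, 0.0],          # G pixel
                    [0.0, 0.0, 1.0],          # B pixel
                    ALPHA])                   # W pixel: weighted sum of R, G, B
    return lut[labels]                        # shape (H, W, 3)

def sense(img_rgb, labels):
    """One-channel CFA measurement: per-pixel dot product c[k]^T I_orig[k]."""
    c = sampling_vectors(labels)
    return np.sum(c * img_rgb, axis=-1)       # shape (H, W)

# Example usage with an arbitrary 4x4 label map and a random color image in [0, 1].
labels = np.array([[3, 0, 3, 1],
                   [2, 3, 1, 3],
                   [3, 1, 3, 0],
                   [0, 3, 2, 3]])
img = np.random.default_rng(0).random((4, 4, 3))
print(sense(img, labels))
```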
It can be seen from (3) that the demosaicing problem can no longer be solved by mere interpolation as with the Bayer CFA because the intensity values sensed at the W pixels become linear combinations of the R, G, and B color components. Therefore, as a set of linear equations have to be solved, more complex demosaicing methods have to be applied, such as those in [12,13] in conventional non-neural network approaches. Meanwhile, neural network approaches for demosaicing use an end-to-end approach where the input is the mosaiced image and the output the ground-truth image. Therefore, the loss function is simple where the output of the network is to be compared with the ground-truth image. For example, the work of [18] uses the loss function defined as where N θ (·) denotes the neural network, and I in i and I t i are the i's mosaiced and groundtruth images, respectively. Other tasks, such as the work of [19,20] we used in the comparison in the experiments, divide the entire image into smaller patches but apply the same loss function as shown in (4) to all the patches: However, conventional neural network approaches require a large dataset of mosaic/ ground-truth images for the training of the neural network, and the training process usually takes several days. In comparison, we propose the use of a deep image prior (DIP) network that can be trained on the single sensed CFA image and can simultaneously obtain the color components for the R, G, B, and W pixels based on a loss function tailored for this problem. Deep-Image-Prior-Based Image Restoration Recently, Ulyanov et al. proposed the use of the DIP for image restoration [21]. The DIP resembles an auto encoder but is trained with a single image I, i.e., the image to be restored. The DIP converts a noise tensor z derived from a uniform distribution into a restored image g θ (z), which is the output of the deep image prior network with parameters θ and input z. In [21], the DIP is applied to the task of inpainting by minimizing the following loss function: where is the Hadamard's product(element-wise multiplication operator) and m ∈ {0, 1} H×W is a binary mask with a value equal to zero corresponding to the missing pixels to be inpainted and a value of one corresponding to the pixels to be preserved. It has been shown in [21] that the minimization of L in (6) with respect to θ results in a DIP network that can inpaint an image. Inpainting and demosaicing are similar in that they fill in the missing information but differ in the fact that in inpainting the existing pixels have full channel information while in demosaicing they have not. That is, while all R, G, and B values are available for existing pixels in inpainting, in demosaicing, existing pixels have only one of the R, G, and B values. In [22], Park et al. use the variational DIP for the joint task of demosaicing and denoising. However, so far the DIP has never been used for the demosaicing of CFA images which contain white pixels. Proposed Method In this section, we explain in detail the proposed method. We first introduce the random RGBW color filter array. Then, we propose a method how to reconstruct the colors from the monochrome CFA image sensed with the random RGBW CFA. Random RGBW Color Filter Array In neural network-based demosaicing methods, it is the CFA that determines the parameters of the neural network because the CFA defines the loss function. 
A different CFA results in a different loss function, which means that the resulting parameters of the network become different for different CFAs. The reason that we come up with a random pattern is based on the guess that teaching a network to generate random colors at random locations would make it easy to find the most meaningful parameters for the backpropagation because there is no bias for specific colors at certain locations. Furthermore, the reason that we used equal numbers of R, G, and B pixels instead of the widely used ratio of 1:2:1 for R, G, and B pixels is that in the reconstructed image, the color seed should serve as a reference point for the R, G, and B colors of pixels around the color seed to revive. When the number of R, G, and B color seeds is unbalanced, then the color will revive itself disproportionately, as we will show in the experimental section. Figure 2 shows a 6 × 6 partial cut of the proposed random RGBW CFA pattern. Fifty percent of the whole RGBW CFA consists of white pixels, and the remaining 50% is divided equally between R, G and B pixels. The value of the sensed intensity value I s [k] contains, according to (1) are constrained to lie in [0, 1] for physical feasibility since they correspond to the opacity rates. It is known that the white pixels let more of the light energy through than the R, G, and B pixels. As more light energy is detected at the white pixels, the signal-to-noise ratio is larger than at the R, G, and B pixels. This makes the random RGBW CFA more robust against the noise. DIP-Based Demosaicing of the Random RGBW-CFA In this section, we propose a DIP-based demosaicing method for the demosaicing of the random RGBW CFA. We denote by g θ (z) the three-dimensional output of the DIP with network parameters θ and z as the input. The size of g θ (z) is H × W × 3, where H and W are the height and the width of the demosaiced image, respectively, and the number of channels is 3, corresponding to the R, G, and B channels. At every pixel k ∈ Ω, where Ω is the two-dimensional spatial domain of g θ (z), the color is defined by an RGB triplet g θ (z)[k], which is a three-element vector that specifies the intensities of the Red, Green, and Blue components of the color. For the purpose of simplification, we will denote g θ (z)[k] as g θ [k]. Thus, where g R θ [k], g G θ [k], and g B θ [k] are the Red, Green, and Blue components of the three-element vector g θ (z) [k]. The loss function used in the training of the DIP is where the components of c ] T are defined as in (3). The minimization of the loss function constrains the output g θ [k] to obey the physical constraint imposed on the sensed monochrome CFA image, i.e., the constraint that the sensed monochrome pixel is a weighted sum of the R, G, and B components, where the weights are determined by the ratio of opacity of the color filters in the CFA. It should be noticed that the form of the loss function is different from those designed for other image restoration purposes with the DIP. In conventional DIPs, pixel-wise comparisons are made between the intensities of the generated pixels and the data pixels. In comparison, the loss function in (8) does not make a pixel-wise comparison between the intensities but imposes a constraint that the generated pixel values should obey. The reason that the original colors can be restored by minimizing the loss function in (8) (8) minimum and the image prior constraint imposed by the DIP. 
However, even though there exists a unique solution, it is not guaranteed that the DIP can find this solution. The R (or G, B) pixels act as anchor points that hold the true R (or G, B) color components of the sensed image. As the other components of c[k] at these pixels are zero, the true R color components will be reconstructed using these pixels when minimizing the loss in (8). The other color components will use these reconstructed true color components as clues and will be reconstructed by satisfying the constraints imposed by the loss in (8) where denotes the element-wise product and RGB2Gray(·) denotes the operation similar to the RGB to Gray image conversion, i.e., the operation that produces a monochrome image by adding the R, G, and B components. Here, the operation of RGB2Gray(·) is adding the three components in the three channels of c g θ (z), resulting in a one-channel image. It should be noted that for the R, G, and B pixels, i.e., for k ∈ S R ∪ S G ∪ S B (where ∪ denotes the union operator), only one component in c [k] g θ (z)[k] is non-zero as c[k] has only one non-zero component. This is why we use the notation RGB2Gray(·) instead of RGB2Gray(·). However, if the loss function in (9) is minimized directly, the colors in the demosaiced image will fade. This is due to the fact that the minimizing of the loss function in (9) reaches the local minimum without sufficiently reconstructing the color components inherent in the white pixels. To prevent this phenomenon, we split up the loss function in (9) into four terms and apply different weights to them: Here, L W is related to the white pixels, and L R , L G , and L B are related to the R, G, and B pixels, respectively, where L W is defined as and Here If λ W = λ R = λ G = λ B = 1, the losses in (9) and (10) are the same. To avoid the color fading artifact, we apply different values of λ W , λ R , λ G and λ B for different iteration number t: For the first 500 iterations, the color image is reconstructed using only the R pixels and the White pixels, i.e., by minimizing only λ W L W + λ R L R with λ W = λ R = 1. By doing so, we expect that the red colors are fully reconstructed at least at the R pixels. After that, we reconstruct the colors for the next 500 iterations according to the sensed G pixels and White pixels, i.e., λ W L W + λ G L G with λ W = λ G = 1, and then, for the next 500 iterations, according to the sensed B pixels and White pixels, i.e., At the end of 1500 iterations, the R, G, and B color components are highly saturated at all pixels. We then run another 600 iterations with λ W = 1.5, λ R = 0.5, λ G = 0.5, and λ B = 0.5. This imposes a strong linear constraint on the white pixels according to the minimization of the loss function in (11) while also trying to maintain the reconstructed colors to some extent. The number of iterations has been determined by experiments. Figure 3 shows the overall diagram of the proposed method and how the input noise is gradually turning into the demosaiced image as the optimization process progresses according to the minimization of the loss function in (8). We use a noise image z as the input to the DIP, which is sum of a constant noise (z c ) and a variable noise (z v(t) ), as in the variational DIP in [22]. The variable noise z v(t) is newly generated for each step of the training and is multiplied by 0.1 before being added to z c : The constant noise z c remains unchanged throughout the training. 
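Putting these pieces together, the following PyTorch sketch outlines the training procedure described above. It is a minimal illustration under our assumptions: the network net is assumed to be the encoder-decoder DIP of [21], taking a 32-channel noise tensor and producing a 3-channel image of the same spatial size; the distribution of the variable noise and the normalization of the loss are assumptions; and rgbw_dip_loss, lambda_schedule, and train_dip are our hypothetical helper names.

import torch

def rgbw_dip_loss(out, I_s, masks, lambdas, alpha=(0.2936, 0.4905, 0.2159)):
    # out: g_theta(z) of shape (1, 3, H, W); I_s: sensed CFA image of shape (1, 1, H, W).
    # masks: dict of boolean (H, W) masks for the R, G, B, and W pixel sets.
    # lambdas: dict of scalar weights lambda_R, lambda_G, lambda_B, lambda_W.
    r, g, b = out[:, 0], out[:, 1], out[:, 2]
    s = I_s[:, 0]
    loss = 0.0
    # R, G, B pixels: the corresponding generated component must match the sensed value.
    for key, comp in (("R", r), ("G", g), ("B", b)):
        loss = loss + lambdas[key] * ((comp - s)[:, masks[key]] ** 2).sum()
    # W pixels: linear constraint alpha_R*R + alpha_G*G + alpha_B*B = sensed value.
    w_pred = alpha[0] * r + alpha[1] * g + alpha[2] * b
    loss = loss + lambdas["W"] * ((w_pred - s)[:, masks["W"]] ** 2).sum()
    return loss

def lambda_schedule(t):
    # Stage-wise weights: R+W, then G+W, then B+W, then all colors with a stronger W term.
    if t < 500:
        return {"W": 1.0, "R": 1.0, "G": 0.0, "B": 0.0}
    if t < 1000:
        return {"W": 1.0, "R": 0.0, "G": 1.0, "B": 0.0}
    if t < 1500:
        return {"W": 1.0, "R": 0.0, "G": 0.0, "B": 1.0}
    return {"W": 1.5, "R": 0.5, "G": 0.5, "B": 0.5}

def train_dip(net, I_s, masks, n_iter=2100, lr=1e-3):
    z_c = torch.rand(1, 32, *I_s.shape[-2:])       # constant input noise (channel count assumed)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for t in range(n_iter):
        z = z_c + 0.1 * torch.randn_like(z_c)      # variable noise, regenerated at every step
        loss = rgbw_dip_loss(net(z), I_s, masks, lambda_schedule(t))
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return net(z_c)                            # test time: only the constant noise is used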
After the training of the DIP is finished, i.e., at the test time, only the constant noise z c is put into the DIP to obtain the final denoised and demosaiced color image. The reason that z v(t) is added to z c in the training is to obtain a denoising effect, as explained in [22], i.e., the extra variable noise z v(t) prevents the output of the DIP from being noisy. Experimental Settings We experimented on the noise-added Kodak [23] dataset, which contains 24 images in bmp image file format with size of 768 × 512, and the McMaster dataset, which is used in the experiments in [24] consisting of 18 color images with resolution of 500 × 500. The noise is derived from a zero-mean Gaussian distribution with different standard deviations for each color channel. We set the standard deviations as 0.0463, 0.0294, 0.0322, and 0.0157 for the R, G, B, and W channels, respectively. The standard deviations are determined according to actual physical measurements of the amount of noises in the R, G, B, and W channels acquired under low light condition. The amount of noise is different for each channel since each color filter absorbs different light energy. Network Structure We use the same structure of the DIP network as in [21] for inpainting. The network has an encoder-decoder type structure that consists of five down-sampling and five upsampling convolutional layers with skip connections, where each convolutional layer consists of 128 feature maps obtained by 3 × 3 convolutions followed by a Leaky ReLU activation. The weights in the DIP are all initialized with a Gaussian noise with standard deviation of 0.03. For the backpropagation optimization, we used the Adam optimizer with a learning rate of 0.001. Experimental Results We compared the proposed method with other joint demosaicing and denoising neural networks such as the Sequential Energy Minimization (SEM) method [18], the DNet [19], and the LCNN model [20]. The SEM method is one of the first data-driven local-filtering methods to use a deep learning based model for joint demosaicing and denoising. The DNet further improves the SEM model by adopting the convolutional neural network in its structure, while the LCNN model is a lightweight convolutional neural network to adopt residue learning and aggregated residual transformations. These methods use a lot of images for the training of the network and use the conventional Bayer CFA. For the comparison of RGBW-CFA-based demosaicing, we compared with the demosaicing method developed by the Sony corporation [15] for the Sony RGBW CFA, with the Paul's method [14] for the Paul's RGBW CFA. We also compare with the residual interpolation method [6] as a representative of classic RGB demosaicing methods. We made quantitative comparisons in terms of the CPSNR (Color Peak Signal-to-Noise Ratio) the SSIM (Structural Similarity Index Measure), and the FSIMc (Feature Similarity Index for Color images). Figure 4 shows how the CPSNR value changes in the process of reconstructing the Kodak No. 17 image as the iteration progresses. It can be seen that after iteration 2100, the CPSNR value converges. Therefore, we normally terminate the reconstruction process at iteration 2100. Furthermore, it can be observed that the PSNR value changes significantly at iterations 500, 1000, and 1500 because the loss function changes at this time, as explained in Section 3.2. 
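For reference, the CPSNR used in these comparisons is the PSNR computed jointly over all three color channels; a minimal sketch (our own helper, assuming images scaled to [0, 1]) is:

import numpy as np

def cpsnr(ref, test, peak=1.0):
    # Color PSNR: mean squared error averaged over all pixels and all three channels.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)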
A similar effect is visible in the reconstructed colors: there is a large change in the colors before and after 500, 1000, and 1500 iterations, respectively, as the loss function changes at those iterations. It can be seen that all the colors are well reconstructed at iteration 2100. To show the effectiveness of using equal numbers of R, G, and B pixels, as opposed to the 1:2:1 ratio of R, G, and B pixels commonly used in conventional demosaicing methods, we compared, in Figure 3, the reconstruction results with RGBW CFAs having a ratio of 1:2:1:4 of R, G, B, W pixels and a ratio of 1:1:1:3 of R, G, B, W pixels, respectively. The Kodak No. 19 image is used frequently in comparisons between different demosaicing methods because it is difficult to reconstruct the fence bars while preventing the aliasing artifact. As can be observed in Figure 9b,c, the RI and the Sony CFAs with their corresponding demosaicing methods show aliasing artifacts (color artifacts) on the fence region. Paul's method and the SEM method can deal with aliasing artifacts to some extent, but some aliasing artifacts still remain. The DNet and the LCNN methods deal better with the aliasing artifacts but are less efficient in eliminating the noise, as can be seen in Figure 9. Due to the smaller noise at white pixels, the effective reconstruction by the proposed loss function, and the inherent denoising property, the proposed method shows good denoising effects, as can be seen in Figure 9h. However, one of the drawbacks of the proposed method is that the color fades a little, as can be seen in Figure 9h. There is still room for improvement if other network structures are used for the DIP, which may be one of the further study topics. Figure 9 panels: (b) RI [6], (c) Sony [15], (d) Paul's [14], (e) DNet [19], (f) SEM [18], (g) LCNN [20], (h) Proposed. Conclusions In this paper, we proposed a DIP-based joint demosaicing and denoising method tailored for the demosaicing of a white-dominant RGBW color filter array. The demosaicing is performed by training the DIP network with a single noisy color filter array (CFA) image. For this aim, we proposed a loss function different from those used in other DIP-based image restoration applications. We showed how to reconstruct colors from white pixels that carry no explicit color information, and the experimental results show that the noise in the white pixels as well as in the R, G, and B pixels is well removed by the regression process inherent in the minimization of the loss function. The proposed method can easily be applied to the demosaicing of other color filter arrays.
5,775
2022-02-24T00:00:00.000
[ "Computer Science" ]
TOWARD FLEXIBLE DATA COLLECTION OF DRIVING BEHAVIOUR : Recently, driving behavior has been the focus of several researchers and scientists, they are attempting to identify and analyze driving behavior using different sources of data. The purpose of this research is to investigate data acquisition methods and tools related to driving behavior, in addition to the type of data acquired. Using a systematic literature review strategy, this study identified tools and techniques used to collect data related to driving behavior among 120 selected studies from 2010 to 2020 in several literature resources. It then measured the percentages of the most commonly used methods, as well as the type of data collected. In-vehicle and IoT sensors was found to play the greatest role in data collection in approximately 67% of the documents selected studies; And concerning the type of data acquired, those relating to the vehicle are the most widely collected. Thus, this study definitively answers the question regarding the different data sources and data types used among researches. However, further studies are needed to give more attention to the driver's data and also to investigate the data from the three dimensions of driving (driver, vehicle, and environment) together as an integrated and interconnected system. INTRODUCTION Recent research indicates that driver error contributes to up to 75% of all roadway crashes (Stanton and Salmon, 2009). Literally, human factors contribute in the manifestation of 95% of all accidents, according to study of 2041 traffic accidents conducted by (Sabey and Taylor, 1980). Reducing those huge numbers and save people's life become necessity, for that reason and in order to improve safety, security and comfort of the driver and other road's users, many studies were dealing with the topic of driving behavior (DB) using different approaches and techniques. The common element in these studies, is represented by source of data according to (Elamrani Abou Elassad and Mousannif, 2019). In fact, the majority of researches at the field of DB are using one of those three types of studies, Naturalistic Driving Studies (NDS) , Field Driving Studies (FDS) or Simulator Driving Studies (SDS) to collect data (Yang et al., 2018a). Which clearly allow us to realize the importance of the data acquisition process to analyze driving behavior. Moreover, (Andria et al., 2015) considered that the data acquisition in automotive environments is widely used in everyday applications. Actually, the recent computerizations of cars, together with the development of sensor technologies and car communication devices have transformed the cars into wealthy sources of information on the driver, the vehicle and environment (Bouhoute et al., 2019). In addition, the remarkable advancement of Internet of vehicles (IoV) technologies and big data technologies in recent years have offered new solutions to improve traffic safety and efficiency (Cen et al., 2017). While these precursor works offer helpful insights into DB evaluation from a data-based perspective, it is crucial to note that through data-collection examination for DB analysis is quite limited; to the authors' knowledge minimal work has been directed to the investigation of the harnessed data characteristics in this domain. 
Therefore, this paper aims to present a short survey that reveals methods of data collection process and tools linked to Driving Behavior, in which we present the most techniques and measures used to collect and gather useful information to analyze driving behavior. Another aspect has also been covered in this work which concerns the three driver's dimensions data. The rest of the paper is organized as follows: Section II presents the methodology adopted to select some related existing works to data collection of driving behavior, and the process of extracting and synthesizing the data. In Section III, results obtained about techniques and technologies used to collect data, then the three dimensions of driving behavior related data. Finally, Section IV concludes the paper. METHODOLOGY In order to identify, analyze and interpret all available evidence related to "data collection in the area of driving behavior", we planned, conducted and reported the review by following the systematic literature review SLR process (often referred to as a systematic review) suggested by (Kitchenham and Charters, 2007). This process aims to present a fair evaluation of the topic mentioned above using a trustworthy, rigorous, and auditable methodology. research questions and producing a review protocol are the most important pre-review activities. The first two phases are described by (Wen et al., 2012) through their development of the review protocol that mainly includes six stages (Figure 1): research questions definition, search strategy design, study selection criteria and procedures, quality assessment, data extraction and data synthesis. The figure illustrates the whole process followed on this study. The first stage in this process involves raising a set of research questions (RQs) based on the main objective of the study. Research questions The aim of this paper is to summarize and describe the majority of techniques of data collection in the area of driving behavior including all its dimensions. Towards this aim some RQs were addressed. The Search strategy Once the RQs have been identified, a research strategy must be followed it. It consists of selecting the search key terms (keywords), resources (libraries or others with relevant experience) and search process. Search terms The search terms used in this paper were constructed using the following strategy (Wen et al., 2012), : a) Derive major terms from the questions; b) Identify alternative spellings and synonyms for major terms; c) Check the keywords in any relevant papers we already have; d) Use the Boolean OR to incorporate alternative spellings and synonyms; e) Use the Boolean AND to link the major terms from population, intervention, and outcome. The result of analyzing RQs of topic "Toward flexible data collection of driving behavior" mentioned above brought us to extract the following keywords: Data collection -Driving behavior After that, we tried to find new words, synonyms and alternatives spellings of the keywords already found and the results are: • Data collection: acquisition, assembling, • Driving behavior: driving style, driving pattern, driving profile. Once we identified the most keywords and their synonyms, we adopt the basic rule to establish the search string: for each separated word, we found its synonyms and concatenated them with the OR connector. After the definition of the groups of words with their synonyms, we concatenated them with AND to end the string. 
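As an illustration, this concatenation rule can be written in a few lines of Python (a hypothetical sketch, not the script used in the study); the resulting Boolean string is the one given below.

# Build the Boolean search string from the keyword groups.
groups = [
    ["Data collection", "data acquisition"],
    ["driving behavior", "driving style", "driving pattern", "driving profile"],
]
search_string = " AND ".join(
    "(" + " OR ".join('"%s"' % term for term in group) + ")" for group in groups
)
print(search_string)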
And search string extracted from are: ("Data collection" OR "data acquisition") AND ("driving behavior" OR "driving style" OR "driving pattern" OR "driving profile") (3) This strategy was applicated on the title and abstract of each article. Resources In this study, we used four electronic databases as the literature resources to search for primary studies (IEEE Xplore, ScienceDirect, Web of Science and Google Scholar). Since the search engines of different databases use different syntax of search strings, our search string constructed previously was adjusted to accommodate different databases and used to search for journal papers in those electronic databases published between 2010 and 2020. Search process This research has been conducted on the four electronic databases separately, then export the CSV file of the returned papers and gather the results together to form a set of candidate papers ( Figure 3).A script of python was applicated to this set of papers to generate world cloud of the title and the abstract of each article. This script is free available on GitHub 1 . Then, the set of articles selected has been scanned so as to remove duplicated documents. Some reading strategies has been used and described on next subsection 'study selection criteria' to identify 120 relevant articles which were then used for data extraction and data synthesis. Study selection criteria Search criteria for a first stage resulted in 1224 candidate papers (see Figure 3). Due to the fact that many of the candidate documents do not provide any useful information to answer the research questions raised by this paper, further filtering is needed to identify the relevant papers. Knowing that both the title and the abstract are generally written correctly, accurately, carefully, and meticulously, i.e. they confirm whether the document is strongly pertinent for the mean topic of the study or not. Moreover, the 'word cloud' technique reveals the essential from an extract of text, fast and engaging. It was applied to the title and the abstract of every documents, it is used to represent the words that compose the title and the abstract in different sizes according to the frequency of their use, as illustrated in ( Figure 4). Analyzing the results obtained and keeping those in which the following words, 'data, behavior, driving, collecting and acquisition'. If one of the above-mentioned terms appeared widely and broadly, we select the article. As a result, we have selected 244 articles. We have used the Skimming 2 and scanning 3 reading's techniques for the purpose of removing the duplicated articles and to get a general overview of the relevant article. During this stage, we try to preserve scientific documents that contain valuable insights about the data collection, thus permitting us to highlight 120 relevant articles. Study quality assessment On the one hand, the quality assessment QA of the selected studies is initially used as the basis for weighting the quantitative data extracted in the meta-analysis according to (Julian PT Higgins, 2009). And since we are interested in this first work by the percentage of data sources used and the percentage of driving's dimension data on the other hand, we do not specify a dedicated QA to this paper. Instead, we just verified whether the articles involved provide relevant information regarding of all these aspects. Data extraction forms: This subsection aims to clarify the process of extraction the data followed in this paper. 
We exploited the selected studies to collect the data that contribute to addressing the research questions concerned in this work. In fact, the data extraction process is designed to answer the following questions: ▪ what is the data acquisition tool used to acquire the data? ▪ Which of the three dimensions of driving is covered? driver, vehicle, or environment? While trying to find answers to these questions, some data could not be extracted directly from the selected studies. Nevertheless, we were able to obtain them indirectly by processing the available data in an appropriate form. For example, there are some studies that use databases offered by other previous works, in this case we try to see sources of data in the original work if available, otherwise we conclude based on the rest of the article. ( Figure 5) illustrates some extracted data. As shown, the figure composed of three headings. These rubrics are in a way a reformulation of the previous questions. The data extraction process consists of giving "1" or "0" (green icon or reed icon) according to the presence or absence respectively of each item in the article, the comprehensive list of the relevant articles selected to this paper and extraction results are given in the appendix. Data synthesis methods Data synthesis aims to gather all previous results, interpret results, shed light on the interests of most researchers and reveal some future areas of research. Actually, the purpose of data synthesis is to aggregate evidence from the selected studies for answering the research questions. Therefore, by summing up the scores obtained through results of data extraction process and using some visualization tools, including pie chart to present the percentages pertaining to the source of data used and DVE's data among all the selected articles, we can move to the next section which will be dedicated to results and discussion. RESULTS AND DISCUSSION This section presents and discusses the findings of this short review. First, we introduce the data collection topic and the statistics of most commonly methods used to gather information related to driving behavior. Then, we present the instruments and measurements techniques used to collect data according to the selected studies one by one in the separate subsections. Second, we reported statistics of researchers' attention to driver's dimensions. Data collection This section aims to shed light upon the data collection process and the most techniques used in the literature research related with the field of driving behavior. Before citing methods of data collection, it appears necessarily to define what is the data acquisition first? According to Cambridge Dictionary (Cambridge), data collection activity means collecting information that can be used to find out about a particular subject. This activity enables a person or organization to understand the relevance topic, answer its linked questions, evaluate outcomes, and make predictions about future probabilities and trends. So, in order to understand driving behavior and a major factor for road traffic safety, assembling and gathering its associated data is a mandatory stage. Nevertheless, the varieties on the sources of data cause a difference in understanding driving behavior among researchers. In fact, the studies on driving behavior assessment have not settled on a common framework due to this diversity (Zhu et al., 2017). 
As mentioned above, it is assumed that the rest of this section will describe techniques used in data collection. Some driving style-related studies used self-report and driving behavior questionnaires to collect information, other several studies have taken advantage of new technologies and benefit from the incredible development of automotive sensors such as In-Vehicle Data Recorders, smartphones, IoT sensors and traffic surveillance technologies to sense and collect contributions attributes of DB. In short, according to (Carvalho et al., 2017), the data collected from the action of driving can be carried out by several kinds of sensors, from those of a general kind in smartphones to dedicated devices such as monitoring cameras, telematics boxes 4 and OBD 5 (On-Board Diagnostic) adapter. (Figure 6) illustrates the statistics and the percentages of techniques and methods used to collect data for driving behavior study according to scientific researches selected for this work. A brief description of these techniques is detailed in (Hata! Başvuru kaynağı bulunamadı.) as shown below. As can be seen in (Figure 6), the most dominant data source for driving behavior among the 120 articles used in this paper is the integrated sensors "In-vehicle sensors", at the rate of 40%. Then IoT device and other sensors with a percentage of 26%. The smartphone has also demonstrated a strong capacity of data collection at the rate of 12%, followed by the use of Self-report technique by 10% and other databases with 8%. Finally, the use of the traffic surveillance's tools by 4%. 4 A telematics box or black box is a measurement probe installed inside the vehicle. It may be equipped with its own sensors or be connected to the vehicle's internal sensors via the CAN-bus. 5 OBD is a system that enables current vehicles to carry out a self-diagnosis and provide real time data (e.g., speed) via a standard communications port. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLIV-4/W3-2020, 2020 5th International Conference on Smart City Applications, 7-8 October 2020, Virtual Safranbolu, Turkey (online) Therefore, the type of sensors embedded in the vehicle remains the best source to gather data according to the literature used for this study. In-vehicle and other sensors: Driving behavior related data can be acquired usually by on-board devices. In Vehicle Data Recorders (IVDRs) are one of the tools widely used for on-board data collection. They are devices installed on vehicles that monitor and record continuously the vehicle parameters (Bouhoute et al., 2019). In reality, car sensors can produce about 1.3 gigabytes of data every hour and an estimated 312 million gigabytes every year for 4 hours of daily driving according to IBM (Kimberly Madia, 2014), which provides a valuable opportunity for researchers in the area of Driving Behavior. Based on those sensors, (Boquete et al., 2010), (Xie et al., 2019), among other researchers, present a various platforms as the acquisition system. However, the major limitation of this approach is the availability of OBUs. It is likely that such an advanced technology is only available to a biased subset of vehicles. Furthermore, the need of data from the driver's interaction with the vehicle requires more sensors to be added to the vehicle. 
For this reason, several demonstration tools have been developed to access the available telemetry data, (Ding et al., 2019) Used electroencephalography (EEG) and steering behavior in a simulated driving experiment to test the correlation between some patterns of driving behavior, cognitive states and personalities. Since driving is a social act which human factor has the most important role in it. Some researchers are interested in the study of the human contribution. (Yang et al., 2018b) used an electrode cap connected to Curry 7 software to collect EEG signals. Self-report and questionnaires: Studies in transportation psychology have traditionally employed selfreport measures to examine personality, motivations, cognitions, and perceptions on the one hand, and driving behavior, driving styles and skills and involvement in traffic violations and crashes on the other. Nevertheless, the usefulness and validity of such instruments is often questioned particularly when the aim is to capture risky driving behavior (Boufous et al., 2010). Generally, there is widespread use of self-report measures of driving behavior in the traffic psychology literature. Moreover, Most prevailing studies have used subjective questionnaire data and objective driving data to classify driving behavior whereas few studies have used physiological signals such as electroencephalography (EEG) to gather data (Yang et al., 2018b). One of the studies adopted self-report technique is conducted by (Useche et al., 2019) to collect data for their research that was composed of three core sections: The first part of the questionnaire asked about individual and demographic variables, job-related features and job type and road safety indicators. Although surveys and self-reports represent a powerful and inexpensive tool for studying various topics in traffic behavior in addition to much of the knowledge in transportation psychology that has been gained by this technique, there is still a dispute regarding the usefulness and validity of such instruments, leading to less than ideal and trustworthy reports on one's own driving behavior and some serious limitations that must be taken into account when using these methods. Smartphone: The emergence of affordable sensing and computing platforms has a real impact on the appearance of new fields related to driving behavior. One of them is the analysis of driving performance through the use of mobile technology, a field also known as Smartphone Driving Analytics (Carlos et al., 2019). Recently, smartphones have a rich set of on-board sensors such as accelerometers, gyroscopes, GPS, and cameras. These sensors provide valuable information when investigating users' needs and behavioral patterns. Several researchers are currently using mobile phones to collect and gather driving related data. As reported by (Warren et al., 2019) data collection using un-obtrusive technology such as smartphone provides a valuable alternative to study-based data collection. The percentage of 52% (11 out of 21 studies) related to studies that used the mobile phone to acquire data is based only on the use of a cell phone. However, even smartphones are shown to have great potential in data collection. They are largely regarded as dangerous because of its potential to cause distracted driving and crashes. In-vehicle sensors Devices which can transform physical quantities such as pressure or acceleration into output signals (usually electrical). And always embedded on the vehicle. 
Self-report questionnaires A research instrument consists of a series of questions to gather information and data about the driver. Other sensors sensors that are not integrated into the car, including sensors of IoT like Arduino and Raspberry Pi, … smartphone high-resolution and high-speed (CMOS) image sensor, global positioning system (GPS) sensor, accelerometer, gyroscope, ambient light sensor, and microphone, …. Traffic surveillances Observation from a distance, using some techniques such as closed-circuit television (CCTV), or interception of electronically transmitted information such as internet traffic. Dataset International driving-dataset projects. Table 2. Description of measurement techniques used to collect data The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLIV-4/W3-2020, 2020 5th International Conference on Smart City Applications, 7-8 October 2020, Virtual Safranbolu, Turkey (online) Traffic surveillances: Road infrastructure development has received widespread attention of many countries in recent years, as well as trying to equip the road with the latest technology. As a result, traffic surveillances instruments have brought new opportunities for researches in terms of gathering data of driving to investigate different DB's facets. Among researchers who have already taken advantage of this source of data (Zhou et al., 2011) built a framework to define driver behavior patterns by extracting vehicle information from traffic video sequences. Moreover, urban traffic surveillance data at both intersections and road segments were used by (Hongxin et al., 2016) to investigate the driver's involvement in the accident. In addition to a few other researchers addressed the topic of DB using traffic surveillance data, it remains limited use of this source of data. Datasets: the present time, data is becoming the key for a majority of challenges. Indeed, one-of them is the driving behavior. For many years several researches gathering data related to driving and generating datasets to better understanding the behavior of the driver. (Figure 8) shows some examples of several projects around the world that have collected on-road driving-data according to (Miyajima and Takeda, 2016). As illustrated previously, the percentage of studies selected for this work that have used pre-collected data sets is 8%. The database of 100-Car Study conducted by the Virginia Tech Transportation Institute was used to modelling of driver Car-Following behavior (Sangster et al., 2013). While (Hamzeie, 2016) investigates how speed limits affect driver speed selection using both data collected on real-time with a Roadway Information Database. (Hallmark et al., 2015) and (Lv et al., 2019) among others several researchers take advantage of the rich Second Strategic Highway Research Program 2 Naturalistic Driving Study (SHRP 2 NDS) datasets to investigate and study assertive approaches of the driving. Furthermore, (Li et al., 2019) used the data set of the electric vehicles to identify the driving patterns. Driver Vehicle Environment's Data Driving is a driver-vehicle-road environment system and all the three elements affect each other and the whole system. One driver behavior error or vehicle fault or road environment anomaly may lead to another and a chain of reactions within the whole driving system (Mao et al., 2019). 
Thus, research has been interested in these three dimensions of driving behavior for a long time. In order to clearly distinguish the relationship between the different dimensions of DB and the DVE model, and to better extract the driving dimension addressed in each article, we relied principally on the theoretical framework proposed by (Elamrani Abou Elassad et al., 2020). According to the statistics of the studies selected for this paper (Figure 7), it is clear that researchers are more interested in vehicle-related data collection than in driver-related data or the surrounding environment, at rates of 52%, 29% and 19%, respectively. Vehicle-related data, including kinematics such as speed and acceleration/deceleration, are the most common measurements in the scientific literature because of their direct impact on driving behavior and because they are easy to obtain. The driver's profile and state, namely the physiological and psychological conditions, provide in turn relevant and essential information for understanding driving behavior. In fact, due to the direct contribution of the human being to the driving process, his/her own data have a very strong influence on predicting and detecting driving events. However, collecting such data usually needs more equipment and sensors. Finally, surrounding-environment data, including road geometry, road condition, road type, traffic and weather conditions, often require remote access to the data. CONCLUSIONS The present survey, in particular the last two sections on data collection and types of data collected, provides a few good insights indicating the need for high-resolution driver data. On this basis, we obtained the following results: ▪ Most researchers are interested in the use of in-vehicle sensors to acquire information related to driving behavior. Also, vehicle data have been the primary focus of data collection. ▪ The percentage of the studies that cover the driver's dimension remains very limited, even though the driver plays both the role of the controller and the major evaluator of vehicle quality and path-following. This study cannot claim to be complete, although we believe that it will be a valuable resource for anyone interested in research on driving behavior in general and data acquisition in particular.
5,733
2020-11-23T00:00:00.000
[ "Computer Science" ]
Constraints on the pMSSM from searches for squarks and gluinos by ATLAS We study the impact of the jets and missing transverse momentum SUSY analyses of the ATLAS experiment on the phenomenological MSSM (pMSSM). We investigate sets of SUSY models with a flat and logarithmic prior in the SUSY mass scale and a mass range up to 1 and 3 TeV, respectively. These models were found previously in the study 'Supersymmetry without Prejudice'. Removing models with long-lived SUSY particles, we show that 99% of 20000 randomly generated pMSSM model points with a flat prior and 87% for a logarithmic prior are excluded by the ATLAS results. For models with squarks and gluinos below 600 GeV all models of the pMSSM grid are excluded. We identify SUSY spectra where the current ATLAS search strategy is less sensitive and propose extensions to the inclusive jets search channel. Introduction Supersymmetry [1] is one of the conceivable extensions of the Standard Model (SM). It could provide a natural candidate for cold dark matter and stabilise the electroweak scale by reducing the fine tuning of higher order corrections to the Higgs mass. Supersymmetry (SUSY) proposes superpartners for the existing particles. Squarks and gluinos, superpartners of the quarks and the gluon are heavy coloured particles, which can decay to jets and the Lightest Supersymmetric Particle (LSP), i.e. the neutralino. The neutralino is only weakly interacting and stable since we assume the conservation of R-parity. The LSP escapes detection which results in missing transverse momentum in the detector. Channels with jets and missing transverse momentum have a large discovery potential at the LHC [2], since the coupling strength of the strong force would cause an abundance of squarks and gluinos if these particles are not too heavy. The ATLAS collaboration has analysed their data to search for squarks and gluinos in events with 2-4 jets and missing transverse momentum corresponding to an integrated luminosity L int of 35 pb −1 in Ref. [3] and 1.04 fb −1 in Ref. [4]. No excess above the SM background expectation was observed in the analysed data. Although these searches are designed to be quite independent of SUSY model assumptions, mass limits are presented only for a constrained Minimal Supersymmetry Standard Model (cMSSM) model and for simplified models with only squarks, gluinos and the lightest neutralino. We will study the exclusion range of the ATLAS search for phenomenological MSSM (pMSSM) [5] scenarios, which have a more diverse spectrum of characteristics than the cMSSM. We identify some of the regions in the pMSSM parameter space where the current search strategy is insensitive. In the pMSSM the more than 120 free parameters of the MSSM are reduced to 19 by demanding CP-conservation, minimal flavor violation and degenerate mass spectra for the 1st and 2nd generations of sfermions. In addition it is required that the LSP is the neutralinõ χ 0 1 . The 19 remaining parameters are 10 sfermion masses 1 , 3 gaugino masses M 1,2,3 , the ratio of the Higgs vacuum expectation values tan β, the Higgsino mixing parameter µ, the pseudoscalar Higgs boson mass m A and 3 A-terms A b,t,τ . This work is based on "Supersymmetry Without Prejudice" [6]. The model points presented in [6] are used for our purpose. Each model point was constructed by a quasi-random sampling of the pMSSM parameters space. The points were required to be consistent with the experimental constraints prior to the LHC [6]. 
Event generation, Fast Simulation and Analysis We study the reach of the ATLAS search by emulating the ATLAS analysis chain. First we generate events from LHC collisions for each pMSSM SUSY model with a Monte Carlo generator for SUSY processes. These events are then simulated by a fast detector simulation and the acceptance and efficiency is determined by applying the most important ATLAS analysis cuts on the simulated events. Finally these numbers are used to calculate the expected number of signal events for each signal region and analysis. These numbers are compared to the model-independent 95% C.L. limits provided by ATLAS. PYTHIA 6.4 [7] is used for the event simulation of proton-proton collisions at a 7 TeV centre-of-mass energy. All squark and gluino production processes are enabled as they are of most importance for the inclusive jets search channel. For every model point 10000 events are generated which we found to be enough even for the models with the largest cross sections. To get as close as possible to the ATLAS analysis we use DELPHES 1.9 [8] as a fast detector simulation with the default ATLAS detector card, modified by setting the jet cone radius to 0.4. The PYTHIA output is read in by DELPHES in HepMC format, which is produced by HepMC 2.04.02 [9]. The object reconstruction is done by DELPHES, which uses the same anti-k T jet algorithm [10] as ATLAS. Also included in the reconstruction are isolation criteria for electrons and muons. We do not emulate pile-up events. Reconstructed events are analysed with the same event selections as used by the ATLAS analysis with 35 pb −1 (shown in Table 1) and also with the event selections used in the 1.04 fb −1 analysis (see Table 2). In these Tables ∆φ(jet i , E miss T ) min is the minimum of the azimuthal angles between the jets and the 2-vector of the missing transverse momentum E miss T . The invariant mass m eff is calculated as the scalar sum of E miss T and the magnitudes of the p T of the leading jets required in the selection (i.e. 2 jets for the 2-jet selection in region A), except for signal region E, where m eff is the sum of E miss T and all reconstructed jets with p T > 40 GeV. In addition to these cuts a veto on electrons and muons with p T > 20 GeV was required. After this selection the event counts are scaled to the luminosities considered in the analyses, i.e. 35 pb −1 and 1.04 fb −1 , respectively. The NLO cross section used for this is calculated by LHC-Faser light [11,12] from PROSPINO2.1 [13,14] cross section grids. The limits on the effective cross sections given by the ATLAS analyses are used to calculate a limit on the number of signal events passing the cuts, also given in Table 1 and 2. No attempt was made to include theoretical uncertainties. In the studied SUSY mass range these uncertainties are small compared to the differences of the ATLAS and DELPHES setups and would not change drastically any conclusion of this work. In order to compare our setup to ATLAS we determined the relative efficiency difference for each SUSY point studied by ATLAS in the m 0 -m 1/2 plane for the cMSSM grid with tan β = 10, A 0 = 0 and µ > 0. Here A * E is the acceptance times efficiency of the ATLAS and DELPHES analysis setups. Table 3. Accepted signal fraction (E * A) for the ATLAS and DELPHES setup and shown for the analysis with L int = 1.04 fb −1 . Table 3. 
The efficiency of our setup is found to be in agreement with the ATLAS efficiency[15] on the level of 10 − 30% for the 2-and 3-jet signal regions A − B and SUSY masses around the present ATLAS limits. These limits are ranging for m 1/2 from 200 − 500 GeV and go up to intermediate m 0 of 1000 GeV. At m 1/2 < 200 GeV larger deviations occur. Here both the statistical uncertainty of the ATLAS and DELPHES efficiencies are larger and the selection efficiencies are tiny. The largest deviations occur if in addition m 0 is large. The signal regions are not intended for SUSY signals at m 1/2 < 200 GeV and large m 0 and do therefore not contribute to the search for such SUSY signals. Note that the ATLAS analysis selects always the signal region with the largest exclusion potential for each SUSY model. For signal region C − E and for the 4 and more jet channels we observe better agreement at low m 1/2 and slightly worse agreement at m 0 > 1000 GeV and m 1/2 > 400 GeV. Here our DELPHES setup underestimates the efficiency by up to 50−70%. The increased differences at larger jet multiplicities might be caused by the cummulative effects of the the ATLAS and DELPHES jet response. In view of the mostly smaller efficiencies of DELPHES compared to ATLAS, our study can be regarded as conservative. pMSSM random points The pMSSM points are taken from "Supersymmetry Without Prejudice" [6] (related work [16,17]). All details can be found in these references. 19 free parameters were randomly sampled, one set with a flat prior with masses up to 1 TeV and another one with a logarithmic prior and masses up to 3 TeV, each parameter was varied in the range given in Table 4. The parameters are: 10 sfermion masses mf ; 3 gaugino masses M 1,2,3 ; the ratio of the Higgs vacuum expectation values tan β; the Higgsino mixing parameter µ; the pseudoscalar Higgs boson mass m A and 3 A-terms A b,t,τ , the A-terms for the first and second generations can be neglected due to the small Yukawa couplings. It was assumed that the neutralino is the LSP and that the first two squark generations are degenerate. Several experimental and theoretical constraints [18] are applied on the generated points, i.e. the current dark matter density and constraints from LEP and Tevatron data. Additionally we have required that the mass splitting between the chargino and the lightest neutralino is ∆m > 0.05 GeV with ∆m = m Chargino − mχ 0 , to avoid mishandling by PYTHIA. Small mass splittings make charginos stable and PYTHIA yields error messages in the hadronization routines and drops these events. The problem is avoided by a decay of the chargino before the hadronization routine, i.e. by a sufficiently large mass splitting between the chargino and the neutralino. About 1% of the remaining model points could not be generated with PYTHIA due to other compressed mass spectra, i.e. due to very small mass differences between SUSY particles. Here mostly the mass difference of the sbottom or stop to the neutralino was small. These compressed mass spectra lead to long lived squarks which can not be handled by PYTHIA nor by the detector simulation and causes PYTHIA to stop. These model points are dropped. The following studies are therefore not valid if the SUSY model leads to long-lived particles in the spectrum besides the lightest neutralino. Models from a linear prior in the SUSY mass scale For each SUSY model signal events were generated. 
Each event was analysed after a detector simulation with DELPHES and the number of signal events was determined for each SUSY model and each of the 8 studied signal regions. In the following we call "excluded models" SUSY models which produced a larger number of signal events than excluded by the ATLAS model-independent limits in at least one of the signal regions studied. The model-independent limits are listed in Table 1 and Table 2. Only the SUSY models which yield less signal events in all regions are not excluded by these ATLAS searches. These models are called "not excluded models". gluino M gluino . The SUSY points which are excluded by ATLAS are shown as green points, models not excluded as black triangles. We show that 99% of the points are excluded with the current ATLAS analyses in the jets and missing transverse momentum channels. All studied points with a mass of the squarks and the mass of the gluino < 600 GeV are excluded. This means that there is not much room anymore in the pMSSM for having both light squarks of the first generations and at the same time a light gluino. Remarkably, also points with small mass splittings between the squarks or gluino and the neutralino are excluded in this mass range. The reason is quite simple. It is very unlikely that a "random" sampling yields cases where the mass splittings of all squarks and the gluino to the neutralino are small. If one of the squarks or only the gluino is a bit heavier than the neutralino such processes yield detectable rates in the ATLAS signal regions. Note that in these models the left and right handed squarks can have quite different masses. In Table 5 a subset of the not excluded model points are presented together with some of their properties. A complete list of all not excluded model points can be found in Appendix A. We found some features why model points are not excluded. We determined average values for some properties for each SUSY model point, neglecting the fact that these values are coming from different SUSY decay chains. The investigated properties of the non-excluded SUSY models are shown in Table 5. The following features have been found to be significant: Low cross section A large fraction of model points at high squark and gluino masses cannot be excluded because the cross section is simply too low to be observed for the integrated luminosity . Figure 3 shows the fraction of not-excluded points as a function of the total SUSY squark and gluino cross section. Below 0.1 pb less than 50% of the SUSY models can be excluded with the analysis setup. These are mainly points with a large average effective mass value. At large cross sections of greater 5 pb all studied pMSSM models can be excluded by the ATLAS analyses. Lepton and multi-jet events (long decay chains) Around 25% of the not excluded model points have a large average number of leptons. In addition we find that these SUSY models do often have a large average number of jets. It is trivial to note that, because of the lepton veto, there is not much sensitivity to these models with the inclusive jets analysis. These points can most likely be excluded with the single or multi-lepton analyses. These searches do have signal regions investigating events with up to 4 jets [19,20]. Some SUSY models with long decay chains would yield lepton(s) together with multiple jets. 
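For concreteness, the exclusion criterion applied throughout this section can be sketched as follows (a simplified illustration in which the field names and helper functions are ours; the actual model-independent limits are those quoted in Tables 1 and 2):

def expected_signal(xsec_pb, lumi_invpb, acc_eff):
    # Expected signal events in one signal region: sigma x L_int x (A x epsilon).
    return xsec_pb * lumi_invpb * acc_eff

def is_excluded(model_regions, limits):
    # A model counts as excluded if it predicts more events than the
    # model-independent 95% C.L. limit in at least one signal region.
    return any(
        expected_signal(r["xsec_pb"], r["lumi_invpb"], r["acc_eff"]) > limits[name]
        for name, r in model_regions.items()
    )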
Compressed spectra together with high squark and gluino masses Figure 4 shows the excluded and non-excluded SUSY models from the grid with the flat prior as a function of M_SUSY and the mass difference between M_SUSY and the mass of the lightest neutralino. In this note, the SUSY mass scale M_SUSY is defined as the minimal mass of all first and second generation squarks and the gluino. The figure shows the interesting feature that the non-excluded points are mostly located at small mass differences (relative to M_SUSY) and high M_SUSY. Small mass differences between the coloured particles and the neutralino yield events with small transverse momentum jets. Figure 5 shows the average effective mass (calculated with the leading 3 jets) as a function of M_SUSY. More than half of the not-excluded SUSY models at high M_SUSY have an effective mass that is significantly below the value found for the excluded SUSY models. We conclude that the cut on the effective mass is too harsh for these models. For those compressed models the effective mass is differently correlated with the SUSY mass scale. A lower cut on the effective mass, however, would cause a significant increase in the number of background events. We therefore studied additional features of these non-excluded models. A comprehensive study yields as the most significant feature a large average value of the missing transverse momentum. Figure 6 shows the ratio f of the missing transverse momentum over the effective mass as a function of the effective mass. The not-excluded models at m_eff < 600 GeV have average f-values above 0.3–0.35. It is interesting to note that for higher m_eff values smaller cuts on f seem to be appropriate; increasing the cut on f in the high-m_eff regions does not seem to yield improved performance. In addition, these points show a characteristic average jet multiplicity, as can be seen in Figure 7, also at m_eff < 600 GeV: the non-excluded points have jet multiplicities between 2 and 7. In conclusion, we propose that ATLAS add to future analyses signal regions with f > 0.3–0.35 and a reduced effective-mass cut of m_eff > 500 GeV, for both high and low jet multiplicities (see the code sketch below). A similar conclusion has been reached for lower jet multiplicities in an independent study dedicated to compressed spectra [21]. Some of the non-excluded points found in our study could be used as benchmark sets to further optimise the cut values with a detailed ATLAS simulation including background events. Models from a logarithmic prior in the SUSY mass scale Figure 8 shows the result of our analysis of 1000 points made with the logarithmic prior up to 3 TeV in the SUSY mass scale. Excluded points are shown as green points, not-excluded ones as black triangles. Due to the possible larger masses of the squarks and gluinos, more points survive at higher masses. In total we find that 87% of the model points are excluded. Again around one quarter of the not-excluded model points have an average lepton number exceeding one. These points cannot be excluded because of the lepton veto. A new feature is found in the logarithmic grid: some SUSY models with gluino masses above 1000 GeV and squark masses between 300–600 GeV are not excluded. Figure 9 shows the total cross section for squark and gluino production processes as a function of the SUSY mass scale M_SUSY. All not-excluded SUSY models with M_SUSY < 600 GeV are close to the minimal SUSY cross section at a given value of M_SUSY.
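As a rough illustration of the selection proposed above (a cut on f = missing transverse momentum over m_eff together with a lowered m_eff threshold), the sketch below evaluates that selection for a single event. The event content and the simple jet treatment are assumptions of this sketch; only the threshold values f > 0.3 and m_eff > 500 GeV follow the text.

```python
def passes_proposed_region(met, jet_pts, f_min=0.3, meff_min=500.0):
    """Effective mass built from the leading 3 jets plus MET, with a lowered
    m_eff threshold and a cut on f = MET / m_eff, as proposed in the text."""
    leading = sorted(jet_pts, reverse=True)[:3]
    m_eff = sum(leading) + met
    f = met / m_eff if m_eff > 0 else 0.0
    return m_eff > meff_min and f > f_min

# Example compressed-spectrum-like event: relatively soft jets but sizeable MET.
print(passes_proposed_region(met=320.0, jet_pts=[150.0, 90.0, 60.0, 40.0]))  # True
```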
The cross section is minimal since here only the d̃_R and s̃_R, or the ũ_R and c̃_R, are light. All other squarks and the gluino have much larger mass values. Figure 9. The total NLO squark and gluino production cross section as a function of the minimal mass of the first and second generation squarks and the gluino, M_SUSY, for excluded model points (green dots) and not-excluded models (black triangles). For some high-mass model points the cross section is significantly enhanced by sbottom and stop production processes. These SUSY scenarios might also be missed in future searches if the cuts on mass-scale-related variables (such as m_eff) are raised further. Limits on squark masses derived at a minimal SUSY cross section might be helpful. In contrast to the flat-prior model points, compressed mass spectra do not seem to be an important issue for the log-prior grid, as far as we could infer from only 1000 model points. Summary We show that the "Search for squarks and gluinos using final states with jets and missing transverse momentum" of the ATLAS experiment excludes up to 99% of the model points of the randomly generated pMSSM grid of "Supersymmetry without Prejudice" assuming a flat prior for a SUSY mass scale below 1 TeV. For the model points assuming a logarithmic prior, up to 87% are excluded. Besides the models with a high average number of leptons, the most frequent reasons for model points not being excluded are a cross section that is too low to be within the discovery potential and a mass splitting between the lightest coloured sparticle and the neutralino that is too small, resulting in a low effective mass m_eff. We propose to add selections with an increased missing transverse momentum cut and a decreased m_eff cut, both with low and high jet multiplicities. In addition, we find that the search is quite insensitive if only one type of right-handed squark is light, i.e. if the SUSY cross section is smaller than usually assumed. These scenarios might also profit from low-mass signal regions with minimal statistical and systematic uncertainties.
4,677.4
2012-02-28T00:00:00.000
[ "Physics" ]
Tumor Suppressor Properties of Small C-Terminal Domain Phosphatases in Clear Cell Renal Cell Carcinoma Clear cell renal cell carcinoma (ccRCC) accounts for 80–90% of kidney cancers worldwide. Small C-terminal domain phosphatases CTDSP1, CTDSP2, and CTDSPL (also known as SCP1, 2, 3) are involved in the regulation of several important pathways associated with carcinogenesis. In various cancer types, these phosphatases may demonstrate either antitumor or oncogenic activity. Tumor-suppressive activity of these phosphatases in kidney cancer has been shown previously, but in the general case the antitumor activity may depend on the choice of cell line. In the present work, transfection of the Caki-1 cell line (ccRCC morphologic phenotype) with expression constructs containing the coding regions of these genes resulted in inhibition of cell growth in vitro in the case of CTDSP1 (p < 0.001) and CTDSPL (p < 0.05) but not CTDSP2. The analysis of The Cancer Genome Atlas (TCGA) data showed differential expression of some of the CTDSP genes and of their target, RB1. These results were confirmed by quantitative RT-PCR using an independent sample of primary ccRCC tumors (n = 52). We observed CTDSPL downregulation and found a positive correlation of expression for two gene pairs: CTDSP1 and CTDSP2 (r_s = 0.76; p < 0.001) and CTDSPL and RB1 (r_s = 0.38; p < 0.05). Survival analysis based on TCGA data demonstrated a strong association of lower expression of CTDSP1, CTDSP2, CTDSPL, and RB1 with poor survival of ccRCC patients (p < 0.001). In addition, according to TCGA, CTDSP1, CTDSP2, and RB1 were differently expressed in two subtypes of ccRCC, ccA and ccB, characterized by different survival rates. These results confirm that CTDSP1 and CTDSPL have tumor suppressor properties in ccRCC and reflect their association with the more aggressive ccRCC phenotype. Introduction Clear cell renal cell carcinoma (ccRCC) represents the most aggressive histological form of kidney cancer and accounts for 80–90% of cases of renal carcinomas [1]. ccRCC is often diagnosed at late stages, and the survival rate when metastases are present is extremely low. The outcomes of ccRCC are the worst among all urogenital tumors. ccRCC represents about 3% of all cancers, occurring with the highest incidence in Western countries. ccRCC is the 13th leading cause of cancer deaths worldwide [2]. In general, ccRCC is well circumscribed and the capsule is usually absent. Loss of chromosome 3p and mutation of the von Hippel-Lindau (VHL) gene located at chromosome 3p25 are common. Loss of function of the von Hippel-Lindau protein contributes to tumor initiation, progression, and metastasis. Additional ccRCC tumor suppressor genes (UTX, JARID1C, SETD2, PBRM1, and BAP1) are located at the 3p locus [3]. Earlier, Yu-Ching Lin et al. showed the tumor-suppressive properties of SCP phosphatases in ccRCC, which are mediated by the stabilization of the PML protein [16]. Although the present work partially overlaps with the study of Lin et al., we used a different cell line model and obtained different results: tumor-suppressing activity was found only for CTDSP1 and CTDSPL. The cancer significance of genes can be quite different depending on the cell line used for evaluation. Different cell lines, even belonging to the same tumor type, may carry a different spectrum of driver mutations/deletions and thus involve different pathways of carcinogenesis.
Earlier it was shown that CTDSP1 inhibits breast cancer cell migration and invasion [13] and suppresses tumor properties of liver carcinoma [5], osteosarcoma [6], and uveal melanoma [17]. CTDSP1 also negatively regulates angiogenesis [14] and increases irinotecan sensitivity of colorectal cancer through the stabilization of topoisomerase I [18]. The CTDSPL gene is located in 3p22, a region prone to frequent mutations, deletions, and promoter methylations in many types of tumors, including lung, cervical, kidney, breast, and ovarian cancers [19,20]. Recently, we demonstrated that CTDSP1, 2, and L inhibit the growth of A549 cells, a non-small-cell lung cancer cell line [21]. The purpose of this paper was to extend these findings with regard to ccRCC. However, the results obtained on ccRCC were not as uniform as for lung cancer. This indicates a high heterogeneity of ccRCC. The pro-oncogenic properties of CTDSP1, 2, and L phosphatases are also diverse. It was shown that CTDSP1 stabilizes the SNAI1 transcription factor, a key regulator of epithelial-mesenchymal transition, and thus promotes gastric cancer cell migration [15]. Moreover, it was demonstrated that CTDSP1 contributes to the increased migration of neuroglioma cells [22]. The activation of CTDSPL may be an early event in avian leukosis virus-induced carcinogenesis of B-cell lymphomas [23]. Although over the past decades some success has been achieved in the treatment of kidney cancer, ccRCC is still characterized by high mortality, which underlines the importance of the search for new mechanisms of carcinogenesis and the identification of tumor suppressors that are relevant for ccRCC. Analysis of TCGA Omics Data for SCP Subfamily and RB1 in ccRCC Using our previously developed CrossHub tool, we analyzed RNA sequencing and methylation profiling data for the members of the SCP family in ccRCC. Among the three members of the SCP family and their primary target, RB1, only CTDSPL demonstrated significant downregulation in ccRCC in most samples (three-fold on average; see Supplementary Table S1). RB1 and CTDSP1 were characterized by weak overexpression (1.3-fold). CTDSPL and CTDSP2 were significantly co-expressed (r_s = 0.46; Supplementary Table S2). To identify the possible mechanisms of CTDSPL downregulation, we examined whether promoter CpG islands were hypermethylated in ccRCC. However, there were no significant changes in promoter region methylation levels for any of the four studied genes. We only observed hypermethylation of some intronic CpG sites that were distant from the promoter regions (Supplementary Table S3). The most pronounced intronic hypermethylation was observed for CTDSPL, where it was negatively correlated with gene expression (r_s values range from −0.4 to −0.64, p < 10^−14); we also observed hypermethylation of intronic sites of CTDSP2 (r_s values range from −0.24 to −0.32, p < 10^−6) and RB1 (no correlations with expression). According to ENCODE data on chromatin segmentation (ChromHMM/Segway algorithms), these sites may be located in enhancer regions. Finally, the analysis of mutations in the CTDSP1/2/L and RB1 genes showed that somatic mutations in these genes are not a frequent event in ccRCC.
Next, we searched for microRNAs (miRNAs) that were strongly anti-co-expressed with any of the four studied genes and had binding sites in their 5′ UTRs, as such miRNAs may have regulatory potential. The most interesting results were those obtained for the CTDSPL gene, which demonstrated both anti-correlation with mir-18a, mir-181a/b/d, and mir-34a, and the presence of binding sites for these miRNAs predicted simultaneously by several algorithms or databases (TargetScan, DIANA microT, and miRTarBase; Supplementary Table S4). We also noticed mir-183 and mir-182 for CTDSP1; mir-15a for CTDSP2; and mir-106a, mir-26a, and some others for RB1. In ccRCC, we did not observe any common regulatory miRNAs with pronounced anti-co-expression for all three phosphatases, as was the case for lung cancer [21]. Expression Analysis of SCP Subfamily Genes and RB1 in ccRCC Using RT-qPCR The results of the quantitative expression analysis for CTDSP1/2/L and RB1 in 52 primary paired samples (ccRCC tumor tissue and matched normal tissue) are shown in Figure 1 and Table 1. The distribution of the relative expression levels of the four genes by ccRCC stage is presented in Figure 1: stage I (n = 22), stage II (n = 13), and stage III (n = 17). The mRNA levels of CTDSP1 and CTDSP2 were slightly and infrequently increased. The difference in CTDSP1 expression levels between stages I and II was statistically significant (p < 0.05). In contrast to CTDSP1 and CTDSP2, CTDSPL expression showed a noticeable decrease in 50% of samples, which was more pronounced at stage III than at stage II (p < 0.05, Mann-Whitney U-test). The mRNA level of the retinoblastoma gene RB1 was noticeably increased in 50% of cases, and the difference in its expression levels between stages II and III was statistically significant (p < 0.001). No significant differences in the expression of the four studied genes were found between male and female patients, or between tumor samples with and without metastases. A correlation analysis was carried out for CTDSP1/2/L and RB1 expression (Supplementary Figure S1). The results showed that the expression levels of CTDSP1 and CTDSP2 were highly correlated with each other (Spearman's rank correlation coefficient r_s = 0.76, p < 0.001) but not with the expression of CTDSPL. However, a significant correlation was found between the expression of CTDSPL and RB1 (r_s = 0.38, p < 0.001). This result did not agree with TCGA data, according to which the CTDSPL and CTDSP2 genes are well co-expressed.
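For reference, the co-expression analysis described above amounts to a Spearman rank correlation between paired relative expression values; a minimal sketch with made-up numbers (not the study's measurements) is given below.

```python
from scipy.stats import spearmanr

# Relative mRNA levels per patient for two genes; values are illustrative only.
ctdsp1 = [1.2, 0.8, 1.5, 2.1, 0.9, 1.1, 1.7, 0.6]
ctdsp2 = [1.0, 0.7, 1.6, 1.9, 1.0, 1.2, 1.5, 0.5]

rs, p_value = spearmanr(ctdsp1, ctdsp2)
print(f"r_s = {rs:.2f}, p = {p_value:.3g}")
```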
CTDSP1 and CTDSPL Exert Tumor Suppressive Activity In Vitro Caki-1 is a metastatic kidney carcinoma cell line, which was established in 1971 from the cutaneous metastasis of the kidney carcinoma of a 49-year-old man. When transplanted, these cells form tumors of clear cell histology in nude mice. This line is a useful preclinical model that is very widely used in cancer research [24]. Despite a long stay in culture, Caki-1 cells are able to form structures that resemble kidney tissue in their morphology, physiology, and biochemistry [25]. Its morphologic phenotype most closely matches the morphology of metastatic ccRCC. The cells were transfected with expression constructs containing the coding regions of the SCP phosphatase genes. We obtained three variants of clones of Caki-1 cells expressing CTDSP1, CTDSP2, and CTDSPL. In cells with a green fluorescent signal, the growth rate (number of cell doublings per day) was measured in the clones and compared with the control cell line transfected with the empty vector pT2/HB. Data analysis showed that the exogenous expression of the CTDSP1 and CTDSPL genes inhibited the growth of Caki-1 cells in vitro (Figure 2A-C). Statistically significant differences in the number of transfected and non-transfected cells were observed at 96 h after transfection for CTDSP1 and CTDSPL.
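The growth-rate comparison reported above (cell doublings per day in transfected clones versus the empty-vector control) can be estimated from cell counts at successive time points with a log2-linear fit. The counts below are invented for illustration only; the seeding density and time points loosely follow the methods section.

```python
import numpy as np

def doublings_per_day(counts, times_h):
    """Estimate cell doublings per day from counts taken at several time
    points (e.g. 0, 48, 72 and 96 h after seeding) via a log2-linear fit."""
    times_d = np.asarray(times_h) / 24.0
    slope, _intercept = np.polyfit(times_d, np.log2(counts), 1)
    return slope

control = doublings_per_day([10_000, 26_000, 65_000, 160_000], [0, 48, 72, 96])
clone = doublings_per_day([10_000, 18_000, 28_000, 40_000], [0, 48, 72, 96])
print(f"control: {control:.2f} vs transfected clone: {clone:.2f} doublings/day")
```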
Survival Analysis and Expression Analysis of SCP Subfamily Genes and RB1 in ccA and ccB Subtypes Using the UALCAN web portal (http://ualcan.path.uab.edu,accessed on 19 August 2023), which allows analyzing gene expression based on TCGA RNA-Seq data [26], we estimated expression changes of CTDSP1, CTDSP2, CTDSPL, and RB1 in two major molecular subtypes of ccRCC, namely ccA (clear cell type A) and ccB (clear cell type B) (Figure 3A-D).The expression of CTDSP1 and RB1 genes is predominantly increased in ccA (no statistically significant changes in ccB), while the expression of CTDSP2 is decreased only in the ccB subtype and is almost intact in ccA.The CTDSPL expression is reduced in both subtypes (p < 0.001), but a more significant decrease is noticed for ccB.In addition, using the GEPIA web server, which provides interactive patient survival analysis based on TCGA data [27], we found that low expression of CTDSP1, CTDSP2, CTDSPL, and RB1 Survival Analysis and Expression Analysis of SCP Subfamily Genes and RB1 in ccA and ccB Subtypes Using the UALCAN web portal (http://ualcan.path.uab.edu,accessed on 19 August 2023), which allows analyzing gene expression based on TCGA RNA-Seq data [26], we estimated expression changes of CTDSP1, CTDSP2, CTDSPL, and RB1 in two major molecular subtypes of ccRCC, namely ccA (clear cell type A) and ccB (clear cell type B) (Figure 3A-D).The expression of CTDSP1 and RB1 genes is predominantly increased in ccA (no statistically significant changes in ccB), while the expression of CTDSP2 is decreased only in the ccB subtype and is almost intact in ccA.The CTDSPL expression is reduced in both subtypes (p < 0.001), but a more significant decrease is noticed for ccB.In addition, using the GEPIA web server, which provides interactive patient survival analysis based on TCGA data [27], we found that low expression of CTDSP1, CTDSP2, CTDSPL, and RB1 was associated with poorer overall survival in ccRCC (p < 0.001, log-rank test) (Figure 3E-H).Moreover, for CTDSPL, the difference in overall survival rates is the most significant.Disease-free survival analysis also showed an association of lower CTDSPL expression with a shorter time to tumor recurrence (p < 0.001, log-rank test).We also found that the increased expression of the predicted regulatory miRNAs for CTDSP1/2/L (mir-183, mir-15a, and mir-18a; see Table S4), is associated with worse prognosis in ccRCC (p < 0.01 for mir-183 and mir-18a; p < 0.05 for mir-15a, log-rank test; Figure S2A-C).At the same time, the expression of mir-15a is more strongly decreased in the ccA subtype as compared to ccB (Figure S2D).with a shorter time to tumor recurrence (p < 0.001, log-rank test).We also found that the increased expression of the predicted regulatory miRNAs for CTDSP1/2/L (mir-183, mir-15a, and mir-18a; see Table S4), is associated with worse prognosis in ccRCC (p < 0.01 for mir-183 and mir-18a; p < 0.05 for mir-15a, log-rank test; Figure S2A-C).At the same time, the expression of mir-15a is more strongly decreased in the ccA subtype as compared to ccB (Figure S2D). 
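The survival comparisons described above follow a standard pattern: patients are split at the median expression of a gene and the overall survival of the two groups is compared with a log-rank test. The sketch below uses the lifelines package and synthetic data; it only illustrates the type of analysis the GEPIA portal performs, not the actual TCGA computation.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
expression = rng.lognormal(size=100)          # synthetic per-patient expression
time = rng.exponential(scale=40, size=100)    # synthetic months of follow-up
event = rng.integers(0, 2, size=100)          # 1 = death observed, 0 = censored

high = expression >= np.median(expression)    # median cut-off, as in GEPIA
result = logrank_test(time[high], time[~high],
                      event_observed_A=event[high], event_observed_B=event[~high])
print(f"log-rank p = {result.p_value:.3f}")

# Kaplan-Meier curve for the "high expression" group (plotting omitted).
km_high = KaplanMeierFitter().fit(time[high], event_observed=event[high], label="high")
```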
Discussion The members of the subfamily of small C-terminal domain phosphatases (CTDSP, or SCP) carry out specific dephosphorylation of serine and threonine residues in their target proteins, which are involved in a variety of biological processes. Most of these processes are often disrupted in cancer [5,13,16,17]. The substrates of SCP phosphatases include RNA polymerase II; the key cell cycle regulator Rb; SMAD transcription modulators (Figure 4); AKT1 protein kinase, which is a regulator of the cell cycle, apoptosis, and angiogenesis; transcription factors TWIST1 and c-MYC; the promyelocytic leukemia protein (PML); and others [28]. Dysfunction or inactivation of SCP phosphatases contributes to the development of various cancers, including renal carcinoma. There has been increasing interest in SCP phosphatases owing to their tumor-suppressive or oncogenic properties as well as their participation in the development of malignant tumors of various etiology and localization. For example, the deregulation of CTDSPL (SCP3) in ccRCC may lead to the deregulation of a number of important pathways in which it is involved, at the mRNA level and at the level of protein interactions (Figure 4). In a previous study [16], Yu-Ching Lin et al. demonstrated that all three phosphatases, CTDSP1, 2, and L, are capable of dephosphorylating the promyelocytic leukemia protein (PML), a well-known tumor suppressor, which results in the subsequent inhibition of the mTOR/HIF pathway. This may be the leading mechanism of the tumor suppressive activity exerted by SCP phosphatases in ccRCC. In the ccRCC cell lines 786-O and A-498, the ectopic expression of CTDSP1 suppressed proliferation, migration, and invasion of tumor cells in vitro and in vivo (no data regarding CTDSP2 and L). In the present study, we used another cell line model, Caki-1, which corresponds to metastatic ccRCC. We revealed that CTDSP1 and CTDSPL, but not CTDSP2, are able to suppress Caki-1 cell proliferation. Yu-Ching Lin et al.
found CTDSP1 and CTDSPL expression to be frequently downregulated in ccRCC. Their downregulation correlated with PML phosphorylation at Ser-518 as well as with PML downregulation; downregulation of these phosphatases was associated with high-grade tumors [16]. In our study, a noticeable decrease of expression was observed only in the case of CTDSPL; the expression levels of CTDSP1 and CTDSP2 were correlated (p < 0.05) but not downregulated in ccRCC. In addition, the decrease in expression of CTDSPL and RB1 was more pronounced in Stage III tumors (r_s = 0.38, p < 0.001 for correlations with stage). The correlation of CTDSPL downregulation with poorer survival in ccRCC patients also suggests that this gene is associated with tumor malignancy.
Among the three SCP phosphatases, it is most probable that CTDSPL plays a leading role in the pathogenesis of ccRCC. CTDSPL inactivation in kidney tumors may be mediated by mechanisms different from those of CTDSP1 and CTDSP2. In the present work, cell growth-inhibiting activity was observed for CTDSP1 (p ≤ 0.001) and CTDSPL (p ≤ 0.05). These results are fundamentally different from our recent data obtained for non-small-cell lung cancer (NSCLC), in which the CTDSP1/2/L genes were often inactivated and all of them demonstrated tumor suppressive activity in vitro, leading to a significant slowdown of growth and senescence of the A549 lung adenocarcinoma cell line [21]. A very frequent (84%) and highly concordant (r_s = 0.53–0.62, p ≤ 0.01) downregulation of CTDSP1/2/L and RB1 was characteristic of primary NSCLC samples. It is possible that CTDSPL inactivation at the mRNA level is mediated by mechanisms different from those involving CTDSP1 and CTDSP2 in ccRCC. Thus, intronic enhancer hypermethylation and miRNA interference may play a role. Inactivation of CTDSPL may also be due to deletions of the chromosome 3p locus, as indicated by previous studies (CTDSPL is located on chromosome 3p, which undergoes frequent deletions and hypermethylation in ccRCC) [29]. However, according to TCGA data, somatic deletions only in the CTDSP1 gene are a frequent (up to 20% of samples of some cancers) driver event (GISTIC algorithm) in a variety of cancers, including kidney, lung, bladder, ovarian, breast, mesothelioma, and other tumors. Somatic deletions in CTDSP1 are considered a likely driver event only in ccRCC and mesothelioma, and there is no evidence for deletions in CTDSP2 to be a driver event (Broad Institute, TCGA Copy Number Portal, https://portals.broadinstitute.org/tcga/gistic/browseGisticByGene, accessed on 19 August 2023). ccRCC is a very heterogeneous type of cancer, and the presence of at least two major molecular subtypes, ccA and ccB, has been previously shown [30,31]. According to the results derived from the UALCAN web resource, ccB is characterized by a lower expression of CTDSP1, CTDSP2, and RB1 as compared to ccA (Figure 3A-D). This is consistent with the data that patients with the ccA subtype have significantly better survival rates than those with ccB [30]. Moreover, the association of decreased expression of all four genes, CTDSP1/2/L and RB1, with worse survival in ccRCC was observed according to TCGA data (GEPIA; Figure 3E-H). Despite this, a noticeable decrease in expression was found in primary tumor samples only for the CTDSPL gene in our sampling (Figure 1). According to these data, the expression of CTDSP1 and RB1 is predominantly increased (Figure 1), which indicates that the ccA subtype prevails in our sampling. Generally, it is known that ccB is characterized by increased expression of genes associated with the cell cycle and the epithelial-mesenchymal transition (EMT), while ccA is characterized by increased expression of genes associated with hypoxia and angiogenesis [32,33]. Therefore, given the ability of SCP phosphatases to block EMT and their involvement in the regulation of Rb and PML activity, the decreased expression of CTDSP1 and CTDSP2 in the ccB subtype in comparison to the ccA subtype looks consistent.
MicroRNAs can regulate gene expression and thus represent an additional layer of regulation on top of mRNA expression. In the case of RCC, it is known that microRNAs can determine tumor progression and serve as a diagnostic and prognostic tool [34]. It has been shown that microRNAs may possess oncogenic or oncosuppressor activities [35]. The increased expression of CTDSP1 in the ccA subtype may be due to the decreased expression of mir-183 in ccA as compared to ccB (Figure S2D). Further study of the heterogeneity of ccRCC and the genetic characteristics of its subtypes is necessary for a more accurate diagnosis and the development of more effective targeted therapy tailored to the particular ccRCC subtypes. It was found that the expression of RB1 is also associated with poor survival in ccRCC patients (GEPIA, Figure 3H). Interestingly, the expression of CCND1 is increased in ccRCC; this gene encodes cyclin D, which in complex with the cyclin-dependent kinases Cdk4 and Cdk6 phosphorylates Rb during the cell cycle [36] (Supplementary Figure S3). As in the case of Rb, a decrease rather than an increase in the expression of CCND1 is associated with poor survival (Figure 3H). The increase in expression of RB1 at the early stages of ccRCC appears to be associated with its anti-apoptotic properties, regardless of its ability to block cell proliferation [37]. Tissue Specimens, Clinical and Pathological Characteristics The present study included ccRCC tumors and adjacent histologically normal tissue specimens taken from 52 patients after surgical resection. Prior to the operation, the patients did not receive radiation or chemotherapy. Tumor specimens were characterized according to the international TNM tumor classification system [3] in the Blokhin National Medical Research Center of Oncology of the Ministry of Health of the Russian Federation. The clinical diagnosis was also confirmed by pathological examination of specimens at the Department of Tumor Pathologic Anatomy, Research Institute for Clinical Oncology (Moscow, Russia). Voluntary written consent was obtained from each patient to participate in the study. The use of biological specimens in the present study was within the scope of the Declaration of Helsinki and was also approved by the Ethical Committee of the Blokhin National Medical Research Center of Oncology. The general clinical characterization of the specimens is presented in Supplementary Table S5. Cell Transfection and Plasmids For transfection of Caki-1 cells, the genetic constructs pT2/HB-CTDSP1-2A-EGFP, pT2/HB-CTDSP2-2A-EGFP, and pT2/HB-CTDSPL-2A-EGFP were obtained according to the previously developed method [21]. The protein-coding sequences of the CTDSP1, 2, and L genes and the enhanced green fluorescent protein (EGFP) gene were merged through the T2A linker and then cloned into the pT2/HB vector. This design allowed us to detect the expression of the transfected constructs containing the CTDSP1, 2, and L genes by measuring the level of EGFP fluorescence. The vectors were introduced into Caki-1 cells by electroporation using an X-Cell Electroporation System (Bio-Rad, Hercules, CA, USA) and the Sleeping Beauty transposon system. The Caki-1 cell line transfected with the empty vector pT2/HB was used as a control. The pCMV (CAT) T7-SB100 vector was kindly provided by Dr. Zsuzsanna Izsvak (Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Addgene plasmid # 34879) and pT2/HB was kindly provided by Dr.
Perry Hackett (College of Biological Sciences, University of Minnesota, Minneapolis, MN, USA; Addgene plasmid # 26557). The efficiency of transfection for CTDSP1, CTDSP2, and CTDSPL was 10.3%, 22.7%, and 12.5%, respectively. The transfection efficiency of the control plasmid pTagRFP (Evrogen, Moscow, Russia; cat # FP141), encoding the TagRFP fluorescent protein, was 19.5%. Cell sorting was performed 48 h after the transfection. The sorting was conducted using an S3 cell sorter (Bio-Rad, Hercules, CA, USA). The transfected cells were sorted for the presence of a green fluorescent signal indicating the expression of the phosphatase gene inserts. After sorting, cells were plated into Costar 24-well plates (Corning, NY, USA) at 10,000 cells per well. The growth rate (number of cell doublings per day) of cells with the green fluorescent signal was determined in clones and compared to the control cell line. Cells were counted at 48, 72, and 96 h after seeding using a Diaphot inverted phase contrast microscope (Nikon, Tokyo, Japan) (4× lens) and a Nikon D5000 camera. Bioinformatics Analysis The differential expression of CTDSP1/2/L and RB1 and the survival analysis of TCGA data were performed using the GEPIA (Gene Expression Profiling Interactive Analysis) web server [27]. Survival curves were created using the Kaplan-Meier method with a cut-off along the median. Differences in survival curves were compared using a log-rank test, with p-values less than 0.05 considered statistically significant. To evaluate the differential expression between the two major ccRCC subtypes (ccA and ccB), we used the UALCAN web resource, which is also based on TCGA data [26]. Also, to estimate the differential expression of the CTDSP1/2/L and RB1 genes in ccRCC (TCGA data) and reveal possible mechanisms of their expression regulation, we used our previously developed CrossHub tool (https://sourceforge.net/projects/crosshub/, accessed on 19 August 2023) [38]. The TCGA RNA-Seq dataset (KIRC) included 533 tumors and 72 matched normal tissues. Additionally, we examined the methylation profiling (320 tumors and 160 normal samples) and miRNA (254 and 71 samples, accordingly) ccRCC datasets in order to reveal possible mechanisms of SCP downregulation. Quantitative Gene Expression Analysis with RT-PCR Total RNA was isolated from tumor and normal kidney tissue samples according to the manufacturer's protocol with the commercial miRNeasy Mini Kit (Qiagen, Germantown, MD, USA). The quantity and quality of RNA were evaluated using the NanoDrop ND-1000 UV-Vis Spectrophotometer (Thermo Fisher, Waltham, MA, USA). For the reverse transcription reaction, we used the Reverse Transcription System kit (Promega, Madison, WI, USA). Real-time quantitative PCR was performed using a TaqMan® Gene Expression Assay (Thermo Fisher Scientific, Waltham, MA, USA) for each gene. The relative level of mRNA of each gene was calculated using the ΔΔCt method [24] with the GAPDH and RPN1 genes as endogenous controls. RT-PCR Statistical Analysis The statistical significance of differences in gene expression between tumor and matched normal samples was determined using the paired Wilcoxon signed-rank test. The statistical significance of observed changes in gene expression associated with different clinicopathological characteristics of ccRCC patients was determined using the non-paired Mann-Whitney U test. Changes were considered statistically significant at p-values ≤ 0.05. To estimate the possible co-expression of the studied genes, Spearman's rank-order correlation was used.
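For clarity, the 2^−ΔΔCt (Livak) calculation used above for the relative expression levels can be written out as follows; the Ct values are invented for illustration, and the two reference genes stand in for GAPDH and RPN1.

```python
import numpy as np

def relative_expression(ct_target_tumor, ct_ref_tumor, ct_target_normal, ct_ref_normal):
    """2^-ddCt relative expression: the target-gene Ct is normalized to the
    mean Ct of the reference genes in the tumor and in the matched normal
    sample, and the two dCt values are subtracted."""
    d_ct_tumor = ct_target_tumor - np.mean(ct_ref_tumor)
    d_ct_normal = ct_target_normal - np.mean(ct_ref_normal)
    dd_ct = d_ct_tumor - d_ct_normal
    return 2.0 ** (-dd_ct)

# Example: the target Ct rises in the tumor -> fold change < 1 (downregulation).
fold = relative_expression(27.5, [18.1, 20.3], 25.0, [18.0, 20.1])
print(f"fold change = {fold:.2f}")
```

The resulting per-patient fold changes would then be compared with the paired Wilcoxon or Mann-Whitney tests mentioned above (e.g. via scipy.stats).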
Conclusions Our data confirmed that CTDSP1 and CTDSPL (but not CTDSP2) exert tumor suppressive activity in ccRCC. However, which phosphatases exhibit tumor-suppressive activity may strongly depend on the cell line selected for the assay. Together with the differential expression data (RT-qPCR, TCGA) as well as the TCGA survival data, this indicates a significant role of CTDSP/SCP family phosphatases in cancer development. However, the separation of ccRCC into molecular subtypes may be of great importance, and the role of the phosphatases in cancer development may also strongly depend on it. The possible role of some miRNAs in the regulation of phosphatase gene expression is also noteworthy, but this issue requires further investigation. Understanding the role of SCP phosphatases in oncogenesis continues to be of great interest, given their known functions and their effects on the properties of tumor cells. The elucidation of the biological roles of SCP phosphatases and the design of methods of their regulation represent a vast area of further research and could lead to the development of new approaches for the treatment of kidney cancer. Figure 3. Analysis of CTDSP1, CTDSP2, CTDSPL, and RB1 expression in ccA and ccB subtypes in TCGA ccRCC (KIRC dataset) samples (A-D) and survival analysis in ccRCC (E-H). (A-D) Boxplots show the median, first, and third quartiles (25th and 75th percentiles); minimum and maximum sample values; and outliers. *** p ≤ 0.001. Plots (E-H) show Kaplan-Meier curves for overall survival (TCGA data) with median cut-off. The log-rank test was used to assess the significance of the differences between the groups with high and low transcripts per million (TPM), and hazard ratio values were calculated for each gene. Figure 4. Protein-protein interaction networks (including data from high-throughput screens) for CTDSPL, taking into account differentially expressed genes in ccRCC. The genes with decreased and increased expression are marked with a gradient from green to red (the color scale represents the log2 fold change of the expression level, tumor versus normal). The network was inferred using GPS-Prot (BioGrid data). Differentially expressed genes in ccRCC (TCGA; KIRC dataset) were derived with the ANOVA algorithm using GEPIA2. Author Contributions: Conceptualization, G.S.K., V.N.S. and Y.E.Y.; data curation, G.S.K., G.A.P.
and T.T.K.; formal analysis, G.S.K., G.A.P., V.N.S. and Y.E.Y.; funding acquisition, A.Y.P., Y.S.C., V.N.S. and Y.E.Y.; investigation, G.A.P., E.B.D., A.Y.P. and K.S.V.; methodology, G.S.K., E.B.D., K.S.V. and Y.E.Y.; resources, K.S.V.; software, G.S.K. and Y.S.C.; supervision, V.N.S. and Y.E.Y.; writing-original draft, G.S.K., G.A.P., V.N.S. and Y.E.Y.; writing-review and editing, A.Y.P., V.N.S. and Y.E.Y.All authors have read and agreed to the published version of the manuscript.Funding: This work was supported by the Russian Science Foundation (Grant 20-15-00337).Institutional Review Board Statement: The use of biological specimens in the present study was within the scope of the Declaration of Helsinki and was also approved by the Ethical Committee of Blokhin National Medical Research Center of Oncology (approval date: 19 May 2014).
7,485.8
2023-08-01T00:00:00.000
[ "Biology", "Medicine" ]
Sparsity-based super-resolved coherent diffraction imaging of one-dimensional objects Phase-retrieval problems of one-dimensional (1D) signals are known to suffer from ambiguity that hampers their recovery from measurements of their Fourier magnitude, even when their support (a region that confines the signal) is known. Here we demonstrate sparsity-based coherent diffraction imaging of 1D objects using extreme-ultraviolet radiation produced from high harmonic generation. Using sparsity as prior information removes the ambiguity in many cases and enhances the resolution beyond the physical limit of the microscope. Our approach may be used in a variety of problems, such as diagnostics of defects in microelectronic chips. Importantly, this is the first demonstration of sparsity-based 1D phase retrieval from actual experiments, hence it paves the way for greatly improving the performance of Fourier-based measurement systems where 1D signals are inherent, such as diagnostics of ultrashort laser pulses, deciphering the complex time-dependent response functions (for example, time-dependent permittivity and permeability) from spectral measurements and vice versa. Supplementary Figure 4: Super-resolved CDI of a 1D dense piecewise-constant object. (a) The "original" 1D object. (b) Power spectrum of the original object with 43dB noise. (c) Truncated power spectrum that corresponds to the part used to simulate the measured data. (d) The blurred reconstruction calculated by inverse Fourier transform of the "measured" power spectrum presented in (c), assuming full knowledge of the spectral phase. (e) Sparsity-based reconstruction (dashed red) compared with the original image (solid blue). The reconstruction uses the "measured" power spectrum (of (c)) and the prior information that the original image is sparse in the frame of shifted rectangular functions with different widths. Extrapolated power spectrum (f) and recovered spectral phase (g) calculated via sparsity-based reconstruction (dashed red) compared with the original image (solid blue). Demonstration of ambiguity in 1D CDI of example objects in the paper It is well known that 1D CDI is generally an ill-posed problem: Generic compact support objects correspond to the same far-field intensity pattern 2-3 . Still, there are some uncommon objects for which 1D CDI can yield unique solutions where knowing the support is the only prior information. In this section, we demonstrate that the theoretical ( Fig. 1) and experimental (Fig. 3) examples in the paper do not belong to those unusual cases, but rather belong to the general class of signals that do suffer from the ambiguity problem characteristic to the problem of phase retrieval of 1D objects. We begin with the object in Fig. 1 which is also plotted in Supplementary Figure 1 Supplementary Figures 1c and 1e, respectively). This shows that it is sparsity that distinguishes between the correct signal and the ambiguous signals. Supplementary x-ray CCD camera (1024×256 pixels). The sought information in our object is practically 1D; hence we integrate the detected intensity pattern along the non-diffraction (horizontal) dimension (256 pixels). 
Examining the performance of our sparsity-based algorithm (utilizing prior knowledge that the sought information is sparse in the basis of shifted rectangles) by comparing the reconstructed information with the original image (Supplementary Figure 3 (g,f,h)) leads to ~6 times resolution enhancement (while we measured the power spectrum of the object up to a spatial frequency of 0.073 µm^-1, we reconstructed its spatial spectral amplitude and phase with good fidelity up to 0.4 µm^-1). The experimental object in this example is symmetric, a fact that in principle could assist the phase retrieval, yet symmetry was not used in our reconstruction. Sparsity-based algorithm for super-resolved 1D CDI of objects that consist of rectangles with only approximately known widths. Our problem of super-resolved phase retrieval can be written mathematically as || || Here is the magnitude-squared of a point DFT of a vector with , is the i-th row of a DFT matrix, is a dictionary where and || || Algorithm for basis update. Input: Measurements , rectangle width value and uncertainty with step , threshold parameter , maximal number of iterations and . Output: Estimate ̂ of Initialize: Construct a dictionary consisting of shifted bars with constant width . Solve problem (1) with the new dictionary and obtain solution ̂ . End While Return ̂ ̂ Sparsity-based super-resolved 1D CDI of objects that are sparse in a frame of rectangles with various widths. The "sparsity basis" is an important component in sparsity-based CDI (in all dimensions). The basis that was used in the paper, rectangle functions with a fixed width, is not an essential one. That is, the incomplete power spectrum has led to considerable loss of resolution even if the spectral phase is known. Next, we implement sparsity-based reconstruction on the truncated spatial power spectrum, without assuming any knowledge of the spectral phase. As a model, we assume that the object is constructed from a small Clearly, the reconstructed object, its complete power spectrum and its reconstructed spectral phase match the original object very well despite the usage of the noisy truncated spectrum as "measured data" and the lack of any knowledge of the spectral phase.
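To make the role of the rectangle dictionary and of the truncated power-spectrum data term more tangible, the sketch below builds a dictionary of shifted rectangles with several widths and scores a candidate sparse coefficient vector against the measured low-frequency power spectrum. It does not reproduce the actual search algorithm of the paper; the signal length, widths, and frequency mask are arbitrary choices of this illustration.

```python
import numpy as np

def rectangle_dictionary(n, widths):
    """Columns are rectangles of the given widths at every admissible shift."""
    atoms = []
    for w in widths:
        for shift in range(n - w + 1):
            atom = np.zeros(n)
            atom[shift:shift + w] = 1.0
            atoms.append(atom)
    return np.stack(atoms, axis=1)          # shape (n, n_atoms)

def residual(y_measured, low_freq_mask, D, x):
    """Mismatch between the measured truncated power spectrum and the power
    spectrum of the candidate object D @ x, evaluated on the measured band."""
    spectrum = np.abs(np.fft.fft(D @ x)) ** 2
    return np.linalg.norm((spectrum - y_measured)[low_freq_mask])

n = 128
D = rectangle_dictionary(n, widths=[4, 6, 8])
x_true = np.zeros(D.shape[1]); x_true[[10, 200, 350]] = [1.0, 0.6, 0.8]
y = np.abs(np.fft.fft(D @ x_true)) ** 2                      # "measured" data
mask = np.zeros(n, dtype=bool); mask[:16] = True; mask[-15:] = True   # low frequencies only
print(residual(y, mask, D, x_true))                          # ~0 for the true support
```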
1,142.4
2015-09-08T00:00:00.000
[ "Physics" ]
A COMPARATIVE ANALYSIS OF UNSUPERVISED AND SEMI-SUPERVISED REPRESENTATION LEARNING FOR REMOTE SENSING IMAGE CATEGORIZATION: This work aims at investigating unsupervised and semi-supervised representation learning methods based on generative adversarial networks for remote sensing scene classification. The work introduces a novel approach, which consists in a semi-supervised extension of a prior unsupervised method, known as MARTA-GAN. The proposed approach was compared experimentally with two baselines upon two public datasets, UC-MERCED and NWPU-RESISC45. The experiments assessed the performance of each approach under different amounts of labeled data. The impact of fine-tuning was also investigated. The proposed method delivered in our analysis the best overall accuracy under scarce labeled samples, both in terms of absolute value and in terms of variability across multiple runs. INTRODUCTION Over the last decades, much of the effort involved in deploying automatic image classification algorithms has been invested in designing and manually selecting custom features for a target application. In this sense, the use of Bag-of-Visual-Words (BoVW) was one of the first attempts in the field (Yang, Newsam, 2010), followed later by different classifiers like Random Forest (RF) and Support Vector Machines (SVM) (Helber et al., 2017). Recently, Deep Learning (DL) techniques have become the dominant trend in image classification (Simonyan, Zisserman, 2014, Szegedy et al., 2015, Cheng et al., 2018), mainly due to their ability to automatically learn discriminative features directly from data (LeCun et al., 2015, Krizhevsky et al., 2012, Penatti et al., 2015, Nogueira et al., 2017) when labeled samples are abundant. Although recent years have witnessed an increase of Earth observation data, remote sensing labeled data still falls short of the demands imposed by DL-based techniques, mainly because of the high costs involved in field surveys and the required labor-intensive visual interpretation. In this sense, transfer learning (Pan, Yang, 2010, Weiss et al., 2016) and unsupervised deep learning techniques, such as Stacked Denoising Autoencoders, Convolutional Autoencoders and Deep Belief Networks (Liang et al., 2017, Romero et al., 2016, Zou et al., 2015), emerged as attractive alternatives. In transfer learning, networks already trained on huge datasets are reused in problems where the labeled data is limited by performing a fine tuning (Nogueira et al., 2017) of certain layers. On the other hand, unsupervised methods do not require any labeled data for the learning process. In the last few years, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have been catching the community's attention due to their ability to learn data distributions through an unsupervised two-player min-max game performed by two different networks: a generator and a discriminator. Considering the power of GANs for unsupervised learning, Lin et al. (Lin et al., 2017) proposed a Multiple-Layer Feature-Matching GANs architecture (MARTA-GANs) for feature learning. In short, MARTA-GANs capture latent features from the discriminator network, which can later be used as input to a classifier. This method presented substantial improvements in comparison with other unsupervised feature learning models.
Aiming to exploit cases where few labeled samples are available, (Springenberg, 2015) proposed to work with semi-supervised GAN (SS-GAN) algorithms. More specifically, they introduced the categorical generative adversarial networks (CatGANs) for image classification. This model was extended in (Salimans et al., 2016) to improve its convergence. Specifically, they proposed the feature matching term and the mini-batch discrimination concept, among other modifications. Later, the SS-GAN approach was adapted to remote sensing data applications, such as object detection (Chen et al., 2018) and pixel-wise PolSAR (Liu et al., 2018) and hyperspectral (He et al., 2017, Zhan et al., 2018) image classification. However, despite the efforts of (Salimans et al., 2016), SS-GANs still present some convergence problems, mostly when the number of unlabeled samples is much larger than the labeled ones. Motivated by this scenario, we introduce in this paper a Semi-Supervised Representation Learning GAN (SSRL-GAN), which, although conceptually similar to SS-GANs, presents a different training strategy and adaptations in the architecture. In short, SSRL-GANs present an external classifier, allowing the use of binary cross-entropy cost functions for the supervised and unsupervised stages. With these changes, we observed an improvement in the convergence of the model and in the classification performance, mainly when fewer labeled samples were used to train the model. We further analyze and compare different alternatives for remote sensing image categorization when a limited number of labeled samples is available. First, we take MARTA-GAN (Lin et al., 2017) as a baseline, which is an unsupervised learning method. Then, we compare it with two semi-supervised approaches: the Semi-Supervised GAN, as presented in (Salimans et al., 2016), and the Semi-Supervised Representation Learning GAN proposed in this work. Additionally, we evaluate how these methods behave when more labeled samples are added to the training set. And finally, we adopt a classic fine-tuning approach, using only labeled data, to investigate whether their performance can still be enhanced. The rest of this paper is organized as follows. Section 2 briefly describes the fundamentals underlying GANs. A detailed description of each assessed method is the subject of Section 3. The experimental protocol is reported in Section 4, while Section 5 shows the results obtained by the experiments. Finally, Section 6 summarizes the main conclusions and indicates future directions. GENERATIVE ADVERSARIAL NETWORKS (GANS) GANs, introduced by (Goodfellow et al., 2014), constitute a class of unsupervised machine learning models composed of two neural networks: the generator, which synthesizes realistic images, and the discriminator, which tries to correctly discern between synthesized and real images. A min-max game procedure is used to train these neural networks. The Generator learns a function G that maps samples of a known random distribution p(z) into samples of a distribution p_model(x), which the Discriminator D can hardly distinguish from samples of a given data distribution p_data(x).
The Discriminator, in turn, is trained to learn a function D that distinguishes whether a sample comes from p_data(x) or p_model(x). The optimal mapping function G* can be found by solving $G^{*} = \arg\min_{G}\max_{D} L(G, D)$ (1), where L(G, D) is the GAN loss function defined by $L(G, D) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))]$ (2), where E and log are the expectation and logarithmic operators, respectively, and z is a random noise vector, which follows a known noise distribution p(z), typically uniform or Gaussian. EVALUATED METHODS This section presents the four methods assessed in this paper for remote sensing image categorization with few labeled samples available. In the following, we describe the unsupervised MARTA-GAN, the Semi-Supervised GAN, the Semi-Supervised Representation Learning GAN, and the Fine Tuning applied to the Discriminator of all methods. 3.1 Multiple-Layer Feature Matching GANs (MARTA-GANs) MARTA-GAN (Lin et al., 2017) is an unsupervised representation learning algorithm that relies on the same GAN min-max game to learn discriminative features f(x). Like Deep Convolutional GANs (Radford et al., 2015), the Generator and the Discriminator are convolutional networks, trained to minimize a modified loss function L(G, D) (Eq. 3) in which a third term, the feature matching loss, is added to the GAN loss function to favor similarity between the generated and real images. The learned features f(x), named the multi-feature layer in (Lin et al., 2017), result from concatenating the outputs of the three last convolutional layers of the discriminator network. Semi-Supervised GANs (SS-GANs) SS-GANs (Salimans et al., 2016) exploit the available labeled data together with the unlabeled data to perform semi-supervised learning. The Discriminator output is changed from 1 neuron to K + 1 neurons, where the first K neurons are used to classify the real labeled samples into one out of the K classes present in the dataset and the (K + 1)-th neuron computes the probability that the input sample is real or fake, i.e. synthesized by the GAN. The training function for the SS-GANs becomes $L(G, D) = L_{supervised} + L_{unsupervised}$ (4), where $L_{supervised} = -\mathbb{E}_{x,y \sim p_{data}(x,y)}[\log(D(x, y \mid y < K + 1))]$ (5) and L_unsupervised (6) is the standard GAN min-max objective of Eq. (2), including the well-known feature matching loss. Observe that L(G, D) is a composition of the standard supervised loss function L_supervised with the unsupervised loss L_unsupervised. The optimal solution can be found by minimizing these two losses jointly. Semi-Supervised Representation Learning GANs (SSRL-GANs) The proposed SSRL-GAN differs from the SS-GANs by an auxiliary classifier that is not embedded in the Discriminator. Thus, the Discriminator is responsible for verifying whether the input sample is real or fake, whereas the Classifier evaluates how good the features at the multi-feature layer are for the classification of the available labeled samples. The architecture of the SSRL-GAN is shown in Figure 1 and involves three networks: Generator, Discriminator and Classifier.
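Numerically, the two sides of the min-max game of Eqs. (1)-(2) reduce to simple log-loss terms evaluated on a mini-batch. The sketch below computes them from discriminator outputs on real and generated samples; the "non-saturating" generator loss used here is a common practical variant and an assumption of this sketch, not something stated in the text, and D and G are stand-ins rather than the networks of Tables 1-3.

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """d_real = D(x) on real images, d_fake = D(G(z)) on generated images,
    both arrays of probabilities in (0, 1).  The discriminator loss is the
    negated mini-batch estimate of L(G, D); the generator loss pushes
    D(G(z)) towards 1 (non-saturating variant)."""
    loss_d = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    loss_g = -np.mean(np.log(d_fake + eps))
    return loss_d, loss_g

loss_d, loss_g = gan_losses(np.array([0.9, 0.8, 0.95]), np.array([0.1, 0.2, 0.05]))
print(f"discriminator loss {loss_d:.3f}, generator loss {loss_g:.3f}")
```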
The training process is divided into two consecutive stages, unsupervised and supervised, depending on whether the training data is labeled or not. In the first, purely unlabeled data is used in each mini-batch, while in the second only labeled samples are employed. The Generator is trained in the same way in both stages, since it does not rely on labels. Thus, while the parameters of the Discriminator are fixed, the parameters of the Generator are updated to synthesize images realistic enough to fool the Discriminator. Formally, the Generator is trained by minimizing a cost function which also includes the feature matching loss term (Eq. 7). Analogously, while the Discriminator is being trained, the Generator parameters are kept fixed. Thus, in the unsupervised stage, the Discriminator parameters are updated so that the function LD is maximized for real samples and minimized for synthetic ones (Eq. 8). In the supervised stage, the function LD is modified to include a new term that tries to maximize the probabilities C(f(x), y) assigned by the Classifier to the real class y of each sample x, as shown in Equation 9. Aiming to minimize this expression, the Discriminator will tend to produce more discriminative and representative features. Since the Classifier network requires label information for training, it is not used in the unsupervised stage. In the supervised stage, it is trained using the features f(x) learned by the Discriminator, considering only the real labeled data. In summary, the whole method can be described mathematically by Equations 10 and 11, where L(G, D, C) is the GAN objective function. Fine Tuning We further tested whether the features learned by the aforementioned methods could be improved by a subsequent fine-tuning step. For MARTA-GAN and SSRL-GAN the original classification layer was replaced by a softmax multiclass classification layer. For SS-GAN, we kept the first K neurons of the Discriminator output layer. Then, a new supervised training was carried out using the available labeled samples. EXPERIMENTAL ANALYSIS The experiments performed in this work aimed to evaluate the representations learned by the methods described above, specifically MARTA-GAN, SS-GAN, SSRL-GAN and the fine-tuned versions of these algorithms. Once the methods were trained, we took the features extracted from their respective multi-feature layers for image categorization. As in (Lin et al., 2017), we used a Support Vector Machine (SVM) (Hearst et al., 1998) for this purpose. The SVM was trained on the same labeled samples available in the training set (a minimal sketch of this step is given below). Datasets We assessed the methods using two public datasets for remote sensing image categorization. The first dataset was the UC MERCED Land Use Dataset1 (Yang, Newsam, 2010). It comprises 21 land-use classes. Each 256×256 pixel image has a spatial resolution of 0.3 m per pixel. For each class, 100 images were manually extracted from large images downloaded from the USGS National Map of different urban areas around the United States. Some image samples of this dataset are shown in Figure 2(a). The second dataset used in our experiments was NWPU-RESISC45 (Cheng et al., 2017). This dataset2 contains 31500 remote sensing images of size 256×256 pixels and spatial resolution from about 30 m to 0.2 m per pixel for most classes. A total of 45 scene classes are represented in the dataset. For each class, 700 images were extracted from Google Earth by experts in the remote sensing field. Figure 2(b) shows samples of these images.
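The categorization step of this protocol, training an SVM on the frozen multi-feature layer output, can be sketched as follows. The feature extractor extract_multi_feature is a hypothetical stand-in (in the real setup it would run the trained discriminator and concatenate its last three flattened convolutional outputs), and the data are random placeholders, not UC-MERCED or NWPU-RESISC45 images.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_multi_feature(images):
    # Placeholder: in the real setup this would run the discriminator and
    # concatenate the flattened outputs of its last three conv layers (F1-F3).
    return images.reshape(len(images), -1)

rng = np.random.default_rng(0)
train_imgs, train_labels = rng.normal(size=(50, 8, 8, 3)), rng.integers(0, 5, 50)
test_imgs, test_labels = rng.normal(size=(20, 8, 8, 3)), rng.integers(0, 5, 20)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(extract_multi_feature(train_imgs), train_labels)
print("overall accuracy:", clf.score(extract_multi_feature(test_imgs), test_labels))
```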
Fine Tuning

We further tested whether the features learned by the aforementioned methods could be improved by a subsequent fine-tuning step. For MARTA-GAN and SSRL-GAN the original classification layer was replaced by a softmax multiclass classification layer. For SS-GAN, we kept the first K neurons of the Discriminator output layer. Then, a new supervised training was carried out using the available labeled samples.

EXPERIMENTAL ANALYSIS

The experiments performed in this work aimed to evaluate the representations learned by the methods described above, specifically MARTA-GAN, SS-GAN, SSRL-GAN, and the fine-tuned versions of these algorithms. Once the methods were trained, we took the features extracted from their respective multi-feature layers for image categorization. As in (Lin et al., 2017), we used a Support Vector Machine (SVM) (Hearst et al., 1998) for this purpose. The SVM was trained on the same labeled samples available in the training set.

Datasets

We assessed the methods using two public datasets for remote sensing image categorization. The first dataset was the UC MERCED Land Use Dataset (Yang and Newsam, 2010). It comprises 21 land-use classes. Each 256×256 pixel image has a spatial resolution of 0.3 m per pixel. For each class, 100 images were manually extracted from large images downloaded from the USGS National Map of different urban areas around the United States. Some image samples of this dataset are shown in Figure 2(a). The second dataset used in our experiments was the NWPU-RESISC45 (Cheng et al., 2017). This dataset contains 31,500 remote sensing images of size 256×256 pixels and spatial resolution from about 30 m to 0.2 m per pixel for most classes. A total of 45 scene classes are represented in the dataset. For each class, 700 images were extracted from Google Earth by experts in the remote sensing field. Figure 2(b) shows samples of these images.

Network Architectures

The architectures of the Generator and Discriminator networks were essentially the same as those of MARTA-GAN (Lin et al., 2017). The Classifier, used only in the SSRL-GANs, was a Multi-Layer Perceptron (MLP) network, which took as input the feature vector at the multi-feature layer of the Discriminator and propagated it into a hidden layer with 512 units, empirically chosen and using a rectified linear unit (ReLU) as activation function. Its output layer implemented a softmax function and had as many units as the number of classes in the dataset.

Table 2. Architecture of the Generator for the three methods.

The three network architectures (Classifier, Generator, and Discriminator) are described in more detail in Tables 1, 2 and 3. The symbols denote, for each layer: convolution (C), deconvolution (D), batch normalization (B), ReLU (A1), Leaky ReLU (A2), MaxPooling (P), Flatten (F) and Fully Connected (Fc). The number of filters, the filter dimension and the convolution stride are indicated in parentheses. All filters were square and the stride was equal in the horizontal and vertical directions. The multi-feature layer resulted from the concatenation of F1, F2 and F3, which were the product of a flattening operation over feature maps at different scales in the network.

All methods were trained with a batch size of 64 samples using the Adam optimizer (Kingma and Ba, 2014), whose learning rate and momentum β1 parameters were set to 0.0002 and 0.5, respectively. The α parameter in the Leaky ReLU activation function was set to 0.2. The terms that make up the cost functions of all methods had the same relevance, with each importance coefficient set to one. As in (Lin et al., 2017), we scaled the input images to the range [−1, 1] before training and testing. Also, we applied the early stopping regularization procedure to avoid overfitting. The patience parameter, which controls the number of epochs without improvements in the validation loss, was set to 10. Each experiment was executed 5 times in order to evaluate the sensitivity of the methods to the initial solution of the trainable parameters.

To verify the influence of the number of labeled samples on the performance of each method, our experiments were carried out under two different protocols. We used the same Train set in both protocols in the unsupervised learning stage. The protocols differed in the number of labeled samples used for the supervised training stage of SS-GANs and SSRL-GANs, and also for training the SVM. In Protocol 1, we used the Aux set for the supervised stage, as described before. In Protocol 2, we applied vertical and horizontal flips, rotations and data replication to augment the number of labeled samples. This way, the number of labeled samples in Protocol 2 was about seven times larger than in Protocol 1. The methods were implemented in TensorLayer (https://tensorlayer.readthedocs.io/en/stable/) on an NVIDIA Titan XP GPU.
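The training configuration reported above (batch size 64, Adam with learning rate 0.0002 and β1 = 0.5, Leaky ReLU α = 0.2, inputs scaled to [−1, 1], early stopping with patience 10) can be summarized roughly as follows. This is an illustrative PyTorch/NumPy sketch written for this text, not the authors' TensorLayer setup; function and class names are assumptions.

```python
import numpy as np
import torch

def scale_to_unit_range(images_uint8):
    """Map uint8 images in [0, 255] to floats in [-1, 1], as done before training/testing."""
    return images_uint8.astype(np.float32) / 127.5 - 1.0

def make_optimizer(params):
    """Adam with lr = 0.0002 and beta1 = 0.5, the settings reported in the paper."""
    return torch.optim.Adam(params, lr=2e-4, betas=(0.5, 0.999))

activation = torch.nn.LeakyReLU(0.2)   # alpha = 0.2 in the Leaky ReLU layers

class EarlyStopping:
    """Stop after `patience` epochs without improvement in the validation loss."""
    def __init__(self, patience=10):
        self.patience, self.best, self.bad_epochs = patience, float("inf"), 0
    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs > self.patience   # True -> stop training

# toy usage with a made-up validation-loss trace
stopper = EarlyStopping(patience=10)
for epoch, val_loss in enumerate([1.0, 0.9, 0.95, 0.94, 0.93]):
    if stopper.step(val_loss):
        break
```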
RESULTS

Figure 3 summarizes the results for the UC-MERCED and NWPU-RESISC45 datasets in terms of Overall Accuracy (OA). The bar plots in Figures 3a and 3b refer to UC-MERCED, whereas Figures 3c and 3d relate to NWPU-RESISC45. The results for the fine-tuned versions of the evaluated methods are presented in Figures 3b and 3d for UC-MERCED and NWPU-RESISC45, respectively. In these figures, the suffix FT denotes the results obtained after fine-tuning. Each bar group indicates the median OA over all runs for each method and protocol. The plots also show, in black, the highest and the lowest OA value recorded in our experiments in each case.

As expected, the augmentation of labeled data improved the accuracy, in some cases remarkably. This can be seen by comparing corresponding bars within each plot. Data augmentation favorably affected even the MARTA-GAN results, an unsupervised representation learning method. The improvement for this method came from the SVM classifier, which profited from the extra labeled samples. The gain brought by labeled data augmentation ranged from 4.6% for MARTA-GAN on UC-MERCED to 19.2% for SS-GAN FT on NWPU-RESISC45.

A comparison of plots related to the same dataset reveals that fine-tuning also improved the accuracy consistently. The variability of the results across multiple runs was also reduced thanks to fine-tuning. The improvement in terms of OA ranged from 0.3%, for SS-GAN in Protocol 1, to 4.8%, for MARTA-GAN in Protocol 2.

However, the key issue in this analysis is the comparison of the three methods in each scenario. The proposed method, SSRL-GAN, was consistently superior to MARTA-GAN in all experiments. Data augmentation and fine-tuning affected MARTA-GAN and SSRL-GAN performance similarly on both datasets. Even so, the proposed method always outperformed its unsupervised counterpart. Thus, the exploitation of labeled samples in MARTA-GAN, as proposed in SSRL-GAN, was generally beneficial in all variants tested in our experiments. SS-GAN presented a unique behavior. In all experiments conducted under Protocol 1 it presented the worst results among all methods, both in terms of absolute values and in terms of variability. However, when we increased the number of labeled samples, moving to Protocol 2, SS-GAN became consistently the best performing method.

These results indicate that SS-GAN was, among all tested methods, the most sensitive to the so-called small sample size problem. In other words, the experiments indicated that under conditions of greater scarcity of labeled data the SSRL-GAN presented the best results among all analyzed methods on both databases. Conversely, in conditions of greater abundance of labeled data the proposed method was outperformed by SS-GAN.

CONCLUSIONS

In this work, we performed a comparative analysis of semi-supervised representation learning methods for remote sensing scene classification. We further introduced a novel semi-supervised approach based on Generative Adversarial Networks (GANs). The methods were evaluated on two public datasets. We took as baselines an unsupervised and a semi-supervised method, both based on GANs. The experimental analysis indicated that the features learned by the proposed method allowed it to achieve better accuracy than the baselines when the amount of labeled data was small. The experimental analysis also revealed that a fine-tuning step further improved results in all tested methods.
Figure 1. Overview of the SSRL-GAN method. The Generator (G) learns to synthesize images to fool the Discriminator (D), which learns to distinguish between real and synthesized images. The semi-supervised procedure is performed by switching between unlabeled and labeled real images. When labeled images are used, features f(x) are extracted from the multi-feature layer and used as input to the Classifier (C), which will influence the GAN objective function.

Figure 3. Overall Accuracy results in (%); FT in the plots on the right indicates fine-tuning.

As a continuation of the present research, we intend to explore the conclusions drawn from this work for solutions based on GANs for other applications.
4,200
2019-09-16T00:00:00.000
[ "Computer Science" ]
A Novel Cell Line Based Orthotopic Xenograft Mouse Model That Recapitulates Human Hepatoblastoma Currently, preclinical testing of therapies for hepatoblastoma (HB) is limited to subcutaneous and intrasplenic xenograft models that do not recapitulate the hepatic tumors seen in patients. We hypothesized that injection of HB cell lines into the livers of mice would result in liver tumors that resemble their clinical counterparts. HepG2 and Huh-6 HB cell lines were injected, and tumor growth was monitored with bioluminescence imaging (BLI) and magnetic resonance imaging (MRI). Levels of human α-fetoprotein (AFP) were monitored in the serum of animals. Immunohistochemical and gene expression analyses were also completed on xenograft tumor samples. BLI signal indicative of tumor growth was seen in 55% of HepG2- and Huh-6-injected animals after a period of four to seven weeks. Increased AFP levels correlated with tumor growth. MRI showed large intrahepatic tumors with active neovascularization. HepG2 and Huh-6 xenografts showed expression of β-catenin, AFP, and Glypican-3 (GPC3). HepG2 samples displayed a consistent gene expression profile most similar to human HB tumors. Intrahepatic injection of HB cell lines leads to liver tumors in mice with growth patterns and biologic, histologic, and genetic features similar to human HB tumors. This orthotopic xenograft mouse model will enable clinically relevant testing of novel agents for HB. four segments of the liver 3 . Mice with tumors generated with hydrodynamic injection develop multifocal nodules within the liver, and the organ is eventually entirely replaced by tumor. This may be representative of patients that present with tumor in all four segments of the liver, but this is only a small percentage of patients 3 . With the subcutaneous and intrasplenic xenograft models, tumors can be quickly generated in genetically identical animals from the human HB cell lines Huh-6 11 , HepT1 8 , and HepG2 12 . In the subcutaneous model, injection of all three cell lines led to growth of tumors, depending on the strain of mice and time elapsed since injection of cells 8,9 . In the intrasplenic model, immunodeficient mice were directly injected with HepG2, Huh-6, or HepT1 cells into the spleen. The Huh-6 and HepT1 tumor cells, but not HepG2 cells, then migrated to the liver, giving rise to intrahepatic tumors 9,10 . Of note, animals that underwent splenectomy just after injection more readily developed intrahepatic tumors 10 . These tumors were small, multifocal nodules that again do not represent the disease typically seen in children. Notably, there is one published study of injection of HepG2 cells into the portal vein to generate intrahepatic tumors, but the focus in this work is use of this model for drug testing for hepatocellular carcinoma (HCC) 13 . Thus, although these models have contributed to the field, none truly recapitulates the disease. For effective preclinical studies to be performed, a true intrahepatic orthotopic xenograft model that accurately replicates the human disease is essential. We have successfully developed an intrahepatic patient-derived xenograft (PDX) model of HB using patient specimens 14 . Other groups have also examined subcutaneous and intrahepatic growth of patient-derived liver cancer tissues as models of HCC, including an interesting study in which tumors composed of sorted human liver cancer stem cells (hLCSCs) were grown subcutaneously 15,16 . 
Since these tissues have limited availability due to the rarity of the disease, we wanted to develop and characterize an intrahepatic, orthotopic xenograft model using commercially available HB cell lines. In addition, cell line derived xenograft models can be better standardized and are not dependent on tissue quality of surgical samples that usually have been exposed to chemotherapy. In this paper, we describe the development and characterization of such an intrahepatic xenograft HB mouse model. Human HB cells were injected into the livers of immunocompromised mice. Mice were monitored for tumor growth using bioluminescence imaging (BLI), magnetic resonance imaging (MRI), and measurement of serum levels of human α-fetoprotein (AFP). At the conclusion of the study, animals were euthanized and tissues were harvested for protein expression analysis by immunohistochemistry and gene expression profiling by RNA sequencing. Generation of tumors by intrahepatic injection of human HB cell lines. To generate liver tumors, we injected two million HepG2 or Huh-6 human HB cell lines into either the right median lobe with a right flank incision (Fig. 1a) or the left lateral lobe with a midline, abdominal incision ( Fig. 1b) of the liver of NOD/Shi-scid/ IL-2Rγ null (NOG) mice (see methods). Regardless of injection technique, the tumors grew as large, exophytic masses originating from the injected lobe and invaded multiple adjacent segments of the liver as seen in children with pre-treatment extent of disease (PRETEXT) I, II, and III tumors (Fig. 1c,d) 17 . In order to longitudinally monitor the growth of tumors in vivo, we used cells that had been transduced with lentiviral vectors expressing the luciferase gene. The expression of strong luciferase activity (2-3 million relative luminescence (RLU)) was confirmed prior to implantation of tumor cells. Upon intraperitoneal injection of luciferin into the animals, the cells emitted a BLI signal that could be monitored from week to week (Fig. 2a-d). Four of 11 (36%) mice injected with HepG2 cells showed BLI signal at 5 weeks after injection, and 6 of 11 (55%) displayed BLI signal at 7 weeks (Fig. 2a). Four of 9 (44%) mice injected with Huh-6 cells demonstrated BLI signal at 1 week after injection, and 5 of 9 (55%) exhibited BLI signal at 3 weeks after injection (Fig. 2b). Temporal analysis of BLI signals reflects the growth of tumors (Fig. 2c,d) and shows that changes in the HepG2 masses are significant (p = 0.0044). Taken together, the data demonstrates that intrahepatic injection of both HB cell lines successfully leads to the growth of liver tumors in immunocompromised mice. Human AFP is elevated in the serum of mice harboring xenograft tumors. One of the most important indicators used for diagnosis and disease surveillance in patients with HB is levels of AFP in the blood, and 97% of patients show elevated levels 18 . We hypothesized that levels of human AFP would be elevated in the serum of the mice harboring the xenograft tumors. To test this hypothesis, we measured serum human AFP levels weekly in one mouse representative of each cell line xenograft using an enzyme-linked immunosorbent assay (ELISA). At the time of injection of cells and at one week after injection, levels of human AFP in the blood remained very low (Fig. 2e). By three weeks after injection, animals with both xenograft tumors showed elevated human AFP in their serum (Fig. 2e). MRI of xenograft tumors. 
MRI was performed at early and late time points to confirm the presence of tumor and for 3D assessment of tumor burden. Intrahepatic tumors were visible as hypo-intense regions in the lobes of the liver (Fig. 3a,b). Contrast-enhanced T1-weighted imaging was performed using a long circulating blood pool liposomal contrast agent for assessing intra-tumoral vasculature and spatial relationships between tumor and major hepatic blood vessels. The blood pool agent showed the presence of intra-tumoral vasculature with a high degree of vascularity specificity at the tumor periphery (Fig. 3c-f). As the tumors grew, major hepatic vessels, including the inferior vena cava (IVC), were pushed from their regular anatomical orientation. Histological examination of xenograft tumors. At the conclusion of these studies, animals were euthanized and samples were harvested for histological and immunohistochemical analyses. Histological review of the xenografted tumors indicated that those generated with both HB cell lines resemble primary human HB samples to varying degrees (Fig. 4a,b). HepG2-derived tumors most closely resemble human HBs of homogeneous, embryonal phenotype (Fig. 4a). In contrast, Huh-6 tumors had morphologic characteristics that differ from the general histology of primary tumors (Fig. 4b). Huh-6 xenotransplants demonstrated an unusual pattern with tumor cells organized in papillary structures generally uncharacteristic of HB (Fig. 4b). Of note, histology was similar to Huh-6 tumors obtained previously in the subcutaneous model 19. An immunohistochemistry panel, including AFP, β-catenin, and Glypican-3 (GPC3), commonly used in HB diagnosis and classification 20, was employed to evaluate protein expression in the mouse xenograft tumors. AFP is variable in HB specimens since this protein is secreted and the key measurement is the level of AFP in the serum 20,21. Of the two cell line xenografts, only HepG2 tumors showed clear positive AFP staining throughout; Huh-6 tumors were predominantly negative with scattered positive patches of cells (Fig. 4e,f, Supplementary Fig. S1a-c). GPC3 is a reliable marker that is expressed in epithelial, fetal, and embryonal components and is negative in normal liver and benign tumor tissues 22. Both HepG2 and Huh-6 xenograft tumors showed strong cytoplasmic and membrane staining for this marker (Fig. 4g,h). As a second assessment of expression of AFP and GPC3, we performed quantitative reverse transcription polymerase chain reaction (RT-PCR) (qPCR) experiments to measure levels of mRNA expression of these two HB markers in HepG2 and Huh-6 cells, in comparison to expression in the terminally differentiated hepatic cell line HepRG and in human fibroblasts (HFs). AFP and GPC3 expression are both significantly elevated in HepG2 and Huh-6 cells (Supplementary Fig. S1d,e). Finally, β-catenin is arguably the most important marker of HB, as nuclear staining is used as a surrogate for the presence of mutations in the β-catenin gene, CTNNB1, which are commonly found in cases of HB 23. Nuclear β-catenin expression is only seen in malignant hepatocytes 24. In both the HepG2 and Huh-6 xenograft tumors, β-catenin staining was strong throughout the tumor nuclei (Fig. 4c,d). Taken together, histological analyses show that the xenograft tumors resemble primary HB tumors. Mutation analysis of cell lines and xenograft tumors.
We further characterized the cell lines by performing mutational analyses of CTNNB1 and the Telomerase reverse transcriptase (TERT) promoter gene previously reported to be present in each cell line (Table 1) 23 . Both cell lines carry mutations in CTNNB1. HepG2 carries a large in-frame deletion, p.W25_I140del (116 codons within exons 3 and 4) 25 , while Huh-6 carries a CTNNB1 point mutation, p.G34V 25 . HepG2 is also reported to carry a mutation of the TERT promoter, G228A 26 . Gene expression profiling of xenograft tumors. We used RNA sequencing to profile HepG2 and Huh-6 gene expression in vitro, including both parental and luciferase-transduced cell lines, and xenograft tumors generated in vivo. As an initial analysis, we compared transcriptome similarities between the luciferase-transduced cells and the parental cells (Fig. 5d) to show that the luciferase-transduced cells had almost identical gene expression as the parental cells. We then analyzed six normal liver samples, nine primary human untreated HB samples, and four in vitro and in vivo HepG2 and Huh-6 samples by principal component analysis (PCA). PCA of these profiles revealed three main clusters of samples: normal livers, primary tumors, and the in vitro and in vivo HepG2 and Huh-6 samples (Fig. 5a). In vitro and in vivo HepG2 profiles showed greater similarity than the corresponding Huh-6 profiles (Fig. 5a). To reveal the major differences in the Huh-6 tumor sample that occur as a result of being grown in vivo, we conducted gene set enrichment analyses (GSEA) of each pair of in vitro cell and in vivo tumor samples with two gene sets from the Broad Molecular Signatures Database (Fig. 5b, Supplementary Table S2). Seventeen pathways in the Hallmark gene set were significantly changed between the two Huh-6 samples while nine pathways were significantly altered between the two HepG2 samples (Fig. 5b). Eleven pathways were significantly enriched in the Huh-6 tumor sample (Fig. 5b), including seven that are involved in the immune system and inflammation (interferon alpha response, interferon gamma response, allograft rejection, inflammatory response, Tumor necrosis factor alpha (TNFα) signaling via Nuclear factor kappa B (NFκB), complement, Interleukin 6 (IL-6)/Janus kinase (JAK)/Signal transducer and activator of transcription 3 (STAT3) signaling). These pathways were not enriched in the HepG2 tumor sample, although four were increased in the HepG2 cell sample (TNFα signaling via NFκB, allograft rejection, IL-6/JAK/STAT3 signaling, inflammatory response) (Fig. 5b). Enrichment for genes that function in the epithelial-mesenchymal transition, KRAS signaling, hypoxia, and apoptosis was also seen in the Huh-6 tumor sample (Fig. 5b), and these changes were not seen in either HepG2 sample. Unique differences seen in the Huh-6 cell sample included higher expression of MYC targets, E2F targets, Wnt/β-catenin signaling, G2M checkpoint, and estrogen response. Higher expression of Mammalian target of rapamycin complex 1 (MTORC1) signaling was seen in both the Huh-6 and HepG2 cell samples. Taken together, these statistical analyses suggest that more gene sets are changed between the Huh-6 in vitro and in vivo samples than the two HepG2 samples, many of which are connected to the immune response. 
In a third, specific analysis of the RNA sequencing dataset, we analyzed the expression of a previously published prognostic 16-gene signature that has been reported to differentiate between a low risk, better prognosis HB cluster (C1) versus a high risk, poor prognosis HB cluster (C2) 27. We used a heat map to classify expression in the two cell lines and corresponding xenograft tumor samples compared to the nine primary HB patient samples, all normalized to average expression in the normal liver samples (Fig. 5c). In general, all of the human HB tumor samples showed a C2 gene expression profile. Discussion HB is the most common pediatric primary liver tumor. Although generally a rare cancer, it leads to the death of more than half of high-risk patients even with intensive chemotherapy and surgical interventions 3.

Table 1. Mutation analysis of HB cell lines and xenograft samples. Two parental cell lines are HepG2-C and Huh-6-C; two cell lines grown in vivo as xenograft tumors are HepG2-T and Huh-6-T.
Sample | CTNNB1 | TERT promoter
Huh-6-C | p.G34V | wild-type
Huh-6-T | p.G34V | wild-type

However, preclinical testing of such therapies has lagged due to the paucity of clinically relevant animal models of the disease. PDX models derived from primary tumors are very promising but have limited availability due to the rarity of the disease 14,28. Subcutaneous and intrasplenic models utilizing commercially available cell lines do not accurately recapitulate tumors seen in patients 9,10. In addition, a relevant study of intrahepatic xenograft tumors generated with HepG2 cells focused on this as a model of HCC instead of HB 13. Hence, there is a compelling need for an intrahepatic xenograft model with human HB cell lines that replicates this complex disease, and, in this paper, we describe such a model that will enhance testing of new therapies. Importantly, this intrahepatic HB xenograft model recapitulates the key hallmarks of the disease: elevated serum AFP levels, large exophytic tumors with active blood supplies, and embryonal histological phenotype with elevation of AFP, GPC3, and β-catenin. Although elevation of protein levels of AFP was not detected in Huh-6 tumors with immunohistochemistry experiments, significant increases in mRNA expression of both AFP and GPC3 were seen in HepG2 and Huh-6 cells. Because of this discrepancy, we verified our data by staining the Huh-6 tumors with a total of four different AFP antibodies. With all four antibodies, the tissues were predominantly negative with limited scattered patches of positive cells. In the literature, it is still unclear whether detection of AFP with immunohistochemistry assays correlates with serum levels of this protein. In a large immunohistochemistry study of 83 patient samples comprising a mix of diagnostic tumor biopsies and post-chemotherapy, post-surgical specimens, no statistically significant correlation was found between levels of serum AFP at diagnosis and expression of AFP in resected tumors 21. We speculate that such secreted proteins have high rates of protein turnover in cells and thus are not always detectable with immunohistochemistry assays. This study shows that direct intrahepatic orthotopic injection of widely distributed HB cell lines in immunodeficient mice leads to primary hepatic tumors that morphologically mimic human HB tumors, and this is the first report that characterizes such cell line-derived xenograft tumors in relationship to primary HB tumors in order to demonstrate clear clinical relevance.
In our study, we examined the complete gene expression profiles of all cell lines and xenograft tumor samples in comparison to normal liver and HB tumor samples, which has not been described previously. In PCA of the total RNA sequencing dataset, both HepG2 samples and the Huh-6 sample grown in vitro clustered close to each other. However, the Huh-6 sample that was grown in vivo differed from what would be expected given its human HB tumor origin. Interestingly, gene expression changes in the Huh-6 tumor that seem to have occurred as a result of being grown in vivo result in cells with a profile different than the cell line grown in vitro. These results correlate with the histological review, as the Huh-6 xenograft tumor does not resemble primary HB disease as closely as the HepG2 tissue. We speculate that this inconsistency may be due to the environment of the murine liver affecting the cell line as studies have shown that there are key differences between the human and mouse liver, especially in regards to transcriptional regulation and general gene expression among homologous genes 29,30 . Previous studies of in vivo models of HB have shown differences in histological appearance of cell line-derived tumors depending on their location of growth in the animal 9,10 . No previous studies have analyzed gene expression changes in HB cell lines grown in vitro and in vivo, but papers about other types of cancer have shown similar differences depending on whether cells are grown in vitro or in vivo and on location of growth in vivo 31,32 . In support of results from PCA, GSEA suggested more extensive divergence of the Huh-6 in vivo sample, including gene expression changes that indicate immune response and inflammation. It is well accepted that growth of human cells in animals requires the support of animal cells and that human and animal cells interface and interact with each other. Perhaps this contact is leading to unique upregulation of immune pathways within the Huh-6 cells, leading to more widespread alterations in gene expression in Huh-6 cells that are not seen in HepG2 tumors. Taken together, the results of this study show that the commercially available HB cell lines can be used for in vivo preclinical testing; however, HepG2 more accurately mimics human HB. This study also makes it clear that more cell lines are needed to study HB biology and treatment response; therefore, large centers that treat many patients must come together to develop new patient-derived cell lines and xenografts. With the new international collaborations 33,34 that have been formed to study HB, there will be more availability of tissues to develop these valuable resources. Materials and Methods All animal procedures used in this study were performed under and in accordance with an animal protocol approved by the Institutional Care and Use Committee of Baylor College of Medicine. Orthotopic mouse model. In vivo studies were performed in female NOD/Shi-scid/IL-2Rγ null (NOG) mice (Taconic Biosciences, Hudson, NY). 2 × 10 6 HepG2 and Huh-6 cells transduced with luciferase and resuspended in 100 μl phosphate-buffered saline (PBS) were surgically implanted into either the right lobe of the liver through a right flank incision or the left lobe with a midline, abdominal incision. The mice underwent BLI beginning at 10 days after implantation and every week thereafter with the In Vivo Imaging System (IVIS, PerkinElmer, Waltham, MA), and luminescence flux was recorded to assess tumor growth. 
After seven weeks (HepG2) or four weeks (Huh-6), necropsy was performed, intrahepatic and extrahepatic sites of tumor were noted, and samples were harvested for immunohistochemistry and RNA isolation. ELISA to measure circulating AFP in mouse blood. Blood was drawn from the facial veins of mice harboring xenograft tumors at the indicated time points. AFP was measured in the serum of animals with an AFP ELISA kit (EIA-1468, DRG Instruments, Germany). In vivo MRI. MRI was performed on a 1.0 T permanent MRI scanner (M2 system, Aspect Technologies, Israel). A 35 mm volume coil was used for transmission and reception of the radiofrequency (RF) signal. Mice were sedated using 3% isoflurane, set up on the MRI animal bed, and then maintained under anesthesia at 1-1.5% isoflurane delivered using a nose cone setup. Body temperature was maintained by circulating hot water through the MRI animal bed. Respiration rate was monitored using a pneumatically controlled pressure pad placed in the abdominal area underneath the animal. A long circulating liposomal-Gd blood pool contrast agent (SC-Gd liposomes) was systemically administered via the tail vein at a dose of 0.1 mmol Gd/kg and used for contrast-enhanced T1-weighted imaging 35. Immunohistochemical stains for AFP (Supplementary Fig. S1b; 180003, Thermo Fisher Scientific, Waltham, MA (Fig. 4e)), β-catenin (ab32573, Abcam), and GPC3 (11395, Santa Cruz Biotechnology, Dallas, TX) were performed. Imaging of tumor sections on slides was done on a DMi8 microscope (Leica, Germany). Quantitative RT-PCR with cell lines. RNA was extracted from cells with the Direct-zol RNA MiniPrep Kit (Zymo Research, Irvine, CA, USA). RNA purity and quantity were determined using a spectrophotometer measuring absorbance at 260/280 nm. cDNA was generated from total RNA with the SuperScript III First-Strand Synthesis System for RT-PCR (Invitrogen) or the qScript cDNA SuperMix (Quanta Biosciences, Gaithersburg, MD, USA). TaqMan qPCR was done with TaqMan Universal Master Mix II (Applied Biosystems, Foster, CA, USA) and with the following primers (Applied Biosystems): AFP (Hs1040598_m1) and GPC3 (Hs01018936_m1). GAPDH (Hs02758991_g1) was used as an internal control in all qPCR experiments. All experiments were run on a StepOnePlus Real-Time PCR System (Applied Biosystems). Mutation analysis of cell lines and xenograft tumors. DNA from frozen xenograft tissues and cell lines was extracted using the QIAamp DNA Mini Kit (Qiagen, Germany). Samples were treated with RNase A and eluted in Buffer AE. CTNNB1 exons 3 and 4 and TERT promoter mutation status was determined by PCR of genomic DNA with primers listed in Supplementary Table S1 37. Two-directional Sanger sequencing analysis of PCR products was completed with Mutation Surveyor v. 5.0.1 (Softgenetics). RNA sequencing of hepatic tumors and comparison with parental cell lines and liver cancer samples. RNA from frozen hepatic tumor samples was isolated using the mirVana miRNA isolation kit (Ambion, Austin, TX). Samples were treated with DNase 1 and eluted in nuclease-free water. RNA from xenograft tumors and parental cell lines was isolated using the RNeasy Plus Mini Kit (74134, Qiagen). RNA samples were submitted for RNA sequencing to the Baylor College of Medicine Genomic and RNA Profiling Core (Houston, TX) or to GENEWIZ (South Plainfield, NJ). RNA-seq FastQ files were processed using STAR 38 and Cufflinks 39, with alignment to Hg19/GRCh37. HB tumor and normal liver samples were processed via Illumina HiSeq 2500 40.
HepG2 was sequenced on both platforms, and a multiplicative scaling vector was used to scale the cell line and xenograft values for comparison with the patient tumor and normal liver samples. Fragments per kilobase per million (FPKM) values for 2324 genes were included after filtering for Human Genome Organization (HUGO) Gene Nomenclature Committee (HGNC) protein coding genes with average expression above 25 FPKM and coefficient of variation (CV) above 0.3 over the set of all samples. Spearman correlation of 18,571 HUGO genes was used to indicate gene expression similarities of the two parental cell lines and the two luciferase-transduced cell lines, all grown in vitro. PCA was implemented in R (R Development Core Team, R: A language and environment for statistical computing, http://www.R-project.org (2014)) using the prcomp function with the scale parameter set to adjust variables to unit variance. GSEA of xenograft tumors and cell lines. Read FASTQ files were processed with RSEM (version 1.2.17) 41 using the read aligner Bowtie2 42 applied to the combined human and mouse NCBI Refseq (3/21/16) transcriptomes. Next, TPM values from human transcripts were selected and renormalized. Using a gene expression cutoff of 7.5 transcripts per million (TPM) and CV above 0.3 for HGNC protein coding genes, two-sample comparison gene set enrichment was implemented using the GSEA (v2.23) program 43 in pre-ranked mode with an extended "Signal2Noise" metric score for ranking. The Broad Molecular Signatures Database (MSigDB v5.2) sets h (Hallmark) and c2 (Curated gene sets) were used. Statistical analysis. The Kruskal-Wallis test was used to determine the statistical significance of in vivo tumor flux differences among the indicated time points. Data availability. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
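A rough sketch of the expression-matrix filtering and PCA described in this section (HGNC protein-coding genes with average expression above 25 FPKM and CV above 0.3, followed by PCA on unit-variance-scaled values, the Python analogue of R's prcomp with scale = TRUE) might look as follows. The file name and the genes-by-samples matrix orientation are assumptions for illustration, not details from the paper.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

fpkm = pd.read_csv("fpkm_matrix.csv", index_col=0)      # rows: genes, columns: samples (assumed layout)

# keep genes with mean FPKM > 25 and coefficient of variation > 0.3 across all samples
mean_expr = fpkm.mean(axis=1)
cv = fpkm.std(axis=1) / mean_expr
keep = (mean_expr > 25) & (cv > 0.3)
filtered = fpkm.loc[keep]                                # ~2324 genes in the paper

# PCA over samples: each sample is an observation, each gene a unit-variance-scaled variable
scaled = StandardScaler().fit_transform(filtered.T.values)
pcs = PCA(n_components=2).fit_transform(scaled)
pc_df = pd.DataFrame(pcs, index=filtered.columns, columns=["PC1", "PC2"])
print(pc_df.head())
```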
5,660.2
2017-12-01T00:00:00.000
[ "Medicine", "Biology" ]
Prediction of Synaptically Localized RNAs in Human Neurons Using Developmental Brain Gene Expression Data In the nervous system, synapses are special and pervasive structures between axonal and dendritic terminals, which facilitate electrical and chemical communications among neurons. Extensive studies have been conducted in mice and rats to explore the RNA pool at synapses and investigate RNA transport, local protein synthesis, and synaptic plasticity. However, owing to the experimental difficulties of studying human synaptic transcriptomes, the full pool of human synaptic RNAs remains largely unclear. We developed a new machine learning method, called PredSynRNA, to predict the synaptic localization of human RNAs. Training instances of dendritically localized RNAs were compiled from previous rodent studies, overcoming the shortage of empirical instances of human synaptic RNAs. Using RNA sequence and gene expression data as features, various models with different learning algorithms were constructed and evaluated. Strikingly, the models using the developmental brain gene expression features achieved superior performance for predicting synaptically localized RNAs. We examined the relevant expression features learned by PredSynRNA and used an independent test dataset to further validate the model performance. PredSynRNA models were then applied to the prediction and prioritization of candidate RNAs localized to human synapses, providing valuable targets for experimental investigations into neuronal mechanisms and brain disorders. Introduction How people memorize, learn, and process external information largely depends on the sophisticated connections between neurons [1]. Unlike typical cells, neurons have a highly polarized architecture, consisting of the soma with the nucleus, and extended protrusions, including dendrites and an axon [2]. Within a complex neural network, the region where two neurons contact is referred to as a synapse, which is essential for neural communications [3]. Extensive studies have been conducted to understand mRNA transport and localization to synapses. It is commonly acknowledged that many mRNAs are packaged into granules after being transcribed in the nucleus and then transported to synaptic regions for local translation. The mechanism of local translation is supposed to facilitate fast responses to environmental changes and synaptic inputs [4,5]. Thus, mRNA localization plays a key role in neuronal protein translation, allowing the local synthesis of components required for synaptic plasticity during brain development [5][6][7][8]. Dysregulation of synaptic mRNA localization and translation can affect cellular functions, leading to neurological diseases such as Fragile X Syndrome and Spinal Muscular Atrophy [9,10]. Moreover, synaptically localized RNAs may be involved in liquid-liquid phase separation to form membraneless neurite compartments with diverse functions [11]. With highly polarized morphology, neurons offer a great model for studying RNA localization [8]. Subcellular fractionation techniques and electron microscopy were originally used to understand the structure of synaptic terminals and internal contents [12][13][14]. Recently, microarrays [2,6,15] and next-generation sequencing technologies have been used to profile dendritic transcriptomes in rats and mice [16][17][18][19][20]. 
However, it is very challenging to accurately profile dendritic transcriptomes, with major difficulties in the clean separation of dendrites from cell bodies and the complexity of dynamic neuropil events [18]. Notably, previous studies have only identified a small number of dendritic RNAs in common [16][17][18][19][20], and the full synaptic RNA pool remains largely unclear. Previous studies suggest that neuronal mRNAs may carry regulatory elements that affect mRNA localization, stability, and translation [19,20], whereas the lack of localization signals could be the reason why some mRNAs are retained in the soma [21]. For instance, the 3 UTR of Ca 2+ /calmodulin-dependent protein kinase II (CaMKIIa) targets the mRNA to dendrites for local translation [22]. The loss of localization signals in the 3 UTR of CaMKIIa mRNA altered its distribution in dendrites, resulting in reduced accumulation of CaMKIIa in postsynaptic densities (PSD) and impairments of synaptic plasticity and spatial memory [23]. The 5 UTR of sensorin mRNA has also been implicated in synaptic mRNA localization in Aplysia [7]. These findings suggest that sequence features may be used to predict synaptic RNAs. Since many neuronal proteins are involved in synaptic plasticity and higher-order brain functions such as learning and memory [24], synaptic genes may also display characteristic expression patterns during neuronal development and aging. Interestingly, the analysis of human brain time-series transcriptome data reveals that synaptic genes are particularly sensitive to the aging process [25]. Moreover, functional genomic studies using developmental human brain transcriptome data have shown that schizophrenia and autism spectrum disorders partially converged on neurodevelopmental modules involved in transcriptional regulation and synaptic function [26,27]. Thus, gene expression data may also contain relevant information for predicting synaptic RNAs. With the growing size and complexity of genomic data, machine learning techniques have been increasingly used to extract hidden knowledge regarding a specific biological problem. One intriguing problem is RNA subcellular localization, which plays an important role in modulating protein distributions and cellular functions of various classes of RNAs transcribed from the genome [28]. To date, machine learning models have been developed to predict the subcellular localization of RNAs, with some models intended for mRNAs [29][30][31][32] and others for long non-coding RNAs (lncRNAs) [33][34][35]. RNATracker used a deep neural network to predict the subcellular localization of mRNAs from onehot-encoded transcript sequences [31]. mRNALoc employed support vector machine (SVM) models to predict mRNA subcellular localization based on pseudo-K-tuple nucleotide composition (PseKNC) features [29]. Recently, DM3Loc was developed, which applied the multi-head self-attention mechanism to deep learning architecture [32]. For lncRNAs, predictors such as lncLocator [33], iLoc-lncRNA [34] and DeepLncRNA [35] utilized sequence-based features. lncLocator and iLoc-lncRNA were constructed using conventional machine learning algorithms, and DeepLncRNA was based on a deep neural network. Many methods mentioned above enable multi-label prediction for multiple subcellular regions such as the nucleus, cytoplasm, ribosome, exosome, and so on. However, no model has yet been reported to our knowledge for accurate prediction of synaptically localized RNAs. 
In this study, we developed a new machine learning method named PredSynRNA to predict human synaptically localized RNAs. We compiled a training dataset from previous studies and used RNA sequence and developmental brain gene expression data as features to construct various models with different learning algorithms. Interestingly, the Support Vector Machine (SVM) model using the expression features achieved the best performance. PredSynRNA was then employed to predict and prioritize candidate RNAs, including 1070 mRNAs and 330 lncRNAs, which might be localized to human synapses. Compilation of Training Data Instances Considering the lack of training instances of human dendritic and somatic RNAs, we first collected those from respective lists published in five rodent studies that utilized RNA-sequencing techniques [16][17][18][19][20]. For each RNA instance, we identified the human orthologue using Ensembl BioMart [36]. To improve the quality of the dataset, we examined the overlaps across different studies and only selected the dendritic and somatic RNAs identified by at least two independent studies as potential instances using jvenn [37] ( Figure S1). In addition, any instances that overlapped with potential training positives were excluded from the list of somatic RNAs. The dataset before feature encoding contained 1423 dendritically localized RNAs (positive instances) and 1617 somatically localized RNAs (negative instances). Most, if not all, of the dendritic RNAs were considered to be synaptically localized as axonal RNAs were normally at very low abundance when synaptic transcriptomes were profiled in the previous studies [16][17][18][19][20]. Sequence and Expression Features It was suggested that certain sequence elements might be responsible for mRNA localization to synaptic neuropil [7]. We thus extracted sequence features by calculating the k-mer frequencies of concatenated 5 and 3 UTR of an mRNA transcript (normalized by the sequence length). Protein-coding transcript sequences were downloaded from the GENCODE GRCh38 release 33 [38], and the longest protein-coding sequence was retained. Sequence features derived from different k-mer combinations (k = 1, 2, 3) were examined for model construction ( Figure S2). The gene expression features for each RNA instance were extracted from the BrainSpan Atlas of the Developing Human Brain [39]. The BrainSpan dataset contained the expression profiles of over 52,000 genes in 524 brain tissue samples from 26 brain structures for a series of developmental time points ranging from 8 weeks post-conception (pcw) to 40 years of age. The gene expression levels were represented by Reads Per Kilobase of transcript per Million mapped reads (RPKM). The RNA instances with RPKM > 1 in at least 1% of brain samples were retained, resulting in a training dataset of 1271 positive instances and 1513 negative instances. The expression features were processed by log 2 (RPKM + 1) transformation. The expression and sequence features were also normalized using the min-max method. Feature Selection The high dimensionality of sequence and expression features might lead to model overfitting. Feature selection could be utilized to remove redundant and irrelevant features [40]. It was also of interest to identify and examine the most important features for predicting synaptically localized RNAs. During model training, the importance score of each feature was computed using the Random Forest (RF) algorithm [41]. 
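The feature encodings described in the subsections above (length-normalized k-mer frequencies over the concatenated 5′ and 3′ UTRs, and log2(RPKM + 1)-transformed, min-max-scaled expression values) can be sketched as follows. This is illustrative Python rather than the PredSynRNA code; for brevity, min-max scaling is applied per vector here, whereas the study normalizes features across the dataset, and the toy inputs are placeholders.

```python
from itertools import product
import numpy as np

def kmer_frequencies(seq, ks=(1, 2, 3)):
    """Length-normalized k-mer frequencies of a nucleotide sequence (e.g. concatenated 5'UTR + 3'UTR)."""
    seq = seq.upper().replace("U", "T")
    feats = {}
    for k in ks:
        counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
        n = max(len(seq) - k + 1, 1)
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in counts:
                counts[kmer] += 1
        feats.update({kmer: c / n for kmer, c in counts.items()})
    return feats   # 4 + 16 + 64 = 84 features for k = 1, 2, 3

def encode_expression(rpkm_vector):
    """log2(RPKM + 1) transform followed by min-max scaling to [0, 1]."""
    x = np.log2(np.asarray(rpkm_vector, dtype=float) + 1.0)
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

# toy usage: one transcript's concatenated UTR sequence and a made-up BrainSpan RPKM profile
seq_features = kmer_frequencies("AUGGCUAGCUAGGCUUACGAUCG")
expr_features = encode_expression([0.0, 1.2, 3.5, 12.0, 25.7])
```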
The mean importance scores calculated from five repetitions of 10-fold cross-validations were used to rank and select the most relevant features. The importance scores of expression features were also examined to reveal the significant time points during brain development. Model Training Various machine learning algorithms, including logistic regression (LR), support vector machine (SVM), random forest (RF), XGBoost (XGB), and artificial neural network (ANN), were tested for model construction. LR is a statistical method that finds the best fitting model to describe the relationship between the logit of outcome and a set of independent variables. SVM is a learning algorithm that aims to distinguish two classes by a hyperplane with the maximal margin [42]. RF is an ensemble learning method that constructs a multitude of decision trees for a classification task [41]. XGBoost is an implementation of gradient-boosted decision trees and has fast execution speed and good model performance [43]. In this study, the LR, SVM, and RF models were implemented using Scikit-learn 0.21.2 [44] and XGB with xgboost 0.90. To find the optimal set of parameters for each model, the grid search method was used. The class weights within the parameters were set for the above models to address the imbalance of the training dataset. For the ANN model, different numbers of hidden layers were tested, and the ANN with one hidden layer was chosen in this study (Tables S1 and S2). The optimization of hyperparameters, including hidden units, drop-out rate, and learning rate, was performed using Hyperopt 0.2.4 [45]. The ANN model was implemented with Keras 2.2.4 in Python. Tuned parameters for three final, most representative models used for future analysis are provided in Table S3. Model Testing During model construction, PredSynRNA performance was evaluated by five repetitions of 10-fold cross-validations, in which the training dataset was randomly divided into 10 equal-sized subsets: one holdout subset for testing and the remaining nine subsets for training [46]. For the ANN model, an additional step of bootstrap resampling was used to obtain a balanced dataset before 10-fold cross-validations. Furthermore, an independent test dataset was collected from a previous study on the somato-dendritic localization of mRNAs in mouse hippocampus [47] and used to validate the generalization ability of PredSynRNA. Any instances in the training dataset were excluded from the independent test dataset. Sequence and expression features were extracted in the same way as for the training instances. This test dataset contained 613 positive instances and 925 negative instances. Performance Metrics The performance metrics used in this study are as follows: True positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) are tabulated and used to calculate the performance metrics shown above. The Matthews correlation coefficient (MCC) measures the correlation between the predicted and actual classifications on a scale of 0 ≤|MCC| ≤ 1 [48]. The receiver operating characteristic (ROC) curve plots the true positive rate (sensitivity) versus the false positive rate (1-specificity) for varying output thresholds of the model. The ROC curve and the area under the curve (ROC-AUC) are considered the most robust measures of model performance [49]. 
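As a sketch of the training and evaluation protocol just described (class-weighted models tuned by grid search, five repetitions of 10-fold cross-validation, and ROC-AUC and MCC among the metrics), the following scikit-learn snippet uses random placeholder data. It is not the authors' implementation, and the parameter grid and data shapes are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold, cross_validate
from sklearn.metrics import make_scorer, matthews_corrcoef

rng = np.random.default_rng(0)
X = rng.random((200, 50))            # placeholder feature matrix
y = rng.integers(0, 2, 200)          # placeholder labels (1: dendritic, 0: somatic)

# grid search for SVM hyperparameters, with class weights to handle the imbalanced training set
grid = GridSearchCV(
    SVC(class_weight="balanced", probability=True),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
    scoring="roc_auc", cv=5,
).fit(X, y)

# five repetitions of stratified 10-fold cross-validation with ROC-AUC and MCC
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = cross_validate(
    grid.best_estimator_, X, y, cv=cv,
    scoring={"roc_auc": "roc_auc", "mcc": make_scorer(matthews_corrcoef)},
)
print(scores["test_roc_auc"].mean(), scores["test_mcc"].mean())
```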
Prediction and Analysis of Candidate RNAs Localized to Human Synapses

After model validation, PredSynRNA was applied to the prediction of synaptically localized candidate RNAs from a list of brain-expressed RNAs, including 7046 mRNAs and 3331 lncRNAs. The top three PredSynRNA models with the best performance in cross-validations and on the independent test dataset were used to predict the probability of a given RNA transcript being synaptically localized, with the default probability threshold of 0.5. The positive predictions shared by all three models were referred to as the high-confidence list of candidate RNAs. To understand the biological processes or cellular functions in which the high-confidence candidates might be involved, we performed functional annotation clustering analysis using DAVID Bioinformatics Resources 6.8 with the list of brain-expressed genes as the background [50]. High classification stringency was used, and the EASE score, referring to the one-tail Fisher exact probability value for the enrichment analysis, was set to 0.01. The high-confidence list of candidate RNAs was also compared with the SynGO gene list, which included 1112 synaptic genes based on gene ontology (GO) annotations and published, expert-curated evidence [51]. GSEAPreranked analysis (GSEA 4.1.0) with default parameters [52] was performed to examine the enrichment of SynGO genes in the ranked list of brain-expressed RNAs according to the probability scores predicted by PredSynRNA.

Results

The machine learning task in this study can be defined as a binary classification problem, and our method, PredSynRNA, is illustrated in Figure 1. Dendritically and somatically localized RNAs were compiled from previous rodent studies [16-20] (Figure S1) due to the lack of published RNA instances in human neurons. Human orthologues were identified, and the RNAs shared in at least two studies were selected and taken as training instances. For feature encoding, the k-mer frequencies of RNA transcript sequences and the developmental brain gene expression profiles from the BrainSpan Atlas of the Developing Human Brain [39] were used to construct models with different learning algorithms. A Random Forest-based method was used for feature selection, and model performance was evaluated by 10-fold cross-validations and an independent test dataset. The best models were then utilized to predict and prioritize synaptically localized candidate RNAs.
Figure 1. Schematic diagram of PredSynRNA for prediction of synaptically localized RNAs. First, dendritically and somatically localized RNAs were compiled from previous rodent studies. Then, human orthologues were identified and taken as the training instances. Second, features were extracted from RNA sequence and developmental brain gene expression data. Third, feature selection was conducted using a Random Forest-based method. Fourth, various machine learning models were constructed and evaluated. Lastly, the best models were applied to the prediction and prioritization of candidate RNAs that may be localized to human synapses.

Prediction of Synaptically Localized RNAs Using Sequence and Expression Features

We first constructed and evaluated various machine learning models using sequence features in terms of k-mer frequencies. Figure 2 shows the ROC and precision-recall (PR) curves of the SVM, ANN, and RF models using a combination of 1-mer, 2-mer, and 3-mer frequencies. Since the training dataset was imbalanced, PR curves were used to show the models' ability to predict positive instances [53]. A full comparison of the models using different sequence features is shown in Figure S2. The SVM model appeared to slightly outperform the ANN and RF models and achieved the ROC-AUC of 0.644 and PR-AUC of 0.582 (Figure 2 and Table 1). Although the different machine learning models using sequence features did not show good performance, they achieved higher ROC-AUC values than random guesses (ROC-AUC = 0.5), suggesting that the 5′ and 3′ UTRs might contain some relevant information for predicting synaptically localized RNAs.
Table 1. Performance metrics of models using different features and learning algorithms. Support vector machine (SVM), artificial neural network (ANN), and random forest (RF) models achieved better performance using the expression features than the sequence features based on five repetitions of 10-fold cross-validations. (Columns: Features, Model, ROC-AUC, Accuracy, Sensitivity, Specificity, F1, MCC.)

Next, we built different machine learning models using developmental brain gene expression features. Based on the ROC and PR curves from 10-fold cross-validations, the expression-based models clearly outperformed the sequence-based models (Figures 2 and S3). Particularly, the expression-based SVM model achieved the ROC-AUC of 0.771 and PR-AUC of 0.758, considerably higher than those of the sequence-based SVM model (Figure 2, Tables 1 and S4). The results suggest that developmental brain gene expression profiles contain highly relevant information for predicting synaptically localized RNAs. However, model performance was not further improved by combining the expression features with the inherently different sequence features (Table S4).

Relevant Expression Features Learned by PredSynRNA

Feature selection was performed in this study to potentially improve model performance and to identify the most relevant features for predicting synaptically localized RNAs. We first computed the importance score of each feature using the RF-based method and then utilized the top-ranked expression or sequence features to build various machine learning models. However, when compared with using the full feature sets, feature selection did not significantly improve the performance of the expression- or sequence-based models (Figures S4 and S5; Tables S4 and S5). For the expression features, as the dimensionality increased to 192 features, the models with different learning algorithms appeared to reach close to the maximum performance (Figure S4), suggesting that the top-ranked expression features captured most of the relevant information for predicting synaptically localized RNAs. We examined the expression features, which included a series of developmental time points and brain tissue types of the samples in the BrainSpan dataset. As shown in Figure 3, the top three developmental time points based on the importance scores of the expression features included 2 years, 35 post-conception weeks, and 8 years, whereas the top three brain tissue types were found to be the orbital frontal cortex (OFC), hippocampus (HIP), and primary somatosensory cortex (S1C). The OFC is a prefrontal cortex region, which is critical in many aspects of brain function, including cognitive abilities, decision making, emotional processing, semantic memory, and language [54,55]. The HIP plays a key role in memory, learning, and spatial orientation [56]. The S1C is part of the somatosensory system, which is known for processing various somatosensory inputs from the body and has recently been shown to be involved in emotional regulation [57].
Taken together, our findings from feature selection are generally consistent with the knowledge that an explosion of synaptogenesis occurs in cortical regions during early brain development [58,59], further suggesting that the PredSynRNA models have learned relevant expression features for predicting synaptically localized RNAs. Figure 3. Visualization of the RF-based importance scores for the developmental brain gene expression features. The expression features with no importance score were excluded, and the importance scores of expression features in the same developmental time point and tissue type were averaged. The labels on the x-axis and y-axis have been arranged in descending orders based on the importance ranks of developmental time points and brain tissue types, respectively (pcw: post conception week; mos: months; yrs: years).
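The Random Forest-based importance ranking and the per-time-point/per-region averaging behind Figure 3 can be sketched as follows. The expression matrix `X_expr`, its column naming scheme, and the cut-off of 192 features are illustrative assumptions rather than the exact BrainSpan processing used in the study.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# X_expr: RNAs x BrainSpan expression features as a DataFrame; columns named "<age>|<region>",
# e.g. "2 yrs|OFC". y: 1 = dendritic, 0 = somatic. Both are placeholders.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_expr, y)

importance = pd.Series(rf.feature_importances_, index=X_expr.columns)

# Keep the top-ranked features (the text reports performance plateauing around 192 of them).
top_features = importance.sort_values(ascending=False).head(192).index
X_selected = X_expr[top_features]

# Average importance scores over developmental time point and brain tissue type,
# as done for the heat map in Figure 3 (features with zero importance excluded).
nonzero = importance[importance > 0]
annot = nonzero.index.to_series().str.split("|", expand=True)
annot.columns = ["age", "region"]
annot["importance"] = nonzero.values
heatmap = annot.pivot_table(index="region", columns="age",
                            values="importance", aggfunc="mean")
print(heatmap.round(4))
```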
Evaluation of Model Performance on an Independent Test Dataset To further evaluate the predictive performance of the models, we compiled an independent test dataset with 613 positive instances and 925 negative instances, which were not included in the training dataset. Notably, almost all the tested models achieved performance on the independent test dataset comparable to that in cross-validations (Figure 4 and Table S5). The performance metrics of the SVM, ANN, and RF models using the full expression features are depicted in Figure 4. Interestingly, when compared with model performance in cross-validations, the SVM and RF models achieved slightly higher ROC-AUC, accuracy, and MCC on the independent test dataset, whereas the ANN model showed slightly reduced performance, probably because an ANN can be easily overfitted on a small training dataset, which may affect its generalization ability. In addition, feature selection did not improve model performance on the independent test dataset (Table S5). Overall, the results confirmed the predictive capability of the PredSynRNA models using developmental brain gene expression data.
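A hedged sketch of the held-out evaluation reported above: the three classifiers, their hyperparameters, and the placeholder arrays `X_train`, `y_train`, `X_test`, and `y_test` stand in for the actual training data and the 613/925 independent test instances, so this is only an illustration of the workflow, not the configuration behind Table S5.

```python
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (roc_auc_score, accuracy_score,
                             matthews_corrcoef, average_precision_score)

models = {
    "SVM": SVC(kernel="rbf", probability=True, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
}

# X_train/y_train: training instances; X_test/y_test: the independent test set.
# All four arrays are placeholders here.
for name, model in models.items():
    model.fit(X_train, y_train)
    prob = model.predict_proba(X_test)[:, 1]
    pred = (prob >= 0.5).astype(int)
    print(name,
          "ROC-AUC=%.3f" % roc_auc_score(y_test, prob),
          "PR-AUC=%.3f" % average_precision_score(y_test, prob),
          "ACC=%.3f" % accuracy_score(y_test, pred),
          "MCC=%.3f" % matthews_corrcoef(y_test, pred))
```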
Prediction and Prioritization of Candidate Human RNAs Localized to Synapses To identify synaptically localized candidate RNAs, we applied the SVM, ANN, and RF models trained with the full expression features to classify a list of 10,377 brain-expressed RNAs, including 7046 mRNAs and 3331 lncRNAs. Overall, 2747, 1348, and 2777 mRNAs were predicted to be synaptically localized mRNAs by the SVM, ANN, and RF models, respectively (Figure S6 and Table S6). Particularly, 1070 candidate mRNAs were shared by the three lists of predictions. Moreover, 330 lncRNAs were commonly predicted by the three PredSynRNA models (Figure S7 and Table S7). These common predictions were regarded as high-confidence candidate RNAs that may be localized to human synapses. To characterize the high-confidence candidates, we performed DAVID functional annotation clustering analysis [50]. As shown in Figure 5, six functional terms were found to be significantly enriched in the candidate list, including extracellular exosome, mitochondrial part, and ribosomal subunit as the top three gene ontology (GO) terms. Exosomes, a class of extracellular vesicles, have been shown to play key roles in the central nervous system, synaptic plasticity, and inter-neuronal communication [60,61]. At the synapse, membrane-bound vesicles store neurotransmitters, enabling the transfer of information between neuron cells [62]. In addition, neurons highly rely on aerobic oxidative phosphorylation together with the principal energy producers, mitochondria, to support synapse dynamics. The dysfunctions of these crucial factors may contribute to the pathology associated with neurodegenerative disorders such as Alzheimer's disease [63,64]. Moreover, differential expression analysis in a previous study [18] suggested that the mitochondrial membrane, ribosomal subunit, and electron transport chain are among the top GO terms enriched in dendrites. Therefore, the results demonstrated a significant association between the candidate RNAs and synapse-related functions. Figure 5. Functional terms enriched within the high-confidence candidate RNAs. The DAVID functional annotation clustering analysis was performed for the list of 1400 high-confidence candidate RNAs predicted by PredSynRNA to be synaptically localized. GO terms (GOTERM_BP_4, GOTERM_CC_4, and GOTERM_MF_4) were used for the functional analysis. For each annotation cluster, the most enriched GO term, its gene count, and statistical significance are shown in the diagram.
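The consensus step described earlier in this section, in which candidates called by all three models are retained, reduces to a set intersection over the per-model predictions. A minimal sketch, assuming already-fitted models in a `models` dictionary and placeholder inputs `brain_ids` and `X_brain` for the brain-expressed RNAs:

```python
# brain_ids: list of brain-expressed RNA identifiers; X_brain: their expression features;
# models: dict of the fitted SVM/ANN/RF classifiers. All three are placeholders.
predicted = {}
for name, model in models.items():
    prob = model.predict_proba(X_brain)[:, 1]
    predicted[name] = {rna for rna, p in zip(brain_ids, prob) if p >= 0.5}

high_confidence = set.intersection(*predicted.values())
print(len(high_confidence), "RNAs predicted as synaptically localized by all three models")
```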
To further examine the functional association with synapses, we compared the candidate RNAs with a set of 1112 human synaptic genes curated by the SynGO database [51]. The list of 10,377 brain-expressed RNAs was ranked by the mean probability scores predicted by the SVM, ANN, and RF models of PredSynRNA, and the enrichment of synaptic genes in the ranked list was analyzed using the GSEAPreranked algorithm [52]. As shown in Figure 6, the synaptic genes from SynGO are significantly enriched near the top of the ranked list, where the candidate RNAs are located. The enrichment score (ES) reaches the maximum (0.2373) near the top of the ranked list, and the nominal p-value is estimated to be zero (actual p-value < 0.001). A list of 82 SynGO synaptic genes showing core enrichment is provided in Table S8. Taken together, our results suggest that the PredSynRNA models can be used to prioritize the candidate RNAs for investigating their functional roles in human synapses.
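The ranking and preranked enrichment test described above can be approximated with a short script. The running-sum statistic below follows the standard GSEA-style weighted Kolmogorov-Smirnov form and uses a simple gene-set permutation for the p-value, so it is only a stand-in for the GSEAPreranked implementation; `mean_scores` and `syngo_genes` are placeholders for the data behind Figure 6.

```python
import numpy as np

def enrichment_score(ranked_genes, scores, gene_set, p=1.0):
    """Weighted Kolmogorov-Smirnov-like running sum (GSEA-style enrichment score)."""
    hits = np.array([g in gene_set for g in ranked_genes])
    weights = np.abs(np.asarray(scores)) ** p
    n_miss = len(ranked_genes) - hits.sum()
    step_hit = np.where(hits, weights / weights[hits].sum(), 0.0)
    step_miss = np.where(hits, 0.0, 1.0 / n_miss)
    running = np.cumsum(step_hit - step_miss)
    return running[np.argmax(np.abs(running))]

# mean_scores: dict {gene: mean probability over SVM/ANN/RF}; syngo_genes: set of
# SynGO-annotated synaptic genes. Both are placeholders.
ranked = sorted(mean_scores, key=mean_scores.get, reverse=True)
scores = [mean_scores[g] for g in ranked]
es = enrichment_score(ranked, scores, syngo_genes)

# Null distribution from random gene sets of the same size (a simplification of the
# empirical permutation procedure used by GSEAPreranked).
rng = np.random.default_rng(0)
null = [enrichment_score(ranked, scores,
                         set(rng.choice(ranked, size=len(syngo_genes), replace=False)))
        for _ in range(1000)]
p_val = (np.sum(np.array(null) >= es) + 1) / (len(null) + 1)
print(f"ES = {es:.4f}, permutation p = {p_val:.3f}")
```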
Figure 6. Significant enrichment of synaptic genes in the ranked list of candidate RNAs. The list of 10,377 brain-expressed RNAs was ranked by the mean probability scores predicted by the SVM, ANN, and RF models using the full set of expression features. The GSEAPreranked analysis [51] was then performed for a set of 1112 human synaptic genes obtained from the SynGO database [50]. The enrichment score (ES) reaches the maximum (0.2373) near the top of the ranked list, and the nominal p-value is estimated to be zero by an empirical phenotype-based permutation test procedure (actual p-value < 0.001 with 1000 permutations). Discussion RNA localization to synapses is not only regarded as one of the driving forces for developmental changes in the brain but is also implicated in neurological diseases. While machine learning methods have been developed for predicting RNA localization to multiple cellular compartments [29][30][31][32][33][34][35], such predictors are still lacking for synaptically localized RNAs. In this study, we developed a new machine learning method, PredSynRNA, to predict the synaptic localization of human RNAs. PredSynRNA models utilized developmental brain gene expression data as features and achieved relatively high performance in cross-validations and on an independent test dataset. Our results also suggest that the models can capture relevant expression features for predicting and prioritizing candidate RNAs localized to human synapses. However, the performance of PredSynRNA might be limited due to the lack of experimentally verified human RNA instances for model training. To construct the models, we used human orthologues of rodent RNAs identified by previous studies, which had only a small number of dendritic RNAs in common. Thus, PredSynRNA model performance may be further improved by compiling a more comprehensive and high-quality training dataset for this difficult machine learning task in the future. Despite the limited and noisy training data, PredSynRNA models using the developmental brain gene expression features achieved relatively high performance for predicting synaptically localized RNAs. However, the addition of RNA sequence features in terms of k-mer frequencies did not further improve model performance. This is rather surprising as many previous studies attempted to identify potential localization elements present in the untranslated regions, mostly the 3′ UTRs of mRNA transcripts in neurites. Since these elements can be heterogeneous to a great extent in size and structure, it may be hard to predict and deduce the consensus sequence or structural motifs [65]. Moreover, mRNA localization in neurites can also be affected by alternative splicing and polyadenylation. Previous studies have also shown that neuronal mRNAs are prone to have diverse 3′ UTR isoforms, which differ in subcellular locations, including soma and neurites [19,20,66,67].
Therefore, simple sequence features such as k-mer frequencies may not be able to delineate the complex patterns of RNA localization to synapses. Nevertheless, the results do not necessarily mean that RNA transcript sequences do not contain relevant information for predicting synaptically localized RNAs. In future studies, state-of-the-art deep learning techniques may be utilized to uncover the sequence patterns that determine RNA localization to synapses. It is noteworthy that deep learning techniques have been used to identify sequence motifs for mRNA subcellular localization to the nucleus, cytosol, endoplasmic reticulum, and exosome [31,32]. RNATracker [31] implemented a convolutional neural network (CNN) coupled with bi-directional long short-term memory (LSTM) layers to learn and extract sequence information for predicting mRNA subcellular localization, and the weights learned by the first CNN layer were converted into position-weight matrices and matched with known motifs of RNA-binding proteins to reveal the localization zip codes. DM3Loc [32] employed multiscale CNN filters and multi-head self-attention layers to infer the localization zip codes. However, the lack of high-quality localization data and the complexity of alternative splice variants for synaptically localized RNAs make it difficult to apply sophisticated deep learning techniques. Since the robust performance of PredSynRNA was demonstrated by cross-validations and using an independent test dataset, the models were then utilized to predict and prioritize candidate RNAs, mostly mRNAs, which might be localized to human synapses. Interestingly, the top five candidate mRNAs include RPL8, MZT2B, RPS20, TMEM219, and HBB (Table S6). RPL8 has been identified as one of the candidate proteins significantly associated with the prognosis of glioblastoma, the most aggressive brain cancer, and with temozolomide treatment [68]. MZT2B has been reported to be one of the potential hippocampus genes associated with Alzheimer's disease [69]. RPS20 has been suggested as a candidate gene associated with medulloblastoma, the most common malignant brain tumor in children [70]. The TMEM219 gene is located in a multigenetic copy number variation region (16p11.2) associated with several brain disorders, including schizophrenia, seizure, and Alzheimer's disease [71][72][73]. HBB has been shown to be in mitochondrial fractions of mammalian neurons and involved in neuronal metabolism to provide neuroprotection in multiple sclerosis [74][75][76][77]. The PredSynRNA models have also been used to predict a list of synaptically localized candidate lncRNAs, including SNHG8 and MALAT1 (Table S7). As the full set of human synaptic RNAs remains largely unclear, we anticipate that the high-confidence candidate RNAs predicted by PredSynRNA can provide valuable targets for further experimental studies. However, it should be noted that the human brain is the most complex organ, which comprises different cell types of great diversity [78]. Although PredSynRNA has been trained using the developmental brain gene expression data with most samples derived from cortex regions that tend to have high neuronal enrichment, the predicted candidate RNAs may also be expressed in other non-neuronal cell types such as glial cells.
With the accumulation of single-cell RNA-seq data, which provide fine resolution in examining cellular compositions and dynamics during brain development [78,79], PredSynRNA may be further refined by incorporating comprehensive, high-quality cell-type specific data in the future. Conclusions In this study, we developed a new machine learning method, PredSynRNA, to predict the synaptic localization of human RNAs. The PredSynRNA model utilized developmental brain gene expression data as features to achieve relatively high performance during cross-validations and on an independent test dataset. Our results also suggest that the model is capable of capturing relevant expression features and can be used to predict and prioritize candidate RNAs localized to human synapses. In the future, PredSynRNA model performance may be further improved by compiling and curating a more comprehensive and high-quality training dataset for this difficult machine learning task.
9,283.4
2022-08-01T00:00:00.000
[ "Biology", "Computer Science" ]
Dynamic 15N{1H} NOE measurements: a tool for studying protein dynamics Intramolecular motions in proteins are one of the important factors that determine their biological activity and interactions with molecules of biological importance. Magnetic relaxation of 15N amide nuclei allows one to monitor motions of protein backbone over a wide range of time scales. 15N{1H} nuclear Overhauser effect is essential for the identification of fast backbone motions in proteins. Therefore, exact measurements of NOE values and their accuracies are critical for determining the picosecond time scale of protein backbone. Measurement of dynamic NOE allows for the determination of NOE values and their probable errors defined by any sound criterion of nonlinear regression methods. The dynamic NOE measurements can be readily applied to non-deuterated or deuterated proteins in both HSQC and TROSY-type experiments. Comparison of the dynamic NOE method with the commonly applied steady-state NOE is presented in measurements performed at three magnetic field strengths. It is also shown that an improperly set NOE measurement cannot be restored with correction factors reported in the literature. Electronic supplementary material The online version of this article (doi:10.1007/s10858-020-00346-6) contains supplementary material, which is available to authorized users. Introduction Since the first use of magnetic relaxation measurements of 15 N nuclei applied to a protein, the staphylococcal nuclease (Kay et al. 1989), this method has become indispensable in the determination of molecular motions in biopolymers (Jarymowycz and Stone 2006; Kempf and Loria 2003; Palmer III 2004; Reddy and Rayney 2010; Stetz et al. 2019). The canonical triad of relaxation parameters-longitudinal (R 1 ) and transverse (R 2 ) relaxation rates accompanied by the 15 N{ 1 H} nuclear Overhauser effect (NOE)-has been most often used in studies investigating the mobility of backbone in proteins. It is a common opinion that 15 N{ 1 H} NOE is unique among the mentioned three relaxation parameters because it is regarded as essential for the accurate estimation of the spectral density function at high frequencies (ω H ± ω N ), and it is crucial for the identification of fast backbone motions (Idiyatullin et al. 2001; Gong and Ishima 2007; Ferrage et al. 2009). The most common method for the determination of X{ 1 H} NOE is a steady-state approach. It requires measurements of the longitudinal polarization at the thermal equilibrium of the spin X system, S 0 , and the steady-state longitudinal X polarization under 1 H irradiation, S sat (Noggle and Schirmer 1971). Note that the nuclear Overhauser effect, defined as ε = S sat /S 0 , should not be mistaken with the nuclear Overhauser enhancement, η = (S sat − S 0 )/S 0 = ε − 1 (Harris et al. 1997). It has to be pointed out that NOE measurements appear to be very demanding and artifact prone observations. One of the severe obstacles in these experiments is their ca. tenfold lower sensitivity in comparison to R 1N and R 2N , which is due to the fact that the NOE experiments with 1 H detection start with the equilibrium 15 N magnetization rather than 1 H. The steady-state 15 N{ 1 H} NOEs (ssNOE) are normally determined as a ratio of cross-peak intensities in two experiments-with and without saturation of H N resonances.
Such an arrangement creates problems with computing a statistically validated assessment of experimental errors. The 15 N{ 1 H} NOE pulse sequence requires very careful design as well. Properly chosen recycle delays between subsequent scans and the saturation time of H N protons have to take into account the time needed to reach the equilibrium or stationary values of 15 N and H N magnetizations (Harris and Newman 1976; Canet 1976; Renner et al. 2002). Exchange of H N protons with the bulk water combined with the long longitudinal relaxation time of water protons leads to a prolonged recycle delay in the spectrum acquired without saturation of H N resonances. Unintentional irradiation of the water resonance suppresses H N and other exchangeable signals owing to the saturation transfer and many non-exchangeable 1 H resonances via direct or indirect NOE with water (Grzesiek and Bax 1993), while interference of DD/CSA relaxation mechanisms of 15 N amide nuclei disturbs the steady-state 15 N polarization during 1 H irradiation (Ferrage et al. 2009). All aforementioned processes depend directly or indirectly on the longitudinal relaxation rates of amide 1 H and 15 N nuclei, R 1H and R 1N , as well as the longitudinal relaxation rate of water protons, R 1W , and the exchange rate between water and amide protons, k. In this study, the dynamic NOE experiment (DNOE), a forgotten method of NOE determination in proteins, was experimentally tested, and the results were compared with independently performed steady-state NOE measurements at several magnetic fields for the widely studied, small, globular protein ubiquitin. Additionally, several difficulties inherent in 15 N{ 1 H} NOEs and methods for overcoming or minimizing these difficulties are cautiously discussed. Experimental The uniformly labeled U-[ 15 N] human ubiquitin was obtained from Cambridge Isotope Laboratories, Inc. in lyophilized powder form and dissolved to 0.8 mM protein concentration in buffer containing 10 mM sodium phosphate at pH 6.6 and 0.01% (m/v) NaN 3 . DSS-d 6 of 0.1% (m/v) in 99.9% D 2 O was placed in a sealed capillary inserted into the 5 mm NMR tube. Amide resonance assignments of ubiquitin were taken from BioMagResBank (BMRB) using the accession code 6457 (Cornilescu et al. 1998). NMR experiments were performed on three Bruker Avance NEO spectrometers operating at 1 H frequencies of 700, 800 and 950 MHz equipped with cryogenic TCI probes. The temperature was controlled before and after each measurement with an ethylene glycol reference sample (Rainford et al. 1979) and was set to 25 °C. The temperature was stable with a maximum detected deviation of ± 0.3 °C. Chemical shifts in the 1 H NMR spectra were reported with respect to external DSS-d 6 , while chemical shifts of the 15 N signals were referenced indirectly using a frequency ratio of 0.101329118 (Wishart et al. 1995). The spectral widths were set to 12 ppm and 22 ppm for 1 H and 15 N, respectively. The numbers of complex data points collected for the 1 H and 15 N dimensions were 2048 and 200, respectively. In each experiment, 8 scans were accumulated per FID. Double zero filling and a 90°-shifted squared sine-bell filter were applied prior to Fourier transformation. Data were processed using the program nmrPipe (Delaglio et al. 1995) and analyzed with the program SPARKY (Goddard and Kneller).
Resonance intensities were used in calculating relaxation times and NOE values, obtained from a nonlinear least-squares analysis performed using Fortran routines written in-house, based on the Newton-Raphson algorithm (Press et al. 2007). The pulse programs used in this work were based on the HSQC-type R 1 ( 15 N) and 15 N{ 1 H} NOE experiments (Lakomek et al. 2012). The carrier frequency during 1 H saturation with 22 ms spaced 180° hard pulses on 1 H was moved from the water frequency to the centre of the amide region (8.5 ppm). Evolution times in R 1 ( 15 N) and dynamic NOE experiments were collected in random order. Reproducibility of experiments was excellent. Therefore, the interleaved mode was not used since it could introduce instabilities of water magnetization (Renner et al. 2002). The list of delays applied in the experiments used in this work is given in Table S3. Dynamic NOE measurement-introduction It can be concluded from the Solomon equations (Solomon 1955) that in the heteronuclear spin system X-H, the heteronuclear Overhauser effect is built up with the rate R 1 (X) under the condition of proton saturation, as shown for the 13 C-1 H spin system (Kuhlmann et al. 1970; Kuhlmann and Grant 1971). As a consequence of this observation, a dynamic NOE was employed for the simultaneous determination of R 1 ( 13 C) and 13 C{ 1 H} NOE using Eq. (1), S(t) = S 0 [1 + (ε − 1)(1 − exp(−R 1 t))]. Measurements of time-dependent changes of signal intensities S(t) allow for the determination of ε, R 1 , and their probable errors, as defined by any standard criterion of nonlinear regression methods. The DNOE can be especially beneficial in studying nuclei with negative magnetogyric ratios since, in unfavorable circumstances, nulling of the resonance in a proton-saturated spectrum can occur. Therefore, the DNOE has been successfully used in relaxation studies of 29 Si (Kimber and Harris 1974; Ejchart et al. 1992) and 15 N (Levy et al. 1976) nuclei in organic molecules. The 15 N-DNOE has been also investigated in a small protein (Zhukov and Ejchart 1999). This approach can be especially profitable in studies of medium to large size proteins displaying highly dynamic fragments. Time schedule of NOE measurement Both nitrogen polarizations, S sat and S 0 , depend on a number of physical processes in the vicinity of amide nitrogen nuclei. Dipolar interaction between 15 N and 1 H N brings about the nuclear Overhauser effect. Additional processes, such as the chemical shift anisotropy relaxation mechanism of 15 N and its interference with the 15 N/ 1 H N dipolar interaction, direct NOE, and saturation transfer from water to 1 H N protons due to chemical exchange, influence both nitrogen polarizations, especially if the pulse sequence itself results in a non-equilibrium state of water protons. Presaturation of the water resonance, resulting in partial saturation of water magnetization, attenuates 1 H N signal intensities mostly through the chemical exchange or through homonuclear NOE with water protons (Grzesiek and Bax 1993; Lakomek et al. 2012). Therefore, evolution of the spin system towards the S sat or S 0 nitrogen polarizations depends on the rates of the processes mentioned above, the longitudinal relaxation rates of 15 N, 1 H N , and water protons, R 1N , R 1H , and R 1W , and the chemical exchange rate, k, between amide and water protons. These rates strongly determine the time schedule of NOE measurements, which is schematically shown in Fig. 1. Hence, their knowledge is a prerequisite for the choice of optimal delays.
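To make the nonlinear-regression step behind Eq. (1) concrete, the following sketch fits the DNOE buildup for a single residue. The exponential form assumed here, S(t) = S 0 [1 + (ε − 1)(1 − exp(−R 1N t))], is the standard Solomon-equation result for magnetization evolving from S 0 towards εS 0 under proton saturation; the data arrays are invented placeholders, and SciPy merely stands in for the in-house Fortran routines used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def dnoe_buildup(t, s0, eps, r1n):
    """15N intensity during 1H saturation: starts at S0 and relaxes towards eps*S0 with rate R1N."""
    return s0 * (1.0 + (eps - 1.0) * (1.0 - np.exp(-r1n * t)))

# d_sat: saturation periods in seconds; intensity: cross-peak intensities for one residue.
# Both arrays are illustrative placeholders, not experimental values from the paper.
d_sat = np.array([0.0, 0.2, 0.5, 1.0, 2.0, 3.0, 4.0])
intensity = np.array([1.00, 0.97, 0.93, 0.88, 0.83, 0.81, 0.80])

popt, pcov = curve_fit(dnoe_buildup, d_sat, intensity, p0=(intensity[0], 0.8, 1.5))
s0, eps, r1n = popt
s0_err, eps_err, r1n_err = np.sqrt(np.diag(pcov))
print(f"epsilon = {eps:.3f} +/- {eps_err:.3f}, R1N = {r1n:.2f} +/- {r1n_err:.2f} s-1")

# If R1N is known from a dedicated R1 experiment, it can be fixed ("sequential" processing),
# leaving only S0 and epsilon as fitted parameters.
```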
The numerical data of R 1H and R 1W for the sample studied here are given in Table 1. Nevertheless, one should be aware that R 1W depends on temperature, pH, and protein concentration. Residue specific R 1N values for the ubiquitin sample will be discussed further. In the noNOE reference measurement, 15 N nuclei have to reach the thermal equilibrium at the end of delay RD 1 . During the block denoted as measurement in Fig. 1, the pulse sequence resulting in the 2D 15 N/ 1 H spectrum with the desired cross peak intensities is executed. At the start of acquisition, several coupled relaxation processes take place, resulting in multi-exponential decay of 15 N, 1 H N , and water protons (Ferrage et al. 2008). Keeping in mind that R 1W is much smaller than the rates of other processes, it can be reasonably assumed that the R 1W rate mainly defines RD 1 . Fulfillment of condition (2), in which the factor 0.02 has been chosen to some extent arbitrarily, should properly determine RD 1 values in most of the cases. Still, one has to be aware that the smallest decay rate resulting from the exact solution of the full relaxation matrix can be smaller than R 1W . In the NOE measurement, the buildup of 15 N magnetization takes place with the rate R 1N . 15 N relaxation rates can be, however, broadly dispersed if the mobilities of N-H vectors in a studied molecule differ significantly. Therefore, a compromise may be required to meet the corresponding condition (cf. Table S1). Experiments of steady-state and dynamic NOE measurements differ in the RD 2 setting. In the case of steady-state NOE, the value RD 2 = 0 is adequate. Even if the nitrogen polarization displays a nonzero value at the beginning of the D sat period, it will still have enough time to reach the steady-state condition. In dynamic NOE, however, the nitrogen polarization has to start from a closely controlled thermal equilibrium. Therefore, condition (2) with RD 1 replaced with RD 2 has to be fulfilled. The description (RD 1 -RD 2 -D sat )/B 0 will be further adopted to characterize particular NOE experiments used in this work. Analysis of systematic errors resulting from an incorrect delay setting in NOE values, ε = S sat /S 0 , for nuclei with γ < 0 should take into account that these errors can be caused by false S 0 values and/or S sat values. The apparent S 0,app value in a not fully relaxed spectrum is always smaller than the true equilibrium value S 0 . On the other hand, the nonequilibrium apparent S sat,app value is always larger than the equilibrium value S sat , i.e. more positive for ε > 0 or less negative for ε < 0. The joint effect of erroneous S sat and S 0 , however, does not always result in the relation ε app > ε, as could be hastily concluded. An attenuated S 0 value in conjunction with a properly determined, negative S sat results in ε app < ε, and this is experimentally confirmed by ε values observed for the C-terminal, mobile residue G76. Its values obtained in the measurements free of systematic errors and in the erroneous ones are compared in Figure 8. Such misleading behavior could be expected for mobile residues in flexible loops, unstructured termini, or intrinsically disordered proteins. Setup and data processing of DNOE measurement The relation between signal intensities and evolution times D sat in a dynamic NOE experiment depends on three parameters: the nuclear Overhauser effect, ε, the nitrogen longitudinal relaxation rate, R 1N , and the signal intensity at the thermal equilibrium, S 0 (Eq. 1).
Provided that the longitudinal relaxation rates have been previously obtained in a separate experiment, their values can be entered in Eq. 1, reducing the number of determined parameters in a computational task further denoted as a sequential one. The influence of the propagation of R 1N errors on the ε values is usually negligible; variation of R 1N values within the range ± σ (standard deviation) typically results in dε changes smaller than 10⁻⁵ except for residues exhibiting ε < 0.4 (Figs. S1, S2). In ubiquitin, such residues are located at the C-terminus. This behavior is attributed to the stronger correlation between the ε and R 1N parameters owing to the increased range of signal intensities for smaller ε values (Fig. 2). Another possibility of data processing, simultaneous use of dynamic NOE and relaxation rate data in one computational task, brings about results (ε and dε values) practically identical to those obtained in the sequential task. The dynamic NOE data can also be used without support from separate R 1N data. Such data processing delivers ε values and their errors close to those resulting from the sequential or simultaneous approach (Figs. S3, S5). On the other hand, the derived R 1 relaxation rates are less accurate, with errors an order of magnitude larger than those obtained in the dedicated R 1 experiment (Figs. S4, S6). Therefore, a dynamic NOE measurement cannot be regarded as a complete equivalent of a separate R 1 experiment. Numerical data for three different data processing methods of dynamic NOE at 22.3 T are given in Table S2, and a comparison of the discussed numerical results is given in Table 3. Recently, an experimentally demanding TROSY-based pulse sequence dedicated to deuterated proteins has been invented for simultaneous measurement of R 1N relaxation rates and ε values. The accuracy of the proposed technique has been verified by comparison to the results of both relaxation parameters measured conventionally (O'Brien and Palmer III 2018). Dynamic NOE measurements, as with relaxation rate experiments, require optimization of the number and lengths of saturation periods, D sat . One important assumption in the selection of D sat values is to sample a broad range of intensities I(t) ~ S(t) in a uniform manner. The shortest D sat , equal to zero, delivers I 0 ~ S 0 . The longest D sat should be as close to a value fulfilling the condition (2) as is practically feasible (cf. Table S1). These assumptions were checked on the DNOE measurement comprising 11 delays. Next, the number of delays was reduced to seven and then to four selected delays, and the results were compared. Apparent NOE values and their standard deviations changed only slightly. Residue specific differences in ε values between the full experiment and each of the reduced ones were smaller than the appropriate dε values. They are compared in Fig. 3, and the presented data show that four correctly chosen D sat values do not deteriorate ε values and their accuracies. This conclusion allows us to state that a DNOE measurement can require an acceptable amount of spectrometer time. Error determination of NOE measurements The NOE errors are as important as the NOE values themselves. They are used to weight the NOE data in relaxation-based backbone protein dynamics calculations (Palmer et al. 1991; d'Auvergne 2008; Jaremko et al. 2015). Inaccurate values of NOE errors can result in the erroneous estimation of protein backbone dynamics.
Particularly, the overestimation of NOE leads to significant errors in the local dynamics parameters, as evidenced by appropriate simulations (Ferrage et al. 2008). Occasionally, the average values of the NOE and the standard errors of the mean have been determined from several separate NOE data sets (Stone et al. 1992; Renner et al. 2002). Nonetheless, it has been most often accepted to use signal-to-noise ratios (SNR) in the determination of steady-state NOE errors (Farrow et al. 1994; Tjandra et al. 1995; Fushman 2003): dε = |ε| √[(SNR sat )⁻² + (SNR 0 )⁻²] (4). Eq. (4) is an approximation of the exact formulation of experimental error determination since it takes into account only the part of the experimental errors which arises from the thermal noise. It can be safely used if the thermal noise dominates other contributions to the total experimental error. A weak point in Eq. (4) arises also from the fact that amino acid residues located in flexible parts of macromolecules often display NOE values close to zero, which results in the underestimation of dε, owing to the factor |ε| in Eq. (4). Justification of an SNR-based approach should comprise two issues: checking the reliability of the SNR determination delivered by commonly used processing tools and comparison of the SNR-determined errors with those obtained from the statistical analysis of a series of independent NOE measurements. To the best of the authors' knowledge, such a study has not yet been undertaken for 15 N nuclei in proteins and has only been performed once for 13 C nuclei (Bernatowicz et al. 2010). In our study, we found that SNR values automatically derived in the peak intensity determination differed from those obtained semi-manually; the larger part of them was overestimated. Therefore, automatically delivered SNR values accompanying cross-peak intensities cannot be taken for granted. A description of the SNR issue is given in the Supporting Material (section: Determination of signal-to-noise ratio). In order to closely analyze the relevance of SNR-based NOE errors, a series of 10 NOE measurements was performed at 22.3 T using an identical spectrometer setup. A comparison of the standard deviations (σ) calculated for each of 70 residues of ubiquitin with the corresponding means of SNR-based NOE errors is presented in Fig. 4. It can be concluded from Fig. 4 that the values of the two presented sets of NOE errors are very similar, and their means are close to one another. Saturation of H N protons Originally, saturation of proton resonances was achieved by a train of 250° pulses at 10 ms intervals (Markley et al. 1971). In protein relaxation studies, however, a train of 120° pulses spaced 20 ms apart was commonly used for this purpose (Kay et al. 1989). In search of the optimal 1 H saturation scheme, different pulse lengths (120°, 180°, 250°) and different pulse spacings (5 ms, 10 ms, 20 ms) were employed (Renner et al. 2002). Finally, it was concluded that pulses of approximately 180° at 10 ms intervals performed slightly better than other settings. An extensive experimental survey of H N proton saturation accompanied by theoretical calculations based on averaged Liouvillian theory was carried out on all components of the saturation sequence (Ferrage et al. 2009, 2010). It was concluded that the best results were obtained using the symmetric 180° pulse train (τ/2 − 180° − τ/2) n with τ = k/J NH , where n is the integer determining the length of the saturation time (D sat = n⋅τ) and k is a small integer, usually k = 2, giving τ of about 22 ms.
It was also suggested to move the proton carrier frequency from the water resonance to the center of the amide region and to reduce the power of the 180° pulses to minimize sample heating. Analysis of NOE experiments NOE experiments performed to analyze the influence of particular sequence parameters on the apparent nuclear Overhauser effect values, ε app , are presented in Table S3. Experiments DNOE/16.4 and DNOE/22.3 can be expected to deliver the most accurate results. They are regarded as a kind of reference point for a selected magnetic field. The importance of using appropriate D sat values in steady-state NOE measurements is demonstrated by comparing NOEs in the experiments (14-0-4)/18.8 and (14-0-14)/18.8. The first displays a systematic increase of ε app owing to incomplete H N saturation during D sat . Residue specific differences between the mentioned experiments are shown in Fig. 6. Residues G75 and G76 with negative ε values display decreased ε app , as discussed earlier (section: Time schedule of NOE measurement). Calculation of the factors exp(−D sat ⋅ R 1N ) using residue specific R 1N data is presented in Fig. 7 for the D sat values utilized in the measurements performed at 22.3 T, as listed in Table S3. The D sat = 3 s is sufficiently long for all residues except the last two C-terminal glycines, G75 and G76. In fact, even D sat = 4 s is not long enough for the observation (Fig. 9). The RD 1 = 3 s and RD 1 = 6 s result in increases of ε magnitudes relative to RD 1 = 13 s of, on average, 0.0544 and 0.0042, respectively. On the other hand, the average difference between measurements with RD 1 = 13 s and RD 1 = 10 s is negligible, − 0.0007. This result gives evidence that an RD 1 delay equal to 10 s allows the H N protons to reach the equilibrium state in the studied system. Concluding, comparison of the NOE values obtained at different settings of D sat or RD 1 highlights the importance of the correct choice of delays in the determination of accurate ε values. Correction factors As has been shown above, the effect of slow spin-lattice relaxation of water protons and the chemical exchange of amide protons with water, combined with too short relaxation delays in the steady-state NOE experiments, usually results in substantial systematic NOE errors owing to the incomplete relaxation towards the steady-state or equilibrium 15 N polarization. Therefore, several correction factors were introduced to compensate for such errors using Eq. (5), where ε and ε app are the exact and apparent NOE values, respectively. It has been claimed that the effect of incomplete R 1W recovery can be corrected by substituting a suitable factor into Eq. 5 (Skelton et al. 1993). It has also been suggested that a factor Δ (Eq. 5B) allows for the correction of a not sufficiently long relaxation delay RD with respect to R 1H (Grzesiek and Bax 1993). Fig. 9 Residue specific differences of apparent ε values for the RD 1 pair 3 s and 13 s (brown circles), the RD 1 pair 6 s and 13 s (orange triangles), and the RD 1 pair 10 s and 13 s (light green squares). Color coded average differences after rejection of G76 with ε < 0 are equal to 0.0544, 0.0042, and 0.0007. Fig. 10 Residue specific differences between corrected ε app and ε values obtained in the (13-0-3)/22.3 measurement. The ε app values were obtained from the (3-0-3)/22.3 experiment after compensation for R 1W (Eq. 5A, brown circles), R 1H (Eq. 5B, orange triangles), and R 1H , R 1N (Eq. 6, light green squares). Horizontal color-coded lines correspond to the appropriate means of difference magnitudes.
Another correction, which takes into consideration the inconsistency of both R 1N and R 1H with the relaxation delays, has also been recommended (Freedberg et al. 2002) (Eq. 6). Efficiencies of all three corrections were checked on the NOE measurement with the intentionally too short delays: RD 1 = 3 s, RD 2 = 0, and D sat = 3 s, (3-0-3)/22.3. As shown earlier (Fig. 9), all ε app in the (3-0-3)/22.3 measurement were larger than the corresponding ε values in the correctly performed measurement (13-0-3)/22.3. The mean of the differences was equal to 0.054. None of these above-listed corrections was able to fully compensate for the effect of the wrong adjustment of the RD 1 delay. Three corrections allowing for R 1W (Eq. 5), R 1H (Eq. 5B), and R 1H and R 1N (Eq. 5C) resulted in means of absolute differences equal to 0.019, 0.048, and 0.036, respectively (Fig. 10). Therefore, these corrections have compensated for the delay missetting by 67%, 17%, and 38%, respectively. Obviously, the R 1W effect is the most important factor for compensation. Compensation for a not long enough D sat period with properly chosen RD 1 is an easier task. The experiment (10-10-1.3)/22.3 was discussed earlier, and its results were shown in Fig. 8. Use of another correction results in corrected ε app values, which differ from the DNOE experiment by an average of 0.003 (Fig. S7). Nevertheless, in view of the above-mentioned results, it is obvious that none of the existing correction terms should be used as a substitute for a properly designed experiment. Conclusions In this study, it has been shown that the dynamic NOE measurement is an efficient and accurate method for NOE determination. In particular, it proves useful in cases of NOE values that are close to zero. This method provides a robust and more accurate alternative to the widely used steady-state NOE measurement. The DNOE measurement allows for the determination of NOE values and their accuracies with standard nonlinear regression methods. If high-accuracy longitudinal relaxation rates R 1 are not of great importance, they can be simultaneously obtained with reduced accuracy as a "by-product" of the DNOE data processing without any significant reduction of the accuracy and precision of the determined NOE values. It has been proven that commonly used methods of NOE accuracy estimation based on the signal-to-noise ratio accompanying steady-state NOE measurements are reliable provided that the root-mean-square noise has been determined correctly. It has to be stressed that, in view of the results presented in this work, none of the existing correction terms is able to restore accurate NOE values in cases where measurements are improperly set up and performed.
6,223
2020-09-12T00:00:00.000
[ "Physics" ]
The Quantified Argument Calculus and Natural Logic Hanoch Ben-Yami The formalisation of natural language arguments in a formal language close to it in syntax has been a central aim of Moss's Natural Logic. I examine how the Quantified Argument Calculus (Quarc) can handle the inferences Moss has considered. I show that they can be incorporated in existing versions of Quarc or in straightforward extensions of it, all within sound and complete systems. Moreover, Quarc is closer in some respects to natural language than are Moss's systems—for instance, it does not use negative nouns. The process also sheds light on formal properties and presuppositions of some inferences it formalises. Directions for future work are outlined. Despite the successes of the Predicate Calculus, based on Frege's Begriffsschrift (1879), there have been recurrent attempts to develop different logic systems, closer in various respects to natural language. Strawson's (1950, 1952) and Sommers' (1982) are two such familiar earlier ones. More recently, Lawrence Moss has published a series of works, some coauthored with Pratt-Hartmann, which engage in the similar project of Natural Logic (Pratt-Hartmann and Moss 2009; Moss 2010a, 2010b, 2010c, 2011, 2015). Natural Logic has several aims. One main aim is to "construct a system [whose] syntax is closer to that of a natural language than is first-order logic" and give "logical systems in which one can carry out as much simple reasoning in language as possible" (Moss 2010a, 538-39). Moss's works "attempt to make a comprehensive study of the entailment relation in fragments of language", "to go beyond truth conditions and examples, important as they are, and to aim for more global characterizations" (2010a, 561). "The subject of natural logic," Moss writes, "might be defined as 'logic for natural language, logic in natural language.' By this, we aim," he clarifies, "to find logical systems that deal with inference in natural language, or something close to it" (2015, 563). Moss has tried to faithfully represent in his systems standard quantifiers, passive-active voice relations, comparative adjectives, and more. A different system with similar aspirations which has also been recently developed is the Quantified Argument Calculus, or Quarc. 1 Quarc is a powerful formal logic system, first introduced in Ben-Yami's "The Quantified Argument Calculus" (2014), based on work published by Ben-Yami in the preceding decade (primarily 2004) and closely related to the calculus introduced in Lanzet and Ben-Yami (2004).
It is closer in its syntax than is the Predicate Calculus to natural language, sheds light on the logical role of some of the latter's features which it incorporates (such as copular structure, converse relation terms and anaphora), and it is also closer to natural language in the logical relations it validates. Ben-Yami (2014) contains a Lemmon-style natural deduction system for Quarc and a truth-valuational, substitutional semantics; this system has been shown to be sound and complete (Ben-Yami 2014; Ben-Yami and Pavlović forthcoming). Quarc has since been extended into a sound and complete three-valued system with defining clauses, using model-theoretic semantics (Lanzet 2017). In this latter version it was shown to contain a semantically isomorphic image of the Predicate Calculus. Thus, Quarc has been shown to be at least as strong as the first-order Predicate Calculus, and moreover, the proofs in these papers shed light on the nature of quantification in the Predicate Calculus (see there for details). In other works (Pavlović 2017;Pavlović and Gratzl 2019), a sequent calculus has been developed for several versions of Quarc and various properties of the system, such as cut-elimination, subformula property and consistency were proved. Quarc has also been used to investigate Aristotelian logic, both assertoric and modal, in works mentioned above as well as in Raab (2018). Raab concludes that the Quarc-reconstruction he provides of Aristotle's logic is "much closer to Aristotle's original text than other such reconstructions brought forward up to now" (abstract). It would be interesting to compare what Natural Logic has achieved with what has or can be achieved by Quarc. The present paper embarks on this inquiry. Only embarks, for limitations of space and time force us to leave out a comparative study of some central questions of the Natural Logic project. An important issue for Moss is that of decidability. He would like to determine whether the logic systems he constructs to incorporate reasoning in natural language, systems which are more limited in their expressive power than the first-order Predicate Calculus, are decidable. Moss and Pratt-Hartmann write: From a computational point of view […] expressive power is a double-edged sword: roughly speaking, the more expressive a language is, the harder it is to compute with. In the last decade, this trade-off has led to renewed interest in inexpressive logics, in which the problem of determining entailments is algorithmically decidable with (in ideal cases) low complexity. The logical fragments subjected to this sort of complexity-theoretic analysis have naturally enough tended to be those which owe their salience to the syntax of first-order logic, for example: the two-variable fragment, the guarded fragment, and various quantifier-prefix fragments. But of course it is equally reasonable to consider instead logics defined in terms of the syntax of natural languages. (2009, Moss also thinks that decidable systems with less expressive power might represent more faithfully actual human reasoning (2015,563). Interesting and important as decidability questions are, they will not be addressed in this paper but be left for future work. The primary concern of this paper is Quarc's capacity to incorporate the natural language inferences studied by Natural Logic. Natural Logic's starting point is a variety of inferences in natural language, all apparently formally valid. Formal systems are then built to incorporate some of these inferences. 
I shall examine whether Quarc can incorporate these inferences or how it should be extended to accomplish that. I shall also discuss the soundness and completeness of the systems I consider. Quarc is introduced in the next section; I develop it there only to the extent needed for its application later in the paper. In the section following it, I first present several arguments which Moss considers, and then address each of them in a separate subsection. Along the way I also consider whether, with Moss, we should allow nouns to be negated. I end with a short conclusion, which also includes directions for future work. forthcoming) and there is therefore no need for an additional detailed exposition. Moreover, for our purposes below we do not need to employ the full version of Quarc that was introduced in Ben-Yami (2014). Accordingly, although I shall first informally introduce the full Quarc language of that paper, the following formal introduction will be of a reduced version (but with the addition of identity), one which we shall then continue to use. Informal Introduction of the System Consider a simple subject-predicate or argument-predicate sentence: (1) Alice is polite. Its grammatical form can be represented by (2) Here, grammatically, the argument is the noun phrase "every student." In it, the quantifier "every" attaches to the one-place predicate "student", and together they form a quantified argument. This is the way quantification is incorporated in Quarc: Namely, quantifiers are not sentential operators. Rather, they attach to oneplace predicates to form quantified arguments. Some other examples: (10) Some students are polite. (11) Every girl loves Bob. (12) Every girl loves some boy. are formalised (respectively; likewise below) by, This basic departure in the treatment of quantification requires a few additional ones. One is the need to reintroduce the copular structure and, with it, modes of predication, as in Aristotelian logic. In natural language, we can negate sentence (1), "Alice is polite", in two ways: (16) It's not the case that Alice is polite. (17) Alice isn't polite. The Predicate Calculus allows only the first mode of negation-the one rarer and somewhat artificial in natural language-namely, sentential negation. Quarc, however, also allows the negation symbol to be written between the argument or arguments and the predicate, signifying negative predication, by contrast to affirmative one. These two sentences are thus formalised, respectively, by Parentheses can be omitted without ambiguity in these formulas, and they can be written as ¬ and ¬ . Since the argument is singular, these two formulas are equivalent, and they shall be defined as such both in the proof system and in the semantics below. However, the equivalence does not hold when the argument is quantified: (20) It's not the case that some students are polite. (21) Some students aren't polite. formalised by: These formulas will not be equivalent either in the proof system or in the semantics. Some adjectives have a corresponding negative form: polite and impolite, for instance. Yet even if "Alice isn't polite" means the same as "Alice is impolite", this is not the case with all such pairs of adjectives. 
Often, the negative form designates not the contradictory but the contrary of the positive one: while "reverent" means, feeling or showing deep and solemn respect, "irreverent" means, showing a lack of respect for people or things that are generally taken seriously (Oxford definitions); one's attitude towards, say, religion can be neither reverent nor irreverent. Moreover, many adjectives have no negative form: tall, asleep, red; and relation words usually don't-e.g. "loves" or "teacher of." For these and other reasons (see below on negative nouns), the work done by negative predication cannot generally be accomplished by negative predicates. All natural languages have the means of reordering the noun-phrases in relational sentences without changing, if the arguments are all singular, what is said by the sentences. Different languages achieve this by different means. English often accomplishes it by changing from active-to passive-voice: (24) Alice loves Bob. (25) Bob is loved by Alice. In the singular case, the two are logically equivalent. But again, this is not generally the case when the arguments are quantified: (26) Every girl loves some boy. (27) Some boy is loved by every girl. Quarc incorporates this reordering by having an -place predicate written with a permutation of the 1, 2, … , sequence as superscripts to its right. Sentences (24) to (27) are then formalised by, As with negation, the formulas with singular arguments alone are defined as equivalent in both proof system and semantics, while this equivalence will not generally hold for sentences with quantified arguments. The last additional feature of Quarc is its use of anaphora. Consider the two sentences, (32) John loves John. (33) John loves himself. The former is rarely used, although one of its uses is to explain the use of the reflexive pronoun "himself" in the latter. The reflexive pronoun "himself" in (33) is anaphoric on the earlier occurrence of "John", its source, in the sense that it can be replaced by its source and the sentence will have the same meaning. This eliminable anaphor is what Geach called pronoun of laziness (1962, sec.76). Quarc incorporates it by using a Greek letter for the anaphor, also written as a subscript to the right of its source. Accordingly, it formalises (32) and (33) by: The formalisation of quantified sentences in which quantified arguments have anaphors is similar: (36) Every man loves himself. (37) (∀ , ) As with negation and reordering, if all arguments are singular, then a Quarc formula with an anaphor and the formula with that anaphor replaced by its source are defined as equivalent in both proof system and semantics. However, the anaphor is no longer generally replaceable by its source when the latter is quantified, neither in natural language nor in Quarc. With this I conclude the informal introduction of Quarc and turn to the more rigorous introduction of the formal system. However, for the purposes of the discussion below, we don't need to use formulas with anaphora. I therefore introduce a reduced version of Quarc, in this respect, which will make it easier to follow and focus on the main argument of this paper. The interested reader is referred to the works mentioned above to see how anaphora is incorporated in the full version of Quarc. Vocabulary of Quarc The language of Quarc contains the following symbols: (38) (Vocabulary) • Predicates: , , , …, denumerably many and each with a fixed number of places, including the two-place predicate =. 
If is a one-place predicate, then ∀ and ∃ will be called quantified arguments (QAs). An argument is a singular argument or a quantified one. For every -place predicate , > 1, apart from =, , where is any permutation of 1, … , (including the identity permutation), is called a reordered form of ; is also an -place predicate. Formulas of Quarc The following rules specify all the ways in which formulas can be generated. Formulas of the form, ( 1 , … , ) , in which is a reordered predicate are not considered basic formulas, as this simplifies the semantic definitions below. The notion of governance, which is related to that of scope in the Predicate Calculus, is defined as follows: (40) (Governance) An occurrence of a QA governs a string of symbols just in case is the leftmost QA in and does not contain any other string of symbols ( ), in which the displayed parentheses are a pair of sentential parentheses, such that contains . Once anaphors are introduced, the notion of governance becomes non-trivial and its definition needs elaboration. Since they are not introduced in this formal part, determining whether a quantified argument governs a formula is straightforward. For instance, ∃ governs the formulas (∃ ) , (∃ )¬ , ( , ∃ ) , (∃ , ∀ ) and (∃ , ∀ ) 1,2 -the last two because it is to the left of ∀ . By contrast, ∃ does not govern ¬((∃ ) ), since it is contained in ((∃ ) ); nor ((∃ ) ) ∧ ( ), as it is contained in ((∃ ) ); nor (∀ , ∃ ) , since ∀ is to its left. For the reduced Quarc language of this paper, a somewhat simpler definition of governance could be provided, practically listing the schemas of formulas governed by a QA; I prefer to use this definition in order to facilitate the transition to fuller Quarc languages. We shall often omit parentheses where no ambiguity arises. Truth-Valuational, Substitutional Semantics As in Ben-Yami (2014), I use here a truth-valuational, substitutional semantics for Quarc. Justification of the approach and answers to some common or possible objections, neither specific to Quarc but as a general semantic approach, can be found in Ben-Yami (2014) and Ben-Yami and Pavlović (forthcoming). The results below do not depend on the use of this semantics: a model-theoretic semantics for Quarc can and has been developed. A precursor of Quarc with model-theoretic semantics is found in Lanzet and Ben-Yami (2004) and a three-valued version of Quarc with model-theoretic semantics is found in Lanzet (2017). (Law of Identity) Every formula of the form For every one-place predicate there is some SA such that ( ) is true. 6. (Sentential operators) Let and be formulas. Then, ¬( ) is true just in case is false, etc. 7. (Negative predication) Let be an -place predicate and 1 , … , SAs. The truth-value of ( 1 , … , )¬ is that of (42) (Validity) An argument whose premises are all and only the formulas in the set of formulas and whose conclusion is the formula is valid, written ⊧ , just in case every valuation that makes all the formulas in true also makes true, even if we add or eliminate singular arguments from our language (of course, only singular arguments not occurring in or can be eliminated). We also say that entails . For a discussion of these definitions, see Ben-Yami (2014). Proof System The proof system used here is based on that found in Ben-Yami (2014) and Ben-Yami and Pavlović (forthcoming), with the omission of the rules for anaphora. 
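Before turning to the derivation rules, the substitutional clauses just sketched can be illustrated in code. What follows is my own toy sketch under simplifying assumptions (only unary predications, a finite stock of singular arguments, and a valuation recorded directly as truth values of basic predications); the names, predicates and helper functions are illustrative and not part of Quarc's official definitions.

```python
# A minimal sketch of the substitutional clauses for (forall P)Q and (exists P)Q,
# including negative predication. A "valuation" is a dict from (name, predicate)
# pairs to truth values; all data here is invented for illustration.

def atom(val, name, pred):
    return val.get((name, pred), False)

def forall_true(val, names, P, Q, negative=False):
    # (forall P)Q: every instance s with (s)P true makes (s)Q true.
    # (In the two-valued system every one-place predicate is assumed to have
    # instances; here we simply return False when it does not.)
    instances = [n for n in names if atom(val, n, P)]
    results = [atom(val, n, Q) for n in instances]
    if negative:                       # (forall P)¬Q: negative predication
        results = [not r for r in results]
    return bool(instances) and all(results)

def exists_true(val, names, P, Q, negative=False):
    # (exists P)Q: some instance s with (s)P true makes (s)Q true.
    results = [(not atom(val, n, Q)) if negative else atom(val, n, Q)
               for n in names if atom(val, n, P)]
    return any(results)

# Aristotelian Barbara on one toy valuation: (forall S)M, (forall M)P, (forall S)P.
names = ["a", "b", "c"]
val = {("a", "S"): True, ("a", "M"): True, ("a", "P"): True,
       ("b", "M"): True, ("b", "P"): True, ("c", "P"): True}
assert forall_true(val, names, "S", "M") and forall_true(val, names, "M", "P")
print(forall_true(val, names, "S", "P"))   # True on this valuation
```

Of course, checking truth on a single valuation is not a validity check: the definition of validity above quantifies over all valuations, including those obtained by adding or eliminating singular arguments.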
I use a Lemmon-style natural deduction system, based on the one introduced in Jaśkowski (1934) and further developed and streamlined in Fitch (1952), Lemmon (1978) and elsewhere. Proofs are written as follows: (43) (Proof) A proof is a sequence of lines of the form ⟨ , ( ), , ⟩, where is a possibly empty list of line numbers; ( ) the line number in parenthesis; a formula; and the justification, a name of a derivation rule possibly followed by line numbers, written according to one of the derivation rules specified below. is said to depend on the formulas listed in . The line numbers in are written without repetitions and in ascending order. The formula in the last line of the proof is its conclusion. If there is a proof with the formula as conclusion, depending only of formulas from the set , then is provable from , or ⊢ . I next list the derivation rules of the system. ( ) = =I 4. (Identity Elimination, =E) (This and the following rules specify how to add a line to a proof which contains preceding lines of the specified forms.) Let be a basic formula containing occurrences 1 , … , of the singular argument ( may also contain additional occurrences of ). Where 1 , 2 is the list of numbers occurring either in 1 or in 2 . 5. (Sentence negation to Predication negation, SP) Let P be an -place predicate and 1 , … , singular arguments. be a formula governed by ∀ . Assume that neither [∀ ] nor the formulas in lines apart from ( ) in line contain any occurrence of the singular argument . Where − is the possibly empty list of numbers occurring in apart from . 11. (Instantial Import, Imp) 3 Let stand for either ∃ or ∀, and [ ] be governed by . Assume does not occur in [ ], or any of the formulas 1 , and in no formula 2 apart from and . Why the quantifier is called, in Quarc, particular and not existential is explained in Ben-Yami (2004, sec.6.5;2014, 123). 3 In Ben-Yami (2014, 133) this rule was called Instantiation. "Instantial Import", however, is preferable for several reasons. First, in this way the ambiguity of "Instantiation" is avoided, as it is used only for the truth-value assignment rule in Definition 41.5. Secondly, unlike "Instantiation", the phrase "Instantial Import" does not imply that this derivation rule presupposes that any one-place predicate has instances. What it does presuppose is that for a formula as in (i) to be true, should have instances; and this is the case even if we allow some one-place predicates to be empty and adopt a three-valued system as in Lanzet (2017). Lastly, "Instantial Import" hints at a relation of this rule to the Predicate Calculus' existential import. As examples, I provide three proofs, which between them demonstrate all the derivation rules apart from the rules for identity, which are not special to Quarc, and Reorder, which is used later. First, (∀ ) ⊢ (∃ ) : This inference, being part of the Aristotelian Square of Opposition, is invalid on the standard translation of these sentences to the Predicate Calculus. Quarc is closer in this respect to Aristotelian Logic; for discussion, see Ben-Yami (2004, Lanzet (2017), Raab (2018). Secondly, the Aristotelian Barbara, i.e. (∀ ) , (∀ ) ⊢ (∀ ) : And lastly, an Aristotelian conversion: "No is " follows from "No is ." 
Instead of introducing into Quarc a negative quantifier translating "no"-something that can be done-these sentences are translated here as synonymous with "Every/any is not " or (∀ )¬ , and (∀ )¬ , and we show that (∀ )¬ ⊢ (∀ )¬ : For additional examples, see Ben-Yami (2014) and Ben-Yami and Pavlović (forthcoming). The Inferences to be Considered In different works, Moss provides different examples of the kinds of inference he discusses in the context of his Natural Logic project. I shall use here, as our point of departure, the inferences he lists in his "Natural Logic" (Moss 2015, 561-62). This list is more detailed and more recent than those found elsewhere in his writings. 4 Passive voice Some dog sees some cat. ∴ Some cat is seen by some dog. Conjunctive predicates Bao is seen and heard by every student. Amina is a student. ∴ Amina sees Bao. 3. Comparative adjectives 4 A reviewer drew my attention to two other relevant works by Moss (2016) and Moss and Topal (2020) (the latter published, online only, shortly before this paper was submitted), in which additional inferences involving comparative quantifiers are involved. I comment on them when discussing comparative quantifiers below. Every giraffe is taller than every gnu. Some gnu is taller than every lion. Some lion is taller than some zebra. ∴ Every giraffe is taller than some zebra. Defining clauses All skunks are mammals. ∴ All who fear all who respect all skunks fear all who respect all mammals. Comparative quantifiers More students than professors run. More professors than deans run. ∴ More students than deans run. I shall examine the incorporation of inferences of these kinds in Quarc, each in a separate subsection. But before turning to them, I address a different feature which some of Moss's systems contain, negative nouns. Negative Nouns Some of Moss's formal systems contain devices intended to represent "negated nouns such as 'non-man' or 'non-animal' " (Pratt-Hartmann and Moss 2009, 648). Moss thinks that "this is rather unnatural in standard speech but it would be exemplified in sentences like Every non-dog runs" (2015,. Other examples Moss provides there are All non-apples on the table are blue and Bernadette knew all non-students at the party (Pratt-Hartmann and Moss 2009, 564). But when such sentences are used, which I suspect is rarely, they are surely used as elliptical for sentences like, "All fruits on the table which aren't apples are blue" or "Bernadette knew all non-student guests at the party." There were also breadcrumbs on the table, but we didn't mean to say that they were blue; and there were also drinks and finger food at the party. This ellipsis understanding is also shared by Moss. In his (2010a, 539-40), we find an introductory dialogue between A, Moss's mouthpiece, and a Questioning Q. Q requests "an example of some non-trivial inference carried out in natural language", to which A responds by mentioning an inference containing the premise, Every non-pineapple is bigger than every unripe fruit. Q immediately remonstrates: " 'non-pineapple'?! I thought this was supposed to be natural language"; and A excuses himself with, "Take it as a shorthand for 'piece of fruit which is not a pineapple'." Regrettably, Q acquiesces: "Ok, I get it." Yet if, instead of Q, A would have encountered Critical C, she might have retorted, "So why not stay with 'fruits which aren't pineapples'? Should Logic turn a shorthand into a formal syntactic feature?! 
And you anyway intend to incorporate defining clauses in your system, for instance when formalising 'all who respect all skunks', so you shall have the resources for 'fruits which aren't pineapples.' If your goal is, as you stated, 'logic for natural language, logic in natural language', then try avoiding non-men, non-dogs and other non-natural creatures." C's point is supported by an observation due to Aristotle. In his Categories (~BC330), when discussing primary, individual substances -an individual man or horse, for instance-and secondary substances, like "man" and "animal" as species and genera, he notes: "Another mark of substance is that it has no contrary. What could be the contrary of any primary substance, such as the individual man or animal? It has none. Nor can the species or the genus have a contrary" (Cat. 5, 3b24). Since there is no contrary to man or animal, "non-man" and "non-animal" cannot function, on their own, as noun phrases. The actual natural language sentences which Moss formalises by means of formal negative nouns, designated by a bar ( for non-'s), are sentences like, "Some aren't " and "Some don't any ", formalised by ∃( , ) and ∃( , ∀( , )) (2015, 573). (We don't need to go into the details of Moss's syntax, since for our purposes the idea is sufficiently clear from these examples.) These two sentences are formalised in Quarc by (∃ )¬ and (∃ , ∀ )¬ . Accordingly, Quarc can formalise these sentences without recourse to negative nouns but by using negation as a mode of predication, as it is indeed used in natural language. I think that finding the idea of negative nouns acceptable is influenced by the semantic idea of a domain of discourse. If, when quantifying, the plurality over which we quantify is that of a domain of discourse, then we can single out a part of it either as containing all items to which a predicate applies, or all those to which it does not apply. Indeed, when Moss develops a semantics for languages that include negative nouns, his model or structure contains a non-empty set which functions as the domain, and if ⊆ , then = \ (Pratt-Hartmann and Moss 2009, 651). However, a domain of discourse, in the technical sense in which the idea is employed in semantics, is an artefact of Fregean Logic, whose quantified sentences contain no expression specifying the plurality over which they quantify. For this reason, the semantics must introduce an otherwise implicit domain. Natural language sentences, by contrast, do specify the plurality over which they quantify: when I say, "All your students came to class", I specify your students as the relevant plurality. Quarc follows natural language in this respect, and needs no domain of discourse or of quantification (Ben-Yami 2004, 59-60;Lanzet 2017). Once the domain is eliminated, "non-man" and "non-animal" have nothing to designate and should be eliminated as well. For these reasons, I think that negative nouns are not needed and should not be included in a logic which aspires to be a logic for natural language. As argued above, the rare sentences which apparently use them are better seen as elliptical: as such they can be formalised in Quarc, which therefore does not need to contain negative nouns. Passive Voice (45) Some dog sees some cat. ∴ Some cat is seen by some dog. Conjunctive Predicates (47) Bao is seen and heard by every student. Amina is a student. ∴ Amina sees Bao. The new element in this inference is the conjunctive verb, or more generally conjunctive predicate, "see and hear." 
We shall extend Quarc to incorporate it. We take our cue for the incorporation of conjunctive predicates in Quarc from the way negative predication, reordering and anaphora were incorporated in it. Namely, we shall define valuation-and derivation rules for the case in which all arguments are singular terms, and show that these together with the other rules which have already been defined provide us with desirable results for the more complex cases as well. Vocabulary We do not extend the basic vocabulary of Quarc but define, (48) (Conjunctive predicates) If and are -place predicates, so is ( ) ∧ ( ), which is called a conjunctive predicate. Formulas No new rules. If and are one-place predicates, then ( )( ) ∧ ( ) is a formula. Similarly for any -place predicates and any arguments. (51) is true iff every linguist knows some philosopher and admires the same philosopher. By contrast, since (53) is true just in case so is each of its conjuncts, we shall not get that every linguist need admire a philosopher he knows. Proofs We add an introduction and an elimination rules for conjunctive predicates: (54) (Conjunctive Predication Introduction, CP-I) Let and be -place predicates, 1 , … , singular arguments. It is straightforward to see that soundness is preserved. The completeness of Quarc on the truth-valuational approach is proved in Ben-Yami and Pavlović (forthcoming) by adapting Henkin's proof (1949). We won't provide here the complete proof but only specify its features that are relevant for proving that the completeness of the system is preserved with the additional structures introduced in this paper. As part of the proof, a "Henkin Theory" is specified, consisting of all formulas falling under certain schemas. It is then shown that any valuation that respects the truth-value assignment rules for the connectives of the propositional calculus while making all the formulas of the Henkin Theory true, respects all the truth-value assignment rules of Quarc as well. Later, some of the formulas of the Henkin Theory are shown to be theorems of Quarc. To prove that completeness is preserved, we should add to the Henkin theory the axiom schema, Any valuation that respects the truth-value assignment rule for the connective ↔ while making all the formulas of this form true, clearly respects Conjunctive Predication (49) as well. And, given CP-I and CP-E, this is a schema of theorems of Quarc. See Henkin (1949) and Ben-Yami and Pavlović (forthcoming) for further details. We can now turn to a proof of the argument opening this subsection. We formalise it as follows: Bao is seen and heard by every student: ( , ∀ )( ∧ ) 2,1 Amina is a student: ∴ Amina sees Bao: ( , ) We show that, Proof. Comparative Adjectives (58) Every giraffe is taller than every gnu. Some gnu is taller than every lion. ∴ Some lion is taller than some zebra. Every giraffe is taller than some zebra. Most comparative adjectives are transitive: if Alice is younger than Bob, and Bob younger than Charlie, then Alice is younger than Charlie. It might thus seem that this transitivity is built into language as a formal rule, for any comparative adjective of the form, -er. There are, however, exceptions, as we learn from Rock-Paper-Scissors: in this game, paper is stronger or better than rock, rock is stronger than scissors, yet scissors is stronger than paper. Such exceptions notwithstanding, we shall treat in this subsection comparative adjectives of the form -er as transitive. 
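As a toy illustration of what treating an -er comparative as transitive amounts to (a sketch with invented data, not part of the formal system), one can close the recorded comparisons under transitivity; the final check also anticipates the asymmetry property discussed further below.

```python
# A small sketch with invented data: treating "taller" as transitive amounts to
# closing the recorded comparisons under transitivity.
from itertools import product

def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

taller = {("giraffe", "gnu"), ("gnu", "lion"), ("lion", "zebra")}
closed = transitive_closure(taller)
print(("giraffe", "zebra") in closed)                   # True: the comparisons chain
print(all((b, a) not in closed for (a, b) in closed))   # asymmetry holds for this data
```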
I do not think that the transitivity of adjectives of the -er structure is merely a frequent albeit contingent fact. Rather, we have here a rule of grammar which allows exceptions. That the past tense of "go" is "went" does not show it not to be a rule that the past tense of verbs is formed by adding "ed." With comparative adjectives we have a different kind of rule and exception, concerning not syntax but meaning; yet this does not affect the fact that transitivity is a rule for the use of comparative adjectives, to be overridden only if the exception is explicitly introduced. Vocabulary and formulas We add to the language denumerably many two-place comparative predicates, er , er , er … No new formula rules. Semantics (59) (Comparative Adjective Transitivity). Let er be a comparative predicate, and 1 , 2 and 3 singular arguments. If the truth-value assigned to ( 1 , 2 ) er and ( 2 , 3 ) er on a valuation is true, then that assigned to ( 1 , 3 ) er is also true. Soundness is again immediate. Completeness is proved by adding to the Henkin theory all the formulas which fall under the schema, Any valuation that respects the truth-value assignment rules for the connectives ∧ and → while making all the formulas of this form true, respects (59) as well. All formulas of this form are theorems of Quarc, provable from CAT. See again Ben-Yami and Pavlović (forthcoming) for further details. The proof of (58) is quite tedious and adds no interesting element to what we learn from proofs of simpler inferences. I shall therefore formalise and prove instead the following: Every giraffe is taller than every wildebeest: (∀ , ∀ ) er Some wildebeest is taller than every lion: (∃ , ∀ ) er ∴ Every giraffe is taller than every lion: (∀ , ∀ ) er We show that: Proof. Asymmetry Another property of comparative adjectives is asymmetry. If Alice is younger than Bob, then Bob isn't younger than Alice. Unlike transitivity, asymmetry seems to have no exception for comparative adjectives. This property can also be straightforwardly incorporated in Quarc. Nothing needs to be added to either vocabulary or formula rules. In the semantics, the rule should be that if ( 1 , 2 ) er is true on a valuation, then ( 2 , 1 ) er is false on it. And the rule of inference should allow the inference ( 1 , 2 ) er ⊢ ¬( 2 , 1 ) er . We shall not develop this further here. (64) All skunks are mammals. ∴ All who fear all who respect all skunks fear all who respect all mammals. Defining Clauses Those who respect the skunks and mammals, as well as those who fear the former, are presumably not respectful triangles or fearful ideas, say. Which respectful and fearful "things" are referred to would depend on context, but something more specific does seem to be meant. We shall assume here that the conclusion is about creatures generally, and consider it as elliptical for, (65) All creatures who fear all creatures who respect all skunks fear all creatures who respect all mammals. This will enable us to treat inference (64) by means of the extended, threevalued Quarc system developed in Lanzet (2017), which has the syntactic and semantic resources to represent defining clauses and can straightforwardly translate sentences such as (65). One might object and claim that the conclusion of (64) is about absolutely everything. Triangles and ideas, so might one continue, also fall within its purview, only they happen not to fear or respect anything, ipso facto skunks and mammals. 
I find this approach unconvincing when applied to natural language, whose logic both Natural Logic and Quarc aim to represent. However, the issue need not be decided for the purpose of formalising inference (64) in Quarc: the means for representing absolute generality are provided in both Lanzet and Ben-Yami (2004) and Lanzet (2017), in each somewhat differently, by the introduction of a special predicate, Thing or . Very roughly, the idea is that everything is a Thing: for every constant , is true. (This special predicate also helps explore the relations between Quarc and the Predicate Calculus.) We shall not develop this idea further here, though, but continue with the assumption that a predicate with narrower application is assumed, and use creature as in (65). The three-valued Quarc system of Lanzet (2017) is too complex to be fully presented in this paper. I shall therefore introduce only some of its features, which will enable us to get an idea of how sentence (65) and consequently inference (64) are handled by it. The reader is referred to Lanzet (2017) for a full exposition. Since we are not inquiring into decidability in this paper but leaving it as a subject for future work, neither shall we inquire whether a restricted, simpler yet complete and decidable version of that system suffices for the formalisation of the relevant arguments. Compound Predicates Consider the sentence, (66) Alice is a woman who knows Bob. It is logically equivalent to, (67) Alice is a woman and Alice knows Bob. While (67) is formalised in Quarc as, we shall formalise (66) by: The chain of symbols, ∶( , ) , is considered a compound predicate. More generally, if [ ] is a formula and a one-place predicate, then ∶ [ ] is a compound predicate, which is also a one-place predicate. [ ] contains no occurrence of (to avoid ambiguity), and replaced some or all Proofs Lanzet (2017) develops a three-valued system, allowing for some formulas to lack a truth value. "All my children work in the coal mines" is neither true nor false when uttered by a childless person. Similarly, ∃ and ∀ will lack a truth value when has no instances. If our conception of validity in a three-valued system is that truth entails truth, and this is Lanzet's conception, then this three-valued framework complicates the proof system. The classical Negation Introduction rule, for instance, cannot be employed. In addition, some of the rules for quantifiers should be modified, because in some cases we should guarantee that the predicate occurring in the argument position, say , has instances. This can be done in several ways, one of them by having (∃ ) among our premises: this formula is true if and only if P has instances. For these two reasons, the ∀-Introduction rule is replaced by two rules. Lanzet uses a proof system which operates on sequents, although resembling a natural deduction system in its inference rules. Adapting his rules to the system used in this paper, his ∀I1 rule will be: Where ∀ governs [∀ ] and does not occur in 1 apart from , in 2 or in [∀ ]. Returning to the inference with which we opened this subsection, on the conception of validity as truth entails truth, sentence (65), "All creatures who fear all creatures who respect all skunks fear all creatures who respect all mammals", follows from "All skunks are mammals" only if we assume that the compound predicates in the conclusion's argument positions, "creatures who fear all creatures who respect all skunks", and "creatures who respect all mammals" have instances. 
Otherwise, if no one respected mammals, say, there would be no one to fear in the conclusion, and a true premise would have a conclusion which is neither true nor false.-We can develop a different conception of validity for three-valued systems, in which, instead of truth leading to truth, an argument is valid just in case, if its premises are not false, then its conclusion isn't false either (Halldén 1949). Another option is to define validity for a three-valued system as, if the premises are true then the conclusion isn't false (strict-to-tolerant validity, Cobreros et al. 2013). On either conception, a valid argument with true premises may have a conclusion which has no truth-value, and no additional premise should be added to (64). Both options are worth exploring, but here we shall limit ourselves to the option Lanzet adopts and take validity to mean, truth entails truth. We should, therefore, add to (64) The proof is long and requires familiarity with the rules of Lanzet (2017), so instead of providing it we shall show that the inference is valid. Since the system of that paper was proved there to be complete, it follows that the inference can be proved. Proof. Proof. We should show that, if on a valuation the three premises of (73) are true, then for every instance of ∶( , ∀ ∶( , ∀ ) ) and every instance of ∶( , ∀ ) , the following is also true, ( , ) . From premises (71) and (72), we know that each of these compound predicates has instances. So suppose ( ) ∶( , ∀ ∶( , ∀ ) ) is true on with a specific set of SAs (remember that on the truth-valuational semantics, we may add or eliminate singular arguments from our language). Then so are and ( , ∀ ∶( , ∀ ) ) . But this means that ∶( , ∀ ) has instances on , and that for any of its instances , ( , ) is true on . For any such , since ∶( , ∀ ) is true on , and ( , ∀ ) are true on . And again, for any instance of on , ( , ) is true on . On , if is an instance of ∶( , ∀ ) , then both and ( , ∀ ) are true on . So for any instance of on , ( , ) is true on . Now, if is an instance of on , from the first premise of (73), ∀ , is also true on , and therefore ( , ) is true on . So ( , ∀ ) is also true on . Since is also true, ∶( , ∀ ) is true on . But we saw that ( , ∀ ∶( , ∀ ) ) is true on . So ( , ) is true on . We see that inference (64) can be incorporated in an existing powerful version of Quarc. Moreover, in the process, Quarc has brought to light two features of Moss's original formulation which needed to be addressed: completion of an ellipsis and making two presuppositions explicit. We therefore proved here a revised inference, (73). (74) More students than professors run. More professors than deans run. ∴ More students than deans run. Comparative Quantifiers The four kinds of inference we discussed above did not pose serious issues for their incorporation in Quarc, syntactically, semantically, or proof-theoretically. The active-passive-voice distinction and defining clauses were already incorporated in Quarc, the latter in a three-valued version of it; and conjunctive predicates and comparative adjectives required rather straightforward extensions for their incorporation. Comparative quantifiers, however, pose several challenges, only some of which will be met in this paper. The quantifiers of Quarc, ∃ and ∀, translate natural language's "some", "a", "all", "any" and "every" in various of their uses. All these quantifiers are unary determiners: they attach to one general noun to form a noun phrase. 
"Some boys", "a girl", "all men", "any woman" and "every person" are a few examples. This is also true of some other natural language quantifiers, for instance three, at least seven, infinitely many, most and many. Translating these quantifiers in Quarc will require additional vocabulary but not additional syntactic roles. By contrast, comparative quantifiers, in their use exemplified in (74), are binary determiners: they attach to two general nouns to form a noun-phrase. As, for instance, in "more students than professors" and "more professors than deans" (Ben-Yami 2009). Translating them into Quarc will therefore necessitate an additional syntactic role: a quantifier which attaches to an ordered pair of one-place predicates to form a quantified argument. Vocabulary and Formulas We add a new binary quantifier, Π, read "more". If and are one-place predicates, then Π( , ) is a binary quantified argument. Semantics To capture the truth-conditions of "more" within a truth-valuational substitutional semantics, as well as those of many other, unary quantifierse.g. "three", "at least seven", "many" and "most"-we should overcome a difficulty related to the fact that several names might name the same thing (Lewis 1985). Suppose we defined "Two men married Olivia Langdon" as true if there are two different substitution instances of names for "two men", each verifying " is a man", which yield a true sentence of the form, " married Olivia Langdon." We would then get that the sentence is true, for both "Mark Twain is a man" and "Samuel Clemens is a man" are true, as are "Mark Twain married Olivia Langdon" and "Samuel Clemens married Olivia Langdon." Yet Mark Twain is Samuel Clemens, and only this single man married Olivia Langdon. To overcome this difficulty, we first define for each one-place predicate on each valuation a maximal substitution set . This is a set for which, • only names for which is true on are in . • for any different and in , = is false on • for any for which is true on , = is true on for some in , possibly itself. In this way we make sure that every is counted exactly once, so to say, by the names in 's maximal substitution set. It is easy to show that on each valuation, all maximal substitution sets of a given predicate have the same number of members, or cardinality. We can now define the truth value of a formula [Π( , )], governed by Π( , ), on a valuation . We consider two maximal substitution sets and . [Π( , )] is true on just in case more substitution cases of the form, [ /Π( , )] with ∈ are true on than such substitution instances with ∈ . Turning to inference (74), we can formalise it and show the validity of the formalisation in Quarc. Its formalisation will be, (75) (Π( , )) , (Π( , )) ⊧ (Π( , )) approach to other comparative quantifiers (e.g. "at least as many") or construct a model-theoretic semantics for them is straightforward. Accordingly, we have managed to show an advantage of Quarc over the Predicate Calculus in this respect. Comparative Quantifiers in "Existential" Sentences In more recent work, Moss and Topal extended Natural Logic and applied it to sentences of the form, "There are at least as many as " and "There are more than " (2016; 2020) (see fn. 3). They have developed sound and complete proof systems for cardinality comparisons, for both finite (Moss 2016) and infinite sets (Moss and Topal 2020). This is impressive work, and it would be interesting to inquire whether Quarc can deliver anything comparable. 
This, however, will not be attempted in this paper, for several reasons. There are obvious space considerations. For instance, the proof system of Moss (2016) contains 24 rules, of which 16 involve his formalisations of "at least as many" and "more"; the corresponding numbers for the proof system of Moss and Topal (2020) are 21 and 12. Accordingly, a Quarc system formalising these inferences might involve significantly more additions than the extended systems considered above. Similarly, a completeness proof for this extended system would not be established by minor additions to the one provided in Ben-Yami and Pavlović (forthcoming). This is a topic for a separate paper. Moreover, a Quarc treatment of sentences of the form, "There are at least as many as " and "There are more than ", will depart from Moss's in some important fundamental respects. Moss formalises these sentences by sentences similar in form to those formalising "All/some are/aren't ." For instance, "Some are " is formalised by ∃( , ), and "There are more than " by ∃ > ( , ). Namely, apart from the different quantifier, no syntactic distinction is drawn between the argument-predicate sentence, "Some are ", in which the argument is "some ", and the so-called existential sentence, "There are ", in which is a noun phrase formed by a comparative quantifier, "more than ." However, the existential sentence, "There are more than " is no argument-predicate one. A sentence similar to it in form using the quantifier "some" will be, "There are some ", and not, "Some are ." An argument-predicate sentence with the quantifier "more" would have the form of the sentence considered above, "More students than professors run." As mentioned earlier, Moss hasn't developed a proof system for these sentences. The distinction between existential sentences and argument-predicate sentences seems to be a linguistic universal. Moreover, existential sentences show important differences from quantified argument-predicate ones (Ben-Yami 2004, sec.6.5;I. Francez 2009;McNally 2011). Accordingly, a system that aims to be a logic for natural language informed by the latter's syntax should formalise existential sentences differently than it does argument-predicate ones. It should distinguish the two constructions and explore the logical relations between them. As part of such a general treatment of existential sentences, those with a noun-phrase of the form "more than " as their pivot (see I. Francez 2009; McNally 2011 for the terminology) can also be introduced and discussed, as well as those with other comparative quantifiers. A general inquiry into the logic and formalisation of existential sentences has not been attempted by Moss and shall not be attempted here either. Conclusions and Future Work This paper tried to assess the ability of Quarc, in its current or extended versions, to represent the kinds of inference which have served as the basis of Moss's constructions of Natural Logic systems. We have shown how Quarc can incorporate, sometimes with some extensions, passive-active voice distinctions, conjunctive predicates (see and hear), comparative adjectives (taller), and defining clauses (who respect all mammals). All these were incorporated within sound and complete systems. We have also shown how Quarc can be syntactically extended to incorporate comparative quantifiers (more … than …) and provided a semantics but not a proof system for this extension. All this was done by using a language with a syntax close to that of natural language. 
In this respect we followed Moss's dictum for his Natural Logic project, "logic for natural language, logic in natural language" (2015, 563). I believe that in some respects we improved on Natural Logic, for instance by not using negative nouns. The process also helped shed light on some of the inferences we discussed. The constraints of the formal system brought us to recognise an ellipsis and presuppositions involved in the conclusion of inference (64), "All who fear all who respect all skunks fear all who respect all mammals." A main aim of the Natural Logic project which we did not address here was the question of decidability. Apart from its theoretical interest, this is relevant to questions of the applicability of computer programmes for determining validity. I hope this question will be addressed in future work, by myself or others. Another topic which was not addressed in this paper but which has engaged Natural Logic is that of monotonicity (Moss 2015, sec.4). Moss's work is based on van Benthem's (1986, 1991), which generated additional inquiries as well (see van Benthem 2008 for a historical survey). Whether and how Quarc can analyse the phenomena of monotonicity is again left for future work. The last topic mentioned as a subject for future work is the formalisation of the so-called existential sentences, "There are ", in Quarc. Once this is done, existential sentences with comparative quantifiers, "There are more than " and "There are at least as many as ", can also be formalised, and Moss's work on these last sentences can be comparatively studied. So, there is still work to be done. Yet hopefully, we have shown that in addition to the earlier successes in its application to the analysis of the logic of natural language, Quarc can also represent the inferences that motivated Moss's Natural Logic.*
Hanoch Ben-Yami, Central European University, benyamih@ceu.edu
Glycosylation in the Tumor Microenvironment: Implications for Tumor Angiogenesis and Metastasis Just as oncogene activation and tumor suppressor loss are hallmarks of tumor development, emerging evidence indicates that tumor microenvironment-mediated changes in glycosylation play a crucial functional role in tumor progression and metastasis. Hypoxia and inflammatory events regulate protein glycosylation in tumor cells and associated stromal cells in the tumor microenvironment, which facilitates tumor progression and also modulates a patient’s response to anti-cancer therapeutics. In this review, we highlight the impact of altered glycosylation on angiogenic signaling and endothelial cell adhesion, and the critical consequences of these changes in tumor behavior. Introduction A thin layer of endothelial cells lines the interior surfaces of blood and lymphatic vessels, releases signals that control vascular relaxation and contraction, secretes factors that regulate blood clotting, and plays an important role in immune function and platelet adhesion. The relationship between tumor cells and endothelial cells is complex. To sustain rapid cellular proliferation and a high metabolic rate, solid tumors develop a vascular network that fulfills tumors' need for nutrients and oxygen and also aids in the removal of metabolic waste products [1]. In rapidly growing tumors, an angiogenic switch, often triggered by hypoxia-induced expression of vascular endothelial growth factor (VEGF) and other angiogenesis-inducing molecules, causes normally quiescent endothelial cells to proliferate and sprout [2][3][4][5][6]. In the tumor microenvironment (TME), dysregulation of angiogenic signals contributes to the development of hyper-permeable and highly heterogeneous blood vessels, and also aids in entry of tumor cells into (intravasation) and out of (extravasation) the blood stream via trans-endothelial migration [7]. Tumor-associated endothelial cells often exhibit decreased adhesion between neighboring cells and with the extracellular matrix, with profound consequences relevant to the development and treatment of cancer. Consequently, abnormally organized and leaky tumor blood vessels contribute to tumor angiogenesis, inflammatory cell infiltration, metastasis, and the development of resistance to chemotherapeutic agents in tumors of diverse origin [2,[7][8][9][10][11][12][13]. Glycoprotein and Glycosaminoglycan Synthesis and Recognition by Lectins The luminal surface of endothelial cells contains an extensive network of membrane-bound glycoproteins and proteoglycans, called the endothelial glycocalyx. There is an increasing awareness that the endothelial glycocalyx plays a critical role in vascular physiology and pathology, especially with relation to tumor angiogenesis and interactions between endothelial cells and tumor cells that mediate trans-endothelial migration. Protein Nand O-glycosylation, as well as glycosaminoglycan (GAG) synthesis, involve multiple enzymatic steps that occur co-and/or post-translationally, are influenced by enzyme and substrate levels, and result in considerable structural diversity [22][23][24][25]. Tumor-associated endothelial cells are exposed to a hypoxic, hyper-glycolytic, and pro-inflammatory milieu [26][27][28]. Endothelial cell glycosylation is supremely sensitive to hypoxia and inflammation [29][30][31][32]. Tumor-associated endothelial cells adopt a hyper-glycolytic metabolic state [27,33,34]. 
The enzyme glutamine:fructose-6-phosphate amidotransferase (GFAT) converts fructose-6-phosphate into glucosamine-6-phosphate, and in so doing shunts glycolytic intermediates into the hexosamine biosynthetic pathway (HBP), linking metabolism and glycosylation [35][36][37]. Glucosamine-6-phosphate is the common precursor to all amino sugars used in glycoprotein synthesis [38,39]. Ultimately, changes in endothelial cell glycosylation alter protein interactions and function at the plasma membrane [36,37,[40][41][42][43]. Protein glycosylation changes dramatically in cancer and has been studied extensively in tumor epithelial cells, where it regulates cellular adhesion, cell-matrix interactions, and signaling via receptor tyrosine kinases (RTKs) [40,[44][45][46][47][48][49]. In fact, ST6Gal-I, responsible for the attachment of sialic acid to glycoproteins via α2,6-linkage, regulates transcription factors involved in stem cell maintenance [50]. However, until recently there has been little understanding of how changes in endothelial cell glycosylation in the tumor microenvironment influence endothelial barrier function, adhesion, cell-matrix interactions, and cell signaling. N-Glycosylation N-glycosylation occurs on asparagine (N) residues within the NXS/T motif, where any amino acid X except proline (X ≠ P) follows asparagine, and serine or threonine (S/T) occupies the third position. N-glycosylation is a complex, multi-step co- and/or post-translational process that is initiated by the transfer of N-acetylglucosamine-1-phosphate (GlcNAc-1-P) to a dolichol-phosphate on the cytoplasmic face of the endoplasmic reticulum (ER) membrane by GlcNAc-1-phosphotransferase (encoded by the human DPAGT1 gene, yeast ALG7) [23]. Notably, tunicamycin, an analog of uridine diphosphate-N-acetylglucosamine (UDP-HexNAc), inhibits this step and has been used widely to study N-glycosylation. After this initial step, an additional N-acetylglucosamine (GlcNAc) and five mannose (Man) residues are added sequentially. Then, the entire dolichol-linked glycan is flipped into the ER lumen, where four additional Man residues and three glucose residues are added. This precursor, assembled from 14 monosaccharides, is then transferred by the multi-subunit enzyme oligosaccharyltransferase (OST) to an asparagine residue within the NXS/T motif. The nascent glycoprotein next interacts with chaperones to ensure quality control. Glycoproteins that 'pass' this quality-control step proceed through multiple steps and are trimmed during protein folding to remove glucose. Further trimming and processing occur in the ER and Golgi and produce a heterogeneous set of N-linked glycans. Mucin-Type O-Glycosylation Mucin-type O-glycans, also called O-GalNAc glycans, are initiated by the transfer of N-acetylgalactosamine (GalNAc) by polypeptide GalNAc-transferases (ppGalNAcTs) to specific Ser and Thr residues on O-glycosylated proteins. There are 20 human polypeptide GalNAc-transferase genes. This process occurs in the Golgi apparatus. O-glycosylated regions of proteins are frequently rich in serine, threonine, and proline residues. O-glycans are commonly found on mucins, a class of glycoproteins that may each contain hundreds of such O-glycans, but other proteins can also be O-glycosylated, including membrane-associated glycoproteins such as P-selectin glycoprotein ligand 1 (PSGL-1). While all mucin-type O-glycans start with O-GalNAc, there is considerable structural variability. 
There are four common O-glycan core structures, and additional rare core structures have also been elucidated [22]. O-GlcNAc In contrast to the complex glycans on cell surface glycoproteins, O-linked β-N-acetylglucosamine (O-GlcNAc) modification of Ser and Thr residues occurs on intracellular proteins and is involved in signaling and the regulation of enzyme activity [51]. Two key enzymes, O-GlcNAc transferase (OGT) and O-GlcNAcase (OGA), catalyze the addition and removal of O-GlcNAc, respectively, from intracellular proteins. O-GlcNAc modification of endothelial nitric oxide synthase (eNOS) results in inactivation of the phosphorylated enzyme in the context of diabetes [52,53]. In addition, elevated flux through the HBP leads to increased protein modification by O-GlcNAc and impairs angiogenesis, potentially by inhibiting Akt signaling in endothelial cells [54]. Decreasing levels of OGT in prostate cancer cells diminished expression of VEGF and reduced endothelial tube formation in vitro, and regulation of this process involved FOXM1 [55]. Glycosaminoglycans Glycosaminoglycans (GAGs) are long unbranched polysaccharides with repeating disaccharide units that are a major component of the extracellular matrix (ECM). They undergo sulfation at distinct positions and also undergo epimerization of uronic acid, resulting in the generation of a diverse set of molecules with distinct physical and biological properties. With the exception of hyaluronan, GAGs are covalently linked via serine residues to GAG-bearing proteins (proteoglycans) that reside on the cell surface and within the ECM. Six classes of GAGs exist, including chondroitin sulfate, dermatan sulfate, heparan sulfate, heparin, hyaluronan, and keratan sulfate. Hyaluronan (HA), a high-molecular-weight, non-sulfated glycosaminoglycan, is synthesized at the cell surface and is subsequently incorporated into the extracellular matrix [24,45]. Glycan-Binding Proteins Lectins are a class of glycan-binding proteins that recognize carbohydrate substructures within larger branched carbohydrates. Lectins are notable for their low-affinity interactions, which mediate "rolling" in of leukocytes and cancer cells when they interact with glycans on the endothelial cell surface. In this review, we will discuss two major classes of lectins, which are categorized by the substructures they bind. The first, galectins, recognize glycans with exposed galactose residues [56]. The second, selectins, are a family of calcium-dependent cell adhesion molecules that recognize sialylated, fucosylated carbohydrate ligands with low affinity [57]. Selectins are upregulated in inflammatory conditions to recruit platelets and leukocytes to sites of injury or infection, but may also be co-opted in the context of cancer to facilitate tumor cell adhesion to endothelial cells. In addition to lectins, a broad array of molecules, including many growth factors, have the ability to bind with carbohydrate moieties on glycoproteins and glycosaminoglycans, and in so doing they mediate cell-cell and cell-matrix interactions. The modification of membrane glycoproteins by Nand O-glycans, cytoplasmic O-glycosylation, the production and deposition of glycosaminoglycans, and the recognition of motifs on glycoconjugates by lectins, have been characterized extensively in the epithelial context. Next, we will discuss specific glycoproteins that are involved in endothelial cell adhesion, and how carbohydrate modifications may impact the function of these molecules. 
Endothelial Cell Adhesion Molecules Much of what is known about the impact of altered glycosylation on cell-cell and cell-matrix adhesion is derived from studies of aberrant glycosylation in tumor cells [58][59][60]. For example, increased β-1,6 branching and increased sialylation on N-linked glycans that occurs during tumorigenesis lessens cell-cell adhesion [58,61]. In contrast, knowledge of the impact of altered glycosylation on endothelial adhesion molecules is primarily based on the interaction of endothelial cell adhesion molecules with immune cells in the context of acute inflammatory conditions. Glycans on the surface of leukocytes, and to a lesser extent, glycans on the surface of endothelial cells, play a crucial role in leukocyte recruitment. Glycosyltransferases, including α1,3 fucosyltransferases, α2,3 sialyltransferases, core 2 N-acetylglucosaminyltransferases, β1,4 galactosyltransferases, and polypeptide N-acetylgalactosaminyltransferases, are involved in the synthesis of selectin ligands that mediate leukocyte rolling by binding to selectins [62]. Major glycoconjugates and lectins involved in endothelial cell adhesion and signaling are shown in Figure 1. ICAM-1 Intercellular adhesion molecule 1 (ICAM-1/CD54) is involved in trans-endothelial migration of leukocytes and serves as a ligand for integrins on leukocytes. One study showed that activated endothelial cells expressed two forms of ICAM-1, the more abundant of which displayed N-glycans modified with α2,6-linked sialic acids, while the less abundant form displayed primarily high-mannose type glycans. Inhibition of α-mannosidase to force expression of high-mannose N-glycans led to increased monocyte rolling and adhesion, as compared with ICAM-1 displaying more processed N-glycans, suggesting that the high-mannose glycans could serve as leukocyte ligands. However, in cells with ICAM-1 displaying high-mannose glycans, interactions with the actin cytoskeleton were lost, suggesting that the glycosylation status and adhesion properties of ICAM-1 are modulated by inflammation [63]. Endothelial Selectins: E-Selectin (ELAM) and P-Selectin E-selectin is an endothelial-specific lectin that recognizes glycans containing the sialyl-Lewis x substructure (SLex; NeuAc α2,3Gal β1,4(Fuc α1,3)GlcNAc). Its expression is activated by cytokines, and it is involved in recruitment of neutrophils to sites of inflammation [64,65]. Aberrant expression of glycans bearing the SLex motif in multiple types of cancer, including colon cancer [66,67] and prostate cancer [68][69][70], has been implicated in facilitating tumor cell adhesion to endothelial cells, and in facilitating tumor cell metastasis via interaction with selectins [71,72]. P-selectin is expressed in both platelets and activated endothelial cells. In endothelial cells, P-selectin is stored in Weibel-Palade bodies and is rapidly released and translocated to the cell surface in response to inflammation. P-selectin glycoprotein ligand-1 (PSGL-1/CD162) is a ligand of P-selectin that is expressed on leukocytes and contains mucin-type O-glycans. Interestingly, P-selectin-deficient mice show a decreased rate of tumor growth and decreased metastasis compared to wild-type mice [73]. This can be explained in part by the fact that tumors frequently express glycosylated ligands with sialyl-Lewis x structures, bind platelets and leukocytes via P-selectin, and use these interactions to initiate contact with endothelial cells at distant sites and extravasate [74]. 
Some tumor cells express P-selectin and initiate this process in a platelet-independent manner [75]. PECAM (CD31) Platelet endothelial cell adhesion molecule (PECAM) is involved in cell adhesion, mechanical stress sensing, and angiogenic signaling, and also has an anti-apoptotic role [77]. It is a major component of intercellular junctions in endothelial cells. In addition, it has been shown to have lectin-like properties and to recognize α2,6-sialic acid, a property involved in the regulation of homophilic interactions [78,79]. PECAM glycans bearing α2,6-sialic acid are essential for endothelial tube formation, and removal of these sialic acid residues disrupts endothelial tube formation [80,81]. Several N-glycans are located at the homophilic binding interface [82], suggesting that α2,6-sialylated glycans modulate homophilic PECAM-dependent interactions. A decrease in α2,6-sialylation reduces the levels of PECAM at the cell surface and increases its role in apoptosis, and may regulate interactions between PECAM, VEGFR2, and integrin-β3 [77,83]. Therefore, α2,6-sialylated glycans appear to be critical for endothelial cell survival, as they stabilize membrane proteins, leading to their retention at the cell surface, and thereby impact pro-angiogenic signaling. IGPR-1 The Ig-containing and proline-rich receptor-1 (IGPR-1) is a newly identified Ig-CAM that is uniquely expressed in humans and other higher mammals, but not in rodents [84]. IGPR-1 is expressed in endothelial cells and regulates endothelial barrier function and angiogenesis [85]. More importantly, IGPR-1 expression is elevated in various tumors, including colon cancer [86]. Although it is heavily glycosylated [84], the role of glycosylation in IGPR-1 function has not been studied. VE-Cadherin Vascular endothelial cadherin (VE-cadherin/CD144) is an endothelial-specific adhesion molecule that is an essential player in the formation of endothelial cell-cell adherens junctions and controls vascular permeability. Analyses of VE-cadherin N-glycans indicate that it bears predominantly sialylated biantennary and hybrid-type glycans, and it may also be O-mannosylated [87,88]. Sialic acid-bearing glycans on VE-cadherin are likely important for maintenance of endothelial cell adherens junctions [89]. As we have noted here, several of the glycoproteins and glycan-binding proteins discussed above, including ICAM-1, E-selectin, P-selectin, VCAM-1, and PECAM, are known to initiate specific adhesive interactions only when modified by (or binding to) specific glycan substructures. As a result, changes in glycosylation alter the functions of these proteins. Below, we will discuss factors that influence endothelial glycosylation and, in so doing, alter endothelial cell adhesion. Factors that Influence Endothelial Glycosylation Endothelial glycosylation is evolutionarily conserved in both developmental and inflammatory processes. Yano et al. (2007) examined the endothelium of hagfish to understand evolutionarily conserved features of the endothelium, using the lectins LCA (Lens culinaris agglutinin) and HP (Helix pomatia), which bind carbohydrate structures containing α-linked mannose and α-N-acetylgalactosamine, respectively, to characterize differences in glycosylation between endothelial cells in different vascular beds. 
Their analyses revealed that vascular bed-associated differences in glycosylation facilitated histamine-induced adhesion of leukocytes in capillaries and post-capillary venules but not in the aortic endothelium or arterioles, suggesting a link between inflammation and altered glycosylation [92]. Using similar methods, Jilani et al. (2003) demonstrated that lectin affinities differed between the vasculature of chicken embryos at early and late stages of development, suggesting that endothelial glycosylation plays a role in embryonic development [93]. These patterns are likely relevant to the biology of human cells as well. The inflammatory cytokines TNF-α and interleukin-1, as well as bacterial lipopolysaccharide, increase expression of ST6Gal-I and also increase the binding of lectins with affinity for sialic acid to the endothelium. E-selectin, ICAM-1, and VCAM-1 were reported as glycoprotein substrates for ST6Gal-I [94]. Scott et al. demonstrated that inflammatory stimuli including TNF-α, LPS, and IL-1β induce changes in expression of specific endothelial glycoproteins involved in monocyte adhesion, including ICAM-1 and VCAM-1, as well as expression of enzymes involved in N-glycan processing, including α-mannosidase, which catalyzes the removal of two mannose residues from GlcNAcMan5GlcNAc2, the committed step in the synthesis of complex N-linked glycans [95]. These investigators showed that endothelial responses to inflammatory stimuli vary between vascular beds [96]. Within the tumor microenvironment, inflammatory stimuli, hypoxia, and tumor-secreted signaling factors alter expression of endothelial cell surface carbohydrates by impacting the underlying expression of enzymes involved in carbohydrate synthesis [16,[95][96][97]. Table 1 shows the reported impact of various cytokines and hypoxia on endothelial glycosylation. Pro-inflammatory signals including IFN-γ and IL-17 increase the expression of α2,6-linked sialic acid-containing carbohydrate epitopes on endothelial cell surface glycoproteins. In contrast, the immunosuppressive cytokines IL-10 and TGF-β1 reduce α2,6-linked sialic acid-containing carbohydrate epitopes on N-linked glycans [16]. In addition, tumor necrosis factor-α (TNF-α) and interleukin-1β (IL-1β) alter endothelial surface N-glycosylation, and this correlates with increased monocyte adhesion [76,95]. Critically, immune-mediated mechanisms that alter glycosylation and the expression of glycan-binding proteins have been shown to lead to acquired resistance to anti-angiogenic therapies via changes in the interaction with glycan-binding proteins [98]. The impact of hypoxia on endothelial cells in the tumor microenvironment has been extensively studied [99][100][101]. Hypoxia-inducible factor (HIF-1) is a heterodimeric transcription factor composed of the subunit HIF-1β/aryl hydrocarbon receptor nuclear translocator (ARNT) and either HIF-1α or HIF-2α. Under normoxic conditions, prolyl-hydroxylase (PHD) enzymes including PHD2 hydroxylate HIFα, leading to HIF-1 inactivation, followed by its ubiquitination by the von Hippel-Lindau tumor suppressor (pVHL), an E3 ubiquitin ligase, and subsequent degradation [102][103][104]. However, under hypoxic conditions such as those in the tumor microenvironment, PHD2, unable to bind oxygen, no longer hydroxylates HIFα, and this results in its accumulation [105]. HIF-1α and HIF-2α regulate different and, in some cases, opposing sets of genes [106,107]. 
While there is some evidence that HIF-1 signaling alters glycosylation, the extent of its influence on endothelial glycosylation, the potential differential roles of HIF-1α and HIF-2α, and the physiological impact of the resulting changes in glycosylation are unclear. In addition to hypoxia, the role of metabolism in endothelial cell glycosylation is an intriguing, though largely unexplored, subject. It has been reported that the glycolytic activator PFKFB3 regulates endothelial cell rearrangement during vessel sprouting, in part by reducing intercellular adhesion [108]. The role of glycosylation in reducing intercellular adhesion should be further investigated in this context. Abnormal endothelial cell glycosylation and increased expression of lectins, which bind glycan epitopes, aid the development of resistance to anti-angiogenic cancer therapeutics [14,109]. Further work is needed to understand the impact of these changes in both acute and chronic inflammation. Hypoxia, a common feature of the tumor microenvironment, also appears to alter endothelial cell glycosylation, leading to the production of glycoproteins bearing carbohydrate structures with less α2,6-linked sialic acid, greater branching of β1,6 N-glycan structures, and elongation with poly-LacNAc residues [16]. Culturing endothelial cells in tumor-conditioned medium from the colon carcinoma cell line HT29 induced increased β1,6-GlcNAc branching of endothelial cell glycans, suggesting that factors secreted by tumor cells also influence glycosylation in their environment [32]. Inflammatory cues, hypoxia, and tumor-secreted factors, by triggering changes in endothelial surface carbohydrate structures, may alter angiogenic signaling by modifying the properties of endothelial glycoproteins that are key mediators of signaling and adhesion. To improve our understanding of the role these changes play in tumor progression, metastasis, and treatment, further study is required in animal models and human tissue. Glycosylation and VEGFR2 Pro-Angiogenic Signaling Vascular endothelial growth factors (VEGFs), first identified based on their role in vascular permeability, bind to extracellular matrix proteoglycans (specifically, heparan sulfate proteoglycans, HSPGs), resulting in their sequestration in, and controlled release from, the extracellular matrix in cases of tissue damage or remodeling by matrix metalloproteinases. Upon release, they are available to promote angiogenesis to repair tissue, although this process is dysregulated in the tumor microenvironment. Additional factors, including fibroblast growth factors, and angiogenic inhibitors such as thrombospondin and platelet factor 4, also interact with and are in some instances stabilized by HSPGs [110]. Using proximity ligation assays in primary brain endothelial cells, Xu et al. (2011) demonstrated that heparan sulfate and VEGFR2 interact directly, and that the number of heparan sulfate-VEGFR2 complexes increased in response to stimulation with VEGF165 and VEGF121 [111]. HSPGs also bind gremlin (Drm) and alter its activation of VEGFR2 [112]. Most endothelial surface proteins bear N- and/or O-linked glycans. Multiple adhesion molecules bind glycoconjugates expressed on the surfaces of endothelial cells [113]. The cell-surface receptor tyrosine kinase VEGFR2 is involved in pro-angiogenic signaling in endothelial cells and plays a critical role in tumor angiogenesis. 
The extracellular domain of VEGFR2 is highly modified by N-linked glycans [114], and glycans, especially α2,6-linked N-glycans at site N247 on Ig-like domain 3 near the ligand binding pocket, influence ligand-dependent signaling [17]. Immune-mediated mechanisms that alter glycosylation and influence endothelial cell signaling are implicated in acquired resistance to anti-angiogenic therapies, highlighting the convergence of immunosuppressive and pro-angiogenic signaling in the tumor microenvironment. Chiodelli et al. (2017) also found that VEGFR2-associated NeuAc plays an important role in modulating the VEGF/VEGFR2 interaction, pro-angiogenic activation of endothelial cells, and neovascularization [14]. Galectin-3 (Gal-3) is able to induce angiogenesis in a glycan-dependent manner by binding to glycoproteins on the surface of endothelial cells [15]. VEGFR2 N-glycans are involved in retention of the receptor at the endothelial cell surface via interaction with Gal-3 [115]. Rabinovich et al. studied anti-VEGF refractory tumors and found that glycans on endothelial surface glycoproteins, including VEGFR2, were remodeled to selectively bind galectin-1 (Gal-1) expressed by the tumor cells. Endothelial cells displayed high levels of β1,6-GlcNAc-branched N-glycans and low levels of α2,6-linked sialic acid in anti-VEGF refractory tumors compared to tumors that were sensitive to anti-VEGF treatment. Binding of Gal-1 to VEGFR2 resulted in VEGF-independent activation of the receptor [16]. The group also found that hypoxia upregulates expression of galectin-1 (Gal-1) via HIF-1-dependent and -independent mechanisms. In Kaposi's sarcoma, activation of the transcription factor nuclear factor κB (NF-κB) by reactive oxygen species resulted in higher levels of Gal-1 expression that promoted angiogenesis and tumorigenesis [116]. In another study by the same group, HIF-1α was found to increase Gal-1 expression in colorectal cancer (CRC) cells, and the group identified two hypoxia-responsive elements upstream of the transcriptional start site of the Gal-1 gene that are essential for HIF-1-mediated galectin-1 expression [16]. Tumor microenvironment-dependent changes in endothelial cell glycosylation are summarized in Figure 2. Glycosaminoglycans in Tumor Angiogenesis and Metastasis Within the ECM, GAGs play a role in regulating migration of endothelial cells, providing a scaffold that guides endothelial cell tube formation, and stabilizing neovasculature. An excellent review by Oliveira-Ferrer et al. describes the varied roles of GAGs in metastasis [117]. Here, we will primarily discuss the role of GAGs as they relate to endothelial cell function (or dysfunction) in cancer. Heparan Sulfate Proteoglycans (HSPGs) HSPGs are a well-studied group of proteins that bear long heparan sulfate chains consisting of 50-200 disaccharide repeats with variable patterns of sulfation, and reside both on the endothelial cell surface and within the extracellular matrix. HSPG modifications including sulfation create binding sites for various ligands, including adhesive proteins, chemokines, growth factors and growth factor-binding proteins, proteases and protease inhibitors, and morphogens [118][119][120][121][122]. Critically, these interactions are sensitive to the position and linkage of sulfate modifications. Cell-surface HSPGs such as the syndecans and glypicans, together with the pericellular HSPG perlecan, are involved in extracellular matrix assembly and maintenance. 
Both VEGFR2 and VEGF (including VEGF165 but not VEGF121) interact with heparan sulfate, and ligand stimulation has been reported to increase heparan sulfate-VEGFR2 complex formation and vascular permeability [111]. The VEGF HS-binding domains encoded by exons 6 and 7 are responsible for the interaction of VEGF ligands with HS and result in the sequestration of VEGF in the extracellular matrix, from which it may subsequently be released by proteases and heparanase during the ECM degradation associated with angiogenesis [123][124][125]. The ability of VEGF165 to bind HS is partially controlled by its interaction with endothelial transglutaminase-2 [126]. Additional growth factors, including PDGF-B, contain HS-interacting domains [127,128]. TGF-β isoforms also bind HS, and HS plays a role in the formation of cytokine gradients [129,130]. By regulating heparan sulfate modifications on endothelial cells, heparan sulfatases affect tumor angiogenesis in a number of contexts, including ovarian and breast cancer. Downregulation of the endosulfatases responsible for removal of 6-O sulfate from HS in response to hypoxia, as well as their downregulation in tumor cells, results in the presence of more highly sulfated forms of HS, thus increasing growth factor binding and downstream signaling [131]. Chondroitin Sulfate (CS) Chondroitin sulfate (CS), composed of repeating units of the disaccharide GalNAc-GlcA, is also variably sulfated in a tissue-specific manner by carbohydrate sulfotransferases. Expression of specific sulfated forms of CS on the surface of tumor cells facilitates their interaction with platelets and endothelial cells by creating ligands that bind P-selectin, e.g., in breast cancer [132]. Moreover, the sulfation pattern of CS on versican appears to be critical for interaction with L-selectin, P-selectin, and CD44, molecules involved in endothelial cell adhesion and/or tumor angiogenesis [133]. However, the full role of such modifications in tumor angiogenesis remains to be determined. Hyaluronan (HA) Hyaluronan (HA) is a negatively charged, nonsulfated GAG. Unlike other GAGs, hyaluronan is not covalently linked to a core protein. Rather, it is deposited in the extracellular matrix, where it may interact with ECM proteins and other GAGs. In healthy tissue, the coordinated expression and activity of HA synthases and hyaluronidases maintain homeostasis. In tumors, low-molecular-weight HA is often present at elevated levels, is associated with inflammatory conditions [134], and contributes to tumor angiogenesis by impairing cellular adhesion [135,136]. HA also seems to play a role in tumor-associated macrophage trafficking to the tumor stroma [137]. Endothelial Glycosylation Regulates Tumor Cell Trans-Endothelial Migration The binding of selectins (E-selectin, P-selectin) and galectins expressed on endothelial cells to glycosylated epitopes on tumor cells, and of tumor-expressed lectins to endothelial glycans, mediates a process of rolling followed by stable heterotypic adhesion. This process mirrors the process through which platelets and leukocytes interact with the endothelium. The glycan-binding proteins on endothelial cells recognize glycan substructures on platelets, leukocytes, and circulating tumor cells. Conversely, L-selectin expressed on leukocytes (specifically, T cells) also recognizes glycan structures on endothelial cells, allowing leukocytes to attach to specific endothelial beds based purely on the glycans expressed on the endothelial surface [138]. 
Sulfated glycans also play a role in this process in the lymphatic endothelium. There is evidence that these interactions are regulated by the spatial and temporal expression of glycosyltransferases and sulfotransferases in endothelial cells in a bed-specific manner, and by inflammatory signals. Galectin-3 (Gal-3) expressed on endothelial cells is a major actor in tumor metastasis. Gal-3 is the only human lectin of the 'chimera' galectin subtype. It can exist as a monomer, or form multivalent complexes of up to five Gal-3 molecules via its non-lectin domain, allowing it to facilitate the interaction of multiple glycoproteins. By binding the T antigen on MUC-1, Gal-3 promotes adhesion of tumor cells to the endothelium in breast and prostate cancer [139][140][141]. Circulating Gal-3 can also increase tumor cell adhesion to and migration across the endothelium by interacting with MUC1 on tumor cells, leading to exposure of additional glycosylated ligands, including CD44, that bind E-selectin on endothelial cells [142]. Under flow conditions, highly metastatic MDA-MB-435 human breast carcinoma cells that express high levels of T antigen and Gal-3 showed increased adhesion to endothelial cells compared to similar non-metastatic cells [143]. Glycan-mediated intravasation, rolling, and extravasation of tumor cells contribute to tumor metastasis (Figure 3). For example, in colon and prostate cancers, glycans with the SLex motif (a tetrasaccharide containing both sialic acid and α1,3-linked fucose) are involved in tumor metastasis [66,67]. Forced reduction in the expression of α1,3 fucosyltransferases reduced the incidence of prostate cancer in mice [68][69][70]. As previously noted, this can be explained by tumor cell adhesion to endothelial cells via interaction with selectins [71,72]. In patients with multiple myeloma, high expression of ST3Gal6, which catalyzes the α2,3-linked attachment of sialic acid residues to glycoproteins, correlates with lower overall survival. Knockdown of ST3GAL6 in multiple myeloma cells diminished the cells' ability to undergo trans-endothelial migration and reduced their ability to roll on P-selectin in vitro [144]. Toward Therapeutic Strategies that Target Endothelial Glycosylation Several anti-cancer therapeutic strategies that target the tumor vasculature have been proposed, including (a) the inhibition of tumor angiogenesis and (b) treatments that promote blood vessel normalization to enhance delivery of chemotherapeutic agents and reduce metastasis [2,[145][146][147][148]. In clinical trials, anti-angiogenic therapies have shown promise in patients with colorectal, lung, breast, and other cancers, but resistance to these therapies often develops rapidly [98,145,[149][150][151]. Additional drug targets that aid in vascular normalization are being investigated [146]. There remain gaps in our understanding of tumor-associated endothelial cell pathobiology, including how tumor microenvironment-induced changes in the glycosylation of endothelial adhesion and signaling molecules contribute to altered angiogenesis. Addressing this gap in knowledge could lead to the design and delivery of pharmacological agents that aid in normalizing blood vessels, prevent metastasis, and increase responsiveness to targeted chemotherapeutics. A number of approaches that target protein glycosylation attempt to address this gap. Therapeutic targeting of glycan-mediated processes has been explored, including the use of glycomimetics [152]. 
Partial inhibition of oligosaccharyltransferase (OST), the enzyme that initiates N-linked glycosylation, is an approach pioneered by Contessa et al. [153,154]. Among the molecular targets of this strategy are receptor tyrosine kinases such as EGFR, which are highly N-glycosylated. The approach is currently being tested in a number of pre-clinical models [155]. While it has not been tested in the context of angiogenesis, it is notable that VEGFR2 and additional RTKs involved in pro-angiogenic signaling are highly N-glycosylated, and therefore might also be susceptible to targeting by this drug, potentially in combination with other approaches. Another breakthrough involves the development of the fucosyltransferase inhibitor 2-fluorofucose (2-FF) by Okeley et al. (2013) [156]. Many selectin ligands are fucosylated, and administration of 2-FF could potentially block these interactions and attenuate trans-endothelial migration of tumor cells. In pre-clinical models, 2-FF inhibited leukocyte-endothelium interactions [157], inhibited proliferation, migration, and tumor formation by HepG2 liver cancer cells [158], and reduced fucosylated E-selectin ligand expression in human invasive ductal carcinoma [159]. Multiple fucosyltransferases in humans catalyze the attachment of fucose to glycans via specific linkages. It is likely that the development of fucosyltransferase-specific inhibitors will ultimately be the most successful strategy, as this will enable targeting of specific fucose linkages involved in metastasis while minimizing off-target effects. Thioglycosides are a class of compounds that are currently being tested as glycosylated decoys to reduce selectin-dependent leukocyte adhesion [160]. It remains to be seen whether a similar approach might be applied in the context of cancer treatment. Additionally, targeting selectin-mediated cell adhesion to endothelial cells may represent an opportunity to control tumor immunity [161]. As discussed previously, heparanase is elevated in multiple types of cancer and promotes tumor invasion, angiogenesis, and metastasis. Heparanase inhibitors that prevent the release of heparan sulfate side chains have been tested in pre-clinical and clinical settings, and reduce tumor metastasis by maintaining ECM integrity and partially restoring vascular function [162][163][164][165]. Conclusions Tumor-associated endothelial cells are significantly influenced by signals from nearby tumor cells, stromal cells, and infiltrating immune cells. Glycans on endothelial adhesion molecules including ICAM-1, VCAM-1, and PECAM, and glycan-binding proteins (lectins) expressed on the surfaces of endothelial, immune, and cancer cells, alter the adhesive properties of endothelial cells and facilitate (or disfavor) immune and tumor cell infiltration. In addition, altered endothelial cell glycosylation in the tumor microenvironment has been shown to impact VEGFR2-mediated angiogenic signaling. Further investigation will be needed to understand how changes in the tumor-associated endothelial cell glycosylation machinery, driven by cues from the tumor microenvironment, dysregulate endothelial cell signaling and adhesion and contribute to the formation of abnormal and leaky tumor blood vessels. Since glycosylation is not template based, different sites within the same protein may be occupied by different glycan structures, and a single protein may have many glycoforms with different biological functions. 
Major barriers to progress in this field have included (a) the technical challenge of analyzing glycan heterogeneity, (b) the low abundance of plasma membrane receptors and adhesion molecules, and (c) the complexity of linking non-template-based protein glycosylation status to biological function. Despite these challenges, significant progress has been made towards elucidating the roles of normal and aberrant glycosylation in endothelial processes, and we expect that further advances will be made in these areas in the years ahead. We predict that recent advances in mass spectrometry-based methods for the characterization of glycoconjugates, in combination with gene expression analyses in model systems and tissue, CRISPR (clustered regularly interspaced short palindromic repeats)-Cas9 gene editing, and the application of fluorophore-conjugated lectins for live cell and tissue imaging, among others, will enable the establishment of a clear relationship between changes in glycan structures on the cell surface and altered endothelial function in tumor-associated endothelial cells. The knowledge gained in this exciting and emerging field of biology can lead to the development of a new class of therapeutics to combat cancer and other diseases. Funding: The authors acknowledge the support of NIH grants F32 CA196157 (to KBC), R21CA191970 and R21CA193958 (to NR), and P41 GM104603 (to CEC). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Conflicts of Interest: The authors declare no conflict of interest.
7,453.6
2019-06-01T00:00:00.000
[ "Medicine", "Biology" ]
Serum Dopamine Level in Acute Murine Toxoplasmosis Toxoplasmosis is a globally distributed parasitic zoonotic disease transmitted by the protozoan Toxoplasma gondii. This infection in its chronic form can cause changes in its host's specific behavior and is also associated with the development of neuropsychological symptoms in humans. Changes in neurotransmitter levels, especially dopamine, have been identified as a factor in behavior change in the infected host. This study aimed to evaluate serum dopamine levels in acute murine toxoplasmosis. In this study, 50 mice infected with Toxoplasma were studied in 5 separate groups, and ten healthy mice were considered as a control group. For five consecutive days after parasite injection, blood sampling and serum isolation were performed daily from one of the groups. Serum dopamine levels were measured by HPLC. Statistical analysis showed that serum dopamine on the first to the fourth day after parasite inoculation was the same as in the control group, but on the fifth day it began to increase. The present study's results indicate that dopamine production in mice infected with Toxoplasma gondii increases from day five after infection. This result suggests that in acute toxoplasmosis dopamine production is low, and that progression toward chronic disease increases dopamine production. Introduction Toxoplasmosis is a common parasitic disease among humans and animals. About one-third of the world's population is chronically infected with this protozoal infection (Robert-Gangneux and Dardé 2012; Saadatnia and Golkar 2012). Felidae are the final hosts, in which the sexual parasite stages take place and the infective stage (oocyst) is produced in the enterocytes of their small intestine (Innes 2010). Birds and a wide variety of mammals, including humans, are intermediate hosts and carriers of parasitic tissue cysts in their brains and muscles (Dubey 2020). Humans become infected by eating meat contaminated with tissue cysts, by consuming water or vegetables contaminated with oocysts, and congenitally (Asgari et al. 2011; Dubey 2020; Innes 2010). Infection is often asymptomatic but is especially important in immunosuppressed individuals and in congenital forms (Robert-Gangneux and Dardé 2012; Shiadeh et al. 2020). Chronic toxoplasmosis was initially thought to have no clinical significance, but in recent decades the potential role of Toxoplasma brain cysts in causing mental disorders in humans has become controversial (Flegr 2013a). Since the 1950s, researchers have drawn attention to the relationship between Toxoplasma and mental disorders (Torrey and Yolken 2003). The high prevalence of toxoplasmosis, the high affinity of Toxoplasma parasites for the brain in the chronic phase, and the significant concurrence between anti-Toxoplasma antibodies and mental disorders have raised researchers' suspicions about a causal relationship between the parasite and mental diseases (Del Grande et al. 2017; Fekadu et al. 2010; Pearce et al. 2012; Xiao et al. 2018). Toxoplasma can manipulate host brain cells and consequently induce behavioral changes in its host (Boillat et al. 2020). It seems that the parasite's ability to induce behavioral changes in the intermediate host leads to its predation by the final and other intermediate hosts and facilitates parasite transmission (Boillat et al. 2020; Hammoudi and Soldati-Favre 2017). The main suspect for behavioral change in latent toxoplasmosis in intermediate hosts is dopamine (Flegr 2013a;b). Dopamine is made in mammals by the adrenal glands as well as by dopaminergic cells in the brain. 
This catecholamine is made by removing a carboxyl group from its precursor levodopa (L-dopa). L-dopa, in turn, is synthesized from L-tyrosine by the enzyme tyrosine hydroxylase (Berke 2018; Iversen and Iversen 2007). Two aromatic amino acid hydroxylase (AAH1 and AAH2) enzymes have been discovered in Toxoplasma. These enzymes are responsible for catalyzing the conversion of phenylalanine to tyrosine and of tyrosine to L-dopa (Gaskell et al. 2009). The parasite synthesizes L-dopa to build its oocyst wall (Wang et al. 2017). Several studies have been performed on the role of the Toxoplasma cyst stage in altering neurotransmitter levels, especially dopamine, and subsequently inducing behavioral changes (Johnson and Johnson 2020). Toxoplasma has been shown to change testosterone levels in addition to neurotransmitters (Kaňková et al. 2011; Lim et al. 2013). Outside the central nervous system, the functions of dopamine are not clear (Eisenhofer et al. 2004). Various roles have been proposed for dopamine, including acting as a local paracrine messenger, vasodilation (at moderate concentrations), promoting the excretion of sodium and urine, reducing insulin production, reducing gastrointestinal motility to protect the intestinal mucosa, and reducing the activity of lymphocytes (Bucolo et al. 2019; Carey 2001; Eisenhofer et al. 2004; R Buttarelli et al. 2011; Sarkar et al. 2010). In 2009, Gaskell et al. showed that the genes encoding the AAH1 and AAH2 enzymes are activated in the chronic phase, when tachyzoites convert to bradyzoites (Gaskell et al. 2009). On the other hand, Carruthers et al. demonstrated that the Toxoplasma parasite could effectively contribute to the development of schizophrenia, even in the acute phase (Carruthers and Suzuki 2007). Strobl et al. observed an increase in Toxoplasma tachyzoite proliferation in human fibroblast cell culture media after adding dopamine (Strobl et al. 2012). With these observations in mind, the parasite's role in changing dopamine levels in the acute phase and its possible effects remains unclear. In our previous study, tyrosine, as a dopamine precursor, was measured on consecutive days post-Toxoplasma-infection in mouse models. In this regard, this work aimed to measure serum dopamine levels in acute murine toxoplasmosis. Ethics approval This work aimed to evaluate serum dopamine levels in acute murine toxoplasmosis in 2018 in Shiraz, Iran. The present study is based on guidelines for the care and use of laboratory animals (Council 2011). Parasite preparation Toxoplasma gondii RH strain parasites were injected intraperitoneally into BALB/c mice. After 72 hours, the mice were euthanized according to ethical standards. The peritoneal cavity was flushed with physiological saline via a 5 cc syringe, and the parasites were collected from the peritoneal site and then washed with PBS. The parasites were mechanically isolated from the host cells by passing the aspirated fluid through high-gauge needles. The solution was then centrifuged at 200 g for 10 minutes to remove cell debris. The supernatant was separated and centrifuged at 800 g for 10 minutes. The sediment was washed three times with phosphate-buffered saline (PBS) at a pH of 7.2 and prepared as pure tachyzoites. Animals Sixty BALB/c mice aged 6 weeks and weighing 30-35 g were obtained from the Comparative Medical Institute of Shiraz University of Medical Sciences. All animals were kept in standard conditions: temperature of 22±2 °C, humidity of 40-60%, dark-light cycles of 12 hours, proper ventilation, and access to adequate water and food. 
Sixty mice were divided into 6 groups of 10. Each group was kept in a separate cage. The parasites were injected subcutaneously into groups 1 to 5 (10^5 tachyzoites per mouse). The sixth group was considered the control group (only PBS was injected). Sampling began 24 hours after parasite injection, so that one group of 10 mice was sampled daily after anesthesia for five consecutive days. The samples were then taken to the Medical School of Shiraz University of Medical Sciences. The serum was isolated and kept at -70 °C until testing. Chromatography conditions HPLC (Waters, USA) was used to determine serum dopamine levels. Samples were chromatographed on a reversed-phase column (Spherisorb; Waters) with a C18 column in isocratic mode. A 5% aqueous acetonitrile solution at a 1 ml/min flow rate was used as the mobile phase. The absorbance of the diluted serum and control samples was read at 225 nm in a UV detector (LC 95; Perkin-Elmer, Überlingen, Germany). Standard curve preparation The HPLC apparatus was set up to draw a standard curve using standard dopamine solutions at concentrations of 1, 0.5, 0.25, 0.125, 0.0625, and 0.03125 μg/ml. An amount of 0.01 mg of dopamine was dissolved in 1 cc of 5% perchloric acid to obtain a homogeneous solution, the volume was increased to 10 cc, and the solution was passed through a syringe filter. Serial dilution was then performed using 5% perchloric acid to obtain the desired concentrations. Finally, defined concentrations of dopamine solutions were prepared with absolute ethanol as the solvent. Each concentration was run three times on the HPLC device, and the standard curve was drawn using software (SQS 98; Perkin-Elmer) from the different concentrations (Fig. 1). Sample examination The sera were taken out of the freezer, and after thawing at room temperature, 50 μl of each serum was mixed with an equal amount of 5% (v/v) perchloric acid solution. This step was performed for all 60 samples, and 50 μl of each serum sample was injected into the device with an HPLC needle. Finally, applying a retention time of 3.5 minutes, each sample's curve was drawn at a wavelength of 236 nm. Dopamine peaks in the sera samples were identified by comparing their retention times with that of standard dopamine (Fig. 2). Statistical analysis Statistical data were evaluated by ANOVA and post hoc tests in SPSS version 22 (Chicago, IL, USA). A P-value ≤ 0.05 was considered statistically significant. Results The dopamine concentration in the serum samples was calculated based on the standard curve. After preparing standard dopamine solutions at different concentrations and injecting them into the device, the relevant curves were plotted. The standard equation was obtained based on the area below the curve. Then, based on the area below the curve calculated for each unknown sample and the slope of the equation, the samples' dopamine concentrations were calculated. The mean serum dopamine level on the first, second, third, and fourth days after injection of the parasite was similar to that in the control group and unmeasurable. The mean serum dopamine level in the fifth group was raised and significantly different from that in the control group (P=0.042) (Fig. 3). Discussion The present study was conducted to evaluate the Toxoplasma parasite's effect on changes in blood dopamine levels in acute murine toxoplasmosis. For this purpose, serum dopamine levels of 50 mice as the case group and 10 mice as the control group were measured on 5 consecutive days post-infection. 
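To make the calibration and back-calculation step concrete, the sketch below shows one plausible way to fit the dopamine standard curve and convert peak areas of unknown sera into concentrations, followed by the ANOVA comparison described above. This is not the authors' code: the peak-area numbers are hypothetical placeholders standing in for values exported from the HPLC software, and only the standard concentrations and the 0.05 significance threshold are taken from the paper.

```python
# Minimal sketch (assumed data) of standard-curve fitting and concentration back-calculation.
import numpy as np
from scipy import stats

# Standard concentrations from the paper (ug/ml) and hypothetical mean peak areas.
std_conc = np.array([1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125])
std_area = np.array([152.0, 76.5, 38.1, 19.4, 9.8, 4.9])   # assumed values

# Linear calibration: area = slope * concentration + intercept.
slope, intercept, r_value, _, _ = stats.linregress(std_conc, std_area)
print(f"calibration: area = {slope:.2f} * conc + {intercept:.2f}, R^2 = {r_value**2:.4f}")

def area_to_conc(area):
    """Back-calculate concentration (ug/ml) from a measured peak area."""
    return (area - intercept) / slope

# Hypothetical peak areas for serum samples from days 1-5 and the control group.
groups = {
    "day1": [5.1, 4.8, 5.0], "day2": [5.2, 4.9, 5.1], "day3": [5.0, 5.3, 4.7],
    "day4": [5.1, 5.0, 4.9], "day5": [9.6, 11.2, 10.4], "control": [5.0, 4.9, 5.1],
}
conc = {g: [area_to_conc(a) for a in areas] for g, areas in groups.items()}

# One-way ANOVA across groups, mirroring the SPSS analysis (alpha = 0.05).
f_stat, p_value = stats.f_oneway(*conc.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```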
Dopamine levels on the first to fourth days were not different between the case and control groups, but in the late acute phase (fifth day of infection) dopamine levels increased. Our previous study, which focused on tyrosine production (one of the dopamine precursors) in acute murine toxoplasmosis, showed the highest and lowest tyrosine levels on the second and fifth days post-infection, respectively, while in the current study dopamine production began on day 5 after infection. It is therefore conceivable that the decrease in tyrosine levels and the increase in dopamine are correlated in acute toxoplasmosis. It has been shown that the Toxoplasma parasite has a tyrosine hydroxylase enzyme that catalyzes the formation of L-dopa from L-tyrosine (Gaskell et al. 2009). The decreased tyrosine levels and increased dopamine levels in the late acute phase of murine toxoplasmosis (day 5) may be related to the function and activation of these two genes, and the parasite probably consumes tyrosine to produce dopamine using these enzymes. Stibbs showed a similar dopamine production pattern in the control and case groups in acute murine toxoplasmosis, but dopamine production in chronic toxoplasmosis was 14% higher in the case group than in the control group (Stibbs 1985). Mirzaeipour et al. examined tyrosine and dopamine levels in chronic toxoplasmosis in mouse models; they showed low levels of tyrosine and high levels of dopamine in the serum (Mirzaeipour et al. 2020). In the present study, we used the parasite's RH strain (a lethal strain) in order to observe dopamine changes in the acute phase only. This strain kills mice because of its high pathogenicity (Asgari et al. 2013). Several studies have been performed to understand the potential of Toxoplasma to induce behavioral disorders in the intermediate host (Fekadu et al. 2010). Major studies agree on the parasite's ability, in its chronic form, to alter the levels of neurotransmitters (Flegr 2013a;b). It has been concluded that the parasite in the chronic stage causes mood and physiological changes by manipulating nervous system function. For example, mice with toxoplasmosis do not respond to cat urine or to the cat's presence; consequently, they are easier to hunt (Berdoy et al. 2000). According to the manipulation hypothesis, the Toxoplasma parasite has evolved to alter the function of the intermediate host's nervous system. In doing so, the parasite increases its chances of transmission to the next host and its survival (Flegr 2013b). Unlike for the acute phase, many studies have been conducted to highlight the relation between chronic toxoplasmosis and brain dopamine levels in animal models and humans, because of dopamine's importance in behavioral changes (Babaie et al. 2017; Berenreiterová et al. 2011; Parlog et al. 2015; Xiao et al. 2014; Xiao et al. 2018). The present study was therefore conducted in BALB/c mice to survey dopamine levels in acute toxoplasmosis. Our results suggest that dopamine production is negligible in the acute phase of infection and begins in part as this phase approaches its end. Conclusion Measurement of dopamine levels in the serum of Toxoplasma-infected mice on five consecutive days after parasite inoculation showed that dopamine production did not change on days one to four compared to the control group. Only on day 5, in some mice, did dopamine production increase. This result indicates that dopamine production is deficient in the acute period of toxoplasmosis and that dopamine production increases as the disease becomes chronic over time.
3,005.8
2021-05-07T00:00:00.000
[ "Biology" ]
E-VILLAGE GOVERNMENT : FOR TRANSPARENT AND ACCOUNTABLE VILLAGE GOVERNANCE DOI: 10.21532/apfj.001.17.02.02.10 ABSTRACT E-village government is a breakthrough that can be used by villages to support the transparency and accountability of village governance after the issuance of the Village Law. Of the 301 villages in Banyumas, only 47 villages have village websites created on their own initiative. Of the 47 villages that have websites, only 4 villages have informative, timely, relevant, and sufficient websites that can be used as a medium for two-way communication between the village government and the people. A village location close to the district capital and the educational level of the village people do not make a village more responsive in supporting the development of e-village government. Progressive and innovative village leaders are the main factor in implementing e-village government. Some villages have not implemented e-village government because they still suffer from a lack of human resources, lack of experience and lack of supervision. 1 INTRODUCTION "The first governments to respond to any new realities are local governments - in large part because they hit the wall first." (David Osborne and Ted Gaebler in Reinventing Government) Law No. 6 of 2014 on Villages (the Village Law) has marked a new era of government in Indonesia. Since the implementation of the Village Law, Indonesia has had four levels of government, that is, central government, local government (first level and second level), sub-district government, and village government. Therefore, Indonesia is considered a country with the most levels of government in the world, since most countries only have central and local government. This long decentralization of authority has become not only an opportunity, but also a challenge for local governments, especially village governments, in Indonesia. Data derived from the Central Bureau of Statistics (BPS) show that in 2011, 63.21% of the poor lived in rural areas. With the enactment of the Village Law, it is expected that villagers will be more empowered, independent and prosperous. Village autonomy is also expected to solve problems in the village quickly and precisely. The challenge faced by the village government is how to manage the existing resources well. The economic resources derived from village funds of around IDR 1.4 billion per year require good management. If the village funds are not managed properly, the funds can become new fraud areas (Puspasari, 2015). Mismanagement post-decentralization often occurs in local government. Local government is the most corrupt sector in Indonesia even after it has been given autonomy to manage its own region (BPS, ICW). The village government certainly will not expect the same. 
Decentralization, on the one hand, offers significant opportunities to increase government accountability (Yilmaz et al, 2008). A previous study conducted by Yilmaz et al (2008) found that decentralization did not make government accountability better. However, according to Dersahin and Pinto (2010), increasing transparency will reduce corruption. The role of information technology is crucial to increase transparency and accountability (Laurenco et al, 2013). In an effort to improve transparency and accountability, both central and local governments in Indonesia take advantage of information technology by creating websites. A website is one form of e-government. Communities can assume that a government is transparent, accessible, responsible, effective and participatory through public websites provided by the government (Dominguez et al, 2011). Unfortunately, most government websites have not adequately disclosed relevant information for the accountability process (Laurenco et al, 2013). Banyumas Regency in Central Java province covers 301 villages. Recognizing the demand for increased accountability and transparency after the implementation of the Village Law, the village governments in Banyumas Regency started creating village websites. The creation of the village websites was initiated by Gerakan Desa Membangun (GDM) in 2011. Village websites are expected to increase the accountability and transparency of village management (especially of village funds), and to make the information flow between the village government and village stakeholders run well. Based on the above background, there are several things to be studied in this research, as expressed in the following research questions: 1) For the villages that already have websites, is the information content in the websites useful enough to support the concept of transparency and public accountability? 2) For the villages that have no websites, why do they not create a village website? E-Government: Transparency and Accountability The relationship between e-government and corruption (fraud) has been a research subject for many years. A study conducted by Pina et al (2007) finds that countries with low levels of corruption are more transparent in disclosing information about public funds management, and vice versa. The study conducted by Kim (2007) finds that there is a direct link between corruption control and e-government development. Countries, institutions or regions that have developed e-government can control the level of corruption. According to Armstrong (2005), transparency is unrestricted public access to timely and reliable information on the decisions and performance of the public sector. Kauffmann and Kray (2002) define transparency as a timely and reliable flow of economic, social and political information that is accessible to all relevant stakeholders. According to Heald (2006), transparency is a complex term which can be analysed by means of a set of dichotomies, the major one being between event and process transparency. Events, which can be further divided into inputs, outputs, and outcomes, represent externally visible points/states that are linked by processes. These may be divided into transformations (linking inputs into outputs) and linkage (linking outputs into outcomes). 
The object of transparency is an event. The result of an event is easy to measure, but the process of the event is difficult to measure as it relates to procedural and operational aspects (Laurenco et al, 2013). Most definitions of transparency have the same core, that is, accessible information. Transparency cannot be separated from accountability. Transparency is always necessary for accountability, since access to information is a first step in the accountability process (Meijer, 2003). Accountability is often defined as the government's obligation to report the use of public resources for the purpose of performance measurement (Armstrong, 2005). Yilmaz et al (2008) see accountability in terms of demand and supply. According to Yilmaz (2008), public and social accountability must be bridged to ensure that communities have the ability and opportunity to demand accountability and that local governments have ways to respond to community demands for better accountability and service. According to the International Federation of Accountants (IFAC, 2013), accountability is ensuring that those who make decisions and perform services can be held accountable for what is done. Effective accountability is not only about reporting on what is done but also about ensuring that stakeholders can understand and respond to the plans and activities of the entity openly (IFAC, 2013). According to IFAC (2013), government should be open about decisions, actions, plans, resource use, outputs, and outcomes. The relationship among accountability, transparency and the need for technology is illustrated in the following figure: Appendix Figure 1 Village Autonomy through Law No. 6 of 2014 The implementation of the Village Law is done in the context of the implementation of local autonomy. Village autonomy is the right, authority, and obligation to regulate and manage its own governmental affairs and the interests of local people in accordance with legislation (Road Map of the Implementation of the Village Law, 2014). The definition of a village according to Law No. 6 of 2014 is a unity of legal community that has territorial borders and is authorized to regulate and administer government affairs and the interests of the local community based on community initiatives and traditional rights recognized in the system of government of the Unitary State of the Republic of Indonesia (NKRI). According to McKnight in Osborne and Gaebler (1992), taking ownership out of the community and submitting it to higher levels of bureaucracy would actually weaken the community and harm society. McKnight argues that the community understands the problems it faces better. The community not only provides 'service', but also solves problems. In addition, the community is more flexible and creative than a fat bureaucracy. The concept of the village as a community, which is the foundation of the Village Law, is believed to be the motor of national development in Indonesia. 
According to the Village Law, development is an effort to improve the quality of life and the welfare of the village community. Village development is participatory, which means that the development management system in villages and rural areas is coordinated by the Village Head by promoting togetherness, kinship and mutual cooperation to bring about peace and social justice. In building the village, it is also necessary to empower the village community. Empowerment of rural communities, according to the Regulation of the Minister of Home Affairs No 114 of 2014, is an effort to develop the self-reliance and social welfare of the community by improving knowledge, attitude, skill, behavior, ability, and awareness, and by utilizing resources through the determination of policies, programs, activities, and assistance in accordance with the essence of the problem and the priority needs of the villagers. The autonomy of the village to manage its own finances is a point of interest to many parties. According to Article 71 of Law No. 6 of 2014, village finance comprises all village rights and obligations that can be valued in money, and everything in the form of money and goods relating to those rights and obligations. Village finances are managed on the basis of transparent, accountable, and participatory principles and carried out in an orderly and disciplined budget. The need for information on village management makes it necessary to conduct information management of village data based on Information Technology (IT). The Road Map of the Implementation of the Village Law (2014) requires the village to have an Information System Management and Maintenance Team. In this case the village should at least: 1) Own and run a schedule for checking the condition of village data and information. 2) Have a regular schedule for the replacement of content/information to be reported. 3) Cooperate with parties having relevant interests in the procurement of information systems. RESEARCH METHOD This research uses a qualitative research method. Based on several reasons for the use of qualitative research according to Creswell (2013), this study uses a qualitative approach because it takes into account two things, that is, to answer research questions in the form of how and what. In addition, this topic is one that still needs to be explored because the variables cannot be easily identified. The approach used is a case study approach. The case study focuses on the development of an in-depth analysis of a case or some cases (Creswell, 2013). The case in this study is e-government in village government in Banyumas Regency. According to Moleong (2007), qualitative research is research that is able to provide a thorough understanding of the phenomenon that is happening, with description in the form of language and words. The data collection technique used in this research is in-depth interviews in the form of semi-structural interviews to explore the problem openly and deeply. In addition to in-depth interviews, content analysis is also conducted to analyze village websites in Banyumas Regency. 
The objects in this study are the villages in Banyumas Regency. Banyumas Regency has 24 sub-districts and 301 villages. There are two types of village websites in Banyumas Regency: the village websites made by the Government of Banyumas Regency and the village websites made by Gerakan Desa Membangun (GDM), a non-governmental organization which later collaborated with the Ministry of Village, Disadvantaged Area Development, and Transmigration. Almost all village websites were made by the Government of Banyumas Regency, while only about 47 villages made their own websites with GDM assistance. This study was conducted within 3 months (February, March, and July 2016). RESULTS AND DISCUSSION Banyumas Regency has 27 sub-districts (4 sub-district cities) and 301 villages. Each village gets around IDR 1 billion a year to build the village. Almost all villages in Banyumas Regency have village websites made by the regency government. Meanwhile, there are about 47 villages that made their village websites assisted by GDM. Previously, these forty-seven villages already had websites made by the government, but almost all of those websites were dormant. There is a fundamental difference between these two types of village websites. The websites created by the Regency Government were made as a form of fulfillment of a government work program, and the initiative came from top to bottom (regency government to village government); the approach used is a top-down approach. In contrast, the village websites made by the village governments assisted by GDM are a direct initiative of the villages. The differences are also visible in the display and content of both types of websites. The websites created by the Regency Government are not updated regularly and are not informative, while the websites created on the villages' own initiative have some new and more informative breakthroughs. Based on these considerations, the website content analysis carried out in this research covers the websites created on the initiative of the village governments and GDM. Appendix Figure 2 To answer the first research question, "Is the information content in the website useful enough to support the concept of transparency and public accountability?", an analysis of the content of 47 village websites was conducted. Of the forty-seven websites, ten websites were still in process, so only thirty-seven websites were analyzed. Websites that support the concept of transparency and public accountability are websites that disclose adequate information about organizational management, contain relevant data (Laurenco et al, 2013), are timely (Kelley, 2015), and have two-way communication (Yilmaz et al, 2008). Of the thirty-seven websites analyzed, only the websites of 4 villages (Dermaji Village, Wlahar Wetan Village, Melung Village, Karanggayam Village) fulfill the concept of supporting transparency and public accountability. The websites of these four villages contain the timely agenda and activities of the village government, the services provided, as well as financial information such as income, expenditure, use of village funds and the village budget of the current year and previous years. In addition, the four websites contain two-way communication between citizens and the village government, visible from feedback on citizens' comments and links to village government Twitter accounts that contain active communications. 
Thirty-three other village websites list incomplete and untimely information. This reinforces the results of the research conducted by Dominguez et al (2011) that most government agency websites serve only as 'governmental billboards'. This research also reinforces the results of the research conducted by Laurenco et al (2013) that local governments have not disclosed sufficient information for the accountability process and cannot maximize the potential of the internet to make the data more feasible and reusable by communities and local stakeholders. It is worth noting that the four villages that have been able to create websites that support transparency and public accountability are located far away from the district capital (Purwokerto); they are among the outermost villages of Banyumas Regency. In contrast, the villages closest to Purwokerto and General Sudirman University, in fact, still do not have their own village websites. The higher education levels of rural communities living close to the city have not made those villages more responsive to issues of accountability and transparency of village funds. This also supports the results of earlier research (2011) that the level of education of local communities is not the main factor that encourages the development of digital administration. When explored more deeply, the main factor in village responsiveness in creating e-village government is the village head factor. A progressive village head is the main driver of village progress. To answer the second research question, "Why do they not create village websites?", in-depth interviews were conducted in villages that still had no village website of their own. Interviews were conducted in 4 villages (Tambaksogra Village, Tambaksari Village, Pernasidi Village, and Karanglewas Village). Interviews were only carried out in these four villages out of the hundreds of villages that had not made their own village websites because the pattern of the responses of the four villages was relatively similar. In the interviews, they claimed to have not provided village management information online on a website for the following reasons: The first is the lack of human resources (HR) to manage the website. The second is the lack of experience, both in the digital world and in the management of village funds. The village fund has only been running for two years, so its mechanism is still unclear. Due to the vagueness of the mechanism, the village governments felt confused about what and when to report to the people. The third is the lack of qualified financial personnel. This is a problem for almost all villages. The lack of skilled finance personnel means that villages, in the second year of implementation of the Village Law, are still groping about what good financial administration is. Some villages requested the assistance of village counselors to complete financial administration related to village funds. One interesting thing is that they think the village funds always come late and there are many unclear cuts, so why should they have to make transparent and accountable reports? Here is an excerpt from one of the sources: "moreover making a good financial statement and putting it onto the internet, just making a manual financial report is still difficult. Furthermore, the funds always come down late. We are also confused about what money we use for operations and how the report can be made then" The fourth is the lack of supervision. The Accountability Report of Village Budget 
implementation realization and the Village Fund Accountability Report are always made with the assistance of village counselors. However, supervision in the form of government audits has never been done. This means the village is not encouraged to present an informative report that can fulfill the principles of transparency and accountability. The creation of a village website as one of the tools to support transparency and accountability is voluntary. So, if a village does not report its activities and the use of village funds, it is not a problem. The goodwill of the Village Head to make his government transparent and accountable is a major factor in the success of e-village government. The four reasons why most of these villages do not create websites are in line with the results of research conducted by Kelley (2015). The reasons local governments do not present relevant and timely information on their websites are: lack of resources, lack of experience, lack of trained financial personnel, inadequate oversight, and inadequate technology. In developed countries like the United States there are institutions outside government agencies that collect the financial statements of local governments and publish them on their websites. If e-government relies only on central government initiatives and/or local government awareness to publish government financial reports online, it will never succeed. CONCLUSION AND SUGGESTION The village autonomy mandated by Law No. 6 of 2014 forces villages to manage their own households, including their own finances. Good village governance based on the principles of transparency and accountability must be implemented if villages do not want to repeat the failure of regional autonomy, which was filled with fraud. E-village government is a breakthrough that can be used by villages to support transparent and accountable village governance. Of the 301 villages in Banyumas Regency, only 47 villages have websites. Of the 47 villages that have websites, only 4 villages have the most informative websites with timely and relevant presentation, and these serve as a two-way communication medium between the village government and village stakeholders. The factor of progressive and innovative village leaders has made these four villages one step ahead with e-village government. The factors that cause most of the villages in Banyumas Regency not to have their own websites are: lack of human resources, lack of experience, lack of trained financial personnel and lack of supervision. A village location closer to the district capital and a high level of education of the village community do not support the development of e-village government. There is a need for government agencies to oversee and assist villages in e-village government, such as those in the United States (Kelley, 2015). In Indonesia, the role of institutions like GDM needs to be strengthened to assist villages in terms of information technology. The number of studies on village governance after the implementation of the Village Law is still relatively small, so the opportunity to do research on this topic is wide open. Further research could examine why a high level of rural community education and a location closer to the capital do not support the development of e-village government. Further research could also examine whether e-village government can prevent fraud in villages.
4,659.2
2018-02-13T00:00:00.000
[ "Economics", "Political Science", "Medicine" ]
Non-Invasive and Label-Free On-Chip Impedance Monitoring of Heatstroke Heatstroke (HS) is a life-threatening injury requiring neurocritical care which can lead to central nervous system dysfunction and severe multiple organ failure syndrome. Cell-cell adhesion and cell permeability are two key factors for characterizing HS. To investigate the process of HS, a biochip-based electrical model was proposed and applied to HS. During the process, the TEER value, which is associated with cell permeability, and the CI value, which represents cell-cell adhesion, both decreased, consistent with the reduction in cell-cell adhesion and the increase in cell permeability characterized at the protein (occludin, VE-cadherin and ZO-1) and RNA levels. The results imply that the model can be used to monitor this biological process and other biomedical applications. Introduction Heatstroke is a life-threatening illness characterized by an elevated core temperature above 40 °C and central nervous system dysfunction [1]. It results from an imbalance between heat production and dissipation within the body. Recent studies have revealed death rates as high as 10-15% in general heatstroke cases and up to 40% in severe cases that suffer from disseminated intravascular coagulation (DIC), acute kidney injury (AKI), rhabdomyolysis (RM), and even multiple organ dysfunction syndrome (MODS) [2]. The current consensus attributes the immediate response to heat stress to injury of vascular endothelial cells (VECs), which triggers subsequent coagulation activation and inflammation, both of which crucially contribute to the progression of heatstroke. Despite this, inadequate research has hampered the assessment of VEC damage, thus stalling the development of disease evaluation and effective treatment for heatstroke [3][4][5][6]. The current methods for labeling VEC damage have several limitations, such as invasiveness, complexity, and imprecision [7,8]. For instance, the Transwell experiment, a traditional method frequently used in cell biology research, requires fixing cells with 4% paraformaldehyde prior to the subsequent step, which may damage cell vitality. Furthermore, trans-epithelial electrical resistance (TEER), commonly adopted to quantify cell permeability in a non-invasive and label-free manner, can only be applied to monolayer cells, not individual cells [9,10]. However, due to recent advances in microfluidic technology, nanofabrication, and integrated sensors, biochips have emerged as a reliable alternative. Biochips can precisely simulate the in vivo physical environment, thanks to their ability to control the flow of culture medium into cell chambers [11]. Consequently, traditional cell culture systems based on culture flasks and plates have gradually been abandoned. Endothelial cells form a semi-selective barrier in the body that separates blood from organs and tissues, playing a crucial role in maintaining the overall stability of the body. Due to cell-cell interactions, adjacent endothelial cells form tight junctions (TJs), adhesion junctions (AJs), and gap junctions (GJs) [12,13]. The molecular composition of tight junction proteins (TJs) includes claudin, occludin, junctional adhesion molecules (JAMs), and zonula occludens (ZOs) [14][15][16][17][18][19][20][21]; the "basal" tissue of adhesion junctions is provided by vascular endothelial cadherin (VE-cadherin).
By decreasing the expression of the tight junction structural proteins ZO-1 and occludin in vascular endothelial cells, heatstroke prevents endothelial cells from forming tight junction complexes, resulting in damage to the endothelial cell barrier structure and thereby increasing vascular permeability [2]. In the present study, we proposed a novel biochip to test the electrical changes of cells under heat stress. The new biochip can be used to monitor TEER and cell-cell integrin capacitance in real time when VECs are exposed to varying levels of heat stress. Furthermore, we demonstrated that the electrical characteristics were in agreement with the changes in specific proteins and RNA expression reflecting cell permeability. This study provides a novel method to continuously monitor VEC damage under heat stress, which can help clinicians understand a patient's vascular endothelial cell damage and inflammation. The overall research workflow is shown in Figure 1 (Flowchart of this study: first, the HUVECs were cultured on a biochip; second, a heatstroke model was constructed; finally, different samples were collected for analysis and the impedance of the heat-stressed HUVECs was measured by an instrument). Design and Fabrication of the Chip The device is shown in Figure 2a and comprises a substrate layer, an electrode layer, and a construction layer.
The fabrication process of the device comprises the following key steps. First, Au/Ti electrodes were manufactured by a lift-off process. Next, the construction layers were manufactured by pouring PDMS and curing agent (ratio of 10:1) onto an aluminum alloy mold. After curing for 30 min at 60 °C and peeling from the mold, the finished device was prepared by aligning and bonding the construction layer to the electrode chip (Figure 2b). Cell Preparation (Human Umbilical Vein Endothelial Cells Culture) Human umbilical vein endothelial cells (HUVECs) were purchased from Procell Co., Ltd. (Santa Ana, CA, USA). The experiments were conducted using cells from passages 4 to 7, all of which were grown in 10 cm culture dishes (Greiner Bio-One, Kremsmünster, Austria). The cells were subcultured every 3 to 5 days, and the culture medium (Procell) was changed every 2 days. The cells were subcultured at 90% confluence, using 2 mL of 0.25% trypsin-EDTA (Gibco, Billings, MT, USA) to digest the cells.
A new culture flask was prepared, and the remaining cells were seeded onto the prepared microfluidic device and a 6-well plate (Greiner Bio-One) that had been coated with 100 µg/mL of fibronectin (ThermoFisher Scientific, Waltham, MA, USA) for 2 to 4 h at 37 °C. The control cells were maintained in an incubator at 37 °C. For heat stress induction, cells were subjected to 43 °C for 2 h and then to 37 °C for 6 h. The cell incubator used in this study was developed by our research group, and its temperature fluctuation was ±0.2 °C. Western Blotting First, protein samples from cells were prepared using radioimmunoprecipitation assay (RIPA) lysis buffer. The protein samples were then subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) followed by electroblotting onto polyvinylidene fluoride (PVDF) membranes. The PVDF membrane was connected to the positive electrode on one side (red) and the gel to the negative electrode on the other side (black). The membranes were then probed with monoclonal antibodies against ZO-1 (Proteintech Group) and polyclonal antibodies against VE-cadherin (Cell Signaling Technology), occludin (Proteintech), and GAPDH (GeneTex). Finally, the protein bands were visualized using chemiluminescence detection reagents (Advansta), and densitometric analysis was conducted using image processing software (ImageJ 1.8.0_172). Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR) Total RNA was extracted with TRIzol reagent (Invitrogen, Waltham, MA, USA) and cDNA was prepared using HiScript III RT SuperMix for qPCR (+gDNA wiper) (Vazyme, Nanjing, China) according to the manufacturer's protocol. Quantitative RT-PCR of heat-stressed HUVECs was performed using SYBR Green (Bio-Rad); the primers are listed 5' to 3' in Table 1. cDNA obtained by reverse transcription was diluted 1:20, and the diluted cDNA was used as the template. The reaction system for each well was formulated in the proportions given in Table 2. The data were normalized to GAPDH expression, and relative expression changes were analyzed using the ∆Ct method. The instrument software sets the fluorescence values of the 3rd-15th cycles as the baseline, and the threshold is 10 times the standard deviation of the baseline. The Ct value of a template has a linear relationship with the logarithm of its initial copy number: the higher the initial template concentration, the smaller the Ct value, and the lower the initial template concentration, the larger the Ct value. When the PCR reaction reaches the cycle at which the Ct value is located, it has just entered the true exponential amplification period (logarithmic phase). At this point, small errors have not yet been magnified, so the reproducibility of the Ct value is good; that is, for the same amount of initial template, the obtained Ct value is relatively stable. Table 1. Primer names and sequences used in qRT-PCR.
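To make the normalization step concrete, the sketch below shows one common way of turning Ct values into relative expression, assuming the widely used 2^(-ΔΔCt) formulation with GAPDH as the reference gene. The paper only states that the ∆Ct method was used, so the function name and example Ct numbers here are illustrative placeholders, not the authors' exact workflow.

```python
import numpy as np

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative mRNA level via the 2^(-ddCt) convention (assumed here).

    ct_target / ct_gapdh           : Ct values of the gene of interest / GAPDH in the treated sample
    ct_target_ctrl / ct_gapdh_ctrl : the same Ct values in the control (NC) sample
    """
    d_ct_sample = ct_target - ct_gapdh              # normalize to GAPDH in the HS/RE sample
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # normalize to GAPDH in the NC sample
    dd_ct = d_ct_sample - d_ct_control              # compare sample to control
    return float(2.0 ** (-dd_ct))                   # fold change relative to NC

# Illustrative (made-up) Ct values for a tight junction gene after heat stress:
fold_change = relative_expression(ct_target=26.8, ct_gapdh=18.2,
                                  ct_target_ctrl=25.1, ct_gapdh_ctrl=18.0)
print(fold_change)  # a value below 1 indicates down-regulation relative to NC
```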
Chip Circuit Model According to the physical properties of the subjects, an equivalent circuit model was designed for measuring the properties of monolayer cells using our system. The system and the corresponding equivalent circuit model are shown in Figure 3a, and the corresponding simplified circuits are shown in Figure 3b,c. The capacitance CI, which represents the cell-cell integrin, can be calculated using the simplified circuit from Figure 3 and Equation (2). The TEER, which is associated with the cell body, is calculated using Equation (3), which is described in detail in the Supplementary Materials. Trans-Epithelial Electrical Resistance (TEER) and Capacitance of Cell-Cell Adhesion Measurement The biochip was glued onto a custom-made printed circuit board (PCB). The electrodes were connected to the PCB through a copper wire, and the PCB was connected to an impedance analyzer (HF2LI, Zurich Instruments, Zurich, Switzerland). The calibration (blank medium) served as a baseline, and the cell impedance was measured using the HF2LI instrument with an input of 100 mV. The frequency range was from 100 Hz to 1 MHz, and 10 points were measured per decade. First, the HUVECs in the control group were cultured at 37 °C in 5% CO2 until the cells occupied approximately 80-90% of the well area of the biochip, and the impedance was then measured. The biochip in the experimental group was incubated for 2 h in the 43 °C, 5% CO2 incubator, and the impedance was recorded. Finally, the biochip in the recovery group was incubated for 6 h in an incubator at 37 °C in 5% CO2, and the impedance was recorded. Images were obtained using a fluorescence microscope (IX73, Nikon, Tokyo, Japan). Immunofluorescence Assay The culture medium was removed and the cells were washed three times with PBS. The cells were fixed for 15 min at room temperature with 4% PFA (LEAGENE) that had been taken out of the refrigerator and brought to room temperature beforehand. The cells were washed three times, 5 min per wash, with PBST (PBS containing 0.1% Triton X-100; Sigma-Aldrich, St. Louis, MO, USA), then blocked with PBST containing 0.5% serum and 0.1% Triton X-100 at room temperature for 1 h. The primary antibody was diluted with blocking buffer according to the instructions or a known ratio and incubated at 4 °C overnight. The cells were washed three times, 5 min per wash, with PBST containing 0.05% TWEEN-20. The secondary antibody was diluted with PBS, usually at a ratio of 1:1000 according to the instructions, and incubated at room temperature in the dark for 1 h, followed by three 5-min washes with PBST (0.05% TWEEN-20). The cell nuclei were stained with DAPI diluted in PBS according to the instructions, typically at a ratio of 1:5000, and incubated in the dark at room temperature for 10-15 min. The cells were then washed three times with PBST containing 0.05% TWEEN-20, with each wash lasting 5 min. Statistical Analysis The Origin and GraphPad Prism 8 statistical software packages were used for the data analysis. The results were averaged from at least three independent experiments and are presented as the means ± SD. The Student's t-test or one-way analysis of variance (ANOVA) was performed to calculate the statistical significance of the differences. Statistical significance was set at * p < 0.05. Semi-quantitative image analysis was performed using ImageJ software (ImageJ 1.8.0_172).
Impedance Monitor TEER measurement is a rapid, conventional, and non-invasive assay used to assess the level of integrity and differentiation of epithelial monolayers in in vitro cultures. The electrical impedance across the epithelium or endothelium is related to the formation of robust tight junctions between neighboring cells [22]. As presented in Figure 4, when impedance was measured in the control group and the experimental group, the original impedance spectrum data were plotted according to the measured impedance and phase values over the frequency range from 100 Hz to 1 MHz. The amplitude curve in Figure 4a shows a decreasing trend with increasing frequency, and the phase curve in Figure 4c shows a gradual upward trend with increasing frequency. Magnifying the frequency range from 10^4 Hz to 10^6 Hz revealed alterations in the amplitude (Figure 4b) and phase (Figure 4d) curves of the NC (negative control), HS (heatstroke), and RE (recovery) groups. Figure 4. Impedance amplitude and phase spectra of the NC, HS, and RE groups; NC represents the control group, HS represents the 43 °C heatstroke group, and RE represents the 37 °C, 6 h recovery group.
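As a minimal illustration of how a measured complex impedance spectrum is turned into the amplitude and phase curves described above, the Python sketch below converts complex impedance samples over a 100 Hz to 1 MHz logarithmic sweep (10 points per decade, as in the measurement protocol) into magnitude and phase. The circuit values used to synthesize the spectrum are arbitrary placeholders, not fitted parameters from the study.

```python
import numpy as np

# Logarithmic frequency sweep: 100 Hz to 1 MHz, 10 points per decade (4 decades -> 41 points)
freqs = np.logspace(2, 6, num=41)

# Placeholder equivalent circuit: series resistance Rs plus a parallel R-C element
# (values are illustrative only, not parameters from the study).
Rs, R, C = 200.0, 2000.0, 1e-8
omega = 2 * np.pi * freqs
Z = Rs + R / (1 + 1j * omega * R * C)    # complex impedance of the model

amplitude = np.abs(Z)                     # |Z|, as plotted in the amplitude curves
phase_deg = np.degrees(np.angle(Z))       # phase angle, as plotted in the phase curves

for f, a, p in zip(freqs[::10], amplitude[::10], phase_deg[::10]):
    print(f"{f:>10.0f} Hz  |Z| = {a:8.1f} ohm  phase = {p:6.1f} deg")
```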
TEER, Cell-Cell Adhesion Capacitance Monitor We obtained the TEER and CI values from the impedance spectrum using the following formula: TEER = (a^2 + b^2)/a (1), where f represents the frequency measured using the impedance analyzer, Rs represents a constant value in the fitted curve of the impedance measurement data, a represents the real part of the impedance measurement data minus Rs, and b represents the imaginary part of the impedance measurement data. The details of the calculation are provided in the Supplementary Materials. Using Equations (1) and (2), the TEER and CI can be calculated. It is interesting to note that the TEER and CI values of the cells both changed, exhibiting a downward trend after heat stress (Figure 5a,b). A previous study has shown that the TEER value is related to temperature [13] and that the adhesion junctions between cells decrease while the permeability increases [1]. As shown in Figure 5a,b, our results are consistent with these findings. To further verify this conclusion, we collected protein samples from the cells and examined the expression of tight junction proteins. The experimental results showed that the expression of tight junction proteins decreased after heatstroke, which is consistent with the conclusions above. Therefore, we can deduce that microfluidic impedance measurement can monitor the changes in cells during heatstroke, and this approach may be applied to other disease models. The results are presented in Figure 5a,b. The TEER values of the HUVEC monolayer exposed to heatstroke showed a notable decrease, only to increase again upon recovery. A similar trend was demonstrated by the CI values when compared to the NC group. These outcomes demonstrated that the cellular barrier was compromised, leading to an increase in membrane permeability.
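The sketch below shows how Equation (1) can be applied to a measured impedance sample in practice. Because Equation (2) for the cell-cell adhesion capacitance is only given in the paper's Supplementary Materials, the CI expression used here, CI = b / (2*pi*f*(a^2 + b^2)), which is the standard result for a parallel R-C element after subtracting Rs, is an assumption for illustration and not necessarily the authors' exact formula. The numerical values are placeholders.

```python
import numpy as np

def teer_and_ci(freq_hz, z_complex, rs_ohm):
    """Estimate TEER and cell-cell capacitance CI from one impedance sample.

    freq_hz   : measurement frequency f
    z_complex : measured complex impedance at f
    rs_ohm    : series (solution/electrode) resistance Rs from the fitted curve
    """
    a = z_complex.real - rs_ohm              # real part minus Rs, as defined in the text
    b = abs(z_complex.imag)                  # magnitude of the imaginary part
    teer = (a**2 + b**2) / a                 # Equation (1) from the text
    ci = b / (2 * np.pi * freq_hz * (a**2 + b**2))  # assumed form of Equation (2)
    return teer, ci

# Illustrative numbers only (not measured data from the study):
teer, ci = teer_and_ci(freq_hz=1e4, z_complex=complex(1500.0, -900.0), rs_ohm=200.0)
print(f"TEER ~ {teer:.1f} ohm, CI ~ {ci * 1e9:.2f} nF")
```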
Figure 5. Changes in the TEER and CI values of the impedance measurements, and the protein expression of VE-cadherin, occludin, and ZO-1 after heatstroke. NC represents the control group, HS represents the 43 °C heatstroke group, and RE represents the recovery group at 37 °C for 6 h. (a,b) The TEER and CI values were obtained from the impedance measurement data and the formula. (c-f) Occludin, VE-cadherin, and ZO-1 protein expression were measured by immunoblotting (n = 3). The data were quantified by normalization to GAPDH (n = 3). The data are expressed as the means ± SD. Statistical significance is indicated as * p < 0.05, ** p < 0.01. To verify whether the TEER and CI values were consistent with the biological changes of endothelial cells after heatstroke, we extracted the total protein of the endothelial cells and found that, compared with the NC group, the expression of VE-cadherin displayed a noticeable reduction in the HS group, and the difference was statistically significant (** p < 0.01). Moreover, after recovery, the expression level of VE-cadherin increased, with a statistically significant difference (* p < 0.05) (Figure 5d). Similarly, ZO-1 expression levels showed a significant reduction after heatstroke treatment compared to the NC group (* p < 0.05). Nonetheless, ZO-1 expression levels increased significantly after recovery (* p < 0.05) (Figure 5e). Occludin expression levels decreased after heatstroke treatment but increased after recovery (Figure 5f). Immunofluorescence and qPCR Experiment To further verify whether the electrical changes were consistent with the biological changes, HUVEC cells were stained, and RNA was extracted after heatstroke. As illustrated in Figure 6a, VE-cadherin changed from a regular and compact state to a loose state after heatstroke.
After heatstroke, VE-cadherin had fewer connections between HUVECs. As presented in Figure 6b-d, after heatstroke there was a marked decrease in VE-cadherin mRNA levels (** p < 0.01), as well as in the mRNA levels of occludin (* p < 0.05) and ZO-1 (** p < 0.01), and all mRNA levels increased again after recovery. We speculated that this might be due to the migration of occludin from the cell membrane to the cytoplasm or nucleus during heatstroke, but because of time constraints we did not verify this phenomenon. In 2017, researchers found that heatstroke can cause the VE-cadherin content to decrease in the cell membrane and significantly increase in the cytoplasm. Figure 6. Immunofluorescence assays and the RNA expression of VE-cadherin after heatstroke. NC represents the control group, HS represents the heatstroke group at 43 °C, and RE represents the recovery group at 37 °C for 6 h. (a) presents the alterations in VE-cadherin morphology in HUVEC cells after heatstroke, as contrasted with the control group. (b-d) present VE-cadherin, occludin, and ZO-1 mRNA expression measured by qPCR. The data were quantified by normalization to GAPDH (n = 3). The data are expressed as the means ± SD. Statistical significance is indicated as * p < 0.05, ** p < 0.01. The scale bar of the immunofluorescence images is 200 µm. Conclusions In summary, a biodevice with planar integrated electrodes was devised and tested for culturing and impedance sensing of cells. A new heatstroke model was established to monitor heatstroke in vitro. The electrical properties during heatstroke were obtained by fitting theoretical models to the impedance data. During the process, the values of TEER and CI decreased, which is consistent with the reduction in cell-cell adhesion and the increase in cell permeability characterized at the protein (occludin, VE-cadherin and ZO-1) and RNA levels, although the changes in occludin protein level, mRNA level and morphology were not obvious after heatstroke. This device can be used to obtain label-free and non-invasive electrical measurements of heatstroke, and it can also be used to investigate other entities, such as the blood-brain barrier (BBB). The device can be adapted to monitor dynamic changes in the electrical properties of tissues (organoids) over long periods for biomedical and biological applications. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.
6,665.4
2023-06-27T00:00:00.000
[ "Engineering", "Medicine" ]
De-noising groundwater level modeling using data decomposition techniques in combination with artificial intelligence (case study Aspas aquifer) Considering the recent significant drop in the groundwater level (GWL) in most regions of the world, researchers have emphasized the importance of an accurate method to estimate the GWL in order to obtain better insight into groundwater conditions. In this study, artificial neural network (ANN) and support vector regression (SVR) models were initially employed to model the GWL of the Aspas aquifer. Secondly, in order to improve the accuracy of the models, two preprocessing tools, wavelet transform (WT) and complementary ensemble empirical mode decomposition (CEEMD), were combined with the former methods, generating four hybrid models: W-ANN, W-SVR, CEEMD-ANN, and CEEMD-SVR. After these methods were implemented, the model outcomes were obtained and analyzed. Finally, the results of each model were compared with the unit hydrograph of the Aspas aquifer groundwater based on different statistical indexes to assess which modeling technique provides more accurate GWL estimation. The evaluation of the model results indicated that the ANN model outperformed the SVR model. Moreover, it was found that combining these two models with the preprocessing tools WT and CEEMD improved their performance. The coefficient of determination (R2), which indicates model accuracy, increased from 0.927 in the ANN model to 0.938 and 0.998 in the W-ANN and CEEMD-ANN models, respectively. It also improved from 0.919 in the SVR model to 0.949 and 0.948 in the W-SVR and CEEMD-SVR models, respectively. According to these results, the hybrid CEEMD-ANN model is found to be the most accurate method to predict the GWL in aquifers, especially the Aspas aquifer. Introduction Population growth, industrial and agricultural development, and the occurrence of droughts in recent years have exerted pressure on groundwater in Iran, such that most aquifers have experienced medium to high groundwater level drops. Consequently, many aqueducts have dried up, and most permanent springs have experienced remarkable reductions in their water yield. More than 50% of the need for drinking water in Iran is supplied through groundwater. Accordingly, investigating the current condition of groundwater sources can assist managers in more efficient planning and decision-making. The literature review of the field reveals that most of the studies performed on groundwater have used intelligence models to estimate the groundwater level (GWL). Among these intelligence models, the support vector regression (SVR) and artificial neural network (ANN) models have been shown to perform well. In the following, some of the studies on groundwater using these models are presented. Sattari et al. (2017) used the SVR and M5 tree models to predict the GWL in the Ardabil city plain. Their results showed that both models had satisfactory performance, but the M5 model was easier to implement and interpret. In a review paper, Rajaei et al. (2019) assessed the artificial intelligence models used in groundwater modeling. Mirarabi et al. (2019) investigated SVR and ANN models for GWL prediction. The outcomes indicated that the SVR model performed better than the ANN model.
Preprocessing tools such as wavelet transform (WT) and empirical mode decomposition (EMD) have received attention in different fields over recent years, such that the combination of these tools with different models has resulted in hybrid models with higher accuracy. The EMD is a thoroughly effective method for extracting signals from data and is used to decompose signals in the time-frequency domain (Huang et al. 2009). The complementary ensemble empirical mode decomposition (CEEMD) method is the completed form of EMD. In this method, a pair of positive and negative white noise is added to the main data to create two series of IMFs. Therefore, a combination of the main data and the additional noise is obtained in which the sum of the IMFs equals the main signal. Sang et al. (2012) employed the EMD method to analyze nonlinear data in hydrology. In the following, some of the studies on the use of the two preprocessing tools are discussed. Adamowski and Chan (2011) applied the wavelet artificial neural network (W-ANN) to predict the GWL in the Quebec Province of Canada; the ANN and autoregressive integrated moving average (ARIMA) models were also used. The results indicated the high capability of the W-ANN compared to the other two models. Moosavi et al. (2014) optimized hybrid models of the wavelet transform-adaptive neuro-fuzzy inference system (W-ANFIS) and W-ANN using the Taguchi method to predict the GWL in Mashhad, Iran. In this study, different structures were evaluated for both integrated models. According to the results, the W-ANFIS model had better performance than the W-ANN model. Suryanarayana et al. (2014) used the ANN, SVR, wavelet transform-support vector regression (W-SVR) and ARIMA models to forecast monthly fluctuations of the GWL in Visakhapatnam, India. This research used monthly data of precipitation, average temperature, maximum temperature, and groundwater depth for a 13-year period (2001-2012). The results showed that the W-SVR had better performance than the other models. Eskandari et al. (2018) simulated the GWL fluctuations of the Borazjan plain using an integration of the support vector machine (SVM) and WT; the outcomes indicated that the hybrid model performed better than the SVM model. Eskandari et al. (2019) assessed the combination of the adaptive neuro-fuzzy inference system (ANFIS) and WT in modeling and predicting the GWL of the Dalaki basin in Bushehr Province in Iran. Their results indicated that the use of the WT improved the performance of the ANFIS model by 14%. Salehi et al. (2019) predicted the GWL of the Firuzabad plain using a combined time series-WT model and reported that the combined model outperformed the time series model. Bahmani et al. (2020) simulated the GWL using GEP and M5 tree models and their combination with the WT. According to the results, the combined models had better performance than the GEP and M5 models. Moreover, the performance of the two combined models (W-GEP and W-M5) was similar. It was also observed that choosing the proper level of decomposition significantly affected the accuracy of the combined models. Bahmani and Ouarda (2021) modeled the GWL by combining artificial intelligence techniques. They used four observation wells in the Delfan plain in Iran and combined gene expression programming (GEP) and M5 models with the WT and CEEMD techniques. The results indicated the better performance of the model combined with GEP. Awajan et al. (2019) presented a review on empirical mode decomposition in forecasting time series from 1998 to 2017. Wang et al.
(2019) provided a wind power short-term forecasting hybrid model based on the CEEMD-SE method for a Chinese wind farm. This method integrates CEEMD, sample entropy (SE), and harmony search (HS) with a kernel extreme learning machine (KELM). The results showed that the hybrid method (CEEMD-SE-HS-KELM) had higher forecasting accuracy than the EMD-SE-HS-KELM, HS-KELM, KELM, and extreme learning machine (ELM) models. Niu et al. (2020) provided short-term photovoltaic (PV) power generation forecasting based on random forest feature selection and CEEMD. A hybrid forecasting model was created combining random forest (RF), improved gray ideal value approximation (IGIVA), CEEMD, the particle swarm optimization algorithm based on a dynamic inertia factor (DIFPSO), and a backpropagation neural network (BPNN), called RF-CEEMD-DIFPSO-BPNN. The results revealed that RF-CEEMD-DIFPSO-BPNN was a promising approach for PV power generation forecasting. The GWL has dropped considerably in the Tashk-Bakhtegan and Maharlu lakes basin over recent years (Ashraf et al. 2021). The Aspas aquifer is one of the aquifers in the basin that has experienced a large drop in the GWL. The aquifer is located upstream of the basin. Due to the drop in the groundwater level, the effluent rivers have turned into influent rivers. The rivers in Aspas are among the headwater branches of the Kor River, and their drying or low flow causes the drying up of the Kor River and Bakhtegan Wetland. Based on the literature review, two artificial intelligence models (ANN and SVR) have performed well; thus, these models were used to investigate the GWL in the Aspas plain. To increase the accuracy of these models, preprocessing tools were used. The two preprocessing tools WT and CEEMD have performed well in different studies, so they were used in this study. After constructing the four combined models W-ANN, W-SVR, CEEMD-ANN, and CEEMD-SVR, the results were compared with the ANN and SVR models, and the best intelligent model was determined. The main differences between our study and the latest studies (Bahmani et al. 2020; Bahmani and Ouarda 2021) are as follows: (i) unlike this study, they used limited local data, usually 3 to 8 observation wells, whereas in this study we used the aquifer unit hydrograph (extracted from the data of 40 observation wells) and the isohyetal, isothermal, and isoevaporation maps, which provide an accurate aquifer-wide estimation; (ii) the proposed hybrid approaches increased the accuracy of the intelligent models. Study area The Tashk-Bakhtegan and Maharlu lakes basin, with an area of 31,451.8 km2, is located in Fars Province in Iran. The basin is divided into two basins, namely Tashk-Bakhtegan and Maharlu, and 27 sub-basins. The Aspas study area, with the code 4321, is located in the northwest of the basin. The region has a total area of 1590.5 km2, including the plain (764.9 km2), highlands (816.5 km2), and lake (9.1 km2). The maximum height of the region is 3495 m (the peak of Bar Aftab mountain in the east), while the minimum height is 2061 m (the Oujan river in the central plains). The most important city of the region is Sadeh. Figure 1 depicts the location of the study area and the Aspas aquifer in the Tashk-Bakhtegan and Maharlu lakes basin in Fars Province of Iran. Data preparation According to the data of the Aspas aquifer, 40 observation wells with more complete records over the reference period were selected.
After introducing the GWL data of the observation wells into the HEC-4 software, the reference period of 2002-2020 was chosen for plotting the unit hydrograph. The box plot method was used to assess outlier data; any outliers encountered were removed and regenerated using the HEC-4 software. Ultimately, the unit hydrograph of the groundwater (water level variations) was plotted on a monthly scale for the Aspas aquifer, as shown in Fig. 2. Table 1 provides the locations of the meteorology stations used, inside and outside the Aspas aquifer. After evaluating and extending the data using the HEC-4 software and examining the outlier data, the monthly isohyetal maps were plotted for the Aspas study area using 13 meteorology stations inside and outside it. Afterward, the monthly precipitation values were extracted for the aquifer. According to the maps, the average precipitation over the aquifer in the 19-year reference period (2002-2020) was 427 mm. The data of eight meteorology stations inside and outside the Aspas sub-basin were employed to evaluate the temperature of the aquifer. After investigating the data and extending them using the method of differences, the outlier data were examined. The monthly isothermal maps were plotted for the study area, and the monthly temperature values were then extracted for the Aspas aquifer. The average annual temperature of the Aspas aquifer was 13 °C. The data of eight meteorology stations inside and outside the Aspas study area were employed to evaluate the evaporation of the aquifer. After assessing the data, extending them based on the temperature-evaporation relationship for each station, and examining their outlier data, the isoevaporation maps were plotted for the study area. Afterward, the monthly evaporation values were extracted for the Aspas aquifer. The modeling was performed after determining the monthly data of the GWL, precipitation, temperature, and evaporation. Before using the data, they were standardized (changed to values between zero and one). For this purpose, Eq. (1) was used as recommended by Solgi et al. (2017). The input parameters of the model included precipitation, temperature, evaporation, and the GWL in a given month, and the output parameter was the GWL in the next month: precipitation (Pt), temperature (Tt), evaporation (Et), and groundwater level (GWLt) at time t as inputs, and the groundwater level at time t + 1 (GWLt+1) as output. Of the whole data set, 75% was used for training and 25% for testing. In Eq. (1), x is the desired data, x̄ is the average of the data, xmax is the maximum of the data, xmin is the minimum of the data, and y is the standardized data. Figure 3 shows the general flowchart of the work steps. As can be seen in this figure, the data of the observation wells and meteorological stations are first received. Then, the data are completed and extended using the HEC-4 software, and the box plot method is used to check for outlier data. The unit hydrograph of the groundwater is drawn. Maps of isohyets, isotherms, and isoevaporation are drawn in the GIS software, and monthly values for the aquifer are extracted from these maps. These values are entered as input data to the data decomposition tools: the main signal is decomposed, and the sub-signals are extracted. The sub-signals of each input parameter are entered as inputs to the artificial intelligence models. After modeling with the artificial intelligence models, the results of each model are compared with the observed values. If the results are acceptable, the model results are extracted; otherwise, the models are run again, changing the factors that influence the modeling, to improve the results. In the end, the results of all models are compared based on the evaluation criteria and the best model is selected.
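The data preparation step can be illustrated with a short sketch. The exact form of Eq. (1) recommended by Solgi et al. (2017) is not reproduced in the text, so the min-max style scaling below, built from the quantities listed for Eq. (1) (x, the mean, the minimum, and the maximum), is an assumed placeholder; the construction of the (Pt, Tt, Et, GWLt) to GWLt+1 input-output pairs and the 75/25 split follow the description above, while the series themselves are random stand-ins.

```python
import numpy as np

def standardize(x):
    """Assumed min-max style scaling using the quantities named for Eq. (1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (x.max() - x.min())

def make_samples(p, t, e, gwl):
    """Build (P_t, T_t, E_t, GWL_t) -> GWL_{t+1} pairs from monthly series."""
    X = np.column_stack([p[:-1], t[:-1], e[:-1], gwl[:-1]])
    y = gwl[1:]
    return X, y

# Placeholder monthly series (216 values, matching the 2002-2020 monthly record length)
rng = np.random.default_rng(1)
p, t, e, gwl = (standardize(rng.random(216)) for _ in range(4))

X, y = make_samples(p, t, e, gwl)
n_train = int(0.75 * len(X))                 # 75% training, 25% testing
X_train, y_train = X[:n_train], y[:n_train]
X_test, y_test = X[n_train:], y[n_train:]
print(X_train.shape, X_test.shape)
```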
Artificial intelligence models According to the evaluations of the intelligence models, two models that performed well in previous studies, i.e., ANN and SVR, were used. Afterward, two preprocessing tools, namely the WT and CEEMD, were employed to enhance the performance of the models. ANN model The following factors are effective in modeling artificial neural networks (ANNs) (Solgi 2014): 1. sufficient training, 2. the number of layers in the network, 3. the number of neurons in the middle layers, 4. the training laws, and 5. the simulation (transmission) functions. An important criterion in training a network is the number of iterations (epochs) the network experiences while training; determining the proper number of iterations is of great importance. In general, the more iterations used in training a network, the smaller the simulation (prediction) error. However, when the number of iterations exceeds a given value, the testing group error increases. The best number of iterations minimizes the errors in both the training and testing groups. The number of layers in a network is among the main criteria in designing ANNs. These networks usually comprise several layers: an input layer, middle layers, and an output layer. The number of middle layers is determined by trial and error, and it is generally recommended to use ANNs with fewer middle layers. In ANNs, the number of neurons in the input and output layers is a function of the type of problem. However, there is no particular rule for the number of neurons in the middle layers, and these are determined by trial and error for each middle layer. The internal state of a neuron produced by the activation function is known as the activity level or action. Generally, each neuron sends its activity level to one or more of the other neurons in the form of a single signal. The activation functions of a layer's neurons are typically, but not necessarily, the same. Moreover, these activation functions are nonlinear for the neurons of the hidden layer, while being identity functions for the input layer's neurons. Using an activation function, a neuron produces outputs for various inputs; linear and sigmoid functions are two well-known examples (Karamuz and Araghinezhad 2010). An activation function is chosen based on the specific needs of the problem to be solved by the ANN. In practice, a limited number of activation functions is used, as listed in Fig. 4 (Alborzi 2001). The network architecture for solving a problem refers to choosing the proper number of layers, the number of neurons in each layer, the way connections are made between neurons, and the proper activation functions for the neurons, along with determining the training algorithm and the way the weights are adjusted. Another major parameter of an ANN is the training function; Table 2 lists the training functions used in the current study. SVR model In the SVR model, kernel functions are used. In this study, four common kernels were used: the polynomial (Poly) kernel, k(x, y) = (x·y + 1)^d with d = 2, 3, …; the RBF kernel; the sigmoid kernel; and the linear (Lin) kernel (Raghavendra and Deka 2014). The RBF kernel has one parameter, g (gamma). In the sigmoid kernel, just the default values, zero and gamma = 1/k, are used. The linear kernel does not have any parameters. The polynomial kernel has two parameters, d (the degree of the polynomial) and r (an additive constant). For more information on this topic, see Cortes and Vapnik (1995) and Raghavendra and Deka (2014). The kernel settings used are listed in Table 3.
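The paper implements the ANN and SVR models in MATLAB. As a rough illustration of the same idea, the Python sketch below fits a small feed-forward network and an SVR with an RBF kernel to standardized input vectors (Pt, Tt, Et, GWLt) to predict GWLt+1. The array shapes, hyperparameters, and random data are placeholders, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Placeholder standardized inputs: columns = (P_t, T_t, E_t, GWL_t), target = GWL_{t+1}
X = rng.random((162, 4))          # roughly 75% of 216 monthly records for training
y = rng.random(162)

# Small one-hidden-layer network, analogous in spirit to the few-neuron structures reported
ann = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                   max_iter=2000, random_state=0).fit(X, y)

# SVR with an RBF kernel (one of the four kernels evaluated in the paper)
svr = SVR(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

X_test = rng.random((54, 4))      # remaining ~25% for testing
print(ann.predict(X_test)[:3])
print(svr.predict(X_test)[:3])
```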
Preprocessing tools In this study, two preprocessing tools were used: the WT and the CEEMD. Wavelet transform (WT) The wavelet transform (WT) is one of the most efficient mathematical transformations in the field of signal processing. Wavelets are mathematical functions that provide a time-scale representation of time series and of their relationships, for analyzing time series that are variable and non-stationary. A wavelet function has two important properties: oscillation and short duration. The wavelet coefficients can be calculated at any point of the signal (b) and for any value of the scale (a) using Eq. (3) (Nourani et al. 2009). In Eq. (3), ψ is the mother wavelet function, a is the scale, and b is the translation of the function; for different values of a and b, the value of T is obtained. Because the time series data are decomposed before entering the different models and the initial signal is decomposed into several sub-signals, it is possible to use an analysis that includes short-term and long-term effects, which in turn optimizes the model in subsequent evaluations and estimates. CEEMD method The empirical mode decomposition (EMD) is a method for decomposing different signals in a process called sifting (screening). During this process, the main signal is decomposed into a number of components with different frequency contents. According to Eq. (4), the EMD method decomposes the main signal x(n) into a number of intrinsic mode functions (IMFs) and a residual, x(n) = c1(n) + c2(n) + ... + cn(n) + rn(n) (Amirat et al. 2018). In Eq. (4), rn(x) is the component remaining after n IMFs, and ci(x) is the i-th oscillatory (harmonic) component extracted from the main signal; the residual is the part that does not satisfy the conditions of an IMF. Data can have several IMFs at a time. These fluctuating modes are called intrinsic mode functions and must satisfy the following two conditions: 1. in all the data, the number of extremum points equals the number of zero crossings or differs from it by at most one; 2. at each point, the average of the envelopes fitted to the local maximum and minimum points is zero. Given the presence of alternation and noise in the signals, in some cases the time-frequency distribution is interrupted and the EMD performance is disturbed due to mixing of the modes. In order to solve this problem, Wu and Huang (2004) proposed a different method called ensemble empirical mode decomposition (EEMD). In the decomposition procedure of EEMD, a limited amount of white noise is added to the main signal. By using the positive statistical properties of the white noise, which is uniformly distributed in the frequency domain, the effect of the alternating noise is removed from the decomposition process. In the CEEMD method, a pair of positive and negative white noise is added to the main data to create two series of IMFs. Therefore, a combination of the main data and the additional noise is obtained in which the sum of the IMFs equals the main signal.
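As a concrete illustration of how the two preprocessing tools split an input series into sub-signals before modeling, the Python sketch below performs a discrete wavelet decomposition with PyWavelets (using the Sym3 wavelet reported later as the best choice) and an ensemble EMD with the PyEMD package. The authors used MATLAB for the WT and the R "hht" package for CEEMD, so this is only an approximate stand-in; PyEMD's EEMD routine is related to, but not identical with, the CEEMD variant described in the text, and the input series is a random placeholder.

```python
import numpy as np
import pywt                      # PyWavelets, for the discrete wavelet transform
from PyEMD import EEMD           # PyEMD, as an approximate stand-in for CEEMD

rng = np.random.default_rng(2)
signal = np.cumsum(rng.standard_normal(216))   # placeholder monthly GWL-like series

# Wavelet decomposition at level 1 with the Sym3 wavelet:
# returns the approximation (overall) and detail sub-signals used as model inputs.
coeffs = pywt.wavedec(signal, "sym3", level=1)
approx, detail = coeffs
print("wavelet sub-signal lengths:", len(approx), len(detail))

# Ensemble EMD: decomposes the series into IMFs plus a residual.
eemd = EEMD(trials=50, noise_width=0.2)        # noise amplitude analogous to epsilon = 0.2
imfs = eemd.eemd(signal)
print("number of IMFs:", imfs.shape[0])
```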
Combining models with the WT method This section discusses the formation of the hybrid models obtained from the combination of the WT with the ANN and SVR intelligence models. When an initial signal is decomposed using the WT method and the resulting sub-signals are used as inputs to the ANN and SVR intelligence models, the hybrid models W-ANN and W-SVR are obtained. As shown in Fig. 4, the signals of the input parameters of precipitation, temperature, evaporation, and the GWL are first decomposed using the WT; then, the obtained sub-signals are introduced as inputs to the ANN and SVR models to create the W-ANN and W-SVR hybrid models. Figure 5 depicts the Pa(t), Ta(t), Ea(t), and GWLa(t) sub-signals of the overall scale (approximation) at the last level and the other sub-signals of the detailed scale (detail) from level one to the last level. Combining models with the CEEMD method This section describes the creation of the hybrid models obtained from the combination of CEEMD with the ANN and SVR intelligence models. When an initial signal is decomposed using the CEEMD method and its sub-signals are used as inputs to the ANN and SVR intelligence models, the hybrid models CEEMD-ANN and CEEMD-SVR are obtained. As shown in Fig. 5, the signals of the input parameters of precipitation, temperature, evaporation, and the GWL are first decomposed using the CEEMD; then, the obtained sub-signals are introduced as inputs to the ANN and SVR models to create the CEEMD-ANN and CEEMD-SVR hybrid models. Figure 6 depicts the Pimf(t), Timf(t), Eimf(t), and GWLimf(t) sub-signals of the overall scale (IMF) at the last level and the other sub-signals of the residuals from level one to the last level. The CEEMD method has two parameters, i.e., the maximum number of IMFs and ε. It can be said that the number of IMFs corresponds to the decomposition level in the WT method. In this study, the values of 0.1, 0.2, and 0.3 were used for ε; some studies have used 0.2 for this parameter. Moreover, according to the structure of the study, the number of IMFs ranged from one to six. Accordingly, the hybrid models were run in different modes by combining the six IMF settings and the three ε values. Evaluation criteria In the performance assessment of models, different qualitative and quantitative parameters should be evaluated to clearly observe the effect of each input parameter on the results. Accordingly, the following parameters were used to examine the efficiency of the methods (Solgi et al. 2017; Bahmani et al. 2020). In the equations above, the parameters include: n, the number of data; i, the counter variable; Gi,obs, the observational data; the average of the observational data; Gi,pre, the computational data; the average of the predicted data; m, the number of parameters of the model; and Npar, the number of trained data. The coefficient of determination (R2) determines the agreement between the data created by the model and the real data; values closer to one indicate better agreement and lower error. Therefore, this parameter was used to evaluate the effect of each factor, such as the type of wavelet function, the number of middle neurons, and the wavelet decomposition level, on the performance of all models. Moreover, the RMSE is the root mean square error of the computational and observational data; clearly, lower values of this parameter indicate better training and simulation of the data. Regarding the Akaike information criterion (AIC), lower Akaike coefficients reveal better performance of the models. A low value of the Akaike coefficient is caused by two factors, i.e., the error of the model and the number of parameters; therefore, it is a good criterion for evaluating models.
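The evaluation equations themselves did not survive extraction here, so the sketch below gives commonly used forms of R2, RMSE, and an AIC so that the comparison criteria are concrete. The AIC variant shown (n·ln(MSE) + 2·Npar) is one standard formulation and is an assumption; the paper may use a slightly different expression, and the example values are not the study's results.

```python
import numpy as np

def r2(obs, pre):
    """Coefficient of determination between observed and predicted series."""
    obs, pre = np.asarray(obs, float), np.asarray(pre, float)
    ss_res = np.sum((obs - pre) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(obs, pre):
    """Root mean square error."""
    obs, pre = np.asarray(obs, float), np.asarray(pre, float)
    return float(np.sqrt(np.mean((obs - pre) ** 2)))

def aic(obs, pre, n_par):
    """Assumed AIC form: n * ln(MSE) + 2 * number of model parameters."""
    obs, pre = np.asarray(obs, float), np.asarray(pre, float)
    n = len(obs)
    mse = np.mean((obs - pre) ** 2)
    return float(n * np.log(mse) + 2 * n_par)

# Illustrative check with synthetic values (not the study's results):
obs = np.array([0.40, 0.42, 0.45, 0.43, 0.47])
pre = np.array([0.41, 0.41, 0.44, 0.44, 0.46])
print(r2(obs, pre), rmse(obs, pre), aic(obs, pre, n_par=5))
```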
Results and discussion This section provides the results of the intelligence models executed for the GWL estimation using the data obtained from the plotted isohyetal, isothermal, and isoevaporation maps and the unit hydrograph of the groundwater. Afterward, the preprocessing methods were used, and the resulting hybrid models and their results are provided. Eventually, the results of the models are compared based on the statistical criteria. Results of the ANN model The ANN was modeled by coding in the MATLAB software and using all of the effective parameters previously mentioned. For this purpose, different structures were evaluated in each combination. Table 4 provides the results of the best structure in each combination. As can be seen in the table, combination No. 6, with eight input parameters and three neurons in the middle layer, had the best performance; its coefficient of determination was 0.927 and its error was 0.0158. In this superior combination, the training law trainlm and the stimulation (transmission) function tansig had the best performance compared to the other training laws and transmission functions. The performance of this model can be compared with the observational values in Fig. 7 (Comparison of the ANN model with observed data). As can be seen, the ANN model performed better at the minimum points than at the peak points. Results of the SVR model The SVR was modeled by coding in the MATLAB software and using all of the effective parameters previously mentioned. For this purpose, different structures were evaluated in each combination. Table 5 provides the results of the best structure in each combination. As can be seen in the table, combinations 5 and 6 had the same coefficient of determination, but given its lower error, combination No. 6 was identified as the superior combination. The coefficient of determination and error of this combination were 0.918 and 0.0168, respectively. In this superior combination, the linear kernel had the best performance among all kernels. The performance of this model can be compared with the observational values in Fig. 8. As can be seen in the figure, the estimations of the SVR model at the maximum and minimum points differed from the observational values. Results of the W-ANN model According to Nourani et al. (2009), Eq. (8), L = Int[log(N)], can be used as an initial estimate of the decomposition level in the WT method on a monthly time scale. In this equation, L denotes the decomposition level and N is the number of data. Given the number of data in this study, which was 216, L was calculated to be two. In order to improve the accuracy, decomposition levels from one to four were considered. For this purpose, the WT method was coded in MATLAB software. In order to execute the W-ANN model, the sub-signals obtained from the execution of the WT at different decomposition levels were introduced as inputs to the ANN model. To this end, the W-ANN model was studied at different decomposition levels with various wavelet functions in different structures, and the results are listed in Table 6. According to the table, the wavelet function Sym3 at decomposition level 1 had the best performance. This superior combination had eight input neurons with four hidden layers, with the training law trainbr and the stimulation function purelin. In this combination, RMSE = 0.0149 and R2 = 0.938, which indicated its better performance than the other combinations.
The performance of the W-ANN model can be compared with the observational values in Fig. 9. As can be seen in the figure, the W-ANN model did not have proper performance at the peak points.

Results of the W-SVR model
In order to execute the W-SVR model, the sub-signals obtained from the execution of the WT at different decomposition levels were introduced as inputs to the SVR model. To this end, the W-SVR model was studied at different decomposition levels with various wavelet functions in different structures, whose results are listed in Table 7. According to the table, the wavelet function Coif1 at decomposition level 2 had the best performance. This superior combination had an RBF kernel. In this combination, RMSE = 0.0164 and R² = 0.949, which indicated its better performance compared to the other combinations. The performance of this model can be compared with the observational values.

Results of the CEEMD-ANN model
The R software was employed to execute the CEEMD method. At first, the "hht" package was installed in the R software. Then, sub-signals were extracted using the CEEMD method through coding in the R software environment. The program was executed for a maximum IMF number of 10. The results showed that the maximum number of sub-signals could be six. Thus, the code of the program was executed for IMF numbers of one to six. Afterward, the obtained sub-signals were introduced as inputs to the ANN model. The results of the evaluation are given in Table 8. According to the table, the best performance was obtained when the number of IMFs equaled one and the value of ε was 0.2. In this superior structure, the training algorithm trainbr and the transfer function tansig were the best. This structure had an R² of 0.98 and an RMSE of 0.0035. The performance of this model can be compared with the observational values in Fig. 11. As can be seen in the figure, the CEEMD-ANN model had a suitable performance at almost all points, and its estimates were close to the observational values.

Results of the CEEMD-SVR model
The program was executed for IMF numbers of one to six. Then, the resulting sub-signals were introduced as inputs to the SVR model to obtain the CEEMD-SVR hybrid model. Table 9 lists the results of using this hybrid model. According to the table, the best performance was obtained when the number of IMFs equaled two and the value of ε was 0.1. The linear kernel had the best performance. The values of R² and RMSE were equal to 0.95 and 0.0155, respectively. The performance of this model can be compared with the observational values in Fig. 12. As can be seen in the figure, the CEEMD-SVR hybrid model did not have proper performance at the peak points.

Comparison of the results of the models
This section compares the results obtained from the execution of the different models in different modes. According to the reference period used for the data of the testing step, the different methods were compared for the years 2016 to 2020. As can be seen in Table 10, the use of preprocessing tools improved the performance of the ANN and SVR models, such that using the WT resulted in an improvement of 1.10% in the ANN model and 3.07% in the SVR model. Furthermore, the use of the CEEMD method resulted in improvements of 7.08% and 2.89% in the ANN and SVR models, respectively. A comparison of the results reveals that the hybrid CEEMD-ANN model, with a coefficient of determination of 0.99 and an error of 0.0035, had the best performance.
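A comparable decomposition step can be sketched in Python with the PyEMD package as a stand-in for the R "hht" package used in the paper. CEEMDAN is used here as a closely related variant of CEEMD; the ε value and the maximum number of IMFs follow the text, while everything else is illustrative.

```python
import numpy as np
from PyEMD import CEEMDAN

N = 216
signal = np.random.rand(N)            # placeholder for a standardized GWL series

decomposer = CEEMDAN(epsilon=0.2)     # noise amplitude playing the role of the paper's ε
imfs = decomposer.ceemdan(signal, max_imf=6)   # at most six IMFs, as in the paper
residue = signal - imfs.sum(axis=0)   # whatever the extracted IMFs do not capture

X = np.column_stack([*imfs, residue]) # IMFs (plus residue) become candidate model inputs
print(X.shape)
```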
The CEEMD-ANN model also had an Akaike coefficient of 85.5, which was lower than those of the other models. Figure 13 compares the different models in the testing period based on the unit hydrographs of the aquifer. As can be seen, the hybrid models had a better performance, among which the hybrid CEEMD-ANN model was the closest to the observational values. Therefore, this model can be used in forecasting the GWL of the Aspas aquifer. The results of this study are consistent with the results of Bahmani et al. (2020) and Bahmani and Ouarda (2021) regarding the better performance of models combined with WT and CEEMD. They are also consistent with the results of Adamowski and Chan (2011) and Solgi et al. (2017), who stated that the model combined with the artificial neural network had the best performance. Because the preprocessing tools divide the initial signals into several sub-signals that are then fed to the models, this preprocessing of the data reduces the execution time of the models and increases their performance. In the artificial neural network model, it usually reduces the required number of layers and neurons and achieves better results in fewer iterations, and in the SVR model it also shortens the execution time of the program. On the other hand, the use of the standardization formula is another effective factor in improving the results, and it was applied in all simple and hybrid models in this study. Selecting a suitable decomposition level affects the accuracy of the hybrid models. A high level of decomposition is not always helpful for increasing the accuracy of the model, and an optimal decomposition level must be identified. The results of the CEEMD-based models illustrate the importance of ε for the modeling and its impact on the accuracy of the hybrid models. No specific rule for determining ε has been presented, and selecting ε depends on the features of a time series such as its mean and extreme values. Therefore, it is recommended to find an optimal value of ε for a given hydrological time series.

Conclusions
The aim of this paper was to identify the most reliable modeling technique for predicting the monthly GWL decline by comparing model outcomes with the observed GWL in existing wells. Two intelligent modeling techniques, ANN and SVR, were combined with two preprocessing tools, WT and CEEMD, resulting in four hybrid models. These hybrid methods were utilized to predict the GWL variations in the Aspas alluvial aquifer located in the Tashk-Bakhtegan and Maharlu lakes basin. The model input data included precipitation, temperature, evaporation, and the GWL of past months, which were standardized before being used in the models. After the model training process, and during the model testing phase, all models were implemented for the reference period 2016-2020, for which the real GWL was available from existing observation wells. This allowed us to assess which model provides the closest fit to the real observed data. The performance of each modeling technique was evaluated by three main criteria, including R², RMSE, and AIC. The results indicated that the hybrid models outperformed the intelligence models of ANN and SVR due to the decomposed and de-noised input data. Among the hybrid models, CEEMD-ANN was found to be the most accurate one, with a 7.08% performance improvement compared to the ANN model. Therefore, according to the findings of this paper, it is recommended to use this hybrid model (CEEMD-ANN) in forecasting the GWL variations in the Aspas aquifer.
There are a number of limitations in this research which can be pursued and further developed in future studies. Firstly, the proposed models used data from existing observation wells as input data; such data are not usually available for recent months and may lag by up to a year. This conflicts with the need for up-to-date data for water resource planning and management. To remedy this issue, the authors recommend utilizing GRACE satellite data together with satellite-derived precipitation, temperature and evaporation data as inputs for the intelligent models, since these are usually available for recent months and even days. Secondly, additional intelligent models, such as Random Forest, can be implemented in future studies to assess whether they provide more accuracy in GWL prediction.
8,150.8
2023-03-06T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
Fidelity Susceptibility as Holographic PV-Criticality It is well known that entropy can be used to holographically establish a connection between geometry, thermodynamics and information theory. In this paper, we will use complexity to holographically establish a connection between geometry, thermodynamics and information theory. Thus, we will analyse the relation between holographic complexity, fidelity susceptibility, and thermodynamics in extended phase space. We will demonstrate that fidelity susceptibility (which is the informational complexity dual to a maximal volume in AdS) can be related to the thermodynamical volume (which is conjugate to the cosmological constant in the extended thermodynamic phase space). Thus, this letter establishes a relation between geometry, thermodynamics, and information theory, using complexity. Studies done on various branches of physics seem to indicate that physical laws are information-theoretic processes, as they can be represented by the ability of an observer to process relevant information [1,2]. However, in such a process it is important to know the amount of information that can be processed, and hence the concept of loss of information in an information-theoretic process becomes physically important. This loss of information in a process is quantified using the concept of entropy. It may be noted that even the structure of spacetime can be viewed as an emergent structure which occurs due to a certain scaling behavior of entropy in the Jacobson formalism [3,4]. In fact, in this formalism general relativity is obtained by using this scaling behavior of maximum entropy, i.e., the maximum entropy of a region of space scales with its area. This scaling behavior of maximum entropy is motivated by the physics of black holes, and it in turn motivates the holographic principle [5,6]. The holographic principle equates the number of degrees of freedom in a region of space to the number of degrees of freedom on the boundary surrounding that region of space. The AdS/CFT correspondence is one of the most important realizations of the holographic principle [7]. It relates the supergravity/string theory in AdS spacetime to the superconformal field theory on the boundary of that AdS spacetime. It may be noted that the AdS/CFT correspondence has been used to quantify entanglement by using the concept of holographic entanglement entropy, and this has in turn been used to address the black hole information paradox [8,9]. Thus, for a subsystem A (with its complement), it is possible to define γ_A as the (d − 1)-dimensional minimal surface extended into the AdS bulk with the boundary ∂A, and the holographic entanglement entropy for this system can be written as [10,11] S_A = A(γ_A)/(4G), where G is the gravitational constant in the AdS spacetime, and A(γ_A) is the area of the minimal surface. This relation can be viewed as a connection between geometry, thermodynamics and information theory, as it relates a geometrical quantity (the area of a minimal surface) to a thermodynamical quantity (entropy), which is in turn related to information theory (loss of information). In this paper, we will analyse this correspondence further, and establish a similar correspondence between a volume in AdS, the difficulty of processing information, and a thermodynamical quantity. In an information-theoretic process, it is not only important to know how much information is retained in a system, but also how easy it is to obtain that information.
Just as entropy quantifies the loss of information, a new quantity called complexity quantifies the difficulty of obtaining that information. Now, as the physical laws are thought to be information-theoretic processes, complexity (just like entropy) is a fundamental quantity. In fact, complexity has been used to analyse condensed matter systems [12,13], molecular physics [14], and even quantum computational systems [15]. Recently, studies done on black hole physics seem to indicate that complexity might be very important in understanding the black hole information paradox, and this is because the information may not be ideally lost in a black hole; however, as it would be impossible to reconstruct it from the Hawking radiation, it would be effectively lost [16,17,18]. It has been suggested that the complexity would be dual to a volume in the bulk AdS spacetime, C = V/(8πRG) [19,20,21,22], where R and V are the radius of curvature and the volume in the AdS bulk. As there are different ways to define a volume in AdS, different proposals for complexity have been made. For a subsystem A (with its complement), it is possible to use the volume enclosed by the same minimal surface which was used to calculate the holographic entanglement entropy, V = V(γ) [23]. This quantity is usually denoted by C. However, we can also define the complexity using the maximal volume in AdS which ends on the time slice at the AdS boundary, V = V(Σ_max) [24]. It has been demonstrated that the complexity calculated this way is actually the fidelity susceptibility of the boundary CFT. So, this quantity is called the fidelity susceptibility even in the bulk, and it is denoted by χ_F [24]. The fidelity susceptibility of the boundary theory can be used for analyzing quantum phase transitions [25,26,27]. Thus, like the holographic entanglement entropy, this establishes a connection between geometry and information theory. As we want to distinguish between these two quantities, we shall call C the holographic complexity [23], and χ_F the fidelity susceptibility [24]. In this paper, we would like to demonstrate that this connection can be extended even to thermodynamics. So, just like the holographic entanglement entropy was used to establish a connection between information theory, geometry, and thermodynamics, we will demonstrate that fidelity susceptibility also establishes a connection between information theory, geometry and thermodynamics. The missing part of this connection is the connection between fidelity susceptibility and thermodynamics. To establish this connection for a concrete example, let us now consider the Schwarzschild black hole in an AdS background (SAdS_4). The metric is given by ds² = −f(r)dt² + f(r)^{-1}dr² + r²(dθ² + sin²θ dφ²), where f(r) = 1 − 2M/r + r²/l². Following the standard procedure, the black hole mass M is related to the temperature through the Wick rotation τ = it (this requires the resulting Euclidean geometry to be free from a conical singularity). Denoting the position of the horizon by the largest root of f(r_+) = 0, the mass, temperature and entropy of the black hole can be expressed as [28] M = (r_+/2)(1 + r_+²/l²), T = (1 + 3r_+²/l²)/(4πr_+), and S = πr_+² (in units where G = 1). Since we plan to use perturbative calculations, we assume M is small, and for the horizon we take r_+ ∝ M. We can define the horizon size in terms of the temperature T (we set l = 1) by inverting this relation (a short derivation is sketched below); note that the temperature has a minimum located at T_min = √3/(2πl). We define a perturbation parameter ε ≡ M/l, and analyse all expressions up to first order in ε, i.e., we neglect any O(ε²) contribution.
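For completeness, the inversion of the horizon-temperature relation quoted above can be written out explicitly. This is a short worked derivation under the assumption that the standard SAdS_4 expression T = (1 + 3r_+²)/(4πr_+) (with l = 1) is the one intended in the text; it reproduces the quoted minimum temperature.

```latex
% Solving T = (1 + 3 r_+^2)/(4\pi r_+) for r_+ at fixed T (with l = 1):
\begin{align}
  3 r_+^2 - 4\pi T\, r_+ + 1 = 0
  \quad\Longrightarrow\quad
  r_\pm = \frac{2\pi T \pm \sqrt{4\pi^2 T^2 - 3}}{3}.
\end{align}
% Real horizons exist only when the discriminant is non-negative, i.e.
\begin{align}
  T \;\geq\; T_{\min} = \frac{\sqrt{3}}{2\pi},
\end{align}
% which matches the minimum temperature quoted in the text (restoring l gives
% T_{\min} = \sqrt{3}/(2\pi l)). For small M the horizon is small, so the
% smaller root r_- is presumably the branch relevant to the perturbative expansion.
```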
Now we can obtain a thermodynamical quantity which can be viewed as a volume in the bulk AdS spacetime. It has been observed that when a charge or rotation is added to an AdS black hole, its behavior qualitatively becomes analogous to a Van der Waals fluid [29,30]. This analogy between AdS black holes and a Van der Waals fluid becomes more evident in the extended phase space, where the cosmological constant is treated as the thermodynamic pressure [31,28]. Thus, it is important to study the extended phase space for a system. In this paper, we will use the extended phase space, and relate it to the fidelity susceptibility of a system. Thus, in the extended phase space, the cosmological constant Λ is treated as the thermodynamic pressure P = −Λ/(8π) = 3/(8πl²), and the first law of black hole thermodynamics is written as δM = TδS + VδP. The thermodynamic volume is defined as the quantity conjugate to P with all other quantities, such as S, held fixed. Thus, it is also possible to write the black hole equation of state as P = P(V, T), and compare it to the corresponding fluid mechanical equation of state. It may be noted that it is also possible to construct a quantity thermodynamically conjugate to the pressure, and this quantity represents the thermodynamical volume. In this paper, we will demonstrate that this thermodynamical volume corresponds to the fidelity susceptibility, thus establishing the connection between thermodynamics, information theory, and geometry. So, now using the metric (3), we observe that the thermodynamic volume can be written as V = 4πr_+³/3; using (8) and the definition of P we have V = V(T, P), and the resulting equation of state (10) is plotted in Fig. 1. We plot P, the thermodynamic pressure defined in Eq. (9), versus the thermodynamic volume given by Eq. (10). This graph shows the equation of state of the black hole along different isothermal lines (T = constant). Note that, due to the equation of state, the temperature is always bounded by T ≥ √3/(2π). This graph demonstrates that the pressure initially increases with volume, and then, after reaching a maximum, it slowly decreases with a further increase in volume. We will now compare this behavior of the thermodynamic pressure and its conjugate volume to the pressure and volume defined for different information-theoretic complexities, and observe that this behaviour of the thermodynamic pressure and volume matches the behavior of the pressure and volume defined from the fidelity susceptibility of the system. In condensed matter physics, the fidelity susceptibility has been calculated for a many-body quantum Hamiltonian H(λ) = H_0 + λH_I, where λ is an external excitation parameter [25,26,27]. This Hamiltonian can be diagonalized by an appropriate set of eigenstates |n(λ)⟩ and eigenvalues E_n(λ), H(λ)|n(λ)⟩ = E_n(λ)|n(λ)⟩. These eigenstates are usually taken as an orthonormal basis for the Hilbert space of the CFT system. Now, if two states λ, λ' = λ + δλ are close to each other, then it is possible to define the distance between the two states as F(λ, λ + δλ) = 1 − δλ² χ_F(λ)/2 + O(δλ⁴), where χ_F(λ) is the fidelity susceptibility of the system [25,26,27]. This quantity χ_F(λ) can be holographically calculated, and it is equal to the informational complexity when the volume is taken as the maximal volume in AdS, i.e., V = V(Σ_max) [24]. Now we can use this expression for any deformation of AdS, and here we will use it for SAdS_4.
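The qualitative shape of the isotherms discussed around Fig. 1 can be reproduced with a short numerical sketch, assuming the standard SAdS_4 relations T = 1/(4πr_+) + 3r_+/(4πl²), P = 3/(8πl²) and V = 4πr_+³/3, which combine to P = T/(2r_+) − 1/(8πr_+²). The paper's Eq. (10) may be written differently but should be equivalent.

```python
import numpy as np
import matplotlib.pyplot as plt

r = np.linspace(0.2, 3.0, 400)           # horizon radius r_+
V = 4.0 * np.pi * r**3 / 3.0             # thermodynamic volume

# isotherms above the minimum temperature T_min = sqrt(3)/(2*pi) ~ 0.276
for T in (0.28, 0.30, 0.35):
    P = T / (2.0 * r) - 1.0 / (8.0 * np.pi * r**2)
    plt.plot(V, P, label=f"T = {T}")

plt.xlabel("V")
plt.ylabel("P")
plt.legend()
plt.title("SAdS4 isotherms: P rises with V, peaks, then slowly decreases")
plt.show()
```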
So, we can write the volume term in this fidelity susceptibility expression for the SAdS_4 geometry. It may be noted that if we expand this expression in a series, the zeroth-order term is divergent even for pure AdS. So, we can define a fidelity volume v_fid by subtracting the volume term for pure AdS, V(Σ_max)_AdS, from the volume term for the AdS black hole, v_fid = V(Σ_max) − V(Σ_max)_AdS. To compare the fidelity susceptibility with thermodynamics, we can use this fidelity volume v_fid. It may be noted that in the extended phase space [31,28], a thermodynamic volume was defined as the quantity thermodynamically conjugate to the thermodynamic pressure (which was obtained using the cosmological constant). Here we will use the same argument to obtain the pressure conjugate to the fidelity volume. So, we can define a new quantity, which we call the fidelity pressure; this quantity is defined to be thermodynamically conjugate to the fidelity volume. We numerically plot it in Fig. 2. This graph shows that there is a criticality in the plot of p_fid versus v_fid. This has the same form as the graph for the thermodynamic pressure and volume. This graph has been plotted using numerical integration to obtain v_fid and p_fid. To obtain numerical solutions, the cutoff parameter r_∞ is set equal to 1/δ, with δ ≪ 0.05 in the numerical computations. We numerically constructed an equation of state for the fidelity quantities. To compare the results with the thermodynamic description given in Fig. 1, we plotted the fidelity pressure and fidelity volume based on this equation of state for the same isothermal regimes. It is observed that the fidelity pressure again increases with the fidelity volume until it reaches a maximum value. After reaching this maximum value, it decreases with a further increase in the volume. Thus, the thermodynamics of black holes and the fidelity susceptibility seem to represent the same physical process. However, the fidelity susceptibility is well defined in terms of a boundary conformal field theory [25,26,27,24], and this would in principle imply that the thermodynamics of the black hole is well defined in terms of the boundary field theory. In fact, the fidelity susceptibility represents the difficulty of extracting information from a process, so it is more important to understand the difficulty of extracting information during the evaporation of a black hole than the loss of information during the evaporation of a black hole. The fidelity volume measures this difficulty of extracting information during the evaporation of a black hole. The fidelity pressure would be the quantity conjugate to this quantity, and would measure the flow of this quantity with the change in the mass of the black hole during its evaporation. Thus, the fidelity volume and fidelity pressure can be important quantities which could be used to analyze such a process. It may be noted that recent studies on black hole information have suggested that even though the information may not be actually lost in a black hole, it would be effectively lost, as it would be impossible to obtain it back from the Hawking radiation [16,17,18]. This again seems to indicate that the information paradox in a black hole should be represented by the fidelity volume and fidelity pressure, and that it is more important to understand the difficulty of recovering the information, which could be expressed in terms of these quantities. As alternative definitions of the informational complexity of the boundary theory have been made using a different definition of the volume in the bulk AdS, we will also use this definition to compare it to the thermodynamical volume.
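A heavily hedged numerical sketch of the cutoff-regulated fidelity volume is given below. It assumes that the maximal slice of the static SAdS_4 geometry can be approximated by a t = const slice, so that V(Σ_max) = 4π ∫ r²/√(f(r)) dr with f(r) = 1 − 2M/r + r² (l = 1), and it subtracts the pure-AdS value at the same cutoff r_∞ = 1/δ; the paper's exact regularization, and the subsequent definition of the fidelity pressure, may differ.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def f_bh(r, M):
    """Blackening factor of SAdS4 with l = 1."""
    return 1.0 - 2.0 * M / r + r**2

def horizon(M):
    """Positive root of f(r_+) = 0 (unique for this cubic)."""
    return brentq(lambda r: f_bh(r, M), 1e-6, 10.0)

def v_fid(M, delta=0.05):
    """Cutoff-regulated fidelity volume under the t = const slice assumption."""
    r_inf = 1.0 / delta
    r_p = horizon(M)
    v_bh, _ = quad(lambda r: 4*np.pi * r**2 / np.sqrt(f_bh(r, M)), r_p, r_inf)
    v_ads, _ = quad(lambda r: 4*np.pi * r**2 / np.sqrt(1.0 + r**2), 0.0, r_inf)
    return v_bh - v_ads

for M in (0.05, 0.1, 0.2):
    print(M, v_fid(M))
```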
Thus, we will also use the volume enclosed by the same minimal surface which was used to calculate the holographic entanglement entropy, V = V(γ), and compare this to the thermodynamic volume [23]. Now, for M = 0, the area integral for the metric (3) is defined with cos θ_0 = ρ_0/√(1 + ρ_0²) and ρ_0 ∼ l. The Euler-Lagrange equation for r = r(θ) follows from this area functional, where the prime denotes the derivative with respect to θ. So, the volume integral can now be written accordingly. To analyse the relation between the holographic complexity and thermodynamics, we define a volume called the entanglement volume v_ent and relate it to V(γ). In fact, just as for the fidelity volume, we define this entanglement volume v_ent by subtracting the volume for pure AdS, V(γ)_AdS, from the volume for the AdS black hole, v_ent = V(γ) − V(γ)_AdS. It may be noted that, following the argument used in defining the extended phase space [31,28], the fidelity pressure was defined to be thermodynamically conjugate to the fidelity volume. So, using the same argument, we can define the entanglement pressure p_ent as a new quantity thermodynamically conjugate to the entanglement volume. Now we numerically plot p_ent versus v_ent for different values of the temperature in Fig. 3. It may be noted that the entanglement pressure can become negative, and so we use the absolute value of the pressure, |p_ent|, in such a plot. To obtain the numerical solution for the holographic complexity, we use the initial conditions r(θ_0) = ρ_0 and r'(0) = 0. We solve the Euler-Lagrange equation to find r(θ), and obtain the holographic complexity. Thus, we plot p_ent versus v_ent using numerical solutions for the complexity pressure given by Eq. (18) and its conjugate volume. It may be noted that at the peaks we observed that dp_ent/dv_ent → ∞. It may be noted that this graph diverges at points, and this behavior is expected, as we are using the same minimal surface which was used to calculate the holographic entanglement entropy, and such divergences have been observed to occur in the holographic entanglement entropy [32,33,34,35,36]. As we are using the same minimal surface, we would expect similar behavior for the holographic complexity. It would be interesting to find an explicit relation between the holographic complexity and the entanglement entropy, as both of these quantities are defined using the same minimal surface. Such a relation could be used to define the holographic complexity of a boundary theory. It is expected that the entanglement volume could be used to analyze the difficulty of extracting information during a phase transition, and that the entanglement pressure would indicate a holographic flow of such a quantity when the geometry describing these quantities changes holographically. It may be noted that this quantity does not resemble the behaviour of the thermodynamic volume and pressure. Thus, it would be more interesting to analyze the phase transition of a boundary theory using this quantity, after defining its boundary dual, rather than analyzing the black hole information paradox. However, as the fidelity susceptibility does resemble the behavior of the thermodynamic volume and pressure, the fidelity susceptibility would be the quantity to use for studying the black hole information paradox. Thus, we have plotted various quantities which are represented by different definitions of the volume in AdS, and the conjugates to these definitions of the volume in AdS. For each of these cases, we plotted the P-V graph for the same deformed AdS solution.
Now we can compare the behavior of these different quantities using the graphs in Figs. 1-3. It was observed that the behavior of the P-V graph for the holographic complexity was totally different from the P-V graphs for the thermodynamic volume and the fidelity susceptibility. However, it was also observed that the P-V graph obtained from the fidelity susceptibility and that obtained from the thermodynamic volume had almost the same behavior. So, it was concluded that the thermodynamical volume in the extended phase space and the fidelity susceptibility represent the same physical quantity. The fidelity susceptibility can be identified with the informational complexity of the boundary theory, which is obtained geometrically using the maximal volume in AdS. So, in this paper, the informational complexity was related to the thermodynamic volume of a theory using the maximal volume in AdS spacetime. Thus, the results of this paper establish a connection between geometry, thermodynamics, and information theory. It would be interesting to investigate this relation further, and analyse it for other deformations of the AdS spacetime.
4,333
2016-04-23T00:00:00.000
[ "Physics" ]
The effect of external forces on discrete motion within holographic optical tweezers
Holographic optical tweezers is a widely used technique to manipulate the individual positions of optically trapped micron-sized particles in a sample. The trap positions are changed by updating the holographic image displayed on a spatial light modulator. The updating process takes a finite time, resulting in a temporary decrease of the intensity, and thus the stiffness, of the optical trap. We have investigated this change in trap stiffness during the updating process by studying the motion of an optically trapped particle in a fluid flow. We found a highly nonlinear behavior of the change in trap stiffness vs. changes in step size. For step sizes up to approximately 300 nm the trap stiffness is decreasing. Above 300 nm the change in trap stiffness remains constant for all step sizes up to one particle radius. This information is crucial for optical force measurements using holographic optical tweezers. © 2007 Optical Society of America
OCIS codes: (090.2890) Holographic optical elements; (140.7010) Laser trapping; (170.4520) Optical confinement and manipulation
References and links
1. A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm, and S. Chu, “Observation of a single-beam gradient force optical trap for dielectric particles,” Opt. Lett. 11, 288–290 (1986).
2. J. E. Molloy and M. J. Padgett, “Lights, action: optical tweezers,” Contemp. Phys. 43, 241–258 (2002).
3. D. McGloin, “Optical tweezers: 20 years on,” Philos. T. R. Soc. A 364, 3521–3537 (2006).
4. K. Svoboda and S. M. Block, “Biological applications of optical forces,” Annu. Rev. Bioph. Biom. 23, 247–285 (1994).
5. R. M. Simmons, J. T. Finer, S. Chu, and J. A. Spudich, “Quantitative measurements of force and displacement using an optical trap,” Biophys. J. 70, 1813–1822 (1996).
6. J. C. Meiners and S. R. Quake, “Femtonewton force spectroscopy of single extended DNA molecules,” Phys. Rev. Lett. 84, 5014–5017 (2000).
7. J. Liesener, M. Reicherter, T. Haist, and H. J. Tiziani, “Multi-functional optical tweezers using computer-generated holograms,” Opt. Commun. 185, 77–82 (2000).
8. J. E. Curtis, B. A. Koss, and D. G. Grier, “Dynamic holographic optical tweezers,” Opt. Commun. 207, 169–175 (2002).
9. P. Jordan, J. Leach, M. Padgett, P. Blackburn, N. Isaacs, M. Goksor, D. Hanstorp, A. Wright, J. Girkin, and J. Cooper, “Creating permanent 3D arrangements of isolated cells using holographic optical tweezers,” Lab Chip 5, 1224–1228 (2005).
10. G. M. Akselrod, W. Timp, U. Mirsaidov, Q. Zhao, C. Li, R. Timp, K. Timp, P. Matsudaira, and G. Timp, “Laser-guided assembly of heterotypic three-dimensional living cell microarrays,” Biophys. J. 91, 3465–3473 (2006).
11. R. Di Leonardo, J. Leach, H. Mushfique, J. M. Cooper, G. Ruocco, and M. J. Padgett, “Multipoint holographic optical velocimetry in microfluidic systems,” Phys. Rev. Lett. 96, 134502 (2006).
12. E. Eriksson, J. Enger, B. Nordlander, N. Erjavec, K. Ramser, M. Goksor, S. Hohmann, T. Nystrom, and D. Hanstorp, “A microfluidic system in combination with optical tweezers for analyzing rapid and reversible cytological alterations in single cells upon environmental changes,” Lab Chip 7, 71–76 (2007).
13. E. Eriksson, J. Scrimgeour, J. Enger, and M.
Goksor, “Holographic optical tweezers combined with a microfluidic device for exposing cells to fast environmental changes,” Proc. SPIE 6592, 65920P-9 (2007).
14. M. Reicherter, S. Zwick, T. Haist, C. Kohler, H. Tiziani, and W. Osten, “Fast digital hologram generation and adaptive force measurement in liquid-crystal-display-based holographic tweezers,” Appl. Opt. 45, 888–896 (2006).
15. C. H. J. Schmitz, J. P. Spatz, and J. E. Curtis, “High-precision steering of multiple holographic optical traps,” Opt. Express 13, 8678–8685 (2005).
16. S. Keen, J. Leach, G. Gibson, and M. Padgett, “Comparison of a high-speed camera and a quadrant detector for measuring displacements in optical tweezers,” J. Opt. A: Pure Appl. Opt. 9, S264–S266 (2007).
17. A. Ashkin, “Forces of a single-beam gradient laser trap on a dielectric sphere in the ray optics regime,” Biophys. J. 61, 569–582 (1992).

Introduction
Optical tweezers [1] have become a useful tool for manipulating micron-sized particles and cells [2,3]. For small displacements of an optically trapped object the restoring optical gradient force scales linearly with the distance from the trap center. This enables investigations of forces in the femto- and pico-Newton regime [4,5,6]. In the late 1990s optical tweezers were developed further by the implementation of spatial light modulators (SLMs) placed in the Fourier plane of the trapping plane. A single optical trap could thus be divided into several optical traps, each individually controllable by updating the kinoform displayed on the SLM [7,8]. The SLMs used for optical tweezers applications are commonly based on liquid crystals that modulate the phase of the incident light. The diffraction efficiency of liquid crystal SLMs is high due to the many addressable phase levels. However, a limitation is the slow response time of the liquid crystal. Holographic optical tweezers (HOT) have recently found applications in life science, where single cells in an array can be manipulated and studied in parallel [9,10]. In combination with microfluidic systems, HOT offer several new possibilities for flow measurement [11] and the ability to control the local environment of single cells in parallel [12,13]. HOT have also been used for optical force measurement (OFM) applications [14]. HOT thus allow OFM to be performed with several trapped particles simultaneously. OFM either keeps the trap position fixed and measures the displacement of the trapped object from the trap center, or uses the positional data to continuously adjust the trap position to bring the particle back to its initial position. The latter "closed loop" configuration allows a true force measurement to be performed, but requires the ability to rapidly and continuously adjust the trap position. In most closed-loop configurations the trap movement is achieved using acousto-optic deflectors (AOD).
The minimum step size possible with HOT is in the sub-nanometer range [15]. In principle it is therefore possible to move optically trapped particles with HOT in a close to continuous fashion. However, the update frequency of the SLM is, besides the technical aspects, limited by the computation time required to calculate the required kinoforms. In practice the movement of trapped objects is restricted to discrete steps. In this paper we study the movement of optically trapped particles by HOT in a fluid flow during the SLM update for various step sizes. As the SLM momentarily diverts light away from the intended traps during the update process, it causes a temporary weakening of the trap stiffness. This is crucial for OFM applications, where it is important to have control of the trap stiffness. The weakening of the trap stiffness is here quantified and used to establish operating guidelines by which the performance of HOT for OFM and/or microfluidics applications can be optimized.

Experimental procedure
The experimental setup is illustrated in Fig. 1. The HOT setup is configured around an inverted Zeiss Axiovert 200 microscope with a 1.3 NA, 100×, Plan Neo-fluar objective. The optical traps were created using a 1.5 W, 532 nm cw laser. The laser beam was expanded to slightly overfill the SLM (Hamamatsu X8267, 768 × 768 pixels, 20 × 20 mm², 256 phase levels), which was imaged onto the back aperture of the microscope objective. The kinoforms were calculated as blazed gratings to give an angular displacement of the diffracted beam and hence a lateral displacement of the trap [14]. All experiments were performed with a single silica bead trapped 10 µm away from the zeroth order optical trap and 5 µm from the coverslip. The particle was stepped in a direction perpendicular to a fluid flow, as illustrated in Fig. 2. The flow was created by translating the microscope stage with a uniform speed back and forth over a distance of 500 µm. The Stokes drag force on a trapped particle of radius r, generated by a fluid flow of velocity v and viscosity η, is given by F = 6πrvη. A 100 W halogen light bulb and a 0.8 NA condenser were used to illuminate the sample, which was imaged using a CMOS camera mounted on the viewing port of the microscope. When used at its full 1280 × 1024 resolution the CMOS sensor has a standard video frame rate of 24 Hz. However, by reducing the region of interest (ROI) the frame rate could be increased. For the measurements on 1.1 µm and 2.0 µm diameter beads the ROI was reduced to allow images to be taken at 1 kHz, while the larger 5.0 µm beads were imaged at 500 Hz. The scale of the image was calibrated against an object micrometer. The position of the trapped bead was measured while stepping the trapped particle during 50 s (1.1 µm and 2.0 µm beads) or 60 s (5.0 µm beads) using a real-time, center-of-mass tracking algorithm with an accuracy in the order of 10 nm [16]. The positional data was then analyzed to extract information about the maximum downstream displacement, ∆x, during each hologram update and the steady state downstream displacement due to Stokes drag, x_Stokes (cf. Fig. 2). The trap position was updated at 0.5 Hz, resulting in approximately 25 measurements of ∆x for the smaller particles (1.1 µm and 2.0 µm) and 30 measurements for the larger particle (5.0 µm) for each step size. The measured downstream displacements were averaged to give one data point for each step size and particle size.

Results and discussion
A typical example of the experimental data is shown in Fig.
3. For ease of illustration only 6 seconds of data are shown, 2 seconds in each of the 3 trap positions. In these plots it is possible to distinguish all the key parameters, including the steady state downstream displacement due to the Stokes drag force acting on the trapped particle (x_Stokes), the residual Brownian motion, the HOT step size (∆y) and the inter-step downstream displacement (∆x) that occurs while the hologram is being updated on the SLM. In this study we have focused on the inter-step downstream displacement relative to the center of the trap, d = x_Stokes + ∆x, as a function of particle size and step size. The experimental data of the measured total displacements, d, during the SLM update are shown for three different particle sizes as a function of step size in Fig. 4. The behavior is essentially the same for all three bead sizes. The downstream displacement increases with step size up to 200-300 nm, after which it becomes constant for step sizes up to one particle radius. For step sizes above one particle radius the downstream displacement increases again. To explain this non-linear behavior it is necessary to look at the intensity in the optical trap while updating the SLM with a kinoform corresponding to the new trap position. Such an intensity measurement is shown in Fig. 5(a), where the intensity of the laser light reflected off the coverslip was monitored with a CMOS camera in a ROI containing both trap positions. The intensity measurement confirms that light is diverted away from the optical trap during the time the SLM is updating the hologram. This "dead time" was on the order of 200 ms, which is also in agreement with the SLM specifications. The dead time was found to be independent of the trap step size, whereas the intensity loss was measured to be dependent on step size (cf. Fig. 5(b), left axis). The intensity loss increased up to a step size of 300 nm, after which the intensity loss remained constant. This also explains why the downstream drag increases for step sizes up to 200-300 nm. Further, the behavior of the intensity can be explained from the calculated kinoforms as the step size increases (the holograms are assumed to have phase values between 0 and 2π, scaling linearly with the digital signal sent to the SLM). In Fig.
5(b) (blue curve, right axis) the average phase shift per pixel when taking steps of various sizes is shown (steps are in the positive y direction, starting at x = 10 µm, y = 0 µm). Since the amplitude profile of the incident beam falling onto the SLM is Gaussian shaped, pixels in the center of the SLM affect the trap intensity more than pixels on the border of the SLM. Therefore, the phase change values for different pixels have been weighted with a Gaussian with a width matched to the size of the SLM before calculating the average phase change per pixel. The calculated average phase shift per pixel increases for step sizes up to 200 nm (in our setup corresponding to 1.6 grating periods across the SLM), which is in good agreement with both the intensity measurements and the measurements of the downstream displacement. It can also be noted that the average phase shift per pixel approaches 2π/3 for large step sizes. This situation is equivalent to the average phase shift per pixel when changing from an arbitrary blazed grating to a hologram with a constant phase level, where the constant phase level corresponds to the RMS value of a flat probability distribution of phase values between 0 and 2π. This explanation was further supported by measuring the downstream displacements when adding a constant phase shift per pixel to the hologram (without moving the trap position). For a range of fluid flow rates the downstream displacements for an average phase shift per pixel of 2π/3 agreed well with the downstream displacements found for step sizes in the range of 300 nm up to one particle radius (see Fig. 6). As expected from the Stokes drag force, there is a linear dependence of the downstream displacement on flow rate. The increasing downstream displacement for step sizes above one particle radius is perhaps the most intuitive part of the experimental data. The force acting on a particle is well known to fall off rapidly once the particle is more than one particle radius away from the center of the trap [17]. It is worth noting that the scaling between the change in grating period of the displayed holograms and the step size in the trapping plane will differ between different optical setups. The scaling depends mainly on the wavelength, the effective focal length of the microscope objective and the magnification of the imaging optics between the SLM and the microscope objective. Another important parameter that will differ between HOT setups is the response time of the SLM, which will affect the magnitude of the downstream displacement.
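The saturation of the average phase shift per pixel near 2π/3 can be illustrated with a rough numerical sketch. The blazed-grating model, the Gaussian weight width, and the conversion of roughly 0.008 grating periods per nanometre of step size are assumptions extrapolated from the numbers quoted in the text, not the paper's actual hologram-generation code.

```python
import numpy as np

N = 768                                    # SLM pixels along each axis
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
sigma = N / 2.0                            # assumed Gaussian weight matched to the SLM size
w = np.exp(-((i - N/2)**2 + (j - N/2)**2) / (2 * sigma**2))
w /= w.sum()

periods_per_nm = 0.008                     # ~1.6 periods across the SLM per 200 nm step (from the text)

def kinoform(x_nm, y_nm):
    """Blazed-grating phase (0..2*pi) for a trap displaced by (x_nm, y_nm)."""
    return np.mod(2*np.pi * periods_per_nm * (x_nm*i + y_nm*j) / N, 2*np.pi)

old = kinoform(10_000.0, 0.0)              # trap at x = 10 um, y = 0
for step_nm in (50, 100, 200, 300, 500, 1000):
    new = kinoform(10_000.0, float(step_nm))
    mean_shift = np.sum(w * np.abs(new - old))
    print(step_nm, mean_shift / (2*np.pi/3))   # ratio approaches ~1 for large steps
```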
Conclusions
When using SLMs for HOT applications one critical question is what step size should be used during particle movement. The movement of the optically trapped particle should be rapid, but not so large that the particle is likely to escape. For optical force measurement applications it is also very important to know the stiffness of the optical trap (and preferably to keep it constant). By measuring the downstream displacement in a fluid flow due to the decrease in trap stiffness during the update of the SLM (Hamamatsu X8267) we have identified some general guidelines for HOT using a phase-only SLM with a range of 2π: First, for step sizes above the particle radius the downstream displacement during the SLM update increases dramatically, since the restoring force of the optical trap falls off quickly outside one particle radius. Secondly, for step sizes between 300 nm (corresponding to 2.3 grating periods across the SLM) and one particle radius, the inter-step downstream displacement is approximately independent of step size. This can be explained by the decrease in trap intensity during the SLM update, which is constant in this range. The constant intensity loss is due to a roughly constant average hologram phase shift per pixel. The data also demonstrate that the time needed for the trapped particle to travel to the new position is negligible compared to the updating time of the SLM. In addition, the downstream displacement is proportional to the fluid flow rate, as expected from the Stokes drag force acting on the particle. Finally, for step sizes up to 300 nm, the inter-step downstream displacement is increasing. In this range the average phase shift per pixel in the holograms is increasing with the hologram step size, resulting in an increasing intensity loss during the SLM update. In conclusion, in applications where a quick movement of the trap is desired and a decrease in trap stiffness is tolerated, it is beneficial to move the trap with steps equalling the bead/cell radius. On the other hand, in applications where it is crucial to keep an almost constant trap stiffness, the trap should be moved in as small steps as allowed by the HOT setup.

Fig. 1. A schematic drawing of the experimental setup used to measure the position of a trapped particle in a fluid flow. The laser beam was expanded (lenses L1 and L2, focal lengths f1 = 30 mm and f2 = 200 mm) to slightly overfill the SLM. A λ/2 plate was used to adjust the polarization direction of the laser to that of the SLM. The size of the beam reflected off the SLM was then reduced to fit the size of the back aperture of the microscope objective (lenses L3 and L4, focal lengths f3 = 600 mm and f4 = 200 mm) that focussed the light to form the optical traps.

Fig. 2. An illustration of using HOT to move a particle from position (x, y) to position (x, y + ∆y) in the presence of flow in the x-direction. (a) The initial position of the optical trap is at (x, y). (b) The location of the optical trap is updated to (x, y + ∆y) and the maximum downstream displacement, ∆x, as a function of the step size, ∆y, is measured. (c) The bead trapped at the new location.

Fig. 3.
Positional data for a bead (2 µm in diameter) trapped in a 50 µm/s flow illustrated as xy scatter plots. Data for ∆y = ±0.2, 0.7, 1.4 µm is shown in (a), (b) and (c) respectively. The particle is moved from top to bottom in all of the figures. Note that the displacement due to the Stokes drag force, x_Stokes, can be seen. The aspect ratio of this diagram has been set to 7:1 in order to emphasize ∆x.

Fig. 5. (a) The measured intensity reflected off the coverslip as the SLM was updated from one hologram to another. While the SLM is updating the hologram, light is diverted away from the trapping region, thus decreasing the measured intensity. The intensity was measured in a region of interest containing both traps, when moving from x = 10 µm, y = 0 µm to x = 10 µm, y = 0.2 µm in one step. (b) Left axis (red curve): the dependence of the depth of the intensity decrease on step size (steps are in the positive y direction, starting at x = 10 µm, y = 0 µm). Right axis (blue curve): the average phase change per pixel between two kinoforms as a function of step size (steps are in the positive y direction, starting at x = 10 µm, y = 0 µm), weighted with a Gaussian intensity profile. The curve saturates at a phase shift of 2π/3 (black line).

Fig. 6. Measured total downstream displacement, d = x_Stokes + ∆x, for four different flow rates: 25, 50, 75 and 100 µm/s. The measurements were done with a 2.0 µm diameter particle trapped with approximately 12 mW of laser power. In (a) the step size was varied and in (b) a phase change was added to the trapping hologram (without updating the trap position). Note that the downstream displacements for an average phase shift per pixel of 2π/3 agree well with the downstream displacements found for step sizes in the range 300 nm up to one particle radius.
4,855.2
2007-12-24T00:00:00.000
[ "Physics", "Biology" ]