Using gene expression from urine sediment to diagnose prostate cancer: development of a new multiplex mRNA urine test and validation of current biomarkers

Background: Additional accurate non-invasive biomarkers are needed in the clinical setting to improve prostate cancer (PCa) diagnosis. Here we have developed a new and improved multiplex mRNA urine test to detect PCa. Furthermore, we have validated the PCA3 urinary transcript and some panels of urinary transcripts previously reported as useful diagnostic biomarkers for PCa in our cohort.

Methods: Post-prostatic massage urine samples were prospectively collected from PCa patients and controls. Expression levels of 42 target genes selected from our previous studies and from the literature were studied in 224 post-prostatic massage urine sediments by quantitative PCR. Univariate logistic regression was used to identify individual PCa predictors. A variable selection method was used to develop a multiplex biomarker model. Discrimination was measured by the area under the ROC curve (AUC) for both our model and the previously published biomarkers.

Results: Seven of the 42 genes evaluated (PCA3, ELF3, HIST1H2BG, MYO6, GALNT3, PHF12 and GDF15) were found to be independent predictors for discriminating patients with PCa from controls. We developed a four-gene expression signature (HIST1H2BG, SPP1, ELF3 and PCA3) with a sensitivity of 77 % and a specificity of 67 % (AUC = 0.763) for discriminating between tumor and control urines. The accuracy of PCA3 and previously reported panels of biomarkers is roughly maintained in our cohort.

Conclusions: Our four-gene expression signature outperforms PCA3 as well as previously reported panels of biomarkers in predicting PCa risk. This study suggests that a urinary biomarker panel could improve PCa detection. However, the accuracy of the panels of urinary transcripts developed to date, including our signature, is not high enough to warrant using them routinely in a clinical setting.

Electronic supplementary material: The online version of this article (doi:10.1186/s12885-016-2127-2) contains supplementary material, which is available to authorized users.

Background

During the last two decades, prostate-specific antigen (PSA) has been extensively used for prostate cancer (PCa) screening, detection and follow-up. The routine use of PSA has been the subject of continued controversy owing to its limited specificity, which derives from the fact that elevated serum levels of PSA occur in a variety of non-neoplastic conditions such as prostatitis and benign prostate hyperplasia (BPH) [1]. Furthermore, up to 27 % of men with PSA in the normal range (≤ 4 ng/ml) suffer from PCa [2]. The current gold standard method for diagnosis of PCa in patients with elevated serum PSA is non-targeted transrectal ultrasound-guided needle biopsy, which fails to detect PCa in approximately 20-30 % of cases [3]. Therefore, there is a need for additional non-invasive and more specific markers of early PCa that will permit the stratification of patients according to their risk of developing PCa and thus identify men who will require prostate biopsy. Advances in high-throughput gene expression techniques have yielded several promising molecular biomarkers for PCa detection. Prostatic cells can be collected in urine after an intensive prostatic massage. In 2003, Hessels et al. were the first to use the prostate cancer antigen 3 (PCA3) transcript to identify PCa in urine sediments obtained after prostatic massage [4].
Since then, several studies have assessed the diagnostic performance of this marker (reviewed in [5,6]) and of other individual transcripts [7,8]. However, taking into account the heterogeneity of PCa, several authors have searched for a multiplex detection system of biomarkers, which has proved to outperform the diagnostic value of individual markers [9][10][11][12]. We have previously identified new putative mRNA markers for PCa diagnosis that can be extrapolated to post-prostatic massage (PPM) urine samples [13]. In the present study we aim to test several of those previously identified putative biomarkers in a large cohort of PPM-urine samples in order to develop an improved multiplex mRNA biomarker model for PCa diagnosis to be routinely used in the clinical setting. Furthermore, in our cohort we have validated the commercially available test based on urine PCA3 expression as well as the best performing mRNA panels of biomarkers reported in the literature [9][10][11][12].

Patients and urine samples

Under Institutional Review Board approval (Hospital Clínic ethics committee) and with patients' informed consent, we prospectively collected 273 freshly voided urine samples from PCa patients and age-matched controls between January 2009 and September 2012 at the Hospital Clínic of Barcelona. All patients underwent radical prostatectomy. The grade and stage of the tumours were determined according to Gleason criteria and the TNM classification, respectively [14,15]. Systematic prostate biopsy was performed to identify the PCa patients included in the present study. Voided urine samples (20 to 50 ml, including the initial portion of the urine) were collected following prostatic massage in sterile containers containing 2 ml of 0.5 M EDTA, pH 8.0. Urines were immediately stored at 4°C and processed within the next 8 h. The samples were centrifuged at 1000 × g for 10 min at 4°C. The cell pellets were re-suspended in 1 ml of TRIzol reagent (Invitrogen, Carlsbad, CA, USA) and frozen at −80°C until RNA extraction.

RNA extraction, cDNA synthesis and pre-amplification

RNA from the urinary cell pellets was extracted using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions and quantified with a NanoDrop (NanoDrop Technologies, Wilmington, DE, USA). cDNA was synthesized from 100 ng of total RNA using the High Capacity cDNA reverse transcription kit (Applied Biosystems, Foster City, CA, USA; hereafter referred to as AB) following the manufacturer's instructions, except that the final volume of the reaction was 25 μl. A total of 1.25 μl of each cDNA sample, 2.5 μl of TaqMan PreAmp Master Mix kit 2X (AB) and 1.25 μl of pooled assay mix 0.2X containing 46 Gene Expression Assays (AB) were used for the multiplex pre-amplification of the target cDNAs following the manufacturer's instructions (AB). The 46 assays included in the pooled assay mix were selected from previous data from our group [13] and the literature [10,12,16,17] and contained 42 target genes and four endogenous controls: B2M, GAPDH, KLK2 and KLK3 (Additional file 1: Table S1). Of note, 23 of the 42 target genes selected here were previously analyzed in urine samples by our group [13].

Quantitative PCR using BioMark 48.48 Dynamic Arrays

A total of 2.25 μl of each pre-amplified cDNA was loaded into the Dynamic Array along with 0.25 μl of GE Sample Loading Reagent 20X (Fluidigm) and 2.5 μl of TaqMan Universal PCR Master Mix 2X (AB).
For the assays, 2.5 μl of TaqMan® Gene Expression Assays 20X (AB) were combined with 2.5 μl of Assay Loading Reagent and pipetted into the assay inputs. Reaction conditions were as follows: 50°C for 2 min, 95°C for 10 min, followed by 40 cycles of 95°C for 15 s and 60°C for 1 min. The real-time quantitative PCR (qPCR) experiments were performed on the BioMark instrument.

Quantitative PCR data analysis

The real-time qPCR analysis software was used to obtain cycle quantification (Cq) values. The threshold was set manually for each gene. Since experimental errors such as inaccurate pipetting or contamination can result in amplification curves that look significantly different from a typical amplification curve, all amplification plots were checked both computationally and manually. Relative expression levels of target genes within a sample were expressed as ΔCq (ΔCq = Cq(endogenous control) − Cq(target gene)). As endogenous control we used the mean Cq value of KLK2 and KLK3, which allowed us to normalize for the prostate epithelial cell content in the collected urine sample [4]. Most of the studies seeking urinary transcripts for PCa diagnosis have used KLK3 as a prostate-specific endogenous control [4,18,19]. In this study, to minimize the possibility of erroneous relative gene expression quantification, we also selected KLK2 as a second prostate-specific endogenous control, since its expression level is highly correlated with KLK3 [20]. All 273 urine samples initially included in the study were positive for both housekeeping genes, B2M (mean Cq = 8.79; range 5.07-14.58) and GAPDH (mean Cq = 10.85; range 7.6-16.17), indicating that all samples contained cells. Moreover, all samples were also positive for the KLK2 (mean Cq = 13.12; range 9.87-17.85) and KLK3 (mean Cq = 12.91; range 9.58-17.65) genes, indicating that all samples contained cells of prostate origin. Cq values for all other biomarkers are in the range of those of KLK2 and KLK3 (data not shown). All Cq values (except two cases for the B2M gene) fall within the optimal range of quantifiable Cq values for the BioMark instrument (Cq = 6 to 23) [21]. Moreover, to assure the quality of the expression data obtained, low RNA quality samples were identified as outliers according to their average expression by the Mahalanobis Distance Quality Control (MDQC) method [22] and were excluded from the study. Fold change values were generated from the median expression of the genes from the BioMark 48.48 Dynamic Arrays in the groups compared.

Statistical analysis

The association of each variable with the final radical prostatectomy pathology results was analyzed by univariate logistic regression. Significance was defined as p-values < 0.05. All transcripts analyzed were subjected to variable selection using the lars function with method LASSO in the lars R statistical package (http://CRAN.R-project.org/package=lars) [23]. As all the samples were used for model generation, the performance of the model may be over-optimistic. To correct this bias, we further performed a leave-one-out cross-validation (LOOCV) and 100 randomisations with 5-fold cross-validation (5fCV) (http://CRAN.R-project.org/package=rms). The optimal probability cutoff for the univariate study variables and the logistic regression models (our model and those previously described in the literature [9][10][11][12]) was computed through a ROC analysis.
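To make the normalization and selection steps concrete, the following is a minimal R sketch of the ΔCq calculation and the LASSO variable selection with the lars package. The sample size, gene subset and simulated Cq values are illustrative placeholders, not the study's measurements.

```r
# Sketch of the Delta-Cq normalization and LASSO selection described above.
# Data and gene names are illustrative, not the study's actual measurements.
library(lars)

set.seed(1)
n <- 40                                   # samples (illustrative)
genes <- c("PCA3", "HIST1H2BG", "SPP1", "ELF3")
cq_target <- matrix(rnorm(n * 4, 14, 2), nrow = n,
                    dimnames = list(NULL, genes))
cq_klk2 <- rnorm(n, 13, 1)                # prostate-specific controls
cq_klk3 <- rnorm(n, 13, 1)

# Delta-Cq relative to the mean of the two prostate-specific controls:
# dCq = Cq(endogenous control) - Cq(target gene)
cq_ctrl <- (cq_klk2 + cq_klk3) / 2
dcq <- cq_ctrl - cq_target

# LASSO variable selection over the candidate transcripts with lars;
# y = 1 for tumor, 0 for control (simulated here)
y <- rbinom(n, 1, plogis(0.8 * dcq[, "PCA3"]))
fit <- lars(dcq, y, type = "lasso")
coef(fit, s = 0.5, mode = "fraction")     # coefficients along the LASSO path
```

In the study's pipeline, the transcripts retained by this selection step were then carried into a logistic regression model whose predicted probabilities feed the ROC analysis.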
To evaluate the performance of the models, we computed sensitivity (SN), specificity (SP), negative predictive value (NPV), positive predictive value (PPV) and overall error rates (ER) for the mRNA expression signature. Analysis of variance (ANOVA) was used to compare the risk score probability across three PSA groups. Pairwise comparisons were made with Tukey's HSD procedure. R software was used for all calculations.

Study population and informative rate

Among the 273 urine samples initially collected from 180 PCa patients and 93 control individuals, we excluded 29 urines from PCa patients (16 %) and 20 from controls (22 %) because they were flagged as low-quality samples when tested using the MDQC method [22]. Thus, in total, the urine samples of 224 men (151 with PCa and 73 controls) were successfully analyzed (82 %). Table 1 shows the characteristics and clinicopathological information for the 224 evaluable subjects. Only 10 patients with PSA levels > 4 ng/ml were included as controls. Pathological reports from these patients confirmed the absence of malignancy at the time of sample collection, and they had not presented with PCa during a mean follow-up of 45.6 months (range 19.5 to 78.9). To evaluate the performance of individual markers for diagnosing PCa, we performed a ROC analysis (Table 2). Then, individual biomarkers were subjected to variable selection to develop a multiplex model that could improve performance over single biomarkers. This analysis resulted in the final selection of a four-gene model that contains HIST1H2BG, SPP1, ELF3 and PCA3. The four-gene model outperformed single genes and previously reported models in the literature in detecting PCa in urinary sediments (SN = 77 %; SP = 67 %; PPV = 83 %; NPV = 58 %; ER = 26 %; AUC = 0.763). After applying LOOCV analysis to the four-gene model, we obtained a SN of 79 % for discriminating between tumor and control urines with a SP of 60 % (PPV = 80 %; NPV = 58 %; ER = 27 %; AUC = 0.735). By using 5fCV analysis, we found a SN of 72.52 % for discriminating between tumor and control urines with a SP of 64.83 % (PPV = 80.86 %; NPV = 53.5 %; ER = 30 %; AUC = 0.732) (Fig. 1a). Of note, the four-gene model also performs well in the diagnostic PSA gray-zone (PSA 3-10 ng/ml), yielding a SN of 79 % for discriminating between tumor urines from patients with PSA serum values between 3 and 10 ng/ml and control urines, with a SP of 59 % (PPV = 72 %; NPV = 68 %; ER = 29 %; p < 0.001) (Fig. 1b).

Evaluation of previously reported diagnostic urinary transcript biomarkers in our cohort

First, we evaluated the PCA3 marker (TaqMan PCR test for PCA3) as a single marker. Univariate logistic regression analysis showed that expression of PCA3 was a significant discriminator of PCa from control individuals (p < 0.01). PCA3 alone achieved an overall SN of 49 % and a SP of 85 % (AUC = 0.708) for discriminating controls from PCa urines (Table 2 and Additional file 2: Table S2). Then, we evaluated in our cohort some of the most promising PCa diagnostic panels of urinary transcripts reported in the literature, to validate their performance in an independent set. Table 3 summarizes the diagnostic performance of the biomarker panels in our case-control setting in comparison to the results obtained in the original studies. As shown, all the biomarker combinations roughly maintain their performance when tested in an independent set, with the combination described by Laxman et al. (2008) having the best performance [10].
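For readers who want to reproduce this kind of evaluation, the following base-R sketch computes SN, SP, PPV, NPV, ER and the AUC at a ROC-optimal probability cutoff. The predicted probabilities and labels are simulated placeholders, and the cutoff rule (Youden's J) is one common choice rather than necessarily the exact criterion used in the study.

```r
# Sketch of the performance metrics reported above (SN, SP, PPV, NPV, ER, AUC),
# computed from predicted probabilities of a classifier. Simulated data.
set.seed(2)
y <- rbinom(200, 1, 0.65)                      # 1 = PCa, 0 = control
p <- plogis(qlogis(0.65) + 1.2 * (y - 0.5) + rnorm(200))

# AUC via the rank (Mann-Whitney) formulation
auc <- (mean(rank(p)[y == 1]) - (sum(y) + 1) / 2) / sum(y == 0)

# Scan cutoffs; pick the one maximizing Youden's J = SN + SP - 1
cuts <- sort(unique(p))
stats <- sapply(cuts, function(th) {
  pred <- as.integer(p >= th)
  tp <- sum(pred == 1 & y == 1); fp <- sum(pred == 1 & y == 0)
  tn <- sum(pred == 0 & y == 0); fn <- sum(pred == 0 & y == 1)
  c(SN = tp / (tp + fn), SP = tn / (tn + fp),
    PPV = tp / (tp + fp), NPV = tn / (tn + fn),
    ER = (fp + fn) / length(y))
})
best <- which.max(stats["SN", ] + stats["SP", ] - 1)
round(c(stats[, best], AUC = auc), 3)
```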
Discussion

Currently, PSA is considered the most valuable tool in the early detection, staging and monitoring of PCa. However, as mentioned in the introduction, PSA has several limitations as a PCa diagnostic biomarker, especially in deciding the necessity of a prostate biopsy. In fact, PCa is detected in only about a third of patients with elevated serum PSA who undergo random prostate biopsy. Repeated biopsies reveal the presence of PCa in another 10-35 % of cases [24]. Not only economic costs but also anxiety, discomfort and sometimes severe complications are associated with prostate biopsies. Therefore, urological practice needs a non-invasive diagnostic tool for the early detection and screening of PCa that also increases the probability of detecting PCa at repeat biopsy, thereby reducing the number of unnecessary biopsies. Aberrantly expressed transcripts detected in PCa cells shed into the urine after prostatic massage are promising biomarkers for the development of a reliable non-invasive PCa diagnostic method. In fact, several promising RNA-based urine PCa biomarkers are described in the literature, but only the PCA3 assay (Progensa) has been approved by the FDA, and it is currently the only commercially available molecular diagnostic assay for PCa. However, PCA3 is not routinely used in the clinical setting, mainly because clinicians feel that the increase in accuracy over serum PSA testing is not large enough to guide biopsy decisions. Furthermore, since PCa is a heterogeneous disease, it is reasonable that a combination of markers would outperform single-marker detection. In this regard, several authors have described combinations of RNA markers in urine samples, but to our knowledge none of them, except one [25], has been externally validated or is currently used in the clinical setting. In the present work, we have developed a four-gene panel that outperforms those previously described in the literature. In addition, in our cohort we have validated PCA3 as well as the most promising panels of biomarkers described.

Fig. 1 Diagnostic performance of the four-gene expression signature. a ROC analysis based on the predicted probabilities derived from the four-gene model. b Probabilistic sensitivity analysis of the signature according to serum PSA levels

From our analysis, we have been able to identify six new candidates, besides PCA3, that independently predict PCa in PPM-urine samples. This has been possible because we explored target genes selected from previous PCa microarray data [13,17] instead of analyzing only previously described prostate-related biomarkers. All target genes explored were used to develop the four-gene model, which contains the previously described PCA3 gene and three new biomarkers: HIST1H2BG, SPP1 and ELF3. This model outperforms individual biomarkers and previously reported models in the literature. Although LOOCV indicates a certain degree of overfitting, all data obtained after cross-validation corroborate the SN and SP of the final model. Moreover, the model performs well in the diagnostic PSA gray-zone (PSA 3-10 ng/ml), where a reduction in the number of unnecessary biopsies is needed. Notably, the three new biomarkers of the model had been previously associated with PCa. Alterations in the expression of histone HIST1H2BG were associated with biochemical recurrence in PCa patients after radical prostatectomy [26].
The transcription factor ELF3 (E74-like factor 3), which acts as a negative modulator of androgen receptor transcriptional activity, was found to be underexpressed in PCa [27], in agreement with our results. On the other hand, SPP1 (secreted phosphoprotein 1) encodes the protein osteopontin (OPN). Both OPN RNA and protein have been found to be overexpressed in a number of human tumor types, including PCa [28]. In some cases, OPN overexpression has been shown to be associated directly with poor patient prognosis or with other indicators of poor prognosis. Thus, OPN is of dual interest, as a biomarker of malignancy as well as a candidate poor-prognosis factor. Even though in the present study we did not achieve statistical significance for SPP1, the addition of this gene to the model improved the AUC from 0.740 (HIST1H2BG, PCA3 and ELF3) to 0.763 (SPP1, HIST1H2BG, PCA3 and ELF3), indicating that its expression effectively adds information to the model. The present study confirms that PCA3 can successfully discriminate PCa from controls in randomly selected patients with variable PSA levels (PSA = 0.94-365 ng/ml) [29,30]. A limitation of most studies based on urinary biomarkers is that the negative PCa patient group consists of patients who have undergone prostate biopsy for suspected PCa with a negative result; in fact, 20-30 % of such patients will be diagnosed with PCa at a later date [3]. To overcome this limitation, our control group consisted of patients without suspected PCa (PSA < 4.0 ng/ml), thus minimizing the risk of including subjects with PCa in the control group. Moreover, there is no uniform methodological protocol for urinary transcript quantification in the reported studies. For instance, some studies use a multiplex cDNA pre-amplification step before qPCR transcript quantification [16,31], while others use Whole Transcriptome Amplification [10,32], and in some studies cDNA is not pre-amplified at all [11]. Different gene expression normalization methods are also used [4,11,16,18,31]. Thus, it is notable that despite this methodological heterogeneity and the inherent limitations of the sample source (PPM-urine contains different cell types, including renal tubular cells, urothelial cells and prostate cells, and the proportion of prostate tumor cells differs between subjects), we and the vast majority of groups identify PCA3 as an independent predictor for PCa diagnosis, making it the most reliable individual biomarker to date. However, combining urinary biomarkers in a panel has shown higher diagnostic accuracy than PCA3 alone. In this regard, we have been able to validate some of the previously reported panels of biomarkers [9][10][11][12] in our cohort and to develop a new urinary panel of biomarkers that improves on serum PSA and previously reported panels of biomarkers. In contrast, we could not validate differences between the control and cancer populations for TMPRSS2-ERG status. This is in all probability due to the methodological approach used here, since others using the same methodology (RT-qPCR with the same gene expression assay, Hs03063375_ft) to evaluate TMPRSS2-ERG status also did not find differences between cancer and control urines [33], while other authors using Southern blot [9] or transcription-mediated amplification [32] were able to find such differences. Of concern, the FDA-approved PCA3 test, either alone or in combination with other biomarkers, is not being routinely used in the clinical setting.
This is most likely because the addition of urine biomarkers to the current clinical diagnostic tools shows only a limited improvement in PCa diagnostic accuracy and does not provide sufficient value to affect biopsy decision making. In fact, the Evaluation of Genomic Applications in Practice and Prevention Working Group (EWG) recently found insufficient evidence to recommend PCA3 testing, not only for deciding whether to conduct initial biopsies in men at risk of PCa (e.g., with a previously elevated PSA test or suspicious digital rectal examination), but also for deciding when to rebiopsy previously biopsy-negative patients. Furthermore, the EWG did not find convincing evidence to recommend PCA3 testing in men with PCa-positive biopsies to determine whether the disease is indolent or aggressive, in order to develop an optimal treatment plan [34]. Thus, even though many efforts have been made in the last decade to identify urine biomarkers that determine which men are at high risk of PCa and whether the disease is indolent or aggressive in men with PCa, the results do not yet seem convincing to clinicians. We acknowledge that our study has several limitations. The first is the relatively small sample size of the studied cohort. This was because 18 % of the urine samples collected could not be evaluated (informative specimen rate of 82 %). Although some improvements in the methodological process would be desirable to decrease the failure rate, this percentage is in the range of those described by other authors who quantify gene expression in PPM-urine samples (informative specimen rates of 56 to 92 %) [10-12, 16, 30, 31]. However, sample collection can be repeated if necessary. It could also be argued that we arbitrarily selected the 42 target genes, while the list of differentially expressed genes in PCa is much larger. In this regard, we tried to include biomarkers reported in previous studies as being either detectable in urine or appropriate for combined models, together with genes highly differentially expressed in PCa tissue samples. We are also aware that we should test the performance of our four-gene expression signature in a real clinical scenario by analyzing patients who undergo prostate biopsy for suspected PCa, even though such a study will have the limitation of false-negative biopsies, which account for 20-30 % of men at risk of PCa [3]. Lastly, future validation studies examining larger, independent cohorts are needed to further improve the performance of this test.
Loading Mode and Lateral Confinement Dependent Dynamic Fracture of a Glass Ceramic Macor

A systematic comparison of the tensile and compressive response of the glass ceramic Macor, with zero porosity and low density, is carried out using flattened Brazilian disk and cylindrical specimens from quasi-static to dynamic loading conditions. The experiments were performed on a screw-driven Zwick machine and an in-house built split Hopkinson bar synchronized with a high speed photographic system. Likewise, the loading rate dependent fracture toughness is investigated using a notched semi-circular Brazilian disk. A digital image correlation technique is adopted to assist in the monitoring of the strain field, crack initiation and propagation under dynamic loading conditions. Both the tensile and compressive strengths show loading rate dependencies; however, the static and dynamic tensile strengths are only 20% of the compressive strengths without confinement and less than 10% of the confined compressive strength. The microstructural characterization reveals that the fracture mechanisms in unconfined Macor are predominantly transgranular, with mica platelets and cleavage planes, and are influenced by the loading mode and loading rate. However, Macor with confinement shows ductile fracture micrographs with a shear localization zone consisting of fine particles. With the use of Macor ceramic as a model material, the paper presents an economical approach to investigate the loading mode and pressure dependent failure of ceramic materials. This will support the characterization of the dynamic properties of current and future advanced ceramics for demanding applications in the aero engine.

Introduction

Advanced ceramic materials with high hardness and relatively low density are very attractive for structural applications. The inherent brittleness of advanced ceramics, however, limits their performance in service. The wide use of advanced ceramic materials requires an understanding of their deformation and failure mechanisms, which will benefit the optimization of structural design. A series of standard techniques under static loading is available for testing the mechanical properties of ceramic materials, such as compressive strength [1] and flexural strength [2]. However, the ceramics employed in aerospace engineering and armour design often undergo high speed deformation processes, as can be found in the works of Townsend and Field [3,4] and Bourne and Rosenberg et al. [5,6] in the Cavendish Laboratory. Analysis of such impact events requires an understanding of the dynamic mechanical properties of ceramic materials. The tensile strength is an important mechanical property of brittle materials [7]. However, the tensile test under static or dynamic conditions can be challenging and is very sensitive to the specimen alignment and attachment methods. The conventional direct tensile test with a dog-bone specimen is not suitable for ceramic materials, and such specimens are difficult to manufacture. Several indirect tensile approaches have been developed to measure the tensile strength of brittle materials. The Brazilian disk is much simpler to manufacture than the dog-bone specimen. The Brazilian test, a typical indirect test which is convenient to conduct through the far-field compressive loading technique [8,9], has become increasingly popular for the study of the tensile response of brittle materials [10].
The Brazilian test with an arc-shaped anvil for the contact surface was proposed by Mellor and Hawkes [11] in order to reduce the local contact stress concentration. Palmer and Field [12], Williamson et al. [13], and Grantham and Siviour et al. [14] in the Cavendish Laboratory showed that this configuration effectively prevented premature edge failure by reducing the shear stress near the contact points. Another kind of tensile test, reported by our Oxford colleagues Johnson and Ruiz [15], was the spalling test using a brittle ceramic rod impacted axially by a Hopkinson pressure bar. The dynamic Brazilian test was conducted in parallel, and it was suggested that the use of curved anvils would reduce the compressive Hertzian contact stresses. Recently, Khosravani et al. [16,17] reviewed the applicability of the Brazilian test and the spalling test using a Hopkinson pressure bar to evaluate the tensile strength of brittle materials. A flattened Brazilian disk (FBD) test, in which two flat planes are machined on opposite sides of the disk, was proposed by Wang et al. [18,19] to reduce the stress concentration close to the loading area and was used to successfully measure the tensile strength. From the literature, the Hopkinson bar, or Kolsky bar [20][21][22], is an important technique for characterizing the dynamic behavior of materials, including the strength of brittle materials, as can be seen from the very recent works of Khosravani et al. [16] and Cao et al. [23]. Fracture toughness is an intrinsic property of brittle materials to resist crack initiation and propagation. The mechanisms of impact damage are affected by the fracture toughness of brittle materials; see Ruff et al. [24] and Townsend and Field [25]. The dynamic fracture toughness is important for the understanding of the fracture properties of brittle materials at high strain rates. Rittel and Maigre [26] investigated the dynamic fracture properties of PMMA using a compact compression specimen and a Hopkinson compression bar, and reported the loading rate dependent fracture toughness of PMMA. Belenky et al. [27] adopted a one-point impact technique to characterize the static and dynamic fracture properties of transparent nanograined alumina. Samborski and Sadowski [28] studied the fracture toughness of alumina ceramic under static and dynamic loading conditions and analyzed the porosity effects on fracture toughness. The notched semi-circular Brazilian (NSCB) specimen subjected to three-point bending loading, initially proposed by Chong and Kuruppu [29][30][31], has been increasingly used to determine the fracture toughness of brittle materials. Liu et al. [32] measured the fracture toughness of a polymer-bonded explosive material using NSCB specimens. Both the NSCB test and the three-point bending test were carried out by Chen et al. [33,34], and good consistency was found in the measurements of the dynamic fracture toughness of ceramic materials. The glass ceramic Macor is increasingly implemented in aerospace engineering [35,36] and high precision engineering applications [37,38]. The viscoelastic properties of Macor were reported by Bagdassarov [39], together with the internal friction and the shear modulus. So et al. [40] studied the ultrasonic properties of Macor at low temperature by using a pulse-echo approach. In service, Macor can be subjected to high speed impact. Several studies concerning the dynamic mechanical response of Macor have been carried out in the last two decades across a series of strain rates. Chen et al.
[41] investigated the pressure dependent constitutive behaviour of Macor ranging from quasi-static to high strain rate loading, and found that the compressive strength increased with increasing confining pressure. Dong et al. [42] reported the rate dependent tensile strength and flexural strength of Macor. From the literature review, one can find that a systematic comparison between the tensile and compressive responses, and the fracture toughness measurement, of glass ceramic Macor has seldom been reported. Recently, digital image correlation (DIC), a non-contact technique, has been increasingly employed for displacement and strain field measurements [34,[43][44][45]. Compared to the traditional strain gauge methodology, which requires contact and offers a limited number of measurement points, the DIC method provides a direct measurement of the displacement or strain field. Chen et al. [43] studied the tensile strength of several brittle materials by combining DIC and strain gauge measurements in the Brazilian test. Bhattacharya and Goulbourne [46,47] used the DIC technique to investigate the dynamic deformation mechanism of a ceramic and to obtain full-field deformation measurements. Real-time monitoring of the dynamic deformation and failure process of Macor is still missing, and fractographic studies under different loading modes (and confinement conditions) are less well reported. As an excellent electrical and thermal insulator, Macor offers good performance in electrical components, such as sensors and resistors in the engine control, management, and thermal protection systems of aero engines. Macor used in these components can be subjected to impact loading with complex stress states. Consequently, the dependence of the dynamic fracture behavior of Macor on the loading mode and lateral confinement is of considerable interest to designers. In this work, the flattened Brazilian test and the notched semi-circular bending test, complemented by DIC techniques, were carried out to study the tensile strength and mode I fracture toughness of glass ceramic Macor under static and dynamic loading conditions. Likewise, compressive tests without confinement and with lateral confinement were also conducted to compare the tensile strength and compressive strength. These studies are vital for understanding the dynamic response of Macor and provide useful information to guide engineering design. The methodology and techniques used in the present work will also support the characterization of the impact performance of advanced ceramic materials [48] in aero engine systems. The material, specimen geometries and experimental setups are introduced in the "Material and Methods" section. The next section presents the experimental results using different specimens and techniques. The "Discussion" section discusses the main outcomes of the present paper, followed by conclusions.

Material and Specimens

Macor is a white, odorless, porcelain-like (in appearance) glass ceramic composite consisting of 55% fluorophlogopite mica phase and 45% borosilicate glass, made by Corning Inc. This ceramic shows zero porosity and a low density of 2.52 g/cm³. Table 1 lists the composition of Macor as quoted by the manufacturer [49]. The FBD specimens were machined into 9.66 mm diameter and 4 mm length (thickness) cylinders with two parallel sections corresponding to the loading angle 2θ = 30°, as can be seen in Fig. 1.
The tensile stress σ_t at the center of the FBD is given by

σ_t = 2kF / (πDB)

where k is the non-dimensional stress factor related to the loading angle; here k = 0.92 [18,33] for the loading angle 2θ = 30°. F is the applied load on the parallel surfaces, B is the thickness of the disk, and D is the diameter of the disk. A semi-circular specimen with a notch along the symmetry line, initially proposed by Chong and Kuruppu [29,30], was used for the bending test in order to measure the fracture toughness of Macor. Figure 2 shows the diagram of the NSCB test, which is similar to the three-point bending test. The notched specimen sits on a base with two supporting points, and the load F is applied through the top face of the specimen. Here, a = 2.5 mm is the pre-notch length, D = 10 mm is the diameter of the specimen, B = 4 mm is the thickness of the specimen, and 2S = 8 mm is the distance between the two supporting points. The base is made from a Ti6Al4V alloy. The history of the mode-I stress intensity factor K_I(t) of the NSCB specimen is determined from

K_I(t) = [F(t)·√(πa) / (DB)]·Y_k

where F(t) is the force history. When 0.25 ≤ a/D ≤ 0.35 and 2S/D = 0.8, the dimensionless stress intensity factor Y_k is expressed as a polynomial function of a/D given in [33,50]. The fracture toughness K_IC is given by

K_IC = [F_max·√(πa) / (DB)]·Y_k

Here, F_max is the maximum applied force. According to the ASTM standard procedure E399, and the approaches proposed by Zhou et al. [51] and Liu and Chen [32,33,50], the fracture toughness K_IC is an intrinsic property of Macor ceramic to resist crack initiation and propagation, and can be obtained from the peak value of K_I(t), which corresponds to the stress intensity factor at the maximum applied force. Cylindrical specimens with diameter 5 mm and length 5 mm were used for the compression tests. A series of previous studies showed that hydrostatic pressure greatly influences the deformability of brittle materials, and the relevant techniques to confine brittle materials can be found in the works of Chen et al. [41,52], Subhash and co-workers [53,54] and Rittel et al. [55][56][57]. Here, the dynamic confinement was applied mechanically by metallic sleeves with low strain hardening that undergo plastic deformation, providing an almost constant confining pressure [55,56]. Different confinement levels were achieved by sleeves machined from Al 6061 alloy and Ti6Al4V alloy with a wall thickness of 0.7 mm ± 0.005 mm. The specimen assembly is schematically shown in Fig. 3, with a Ti6Al4V alloy adapter on top of the specimen to complete the assembly. The Macor specimens and sleeves were machined to a high accuracy such that the grease-lubricated specimen can be tightly inserted with sufficient hand pressure. The friction between the sleeve and the specimen is assumed to be negligible in the present work. The specimen was loaded through the adapter by the incident Hopkinson bar. According to the detailed analysis in Refs. [55,56], the axial stress σ_axial and radial (confining) stress q are given by

σ_axial = F / (πr²),  q = σ_y·t / r

where F is the applied force, and σ_y, t and r are the yield stress, wall thickness and inner radius of the metallic sleeve, respectively. Here, low and high confinements correspond to the Al sleeve and the Ti6Al4V sleeve, respectively. The material properties of the two sleeve materials were measured using cylindrical specimens at the same strain rates as the Macor specimens. An image of the above specimens is shown in Fig. 4.

Experimental setup

The quasi-static tests were carried out using a 50 kN screw-driven Zwick machine under displacement control mode at a loading speed of 0.05 mm/s.
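As a numerical illustration of the specimen-level formulas given above (FBD tensile stress, NSCB fracture toughness and sleeve confinement), the following R sketch plugs in the specimen dimensions stated in the text. The load values, the Y_k value and the sleeve yield stresses are assumed for illustration only; they are not the paper's measurements.

```r
# Numerical sketch of the formulas above, using the stated specimen
# dimensions. Loads, Y_k and sleeve yield stresses are assumptions.

# FBD tensile stress: sigma_t = 2*k*F / (pi*D*B)
k <- 0.92                     # stress factor for loading angle 2*theta = 30 deg
D <- 9.66e-3; B <- 4e-3       # FBD diameter and thickness (m)
F <- 6e3                      # assumed peak load (N)
sigma_t <- 2 * k * F / (pi * D * B)            # Pa

# NSCB fracture toughness: K_IC = F_max * sqrt(pi*a) / (D*B) * Y_k
a <- 2.5e-3; Dn <- 10e-3; Bn <- 4e-3           # notch length, diameter, thickness
Yk <- 4.0                     # assumed dimensionless factor (see [33,50])
Fmax <- 300                   # assumed peak force (N)
K_IC <- Fmax * sqrt(pi * a) / (Dn * Bn) * Yk   # Pa*sqrt(m)

# Sleeve confinement: q = sigma_y * t / r (thin-walled sleeve at yield)
t <- 0.7e-3; r <- 2.5e-3      # wall thickness, specimen radius (m)
sy <- c(Al6061 = 307e6, Ti6Al4V = 900e6)       # assumed dynamic yield stresses (Pa)
q <- sy * t / r

round(sigma_t / 1e6, 1)       # tensile stress in MPa
round(K_IC / 1e6, 2)          # fracture toughness in MPa*sqrt(m)
round(q / 1e6, 0)             # confining pressures in MPa
```

With the assumed yield stresses, the thin-walled sleeve formula returns confining pressures of roughly 86 MPa and 252 MPa, matching the values reported in the results below, which serves as a useful consistency check on the confinement relation.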
The Zwick machine is synchronized with an IDS (Imaging Development Systems) uEye USB 3.0 camera providing high resolution images (2456 × 2054 pixels) at a frame rate of 5 fps. Dynamic loading tests were conducted using a bespoke 16 mm diameter split Hopkinson compression bar synchronized with high speed photographic equipment. Figure 5a shows an image of the Hopkinson bar setup, with the corresponding schematic in Fig. 5b. Specifically, the incident and transmitted bars were made from Ti6Al4V alloy with lengths of 2.7 m. The high speed camera, fitted with a macro lens, has a resolution of 924 × 748 pixels at a framing rate of 5 × 10^5 fps and a shutter speed of 2 µs. A similar system has been used by Evers et al. [58] for tests of nacre-like alumina, and recently by Varley et al. [59] and Zhang et al. [60,61]. The commercial software LaVision Davis was used for the DIC analysis of the dynamic deformation processes. A detailed description of the DIC technique is given in Appendix A. For the Hopkinson bar technique, the stress wave propagation analysis can be found in De Cola et al. [62], Gustavo et al. [63] and Zhang et al. [60]. Two sets of strain gauges (1 and 2) are attached to the incident bar, with a further set (gauge 3) on the output bar. The amplitude of the stress waves was determined by D'Alembert's solution of the wave equations. The stress and strain evolutions were obtained by the classical Kolsky bar analysis [21]. A 0.7-1 mm thick neoprene rubber sheet was used as a pulse shaper and placed at the impact end of the input bar to control the rise time of the incident wave signal and to achieve dynamic force equilibrium, which is crucial for valid measurements using the Hopkinson bar. Figure 6a shows typical strain gauge signals for an FBD test. Figure 6b compares the input side force obtained from the incident and reflected signals with the output side force measured directly from the transmitted signal. The difference between the forces, as suggested by Ravichandran and Subhash [64] and Khosravani et al. [65], can be evaluated by a factor R_f based on the normalized difference between F_in and F_out, the input and output forces, respectively. Initially, R_f shows a high value as the stress wave starts propagating into the specimen. Shortly after, R_f decreases significantly, with an average value of 0.03. This is similar to the observations of Tzibula et al. [66], Zhang et al. [67] and Hoffmann et al. [68], indicating that dynamic force equilibrium is achieved in the setup. The output force is used for the stress measurement of the specimen.

Dynamic FBD Test

A typical tensile stress history of the FBD specimen tested with the Hopkinson bar system is presented in Fig. 7, together with the crack opening displacements (COD) from virtual gauges of length 0.8 mm at the two locations shown in Fig. 8. Between 130 µs and 204 µs, the stress increases almost linearly until the peak stress is reached, followed by a rapid load drop and a dramatic increase in COD. The tensile failure stress is 100 MPa. Figure 8 shows the tensile strain distribution on the FBD specimen at two consecutive stages: at the maximum tensile stress and immediately after. The specimen is loaded on the right side by the Hopkinson incident bar. Clearly, the failure initiates very close to the centre of the specimen (white arrowed position a) and shortly after forms an open crack propagating to the two ends of the specimen along the loading direction, resulting in the final splitting of the whole specimen.
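For reference, the classical Kolsky-bar data reduction mentioned above, from which stress histories like the one in Fig. 7 are obtained, can be sketched as follows. The bar properties and the synthetic gauge signals are illustrative assumptions, not the laboratory's actual records; the signals are deliberately constructed to be near force equilibrium.

```r
# Sketch of the classical Kolsky-bar reduction: specimen stress from the
# transmitted wave, strain rate from the reflected wave, and the input/output
# force comparison used for the equilibrium check. All inputs are assumed.
Eb  <- 114e9                     # Ti6Al4V bar Young's modulus (Pa), assumed
rho <- 4430                      # bar density (kg/m^3), assumed
c0  <- sqrt(Eb / rho)            # bar wave speed
Ab  <- pi * (16e-3 / 2)^2        # 16 mm bar cross-section
As  <- pi * (5e-3 / 2)^2         # 5 mm cylindrical specimen cross-section
Ls  <- 5e-3                      # specimen length
dt  <- 1e-6                      # sampling interval (s)

tt    <- seq(0, 200e-6, by = dt)             # time axis
eps_i <- -1e-3 * pmin(tt / 50e-6, 1)         # incident strain (compressive ramp)
eps_t <- 0.9 * eps_i                         # transmitted strain (synthetic)
eps_r <- eps_t - eps_i + 1e-5 * sin(tt / 1e-5)  # reflected strain + small noise

stress_s <- Eb * Ab * eps_t / As             # specimen stress (1-wave analysis)
rate_s   <- -2 * c0 * eps_r / Ls             # specimen strain rate
strain_s <- cumsum(rate_s) * dt              # specimen strain (time integral)

F_in  <- Eb * Ab * (eps_i + eps_r)           # force on the input face
F_out <- Eb * Ab * eps_t                     # force on the output face
# Equilibrium indicator: relative force mismatch (small => equilibrium)
Rf <- abs(F_in - F_out) / pmax(abs(F_out), 1)
```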
Due to the linear increase of the tensile stress, a constant loading rate can be determined by the tangent line (arrowed) in Fig. 7, which is important for the accurate measurement of the tensile strength. The comparison of the quasi-static and dynamic tensile strengths is shown in Fig. 9. The average dynamic tensile strength is 101 ± 12 MPa (standard deviation), which is higher than the average quasi-static tensile strength of 73 ± 6 MPa. Although the standard deviation under the dynamic condition is twice that for the quasi-static condition, the effect of loading rate on the tensile strength of Macor is clear.

Dynamic NSCB Test

The NSCB specimen with a pre-notch was dynamically loaded on the Hopkinson bar, and the stress intensity factor history is given in Fig. 10. The COD results as a function of time were extracted from two positions, a and b, close to the pre-notch tip. The COD curves remain approximately zero up to 46 µs in Fig. 11, followed by a rapid non-linear increase when the stress intensity factor (load) drops rapidly after the peak. The closer the position to the pre-notch tip, the larger the COD at corresponding times. Figure 11 shows the corresponding tensile strain distributions at three instants before and after the crack initiation. Figure 12 clearly shows the influence of loading rate on the fracture toughness measurements. The average dynamic fracture toughness of 2.34 ± 0.18 MPa√m is clearly higher than the average quasi-static fracture toughness, with a dynamic amplification factor of approximately 1.5. Figure 13 presents the engineering stress history of a cylindrical specimen under compressive loading at a high strain rate of 330/s. The high speed images at different stages are shown in Fig. 14. The maximum compressive stress is 510 MPa. A very small surface crack (white arrow) can be observed at stage 2. The failure occurs due to the fracture of two small areas (white arrows) close to the right end of the specimen at stage 3. This is followed by damage-induced material softening after the maximum strength point at stage 4, with an additional crack observed at the surface centre. Two further axial splitting areas (white arrows) can be seen at stage 5. The fragmentation of the Macor specimen after complete fracture, which is not marked in the stress history, is shown at stage 6 in Fig. 14. Figure 15 shows the typical strain distributions from the DIC analysis before the total fracture. The failure initiation is located at the top right edge of the specimen. The splitting at the centre of the specimen (black arrow) can be seen in the field of the tensile strain ε_yy. The strain distribution shows that the tensile strain is higher than the axial strain ε_xx and the shear strain ε_xy, indicating an additional tensile failure mode due to the radial expansion of the specimen under dynamic compressive loading. Figure 16 shows typical engineering stress histories from compression testing at strain rates of about 200-400/s for Macor specimens without confinement and with two levels of confinement. Here, the Al and Ti6Al4V sleeves result in dynamic confining pressures of about 86 MPa and 252 MPa, respectively. The unconfined specimen shows a rapid load drop after the peak stress. When lateral confinement is introduced, the engineering stress response is different: with low and high confinement, the peak stress increases to 600 MPa and 1.1 GPa, respectively. The stress then stays at a constant value before decreasing slowly, instead of dropping suddenly.
The increase in stress with high confinement indicates the role of lateral confinement in postponing damage and final fracture of the specimen.

Dynamic Compression Test

The unconfined specimen tested at high strain rates fragmented completely. With lateral confinement, the brittle fragments of Macor were retained by the plastically deformed sleeve. Figure 17 shows a typical recovered confined specimen containing a 45° crack (dashed arrow).

Microstructural Characterization

The fracture mechanisms of the broken FBD specimens under quasi-static and dynamic loading conditions, the broken compressive specimen and the NSCB specimens under quasi-static loading were characterized using a Carl Zeiss Evo LS15 VP scanning electron microscope. The dynamically tested compressive and NSCB specimens were fragmented randomly by the present Hopkinson bar system; consequently, it is impossible to characterize their fracture surfaces of interest. Figure 19a, b show fracture surfaces of the mid-section of FBD specimens under quasi-static and dynamic loading conditions, with the corresponding higher magnification micrographs given in Fig. 19c, d. The fracture mechanisms are predominantly transgranular under both quasi-static and dynamic loading conditions. Fractures propagate mainly along the mica-glass interfaces and the cleavage planes, which is a typical failure mechanism in mica-glass ceramics [69]. Examples of the cleavage plane (arrow) and bright transgranular fracture surface (circle) can be seen in Fig. 19c. The fracture surface of the dynamically broken FBD specimen in Fig. 19d shows finer cleavage facets with more transgranular fractured mica platelets compared to the statically broken FBD specimen in Fig. 19c. The micro-morphology of the statically broken compressive specimen in Fig. 19e presents smooth cleavage planes (arrow) and a number of short fiber-like lamellar mica phases (dashed arrow) with a bright transcrystalline fracture surface (circle). The fracture surfaces under the tension and compression loading modes are generally similar. The content of transgranular fractured, randomly oriented mica platelets and cleavage planes in Fig. 19e is greater than that in Fig. 19c from the tensile fracture surface. Regarding the statically broken NSCB specimen with a mode I fracture, the cleavage facet (arrow), lamellar mica platelet (dashed arrow) and bright transgranular fracture surface (circle) can be found in Fig. 19f. This fracture surface appears similar to that of the statically broken FBD specimen in Fig. 19c. Although Macor presents macroscopically brittle behaviour, the interlocked mica-glass phases show effective micro-scale ductility [39,70] and consequently provide good load-bearing capacity. Considering the fracture surfaces of the dynamically broken specimen with a conical plug under the high confinement condition, Fig. 20a shows a smooth shear zone (arrowed) and Fig. 20b presents the elongated microstructures on the edge of the cone, both indicating shear failure in the confined Macor specimen. A closer observation of the smooth shear zone (Fig. 20c) shows elongated fine particles of 5-10 µm width, marked by a dashed arrow. These fine particles are also randomly distributed in the fracture surface in Fig. 20d, with a few mica platelets marked with a circle. The fine particles are caused by the crushing/pulverization of the Macor under high confinement.
The fracture micrographs of the dynamically broken specimen with high confinement present a completely different failure mechanism of Macor ceramic, in a comminuted state, compared to those in Fig. 19.

Discussion

With the development of precision processing technology, much work has been carried out in recent years to study the compressive response of advanced ceramic materials. The increasing implementation of advanced ceramic materials in applications where the materials undergo high speed deformation processes has drawn considerable attention to the measurement of dynamic compressive strength. Due to its excellent heat insulation and low density, the glass ceramic Macor is increasingly used in electrical components and thermal protection systems in aerospace engineering. Studies concerning the loading mode and lateral confinement dependent dynamic fracture behavior of Macor are therefore of considerable interest to engineers. This work reports our comparative study of the compressive and tensile strengths of Macor ceramic, with all measurements performed in the same laboratory, using the exact same experimental setup from quasi-static to dynamic loading. In addition, the NSCB specimen is employed to determine the mode I fracture toughness of Macor. High speed photography and the DIC technique were used to monitor the dynamic deformation and failure processes of the Macor ceramic. A notable example of compression testing of a brittle material is the recent investigation of the loading rate dependent compressive strength of boron carbide carried out by Swab et al. [71]. Dumbbell-shaped specimens were used to avoid stress concentrations at the interface between the loading platens and the specimen [72]. The present work, however, adopts cylindrical specimens to measure the compressive strength of Macor, due to the complexity associated with manufacturing dumbbell-shaped ceramic specimens. In order to reduce stress concentrations and friction between the specimen and the loading platens, the ends of the input bar, output bar and specimen were polished and manufactured with a high-parallelism geometric tolerance. Additionally, a small amount of lubrication was used. The pulse shaping technique was employed to increase the rise time of the stress wave and achieve dynamic force equilibrium. The above precautions ensured that the values of uniaxial compressive strength measured at quasi-static and high strain rates are in good agreement with the previous results reported by the manufacturer [49] and by Chen and Ravichandran [41]. In addition to reporting the global mechanical response, the dynamic compressive deformation and failure process of Macor was monitored here using the high speed photography technique. A very small surface crack formed before the peak stress; this was followed by failure resulting from the fracture of two small areas close to the end of the specimen. The subsequent load drop indicated damage-induced material softening. Later, additional cracks formed on the surface at the centre of the specimen, leading to the final fragmentation of the Macor ceramic. Compared to the compressive test, tensile testing is more complicated, particularly under dynamic loading conditions. Another exceptional example of tensile testing of a brittle material was recently performed by Swab et al. [73], for the measurement of the quasi-static uniaxial tensile strength of a boron carbide. They found that flexure strength tests predicted the uniaxial tensile strength well.
In the present work, the FBD specimen was used for the determination of the static and dynamic tensile strength of Macor. Extra care was taken over the dynamic force equilibrium by using the pulse shaping technique. The applicability of the Brazilian test to evaluate the dynamic tensile strength of brittle materials can be found in the recent review of Khosravani et al. [16]. With the assistance of the high speed photography and DIC techniques, the FBD specimen with two flat planes showed the advantage of reducing the stress concentration close to the loading point. The average dynamic tensile strength of 101 ± 12 MPa is about 1.4 times the average quasi-static tensile strength of 73 ± 6 MPa. The standard deviation of 12 MPa in the dynamic tensile strength measurements is higher than that of 6 MPa under the quasi-static condition. Due to the brittleness of Macor ceramic, the dynamic loading rate was more difficult to control than the quasi-static loading rate. Note that the values of dynamic tensile strength are averaged over loading rates from 7 × 10^5 to 1.5 × 10^6 MPa/s. The variation of loading rate may affect the dynamic tensile strength and contribute to the observed scatter. Similarly, the scatter could also be related to material inconsistency and the manufacturing process [74]. The flexural strength of Macor quoted by the manufacturer, 94 MPa with an unspecified test condition, lies between the quasi-static and dynamic tensile strength measurements in the present work. The present dynamic tensile strength agrees with the dynamic flexural strength measured by Dong et al. [42]. However, the tensile strength value from Dong et al. [42] using Brazilian disks is less than 50% of their measured flexural strength, of the flexural strength quoted by the manufacturer, and of the tensile strength in the present work. Without monitoring of the dynamic deformation process [42], it is difficult to evaluate problems arising from stress concentrations at the loading position and the initiation of failure in the Brazilian test. Here, the high speed images clearly showed that the dynamic failure initiated at the centre of the FBD specimen and then formed an open crack propagating to the specimen ends along the loading direction, resulting in the final splitting of the FBD specimen. This information confirms the validity of the tensile test of Macor using an FBD specimen. The above comparison of compressive and tensile tests shows that both the static and dynamic tensile strengths are approximately 20% of the corresponding compressive strengths. The microstructural characterization shows that the FBD specimens present a predominantly transgranular fracture mode with cleavage facets and bright transgranular fracture surfaces. The fracture surface of the dynamically broken FBD specimen shows denser cleavage facets with more mica flakes, compared to the statically broken FBD specimen. The FBD specimen fails by extensive splitting. The fracture in the compressive specimen also shows mode I cracking (Fig. 15). The fracture surfaces of the compression specimen and the FBD specimen present similar microstructures. These observations indicate a similar formation mechanism for the mode I crack in the compressive and FBD specimens [75].
Further fracture toughness characterization of Macor clearly shows the loading rate effect on the fracture toughness, with the average dynamic fracture toughness being about 1.5 times the average quasi-static fracture toughness. The average quasi-static fracture toughness of 1.58 ± 0.18 MPa√m is in good agreement with the standard fracture toughness value of 1.53 MPa√m quoted by the manufacturer. The high speed images of the NSCB specimen show that a crack follows the concentrated strain region in the orientation of the pre-notch towards the loading position, and the specimen finally breaks into two parts. This agrees with the observation in the test of an alumina ceramic using NSCB specimens reported by Chen et al. [41]. Hydrostatic pressure has been reported to increase the deformability of brittle materials, e.g., Farbaniec et al. [76], Chocron et al. [77], Forquin et al. [78], Ma and Ravi-Chandar [79]. This work shows that the brittle characteristics of Macor can be suppressed by lateral confinement, without a load drop. In this study, the confining pressure was provided by plastically deformed sleeves with low strain hardening, giving an almost constant radial confining stress, an experimental technique proposed by Rittel et al. [55,56,80]. Adopting this technique to characterize the confined response of a ceramic is of particular interest. Here, extra care was taken in machining the Macor specimens and the metal sleeves. With a certain amount of grease lubrication, the specimen can be tightly inserted into the metallic sleeve with sufficient hand pressure. An advantage of this method is that no direct measurement of the confining pressure is required, as it can be calculated from the elastoplastic response of the sleeve material. The dynamic compressive strength increases significantly with increasing confining pressure and becomes one order of magnitude higher than the dynamic tensile strength. It is also interesting to note that the recovered confined Macor specimens contain a 45° crack with respect to the axial loading direction. This is slightly different from the process of fault formation reported by Chen and Ravichandran [41], where the cracks propagated from the corners into the specimen and intersected near the middle. Similarly, the confined Macor transforms from brittle fragmentation to a conical plug fracture mode, indicating a ductile failure feature (Fig. 17). Microstructural characteristics of this shear failure mode in Macor are absent from the published literature. Here, observation of the fracture surface in Fig. 20 shows ductile failure with shear zones and fine particles in the dynamically broken Macor specimen under high confinement. The fine particles, resulting from crushing/pulverization of the Macor, are influenced by the local stress states [81][82][83]. As the deformation continues, the cracked structure is further pulverized under dynamic compression loading.

Fig. 19 Comparison of fractographic images for a statically and b dynamically broken FBD specimens, higher magnification of c statically and d dynamically broken FBD specimens, e statically broken compressive specimen, f statically broken NSCB specimen

Future work will aim at investigating the test methodology using dumbbell-shaped specimens [71,84] and numerical modeling with the currently available and further measured experimental results.
Efforts will be made to develop a confined tensile test technique to reveal the potential pressure-dependent tensile response, in order to provide the whole picture of the tensile and compressive properties of Macor and other advanced ceramics [48,58,85-87] in aerospace engineering applications. Likewise, the Hopkinson bar system will be improved to better recover the dynamically deforming specimens. Conclusion This paper reports the loading rate dependent tensile strength, compressive strength and fracture toughness of the glass ceramic Macor. Differences between the tensile and compressive responses of Macor are compared. A series of results has been obtained using cylindrical, FBD and NSCB specimens. The main outcomes are summarized as follows. Although ceramics show high compressive strength and a capacity for strength enhancement under confinement, this work suggests it is important to simultaneously and quantitatively consider the low tensile strength response of ceramic materials. The methodology and techniques used in the present work will support the characterization of impact damage and ballistic performance of current and future advanced ceramics in demanding applications. Appendix The DIC technique supported the observation of the dynamic deformation processes of Macor ceramic. The DIC analysis was performed with the LaVision Davis commercial software. The Macor specimens were spray-painted with black ink using an airbrush in order to produce a random speckle pattern. The quality of the speckle pattern was verified using the mean intensity gradient (MIG) [88]. MIG values of about 30 were calculated for the speckled region of interest in the Macor specimens, which can be considered good quality, providing small errors in the DIC measurements [45,88]. The high speed images were processed using a Least-Squares Matching algorithm [89] with a subset size of 13 × 13 pixels and a 3-pixel step size. An affine shape function and a 6th order spline sub-pixel image interpolation scheme [90] were adopted in the matching process. The calculation settings are listed in Table 2. As suggested by the DIC good practice guide [91] and Sasso et al. [92], a set of 20 images of unloaded cylindrical compression, FBD and NSCB specimens was analyzed to assess the quality of the correlation. Figure 21a illustrates representative strain noise distributions measured on three specimens. The strains show minor noise oscillations, predominantly of magnitude below 0.07%. Similar noise oscillation trends can be observed in the displacements of the FBD and NSCB specimens in Fig. 21b. The displacement noise oscillations have a magnitude of less than 0.005 mm around zero. These are representative of the measurement uncertainties of strains and displacements in the present system. It is noted that the purpose of the DIC technique in the present work is to assist in monitoring the dynamic deformation process of Macor ceramic. See Appendix Table 2 and Appendix Fig. 21.
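To illustrate the speckle-quality check, a minimal NumPy sketch of the MIG metric of [88] follows: the mean over all pixels of the intensity-gradient magnitude, computed here by central differences. This is an illustrative reimplementation, not the LaVision Davis pipeline, and the synthetic random image merely stands in for a real speckle photograph.

import numpy as np

def mean_intensity_gradient(img: np.ndarray) -> float:
    """Mean intensity gradient (MIG) of a speckle image.

    MIG = mean over all pixels of |grad f|, with the gradient taken by
    central differences; higher values indicate a denser, higher-contrast
    speckle pattern and hence lower expected DIC matching error.
    """
    f = img.astype(float)
    gy, gx = np.gradient(f)          # central differences in y and x
    return float(np.mean(np.hypot(gx, gy)))

# Illustration on a synthetic speckle-like image (random intensities).
rng = np.random.default_rng(0)
speckle = rng.normal(128, 40, size=(256, 256)).clip(0, 255)
print(f"MIG = {mean_intensity_gradient(speckle):.1f}")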
8,229.2
2022-03-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Intelligent control of external software systems Abstract This paper focuses on the relatively unexplored set of issues that arises when an intelligent agent attempts to use external software systems (ESSs). The issues are illustrated initially in the context of the complex agent-ESS interactions in an engineering design example. Approaching the area from the perspective of artificial intelligence (AI) research, we find that in general, agent-ESS interactions vary widely. We characterize the possible variations in terms of performance capabilities required, skill levels at which performance is exhibited, and knowledge sources from which capabilities can be acquired. We are exploring these variations using Soar as our candidate AI agent; the document briefly describes seven Soar-based projects in early stages of development, in which agent-ESS issues are addressed. We conclude by placing agent-ESS research in the context of other work on software technology, and discuss the research agenda we have set for ourselves in this area. CONSTRUCTION PLANEX: generates the construction schedule and estimates cost. These tools, together with a computational infrastructure that allows the tools to work together, have been combined in the IBDE (Integrated Building Design Environment) project [4]. The development of the integration framework of IBDE has been a rather formidable undertaking, involving the bulk of several PhD theses and a substantial effort measured in person-years. Figure 1 shows the structure of IBDE, an example of an external software system environment. It adopts a hub-and-spokes approach with a centralized global datastore to contain all the information about the building in a hierarchical object-oriented representation. However, each tool uses its own data representation and these differ radically. CORE uses orthogonal structures for generating alternative layouts of rectangular objects; ARCHPLAN, STANLAY, and STRYPES use the so-called Tartan Grid for storing information about the spatial, planar, linear, and point elements on a common orthogonal grid; PLANEX uses labeled points for combining spatial information about each building element with associated construction activities; and FOOTER and SPEX use ad hoc attribute-value representations. Thus, custom-built translation modules are required to allow the datastore to pass information to and from each tool. The datastore manager is responsible for communication between the datastore and the individual processes, retrieving and storing data when necessary, performing format conversions and maintaining an audit trail of the producers and consumers of each data item in the datastore. Communications with the human are relatively complex, since each component system has its own interface (recall that each such system is a large software system on its own account and was developed independently); but there is also an interface that deals with the collection of ESSs as a whole. This latter keeps audit trails and process information on a blackboard. Thus, the entire IBDE system is quite complex, which arises in part simply from the total functionality involved (the design of an entire high-rise building) and in part from the fact that it was assembled from individually-designed large software systems.
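To make the hub-and-spokes arrangement concrete, a minimal Python sketch follows: a central datastore holds the canonical representation, each tool registers a pair of translation functions, and the hub keeps an audit trail of producers and consumers. The data items and translators are hypothetical, loosely patterned on the attribute-value style attributed to FOOTER above; the real IBDE translation modules are, of course, far more elaborate.

from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Datastore:
    """Central hub: canonical building data plus per-tool translators.

    Each tool registers a pair of translators, because its native
    representation differs from the hub's canonical one; the hub also
    keeps an audit trail of producers and consumers of each item.
    """
    data: dict[str, Any] = field(default_factory=dict)
    to_tool: dict[str, Callable[[dict], Any]] = field(default_factory=dict)
    from_tool: dict[str, Callable[[Any], dict]] = field(default_factory=dict)
    audit: list[tuple[str, str, str]] = field(default_factory=list)

    def register(self, tool, encode, decode):
        self.to_tool[tool], self.from_tool[tool] = encode, decode

    def send(self, tool: str, key: str) -> Any:
        self.audit.append((tool, "consumed", key))
        return self.to_tool[tool]({key: self.data[key]})

    def receive(self, tool: str, payload: Any) -> None:
        update = self.from_tool[tool](payload)
        self.data.update(update)
        self.audit += [(tool, "produced", k) for k in update]

# Hypothetical tool using a flat attribute-value representation.
store = Datastore(data={"grid": {"bays": 5, "span_m": 9.0}})
store.register("FOOTER",
               encode=lambda d: [(k2, v2) for k, v in d.items()
                                 for k2, v2 in v.items()],
               decode=lambda pairs: {"footing": dict(pairs)})
print(store.send("FOOTER", "grid"))     # hub data in FOOTER's format
store.receive("FOOTER", [("depth_m", 1.2)])
print(store.audit)                      # who produced and consumed what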
The IBDE structure assumes a human user, who works via the control module shown in the figure. The function of the controller is to make it possible for the human to specify the execution of sequences of the seven systems. This controller is rather sophisticated, using a blackboard structure and a modified contract-net framework to allocate tasks. Nevertheless, its functions are fundamentally low-level. It is not capable of getting a complete task performed by the total system. That is the function of the (intelligent) human user. Our interest here is to replace the human in Figure 1 with an agent (i.e., an AI system) that is to perform the same overall functions as the human designer. We can take for granted the functions performed by the controller, which permit communication and execution. The question is what other functions are being performed by the human (now, the agent), what knowledge must be available in the agent to perform these functions, and how is that knowledge to be acquired. The discovery of these functions and the construction of agents that can accomplish them is the central core of research on agent capabilities for controlling external software systems. An actual development path for IBDE would not begin with the replacement of the human by an agent. Rather, an agent would be inserted between the human and the controller. This agent would attempt to do as much of the total design task as the human user-designer found useful. Undoubtedly, this would tend toward having the agent know the details and idiosyncrasies of the operation and results of the ESSs (the seven systems in the figure), so that the human user-designer could operate in a higher supervisory and guidance mode. One motivation for this path might seem simply that society wants human-controlled and -designed engineering artifacts. But equally strong is that no one knows all the functions that the intelligent human is performing in Figure 1. All we know is that the human does whatever is necessary to get a total effective design. Only when we discover what these functions are, and the ease or difficulty of incorporating them in agents, will we be able to create total systems that have agents as appropriate designer-assistants for humans functioning as high-level supervisors. The purpose of the present paper is to explore what these functions might be and to consider the research required to see how to include them in agents. We do so, as we noted above, by focusing on a single agent with a collection of ESSs. But the ultimate application path envisioned involves cooperative arrangements with humans and agents. The Capabilities Required to Use ESSs Let us now ask what functions the agent must have. Figure 2 gives the generic situation. It looks similar to Figure 1, except that we have expanded the agent, showing its inner structure. We take the agent as already having some task to perform, represented within it in some fashion independent of the ESSs that are available to it. Thus, at the most general level the agent must accomplish all those things that will lead it to use the ESSs in some manner of its own determination in order to accomplish some aspects of its task. However, it need not accomplish all of its task via the ESSs, since it has problem solving and computational capabilities of its own. Performance Capabilities The fundamental cycle required for an agent to use an ESS consists of four capabilities. These are the outer four capabilities in Figure 2 (the other two will be introduced presently). Within the context of a local task to perform, the agent formulates a subtask to be performed by the ESS.
It then creates the appropriate input data structures for the ESS. Using the operating-system functions, it communicates this input to the ESS, evokes its operation, and obtains the resulting output. It then converts the received data structures to an internal form it can process. Finally, it interprets the results in terms of the original task. Let us consider these capabilities in more detail. 1. Formulate-subtask: The capability to formulate a computational subtask with the potential of being performed by an ESS. The agent must decide whether a subtask can be solved internally or, if externally, which ESS should perform the task. This capability requires understanding the functional demands of the task and the functional capabilities of available solution methods in order to determine whether and how an ESS can be used in the attempt to do the subtask. 2. Create-input: The capability to create appropriate input to an ESS in the ESS's input language, starting from task knowledge in the task's own representation. The translation capability is required because the task representation and the ESS representation are uncoordinated. (The operating system capability we are assuming deals only with lower-level operations of making connections and transmitting messages.) 3. Convert-output: The translation capability, analogous to create-input, to receive output from the ESS in the ESS's output language, and represent it in the task's own representation. 4. Interpret-result: The capability to use the results of a computation in the service of the original task (when expressed in the task's own representation). In many cases, formulating the computational task determines exactly how the results are to be interpreted, so that this capability can effectively be dispensed with. But more generally, interpretation of results need not presuppose pre-envisionment of exactly how the results will be used. ESSs can be deliberately used for exploratory purposes. Also, results from a computation almost always have the potential for surprise, requiring reconsideration of the larger task. This basic four-capability cycle can be seen clearly in the simple situation of an agent that uses a relational database as part of some larger task it is doing. Input to the database is via some query language, such as SQL; output from the database is via some table structure. In accomplishing the larger task, there comes a point where the agent needs some data it does not have. This occurs in the context of dealing with the internal representation of the task, and the needed data is seen in terms of this representation. There follows formulate-subtask, to determine exactly what data should be requested as a function of other data in the task and the known types of data in the database. This determination is made in terms of the internal representation. Then comes create-input, to cast this request in terms of SQL and knowledge of the table organization of the database. This may be entirely routine, if the request is simple enough, but (as any user of SQL can testify) issues may arise about how to express the request in SQL and which searches are the appropriate ones. In either case it is necessary to perform a process, which is the exercise of the create-input capability. There follows a series of activities, which we have assigned to the operating system and taken for granted: getting the SQL request to the database, getting the search performed, and getting the results back.
The agent then faces a set of tables in a format determined by the database and some standard conventions of how to encode such tables in text streams. This received representation is definitely not the representation of the internal task. Thus, convert-output is required to put this data into a form interpretable by the processes that are performing the internal task. Finally, interpret-result can occur. This process is likely to be minimal, since specific data has been requested and, when delivered (and converted), can simply take its place in the ongoing processing of the internal task. The elementary use of a database is a good illustrative example of the basic cycle, because it makes clear the role of conversion between representations. No one organizes their internal processing of a task around SQL, and they are unlikely even to organize it around the sorts of tables that are retrieved. Additional capabilities are involved in performance. One is related to issues of how to operate software systems -more is required than just shipping inputs and receiving outputs, especially if the software system is at all complex. 5. Operate-software-system: The capability of dealing with the operation of the ESS as a software system, with its normal and abnormal operating conditions, and a corresponding need for diagnosis and operational response. The basic four-step cycle treats the ESS exclusively in terms of the semantics of what it does -it delivers as output certain task-relevant knowledge if given the appropriate inputs in the appropriate data representation. But ESSs are in fact software systems embedded in a larger software operating environment. And the (perhaps sad) state of the art is that ESSs are not totally transparent in use. Well-designed and debugged ESSs are better than flaky systems or systems still under development. Much can be smoothed over by a good operating system. But the software world, especially the world of large software systems, is far from perfect. An agent that has no knowledge that its ESSs are indeed software systems, but only knows to shove inputs at them and wait for outputs from them, would soon find itself immobilized. Thus a capability for treating ESSs as software systems is required. The last capability is related to being able to simulate the computations done by an ESS. We know humans use such knowledge in working with software systems. In fact, when they have no such knowledge we accuse them of operating mechanically or blindly, and of not understanding what they are doing. 6. Simulate-ESS: The capability to produce internally the same results as a specific ESS. The results may range from the exact output data structure delivered for a given input data structure, to abstract characterizations of the behavior. They may be accurate or only approximate. The essential feature is that the agent can do this itself, without actually evoking the ESS, and that it has some access to its own internal version of the computational process that produces the simulated results of the ESS. This capability is potentially of use in formulating a subtask, interpreting results and even operating the software. For example, it can provide sanity checks on the behavior of an ESS. Also, knowing something of the actual computations provided by an ESS can play a significant role in deciding what subtask an ESS can do and formulating it for the ESS. If ESSs are expensive, as is often the case, being able to obtain their outputs cheaply in special cases can be useful.
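To make the basic cycle concrete, the database example can be sketched in a few lines of Python, with the standard sqlite3 module standing in for the ESS. The table schema, the task representation and all names are hypothetical; the numbered comments mark where each of the four capabilities is exercised.

import sqlite3

# Hypothetical task representation: the agent needs a part's supplier.
task = {"need": "supplier", "of_part": "P42"}

# 1. Formulate-subtask: decide the data request in the task's own terms.
subtask = {"lookup": "supplier_name", "where_part": task["of_part"]}

# 2. Create-input: translate the request into the ESS's input language (SQL).
sql = "SELECT supplier_name FROM parts WHERE part_id = ?"
args = (subtask["where_part"],)

# (Operating-system level, taken for granted: connect, transmit the
# query, run the search, and collect the output.)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE parts (part_id TEXT, supplier_name TEXT)")
db.execute("INSERT INTO parts VALUES ('P42', 'Acme')")
rows = db.execute(sql, args).fetchall()

# 3. Convert-output: re-represent the ESS's table output internally.
result = {"supplier": rows[0][0]} if rows else {"supplier": None}

# 4. Interpret-result: fold the answer back into the ongoing task.
task.update(result)
print(task)   # {'need': 'supplier', 'of_part': 'P42', 'supplier': 'Acme'}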
We know that humans who understand a set of ESSs gradually build up this sort of simulation capability, but we have little idea of the full range of uses it can be put to. Skills Capabilities are the operational expression of some body of knowledge, realized ultimately in operations on data structures. But many different ways exist to organize a given body of knowledge. Figure 3 shows five such organizations, which form a sequence of levels of skill. Recognition, at the top, provides the highest degree of skill and the fastest operation. As one moves down to deliberation, interpretation, simulation and finally derivation, more and more processing is required to determine what to do, and the capability is exercised more slowly. The other side of this coin is that, as one moves down the sequence, the knowledge that constitutes the capability can be represented in more and more complete ways. Thus the possibility increases of exercising the capability flexibly in novel situations. This is just a variant of the familiar procedural-declarative dimension. Let us consider each of the five skill levels in more detail. 1. Direct recognition: The agent immediately recognizes what operations are required to implement the capability. This can be viewed as a set of parallel-acting situation-action rules (or productions). If there are enough patterns and they cover all the situations that in fact arise, behavior is fast and direct. This corresponds to the skill we see in expert humans using a hand calculator or doing simple computations on a system such as an editor. 2. Elementary deliberation: The agent is skilled at performing the basic operations, knows about plausible ways of doing things and can evaluate proposed sequences of action. But it does not know immediately exactly the right thing to do, so it must consider options and alternatives, and evaluate them. This corresponds, in humans, to there being a pause to "give thought" to what to do next. A user with moderate experience with relational databases exhibits this level of skill in formulating a query in SQL -it isn't quite automatic. In AI systems this corresponds to formulating tasks in problem spaces (i.e., heuristic search spaces) where some search is required before discovering a solution. It is the level at which general reasoning methods such as hill climbing, means-ends analysis, and planning by simple abstraction are routinely used. However, this activity should not be seen as the immense combinatorial searches associated with, say, current chess programs. Rather, this exercise of skill consists of a sequence of routinely resolved, small problems, each representing a small act of deliberation on the part of the agent. It is apparent why this level of skill is both slower than recognition (the extra steps) and also why it is more flexible (deliberation is precisely the ability to consider alternatives and bring new knowledge to bear). 3. Instruction interpretation: The agent may have a capability by virtue of having a set of instructions that dictate explicitly the operations to be performed under various conditions. At the extreme, of course, this is just what a computer program is -fetch-execute -which is the fastest way of operationalizing a capability. But the sort of instructional situation intended here is more open and flexible. It is having a manual of operation of an ESS, or a set of detailed guidelines for how to use an ESS, or a set of cliches to evoke specific ESS behavior.
Thus the process of interpretation is more complex and requires more processing than the basic fetch-execute cycle for computer instructions. The instructions themselves are correspondingly more expressive and cover alternative and exceptional situations. Instruction following is like deliberation, except that the knowledge has not been internalized, but must be extracted from a declarative data structure (the instructions) at the time the task is performed. Hence, it is apparent why this level of skill is slower in general than deliberation. That it might, correspondingly, be more flexible arises from the amount of information that is kept in books, manuals, and other documents -rather than in people's heads. Thus, access to instructional documents opens up the range of tasks that can be solved. Unfortunately, describing this skill level as instruction taking de-emphasizes the correlative skill of finding the relevant instructions, which is required if the instructions are available in documents and not simply given to the agent at execution time. 4. Model-based simulation: The knowledge of a capability may be embodied in a model, which the agent can inspect or run to determine how to implement the capability. For instance, in the IBDE situation, the agent could have a highly abstract simulation model of the operation of the seven ESSs, which take in abstract inputs, characterized only by type of information, and deliver abstract outputs, similarly characterized. Then much about how to operate the ESSs could be obtained by running and examining the simulation model. 5. Theory-based derivation: The knowledge of a capability may be embodied in a set of principles, constraints or propositions, from which the agent can derive the actions to implement the capability. For instance, its knowledge of SQL syntax may be given by a BNF grammar. Or the ESSs may conform to a set of input/output conventions, and the agent has a set of expressions for these conventions in (say) predicate logic. Or the response time of the ESSs may be given by a set of equations on parameters of the input data, which the agent has access to. This mode of representing knowledge is, almost by definition, the most general of all (as witnessed by the fact that scientific theories invariably are expressed by such formalized, linguistic descriptions), and thus offers the greatest flexibility in containing the knowledge to cover a wide range of novel contingencies and uses. But correspondingly, the effort required to derive specific results can be essentially unbounded -it depends on the difficulty of the derivations and the skill of the agent at doing mathematics or proving theorems. Although there might be other forms in which the knowledge of capabilities can be encoded, the set of five in Figure 3 covers the major cases. They support the essential point that a capability can be implemented in radically different ways, with important consequences for the exercise of the capability, i.e., for its speed and flexibility. We have described skill levels as pure types. An actual capability for a complex function, such as formulating a task to be done with the IBDE collection of ESSs, might well have elements of all these skills, for different aspects of its capability. Indeed, it might have several different skill levels for the same aspect. If an agent started out with its knowledge in propositional form -giving it generality, but slow response -it would be natural to expect it to acquire more efficient levels of skill.
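The contrast between the two highest skill levels can be sketched in a few lines of Python: direct recognition as a pattern-to-action lookup, elementary deliberation as a small search over candidate operators, with the outcome of deliberation cached so that the slower skill converts itself into the faster one (in the spirit of the chunking mechanism discussed later). The situations, operators and cost model are all hypothetical.

# Direct recognition: situation -> action patterns, applied immediately.
recognition: dict[str, str] = {"need-data": "query-db"}

# Elementary deliberation: search over candidate operators with evaluation.
operators = ["query-db", "compute-locally", "ask-user"]

def evaluate(situation: str, action: str) -> int:
    # Hypothetical cost model standing in for the agent's knowledge.
    return {"query-db": 1, "compute-locally": 3, "ask-user": 5}[action]

def decide(situation: str) -> str:
    if situation in recognition:          # fast, but inflexible
        return recognition[situation]
    best = min(operators, key=lambda a: evaluate(situation, a))
    recognition[situation] = best         # cache the deliberated result
    return best                           # slow first time, fast afterwards

print(decide("need-plot"))   # deliberates, then remembers
print(decide("need-plot"))   # now answered by direct recognition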
But there would be no reason to abandon the original forms of knowledge, especially since the higher (faster) levels of skill might well be narrower in scope. Acquisition from knowledge sources Capabilities, and their realization as particular skills, must be acquired. In the current world of software systems (including programmed AI systems), the default is always that the capabilities are simply programmed by the system's creators or maintainers. Thus, some human programmer must design, code and debug the six capabilities of Figure 2 for each ESS, and, if these are to be available at multiple skill levels, each level must be so designed, coded and debugged. This is easily recognized as another instance of the knowledge-acquisition bottleneck. The situation is actually more serious than indicated by the above, essentially static, view. The actual world of ESSs is highly dynamic and changing. New ESSs are introduced, as in additional modules for IBDE. Existing ESSs are continually corrected or functionally enhanced. Agents continually find themselves spending too much effort reasoning through some situation (say, the solution of a math model), hence need to introduce an ESS (say, a math-model solver) to increase efficiency. In all such cases, if the static view above holds, the agent is helpless until some human reprograms it. More generally, as any human who works with complex software systems can testify, large fractions of life are devoted to learning about particular software systems and developing skill in working with them. Thus, the agent itself must be given the capability for acquiring the requisite ESS capabilities. This implies agents that learn and that do so in a variety of ways and from a variety of sources of knowledge. It is very likely that the degree to which agents can learn specific ESS capabilities will be a strong limiting factor on how effective agent-ESS systems will be. Of course, even with only human-programmed systems, some useful arrangements will be possible. For example, agents can be provided once and for all with the capabilities to deal with generic relational databases. But agent-ESS systems will never become an important part of software technology if they are so limited. In sum, learning capabilities is of the essence. It requires a capability to produce a capability. Figure 4 shows the situation. On the right is the capability to be acquired. This might be either to acquire the capability from scratch or to extend the scope of the capability. Furthermore, to acquire a capability means to acquire it at some particular skill level. In fact, if the capability exists, it does so at some skill level. Thus, what is being acquired may be a different skill level -either a more efficient one or a more flexible one. There must be some existing source of knowledge about the new capability. It is not possible to get something from nothing. The learning capabilities exercised by the agent are really capabilities to exploit some knowledge source that has knowledge about the capability, and convert this knowledge into a skill level. Thus, on the left, Figure 4 starts with various knowledge sources. These are characterized in terms of their form, i.e., of the way they hold their knowledge. Each of these types of sources can contain knowledge about any given capability, e.g., about create-input, or simulate-ESS. We have indicated some important types of sources, but the list is hardly complete.
1. Specifications in some formal language: The new capability may be described in some formal language whose function is to specify capabilities. E.g., it might be a formal specification language, or a high-level program description. Then the knowledge-source access capability would contain an interpreter or compiler for this language, as well as capabilities for analyzing the specification as a body of text. 2. Natural language text about the capability: The knowledge about a capability may reside in manuals, design documents, descriptions of how to use systems, etc. -all created originally for human consumption, hence all in natural language (though often with additional diagrams and formal notations). 3. Experimentation with the capability: Much knowledge about how to do anything, such as run an ESS, comes from experimenting with doing so, and inducing a more general capability. Indeed, for humans we know that, after a point, the only way to learn about computer systems and how to use them is to try out simple cases and discover how matters go. 4. Observation of the exercise of the capability: Much can be learned about how to do something by watching an expert do it, i.e., being an apprentice. It helps, of course, if the expert is sympathetic and provides constructive commentary. In some ways this can be like guided experimentation. These knowledge sources are each radically different in character and involve complex activities to extract the knowledge they contain. Comprehending natural language is a longstanding and difficult area of research in AI. AI systems that learn from active experimentation in an external world are just being developed. Research in AI on learning apprentices was initiated some years ago, but they have actually received very little attention. Only the use of formal specifications designed for acquiring a capability has the possible character of requiring only routine operations to obtain their knowledge. Thus, each of these sources requires a separate, complex capability to extract its knowledge. In the figure we have associated such a capability with each knowledge source. However, the capability to exploit a source of knowledge is relatively independent of the particular task to which the extracted knowledge is to be put. For instance, a single natural-language comprehension facility subserves the use of language to convey any body of knowledge. So we show all these knowledge-source capabilities as feeding into one skill-acquisition capability, which pertains to how to use the extracted knowledge to create a given skill. Except in special cases, the total knowledge for a capability will not all be available in a single type of source. If human experience is a guide, to learn how to use a complex software system involves all four types of sources -some formalized description, surrounded by natural-language explanation, then some introductory sessions with an expert guide (whose helpful commentary is also in natural language), and finally hours of low-level experimentation with the system to build up familiarity and understand what all the previous instruction really meant. All four of the knowledge sources at the left of Figure 4 are external to the agent. But the existing skill level is also a potent source of knowledge. One of the best ways to create a higher skill level is to learn from the ability to operate at the current skill level.
Thus, the figure shows a fifth knowledge source, which is internal to the agent. There would be one such potential source for each existing level of skill, since each provides knowledge in quite different forms for moving to a new skill level. Each of the five skill levels in Figure 3 is a software organization and technology of its own. Consequently, the skill acquisition capability in the center of Figure 4 stands in for five separate capabilities, depending on what skill level is being constructed. It is simply convenient to represent it by a single capability in the figure. Acquisition also applies to Figure 4 itself -that is, to acquiring the capability of acquiring a capability. It might seem that only the first-order acquisitions would be required, and that all the capabilities in Figure 4 could be fixed capabilities, implemented in a fixed way for all time. A moment's consideration shows this not to be the case. The problems of acquisition are much too central to working with ESSs. For example, consider a case analogous to the IBDE case, but where new ESS modules are continually being created and added, e.g., to evaluate the intermediate results of the main design modules. Then these added ESSs will all no doubt bear a family resemblance to each other. An agent should be able to exploit the commonalities of this family of modules, and not have to deal with each new ESS as an instance of a completely generic ESS. That is, the capability for acquiring the capabilities for members of this family of ESSs should improve with experience (a second-order acquisition). The scheme laid out in Figures 2, 3 and 4 presents a bewildering array of capabilities, skill levels and knowledge sources. Everything seems to occur in multiple alternatives. Why can't one settle for one simple version -an operating system good enough to avoid having an operate-software-system capability, all capabilities described in some formal specification language, and a single skill level? The fundamental reason this simpler picture cannot prevail is the nature of real software-system environments. IBDE exists prior to trying to get an agent to use it. Not all databases use SQL; many have their own query language. If one wants to have an agent exploit Mathematica because it has some powerful properties, then Mathematica must be taken as it is. Thus, the great appeal in getting agents to control ESSs is to be able to handle the actual variability and complexity of the real software world. Certainly, demonstration agent-ESS systems could be built by specializing all the dimensions of variability. But the challenge (and the real usefulness) of agent-ESS systems lies in confronting and conquering the variability (even if only bit by bit). The Space of ESSs and their Differing Demands There are many ESS situations and they place quite different demands on the agents that use them. Understanding the range of these demands is important to developing effective agents for ESSs. Certain aspects can be critical universally for all applications, whereas others occur only in special situations. Certain aspects cannot be developed before others. It would be pleasant if we could describe a space of such demands, which would make the issues clear. However, at this stage of understanding, we do not know enough to do that. The ESS situation arises because that is the current state of engineering design, which is beginning to produce large numbers of independent computer tools, whose integration seems always to be left as an exercise for later.
Only as we have become aware of the extent to which ESSs are entering into many Soar systems have we begun to focus on the common intellectual core problems of obtaining intelligent control and use of ESSs. Second, Soar organizes its tasks in terms of multiple problem spaces, which comprise sets of operators, which are applied to attain a desired state within the space. Operators executed in one space are performed in an implementation subspace (with its own operators). Thus, capabilities are evoked by the selection and application of operators, and capabilities are exercised by operator applications in some collection of spaces. This task organization is implemented by a scheme of immediate recognition of patterns of elements in a working memory, realized as an OPS5-like production system. Thus, an operator that does not require implementation in a subspace is one that is applied simply by immediate recognition, i.e., by the evocation of a set of actions directly in response to detected patterns in working memory. Soar has an automatic learning mechanism (chunking) that continually creates new productions (new recognition patterns with their immediate actions) that capture the processing performed in subspaces, thus avoiding its recurrence. Third, Soar, as it normally operates, moves through the three higher levels of skill: instruction interpretation, elementary deliberation and direct recognition. The movement from deliberation to recognition is the automatic effect of the basic chunking mechanism described above. All deliberation produces additional recognition skill. The movement from instruction to deliberation (often directly to recognition) occurs because of general strategies that Soar employs in interpreting instructions, which cause chunking to learn the right knowledge to increase the skill level. These mechanisms comport exactly with the view in Figure 3. If knowledge is directly available, response is immediate at the recognition level of skill, but there is essentially no flexibility of response. If a solution requires search of problem spaces with directly available operators, response is slower at the deliberation skill level, but there is much more freedom to bring relevant knowledge to bear and be flexible. Interpretation of instructions is even slower, since knowledge must be accessed by working in a subspace that corresponds to the interpretation process, imposing in effect another inner loop of activity. These mechanisms and methods in Soar affect only the three highest levels of skill. Nothing in the current Soar deals with whether to use instructions, model-based simulations or theory-based derivations as the basis for a capability, or how to structure the processing for each of these levels of skill. That Soar provides for some changes in skill level has two effects. On the positive side, it makes clear that varying levels of skill is a real consideration in agents, not just something imported because we know humans exhibit varying levels of skill. It justifies the complexity of Figure 3 as relevant for the development of agents for ESSs. On the negative side, other mechanisms and methods exist for moving between skill levels, which would occur in other agents. None of that variety or its effects shows up in our considerations here. Fourth, Soar communicates with ESSs through its regular input/output facility.
The external environment is perceived by the arrival of elements placed by sensors in Soar's working memory, and the environment is affected by central cognition placing elements into working memory to trigger the actions of effector processes. I/O in Soar is asynchronous, so Soar does not have to enter a busy/wait state while waiting for input to arrive or output to be transmitted, and can deal with multiple external devices simultaneously. Because Soar's inner operational loop is the immediate recognition of whatever is in working memory, Soar can be strongly interrupt driven, reacting immediately to new input and shifting its attention to deal with the novel situation. The following subsections describe Soar/ESS interactions, with the primary investigator for each project given in parentheses following the name of the effort in the title of each subsection. Soar/IBDE (David Steier) We have already described the IBDE ESS situation (Figure 1), and it is evident in a general way how all the basic capabilities are required. IBDE actually provides a good example of the need to model the ESSs as software, since a large specialized system organization is needed to integrate very complex individual ESSs. The IBDE arrangement brings to the fore several aspects. First, because there are many ESSs, formulate-subtask must be concerned with the selection of which ESS to use at a given time. Actually, this demand goes further than just selection, since it is useful to plan out an entire computational sequence of the ESSs. Second, the data that flows between the ESS modules in IBDE consists of very large data structures that describe various physical structures for an entire building. These are fundamentally incomprehensible to the agent (i.e., Soar) and, indeed, to the human designer. They are simply too big to be read in and assimilated. Thus, the strategy (for agents and humans alike) is to analyze the results of the ESSs at one remove. Additional critic ESSs are formed that process the output of a given IBDE module and produce compressed evaluations that can be understood by the agent (or human). For example, one critic ESS could make an initial rough cost estimate of the building from an early output. Another could check for the satisfaction of some overall constraint that cuts across modules and therefore is not computed within any module in isolation. Thus, the operation of the agent is entirely in terms of high-level abstractions about the main IBDE ESSs, not in terms of their direct input/output data. This situation leads to the third interesting aspect of IBDE, namely, that the agent (or human) operates in terms of a high-level model of the computation. This abstract model does not exist currently in explicit form, although the engineers involved in IBDE clearly think in terms of such an abstract model (with individual variations, no doubt). So there are two interesting aspects: what should such a high-level model be like, and how is the agent to acquire it? There is an initial version of Soar/IBDE, i.e., a Soar system that operates as an agent for IBDE. 1 The current implementation acquires tool selection knowledge from model-driven simulation in a fashion alluded to earlier: the system models the functionality of the seven IBDE systems in terms of the types of inputs and outputs. This initial Soar/IBDE already illustrates that learning will be an integral part of every agent-ESS system.
We have shown a significant reduction (75%) in processing effort due to learning, with the skill level shifting from elementary deliberation to direct recognition. 1 Its original name was Soar/FORS, because it initially was to operate through an experimental interface system called FORS [10]. Soar/database (Bob Doorenbos) Soar has recently acquired a capability for querying a relational database using SQL (Structured Query Language, a standard query language) to obtain information for use in other tasks. One can see clearly in the organization of Soar the four capabilities of the basic cycle, just as described in Section 3.1. In some task space Soar executes an operator that requires some data for its result. The space that implements this operator exercises the formulate-subtask capability by creating an internal representation that indicates the unknown data it wants in the context of the situation it believes that data depends upon. In this formulate-subtask space it applies an operator to make a database query. This operator is implemented in a separate problem space (the SQL space). It contains operators that know the table structure and the semantics of the database. These operators extract from the representation provided by formulate-subtask the data to be used in the searches. Other operators know how to form SQL queries, including forming complex joins. Thus, this space is exercising create-input. Because of the complexity of SQL (i.e., the alternative ways it provides for searching the database), this capability is more than just conversion; it approaches creating simple programs. The Soar spaces and operators for doing create-input and convert-output can be viewed as a general capability for Soar to deal with relational databases, at least of the standard variety. It is coupled with a general set of representational conventions for how formulate-subtask must indicate what data it wants. These conventions, however, relate to the way Soar systems in general represent their problem-solving states. It contains no knowledge about SQL or the specific structure of the database. It does of course contain some general knowledge, on the part of formulate-subtask, of the content of the database, or it wouldn't even know to attempt to use it. Databases pose in clear form the issue of the simulate-ESS capability. Suppose, in some circumstances, an agent can know the results of a database query without actually having to query the database. Can it make any use of such knowledge (except to avoid the time delay of the query)? On the one hand is the programming intuition that if you can get the answer from one system (the ESS), of what use could another way of obtaining the same answer be (except speedup)? On the other hand is the intuition about humans that familiarity with a database is a requirement for its intelligent use. Perhaps one shouldn't think of simply obtaining directly the exact data in the database; rather, simulate-ESS would contain a more abstract or generalized model of the contents of the database. The Soar database research is too recent to provide answers, but the obvious hypothesis is that such knowledge permits more efficient or discriminating queries. CPD-Soar and Interval-Soar (Ajay Modi) CPD-Soar is a system for designing chemical separation systems, collections of distillation columns that split an input volatile fluid containing several chemical species into several fluids that (ideally) each contain only a single chemical species.
To evaluate candidate designs, it will use ASPEN PLUS [1], a chemical process simulator, to simulate the results of a single distillation column. ASPEN PLUS is a complex program, the result of many years of research and development, so its calculation cannot be done independently by an agent (Soar). This is one typical form of ESS: a unique, practically non-replicable repository of expertise, which must be accessed as an ESS by both agents and humans. The Soar-ESS system here would appear to be quite simple: whenever an evaluation is needed (which is relatively straightforward to determine), provide the inputs to ASPEN PLUS, execute it, and get the evaluation number back. There might be some number conversion in create-input and convert-output, if Soar and ASPEN PLUS use different number representations (which they do), but that should be it. Additional considerations enter because ASPEN PLUS is very expensive. The basic structure of CPD-Soar is a combinatorial search through the space of candidate separation systems, using evaluation functions to prune the search. To use ASPEN PLUS for every evaluation is so expensive that the design process becomes dominated by the need to minimize the number of evaluations performed. Thus, there is a premium on finding some way to obtain cheap evaluation functions, even if they are much more approximate (ASPEN PLUS produces high-quality answers). One path towards this is for CPD-Soar to learn from its uses of ASPEN PLUS enough to be able to produce internally some useful approximation to what ASPEN PLUS would produce. Here we see a specific reason why an agent should have a simulate-ESS capability. The most obvious way to do this is simply to remember that for the input (U, V, W) ASPEN PLUS produced the result Z; and indeed Soar chunking automatically provides this level of learning. The trouble is that this exact input almost never repeats, so this learning does essentially no good at all. Interval-Soar is an attempt to explore a hypothesis about learning to simulate ESSs. Namely, what an agent learns from the operation of an ESS depends on what model of the ESS the agent brings to the experience. If the model is only that the ESS produces function values (numbers in, numbers out), then, per above, that is all that can be learned. If the model of the ESS is that it provides information about the shape of an evaluation function, then the agent can learn about this shape -and that might be enough to make some internal computations that would permit Soar to bypass using ASPEN PLUS. Interval-Soar employs an extremely simple and highly abstract model of the evaluation, namely, that it is a unimodal function. Then a sequence of uses of the ESS evaluation permits locating the mode of the function with increasing precision. Since evaluation is being used by Soar to compare candidate designs, only the relative order of two evaluations is needed. Often this can be determined by the partial knowledge that has been built up about the value and location of the maximum of the evaluation function. More detailed a priori models of the evaluation could, of course, be used, changing the knowledge that could be learned from the execution of an ESS. We have an additional goal in this research, which is to show that the actual learning, given the model of the ESS, occurs by chunking, Soar's basic learning mechanism.
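The Interval-Soar idea can be made concrete in a short sketch. The fragment below assumes, as Interval-Soar does, that the expensive evaluation is unimodal over a single design parameter; each real evaluation then narrows a bracket known to contain the maximum, after which some comparisons between candidates can be resolved internally, without evoking the ESS at all. The bracketing scheme and the stand-in evaluation function are illustrative choices, not the actual ASPEN PLUS interface.

def make_bracketer(lo: float, hi: float, expensive_eval):
    """Interval model of a unimodal ESS evaluation over [lo, hi].

    Each call to shrink() spends real evaluations and narrows the
    bracket known to contain the maximum; compare() answers 'which
    candidate is better' internally whenever the bracket already
    decides it, returning None when a real evaluation is still needed.
    """
    cache: dict[float, float] = {}

    def probe(x: float) -> float:
        if x not in cache:
            cache[x] = expensive_eval(x)   # the only costly operation
        return cache[x]

    def shrink():
        nonlocal lo, hi
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if probe(m1) < probe(m2):
            lo = m1                        # maximum lies right of m1
        else:
            hi = m2                        # maximum lies left of m2

    def compare(a: float, b: float):
        # Both candidates on the same side of the bracket: the one
        # nearer the bracket is better, by unimodality.
        if a < lo and b < lo:
            return a if a > b else b
        if a > hi and b > hi:
            return a if a < b else b
        return None                        # must fall back on the ESS

    return probe, shrink, compare

# Hypothetical stand-in for an ASPEN PLUS-like evaluation (peak at 6).
probe, shrink, compare = make_bracketer(0.0, 10.0,
                                        lambda x: -(x - 6.0) ** 2)
for _ in range(5):
    shrink()
print(compare(1.0, 2.0))   # resolved internally: 2.0 is better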
Soar/Mathematica (Dhiraj Pathak) We are attempting to get Soar to be able to use Mathematica [12]. This differs from the systems described above in requiring a relatively long sequence of interactions between the agent and Mathematica, each fairly small, to obtain a useful result. Thus it provides some experience along a major dimension of variation of ESS systems. Formulate-subtask must produce a computational plan, and the other capabilities of create-input, convert-output and interpret-result can be conceptually lumped together into the implementation process for the plan. Except when the agent is quite expert, the initial plans developed by formulate-subtask will be quite incomplete, because the agent will not know enough about how to do in Mathematica what it wants to do mathematically to lay it all out in advance -this is what it means to do experimental programming. So interpret-result will end up being a fairly intelligent diagnostic process (unlike the database case). One issue that has shown up prominently in the work with Mathematica can be called the transduction fallacy -the belief that the only capabilities involved in working with an ESS are transductions between the representations of the agent and the ESS, namely, create-input and convert-output (and indeed their most elementary functions). This arises in clear form with Mathematica because, in attempting to pose an example or trial task for Soar/Mathematica, one necessarily couches it in some mathematical notation. Then it appears that all that is to be done is to convert from this mathematical expression (which may be very different from the notation and style of Mathematica) into the notation of Mathematica. After a few tries one cannot think that any other capabilities will be involved except transductions. The fallacy, of course, is that we, the ones who are inventing demonstration tasks for Soar/Mathematica, are doing the formulate-subtask ourselves, and leaving just the transduction to Soar. Only when we give Soar a larger analysis task to do, within which the use of Mathematica arises as a potential means, does it become clear that other capabilities are involved. 2 We have seen enough examples where more than transduction is involved not to be taken in by the fallacy. Yet it seems to haunt the exploration of Soar/Mathematica in interesting ways. No matter how far back we go, in terms of Soar's task-oriented problem spaces, the initial form of the task is cast in some mathematical representation. It almost seems as if the primary task must be to convert from whatever is this internal Soar notation into Mathematica notation. We have been exploring some protocols of humans using Mathematica to see what tasks get formed early on. Many activities go on other than simply writing down mathematical expressions (in whatever notation). For instance, there is exploration of Mathematica to find out how to do something with it (so there is acquisition of capabilities through experimentation). Also, there is the formulation of the mathematical plan in essentially method-like or qualitative terms, before any expressions are created. 2 The transduction fallacy also tends to arise in working with Soar/database, where the only way one can imagine to use Soar/database is to have a user make some, perhaps high-level, request of Soar to find some information -at which point there only remains the transduction into SQL from whatever language the user used to make the request. In our experimental work with Soar/database, we use an overall task that makes Soar determine what information it needs and when it needs it.
Still, transduction between mathematical notations is an activity that always seems to loom large. This suggests asking what would happen if Soar cast all its mathematical expressions in the notation of Mathematica. After all, Mathematica (and other modern symbolic-mathematical systems) are intended to be close enough to natural human mathematical notation that humans are expected to come to think in its terms. How much simpler would a Soar system be if it thought entirely in terms of Mathematica? Would it really make a lot of difference (one is inclined to think so, analogizing from the human phrase about "thinking in its terms")? But could the continual learning of the notation conversion processes by means of chunking simply wash out such differences? MFS is a Soar system that supports mathematical-model formulation in the area of operations research, where one goes from a qualitative description of a problem to a set of mathematical expressions that can be given to a standard mathematical programming package for optimization, such as LINDO [11] or GAMS [2]. MFS currently formulates mixed-integer linear-programming models of production-planning problems, and is at an early stage of development (e.g., it does not yet propose and retract assumptions to perform iterative model refinement). It appears that MFS will use four functionally distinct ESSs: (1) a database that holds the massive given information about the model, e.g., background constants, generic process equations; (2) a statistical package for estimating parameters to functional forms from data; (3) an external memory to hold all the variables and mathematical expressions in the developing candidate model; and (4) the mathematical-programming package (e.g., LINDO) that is to solve the model. The use of the database and the statistical package does not appear special, in that the ESS issues are those already raised in discussing Soar/database and Soar/Mathematica. But the last two ESSs each have their special interest. The models for industrial application are often very large in terms of the numbers of variables and constraint expressions. Maintaining the full candidate model in working memory imposes a large processing burden on Soar -for the patterns in Soar's recognition memory continually survey the entire working memory to find matching data (the power of recognition memory lies in such complete and automatic surveillance). Mostly, the attention of MFS is focused on some part of the candidate model, and the remainder can be safely ignored. One way to realize this is to construct an external working memory, i.e., an ESS whose function is to hold the variables and expressions of the candidate model (or several of them, if alternative candidates are simultaneously under consideration). This is analogous to a database, except that storing changes in it and accessing information from it should be highly tailored to the structure of the model components, so that they can be highly efficient. So it probably requires a special design. An interesting aspect of this ESS is that its need is not usually foreseen in the early stages of building a Soar system (here MFS), but emerges as the system is scaled up. The issue then arises of rapid acquisition of the ESS, i.e., acquiring the basic performance capabilities to deal with the new ESS.
An essential part of this is reorganizing the existing Soar task spaces to use the ESS, which requires introducing a formulate-subtask capability that fits in with the pre-existing performance organization. If such reorganization and coupling with the new ESS is a major enterprise, it is unlikely ever to be undertaken. Thus, speed and ease of acquisition become essential. The final ESS is the optimization programming package itself. MFS will submit completed models to the package for solving. But long before this happens MFS will have candidate models that it believes should work, but which in fact contain conceptual errors of various kinds. The models are large and complex, and they cannot be created correctly on the initial try, no matter how much advance intelligent analysis occurs. Thus, the main MFS interaction with the computational package will be in a conceptual debugging loop, in which strange things happen when the model is submitted to the package. Initially, these may be like syntax errors, many of which the package itself will catch, returning appropriate error and warning messages. Later in the development, the solutions will appear to be correct, except that they will be unrealistic, which must be diagnosed both in the output from the solver and then in the model in terms of what caused the strange result. Thus, interpret-result will be the capability that will require the most development for this ESS. Draw-Soar (Gary Pelton) Draw-Soar takes in natural-language descriptions of a set of spatially structured and related objects, such as "There is a tangential line at the top of the smaller circle," and produces a set of commands to a drawing system (the ESS), such as McDraw, to draw the corresponding picture on a graphic display device. 3 The interaction with the ESS is two-way and fine-grained, and uses the command language of the ESS. The picture is composed incrementally, as enough information accumulates to define another part of the picture. The system, as currently envisioned, does not itself have visual capabilities for seeing the developing picture whole. Thus, it is also in interaction with a user, who must be its eyes (this mixes the role of observer as client and as critic, which may not be the best way). Hence, feedback comes through additional natural-language dialog. The current system is working open-loop, just beginning to process simple requests. Draw-Soar provides another point along the dimension of the time-grain of interaction, even more fine-grained than Soar/Mathematica. It can be expected that this will put a great deal of emphasis on developing the recognition skill level, since so little gets done with each command interaction. Draw-Soar provides an interesting opportunity to understand the role of simulate-ESS. The ESS, being command-language oriented, admits of a model with a very simple structure -to each command there is an effect -although the effect can be highly context dependent (a change in the geometric display, whose visual effects can be profound). Thus, the agent can develop a range of models of the ESS at different degrees of abstraction and adequacy. The quality of the final result -the final drawing in relation to the original description -depends strongly on the simulate-ESS capability, for any but the simplest pictures.
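A minimal sketch of such a command-to-effect model follows. It tracks only an abstract canvas state (object bounding boxes), which is enough to sanity-check a planned command sequence before the real drawing package is evoked. The command language and the tangency check are hypothetical, loosely patterned on the example description quoted above.

# Hypothetical command language for a McDraw-like ESS; the simulator
# tracks only an abstract canvas state (object ids and bounding boxes).
def simulate(commands):
    canvas: dict[str, tuple[float, float, float, float]] = {}
    for cmd, *args in commands:
        if cmd == "circle":                 # id, cx, cy, r
            oid, cx, cy, r = args
            canvas[oid] = (cx - r, cy - r, cx + r, cy + r)
        elif cmd == "line":                 # id, x1, y1, x2, y2
            oid, x1, y1, x2, y2 = args
            canvas[oid] = (min(x1, x2), min(y1, y2),
                           max(x1, x2), max(y1, y2))
        elif cmd == "move":                 # id, dx, dy
            oid, dx, dy = args
            x1, y1, x2, y2 = canvas[oid]    # context-dependent effect
            canvas[oid] = (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
    return canvas

# Sanity check before evoking the real ESS: is the line tangent to
# the top of the circle, as the description demanded?
plan = [("circle", "c1", 50, 50, 10),
        ("line", "l1", 30, 40, 70, 40)]
state = simulate(plan)
print(state["l1"][1] == state["c1"][1])    # True: predicted tangency holds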
So Draw-Soar should be quite revealing in this respect.

Strictly speaking, the three-way interaction between Draw-Soar, its drawing-package ESS, and a human user, who sees the output of the drawing package and has two-way communication with Draw-Soar, is outside the paradigm we set ourselves for this paper, namely, the single-agent multiple-ESS situation. But it provides the opportunity to explore a number of issues in the acquisition of capabilities. The communication channel between the user and Draw-Soar is in natural language, a type of knowledge source that is important to learn to exploit. But the arrangement can be used in many ways. The user could initially describe figures in terms rather close to the command language of the drawing package, and then gradually move toward freer descriptions as Draw-Soar's capabilities increase. The user could also instruct Draw-Soar directly about the drawing package and how to use it. Arrangements using the visual feedback from the user to Draw-Soar could be set up for Draw-Soar to learn by experimentation or by observing the user doing things directly. These variations need not stress the natural-language capability, employing only a small range of sentences; rather, they exploit the flexibility of natural language to express different sorts of situations.

EFH-Soar: Learning from Electric Field Hockey (Jill Lehman)

The final Soar-ESS system arises from an analysis of humans using interactive educational systems. The construction of the system has not yet commenced; work is still focused on psychological aspects. However, it raises an additional ESS issue that is worth mentioning. The scientific problem is to discover what human students learn about some subject matter by the use of an interactive computer microworld that behaves according to the subject-matter field and is structured so that students can explore the subject matter via the microworld. Many such microworlds are being developed in the application of computers to education. The microworld under investigation is Electric Field Hockey, a computer game built to educate about the physics of electric fields. The student places positive and negative charges in a rectangular arena, creating an electric field; then a charged particle (the puck) is released at one end, the aim being to have it travel into the goal box at the other end. In more complex variations of the task, barriers are placed in the arena, so the puck has to make its way around or between the barriers. If the student understands how electric fields are determined by the distribution of charges, then he or she can place charges so as to win the game. The Soar system is to be a simulation of a human student. Thus, it is a single agent (EFH-Soar) with a single ESS (the Electric Field Hockey computer game), but the agent here is to model the student, so we can discover how humans learn in such situations. The interaction is again at the fine-grained end of the scale - students often operate in a highly interactive way, incrementally pushing around the charges already placed in the arena. The interesting aspect from the agent-ESS viewpoint is the nature of the learning required to be successful. The student (EFH-Soar) should be acquiring general knowledge about electric fields that is not intertwined with the Electric Field Hockey ESS and its details. It should be the acquisition of this subject-matter knowledge that permits the student to be successful at the game.
All the learning issues we have raised - the acquisition capabilities in Figure 4 - are focused on acquiring a skill for dealing with the ESS, i.e., for dealing with Electric Field Hockey. For Electric Field Hockey, acquisition must go via an outer loop through knowledge that is independent of the ESS, and then back to developing skill in using the ESS. The claim above is not ours. Rather, it is implicit in the use of interactive microworlds for educational purposes. Our objective is to explore this claim and discover the extent to which it holds. Along the way some interesting light may be shed on what is and can be acquired in the agent-ESS situation more generally.

Summary of the issues

The issues raised about the control of ESSs by these seven Soar efforts are in many ways a congeries. The systems themselves have been generated for independent reasons, and their involvement with ESSs has been dictated by their inner demands, not by any concern for how to conceptualize agent-ESS research issues. Nevertheless, we can attempt some summary of the issues that have arisen in this section.

1. Simulate-ESS shows prominently in CPD-Soar, Soar/database and Draw-Soar.

2. Operate-software-system was entirely absent, because our agent-ESS systems have not matured to where this capability becomes a necessity.

3. The pervasiveness of learning. Learning capabilities show up essentially everywhere. Figure 5 also reveals this, both by the multiple skill levels that are primary to each system, which implies movement between levels, and in the different knowledge sources that play a role, implying acquisition. In the case of immediate performance, automatic skill acquisition for create-input and convert-output already plays an important role in Soar/IBDE and Soar/database, and will in all other systems as well. Learning simulate-ESS is the heart of CPD-Soar with Interval-Soar. The MFS situation shows the need to acquire the four performance capabilities for a new ESS (an external working memory). We could have used Soar/Mathematica to make the same point, namely, the need to extract quickly from an agent a computational function being performed internally, thenceforth to be provided by an ESS. The acquisition of performance capabilities by drawing knowledge from the full range of available sources (Figure 4) is lying just below the surface in Draw-Soar and EFH-Soar.

4. A few emerging issues. Not many sharp research issues have emerged yet, but perhaps two are worthy of note. The first is the transduction fallacy, namely, the view that the central problem of dealing with ESSs lies in create-input, convert-output and the operating system that mediates between agent and ESSs. So long as this view guides the interpretation of the essential nature of agent-ESS systems, the necessary research to construct effective agents for ESSs will not occur on the more intelligence-demanding capabilities of formulate-task, interpret-result and all the varieties of acquisition. The second issue is the possibility of the agent coming to think in terms of its ESSs, that is, to migrate the representations used by the ESSs back into the formulation of its own tasks. Then much of what has to happen in create-input and convert-output disappears. More exciting, the agent and the ESSs become, so to speak, impedance matched, so that the formulation of subtasks becomes more likely of success. How this actually works out is all future research, but it is clearly an issue of general import for agent-ESS systems.
Agent-ESS Systems as a Component of Software Technology

The effort to develop agent-ESS systems is important for AI as a relatively unexplored area of intelligent action that is saturated with learning considerations. But its major contribution may ultimately be to software engineering, in making the power of computer software more easily accessible in the service of computational tasks. The type of system under discussion - a single agent with multiple ESSs - has an immense scope for application. This is indicated already by the collection of examples described above. Each example epitomizes an entire class of software systems of applied interest. From the large-scale collection of tools (represented by Soar/IBDE), to the routine intelligent use of databases (represented by Soar/database), to the use of highly interactive systems (represented by Draw-Soar) - each Soar-ESS system points to a population of application systems. That agent-ESS arrangements have important application potential justifies developing the requisite scientific knowledge base, even if some of the capabilities seem distant from immediate application.

Viewed generally, agent-ESS systems belong to the class of software systems that make the software system smarter to improve system effectiveness and software productivity. With the exception of a relatively small community at the interface between software engineering and AI [9], this tactic has not been widely pursued to date. Some areas, however, should at least be noted. One relevant area is that of intelligent interfaces in Human-Computer Interaction [6]. Here, the attempt is to develop human-agent-ESS systems, rather than just agent-ESS systems - where a human uses a collection of ESSs via an interface that is an agent, i.e., an intelligent interface. This places most of the intelligence of the total system in the human and casts the agent in the role of aide, guide or facilitator. As noted earlier, this is the form a mature Soar/IBDE would undoubtedly take. Research in intelligent interfaces complements the agent-ESS paradigm. In the intelligent-interface work, the focus is on the interaction of an agent with the human user - how to make that communication intelligent. In the agent-ESS work, the focus is on the interaction of an agent with the software systems to be used. Both of these types of interactions need to be understood and developed into effective technologies, so they become available as options in the software engineering of large systems.

A second area of potential relevance is that of distributed artificial intelligence [5], which is concerned with collections of cooperative agents. Again, we see that the basic situation differs from that of agent-ESSs. At the center of attention in distributed AI is agent-agent communication and collaboration. Issues of negotiation, contracting, division of labor, inconsistency of knowledge, diversity of goals, etc., become central. Some of these may have a pale reflection in the agent-ESS situation, but basically the passive nature of the ESS removes these issues from center stage. Instead, the focus becomes coping with the input/output representation of the ESS, formulating the agent's task in terms that permit the ESS to provide help, acquiring models of the ESS, etc., as described throughout this paper. Distributed AI is attacking a more complex set of issues. A mature agent-ESS technology would probably contribute to the substrate on which eventually to build useful distributed AI systems.
But there does not seem to be much immediate connection in the other direction, from research in distributed AI to agent-ESS systems.

The agent-ESS situation should be viewed as one strand in an expanding conception of software technology - a tool in the total kit of techniques for engineering software systems. Actually, it adds at least two tools, and contributes to a third. The first relates to the aim of software-technology research to reduce the amount of effort required to specify some computation to be performed. The evolution has been from requiring the user to be familiar with all the details of the hardware and software implementation, i.e., machine code, to building in increasing amounts of abstraction and having compiler-like software bridge the gap back to the fully specified software system that executes the application. When taken to the limit, the scenario for completely automatic programming has users specifying computations exclusively in terms natural to their application domain. In the long run, an intelligent agent equipped with the capabilities we have described provides an additional radical strategy for moving toward this goal - to wit, having the user deal with agent-ESS systems in abstract terms, because the agent deals with the concrete details of the ESSs that are the applications. Functionally, this casts the agent in the role of the "compiler-like" software, except that the agent engages in quite different operations, namely, the array of performance and acquisition capabilities we have outlined in this paper, which results in being able to adapt existing software to new uses. This, of course, does not substitute for compilation-like techniques (nor for interpretive ones either). Rather, it adds a third tool to the software-engineering toolkit, which will be the preferred technique in many cases.

The second role relates to reusability. The power of existing software systems, which otherwise might have to be reimplemented or at least integrated by a programmer, can be made accessible through an agent-ESS arrangement. The reach of this technique - how inhospitable an existing collection of ESSs can be and still be made useful - would seem to depend strongly on the acquisition capabilities of the agent. The whole point of this tool is to shift to the agent the burden currently borne by the human system programmer, who must normally refurbish the software system through extensive efforts to understand it and reprogram it.

The third (contributory) role relates to the potential of intelligent interfaces, mentioned above, to become an integral part of large software systems. Those efforts, as noted, focus on human-agent communication. But they will only become effective for real applications if the agent-ESS side of the systems is equally well developed. The area of intelligent interfaces is not itself likely to provide this development, because the research issues that are central (and properly so) to human-agent communication are quite different from those of agent-ESS operation. So there is a role for an autonomous agent-ESS development, with the expectation that it will provide the other half of what is needed for intelligent interfaces to be useful.

It is worth noting, finally, that ESSs and collections of them can comprise large software systems. The issues for agent-ESS systems do differ for simple systems versus complex systems.
That can already be seen by comparing Soar/database with Soar/IBDE, which might be taken as representing simple and mid-range points along the dimension of system complexity and size. But both these agent-ESS systems are feasible, and both fit within the conceptual framework we have outlined in this paper. Thus, as tools for software engineering, agent-ESSs should be viewed as potential contributors to the large-system end of the software-system spectrum, which is where software engineering is most in need of development.

Part of the feasibility argument is that coupling an agent to existing software is a constrained problem: an agent interacts with application systems through their defined input/output representations (in contradistinction to the well-known difficulties of interfacing separate language and application modules). Thus some acquisition capabilities can already be approached in at least elementary form (Draw-Soar provides the clearest example). It is also relevant to the feasibility of ESS research that ESSs form a highly specialized class of systems. Although some capabilities benefit little from the specialization (natural-language comprehension would seem to be an example), most of the performance and acquisition capabilities will be much simpler for ESSs than for general physical, chemical and biological systems. Software systems, being constructed from sequential programming languages and discrete data structures, have relatively simple structure with good abstractions. The beneficial effects of these simplifications are readily seen in the current Soar-ESS systems, e.g., Soar/database and Draw-Soar.

In sum, our own reaction to the requirement for a vast array of highly sophisticated capabilities is that it makes clear that agent-ESS systems are more than just getting the agents and ESSs hooked up at the software level with appropriate data-conversion software (the transduction fallacy). We see the array as providing a map for where we have to go. Our research approach will be composed of the following activities:

1. Empirical exploration: The simultaneous development of diverse systems (such as the collection described above) that reveal the different capabilities needed and provide specific enough contexts to construct instances of such capabilities. These systems, which are independently motivated applications of Soar, also provide the test situations to determine whether we have developed an effective agent for controlling ESSs.

2. Focused search: The selection for development of new systems that focus on specific capabilities that need exploration and that fill out our total picture of the full array of capabilities.

3. Human capabilities: The analysis of how humans perform the same tasks using ESSs as do our Soar systems, in order to obtain clues to the additional capabilities and knowledge that humans have, of which we are currently unsuspecting. Such studies also provide benchmarks against which to calibrate agent performance.

4. Generalized capabilities: The recasting of the specific instances of the capabilities as general abilities that can exist in Soar at all times and be available in any Soar system that needs to make use of ESSs. Such generic capabilities become the starting points for the development of specialized capabilities for particular ESSs or classes of ESSs. Progress on this research activity awaits seeing diverse, multiple instances of the different capabilities, so common structure and operations can be discerned.

With this agenda we expect no difficulty in determining whether or not the research is making progress.
It is relatively unambiguous whether a given agent-ESS system is adequate - either the agent can use the ESSs to help it do its tasks or it can't. Performance metrics are sometimes helpful in evaluating an individual agent-ESS system. How skilled is the agent in using this ESS? Or, to rephrase it, how large a fraction of the agent's effort goes into operating the ESS? Suppose a Soar system uses a database as part of its operation (by having the capabilities of Soar/database). If 80% of its time is occupied by SQL programming (create-input), the agent-ESS performance is pretty poor. In some cases, the norms for performance come from analyzing how fast the agent-ESS component must be so that the total system can perform the total task satisfactorily. In other cases, human performance provides revealing comparisons. Basically, however, the evaluation of an individual agent-ESS situation is simply one of adequacy.

The important metric for evaluating research in this area is the expansion of scope. What agent-ESS situations can be handled adequately now, and how is this set growing? Draw-Soar can, say, do simple drawings - can it now produce drawings of the complexity of figures in publications? Soar/database can, say, use relational databases employing SQL - can it now acquire a database that uses some other query language, not necessarily more complex than SQL but just different? Thus, progress is to be gauged by a sequence of challenge tasks, each posed to force a substantial, but attainable, increment of development of a given agent-ESS system (or class of them). As this sequence evolves, these challenge tasks come to include real applications, providing additional calibration and measures of success.
16,591.4
1993-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Applications of machine learning in pine nuts classification

Pine nuts are not only an important agent of pine reproduction and afforestation, but also a commonly consumed nut with high nutritive value. However, it is difficult to distinguish among pine nuts due to the morphological similarity among species. Therefore, it is important to improve the quality of pine nuts and solve the adulteration problem quickly and non-destructively. In this study, seven pine nuts (Pinus bungeana, Pinus yunnanensis, Pinus thunbergii, Pinus armandii, Pinus massoniana, Pinus elliottii and Pinus taiwanensis) were used as study species. 210 near-infrared (NIR) spectra were collected from the seven species of pine nuts, and five machine learning methods (Decision Tree (DT), Random Forest (RF), Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Naive Bayes (NB)) were used to identify the species. 303 images were used to collect morphological data to construct classification models based on five convolutional neural network (CNN) models (VGG16, VGG19, Xception, InceptionV3 and ResNet50). The experimental results for NIR spectroscopy show that the best classification model is MLP, with an accuracy close to 0.99. The experimental results for images show that the best classification model is InceptionV3, with an accuracy close to 0.964. Four important waveband ranges, 951-957 nm, 1,147-1,154 nm, 1,907-1,927 nm and 2,227-2,254 nm, were found to be highly related to the classification of pine nuts. This study shows that machine learning is effective for the classification of pine nuts, providing solutions and scientific methods for rapid, non-destructive and accurate classification of different species of pine nuts.

NIR spectroscopy has been applied in many agricultural fields, including research into wheat 17, soybean 18, cowpea 19 and rice 12 production. So far, there are few reports on the application of NIR spectroscopy in forestry and pine nut research. Specifically, Tigabu et al. 20 collected visible-NIR spectral data of Pinus sylvestris nuts in different areas and preprocessed the spectral data by means of Multiplicative Scatter Correction (MSC). Nut-source classification models were then constructed through Soft Independent Modelling of Class Analogy (SIMCA) and Partial Least Squares Discriminant Analysis (PLS-DA). Loewe et al. 21 collected NIR spectral data of Mediterranean Pinus pinea from Chilean plantations for classification. Moscetti et al. 22 collected the NIR spectral data of the nuts of P. pinea and Pinus sibirica in different regions and established a spectral classification model by using PLS-DA and Interval PLS-DA (IPLS-DA) methods. However, the effects of other classification models still need to be explored further on more species of pine nuts. Machine learning based on images has been successfully applied to rice pest identification 23, Dendrolimus punctatus Walker damage detection 24 and other agricultural and forestry fields. Deep learning, a type of machine learning, uses hierarchical analysis and multilevel calculation to obtain results. Deep convolutional neural networks (CNNs) have been successfully applied in image recognition for applications such as tomato pest recognition 25 and fish image recognition 26. Moscetti et al. 22 collected image data of the nuts of P. pinea and P. sibirica in different regions, carried out feature extraction, obtained 10 features based on the image data, and used these features to construct an image-based classification model.
Although the feasibility of pine nut classification has been proved based on manually extracted image features, automatic classification models are still worthy of further research on more species of pine nuts. Therefore, the use of modern computer technology to classify pine nuts greatly promotes the research of non-destructive, rapid and accurate classification of pine nuts. In this study, machine learning technology is adopted, and the application potential of machine learning in pine nut classification is verified. The contributions of the current work are: (1) molecular markers were used to identify pine nut species; (2) NIR spectroscopy and images of 7 pine nuts (two kinds of edible pine nuts (Pinus bungeana and Pinus armandii) and five common species (Pinus yunnanensis, Pinus thunbergii, Pinus massoniana, Pinus elliottii and Pinus taiwanensis)) were collected; (3) NIR spectroscopy uses five machine learning methods for classification, while image recognition uses five CNN models. This study verifies the potential of machine learning in pine nut classification and provides a practical method for faster, non-destructive and accurate identification of pine nut species.

Results

Molecular markers. The assembled ITS2 and rbcL sequences were used as molecular markers by comparison against the GenBank database (https://www.ncbi.nlm.nih.gov/search/all/?term=blast). Table 1 shows that the ITS2 sequence length ranges from 477-482 bp, while the rbcL gene length ranges from 677-720 bp (Table 2). The GenBank accession numbers are OK274058-OK274066 and OK271114-OK271122. The results show that P. massoniana, P. armandii, P. thunbergii and P. bungeana were recognized, while P. taiwanensis (synonym: Pinus hwangshanensis) was not. No species matching the ITS2 gene sequences of P. yunnanensis and P. elliottii were found in GenBank. It is evident that ITS2 and rbcL are suitable molecular markers for the species recognition of some pine nuts, and that molecular analyses are limited by the data publicly available in GenBank. The species labels were then confirmed in consultation with the Kunming Institute of Botany, Chinese Academy of Sciences, to ensure the reliability and authenticity of the pine nut species.

Classification model based on NIR spectral data. The collected pine nut NIR spectra were analyzed and are represented in Fig. 1. It is apparent from all original NIR spectra (Fig. 1a) that the amplitude, peaks and troughs of the NIR spectra of the seven pine nuts show similar changes. Among them, the values for P. armandii are at a higher position (indicating the highest absorbance) across the whole range, and the values for P. massoniana are at a lower position. The normalized NIR spectra (Fig. 1b) show that the NIR spectrum of each pine nut is more distinct after normalization, and the changes between the pine nut values can be observed more clearly. Among them, P. armandii and P. bungeana are highly mixed in the range of 9,000-4,000 cm−1 (1,111-2,500 nm). Ten independent analyses were carried out on normalized and non-normalized NIR spectral data using the five traditional machine learning models, i.e., Decision Tree (DT), Random Forest (RF), Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Naive Bayes (NB) (Table 3). It is evident from Table 3 that the classification of pine nuts using these models is effective. When the data are not normalized, the accuracy of the DT and RF classification models is greater than 0.83.
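A minimal sketch of this protocol in Python with scikit-learn, assuming X holds the 210 spectra (2,335 bands each) and y the seven species labels; the random arrays below stand in for the real data, and all models use library defaults rather than any tuning the authors may have done:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((210, 2335))            # placeholder spectra, not the real data
y = rng.integers(0, 7, size=210)       # placeholder labels for 7 species

models = {
    "DT": DecisionTreeClassifier,
    "RF": RandomForestClassifier,
    "MLP": lambda: MLPClassifier(max_iter=1000),
    "SVM": SVC,
    "NB": GaussianNB,
}

for normalize in (False, True):
    Xp = MinMaxScaler().fit_transform(X) if normalize else X
    for name, make in models.items():
        accs = []
        for seed in range(10):         # ten independent analyses
            Xtr, Xva, ytr, yva = train_test_split(
                Xp, y, test_size=0.2, random_state=seed, stratify=y)
            clf = make().fit(Xtr, ytr)
            accs.append(accuracy_score(yva, clf.predict(Xva)))
        print(f"normalized={normalize} {name}: mean acc={np.mean(accs):.3f}")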
For normalized data, the classification accuracy of the five models is > 0.80, with MLP and SVM providing an accuracy of > 0.93. With pre-processing of the data, the performance of the MLP and SVM models is greatly improved: the accuracy of the MLP model reaches 0.99, while that of the SVM model reaches 0.94. Overall, these results show that the RF model is a better classification method when the data are not normalized, while the MLP model is the best for normalized data. The precision (Pre) and F1-score (F1) are presented in Table 4 (non-normalized data) and Table 5 (normalized data). In Table 4, the precision and F1-score of P. armandii and P. bungeana are higher, and the precision of P. bungeana is the highest, reaching 0.97. However, the precision and F1-score of P. taiwanensis and P. massoniana are quite low, with precision scores of 18% and 22%, respectively. In Fig. 1a, the distinction between P. armandii and P. bungeana is clear, while P. taiwanensis and P. massoniana are less distinct and thus more difficult to classify. However, Table 5 shows that the precision and F1-scores of the seven pine nut species are greatly improved after normalization. This indicates that data normalization is a necessary step for spectral data processing.

Classification model based on image data. Three pre-processing methods were run to produce the datasets image_clip (clipped images), image_trans (transformed images) and image_gray (grayscale transformed images). The image_clip data is used to explore the results of the deep learning models on the original data; image_trans and image_gray are obtained by transforming image_clip. The VGG16, VGG19, Xception, ResNet50 and InceptionV3 models were selected, trained for 100 epochs, with accuracy and loss used as evaluation indicators. Figures 2, 3 and 4 present the accuracy and loss values of the five models during training and validation. From these figures, Xception and InceptionV3 have the best performance, with the highest accuracy and lowest loss compared to the VGG16, VGG19 and ResNet50 models. Additionally, among the three pre-processing methods, image_trans outperforms image_gray and image_clip. Therefore, the Xception and InceptionV3 models are best suited for image-based classification of pine nuts, and images should be transformed but not set to grayscale (Table 6).

Discussion

Previous studies have shown that the genus Pinus originated in the early Cretaceous (116-83 Mya) and diverged into two subgenera, Pinus (P. massoniana, P. thunbergii, P. yunnanensis, P. taiwanensis and P. elliottii, etc.) and Strobus (P. armandii and P. bungeana, etc.) 2,27. During its long evolutionary history, it may have experienced many events such as plate movement, sea-land transition and climate changes 2,28,29. The chemical composition of plant organs is the result of the interaction between plants and the environment in the long process of evolution 30-32. Our results suggested that the species P. armandii and P. bungeana of subgenus Strobus have higher bands in the region 9,000-4,000 cm−1 (1,111-2,500 nm) than the other five species of subgenus Pinus (Fig. 1). These bands were found to be associated with proteins, amino acids, moisture, lipids and carbohydrates in previous studies 20,22.
Notably, our results also showed that three sensitive bands (1,147-1,154 nm, 1,907-1,927 nm and 2,227-2,254 nm) in this region (1,111-2,500 nm) have a great influence on model accuracy, based on the sliding window method (Fig. 1). Unlike the subgenus Pinus species, the species P. armandii and P. bungeana of subgenus Strobus are mainly distributed in Northern China (Table S1). The differences in some substances could be caused by geographical distribution and environmental conditions such as altitude, average annual temperature, soil characteristics, precipitation and sunshine 22. Compared with previous studies based on SVM, RF and PLS-DA methods in seed classification 12,18, our results showed that the MLP model presented excellent performance, which could be explained by the collected NIR spectra differing in their sensitivity to the models due to different chemical components. We also found some morphological differences between the two subgenera in the pine nut images. The seeds of subgenus Strobus probably have a smoother shape and texture than those of subgenus Pinus (Fig. 7), which would be conducive to feature extraction by the machine learning models. Previous studies have shown that the PLS-DA and IPLS-DA models achieved good results in recognizing multiple varieties of two species 22. However, our results suggested that the InceptionV3 model performed best on the pine nut images of seven species, with the fastest convergence speed and highest accuracy. A similar model has been successfully used to diagnose nutrient deficiencies in rice 33 and to classify multiple weed species 34. The different recognition accuracy of the various models may be related to the morphological features (shape, color and texture) of the nuts between datasets.

The three recognition methods - molecular markers, NIR and images - have different advantages (Fig. 5). In terms of accuracy, molecular markers have higher recognition rates than NIR and images. However, molecular labeling takes a long time and is limited by experimental equipment and public reference databases. In terms of cost, image analysis may be better, because it is convenient, fast and free from environmental constraints, but this method requires a large number of images and has a lower recognition rate. In terms of performance, NIR spectroscopy may be better due to its higher recognition rate and the smaller amount of data generated, but it is costly and requires special devices. In the future, we would take advantage of an ensemble learning approach by merging molecular, NIR and image features for more species.

Table 6. Precision, F1-scores and accuracy of the three pre-processing methods. (a) image_clip, the clipped images; (b) image_trans, the transformed images; (c) image_gray, the transformed grayscale images.

Spectral data acquisition and pre-processing. The NIR spectra were acquired using an Antaris Fourier Transform NIR spectrometer (Thermo Fisher Scientific, Massachusetts, USA) equipped with an InGaAs detector with a diffuse integrating sphere, a 7.78 cm quartz sampling cup and a sample rotary table, within the range of 12,800 to 3,800 cm−1 (781-2,632 nm) at a resolution of 8 cm−1. Each sample was scanned 48 times, and 2,335 bands were obtained. The data were transformed using log(1/R) to represent absorbance. The NIR spectra were normalized using a min-max normalization method to eliminate the adverse effects caused by outliers.
The original data were normalized to the range of 0 to 1 using Eq. (1):

x_norm = (x - min(x)) / (max(x) - min(x))  (1)

where x represents the absorbance values, and min(x) and max(x) represent the lowest and highest absorbance values, respectively.

Image acquisition. The pine nut images were captured using a LEICA EZ4 microscope with a white background and eightfold magnification, through a Huawei Mate 30 mobile phone with a 40 MP ultra-sensitive camera (wide angle, f/1.8) supporting auto and manual focus. The shooting angle was set to 90°, the height was 50 cm, and 52 images were taken for each species of pine nut.

Image pre-processing. During the image capturing process, irregularities arise. These include size variation of the pine nuts, inconsistent positions, and differences in color appearance, all of which affect the recognition models and the accuracy of classification. Thus, image pre-processing for standardization involved the following two steps. (1) Edge detection and clipping: the edge position of the pine nuts was detected with the Sobel method on the OpenCV platform. Once the top, bottom, left and right vertices of the seed were defined, the image was cropped through a matrix frame connecting the four vertices (Fig. 6). In order to maintain a uniform image background (Fig. 6d), further manual cutting was sometimes necessary (Fig. 6e). (2) Data augmentation and image grayscale: the clipped images were oriented using the 'flip' and 'resize' functions in OpenCV. Formula (2), implemented by OpenCV's color conversion function CV_BGR2GRAY, was used to transform these aligned images into grayscale images (Fig. 7):

Gray = 0.299 R + 0.587 G + 0.114 B  (2)

Structural design of the pine nut classification model. In order to further study the pine nut classification model, two experimental approaches were employed (Fig. 8). The first approach used traditional machine learning methods, i.e., DT, RF, MLP, SVM and NB, to classify nuts based on NIR spectroscopy. The classification model based on NIR spectra includes five steps (Fig. 8a). Data were first prepared and then divided into a training set and a validation set according to the ratio of 8:2. The DT, RF, MLP, SVM and NB learning methods were then used to establish classification models. Following training and validation, the accuracy (Acc), Pre and F1 were selected as performance evaluation indicators of each classification model. In the second approach, five CNN models (VGG16, VGG19, Xception, InceptionV3 and ResNet50) were constructed and trained to classify the images of pine nuts (Fig. 8b). First, since the original images in the dataset were of different sizes, they were pre-processed and cut to 224 × 224 pixels before the experiment. Second, the pine nut images were divided into a training set and a validation set according to the ratio of 8:2. Then, the VGG16, VGG19, Xception, ResNet50 and InceptionV3 models were loaded on the experimental platform for training and validation. The epochs were set to 100, the Stochastic Gradient Descent (SGD) optimization method was adopted, and the initial learning rate was set to 0.005. The learning rate changed over the training epochs, with a decay of 1e-6 per epoch, and the momentum parameter was set to 0.9. The loss function was sparse_categorical_crossentropy, and the activation function was Rectified Linear Units (ReLU). Finally, Acc, Pre and F1 were selected for model evaluation.
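A minimal sketch of the second approach's training configuration as described above (InceptionV3 at 224 × 224, SGD with learning rate 0.005, decay 1e-6, momentum 0.9, sparse_categorical_crossentropy), written against the TensorFlow/Keras 2.6 API the authors cite; the dataset directories are placeholders, and training from random weights (weights=None) is an assumption:

import tensorflow as tf

NUM_CLASSES = 7  # seven pine nut species

base = tf.keras.applications.InceptionV3(
    include_top=False, weights=None, input_shape=(224, 224, 3), pooling="avg")
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

model.compile(
    # The `decay` argument follows the TF/Keras 2.6 optimizer API the
    # authors cite; newer releases schedule the learning rate differently.
    optimizer=tf.keras.optimizers.SGD(
        learning_rate=0.005, decay=1e-6, momentum=0.9),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])

# Placeholder paths: the authors split their images 8:2 into train/validation.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "pine_nut_images/train", image_size=(224, 224))
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "pine_nut_images/val", image_size=(224, 224))

model.fit(train_ds, validation_data=val_ds, epochs=100)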
These two experimental approaches were designed to compare and analyze the performance of the different models, in order to evaluate which would best serve future research on pine nut classification. The CNN models were built using the Python libraries Keras-nightly 2.6.0, TensorFlow-nightly-GPU 2.6.0 and Scikit-learn 0.24.2, run in Python v.3.7.
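The two pre-processing steps described earlier (Sobel-based edge detection and clipping, then grayscale conversion per Formula (2)) can likewise be sketched with OpenCV; the file paths and the edge threshold below are illustrative assumptions, not the authors' exact settings:

import cv2
import numpy as np

img = cv2.imread("pine_nut.jpg")                      # placeholder path

# Sobel gradients locate the seed edges against the white background.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# A bounding box over strong edge pixels approximates the four vertices
# (top, bottom, left, right) used to frame the crop.
ys, xs = np.where(edges > 60)                         # illustrative threshold
clip = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# Resize to the 224 x 224 input size and convert to grayscale (Formula 2:
# Gray = 0.299 R + 0.587 G + 0.114 B, as implemented by COLOR_BGR2GRAY).
clip = cv2.resize(clip, (224, 224))
clip_gray = cv2.cvtColor(clip, cv2.COLOR_BGR2GRAY)
cv2.imwrite("pine_nut_clip.png", clip)
cv2.imwrite("pine_nut_gray.png", clip_gray)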
3,813.6
2022-05-25T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Computer-generated Dot Maps as an Epidemiologic Tool: Investigating an Outbreak of Toxoplasmosis

We used computer-generated dot maps to examine the spatial distribution of 94 Toxoplasma gondii infections associated with an outbreak in British Columbia, Canada. The incidence among patients served by one water distribution system was 3.52 times that of patients served by other sources. Acute T. gondii infection among 3,812 pregnant women was associated with the incriminated distribution system.

Epidemiologists have traditionally used maps to examine the spatial distribution of disease incidence. Computer-generated dot maps have facilitated identification of case clusters (1), formulation of hypotheses about the source of infection or spatially distributed risk factors (2,3), and analysis of data. Because most populations are served by identifiable water systems, waterborne disease outbreaks lend themselves to being plotted on dot maps (1,4-6). We describe an automated address-matching and base map system in a geographic information system structure used to assess information related to an outbreak of toxoplasmosis associated with a municipal water system.
The Capital Regional District (population 321,585) is located on Vancouver Island, British Columbia, Canada. The district includes the City of Victoria, surrounding municipalities and districts, and the Gulf Islands. In March 1995, an outbreak of toxoplasmosis was suspected when 15 residents of this district were identified as having acute infection with Toxoplasma gondii. Investigation of these and subsequent cases confirmed an outbreak but identified no common food, beverage, or event source. A hand-drawn dot map of the initial 47 cases showed clustering in the Greater Victoria area. The municipal water supply system was considered a possible explanation for this spatial distribution. To examine this hypothesis, computer-based geographic mapping was used to study the distribution of all outbreak-related acute cases and data collected from a population-based serologic screening program to detect T. gondii infection in women who were or had been pregnant (7).

Classification of Cases and the Water Distribution Systems

Patients were classified as having acute, equivocal, or nonacute cases or as never infected on the basis of serologic tests performed at the Provincial Laboratory (British Columbia Centre for Disease Control) and the Toxoplasma Serology Laboratory, Research Institute, Palo Alto Medical Foundation (8-11). Cases were further classified on the basis of clinical symptoms, outbreak relatedness, patient's residence, and pregnancy status (pregnant, nonpregnant) (7). At the time of the outbreak, the Greater Victoria Water District operated two disinfection plants supplying unfiltered, chloraminated surface water to approximately 292,000 residents of the Capital Regional District. The higher-pressure Japan Gulch distribution system supplied water from the Japan Gulch Reservoir to 73,000 residents, as well as a one-way transfer of water into the other distribution system. The Humpback distribution system supplied 219,000 residents (12). District residents were further classified as receiving high, intermediate, or no exposure to water from the Humpback Reservoir.

Geographic Mapping

Geographic mapping analysis was performed by using MapInfo Version 3.0 for Windows (MapInfo Corporation, Troy, NY, 1992-94). MapInfo, which enables data containing geographic information to be placed on a map, was used for geocoding (i.e., placing markers into the database, like pins onto a map). Street address data for each record in the datafile were matched against an electronic street map of the Capital Regional District. Geographic coordinates were taken from the electronic street map of the district, which is based on the Transportation Centerline Network of the British Columbia Ministry of Transportation and Highways (13). Lotus Approach 3.0 for Windows (Applied Software Corporation, subsidiary of Lotus Development Corporation) was used to prepare and validate the data before mapping.
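For readers unfamiliar with geocoding, the address-matching step that MapInfo performed can be sketched in a few lines of Python; a minimal illustration, assuming a toy street-centerline table with address ranges per segment (all street names and coordinates are invented placeholders):

# A minimal sketch of automated address matching (geocoding): each record's
# street address is matched to a street segment from a centerline file and
# assigned interpolated coordinates along that segment.
# segment: (from_number, to_number, (x1, y1), (x2, y2))
SEGMENTS = {
    "FORT ST": [(100, 198, (0.0, 0.0), (1.0, 0.0))],
    "OAK AVE": [(201, 299, (2.0, 1.0), (2.0, 2.0))],
}

def geocode(number, street):
    """Return (x, y) for a house number by linear interpolation along
    the matching street segment, or None if no segment matches."""
    for lo, hi, (x1, y1), (x2, y2) in SEGMENTS.get(street, []):
        if lo <= number <= hi:
            t = (number - lo) / (hi - lo)     # fractional position on segment
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

print(geocode(150, "FORT ST"))   # pin for "150 Fort St" -> about (0.51, 0.0)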
On the basis of information from the Greater Victoria Water District, a street-level scale map was drawn in MapInfo to delineate the geographic area in the Capital Regional District that receives municipal drinking water from the Humpback Reservoir.

Geographic Distribution of Acute Cases

All 94 persons with outbreak-related acute cases who lived in the Capital Regional District were grouped by geographic area as being served by the Humpback Reservoir or other water sources. Incidence rates were calculated by using estimated population figures (1994). Information on residential addresses was obtained from laboratory requisitions, physicians' offices, hospital records, the Medical Services Plan of British Columbia, and direct communication with patients. For each case, the residential street address at the time the first specimen was drawn for testing for T. gondii antibodies was used to produce a computer-generated dot map. Of the 94 persons with outbreak-related acute cases who lived in the Capital Regional District, 83 (88%) lived in the area served by the Humpback Reservoir. The incidence rate of acute infection among persons residing in the area served by the Humpback Reservoir was more than three times that for areas served by other sources (RR = 3.53; 95% confidence interval [CI]: 1.88-6.63; p = 0.0003) (Figure 1).

Geographic Distribution of Women Screened during Pregnancy

Data from a population-based screening program were used to determine whether residents served by the Humpback Reservoir were more likely to have acute infection with T. gondii. Serologic screening was offered to an estimated 4,500 women living in the Capital Regional District who were pregnant between October 1, 1994, and April 30, 1995. To offer screening to as many of these women as possible, information regarding the screening program was extensively distributed to women, physicians, and the public. Serologic results were available from the Provincial Laboratory database at the British Columbia Centre for Disease Control. Residential street addresses were obtained by linking the Provincial Laboratory database with the Medical Services Plan database, using the unique personal health number assigned by the Province. A computer datafile containing the serologic results of screened pregnant women and their addresses was provided to the Capital Regional District Health Department. The data were then prepared and validated, geocoded and mapped, and statistically analyzed. Three dot maps were generated by MapInfo, showing the geographic distribution of the screened population according to their laboratory classification of never infected (i.e., no serologic evidence of immunoglobulin [Ig] G or IgM antibody to T. gondii), nonacute (i.e., typically IgG but not IgM antibody to T. gondii), and acute (i.e., serologic evidence of acute infection by a battery of tests) (7-11). Several data subsets were generated on the basis of two variables: 1) residence in the area served by the Humpback Reservoir and 2) municipality of residence. Odds ratios were then calculated by StatCalc in Epi Info (version 6.03) to test the hypothesis that living in the area served by the Humpback Reservoir was associated with infection with T. gondii. Of 3,982 laboratory records for screened women, 3,962 records were available for coding. The 3,812 women with successfully coded addresses comprise 85% of the estimated 4,500 women eligible for screening.
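The odds-ratio test described above was run in StatCalc (Epi Info); the same computation takes only a few lines of Python. A minimal sketch, assuming a 2×2 table of Humpback exposure by infection status and using the Woolf log-based 95% CI rather than StatCalc's exact interval; the counts below are illustrative placeholders consistent only with the published totals, not the study's actual exposure split:

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed/unexposed cases; c, d = exposed/unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # SE of log(OR), Woolf method
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Illustrative counts only: 36 acute and 3,558 never-infected women in total;
# the split by Humpback exposure is invented for the example.
print(odds_ratio_ci(a=31, b=5, c=2400, d=1158))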
Of the 3,812 successfully coded women, 36 (0.9%) were classified according to their laboratory results as having acute infection, 216 (5.7%) as having nonacute infection, 3,558 (93.3%) as never having been infected, and 2 (0.1%) as having equivocal cases. The age distribution of the women (15 to 45 years [mean 29 years]) did not differ by infection status (women with equivocal status were excluded). Women acutely infected with T. gondii were more than three times as likely as uninfected women to live in the area served by the Humpback distribution system (odds ratio [OR] 3.05, exact 95% CI: 1.08-11.91). In contrast, the geographic distribution of women with serologic evidence of nonacute infection with T. gondii was not associated with the municipal water distribution system (OR 0.91, exact 95% CI: 0.67-1.25). Computer-generated dot maps (Figures 2, 3, 4) provided visual information, as well as data for statistical analysis. These maps are characteristic of an outbreak associated with a one-time event. The geographic distribution of pregnant women who were never infected (Figure 2) indicates that the susceptible population is similar to the distribution of the whole population. The geographic distribution of pregnant women who had nonacute cases (Figure 3) is similar to the population distribution, indicating that before the outbreak, residence was not associated with antibody. Visually comparing the geographic distribution of pregnant women with acute cases (Figure 4) with that of pregnant women who were never infected (Figure 2) suggests that most of the recently infected pregnant women lived in the area served by the Humpback Reservoir. The difference between the distributions seen in Figures 3 and 4 suggests that this event was new and unusual, rather than an ongoing or intermittent exposure that had not been recognized. Well-defined geographic areas of the Capital Regional District were classified as receiving high, intermediate, and no exposure to water from the Humpback Reservoir (exposure scores 3, 2, and 1, respectively). An analysis for linear trend in proportions was performed by using StatCalc in Epi Info to demonstrate whether a dose-response relationship was present. If water from the Humpback Reservoir was the source of infection, the rate of infection would be expected to increase with increased exposure. The Table shows the geographic distribution of women according to the estimated concentration of water from the Humpback Reservoir received at their residence. When acutely infected women were compared with seronegative women, the trend in linear proportions across the three exposure scores was significant (chi-square = 6.67; p = 0.01). In contrast, a significant linear trend in proportions was not demonstrated when women with nonacute cases were compared with seronegative women (chi-square = 1.30; p = 0.25).

Conclusions

The conclusions drawn from geographic mapping depend on the accuracy and validity of the datasets and the availability of denominator data to avoid making false associations that are a function of population densities. For the overall population, denominators were available from census data. Firm denominator data were available from the serologic screening study. With respect to accurate placement on the maps, the addresses and the delineation of the water distribution systems were critical to the evaluation.
Although extensive efforts were made to verify addresses, there was some potential for misclassifying exposure. However, any misclassification resulting from the use of the Medical Services Plan database should not have systematically biased our analyses. The Greater Victoria Water District engineers delineated the boundaries of the Humpback distribution system and ranked the zones in the Capital Regional District according to the proportion of water received from the Humpback Reservoir. As the engineers had no prior knowledge of the distribution of outbreak-related cases of toxoplasmosis, it is unlikely that the determination of these geographic boundaries was biased. A limitation of the geographic mapping analysis of cases is that rates were not adjusted for age or other covariates. However, the subsequent analyses using data from the population-based screening of pregnant women were not confounded by age. The age distribution of the women in these analyses did not differ when grouped by infection status. Computer mapping software had the advantages of 1) facilitating the verification and correct placement of addresses, 2) reducing the time required to map the location of large datasets, 3) enabling queries and statistical analysis of the data after mapping, 4) allowing several sets of mapped data to be analyzed simultaneously for potential relationships, and 5) generating printouts or overheads for presentations.
3,113.6
0001-01-01T00:00:00.000
[ "Medicine", "Biology" ]
Practical implementation of artificial intelligence algorithms in pulmonary auscultation examination

Lung auscultation is an important part of a physical examination. However, its biggest drawback is its subjectivity. The results depend on the experience and ability of the doctor to perceive and distinguish pathologies in sounds heard via a stethoscope. This paper investigates a new method of automatic sound analysis based on neural networks (NNs), which has been implemented in a system that uses an electronic stethoscope for capturing respiratory sounds. It allows the detection of auscultatory sounds in four classes: wheezes, rhonchi, and fine and coarse crackles. In a blind test, a group of 522 auscultatory sounds from 50 pediatric patients was presented, and the results provided by a group of doctors and an artificial intelligence (AI) algorithm developed by the authors were compared. The gathered data show that machine learning (ML)-based analysis is more efficient in detecting all four types of phenomena, which is reflected in high values of recall (also called sensitivity) and F1-score. Conclusions: The obtained results suggest that the implementation of automatic sound analysis based on NNs can significantly improve the efficiency of this form of examination, leading to a minimization of the number of errors made in the interpretation of auscultation sounds.

What is Known: • The auscultation performance of the average physician is low. AI solutions presented in the scientific literature are based on small databases of isolated pathological sounds (which are far from real recordings) and mainly on the leave-one-out validation method; thus they are not reliable.

What is New: • The AI learning process was based on thousands of signals from real patients, and a reliable description of the recordings was based on multiple validation by physicians and an acoustician, resulting in practical and statistical proof of the AI's high performance.

Background

Auscultation has been considered an integral part of physical examination since the time of Hippocrates. The stethoscope, introduced by Laennec [2] more than two centuries ago, was one of the first medical instruments enabling internal body structures and their functioning to be checked. The stethoscope still remains a tool that can provide potentially valuable clinical information. However, the results of such examinations are strongly subjective and cannot be shared and communicated easily, mostly because of differences in doctors' experience and perceptual abilities, which lead to differences in their assessments depending on their specialization (Hafke et al., submitted for publication). Another important issue is the inconsistent nomenclature of respiratory sounds. This problem is widely recognized [1], but to date, there is still no standardized worldwide classification of the types of phenomena appearing in the respiratory system [10]. There is both a variety of terms used for the same sound by different doctors and different sounds described by the same term. Lung sounds, as defined by Sovijarvi et al. [14], concern all respiratory sounds heard or detected over the chest wall or within the chest, including normal breathing sounds and adventitious sounds. In general, normal respiratory sound is characterized by a low noise during inspiration and is hardly audible during expiration, which is longer than inspiration [12]. The noise spectrum of normal respiratory sound (typically 50-2500 Hz) is broader over the trachea (up to 4000 Hz) [11].
Adventitious sounds are abnormalities (pathologies) superimposed on normal breathing sounds. They can be divided into two sub-classes depending on their duration: continuous (stationary) sounds - wheezes and rhonchi - and discontinuous (non-stationary) sounds - fine and coarse crackles. Wheezes are continuous tonal sounds with a frequency range from less than 100 Hz to more than 1 kHz, and a duration longer than 80 ms [8]. They are generally recognized correctly and rarely misinterpreted, which makes them probably the most easily recognized pathological sound [7]. However, as Hafke et al. (submitted for publication) showed, when describing previously recorded sounds, doctors have difficulty identifying this kind of pathology depending on the breathing phase, i.e., inspiratory wheezes were confused with expiratory wheezes and vice versa. Rhonchi are continuous, periodic, snoring-like sounds, similar to wheezes but of lower fundamental frequency (typically below 300 Hz), with a duration typically longer than 100 ms [8]. They are one of the most ambiguous classes of pathological sounds, as they are often considered to lie on the boundary between wheezes and crackles (especially of the coarse type) and thus may be mistaken for them [15]. Although many authors suggested "rhonchus" as a separate category [10], some doctors use the term "low-pitched wheeze" [6]. Because these phenomena have the features of both wheezes and crackles, they are often classified differently by respondents. As Hafke et al. showed, this is strongly dependent on the examiner's experience. Moreover, in the cited research, the advantage of pulmonologists was clearly visible: in their case, the number of correct rhonchi detections was 51.2%, while for other groups this value did not exceed 30%, which was the lowest result for all the phenomena taken into account. Finally, crackles are short, explosive sounds of a non-tonal character. They tend to appear both during inspiration and expiration. Two categories of this phenomenon have been described - fine and coarse crackles. They vary in typical length (ca. 5 ms and ca. 15 ms, respectively) and frequency (broad-band) and may appear in different respiratory system disorders [3]. This is why the proper detection and evaluation of crackles is of high importance.

Auscultation includes the evaluation of sound character, intensity, frequency, and pathological signals occurring in the breathing sound. Its subjective nature is widely recognized, which has led to a new era of developments, for instance computer-based techniques. Recordings made with electronic stethoscopes may be further analyzed by a digital system in terms of their acoustic features and, after proper signal processing, delivered to the doctor at an enhanced level of quality or even complemented by a visual representation, e.g., a spectrogram. The latter should be considered an association between an acoustical signal and its visual representation, and is beneficial to the learning and understanding of those sounds, not only for medical students [13], but also for doctors diagnosing patients. Currently, the subject of the greatest attention in the field of computer-based medicine is neural networks (NNs). NNs are a particularly fast-developing area of machine learning; they learn from examples, as humans do. A decade ago, NNs were one of many available classifiers.
Such early NNs were trained on a small set of high-level features and produced probability scores of a sample belonging to one of several predefined classes. Their popularity rose sharply when it was shown that deeper neuron structures are able to learn intermediate features from low-level representations by themselves. These intermediate features learned by the NN are much more distinctive and descriptive than hand-crafted features in many artificial intelligence (AI) tasks, including audio signal analysis and medicine. Contemporary deep neural networks (DNNs) operate directly on raw signals and are therefore able to identify and exploit all the important dependencies those signals provide. In order to do so, however, a large number of training examples must be provided. Once these initial requirements are met, the NN algorithm is able to match or even surpass human performance. This is also believed to be the best strategy for dealing with respiratory sounds. Therefore, the aim of this study was to compare the efficiency of AI and a group of five physicians in identifying respiratory sounds in four main classes of pathological signals, according to [10]: wheezes (with no differentiation into sub-classes), rhonchi, and coarse and fine crackles. Auscultation recordings The auscultation recording files were gathered from 50 visits performed by pediatricians using StethoMe® and Littmann 3200 electronic stethoscopes. All the recordings were made in the Department of Paediatric Pulmonology (Karol Jonscher University Hospital in Poznan, Poland). The subjects were chosen at random from the patients of the abovementioned hospital. The whole procedure of signal collection (recording) took 6 months. In this period, patients with different diseases (and thus different pathological sounds) were hospitalized. The decision about recording was made after auscultation by a pulmonologist working at the hospital. In general, each visit provided a set of 12 recordings, each made at a different auscultation point (Fig. 1). However, in the case of children, as in this research, it is often difficult to document breathing sounds of sufficiently high quality from such a number of auscultation points, due to children's movements, impatience, and crying, or because of other health issues. The age of the patients was within the range of 1 to 18 years (mean, 8.5; median, 8). This parameter was not taken into account by the AI, but the physicians were informed about the age of each patient. The total number of recordings analyzed from the 50 visits was 522. Study design The main goal was to investigate the accuracy of NNs in the classification of respiratory sounds in comparison with medical specialists. It must be emphasized that, in contrast to most research in the scientific literature, which has been performed on small databases or in laboratory conditions (e.g., [5, 9]), this research was based on a large number of actual auscultation recordings captured in realistic (hospital) conditions. The four abovementioned classes of auscultation phenomena (wheezes, rhonchi, and coarse and fine crackles) were chosen as the most frequently occurring and described. The nomenclature suggested by the European Respiratory Society [10] was applied in order to reduce the influence of ambiguous terminology on the final result.
Audio data gathered by the electronic stethoscopes were described by doctors in terms of the presence of pathological sounds in particular phases of the breathing cycle and locations on the chest wall. The same description was carried out by the NN. It should be stressed that, together with each recording, information about the location of the point on the chest or back at which the recording was made, as well as basic information about the sex and age of the child, the diagnosis, and accompanying diseases, was provided for every recording of a particular visit. The medical description consisted of an assessment of whether, in a given recording coming from a particular point, there were adventitious respiratory sounds from each of the four classes. Those descriptions were compared both with the NN descriptions and with the golden standard (GS). Golden standard Because there is no objective measure that provides a classification of pathological breath sounds, it was necessary to establish a point of reference, which in this research is specified as the GS. The procedure for establishing the GS consists of a few steps (Fig. 2). Five pediatricians (different from the previous ones) carried out two independent verifications of the previously described recordings; thus, each recording had a description and two independent verifications. The recordings with two positive medical verifications were automatically qualified for the GS. When the doctors' opinions were split, i.e., one positive verification and one negative, the recording was analyzed by an acoustician experienced in signal recognition. If the acoustician evaluated the description as disputable, meaning that its content could be ambiguous in terms of the acoustic parameters, the recording was forwarded to a consilium (two experienced pediatricians and one acoustician) convened to establish the medical description anew. It must be emphasized that the GS consisted of real-life recordings collected from real patients in real (hospital) situations. Many of the recordings contained additional external noise (crying, talking, stethoscope movements, etc.). To make the GS as reliable as possible, the consilium, rather than a single physician, described those cases. The descriptions from the consilium were not subjected to further verification (Fig. 2). The distribution of phenomena in the GS is given in Table 1. Both no pathology and more than one pathology were possible in a single recording; thus, the number of recordings in Table 1 is not equal to the total of 522 recordings used in the experiment. Participants Doctors The set of all GS recordings, accompanied by spectrograms and basic information about each patient, was presented to five pediatricians, who described them in terms of the occurrence of the four pathological sounds (Table 1). One description was made for each recording. NN The StethoMe AI NN architecture was based on a modified version of that proposed by Çakir et al. [4]. This is a specialized network suitable for polyphonic sound event detection. It is composed of several specialized layers of neurons, including convolutional layers, which are effective at detecting local correlations in the signal, and recurrent layers designed to capture long-time dependencies, e.g., a patient's breathing cycle and the associated recurrence of pathological sounds. The NN had been trained and validated on a set of more than 6000 real and 10,071 artificial/synthetic recordings.
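The architecture just described (convolutional layers for local correlations, recurrent layers for long-time dependencies, and frame-wise multi-label outputs) corresponds to the convolutional recurrent network (CRNN) family proposed by Çakir et al. [4]. The following is a minimal PyTorch sketch of that family, not the actual StethoMe network; all layer counts and sizes are invented for illustration.

```python
import torch
import torch.nn as nn

class CRNNSketch(nn.Module):
    """Minimal CRNN for polyphonic sound event detection (Cakir et al. style).
    Input: log-mel spectrogram of shape (batch, 1, n_mels, n_frames).
    Output: per-frame probabilities for 4 classes, shape (batch, n_frames, 4)."""
    def __init__(self, n_mels: int = 40, n_classes: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((4, 1)),   # pool over frequency only: time resolution kept
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((4, 1)),
        )
        feat = 64 * (n_mels // 16)             # channels x remaining mel bins
        self.rnn = nn.GRU(feat, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, n_classes)  # frame-wise multi-label output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.conv(x)                        # (B, 64, n_mels/16, T)
        z = z.permute(0, 3, 1, 2).flatten(2)    # (B, T, 64 * n_mels/16)
        z, _ = self.rnn(z)                      # (B, T, 128)
        return torch.sigmoid(self.head(z))      # independent class probabilities

# One 10-second recording at 100 frames/s (10 ms frames, as in the paper):
probs = CRNNSketch()(torch.randn(1, 1, 40, 1000))
print(probs.shape)  # torch.Size([1, 1000, 4]) -- a "probability raster"
```

Pooling over the frequency axis only preserves the 10 ms time resolution needed for the frame-wise probability raster described next.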
This training dataset was completely different from the GS set. Furthermore, another database was used in order to provide better noise detection. As output, the NN provides a matrix called the probability raster. In this data structure, the rows represent time, discretized into 10 ms frames, while the columns contain the probabilities of each phenomenon, changing over the frames. The probability values are then thresholded in order to obtain boolean values indicating the presence or absence of each phenomenon in each frame (Fig. 3). Results The GS was used as the point of reference (100%) for the tagging of recordings performed by the doctors and the NN. Confusion matrices could therefore be analyzed: the values of recall (the proportion of actual positives that are correctly identified as such, also called sensitivity), precision (the fraction of relevant instances among the retrieved instances), specificity (the proportion of actual negatives that are correctly identified), and the F1-score (the harmonic mean of precision and recall) were measured for the doctors' and the NN's phenomena detection in comparison with the GS. First, a chi-square test (α = 0.05) was performed to investigate whether there is a difference between the data gathered for the doctors and for the NN. The null hypothesis was rejected for all four phenomena; therefore, the results gathered for the doctors and the NN are statistically different. Detailed results are given in Table 2. The lowest F1-score was observed for coarse crackles, both for the medical and the NN descriptions. This may be partially due to the rare occurrence of coarse crackles in the analyzed database (see Table 1). Moreover, this kind of phenomenon is often confused with other types of crackles or with rhonchi (Hafke et al., submitted for publication), so its correct detection can be problematic. However, it is important to note that the NN F1-score, which reflects its performance in correct phenomena detection, is higher than that of the medical descriptions (47.1% vs. 42.8%). The highest F1-scores were obtained for rhonchi and wheezes (both continuous, "musical" sounds). Medical descriptions of rhonchi agree with the GS, as reflected in the F1-score, at 61.0%, while the NN is considerably more accurate at 72.0%. This is strong evidence of the ambiguous character of rhonchi, which results in poor human detection performance (probably caused by mistaking them for other phenomena, as evidenced by low precision and recall (sensitivity) values compared with the NN). When it comes to wheezes, despite the slightly lower values of precision and specificity noted for the NN, its final performance, expressed in the F1-score, is better than that of human tagging: 66.4% vs. 61.8%, in the NN's favor. It can also be noted that the AI-based analysis is more accurate in detecting rhonchi and wheezes. This may be because it is based mainly on the spectrograms, which accurately reflect the tonal content of a recording. The doctors' descriptions, in contrast, are based mainly on acoustic cues, with the visual representation used rather as an additional, supporting tool. This may be an important issue influencing the proper detection of pathology, especially when a phenomenon is of an ambiguous nature (e.g., rhonchi) or is accompanied by louder sounds that make it barely audible (e.g., silent wheezes).
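The thresholding and evaluation steps just described can be made concrete in a few lines. The sketch below assumes one recording, one class, and a 0.5 threshold; these values, and the toy data, are illustrative and not taken from the study.

```python
import numpy as np

def threshold_raster(raster: np.ndarray, thr: float = 0.5) -> np.ndarray:
    """raster: (n_frames, n_classes) probabilities -> boolean detections."""
    return raster >= thr

def frame_metrics(pred: np.ndarray, gold: np.ndarray) -> dict:
    """Frame-level metrics for one class, matching the definitions in the text."""
    tp = np.sum(pred & gold)
    fp = np.sum(pred & ~gold)
    fn = np.sum(~pred & gold)
    tn = np.sum(~pred & ~gold)
    recall = tp / (tp + fn) if tp + fn else 0.0        # sensitivity
    precision = tp / (tp + fp) if tp + fp else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)               # harmonic mean
    return {"recall": recall, "precision": precision,
            "specificity": specificity, "F1": f1}

# Toy example: 1000 frames, one class; gold marks a wheeze in frames 200-400.
rng = np.random.default_rng(0)
gold = np.zeros(1000, dtype=bool); gold[200:400] = True
raster = np.clip(gold * 0.8 + rng.normal(0.1, 0.1, 1000), 0, 1)
pred = threshold_raster(raster[:, None])[:, 0]
print(frame_metrics(pred, gold))
```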
The biggest differences in F1-scores, meaning a significant predominance of the new automatic system over the doctors, are observed for fine crackles: 64.6% vs. 51.1%. All of the other parameters are also higher for the NN. Generally, for each of the four phenomena, the F1-score of the NN is higher than that of the doctors, by an average of 8.4 percentage points (p.p.), which clearly indicates the advantage of the tested algorithm over the group of doctors. On average, the NN is 13 p.p. more sensitive and 4 p.p. more precise than the reference group of pediatricians. Discussion The main goal of this research was to investigate the effectiveness of pathological respiratory sound detection by both doctors and the automatic analysis system based on the NNs developed by the authors. To measure the performances, the GS was established as a set of 522 recordings taken from the respiratory systems of 50 pediatric patients and gathered during auscultation using electronic stethoscopes in real situations. Since auscultation tends to be subjective and there is no objective measure of correctness, those recordings were tagged (described) by doctors and experienced acousticians in terms of their pathological phenomena content. The recordings with consistent taggings were taken as the point of reference. The inconsistent ones were described by a consilium (two experienced pediatricians and one acoustician). Only positively verified recordings were used in the next steps of the experiment. In this way, a very reliable GS was established, which was taken as the point of reference for the evaluation and comparison of the descriptions of both the doctors and the newly developed NN. Since the statistical analysis showed that the performances of those two groups (the doctors and the NN) are significantly different, it is reasonable to state that the ML-based analysis using the NN algorithm introduced here is more efficient in detecting all four pathological phenomena (wheezes, rhonchi, and coarse and fine crackles), which is reflected in the high values of recall (sensitivity) and the F1-score. It is worth noting that the biggest difference between the performance of the doctors and the NN was observed in the case of fine crackles, where the NN clearly outperformed the doctors. Moreover, it has to be mentioned that the NN performance is also higher than that of the doctors in the case of ambiguous sounds (i.e., rhonchi), which tend to be misinterpreted or evaluated improperly in everyday medical practice. Finally, the difference between the performance of the doctors and the NN was least significant in the recognition of wheezes; however, this is only because the performance of the doctors with those signals, which are the easiest to interpret, is relatively high. Thus, the potential of the proposed solution appears considerable. It must also be emphasized that the NN algorithm was trained using thousands of recordings and taggings, which makes the results unique and reliable. Conclusions To conclude, the NN algorithms used in this experiment can be described as a very efficient tool for pathological sound detection. This is why AI may become valuable support for doctors, medical students, or care providers (including lay ones), both when it comes to diagnosis and monitoring, on the one hand, and training and education on the other. The database we built is itself a very good tool in this field.
Moreover, the AI algorithms can also be beneficial for lay people in monitoring their respiratory system at home, which makes this solution valuable in many areas, e.g., patient safety, reaction speed in case of danger, and reducing the cost of treatment. It must also be emphasized that, although there are many publications that correlate pathological sounds with particular diseases, the relationship is more complicated. Many publications show that the efficiency of physicians is very low [1, 10]; thus, the AI solution is a first step toward making auscultation more objective, with fewer incorrect identifications and thus a better correlation with diseases established by physicians. Finally, AI algorithms can also be used in other areas, such as heart disease, which makes this field even more promising, especially taking into account that this experiment was carried out in real conditions, not in a laboratory, and demonstrated the high performance of the NN.
4,695.6
2019-03-29T00:00:00.000
[ "Medicine", "Computer Science" ]
Integrated environmental management and GPS-X modelling for current and future sustainable wastewater treatment: A case study from the Middle East In the context of today's rapidly changing environmental challenges, accurately predicting the performance and efficiency of environmental management strategies is crucial. Particularly in the Middle East, where research on wastewater treatment plants (WWTPs) is notably lacking, addressing this need is imperative. This study investigates the treatment efficiency of a wastewater treatment plant and proposes various techniques to enhance its performance. Employing a case study method, we utilise the GPS-X model to forecast the plant's performance under diverse scenarios, offering solutions for future challenges. The results reveal that the current plant layout operates efficiently, with removal efficiencies for Total Suspended Solids (TSS), Chemical Oxygen Demand (COD), and Biochemical Oxygen Demand (BOD) at 98.3 %, 95.1 %, and 96.1 %, respectively. The outlet Dissolved Oxygen (DO) of 1.9 mg/L meets local wastewater reuse standards. Furthermore, the GPS-X model forecasts the plant's performance under different scenarios, suggesting the feasibility of a new layout within 20–25 years and the need for additional units after 40 years. As inflow approaches maximum design capacity, simulation results underscore the importance of utilising the full plant design and expanding it for optimal operation over 60 years. This research provides critical insights for improving WWTP performance and emphasizes the significance of strategic planning in addressing long-term environmental management challenges. Moreover, this study represents a pioneering effort in addressing critical water scarcity challenges in Jordan by exploring the potential of treated wastewater (TWW) as a sustainable solution, thus contributing to the advancement of environmental management practices in the region. Introduction Environmental challenges have recently become a global concern. The exceptional magnitude of today's environmental challenges is indisputable within the realm of serious scientific discourse. The understanding of the enormity of these challenges continues to expand [1]. These challenges encompass climate change, waste management, water management, pollution, deforestation, and degradation, all of which are interconnected and complex problems that necessitate multifaceted approaches and dynamic solutions. Consequently, it is critical to consider different sorts and sources of knowledge [2] to mitigate their impacts and ensure a sustainable future for the earth [3,4].
This research investigates water management, specifically wastewater treatment and water sources. Water management efficiency can be measured by its environmental impact. Water efficiency, wastewater treatment and reuse, and the avoidance of negative environmental impacts are all good indicators of good and sustainable management [5,6]. Water resource mismanagement, environmental and water pollution, over-exploitation of water resources, violations of sustainability regulations, and failure to respect intergenerational equity all point to inefficient water resource management [6]. In general, the water situation in the Middle East is challenging, in a region in which climate change is creating water insecurity with a projected deficit of 2 billion cubic meters by 2050 [7]. In addition, the region is consistently grappling with depleted water resources due to poor management and resource degradation, to the extent that some water sources are unsuitable for human consumption. The rapidly increasing population, food requirements, and urbanisation are further intensifying the strain on these diminishing resources. Meanwhile, global warming is exacerbating persistent and severe droughts, leading to increased competition for water resources across various sectors and even among different states, as well as contributing to agricultural difficulties [8]. Therefore, this presents a significant challenge for both the population and the economy of the region [7]. Jordan is a country that has suffered from water shortages for many years and is considered one of the most water-scarce countries in the world [9,10]. The renewable water supply in the country covers only about half of the water demand [11]. Additionally, groundwater in Jordan is being depleted faster than it is being replaced [12]. Population growth and the influx of refugees from conflict zones put a further strain on water resources, alongside changes in the amount and quality of rainfall, all of which add to the stress [12]. In addition, various sectors are experiencing significant development, including a strong manufacturing sector, an exceptional level of health care and education, and a strong information technology sector, which increases the demand for water resources, making it imperative to explore effective solutions for securing new sources of water. These sources should be applicable in industry, agriculture, and other fields [7,13,14]. In response to these challenges, this section explores potential solutions and strategies for environmental management. One of the most promising strategies for extending and conserving available water sources is the use of treated wastewater (TWW). TWW is considered one of the most crucial and effective solutions to address the water shortage in Jordan [14,15]. Wastewater treatment is defined as the process of removing pollutants and impurities (reclamation) from wastewater and changing its properties to make it more acceptable to the environment, or converting it into water that can be used in multiple applications [16][17][18]. These treatment processes occur in wastewater treatment plants (WWTPs), also known as water resource recovery facilities (WRRFs) or sewage treatment plants (STPs). In these facilities, pollutants present in the wastewater are reduced, diverted, or broken down during the treatment process.
The treated wastewater (TWW) can be used in various sectors, including agriculture, landscape irrigation, artificial recharge, and industry [19,20]. For instance, the use of TWW is one of the most effective ways to address water shortages and nutrient requirements in agricultural systems [14]. Although the use of TWW can alleviate water shortages, it must be applied in a controlled environment to reduce the health risks caused by pathogenic and toxic pollution of agricultural products, soils, and surface and groundwater. The greatest challenge is to maximise the benefits of TWW as a resource while minimising its negative impact on human health [21]. Jordan prioritises reuse activities to ensure the highest possible impact and efficiency. To achieve the greatest efficiency, it is important to conduct relevant simulations and forecasts of WWTP outcomes. Given the limited research on wastewater treatment plants (WWTPs) and the utilisation of the GPS-X model within the Middle Eastern context, especially in Jordan, there is a growing need for further exploration in this area. Therefore, this study presents several novel contributions to the field of wastewater treatment in Jordan. Firstly, it is the first known research endeavour in the country to utilise the GPS-X model for optimising wastewater treatment processes. By employing this advanced modelling tool, the study pioneers a novel approach to enhancing the efficiency and performance of wastewater treatment plants. This unique application of the GPS-X model in the Jordanian context not only fills a significant research gap but also establishes a valuable reference for future researchers and practitioners in the field. Additionally, the study addresses critical water scarcity challenges in Jordan by exploring the potential of treated wastewater (TWW) as a sustainable solution. By providing insights into the utilisation of TWW and proposing optimisation strategies using the GPS-X model, the research contributes to the advancement of environmental management practices in the region. Overall, the novelty of this study lies in its innovative use of the GPS-X model for wastewater treatment optimisation and its potential to serve as a foundational resource for future studies in Jordan and beyond. The modelling and simulation system GPS-X is a contemporary commercial computer program that accurately simulates the operation of wastewater treatment plants. It aids in the effective management of these plants, ensuring they operate under optimal conditions to achieve desirable results and specifications [22]. In the Middle East, GPS-X model analyses have been conducted to optimise wastewater treatment processes. For example, a GPS-X simulator was used to model and simulate wastewater treatment plants such as the Karbala wastewater treatment plant in Iraq [23]. The application of advanced simulation software is beneficial for cost savings and decision-making processes, as it reliably assesses TWW quality under different circumstances [24]. That research focused on improving the denitrification process, lowering phosphate concentrations, and enhancing the overall performance of WWTPs [25]. Previous studies in Jordan have not specifically referenced the use of GPS-X model investigations aimed at enhancing wastewater treatment procedures. Therefore, this study provides insightful information about the use of GPS-X modelling for optimising wastewater treatment processes in the country.
Ultimately, the main goal of this study is to examine the treatment efficiency of the WWTP and propose various techniques and scenarios to improve its efficiency using the GPS-X model. Furthermore, the research seeks to achieve several specific objectives, which include assessing the removal rates of key pollutants such as BOD, COD, and TSS, while simultaneously evaluating the performance of treatment units. Additionally, it aims to ensure that the quality of effluent meets regulatory standards and to validate the predictive capabilities of the GPS-X model by comparing simulated results with empirical data. Another objective is to forecast the performance of the plant under varying conditions, including changes in influent characteristics, population growth, wastewater inflow volume, and regulatory requirements. Furthermore, the research seeks to develop predictive models that can forecast modifications in plant performance over extended time horizons. Finally, it aims to evaluate the trade-offs between different treatment scenarios, considering their environmental, economic, and social implications. As a result, the following research questions were formulated: What is the current efficiency of the MWWTP? To what extent does the GPS-X model accurately reflect the reality of the Al-Marad WWTP? How can the best strategies for optimising long-term WWTP performance be predicted? What is the optimal scenario for achieving the highest removal efficiency at the lowest cost for the WWTP? The Al-Marad wastewater treatment plant as one of the pioneer stations in the Middle East To achieve the research objectives, this study establishes criteria for selecting WWTPs in Jordan, emphasising the importance of comprehensively understanding the situational context and unique characteristics of these facilities. In Jordan, wastewater is categorised into domestic and industrial types [26]. The treated wastewater is primarily used for restricted agricultural irrigation, and any excess treated water is discharged by gravity into valleys and dams. The selection of the Al-Marad wastewater treatment plant as the focus of this study was driven by several factors. Firstly, it stands out as a modern plant equipped with advanced technological systems, having commenced operations in 2011. Surprisingly, despite its technologically advanced nature and operational history, no prior studies have been conducted on this facility, making it an intriguing subject for investigation. Moreover, its unique location in a region of the country where agriculture is very important further underscores its relevance for researchers. Notably, the Al-Marad wastewater treatment plant was constructed using the GPS-X model, highlighting its innovative approach to wastewater management. This utilisation of advanced modelling technology aimed to optimise the plant's performance, enhance the effectiveness of its treatment units, and prolong its operational lifespan. By conducting research on this specific facility, we sought not only to ensure its continued efficiency and functionality but also to establish it as a benchmark for other wastewater treatment plants across the kingdom.
The Al-Marad WWTP utilises an activated sludge system with extended aeration, where microorganisms in the wastewater are kept in a suitable environment in the aeration basins to facilitate oxygen supply [27]. These microorganisms consume organic materials, multiply, and form activated sludge flocs, which settle in the final sedimentation basins. Excess sludge is recycled or transferred to drying beds and dewatering machines. Maintaining treatment efficiency requires balancing the BOD of incoming wastewater with the suspended organic matter concentration, governed by standards such as the F/M ratio (0.5-1), sludge age (13-20 days), and daily SVI stability tests. This paper utilises a case study approach to fulfil the research objectives. An initial step involves evaluating the treatment plant by analysing the type, volume, and characteristics of inflow wastewater. The examination further entails assessing the current treatment units within the plant and evaluating their suitability and capacity to manage the inflow volume. The methodology involves several key steps: weekly sampling for six months, adhering to Jordanian standards, to analyse and collect data at the plant's entry point, measuring parameters such as inflow, BOD, COD, TDS, TSS, pH, and temperature. Subsequently, the evaluation of WWTP efficiency is conducted by comparing the concentration of these attributes upon entry to and exit from the facility. The process also includes preparing and constructing a GPS-X model to mirror the WWTP's real conditions. Utilising the simulation feature in the GPS-X model, scenarios are tested by manipulating treatment process parameters to determine the most efficient scenario that optimises removal efficiency at the lowest cost for the WWTP. The following formulas were used to calculate the removal efficiency and the population growth. Removal efficiency (%) = ((C_in - C_out) / C_in) x 100, where C_in is the inflow concentration and C_out is the outflow concentration. Population growth: P_t = P_o (1 + i)^t, where P_t is the population after a time t, P_o is the current population, i is the annual growth rate, and t is the time in years. The plant's operational system encompasses various critical components observed on-site: the screens, essential for removing solid materials that obstruct the purification process, are currently out of service and in need of maintenance. The grit chamber, crucial for removing sand and gravel to prevent equipment erosion, is presently inactive. Both anaerobic tanks designated for phosphorus removal and one aeration tank are out of service. Only one secondary sedimentation tank is operational, while the sand filters, responsible for removing sediments, are inactive. The chlorine disinfection system operates intermittently, and only one of the two sludge thickener tanks is in service. Seven drying beds, cost-effective and easy to maintain, are in good condition. Of the sludge dewatering units, one requires maintenance and the other remains in service. Lastly, the equalisation tank, used during peak inflow periods, is currently non-operational. Modelling and simulation Modelling and simulation are defined as the use of physical, mathematical, logical, or other types of models in simulations to create a computerised representation of a real situation. This allows us to obtain the best and most appropriate solution(s) and decision(s) for a real-world situation.
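The two formulas introduced above are simple enough to verify numerically. The following illustrative Python snippet (not part of the study's toolchain) implements them and reproduces the five-year projection used later in the paper; the 51 mg/L effluent COD in the second example is back-calculated from the reported 96.1 % model removal efficiency, not a measured value.

```python
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Removal efficiency (%) = (C_in - C_out) / C_in * 100."""
    return (c_in - c_out) / c_in * 100

def project_population(p0: float, i: float, t: float) -> float:
    """Exponential growth model: P_t = P_o * (1 + i)^t."""
    return p0 * (1 + i) ** t

# Five-year projection used in the study (2.3 % annual growth):
print(round(project_population(28_800, 0.023, 5)))   # ~32,268 persons

# Influent COD of 1300 mg/L with an assumed effluent of ~51 mg/L gives
# roughly the 96.1 % removal reported for the calibrated model:
print(round(removal_efficiency(1300, 51), 1))        # 96.1
```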
Additionally, such simulation can explore the problem under different scenarios, aiding comparison and the selection of the correct decision. In the field of engineering, simulation and modelling find wide application across various disciplines. Simulations provide approximations and simplifications of the real-world scenario, enabling the addition and modification of inputs to any system. This, in turn, facilitates error monitoring, identification of the causes of problems, and the development of optimal solutions [28]. Description of GPS-X GPS-X is a design program used for the simulation and design of wastewater treatment plants. The GPS-X software was developed by the Hydromantis Company to provide a clear understanding of wastewater treatment plant performance and assist in predicting the plant's future. The program library contains numerous processes for treating wastewater and solid materials, along with a comprehensive range of biological treatment processes that address nitrogen, phosphorus, carbon, and pH levels. In terms of design, GPS-X is utilised to assess the impact of increased biological loads on treatment plant facilities and to operate the plant under various scenarios, thus saving costs and effort. Additionally, GPS-X examines the effects of internal recycling rates, anoxic zones, and anaerobic zones on processes such as nitrification, denitrification, and overall treatment effectiveness [29]. The uses of GPS-X can be summarised as follows. • Designing unit processes: primary treatment, secondary treatment, and biosolids handling. • Minimising operational costs while meeting effluent quality requirements. • Predicting the effects of taking one of the unit processes offline for maintenance. • Ensuring a swift recovery from plant upsets. • Accurately evaluating process control improvements. • Providing operator training by illustrating the impact of operating decisions on plant performance. One of the advanced features of GPS-X is its drag, drop, and link capability. With this feature, designers and analysts can combine processing units and add or delete the connections between them to create different scenarios. The software is also known for the ease with which information can be entered for the units used. Additionally, GPS-X can be integrated with MATLAB. The software is further distinguished by its built-in optimiser [22,29]. The utilisation of GPS-X in the modelling process entails employing simulation software to analyse and enhance the efficiency of water treatment facilities. GPS-X functions as a specialised software suite tailored for modelling and simulating wastewater treatment processes. Drawing on the historical performance data of the plant, GPS-X constructs dynamic models capable of simulating various scenarios and refining the treatment process [30]. By converting graphical representations of processes into material balance equations, the software facilitates the kinetic depiction of treatment processes and the examination of crucial parameters [31]. Moreover, GPS-X serves for the design and simulation of sewage treatment plants, providing calculations and specifications for individual unit operations [32]. Furthermore, GPS-X aids in the calibration of mathematical models for activated sludge processes, enabling the analysis of plant performance and capacity assessment. In summary, GPS-X emerges as a potent instrument for both modelling and optimising water treatment processes.
The GPS-X modelling process encompasses several key steps. It commences with data collection, where pertinent information regarding the wastewater treatment plant (WWTP), including influent characteristics and effluent quality, is gathered. Following this, a digital representation of the WWTP is established through the model setup, detailing its treatment units, pipelines, and processes. Calibration then ensues, during which the model is fine-tuned to accurately mirror the behaviour of the actual plant, involving adjustments to parameters and settings based on observed data. Subsequently, simulation is conducted to predict the WWTP's performance under various scenarios, potentially incorporating changes in influent characteristics or the addition of new treatment units. An analysis phase follows, assessing the WWTP's performance in order to pinpoint areas for improvement, such as comparing simulated effluent quality with regulatory standards and optimising operational strategies. Optimisation strategies are then developed to enhance the efficiency and effectiveness of the WWTP, which may involve adjustments to operational parameters or the implementation of new technologies. Finally, the model results undergo validation against real-world data to ensure their accuracy, affirming the reliability of the model and the efficacy of the proposed optimisation strategies [30][31][32]. Table 1 Influent readings during the six-month study period. Influent characteristics The practical aspect of this study relies on direct observation and the measurement of various wastewater parameters entering the treatment plant over six months. Readings were collected twice a week, with samples obtained in the morning from the plant's entrance and subsequently measured in the laboratory. The parameters measured include inflow, BOD, COD, TDS, TSS, pH, and temperature. Table 1 shows the average influent characteristics over the six-month period, including the standard deviation (the corresponding effluent readings are given in Table 2). This study aims to assess the performance of the Al-Marad WWTP, which necessitates the examination of its effluent and a comparison of its characteristics with the Jordanian standards. Hence, it was imperative to measure various parameters of the effluent and compare them with those of the influent. Table 2 shows the average effluent characteristics throughout the study period, in addition to the standard deviation. This research built the model to evaluate and assess the WWTP's current situation and mirror the WWTP's real conditions in order to predict its future outcomes and sustainability. The Results section presents the current and future scenarios. Results The evaluation process relies on a comparison of the inputs and outputs of the treatment plant, ensuring that the treated wastewater adheres to the specified local standards. In this section of the study, the efficiency of the Al-Marad WWTP is quantified by assessing the quality of the effluent water and subsequently comparing it with the recommended specifications. The results are categorised into two primary sections: an evaluation of the Al-Marad Wastewater Treatment Plant based on the influent and effluent readings, and an assessment derived from a simulation of the plant's performance using the GPS-X model.
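Returning briefly to the calibration step described above: conceptually, it is a parameter-fitting loop. Since GPS-X is a commercial graphical package, the sketch below substitutes a hypothetical one-parameter first-order removal model for the simulator; the parameter, the grid search, and the observed values are all invented to illustrate the calibrate-then-simulate pattern, not the study's actual procedure.

```python
import numpy as np

def toy_simulator(c_in: float, k: float) -> float:
    """Hypothetical stand-in for a simulator run: first-order removal,
    effluent = influent * exp(-k). GPS-X solves full material balances instead."""
    return c_in * np.exp(-k)

def calibrate(c_in: float, c_out_observed: float, k_grid: np.ndarray) -> float:
    """Pick the rate constant that best reproduces the observed effluent
    (the 'fine-tuning to mirror the actual plant' step)."""
    errors = [(toy_simulator(c_in, k) - c_out_observed) ** 2 for k in k_grid]
    return float(k_grid[int(np.argmin(errors))])

# Calibration: reproduce an observed COD drop from 1300 mg/L to ~51 mg/L.
k = calibrate(1300.0, 51.0, np.linspace(0.1, 5.0, 491))
# Scenario simulation: predict effluent for a hypothetical stronger influent.
print(round(k, 2), round(toy_simulator(1600.0, k), 1))
```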
Evaluation of Al-Marad WWTP performance/field results The efficiency of the Al-Marad WWTP is calculated and its performance evaluated based on the laboratory test results. The removal efficiency of the plant is calculated using the removal efficiency formula given above, which quantifies the difference between the inflow and outflow of the plant and thereby shows the changes made by the treatment units. This is shown in Table 3, which displays the six-month averages of the BOD, COD, TDS, TSS, pH, and temperature readings taken at both the inlet and the outlet of the plant. The outlet DO concentration ranged from 1 to 2.2 mg/L, while the pH and temperature adhered to the required standards, being 7.5 and 25-35 °C, respectively. Laboratory tests indicated that the treated wastewater from the Al-Marad WWTP is suitable for irrigating fruit trees and green areas. Additionally, it can be discharged into valleys and streams or utilised to recharge unused groundwater wells not designated for drinking purposes. Evaluation of Al-Marad WWTP performance/modelling results GPS-X is an advanced software package designed for simulating WWTP performance, offering numerous options and units commonly utilised in wastewater treatment plants. The base model was established and calibrated using data provided by the plant, encompassing design criteria, flow rate characteristics of the influent, and operational data. The base model comprises only the units currently in operation at the plant. Fig. 1 illustrates the base model of the Al-Marad WWTP at the present time. To calibrate the model, the physical and chemical characteristics of the inlet wastewater were considered. Inlet characteristics were input based on laboratory findings: COD was set at 1300 mg/L, total phosphorus at 12 mg/L, total Kjeldahl nitrogen (TKN) at 57 mg/L, ammonia nitrogen at 28 mg/L, and orthophosphate at 9 mg/L, with an average inflow of 4000 m³/day. Additionally, the influent fractions were adjusted from the default values using data extracted from the plant's manual. Filtered BOD, filtered COD, and flocculated COD were employed to compute these fractions. Table 4 provides a comparison between the actual and default values of the fractions utilised in GPS-X. A simulation of the Al-Marad WWTP was created based on the data entered into the model. The results of a 30-day simulation using the GPS-X model are shown in Table 5. Table 2 Effluent readings during the six-month study period. Table 3 The six-month average of the BOD, COD, TDS, TSS, pH and temperature readings taken at both the inlet and the outlet of the plant, and the removal efficiency (%) of the Al-Marad WWTP. When calibrated, the GPS-X model closely mirrored the treatment plant's effectiveness. The removal efficiencies obtained from the model for COD, BOD, and TSS were 96.1 %, 69.9 %, and 98.5 %, respectively. Comparatively, the treatment plant achieved removal efficiencies of 95.2 % for COD, 96.2 % for BOD, and 98.3 % for TSS. Fig. 2 illustrates the 30-day simulation results for the current plant's effluent COD, TSS, and BOD. The GPS-X model offers an option that allows the user to determine the amount of sediment in the sedimentation tanks. These tanks are divided into ten equally thick layers, each showing the sediment concentration.
Fig. 3 illustrates the distribution of TSS in these layers within the sedimentation tank. The illustration indicates that the first six layers effectively maintain acceptable TSS effluent levels, while sediment accumulation significantly affects the remaining layers. Consequently, the TSS effluent from the first layer meets standards. Evaluation of Al-Marad WWTP performance using the GPS-X model with subsequent future scenarios This section presents the performance analysis of the Al-Marad WWTP under different inflow rates. Through modelling, we showcase the operational schedule for treatment units over time as the plant approaches its maximum design capacity of 10,000 m³/day. Additionally, we simulate potential future scenarios in which influent rates exceed 10,000 m³/day, forecasting the influent rate at which the Al-Marad WWTP could potentially fail, and the corresponding date. The base model performance at the maximum design inflow (10,000 m³/day) The plant is designed to handle 10,000 m³/day. In this section, the current plant layout (Fig. 1) is assessed under operating conditions equivalent to the design inflow of 10,000 m³/day. The results obtained from a 30-day simulation are summarised in Fig. 4. As illustrated in Fig. 4, the average effluent COD was 500 mg/L, the average effluent BOD was 200 mg/L, and the effluent TSS was 720 mg/L, with an average effluent DO of 0.3 mg/L. Our analysis indicates that the existing plant layout fails to meet the design standards at maximum capacity, as all effluent characteristics exceed acceptable limits. Moreover, under maximum inflow conditions, the average TSS effluent from the first sedimentation tank layer would reach an unacceptable 700 mg/L, as depicted in Fig. 5. Performance evaluation beyond 5 years In this section, the impact of the inflow volume on the efficiency of the Al-Marad WWTP is investigated using the GPS-X model. The inflow volume calculation relied on an exponential model predicting population growth. This model estimates population growth over a specific period, assuming a constant growth rate, based on the population formula given above [33]. The number of homes connected to the sewage network in 2020 was approximately 5750 houses, accommodating roughly 28,800 individuals. Based on the Jordanian growth rate of 2.3 % and this correlation, the estimated population by 2025 (in five years) will be around 32,268 persons. This scenario was replicated using the GPS-X model to analyse the impact of the increased inflow volume on the efficiency of the wastewater treatment plant beyond five years, using the current layout. After a 30-day simulation, the effluent levels of TSS, COD, BOD, and DO are summarised in Fig. 6. As depicted in Fig. 6, the current configuration of the treatment plant cannot manage 4481.6 m³/day, resulting in unsatisfactory treatment efficiency. The average effluent levels were as follows: TSS at 290 mg/L, COD at 190 mg/L, BOD at 57 mg/L, and DO at 0.05 mg/L. None of these effluent characteristics meet the required standards for treated wastewater.
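The inflow figures driving this and the subsequent scenarios follow directly from the population model. The sketch below reproduces them under one assumption that the paper implies but does not state explicitly: a constant per-capita flow derived from the 2020 baseline (4000 m³/day for roughly 28,800 inhabitants, i.e., about 0.139 m³ per person per day).

```python
# Projected inflow at 5-year intervals, assuming constant per-capita flow.
P0, GROWTH = 28_800, 0.023          # 2020 population, annual growth rate
PER_CAPITA = 4_000 / P0             # ~0.139 m3/person/day (derived, not stated)

for years in range(5, 51, 5):
    pop = P0 * (1 + GROWTH) ** years
    inflow = pop * PER_CAPITA
    print(f"+{years:2d} yr: ~{pop:7,.0f} persons, ~{inflow:6,.0f} m3/day")
# Output tracks the scenario inflows used in the text: ~4,482 (5 yr),
# ~5,021 (10 yr), ~5,626 (15 yr), ~6,303 (20 yr), ~7,062 (25 yr),
# ~8,866 (35 yr), ~9,933 (40 yr), ~11,129 (45 yr), ~12,469 (50 yr).
```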
Fig. 7 shows a high concentration of TSS in all the sedimentation tank layers, with the effluent TSS from the first layer averaging 89 mg/L, an unacceptable level. Based on the simulation results in this scenario, it is evident that the current plant layout is incapable of managing an inflow of 4500 m³/day. Consequently, the current layout would not be sustainable after 5 years. Therefore, our recommendation is to activate the second sedimentation tank, as depicted in Fig. 8, to maintain treatment efficiency within acceptable limits. The GPS-X model was subsequently employed to simulate the MWWTP with the second sedimentation tank in operation. Observing Fig. 9, it is evident that the plant's treatment efficiency meets the standards, as all effluent concentrations of COD, BOD, and TSS remained below the specified limits. Adding the second sedimentation tank led to significant efficiency improvements, achieving a 99.6 % removal rate for TSS, 96.7 % for COD, and 99.7 % for BOD, while maintaining an average effluent DO value of 1.51 mg/L. Additionally, this modified layout effectively resolves the sedimentation issue within the tank by distributing the inflow equally between both tanks, employing an equal split fraction of 0.5. As depicted in Fig. 10, the first seven layers yield a TSS effluent that complies with acceptable limits. Performance evaluation beyond 10-20 years We simulated the performance of the Al-Marad WWTP at 5-year intervals until the treatment efficiency fell below the effluent standards. After 10 years, the plant is anticipated to serve approximately 36,153 individuals, receiving an inflow of 5000 m³/day. We assessed the previous layout (depicted in Fig. 8) under the anticipated flow conditions for the next 10 years. The simulation results and the plant's effluent values for TSS, COD, BOD, and DO are presented in Fig. 11. After a 30-day simulation, we determined that this layout is suitable for the next ten years. The concentrations of TSS, BOD, and COD do not exceed 50 mg/L, while DO remains under 1.5 mg/L, all in accordance with the relevant standards. The removal rates are 99.4 % for TSS, 96.5 % for COD, and 99.6 % for BOD. Furthermore, with this layout, both sedimentation tanks perform adequately, with the first six layers effectively maintaining TSS effluents below 50 mg/L, as depicted in Fig. 12. The same layout (Fig. 8) was examined for the 15- and 20-year horizons, anticipating inflows of 5600 m³/day and 6300 m³/day, respectively. Fig. 13 displays the simulation outcomes after 20 years. Based on Fig. 13, it is evident that the layout used after 5 and 10 years can also be effectively employed after 15 and 20 years. The plant's treatment efficiency remains acceptable, with all effluent concentrations of COD, BOD, and TSS well below the specified limits. The removal rates for TSS, COD, and BOD are 98.5 %, 96.4 %, and 98.5 %, respectively. Moreover, the sedimentation tanks effectively manage sediment accumulation, and effluent TSS within the required limit is observed in the first six layers, as depicted in Fig. 14. Performance evaluation beyond 25-35 years In this scenario, the plant's performance is tested over 25 years, serving approximately 50,000 people with an expected inflow rate of 7000 m³/day of domestic wastewater. Initially, we assess the performance using the previous layout (the 5-, 10-, 15-, and 20-year layout, as illustrated in Fig. 8). Employing the GPS-X model, we simulated the plant's performance over a 30-day period. The results of this simulation are depicted in Fig. 15.
From Fig. 15, it is evident that the plant's effluents do not meet the permissible standards. The average effluent COD reached 160 mg/L, the average BOD was 44 mg/L, and DO values were below 0.1 mg/L. Additionally, the sediments exceeded 10,570.43 mg/L in each of the last six layers, with the first and second layers producing a TSS effluent averaging 230 mg/L. These simulation results indicate that the current layout cannot effectively treat an inflow of 7000 m³/day. This suggests that the layouts used successfully after 5, 10, 15, and 20 years are unsuitable at the 25-year mark. Therefore, it is necessary to modify the layout and bring the third sedimentation tank into operation using equal split fractions, as illustrated in Fig. 16. The GPS-X model was used to simulate MWWTP performance with the third sedimentation tank in operation. Fig. 17 presents the results for effluent TSS, COD, BOD, and DO after a 30-day simulation. Operating the third sedimentation tank produces good effluent results, with a TSS removal efficiency of 99.4 %, a COD removal efficiency of 96.5 %, a BOD removal efficiency of 99.6 %, and an average effluent DO of 1.79 mg/L, as shown in Fig. 17. Additionally, the first seven layers are in good condition, producing a TSS effluent of less than 50 mg/L. Following the simulation method used at each 5-year interval, a 30-day simulation was conducted to test the plant's efficiency beyond 35 years. The same layout adopted beyond 25 years (Fig. 16) was tested for an inflow of 8800 m³/day. Fig. 18 presents the effluent COD, TSS, BOD, and DO resulting from using this layout. The simulation results demonstrated the suitability of the previous layout for an additional 10 years, with an expected inflow of around 8800 m³/day. Based on the 30-day simulation results, the plant's effluent meets the standards, with TSS removal at 99 %, COD removal at 96 %, and BOD removal at 99.2 %. Additionally, the average DO effluent is 1.57 mg/L. Moreover, the sedimentation tanks produce satisfactory TSS effluent, with the first six layers yielding less than 50 mg/L of TSS, as shown in Fig. 19. Performance evaluation beyond 40 years After 40 years, the inflow is expected to reach the design inflow rate of 9900 m³/day. The GPS-X model was used to simulate MWWTP performance, employing the layout used beyond 35 years. Fig. 20 presents the results for effluent TSS, COD, BOD, and DO after a 30-day simulation. The removal efficiencies for TSS, COD, and BOD were satisfactory at 98.2 %, 96 %, and 96.8 %, respectively, while the DO effluent decreased to 1.43 mg/L. Additionally, the three sedimentation tanks were simulated over a 30-day period, as depicted in Fig. 21. The simulation results indicate a significant accumulation of sediment in the three sedimentation tanks, which would result in their closure. It can also be observed that only the effluent from the first and second layers complies with the specified limits. This outcome suggests that the predicted layout is inadequate for managing the design inflow. Consequently, the layout requires modification. The proposed adjustment involves operating the second aeration tank in the MWWTP with an equal split fraction alongside the first aeration tank, as illustrated in Fig. 22. The GPS-X model was used to simulate MWWTP performance using the layout with two aeration tanks over a 30-day simulation. The effluent TSS, COD, BOD, and DO are shown in Fig. 23.
The simulation results indicate a TSS removal percentage of 99.1 %, COD removal of 97.4 %, BOD removal of 99.6 %, and an average DO effluent of 2 mg/L. It can be concluded that operating the second aeration tank enables the plant to manage the design inflow. Furthermore, the simulation results for the three sedimentation tanks illustrate satisfactory performance in the first seven layers, as depicted in Fig. 24. This suggests that the operation of the second aeration tank remains beneficial beyond 40 years at the design inflow. Additionally, the plant was simulated with an inflow exceeding the design capacity. After 45 years, the expected inflow is 11,000 m³/day. The full design of the plant was examined by operating all units and including them in the treatment process. Fig. 25 depicts the layout beyond 45 years (the plant's full design). The simulation results show that the plant's full design is appropriate for an inflow of 11,000 m³/day. TSS removal reached 99.8 %, COD removal was 99.6 %, BOD removal was 99.6 %, and the average DO effluent was 2 mg/L. Additionally, the simulation results indicate that the three sedimentation tanks perform well, particularly the top five effective layers. Future management strategies for plant expansion Based on the findings from the previous scenarios, it is evident that the operational lifespan of the plant is limited to 45 years. To extend the plant's functionality beyond this timeframe, we recommend an expansion of the treatment facility. This expansion involves the addition of new treatment units, potentially including sedimentation and aeration tanks. Specifically, our proposal entails integrating a fourth sedimentation tank into the existing infrastructure. However, incorporating a fourth sedimentation tank necessitates the installation of a new thickener unit. Given that the capacity of a single thickener is 750 m³/day, and the inflow from one sedimentation tank to the thickener is 250 m³/day, the combined inflow from the four sedimentation tanks (4 x 250 = 1000 m³/day) would surpass the capacity of one thickener. Therefore, the addition of a new thickener, with an equal split fraction to the existing one, is proposed to enhance the plant's efficiency over an extended period (see the sketch below). Fig. 26 illustrates the proposed layout for the plant's expansion, comprising the inclusion of an extra sedimentation tank and a dedicated thickener unit. The GPS-X model was employed to simulate the plant's performance with the expanded layout beyond 50 years, incorporating the fourth sedimentation tank and the second thickener. Effluent parameters such as TSS, COD, BOD, and DO were simulated over a 30-day period at an inflow rate of 12,000 m³/day, as depicted in Fig. 27. The 30-day simulation results indicate that the expanded layout performs effectively with the pumped flow increased to 300 m³/day. TSS removal achieved 99.8 %, COD removal reached 99.5 %, BOD removal reached 99.5 %, and the DO effluent was 1.5 mg/L. Additionally, the distribution of TSS within the sedimentation tank layers meets acceptable standards.
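The thickener sizing argument above reduces to simple arithmetic, checked below; the 0.5 split fraction comes from the proposed layout, and the 300 m³/day figure is the increased pumped flow mentioned in the 50-year simulation results.

```python
THICKENER_CAPACITY = 750   # m3/day per thickener
TANKS = 4                  # sedimentation tanks in the expanded layout

for pumped_flow in (250, 300):           # m3/day pumped from each tank
    total = TANKS * pumped_flow
    per_thickener = total * 0.5          # equal split between two thickeners
    print(f"{pumped_flow} m3/day/tank: total {total} m3/day, "
          f"one thickener overloaded: {total > THICKENER_CAPACITY}, "
          f"two thickeners sufficient: {per_thickener <= THICKENER_CAPACITY}")
# 250: total 1000 > 750 -> one thickener insufficient; 500 each with two -> ok
# 300: total 1200 > 750 -> still needs two; 600 each with two -> ok
```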
Discussion As mentioned in the Results section, numerous and frequent modifications are essential, particularly to the treatment plant design, to ensure optimal efficiency at the lowest cost. Our study, encompassing various scenarios, revealed that population growth significantly influences the number of operational units required, their activation, and the duration of their effectiveness. Population dynamics also significantly affect the physical and chemical characteristics of the flow, necessitating the measurement or calculation of the various factors influencing plant operation and the maintenance of its efficacy. The region faces environmental challenges, including a scarcity of fresh water, and is frequently affected by external climate events, particularly droughts [6]. This situation necessitates the wise management of water resources [12]. The use of TWW is one of the most promising strategies for conserving and extending available water sources, and TWW is regarded as one of the most critical and effective solutions to Jordan's water shortage [14,15]. Previous studies used a GPS-X simulator to improve the performance of WWTPs [23,24]. In the Karbala wastewater treatment plant, there was a particular focus on the addition of external carbon sources to enhance denitrification and reduce phosphate concentrations [23], while in the Sharjah case study, TSS, COD, TKN, and cBOD5 were used to support decision-making strategies and deliver cost savings by reliably evaluating TWW quality [24]. Additionally, some studies used CapdetWorks software alongside GPS-X to calculate the economic costs [34]. Our results reveal that the existing layout will not suffice after five years, with an anticipated inflow of 4500 m³/day; therefore, the second sedimentation tank needs to be brought into operation. Once the plant approaches its maximum design inflow of 10,000 m³/day, the third sedimentation tank must operate to enhance the plant's performance and sustain it. Previous studies [23,24] analysed the performance of WWTPs using the GPS-X simulator, reported overall performance improvements in the outcomes of the WWTPs, and identified the best scenarios. Hence, our results are in alignment with previous studies, as the majority of them agreed on the beneficial uses of the GPS-X model. Accordingly, the GPS-X software model provided crucial and realistic results, aligning well with the changes observed in the facility due to increased flow volume, primarily driven by population growth. Consequently, the model proves to be an effective and accurate tool, adept at simulating treatment plant realities across a wide spectrum of variables. Conclusion In conclusion, this study has focused on assessing the treatment efficiency of the Al-Marad WWTP through rigorous monitoring and simulation using the GPS-X model. By analysing key parameters such as BOD, COD, TSS, and DO levels in both influent and effluent streams over a period of six months, we have gained valuable insights into the plant's performance dynamics. The GPS-X model has proven to be a powerful tool, accurately simulating current plant performance and offering strategies for managing potential changes. The simulations extend beyond short-term assessments, providing projections of the plant's performance over the next 60 years. These projections highlight the need for proactive measures to ensure the plant's long-term viability. Specifically, our findings suggest the importance of modifying the existing layout to accommodate anticipated increases in inflow, as detailed in the study.
Furthermore, our research demonstrates the potential of the GPS-X model for evaluating treatment plant performance in Jordan and beyond. The successful application of the model in simulating the Al-Marad WWTP's performance suggests its reliability for evaluating other treatment plants in the region. Moreover, the model's ability to simulate future operations and assess modifications to the plant's design and operating units underscores its utility as a decision-making tool for administrators and operators.

Looking ahead, we recommend that future researchers explore the integration of machine learning technologies, particularly AI, with GPS-X modelling. By leveraging these advanced technologies, researchers can enhance the accuracy and realism of simulation results, facilitating more informed decision-making in wastewater treatment plant management.

Figure and table captions:
Fig. 1. Base model of Al-Marad WWTP at the present time.
Fig. 3. TSS distribution through the sedimentation tank's layers using the current layout.
Fig. 5. TSS distribution through the sedimentation tank's layers at the maximum inflow conditions using the current layout.
Fig. 6. Effluent simulated results beyond 5 years using the current layout for (a) BOD mg/L, (b) COD mg/L, (c) TSS mg/L and (d) DO mg/L.
Fig. 9. Effluent simulated results beyond 5 years using the proposed layout for (a) BOD mg/L, (b) COD mg/L, (c) TSS mg/L and (d) DO mg/L.
Fig. 10. TSS distribution through the layers of both sedimentation tanks using the proposed layout beyond 5 years.
Fig. 12. TSS distribution through the layers of both sedimentation tanks using the proposed layout beyond 10 years.
Fig. 13. Effluent simulated results beyond 20 years using the proposed layout for (a) BOD mg/L, (b) COD mg/L, (c) TSS mg/L and (d) DO mg/L.
Fig. 14. TSS distribution through the layers of both sedimentation tanks using the proposed layout beyond 20 years.
Fig. 19. TSS distribution through the layers of the three sedimentation tanks using the proposed layout beyond 35 years.
Fig. 21. TSS distribution through the layers of the three sedimentation tanks using the proposed layout beyond 40 years.
Fig. 24. TSS distribution through the layers of the three sedimentation tanks using the new proposed layout beyond 40 years.
Table 4. Comparison of actual and default values for the fractions used by the GPS-X model.
Table 5. Effluent simulated results.
9,366
2024-07-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
A bibliometric analysis of pandemic and epidemic studies in economics: future agenda for COVID-19 research

With the rapid global spread of the COVID-19 pandemic, researchers from diverse fields of study have contributed markedly across different research areas. Considering the substantial economic significance of the pandemic at the micro and macro level throughout the world, we review the scientific publications in the discipline of Economics. To draw a broad inference, we analyze a total of 1,636 scientific publications starting from 1974, covering the period of earlier pandemics or epidemics that have a close association with COVID-19, using bibliometric analysis. Our analysis and mapping reveal key information related to the contributors at different levels, including author, institution, country, and publication sources. Besides, we identify the historical concentration of research using scientific clustering and illustrate the transformations at different times. Moreover, recognizing the underlying inadequacy of economics research, we propose several areas of future research. Our findings and suggestions are expected to act as a roadmap to potential research opportunities, with notable implications for business and policymakers.

Introduction

The Coronavirus disease 2019 (COVID-19) outbreak has shocked the world since the first reported case in Wuhan, China, on December 31, 2019. Since then, it has spread all over the world and changed every aspect of human life. The infectious disease crisis, in turn, affects the world economy severely, since governments around the world have been adopting different policies to tackle the pandemic. The COVID-19 pandemic and the related economic and financial crisis are different from others; the gravity of this pandemic, its high contagiousness, and the large number of infections and deaths resulting from it all contribute to the instability in the market and economy (Baker et al., 2020). Moreover, with recent advancements in technology, all sorts of news and information regarding the pandemic quickly reach all corners of the world in no time. Early estimates predicted that major economies would lose around 2.4 to 3.0 per cent of their gross domestic product (GDP) during 2020 due to the COVID-19 pandemic (Azevêdo, 2020). Accordingly, it is becoming challenging for most businesses worldwide to keep their financial wheels rolling, given reduced revenues and high uncertainty (Verma & Gustafsson, 2020). Therefore, although COVID-19 is first a health-related issue, the economic consequences of the pandemic pose a major question for the present and the future. To understand the crisis better and develop feasible solutions, there is an urgent need for comprehensive studies that analyze different facets of COVID-19. Realizing this importance, the World Health Organization (WHO) identified social sciences in the outbreak response as one of nine cutting-edge priority areas. The WHO described the aim of this cluster as follows: "the research community overarching aim is to bring social science technical expertise to integrate with biomedical understandings of the COVID-19 epidemic, to strengthen the response at international, regional, national and local levels in order to stop the spread of COVID-19 and mitigate its social and economic impacts". The global roadmap also outlined three objectives under this priority area: understanding contextual vulnerability, how decisions in the field may inadvertently undermine response goals, and how social and economic impacts need to be mitigated.
However, research on the different aspects of social science, particularly in economics, remains comparatively scarce. The dominance of medical and clinical research on different pandemics and epidemics, including the COVID-19 pandemic, is evidenced by substantial literature review papers, including bibliometric studies. From the methodological standpoint of this study, we find that most bibliometric studies regarding epidemics and pandemics focus on medical and clinical research; COVID-19 related studies are no exception. The method is widely used in medical research (Liao et al., 2018; Liu et al., 2020), public health (Cash-Gibson et al., 2018; Humboldt-Dachroeden et al., 2020; Kalita et al., 2015; Tran et al., 2019), and particularly to review the literature related to infectious disease and virology research (Azer, 2015; Hendrix, 2008; Ramos et al., 2004; Yang et al., 2020a). Moreover, earlier researchers conducted such bibliometric studies to review literature that focuses on infectious diseases like influenza (Liang et al., 2018) or HIV (Macías-Chapula & Mijangos-Nolasco, 2002; Sweileh, 2019). A significant amount of research has been conducted on the Coronavirus and related diseases like Ebola, severe acute respiratory syndrome coronavirus (SARS), and Middle East Respiratory Syndrome coronavirus (MERS) (Deng et al., 2020; Ram, 2020; Sa'ed, 2016; Kostoff & Morse, 2011). Notably, a substantial increase in such studies is found after the outbreak of COVID-19 (Danesh & GhaviDel, 2020; Dehghanbanadaki et al., 2020; El Mohadab et al., 2020; Wang & Hong, 2020; Yang et al., 2020b). However, to the best of our knowledge, there is no previous bibliometric review that comprehensively studies the coronavirus and epidemic literature in economics. Hence, this study makes a humble effort to analyze the existing literature in the field of economics on COVID-19 and the earlier variants of the Coronavirus. Moreover, we integrate the research on the virus and virus-induced epidemics and pandemics in our study. Given the backdrop of the economic consequences brought by the latest COVID-19 pandemic, the study aims to analyze the published scientific works that focus explicitly on economics and related issues. Hence, in this study, we provide a review of the literature from the past to the present in quantitative terms and attempt to visualize the prevailing knowledge structure to help future researchers and policymakers. To achieve these objectives, we formulate the specific research questions listed below:

1. Who are the top researchers, and what are the leading journals, institutions, and countries investigating the economic aspects of pandemics or epidemics?
2. Is there a geographical concentration of research, and how interconnected is it?
3. What are the top keywords and the related prominent research clusters?
4. How has research in the field of economics progressed, and what were the relative changes during different infectious disease outbreaks?

Using the bibliometric method, our study provides a comprehensive summary of the Coronavirus, epidemic, and pandemic literature published over more than 47 years in the field of Economics. The analysis effectively considers 1,636 scientific publications in this period that are listed in the Web of Science (WoS) database. We map all existing scientific research patterns in this specific field of study to achieve the objectives.
In general, we find an increasing trend in publications since 2002 that coincides with the outbreak of SARS; however, this steady growth experienced a rapid upswing during 2020, after the inception of COVID-19. Besides identifying the most contributing and influential authors, publication sources, research institutions, and countries, we provide several visualizations to present the findings more precisely, such as the publication dynamics of the top publishing sources, a country collaboration map, and the interconnection between institutions, countries, and publication sources. Such findings help us identify the research concentration at different levels (i.e., geographic intensity or nature of collaboration). Furthermore, we analyze the conceptual structure of research through a correspondence analysis of keywords to understand the most prominent research clusters. We notice a distinct and significant research cluster that focuses on 'economic growth,' 'risk,' 'income,' 'demand,' 'consumption,' and 'growth.' We also find that importance is given to issues like 'policy,' 'cost-effectiveness,' 'strategies,' or 'management' aspects, which form a related yet distinct cluster of research. Moreover, we consider different infectious disease outbreaks during the analysis period to see the changes in the knowledge structure. We observe that 'HIV' is the leading disease across different periods, even during the other outbreaks like the SARS coronavirus, the Middle East respiratory syndrome coronavirus (MERS), and Ebola. The findings indicate that there is a lack of research in the field of economics and related business aspects that directly addresses the impact of the Coronavirus. Studies have emphasized 'cost-effectiveness' when studying the risk or impact of health crises in recent years. Our research shows potential avenues to explore for researchers who intend to study different economic aspects of the current COVID-19 pandemic. Hence, we contribute to the literature by highlighting the characteristics of existing research and its knowledge structure through reviewing publications over 47 years. Furthermore, a wide range of indicators points to potential areas that future research can explore.

The remainder of the paper is set out as follows: Section 2 discusses the study's methodological aspects, briefly explaining the indicators and performance metrics used in bibliometric studies to evaluate scientific outputs; this section also provides the rationale for the chosen data sources and analytic tools. Section 3 presents and discusses the key findings to characterize the scientific output in this research area. Section 4 summarizes the findings, outlines the research gaps, and suggests future research areas to conclude the study.

Data and methodology

As quoted in Akhavan et al. (2016), with reference to Ponce & Lozano (2010), "Bibliometric analysis refers to combining different frameworks, tools, and methods to study and analyze citations of scholarly publications has led to the development of different metrics to gain insights into the intellectual structure of broad academic discipline and evaluate the impact of scientific journals, studies, and researchers accordingly."
In the current study, we use bibliometric analysis to review the literature of interest, as this method allows us to analyze the existing publications objectively (Ellegaard & Wallin, 2015); compared to other methods of literature review, bibliometrics is a systematic, straightforward, and reproducible process that minimizes the intrinsic subjectivity of narrative and systematic reviews (Della Corte et al., 2019). Besides, visualizing the bibliographic information through mapping allows scholars to understand research trends broadly and intuitively by highlighting the boundaries of the existing relevant intellectual territory and knowledge structure (Cobo et al., 2011). To perform a bibliometric review of the relevant literature, we used the five-step procedure proposed by Zupic & Čater (2015). The workflow of the five-step process is presented in Fig. 1.

To answer the research questions, we use several bibliometric indicators and science mapping techniques. We employ citation analysis to measure authors' and publications' impact, as this is the most conventional measure for assessing scientific quality and impact (Waltman et al., 2012). In essence, a high citation count indicates the high impact of a particular author or document in a specific field of study (Feng et al., 2017). Another comparable impact measurement is the h-index, which measures the productivity and influence of an author or a publication source by integrating quality and quantity: 'h' of the articles published by an author are cited at least 'h' times each (Hirsch, 2010); a minimal computational sketch of this definition is given below. Besides, we make use of Lotka's Law to measure the frequency of publications by authors, which characterizes the productivity pattern in a given field of study over a specified period and allows us to conclude whether most of the production in the analyzed area is concentrated in a limited number of authors or not (López-Fernández et al., 2016). Similarly, we determine the core journals through Bradford's Law, which ranks journals by decreasing publication frequency, with the most productive journals assigned to the 'core zone,' and so on. This method is often used to understand how the literature on a particular subject is scattered or distributed across journals, and it serves as a guideline to determine the number of core journals within a given subject (Garg & Tripathi, 2018). Besides, we utilize the number of publications and citations to identify the most influential countries and institutions, and we visualize the geographic and institutional leadership of the research. Moreover, we use keyword and co-word analysis to map the existing knowledge structure of this field of research. This analysis is a systematic method for scientifically discovering subfield linkages, tracking the phenomenon (Feng et al., 2017), and building a semantic field map (Zupic & Čater, 2015). Co-word analysis, in turn, lets us use the actual content of a text directly to capture co-occurrence interactions when constructing the framework (Feng et al., 2017), and hence to extract scientific maps derived from the high frequencies of words appearing in the text. Using an appropriate clustering algorithm (i.e., Multiple Correspondence Analysis or MCA) on the keywords, we present the conceptual structure and thematic map of the existing research.
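To make the h-index definition above concrete, here is a minimal Python sketch (our illustration; the authors' actual analysis is run in R with the Bibliometrix package introduced below):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical author with six papers cited 25, 8, 5, 4, 3, and 0 times
# has an h-index of 4: four papers with at least four citations each.
print(h_index([25, 8, 5, 4, 3, 0]))  # 4
```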
The MCA analysis draws a conceptual map of the field, and K-means clustering recognizes groups of documents that express common concepts (Aria & Cuccurullo, 2017), identifying the structure of existing research clusters by measuring the proximity of the keywords used in the research (Demiroz & Haase, 2019). To analyze the bibliographic information and answer the specified research questions, we use the Web of Science (WoS) database for the bibliometric analysis. The WoS is a well-known database incorporating more than 161 million records across 254 subject areas. The database gives the scholarly community access to articles from scientific journals, books, and other academic documents in all disciplines. Though WoS does not index the largest number of journals in all the different fields compared to other databases like Scopus (Li et al., 2010), it is believed to provide an adequate amount of high-quality literature (Ellegaard & Wallin, 2015). To obtain a representative amount of bibliographic information from the WoS database, we have used the following search string:

TOPIC: ("Virus*" OR "Pandemic*" OR "Epidemic*" OR "Corona virus" OR "Coronavirus" OR "SARS" OR "MERS" OR "severe acute respiratory syndrome" OR "Middle East Respiratory Syndrome")

As we are interested in the historical nature of research and publications, we have considered keywords like 'Virus,' 'Pandemic,' and 'Epidemic' as well as 'Coronavirus' and 'Corona virus' to obtain more extensive coverage and to find potential areas for future studies on COVID-19. Besides, we have included keywords like 'SARS,' 'MERS,' 'severe acute respiratory syndrome' and 'Middle East Respiratory Syndrome' since they represent diseases historically related to COVID-19. Moreover, we refine our query results with the 'Economics' category of WoS, which allows us to focus on publications that consider different economic implications of the keywords considered. Finally, to analyze and visualize the data, we use the 'Bibliometrix' package (http://www.bibliometrix.org) developed by Aria & Cuccurullo (2017) in R (an open-source statistical application). The Bibliometrix package is well known for its wide range of features and is used in a growing number of publications (Firdaus et al., 2019; Linnenluecke et al., 2020).

Key information and trends in publications

After careful filtering and cleaning of the data retrieved from the WoS database, we obtain a total of 1,636 scientific documents to analyze. We present the key characteristics of the data in Table 1. From Table 1, we find that the earliest publication listed on pandemic-related studies in the field of economics dates back to 1974, whereas some of the publications are already assigned to be published in 2021. Hence, our analysis comprehensively captures publication information over more than 47 years. However, the number of publications in the field of economics is not large considering this long period. Yet, the publications have experienced steady growth over the years, as presented in Fig. 2. The trend graph indicates a small increase in the number of publications since 2002 compared to earlier years. Conceivably, the outbreak of severe acute respiratory syndrome coronavirus (SARS-CoV) in China in 2002 spurred the increased interest in this research area.
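Descriptive trends of this kind can be tallied directly from an exported record list. A minimal Python sketch, assuming a hypothetical CSV export with a WoS-style publication-year column named 'PY'; the file name is also hypothetical, and the authors' actual pipeline uses Bibliometrix in R:

```python
import csv
from collections import Counter

def publications_per_year(path: str) -> dict[int, int]:
    """Count records per publication year in a WoS-style CSV export."""
    counts: Counter[int] = Counter()
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            year = row.get("PY", "").strip()  # 'PY' = publication year field
            if year.isdigit():
                counts[int(year)] += 1
    return dict(sorted(counts.items()))

# Example usage (file name is hypothetical):
# for year, n in publications_per_year("wos_economics_export.csv").items():
#     print(year, n)
```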
Besides, we notice a slight rise in publications from 2012 onwards, possibly due to the similar coronavirus outbreak in Middle Eastern countries, termed the Middle East Respiratory Syndrome coronavirus (MERS-CoV). However, we observe a massive shift in publications in recent years (i.e., the year 2020) compared to earlier years. The rapid rise in publications plausibly indicates that research on COVID-19 has prompted substantial interest in the scholarly community, even in the field of economics. Moreover, we notice that a total of 4,596 authors have contributed to the publications so far, and their scientific works have been published by 393 publication sources in the form of journal articles, reviews, book chapters, proceedings papers, editorial materials, and so on. Most of the published documents are journal articles, accounting for around 64.70% (1,059 out of 1,636), including early access publications. The authors have used 46,028 references and 2,936 different keywords in their research over time. Most of the scientific publications are collaborative in nature, as only 355 publications are single-authored, and each document has received around 11 citations on average. In the following subsections, we analyze and visualize different characteristics of these documents to uncover the existing knowledge composition in this field of study.

Most contributing authors and publication sources

In this section, we present the top contributors to knowledge in Coronavirus and related research in Economics. Initially, we present the authors' publication outputs through Lotka's Law of scientific productivity in Fig. 3. Lotka's Law allows us to conclude whether most of the production in the analyzed area is concentrated in a limited number of authors or not (López-Fernández et al., 2016). In our case, the output is diversified, with many authors (3,972 out of 4,596) having only one publication, which accounts for 86.42% of the total contributing authors. The distribution essentially indicates that the contributions are not concentrated among a few authors but spread across many. However, from the perspective of publication sources, we find that only four journals have positioned themselves as the core publishing sources in this area, according to Bradford's Law. As depicted in Fig. 4, Value in Health, Pharmacoeconomics, World Development, and Health Economics are the core journals that have a major influence on publications in the field. Accordingly, in Table 2 we present the 20 most publishing authors and journals along with their number of total publications (NP), total citations (TC), and corresponding h-index values, to understand the productivity as well as the impact of their publications. Considering the number of scientific outputs, Baser O. is the most prolific, having published a total of 13 scientific documents, followed by Yuan Y., who has published 11. However, author dominance is not consistent across different indicators: the next most publishing authors, Mitchell I. and Wang L., have published ten papers each, while Lanctot K.L. and Li A. have published nine each; however, they have not received any citations so far. Considering the impact of publications in terms of total citations (TC) and h-index, Beutels P. is the top-ranked author with 272 citations and an h-index of 6, followed by Philipson T.J., who has received a total of 148 citations with an h-index value of 5.
Postma M.J., Mcewan P., and Laxminarayan R. are among the other influential contributors in this area of research in terms of citations received for their scientific publications and the respective h-index values. Besides, in Table 2 we list the top 20 publication outlets, including the core journals presented earlier, in which authors publish their scientific documents. Value in Health is the most publishing journal compared to its peers: it has published 347 scientific documents to date, significantly more than Pharmacoeconomics, which has 113 publications on the related topics. However, the ranking alters if we consider the TC and h-index. Though Value in Health is the most prolific in terms of publications, Pharmacoeconomics positions itself as the most impactful journal, having received a total of 3,019 citations, more than two and a half times the citations received by Value in Health. Similarly, the h-index value of Pharmacoeconomics is 29, which is significantly higher than that of Value in Health. Besides, the other two core journals on the list, namely World Development and Health Economics, have published 50 and 39 articles and received 244 and 700 citations, respectively. Considering the discrepancy between the number of publications and the citations received by the top journals, we further illustrate the core sources' annual publication dynamics in Fig. 5, which shows an increasing trend in publications in this area. One possible reason that Pharmacoeconomics has received a significantly higher number of citations could stem from the journal's dominance in the earlier years. Perhaps the seminal papers in this field received a considerable number of citations from subsequent publications in later periods. Altogether, the trend indicates an augmented interest by the journals in publishing articles that consider the economic aspects of these issues. Interestingly, the rising trend in publications by the top publication sources coincides with the inception of the overall upward trend in publications, as depicted in Fig. 2. Therefore, authors may find the analysis and illustrations useful for identifying the right publication outlets more efficiently for their latest scientific outputs, especially research on COVID-19 that covers different economic aspects.

Geographic and institutional distribution of research

This section analyzes the geographic distribution of publications, taking into account the authors' affiliated institutions and countries. Table 3 shows the most productive institutions and countries in terms of total publications. It also presents the collaborative nature of the scientific outputs of the leading countries through multi-country publications (MCP) and the multi-country publication ratio (MCPR). At the institutional level, we observe a significant dominance of universities and research institutions from the US. Authors affiliated with the University of Oxford (Univ Oxford) and the University of Cambridge (Univ Cambridge) are the most publishing from the UK, contributing 34 and 21 scientific publications, respectively. The Australian National University (Australian Natl Univ) is the only institution on the list other than the American and European universities, with 16 scientific publications. Likewise, at the country level, the US is the most productive country in this field of research (NP = 542), followed by the UK (NP = 153) and China (NP = 122).
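The MCPR reported in Table 3 is simply the share of a country's publications involving co-authors from more than one country, MCP/NP. A minimal sketch (the UK's MCP count of 49 below is back-calculated from the 0.3203 ratio quoted just below, so it is approximate):

```python
def mcpr(multi_country_pubs: int, total_pubs: int) -> float:
    """Multi-country publication ratio: MCP / NP."""
    return multi_country_pubs / total_pubs

# UK: 153 total publications, of which roughly 49 are multi-country,
# giving the MCPR of about 0.32 discussed in the text.
print(round(mcpr(49, 153), 4))  # 0.3203
```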
However, the UK has a superior collaborative publication output compared to the other top publishing countries on the list, with the highest MCPR (0.3203) among the top three. The ratio indicates that almost one-third of all publications by UK authors are collaborations with researchers from other countries. This rate is, however, exceeded by other countries like Austria, Belgium, Japan, Switzerland, and Germany, with MCPR values of 0.7778, 0.6316, 0.4545, 0.4118, and 0.3654, respectively. Overall, the collaborative research trend is higher for most of the European countries on the list. The country scientific production and collaboration networks are illustrated in Fig. 6. The blue color on the map indicates the existence of publications for a particular country on the issues under analysis, and grey indicates none. Darker blue represents more prolific publishing countries, while the red lines indicate the collaboration networks between publishing countries. The countries collaborating most actively are the US and the UK; being the most publishing countries, authors from both have a high collaborative scientific output. Other countries that US authors collaborate with include Canada, Germany, and China, whereas UK researchers collaborate with researchers from Italy, Germany, and Belgium besides the US. On the other hand, China has collaborative research outputs mostly with Germany, Singapore, and Australia. The findings indicate that most of the research outputs are dominated by researchers from developed countries, and that there is a regional concentration in the collaboration of research activities. We encapsulate the top 10 prolific countries and institutions together with the top 10 publishing sources in a three-field plot in Fig. 7. This figure provides an idea of the institutions' relative contributions to a country's overall research output. For the US, almost all the top research institutions and universities presented earlier contribute significantly to the overall country publications. However, the scenario is different for the UK, where the University of Oxford has contributed the most to the overall country scientific production, with a notable contribution made by the University of Cambridge. Interestingly, for China, we do not notice any specific institution's dominance in scientific production; instead, the publications are distributed among institutions, indicating the diversity of institutional contributions to the national totals. At the same time, some countries like Canada and the Netherlands receive their most significant contributions from a single institution, respectively the University of Toronto and the University of Groningen. From the publication sources perspective, the highest publishing journal, Value in Health, receives most of its contributions from authors in the US and the UK, along with China, Canada, the Netherlands, and Italy. Publications in the other sources are distributed among the different top publishing countries.

Keywords analysis and thematic analysis of research

This section presents the research keywords used by the authors in Coronavirus-related research over time. Statistical analysis of author keywords can offer research directions, which can be a useful way to delve into the development of scientific output (Du et al., 2013).
This section also discusses the different research clusters in which the studies are mostly concentrated, identified through the co-occurrence of keywords, and the research dynamics. Besides, we explain the shifts in research focus by uncovering the research themes at different points in time. We list the 30 most frequently used keywords in the publications and illustrate their relative occurrences in a word treemap in Fig. 8. The words 'impact,' 'health' and 'united states' are used most frequently, along with 'epidemic'; other commonly used keywords among the top five are 'mortality' and 'risk.' Consistent with our earlier findings, we again identify a geographical concentration of research, recognizing the significant occurrence of the keyword 'united states.' Though we did not include such geographic keywords in our search strings, assuming they would dilute our focus, the appearance of such words indicates a strong geographic concentration of the research outputs. We then conduct Multiple Correspondence Analysis (MCA) on the keywords used by the authors to uncover the conceptual structure of the research and identify the major research clusters in our area of interest (a toy computational sketch of such co-word clustering is given below). The two-dimensional plot of research clusters is presented in Fig. 9. The graph indicates that the scientific outputs considered in our study can be organized into five primary clusters, which signify the intellectual structure of the research issues addressed by scholars concentrating on these aspects. While a comprehensive review of these five clusters' content is beyond the scope of this article, a few illustrative examples demonstrate the diversity, breadth, and intellectual thrust of the work undertaken in each cluster. The first cluster (color: red) contains a total of eighteen keywords associated with articles that emphasize the 'epidemic.' We notice that studies have focused on different aspects of economics, as is noticeable through 'economics' and 'economic growth.' Besides the health-related aspect, we find that the research highlights other aspects of Economics like 'income,' 'demand,' 'consumption,' 'growth' and the like. The findings suggest that research on epidemic- or pandemic-related studies tends to examine the impact on different aspects of economic welfare: how income, consumption, or related aspects are affected during such periods of uncertainty is a major research interest in economics. Similarly, the second large research cluster, identified and colored in green, shows that importance is also given to issues like 'policy,' 'cost-effectiveness,' 'strategies' or 'management' aspects. The diagram shows that such research issues have more connection with 'care,' 'efficacy,' 'burden,' or 'quality of life.' We comprehend that the research community has focused on the different business and management related aspects of 'virus infection,' 'therapy' or 'models.' Hence, the cluster indicates a niche focus area of research that carries particular significance during pandemics or epidemics. In these two major research clusters, attention is also given to the demographic and geographic vulnerability of the community, represented by keywords such as 'population,' 'women,' 'children,' and 'Africa.' However, we notice two distinct research clusters (color: blue and violet) that constitute unique research areas focusing on 'sub-Saharan Africa' and the spread of the 'Human immunodeficiency virus' or 'HIV.'
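As referenced above, here is a toy sketch of co-word clustering: it builds a keyword co-occurrence matrix and groups keywords with K-means using scikit-learn. This is a hedged illustration of the general technique, not the authors' exact MCA pipeline (which runs in R via Bibliometrix), and the keyword lists are invented for the example:

```python
from itertools import combinations
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical keyword lists, one per publication.
papers = [
    ["epidemic", "economic growth", "income"],
    ["epidemic", "income", "consumption"],
    ["cost-effectiveness", "policy", "management"],
    ["cost-effectiveness", "policy", "quality of life"],
    ["hiv", "sub-saharan africa"],
    ["hiv", "sub-saharan africa", "policy"],
]

vocab = sorted({kw for paper in papers for kw in paper})
index = {kw: i for i, kw in enumerate(vocab)}

# Symmetric co-occurrence matrix: how often two keywords share a paper.
cooc = np.zeros((len(vocab), len(vocab)))
for paper in papers:
    for a, b in combinations(set(paper), 2):
        cooc[index[a], index[b]] += 1
        cooc[index[b], index[a]] += 1

# Cluster keywords by the similarity of their co-occurrence profiles.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(cooc)
for cluster in range(3):
    print(cluster, [kw for kw, lab in zip(vocab, labels) if lab == cluster])
```

On real data, this kind of keyword grouping underlies the five clusters discussed in this section.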
However, we do not find any dominant aspects of economics in these research clusters, as no such keywords are visible. Perhaps the publications are multidisciplinary in nature and also listed under the economics category in the WoS database. The authors highlighted the African regions' economic vulnerability in their research instead of focusing on any particular economic characteristic of such diseases or their transmission. Similarly, the fifth cluster's publications (color: yellow) do not depict such an association with financial issues. Furthermore, we attempt to identify the most significant research areas, how the topics have evolved and fused, and the most recent research issues by breaking down the entire research period into four different periods. We have used the inclusion index weighted by word occurrences, with each cluster containing 250 author keywords. From Fig. 10, we notice that research on AIDS and related areas has attracted the most attention over time and remained dominant as late as 2014. This indicates a lack of focus on other virus-related epidemics or pandemics, especially on the Coronavirus. Although we included keywords to capture studies related to the Coronavirus and chose the associated outbreak periods, we still fail to distinguish distinctive research clusters at different times. Even though the research is dominated mostly by the generic aspects of the diseases, we still identify economic issues in the publications; significantly, the emergence of such issues is more pronounced in recent times. We notice a significant development of the research focus on topics like 'cost-effectiveness.' Tracking back to an earlier period (2012-2014), we find that the research on 'cost-effectiveness' is essentially connected with 'impact,' 'united states' and 'epidemic.' The illustration highlights that a significant focus has been given to cost-effectiveness in dealing with epidemics and their impact. Presumably, this particular aspect of pandemic-epidemic economic studies is very relevant to the COVID-19 crisis, as it has substantial economic consequences due to lockdown policies. Lockdown prescriptions are working for COVID-19 infection control but are straining the financial system. As a result, to save the economy, governments are announcing stimulus packages, reducing the portion of health investment. Thus, cost-effectiveness studies become crucial in this scenario. However, the other aspects of economics evident in the cluster analysis of the previous section do not stand out as separate clusters in the period-by-period examination. The findings imply that economic aspects have been given importance to a certain extent; however, dedicated, in-depth treatment is still lacking.

Summary of the findings

This paper traces the publications on epidemic and pandemic studies in economics since 1974. We reviewed a total of 1,636 publications, which are indexed in the WoS database under the economics category. Our analysis has reported major aspects of research in this field, including the most influential publications, journals, authors, institutional affiliations, and geographic diversity or concentration. We have further analyzed the most relevant keywords in this area of research, their conceptual construction, and the research dynamics, from their historical evolution to the most recent developments. Accordingly, we answer the research questions specified in the Introduction section of this study.
The major findings are summarized according to the research questions as follows:

Who are the top researchers, and what are the leading journals, institutions, and countries investigating the economic aspects of pandemics or epidemics? By analyzing the relevant literature's bibliometric information, we find that Baser O. is the most publishing author, having published 13 scientific documents among 4,596 authors. However, Beutels P. appeared as the most impactful author in terms of total citations received and h-index. As a publication source, Value in Health is ranked as the most publishing journal but not the most impactful one: it published 347 scientific works during the analysis period, and its documents were cited 1,140 times, considerably fewer than those of the second most publishing journal, Pharmacoeconomics. The latter received 3,019 total citations for its 113 publications and possesses an h-index of 29, the highest among all the top listed publication sources. Comparably, the US is the most productive country in terms of the number of publications, and authors affiliated with US universities and research institutions are ranked as the most contributing in economics, topped by the University of Michigan.

Is there a geographical concentration of research, and how interconnected is it? Considering the importance of the distribution of knowledge, we further analyze, rank, and illustrate the degree and nature of collaboration. We find that the US has the highest number of collaborative publications, collaborating most with countries like Canada, Germany, and China. However, the ratio of such collaboration to total publications is relatively lower than that of the other top-producing countries on the list. In essence, the UK has a higher collaboration rate and a sizable publication count, and its collaborations are predominantly with other European countries like Italy, Germany, or Belgium. In contrast, China has comparatively diverse collaborative research outputs, with its most active collaborating partners being Germany, Singapore, and Australia. The findings indicate that researchers from developed countries dominate the publications, and there is, at least in part, a regional concentration in the collaboration of research activities. Using a three-field plot, we further illustrate the nature of the interrelation among the top journals, countries, and institutions. We find that for some countries (i.e., the UK, Canada, and the Netherlands), the overall country-specific scientific contributions are concentrated in a small number of institutions, whereas for others (i.e., the US, China), institutional concentration is relatively lower and the publications are contributed by many universities or research institutions.

What are the top keywords and the related prominent research clusters? Keyword analysis provides interesting findings, with impact, health, united states, risk, epidemic, and mortality among the most used keywords in the research. In line with the dominance of US-based studies found in the country analysis, we find a remarkable dominance even at the keyword level: the appearance of the keyword 'united states' provides further evidence of the geographical intensity of the research, although the other most used keywords do not explicitly represent the economic traits of the publications.
Moreover, we identify five main research clusters based on the association of keywords used in the publications. Of the five research clusters, only the two major ones capture interest in different economic research attributes, representing issues like economic growth, income, demand, or consumption. Furthermore, particular importance is given to 'policy,' 'cost-effectiveness,' 'strategies,' or 'management.' The findings imply that the publications historically focus on the general issues related to economics, to some extent.

How has research in the field of economics progressed, and what were the relative changes during different infectious disease outbreaks? By subdividing the publications into different timeframes, based on the timing of the various outbreaks related to COVID-19, we find that the disease most investigated by researchers is HIV/AIDS. Such intensity is not observed for the diseases caused by coronaviruses. Notably, after the sample's time-specific splitting, the collective dominance of the concerns directly connected to economics loses significance and thus becomes unidentifiable in the individual periods. The only economic aspect that remains substantial in the existing studies is 'cost-effectiveness.'

Future research agenda

Our multi-faceted review of the literature identifies some gaps that future research can consider filling. These issues have particular importance in dealing with current and future epidemics or pandemics. The recurrences of different coronavirus diseases (i.e., COVID-19, SARS, MERS, etc.) underline the importance of precautions and early response capabilities, and the scholarly community is expected to contribute more research. While medical and natural scientists usually dominate research on diseases and outbreaks, significant academic contributions are expected from social scientists, especially those doing research in economics or related areas. An appropriate economic response model would help governments and policymakers maintain resilience during such a crisis, besides maintaining public health safety. Accordingly, future research can focus on the cost and effectiveness of the different containment measures that have been imposed by countries across the globe; finding the most economical and cost-effective control measures would help policymakers implement such actions quickly and efficiently in the future. Besides, the timing and preparedness of policy interventions are pivotal in dealing with the adverse effects of a pandemic. Since we already have different examples from different countries, appropriate analysis and policy advocacy could be another avenue that future researchers may want to pursue. Moreover, researchers can attempt to analyze the efficacy of, and the shock to, the different new economic models followed by businesses in recent times, such as comparing the impact on circular, sharing, or platform economies. In-depth analysis and contrast will help business managers and policymakers develop appropriate business strategies and monitoring policies. On top of that, the problems and prospects of the digital economy can be a remarkable area to investigate, given the rapid disruptions incited further by the COVID-19 pandemic. Analysis of other emerging issues related to globalization, sustainability, or environmental economics through the lens of the COVID-19 crisis can also be interesting research topics for the future. We observe a multi-phase dynamic in the COVID-19 pandemic.
Existing studies do not highlight these issues; instead, they focus on a specific crisis. Perhaps economics researchers need to look into the crisis from different perspectives: analysis of the economic and financial turmoil created by earlier pandemics could provide better insights to deal with present and future disruptions. Finally, the COVID-19 pandemic varies regionally, from sector to sector, from one country to another, and from one region to another. The WHO global research roadmap also highlights the need to adjust research to local needs and realities, implying a need for collaborative research with comprehensive data from diverse regions. However, we fail to find such diversification of scientific activities, as evidenced by the high concentration of publications in developed countries and the nature of their collaborations. Hence, future studies call for combined, collaborative, and timely research efforts grounded in local experience to find the best practical solutions and bring an end to this twin health and economic crisis.
8,793.4
2021-05-03T00:00:00.000
[ "Economics" ]
Quasinormal modes of hot, cold and bald Einstein-Maxwell-scalar black holes

Einstein-Maxwell-scalar models allow for different classes of black hole solutions, depending on the non-minimal coupling function $f(\phi)$ employed, between the scalar field and the Maxwell invariant. Here, we address the linear mode stability of the black hole solutions obtained recently for a quartic coupling function, $f(\phi)=1+\alpha\phi^4$ [1]. Besides the bald Reissner-Nordstr\"om solutions, this coupling allows for two branches of scalarized black holes, termed cold and hot, respectively. For these three branches of black holes we calculate the spectrum of quasinormal modes. It consists of polar scalar-led modes, polar and axial electromagnetic-led modes, and polar and axial gravitational-led modes. We demonstrate that the only unstable mode present is the radial scalar-led mode of the cold branch. Consequently, the bald Reissner-Nordstr\"om branch and the hot scalarized branch are both mode-stable. The non-trivial scalar field in the scalarized background solutions leads to the breaking of the degeneracy between axial and polar modes present for Reissner-Nordstr\"om solutions. This isospectrality is only slightly broken on the cold branch, but it is strongly broken on the hot branch.

Introduction

The phenomenon of spontaneous scalarization of black holes has received much interest in recent years. In certain scalar-tensor theories, the well-known black holes of General Relativity (GR) remain solutions of the field equations, while in certain regions of the parameter space additional branches of black hole solutions arise that are endowed with scalar hair. Spontaneous scalarization can, for instance, be charge-induced, when a scalar field is suitably coupled to the Maxwell invariant F² [2]. Whereas the onset of the instability is universal, the properties of the resulting scalarized black holes and the branch structure of the solutions depend significantly on the coupling function f(φ) [1-15]. Einstein-Maxwell-scalar (EMs) models include two distinctive classes, depending on the choice of f(φ): models that have black holes with scalar hair only (and do not allow the GR solutions), and models that allow both the GR solutions and new hairy black holes. The latter splits into two further sub-classes: models wherein the GR black holes are unstable, in some region of parameter space, against the tachyonic instability that promotes scalarization, and models wherein the GR black holes are never afflicted by this instability. The last two sub-classes have been labelled as classes IIA and IIB, respectively, in [10]. Recently, we have considered an EMs model employing the quartic coupling function f(φ) = 1 + αφ⁴, which qualifies as class IIB, since the RN branch does not become unstable against scalar perturbations [1]. Instead, we have observed the following interesting pattern: close to the extremal RN solution, corresponding to the charge to mass ratio q = 1, a first branch of scalarized black holes emerges. This first branch then exists in the range q_min(α) ≤ q ≤ 1. At q_min(α), it bifurcates with a second branch of scalarized black holes, which extends throughout the interval q_min(α) ≤ q ≤ q_max(α) and ends in an extremal singular solution at q_max(α) > 1.
Considering the properties of these three branches, we have termed the RN branch the bald branch, since it does not carry scalar hair, while we have termed the first and second branches the cold and the hot branches, respectively, according to their black hole horizon temperatures. In this first study [1], we also addressed the radial modes of the black holes on these three branches, since the radial modes signal instabilities with respect to scalar perturbations and thus the onset of scalar hair. While our analysis showed that there are no unstable radial modes on the RN branch and on the hot scalarized branch, we found that the cold scalarized branch develops an unstable mode close to the point q = 1. This instability is present throughout the interval q_min(α) < q < 1 and ends with a zero mode at the bifurcation point q_min(α) with the hot scalarized branch. Thus, the cold scalarized branch is clearly unstable. However, it remained an open issue whether there are really two stable coexisting branches, the bald RN branch and the hot scalarized branch. This question has motivated our present investigation. Here, we study the linear mode stability of the black holes on all three branches. To that end, we calculate the lowest quasinormal modes for each type of mode. In particular, since the theory has scalar and vector fields coupled to gravity, we have to consider the perturbations of all these fields. In the presence of nontrivial background fields, i.e., when the black hole solutions carry scalar hair and electromagnetic charge, the different types of perturbations generically couple to each other, leading to scalar-led, electromagnetic-led and gravitational-led modes instead of the pure scalar, electromagnetic or gravitational modes one would find for a Schwarzschild black hole, for instance. Since parity-even (polar) and parity-odd (axial) perturbations decouple, we arrive at the following set of modes when expanding in spherical harmonics: polar scalar-led l ≥ 0 modes, axial and polar electromagnetic-led l ≥ 1 modes, and axial and polar gravitational-led l ≥ 2 modes.

In Section II, we present the EMs theory studied and the general set of equations. We specify the Ansatz for the spherically symmetric background solutions and give the resulting set of ordinary differential equations (ODEs). We discuss the asymptotic expansions for asymptotically flat black hole solutions with a regular horizon in Section III. Here, we also recall basic properties of the three branches of black hole solutions. In Section IV, we formulate the sets of perturbation equations for spherical, axial and polar perturbations, deferring further details to the Appendix. We present our numerical results for the quasinormal modes in Section V. Here, we show that no further unstable modes arise on any of the three branches. Moreover, we discuss the breaking of isospectrality, i.e., the splitting of the axial and polar electromagnetic-led and gravitational-led modes, caused by the presence of the scalar hair. We conclude in Section VI.

EMs theory

We consider the EMs theory described by the action

$S = \frac{1}{4}\int d^4x\,\sqrt{-g}\left[R - 2\,\partial_\mu\phi\,\partial^\mu\phi - f(\phi)\,F_{\mu\nu}F^{\mu\nu}\right]$, (1)

where R is the Ricci scalar, φ is a real scalar field, and $F_{\mu\nu}$ is the Maxwell field strength tensor. The coupling between the scalar and Maxwell fields is determined by the coupling function f(φ), for which we assume the quartic dependence

$f(\phi) = 1 + \alpha\,\phi^4$. (2)

For a positive coupling constant α, the global minimum of the coupling function is at φ = 0.
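A quick symbolic check makes the class IIB behavior of this coupling transparent: both the first and the second derivative of f vanish at φ = 0, so φ = 0 remains consistent with the scalar field equation and no tachyonic mass term arises for scalar perturbations of the RN background. A minimal sympy sketch (our illustration, not from the paper):

```python
import sympy as sp

alpha, phi = sp.symbols("alpha phi", positive=True)
f = 1 + alpha * phi**4            # the quartic coupling function, Eq. (2)

fdot = sp.diff(f, phi)            # 4*alpha*phi**3
fddot = sp.diff(f, phi, 2)        # 12*alpha*phi**2

print(fdot.subs(phi, 0))          # 0 -> phi = 0 solves the scalar equation,
print(fddot.subs(phi, 0))         # 0 -> no tachyonic effective mass term,
                                  #      consistent with class IIB behavior
```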
The set of coupled field equations is obtained via the variational principle and reads

$G_{\mu\nu} = 2\left(T^\phi_{\mu\nu} + T^{\rm EM}_{\mu\nu}\right)$, (3)

$\nabla_\mu\left(f(\phi)\,F^{\mu\nu}\right) = 0$, (4)

$\Box\phi = \frac{1}{4}\,\dot f(\phi)\,F_{\mu\nu}F^{\mu\nu}$, (5)

where $\dot f(\phi) = df(\phi)/d\phi$, and $T^\phi_{\mu\nu} = \partial_\mu\phi\,\partial_\nu\phi - \frac{1}{2}g_{\mu\nu}\,\partial_\alpha\phi\,\partial^\alpha\phi$ and $T^{\rm EM}_{\mu\nu} = f(\phi)\left(F_{\mu\alpha}F_\nu{}^\alpha - \frac{1}{4}g_{\mu\nu}\,F_{\alpha\beta}F^{\alpha\beta}\right)$ are the scalar stress-energy tensor and the electromagnetic stress-energy tensor, respectively. To construct static spherically symmetric EMs black holes, we employ the line element

$ds^2 = -g(r)\,dt^2 + \left(1 - \frac{2m(r)}{r}\right)^{-1} dr^2 + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)$,

where the metric functions g and m depend on the radial coordinate r. The black holes are supposed to carry electric charge and, in the scalarized case, also scalar charge. We therefore parametrize the gauge potential and the scalar field by

$A_\mu dx^\mu = a_0(r)\,dt, \qquad \phi = \phi_0(r)$,

where a_0 and φ_0 are the electric and the scalar field function, respectively, which depend on the radial coordinate r. Inserting this Ansatz into the general set of EMs equations (3)-(5), we obtain a set of coupled ODEs for the functions g, m, a_0 and φ_0, where we introduce the function δ(r) via $g = \left(1 - \frac{2m}{r}\right)e^{-2\delta}$. The Maxwell equation can be integrated once, yielding a first integral for the electromagnetic field,

$a_0' = \frac{Q\, e^{-\delta}}{f(\phi_0)\, r^2}$,

where Q is the electric charge of the black holes. With this first integral, the above set of equations can be simplified.

EMs black holes

Here we briefly recall the properties of the static spherically symmetric electrically charged EMs black hole solutions with the quartic coupling function (2). First of all, the RN black hole, which is also a solution of the EMs equations, is given by

$g(r) = 1 - \frac{2M}{r} + \frac{Q^2}{r^2}, \qquad \delta(r) = 0, \qquad a_0(r) = -\frac{Q}{r}, \qquad \phi_0(r) = 0$.

The scalarized EMs solutions are obtained numerically [1]. Their asymptotic behavior at infinity yields the global charges: the black hole mass M, the electric charge Q, and the scalar charge Q_s, the latter read off from the 1/r decay of φ_0. Note that there is no conservation law for the scalar field, and that the existence of a horizon imposes a non-trivial relation Q_s = Q_s(M, Q) [34]. Close to the horizon r = r_H, the solution functions possess power-series expansions, where φ_0(r_H) = φ_H and a_0(∞) − a_0(r_H) = Ψ_H denote the value of the scalar field and the electrostatic potential at the horizon, respectively. Further physically relevant horizon properties are, for instance, the temperature T_H and the horizon area A_H, respectively given by

$T_H = \frac{1}{4\pi}\, e^{-\delta(r_H)}\, \frac{1 - 2m'(r_H)}{r_H}, \qquad A_H = 4\pi r_H^2$.

The EMs black hole solutions can be characterized by three dimensionless quantities: their charge to mass ratio q, their reduced horizon area a_H and their reduced horizon temperature t_H:

$q = \frac{Q}{M}, \qquad a_H = \frac{A_H}{16\pi M^2}, \qquad t_H = 8\pi\, T_H M$.

In the following, we will focus on the case α = 200, since for other values of the coupling constant the general properties of the black holes are qualitatively similar. We illustrate the EMs black hole branches in Fig. 1, where we exhibit their reduced area a_H (left) and reduced temperature t_H (right) versus their charge to mass ratio q [1]. The RN black holes are shown in black. The first scalarized branch emerges close to the extremal RN black hole solution. Along this branch, the charge to mass ratio q decreases, while the reduced area a_H and temperature t_H increase. At a minimal value of q, the first branch bifurcates with the second branch. Along the second branch, a_H decreases while t_H increases monotonically with increasing q. Thus, the first branch represents the cold (blue) branch, and the second branch the hot (red) branch.

Linear perturbation theory

Previously, we have addressed the radial stability of these EMs black holes by looking for radially unstable modes [1]. Our analysis has revealed radial stability for the RN branch and for the hot scalarized branch. In contrast, for the cold scalarized branch, we have found an unstable radial mode. Here, our goal is more ambitious, since we want to fully clarify the linear mode stability of the RN and the hot scalarized branches.
As we shall see, these are both stable. To show linear mode stability, we evaluate the quasinormal mode spectrum of the black holes on these branches. For completeness, we consider the spectrum on all three branches, including the unstable cold scalarized branch. The symmetry of the background solutions suggests considering perturbations in the following three cases: purely spherical perturbations (l = 0), axial or odd-parity ((−1)^{l+1}) perturbations, and polar or even-parity ((−1)^l) perturbations.

Spherical perturbations

To study the spherical perturbations, we have to perturb all the fields: the scalar field, the electromagnetic field and the metric. For the scalar and the electromagnetic field, we introduce the perturbation functions φ_1 and F_{a_0}; for the metric, we introduce the perturbation functions F_t and F_r. We use ε as the control parameter in the linear expansion. The complex quantity ω = ω_R + iω_I corresponds to the sought-after eigenvalue of the respective quasinormal mode: its real part is the oscillation frequency; its imaginary part is the damping rate. Inserting the Ansatz (17)-(19) into the general set of field equations (3)-(5) and utilizing the set of equations for the background functions (10) leads to the Master equation for the spherical perturbations. This is a single Schrödinger-like ODE for the function Z = rφ_1,

$\frac{d^2 Z}{dR_*^2} + \left(\omega^2 - U_0\right) Z = 0$,

which involves the tortoise coordinate R_*, defined by $dR_*/dr = e^{\delta}/\left(1 - 2m/r\right)$, and the potential U_0, whose lengthy explicit form is determined by the background functions. To determine the quasinormal modes, we need to impose an adequate set of boundary conditions. As the perturbation propagates toward infinity, we have to impose outgoing wave behavior, i.e., $Z \to A^+_\phi\, e^{i\omega R_*}$ for r → ∞. As the perturbation propagates toward the horizon, we have to impose ingoing wave behavior, i.e., $Z \to A^-_\phi\, e^{-i\omega R_*}$ for r → r_H. In these expansions, the quantities A^±_φ denote arbitrary amplitudes for the perturbation, while all other terms are fixed by the background solution.

Axial perturbations

When considering axial perturbations, the scalar field is not affected, due to symmetry; only the electromagnetic field and the metric enter. An appropriate Ansatz for the electromagnetic field involves the perturbation function W_2, expanded in terms of the standard spherical harmonics Y_{lm}. The corresponding Ansatz for the metric introduces the perturbation functions h_0 and h_1. Inserting this Ansatz into the field equations (3)-(5) leads to a set of coupled differential equations, which is shown in the Appendix. It involves first-order equations for the metric perturbation functions h_0 and h_1, and a second-order equation for the electromagnetic perturbation function W_2. This system can be put into the first-order form

$\frac{d\Psi_A}{dr} = M_A\, \Psi_A$,

where $\Psi_A$ collects the perturbation functions h_0, h_1, W_2 and the derivative W_2', and M_A denotes a 4 × 4 matrix which contains the background functions, the angular number l, and the eigenvalue ω of the quasinormal mode. To determine the axial quasinormal modes, as for the radial ones, we have to impose the proper set of boundary conditions at infinity (outgoing) and at the horizon (ingoing). These can be found in the Appendix.

Polar perturbations

Since the radial perturbations are a set of polar perturbations (l = 0), it is clear that the general polar perturbations involve again all the fields.
For the scalar field, we introduce the perturbation function φ_1 and a corresponding Ansatz. For the electromagnetic field, we employ the perturbation functions a_1, W_1 and V_1. Finally, for the metric, we introduce the perturbation functions N, H_1, L and T. Inserting this Ansatz into the field equations (3)-(5) then again results in a set of coupled differential equations, which are shown in the Appendix. In fact, this system of equations can be simplified when one introduces the new functions F_0, F_1 and F_2, which are defined in terms of the perturbation functions W_1, V_1 and a_1 (as discussed in the Appendix). The resulting Master equations can again be put in the vectorial form Ψ_P′ = M_P Ψ_P. The vector Ψ_P now contains 6 components, and the 6 × 6 matrix M_P depends again on the background functions, the angular number l, and the eigenvalue ω of the quasinormal mode. We remark that for a RN black hole, the equations for the scalar perturbations decouple. However, the equations for the electromagnetic perturbations couple to the equations for the metric perturbations. When the charged black hole also carries scalar hair, all equations are coupled to each other. Again, we must impose the proper boundary conditions at infinity (outgoing) and at the horizon (ingoing) to determine the quasinormal modes. As in the axial case, these are shown in the Appendix.

Quasinormal mode spectrum

In the following, we briefly discuss the method used to extract the quasinormal mode spectrum of the EMs black holes and recall the nomenclature for the modes. Then, we present our numerical results for the l = 0, l = 1 and l = 2 cases and discuss isospectrality breaking.

Numerics and nomenclature

We calculate the quasinormal modes of the black holes on all three branches, bald, cold and hot, making sure that the respective spectra match when the background solutions approach each other. Since the RN quasinormal modes are well known (see, e.g., [35-38]), this provides an independent check of the numerics used, as well as a first guess for the spectrum of the cold branch, which is typically close to the RN branch for large values of the electric charge. All calculations are performed for a coupling constant α = 200. In a first step, we obtain the background solutions with high precision, solving numerically the set of ODEs (10), subject to the boundary conditions following from the expansions at infinity (13) and at the horizon (14). For this purpose, we employ the solver COLSYS [39], which uses a collocation method for boundary-value ODEs together with a damped Newton method of quasi-linearization. The problem is linearized and solved at each iteration step, employing a spline collocation at Gaussian points. The solver features an adaptive mesh selection procedure, refining the grid until the required accuracy is reached. Once the background solutions are known, we follow a procedure analogous to the one we have used before to calculate the quasinormal modes of hairy black holes [32,33,40-42]. We split the space-time into two regions, the inner region r_H + ε_H ≤ r ≤ r_J, and the outer region r_J ≤ r ≤ r_∞. In the inner region, we impose the respective ingoing wave behavior; in the outer region, we impose the respective outgoing wave behavior. Then, we calculate sets of linearly independent solutions numerically and match them at the common border r_J of the two regions.
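To make the matching logic concrete, here is a minimal toy sketch for a single Schrödinger-like master equation. Everything in it is schematic and assumed: the potential U(x) is a placeholder with the correct fall-off rather than the actual U_0 built from the background functions, the leading-order wave forms assume a time dependence e^(-iωt), and a simple initial-value integrator stands in for the boundary-value solver used in the actual computation.

```python
# Schematic two-region matching for a master equation
#   Z'' + (omega^2 - U(x)) Z = 0
# in a tortoise-like coordinate x.  A quasinormal mode is a complex
# omega for which an ingoing solution from the left matches an
# outgoing solution from the right with continuous Z and Z'.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize


def U(x):
    # placeholder potential barrier with U -> 0 at both ends
    return 0.3 * np.exp(-(x / 5.0) ** 2)


def mismatch(params, x_h=-40.0, x_inf=40.0, x_j=0.0):
    omega = params[0] + 1j * params[1]
    rhs = lambda x, y: [y[1], (U(x) - omega**2) * y[0]]
    # inner region: ingoing wave Z ~ exp(-i omega x) as x -> -infinity
    z0 = np.exp(-1j * omega * x_h)
    inner = solve_ivp(rhs, (x_h, x_j), [z0, -1j * omega * z0],
                      rtol=1e-10, atol=1e-12)
    # outer region: outgoing wave Z ~ exp(+i omega x) as x -> +infinity
    z1 = np.exp(1j * omega * x_inf)
    outer = solve_ivp(rhs, (x_inf, x_j), [z1, 1j * omega * z1],
                      rtol=1e-10, atol=1e-12)
    zi, dzi = inner.y[0][-1], inner.y[1][-1]
    zo, dzo = outer.y[0][-1], outer.y[1][-1]
    # the Wronskian of the two solutions vanishes at a quasinormal mode
    return abs(zi * dzo - zo * dzi) / (abs(zi * zo) + 1e-300)


# search near an initial guess, e.g. a known RN-like mode
res = minimize(mismatch, x0=[0.5, -0.1], method="Nelder-Mead")
print("omega ~", res.x[0], "+", res.x[1], "j")
```

In practice the exponentially growing character of damped modes makes direct integration delicate, which is why higher-order asymptotic expansions and careful placement of the matching point are used in the actual computation; the sketch only conveys the matching logic.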
The eigenvalue ω of the quasinormal modes is found when the functions and their derivatives are continuous at the matching point r_J. We follow a common nomenclature for quasinormal modes, which is used when scalar and electromagnetic fields are coupled in the background solutions. Without such a coupling in the background solutions, one would simply obtain scalar, electromagnetic or gravitational modes by solving the respective scalar, electromagnetic or gravitational perturbation equations, since the different types of perturbations would not be coupled to each other. However, when all fields are already present in the background solutions, the different types of perturbations couple. By taking the respective charges, Q_s and Q, to zero, the perturbations decouple again. Therefore, we employ a nomenclature that reveals this decoupling limit.
i. Modes that are connected with purely scalar perturbations are called scalar-led modes. Typically they have dominant amplitude A±_φ.
ii. Modes that are connected with purely electromagnetic perturbations are called EM-led modes. Typically they have dominant amplitude A±_F.
iii. Modes that are connected with purely gravitational perturbations are called grav-led modes. Typically they have dominant amplitude A±_g.
Scalar-led modes exist for l ≥ 0, EM-led modes for l ≥ 1 and grav-led modes for l ≥ 2. In the following, we present our results for the quasinormal modes for the cases l = 0, 1 and 2, successively.

Spectrum of l = 0 quasinormal modes

The l = 0 perturbations are the most interesting ones, since they are the ones that discriminate between stable and unstable EMs black hole solutions. In particular, they feature an unstable mode on the cold black hole branch, whereas the bald and the hot black hole branches possess only stable modes, as our study shows. We exhibit the lowest scalar-led l = 0 modes vs. the charge to mass ratio q in Fig. 2, showing the scaled real part of the frequency, ω_R/M, in Fig. 2(a) and the scaled imaginary part, -ω_I/M, in Fig. 2(b). A positive imaginary part signals instability, whereas a negative imaginary part yields the damping time of the mode. Following the color coding of Fig. 1, the bald RN branch is shown in black, the cold EMs branch in blue, and the hot EMs branch in red. As seen in Fig. 2, the fundamental RN mode has only a weak dependence on q, deviating only slightly from the Schwarzschild mode all the way up to the extremal black hole. The full RN branch is also free of unstable modes. The cold EMs branch features an unstable scalar-led mode throughout its domain of existence. The mode tends to vanish at the largest charge to mass ratio of the cold branch, q = 1. The purely imaginary frequency grows as q decreases. At a certain value of q it reaches a maximum, from where it decreases again towards the end point of the branch, where it becomes a zero mode. Here the bifurcation with the hot branch is encountered, and thus the minimal charge to mass ratio q_min of both EMs branches is reached. Continuity then requires that at q_min the hot EMs branch also has a zero mode. Besides the unstable mode, the cold branch features a stable scalar-led mode, which is close to the scalar-led mode of the RN black hole. Along this branch, from q = 1 to q_min, the frequency ω_R/M first decreases slowly and then increases, becoming larger than the frequency of the RN branch, while the damping rate |ω_I/M| increases monotonically.
The corresponding stable scalar-led modes of the hot EMs branch extend from q_min to q_max, where an extremal singular EMs solution is reached. The fundamental branch starts at the zero mode at q_min, from where both the frequency ω_R/M and the damping rate |ω_I/M| rise at first almost vertically. They continue to rise monotonically, reaching final values above those of the extremal RN branch. The first overtone branch starts at the stable scalar-led mode at the bifurcation point. Its frequency ω_R/M also rises monotonically, but its damping rate |ω_I/M| exhibits an overall, though not monotonic, decrease. The reason we exhibit not only the fundamental stable mode of the hot EMs branch but also the first overtone is to demonstrate the continuity of the modes at the bifurcation with the cold EMs branch. The zero mode of the cold EMs branch turns into the fundamental mode of the hot EMs branch, whereas the fundamental mode of the RN branch can be followed via the first stable mode of the cold EMs branch to the first stable overtone of the hot EMs branch. Of course, all three branches feature sequences of (further) overtones, not studied here in detail. In the RN case, they are well known. There, the rapidly damped modes possess several peculiar features. For instance, the higher modes of the non-extremal black holes have been observed to spiral towards the modes of the extremal black hole with increasing q [35,36].

Spectrum of l = 1 quasinormal modes

The l = 1 modes consist of polar scalar-led modes and both axial and polar EM-led modes. We start the discussion with the scalar-led modes, shown in Fig. 3. Again, the lowest RN mode changes smoothly with increasing charge to mass ratio q. The frequency ω_R/M increases monotonically, while the damping rate |ω_I/M| remains almost constant, showing a slight decrease towards the extremal endpoint q = 1. The lowest scalar-led l = 1 mode of the cold EMs branch changes smoothly from the q = 1 point to the bifurcation point with the hot EMs branch at q_min, exhibiting a monotonic decrease of the frequency ω_R/M and a monotonic increase of the damping rate |ω_I/M|. Along the hot EMs branch, the change of the eigenvalue with increasing q is no longer monotonic. As compared to the mode of the extremal RN solution, the frequency ω_R/M of the extremal EMs solution is lower and the damping rate |ω_I/M| is higher. In Fig. 4 we exhibit the axial and polar EM-led l = 1 modes. The RN black holes are known to exhibit isospectrality, i.e., the axial and polar EM-led modes are degenerate. The EM-led RN modes have a q-dependence analogous to that of the scalar-led RN modes, but their absolute values differ somewhat. Starting from the q = 1 point, the axial and polar modes of the cold EMs branch follow the RN modes closely at first. The frequencies ω_R/M of both axial and polar modes are slightly higher than the RN frequencies, but do not deviate strongly even at the bifurcation point q_min with the hot EMs branch. The damping rates |ω_I/M| start to deviate from the RN damping rate earlier, showing opposite behavior for the axial and polar modes. For the axial modes, the damping rates increase monotonically, whereas for the polar modes, a more sinusoidal pattern is seen. Along the hot EMs branch, from q_min ≤ q ≤ q_max, the frequencies ω_R/M of both axial and polar EM-led modes rise monotonically, with the polar frequencies rising almost twice as much as the axial ones.
The damping rates |ω_I/M| again change in a non-monotonic manner, first rising from the bifurcation point and then exhibiting some oscillation. At the extremal EMs solution, the damping rates reach similar values, corresponding to about twice the extremal RN value.

Spectrum of l = 2 quasinormal modes

We now turn to the l = 2 modes, which consist of polar scalar-led modes, axial and polar EM-led modes, and additionally axial and polar grav-led modes. Thus, we now have five types of modes for generic EMs black holes. For the RN case, however, there is again isospectrality of the axial and polar EM-led modes, and there is also isospectrality of the axial and polar grav-led modes. Thus, basically only three types of modes are left, which all show a very similar pattern. It also resembles the pattern seen for l = 0 and l = 1. We will therefore now focus the discussion on the more interesting and varied behavior of the modes of the EMs black holes. As seen in Fig. 5, the scalar-led l = 2 modes change monotonically on the cold EMs branch. The frequency ω_R/M decreases, and the damping rate |ω_I/M| increases toward the bifurcation point q_min. Along the hot EMs branch, the frequency ω_R/M rapidly reaches a minimum and then increases, while the damping rate |ω_I/M| exhibits a sinusoidal behavior. At the extremal EMs solution, both the frequency and the damping rate are higher than those at the extremal RN solution. The EM-led l = 2 modes are exhibited in Fig. 6. They show a pattern that is very similar to that of the EM-led l = 1 modes, although the numerical values of the frequencies and damping rates differ, of course. The grav-led l = 2 modes are exhibited in Fig. 7. The frequencies ω_R/M on the axial and polar cold EMs branches follow the RN branch very closely almost up to the bifurcation point q_min. This also holds for the damping rate |ω_I/M| on the axial cold EMs branch. Only the damping rate along the polar cold EMs branch starts to deviate from the RN one somewhat earlier. Along the hot EMs branch, the frequencies ω_R/M show opposite behavior: the frequency increases monotonically along the axial branch, while it decreases monotonically along the polar branch. The damping rate |ω_I/M| also increases monotonically along the axial branch, whereas it exhibits a strong sinusoidal behavior along the polar branch.

Isospectrality

As addressed in the above discussion, for RN black holes isospectrality of axial and polar quasinormal modes holds. Here, axial and polar modes coincide for EM-led modes as well as for grav-led modes, for any allowed angular number l. Moreover, in the extremal case, the EM-led modes with angular number l agree closely with the grav-led modes with angular number l + 1 [35,36]. As seen above and once more highlighted in Fig. 8, this isospectrality survives to a large extent along the cold EMs branch, since the EMs modes follow the RN modes closely over a large range of their existence. In view of Fig. 1, this may, however, not be too surprising, since the cold EMs branch itself follows the RN branch closely almost up to the bifurcation point q_min. In contrast, the hot EMs branch reaches far beyond the RN branch and thus also far beyond the cold EMs branch. Its modes are therefore expected to show a generic behavior of their own. In particular, this branch features a significant background scalar field.
The presence of a background scalar field, however, leads to a coupling of all the different perturbations (scalar, vector and gravitational) in the polar modes, whereas the axial modes remain free of scalar perturbations. Therefore it should not be surprising that isospectrality gets strongly broken along the hot EMs branch, as illustrated in Fig. 8.

Conclusion

EMs theories represent an interesting, simple setting in which to study generic properties of spontaneous scalarization of black holes and, more generally, the interplay between bald and hairy black holes. Although the presence of scalar hair is charge-induced, many similarities with the more realistic curvature-induced spontaneous scalarization and scalar hairy black holes have been observed, lending weight to EMs studies beyond their intrinsic theoretical interest. Here, we have investigated the linear mode stability of EMs black hole solutions with a quartic coupling function, representing type IIB characteristics: scalarized black holes are present, while the RN branch remains stable throughout. In particular, besides the bald RN branch, the system features a cold EMs branch and a hot EMs branch. The cold EMs branch exists in the interval q_min(α) ≤ q ≤ 1, and the hot EMs branch in q_min(α) ≤ q ≤ q_max(α). The EMs black holes on the cold branch have an unstable scalar-led mode, present for all q_min(α) ≤ q ≤ 1. At q_min(α), this instability becomes a zero mode that is shared with the hot branch. By continuity, the hot branch then exhibits a stable scalar-led mode, starting at q_min(α). In our mode analysis, we have calculated the lowest quasinormal modes of all types of perturbations (scalar-led modes, axial and polar EM-led modes, and axial and polar grav-led modes) for all three (bald, cold, hot) branches. For the stable modes on the cold EMs branch, we have found close similarity with the respective RN modes, almost up to the bifurcation point q_min(α) with the hot EMs branch. Since the cold EMs branch itself follows the RN branch closely, this behavior could have been anticipated. In contrast, the modes of the hot EMs branch show a wide variation and large deviations from the modes of the RN branch. But again, the hot EMs branch reaches far beyond the RN branch (for sufficiently large α). RN black holes possess degenerate axial and polar modes in the EM-led and grav-led sectors. As expected, this isospectrality gets broken in the presence of a non-trivial background scalar field, since scalar perturbations contribute in the polar case but not in the axial case. Not surprisingly, the breaking of isospectrality is very limited on the cold EMs branch, but becomes very strong on the hot EMs branch, away from the bifurcation point. The analysis here has focused on an illustrative value of the coupling, α = 200, and on l = 0, 1, 2, but the results concerning instability are generic for all values of α; moreover, it is not to be expected that higher l modes introduce more instabilities. Thus our analysis has shown that there are no further unstable modes apart from the unstable scalar-led l = 0 mode of the cold EMs branch. This implies that there are two linearly mode-stable branches in the system, the bald RN branch and the hot EMs branch. While both branches have large regions in parameter space where their black holes are the only existing black holes, there is also an overlap region q_min(α) ≤ q ≤ 1. Here, both branches coexist and both are mode-stable.
However, when these branches are considered from a thermodynamical point of view, their reduced horizon areas differ, except at one critical point q_th(α), where the two curves cross, as seen in Fig. 1. This might suggest that the RN black holes represent the physically preferred state for 0 < q < q_th(α), whereas the hot EMs black holes represent the physically preferred state for q_th(α) < q ≤ q_max(α). Here, dynamical calculations of the EMs evolution equations might give further insight into the interesting question of which type of black hole will represent the end state of collapse in the overlap region.

A.1 Axial perturbations

By substituting the Ansatz (26) and (25) into the field equations (3)-(5), we obtain a set of differential equations for the axial perturbations. This system of coupled differential equations consists of two first order differential equations for h_0 and h_1, plus a second order differential equation for W_2. A perturbation with an outgoing wave behavior satisfying this system of equations has to follow the corresponding asymptotic expansion at infinity; in addition, close to the horizon, a perturbation with an ingoing wave behavior has to satisfy the respective near-horizon expansion. The asymptotic expansion is determined by two independent amplitudes, one related to the space-time perturbation, A±_g, and another related to the electromagnetic perturbation, A±_F. On the other hand, polar perturbations with an ingoing wave behavior at the horizon satisfy an analogous near-horizon expansion.
The Effects of Adding Heartwood Extractives from Acacia confusa on the Lightfastness Improvement of Refined Oriental Lacquer

In this study, a renewable polymeric material, refined oriental lacquer (ROL), used as a wood protective coating, and the Acacia confusa Merr. heartwood extractive, added as a natural photostabilizer for improving the lightfastness of ROL, were investigated. The best extraction conditions for preparing the heartwood extractives and the most suitable amount of addition (0, 1, 3, 5, and 10 phr) were investigated. The lightfastness indices, including brightness difference (ΔL*), yellowness difference (ΔYI), and color difference (ΔE*), as well as the applied properties of the coatings and films, were measured. In the preparation of the heartwood extractives, the yield of extractives with acetone solvent was 9.2%, higher than the 2.6% obtained with the toluene/ethanol solvent, and the acetone extract also had the most abundant total phenolic content (535.2 mgGAE/g) and total flavonoid content (252.3 μgRE/g). According to the SEM inspection and FTIR analysis, plant gum migration to the film surface and cracking occurred after UV exposure. These photodegradation phenomena of the ROL films were reduced after the addition of heartwood extractives. Among the different amounts of heartwood extractives, the 10 phr addition was the best choice; however, the 1 phr addition already showed noticeable lightfastness improvement. The drying times of ROL were extended and the film performance worsened with higher additions of heartwood extractives. Among the ROL films with different heartwood extractive additions, the film with 1 phr addition had superior film properties, regarding adhesion and thermal stability, compared with the films of raw oriental lacquer.

Compared with other wood coatings, such as solvent-borne coatings (including nitrocellulose lacquer, oil-modified alkyd resin, polyurethane resin, drying oil, etc.) and water-borne wood coatings, the ROL film has wide applications on wooden furniture and handicrafts in Taiwan, owing to its wax-like gloss, elegant beauty, and excellent durability. It also exhibits biodegradability, identified as an important advantage of biomaterials from a sustainability perspective [9]. However, the inferior lightfastness of ROL, which results from its primary component, catechol derivatives, needs to be improved for advanced uses. Under strong ultraviolet (UV) radiation, polymeric materials produce excited states, free radicals, peroxides, singlet oxygen, etc. during photodegradation or photooxidation [10,11]. Under UV irradiation, the catechol derivatives in the ROL film network first produce oxy radicals (RO*) and peroxyl radicals (ROO*) through photooxidation. These radicals then react further with the polymer chains (RH) and generate hydroxy (OH) and hydroperoxide (OOH) groups. In addition, carbonyl groups are produced after the internal rearrangement of the alkoxy allyl biradical in the side chains of the catechol derivatives [12]. These polymer degradations lead to cracking, chalking, and gloss reduction of the films. Furthermore, white speckles are often observed in the shallow layer of the films after UV irradiation. This phenomenon is due to polysaccharides emerging from the interior to the surface through the damaged network structure [2,13,14].
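The radical chain just outlined can be summarized schematically. The hydrogen-abstraction steps below are the generic textbook ones for polymer photooxidation, written here as an illustration rather than taken from the cited reports:

```latex
\begin{align*}
\mathrm{RH} &\xrightarrow{\ h\nu,\ \mathrm{O_2}\ } \mathrm{RO}^{*},\ \mathrm{ROO}^{*}
   && \text{(radical generation on the catechol network)}\\
\mathrm{RO}^{*} + \mathrm{RH} &\longrightarrow \mathrm{ROH} + \mathrm{R}^{*}
   && \text{(hydroxy group formation)}\\
\mathrm{ROO}^{*} + \mathrm{RH} &\longrightarrow \mathrm{ROOH} + \mathrm{R}^{*}
   && \text{(hydroperoxide formation, chain propagation)}
\end{align*}
```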
To inhibit polymer photodegradation, several types of photostabilizers are widely used, including antioxidants, UV absorbers, hindered amine light stabilizers (HALS), UV screeners, singlet oxygen scavengers, and excited-state quenchers [10,15-19]. Hong et al. [12] mixed a benzotriazole UV absorber and HALS with OL to enhance lightfastness. In earlier studies [13,14,19-21], the effects of a UV screener (titanium dioxide), different types of HALS, and antioxidants on the lightfastness of ROL were examined. However, many photostabilizers are synthetic, petroleum-based compounds, which are potentially harmful to human health and the environment. Therefore, photostabilizers derived from the extractives of natural renewable lignocellulosic biomass have gained increasing attention. The extractives are a diverse group of compounds, including terpenes, polyphenolics, essential oils, fats, waxes, etc. [22]. Some special or highly concentrated extractives exist in specific plants and can be used for diverse applications. For example, phenolic compounds such as 4-vinylguaiacol, vanillin, 4-hydroxybenzoic acid, sterols, terpenoids, and fatty compounds are found in poplar trees and can be used to determine the influence of individual compounds on biochemical processes such as enzymatic hydrolysis [23] or alcoholic fermentation for using poplar as an energy source for liquid biofuels [24,25]. The presence of proanthocyanidins in coniferous woods confers a higher antioxidant capacity [26]. Aside from these compounds, some extractives such as abietic acid, α-pinene, pinosylvin, pinoresinol, gallic acid, and α-, β-, and γ-thujaplicin are also found in the wood cell wall [27]. In addition, many studies have shown that plant products including polyphenolic compounds such as flavonoids have antioxidant and free radical scavenging activities [28-31]. The flavonoids can absorb UV light, quench singlet oxygen, and restrain radical chain reactions and photooxidation [32]. Acacia confusa Merr. is widely distributed in the Taiwan lowlands and hills, covering an area of about 10,748 ha with a total stock volume of about 1,540,000 m3 [33]. The plant has been extensively used as a traditional commodity and a charcoal-making material and, owing to its beautiful timber texture, has recently become popular for high-quality furniture in Taiwan. In addition, the heartwood extractive of A. confusa is abundant in various flavonoids, which can effectively restrain wood photodegradation [34-41]; in particular, the flavonols okanin and melanoxetin show remarkable multiple photostabilities [42]. Therefore, the heartwood extractives of A. confusa have the potential to be developed as natural photostabilizers for application in the coatings industry. In this study, the best extraction conditions of A. confusa heartwood in terms of yield, total phenolic content (TPC), and total flavonoid content (TFC), using acetone and toluene/ethanol as solvents, were selected, and the extractive additions for enhancing the lightfastness of ROL are further discussed.

Materials

The raw oriental lacquer (OL) of R. succedanea was obtained from the Long-Nan Museum of Natural Lacquer Ware (Nantou, Taiwan). The composition analysis was performed in our laboratory according to the CNS 2810 Standard [43]. The OL was composed of 54.1% laccol, 34.3% water, 7.2% laccase and polysaccharides, and 4.4% nitrogenous compounds.
The heartwood of A. confusa trees about 20-30 years old was sampled from the Huisun experimental forest of National Chung Hsing University in Nan-Tou county, Taiwan. The dried wood was cut into small pieces and ground into powder with a particle size of less than 10 mesh. The specimens were prepared according to the CNS 9007 Standard [44], and the substrates were as follows: Cryptomeria japonica radial-section planks (moisture content of 11.0%); S-16 wear-resistant steel plates (Jiin Liang Industrial Inc., Taipei, Taiwan); glass plates (Ming Tai Glass Co., Taichung, Taiwan); and tin-coated iron plates (Sheng Huei Instrument Corp., Taichung, Taiwan).

Manufacture of ROL

At a temperature of 40 °C with 60 rpm stirring, 400 g of OL was cooked in a 1000 mL glass container until the water content was reduced to 3.5%, and the ROL was obtained.

Manufacture of Heartwood Extractives

The A. confusa powders (200 g) were soaked in 70% acetone and in a mixture of toluene:ethanol (2:1, v/v), respectively, at a ratio of 1:10 (w/v), at 25 °C for 7 days each time, repeated 3 times. The toluene/ethanol system was used as a substitute for harmful benzene, following Antczak et al. [45]. The extractive was filtered through Whatman #1 filter paper and then concentrated under vacuum. The resulting powder extractive was dried in an oven at 30 ± 5 °C, and the yield was calculated.

Measurement of Total Phenolic Content (TPC)

The TPC was determined according to the Folin-Ciocalteu method [46], with gallic acid as the standard. Folin-Ciocalteu reagent (100 µL) was reacted with the A. confusa heartwood extractive (0.01 mg/mL) for 5 min. Then 100 µL of 20% Na2CO3 solution was added, and the mixture was kept at 25 °C for 8 min and separated by centrifugation at 12,000 rpm for 10 min. The absorbance of the supernatant was measured with a Tecan Sunrise ELISA reader (Tecan, Chapel Hill, NC, USA) at 730 nm. A calibration curve was drawn, and the TPC was expressed as gallic acid equivalents (GAE) in mgGAE/g.

Measurement of Total Flavonoid Content (TFC)

The TFC was measured according to the AlCl3 method [47,48]. The A. confusa heartwood extractive (150 µg/mL) was mixed with 150 µL of 2% AlCl3 solution. The absorbance was recorded with the ELISA reader at 450 nm after 30 min of incubation. Rutin was used for the calibration curve, and the results were expressed as rutin equivalents (RE) in µgRE/g.

Manufacture of Heartwood Extractive-Containing ROL

The heartwood extractives were added to the ROL at 0, 1, 3, 5, and 10 phr, based on the solid content of the ROL. The mixtures were stirred evenly at 120 rpm for 10 min, and the properties of the coatings and films were evaluated.

Evaluation of Coating Properties

The pH values were tested with a Suntex sp-701 probe (Suntex Instruments, Taipei, Taiwan). The viscosity was estimated with a Brookfield DV-E Viscometer (Brookfield Engineering Laboratories, Middleboro, MA, USA). The drying time was evaluated with a 3-speed BK Drying Time Recorder (BYK Additives and Instruments, Wesel, Germany) at a 76 µm wet film thickness. The drying states were divided into touch-free dry (TF) and hardened dry (HD), as defined in previous research [4,49]. All coating property measurements were performed at 25 °C and 80% RH.
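To illustrate how a TPC value is obtained from the gallic-acid calibration described above, here is a minimal sketch. All calibration points, the sample absorbance, and the resulting value are hypothetical placeholders, not measured data:

```python
# Hypothetical TPC calculation for the Folin-Ciocalteu assay.
import numpy as np

# calibration standards: gallic acid concentration (mg/mL) vs A730
conc = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
a730 = np.array([0.02, 0.13, 0.24, 0.35, 0.47, 0.58])

slope, intercept = np.polyfit(conc, a730, 1)   # linear calibration curve

# sample prepared at 0.01 mg dry extract per mL (as in the assay above)
sample_a730 = 0.05                             # hypothetical reading
gae = (sample_a730 - intercept) / slope        # mg gallic acid per mL

extract_conc = 0.01                            # mg dry extract per mL
tpc = gae / extract_conc * 1000.0              # mgGAE per g dry extract
print(f"TPC = {tpc:.1f} mgGAE/g")              # ~580 mgGAE/g with these numbers
```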
Manufacture and Evaluation of Film Properties

The specified substrates were finished at a 100 µm wet film thickness with a universal applicator (B-3530, Elcometer, Manchester, UK). The finished substrates were kept at 25 °C and 80% RH for 24 h, then moved to 26 °C and 65% RH for 7 days, after which the film measurements were performed. The film lightfastness was evaluated from the film color changes during exposure in a Paint Coating Fade Meter (Suga Test Instruments Co., Tokyo, Japan). The chamber temperature was 32 ± 4 °C, and the light source was an H400-F mercury lamp (Suga Test Instruments Co., Ltd., Tokyo, Japan). The time-dependent color changes, including the color difference (ΔE*), yellowness difference (ΔYI), and brightness difference (ΔL*) after 0, 12, 24, 48, 96, 144, and 192 h of exposure, were determined with a spectrophotometer (CM-3600d, Minolta, Osaka, Japan) according to the CIE L*a*b* color system, with 9 replicates. The scanning electron microscope (SEM) inspection at 1350× magnification was performed with a Topcon SM-200 (Tokyo, Japan). The Fourier-transform infrared spectroscopy (FTIR) analysis was performed in single-spot attenuated total reflection (ATR) mode on a PerkinElmer Spectrum 100 (PerkinElmer, Shelton, CT, USA). The film hardness of the glass specimens was measured with a König/Persoz Pendulum Hardness Tester (Braive Instruments, Liège, Belgium) according to DIN 53157, with 7 replicates. In the film mass retention test, approximately 0.3 g of film and 250 mL of acetone were cycled in a Soxhlet extractor (Dogger Co., New Taipei City, Taiwan) at 4 times/h for a total of 6 h. The soaked films were then dried, and their weight retentions were calculated, with 5 replicates. The film glass transition temperature (Tg) was determined in tension mode by dynamic mechanical analysis (DMA 8000, PerkinElmer, MA, USA); the frequency was 1 Hz, and the heating rate was 5 °C/min from 0 to 180 °C. The film impact resistance of the wood specimens was measured with a DuPont Impact Tester IM-601 (DuPont Co., Wilmington, DE, USA) with a 300 g falling hammer and a 0.5-inch impact needle diameter, according to CNS 10757 [50]. The film adhesion of the wood specimens was tested by the crosscut method according to CNS 10756 K 6800 [51]; the optimal adhesion is Grade 10, and the worst is Grade 0. The film bending resistance was measured according to JIS-K-5400 [52] with a bending tester (Ueshima Seisakusho Co., Ltd., Tokyo, Japan); the best bending resistance is <2 mm, and the worst is 10 mm. The film tensile strength and elongation at break were measured with a Shimadzu EZ Test Series Tensile Tester (Shimadzu, Kyoto, Japan) according to the ASTM D-638 Standard [53], with 7 replicates. The film abrasion resistance was evaluated with a Taber Abrasion Tester (Model 503, Taber Industries, North Tonawanda, NY, USA) with a 500 g load and CS-10 abrading wheels. The thermogravimetric analyses (TGA) were performed with an STA 6000 (PerkinElmer, Waltham, MA, USA) under nitrogen flow, from 50 to 700 °C at a heating rate of 10 °C/min.

Heartwood Extractives of A. confusa Soaked in Different Solvents

The yields of extractives from A. confusa heartwood soaked in 70% acetone and in a toluene:ethanol mixture (2:1, v/v), respectively, are shown in Figure 1. The total yield of extractives over three successive acetone extractions was 9.2%, higher than the 2.6% obtained with the toluene/ethanol solvent.
The result showed that the polar solvent acetone has a higher efficiency for A. confusa heartwood extraction. Previous studies [54-58] also indicate that polar solvents are more efficient in heartwood extraction than non-polar solvents. In addition, the extracts obtained by polar solvent extraction are mostly composed of polar compounds. Stone et al. [59] also explain that, compared with extraction by a toluene/ethanol mixture, the higher yield of acetone extraction is due to the presence of high amounts of polar compounds, such as phenolic compounds and sugars, in the heartwood. As shown in Table 1, the acetonic extractives had the most abundant TPC (535.2 mgGAE/g) and TFC (252.3 µgRE/g), compared with the toluene/ethanolic extractives at 509.3 mgGAE/g and 216.0 µgRE/g, respectively. The same results were also found in the report of Chang et al. [34], who indicated that the TPC of the bark and heartwood of A. confusa soaked in 70% ethanol for 7 days was 470.6 ± 43.9 and 529.7 ± 14.4 mgGAE/g, respectively. The radical scavenging activity of the extracts is attributed to compounds including polyphenols, flavonoids, and phenolic compounds. In particular, the phenolic compounds act as reducing agents and hydrogen donors and are capable of scavenging free radicals. Most of the antioxidant activity of plants is attributed to phenols, and the phenolic content is directly related to the antioxidant properties [60,61]. Compared with the TFC, the TPC is more relevant to the free radical scavenging capacity measured by the DPPH (2,2-diphenyl-1-picrylhydrazyl) method [62]. In this study, it could be concluded that the acetonic extractives had the highest TPC; therefore, they were used as natural photostabilizers for improving the lightfastness of ROL. Table 2 shows the coating properties of ROL with various heartwood extractive additions.
The ROL had similar pH values of 3.7-3.9 regardless of the heartwood extractive additions, indicating that the laccase activity was not noticeably changed by the extractive addition; the suitable pH range of ROL for laccase activity is 3.0-5.0 [12,13]. The raw ROL (0 phr) had a viscosity of 2026 cps, which increased and then decreased with an increasing amount of extractive added. The ROL with 3 phr extractive had the highest viscosity of 2224 cps. The viscosity changes resulted from the water-in-oil (w/o) emulsion conformation of the ROL. The ROL contained 3.5% water, which was present as spherical micelles in the organic continuous phase. When the extractive-acetone mixture was added to the ROL, the hydrophilic extractives and part of the acetone filled the water micelles; as a result, the volume of the dispersed phase increased and the viscosity rose [63]. However, when the extractive addition exceeded the dissolving capacity of the water micelles, the excess acetone and extractives diluted the organic continuous phase and decreased the viscosity of the ROL. Therefore, the ROL with 10 phr extractive addition had the lowest viscosity of 1994 cps. In the drying time test, the ROL (0 phr) had the shortest TF and HD, 4.5 h and 7.5 h, respectively, whereas these times increased with the addition of heartwood extractives. At 10 phr extractive addition, the ROL had the longest TF and HD, 15.0 h and over 24 h, respectively. The curing process of ROL first began with laccase-catalyzed dimerization of the ortho-dioxy-benzol. Then, auto-oxidative polymerization occurred from the unsaturated side chain. In addition, polyphenolic compounds such as flavonoids in the A. confusa heartwood extractives have antioxidant and free radical scavenging activities [28-32,64-67]. In the stage of auto-oxidative polymerization, the generated hydroperoxide radicals (ROO*) were captured by the Ar-OH groups of the polyphenols to produce hydroperoxides (ROOH), which slowed the aerobic auto-oxidative polymerization and resulted in a delay of drying with heartwood extractive addition. Figure 2 shows the time-dependent color difference (ΔE*) of ROL films with different heartwood extractive additions after UV irradiation. In Figure 2, the ΔE*, which is a synergistic effect of ΔL* and ΔYI, shows that the film with 10 phr extractive addition had a markedly reduced ΔE* compared with the ROL (0 phr). However, the films with 1, 3, and 5 phr extractive additions had similar trends and ΔE* values. In addition, the ΔE* values did not decrease proportionally with increasing extractive additions. For further investigation, the time-dependent color changes were separated into the yellowness difference (ΔYI) and brightness difference (ΔL*), shown in Figures 3 and 4. In Figure 3, the ΔYI of the ROL films increased with longer irradiation time, most severely within the first 96 h of irradiation, and then eased. The ROL film (0 phr) showed the highest change in ΔYI, and the film with 10 phr had the lowest ΔYI. Before 100 h of UV exposure, the film with 1 phr extractive addition had a lower ΔYI than the films with 3 phr and 5 phr extractive additions.
As this trend reversed at longer UV exposure, we believe that this phenomenon resulted from differences in the surface distribution of the extractives. After 100 h of UV exposure, the ΔYI values decreased with increasing extractive additions, in contrast to the behavior of the ΔE* of the ROL films. The yellowing of the films may result from the deep chromophoric quinone structures generated from the benzene rings of the catechol derivatives of ROL after photooxidation. Figure 4 shows the time-dependent ΔL* of ROL films with different heartwood extractive additions after UV irradiation. The ΔL* of the films rose with longer irradiation time. This phenomenon resulted from photodegradation: the side-chain -C=C- bonds of the network structure were broken, photooxidation produced light chromophoric -C=O groups, and the white plant gum speckles moved to the shallow layer [68]. The ROL film (0 phr) had the highest ΔL* increase. The results showed that the ROL film with 10 phr addition had a remarkable improvement in brightness change. However, we found that the increases in ΔL* were not proportional to the heartwood extractive additions, as the plant gum migration occurred not only during photooxidation but also during physical disintegration. The SEM images shown in Figure 5 provide notable evidence for the changes in ΔL*. As expected, the ROL film (0 phr) had the most defects, such as white speckles of plant gums, cracks, and holes (Figure 5b), which resulted from severe photodegradation under UV irradiation, and the ROL film (10 phr) had the fewest defects (Figure 5j). In the films with additions of 1, 3, and 5 phr (Figure 5d,f,h), more cracks were found at higher extractive additions. Their similar ΔYI trends and values indicate that these cracks were not generated primarily through photooxidation; they are therefore attributed to physical disintegration during the lightfastness test. In Section 3.4, the film properties are investigated, showing that the film strength and elongation decreased with more extractive additions. The ΔE*, ΔYI, and ΔL* after the 192 h lightfastness test are summarized in Table 3. The ΔE*, ΔYI, and ΔL* of the ROL film without heartwood extractives (0 phr) were the highest, at 35.4, 103.3, and 14.3, respectively.
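For reference, the ΔE* and ΔL* indices summarized in Table 3 follow directly from the measured CIE L*a*b* coordinates (ΔYI is computed separately from the tristimulus values via the yellowness-index formula); the comparison of the films resumes after this sketch. The coordinate values below are illustrative placeholders, not the measured data:

```python
# CIE76 colour-difference computation from L*a*b* readings.
import math

def cie76_delta_e(lab_exposed, lab_initial):
    """Euclidean distance in L*a*b* space (the CIE76 definition of dE*)."""
    dL, da, db = (e - i for e, i in zip(lab_exposed, lab_initial))
    return math.sqrt(dL**2 + da**2 + db**2)

lab_0h   = (62.0, 18.0, 30.0)   # hypothetical (L*, a*, b*) before exposure
lab_192h = (70.0, 14.0, 58.0)   # hypothetical values after 192 h of UV

dE = cie76_delta_e(lab_192h, lab_0h)
dL = lab_192h[0] - lab_0h[0]     # brightness difference dL*
print(f"dE* = {dE:.1f}, dL* = {dL:.1f}")
```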
All of the values of the films with heartwood extractive additions were lower than those of the ROL film (0 phr), and the 10 phr content had the lowest, at 12.7, 43.6, and 3.9, respectively. The ΔE*, ΔYI, and ΔL* of the ROL film with 1 phr heartwood extractives were 21.8, 82.5, and 6.1, respectively, already exhibiting noticeable efficiency in enhancing the lightfastness. From the results mentioned above, it was demonstrated that the phenolic and flavonoid compounds in the heartwood extractives of A. confusa behave similarly to primary antioxidants such as hindered phenols (Ar-OH) [35,69]: they scavenge the oxy and peroxyl radical intermediates (HO*, RO*, and ROO*) formed in the photodegradation and photooxidation processes of ROL under UV irradiation, generating hydroxy (OH) and hydroperoxide (OOH) groups and interrupting the photodegradation reaction. Therefore, the poor lightfastness of ROL could be improved by adding the heartwood extractives. Figure 6 shows the FTIR spectra of ROL films with various A. confusa heartwood extractive additions before and after 192 h of UV irradiation. In the spectrum of the ROL film (0 phr), the functional groups included -OH (3400-3200 cm-1); the -C-H stretching vibration of -C=C-H (3010 cm-1); the -CH2 asymmetric and symmetric stretching vibrations of the urushiol side chain (2924 cm-1 and 2854 cm-1, respectively); the -C-O-C- stretching vibration (1715 cm-1) and -C=O stretching vibration (1700 cm-1); and the -C=C- stretching vibration and -C-H out-of-plane bending vibration of the urushiol benzene ring (1615 cm-1 and 730 cm-1, respectively); the conjugated triene of the urushiol side chain could also be seen at 990 cm-1. The FTIR spectra of the ROL films with various heartwood extractive additions were similar to that of the ROL film (0 phr) before UV irradiation. However, after the 192 h lightfastness experiment, the -C=C-H peak (3010 cm-1) disappeared, and the peaks at 2924 cm-1, 2854 cm-1, and 990 cm-1 decreased. The characteristic peaks of -C-O-C- (1715 cm-1) and -C=O (1700 cm-1) increased and combined, forming a new peak at 1707 cm-1. These results indicated that the -C-H of the side chains was degraded by photodegradation, and peroxide, benzoquinone, and carbonyl groups formed [36,68]. Furthermore, the peaks at 1615 cm-1 and 730 cm-1 disappeared or decreased, representing the degradation of the benzene rings and the production of quinone compounds. From the FTIR results, the photodegradation of the ROL film resulted in fractures of the side chains and catechol structures of the film network. This result was also confirmed by Hong et al. [12]. After 192 h of UV irradiation, the ROL films with various heartwood extractive additions had similar peaks in the FTIR spectra (Figure 6). The differences in the spectra before and after UV irradiation were manifested in the disappearance of the peaks at 3010 cm-1, 990 cm-1, and 730 cm-1, decreases in the peaks at 2924 cm-1 and 2854 cm-1, and an increase in the peak at 1707 cm-1. However, the film with 10 phr heartwood extractive addition had the smallest decreases at the 2924 cm-1 and 2854 cm-1 peaks, indicating that it had the best lightfastness; this result was also confirmed in Section 3.3.

Figure 6. FTIR spectra of ROL films with various A. confusa heartwood extractive additions before and after the UV irradiation test.

Film Properties

From the lightfastness experiments mentioned above, it was demonstrated that the poor lightfastness of the ROL film can be improved by A. confusa heartwood extractive addition.
Although the 10 phr addition showed the best lightfastness, the 1 phr addition already exhibited remarkable efficiency. In addition, when the content of heartwood extractives was more than 5 phr, the drying time of ROL was extended to 22.0 h, as shown in Table 2, meaning that the more heartwood extractive is added, the more it prevents the auto-oxidative polymerization of the unsaturated side chain of the catechol derivatives, and the more it affects the properties and finishing operation of the ROL. Therefore, only the ROL containing 0, 1, and 3 phr was selected in this section for comparison of the film properties, in order to choose the appropriate quantity of heartwood extractives balancing film properties and lightfastness. Table 4 shows the fundamental film properties of ROL. The ROL film (0 phr) had the highest hardness of 95 s, and the one with 3 phr heartwood extractives showed the lowest hardness of 55 s. The mass retention and Tg showed trends similar to the hardness. The ROL film (0 phr) had the highest mass retention and Tg, 88.5% and 90 °C, respectively, which decreased with increasing heartwood extractive additions. The ROL film with 3 phr heartwood extractives had the lowest mass retention and Tg, 85.4% and 69 °C, respectively. However, the hardness, mass retention, and Tg of the ROL film with 1 phr heartwood extractives were 88 s, 87.5%, and 88 °C, respectively, which are very close to those of the ROL film (0 phr). The results mentioned above indicated that the auto-oxidative polymerization of the unsaturated side chain of the catechol derivatives in ROL was inhibited by adding the A. confusa heartwood extractives, which decreased the cross-linking density of the network structure. In addition, there were no differences in impact resistance; the ROL film (1 phr) had remarkable adhesion of Grade 10, while the ROL film (0 phr) had the best bending resistance of 2 mm, slightly superior to the 3 mm of the ROL films with 1 phr and 3 phr. The stress-strain curves of ROL films with various A. confusa heartwood extractive additions are shown in Figure 7. The tensile strength and elongation at break of the ROL film (0 phr) were 18 MPa and 12%, respectively, and both decreased with the addition of A. confusa heartwood extractives. The film with 3 phr extractive addition had the worst tensile strength and elongation at break, 8 MPa and 6%, owing to its low cross-linking density. Figure 7 also shows that the ROL film (0 phr) behaves like a ductile material, while the ROL films with 1 phr and 3 phr behave more like brittle and soft materials, respectively. In addition, the weight loss in the abrasion resistance test of the ROL film (0 phr) was 9.7 mg, which increased with increasing content of heartwood extractives.
The ROL film (3 phr) had the highest weight loss of 19.8 mg, meaning that it had the worst abrasion resistance. These results are consistent with the stress-strain curves (Figure 7), which showed the lowest strain energy of 0.9 kJ for the ROL film (3 phr); the higher the strain energy, the better the abrasion resistance [54]. The thermogravimetric (TGA) and derivative thermogravimetric (DTG) diagrams of ROL films with various A. confusa heartwood extractive additions are displayed in Figures 8 and 9, respectively. The DTG curves showed that the thermal degradation of the ROL film consists of three main steps: the first step occurred at 60-200 °C, representing the decomposition of compounds with low molecular weight and water vaporization; the second step occurred at around 200-350 °C, corresponding to the degradation of the side-chain end groups of alkylcatechols and alkenylcatechols; and the third step occurred at around 350-550 °C, indicating the fragmentation of the benzene ring-side chain bonding (-C-O-Ar) of the network structure of the ROL film [1,6,21]. Gaugler and Grigsby [70] reported that the thermal decomposition of condensed tannins from pine bark occurs at around 280 °C and 450 °C. Therefore, the ROL films with and without heartwood extractives showed similar thermal degradation behavior because of the overlap of the thermal decomposition peaks of the heartwood extractives and the ROL film. In the second step, the ROL film (0 phr) had a temperature of maximum decomposition rate (Tmax) and a derivative weight at Tmax of around 317 °C and -2.8%/min, whereas the ROL film (1 phr) and ROL film (3 phr) showed 319 °C and -2.8%/min, and 320 °C and -3.0%/min, respectively. These parameters showed that the crosslinked structures formed from the side chains are homogeneous, despite probable differences in their quantity or density among the ROL films. Furthermore, in the third step, the Tmax of the ROL film (0 phr), ROL film (1 phr), and ROL film (3 phr) were 462, 464, and 460 °C, respectively, for which the derivative weights at Tmax were -6.0, -5.6, and -6.2%/min, respectively. The results revealed that the thermal stability of the ROL film could be slightly improved by adding 1 phr of A. confusa heartwood extractives, while the 3 phr addition had the opposite effect.
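As an illustration of how Tmax and the derivative weight are read off a thermogram, here is a minimal sketch; the synthetic two-step weight-loss trace below merely stands in for the instrument data, so the extracted numbers are placeholders:

```python
# Extracting T_max and the DTG (derivative weight, %/min) from a TGA trace.
import numpy as np

# synthetic temperature (deg C) and residual weight (%) trace
T = np.linspace(50, 700, 1301)
w = (100
     - 40 / (1 + np.exp(-(T - 317) / 15))    # mimics the second step
     - 35 / (1 + np.exp(-(T - 462) / 25)))   # mimics the third step

heating_rate = 10.0                          # deg C per minute, as in the runs
dtg = np.gradient(w, T) * heating_rate       # convert %/degC to %/min

# T_max of the second step: fastest weight loss in the 200-350 deg C window
window = (T > 200) & (T < 350)
i = np.argmin(dtg[window])                   # most negative rate
print(f"T_max = {T[window][i]:.0f} degC, DTG = {dtg[window][i]:.2f} %/min")
```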
Conclusions To enhance the lightfastness of ROL, the heartwood extractives of A. confusa were used, and the most suitable extraction method and addition amount were investigated. Extraction with acetone gave a yield of 9.2%, higher than the 2.6% obtained with the toluene/ethanol solvent, and the acetone extract also had the highest TPC (535.2 mg GAE/g) and TFC (252.3 µg RE/g). Drying of the ROL was retarded by the heartwood extractive additions; at 10 phr, the ROL could not be cured within 24 h. The lightfastness results showed that ROL with extractive additions of 1, 3, and 5 phr had similar trends in ∆E*, ∆YI, and ∆L* values, while the 10 phr addition gave the best lightfastness improvement efficiency for the ROL. SEM inspection and FTIR analysis also showed that plant gums migrated to the surface of the films and cracks formed after UV exposure; these phenomena were reduced by the heartwood extractive additions. The film property results revealed that film strength, elongation, Tg, abrasion resistance, hardness, and mass retention decreased as the amount of heartwood extractives increased. In practice, compared with the ROL film (0 phr), the ROL containing 1 phr of heartwood extractives showed a noticeable lightfastness improvement together with superior film properties, especially regarding adhesion and thermal stability. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
9,042.6
2021-11-24T00:00:00.000
[ "Materials Science", "Environmental Science" ]
Aged garlic extract potentiates doxorubicin cytotoxicity in human breast cancer cells Purpose: To investigate the potential chemo-sensitizing effect of aged garlic extract (AGE) on doxorubicin (DOX) in breast cancer cells (MCF-7), and the possible underlying mechanisms. Methods: The human breast cancer cell line MCF-7 was treated with AGE and DOX. The cytotoxic effects of AGE and DOX were investigated via cell cycle analysis and apoptosis induction, using flow cytometry. Mechanistic studies involved the determination of cellular uptake of DOX and p-glycoprotein (P-gp) activity. Results: Combined treatment of MCF-7 cells with AGE and DOX produced no significant effect at an AGE dose of 10 mg/mL. However, co-treatment with AGE at doses of 50 and 93 mg/mL enhanced the cytotoxicity of DOX on MCF-7 cells, with IC50 values of 0.962 and 0.999 μM, respectively, compared with 1.85 μM for DOX alone. Moreover, Annexin V-FITC and PI staining showed that AGE significantly increased the percentage of cells in late apoptosis. In addition, AGE-DOX treatment significantly increased the cellular uptake of DOX and inhibited P-gp activity, when compared with DOX alone (p < 0.05). Conclusion: AGE enhances the cytotoxic effect of DOX on MCF-7 cells, most likely through altered cell cycle distribution, stimulation of apoptosis, increased uptake of DOX by MCF-7 cells, and inhibition of P-gp activity. INTRODUCTION Breast cancer (BC) is the most prevalent malignancy in women worldwide, and its incidence is on the rise [1]. The incidence of BC varies with geographical location, with the highest rates in advanced countries and lower rates in developing countries in Asia, the Middle East, and Africa [2]. The first-line chemotherapeutic drug used for BC is doxorubicin (DOX), a broad-spectrum anthracycline antibiotic active against lymphomas, leukemia, and solid tumors [3]. However, DOX is associated with cardiotoxic side effects that limit its use. In an attempt to develop new therapeutic combinations that maximize the chemotherapeutic effects of DOX at low doses while minimizing its adverse effects, a diversity of tactics has been investigated. Herbal medicines have long been used in the prevention and treatment of various diseases such as heart disease, cancer and obesity. Garlic (Allium sativum) is a vegetable belonging to the genus Allium, and organosulfur compounds are responsible for the health benefits derived from garlic consumption. The therapeutic benefits of garlic in diverse biosystems have been reported, including anti-oxidant, anti-tumorigenic and cardio-protective effects [4,5]. Aged garlic extract (AGE) is a liquid prepared through prolonged ethanol extraction of fresh garlic for up to 20 months at room temperature [6]. The extract does not cause adverse events and has been confirmed to be safe in preclinical trials [7]. Compared with other garlic products, AGE is the most useful garlic preparation used as an antioxidant [5]. It exerts cytotoxic properties in a wide variety of tumor cells, including gastric and colon cancer cells [8]. In addition, AGE has been reported to exert protective effects against DOX-induced cardiotoxicity in rats [4]. Human breast cancer cell lines are fundamental tools for studying BC at the molecular level and can be utilized as in vitro models in laboratory cancer research. The MCF-7 cell line was isolated by Dr.
Soule and his colleagues in 1973 from the pleural effusion of an elderly patient with metastatic breast carcinoma at the Michigan Cancer Foundation. This cell line is accepted globally as an appropriate model for evaluating the anticancer effects of drugs [9]. The current study was designed to determine the potential chemo-sensitizing effect of AGE on the cytotoxicity of DOX against MCF-7 cells, as well as the possible underlying mechanisms with respect to cell cycle phase distribution, apoptosis induction, DOX cellular uptake and P-glycoprotein activity. Cell culture and measurement of cytotoxicity The human breast cancer cell line MCF-7 was obtained from the National Cancer Institute (NCI), Cairo University, Egypt. The cells were cultured in RPMI 1640 supplemented with 10 % FBS, streptomycin (50 µg/mL) and penicillin (100 U/mL) at 37 °C in a 5 % CO2 incubator. Cytotoxicity was determined using the SRB assay according to the method of Skehan et al [10]. The cells were simultaneously incubated with different concentrations of DOX and AGE, i.e., DOX at concentrations of 0.1, 1, 10 and 100 µM, and AGE at concentrations of 10, 50, and 93 mg/mL, with 3 wells for each concentration. After 48 h, cell monolayers were fixed with trichloroacetic acid (10 % w/v) and stained with 0.4 % SRB using Cell Cytotoxicity Assay Kits (Aldrich Chem. Corp., USA), in line with the manufacturer's instructions. Optical density was read at 490 nm in a microplate reader (Model ELx808, BioTek, USA). Cell cycle analysis The MCF-7 cells were seeded in 6-well plates at a density of 2 × 10^5 cells/well in RPMI 1640 for 24 h. Thereafter, the cells were incubated with DOX (1.85 µM) alone and/or simultaneously with AGE (50 and 93 mg/mL). After 48 h, the cells were washed thrice with PBS, followed by harvesting via trypsinization. For cell cycle analysis, the pellet was re-suspended at a density of 1 × 10^6 cells/mL in the assay buffer and processed according to the instructions in the cell cycle determination kit (Cayman Chemical Company, USA). Cell cycle analysis was carried out using flow cytometry (Becton Dickinson (BD) FACSCalibur, USA) [11]. Determination of apoptosis Apoptotic and necrotic cells were distinguished and investigated using flow cytometry based on the assay of Van Engeland et al [12], using an Annexin V-FITC apoptosis detection kit obtained from Aldrich Chem. Corp., USA. The cells were suspended in 200 µL of Annexin V incubation reagent prepared (for each sample) by mixing PI, 10X binding buffer and Annexin V-FITC in deionized water. The mixture was incubated in the dark at 23 °C for 15 min, followed by the addition of 400 µL of binding buffer to each sample, and flow cytometric analysis (NAVIOS Beckman Coulter, USA). Assay of cellular uptake of DOX After cell treatment and trypsinization, 1 × 10^6 cells were digested by resuspending them in 2 mL of a 1:1 (v:v) mixture of ethanol and 0.3 M HCl for 24 h at 70 °C. Then, 3 mL of PBS was added to each sample. In a clear, black flat-bottom 96-well plate, 100 μL of lysed cells was added to each well. The lysed cells were analyzed using a spectrofluorometer (Synergy HT microplate reader, BioTek, USA) at optimal excitation and emission wavelengths of 485 and 590 nm, respectively [13,14]. Assay of P-gp activity Following 24-h cell seeding, 100 µL of a working solution of Rhodamine 123 (2.62 µM) was added per well, and the plates were kept in a CO2 incubator at 37 °C in the dark for 30 min. Then, the cells were treated as stated previously.
Following trypsinization, the cells were washed once with ice-cold PBS. For the P-glycoprotein assay, one million cells were suspended in 1 mL of PBS, with shaking. The cells were analyzed using a spectrofluorometer (Synergy HT microplate reader, BioTek, USA) at excitation and emission wavelengths of 485 and 590 nm [15-17]. Statistical analysis Data are presented as mean ± standard error of the mean (SEM). Statistical analysis was done using one-way analysis of variance, followed by Tukey's post-hoc test. All analyses were carried out with the InStat version 3 software package. Graphs were drawn by means of GraphPad Prism (ISI® software, USA) version 5. Values of p < 0.05 were considered statistically significant. Effect of DOX and AGE on cytotoxicity of MCF-7 cells Interestingly, co-treatment of the cells with AGE (10 mg/mL) decreased the sensitivity of the cells to the cytotoxic effect of DOX, with the IC50 elevated to 9.46 µM. In contrast, co-treatment with higher concentrations of AGE (50 and 93 mg/mL) significantly increased the sensitivity of MCF-7 cells to DOX, with IC50 values of 0.962 and 0.99 µM, respectively, without any obvious difference in cytotoxicity between the two higher concentrations. Cell cycle analysis The cell cycle distributions after treatment are shown in Figure 2, while Figure 3 shows the distribution of cells in each quadrant according to necrosis, late apoptosis, live cells, and early apoptosis (Annexin V-positive cells). Cell treatments with DOX and/or AGE were carried out in a way similar to that used in the cell cycle analysis. As shown in Figure 3, the most prominent effect was an increase in the percentage of cells in the late apoptosis quadrant. Effect of treatments on apoptosis The percentages of live and late apoptotic cells after each treatment were quantified (see the accompanying table). These findings confirmed that apoptosis was significantly elevated after treatment with DOX. Again, co-treatment with DOX (1.85 µM) and AGE (50 mg/mL) was more effective in increasing the percentage of apoptotic cells than treatment with DOX alone. In addition, cell treatment with AGE alone (93 mg/mL) resulted in late apoptosis in almost half of the cell population. Furthermore, co-treatment with DOX and AGE (93 mg/mL) had a very potent effect, inducing apoptosis in more than 90 % of the cell population. Effect of treatments on DOX cellular uptake and P-gp activity To determine the sensitivity of MCF-7 cells to the growth-inhibitory effect of DOX, the intracellular DOX level per 10^6 cells was measured. The cells were treated with DOX (1.85 µM) in the presence or absence of AGE (50 and 93 mg/mL). Figure 4A shows that DOX cellular uptake per 10^6 cells was increased significantly, by more than 2-fold, on co-treatment with AGE (50 mg/mL). Moreover, co-treatment with DOX and AGE (93 mg/mL) caused about a 4-fold increase in DOX uptake by MCF-7 cells. In order to explain the higher cellular uptake of DOX in the presence of AGE, P-gp activity was assayed via determination of the accumulation of Rh123 in MCF-7 cells. As indicated in Figure 4B, Rh123 accumulation was markedly increased in MCF-7/DOX cells in the presence of AGE, when compared to MCF-7/DOX cells alone. This effect was concentration-dependent: AGE at a concentration of 50 mg/mL increased intracellular Rh123 by about 3.8-fold, while AGE at a concentration of 100 mg/mL had a more prominent effect, causing a more than 6-fold increase in Rh123 accumulation when compared to the control value. These results suggest that P-gp-mediated active outward transport was inhibited by AGE.
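IC50 values like those quoted above are typically obtained by fitting the viability-versus-concentration data from the SRB assay to a sigmoidal dose-response model. The sketch below illustrates that step with a four-parameter logistic (Hill) fit in Python; the viability readings, initial guesses, and the doxorubicin molar mass used in the unit cross-check are illustrative assumptions, not values taken from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic (Hill) dose-response model.
def hill(conc, bottom, top, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

dox_um = np.array([0.1, 1.0, 10.0, 100.0])    # µM, matching the assay design
viability = np.array([92.0, 55.0, 22.0, 9.0])  # % of control (placeholder data)

params, _ = curve_fit(hill, dox_um, viability,
                      p0=[5.0, 100.0, 1.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {params[2]:.2f} µM")

# Cross-check of the paper's unit equivalence: 1.85 µM DOX at an assumed
# molar mass of ~543.5 g/mol is roughly 1 µg/mL
# (mol/L * g/mol = g/L; g/L * 1e3 = mg/L, and 1 mg/L = 1 µg/mL).
mw_dox = 543.5  # g/mol, doxorubicin free base (assumption)
print(f"1.85 µM ~ {1.85e-6 * mw_dox * 1e3:.2f} µg/mL")
```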
DISCUSSION Doxorubicin (DOX) is widely used in cancer therapy for human neoplasms. The clinical benefits of DOX are limited by cell resistance and serious adverse effects, specifically dose-related and cumulative cardiotoxicity [18]. Chemosensitization is a strategy for overcoming chemoresistance, and a diversity of tactics has been employed to enhance the therapeutic effects of chemotherapeutic agents while decreasing their toxicities. Among the potential chemosensitizers is the natural product AGE, which exerts chemopreventive and cytotoxic effects. [Table: percentages of live and late apoptotic cells after treatment; values shown: live cells 3.4 ± 0.26 % (a, b) and late apoptosis 91.5 ± 1.99 % (a, b). Data are shown as mean ± SEM (n = 3). Statistical analyses were performed using one-way ANOVA, followed by Tukey's post-hoc test; a p < 0.05, compared to control; b p < 0.05, compared to the corresponding DOX-alone treatment.] This study was aimed at investigating the potential chemo-sensitizing effect of AGE on DOX against the growth of MCF-7 cells, as well as the likely underlying mechanisms with regard to DOX cytotoxicity, cell cycle phase distribution, apoptosis induction, DOX cellular uptake and P-glycoprotein activity. Treatment of MCF-7 cells with different concentrations of DOX alone was cytotoxic to the cells, with an IC50 of 1.85 µM (equivalent to 1 µg/mL). Similar findings were reported by Buranrat and co-workers, who exposed MCF-7 cells to DOX and determined their viability using the SRB assay [19]. Their results demonstrated that cell growth was inhibited after 48 h of treatment, with an IC50 value of 1.8 ± 0.1 µM. In the present study, DOX cytotoxicity was confirmed by the results of cell cycle distribution and induction of apoptosis, where DOX (1.85 µM) caused a significant increase in apoptotic cells, a significant decrease in the percentage of live MCF-7 cells, and a marked increase in the extent of apoptosis, when compared with control cells. Barzegar et al reported that DOX efficiently arrested cell division and stimulated apoptosis via DNA intercalation, generation of oxidizing moieties, binding to cellular membranes, and inhibition of topoisomerase II (Topo IIA), which is highly expressed in the S and G2/M phases [20]. Approaches based on exploiting natural products for cancer prevention and/or treatment have attracted much attention in recent years. Garlic (Allium sativum) has long been known for its medicinal uses. Multiple reports suggest that AGE exerts anti-cancer effects through multiple cellular mechanisms such as induction of apoptosis, cell cycle arrest, and suppression of cell proliferation [11]. Inhibition of early and late stages of cancer by AGE leads to inhibition of tumour growth in many tissues such as the mammary gland, skin, colon, and gastric tissue [21]. The anticarcinogenic effect of AGE is attributed to its richness in organosulfur compounds such as allicin, S-allyl-cysteine, S-allylmercaptocysteine, diallyl-trisulfide and diallyl-disulfide. Thus, AGE could be considered a promising sensitizer of the cytotoxic effect of DOX against breast cancer cells. In the present study, higher concentrations of AGE were more cytotoxic than the lower concentration. As reported by Pourzand et al, high intake of garlic is linked to decreased risk of breast cancer [21]. In the current study, MCF-7 cells treated with AGE alone at higher concentrations (50 and 93 mg/mL) showed a significant increase in apoptotic cells. Aged garlic extract (AGE) has previously been reported to induce dose-related cell cycle arrest, growth inhibition and apoptosis in many human cancer cell lines [22].
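The statistical treatment named in the table footnote (one-way ANOVA on triplicate measurements, followed by Tukey's post-hoc test) can be reproduced as follows. This is a minimal Python sketch with placeholder group values standing in for the study's raw readouts.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative triplicate readouts (e.g., % apoptotic cells) for three
# hypothetical groups; the values are placeholders, not the study's data.
control = [2.1, 2.4, 1.9]
dox = [18.5, 20.1, 19.2]
dox_age = [89.7, 92.3, 92.5]

# Omnibus test: is at least one group mean different?
f_stat, p_value = f_oneway(control, dox, dox_age)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.4g}")

# Tukey's post-hoc test on the pooled observations with group labels,
# giving the pairwise comparisons behind the 'a' and 'b' markers.
values = np.concatenate([control, dox, dox_age])
groups = ["control"] * 3 + ["DOX"] * 3 + ["DOX+AGE"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```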
To understand the mechanism of interaction between DOX and AGE, apoptosis, cell cycle and DOX cellular uptake assays were performed. The apoptosis assay showed that the DOX-AGE combination significantly increased apoptotic cells and significantly decreased the population of cells in the G1/G0 phase, when compared with DOX alone. These results are in agreement with those of Zhang et al, who reported that SAMC (S-allylmercaptocysteine) derived from garlic effectively inhibited the growth of MCF-7 cells via stimulation of apoptosis and cell cycle arrest at the G0/G1 phase. They also suggested that SAMC-induced apoptosis in MCF-7 (estrogen-dependent) and human hormone-independent breast cancer (MDA-MB-231) cells occurred through activation of the mitochondrial apoptotic pathway via upregulation of Bax, downregulation of Bcl-2, and activation of caspase-9 and caspase-3 [22]. In this study, the percentage of cells accumulating in S phase was significantly decreased by combination treatment with DOX and the highest concentration of AGE, when compared to DOX alone. This finding is in agreement with that of Osman et al, who reported that resveratrol induced cell cycle arrest at S phase, thereby exposing a high percentage of the tumour cell population to DOX so that more cells underwent apoptosis and entered G0 phase [23]. The results were further strengthened by the rise in DOX cellular uptake following co-treatment with AGE in a concentration-related manner. There was a progressive build-up of DOX in cells co-treated with DOX/AGE, which is consistent with a higher proportion of MCF-7 cells being exposed to DOX through cell accumulation in G0 phase. The rise in DOX uptake by MCF-7 cells might be based on the inhibition of P-gp and multidrug resistance proteins, which are involved in DOX absorption, distribution and elimination [24]. In the current study, Rh123 efflux was suppressed in DOX-treated MCF-7 cells in the presence of AGE. This finding is in agreement with that of Abdallah et al, who reported that resveratrol inhibited P-gp upon co-treatment with DOX, thereby triggering a rise in cellular uptake of DOX [24]. It has been reported that AGE suppresses the energy-dependent efflux pump of P-gp, thereby producing an augmented intracellular drug concentration and increasing cellular toxicity [25]. The extract prevented further accumulation of DOX in cardiac tissue (cardio-protection), but it increased DOX concentration in tumour cells. CONCLUSION Aged garlic extract (AGE) has a chemo-sensitizing effect on DOX in the human breast cancer cell line MCF-7. This effect could be attributed to induction of apoptosis and enhanced intracellular DOX accumulation, the latter of which results from inhibition of P-gp activity.
3,701.6
2020-11-19T00:00:00.000
[ "Biology", "Chemistry", "Medicine" ]
Crime and Justice in Digital Society: Towards a 'Digital Criminology'? The opportunities afforded through digital and communications technologies, in particular social media, have inspired a diverse range of interdisciplinary perspectives exploring how such advancements influence the way we live. Rather than positioning technology as existing in a separate space to society more broadly, the 'digital society' is a concept that recognises such technologies as an embedded part of the larger social entity and acknowledges the incorporation of digital technologies, media, and networks in our everyday lives (Lupton 2014), including in crime perpetration, victimisation and justice. In this article, we explore the potential for an interdisciplinary concept of digital society to expand and inspire innovative crime and justice scholarship within an emerging field of 'digital criminology'. Introduction The transformative effect of digital and communications technologies, in particular social media, has been a well-documented focus of interdisciplinary study. Since the introduction of personal computer workstations in the early 1980s, and following the launch of the 'world wide web' in 1991, the criminological study of computer and cyber crimes has likewise rapidly expanded. Charting this scholarship alongside developments in computing, communications and other digital technologies reveals the influence of key technological shifts in the focus of criminological theory and research. Yet in this article, we suggest that recent and ongoing developments in technology, such as the social web, big data and the Internet of things, have been inadequately considered by criminologies of computing and cyber crime. Much research in the field continues to focus foremost on policing and investigations, legislative frameworks, and the motivations of cyber criminals, often in the context of individualised and 'rational offender' theories that seek to explain technology as merely a tool in the commission of otherwise familiar crimes. Moreover, the topics addressed by much computer and cyber crime research have remained relatively consistent over the last decade and predominantly include hacking; financial theft and identity fraud; illicit online markets and networks; child sexual exploitation; cyberbullying; and, more recently, information privacy and digital surveillance.
The conventional scope of computer and cyber criminologies has arguably developed to the comparative neglect of a wider range of ways in which computers and digital networks enable social harm. These include the role of technologies in a wider range of offending and victimisation; the increasingly embedded nature of online/offline experiences in crime and justice; the online victimisation of marginalised communities (such as on the basis of race, gender and sexuality); and broader issues of persistent social and digital inequalities as they relate to crime and justice. Rapidly emerging issues such as online justice movements, digital vigilantism, and so-called 'open-source' policing or social network surveillance provide further examples under-examined within criminology. One possible explanation for what might be described as a 'siloed' cyber criminological focus lies in critiques of the discipline more broadly; namely, that criminology itself has become increasingly insular and self-referential, losing some of its fundamental and dynamic origins as the multidisciplinary study of crime, deviance and justice (see Garland 2011 for a detailed discussion). In light of this, we suggest that computer and cyber criminologies could benefit from an expansion and revitalisation inspired by reference to developments in social, critical and technological theory from outside the discipline itself. Indeed, it is our intention to re-invigorate an ongoing conversation within the discipline that might expand the scope of conventional frameworks of cyber crime towards a broader exploration of crime and justice in the digital age. In this article we offer the concept of the 'digital society' and its associated theoretical antecedents as one such framework to inform, expand and inspire innovative crime and justice scholarship within an emerging field of 'digital criminology'. To do so, we first provide a brief history of key developments in Internet and mobile technologies and their associated trends in crime and criminological research, as well as some limitations of this scholarship to date. Secondly, we consider recent criminological engagements with theories of technosociality, which have sought to expand the discipline beyond a conventional 'cyber' or 'virtual' approach to online crime and criminality. Third, informed by interdisciplinary social and technology theory, we define what we mean by the digital society and explore how this concept may further push the boundaries of 'cyber' criminologies and fruitfully expand theoretical frameworks and empirical examinations of crime and justice in the digital age. A brief history of computer and cyber criminologies The Internet has long-reaching origins, from advances in computing in the 1950s, to the first messages sent via the US military funded 'Advanced Research Projects Agency Network' (ARPANET) in 1969, to the earliest electronic mail (and spam) communications within private closed networks in the 1980s, to the global web in the 1990s (Leiner et al.
2009). Indeed, technological developments and their associated implications for crime and criminology can be charted across three broad periods: the 'pre-web' era of the 1980s to early 1990s, the 'global web' era of the 1990s to early 2000s, and the 'social web' era from the mid-2000s to the present day. Such a historical account is not intended to suggest that the challenges and opportunities for both crime and criminological scholarship presented by each era were (or are being) replaced or discontinued with the next but, rather, that each period brings with it unique advances in technology that have particular impacts for crime and can be broadly linked with associated shifts in criminological thinking and research. It was not until the 1980s that personal computers were widely adopted in workplaces and public institutions (see Ceruzzi 2003). From the 1980s onwards, however, the information and activities of governments, education institutions and corporations were rapidly computerised and associated with greater electronic data storage, as well as increased connectivity within closed internal and private networks (Ceruzzi 2003; Williams 1997). Criminology in this pre-web era (1980s to 1991) recognised that such widespread computer availability and electronic data storage, combined with internally networked workstations and dial-in connections, had opened up governments, corporations and educational institutions to new forms of crime through technology misuse. Computer-related economic crimes (including financial data theft and identity fraud), 'eavesdropping' and the interception of confidential communications, software piracy (via illegal disk-based copies), and the security and privacy of confidential information systems were among the predominant concerns of the time (see, for example, Clough and Mungo 1992; Sieber 1986). Given the predominance of computer technology in public and corporate organisations, these emerging harms were in turn largely associated with white-collar crime (Croall 1992; Kling 1980; Montgomery 1986). This pre-web period also marked the initial legislative leaps to address computer-enabled crime. For example, one of the earliest laws that defined computer crime was passed in Florida in the United States in 1978, in response to the fraudulent printing of winning tickets at a dog-racing track using a computer (Hollinger and Lanza-Kaduce 1988). This law was notable as it defined all unauthorised access to a computer as an offence, regardless of whether or not there was malicious intent (Casey 2011: 35). By 1983 another 20 states had introduced computer crime legislation.
The Computer Fraud and Abuse Act of 1984 criminalised various forms of unauthorised computer accessing of information. Viewing information relating to defence or foreign relations matters was regarded as a felony offence, whereas intrusions designed to access or alter all other non-classified forms of information were regarded as misdemeanours (Griffith 1990: 460). Similar computer crime laws were subsequently introduced elsewhere. In Australia, for instance, a 1989 amendment to the Crimes Act outlined three categories of computer 'hacking' crimes: mere access without seeking out or altering specific information; access without initial intent but seeking or altering information; and access with intent to seek or alter specific information. In England, meanwhile, it was 1990 before the first criminal statute to tackle the misuse of computers was passed (Wasik 1991). Consistent with many legislative frameworks, the very act of using a computer to breach a network or database was highlighted as an offence regardless of specific intent (Greenleaf 1990: 21). Indeed, a tension running throughout these initial legal reforms was the question of whether the act of computer misuse or unauthorised access itself should be specifically criminalised, in addition to the equivalent terrestrial or analogue crimes that may result. The modern 'World Wide Web' went live to a global public on 6 August 1991 (Leiner et al. 2009). In this global web era, 'perpetrators who attacked machines through machines ... started attacking real humans through the machines [emphasis added]' (Jaishankar 2011: 26). Thus, while financial fraud, data theft, information privacy and identity crime became (and remain now) persistent themes in criminological research, the attention of 'cyber crime' scholars broadened to include interpersonal harms such as online child sexual exploitation and 'child pornography' (see, for example, Armagh 2001; Esposito 1998; Mitchell et al. 2010), with both crimes having become the focus of much public and policy concern. The scope and focus of cyber crime scholarship in the global web era is well captured by David Wall's (2001) original and highly influential typology, which comprises four categories of cyber crime: 1) cyber-trespass, incorporating unauthorised access to a computer system, network or data source, such as through on-site system hacking, online attacks, and/or malicious software ('malware'); 2) cyber-deception/theft, including financial and data thefts, intellectual property thefts and electronic piracy, which may be facilitated through fraudulent scams, identity fraud and malware; 3) cyber-porn and obscenity, referring to the online trading of 'sexually expressive material' and including sexually deviant and fetish subcultures, sex work, sex trafficking and sex tourism, as well as child sexual grooming and exploitation material; and 4) cyber-violence, referring to the various ways that individuals can cause interpersonal harms to others. Such harms include cyberstalking, cyberbullying, harassment and communications that support prospective acts of terror (for example, 'bomb talk' or the circulation of instructions for making explosives and other weaponry).
These in turn can be understood according to a common categorisation in cyber crime research whereby the first category represents 'computer focused' acts (that is, directed at the machine), while the latter three are more readily described as 'computer assisted' acts (see, for example, Jewkes and Yar 2010; Smith, Grabosky and Urbas 2004). Wall's (2001) early work identified that the Internet had influenced crime across these categories in at least three broad ways. First, it provided a platform for communications that may enable and sustain existing harmful and criminal activities, such as drug trafficking, hate-speech, stalking and sharing information on how to offend. Second, it enabled participation in a transnational environment that provides new opportunities and expanded reach for criminal activities that would be subject to existing law in sovereign states. Third, the distanciation of time and space creates potentially new, unbounded, contestable and private harms, such as the misappropriation of imagery and intellectual property. In particular, he suggested that the shrinking role of the state and the relative ungovernability of cyberspace presented particular challenges both for policing this 'virtual community' and for the discipline of criminology more broadly (Wall 1997). Wall argued that, while the new 'cyberspace' offered enormous democratising potential, 'there are also many opportunities for new types of offending' and that the Internet posed a 'considerable threat to traditional forms of governance and ... to traditional understandings of order' (Wall 1997: 208). Despite the plethora of studies on cyber crimes, cyber criminality and cyber law enforcement that emerged over this period, comparatively few studies have sought to apply or adapt criminological theory to such research (Holt and Bossler 2014, 2015). The works that have undertaken such conceptual development have drawn predominantly on a handful of 'rational choice', deviant lifestyle and subcultural theories of crime (for reviews, see Diamond and Bachman 2015; Holt and Bossler 2014). In particular, Routine Activity Theory (RAT) (Cohen and Felson 1979) features so repeatedly in cyber crime theorising that it might be described as the prevailing orthodoxy in such research (Holt and Bossler 2008; Hutchings and Hayes 2008; Pyrooz, Decker and Moule 2015; Reyns, Henson and Fisher 2011; Yar 2005). As Grabosky (2001: 248) explains: '[o]ne of the basic tenets of criminology holds that crime can be explained by three factors: motivation, opportunity, and the absence of a capable guardian ... [and although] derived initially to explain conventional "street" crime, it is equally applicable to crime in cyberspace'.
While not all criminologists agree as to the applicability of the theory to cyber crimes (see Jaishankar 2008; Yar 2005), its dominance is arguably highly influential in framing the focus of much research with regards to identifying the motivations of individual cyber offenders, 'target hardening' and identifying 'risky' online victim behaviours. Moreover, identifying the challenges of law enforcement (as a form of guardianship) across a global network is a trend that has continued in computer and cyber criminologies. Indeed, in a recent review of the current state of cyber crime scholarship, Holt and Bossler (2014: 21) describe the preceding twenty years of criminological research as predominantly focused on the study of the 'impact of technology on the practices of offenders, factors affecting the risk of victimization, and the applicability of traditional theories of crime to virtual offences [emphasis added]'. With the millennium came web 2.0 and the 'social web' (2000s to present), as online communications became increasingly collaborative with expanded capacity for user-generated content development and sharing, as well as online social networking. Between 2002 and 2010, there was an explosion of social networks and image-sharing platforms including Friendster, Myspace, Facebook, YouTube, Twitter, Tumblr and Instagram. Research into cyberbullying, cyberstalking and online harassment rapidly expanded over this period as the relative ease, anonymity and reach of online communications were associated with (continuing) concerns regarding invasive and threatening communications (Pittaro 2007; Reyns, Henson and Fisher 2011; Spitzberg and Hoobler 2002), particularly in relation to vulnerable groups such as children and young adults. As the social web expanded, so too did the 'dark web' or 'deep web', a shorthand for the content on the Internet that is not indexed (and thus not searchable) by standard search engines and/or is protected by layers of encryption and other security mechanisms (see Bergman 2001). Not all content on the dark web is necessarily or by definition illicit. Nonetheless, the concealment of such underground networks provides the ideal environment for illicit content (including child exploitation material), criminal organising (such as by terrorist or organised crime networks), and black markets (such as trading in malware and illicit drugs) (Martin 2014; Weimann 2016; Yip, Webber and Shadbolt 2013). A growing focus of cyber crime research has thus sought to identify and understand the nature and patterns of such online criminal social networks (Décary-Hétu and Dupont 2012; Holt 2013; Westlake and Bouchard 2016).
A further feature of the social web era is the increasingly 'mobile web', with smartphones and wearable technology becoming ever-more ubiquitous and simultaneously collecting expansive 'big data' about ourselves, our identities and our everyday lives. Criminological research has also sought to engage with these increasingly automated, algorithmic and computational capacities as they relate to crime data analytics, law enforcement and justice system practices (Berk 2008; Birks, Townsley and Stewart 2012; Brantingham 2011). There is to date, however, a comparative dearth of criminological research that has begun to empirically and critically explore the range of challenges and opportunities presented by 'big data' analytics. More recently, Janet Chan and Lyria Bennett Moses (2016: 25) have noted that criminologists' relatively small engagement with big data research has tended to lie in two main areas: social media data analysis, and an increasing uptake of computer modelling/algorithms as a predictive tool in police and criminal justice decision making. They suggest that criminologists and, indeed, social scientists more broadly must increasingly 'share the podium' and collaborate with technical experts to further progress this field. Breaking through the online/offline and real/virtual binaries As the preceding discussion implies, we suggest that there are notable gaps in the current field of cyber crime research (see also Hayward 2012; Holt and Bossler 2014). Despite the passing of more than ten years since the rise of the social web, much criminological scholarship arguably remains focused on computing and Internet technologies either as the targets of crime, or as mere tools in the commission of otherwise familiar and recognisable crimes. The topics and foci of much cyber crime research are likewise limited in scope. An overview of both seminal and contemporary works including books, edited collections and journal special issues over the past twenty years yields recurring topics. These have included hacking, data theft, online fraud and scams, digital piracy, child 'pornography', online sex work, cyberbullying and cyberstalking, and cyberterrorism and online extremism, as well as the challenges for cyber legislation and law enforcement (Grabosky and Smith 1998; Holt 2011; Jaishankar 2011; Wall 1997; Mann, Sutton and Tuffin 2003; Sutton 2002). Further scholarship has been directed towards information privacy and data surveillance (Thomas and Loader 2000; Yar 2012). Moreover, minimal cyber crime scholarship has engaged with persistent social inequalities (the digital divide) as they relate to crime (see Halford and Savage 2010) and, as such, few studies have explored the unequal nature, impacts and responses towards cyber crimes and other digital harms with respect to gender, gender-identity, race and/or sexuality (notable exceptions include Halder and Jaishankar 2012; Powell and Henry 2016; Mann, Sutton and Tuffin 2003). Indeed, in their recent review of cyber crime scholarship, Holt and Bossler (2014) make no mention of technology-enabled and online violence against women (despite discussing studies on harassment, stalking and bullying, which they overtly associate foremost with juvenile victims/offenders), or of Internet hate such as racially motivated hate speech or harassment focused on sexuality and/or gender-identity. This apparent oversight reflects a general dearth of cyber crime research that has engaged with forms of violence against marginalised and/or minority communities. Arguably, there remains an inherent dualism whereby cyber crimes continue to be framed as a mirror or online double of their terrestrial counterparts, differing perhaps by medium and reach, but not by nature; trespass becomes cyber-trespass, theft becomes cyber-theft, bullying becomes cyber-bullying, terrorism becomes cyber-terrorism. The foremost focus on the cyber, itself a direct reference to Internet and 'virtual' technologies, also obscures the diverse and embedded nature of digital data and communications in contemporary societies. Jaishankar (2007: 2), for instance, describes the field of cyber criminology itself as the study of 'cyber crime, cyber criminal behaviour, cyber victims, cyber laws and cyber investigations [emphasis added]', as if these categories were all readily or neatly distinguishable from a 'non-cyber' equivalent.
Yet, in a ground-breaking article featured in Theoretical Criminology, Sheila Brown (2006a: 227) challenges such computer and cyber criminology to look outside of its conventional disciplinary frameworks and look instead 'towards theories of the technosocial [emphasis added]'. Analyses of cyber crime, she suggests, are likewise caught up in false distinctions between 'virtual' and 'embodied' crime, seeking to develop and translate 'old' legal and theoretical frameworks to understand the 'new' crimes in cyberspace. Brown argues that, within criminology, 'nowhere is captured the vision of the crucial nature of the world as a human/technical hybrid ...' (Brown 2006a: 227), in which all crime occurs in networks, which vary only in degrees of virtuality/embodiment. Drawing variously on social and technology theorists such as Latour (1993), Lash (2002), Haraway (1985, 1991) and Castells (1996, 2001), Brown suggests a need for criminologists to understand crime and criminality at the increasingly blurred intersections of biology/technology, nature/society, object/agent and artificial/human. Computing and information theories, she argues, 'will increasingly infuse both domains of Law and Criminology' (Brown 2006a: 236). Ten years after Brown's (2006a) challenge, and despite a burgeoning literature on computer and Internet-enabled crime, few criminologists have embraced this important conceptual undertaking. A notable exception lies in the emerging work of cultural criminologists who have sought to explore how the social web may be changing the culturally constructed nature, and socially constituted practices, of crime and deviance (see Jewkes 2007; Jewkes and Yar 2010, 2013; Surette 2015). For example, criminologist Majid Yar (2012) makes a persuasive case for considering the impact of communications technologies and new media as itself a motivator of criminality. In discussing the practice of 'happy slapping', Yar (2012: 252) argues that 'crucial to understanding this phenomenon is the role played by participants' desire to be seen, and esteemed or celebrated, by others for their criminal activities'. He argues that this 'will-to-represent' one's transgressive self is linked to broader trends both of a self-creating subjectivity associated with processes of de-traditionalisation (Beck and Beck-Gernsheim 2002; Giddens 1991), and the ready availability of new media platforms for such self-creation (Yar 2012: 251). A further pertinent example lies in Keith Hayward's (2012) article in which he similarly notes the narrow scope of conventional cyber crime scholarship, and calls for further criminological engagement with spatial and socio-technical theory. Rather than a cyber crime focus on technology as a tool of diffusion which has increased criminal opportunities and networks, he suggests 'a better way of thinking about digital/online (criminal) activities is as a process, namely as phenomena in constant dialogue and transformation with other phenomena/technologies' (Hayward 2012: 455, emphasis in original). Hayward (2012: 456), drawing on Actor-Network Theory (Latour 1993, 2005) and Castells' (1996) networked 'space of flows', among others, notes the potential for communication technologies 'to alter the way we experience the sense of being in an environment [emphasis in original]'.
The core of Brown's (2006a) and, indeed, others' (such as Aas 2007; Hayward 2012; Wood 2016) related arguments, that criminological theory is enhanced by a hybridised concept of the human/technology nexus and a reconfigured concept of the agency exercised by human/technological hybrid 'actants', is not, however, without criticism. For example, Owen and Owen (2015: 17) take issue with Brown's central thesis that it is increasingly difficult to distinguish 'human agency and culpability' from 'non-human objects and technology'. Rather, they argue that, regardless of environmental conditions (with which technology is infused), 'reflexive agents possess the agency to choose not to engage in criminal activities where they believe that their actions will harm others ...' (Owen 2014: 3). With the dominance of rational actor theories in conventional cyber crime scholarship, Latour's concept of agency as expressed in Actor-Network Theory represents a substantial ontological leap. Indeed, perhaps this ontological dissonance in part explains why Brown's (2006a) criminology of hybrids, or, as elsewhere described, virtual criminology (Brown 2006b), does not appear to have been widely adopted as a term in the international scholarship or, indeed, as a disciplinary sub-field. Furthermore, as Brown (2006b: 486) defines virtual criminology as one which 'places simulated and disembodied relations centre stage', we suggest the term itself and its definition invokes, even re-institutes, the very binary frame of real versus virtual that it seeks to disrupt. Nonetheless, we take the sentiment of Brown's (2006a) challenge to criminology as a platform from which to launch into a broader exploration of how our conceptualisations of crime and justice in an increasingly 'digital society' might be further advanced by an associated and broadly cast field of digital criminological scholarship. The conventional scope of computer and cyber criminologies has arguably developed to the comparative neglect of a range of influences. These include the role of technologies in a widening range of offending and victimisation; the increasingly embedded nature of online/offline experiences in crime and justice; the online victimisation of marginalised communities (such as on the basis of race, gender, and sexuality); broader issues of persistent social and digital inequalities as they relate to crime and justice; and rapidly emerging topics including online justice movements, digital vigilantism, and so-called 'open-source' policing or social network surveillance. Conversely, we suggest that these are appropriate issues of empirical analysis and theorisation for criminology and, as such, there is much to be gained by moving beyond a relatively siloed cyber-oriented criminology towards an exploration of the broader implications of digital technologies as embedded in emerging technosocial practices that are shaping crime, deviance, criminalisation, and justice and community responses to crime in various ways.
Digital society and its implications for criminology Criminology has not been alone in its desire to better understand the influence of diverse contemporary technosocial practices. A diverse range of explanations has been offered through interdisciplinary concepts such as the network society (Castells 1996), the information society (Webster 1995), cyberculture (Levy 2001) and cybersociety (Jones 1994). Similar to the issues present in 'computer' and 'cyber crime' scholarship, many of these explanations have focused on a particular element of a technosocial shift to highlight or explain the causation of the change. Despite some limitations, a recurring theme amongst theories of technological advancement rests in establishing technologies as enabling (and disabling) rather than determining (Silverstone 1999: 21). One way that criminology can account for the enabling and disabling effects of technologies is to conceptualise crime, deviance and justice as increasingly technosocial practices within a digital society. Gere (2002: 12) advocates the utility of 'digital' as a 'marker of culture' that 'encompasses both the artefact and the systems of signification and communication that most clearly demarcate our contemporary way of life from others'. Key to the digital society, then, is the recognition of a shift in structures, socio-cultural practices and lived experience that does not distinguish between the online and offline world. By focusing on the digital, Deuze (2006) contends that researchers can explore the impact of technologies that shape cultural artefacts, arrangements and activities, both online and offline. Extending the disintegration of the boundary between online and offline realities, Baym (2015: 1) notes that the distinguishing feature of digital technologies is the manner in which they have transformed how people engage with one another. This enmeshment of the digital and social has also been referred to as the digitalisation of society, in which 'technology is society, and society cannot be understood or represented without its technological tools' (Castells 1996: 5). By focusing on digital society, over other prefixes such as cyber or virtual, criminologists are prompted to move beyond framing 'computer', 'cyber' or 'virtual' crime and justice as fundamentally distinct from or, indeed, oppositional to 'non-technological' forms of crime and justice. At the same time, encouraging research under the more pervasive concept of digital society draws the criminological imagination towards an exploration of the relational, cultural, affective, political and socio-structural dimensions of crime and justice that are reproduced, re-institutionalised and potentially resisted, in both familiar and unfamiliar ways. Indeed, part of our motivation for using digital 'society', over other similar and popular suffixes such as 'age' or 'era', is to deliberately invoke analyses of the social inequalities, socio-cultural practices and socio-political factors that underpin crime and justice more broadly, and that arguably persist as our lives become increasingly digital. We propose that such a conceptual focus on digital society also opens up several new and rapidly emerging foci for criminological theory and research. Although far from being a comprehensive list, we identify seven avenues for the study of crime, deviance and justice in a digital society, drawing on examples from interdisciplinary research across sociology, cultural and media studies, journalism, policing and surveillance studies, as well as law and
criminology. Digital spectatorship Just as traditional media and crime scholarship has highlighted tendencies for crime media consumers to be more punitive in their attitudes towards crime and justice, there is also potential for technological advancements that increase the immersion of crime news in our daily lives to be associated with an amplification of 'penal populism' (see Quilter 2012). The potential for increased spectatorship is itself facilitated in large part by the Internet of things and social media, as well as our 'perpetual contact' via wearable technologies that provide live access to crime and justice news as events unfold. Such immersion raises concerns with respect to intensifying misperceptions and fear of crime, as well as calling on everyday citizens to more actively participate in crime news as eye-witnesses and citizen journalists (see Allan 2013). Digital engagement Expanding beyond cultural criminology and media-crime scholarship, the portability, ubiquity and perpetual contact of digital technologies allow the public to adopt new 'gatewatching' roles (Bruns 2003, 2005). In accessing the diverse variety of media content available to them, publics are now able to (re)consume, (re)produce, and (re)publish through digital technologies, offering new opportunities for criminologists to explore (Bruns 2003, 2005). These opportunities lie across a variety of platforms such as social media (Facebook, Twitter, Instagram), traditional media sources (television, radio, print), and online media sources (news websites, blogs, Reddit). One example can be found in Milivojevic and McGovern's (2014) analysis of Facebook users' responses to Melbourne woman Jill Meagher's assault and murder, in which they identify disruptive narratives from the public that shifted the traditional media's all-too familiar and predictable victim-blaming tropes to provide a counter-frame that re-focused the emphasis towards men's violence against women. Digital investigation and evidence Digital technologies offer opportunities for a range of actors to explore and investigate criminal behaviour in both online and offline settings. Data that have been stored or transmitted on digital devices are increasingly and readily used to explore theories of how offences occurred, or to assist in developing other elements of offences such as providing an alibi or proving intent (Casey 2011: 7). Emergent in the discussion of digital evidence is the utility of technologies other than personal computers, such as mobile, personal and wearable devices, which expand the repertoire of investigators and traditional law enforcement agencies (Chaikin 2006). For example, wearable fitness technologies have been introduced as evidence in criminal trials to identify the location of key figures at the time of the crime (Rutkin 2015; Gottehrer 2015). Importantly, digital evidence is collected and used in different ways that require a greater understanding of the investigation process. Digital investigations raise new and important questions about how evidence is collected, retained and regulated in relation to privacy and individual liberties (Kerr 2005: 280). For example, where online platforms such as Facebook provide government agencies with new opportunities for investigation, the monitoring and policing of them can represent a form of surveillance creep (Trottier 2014: 79). This was evident during the 2011 Vancouver riot, where police and Facebook users drew on posted content and collaborated to identify suspected rioters (Trottier 2014).
Digital justice and 'digilantism' The democratisation effect of digital technologies has enabled state agencies to engage with the public in ways that were previously unavailable. The use of social media by police (Goldsmith 2015; McGovern and Lee 2012) and the courts (Johnston and McGovern 2013; McGovern 2011) encourages access to and engagement with the justice system, whilst also having problematic and potentially disruptive effects on traditional justice processes, such as the involvement of juries in the court system (Aaronson and Patterson 2012; Browning 2014). At the same time, the digital society has also encouraged 'informal' justice practices and community responses in relation to crime. For example, Corien Prins (2011) has advocated for a sub-field of e-victimology exploring how digital participation facilitates new practices for self-help and self-activism, as well as the increased potential for threats to victims' well-being and privacy. Similarly, Anastasia Powell (2014, 2015) and Bianca Fileborn (2014) have examined emerging 'informal justice' practices of victim-survivors and their advocates in response to sexual violence and street harassment operating in civil society. Meanwhile, some scholars have also raised concerns surrounding informal justice processes embracing the use of technologies, highlighting the potential for a digital media 'pillory' (see Hess and Waller 2014) and 'digilantism' (see Van Laer and Van Aelst 2013), whereby digital vigilantism can result in injustices, harassment and violence towards alleged offenders. Also labelled 'viral justice' (Aikins 2013; Antoniades 2012; Thompson, Wood and Rose 2016), such analyses suggest a need to further examine the nature and impacts of citizen-led justice practices that are enabled by digital participation. Digital surveillance Digital technologies enhance opportunities for state-sanctioned surveillance to occur. From this has emerged increasing sociological and criminological critique of the powers that are enabled by such technologies (Bauman and Lyon 2012; Lyon 2003; Graham and Wood 2003). Instances in which governments have been found by judicial bodies to have seriously breached privacy, due process and individual liberties have further intensified the critique of these programs (Bauman et al. 2014; Margulies 2013). As the opportunities for criminologists to explore digital technologies and surveillance expand, so too has a pervasive counter-surveillance developed through digital means, whereby agents of power within criminal justice systems are increasingly tracked, documented and held accountable for their actions and responsibilities (Bradshaw 2013; Marx 2003; McGrath 2004). In addition to surveillance of the powerful, the digital society allows for peer-to-peer or lateral digital surveillance, which monitors crime from collectives rather than from positions of privilege (Smyth 2012; Trottier 2012). For example, 'crowdsourced surveillance' represents a 'socio-technical assemblage' of citizens, policing and private institutions that allows for criminological investigation of the relationship between these technologies and responses to, and prevention of, crime (Trottier 2014: 81).
Digital space and embodied harms A growing body of literature is exploring issues of spatiality and embodiment as they relate specifically to the harms of gendered and sexual violence (see Henry and Powell 2015), as well as racial, sexuality and/or gender-identity based hate (Awan and Zempi 2016; Citron 2014; Mann, Sutton and Tuffin 2003). The harassment, forms of violence and hate speech experienced by such groups take place not only via digital communications but also in a specific context of broader patterns of violence and abuse that persist and are perpetuated by cultures and structures of inequality, marginalisation and exclusion. 'Cyber' versus 'real' typologies of harassment, violence and hate speech can serve to minimise harms enabled by communications and online technologies. In contrast, understanding these harms as situated in digital society arguably better captures the lived experiences of marginalised communities and the operation of power, inequalities and violence across every aspect of their daily lives. Digital social inequalities Threaded throughout each of the above potential foci of criminological research are persistent themes of social inequalities, such as the intersections of race, class, gender and sexuality. While sociological, political and technology studies have continued to theorise the nature of 'digital social inequality' (see Gilbert 2010; Halford and Savage 2010; Orton-Johnson and Prior 2013), such examinations are arguably under-developed in criminology. Yet both equity of access and equity of participation are increasingly important issues, not only in society more broadly but also in their implications for crime and justice. While unequal technosocial relations may be facilitating new practices and cultures of particularly racial and gender-based harms (Mann, Sutton and Tuffin 2003; Powell and Henry 2016), importantly, the capacity for and nature of resistance to these harms, and to broader racial and gender inequalities, have arguably been changed by digital communications in significant ways. The ability of marginalised communities to 'watch the watchers', to share video evidence of private abuses and police brutality, and to organise via both tweets and street protests against continued racial and gender inequalities are not merely technological shifts but, rather, have enabled an invigoration of social justice movements in a broader political context of disenchantment. Understanding the nature, impacts and justice movements of digital-social inequalities is thus a further crucial research topic within a digital criminology. In an edited collection titled What is Criminology? (Bosworth and Hoyle 2011), David Garland argues that, to the detriment of our discipline, criminology is losing its dialogic nature of cross-disciplinary engagement and needs to be regularly infused with empirical and theoretical innovation from the outside. In large part, our argument in this article is that criminological engagement with computer and cyber crime has, to date, been likewise largely insular, lacking a critical and interdisciplinary engagement with disciplines such as sociology, computer science, politics, journalism, and media and cultural studies. This, we suggest, is particularly detrimental to advancing a new generation of scholarship concerning technology, crime, deviance and justice in our digital age.
The avenues for digital criminological research presented here are intended both to encompass and to substantially expand the traditional focus of computer and cyber crime scholarship. At the same time, they represent an enticement and a provocation for continued development of the field. While there are many social and technological theoretical frameworks and disciplinary influences that may invigorate criminological research, what underlies many of them is a fundamental recognition that the influence of technology on contemporary crime and justice cannot be understood by treating technologies either as mere tools or as operating in a separate sphere of our experience. Rather, here we have deployed the concept of 'digital society' to emphasise the embedded nature of technology in our lived experiences of criminality, victimisation and justice; the emergence of new technosocial practices of both crime and justice; and the continued relevance of social, cultural and critical theories of society in understanding and responding to crime in a digital age. As such, 'digital criminology' refers to the rapidly developing field of scholarship that applies criminological, social, cultural and technical theory and methods to the study of crime, deviance and justice in a digital society. Rather than necessarily a sub-discipline per se, we advocate that digital criminology may provide a fruitful platform from which to expand the boundaries of contemporary criminological theory and research. Our intention is to foster a broader and ongoing conversation within the discipline that cuts across technology, sociality, crime, deviance and justice, and to inspire new conceptual and empirical directions. Some criminologists have begun to engage with computational and 'big data' methods (Birks, Townsley and Stewart 2012; Brantingham 2011). There is to date, however, a comparative dearth of criminological research that has begun to empirically and critically explore the range of challenges and opportunities presented by 'big data' analytics. More recently, Janet Chan and Lyria Bennett Moses (2016: 25) have noted that criminologists' relatively limited engagement with big data research has tended to lie in two main areas: social media data analysis, and an increasing uptake of computer modelling/algorithms as a predictive tool in police and criminal justice decision making. They suggest that criminologists and, indeed, social scientists more broadly must increasingly 'share the podium' and collaborate with technical experts to further progress this field. Correspondence: Dr Greg Stratton, Lecturer, Justice and Legal, School of Global, Urban and Social Studies, 411 Swanston Street, Melbourne VIC 3000, Australia. Email: <EMAIL_ADDRESS>
Much of this scholarship has been directed towards information privacy and data surveillance (Thomas and Loader 2000; Yar 2012). Moreover, minimal cyber crime scholarship has engaged with persistent social inequalities (the digital divide) as they relate to crime (see Halford and Savage 2010) and, as such, few studies have explored the unequal nature, impacts of and responses towards cyber crimes and other digital harms with respect to gender, gender-identity, race and/or sexuality (notable exceptions include Halder and Jaishankar 2012; Powell and Henry 2016; Mann, Sutton and Tuffin 2003).
8,470
2017-05-22T00:00:00.000
[ "Sociology", "Law", "Computer Science" ]
AAPM BTSC Report 377: Physicist Brachytherapy Training in 2021—A survey of therapeutic medical physics residency program directors Abstract Background Brachytherapy (BT) was the first radiotherapeutic technique used to treat human disease and remains an essential modality in radiation oncology. A decline in the utilization of BT as a treatment modality has been observed and reported, which may impact training opportunities for medical physics residents. A survey of therapeutic medical physics residency program directors was performed as part of an assessment of the current state of BT training during residency. Methods In March 2021, a survey consisting of 23 questions was designed by a working unit of the Brachytherapy Subcommittee of the American Association of Physicists in Medicine (AAPM) and approved for distribution by the Executive Committee of the AAPM. The survey was distributed to the directors of the Commission on Accreditation of Medical Physics Education Programs (CAMPEP)-accredited therapeutic medical physics residency programs by the AAPM. The participant responses were recorded anonymously in an online platform and then analyzed using MATLAB and Microsoft Excel software. Results The survey was distributed to the program directors of 110 residency programs. Over the course of 6 weeks, 72 directors accessed the survey online, and 55 fully completed the survey. Individual responses from the directors (including partial submissions) were evaluated and analyzed. Nearly all participating programs (98%) utilize high dose rate BT treatments, with 74% using low dose rate BT techniques. All programs treated gynecological sites using BT, and the next most common treatment sites were prostate (80%) and breast (53%). Overall, the residency program directors had a positive outlook toward BT as a radiotherapeutic treatment modality. Caseload and time limitations were identified as primary barriers to BT training by some programs. Conclusions Based on the responses of the program directors, it was identified that the residency programs might benefit from additional resources such as virtual BT training, interinstitutional collaborations, as well as resident fellowships. Programs might also benefit from additional guidance related to BT-specific training requirements to help program directors attest to Authorized Medical Physicist eligibility for graduating residents. INTRODUCTION Brachytherapy (BT) is an important radiotherapy modality that has been in use for various types of cancer treatments for over a century. Recently, there has been a decline in the utilization of BT as a treatment option. [1][2][3][4][5] While this decline has clear implications for public health and cancer morbidity, there are related challenges inherent in training future generations of brachytherapists. [1][2][3][4][5][6] Declining utilization directly correlates with declining opportunities for medical physicists in training to attend BT procedures. The path to American Board of Radiology certification for a therapeutic medical physicist now requires a residency to prepare for providing clinical services.
Residency programs are accredited by the Commission on Accreditation of Medical Physics Education Programs (CAMPEP), which is sponsored by several related professional societies, including the American Association of Physicists in Medicine (AAPM), the American College of Radiology (ACR), the American Society for Radiation Oncology (ASTRO), the Canadian Organization of Medical Physicists (COMP), and the Radiological Society of North America (RSNA). A set of essential guidelines provided by Report 249 of the AAPM regarding the training of medical physics residents includes a comprehensive list of BT-related competencies (section 4.5.4). 7 In this era of declining BT utilization, it is a valuable endeavor to assess the current state of BT-related medical physics resident training to anticipate current and future needs as well as identify barriers to resident training. The Brachytherapy Subcommittee (BTSC) of the Therapy Physics Committee (TPC) under the Science Council of the AAPM pursued an investigation into the current state of residency training. A working unit was formed, comprising two senior subcommittee members and two junior members. The unit proposed to evaluate the state of BT training through surveys of the trainers (i.e., residency program directors) and the trainees (i.e., residents). This work presents the results of a survey that was distributed to the CAMPEP-accredited residency program directors, with a goal to evaluate medical physics resident training in the areas of high dose rate (HDR), low dose rate (LDR), radiopharmaceutical, and electronic BT. Through this survey, the authors collected information about the various BT modalities used at different training programs: types and frequency of cases, distribution and frequency of treatment sites, typical workload/caseload per resident, and availability of adequate and diverse learning opportunities involving residents. Information regarding attestation of the training and experience requirements documented on the Nuclear Regulatory Commission (NRC) form 313A 8 was also gathered. This attestation may have implications for a newly employed physicist's ability to serve as an Authorized Medical Physicist (AMP), i.e., be listed on an institution's radioactive materials license. Survey design The survey consisted of 23 questions of four types, including multiple choice, select all that apply, five-point scale, and free response. The questions were designed to collect information regarding six areas: (1) the types of modalities, radioactive sources, treatment sites, and dose calculation algorithms utilized at each program; (2) the average annual patient- and fraction-based caseload at each program as well as the length of time residents spend training in BT; (3) AMP eligibility requirements as they relate to the training received during a residency program; (4) program directors' views on BT as a treatment modality; (5) program directors' views on the BT training received by residents both generally as well as specific to their own program; and (6) a free response question to identify additional thematic information not directly asked during the survey. A copy of the survey questions is provided in the Appendix. Marcrom et al. 6 designed a similar survey for radiation oncology residents, the results of which were published in 2018. Approval process The survey content was thoroughly vetted by the AAPM. The survey was first reviewed and approved by the BTSC before undergoing review by the TPC.
The survey was then distributed to both the Professional and Education Councils for comments and approval. Following consideration and incorporation of feedback, the survey was presented to the Executive Committee (ExCom) of the AAPM by the Education Council. The survey was approved for distribution to the residency program directors by ExCom. Survey distribution A link to the survey was distributed among the program directors according to the CAMPEP-accredited therapy residency program list (March 2021). 9 This list included 110 different programs. The survey invitation was sent by AAPM headquarters staff, and a response was requested within 6 weeks. Reminder emails were sent at the completion of weeks 3 and 5, as well as a final reminder that was sent on the day of the survey deadline. Analysis Numerical results were analyzed using summary statistics with Microsoft Excel 10 and MATLAB 11 software programs. Responses from participants that consisted of free-form text were collated using identification of keywords and the authors' interpretation of the comments. Common pertinent themes were identified based on the comments and are summarized in the Results section. Survey response rate Seventy-two program directors responded to the survey out of 110 survey recipients. Fifty-five program directors completed the full survey, for an overall response rate of 65% and a completion rate of 50%. Surveys were considered "completed" if respondents electronically submitted the survey, even though they may not have answered all questions. All survey answers were included in the analysis independent of whether a program director completed the survey in full or in part. The average time spent on the survey by all respondents was 27 min. The average time required to complete the survey was slightly greater, at 33 min. 3.2.1 Modalities and treatment sites at programs Program directors were asked about the treatment sites, source types, treatment modalities, caseload for each modality, dose calculation algorithms, imaging modalities, and use of therapeutic radionuclides at their respective institutions. This information allowed evaluation of the breadth of procedures in which residents receive BT-related training. The distribution of the utilization of BT for various treatment sites is presented in Figure 1. All program directors (100%) out of a total of 60 who answered this question reported treating gynecological (GYN) sites using BT, followed by prostate (80%) and breast (53%). Skin/surface, ocular, sarcoma, endobronchial, liver, head and neck, as well as brain cancer treatments using BT were also reported. A variety of BT sources are used by the survey participants for treatments. Individual respondents could select multiple sources for this question. The distribution of sources utilized for BT treatments by the programs is presented in Figure 2. The results of the reported BT technique and treatment site distribution are presented in Figures 3 and 4. Fifty-one out of 52 respondents to this question (98%) utilize HDR vaginal cylinder treatments, followed by 75% of the respondents using HDR interstitial GYN treatments, and 73% using HDR tandem-and-ring treatments. LDR prostate treatments and HDR tandem-and-ovoid treatments are performed by approximately two-thirds of the programs.
The distribution of other treatment sites presented in Figure 3 contrasts the utilization of BT among different programs for sites such as ocular, skin, breast, liver, and endobronchial, and showcases the variety of clinical applications in which BT is utilized. In terms of caseload, as seen in Figure 4, the use of BT for treatment of GYN cancers was most common. Many programs specialized in, or had exceedingly large numbers of fractions compared with the average for, certain procedures such as skin, liver, and ocular BT. Most (98%) programs reported using the AAPM TG43 formalism 12 as their BT dose calculation algorithm. Fifteen percent of the programs reported using model-based dose calculation algorithms (MBDCAs) as well, with one program solely using MBDCAs. Finally, 4% of the programs reported using a TG43 hybrid calculation approach. All but one program reported using computed tomography (CT) as an imaging modality for BT. Close to 80% of the programs reported using magnetic resonance imaging (79.6%) or ultrasound (77.8%) while performing BT procedures. Other imaging modalities frequently used included fluoroscopy (24.1%), digital radiography (18.5%), single-photon emission computed tomography (11.1%), and others (5.6%). Other modalities included PET, DYNA CT, and linac-based kV imaging. Seventy-four percent of the programs used three or more modalities for BT procedures. Approximately half (48%) of the programs reported residents receiving training in the use of therapeutic radionuclides. The list of radionuclides used is presented in Table 1, along with the percentage of programs using each specific radionuclide. The calculated percentages are normalized to the total number of programs that reported training their residents in the use of therapeutic radionuclides. BT training details Program directors were asked about the annual BT caseload at their institution in terms of patients treated as well as the number of fractions. Additionally, data were collected about the length of time residents spend training in BT in terms of months on a BT rotation as well as the number of training hours received. Twenty-two percent of programs reported an annual caseload between 101 and 150 BT patients each year, with 21% of programs treating more than 300 patients each year using BT. Only three programs reported 50 or fewer cases per year. The plurality (31%) of programs reported performing more than 500 BT fractions each year, with 21% of programs performing between 201 and 300 fractions each year. Twenty-five percent of the directors reported 200 or fewer fractions treated per year (and no programs reported 50 or fewer). Most programs (66%) require residents to spend 3-4 months on a BT rotation. Five percent of programs reported not having a dedicated BT rotation. All programs that offer a dedicated BT rotation reported residents spending no fewer than 2 months training in BT. Fourteen percent of the programs reported residents spending more than 5 months on BT-related training. The plurality (44%) of programs reported residents receiving between 300 and 500 hours of BT-specific training. Fifty-seven percent of the program directors reported attesting to AMP eligibility for their graduating residents, while 43% do not. When program directors were asked if their program would benefit from more structured guidelines for the number of cases required for a physicist to become an AMP, 42.6% of program directors responded in the affirmative and 14.8% replied in the negative.
The remaining 42.6% were unsure if their program would benefit from more detailed guidelines. Program directors' view of BT as a modality Program directors were asked about the importance of BT as a treatment modality, whether they thought BT utilization will increase in the future, as well as their thoughts on the adequacy of the status of BT training. When asked about the importance of BT as a treatment modality in radiation oncology, 86% of program directors believed it was essential, 3% were neutral, and 11% were ambivalent. When asked about the future of BT utilization for treating cancer, most respondents (57%) believed there will be some degree of increase, while 39% of the respondents believed there would be no change, and 4% believed there will be a decrease in utilization. None of the program directors reported believing that BT training was optional. A third of the directors believed that the training was adequate, 30% believed it requires some expansion, and 37.5% of the directors believed significant expansion of the BT training was needed. 3.2.4 Program directors' view of resident training Program directors were asked to provide their opinions on the preparedness of their residents at the end of the residency training program, the degree of support from radiation oncology colleagues in allowing medical physics residents to engage in BT procedures, and the level of independence achieved by residents for different BT tasks. Additionally, directors were asked if a BT fellowship would be beneficial to their residents, what the greatest hurdles are in providing BT training, and what resources would be needed to overcome these hurdles. Finally, programs were asked what percentage of their residents practice BT in their first postresidency position. The plurality (43.6%) of program directors believed CAMPEP-accredited programs partially prepare residents to independently practice BT in their first postresidency position, with 36.4% of the directors perceiving their residents as more than partially prepared but not proficient. The results are shown in Figure 5 (perceived level of preparedness of residency program graduates to independently practice BT), with 16.4% of the directors believing residents are proficient in BT after finishing their residency program. Only two programs (3.6%) believed that their residents are underprepared. When asked if radiation oncologists are generally supportive of medical physics residents being involved in BT procedures, most (59%) program directors believed radiation oncologists were encouraging, while 28.6%, 10.7%, and 1.8% of directors felt radiation oncologists were supportive, neutral, or unsupportive, respectively. Figure 6 presents the distribution of program directors who believed their residents achieve independence in various clinical BT-related tasks. All program directors believed their residents achieved independence in three areas: (1) safety and radiation protection, (2) BT measurement equipment, and (3) physics quality assurance (QA) of radiation sources. Treatment planning was the area in which the fewest number of programs believed residents achieve independence. Only 65.5% of program directors believed their residents achieve independence in treatment planning for all sites treated at their program, and only 72.7% of directors believed independence is achieved by residents in planning of specific treatment sites.
Most (87.3%) respondents believed their residents could benefit (34.6%) or could maybe benefit (52.7%) from a BT fellowship. Only 12.8% of program directors believed their residents would not benefit from such a fellowship. Approximately a third of program directors (32.1%) were unsure if graduates of their program practiced BT in their first postresidency position. The results are provided in Table 2. 3.2.5 Free response: Identification of barriers to adequate resident training The greatest hurdle identified by residency directors regarding resident training was caseload. This includes the lack of treatment options and/or variety of case types that are handled at various programs. The second largest reported barrier was time constraints, including both lack of time for daily activities and, on a larger scale, the residency duration itself. Some procedures and modalities would not occur during a trainee's BT rotation, which correlates with a lack of caseload. A resident's limited time commitment while balancing other clinical tasks was also noted. Many respondents recognized they could not effect a change in patient-load constraints and focused on alternatives such as virtual resources for resident training. This included access to a database of treatment scans and plans, online learning/lecture series, and simulation labs. Others suggested longer residency training. A common theme was a need for higher levels of support for BT procedures, ranging from physician acknowledgment of physicist necessity to more space and patience from the staff. Some centers reported no concerns with caseload or time constraints. Interestingly, centers with large patient volumes (>300 cases per year) were equally likely to report concerns with time constraints and caseloads, indicating that even large patient numbers alone are not a reliable solution for case and modality diversity. Several respondents expressed apathy or lack of ability to change their training opportunities as barriers to BT training. While this may reflect the inability of an individual to change practice patterns at their institution, some expressed the feeling that BT use was declining following the patterns of care described earlier. [1][2][3][4][5][6] Collaboration with other institutions, fellowship programs, and AAPM-sponsored cross-training were ideas that would allow more hands-on opportunities. Several respondents indicated a desire to have detailed guidelines and caseloads, like medical radiation oncologists, when attesting to AMP license requirements. Another recurring theme was that of needed collaboration between various programs to afford residents exposure to diverse caseloads, particularly those involving newer modalities and radiopharmaceutical administration. DISCUSSION One method to display a data-driven look at the strengths and weaknesses of a particular system is to perform a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis. In Table 3, relevant survey questions and answers have been assigned an appropriate quadrant for quick overview and review. A topic was assigned to internal factors if residency directors and/or institutions could potentially influence or change the topic. A topic was assigned to external factors if outside institutions and organizations or cancer care in general were responsible. AAPM Report 249 characterizes the guidelines and essential curriculum for a residency in medical physics. 7 Published in 2013, it updates prior reports written in 2006 and 1992.
The report is now nearly 10 years old and may not reflect the current state of professional practice and patterns of care. The stated requirements for medical physicists are generally topical and do not require a specific number of cases. In comparison, radiation oncologists in training are required to accumulate and document a minimum number of cases in various body sites, using various modalities and techniques. 8 In order to understand how well contemporary medical physics residencies can address the requirements of Report 249, the requirements were evaluated and grouped by topical area. These topical areas are discussed in this section. According to AAPM Report 249, trainees should demonstrate an understanding of a variety of treatment sites including GYN, prostate, and soft tissue sarcomas, as well as optional sites such as the eye and the liver. Training modalities included HDR, LDR, and interstitial BT, as well as optional modalities such as pulsed dose rate (PDR), electronic BT, and Y-90 microsphere applications. Based on the information presented from responses to survey questions 1 and 6, the authors found that approximately 100% of centers treated GYN, 80% prostate, 45% ocular, and 35% sarcoma, and only 12% were involved with electronic BT. About 40% of the programs reported using Y-90, and 5% PDR. The greatest barrier to adequate resident training identified by the residency directors was caseload. This includes the lack of modalities and/or variety of case types that are handled at various programs. Programs that treated fewer than 150 patients a year were more than twice as likely to report issues than programs that treated more than 150 patients per year (Table 3 presents the SWOT analysis of therapeutic medical physics resident BT training). This result reaffirms the finding of Marcrom et al., which was reported for physician residents. 6 The authors reported that "caseload was the greatest perceived barrier in BT training, with confidence correlated with case volume". 6 One solution to address the lack of caseload may be the adoption of simulation trainings that use anthropomorphic phantoms, as described by several authors. [13][14][15] On a larger scale, the disparity of time allotted for BT training varied from no formal training to more than 5 months, with the average time being 3 months. Time constraints on the treatment team were also indicated by the respondents, for example, in having limited time to train the residents in these modalities, which may be partially due to the urgent nature and short timeframe of BT procedures. Regarding treatment planning and medical dosimetry, trainees are commonly required to perform a decay calculation and calculate the total dose delivered by a temporary and permanent implant, as outlined by the Report 249 recommendations. These are general and can be performed in a classroom setting. For specific plans, the report requires a treatment plan for a vaginal cylinder applicator, and inspection of Figure 3 shows that this can readily be met by 100% of responding program directors. Considering a treatment plan utilizing a cervical applicator, Figure 3 shows that approximately 75% and 65% of programs can provide the service for a tandem-and-ring and a tandem-and-ovoid, respectively. While clinics may offer one (or both), it is not clear that all programs can easily provide this service using internal resources.
Finally, for an optional dose calculation requirement for a microsphere BT treatment, Figure 3 shows that fewer than 40% of programs report practicing microsphere therapy. Similarly, ocular BT is included as an optional training topic, although it is also only utilized at approximately 45% of responding programs. In summary, most programs will not be able to offer the optional training experiences without external assistance. For the radiation safety requirements listed in Report 249, most requirements can be performed in the classroom or laboratory setting. These include performing source receipt procedures, source management activities, room surveys, regular inventories, leak checks for sealed sources, shielding calculations, and shielding surveys. Figure 6 presents the distribution of respondents who believe their residents achieve independence in some of these tasks. For source assays or calibrations, Report 249 requires discussion and performance of an assay for sealed sources. If possible, it also recommends assays for unsealed sources; however, as noted above, most programs will not be able to offer this experience to their trainees. Finally, the trainee is expected to discuss and perform verification of source strength and reconcile their measurements with the vendor's certificate. Report 249 also has several items related to QA activities for trainees. These include demonstrating an understanding of and participating in periodic spot checks, safety procedures, and source exchange QA, including source calibration. In addition, there is a requirement to understand the comprehensive periodic QA of a remote afterloader. Considering that nearly all programs offer HDR Ir-192 treatments, most of these items should be readily achievable for most clinics. Additional QA training topics include secondary dose calculations, treatment planning system QA, and acceptance testing for metrology equipment. These items are generally applicable and should be achievable at most training programs. Our survey indicated program directors are divided on whether to attest to AMP eligibility via NRC form 313A for residents at the completion of residency training. This could pose a challenge for graduates seeking employment at institutions offering BT services. This study included program directors reporting the amount of time residents spend receiving BT training in units of months, with some residents receiving as few as 2 months, while others received more than 5 months. Of note, form 313A for AMPs requires an accounting of training duration in units of years but does not clarify if the training must be specific to BT or general training inclusive of BT. 8,16 Uncertainty regarding whether graduates of medical physics training programs meet AMP eligibility requirements is evident from the program directors' responses. Most respondents, that is, 87% of the program directors who stated they do not attest to AMP eligibility and 84% of the program directors who stated they do attest to AMP eligibility, reported that their program would benefit from having more structured guidelines for attesting to AMP eligibility. Only 14.9% of program directors stated their program would not benefit from additional guidance regarding completion of NRC form 313A. There are differences in the requirements of form 313A for authorized user radiation oncologists (AUs) compared with AMPs. 8,16
Most notably, the training and education requirements for AUs are measured in hundreds of hours, while training and education requirements for AMPs are measured in years. The duration of residency training for AUs is generally twice as long as that of AMPs, making it easier to assure compliance with the AU requirements. In addition, the AU eligibility criteria directly reference the Accreditation Council for Graduate Medical Education residency training requirements, which include specification of a minimum number of clinical cases. In contrast, the CAMPEP requirements for BT training leave caseload requirements to the discretion of the local program director. Thus, there is the potential for more variability in AMP training compared with AU training. Data collected during this study could be used to inform the types of BT treatments and the number of cases per treatment type that training centers could accommodate based on typical caseloads. Additional effort would be needed to identify ways for residents to receive additional training experiences if their program did not offer the minimum requirements. Potential solutions include virtual offerings, simulations, 17 and workshops. Medical physicists should review the current AMP requirements and determine whether residency training programs should be designed to give program directors confidence to attest to form 313A (or equivalent) for BT. Alternatively, the medical physics community may petition the NRC to update or modify AMP criteria to better reflect modern medical physicist training and preparation. Addressing this challenge is important, as it represents a potential barrier for physicists to practice BT. This work was limited to surveying CAMPEP-accredited residency programs. The data, as well as the accompanying analysis about BT as a treatment option and its availability, are thus representative of programs of a similar nature and may not be representative of all residency programs. Additionally, this survey reflects opinions and training activities reported by medical physics residency program directors during the spring of 2021, and the results may have been influenced by SARS-CoV-2 pandemic-related implications. CONCLUSIONS A professional society-driven survey of medical physics residency program directors was performed to glean insight into the current state of training specific to the practice of BT. Based on the responses of the program directors, it was identified that residency programs might benefit from additional resources to support training, such as simulations, fellowships, and external rotations, in an era of declining utilization. In addition, responses demonstrated that, specific to the practice of BT, there was little commonality between programs due to the variety of sites, techniques, and modalities. This challenges uniformity of practice and training in accredited programs. Finally, program directors might benefit from additional guidance related to attesting to AMP eligibility for graduating residents. AUTHOR CONTRIBUTIONS All authors worked on all aspects of the manuscript. Manik Aima and Samantha J. Simiele contributed equally to the publication and should be considered joint first authors. Two authors acted as more senior authors and two as more junior in a mentorship arrangement. The survey design, results, and written summary were all contributed to by all authors.
11. Please rank the following statement on the scale of (1) not essential - (3) neutral - (5) essential: Brachytherapy is an essential modality in radiation therapy.
12. Please rank the following statement on the scale of (1) not utilized - (3) no change - (5) expanded utilization: Brachytherapy will be an important modality in radiation therapy in the future.
13. Please rank the following statement on the scale of (1) optional elective - (3) adequate focus - (5) needs significant expansion: Training in brachytherapy is a necessary component in medical physics residency programs.
14. Please rank the following statement on the scale of (1) no preparation - (3) partially prepared - (5) proficient: CAMPEP-accredited medical physicist residency programs sufficiently prepare physicists to be independent brachytherapy physicists in their first position/employment.
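As a worked illustration of how responses to five-point items like those above can be tallied, the following sketch computes the response and completion rates quoted in the Results (72 of 110 directors accessing the survey, 55 complete submissions) and summarizes one five-point item. It is a minimal sketch in Python; the Likert answers shown are hypothetical, and the original analysis was performed in MATLAB and Microsoft Excel rather than Python.

# Minimal sketch of the survey summary statistics; only the three counts
# below are taken from the text, the Likert answers are hypothetical.
from collections import Counter

invited, accessed, submitted = 110, 72, 55
print(f"response rate:   {accessed / invited:.0%}")   # -> 65%
print(f"completion rate: {submitted / invited:.0%}")  # -> 50%

# Hypothetical answers to a five-point item such as question 11 above.
likert_answers = [5, 5, 4, 5, 3, 5, 4, 5, 1, 2]
counts = Counter(likert_answers)
for score in range(1, 6):
    share = counts.get(score, 0) / len(likert_answers)
    print(f"score {score}: {share:.0%}")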
6,595.4
2023-01-18T00:00:00.000
[ "Medicine", "Physics" ]
Temminck’s pangolins relax the precision of body temperature regulation when resources are scarce in a semi-arid environment ABSTRACT Climate change is impacting mammals both directly (for example, through increased heat) and indirectly (for example, through altered food resources). Understanding the physiological and behavioural responses of mammals in already hot and dry environments to fluctuations in the climate and food availability allows for a better understanding of how they will cope with a rapidly changing climate. We measured the body temperature of seven Temminck’s pangolins (Smutsia temminckii) in the semi-arid Kalahari for periods of between 4 months and 2 years. Pangolins regulated body temperature within a narrow range (34–36°C) over the 24-h cycle when food (and hence water, obtained from their prey) was abundant. When food resources were scarce, body temperature was regulated less precisely, 24-h minimum body temperatures were lower and the pangolins became more diurnally active, particularly during winter when prey was least available. The shift toward diurnal activity exposed pangolins to higher environmental heat loads, resulting in higher 24-h maximum body temperatures. Biologging of body temperature to detect heterothermy, or estimating food abundance (using pitfall trapping to monitor ant and termite availability), therefore provide tools to assess the welfare of this elusive but threatened mammal. Although the physiological and behavioural responses of pangolins buffered them against food scarcity during our study, whether this flexibility will be sufficient to allow them to cope with the further reductions in food availability that are likely with climate change is unknown. Figure S2. The 24h body temperature patterns of a pangolin, showing the estimated (Est.) and actual (Act.; camera trap data) burrow emergence and return times for two consecutive summer days (top) and two consecutive winter days (bottom).
Pangolin 24h body temperature records showed a notch (indicated by an increase or decrease in body temperature of at least 0.5°C in less than one hour; 93% of notches were ≥0.5°C for data points for which we had camera trap data for validation) around the times of emergence from and return to their burrows. We tested the accuracy of the body temperature deviations to identify the time of emergence and return to the burrow by matching 178 camera trap times of emergence and 65 camera trap times of return with 24h body temperature records. The success of using a conspicuous body temperature notch to detect burrow emergence or return was calculated by counting the number of times (reported as a percentage of time) the notch was detectable to within one hour of the actual emergence and return using camera traps (Figure S2). The use of the body temperature notch to detect time of emergence (for when camera trap data were available) was successful for 89% of the time during autumn, for 89% of the time during spring, and for 100% of the time during winter. In other words, for only up to 11% of the time, depending on the season, the body temperature notch was not conspicuous enough to detect time of emergence from the burrow. During summer, however, the use of body temperature notches to detect time of emergence was only possible for 64% of the time because the notches were not as distinct during summer compared to the rest of the year. The low detection success during summer resulted in fewer estimated times of emergence being available for analysis during summer compared to the rest of the year. The use of the body temperature notch to detect time of return (for when camera trap data were available) was successful for 100% of the time during autumn, for 92% of the time during spring, for 100% of the time during winter, and for 93% of the time during summer. In other words, for only up to 8% of the time, depending on the season, the body temperature notch was not conspicuous enough to detect time of return to the burrow. The time of emergence and return estimation error was determined by calculating the absolute difference between the estimated time of burrow emergence or return using 24h body temperature patterns and the actual time of burrow emergence or return using camera traps. On average, the time of emergence estimation error was 32 ± 28 (mean ± SD of total emergences) minutes for summer, 20 ± 22 minutes for autumn, 12 ± 11 minutes for winter, and 18 ± 15 minutes for spring. The time of return estimation error was 17 ± 18 (mean ± SD of total returns) minutes for summer, 13 ± 7 minutes for autumn, 11 ± 15 minutes for winter, and 21 ± 24 minutes for spring. Frequency histograms created using activity obtained from camera traps only (n=243; 178 emergences, 65 returns) and activity derived from 24h body temperature only (n=4998; 2744 emergences, 2254 returns) revealed that the overall distribution of the data was similar for the two methods (Figure S3). We were therefore confident that 24h body temperature notches could be used to accurately estimate time of emergence and return to the burrow to within one hour of actual activity.
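The notch-detection and validation procedure described above lends itself to a short algorithmic restatement. The sketch below flags any body temperature change of at least 0.5°C occurring in under one hour, matches each camera-trap event to the nearest flagged time within 60 minutes, and reports the hit fraction and mean absolute error. It is a minimal Python sketch; the example data, the 10-minute sampling interval and the function names are hypothetical illustrations rather than the study's actual pipeline.

# Minimal sketch of the body temperature "notch" detection described above:
# flag any change of at least 0.5 degC occurring in less than one hour, then
# match each camera-trap event to the nearest flagged time within 60 minutes.
from datetime import datetime, timedelta

def detect_notches(records, min_step=0.5, window=timedelta(hours=1)):
    """records: list of (timestamp, body_temp_degC) sorted by time."""
    notches = []
    for (t0, temp0), (t1, temp1) in zip(records, records[1:]):
        if (t1 - t0) < window and abs(temp1 - temp0) >= min_step:
            notches.append(t1)
    return notches

def match_events(notch_times, camera_times, tolerance=timedelta(minutes=60)):
    """Return (hit_fraction, mean_abs_error_minutes) against camera traps."""
    errors = []
    for cam in camera_times:
        nearest = min(notch_times, key=lambda t: abs(t - cam), default=None)
        if nearest is not None and abs(nearest - cam) <= tolerance:
            errors.append(abs(nearest - cam).total_seconds() / 60)
    hit_fraction = len(errors) / len(camera_times) if camera_times else 0.0
    mean_error = sum(errors) / len(errors) if errors else float("nan")
    return hit_fraction, mean_error

# Hypothetical example: temperature logged every 10 minutes around emergence.
day = datetime(2016, 7, 1)
records = [(day + timedelta(minutes=10 * i), temp)
           for i, temp in enumerate([34.9, 34.9, 34.8, 34.2, 34.1, 34.1])]
camera = [day + timedelta(minutes=35)]
print(match_events(detect_notches(records), camera))  # -> (1.0, 5.0)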
Figure S3. Frequency distributions of time of emergence (A) and return (B) data obtained directly from camera traps only (black bars) and indirectly from 24h body temperature patterns only (grey bars) for which camera trap data were available. The hourly bins represent the time of day during which emergence from or return to the burrow occurred (for example, 1 = 01h00-01h59 and 20 = 20h00-20h59).
Figure S5. Distribution of burrow emergence times for each animal.
Figure S6. Estimated marginal means of daily minimum globe temperature by season, averaged over period. Error bars show the 95% confidence interval for the estimates.
Figure S7. Estimated marginal means of daily maximum globe temperature by season, averaged over period. Error bars show the 95% confidence interval for the estimates.
Figure S8. Estimated marginal means of daily amplitude of globe temperature by season, averaged over period. Error bars show the 95% confidence interval for the estimates.
Figure S10. Raw monthly rainfall recordings at the study site for the period 1 November 2015 to 31 October 2017.
Figure S14. Estimated marginal means of prey abundance by season, averaged over period. Error bars show the 95% confidence interval for the estimates. Values are back-transformed from the log scale.
Figure S15. Estimated marginal means of daily minimum body temperature by season, averaged over period. Error bars show the 95% confidence interval for the estimates.
Figure S16. Estimated marginal means of daily maximum body temperature by season, averaged over period. Error bars show the 95% confidence interval for the estimates.
Figure S17. Estimated marginal means of daily amplitude of body temperature by season, averaged over period. Error bars show the 95% confidence interval for the estimates.
Figure S18. Estimated marginal means of daily mean body temperature by season, averaged over period. Error bars show the 95% confidence interval for the estimates.
Table S1. Days of data per season in each period of study.
Table S3. Linear regression model of seasonal and yearly (period) differences in daily minimum globe temperature (ºC).
Table S4. Pairwise contrasts of estimated marginal means of daily minimum globe temperature.
Table S5. Linear regression model of seasonal and yearly (period) differences in daily maximum globe temperature (ºC).
Table S6. Pairwise contrasts of estimated marginal means of daily maximum globe temperature.
Table S7. Linear regression model of seasonal and yearly (period) differences in daily amplitude of globe temperature (ºC).
Table S10. Pairwise contrasts of estimated marginal means of daily mean globe temperature.
Table S13. Generalised linear mixed-effects model (negative binomial link function) results of the interannual and seasonal differences in ant abundance.
Table S14. Pairwise contrasts of estimated marginal means for prey abundance (ants/trap).
Table S16. Pairwise contrasts of estimated marginal means for minimum body temperature.
Table S18. Pairwise contrasts of estimated marginal means for maximum body temperature.
Table S20. Pairwise contrasts of estimated marginal means for daily amplitude of body temperature.
Table S22. Pairwise contrasts of estimated marginal means for daily mean body temperature.
1,897
2023-08-28T00:00:00.000
[ "Environmental Science", "Biology" ]
Strain Tunable Intrinsic Ferromagnetism in 2D Square CrBr$_2$ Two-dimensional (2D) intrinsic magnetic materials with a high Curie temperature (Tc) coexisting with 100% spin-polarization are highly desirable for realizing promising spintronic devices. In the present work, the intrinsic magnetism of monolayer square CrBr2 is predicted by using first-principles calculations. Monolayer CrBr2 is an intrinsic ferromagnetic (FM) half-metal with a half-metallic gap of 1.58 eV. Monte Carlo simulations based on the Heisenberg model estimate Tc as 212 K. Furthermore, large compressive strain makes CrBr2 undergo a ferromagnetic-antiferromagnetic phase transition, while biaxial tensile strain larger than 9.3% leads to the emergence of semiconducting electronic structures. Our results show that the intrinsic half-metallicity with a high Tc and controllable magnetic properties makes monolayer square CrBr2 a potential material for spintronic applications. I. INTRODUCTION Spintronics offers tremendous developments in the field of quantum computing and the next generation of information technology [1,2]. A 100% spin-polarized FM half-metal is a key ingredient of high-performance spintronic devices [3]: one spin channel exhibits metallic properties, while the other spin channel has an energy gap like a semiconductor. Since the half-Heusler alloy ferromagnetic materials NiMnSb and PtMnSb were predicted in the 1980s [4], there has been further research into magnetic half-metals, such as RbSe, CsTe, NbF3, CoH2, ScH2 and so on [5][6][7]. With the explosive research on 2D materials in the past decade, 2D FM materials have been intensively investigated as ideal candidates for nano-spintronic devices [8]. Although FM half-metals have also been discovered in 2D materials, in most of them the half-metallicity is still realized only under external conditions, such as pressure or doping [9,10]. So far, an intrinsic, completely spin-polarized character is relatively rare in natural 2D materials [11]. Recently, as a new class of 2D materials, metal dihalides (MX2, X=Cl, Br, I) have become promising for nanoelectronic devices due to their half-metallicity. Experiments show that bulk metal dihalides have a natural layered structure, similar to many other 2D materials [12]. Among them, FeCl2 is the classic experimentally known material with half-metallicity in monolayer form [8].
The half-metallic gap and spin gap are about 1.0 eV and 4.4 eV, respectively, but the low Tc of 17 K [13] greatly limits its prospects in spintronics. In addition, there has been a flurry of research focusing on magnetic half-metals, such as TiCl3, VCl3, and MnX (X=P, As) [14]. However, an intrinsic half-metallic material with a wide spin gap and high Tc is still absent [15][16][17]. Thus, the exploration of 2D intrinsic FM half-metals is still an important frontier. In this work, the structure of monolayer square CrBr2 is identified by the particle swarm optimization (PSO) method [18,19], within an evolutionary algorithm as implemented in the CALYPSO code [20]. Then, we systematically investigate the magnetic states of monolayer CrBr2 by using first-principles calculations. The results show that it is an intrinsic half-metallic ferromagnet with a wide half-metallic gap of 1.58 eV and a spin gap of 3.72 eV. More strikingly, the electronic structures and magnetic configurations can be controlled by feasible uniaxial and biaxial strains. A uniaxial compressive strain of 2.7% or a biaxial compressive strain of 2.1% causes a magnetic phase transition from FM to AFM. Biaxial tensile strain larger than 9.3% can open the spin-up gap of monolayer CrBr2; thus, the electronic performance transfers from half-metal to semiconductor. At last, monolayer square CrBr2 is predicted to have a high Tc of 212 K, which can be improved to about 404 K by strain. Our calculations indicate that monolayer CrBr2 is potentially promising for spintronic devices. II. METHODS The low-energy structure of 2D square monolayer CrBr2 was identified by the PSO method [18,19] within an evolutionary algorithm as implemented in the CALYPSO code [20]. In the PSO simulation, the population size and the number of generations were set as 30 and 50, respectively. The monolayer CrBr2 was set in the xy plane with a buckled structure along the z direction, and the vacuum space was set at 20 Å. The structural study in the first generation was based on random structures generated automatically using imposed symmetry constraints. At each step, only 60% of the structures were taken into the next generation, while the other structures were generated randomly to guarantee structural diversity. Density functional theory (DFT) calculations were performed using the projector augmented wave method, as implemented in the Vienna ab initio simulation package VASP [21][22][23]. We used a Perdew-Burke-Ernzerhof (PBE) type generalized gradient approximation (GGA) for the exchange-correlation functional [24]. A plane-wave cutoff energy of 450 eV and a Monkhorst-Pack special k-point mesh of 11 × 11 × 1 for the Brillouin zone integration were used in all calculations [25]. The HSE06 hybrid functional was employed to obtain the accurate electronic band gap [26]. A conjugate-gradient algorithm was employed for geometry optimization, using convergence criteria of 10−5 eV for the total energy and 0.01 eV/Å for the Hellmann-Feynman force components. A supercell of 4×4×1 unit cells was built to calculate the phonon spectrum by using density functional perturbation theory (DFPT) [27]. In order to estimate the Curie temperature, we mapped the system by Monte Carlo simulations based on the Heisenberg model in a 2 × 2 × 1 supercell. III. RESULTS AND DISCUSSION
As shown in Fig. 1(a), the optimized lattice parameters for CrBr2 are a = b = 3.91 Å. Each layer of CrBr2 is composed of three atomic layers: a layer of Cr sandwiched between two layers of Br. The Cr-Br bond length is 2.60 Å and the Cr-Br-Cr bond angle is about 103°. The absence of imaginary phonon modes in the phonon spectrum indicates the dynamic stability of CrBr2 [Fig. 1(b)]. To determine the preferred magnetic ground state, we considered two different magnetic configurations, i.e. the FM configuration and the AFM configuration [Fig. 1(c)]. The energy difference ∆E (∆E = E_AFM − E_FM) of 0.176 eV in the 2×2×1 supercell indicates an FM ground state of CrBr2. Additionally, the energy difference of 6.54 eV between the non-magnetic state and the magnetic state is so large that the non-magnetic state can be safely excluded [15]. CrBr2 has FM ordering with large magnetic moments, and the magnetism mainly stems from the Cr atoms, while the Br atoms hold small opposite spin moments [Fig. 1(c)]. The square crystal structure renders the magnetic state governed by a significant competition between two mechanisms [28]. One is the direct AFM interaction between two neighboring Cr atoms. The other is the superexchange FM interaction of two neighboring Cr atoms mediated by a Br atom. The preference for either FM or AFM ordering can be rationalized using the Goodenough-Kanamori-Anderson (GKA) formalism [29][30][31]. In monolayer CrBr2, the Cr-Br-Cr bond angle of 103° is closer to 90°, which is usually associated with FM ordering according to the GKA rules. So the neighboring Cr atoms favor superexchange over the direct interaction, which leads to the FM ground state. Notably, the electronic structures of monolayer CrBr2 with the PBE and HSE06 functionals show that the spin-up states cross the Fermi level, while the spin-down channel acts as a semiconductor [Fig. 2]. Furthermore, the intrinsic half-metallic property of monolayer CrBr2 is also ensured by the GGA+U functional [43]. As an important parameter, the half-metallic gap is defined as the minimum of the difference between the Fermi level and the bottom of the spin-down conduction bands, and the difference between the Fermi level and the top of the spin-down valence bands [12]. With the PBE functional, the half-metallic gap is 1.58 eV, which rises to 2.41 eV with the HSE06 functional. It is large enough to efficiently prevent the thermally agitated spin-flip transition. The difference between the bottom of the spin-down conduction bands and the top of the spin-down valence bands in the half-metal is defined as the spin gap, which is as high as 3.72 (5.69) eV with the PBE (HSE06) functional. In addition, a large spin exchange splitting is crucial for spin-polarized carrier injection and detection. The spin exchange splitting is 0.93 eV [labeled as ∆ in Fig. 2(a)] for monolayer CrBr2, larger than the 0.24 eV in CrGeTe3 [32]. As a common method in experiments, strain engineering is a well-controlled scheme to tune the electronic and magnetic properties of 2D materials, as shown in previous works [33][34][35][36][37][38]. Here, we also calculate the electronic and magnetic properties of monolayer CrBr2 under uniaxial and biaxial strains ranging from -10% to 10%. Figure 3 shows the energy difference between the FM and AFM phases and the projected magnetic moments of each Cr atom in monolayer CrBr2 under various uniaxial and biaxial strains. The change in ∆E indicates that a possible FM-AFM phase transition occurs around the uniaxial compressive strain of 2.7%, or the biaxial compressive strain of 2.1%. In the band structures under uniaxial and biaxial strain [Fig. 4], the half-metallic gap becomes smaller (larger) under compressive (tensile) strain. It is more noteworthy that a spin-up band gap opens under biaxial tensile strain within the range of 9.3%-10%, so that monolayer CrBr2 shows the characteristics of a semiconductor.
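The FM-AFM assignment above follows directly from the sign of ∆E = E_AFM − E_FM at each strain, and the critical strain can be located by interpolating the zero crossing. The sketch below illustrates that bookkeeping in Python; the energy values are hypothetical placeholders except for the zero-strain value of 0.176 eV quoted in the text, and they are chosen only so that the sign change lands near the reported biaxial transition of −2.1%.

# Sketch of assigning the magnetic ground state from total energies and
# locating the FM-AFM transition strain by linear interpolation.
# Energy differences below are hypothetical; only dE = E_AFM - E_FM matters.
strains = [-4.0, -3.0, -2.0, -1.0, 0.0]      # biaxial strain in percent
dE = [-0.021, -0.009, 0.001, 0.080, 0.176]   # E_AFM - E_FM in eV (2x2x1 cell)

for s, d in zip(strains, dE):
    state = "FM" if d > 0 else "AFM"
    print(f"strain {s:+.1f}%  dE = {d:+.3f} eV  ->  {state} ground state")

# Critical strain: where dE changes sign between consecutive points.
pairs = list(zip(strains, dE))
for (s0, d0), (s1, d1) in zip(pairs, pairs[1:]):
    if d0 * d1 < 0:
        s_c = s0 - d0 * (s1 - s0) / (d1 - d0)
        print(f"estimated FM-AFM transition near {s_c:+.2f}% strain")  # -2.10%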
Finally, to predict Tc, the present work employs Monte Carlo simulations with the Heisenberg model. The spin Hamiltonian of monolayer CrBr2 can be considered as H = -Σ_<i,j> J_ij S_i·S_j - A Σ_i (S_i^z)^2, where J_ij represents the exchange interaction parameter of the nearest-neighbor Cr-Cr pairs, S_i represents the spin of atom i, A is the magnetocrystalline anisotropy energy parameter (MAE), and S_i^z is the spin component along the z direction. The exchange interaction parameters J_ij are determined by expressing the total energies of the FM and AFM configurations. With J_ij of 11.0 meV and an MAE of 126 µeV, the Tc of monolayer CrBr2 is about 212 K, much higher than those of the 2D FM half-metals reported earlier, e.g. monolayer FeCl2 (17 K) [13]. Besides, the negative integrated crystal orbital Hamilton populations (-ICOHP) [39][40][41][42] and Tc of CrBr2 under different strains are summarized in Table I. As mentioned above, the FM ground state is driven by the direct versus indirect exchange interaction between two neighboring Cr atoms. The results show that the direct interaction (Cr-Cr) is reduced, but the superexchange (Cr-Br) is enhanced, which leads to the enhanced FM ordering and the increased Tc in CrBr2. Tc rises to a maximum of nearly 404 K under a strain of 8%. Moreover, under large biaxial tensile strain (>9.3%), CrBr2 is a room-temperature FM semiconductor with a Tc of 297 K. IV. CONCLUSION In conclusion, we have made a systematic computational study of stable monolayer square CrBr2 by using first-principles calculations. The electronic structures show that CrBr2 exhibits FM half-metal properties with a large half-metallic gap of 1.58 eV and a spin gap of 3.72 eV at the PBE functional level. In particular, the electronic and magnetic performance can be significantly modulated by strain. Application of a uniaxial compressive strain of 2.7% or a biaxial compressive strain of 2.1% causes the FM-AFM transition. Biaxial tensile strain larger than 9.3% results in an electronic transition from half-metal to semiconductor. By using Monte Carlo simulation with the Heisenberg model, the Tc of monolayer CrBr2 is predicted to be 212 K, which can rise to 404 K after the application of strain. The intrinsic half-metallicity with a high Tc and controllable magnetic properties endows monolayer CrBr2 with potential as a functional material for spintronic applications. FIG. 1. (a) Top and side views of the monolayer square CrBr2 structure. Cr and Br atoms are displayed as blue and red spheres, respectively. (b) Theoretical phonon spectrum of monolayer CrBr2 obtained from DFPT calculations. (c) The spin polarization distribution for the FM and AFM magnetic configurations, respectively. The yellow and blue colors indicate the net spin-up and spin-down polarization, respectively. FIG. 2. (a) Band structure of monolayer CrBr2 with the PBE functional (solid lines) and the HSE06 hybrid functional (dotted lines). The spin-up and spin-down bands are displayed as black and red lines, respectively. (b) The projected densities of states of monolayer CrBr2. TABLE I. The Tc (K), -ICOHP (Cr-Br) and -ICOHP (Cr-Cr) for monolayer CrBr2 under uniaxial strain.
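The Curie temperature above was obtained by Monte Carlo simulation of the spin Hamiltonian. The sketch below is a minimal Metropolis implementation using the quoted nearest-neighbor exchange J = 11.0 meV; the lattice size and sweep counts are arbitrary choices, and it works in the Ising limit (S_z = ±1), neglecting the anisotropy term A, so it reproduces the method only schematically (for the 2D Ising model this J gives an ordering temperature near 290 K, not the published 212 K).

# Simplified Metropolis Monte Carlo sketch for estimating a Curie temperature
# on a 2D square lattice; Ising limit of the Heisenberg model used in the text.
import math
import random

J = 11.0e-3          # exchange parameter in eV (from the text)
KB = 8.617333e-5     # Boltzmann constant in eV/K
L = 16               # lattice size (hypothetical choice)

def sweep(spins, temperature):
    """One Metropolis sweep over an L x L lattice with periodic boundaries."""
    beta = 1.0 / (KB * temperature)
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        neighbors = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                     + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        delta_e = 2.0 * J * spins[i][j] * neighbors  # energy cost of a flip
        if delta_e <= 0 or random.random() < math.exp(-beta * delta_e):
            spins[i][j] *= -1

def magnetization(spins):
    return abs(sum(sum(row) for row in spins)) / (L * L)

for temperature in range(100, 701, 100):    # temperature scan in K
    spins = [[1] * L for _ in range(L)]     # start fully ordered
    for _ in range(400):                    # equilibration sweeps
        sweep(spins, temperature)
    samples = []
    for _ in range(100):                    # measurement sweeps
        sweep(spins, temperature)
        samples.append(magnetization(spins))
    print(f"T = {temperature:4d} K   <|m|> = {sum(samples) / len(samples):.2f}")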
2,738.4
2021-09-18T00:00:00.000
[ "Materials Science" ]
Effect of identified non-synonymous mutations in DPP4 receptor binding residues among a highly exposed human population in Morocco to MERS-CoV through a computational approach Dipeptidyl peptidase 4 (DPP4) has been identified as the main receptor of MERS-CoV, facilitating its cellular entry and enhancing its viral replication upon the emergence of this novel coronavirus. The DPP4 receptor is highly conserved among many species, but the genetic variability among the residues that bind directly to MERS-CoV restrained its cellular tropism to humans, camels and bats. The occurrence of natural polymorphisms in human DPP4 binding residues is not well characterized. Therefore, we aimed to assess the presence of potential mutations in the DPP4 receptor binding domain (RBD) among a population highly exposed to MERS-CoV in Morocco and predict their effect on DPP4-MERS-CoV binding affinity through a computational approach. DPP4 synonymous and non-synonymous mutations were identified by Sanger sequencing, and their effects were modelled by mutation prediction tools, docking and molecular dynamics (MD) simulation to evaluate structural changes in the human DPP4 protein bound to the MERS-CoV S1 RBD protein. We identified eight mutations: two synonymous mutations (A291=, R317=) and six non-synonymous mutations (N229I, K267E, K267N, T288P, L294V, I295L). Through docking and MD simulation techniques, the chimeric DPP4-MERS-CoV S1 RBD protein complex models carrying one of the identified non-synonymous mutations sustained a stable binding affinity for the complex, which might lead to robust cellular attachment of MERS-CoV, except for the DPP4 N229I mutation. The latter is notable for a loss of binding affinity of DPP4 with the MERS-CoV S1 RBD that might negatively affect cellular entry of the virus. It is important to confirm our molecular modelling predictions with in-vitro studies to acquire a broader overview of the effect of these identified mutations. Introduction The Middle East Respiratory Syndrome Coronavirus (MERS-CoV) is a zoonotic enveloped single-stranded positive RNA virus. This novel emerging betacoronavirus was isolated for the first time in 2012 from a human patient with severe pneumonia [1]. MERS-CoV is now of global public health concern, responsible for over 2581 cases with a high fatality rate of 34.4%, as of the end of January 2021 [2]. Dromedary camels have been identified as the zoonotic source for human MERS-CoV infection following close contact with these animals [3,4]. Sustained human-to-human transmission has so far been limited to healthcare settings [5,6]. Sporadic cases of MERS-CoV disease have so far been restricted to the Arabian Peninsula [7]. However, MERS-CoV does appear to transmit asymptomatically in North and Sub-Saharan Africa, as detected by a seroprevalence of neutralizing antibodies of 0.18%, in comparison to the Arabian Peninsula (0.72%) [8]. Since Africa has by far the largest numbers of dromedary camels, the lack of zoonotic disease is surprising [9]. MERS-CoV exploits dipeptidyl peptidase 4 (DPP4, also known as CD26) for cellular entry and viral replication [10]. DPP4 forms a homodimer. Each subunit contains two domains: an α/β-hydrolase domain and a β-propeller domain. The full-length DPP4 is a type II transmembrane protein in which amino acids 7-28 constitute the membrane-spanning region. The α/β-hydrolase domain, located closest to the membrane, consists of amino acids 39-51 and 506-766, and contains the catalytic triad Ser630, Asp708 and His740 [11].
Residues 55-497 form the eight-bladed β-propeller domain, which has a glycosylation-rich region comprising blades II-V, while blades VI-VIII lie in a cysteine-rich region. Each blade shows a four-stranded antiparallel β-sheet motif, and blade IV has an additional antiparallel sheet (Asp230-Asn263) between strands 3 and 4 [11,12]. According to structural analyses, the MERS-CoV spike protein's receptor binding domain (RBD) mediates viral infection by binding restrictively to blades IV and V of the N-terminal β-propeller domain of the DPP4 receptor [12]. The resolution of the full crystallographic structure of DPP4 bound to the MERS-CoV S protein enabled the identification of 16 amino acid residues in the DPP4 receptor binding domain (RBD) interacting directly with the MERS-CoV S protein [13,14]. Humans, camels and bats use the DPP4 receptor for binding with the MERS-CoV S protein [15]. The genetic variability of these DPP4 amino acid residues in direct contact with MERS-CoV was a determinant factor for the cellular non-permissiveness of MERS-CoV among some animal species [16]. The DPP4 cell surface receptor is widely expressed in human tissues. It is involved in diverse cellular functions, playing a critical role in physiologic glucose homeostasis [17]. Its enzymatic activity has been implicated in the regulation of the biologic activity of multiple hormones, chemokines and T-cell activity [18][19][20]. Serious human health conditions such as diabetes and myocardial infarction have been strongly associated with the presence of genomic mutations or SNPs in the DPP4 gene [21,22]. However, there is an urgent need to address the lack of information on human genomic variation of DPP4, specifically in the region binding the MERS-CoV S protein, where variants might affect the binding affinity of the DPP4–MERS-CoV S protein complex by inducing structural conformation changes. Therefore, throughout this study, we aimed to identify the potential presence of mutations in the DPP4 receptor binding domain among a population in Morocco highly exposed to dromedary camels, and thus to MERS-CoV, and to predict their effect on DPP4–MERS-CoV binding affinity through an in silico approach.

Study population and field sampling
The genomic characterization of the DPP4 RBD was conducted on human subjects (n = 100) belonging to a population with exposure to dromedary camels, and thus to MERS-CoV, in southern regions of Morocco. This population had a seroprevalence of MERS-CoV neutralising antibodies of 0.83% [8]. The selected human subjects belonged to three exposure risk categories: the general population without direct exposure to camels (n = 34), camel herders (n = 33) and slaughterhouse workers (n = 33), the latter two having direct exposure to camels. Each of the selected participants provided a whole blood sample collected in 5 ml EDTA tubes and signed an informed consent to participate in MERS-CoV related studies (IRB reference number 55/16).

Molecular analysis
DNA extraction, quality control and quantification. Human genomic DNA was isolated from 200 μl of whole blood using TRIzol LS reagent (Invitrogen, Thermo Fisher Scientific) according to the manufacturer's instructions. Following DNA isolation, the DNA concentration of each sample was measured using a NanoDrop 2000/2000c spectrophotometer (Thermo Fisher Scientific).

Primer design. The DPP4 RBD is coded by four exons of the human DPP4 gene: exons 9, 10, 11 and 12. Thus, we designed four sets of intronic primers for conventional PCR and sequencing to amplify the targeted exonic regions, using Primer Express 3.0.1 (Applied Biosystems, Foster City, CA, USA) and the reference sequence of the hDPP4 gene (NC_000002.12) retrieved from the GenBank database. The designed primers were evaluated for specificity using the MFEprimer-3.0 online tool [23], and are detailed in Table 1.
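As an illustration of the kind of basic quality checks that can accompany such a primer design step, the short Python sketch below computes GC content and a rough Wallace-rule melting temperature. The primer sequences shown are hypothetical placeholders, not the study's actual primers (which are given in Table 1).

```python
# Illustrative primer quality check; sequences are hypothetical placeholders.
def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a primer sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> float:
    """Rough melting temperature by the Wallace rule: 2*(A+T) + 4*(G+C)."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

primers = {
    "DPP4_ex9_F": "ATGCGTACGTTAGCATGCAT",   # hypothetical sequence
    "DPP4_ex9_R": "TTGCAGCTAGGCATTCAGGA",   # hypothetical sequence
}
for name, seq in primers.items():
    print(f"{name}: GC = {gc_content(seq):.0%}, Tm ~ {wallace_tm(seq)} C")
```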
Conventional PCR. The amplification of the targeted hDPP4 exons of each DNA sample was performed separately in a final volume of 25 μL containing 1X Master Mix (SuperMix, Invitrogen, Thermo Fisher Scientific), 0.4 μM of each primer and 250-300 ng of DNA. In each run, a no-template control (NTC) without DNA or RNA was included. PCR amplification was performed using a GeneAmp PCR System 2720 (Applied Biosystems, Foster City, CA, USA) under the following cycling conditions: 1 cycle of 94 °C for 4 min; 40 cycles of 94 °C for 30 s, 50 °C for 30 s and 72 °C for 30 s; and 1 cycle of 72 °C for 5 min. Amplification products were analyzed by electrophoresis on a 2% agarose gel and visualized using a molecular imager (Gel Doc XR with Quantity One software, Bio-Rad).

Sanger sequencing and data analysis. The amplicons of each target were purified using the PureLink PCR Purification kit (Invitrogen, Thermo Fisher Scientific) according to the manufacturer's instructions, and the sequencing PCR reaction was performed using the BigDye Terminator kit. The retrieved sequences were analysed for nucleotide mutations associated with amino acid binding residues of the hDPP4 receptor with the MERS-CoV S protein using BioEdit v.7.1.9 software [24]. Each sequence was aligned with the reference hDPP4 gene (NC_000002.12) using the ClustalW algorithm.

Statistical analysis
The statistical significance of the presence of a mutation in the interacting amino acid residues of the DPP4 receptor by sex, exposure group and MERS-CoV serological profile was analyzed using Fisher's exact test, and the risk ratio was determined. Statistical significance was defined as p < 0.05.

Molecular modelling analysis
To evaluate the effect of the identified non-synonymous nucleotide mutations on amino acid residues participating directly in DPP4 binding with MERS-CoV, we followed two molecular modelling approaches. The first in silico approach was based on the mutation effect prediction tools MutaBind2 [25] and DynaMut [26]. The second approach was based on molecular modelling of the effect of each identified mutation on the conformational stability and binding affinity of the complex through docking and molecular dynamics. Thus, we selected the crystal structure of the DPP4–MERS-CoV S1 RBD protein complex (PDB: 4L72) from the PDB public database. Crystal water molecules were removed from the PDB file using PyMOL [27] before downstream application.

Prediction tools for non-synonymous mutation effect. We first predicted the effect of the non-synonymous mutations by evaluating the binding affinity of the DPP4 mutants to the MERS-CoV S1 RBD, in comparison with the wild type complex, using the MutaBind2 and DynaMut webservers with the 4L72 PDB file. These computational tools compare binding affinities after mutation to predict whether the mutation stabilizes or destabilizes the protein-protein interaction by determining the overall change in binding free energy (ΔΔG). The effect of non-synonymous mutations on protein complex stability, estimated as the change in folding free energy $\Delta\Delta G_{\text{fold}} = \Delta G_{\text{mutated}} - \Delta G_{\text{wild}}$, was predicted using the DynaMut webserver; a positive value corresponds to a stabilizing mutation and a negative value to a destabilizing one. The change in binding affinity upon a single mutation, $\Delta\Delta G_{\text{bind}} = \Delta G_{\text{mutated}} - \Delta G_{\text{wild}}$, was predicted with the MutaBind2 online tool; here a positive value corresponds to a destabilizing mutation (decreased binding affinity) and a negative value to a stabilizing one (increased binding affinity).
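As a minimal sketch of how these two sign conventions can be applied when post-processing webserver output, assuming the ΔΔG values have already been retrieved (the numeric values below are invented placeholders, not the study's predictions):

```python
# Interpret predicted free-energy changes under each tool's sign convention
# as described above. The numeric values are invented placeholders.
def interpret_dynamut(ddg_fold: float) -> str:
    # DynaMut: positive delta-delta-G -> stabilizing, negative -> destabilizing.
    return "stabilizing" if ddg_fold > 0 else "destabilizing"

def interpret_mutabind2(ddg_bind: float) -> str:
    # MutaBind2: positive delta-delta-G -> decreased binding affinity.
    return "destabilizing" if ddg_bind > 0 else "stabilizing"

predictions = {  # mutation: (DynaMut ddG, MutaBind2 ddG), placeholder numbers
    "N229I": (0.8, 1.2),
    "K267E": (-0.3, 0.5),
}
for mut, (dyn, mb2) in predictions.items():
    print(f"{mut}: DynaMut {interpret_dynamut(dyn)}, "
          f"MutaBind2 {interpret_mutabind2(mb2)}")
```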
Structure preparation for the molecular dynamics approach. A mutant model of DPP4 carrying each of the non-synonymous mutations was created using the Mutagenesis tool in PyMOL [27]. The models were named according to the wild type or the mutation present in the DPP4 protein: 4L72-WT, 4L72-N229I, 4L72-K267E, 4L72-K267N, 4L72-T288P, 4L72-L294V and 4L72-I295L. Afterwards, the wild type and mutant DPP4 models were validated using the online MolProbity server for Ramachandran plot analysis [28], Verify3D [29] and ProSA analysis [30].

Molecular docking of MERS-CoV S1 RBD with wild type and mutant models of DPP4. Protein docking was implemented through the online docking webserver HADDOCK 2.4 (https://wenmr.science.uu.nl/haddock2.4/) [31]. For the docking of MERS-CoV S1 RBD (PDB: 4L72_B) to DPP4 (4L72_A), we submitted the corresponding prepared monomeric crystal structure models to the HADDOCK 2.4 webserver to obtain the complex model. We defined the active amino acid residue positions directly involved in the interaction based on the DPP4–MERS-CoV S1 RBD interaction interface of the crystal structure complex. Sixteen residues of the DPP4 protein and eighteen residues of the MERS-CoV S1 RBD were selected as active residues for complex docking [13,14]. Passive residues for both proteins of the complex were defined by default parameters within a radius of 4 Å. From the ten generated DPP4–MERS-CoV S1 RBD complexes, we selected the top model with the lowest z-score and HADDOCK score for molecular dynamics analysis.

Molecular dynamics (MD) simulation. An MD simulation was performed for all the DPP4–MERS-CoV S1 RBD wild type and mutant models using the GROMACS 5.1.3 program [32,33] with the CHARMM36 force field [34]. Each system was first solvated in a dodecahedron box of SPC water molecules with a minimal distance of 1.0 nm between the solute and the wall of the box. The system was neutralized with an appropriate number of sodium ions by replacing water molecules. Minimization was carried out through 5000 steps of steepest descent. The systems were then equilibrated (500 ps of NVT heating to 300 K, followed by 500 ps of NPT), applying position restraints on the protein complex with periodic boundary conditions. The pressure and temperature were set at 1 bar and 300 K using the Parrinello-Rahman and Berendsen coupling methods, respectively. Particle Mesh Ewald (PME) computed the long-range electrostatic interactions, with a 1.2 nm cut-off for short-range non-bonded interactions. All covalent bonds involving hydrogen atoms were constrained by the LINCS algorithm. Finally, the system was further equilibrated to carry out 150 ns MD simulations at a constant temperature of 300 K, maintained by the v-rescale thermostat, with a time step of 2 fs.

Population study demographics
The selected population is characterized by a male/female sex ratio of 3:1, with camel herders (n = 34), slaughterhouse workers (n = 31) and the general population (n = 35) having sex ratios of 3:0, 15:0 and 1:1, respectively.
This population has an average age of 40 years (range: 16-76 years) and a median age of 38 years. The most represented age groups in this study were 21-30 years (23%) and 41-50 years (22%), while the age groups ≤20 years and ≥70 years were the least represented, at 9% and 3%, respectively (Table 2). This study population has a MERS-CoV seroprevalence of 22% by ELISA, while 23% were borderline positive and 55% were negative. However, neutralizing antibodies in this population established a MERS-CoV seroprevalence of 4% (Table 2) [8].

Genetic characterization of the DPP4 gene
The molecular characterization of the exons of interest (exons 9-12) through Sanger sequencing permitted the identification of six non-synonymous mutations, each causing a change in an amino acid residue of the DPP4 protein chain. These mutations were identified on DPP4 exon 9, c.686C>T (p.Asn229Ile, N229I), and exon 10, c.801G>C (p.Lys267Asn, K267N), c.799A>G (p.Lys267Glu, K267E), c.862A>C (p.Thr288Pro, T288P), c.880T>G (p.Leu294Val, L294V) and c.884T>A (p.Ile295Leu, I295L). Moreover, two synonymous mutations were also identified, on exon 10 and exon 11 respectively: c.872T>C (p.Ala291=, A291=) and c.951G>A (p.Arg317=, R317=). The synonymous mutation R317= was the most prevalent (3%) in the population studied, while the non-synonymous mutation L294V was identified in 2% of the population. Each of the other described mutations accounted for 1% of the study population. No participant in this study carried more than one mutation at the interaction-residue level of the hDPP4 gene. We aligned the sequences of study subjects carrying the non-synonymous mutations with the DPP4 genes of MERS-CoV permissive and non-permissive animal species to evaluate the homology of the interaction residues between the identified human mutations and the wild type protein sequence.

Study population characteristics and mutational relevance
Patients carrying the synonymous or non-synonymous mutations identified in this study presented variable serological profiles towards MERS-CoV. A seropositive or borderline seropositive profile by ELISA was found in seven participants, three of whom presented a non-synonymous mutation and four a synonymous mutation. However, only two patients, with a non-synonymous and a synonymous mutation respectively, had neutralizing antibodies to MERS-CoV. Three patients with a seronegative profile for MERS-CoV presented a non-synonymous mutation at the interaction amino acid residues: the L294V mutation was found in two seronegative patients, and the N229I mutation was characterized in one seronegative patient (Table 3). According to Fisher's exact test, there is no statistically significant association between the presence of a mutation, synonymous or non-synonymous, at one of the DPP4 binding residues with MERS-CoV S1 RBD and participant sex (p-value = 0.29) or type of exposure (p-value = 0.32). The correlation between the serological profile of the participants and the presence of a mutation at DPP4 binding residues is likewise not significant, whether by ELISA (p-value = 0.29) or by the presence of neutralizing antibodies (p-value = 0.059) (Table 4).

A complex protein model of the MERS-CoV S1 RBD protein bound to the DPP4 structure was generated via the HADDOCK webserver. According to HADDOCK, the top cluster is the most reliable [31]. Thus, the first model of the top cluster was selected for each complex model after verification of the HADDOCK score, z-score, RMSD, van der Waals energy, electrostatic energy, desolvation energy, restraints violation energy and buried surface area (S2 Table). Prior to MD simulation, each complex model was first submitted to energy minimization. The reliability of the complex models was assessed through energy potential, temperature, pressure and density parameters (S3 Table).
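A minimal sketch of the cluster-selection criterion described above, assuming the cluster statistics have been parsed into records (the values below are illustrative placeholders, not actual HADDOCK output):

```python
# Select the best HADDOCK cluster by z-score (ties broken by HADDOCK score),
# mirroring the selection criterion described above. The records are
# illustrative placeholders rather than real server output.
clusters = [
    {"id": 1, "haddock_score": -112.3, "z_score": -1.9},
    {"id": 2, "haddock_score": -98.7,  "z_score": -0.4},
    {"id": 3, "haddock_score": -105.1, "z_score": -1.1},
]
best = min(clusters, key=lambda c: (c["z_score"], c["haddock_score"]))
print(f"Selected cluster {best['id']} "
      f"(HADDOCK score {best['haddock_score']}, z-score {best['z_score']})")
```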
Prediction of mutation effect through prediction tools
To model the influence of the identified DPP4 mutations in direct binding residues on the interaction with the MERS-CoV S1 RBD protein, we performed a computational analysis of DPP4–MERS-CoV S1 RBD protein complex stability using the MutaBind2 and DynaMut prediction tools. Each prediction tool uses its own calculation parameters linked to the Gibbs free energy change (ΔΔG), which results in some disparity in the predicted effect of a mutation on the protein-protein interaction. Our results exhibit a destabilizing effect on binding affinity in all mutant complex models using the MutaBind2 tool (Table 5). Conversely, DynaMut predicted a destabilizing effect on protein complex stability and folding energy for the DPP4 mutations K267N, L294V and I295L, and a stabilizing effect for the N229I, K267E and T288P mutations (Table 5).

Root mean square fluctuation (RMSF). DPP4 residue flexibility and local movement were characterized using the Cα-backbone RMSF in the wild type and mutant models, plotted against residue position (Fig 3). A large fluctuation was observed within the DPP4 receptor binding motif (RBM) for the MERS-CoV S1 RBD protein, located between residues 119 and 354 in blades IV and V of the antiparallel β-sheet of the DPP4 protein (Fig 3). The highest RMSF value within the DPP4 RBM was observed for the 4L72-WT model, at 0.91 nm, while the DPP4 mutant models were less flexible, with highest RMSF values within the DPP4 RBM of 0.34 nm, 0.61 nm, 0.5 nm, 0.4 nm and 0.35 nm for the mutant models 4L72-K267N, 4L72-K267E, 4L72-T288P, 4L72-L294V and 4L72-I295L, respectively (Fig 3B-3F). In contrast, the DPP4 mutant model carrying the N229I mutation showed increased conformational flexibility linked to a higher fluctuation of the DPP4 RBM, with a highest RMSF value of 0.99 nm compared to 4L72-WT (Fig 3A). The MERS-CoV S1 RBD protein showed similar overall residue flexibility in the wild type and mutant complex models; its key binding residues returned comparable RMSF values across models, with higher fluctuations in the C-terminal and N-terminal regions (S2 Fig).

Radius of gyration (Rg). The compactness of the protein-protein interaction, characterizing the folding, shape and stability of the dynamic complex structure over time, was evaluated by measuring the Rg of the complex as well as of its individual components, i.e. DPP4 and MERS-CoV S1 RBD. Small Rg values indicate a compact protein structure, while higher Rg values indicate a loose one. The mutant models 4L72-K267N, 4L72-K267E, 4L72-T288P, 4L72-L294V and 4L72-I295L showed relatively constant Rg values, close to those of the wild type (4L72-WT) model, throughout the MD simulation (Fig 4B-4F). In contrast, the 4L72-N229I model showed an abrupt increase in the fluctuation of Rg values, unlike 4L72-WT, after 11 ns of MD simulation, before the compactness of the complex structure stabilized (Fig 4A). The compactness of DPP4 and MERS-CoV S1 RBD individually remained stable throughout the MD simulation, as the mean Rg values were comparable to those of the wild type model (S3 Fig).
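For reference, the per-residue RMSF and radius-of-gyration quantities used above can be computed from a coordinate trajectory as in the following numpy sketch; synthetic coordinates stand in here for the actual GROMACS trajectories:

```python
import numpy as np

# Sketch of the per-atom RMSF and per-frame radius-of-gyration calculations,
# on a synthetic trajectory of shape (n_frames, n_atoms, 3) in nm.
rng = np.random.default_rng(0)
traj = rng.normal(size=(100, 50, 3))           # placeholder coordinates
masses = np.ones(50)                           # equal masses for simplicity

# RMSF: root-mean-square deviation of each atom from its time-averaged position.
mean_pos = traj.mean(axis=0)
rmsf = np.sqrt(((traj - mean_pos) ** 2).sum(axis=2).mean(axis=0))

# Mass-weighted radius of gyration for each frame.
com = (traj * masses[None, :, None]).sum(axis=1) / masses.sum()
d2 = ((traj - com[:, None, :]) ** 2).sum(axis=2)
rg = np.sqrt((d2 * masses[None, :]).sum(axis=1) / masses.sum())

print(f"max RMSF = {rmsf.max():.3f} nm, mean Rg = {rg.mean():.3f} nm")
```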
Solvent accessible surface area (SASA). The surface area of the biomolecule interacting with solvent molecules was evaluated with the SASA parameter to determine conformational stability in an aqueous medium. Since the DPP4 residues binding the MERS-CoV S1 RBD protein lie at the surface of the protein, the accessibility of solvent to the protein surface area was conserved in the mutant models during the MD simulation (Fig 5B-5F). Still, the SASA of the N229I model fluctuates markedly between 40 and 60 ns. After 57 ns of simulation, the 4L72-N229I model reaches the highest SASA value of 440 nm², while the 4L72-WT model reaches the lowest SASA value of 407 nm². Nevertheless, both models (WT and N229I) stabilize at the same average SASA value of 425 nm² between 60 ns and 150 ns (Fig 5A). Regarding the complex model components, the wild type and mutant DPP4 and MERS-CoV S1 RBD protein structures each maintained a similar solvent accessible surface area.

Hydrogen bonds (H-bonds). Hydrogen bonds drive the formation of secondary and tertiary protein structure, and a higher number of hydrogen bonds implies stronger protein-protein interactions. The MD simulation of the mutant models highlighted a notable increase in hydrogen interactions in the complex formed by DPP4 and the MERS-CoV S1 RBD protein (Fig 6). This increase in hydrogen bonding implies stability of the bond between the two proteins participating in the formation of the complex over time, regardless of the conformational change in the case of the 4L72-N229I model. The average number of hydrogen bonds present at the interface between DPP4 and MERS-CoV S1 RBD during the MD simulation was 8.012 ± 2.672 in the wild type model, while the N229I mutant model returned the lowest number (4.178 ± 2.622) over the 150 ns MD simulation, reaching zero h-bonds at the interface of the two proteins at 150 ns (Fig 7). The other mutant models highlighted similar or stronger binding of the complex proteins, as the number of h-bonds present at their interfaces was comparable to or higher than in the wild type model (Fig 7).

Principal component analysis. Principal component analysis (PCA) extracts the dominant modes and magnitudes of motion in a molecule from the trajectory of an MD simulation, yielding a matrix of eigenvectors and a set of associated eigenvalues that together describe the principal components (PCs) and the amplitude of local motion within a protein. To understand the dynamics of the complex, we generated a 2-D projection plot relying on two PC coordinates, projecting the overall dataset into a compact dimension appropriate for 2-D plotting. Fig 8 shows the projection onto two eigenvectors for the DPP4–MERS-CoV S1 RBD complex models. A complex occupying less phase space with a stable cluster indicates a more stable structure, whereas a complex occupying greater space with a non-stable cluster denotes a less stable structure. Therefore, the complex models 4L72-K267N, 4L72-K267E, 4L72-L294V and 4L72-I295L were regarded as more stable structures compared to the 4L72-WT complex model, while the 4L72-T288P complex showed structural stability similar to 4L72-WT, and 4L72-N229I denoted a less stable complex than 4L72-WT (Fig 8).
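A minimal numpy sketch of this trajectory PCA, diagonalizing the covariance of flattened frame coordinates and projecting each frame onto the first two eigenvectors; synthetic coordinates stand in for the real trajectories:

```python
import numpy as np

# Minimal PCA of an MD trajectory: flatten each frame's coordinates,
# diagonalize the covariance matrix, and project onto the top two modes.
rng = np.random.default_rng(1)
traj = rng.normal(size=(200, 30, 3))     # (frames, atoms, xyz), placeholder
X = traj.reshape(len(traj), -1)          # one row per frame
X -= X.mean(axis=0)                      # remove the average structure

cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # sort descending by variance
pc = X @ eigvecs[:, order[:2]]           # 2-D projection per frame

print("top eigenvalues:", eigvals[order[:2]])
print("projection shape:", pc.shape)     # (200, 2), one point per frame
```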
Local structural changes. Through the MD simulation, a structural comparison between the wild type and mutant 4L72 models at different time scales highlighted structural conservation of the protein complex models at the beginning of the MD simulation (t = 0 ns) (S4 Fig). However, throughout the simulation, 4L72-WT sustained its structural stability, in contrast with the 4L72 mutant models, which showed structural changes and folding, especially in the MERS-CoV S1 RBD protein (S4 Fig). Then, using the three-dimensional structures of the complex models, we compared the local structural changes induced by the introduced mutations (N229I, K267N, K267E, T288P, L294V, I295L) upon MD simulation using Chimera 1.15 software [35] (Fig 9). Each of these DPP4 key binding residues interacts with the MERS-CoV S1 RBD protein via hydrophobic bonds. The presence of a mutation at these locations induces a loss of existing bonds in all mutant models except L294V and I295L, which is responsible for the local structural changes observed during the MD simulation (Fig 9). Upon the DPP4 N229I mutation, this residue lost an h-bond with T231 and two hydrophobic bonds with I193 and P264, bringing the DPP4 blade IV domain closer by 0.8 Å and loosening the DPP4 blade V domain by 0.4 Å. Moreover, I229 lost a hydrophobic bond with the NAG-NAG-BMA ligand, implying a loss of interaction with the MERS-CoV S1 RBD binding residues W535 and E536. DPP4 residue 288 is located between blades IV and V of the protein. The DPP4 T288P mutation brought the blade IV domain closer to the structure by 1.1 Å, gaining a hydrogen bond with C339; yet it provoked a loss of hydrophobic binding with the MERS-CoV S1 RBD binding residues N501 and S557. DPP4 residue 294 is located at the end of the short helix α3 in blade IV. DPP4 L294V marks a gain of a hydrophobic bond within the DPP4 structure with G296, bringing the blade IV domain closer by 0.3 Å; however, no effect has been observed on hydrophobic binding with the MERS-CoV S1 RBD binding residues T540 and W555. Regarding the K267N and K267E mutations, DPP4 residue 267 lost a hydrogen bond with T265, narrowing blades IV and V by 0.3 Å and 0.1 Å, respectively. This structural effect points toward a more compact complex structure upon mutation of DPP4 K267.

Discussion
In the present study, we address the effect of genetic variability of the DPP4 receptor binding residues for the MERS-CoV S1 RBD protein in a human population with a significant risk of exposure to MERS-CoV via dromedaries. Our study is the first to address this concern, selecting a study population from the southern regions of Morocco that is highly exposed to camels, with a neutralizing antibody seroprevalence of 0.83% [8]. The mutations K267E and K267N identified in this study have been previously described and listed in genomic databases with very low allelic frequencies, whereas the mutations N229I, T288P, A291=, L294V, I295L and R317= carried by the study participants are described for the first time in the present work. The non-synonymous mutations L294V and I295L appear not to disrupt the DPP4–MERS-CoV S1 RBD binding affinity, and thus to safeguard the ability of the virus to complete its replicative cycle at the cellular level. The amino acids isoleucine (Ile), leucine (Leu) and valine (Val) hold similar physicochemical properties, being non-polar with aliphatic side chains, which could explain this conservation of DPP4–MERS-CoV binding.
Indeed, the DPP4 L294V mutation favored a gain of two hydrophobic bonds with A289 and G296, bringing the DPP4 blade IV domain closer by 0.3 Å, while DPP4 I295L maintained its hydrophobic interactions within the DPP4 protein, so no structural changes were observed in the critical DPP4 blade IV and V binding domains. Notably, DPP4 residue 294 has been described as a critical residue for MERS-CoV cellular permissiveness in chimeric mDPP4 carrying A294L [36,37]. Interestingly, we report a novel human DPP4 T288P mutation; proline at position 288 is the wild type residue in the murine DPP4 (mDPP4) protein. Murine species have been proven non-permissive to MERS-CoV, presenting P288 and four different residues at glycosylation sites [36]. As the mDPP4 P288T mutation had no effect on MERS-CoV cellular permissiveness, DPP4 residue 288 appears not to be critical for human permissiveness to MERS-CoV or for infection outcome [36,37]. Remarkably, a computational study described DPP4 residue 288 as critical, conferring significant flexibility on the DPP4 protein without disturbing the standing binding conformation of the MERS-CoV S1 RBD after docking with DPP4 [12]. These findings are in accordance with the effect of the sole T288P mutation on DPP4–MERS-CoV S1 RBD complex stability described in the present study: a narrower DPP4 blade IV (by 1.1 Å) gaining a hydrogen bond with DPP4 C339. The loss of hydrophobic bonds with the MERS-CoV S1 RBD binding residues N501 and S557 did not significantly disrupt complex compactness, as the number of hydrogen bonds present at the DPP4–MERS-CoV S1 RBD interface was 5 ± 2 in the last 50 ns of the MD simulation. To evaluate the effect of these DPP4 non-synonymous mutations, a computational analysis by molecular modelling was performed to predict their impact on DPP4–MERS-CoV S1 RBD binding affinity. Surprisingly, DPP4 proteins carrying one of the identified mutations K267N, K267E, T288P, L294V or I295L induced a more stable structural conformation of the complex, mainly linked to a decrease in the residue fluctuation of the DPP4 RBM located in the critical blade IV and V regions of the antiparallel β-sheet of the DPP4 structure. The gain in stability of the complexes carrying these mutations is related to a narrower DPP4 blade IV domain due to the gain or loss of a hydrogen bond of key binding residues within the DPP4 structure in each complex model. Thus, a slight increase in compactness and hydrogen interactions within these complex models was perceived, in contrast to the wild type DPP4–MERS-CoV S1 RBD complex model, sustaining a compact complex structure with an average of 5 ± 2 hydrogen bonds at the DPP4–MERS-CoV S1 RBD complex interface in the last 50 ns of the MD simulation. The presence of one of these five mutations in humans could therefore be associated with a more efficient attachment of MERS-CoV to cells carrying the DPP4 receptor and, consequently, a probable increase in the capacity of MERS-CoV to replicate in human cells. A contradicting effect of the DPP4 K267N and K267E mutations was described in a study [38] through in vitro cellular modelling: the presence of asparagine or glutamate residues at position 267, instead of lysine, appears to reduce the infectious ability of MERS-CoV by repressing viral cellular entry. The N229 residue, via the monosaccharide N-acetylglucosamine (NAG), interacts with amino acids W535 and E536 of the MERS-CoV S1 RBD protein, warranting the protein-protein interaction of the complex [13,14].
However, the N229I substitution favored DPP4 RBM fluctuation, inducing a conformational change and a loss of compactness due to the loss of binding of I229 with the NAG ligand that mediates binding with the MERS-CoV S1 RBD protein. Unlike the wild type complex, the N229I mutation prompted a narrower DPP4 blade IV domain (by 0.8 Å) and a relaxed blade V domain (by 0.4 Å), characterized by the loss of a hydrogen bond with T231 and of two hydrophobic bonds with I193 and P264. This mutation led to the loss of all hydrogen bonds between the interfaces of the DPP4–MERS-CoV S1 RBD complex in the last 20 ns of the MD simulation, which could critically destabilize the interaction between DPP4 and the MERS-CoV S1 RBD protein and attenuate viral replication. Interestingly, the assessment of the estimated relative protein-protein binding affinity via the mutation prediction tools MutaBind2 and DynaMut returned Gibbs free energy changes divergent from the MD simulation approach for the DPP4–MERS-CoV S1 RBD complex. The disparity between the two computational approaches can be attributed to the nature of the algorithms: MD simulation is a rigorous, accurate method, whereas ΔΔG prediction tools are non-rigorous, high-throughput methods [39]. The occurrence of a non-synonymous mutation can change the physicochemical properties of the three-dimensional structure, which could explain the gain or loss of structural stability between DPP4 and the MERS-CoV S1 RBD suggested by our computational approach. Nevertheless, in vitro cellular modelling would be advantageous to acquire a broader overview of the effect of these mutations. The TMPRSS2 enzyme and the tetraspanin CD9 have been extensively implicated in MERS-CoV cellular attachment and entry [40,41]. However, it is still unknown whether these factors can sustain cellular entry in the presence of a mutation in the DPP4 binding domain for MERS-CoV. It has also been described, through a computational approach, that non-synonymous mutations in the ACE2 protein might disrupt its binding affinity to the emergent SARS-CoV-2 [42]. There is a serious need to carry out a genomic characterization of the DPP4 receptor in human populations at a large scale and of different ethnicities, to obtain a broader landscape of non-synonymous mutations in the DPP4 residues binding MERS-CoV. This would complete our understanding of the potential effect of inter-human DPP4 genetic variability on the restricted circulation of MERS-CoV in some geographic areas.

Conclusion
In summary, the study of inter-human DPP4 genomic variability is of great interest for understanding the severity of MERS-CoV in humans, which could be associated with the origin of the sporadic human cases identified mainly in Western Asia. Our computational approach, based on the crystallographic structure of the DPP4–MERS-CoV S1 RBD protein complex, highlights a possible increase in the binding affinity between the two proteins in the presence of the mutations K267N, K267E, T288P, L294V and I295L, and a loss of affinity due to the N229I mutation. The latter could play a key role in the stability of the host-virus interaction, since it is mediated by the monosaccharide NAG. This study could guide current therapeutic approaches to face the adversities that MERS-CoV presents to global public health.
Study on Coupled Combustion Behavior of Two Parallel Line Fires

In this study, the interaction of two parallel line fires with a length-width ratio greater than 50 was investigated and compared to a single line fire. Considering different length-width ratios and spacings between the fire sources, experiments were carried out to analyze fire characteristics such as the burning rate, the flame-merging state, the flame height, the flame tilt angle, and the flame length of the line fires. The regularity of these characteristics was revealed by combining two mechanisms, namely heat feedback enhancement and air entrainment restriction. The results revealed that the burning rate under different length-width ratios follows a uniform law: it first increases and then decreases with greater spacing between the fire sources. There is a particular relationship between the flame-merging probability P_m and the dimensionless characteristic parameter (S/Z_c)/(L/d)^0.27. Based on this relationship, a critical criterion for flame merging can be obtained as (S/Z_c)/(L/d)^0.27 = 2.38. In addition, the height and the length of the flame were studied to better describe the flame shape when the flame is tilted. Since the flame is bent, the flame length changes abruptly at a specific position, and the inclination angle shows the same phenomenon. Finally, it was found that the influence of the length-width ratio on the line fires is relatively limited, and is further weakened at a greater length-width ratio.

Introduction
A line fire is a special fire type whose length is much larger than its width. According to previous studies, a fire source with a length-width ratio of more than ten can be considered a line fire [1,2]. Its burning surface and flame can be regarded as two-dimensional. In practice, line fires are not uncommon, for example cable fires and forest fires that spread along the ground and grass. The initial stage of a fire is often accompanied by multiple fires, and the interaction between the fires can lead to more serious consequences. Therefore, investigating the burning characteristics of two parallel line fires is of great significance for related fire hazards, such as forest fires and building fires.

The coupled combustion behavior of double (or multiple) fire sources is a complex process that has been examined by theoretical and numerical methods [3]. So far, numerous studies of combustion-characteristic parameters have focused on simple fire-source shapes such as centrosymmetric squares and circles. Kamikawa et al. [4] conducted many experiments using a series of square propane combustion arrays and analyzed the total heat-release rate and merged flame heights. Delichatsios [5] established a simple correlation for the flame height of merging fires from an array of gaseous burners based on ref. [6] and developed a merging relation to estimate the flame height of "group" fires. Liu et al. [7] conducted a number of square fire array experiments; the results show that the burning rate of the pool fire arrays increased with decreasing distance, but began to decrease after the distance reached a critical value. Further, Liu et al. [8] presented a new approach to simulate fire propagation among discrete fuel sources and indicated that surrounding new fire points have a positive effect on the burning rate of the original one. Vasanth et al.
[9,10] studied the mass burning rate, the flame shape, the flame height, and the flame temperature of multiple round pool fires with small pool sizes (48, 68, and 83 mm). They found that, in multiple pool fires, all of these increased with an increase in the diameter of the participating pools, and that the flame temperature is asymmetric when the pools of a double pool fire are set at different heights [11]. Wan et al. [12] investigated the flame-merging probability and flame height of two square gas burners at different spacings and established a flame-merging probability function. Li et al. [13] established a flame-merging probability function and a flame-tilt-angle function for twin square propane burners under a cross wind. Jiao et al. [14] studied the fire-interaction mechanisms of n-heptane and ethanol multiple pool fires and found that the flame height of the former is larger than that of the latter. With increasing length-width ratio, the research perspective shifts from square fire sources to rectangular and line fire sources. For two rectangular fires, Thomas and Baldwin [15] summarized, by simplified derivation, a critical criterion for flame merging based on the forces on the flame surface during flame interaction. Hasemi and Nishihata [16] studied rectangular propane burners with length-width ratios from 1 to 10 and observed an exponential relationship between the flame height and the heat-release rate. Yuan and Cox [17] developed a flame-height model consistent with the above-mentioned model, together with a model addressing the relationship among the temperature, the mass flow rate, and the heat-release rate per unit length. Huang et al. [18] conducted experimental research using rectangular propane burners with three different length-width ratios and established a modified expression for estimating flame height. Zhang et al. [19] found a power-law relationship between the flame height and the heat-release rate per unit length using a nozzle with dimensions of 3 mm (width) × 95 mm (length). Tang et al. [20] studied the mean flame height and the radiative heat flux of four rectangular fire sources with different length-width ratios (1, 2, 4, and 8) under the same fire-source surface area. Liu et al. [21] analyzed the merging characteristics of two parallel rectangular burners under different fire-source spacings, length-width ratios, and heat-release rates and established a model relating the flame-merging probability and the flame height to these parameters. Tao et al. [22] conducted a series of experiments to investigate flame interaction and the merging flame length of double pool fires with length-width ratios of 1:1, 1:2, 1:4, and 1:8.

For larger length-width ratios, the burner changes from rectangular pool fires to diffusion micro-slot burners, and the fuel changes from liquid to gaseous, which may be because liquid fuel is difficult to maintain in such experiments. Kuwana et al. [23] studied the heat-release rate and flame shape of two micro-slot flames with a size of 1 mm × 80 mm and found that the total heat-release rate generally increases and the flames eventually merge into a unified flame as the spacing decreases. Hu et al. [24] conducted experiments to study the interaction between two identical slot burners (length: 142.5 mm; width: 2 mm) and developed an analytical model to characterize the critical burner pitch for flame merging.
To sum up, there are few works on the combustion behavior of two parallel liquid line fires with large length-width ratios. Most of the previous work focused on simple fire-source shapes and rectangular fire sources with low length-width ratios; the few studies with a higher length-width ratio used gaseous slot burners, which may be because liquid fuel is difficult to maintain in such experiments. Fire sources based on gaseous fuel are more controllable, but liquid fuel better reflects the characteristics of free combustion. Therefore, a constant-level liquid burning system was developed. To reduce the effect of an insufficient length-width ratio on the two-dimensional linear fire, a 1 cm wide slot with a length-width ratio greater than 50 was used as the burner in the experiments. Line-fire burning experiments at different spacings were conducted to explore the distribution and variation of burning characteristics such as the burning rate, the flame-merging probability, the flame height, and the tilt angle. The research outcomes reveal the burning dynamics and behavioral characteristics of line fires, providing theoretical guidance for related fire-safety assessment.

Experimental Setup
All the tests were conducted in a 20 m-high closed hall in a stable environment. As depicted in Figure 1, two parallel rectangular burners were positioned on an experimental platform 0.9 m above the ground. The rectangular burners had an identical rim height of 0.01 m and a width (d) of 0.01 m, and were made of stainless steel (thickness: 2 mm). The burners were embedded in a fireproofing board with the top edge aligned with the board, allowing only the fuel surface to receive radiative heat and reducing heat loss. Three groups of burners with lengths (L) of 0.6 m, 0.8 m, and 1.0 m were used, so that the length-width ratio (L/d) of each burner was greater than 50. The rectangular burners, called line fires, were used to simulate a line-burning surface. Each line-fire group was tested at different fire spacings, denoted S, varying from 0 to 0.45 m with an interval of 0.05 m. Each single line fire (with lengths of 0.6 m, 0.8 m, and 1.0 m) was also tested as a control.

High-purity (>98%) n-heptane was used as the fuel. Based on the siphon-pipe principle [14,25], the fuel levels were maintained at a fixed height by two identical reservoir and delivery systems. The mass loss throughout the burning process was measured by an electronic balance (measuring range: 10 kg; accuracy: 0.1 g; sampling interval: 0.5 s) placed under a fuel-reservoir tank. The burning processes were recorded by two cameras (SONY FDR-AX700 and SONY FDR-AX100E, manufactured by SONY China Co., Ltd., Shanghai, China) on both sides of the experimental platform to capture possibly inclined flames.
Measurement of the Burning Rate
The time-averaged burning rates were obtained from the electronic balance with a resolution of 0.1 g. Some of the tests were performed three times to verify the experimental repeatability. For example, Figure 2 shows the instantaneous fuel mass and the burning rate during the entire burning process when L/d = 60 and S = 20 cm. After a developing period, the burning rates of the line fire in the three tests stabilized around a constant value of about 0.3118 g/s. Similar test results under different length-width ratios and spacings indicated good experimental repeatability. Due to symmetry, the burning rates of two parallel identical line fires are theoretically the same. Therefore, the time-averaged burning rates in the steady period were used in the following analysis.

Flame Image Processing
RGB images of the flame during the steady period were obtained from video processed in MATLAB; the burning processes were recorded at 25 frames per second. Each RGB image was converted to a grayscale image. Based on the Otsu algorithm [26], a suitable threshold value was chosen, and the grayscale image was then converted to a binary image, from which the flame area can be obtained. The flame-merging probability, the flame height, and the inclination angle can then be investigated. The three kinds of images are shown in Figure 3.

The flame-merging probability P_m was defined as the ratio of the number of flame-merging frames to the total number of frames during the stable phase (see also Ref. [27]). The probability of flame merging ranges from 0, corresponding to no flame merging, up to 1, corresponding to flames that merge continuously. The flame-merging judgment area was 0.5 mm wide on each side of the centerline of the two parallel line flames, and its height was the distance from the burner surface to the ceiling. When the flames are merged, this region contains a flame segment of length greater than 0; otherwise, the flames are not merged. Figure 4 is a schematic diagram of the judgment area: its total width was 1 mm, and its height was much larger than the flame height of the two parallel line fires.

Measurement of Flame Geometric Parameters
To obtain the relationship between pixels in the video and lengths in reality, a ruler with a length of 60 cm was vertically fixed on the center symmetry line of the two burners as a shooting reference before the start of the experiments. The fitting result is shown in Equation (1). The pixel values were read manually, with an error within one pixel, and the fit was good. The proportional relationship of the captured image was such that one pixel equals 0.134 cm:

h = 0.134x − 26.26, R² = 0.9999 (1)

where h is the actual scale value on the ruler and x is the pixel coordinate of that scale value on the image.
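A compact sketch of this frame-processing pipeline, assuming OpenCV is available; the frames, centerline position, and judgment half-width below are synthetic placeholders rather than the actual recordings:

```python
import numpy as np
import cv2

# Sketch of the pipeline described above: Otsu threshold on a grayscale
# frame, check the centerline judgment region for flame pixels, and
# accumulate the flame-merging probability over frames.
rng = np.random.default_rng(2)
frames = [rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
          for _ in range(25)]  # synthetic stand-ins for video frames

cx = 320        # pixel column of the centerline between the burners (assumed)
half_w = 4      # judgment half-width in pixels (illustrative)
merged = 0
for gray in frames:
    # Otsu picks the threshold automatically; the threshold argument (0)
    # is ignored when THRESH_OTSU is set.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    region = binary[:, cx - half_w: cx + half_w + 1]
    if region.any():   # any flame pixel in the judgment area -> merged frame
        merged += 1

p_m = merged / len(frames)
print(f"flame-merging probability P_m = {p_m:.2f}")

# Pixel-to-length conversion from the ruler calibration, Equation (1):
h_cm = lambda x: 0.134 * x - 26.26
print(f"pixel coordinate 400 -> {h_cm(400):.1f} cm")
```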
The average flame height H was defined as the vertical distance between the flame root and the highest position where the flame appearance probability is above 50% [28]. If the flame tips were not merged, the flame height was taken as the average of the two flame heights. As shown in Figure 5, the flame was slightly shorter at the two ends than in the middle. The shorter parts at both ends occupied a small proportion of the joint flame, so only the middle part was considered when calculating the flame height. Therefore, the side image in Figure 6 can also present the average flame height, and the calculation process is relatively simple. The flame tilt angle θ was defined as the angle between the oblique flame and the vertical direction, determined by image recognition. The methods of determining H and θ in the flame-merging and non-merging states were slightly different; see Figure 6 for details.

Combustion Behavior
For multiple pool fires, the burning behaviors are significantly affected by the fire interactions (i.e., the competition between heat feedback enhancement and air entrainment restriction [7,8]), which is dominated by S/D (D is the diameter of the fuel pan) [8]. Due to the air-supply restriction among the fires, the negative pressure in the interior of the burning area causes the flames to lean inward, and the flame tips can even touch each other when S/D is small enough.

Figure 7 shows the side view of the flames during the steady period for all the tests. The flames merged to a higher degree at shorter distances S and can be regarded as one flame when there is no gap at the bottom. For a given L/d, as S increases, the flame gradually disengages from the merged state until it is completely separated, and the degree of flame expansion is also reduced. The air-supply restriction between the fire sources gradually decreases as the spacing increases; the pressure drop between the flames decreases; the degree of flame tilt decreases; and the oxygen required for combustion can be obtained in a smaller space, so the degree of flame expansion is reduced.

At the same S, there is no significant difference in the degree of flame merging and extension when comparing the flame morphology under different L/d. Evidently, the influence of the length-width ratio of the fire source on the shape of the two line-fire flames is limited. This phenomenon differs from Thomas et al. [15], which can be attributed to the huge length-width ratio of the fire source: under such a large length-width ratio, the fire source in the experiment can be regarded as a two-dimensional structure. It is also believed that the interaction between line fires does not depend on the length-width ratio. However, because the experiment cannot reach the ideal state and cannot eliminate the influence of the short side of the fire source, the length-width ratio is still considered in the following discussion, although it is foreseeable that its impact on the experimental results is minimal at such a large scale.
Figure 8 shows the flames of line fires with different length-width ratios at the same spacing. Due to air entrainment at the ends, the flames at both ends slope toward the middle, and their height is slightly lower than the middle part of the flame. However, for the different length-width ratios, the degree of flame inclination was almost the same, indicating, to some extent, that the effect of side air entrainment on these flames is similar. For the whole combustion process, the proportion of this influence gradually decreases as the length-width ratio increases. Therefore, for a line fire with a large length-width ratio, the effect of the length-width ratio on the burning process becomes limited, and it is foreseeable that, as the length-width ratio increases, the effect will continue to decrease until it disappears. Besides, when the two flames are close together, there is air-supply competition between them. The air enters the cavity between the flames and moves upward with the fire plume, resulting in a weakened convective heat-transfer effect in the middle of the flame compared to the two ends. However, the enhancement of radiant heat in the central region is sufficient to offset the weakened convective heat transfer, so the middle of the flame is higher than the two ends.

Steady-State Burning Rates
Air entrainment can cause the flames to tilt and merge. In that case, the radiative heat feedback received at the fuel surface comes from both the flame itself and the other fire source. This is called the radiative heat-feedback enhancement effect, which directly affects the burning intensity. The burning rate is an essential characteristic parameter reflecting the severity of burning. Therefore, this section discusses the variation in the burning rate under various spacings and length-width ratios of the fire sources.

A dimensionless burning rate m* = m/m_0 is defined, where m_0 is the burning rate of a single fire source, to explore the change in the burning rate under the influence of the two fire sources. In this experiment, it was assumed that the interaction between fire sources depends on two control variables, namely the spacing S between the two fire sources and the length-width ratio L/d of the fire sources. Figure 9 shows the data of m* under different S and L/d. It can be found that, under different L/d, m* shows the same trend: it first increases and then decreases overall, and all values are greater than 1. The interaction between the fires leads each of them to burn at a higher rate than a single, independent fire source. Although there is a competitive mechanism between the radiation-enhancement effect and the air-entrainment limiting effect, it is evident that, with increasing S, the heat feedback enhancement first increases and then decreases, while the air entrainment restriction decreases monotonically, which leads to the initial increase and subsequent decrease in m*. When S is large enough, the burning rate of each fire source is equivalent to that of an independent fire source; there is no interaction between them, and the value of m* should be 1. It can be seen from Figure 9 that the value of m* rises sharply when S < 0.05 m, indicating that the coupling of the two mechanisms is obvious, and m* reaches a peak at S = 0.05 m. The flame tilts as the spacing increases.
Based on Ref. [29], the radiative heat feedback received by the fuel surface from itself and from the other fire source then begins to decrease, though it is still higher than that of a single fire source. The space on the adjacent side of the two fires becomes larger; the air entrainment restriction is reduced; and its influence is weaker than the decrease in radiative heat. Therefore, m* shows a decreasing trend approaching 1, indicating that the interaction between the fire sources is getting weaker.

When S is specified, the difference in m* under different L/d is small. Since m* characterizes the growth of the burning rate compared with a single fire source, it can be inferred that the length-width ratio of the fire source has a limited effect on m*. For the steady-state burning rate of two parallel line fires under different L/d, besides the fuel type, the spacing S is the dominant factor.

Flame Merging
When there is an interaction between fires, with changing S, the fusion state of the flames can be divided into three stages: full merger, intermittent merger, and non-merger. Full merger means that the flames are always in contact with each other throughout the whole burning stage when the distance between the fires is small, so that they can be considered a single flame. With increased S, the flames begin to separate intermittently and can no longer be considered a single flame; the flames are then in the intermittent merge stage. As S increases further, the flames separate completely, which is considered the non-merger stage.

The flame-merging probability P_m is defined here to analyze and determine the fusion state of the flames. The interaction between fire sources is related to S, L/d, and the heat-release rate; therefore, the relationship between P_m and these parameters was examined. Liu et al. [8] considered the flame-merging probability to be related to the flame tilt angle, and a dimensionless model S/Z_c was developed for characterization, where the characteristic length Z_c is defined as

Z_c = [Q'/(ρ∞ c_p T∞ g^{1/2})]^{2/3} (2)

in which Q' is the heat-release rate per unit length of the single fire source; ρ∞, c_p, and T∞ are the density, specific heat, and temperature of the ambient air, respectively; and g is the gravitational acceleration.
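Under this definition, Z_c can be computed directly from the measured burning rate per unit length, as in the sketch below; the ambient values are typical assumptions rather than the paper's exact inputs:

```python
# Characteristic length Z_c for a line fire, per the definition above:
# Z_c = (Q' / (rho_inf * c_p * T_inf * sqrt(g)))**(2/3).
# Ambient values are typical assumptions, not the paper's exact inputs.
from math import sqrt

def z_c(q_prime_kw_per_m: float,
        rho_inf: float = 1.2,       # ambient air density, kg/m^3
        c_p: float = 1.005,         # specific heat of air, kJ/(kg*K)
        t_inf: float = 293.0,       # ambient temperature, K
        g: float = 9.81) -> float:  # gravitational acceleration, m/s^2
    """Characteristic length (m) from heat-release rate per unit length."""
    return (q_prime_kw_per_m / (rho_inf * c_p * t_inf * sqrt(g))) ** (2.0 / 3.0)

# Example: burning rate 0.3118 g/s over a 0.6 m burner, with the n-heptane
# heat of combustion of 44.56 kJ/g cited in the text.
q_prime = (0.3118 / 0.6) * 44.56    # ~23.2 kW/m
print(f"Z_c ~ {z_c(q_prime):.3f} m")  # ~0.076 m
```

With S up to 0.45 m, this magnitude of Z_c puts S/Z_c in the single-digit range, consistent with the values discussed below.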
Since the flame height is directly related to the test environment and fuel type, the characteristic scale Z_c can represent the flame height, and this characteristic quantity was used in the follow-up analysis. Since the tested fuel is liquid n-heptane, with a calorific value of 44.56 MJ/kg [30], Q' can be calculated from the burning rate per unit length. The flame height can affect the merging of the flames. During data processing and experimental observation, it was found that, in the intermittent merge stage, the merging probability showed a decreasing trend as S increased. For different L/d, the flame-merging probability also changed, and it can be inferred that P_m can be described by the characteristic quantities S, Z_c, and L/d. Therefore, the dimensionless model (S/Z_c)/(L/d)^a was constructed to analyze the flame-merging probability function, and a relationship between the intermittent-merging stage and these parameters was observed. A fitting was performed based on Equation (3); compared with the previous study [21] and the data analysis, a fixed solution a = 1 can be adopted to calculate the optimal solution:

P_m = −0.77(S/Z_c)/(L/d)^0.27 + 1.83, R² = 0.99

Figure 10 shows P_m with several clear stages. To simplify the analysis, we define the dimensionless parameter χ = (S/Z_c)/(L/d)^0.27. When χ is small, P_m = 1, meaning the flames are completely merged; as χ increases, P_m < 1 and the flames are in an intermittent merge state, with the merging probability gradually decreasing as χ increases; when χ increases to a certain value, P_m decreases to 0, and the flames no longer merge. Further analysis of the critical state of flame merging shows that when P_m > 0 the flames exhibit merging, so P_m = 0 can be used as the criterion for flame merging. Combined with the above fitting result, P_m = 0 at χ = 2.38; thus, the criterion for flame merging is χ = 2.38. If the entire combustion process is divided into merged and non-merged phases, P_m = 0.5 can be used as the critical point determining whether the flames are merged, where χ = 1.73. Furthermore, the power of L/d is only 0.27, so the influence of the length-width ratio of the fire source on the flame-merging probability is weak. Therefore, the flame-merging probability is dominated by S and the fuel type.

Based on the above analysis, the function P_m(χ) can be expressed as follows:

P_m = 1, for χ ≤ 1.08;
P_m = −0.77χ + 1.83, for 1.08 < χ < 2.38;
P_m = 0, for χ ≥ 2.38.
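This three-segment function amounts to clamping the fitted linear relation between 0 and 1, as in the sketch below; the lower breakpoint (χ ≈ 1.08) follows from solving −0.77χ + 1.83 = 1:

```python
# Three-segment flame-merging probability from the fit above:
# P_m = 1 below the full-merger bound, linear in between, 0 beyond 2.38.
def p_m(chi: float) -> float:
    """Flame-merging probability as a function of chi = (S/Z_c)/(L/d)**0.27."""
    val = -0.77 * chi + 1.83
    return min(1.0, max(0.0, val))  # clamp the linear fit to [0, 1]

for chi in (0.5, 1.5, 1.73, 2.38, 3.0):
    print(f"chi = {chi:4.2f} -> P_m = {p_m(chi):.2f}")
# chi = 1.73 gives P_m = 0.50 and chi = 2.38 gives P_m = 0.00, matching the
# critical points quoted in the text.
```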
Dimensionless Analysis of Flame Height
Compared with a single fire source, the effects of the two interaction mechanisms on the flame height were explored. A dimensionless parameter H* = H/H_0 was proposed, where H_0 is the flame height of a single fire source. The flame height is also related to S, Z_c, L/d, and the experimental conditions. Figure 11 shows H* under different (S/Z_c)/(L/d)^0.27, and the convergence is good.

The trend of flame height with χ is the same for fire sources with different L/d. After rising in the initial stage, H* decreases regularly and gradually approaches 1. This is attributed to the interaction between the fires. On the one hand, the enhanced radiative heat feedback produces more fuel vapor, requiring more air to support the burning process; at this point the air entrainment is limited, so more space is needed to complete combustion, which leads to a greater flame height. On the other hand, the pressure drop between the fires leads to flame inclination as the spacing between the fires increases; the radiation-feedback intensity decreases, and the degree of air entrainment limitation decreases, so the flame height decreases. This continues until the mutual influence between the fire sources disappears, resulting in two independent fire sources with H* equal to 1. The presence of the peak is related to the burning rate: the air-entrainment limitation is weak at smaller spacings, so the flame height depends on the burning rate. The vertical lines in Figure 11 mark the boundaries of the three merging stages according to χ. In Section 4.1, it was found that the law of interaction between the two fires is reflected in the intermittent merge phase of the flame. H* in the intermittent merge stage also shows a regular trend, so fitting the data of this phase yields the following functional relationship:

H* = 1.27[(S/Z_c)/(L/d)^0.27]^(−0.25), R² = 0.98

Figure 11 shows that there also seems to be a clear rule in the complete merge phase after the peak appears. The data of this phase were fitted with the following functional relationship:

H* = −0.46(S/Z_c)/(L/d)^0.27 + 1.73, R² = 0.98

This test did not consider the flame height distribution during the non-merger phase, so it is not discussed here. In summary, the flame-height function H*(χ) is given piecewise by the two fitted relationships above for the complete merge and intermittent merge stages. The flame height is dominated by S/Z_c: when S/Z_c increases, H* decreases regularly, and under different L/d, H* shows the same law. It can be seen from the functional relationships that, although the effect of L/d on the flame height exists, it is weak.

Flame Length and Tilt Angle
With increased S, the two flames can no longer be regarded as one fused flame due to the gap at the flame root (at approximately S = 25 cm). At this point, the flames incline toward each other, and the flame height is insufficient to characterize the burning state. Therefore, the concept of flame length is proposed to characterize the flame shape. Figure 12 shows the variation in the dimensionless flame length L_f/H_0 with S/Z_c, where H_0 is the flame height of the single line fire. With increased S/Z_c, there is a sudden increase in L_f/H_0 when S/Z_c approaches about 5.
Flame Length and Tilt Angle

With increased S, the two flames can no longer be regarded as one fused flame because of the gap at the flame root (at approximately S = 25 cm). The flames then incline toward each other, and the flame height alone is insufficient to characterize the burning state. Therefore, the flame length L_f is introduced to characterize the flame shape. Figure 12 shows the variation of the dimensionless flame length L_f/H_0 with S/d, where H_0 is the flame height of the single line fire. As S/d increases, L_f/H_0 rises suddenly when S/d approaches about 5.

L_f/H_0 reaches its maximum value at S = 35 cm, which is exactly the critical point of flame merging mentioned above, P = 0.5. When P > 0.5, the flames tilt toward each other because of the interaction and the flame tips remain in contact, so the flame shows a certain curvature due to the limited air entrainment. When P < 0.5, the flame tips no longer merge continuously, so the air-entrainment restriction and the flame bending decrease. As the spacing increased, the burning rate and the flame-merging probability decreased, leaving the flames in a critical state between merging and non-merging, where the mutual traction between the flames reached its maximum; besides, the restriction of the air supply decreased, reducing the pressure difference between the inside and the outside of the flame. Hence the dimensionless flame length L_f/H_0 shows a sudden rise, consistent with the trend of the flame tilt angle θ with S/d.

It can be seen from Figure 13 that θ tends to increase gradually and then decrease slowly with increasing S/d. When the spacing is zero, the two fires can be regarded as a single fire source with a width of 2d; in other words, when S/d = 0 there is no mutual inclination of the flames, so the tilt angle is zero. As the spacing increases, the air entrainment is limited by the separation of the flame roots, an air-pressure difference exists between the flames, and the flames begin to incline toward each other. As the spacing becomes larger, the space available for inclination also becomes larger; since the air entrainment is always limited, θ increases gradually and reaches its maximum when the flame bending disappears. At this point, the air-pressure difference across each line fire reaches its maximum. Then, as the degree of air-entrainment limitation is reduced, the pressure drop between the flames becomes smaller and θ naturally decreases. It can further be predicted that, with a further increase in the spacing between the fires, θ would continuously decrease and the two fire sources could be regarded as independent; θ will then gradually approach 0 and the flames will be vertical, as in their natural state.

Conclusions

This work focused on the coupled combustion behavior of two parallel line fires. By considering different length-width ratios (L/d) and spacings (S) of the fire sources, the interaction laws between two parallel line fires were studied, and the main conclusions are summarized as follows.

(1) Real-time monitoring of the burning rate showed that each of the two parallel line fires had a higher burning rate than an independent fire source, indicating that the coupling of the two interaction mechanisms has a promoting effect. Moreover, under different L/d, the burning rate per unit length and its growth rate were basically the same: they were not affected by L/d but were strongly affected by the spacing S.

(2) An image-processing method was used to obtain the merging probability of the flame. Three phases were distinguished: full merging, intermittent merging, and non-merging. In addition, a three-segment function for the merging probability was proposed, and the critical criterion determining the start of flame fusion was obtained.
(3) In the analysis of the dimensionless flame height H*, it was found that H* is dominated by S/d, and its functional expression was obtained. In addition, an abrupt change of the flame length was found in a certain interval, attributed to the weakening of the air-supply restriction; the flame tilt angle follows the same law.

Figure 2. Instantaneous burning of line fires with L/d = 60 and S = 0.2 m.
Figure 3. Flame image processing in side view: (a) RGB image; (b) grayscale image; and (c) binary image.
Figure 5. Front view of the flame.
Figure 6. Schematic diagram of flame tilt angle and length definition: (a) flame merged and (b) flame not merged.
Figure 7. Typical snapshots of combustion behavior of two parallel line fires.
Figure 8. Diagram of the influence of side air entrainment on the flame.
Figure 9. Variation law of the dimensionless burning rate with fire-source spacing.
Figure 12. The relationship between the dimensionless flame length L_f/H_0 and S/d.
Figure 13. The changing trend of the flame tilt angle θ with S/d.
Akan ethnomathematics: Demonstrating its pedagogical action on the teaching and learning of mensuration and geometry

The implementation of the mathematics curriculum depends largely on teachers' choice of pedagogical skills that influence meaningful teaching and learning. The approach this paper presents is to explore and demonstrate Akan (a tribe in Ghana, West Africa) ethnomathematics in the teaching and learning of selected mensuration and geometrical concepts found in the secondary school curriculum. The study found various Akan ethnomathematical processes supporting the teaching and learning of school geometry and mensuration topics, such as artefacts, buildings, tools, and others. The ethnomathematical processes reveal a resemblance of pi (π) concepts and their application to ethno-technology in selected artefacts used for pedagogical demonstrations. We recommend further research into the practical effect of the ethnomathematics move in teaching other mathematical concepts in communities where cultural diversity exists. It is suggested that mathematics educators adopt an ethnomathematics methodology by integrating it into the curriculum implementation process to check its impact on the teaching and learning of mathematics.

INTRODUCTION

Culture and indigenous applications of geometry and mensuration are interconnected, making formal school content closely related to the environment and the culture in which a mathematical concept is taught. In the teaching of mensuration, for example, ethnomathematics could be used to exemplify various applications of ethnomathematical ideas adapted to suit the child's environment (Acharya, 2019; Unodiaku, 2018; Zirebwa, 2008). The teaching and learning of mensuration and geometry has been a problem for most students, especially in Ghana.

Ethnomathematics philosophers in the past decades have shown interest in finding the most prudent way to facilitate mathematics teaching and learning from a cultural perspective (D'Ambrosio, 1985; D'Ambrosio & Rosa, 2017; Davis & Seah, 2016; Kusuma et al., 2019; Machaba, 2018; Sunzuma & Maharaj, 2019). This has been the concern not only of government but also of general stakeholders who have child education at heart. Researchers have investigated extensively how to make mathematics education learning-friendly and teaching pleasant.

A new direction researchers are looking to is teaching mathematics in a cultural context. Ethnomathematical moves in teaching mathematical concepts have recently been considered one of the most efficient ways of teaching mathematics. Various theoretical discussions have been made of this philosophy (Peni, 2019), but its practical aspect has received far less attention. In order to utilize the cultural know-how of the people and make mathematical concepts socially connected as the norms of culture demand, there is the need to consider Realistic Mathematics Education (RME) through ethnomathematics pedagogies (Azmi et al., 2018; Purwanti & Waluya, 2019; Sumirattana et al., 2017). There is the need to experiment through demonstration lessons to investigate the impact of ethnomathematical moves on the teaching and learning of mathematics.
Throughout history, the mathematics education curriculum has been designed, developed, and delivered within a Eurocentric philosophy (Scott, 2018). The oppressive nature of mathematics is best described by Rowlands and Carson (2002) as Eurocentrism that deviates from a curriculum grounded in culture. This conforms to the view of mathematics as primarily white, shaped by a European imagination and male-dominated. Practically, in the Ghanaian educational system, about 90% of students studying mathematics find it difficult to understand geometry because of the way it is taught in schools. The mathematical constructs, concepts, and ideas are Western-dominated in culture. Has the imitation of foreign cultural principles guiding the teaching and learning of mathematical content been helpful? Can mathematics educators bridge the gap between the Euro-Western idealisation of mathematics education and an Afro-multicultural mathematics in which concepts are exemplified with indigenous issues from a cultural perspective? The argument surrounding this is of interest to some African mathematics educators who hold to ethnomathematics (Amit & Qouder, 2017; Davis & Seah, 2016; Mills & Mereku, 2016).

The learner is placed at the centre of the learning situation to interact with several mathematical concepts integrated from a formal perspective. Mensuration and geometry seem to be among the broadest topics in many elementary and secondary schools. Most students dislike them and will not attempt related problems in their external examinations. Teaching mathematics from a cultural context could help bridge this gap. To what extent is there an interconnection between what students know from their cultural perspective and what they intend to learn from formal interactions on the said topic? Ethnomathematics encourages relationships and open dialogue in learning mathematics in the classroom (Borba, 1990). The teacher can teach through ethnomathematics principles, which students respond to from their experience with their cultural understanding. According to Borba (1990), such interaction helps strengthen their socio-cultural roots, since their foundational knowledge is now considered valuable in class. Whatever they learn would then be well understood and meaningful.
Every individual is surrounded by cultural imprints in one way or another. The Akan communities have various artifacts, among other cultural imprints, that can support the teaching of geometry. Making sense of mathematics out of culture brings meaningful and exploratory learning grounded in what culture has already taught us, and culture will continue to teach us meaningful mathematics if we open up to admit its precepts into curriculum implementation. The pedagogical process mathematics educators adopt to teach selected topics such as geometry can enhance students' knowledge cognition. The connection between cultural diversity and subject-based concept teaching has been explored by Orey and Rosa (2004) and Rosa and Orey (2016), although making this connection in curriculum adaptation has received little attention. Culture has now informed educators of the need to consider cultural diversities in designing teaching methodologies. Ethnomathematics is seen by researchers in different ways. Concerning the field of mathematics, and in line with the "consideration of mathematics as human and cultural knowledge" (Bishop, 1994), there appears to be a change in the meaning of ethnomathematics toward diversity within mathematics and within mathematical practices (Rosa & Orey, 2016). Thus, ethnomathematics is taken to describe the different mathematical practices rooted in cultural diversity.

Several surveys have been done on ethnomathematics, understood as culture influencing the mathematical approach to induce meaningful learning of mathematics. Kurumeh (2004) discusses the way in which students' immediate environment supports the ethnomathematical teaching and learning of mathematics in schools. Learning in Akan communities is surrounded by many ethnomathematical processes and ethno-technologies that can lead to a clearer understanding of geometry lessons. Home sweet home, they say: Kurumeh (2004) sees students who attend school in their own community as gaining meaningful understanding of whatever they learn. It is common practice in Ghana to see many Ghanaian (e.g., Akan) children attending the community school in their immediate environment. Enukoha (2002) affirms that mathematical instruction supports learners' understanding when the learner's cultural background is considered in the exemplification of mathematical problem-solving. Indigenous mathematics and Eurocentric mathematics (modern classroom mathematics) need to be weighed to see what favors the African child relative to the European child. There is the need to groom mathematics education for the African child with instruction based on cultural materials, illustrations, examples, improvisations, surveys, case studies, and the like. Curriculum implementers could start experimenting with an ethnomathematics-based philosophy to see the way forward for mathematics education.
Ethnomathematics methodology should be embraced in our school system as part of curriculum implementation. Many mathematics educators (teachers) believe that there is no place for cultural constructs in mathematics teaching and learning. There should be a primary goal of finding meaningful ways to bring components of ethnomathematics into the formal curriculum mainstream (Naresh, 2015). The formal mathematics curriculum should intensify the integration of ethnomathematical ideas into mathematics classrooms. In the light of finding the friendliest teaching approach for geometry lessons, this paper discusses ethnomathematics pedagogical moves and processes that could be used to teach elementary concepts of geometry, and the presence of these mathematical concepts in the Akan culture.

Objectives of the Study

The main objectives were to:

1. Identify Akan ethnomathematical processes that support the teaching and learning of geometry in Ghana.
2. Demonstrate pedagogical actions of Akan ethnomathematics deemed to support the teaching and learning of geometry.
3. Investigate the geometrical concept knowledge base in Akan artifacts through ethno-technology.

METHOD

The research sought to demonstrate ethnomathematics pedagogical action on the teaching of geometry. The study is motivated by the theoretical framework on the teaching and learning of mathematical content based on the ethnomathematics handbook outlined by Forbes (2018). Several ethnomathematics philosophers have suggested making mathematics lessons and curriculum implementation ethnomathematics-pedagogically based (Peni & Baba, 2019; Putra, 2018; Rosa & Orey, 2019). The study used an exploratory mixed method to investigate the impact of ethnomathematical moves on the teaching and learning process. Ethnomathematics pedagogy was demonstrated on selected concepts in geometry for which Akan ethnomathematics offers resourceful teaching of the constructs. We explored the mathematical concepts in the Akan culture and demonstrated their effect on the teaching and learning of the formal mathematics system.

Akan is a tribal community in Ghana, West Africa, with a diverse culture and sub-tribal communities scattered over many geographical locations in Ghana. They possess a common ethnomathematics deemed to support the teaching of many mathematical concepts, such as geometry. A limitation exists because these informal mathematical constructs are transmitted without scripted teaching and learning: mathematics among community members is passed on through oral tradition, where the young learn by imitation from the older folks. Their children, however, take on formal education. There is the need to link what they know and interact with in their culture to the formal classroom, where a formal curriculum under Eurocentric idealization is met. There is therefore the need to investigate and demonstrate how to mathematize geometric content to make teaching and learning as friendly as possible.
RESULTS AND DISCUSSION

Among Akan ethnomathematics are collections of existing artefacts which clearly suggest a knowledge base of mathematical concepts from an informal position. Figure 1 shows some selected Akan artefacts popularly used in most community kitchens that clearly display mathematical sense, skills, and knowledge with implications for geometry, mensuration, and general conics concepts in mathematics. Akan indigenous artefacts come in different forms: earthen bowl ware, mortars, buildings, clothing (batiks), and others. Research has investigated other linkages of ethnomathematics to existing cultures (Pradhan, 2017; Pramudita & Rosnawati, 2019). Cultural artefacts create a means of conceptualizing ethnomathematics in connection with formal mathematics concepts (Garegae, 2015). Innumerable studies in mathematics education have concentrated on the design and implementation of didactical activities based on architectural experience and on the manipulation of concrete objects and artefacts, with the aim of fostering the development of particular mathematical meanings and thinking. Culture describes how some, if not all, mathematical concepts grow from an informal knowledge base to the most formal one that is widely accepted today. Students' use of specific artefacts in solving mathematical problems contributes to their development of mathematical meanings for the concept studied, in a way potentially coherent with the mathematical meanings aimed at in formal teaching activity (Bussi & Xu-Hua, 2016). However, it is important to keep in mind that, in fostering the development of mathematical meanings, an essential component is students' sharing, comparing, and evolving of strategies (which can be accomplished in a number of different ways). These mathematical meanings can include structure sense and evidence-based ethnomathematics, such as artefacts that can serve as resourceful teaching of formal mathematical concepts and general curriculum implementation. This can be promoted through a variety of mathematical content drawn from an ethnomathematics perspective to ease the transition to the formal classroom curriculum (Bussi & Xu-Hua, 2016).

Akan Ethnomathematics on Circular Artefacts

In Figure 1, traditional indigenous artifacts from selected Akan communities are typical ethnomathematics with implications for both informal and formal mathematical concepts regarding conics: circles, cylinders, cones, and so on. Below are some selected kitchen artifacts suggesting the ethnomathematics of mensuration within the content scope of SHS core mathematics. The typical traditional Ghanaian kitchen has circular artefacts to which most children are exposed before going to school (see Figure 1). The Ghanaian multicultural system is gifted with many traditional indigenous artefacts, traditionally engineered to form an informal technology connected to mathematical constructs in the form of geometry, conics, mensuration, and geometrical shapes such as triangles, squares, rectangles, rhombuses, and kites, among others.
Exploring the concept of pi (π) from Akan ethnomathematics

Most of the Akan techno-ethnomathematics base is connected to their artefacts and the production of things. The artefact in Figure 2, for example, is manufactured based on the maker's knowledge of circles (mensuration). The researcher was interested in the manufacturers' knowledge of circle characteristics. When the circumference, diameter, and height were measured from Figure 2, pi was estimated as follows. The measured circumference (C) of the circular asanka displayed in Figure 2 was 48.9 cm and its diameter (d) was 15.6 cm, with a height (h) of 12.5 cm. To estimate pi, we take the ratio C : d: 48.9/15.6 ≈ 3.14. Hence, the estimated pi of π ≈ 3.14 depicts precision in the craftsmanship of the circular artefact, conforming to the formal concept of pi (π).

The mathematical concept of mensuration is seen in artefacts engineered on a circular basis. For example, circles, cones, cylinders, parabolas, ellipses, and hyperbolas are seen in the various kitchen technological tools used for grinding, storing food and water, and eating (earthen bowls), among others. Investigating the concept of pi in selected Akan circular objects (as seen in Table 1) reveals a close estimate of π ≈ 3.142.

Students in their groups sampled several artifacts from their traditional homes and were asked to measure the circumference (C) and diameter (d) of each object in centimeters (cm). Students placed a thin rope around each circular artifact and stretched it along a rule to record the results. Similarly, they placed circular objects between two hard flat parallel surfaces to expose the diameter for measurement. The results of their findings (C/d) are tabulated in Table 1.

We investigated further whether selected Akan indigenous circular artefacts are crafted consistently with the formal concept of the circle. What characterizes the regularity of a circle is the concept of pi (π), approximately 3.142 (3 d.p.). With a marginal error of approximating pi to 3.1 (1 d.p.), the selected Akan artefacts from the focus group who make and sell them embody, on average, the concept of pi as approximately 3.14. The knowledge base in creating, crafting, and engineering them shows significant informal mathematics. Children's daily use of them in their homes provides, to some extent, a sensory stimulus that revives relevant previous knowledge for the mensuration concept in formal curriculum implementation. Creating awareness of ethnomathematics in the curriculum implementation process is what ethnomathematicians challenge mathematics educators to consider in their pedagogical actions (D'Ambrosio, 1985; Favilli, 2007; Rosa & Orey, 2016; Shirley, 2001).

We can establish geometrical linkage to formal mathematical concepts from this Akan ethnomathematics on artefacts. Table 2 shows the interconnection of these ethnomathematical constructions, closely linked to mathematical content suggested by the mathematics curriculum. Participants were asked to identify as many geometrical shapes as they could see in these ethnomathematical edifices in their neighbourhoods. The responses were noted along with their reasons.
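The Table 1 classroom computation can be summarized in a few lines; apart from the asanka measurement quoted above, the (C, d) pairs below are illustrative, not the study's field data.

```python
# Estimate pi as the mean ratio C/d over measured circular artifacts.
# The asanka values (48.9, 15.6) are from the text; the others are hypothetical.
measurements = [
    ("asanka", 48.9, 15.6),
    ("earthen bowl", 62.8, 20.0),   # illustrative
    ("water jar", 94.5, 30.1),      # illustrative
]

ratios = [c / d for _, c, d in measurements]
pi_estimate = sum(ratios) / len(ratios)

for (name, c, d), r in zip(measurements, ratios):
    print(f"{name}: C/d = {r:.3f}")
print(f"mean estimate of pi: {pi_estimate:.3f}")   # close to 3.142
```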
Participants were able to link their formal mathematical knowledge of geometry to what they see in their neighbourhood, and specifically to the problem given in Table 2, where they were asked to identify the appropriate geometrical shape associated with artefacts from their community. The majority of those who had formal education up to grade 6 (primary 6) were able to associate the artefacts with shapes such as circles, rectangles, squares, rhombuses, spheres, cones, cylinders, kites, and cuboids.

In the bid to establish the importance of ethnomathematical artefacts in illustrating meaning in early-grade mathematical concept teaching, Baccaglini-Frank (2015) turns our attention to the critical role of the structure of artefacts and the ways young students interpret and construct representations. Secondary school students were tested on whether their recognition from daily exposure to such artefacts would help them solve a mathematics problem that merges the formal and the informal. Students were given the following problem from Figure 3: given the diagrams in specimens A-D, if the rectangle in specimen A has length l = 15 cm and width b = 10 cm, the cylinder in specimen B has height 5 cm and radius 8 cm, and the conical roof in specimen C has radius 12 cm and height 2.5 cm, find the total surface area of the building in specimen D made out of A, B, and C.

Few of the students were able to work out routine and non-routine attack phases for the problem. When they were given the clue of the visualization process, however, they were able to internalize the mathematical representation and unlock the solution. Students begin to internalise the visualised 'structure' of the diagram associated with the artefacts, and we can infer that they have internalised the structure of the grid. The use of structure sense is embedded in this example, as seen in Figure 3. The ability to decompose or partition mathematical representations is directly linked to the child's strategies for calculating. This is most often articulated through the child's strong visual imagery of buildings to be broken up and through verbalisation: 'I break ... into parts', 'components are put together', and so on. The key process here is not counting by ones, computing everything at once, or repeated addition, but structuring and partitioning the combined concepts, 'breaking up' the figure into constituent parts, solving these, and integrating them together (see Figure 3 and Table 3).

Table 2. Respondents' views on the geometrical shapes they associated with identified artefacts from their community.

Akan ethnomathematics rests on ethno-technology, a situation where indigenous technology is seen in traditional engineering processes. Technology and science rule the world in recent times, and technological revolutions evolved from societal needs with backgrounds rooted in what culture suggests. The Akan ethnomathematics system has certain indigenous technological applications. This technology is governed by how the people use their knowledge of mathematics in building, solving pertinent societal problems, and making tables, chairs, chop-boxes, and so on. Table 3 shows how some Akan ethno-technology rests on a base knowledge of geometry and mensuration. The Akan concept of geometrical ethnomathematics is applied in their various artefacts and technological know-how, as depicted in Table 3.
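The component areas in the specimen A-D problem can be checked with a short computation. The text does not fully specify how the pieces join in specimen D (or the wall height), so the assembly below (four walls plus the curved surfaces of the cylinder and the conical roof, with an assumed wall height) is for illustration only.

```python
import math

# Component surface areas for the specimen A-D problem, using the dimensions
# quoted in the text. The wall height and the way the pieces overlap are
# assumptions; this only sketches the parts a student would combine.

l, b = 15.0, 10.0                # rectangle length and width (from the text)
h_box = 12.0                     # assumed wall height (not given in the text)
r_cyl, h_cyl = 8.0, 5.0          # cylinder radius and height (from the text)
r_cone, h_cone = 12.0, 2.5       # conical roof radius and height (from the text)

walls = 2 * (l + b) * h_box                  # four rectangular walls
cylinder_side = 2 * math.pi * r_cyl * h_cyl  # curved surface of the cylinder
slant = math.sqrt(r_cone**2 + h_cone**2)     # slant height of the cone
cone_side = math.pi * r_cone * slant         # curved surface of the conical roof

total = walls + cylinder_side + cone_side
print(f"walls={walls:.1f}  cylinder={cylinder_side:.1f}  "
      f"cone={cone_side:.1f}  total={total:.1f} cm^2")
```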
Geometrical knowledge is used to create habitable houses, mostly found in the Akan rural communities, which largely preserve the typical Akan cultural systems.

The implication of these informal ethnomathematical concepts for formal teaching is enormous. To bridge the gap between culture-based mathematics and formal mathematics, there is the need to mathematize culture-based mathematics, as suggested by Peni (2019). According to Orey and Rosa (2016), mathematization is the development of a given problem, that is, the transformation of the problem into mathematical language. A typical example is to teach mensuration in secondary school, where traditional artefacts with cylindrical and circular bases would be used to exemplify solutions (see Figure 3). The need to consider an ethnomathematics pedagogical approach to mathematics teaching has been recommended, and a manual to assist such application of ethnomathematics to formal teaching has been crafted for mathematics educators to consider (Baba & Iwasaki, 2001; Forbes, 2018; Peni, 2019; Putra, 2018). The knowledge of circular artefacts discussed so far suggests, in the most formal way, how the concept of conics can become meaningful. There are embedded concepts of circles, parabolas, ellipses, and hyperbolas in the various illustrations of Akan ethnomathematics (see Figure 4). Researchers are of the view that, as we create linkage by bridging the gap between ethnomathematics in its informal position and formal mathematical learning concepts, the application of mathematics becomes more meaningful, interesting, and appreciated. Figure 4 presents an assembly of conic sections comprising circles, parabolas, ellipses, hyperbolas, and other geometrical course content, suggesting a close connection to Ghanaian Akan traditional ethnomathematical technologies that could serve as a basis for curriculum implementation in the teaching of geometry and mensuration within the formal scope of mathematics teaching in senior high schools.

Culture and Geometry

In the teaching of total surface area for specimens A, B, and C from Table 2, a composite of the three can be mathematized as specimen D, where all these individual geometrical and area concepts are put together. Let a mathematical problem be crafted for students to find the total surface area of D excluding the rectangular entrance. All things being equal, students who come from communities where these structures are found, or who perhaps live in such buildings, might internalize the solution better than students from non-Akan communities where they are not found. This might result from familiarity with the traditional indigenous building technology found in and around the learner's environment. Such students would find the problem close to their environmental exploration, and the stimulus idealization might be sharper than for students of equal ability from places where these structures are not found. This conforms to Vygotsky's theory, where the student's internalization falls within his or her zone of proximal development (Vygotsky, 1998). Students' understanding of the mathematical concept is fostered by idealization from the environment, culture, and tradition best known to the students (Clark et al., 2013).
With regard to this connectedness, Sunzuma and Maharaj (2019) studied mathematics teachers' perceptions and the challenges of implementing ethnomathematics in the Zimbabwean classroom as suggested by the mathematics syllabus. The school system has indicated that geometry should be connected to the learners' environment and culture. There are teacher-related challenges to the incorporation of ethnomathematics approaches into the teaching of geometry; teachers who expressed their views on these challenges emphasized resistance to change. The major challenges Sunzuma and Maharaj (2019) found included lack of knowledge of ethnomathematics approaches and of how to integrate them into the teaching of geometry; teachers' lack of geometry content knowledge; teachers' views of the geometry taught in schools; teachers' competence in teaching geometry; teaching and professional experience; and teachers' resistance to change. They recommend redesigning the curricula to include ethnomathematics approaches to teaching, as well as in-service training on ethnomathematics approaches for all teachers.

There are quite a number of cultural dynamics, not only from Akan ethno-technological perspectives but also from general multicultural settings, that genuinely support curriculum implementation in formal mathematics. Mathematics education systems could be supported by pedagogical approaches that adapt ethnomathematics methodologies suggested by the learners' environment, culture, and material tools, which serve as a basis for resourceful teaching and improvisation in mathematics. There is a clear interconnection between informal ethnomathematical ideas and the formal mathematics presumed by most researchers to be Eurocentric (Powell & Frankenstein, 1997). The Ghanaian core mathematics curriculum carries a close imprint of ethnomathematical ideas, but its implementation through teaching and learning does not consider ethnomathematics. The broad structure and scope of the mathematics curriculum have a close connection to many traditional and socio-cultural dynamics, and these dynamics support the content-based structure of the subject discussed so far.

Measurement of Length and Distance

The Akans have their own measurement system in their communities and still use it, while the new generation uses modern measurement tools for length and distance. The traditional Akan measurement system includes arm-stretching, leg footing, and others. The distance and length units are basafa (arm's length), kwansin (miles), and anamon (kilometers). To measure a very short length they use stretched fingers (nsayℇm), formerly reckoned as inches. Similarly, they use insatea (finger span) to measure the distance between the tip of the thumb and the tip of the pointing finger, as presented in Figure 5. Other forms of body gesture that also come as sensorimotor, perceptive, and kinaesthetic-tactile experiences are the stepping and hand-counting techniques popularly seen in Akan ethnomathematics. These fundamental body movements are used in the formation of mathematical concepts. The key role attributed to hand-stretched counting and footing in the development of number sense also seems highly resonant with the frame of embodied cognition.
Step counting (footing) and hand counting have long been used as methods of measuring distance. Starting in the mid-1900s, researchers became interested in using steps per day to quantify physical activity, which requires measuring lengths and tracing angles. Step counts have several advantages as a metric: they are intuitive, easy to measure, objective, and they represent a fundamental unit of human counting activity. They can also serve as arbitrary measurement tools to facilitate teaching and learning when scientific tools are absent (Cycleback, 2014; Davids, 2019).

The Akan people flip their hands along the line segment being measured; an arbitrary count of the distance is recorded and compared with other distances. This approach is good for introducing the measurement of length through arbitrary-unit exploration. Terms used to connote distance measurement are anamↄn, kwansin, kwansitenten, basafa, kwantia, and so on. In the same way, some (for example, farmers and surveyors) stretch their feet to count shorter distances and take long strides when counting longer distances, such as a plot of land. Participating students were guided through practical activities using foot- and hand-counting techniques to measure distances for perimeters and geometrical measurement during mensuration lessons. Such ethnomathematical activities were able to establish the meaning of the perimeter of rectangles and squares and the recognition of the Pythagorean theorem, as follows.

As seen in Figure 6, the area and perimeter of the triangular and rectangular plots were found using the foot-step and hand counting system of Akan ethnomathematics. The key mathematical concept the ethnomathematics illustrates (see Figure 6) was recognizing the Pythagorean theorem in order to establish the area of the fields as well as the perimeter enclosing a given plot. The mathematical processes associated with these practices suggest a metacognition of the Akan concept of geometry, illustrated through body movements and gestures, games, artefacts, and indigenous technology such as buildings, roofing, and others.

The Akan kente cloth depicts geometrical knowledge in ethnomathematical concepts that link to the formal mathematical concepts of geometry and mensuration. Figure 7 shows kente designs in various geometrical shapes such as rectangles, rhombuses, triangles, kites, squares, arrow stripes, and others.
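A small sketch of how such pace counts convert into standard measures; the pace length and step counts are hypothetical classroom values, not field data from the study.

```python
# Convert foot-step counts along the sides of a rectangular plot into
# perimeter and area, and check the diagonal with the Pythagorean theorem.
pace_m = 0.75                       # assumed average pace length in meters
steps_length, steps_width = 40, 30  # hypothetical step counts along two sides

length = steps_length * pace_m
width = steps_width * pace_m

perimeter = 2 * (length + width)
area = length * width
diagonal = (length**2 + width**2) ** 0.5   # Pythagorean theorem

print(f"length={length} m, width={width} m")
print(f"perimeter={perimeter} m, area={area} m^2, diagonal={diagonal:.2f} m")
```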
CONCLUSIONS AND RECOMMENDATION

Culture spells out a great deal of mathematical content. Akan ethnomathematics shows evidence of geometric and mensuration content. These ethnomathematical processes take the form of artefacts such as earthen bowl ware, blenders, jars, kente cloths, bowls, and others. Akan ethnomathematics supports the teaching and learning of mathematical concepts, of which mensuration and geometry are no exception. Evidence from the discussions and analyses of Akan ethnomathematical processes shows clear interconnections between the formal and informal mathematical content enshrined in the school-based curriculum. The pedagogical moves adopted from this basic ethnomathematics are applicable in various resourceful teaching processes. The existence of ethnomathematics in Akan culture helps children from these communities follow lesson development and enhance their acquisition of mathematical skills. The Akan ethnomathematics revealed the application of the concept of pi (π), enhancing how students can conceive this mensuration concept from the formal teaching perspective.

Typical among Akan ethnomathematics is the application of informal mathematics called ethno-technology. This has been the basis of the Akan indigenous creation of artefacts, which come in forms such as the kente weaving process that produces geometrical shapes. Buildings and traditional artefacts were engineered from this ethnomathematical and ethno-technological idealization. The informal transmission of these ethnomathematical ideas is a great limitation on extracting a connection to the formal, Eurocentric approach to teaching mathematics. To start from somewhere, there is the need to mathematize informal ethnomathematics so that it adjusts to formal curriculum implementation processes through appropriate ethnomathematical pedagogies. Culture has a lot to give mathematics educators if we agree to admit its integration into teaching and learning processes.

We recommend further research into the practical effect of the ethnomathematics move in teaching other mathematical concepts in communities where cultural diversity exists. It is suggested that mathematics educators adopt an ethnomathematics methodology by integrating it into the curriculum implementation process to check its impact on the teaching and learning of mathematics.

Figure 3. Problem specimens from a merger of Akan informal ethnomathematics with the formal.
Figure 4. Some selected conic sections closely connected to Ghanaian ethnomathematical constructs suggested by the school-based mathematics curriculum (Sullivan, 2010).
Figure 5. Foot and hand counting system of measurement (source: field survey).
Figure 6. Field perimeter and area measurement using foot and hand counting (source: field survey).

Figure 7 also shows other designs made of rhombus, kite, and circular patterns, respectively. The technological know-how of cloth making shows Akan knowledge of geometry, similar to the findings of other African ethnomathematics researchers, and it can be applied as resourceful teaching of elementary geometry concepts. Most African traditional artefacts with ethnomathematics are dominantly geometric, reflecting a strong knowledge of Euclidean geometry, circles, and general conics concepts in mathematics.

Figure 7. Examples of Akan kente cloths (source: field survey of Akan kente cloths).
Table 1. Selected Akan circular artefacts with the concept of pi (π). Source: field survey from Akan traditional homes (2020).
Table 3. Exemplification of geometric and mensuration problems with ethno-technology application.
A Multi-Criteria Group Decision-Making Method with Possibility Degree and Power Aggregation Operators of Single Trapezoidal Neutrosophic Numbers

Single valued trapezoidal neutrosophic numbers (SVTNNs) are very useful tools for describing complex information because of their advantage in describing information completely, accurately, and comprehensively in decision-making problems. In this paper, a method based on SVTNNs is proposed for dealing with multi-criteria group decision-making (MCGDM) problems. Firstly, new operations on SVTNNs are developed to avoid loss and distortion when aggregating evaluation information. Then, the possibility degrees and comparison of SVTNNs are proposed from the probability viewpoint for ranking and comparing single valued trapezoidal neutrosophic information reasonably and accurately. Based on the new operations and possibility degrees of SVTNNs, the single valued trapezoidal neutrosophic power average (SVTNPA) and single valued trapezoidal neutrosophic power geometric (SVTNPG) operators are proposed to aggregate single valued trapezoidal neutrosophic information. Furthermore, based on the developed aggregation operators, a single valued trapezoidal neutrosophic MCGDM method is developed. Finally, the proposed method is applied to the practical problem of selecting the most appropriate green supplier, and the ranking results, compared with a previous approach, demonstrate the proposed method's effectiveness.

Introduction

Multi-criteria decision-making (MCDM) problems are important issues in practice, and many MCDM methods have been proposed to deal with them. Because of the vagueness of human thinking and the increased complexity of the objects involved, there is always much uncertain, incomplete, indeterminate, and inconsistent information in evaluating objects. Traditionally, vague information is described by fuzzy sets (FSs) [1] using a membership function, intuitionistic fuzzy sets (IFSs) [2] using membership and non-membership functions, and hesitant fuzzy sets (HFSs) [3] using one or several possible membership degrees. Many fuzzy methods have been proposed; for example, Medina [4] extends fuzzy soft sets with multi-adjoint concept lattices, Pozna and Precup [5] proposed an operator and its application to a fuzzy model, Jane et al. [6] proposed a fuzzy S-tree for medical image retrieval, and Kumar and Jarial [7] proposed a hybrid clustering method based on an improved artificial bee colony and the fuzzy c-means algorithm. However, fuzzy sets cannot deal with the indeterminate and inconsistent information that commonly exists in complex MCDM problems. As a generalization of IFSs [2], neutrosophic sets (NSs) [8-10] were proposed to deal with uncertain, incomplete, indeterminate, and inconsistent information by using truth-membership, indeterminacy-membership, and falsity-membership functions.
Owing to their advantages in handling the uncertain, imprecise, incomplete, indeterminate, and inconsistent information existing in the real world, NSs have attracted many researchers' attention. However, since NSs were proposed from a philosophical point of view, they are difficult to apply directly in real scientific and engineering areas without more specific descriptions. Therefore, in accordance with differing practical demands, three main subsets of NSs were proposed, namely single valued neutrosophic sets (SVNSs) [11], interval neutrosophic sets (INSs) [12], and multi-valued neutrosophic sets (MVNSs) [13]. Based on these specializations of NSs, many MCDM methods have been developed, which can be classified into three main groups: aggregation operators, measures, and extensions of classic decision-making methods. These methods have been successfully applied in many areas, such as medical diagnosis [14,15], medical treatment [16], neural networks [17], supplier selection [18,19], and green product development [20].

With regard to aggregation operators of SVNSs, Liu and Wang [21] proposed a single-valued neutrosophic normalized weighted Bonferroni mean operator, Liu et al. [22] proposed generalized neutrosophic operators, and Sahin [23] developed neutrosophic weighted operators. Considering real situations, INSs are more suitable and flexible than SVNSs for describing incomplete information. Sun et al. [24] introduced an interval neutrosophic number Choquet integral operator, Ye [25] proposed interval neutrosophic number ordered weighted operators, and Zhang et al. [26] proposed interval neutrosophic number weighted operators. All of these methods demonstrate their effectiveness.

With respect to extensions of classic decision-making methods, Zhang and Wu [19] developed an extended TOPSIS method for MCDM with incomplete weight information in a single valued neutrosophic environment; Biswas et al. [37] developed an entropy-based grey relational analysis method for MCDM problems in which all the criteria weight information described by SVNSs is unknown; Peng et al. [38] developed an outranking approach for MCDM problems based on the ELECTRE method; and Sahin and Yigider [39] developed an MCGDM method based on TOPSIS for supplier selection problems. Chi and Liu [40] developed an extended TOPSIS method for MCDM problems based on INSs. Peng et al. [13] first defined MVNSs and developed an approach for solving MCGDM problems based on multi-valued neutrosophic power weighted operators. Wang and Li [41] proposed the Hamming distance between multi-valued neutrosophic numbers (MVNNs) and an extended TODIM method for MCDM problems. Wu et al. [42] proposed novel MCDM methods based on several cross-entropy measures of MVNSs.
However, these subsets of NSs cannot describe assessment information with different dimensions. To overcome these shortcomings and improve the flexibility and practicality of these sets, single valued trapezoidal neutrosophic numbers (SVTNNs) [44] were proposed by extending the concept of trapezoidal intuitionistic fuzzy numbers (TrIFNs) [43], improving the ability to describe complex indeterminate and inconsistent information. SVTNNs have since attracted researchers' attention as very useful tools for describing evaluation information. Based on SVTNNs, Ye [44] developed an MCDM method using the trapezoidal neutrosophic weighted arithmetic averaging (TNWAA) operator and the trapezoidal neutrosophic weighted geometric averaging (TNWGA) operator. However, the correlation between the trapezoidal numbers and the three membership degrees is ignored in these operators, and the indeterminacy-membership degree is treated as equal to the falsity-membership degree, which leads to information distortion and loss. Moreover, they do not take into account the relationships among the assessment values being aggregated, which always exist in the process of solving MCDM problems. To overcome these shortcomings, motivated by the idea of power aggregation operators [45,46], which consider the relationships among the information being aggregated, and by the possibility degree, widely used as a tool to aggregate and rank uncertain data from the probability viewpoint, in this paper we propose possibility degrees of SVTNNs together with the single valued trapezoidal neutrosophic power average (SVTNPA) and single valued trapezoidal neutrosophic power geometric (SVTNPG) operators to deal with MCGDM problems. The prominent characteristic of these proposed operators is that they account for the relationships among the aggregated information and overcome the drawbacks of the existing SVTNN operators. We then use these operators and possibility degrees to develop a novel single valued trapezoidal neutrosophic MCGDM method.

The motivation and main contributions of the paper are as follows: (1) novel operation laws of SVTNNs are constructed to overcome the shortcomings of the operation laws of SVTNNs in previous papers; (2) based on the novel operations, the SVTNPA and SVTNPG operators are developed; (3) based on the concept of the possibility degree, the possibility degree of SVTNNs is defined and presented; and (4) based on the possibility degree of SVTNNs and the SVTNPA and SVTNPG operators, a novel method for solving MCGDM problems in a single valued trapezoidal neutrosophic environment is developed.

The rest of the paper is organized as follows. In Section 2, we introduce some basic concepts and operators related to subsets of NSs. In Section 3, we propose new operations, possibility degrees, and a comparison method for SVTNNs. The SVTNPA and SVTNPG operators are developed in Section 4. The method for solving MCGDM problems in a single valued trapezoidal neutrosophic environment is developed in Section 5. An illustrative example of selecting the most appropriate green supplier for the Shanghai General Motors Company is provided in Section 6, together with a comparison with another method to show the effectiveness of the proposed approach. Finally, conclusions are drawn in Section 7.

Preliminaries

In this section, some basic concepts, definitions of SVTNNs, and two aggregation operators are introduced, laying the groundwork for the later analysis.
Definition 1 ([14]). Let X be a space of points (objects), with a generic element of X denoted by x. An NS A in X is characterized by three membership functions, namely a truth-membership function T_A(x), an indeterminacy-membership function I_A(x), and a falsity-membership function F_A(x), where T_A(x), I_A(x), F_A(x) : X → ]⁻0, 1⁺[. There is no restriction on their sum, so that ⁻0 ≤ sup T_A(x) + sup I_A(x) + sup F_A(x) ≤ 3⁺.

The neutrosophic set needs to be specified from a technical point of view; otherwise it is difficult to apply in real scientific and engineering areas. Therefore, Wang et al. [13] proposed the concept of the SVNS as an instance of the neutrosophic set that can be operated on easily and applied conveniently to practical issues.

Definition 2 ([13]). Let X be a space of points (objects). A SVNS A in X can be expressed as A = {⟨x, T_A(x), I_A(x), F_A(x)⟩ | x ∈ X}, where T_A(x), I_A(x), F_A(x) ∈ [0, 1] for each x in X.

Definition 3 ([43,47]). Let ã = (a₁, a₂, a₃, a₄) be a trapezoidal fuzzy number with a₁ ≤ a₂ ≤ a₃ ≤ a₄. Then its membership function is μ_ã(x) = (x − a₁)/(a₂ − a₁) if a₁ ≤ x < a₂; μ_ã(x) = 1 if a₂ ≤ x ≤ a₃; μ_ã(x) = (a₄ − x)/(a₄ − a₃) if a₃ < x ≤ a₄; and μ_ã(x) = 0 otherwise.

Because of the great validity and feasibility of trapezoidal fuzzy numbers and SVNSs in decision-making problems, Ye [44] developed SVTNNs by combining the two concepts.

Definition 4 ([44]). Let U be a space of points (objects). A SVTNN ã = ⟨(a₁, a₂, a₃, a₄); T_ã, I_ã, F_ã⟩ is a neutrosophic number on the real line whose truth-membership T_ã(x), indeterminacy-membership I_ã(x), and falsity-membership F_ã(x) are trapezoidally shaped over [a₁, a₄].

PA and PG Operators

The power average (PA) operator was first proposed by Yager [45]; then, based on the PA operator, Xu and Yager [46] developed the power geometric (PG) operator. For arguments h₁, h₂, …, hₙ, PA(h₁, …, hₙ) = Σᵢ (1 + T(hᵢ)) hᵢ / Σⱼ (1 + T(hⱼ)), where T(hᵢ) = Σ_{j≠i} Sup(hᵢ, hⱼ) and Sup(hᵢ, hⱼ) is the support for hᵢ from hⱼ, satisfying the following properties: (1) Sup(hᵢ, hⱼ) ∈ [0, 1]; (2) Sup(hᵢ, hⱼ) = Sup(hⱼ, hᵢ); (3) Sup(a, b) ≥ Sup(c, d) if |a − b| < |c − d|, where a and b are two positive real numbers.

New Operations and Comparison of SVTNNs

In this section, new operations and a comparison method for SVTNNs are proposed to overcome the limitations in Reference [44] and to avoid information loss and distortion effectively.

The New Operations of SVTNNs

In order to aggregate different SVTNNs in the decision-making process, Ye [44] defined the operations of SVTNNs. For two SVTNNs ã = ⟨(a₁, a₂, a₃, a₄); T_ã, I_ã, F_ã⟩ and b̃ = ⟨(b₁, b₂, b₃, b₄); T_b̃, I_b̃, F_b̃⟩ and λ > 0, these are: (1) ã ⊕ b̃ = ⟨(a₁ + b₁, a₂ + b₂, a₃ + b₃, a₄ + b₄); T_ã + T_b̃ − T_ã T_b̃, I_ã I_b̃, F_ã F_b̃⟩; (2) ã ⊗ b̃ = ⟨(a₁b₁, a₂b₂, a₃b₃, a₄b₄); T_ã T_b̃, I_ã + I_b̃ − I_ã I_b̃, F_ã + F_b̃ − F_ã F_b̃⟩; (3) λã = ⟨(λa₁, λa₂, λa₃, λa₄); 1 − (1 − T_ã)^λ, I_ã^λ, F_ã^λ⟩.

However, there are some shortcomings in these operations: (1) the correlation between the trapezoidal fuzzy number (a₁, a₂, a₃, a₄) and the three membership degrees is not considered, so the operations can be unreasonable; (2) the three membership degrees of SVTNNs are operated separately from the trapezoidal fuzzy numbers in the operation ⊕, which can produce repeated operation and bias the result. For instance, aggregating several SVTNNs with these operations operates their three membership degrees repeatedly, which distorts the result significantly and conflicts with common sense.

To overcome the limitations existing in the operations proposed by Ye [44], motivated by the operations on triangular intuitionistic fuzzy numbers proposed by Wang et al. [48], new operations of SVTNNs are defined. Compared with the operations proposed by Ye [44], the new operations of SVTNNs have excellent advantages in reflecting the effect of all truth, indeterminacy, and falsity membership degrees of SVTNNs on the aggregation results and in taking into account the correlation between the trapezoidal fuzzy numbers and the three membership degrees, which avoids information loss and distortion effectively.

In terms of the corresponding operations of SVTNNs, the following theorem can easily be proved. Let ã₁, ã₂, and ã₃ be three SVTNNs and λ ≥ 0; then (1) ã₁ ⊕ ã₂ = ã₂ ⊕ ã₁; (2) (ã₁ ⊕ ã₂) ⊕ ã₃ = ã₁ ⊕ (ã₂ ⊕ ã₃); (3) λ(ã₁ ⊕ ã₂) = λã₁ ⊕ λã₂.
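To make the trapezoidal shape in Definitions 3 and 4 concrete, the membership function can be evaluated directly from its four parameters; this is an illustrative sketch, not code from the paper.

```python
def trapezoidal_membership(x: float, a1: float, a2: float,
                           a3: float, a4: float) -> float:
    """Membership value of x for the trapezoidal fuzzy number (a1, a2, a3, a4)."""
    if a1 <= x < a2:
        return (x - a1) / (a2 - a1)   # rising edge
    if a2 <= x <= a3:
        return 1.0                    # plateau
    if a3 < x <= a4:
        return (a4 - x) / (a4 - a3)   # falling edge
    return 0.0

# In an SVTNN, the truth component scales this shape by T_a (similarly for I, F).
print(trapezoidal_membership(0.15, 0.1, 0.2, 0.4, 0.5))  # 0.5, on the rising edge
```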
The Possibility Degree

The possibility degree, which is proposed from the probability viewpoint, is a very useful tool for ranking uncertain data reasonably and accurately.

Definition 8 ([49,50]). Let y = [y⁻, y⁺] and z = [z⁻, z⁺] be two real number intervals with uniform probability distributions. The probability of y ≥ z can be represented as p(y ≥ z) = max{1 − max[(z⁺ − y⁻)/((y⁺ − y⁻) + (z⁺ − z⁻)), 0], 0}, which has the following properties: (1) 0 ≤ p(y ≥ z) ≤ 1; (2) if y is an arbitrary interval or number, p(y ≥ y) = 0.5; (3) p(y ≥ z) + p(z ≥ y) = 1.

Based on the concept of the possibility degree, the possibility degree of two arbitrary positive SVTNNs is presented in Definition 9; the coefficient appearing in it reflects the attitude of the decision-makers. Here we prove property (2); the proofs of the other properties are similar and are therefore omitted.

The Comparison Method of SVTNNs

In this subsection, based on the possibility degree of two arbitrary positive SVTNNs given in Definition 9, a new comparison method for SVTNNs is presented. For comparing different SVTNNs in the decision-making process, Ye [44] defined a score function and comparison of SVTNNs (Definition 10). However, the score function assumes that the four parameters of the trapezoidal fuzzy number carry the same weight, which cannot reflect their different importance and biases the aggregated result.

Example 6 exhibits two SVTNNs that cannot be distinguished by this score function: the function gives ã₁ equal to ã₂, although it is obvious that ã₂ is superior to ã₁. Meanwhile, the function operates on the indeterminacy-membership degree in the same way as the falsity-membership degree, which does not take the preference of the decision-makers into consideration. These shortcomings may make the comparison results of SVTNNs unacceptable; to overcome the limitations of Definition 10, a new comparison method is proposed.

Definition 11. Let ã and b̃ be two positive SVTNNs; the comparison method ranks them by the possibility degree p(ã ≥ b̃) from Definition 9.

Example 7. Re-examining the data of Examples 4 and 5 with the new comparison method gives results consistent with common sense. Because the new method handles the indeterminacy-membership degree with the preference of the decision-makers taken into account, the results are more grounded in reality than those obtained with the score degree proposed by Ye [44].

Single Valued Trapezoidal Neutrosophic Power Aggregation Operators

In this section, the SVTNPA and SVTNPG operators based on the new operations of SVTNNs are developed.

Definition 12. Let ãᵢ = ⟨(aᵢ₁, aᵢ₂, aᵢ₃, aᵢ₄); Tᵢ, Iᵢ, Fᵢ⟩ (i = 1, 2, …, n) be a collection of positive SVTNNs. The single valued trapezoidal neutrosophic power average (SVTNPA) operator is defined as SVTNPA(ã₁, …, ãₙ) = ⊕ᵢ₌₁ⁿ [(1 + T(ãᵢ)) ãᵢ / Σⱼ₌₁ⁿ (1 + T(ãⱼ))], where T(ãᵢ) = Σ_{j≠i} Sup(ãᵢ, ãⱼ) and Sup(ãᵢ, ãⱼ) is the support for ãᵢ from ãⱼ, satisfying the properties given in Section 2. The single valued trapezoidal neutrosophic power geometric (SVTNPG) operator (Definition 13) is defined analogously using the geometric operations. Examples 9 and 10 illustrate the calculations, and Theorem 6 can be proved in the same way as Theorem 4.
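As a brief illustration before the decision procedure, the interval possibility degree of Definition 8 can be computed directly; the expression below is the standard uniform-distribution form, and the interval values are illustrative.

```python
def possibility_degree(y, z):
    """p(y >= z) for intervals y = (y_lo, y_hi) and z = (z_lo, z_hi) with
    uniform distributions: max{1 - max[(z_hi - y_lo)/(len_y + len_z), 0], 0}."""
    (y_lo, y_hi), (z_lo, z_hi) = y, z
    total = (y_hi - y_lo) + (z_hi - z_lo)
    if total == 0:                      # both intervals degenerate to points
        return 0.5 if y_lo == z_lo else float(y_lo > z_lo)
    return max(1 - max((z_hi - y_lo) / total, 0.0), 0.0)

p_yz = possibility_degree((0.3, 0.7), (0.2, 0.6))
p_zy = possibility_degree((0.2, 0.6), (0.3, 0.7))
print(p_yz, p_zy, p_yz + p_zy)   # 0.625 0.375 1.0 (complementarity holds)
```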
A MCGDM Method Based on the Possibility Degree and Power Aggregation Operators under a Single Valued Trapezoidal Neutrosophic Environment

In this section, the possibility degrees of SVTNNs and the single valued trapezoidal neutrosophic power weighted aggregation operators are applied to MCGDM problems with single valued trapezoidal neutrosophic information. For such a problem, assume that B = {B₁, B₂, …, B_m} is the set of alternatives, C = {C₁, C₂, …, Cₙ} is the set of criteria, and D = {D₁, D₂, …, D_t} is the set of decision-makers who evaluate the alternatives according to the criteria.

The evaluation information aᵏᵢⱼ (i = 1, 2, …, m; j = 1, 2, …, n; k = 1, 2, …, t), described by positive SVTNNs, is given by the decision-makers Dₖ when they assess alternative Bᵢ with respect to criterion Cⱼ, yielding the decision matrices Aᵏ = (aᵏᵢⱼ). The method for determining the ranking of the alternatives is introduced here, and the decision-making procedure is as follows.

Step 1. Normalize the decision matrices. The criteria can be classified into benefit and cost types. For a benefit-type criterion, the evaluation information needs no change; for a cost-type criterion, the negation operator is used: rᵏᵢⱼ = aᵏᵢⱼ if Cⱼ ∈ T_B, and rᵏᵢⱼ = neg(aᵏᵢⱼ) if Cⱼ ∈ T_C, where T_B is the set of benefit-type criteria and T_C is the set of cost-type criteria. The normalized decision matrices are denoted Rᵏ = (rᵏᵢⱼ).

Step 2. Aggregate the values of the alternatives on each criterion to get the collective SVTNNs. Based on Definition 12 or 13, the collective SVTNNs are obtained with the SVTNPA or SVTNPG operator by aggregating the decision-makers' values on each alternative, yielding the collective preference matrix.

Step 3. Aggregate the values of each alternative across the criteria to get the overall SVTNNs, again using the SVTNPA or SVTNPG operator, yielding the overall preference matrix.

Step 4. Calculate the possibility degrees that each alternative's assessment value is superior to those of the other alternatives, based on Definition 9; this gives the possibility degree matrix U (or U′).

Step 5. Calculate the collective possibility degree index of each alternative. Aggregate U or U′ to obtain the overall possibility degree index p(Bᵢ) of each alternative Bᵢ, forming the overall possibility degree index matrix.

Step 6. Rank the alternatives and select the best one. According to the results obtained in Step 5, rank the alternatives by their overall values in descending order; the first-ranked alternative is the best.

Illustrative Example

In this section, a green supplier selection problem is used to illustrate the validity and effectiveness of the developed method.

Background

The following case background is adapted from [51]. In recent years, more and more people have paid attention to the serious environmental problems caused by rapid economic development all over the world. Green supply chain management has become imperative in this situation because of its advantages for the sustainable development of the economy and the protection of the environment; meanwhile, it can bring tremendous economic benefit and competitive strength to enterprises. Motivated by these advantages, the Shanghai General Motors (SGM) Company wants to select the most appropriate green supplier as its cooperative alliance. After pre-evaluation, four suppliers became the final alternatives for further evaluation.

The Procedure of the Single Valued Trapezoidal Neutrosophic MCGDM Method

The proposed MCGDM method is used to determine the ranking of the green suppliers.

Step 1. Normalize the decision matrices. The four criteria Cⱼ (j = 1, 2, 3, 4) are regarded as benefit-type criteria, so the decision matrices are unchanged.
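Steps 2 and 3 rest on power-average weighting. A generic numeric sketch is given below: the distance-based support function is an assumption for illustration, and plain numbers stand in for SVTNNs, which in the full method would be aggregated with the operations of Section 3.

```python
def power_average(values: list[float]) -> float:
    """Generic power average: arguments that are better supported by the
    others (i.e., closer to them) receive larger weights."""
    n = len(values)

    def sup(a: float, b: float) -> float:
        return 1.0 - abs(a - b)          # assumed support function in [0, 1]

    t = [sum(sup(values[i], values[j]) for j in range(n) if j != i)
         for i in range(n)]
    weights = [(1 + t[i]) / sum(1 + t[j] for j in range(n)) for i in range(n)]
    return sum(w * v for w, v in zip(weights, values))

print(power_average([0.6, 0.65, 0.62, 0.2]))  # the outlier 0.2 is down-weighted
```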
Illustrative Example

In this section, a green supplier selection problem is used to illustrate the validity and effectiveness of the developed method.

Background

The following case background is adapted from [51]. In recent years, people have paid increasing attention to the serious environmental problems caused by rapid economic development all over the world. Green supply chain management has become imperative in this situation because of its advantages for the sustainable development of the economy and the protection of the environment; meanwhile, it can bring tremendous economic benefits and competitive strength to enterprises. Motivated by these advantages, Shanghai General Motors (SGM) Company wants to select the most appropriate green supplier as its cooperative alliance. After pre-evaluation, four suppliers became the final alternatives for further evaluation.

The Procedures of the Single Valued Trapezoidal Neutrosophic MCGDM Method

The proposed MCGDM method is used to determine the ranking of the green suppliers.

Step 1. Normalize the decision matrices. The four criteria C_j (j = 1, 2, 3, 4) are all regarded as benefit-type criteria, so the decision matrices need no change.

Step 2. Aggregate the values of the four alternatives on each criterion to obtain the collective SVTNNs. Using the SVTNPA or SVTNPG operator, the collective SVTNNs are obtained and shown in the matrices P and P'.

Step 3. Aggregate the values of each green supplier across the criteria to obtain the overall SVTNNs, again using the SVTNPA or SVTNPG operator. The overall preference matrices are shown as K and K'.

Step 4. Calculate the possibility degrees of the assessment value of each alternative being superior to the values of the other alternatives, yielding the possibility degree matrices U and U'.

Step 5. Calculate the collective possibility degree index of each alternative to derive the overall values of the alternatives. Aggregating U or U' gives the overall possibility degree indices 0.512, 0.526, 0.497, and 0.465 for B_1, B_2, B_3, and B_4, respectively.

Step 6. Rank the green suppliers and select the best one. The ranking of the four green suppliers is B_2 ≻ B_1 ≻ B_3 ≻ B_4; therefore, SGM Company will choose Sino Trunk as its cooperative alliance. The rankings of the green suppliers obtained with the SVTNPA operators for different values of λ are shown in Figure 1. In general, larger values of λ are associated with relatively pessimistic decision-makers, and the alternatives accordingly receive relatively lower overall possibility degree indices; in contrast, lower values of λ are associated with relatively optimistic decision-makers. When the decision-makers do not indicate any preference, the most commonly used value (λ = 0.5) is adopted.

Figure 1. Rankings of various green suppliers for different values of λ.

Comparison Analysis and Discussion

In order to validate the accuracy of the proposed single valued trapezoidal neutrosophic MCGDM method, a comparative study is conducted on the illustrative example, with the method proposed by Ye [44] used for comparison. When the example is resolved using the approach of Reference [44], which employs the trapezoidal neutrosophic weighted arithmetic averaging (TNWAA) operator or the trapezoidal neutrosophic weighted geometric averaging (TNWGA) operator with known weights to comprehensively analyze the green suppliers, the weights of the decision-makers and criteria can be generated using the PA operator, where S(a_ij) [44] is the score function value of the SVTNN a_ij. The overall values of the four alternatives on each criterion obtained with the TNWAA operator are shown as the matrix M, and those obtained with the TNWGA operator as the matrix M'. The collective values of the four green suppliers are likewise obtained as the matrix U with the TNWAA operator or U' with the TNWGA operator. The best green supplier obtained with the approach in Reference [44] is B_1. The ranking results of the different methods are shown in Table 1.

Table 1. The ranking results of different methods.

Method | Operators | Ranking of Alternatives
The method in Reference [44] | TNWAA operator | B_1 ≻ …
The proposed method | SVTNPA operator and the possibility degrees of SVTNNs | B_2 ≻ B_1 ≻ B_3 ≻ B_4

From Table 1, it can be seen that the ranking of the four green suppliers obtained by the proposed single valued trapezoidal neutrosophic MCGDM method is quite different from the ranking obtained by the method introduced in Reference [44]. The main reasons are summarized as follows.

(a) The new operations of SVTNNs defined in this paper, which follow a conservative and reliable principle, take account of the correlation between the trapezoidal fuzzy numbers and the three membership degrees of SVTNNs. The operations in Reference [44], by contrast, split the trapezoidal fuzzy numbers and the three membership degrees into two parts and calculate them separately, which makes the aggregated results deviate from reality. (b) The new comparison of SVTNNs proposed in this paper has a crucial advantage over the score-degree-based comparison in Reference [44]: it takes the preference of decision-makers into consideration. (c) The relationships among the pieces of information being aggregated, which exist in the aggregation process of practical MCGDM problems, are ignored in Reference [44], whereas the SVTNPA and SVTNPG operators effectively take these relationships into consideration; in this paper they are further combined with the possibility degrees of SVTNNs, which rank the uncertain information reasonably and accurately from the probability viewpoint. Hence, the ranking result of this paper is more objective and reasonable than that obtained by using the operators in Reference [44].

Conclusions

In order to improve the reasonability and effectiveness of methods for single valued trapezoidal neutrosophic MCGDM problems, and to overcome the limitations of the existing approaches, this paper proposes a single valued trapezoidal neutrosophic MCGDM method built from the possibility degrees of SVTNNs and the single valued trapezoidal neutrosophic power aggregation operators. First, new operations of SVTNNs are proposed to avoid information loss and distortion, and the possibility degrees of SVTNNs are defined from the probability viewpoint. On the basis of the proposed operations and possibility degrees, the SVTNPA and SVTNPG operators are developed. Furthermore, a single valued trapezoidal neutrosophic MCGDM method based on the SVTNPA and SVTNPG operators and the possibility degrees of SVTNNs is constructed. The prominent advantages of the proposed method are its ability to deal effectively with preference information expressed by SVTNNs and its consideration of the relationships among the information being aggregated in practical MCGDM problems, combined with the possibility degrees of SVTNNs, which avoid information loss and distortion; thus, the final results are more scientific and reasonable. Finally, the method is applied to a practical problem of selecting the most appropriate green supplier for SGM Company, and a comparison with another method is carried out, demonstrating the feasibility and effectiveness of the proposed method in dealing with MCGDM problems. In future research, the developed method will be extended to other domains, such as personnel selection and medical diagnosis.
5,628
2018-11-02T00:00:00.000
[ "Computer Science" ]
A Learning Approach to Optical Tomography We describe a method for imaging 3D objects in a tomographic configuration implemented by training an artificial neural network to reproduce the complex amplitude of the experimentally measured scattered light. The network is designed such that the voxel values of the refractive index of the 3D object are the variables that are adapted during the training process. We demonstrate the method experimentally by forming images of the 3D refractive index distribution of cells. The learning approach to imaging we describe in this paper is related to adaptive techniques in phased antenna arrays 1 and inverse scattering 2,3. In the optical domain an iterative approach was demonstrated by the Sentenac group 4,5, who used the coupled dipole approximation 6 for modelling light propagation in inhomogeneous media (a very accurate method, but computationally intensive) to simulate light scattering from small objects (1µm × 0.5µm) in a point-scanning microscope configuration. Very recently an iterative optimization method was demonstrated 7 for imaging 3D objects using incoherent illumination. Our method relies on digital holography 8,9 to record the complex amplitude of the field. We use the Beam Propagation Method (BPM) 10,11 to model the scattering process and the error back propagation method 12 to train the system. At the end of the training process the network discovers a 3D index distribution that is consistent with the experimental observations. We experimentally demonstrate the technique by imaging polystyrene beads and HeLa and hTERT-RPE1 cells.

Experimental Setup

A schematic diagram of the experimental setup is shown in Figure 1. It is a holographic tomography system 13, in which the sample is illuminated at multiple angles and the scattered light is holographically recorded. Several variations of the holographic tomography system have been demonstrated before [14][15][16][17]. The optical arrangement we used is most similar to the one described by Choi et al. 14. The samples to be measured were prepared by placing polystyrene beads and cells between two glass cover slides. The samples were illuminated with a continuous collimated wave at 561 nm at 80 different angles. The amplitude and phase of the light transmitted through the sample were imaged onto a 2D detector, where they were holographically recorded by introducing a reference beam. These recordings constitute the training set with which we train the computational model that simulates the experimental setup. We construct the network using the BPM. The inhomogeneous medium (beads or cells) is divided into thin slices along the propagation direction (z). The propagation through each slice is calculated as a phase modulation due to the local transverse index variation, followed by propagation through a thin slice of a homogeneous medium having the average value of the index of refraction of the sample.
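A minimal numerical sketch of one such slice-propagation step (phase screen followed by homogeneous propagation) is given below. The square grid, the FFT-based paraxial transfer function, and the background index value are illustrative assumptions, not the authors' code.

import numpy as np

def bpm_step(field, dn_slice, wavelength, dz, dx, n0):
    """One BPM slice: (1) phase modulation by the local index perturbation
    dn_slice (a 2D array), then (2) Fresnel propagation over dz through a
    homogeneous medium of refractive index n0."""
    field = field * np.exp(1j * 2 * np.pi * dn_slice * dz / wavelength)
    n = field.shape[0]                    # assume a square n x n grid
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    lam = wavelength / n0                 # wavelength inside the medium
    H = np.exp(-1j * np.pi * lam * dz * (FX**2 + FY**2))   # paraxial kernel
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: step a plane wave through a 420-slice index volume dn(x, y, z),
# using the paper's 72 nm transverse sampling and 30 um total depth;
# n0 = 1.5 here stands in for the (unstated) background index.
# field = np.ones((512, 512), dtype=complex)
# for z in range(420):
#     field = bpm_step(field, dn[:, :, z], 561e-9, 30e-6 / 420, 72e-9, n0=1.5)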
Methodology

A schematic description of the BPM simulation is shown in Figure 2. The straight lines connecting any two circles represent multiplication of the output of the unit located in the l-th layer of the network at x = n_1 δ, y = m_1 δ by the discretized Fresnel diffraction kernel exp{jπ[(n_2 − n_1)² + (m_2 − m_1)²]δ²/(λδ_z)}, where n_l and m_l are integers and λ is the wavelength of light. δ is the sampling interval in the transverse coordinates (x, y), whereas δ_z is the sampling interval along the propagation direction z. The circles in the diagram of Figure 2 perform a summation of the complex amplitudes of the signals converging to each circle, and also multiplication of this sum by exp[j2π∆n(x, y, z)δ_z/λ]. ∆n(x, y, z) is the unknown 3D index perturbation of the object. In the experiments the network has 420 layers, with ∆n(x, y, z) being the adaptable variable. In contrast with a conventional neural network, the output of the layered structure in Figure 2 is a linear function of the input complex field amplitude. However, the output depends nonlinearly on ∆n(x, y, z). The BPM can be trained using steepest descent, exactly as the back propagation algorithm in neural networks [18][19][20]. Specifically, the learning algorithm carries out the following minimization:

min_{∆n ≥ 0} Σ_k ‖E_k(∆n) − M_k‖² + τS(∆n).

In the above expression E_k(∆n) is the current prediction of the BPM network for the output when the system is illuminated with the k-th beam, and M_k is the actual measurement obtained by the optical system. ∆n indicates the estimate for the index perturbation due to the object. The term S(∆n) is a sparsity constraint 21 to enhance the contrast, while τ is a parameter that can be tuned to maximize image quality by systematic search. The positivity constraint takes advantage of the assumption that the index perturbation is real and positive. The optimization is carried out iteratively by taking the derivative of the error with respect to each of the adaptable parameters, following steepest descent: ∆n ← ∆n − α(∂ε/∂∆n), where ε is the error and α is a constant, so that the change in ∆n is proportional to the derivative of the error. This is achieved efficiently via a recursive computation of the gradient, which is the back propagation part of our learning algorithm.
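The iterative update just described can be sketched as follows. The gradient computation is abstracted behind a user-supplied function (in the paper it is obtained by error back propagation through the BPM network), and the soft-threshold proximal step used here for the sparsity term is an illustrative choice, since the form of S(∆n) is not specified above.

import numpy as np

def reconstruct(dn0, grad_error, alpha=0.1, tau=1e-4, iters=100):
    """Steepest-descent estimate of the index perturbation.
    grad_error(dn) must return the gradient of sum_k ||E_k(dn) - M_k||^2
    with respect to dn, e.g. via back propagation through the BPM."""
    dn = dn0.copy()
    for _ in range(iters):
        dn = dn - alpha * grad_error(dn)                  # data-fidelity step
        dn = np.maximum(np.abs(dn) - alpha * tau, 0.0) * np.sign(dn)  # L1 prox
        dn = np.maximum(dn, 0.0)                          # positivity constraint
    return dn

# Initialization options used in the paper: a filtered back-projection
# (Radon) volume, or a constant such as dn0 = 0.007 * np.ones(shape).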
Results

We first tested the system with polystyrene beads encapsulated between two glass slides in immersion oil. The sample was inserted in the optical system of Figure 1, and 80 holograms were recorded by illuminating the sample at 80 distinct angles uniformly distributed in the range −45 degrees to +45 degrees. The collected data are the training set for the 420-layer BPM network, which simulates a physical propagation distance of 30µm and a transverse window of 37µm × 37µm (δ_x = δ_y = 72nm). The network was initialized with the standard filtered back-projection reconstruction algorithm (Radon transform) 22, and the resulting 3D images before and after 100 iterations are shown in Figure 3. The final image produced by the learning algorithm is an accurate reproduction of the bead shape. A sample of a HeLa cell was also prepared, and the same procedure was followed to obtain a 3D image. The results are shown in Figure 4, where the error function is plotted as a function of iteration number. In this instance, the system was initialized with a constant but nonzero value (∆n = 0.007). Also shown in Figure 4 are the results obtained when the system was initialized with the Radon reconstruction from the same data. After 100 iterations both runs yield essentially identical results.

Notice that the error in the final image (after 100 iterations) is significantly lower than the error of the Radon reconstruction. This is also evident by visual inspection of the six images in Figure 4, where the artifacts due to the missing cone 23 and diffraction 14 are removed by the learning process. In general, optical 3D imaging techniques rely on the assumption that the object being imaged does not significantly distort the illuminating beam. This is assumed, for example, in Radon or diffraction/holographic tomography. In other words, these 3D reconstruction methods rely on the assumption that the measured scattered light consists of photons that have only been scattered once before they reach the detector. The BPM, on the other hand, allows for multiple forward scattering events. The only simplification is that reflections are not taken into account; these could eventually be incorporated in the network equation without fundamentally altering the approach described in this paper. Since biological tissue is generally forward scattering, the BPM can be a good candidate for modelling propagation through thick biological samples, and this may be the most significant advantage of the learning approach. To demonstrate this point, we prepared two glass slides with a random distribution of hTERT-RPE1 cells (immortalized epithelial cells from retina) on each slide. When we attach the two slides together, we can find locations where two cells are aligned in z, one on top of the other. Figure 5 (a)-(e) shows the image of such a stack of two cells produced with a direct inversion using the Radon transform. Figure 5 (f)-(j) shows the same object imaged with the proposed learning algorithm. The learning method was able to distinguish the two cells, whereas the Radon reconstruction merged the two into a single pattern due to the blurring in z, which is a consequence of the missing cone. In conclusion, we have demonstrated a neural-network based algorithm to solve the optical tomography problem.
2,038.4
2015-02-05T00:00:00.000
[ "Computer Science", "Engineering", "Physics" ]
Designing Patient-Driven, Tissue-Engineered Models of Primary and Metastatic Breast Cancer The rising survival rate for early-stage breast cancer in the United States has created an expanding population of women in remission at risk for distant recurrence, with metastatic spread to the brain demonstrating an especially poor prognosis. The current standard of care for breast cancer brain metastases is not well defined or differentiated from the treatment of brain metastases from other primary sites. Here, we present tissue-engineered models of the primary and brain metastatic breast cancer microenvironments informed by analysis of patient tumor resections. We find that metastatic resections demonstrate distinct cellular and matrix components compared with primary resections or non-cancerous controls. Using our model systems, we find that the observed deposition of collagen I after metastasis to the brain may enhance breast cancer invasion. Future optimization of these models will present a novel platform to examine tumor-stroma interactions and screen therapeutics for the management of metastatic breast cancer. Introduction One significant achievement of modern medicine is the extent of public and institutional support afforded to women with breast cancer in the United States. The five-year survival rate for women with breast cancer has improved over the past three decades, rising to 91% on average and 99% for patients with early-stage disease due to improved screening and advances in treatment [1]. However, this continuous increase in survival rate has led to a surge in the number of breast cancer survivors living in the United States, with the figure currently estimated at 3.9 million women. These patients, considered to be in remission, demonstrate high risk for distant recurrence or metastasis. Indeed, of these 3.9 million women, up to 24% are expected to develop metastases to the brain during their lifetimes [2]. Distant recurrence to the brain is associated with the shortest survival time and poorest prognosis compared with other sites of spread, with the time from diagnosis until death taking just 17 months on average. Given this rapidly expanding high-risk population, the current standard of care for the management of metastatic breast cancer needs to be reassessed in order to meet the needs of breast cancer survivors living in remission. The current standard of care for breast cancer brain metastases is not well defined and includes whole-brain radiation therapy and surgical resection if possible [3]. However, whole-brain radiation therapy has not been shown to have a significant impact on patient survival and can result in severe side effects such as memory loss, exhaustion, and dementia. In addition, recurrence after resection is common given that surgery has a conservative nature in order to minimize the removal of healthy brain tissue. Furthermore, all brain metastases receive the same treatment regardless of the original tumor type, indicating a need to better understand how unique microenvironmental interactions within the brain affect cancer progression [3]. Despite these shortcomings, each breast cancer patient diagnosed with brain metastases generates almost $20,000 per month on average in healthcare-related expenses [4]. Given the limited therapeutic and economic efficacy of these options, the creation of improved treatments will be essential for the future management of breast cancer brain metastases. 
Increasing evidence implicates the tumor microenvironment, or the tissue adjacent to the tumor bulk known as the stroma, in regulating cancer cell behavior [5,6]. This microenvironment is an inherently complex system consisting of stromal cells such as fibroblasts and immune cells, blood and lymphatic vessels, and biophysical forces such as interstitial fluid flow. These components have been shown to aid malignant progression and contribute to therapeutic resistance across tumor types [6,7]. In particular, resident fibroblasts within the breast are common stromal cells associated with enhanced tumor invasion [8][9][10]. Astrocytes demonstrate similar effects for gliomas within the brain [11]. Although both these stromal populations are known to mediate matrix remodeling under physiological conditions in the breast and brain, respectively, the molecular mechanisms by which they promote invasion in the context of cancer are less clear. Understanding the differences between how these components modulate breast cancer behavior will serve as an initial step to inform the creation and modification of therapies for the management of breast cancer after metastasis to the brain. Three-dimensional multicellular and matrix models are an established approach for investigating the relationships between tumor cells and their microenvironments [12][13][14]. Here, we present models of primary and metastatic breast cancer based on immunofluorescence of patient tumor resections. These tissue-engineered models serve as vehicles to examine tumor-stroma interactions in vitro and screen therapeutics for the management of breast cancer brain metastases.

Sample Selection

Patient samples were accessed through the University of Virginia Biorepository and Tissue Research Facility. These samples were selected from archived patients with a definitive diagnosis of breast cancer who received no treatment prior to tumor resection. Samples were de-identified before use. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional review board of the University of Virginia and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Immunofluorescence

Formalin-fixed, paraffin-embedded sections were deparaffinized with xylene and rehydrated in graded ethanol solutions. Antigen retrieval was performed by boiling the samples for 30 min in a citrate-based antigen unmasking solution (Vector Labs, Burlingame, CA, USA). The samples were washed two times with permeabilization solution (0.1% Triton-X in 1× TBS) and then incubated at room temperature with blocking solution based on secondary antibody hosts (4% serum in permeabilization solution). The primary antibodies were diluted in blocking solution and applied to the samples overnight at 4 °C. The samples were then washed three times with permeabilization solution and incubated at room temperature with secondary antibodies diluted in 2% bovine serum albumin in permeabilization solution. The samples were incubated with DAPI (Sigma-Aldrich, St. Louis, MO, USA) diluted to 1.43 µM in 1× TBS and washed three times with 1× TBS. Residual liquid was removed from the slides before the samples were mounted with Fluoromount-G (SouthernBiotech, Birmingham, AL, USA) and sealed with nail polish.
The following reagents were used for immunofluorescence: anti-alpha-smooth muscle actin (Thermo Scientific, Waltham, MA, USA, 41-9760-80, 1 µg/mL), anti-pan-keratin (Thermo Scientific, Waltham, MA, USA, MS-343-P, 4 µg/mL), anti-GFAP (Abcam, Cambridge, UK, 7260, 1:1000 dilution), anti-collagen I (Rockland, Pottstown, PA, USA, 34755, 1:200 dilution), anti-tenascin C (R&D, Minneapolis, MN, USA, AF3358, 1 µg/mL), and hyaluronic acid binding protein (Millipore, Burlington, MA, USA, 385911, 2.5 µg/mL).

Image Analysis

Stained slides were analyzed with an EVOS fluorescent cell imaging system (Thermo Scientific, Waltham, MA, USA) and processed using ImageJ (National Institutes of Health, Bethesda, MD, USA). Random non-overlapping 856 × 476 µm (407,456 µm²) regions within the tumor or stroma were selected for imaging. The number of regions analyzed varied depending on the size of the tissue sample. Based on markers previously established in the literature, breast cancer cells were identified by anti-pan-cytokeratin staining, cancer-associated fibroblasts were identified by anti-alpha-smooth muscle actin staining, and activated astrocytes were identified by anti-GFAP staining [15][16][17]. In ImageJ, the "Merge Channels" command was used to false color and overlay individual images. Confocal imaging was performed on a Zeiss LSM 700 microscope running ZEN 2009 software (Zeiss, Oberkochen, Germany).

Cell Culture and Labeling

The human breast cancer cell line MDA-MB-231 and human dermal fibroblasts were obtained from ATCC (Manassas, VA, USA), and human cortical astrocytes were obtained from Sciencell (Carlsbad, CA, USA). MDA-MB-231 cells and fibroblasts were cultured and maintained in Dulbecco's Modified Eagle's Medium (DMEM; Life Technologies, Carlsbad, CA, USA) supplemented with 10% FBS. Astrocytes were cultured in supplemented astrocyte medium (Sciencell, Carlsbad, CA, USA). Cells were labeled with fluorescent CellTracker Dyes (Thermo Scientific, Waltham, MA, USA) at recommended dilutions in serum-free media for one hour before use.

Preparation of Tissue-Engineered Models

Experiments were carried out with 12 mm-diameter, 8 µm-pore tissue-culture inserts (Millipore). Hydrogels were created based on previous publications [12,18,19]. For collagen-based gels, 1.8 mg/mL rat tail collagen I (Corning, Corning, NY, USA), 0.2 mg/mL basement membrane extract (Trevigen, Gaithersburg, MD, USA), serum-free DMEM, 10× PBS, 1 M NaOH, and distilled water were combined in a microcentrifuge tube before the addition of labeled cells. For hyaluronan-based gels, 1.2 mg/mL thiolated hyaluronan (Glycosil; ESI BIO, Alameda, CA, USA), 1.2 mg/mL rat tail collagen I, serum-free DMEM, 10× PBS, 1 M NaOH, distilled water, and Extralink PEGDA (ESI BIO, Alameda, CA, USA) were combined in a microcentrifuge tube before the addition of labeled cells. Gel volume was kept constant at 100 µL, and cell densities determined from immunofluorescence were normalized to 120,000 cells per gel and below. 700 µL of serum-free DMEM was placed outside of the insert and 100 µL was placed inside the insert for standard static conditions. These volumes were switched to create a pressure gradient for flow conditions. This pressure head results in linear flow velocities between 0.7 and 1.2 µm/s. Gels were analyzed after 18-24 h of incubation. Experiments without invasion as an outcome measure were completed in 96-well plates with a reduced gel volume of 50 µL and densities of 60,000 cells per gel and below to conserve reagents.
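As a rough illustration of how per-field counts of this kind translate into the volumetric densities and per-gel seeding numbers used below, here is a small Python sketch. The field dimensions come from the methods above; the 5 µm section thickness is an assumed, typical FFPE value, and all names are illustrative rather than the authors' analysis code.

# Sketch: convert per-field cell counts into a volumetric density and a
# per-gel seeding number. The 5 um section thickness is an assumption.
FIELD_AREA_UM2 = 856 * 476                  # imaged region, um^2
SECTION_THICKNESS_UM = 5.0                  # assumed FFPE section thickness

def density_per_mm3(mean_count_per_field):
    """Cells per mm^3 from the mean count in one imaged section region."""
    field_volume_mm3 = FIELD_AREA_UM2 * SECTION_THICKNESS_UM * 1e-9  # um^3 -> mm^3
    return mean_count_per_field / field_volume_mm3

def cells_per_gel(density_mm3, densities_all, max_cells=120_000):
    """Scale a measured density so the densest condition seeds max_cells."""
    return round(max_cells * density_mm3 / max(densities_all))

# Example: a mean of ~1000 cells per field corresponds to ~4.9e5 cells/mm^3,
# in line with the tumor densities reported below.
print(density_per_mm3(1000))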
Invasion Assays and Quantification

After 18 h, gels were removed and the number of MDA-MB-231 cells remaining on the underside of the insert membrane was counted as a relative measure of tumor cell invasion. Five random fields of view were imaged and counted per insert to extrapolate the total percentage of seeded cells that had migrated through the membrane. Percent invasion was calculated as (number of cells that traversed the membrane/number of cells seeded) × 100.

Immunofluorescence of Primary and Metastatic Breast Cancer Resections

Immunofluorescence of both primary and metastatic triple-negative breast cancer patient resections was performed to establish appropriate cell seeding ratios and matrix compositions for the respective tissue-engineered models. Primary resections were stained for breast cancer cells (anti-pan keratin) and activated fibroblasts (anti-αSMA), and then regions of the tumor (Figure 1a,b) and stroma (Figure 1e,f) were quantified to determine the cell densities across six patient samples. Metastatic resections were stained for breast cancer cells (anti-pan keratin) and activated astrocytes (anti-GFAP), and then regions of the tumor (Figure 2a,b) and stroma (Figure 2e,f) were similarly quantified to determine the cell densities across six separate patient samples. Five distinct regions of the tumor and five distinct regions of the stroma were analyzed per patient. In general, primary and metastatic resections demonstrated similar total cell densities within the tumor, around 500,000 cells per mm³ (Figures 1c and 2c). In contrast, the stroma surrounding the primary tumors was more than twice as dense in cells as the corresponding stroma in the brain (Figures 1g and 2g). Moreover, astrocytes were not as prevalent within the metastatic tumor bulk and stroma, exhibiting densities about one order of magnitude lower in both locations than fibroblasts exhibited in the breast. However, for both primary (Figure 1d,h) and metastatic (Figure 2d,h) resections, the cellular composition (as a percentage of total cells) demonstrated a fair degree of heterogeneity between patients. Metastatic resections were also stained for the matrix components collagen I (anti-collagen I), tenascin C (anti-tenascin C), and hyaluronan (hyaluronic acid binding protein). While hyaluronan remained consistently diffuse between metastatic and non-cancerous control resections (Figure 3c,f), the metastatic resections demonstrated unexpected networks of fibrillar collagen I and tenascin C (Figure 3a,b,d,e). In contrast, the non-cancerous control resections only demonstrated positive staining for collagen I around blood vessels, as expected for healthy brain tissue [20]. Overall, immunofluorescence revealed that primary fibroblast densities were significantly greater than metastatic astrocyte densities (Figures 1g and 2g) and that the metastatic tumor microenvironment contains more fibrillar collagen I and tenascin C than non-cancerous controls (Figure 3).
Figure 3. Breast cancer brain metastases exhibit remodeled extracellular matrix. Non-cancerous control brain resections (a-c) and breast cancer brain metastases resections (d-f) were stained for collagen I (anti-collagen I), tenascin C (anti-tenascin C), and hyaluronan (hyaluronic acid binding protein). Representative images were selected across six patients (n = 6). Scale bar = 200 µm.

Design and Composition of Representative Tissue-Engineered Models

Previously validated hydrogel systems were selected as the base for our models of the primary and metastatic breast cancer microenvironments. A collagen-based gel was selected for the primary model given that breast tissue is rich in fibrillar collagen I, while a hyaluronan-based gel was selected for the metastatic model as hyaluronan is an essential component of brain tissue, which contains far fewer fibrillar proteins [12,13]. However, to account for the extensive, unexpected presence of fibrillar collagen I observed in metastatic patient resections (Figure 3d), we also established an additional "remodeled" metastatic tumor condition where the metastatic cell components were seeded in a collagen-based gel.
We normalized the cell densities determined by immunofluorescence for incorporation into the models such that the highest density was 120,000 cells per gel (primary tumor) and all other densities were proportionally lower. Graphical depictions of the primary tumor and stroma models, along with corresponding cross-sectional images of each, are illustrated in Figure 4a-d. The cell components and gel base for all models are enumerated in Figure 4e. Note that the incorporated cell populations were fluorescently labeled before use. A breakdown of the collagen-based and hyaluronan-based gel compositions can be found in the methods section.
The Tumor Microenvironment Affects Breast Cancer Invasion

In order to examine the effect of the collagen I deposited in the metastatic brain resections on breast cancer cell invasion, we seeded MDA-MB-231 breast cancer cells into collagen-based or hyaluronan-based gels polymerized in tissue-culture inserts. We selected the highly invasive, triple-negative MDA-MB-231 cell line for these studies as triple-negative breast cancer patients with brain metastases demonstrate the poorest prognosis compared to other molecular subtypes [21,22]. In addition, gels were cultured under either static or flow conditions to mimic interstitial fluid flow, which is known to enhance breast cancer cell invasion [13]. For flow conditions, a media head was added on top of the gels to generate pressure-driven fluid flow on the order of 1 µm/s, similar to rates observed in vivo [12,13]. After 18 h, the number of MDA-MB-231 breast cancer cells in the tissue-culture insert membranes was quantified as a relative measure of invasion. Significantly more cells invaded through the collagen-based gels than the hyaluronan-based gels under flow conditions (Figure 5). Moreover, a similar increase was observed for the static conditions, indicating that a change in the matrix composition of the culture environment alone affected breast cancer cell motility.

Figure 5. Tests were performed to determine statistical significance (* p < 0.05); error bars represent standard deviation (n = 3).
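A minimal sketch of the percent-invasion readout described in the methods (five fields of view extrapolated to the full membrane, then normalized to the number of seeded cells) is given below. The 12 mm membrane diameter is from the methods above, while the field area and counts are illustrative placeholders, not values reported by the authors.

import math

def percent_invasion(field_counts, field_area_mm2, seeded_cells,
                     membrane_diameter_mm=12.0):
    """Extrapolate counts from a few random fields of view to the whole
    insert membrane and express invasion as % of seeded cells."""
    membrane_area = math.pi * (membrane_diameter_mm / 2) ** 2
    mean_per_field = sum(field_counts) / len(field_counts)
    total_on_membrane = mean_per_field * membrane_area / field_area_mm2
    return 100.0 * total_on_membrane / seeded_cells

# Example: five fields of ~0.41 mm^2 each, 60,000 cells seeded per gel
print(percent_invasion([42, 37, 51, 45, 40], 0.41, 60_000))   # ~19.8%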
Creation of Multi-Layered Models to Mimic the Tumor-Stroma Interface

To further examine how breast cancer cells interact with the surrounding microenvironment, we layered our tumor and stroma models (Figure 6a) to recapitulate the transition zone found at the tumor-stroma border in vivo as a proof of concept. These stacked gels allowed for regions of differing cell densities and matrix components to be included within the same tissue-culture insert for analysis. We were able to optimize the layering process for our primary tumor and stroma models, ensuring that there was a distinct boundary between the gels while maintaining the ability of MDA-MB-231 breast cancer cells to migrate into the lower stromal layer within 18 h of gel formation (Figure 6b).

Figure 6. Gel orientation remained intact after fixation and removal from the tissue-culture insert. Breast cancer cell invasion from the tumor layer into the stroma layer can be seen, indicating that cells were able to migrate through the gel boundary. Scale bar = 200 µm.

Discussion

Here, we present patient-driven, tissue-engineered models of primary and metastatic breast cancer. From immunofluorescence of tissue resections, we observe differences in the cellular and matrix composition between primary and metastatic tumors, as well as between individual patients. Using these models, we found that both matrix composition and the addition of fluid flow influence breast cancer invasion. Breast cancer cells use integrin heterodimers and CD44 to migrate by engaging with collagen I and hyaluronan, respectively [23,24]. The greater invasion rate in collagen-based gels suggests that integrin-based migration may be more efficient for these cells if available, though the study is limited by the use of a single representative cell line. While differences in substrate mechanics could also influence tumor cell migration, the collagen- and hyaluronan-based gels used here have similar elastic moduli on the order of 1000 Pa [12]. Of note, high concentrations of collagen I are correlated with a poor prognosis and predisposition for metastasis in primary breast cancer [25]. The extensive collagen I deposition we observed in the metastatic patient resections presents a distinct matrix phenotype from healthy brain tissue and primary brain tumors, which may worsen clinical outcomes.
Collectively, these results suggest that breast cancer cells mediate deposition of fibrillar collagen I in the otherwise nanoporous hyaluronan-based brain extracellular matrix to promote metastatic progression. This elevated degree of collagen I in the brain could be secreted by metastatic breast cancer cells [26], by recruited progenitor cells [27], or by astrocytes or other neuroglial cells induced to secrete collagen I in response to tumor-derived signals [28,29]. Therefore, future work is needed to understand what molecular mechanisms guide this matrix remodeling and if these factors could be targeted therapeutically. Multicellular and matrix models offer many advantages over traditional 2D co-culture systems for examining tumor-stroma interactions and response to treatment. For example, these platforms provide a 3D culture environment, which has been shown to regulate tumor cell behavior ranging from migration to chemoresistance, and include matrix ligands to better mimic human disease [30,31]. However, several limitations prevent these models from fully recapitulating the microenvironment found in vivo.
The use of previously validated hydrogels made of natural biomaterials does not allow for the material properties of each condition to be independently tuned. For example, each collagen-based gel contained the same concentration of collagen I and therefore did not account for documented differences in matrix density, ligand presentation, and tissue stiffness observed between regions of tumor and stroma [32,33]. Moreover, these models cannot be used to culture cells at the densities found within human tumors while maintaining cell viability and structural integrity. Despite these limitations, the presented models offer a novel approach for examining breast cancer behavior and screening therapies in a tractable setting prior to animal studies and clinical trials. In the future, the described model systems can be adapted to investigate other tumor-stroma interactions that mediate cancer progression, with the end goal of improving treatment options for patients with metastatic breast cancer.
6,448.4
2022-01-18T00:00:00.000
[ "Medicine", "Engineering" ]
The benefits of socioemotional learning strategies and video formats for older digital immigrants learning a novel smartphone application The need to continually learn and adjust to new technology can be an arduous demand, particularly for older adults who did not grow up with digital technology ("older digital immigrants" or ODIs). This study tests the efficacy of socioemotional learning strategies (i.e., encoding information in a socially- or emotionally-meaningful way) for ODIs learning a new software application from an instructional video (Experiment 1) or a written manual (Experiment 2). An experiment-by-condition effect was identified, where memory was greatest for participants engaging socioemotional learning strategies while learning from a video, suggesting a synergistic effect of these manipulations. These findings serve as a first step toward identifying and implementing an optimal learning context for ODIs to learn new technologies in everyday life.

Introduction

Navigating the modern world requires us to constantly learn, update, and adjust to new technology. This need to learn new technology can present challenges for middle-aged and older adults who did not grow up with technology as part of their daily experience. Here we refer to this group of individuals as "older digital immigrants" (ODIs). Unlike "digital natives", or individuals who grew up using technology such as smartphones, laptops, tablets, etc., ODIs were introduced to these technologies in mid-to-later life. Although technology can ultimately be helpful to ODIs, allowing them to keep track of everything from their own daily schedules to the medication regimens of those for whom they may be assuming a caregiver role, this technology is constantly changing, requiring them to update their memories of how to interact with their electronic devices, even after learning to use them. For ODIs, the demands of this constantly shifting technological landscape can be particularly challenging, drawing on the types of memory abilities with which individuals of their age show particular difficulty (see Park and Festini, 2017), and can have a significant impact on their daily lives (Parikh et al., 2016).

One factor contributing to older adults' difficulty learning new technology may relate to their tendency to utilize poor learning strategies (e.g., Old and Naveh-Benjamin, 2012). Although older adults report knowing a number of effective strategies for improving memory (Hache et al., 2018), they are still likely to rely heavily on less effective strategies (Aronov et al., 2015). Providing older adults with strategies to learn information can boost their memory, but these strategies are often effortful to use and difficult to generalize beyond laboratory settings. For instance, if asked to learn word pairs, older adults might be instructed to form a sentence linking these words together (Frankenmolen et al., 2017; Kuhlmann and Touron, 2017). Although older adults can learn to use these strategies, because they are effortful to use, they have been most effectively engaged by older adults with high executive function or IQ (e.g., Bender and Raz, 2012; Frankenmolen et al., 2017), and they tend not to be spontaneously generalized to other learning tasks that older adults encounter. As a result, the examination of learning strategies that may generalize to individuals beyond those with high executive function or IQ is needed in order to assist the growing older adult population in the context of the changing technological landscape.
The current research examines the efficacy of memory strategies that capitalize on older adults' known strengths. Both young and older adults are more likely to remember content that elicits an emotional response (e.g., Kensinger, 2009) or is encoded using a self-referencing mnemonic strategy (Gutchess et al., 2010; Gutchess et al., 2007). In the current study, we consider these processes together as socioemotional strategies, based on evidence of shared mechanisms supporting episodic memory (see Gutchess and Kensinger, 2018 for review). Indeed, extensive research has revealed that there are age-related gains in socioemotional abilities, with older adults giving high priority to the implementation of these processes (Carstensen et al., 1999; Charles and Carstensen, 2010; Scheibe and Carstensen, 2010). Prior research suggests that it is possible for older adults to assist their memory performance by connecting the content they are being asked to learn to these prioritized socioemotional goals (e.g., Carstensen and Turk-Charles, 1994; Fung and Carstensen, 2003; Kensinger and Gutchess, 2017).

The current study tests the efficacy of a socioemotional encoding strategy as a learning tool that can be taught to ODIs and deployed flexibly to enable them to effectively learn to utilize new technologies. There are several reasons to believe socioemotional encoding strategies would be effective, not only relative to baseline (i.e., the absence of an instructed learning strategy) but also relative to standard learning strategies. First, traditional learning strategies that benefit younger adults (e.g., repetition, generating a sentence from novel information, generating a mental image of novel information) have failed to help older adults as much (e.g., Fox et al., 2016), likely because they relied heavily on self-initiated control processes that are impaired in older adults (e.g., Dunlosky and Hertzog, 1998). Indeed, prior research suggests that learner-centric interventions, such as the socioemotional instructions used in the current study, seem to show the most success for older adult learners (Bottiroli et al., 2013; Flegal and Lustig, 2016). Second, the use of socioemotional strategies appears to require less effort than the use of other types of strategies and may be more automatically employed by older adults in memory tasks (Kensinger and Gutchess, 2017). Finally, socioemotional strategies may help older adults feel less threatened by new technology. Older adults often reject new technology, sometimes before even attempting to learn how to use it (Nilsson and Townsend, 2010; Czaja et al., 2013). Older adults' frustration with technology can come from many sources (Van Volkom et al., 2014; Hill et al., 2015), but often it is because they do not see how the technology would be beneficial to daily life (Murthy and Mani, 2013; Pew Research Center, 2014), or they are frustrated or overwhelmed by the associated learning demands (Larsson et al., 2013). Despite this initial reaction, once they learn the new technology, older adults often report the same degree of benefit from technology adoption as younger adults (Pew Research Center, 2014; Chopik, 2016; Li et al., 2024). Socioemotional learning strategies may assist older adults' learning as well as their ability to more quickly understand how the technology may be beneficial to their daily life.
In addition to individual strategies, encoding may also be influenced by the mode of presentation. In the current study, we focus on the difference between a written manual and a guided video tutorial. On one hand, video presentation, particularly in unsupervised online studies, could allow the participant to "zone out" (i.e., not engage meaningfully with the video) or to let the video run while directing mental resources to another task. This attention lapse could result in reduced encoding of new material (Baddeley et al., 1984), where the additional effort of reading the written manual may ensure deeper encoding (Craik and Byrd, 1982). Indeed, many prior studies conducted in young adults have shown superior memory for content in news reports presented in print relative to audiovisual presentation (e.g., Gunter et al., 1984; Furnham and Gunter, 1985; Gunter et al., 1986; Furnham and Gunter, 1987; DeFleur et al., 1992).

On the other hand, the benefit of print over video may be specific to videos in which there is limited redundancy between visual and verbal information, such as those typically used in news reports. When videos with greater visual/verbal overlap are used (such as news videos designed for children), young adults show no benefit of print or video (Furnham et al., 2002) or a benefit of video presentation (Walma van der Molen and van der Voort, 2000). The video in the current study walked participants through each step, both verbally and visually, and therefore likely contained sufficient overlap to confer a benefit over text. Presenting verbal and visual information together can also reduce cognitive load during encoding, facilitating transfer of information from working to long-term memory (Mayer, 2017). Finally, an instructional video may encourage learners to engage with the material by capturing their attention. We are naturally attracted to moving objects (Howard and Holcombe, 2010), allowing videos to capture our attention and maintain it for longer durations of time. Videos may also capture attention by being more socially-relevant than written manuals, often including the voices and/or faces of those providing instruction. This more socially-relevant format may encourage us to treat the new material as more social or emotional, leading to a baseline socioemotional memory benefit (see Kensinger and Gutchess, 2017).

In the current study, we compare the efficacy of instruction for a socioemotional learning strategy to a control condition (no instructed learning strategy) and to instruction for a standard learning strategy on the ability of adults ages 55+ (those who are not "digital natives" and would not have had access to personal computing until they were adults) to learn a new software application. We do so using two different types of learning materials: an instructional video that includes the social context of a human voice (Expt 1) and a PDF manual devoid of overt social cues (Expt 2). Across these two experiments, we test the hypothesis that, for those who are older digital immigrants, socioemotional learning strategies can be engaged to make detailed information, like the functions within a new software application, more memorable compared to when they are learned with no strategies or standard strategies.
Participants

Data from Experiment 1 are from 234 participants (ages 55-78, M = 62.85, SD = 5.08; 153 females) who reported being native English speakers age 55+ and who completed both parts of a 2-day online experiment with adequate performance on attention and quality assurance checks. Participants were recruited from Amazon's Mechanical Turk (MTurk) and from a database of participants who had expressed interest in completing studies in our lab. The current study focuses on participants ages 55 and older, as it is likely that these participants did not grow up using digital technology and had to acquire that skill later in life (i.e., "older digital immigrants", or ODIs). Although all participants were able to sign up for and complete an online study, and 99.6% reported using their computer daily, we wanted to limit our sample to participants who did not have professional computer experience. Therefore, we excluded 31 participants who reported specialized computer training as a computer programmer, app developer, or computer science teacher, leaving a final sample of 203 participants.¹

Procedure

The current study took place over the course of two half-hour study sessions, both conducted online using Qualtrics and Amazon's Mechanical Turk. On Day 1, participants were randomly assigned into one of three experimental conditions: Control (n = 70), Standard Strategy (n = 65), and Socioemotional Strategy (n = 68). All data and materials, including the videos used, have been made publicly available at OSF Home and can be accessed at DOI 10.17605/OSF.IO/TFWSY (https://osf.io/tfwsy/?view_only=29bd4af188934f07a68e81a8344738a5).

Day 1: Encoding (see Figure 1 for a depiction of Day 1 procedures). All participants viewed a 10-min instructional video, teaching them the functions of a novel smartphone application, immediately followed by a short memory test. Prior to the application instructional video, participants in the Socioemotional and Standard learning conditions viewed a 2.5-min video instructing them on how to use the socioemotional or standard strategies, respectively.

¹ The analysis that includes these computer experts is included in the Supplementary Material.

Participants in the Control condition were presented with a 10-min video that introduced the functions of a new smartphone application. This mock application (designed for the purposes of this study only, to ensure that no participant had prior familiarity with it) was a medical application that helped users to manage medical data and contact medical professionals. In the 10-min video, a female narrator (author E.A.K.) walked participants through a number of possible uses for the application and the function of all icons. Although the narrator's voice could be heard, they were not seen on the screen. The video was followed by a brief memory test in which participants were presented with an image of the home page of the app and asked to label the 8 icons.
The instructional video and immediate memory test used in the Standard and Socioemotional Strategy conditions were identical to those in the Control condition. However, prior to viewing this video, participants in the strategy conditions watched additional video tutorials on specific learning strategies. The "Standard Strategy" condition was designed to mimic the deep encoding strategies known to generally benefit memory and that have most commonly been used to try to enhance memory performance (e.g., Kirchhoff et al., 2012). Participants in the Standard Strategy condition viewed a 2.5-min video teaching them to:

- Repeat novel information
- Generate a sentence from novel information
- Create a mental image of novel information

The "Socioemotional Strategy" condition was designed to rely on processes that are typically relatively preserved in older adults, including self-referential processing and emotional engagement. Participants in the Socioemotional Strategy condition viewed a 2.5-min video teaching them to:

- Generate actions related to novel information
- Engage in the self-relevance of novel information
- Focus on the emotion related to novel information

After watching the video tutorials for these strategies, participants watched three additional 1-2 min videos that walked them through a practice of each strategy. After each strategy video, participants were asked if they understood how to use the strategy and were given the opportunity to practice it again if not. Importantly, while the instructional video was presented by a female narrator using conversational tone and inflection, the videos used to train participants on strategy use were narrated by a computer with an emotionless tone and stable prosody, using Amazon Polly text-to-speech (https://aws.amazon.com/polly/). This was done because, while the same instructional video was used in all conditions, the training videos differed across conditions. We wanted to ensure that it was only the content of those trainings that differed, with no differences in prosodic or other vocal social cues.

After completing all tutorials and practices, participants in the strategy conditions were presented with the 10-min instructional video walking them through the medical application. As in the Control condition, participants in these conditions completed a brief identification memory test immediately after the video. The encoding survey took approximately 20-30 min for participants to complete and can be found at DOI 10.17605/OSF.IO/TFWSY.

Day 2: Retrieval

On Day 2, approximately 24 h after memory encoding, all participants completed the same retrieval task, regardless of encoding condition. The memory task included free response, multiple-choice, and matching questions. The full retrieval survey can be found at DOI 10.17605/OSF.IO/TFWSY.

Scenarios: First, participants were asked to describe 3 distinct scenarios where they could use the application. For each, they were asked to include:

1) A description of the situation
2) An explanation for why this application would be helpful
3) A step-by-step description of what they would do in the app
4) A description of the appearance and location of icons

Responses could earn up to 1 point each for the description of the situation, the explanation of why the app would be helpful, and the description of the icons, and up to 2 points for the description of the steps to be taken to accomplish the goal, for up to 5 points per response.
Free response: Participants were also asked to respond to four more specific free response questions; for these questions, the number of possible points (listed below) was based on the total number of pieces of information needed to fully respond to the prompt:

1) Describe how they would use the app to find an existing medical report (up to 3 points)
2) Describe the medical contacts page (up to 8 points)
3) Describe how to add particular medical records (up to 3 points)
4) Describe the prescriptions page and how to use it (up to 4 points)

Free responses were scored by two researchers who were blind to encoding condition. To establish inter-rater reliability, the seven free response questions for a subset of 35 participants were scored by both raters. Scores were highly reliable (average Cronbach's alpha = .97, with alphas ranging from .88 to .99 across questions), and inconsistent responses were discussed to establish agreement. The responses from the remaining participants were divided up between the raters.

Multiple-choice: Participants were presented with 9 multiple-choice questions in which they were shown a page from the app and asked to identify the icon that would be used to perform a particular function. These were scored as either correct or incorrect, with correct responses earning 1 point.

Matching: Participants were presented with 15 hypothetical scenarios and asked to match each with the appropriate icon from the home screen. These were scored as either correct or incorrect, with correct responses earning 1 point.

The retrieval task took participants approximately 15-25 min to complete, followed by a 5-min survey that asked participants to consider their engagement in the task. Participants were asked how motivated they were to learn the new application and whether they would learn and use a similar application if it were available.

They were then asked to consider what strategies they used during the task. Participants were provided with a list of 15 possible strategies and were asked to rate, on a 1-5 scale, the extent to which they employed each during the encoding task. They then were asked to rank all 15 strategies in order of most to least used. These strategies could be categorized as "standard", "socioemotional", or "other"; "other" referred to general strategies that many people use when learning new information (memorizing the function, spending time learning the function, paying and keeping attention). These strategies can be found in the full survey (DOI 10.17605/OSF.IO/TFWSY), but are also listed in the Supplementary Material. Finally, participants completed a series of attention and quality assurance questions to establish inclusion eligibility (see survey at DOI 10.17605/OSF.IO/TFWSY).
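For concreteness, the inter-rater reliability statistic reported above could be computed along the following lines. This is a minimal sketch: the Cronbach's alpha formula is standard, but the score matrix below is invented (the paper's raw ratings are not shown here).

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects x n_raters) score matrix.

    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals)
    """
    k = ratings.shape[1]                          # number of raters (here, 2)
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical example: both raters score one 8-point free-response question
# for the 35 double-coded participants (all values made up).
rng = np.random.default_rng(0)
rater_a = rng.integers(0, 9, size=35).astype(float)
rater_b = np.clip(rater_a + rng.normal(0, 0.5, size=35), 0, 8)
print(round(cronbach_alpha(np.column_stack([rater_a, rater_b])), 2))
```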
Data analysis

Memory score: Participants' memory scores were calculated by adding up their points across all memory tasks and dividing by the total possible score, so that scores ranged from 0 to 1. The first 32 participants (12 control, 12 socioemotional, and 8 standard) were not presented with all questions: seventeen were missing two of the free response questions, while 15 were only presented with the initial encoding questions and the matching task. Therefore, their total possible score was not the same as for the remaining participants. Scores for these participants were calculated by dividing their score by an adjusted "total possible" score (i.e., the total score possible based on the questions that they were presented) to take these differences into account. All analyses were conducted using the full sample, but follow-up analyses confirmed that findings were consistent when only including the 202 participants who answered all memory questions.

Analyses: A factorial ANOVA was used to examine the effects of condition (control, standard, or socioemotional) on memory performance. To examine self-reported use of different strategies, the average ratings of "socioemotional", "standard", and "other" strategies were calculated. A mixed-model ANOVA was used to examine strategy use, with condition (control, standard, or socioemotional) as a between-subjects factor and strategy type (socioemotional, standard, and other) as a within-subjects factor. A final factorial ANCOVA examined the effects of condition (between-subjects factor) and strategy use (other, standard, and socioemotional strategy use, as covariates) on memory performance. This analysis examined the main effects of each strategy as well as their interactions with condition.

There was a main effect of strategy use (F(2,400) = 102.28, p < .001, ηp² = .34); participants reported using other strategies (M = 4.67, SE = .03) to a greater extent than socioemotional strategies (M = 4.05, SE = .06) or standard strategies (M = 4.08, SE = .05). There was no main effect of condition (F(2,200) = .09, p = .911, ηp² = .001), but there was a significant condition-by-strategy interaction (F(4,400) = 9.29, p < .001, ηp² = .085; see Figure 3A). This interaction was driven by significantly greater standard strategy use in the standard condition (M = 4.26, SE = .09) relative to the control condition (M = 3.92, SE = .09; p = .006, LSD correction).

Out of concern that factors related to stay-at-home orders might influence results, data collection was paused until September-October 2021, at which point vaccinations were widely available and most pandemic-related restrictions had been lifted (n = 46). Effects of timing, relative to the pandemic, are reported in the Supplementary Material.

Procedure: Experiment 2 was identical to Experiment 1, but instead of viewing a 10-min instructional video introducing the functions of a new smartphone application, participants were given 10 min to read through an 8-page PDF manual describing these functions (the manual can be found at DOI 10.17605/OSF.IO/TFWSY). Participants were not allowed to advance from the manual until the time limit was up. Strategy tutorial videos and all memory questions remained the same as in Experiment 1 (see Figure 4). There was an experiment-by-condition interaction (F(2,338) = 3.35, p = .036, ηp² = .019) that reflects the significant effect of condition seen in Experiment 1, but not Experiment 2 (see results reported above).
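A minimal sketch of the proportion-correct scoring with the adjusted denominator, followed by the factorial ANOVA on condition, is shown below. A pandas/statsmodels workflow is assumed, and all column names and values are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: one row per participant, with points earned and the
# total points possible for the questions they were actually shown (this
# denominator differs for the first 32 participants).
df = pd.DataFrame({
    "condition": ["control", "control", "standard",
                  "standard", "socioemotional", "socioemotional"],
    "points_earned": [30.0, 25.0, 41.0, 38.0, 47.0, 44.0],
    "points_possible": [52.0, 40.0, 52.0, 52.0, 52.0, 52.0],
})

# Memory score in [0, 1]: points earned over points actually available.
df["memory_score"] = df["points_earned"] / df["points_possible"]

# Factorial ANOVA: effect of encoding condition on memory score.
model = smf.ols("memory_score ~ C(condition)", data=df).fit()
print(anova_lm(model, typ=2))
```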
Summary

Experiment 2 was designed to examine whether the effects of condition on memory and strategy use were specific to a video learning format. The significant difference in overall memory between Experiments 1 and 2 suggests that learning format may influence how ODIs learn new technology, with a video generally leading to better memory than a PDF, although the experiment-by-condition interaction suggests that this may only be the case when participants are asked to focus on socioemotional encoding strategies. Similarly, the enhancing effect of the socioemotional condition in Experiment 1 was not replicated in Experiment 2, suggesting that there may be a synergistic effect of using videos and socioemotional strategies.

The effects of condition on strategy use in Experiment 2 were the same as in Experiment 1, with no effects of experiment or experiment-by-condition interactions. In other words, although memory was influenced by learning format, this was not mediated by self-reported changes to strategy use.

Discussion

The current study was the first to examine the effects of socioemotional strategies on older digital immigrants' learning of a novel smartphone application. Results point to benefits conveyed by learning information in a socioemotional context, but only within the context of a video tutorial. When a training video was used (Experiment 1), there was a significant memory benefit for socioemotional learning strategies. However, this memory benefit did not extend to learning via a manual (Experiment 2). Similarly, the modality effect of video (Experiment 1) relative to PDF (Experiment 2) was specific to the socioemotional strategy condition. Thus, memory performance was best when an instructional video was watched with socioemotional learning strategies engaged (see Figure 2).

These findings suggest actionable changes that ODIs can make in their attempts to more effectively learn new technology. In particular, the results suggest that such individuals could benefit from the use of tutorials or training videos. As they watch these videos, thinking about the self-relevance of each step, and about specific, emotional scenarios in which they would execute it, may also provide learning benefits. These strategies may be less demanding for older adults or more intuitive for them to use, which could lead to real-world benefits.

The findings surrounding subjective judgements of strategy use are more difficult to interpret without additional research. Although both experiments revealed that training participants to use specific strategies can lead to subtle changes in the likelihood that they utilize those strategies, these patterns were weaker than expected. Further, there was limited evidence for a link between self-reported strategy use and memory performance. It is possible that the retrospective self-report ratings employed in the current study to evaluate strategy use did not adequately capture real-time use of the strategies. Future studies could innovate ways to include more objective or real-time measures of strategy use. It was notable that participants reported using "other" strategies more often than the specific learning strategies. This is not surprising, as these strategies (e.g., "I kept my attention focused on the task"; see Supplementary Material) were selected to reflect overall intention and effort on the encoding task. There were also fewer of these strategies in the ranking list (4) relative to the lists of socioemotional and standard strategies (6 each).
Finally, the finding that ODIs may learn better from a video relative to a written manual is intriguing, as it reflects a simple change that they may make to more effectively learn new technology. Further research will be needed, however, to understand the possible mechanisms underlying this benefit. One possible explanation is that videos, being a more socially relevant format, naturally encourage socioemotional strategies during encoding. If so, participants may have benefitted from the enhancement conferred upon neutral content encoded in a socioemotional context (see Kensinger and Gutchess, 2017). Videos may also be more engaging than text, maintaining participants' attention throughout the entire encoding duration with changing visual and auditory content. Indeed, attention is captured by motion (Howard and Holcombe, 2010), abrupt onsets (Yantis and Jonides, 1984), and color singletons (i.e., an item of one color contrasted against a backdrop of items in other colors; Pashler, 1988), all of which can be incorporated into videos. The extent to which videos are easier for older adults to attend to and learn from may also depend on literacy engagement, as frequent leisure reading can improve skills such as verbal working memory and episodic memory (Stine-Morrow et al., 2022). This suggests that reading for pleasure can have enhancing effects on memory that should be explored further. In addition, future research should consider individual differences in literacy engagement when comparing the benefits of video to those of a written manual.

It is important to note that the current study focused on learning formats in which individuals passively receive pre-recorded information, which we believe reflects the way most ODIs learn new applications in their daily lives. It is possible that hands-on training, with the ODI performing application functions on their own, could provide better learning than either of these approaches (i.e., the enactment effect; Engelkamp, 1998).

There is an expansive literature focusing on strategies to make educational videos more or less effective (see Brame, 2016 for review). Although the current informational video was designed with some recommendations in mind (e.g., having a concise, clear, and focused message in which audio and visual messages are aligned), there were others that were intentionally not implemented. For instance, the education literature highlights the benefits of social cues such as narrators using a conversational tone or interacting with the viewer (Brame, 2016). The current study kept the narrator's tone neutral and informative to avoid imbuing the video with additional socioemotional cues, thereby keeping the video vs. manual comparison as tightly controlled as possible. Social cues may also differentially affect young and older adults, introducing confounds beyond the scope of the current study. For instance, having an instructor on screen (Wang and Antonenko, 2017) can facilitate learning in undergraduates, but other factors (e.g., whether the instructor is an older adult or a younger adult) could affect the magnitude of benefits in older adults. The current study did not show a narrator on the screen to eliminate this potential confound. Future research should be conducted to see whether additional social cues (e.g., faces, gestures, conversational tone, etc.) further support new learning in ODIs.
The current study also focused on the modality of the smartphone app instruction, keeping the training for strategy use constant. An important follow-up would be to explore the potential impact of strategy training modality. Given the significant modality-by-condition interaction identified in the current study, it is possible that ODIs may not benefit from socioemotional strategy instruction that is delivered in writing.

As with most studies that are conducted online, the current research has some significant limitations that should be considered when interpreting the results. First, although the current study excluded ODIs with specialized computer training, the online nature of the study means that participants had developed a relatively high level of digital competency (i.e., the ability to sign up for and complete an online study). It is possible that the results of the current study may not extend to older adults who do not have a high baseline technical ability.

The current research was also affected by the onset of the COVID-19 pandemic halfway through Experiment 2. The onset of the pandemic was associated with increases in anxiety and stress (Rodriguez-Seijas et al., 2020; Morin et al., 2021; Fields et al., 2022), which may have made it more difficult for participants to learn new information. Stay-at-home orders and changes to work-from-home policies may also have influenced the sample of participants available for online research studies or have increased the technological capabilities of ODIs who had no prior experience. To control for these changes, a portion of Experiment 2 was conducted in September-October 2021, when many of these policies had ended and subjective stress and negative affect had lowered (Fields et al., 2022). Further, exploratory analyses suggest that the differences between Experiments 1 and 2 exist even in those participants from Experiment 2 who completed the study prior to the pandemic (see Supplementary Material).

Conclusion

The current study serves as a first step in understanding how socioemotional processing and presentation format may aid ODIs in learning a new smartphone application. Technology is a central and critical component of the modern world, and one that may present obstacles to many older adults. The current research suggests that how new technology is learned may assist use of this technology: memory performance was significantly enhanced when participants learned about the application from an instructional video and utilized socioemotional encoding strategies. Future research will build on these findings to find ways to facilitate the use of these strategies in everyday life.

FIGURE 1 Visual depiction of the Experiment 1 encoding procedure for the Control (n = 70), Socioemotional (n = 68), and Standard (n = 65) learning conditions. All participants viewed a 10-min instructional video, teaching them the functions of a novel smartphone application, immediately followed by a short memory test. Prior to the application instructional video, participants in the Socioemotional and Standard learning conditions viewed a 2.5-min video instructing them on how to use the socioemotional or standard strategies, respectively.
Targeting SPHK1/PBX1 Axis Induced Cell Cycle Arrest in Non-Small Cell Lung Cancer

Non-small cell lung cancer (NSCLC) accounts for 85~90% of lung cancer cases, with a poor prognosis and a low 5-year survival rate. Sphingosine kinase-1 (SPHK1), a key enzyme in regulating sphingolipid metabolism, has been reported to be involved in the development of NSCLC, although the underlying mechanism remains unclear. In the present study, we demonstrated the abnormal signature of SPHK1 in NSCLC lesions and lung cancer cell lines, with a potential tumorigenic role in cell cycle regulation. Functionally, ectopic Pre-B cell leukemia homeobox-1 (PBX1) was capable of restoring the arrested G1 phase induced by SPHK1 knockdown. However, exogenous sphingosine-1-phosphate (S1P) supply had little impact on the cell cycle arrest caused by PBX1 silencing. Furthermore, the S1P receptor S1PR3 was revealed as a specific switch that transports the extracellular S1P signal into cells and subsequently activates PBX1 to regulate cell cycle progression. In addition, Akt signaling partially participated in the SPHK1/S1PR3/PBX1 axis to regulate the cell cycle, and an Akt inhibitor significantly decreased PBX1 expression and induced G1 arrest. Targeting SPHK1 with PF-543 significantly inhibited the cell cycle and tumor growth in preclinical xenograft tumor models of NSCLC. Taken together, our findings demonstrate the vital role of the SPHK1/S1PR3/PBX1 axis in regulating the cell cycle of NSCLC, and targeting SPHK1 may have a therapeutic effect in tumor treatment.

Introduction

Lung cancer is one of the most common malignant tumors and a leading cause of cancer-related morbidity and mortality worldwide, with nearly 2 million new cases each year and an extremely low 5-year survival rate [1,2]. Non-small cell lung cancer (NSCLC), a major type of lung cancer accounting for 85~90% of cases, is generally divided into adenocarcinoma, squamous cell carcinoma and large cell carcinoma [3]. Currently, the most effective treatment for NSCLC includes surgery combined with radiotherapy and chemotherapy, although it comes with a dismal prognosis [4]. Hence, exploring related genes and finding important pathways in the development of NSCLC will be critical to understanding the malignancy of these tumors and to improving the survival of NSCLC patients.

Biologically active lipids have emerged as signaling molecules with pleiotropic effects on important cellular processes, including cell proliferation, migration and pluripotency [5]. Sphingosine kinase-1 (SPHK1) is a primary member of the SPHK family, catalyzing the production of sphingosine-1-phosphate (S1P) [6]. The S1P-related pathway orchestrates numerous cellular processes essential for cell proliferation and survival through five G-protein-coupled receptors (GPCRs), S1PR1-5, in an inside-out manner or as an intracellular signaling lipid messenger [7]. The accumulated data indicate that SPHK1 is an integral component of the cancer cell network and can be "hijacked" for cell renewal and survival, including in breast, ovarian and lung cancer [8-10]. Masayuki et al. demonstrated a critical role for circulating S1P produced by tumors, and for the SPHK1/S1P/S1PR1 axis, in obesity-related inflammation, the formation of lung metastatic niches and breast cancer metastasis [11]. In bladder cancer, elevated SPHK1 expression was also shown to enhance chemoresistance to cisplatin and contribute to poor survival rates in patients [12].
In addition, previous studies have already linked the high expression of SPHK1 with tumor progression and the poor survival of patients with NSCLC [13], but the molecular mechanism still needs further investigation.

Pre-B cell leukemia homeobox-1 (PBX1) is a member of the TALE family of atypical homeodomain transcription factors, initially identified as part of the chimeric transcription factor resulting from the chromosomal translocation t(1;19) in pre-B cell acute lymphoblastic leukemias [14,15]. PBX1, or the products of the fusion between the E2A and PBX1 genes, is involved in regulatory networks that frequently equip tumors with self-renewal, repopulation and resistance to chemotherapeutics [16,17]. Luca et al. identified PBX1 amplification as a functional hallmark of aggressive ERα-positive breast cancers [18]. In ovarian cancer and myeloproliferative neoplasm, PBX1 has been shown to participate in maintaining cancer stem cell-like phenotypes and promoting platinum resistance, at least partially, through its intricate interaction with the JAK2/STAT3 signaling network [16,19]. Here, we explored the role of PBX1 in SPHK1-promoted cell cycle progression in NSCLC. We uncovered that the SPHK1/S1PR3/PBX1 axis, and a feedback interaction loop between PBX1 and S1PR3, are potential drivers of the proliferation and development of NSCLC, which might bring new insights into NSCLC treatment.

Accumulation of SPHK1 in NSCLC

First, the Cancer Genome Atlas (TCGA) database validated that, both in lung adenocarcinoma and in squamous cell carcinoma (owing to the lack of large cell carcinoma sample sizes, there is no corresponding database), the expression of SPHK1 in tumor tissues was notably up-regulated (Figure 1A). Kaplan-Meier survival analysis showed that high SPHK1 expression predicted poorer overall survival (OS) than weak SPHK1 expression (Figure 1B). Based on the database analysis, we then examined endogenous SPHK1 expression in various NSCLC cell lines, including A549, H460, H520, H1299, H1975 and H226. Western blot showed that all of them expressed relatively high levels of SPHK1 in comparison with the normal lung alveolar epithelial cell line BEAS-2B (Figure 1C). Further evaluation of SPHK1 levels in paraffin-embedded, archived clinical tumor specimens of NSCLC cases, using IHC analysis with an antibody against human SPHK1, revealed significantly elevated SPHK1 levels in NSCLC cases compared to the matched adjacent tissue (Figure 1D). Collectively, these data demonstrated that SPHK1 accumulates in NSCLC and might serve as a potential prognostic marker for patients with NSCLC.

SPHK1 Deletion Inhibited Cell Proliferation in NSCLC

Given the significant expression difference and clinical relevance of SPHK1, we further evaluated the direct roles of SPHK1 in NSCLC cells. SPHK1-targeting shRNA (shSPHK1) or the corresponding control (shNC) was used to establish a stable SPHK1-knockdown line in H460 cells. The knockdown efficiency was evaluated by RT-qPCR and Western blot analysis (Figure 2A). Results showed that SPHK1 deletion significantly suppressed cell viability, colony formation and migration in H460 cells, as confirmed by MTT, colony formation, transwell and wound-healing assays (Figure 2B-E). The cell cycle is an indispensable process for cell proliferation, and central to this process are the cyclin-dependent kinases complexed with the cyclin proteins [20,21].
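As an aside on the survival comparison in Figure 1B: a median-split Kaplan-Meier analysis of this kind is commonly run with the lifelines package. The sketch below is illustrative only; the table layout, the median split, and all values are invented.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical TCGA-style table: follow-up time (months), death event flag,
# and SPHK1 expression for each patient.
df = pd.DataFrame({
    "time": [12.0, 30.5, 7.2, 54.0, 22.1, 48.3],
    "event": [1, 0, 1, 0, 1, 0],
    "sphk1": [8.1, 3.2, 9.4, 2.8, 7.7, 3.5],
})
high = df["sphk1"] >= df["sphk1"].median()  # split into SPHK1-high/-low

kmf = KaplanMeierFitter()
for label, grp in [("SPHK1-high", df[high]), ("SPHK1-low", df[~high])]:
    kmf.fit(grp["time"], grp["event"], label=label)
    kmf.plot_survival_function()  # overlay both curves on one axis

# Log-rank test for a difference in overall survival between the groups.
res = logrank_test(df[high]["time"], df[~high]["time"],
                   df[high]["event"], df[~high]["event"])
print(res.p_value)
```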
It was revealed that SPHK1 silencing induced G1/S phase arrest in H460 cells, consistent with the significantly decreased expression of the G1 phase cyclin markers CDK4, CDK2 and CyclinD1 (Figure 2F,G). SPHK1 catalyzes the phosphorylation of sphingosine to form S1P, a lipid messenger with both intracellular and extracellular functions that promote cell proliferation and survival [22,23]. To evaluate the effect of S1P directly, we then examined the proliferative ability of SPHK1-silenced H460 cells in the presence or absence of exogenous S1P. EdU assays showed that the inhibitory effect on cell proliferation induced by SPHK1 deletion was notably reversed by exogenous S1P (Figure 2H). Thus, these findings strongly supported the oncogenic role of SPHK1 in promoting a proliferative phenotype in NSCLC cells.

Pharmacological Inhibition of SPHK1 by PF-543 Induced Cell Cycle Arrest

PF-543, a novel and selective inhibitor of SPHK1 among the SK inhibitors, exerts potent antiproliferative and cytotoxic effects in multiple cancer treatments [24]. A study demonstrated that PF-543 alleviated lung injury caused by sepsis in acute ethanol intoxication rats by suppressing the SPHK1/S1P/S1PR1 signaling pathway [25]. To further characterize the oncogenic role of SPHK1 in NSCLC cells, MTT assays were performed to test the effect of PF-543 on cell survival. Results showed that, compared with BEAS-2B, treatment with PF-543 led to substantial declines in the viability of H460, H226 and H1299 cells, suggesting that cancer cells were more sensitive to SPHK1 inhibition than normal lung epithelial cells, with half maximal inhibitory concentrations (IC50) of 20.45 µM in H460 cells, 16.80 µM in H226 cells and 26.55 µM in H1299 cells, respectively (Figure 3A). Based on the MTT assay, we then chose a 15 µM concentration of PF-543 for the following cell experiments, which was lower than the IC50 values. Likewise, drug-treated H460 and H226 cells displayed lower clone formation and migration abilities, as confirmed by clone formation and wound-healing assays (Figure 3B-D). Moreover, protein and flow cytometry analysis revealed that PF-543 treatment significantly induced G1/S phase arrest in H460 and H226 cells, while exogenous S1P replenishment promoted the cell cycle transition from the G1 phase to the S phase, as revealed by a decreased percentage of G1 phase cells and an increased percentage of S phase cells (Figure 3E-H). To sum up, these pharmacological experiments established the oncogenic role of SPHK1 in promoting proliferation and the cell cycle in NSCLC.

PBX1-Mediated SPHK1 Inhibition Induced G1/S Stage Cell Cycle Arrest

The foregoing data clearly demonstrated that the suppression of SPHK1, whether by depletion or by pharmacological inhibition, resulted in G1/S phase arrest in NSCLC cells, which made us curious about how SPHK1 participates in cell cycle regulation.
Based on the sphingolipid metabolism and its downstream effectors, we focused on the transcription factor PBX1. It has been suggested that PBX1 overexpression promotes cell proliferation, cell cycle progression and osteogenesis [26]. Meanwhile, the transcriptional activation of E2F5, a cell cycle regulator, is co-regulated by E2A-PBX1, RUNX1 and MED1 [27]. In addition, a study revealed that another cell cycle regulator, CyclinD1, is subject to regulation by the transcription factor PBX1 in clear cell renal carcinoma [28]. Our experiments showed relatively high endogenous PBX1 expression in various NSCLC cell lines compared to BEAS-2B (Figure 4A), but we wondered whether there was a relationship between SPHK1 and PBX1 in cell cycle regulation. To probe this association, H460 cells were transiently transfected with siSPHK1 or siPBX1, each compared to its negative control siNC. Western blot analysis showed that SPHK1 silencing significantly down-regulated the expression of PBX1, whereas PBX1 suppression did not affect SPHK1 expression (Figure 4B). Regarding the cell cycle, protein and flow cytometry analysis further revealed that the addition of exogenous S1P moderately up-regulated PBX1 expression and the G1 phase-related markers in cells with SPHK1 silenced, whereas S1P replenishment relieved neither the cell cycle arrest nor the suppression of the cyclin markers in cells with PBX1 silenced, quite unlike the way S1P promoted the cell cycle in SPHK1-depleted cells (Figure 4B,C). These results led us to speculate that SPHK1 lies upstream of PBX1 and might regulate the cell cycle through PBX1. We then used SPHK1-knockdown (shSPHK1) and corresponding control (shNC) H460 cells to generate cells with PBX1 overexpression (pcPBX1) or its negative control (pcNC). RT-qPCR showed that the transfection efficiency in shSPHK1 cells was approximately nine-fold that of shNC cells (Figure 4D). Protein and flow cytometry analysis supported that ectopic PBX1 expression was sufficient to rescue the proliferative potential of cells with SPHK1 suppression, as confirmed by the decreased proportion of G1 phase cells and the increased proportion of S phase cells in PBX1-overexpressing shSPHK1 cells compared with the control group (Figure 4E,F). A similar role of PBX1 in SPHK1-regulated cell cycle progression was also corroborated in another NSCLC cell line, H226 (Figure S1A,B,G). In addition, we performed IHC staining with both anti-SPHK1 and anti-PBX1 antibodies on NSCLC tumor specimens to measure the correlation between the two molecules. Cases with high expression of SPHK1 showed higher PBX1 expression at the same location of the pathological sections, suggesting a potentially positive correlation between SPHK1 and PBX1 at the protein level (Figure 4G). Hence, these results supported that PBX1 may act downstream of SPHK1 and be indispensable for the SPHK1-regulated cell cycle process in NSCLC cells.

S1PR3/PBX1 Axis Regulated Cell Cycle Progress

In our study, we discovered that H460 and H226 cells contained higher extracellular S1P levels than intracellular levels (Figure 5A), and replenishing exogenous S1P reversed the cell cycle arrest induced by SPHK1 suppression, leading us to speculate that S1P produced by SPHK1 might act in an "inside-out" manner. In tumor cells, S1PR1 was identified as a potential target to block STAT3 signaling in activated B cell-like diffuse large B-cell lymphoma [29].
S1PR2 signaling induced AML growth and was shown to activate ezrin-radixin-moesin (ERM) proteins to induce motility and invasion of HeLa cells in culture [30]. S1PR3 signaling promoted aerobic glycolysis via the YAP/c-MYC/PGAM1 axis in osteosarcoma [31]. Therefore, we investigated the possible role of S1PRs in the SPHK1-regulated cell proliferation of NSCLC. After exposing cells to exogenous S1P for 24 h, we measured the transcription of S1PR1-5 by RT-qPCR; the results showed that exogenous S1P stimulation prominently up-regulated the transcription of S1PR3 compared to the other receptors (Figure 5B). Consistent with the transcript results, H460 cells with SPHK1 silenced showed markedly reduced S1PR3 expression, whereas S1PR1 was comparatively unaffected (Figure 5C). To further characterize the effect of S1PR3 on the SPHK1/PBX1 axis in NSCLC, H460 cells were transiently transfected with siRNAs targeting S1PR3 or treated with TY-52156 (a potent S1PR3 antagonist). Protein and flow cytometry results revealed that S1PR3 suppression, similar to SPHK1 silencing, induced G1/S phase arrest. As expected, S1P replenishment did not relieve the G1/S phase arrest, nor did it reactivate the cyclin proteins or the downstream PBX1 expression, suggesting a possible bridging role of S1PR3 in SPHK1-mediated cell function (Figure 5D,E). A similar tendency was confirmed in another NSCLC cell line, H226 (Figure S1C-E). Moreover, IHC staining showed higher expression of S1PR3 in NSCLC specimens compared with the adjacent control tissue (Figure S1H), and the S1PR3 level has also been shown to correlate positively with tumor progression [32].

Having shown that PBX1 participates in sphingolipid-regulated cell cycle progression, we asked whether PBX1, as a transcription factor, would in turn affect the transcription and expression of sphingolipid pathway components. Interestingly, protein analysis showed that PBX1 silencing significantly decreased S1PR3 expression, while PBX1 overexpression markedly up-regulated S1PR3 expression (Figure 5G and Figure S1G). The TIMER database (http://timer.cistrome.org/, accessed on 1 July 2022) further validated a positive correlation between the two proteins in lung cancer (Figure 5F and Figure S1F), which led us to ask whether the transcription factor PBX1 promotes S1PR3 expression. To this end, we obtained the promoter sequence of S1PR3 from the Ensembl website (https://asia.ensembl.org/index.html, accessed on 5 May 2022) to determine whether PBX1 could directly bind to and promote the transcription of S1PR3. A potential binding site for PBX1 (CGCTCAATCATG) was discovered in the promoter region of S1PR3 (Figure S3). Forward and reverse primers were designed around the potential binding motif, and the results of ChIP PCR illustrated that PBX1 did bind to the S1PR3 promoter region (Figure 5H). This may be the reason why S1PR3 was down-regulated in cells with SPHK1 silenced, perhaps through the down-regulation of PBX1.
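A promoter scan of the kind described above can be done with plain string matching over the retrieved sequence. A minimal sketch follows; only the motif string comes from the text, and the promoter string is an invented stand-in.

```python
# Minimal motif scan: locate the candidate PBX1 site in a promoter string
# on either strand, reporting 0-based plus-strand start positions.
MOTIF = "CGCTCAATCATG"

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def scan(promoter: str, motif: str = MOTIF):
    hits = []
    for strand, pattern in (("+", motif), ("-", revcomp(motif))):
        i = promoter.find(pattern)
        while i != -1:
            hits.append((strand, i))
            i = promoter.find(pattern, i + 1)
    return hits

# Stand-in promoter: repetitive background with the motif embedded.
promoter = "GATTACA" * 60 + MOTIF + "A" * 300
print(scan(promoter))  # [('+', 420)]
```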
Figure 5. S1PR3/PBX1 axis regulated cell cycle progress. (A) Extracellular and intracellular S1P contents were detected by S1P ELISA kit. **** p < 0.0001, extracellular versus intracellular, Student's t-test. (B) RT-qPCR detected possible S1PRs expression in H460 cells after stimulation with exogenous S1P (5 µM) for 24 h. **** p < 0.0001, control versus S1P, Student's t-test. (C) Western blot detected the expression of S1PR3, S1PR1 and SPHK1 in H460 shNC and shSPHK1 cells. (D) Western blot detected the expression of S1PR3, SPHK1, PBX1, CDK4, CDK2 and CyclinD1 in H460 cells with S1PR3 silence (siRNA 50 nM) in the presence or the absence of S1P (5 µM). (E) Flow cytometry analyzed cell cycle distribution in H460 cells with TY-52156 (5 µM) treatment in the presence or the absence of S1P (5 µM). (F) Gene correlation analysis between S1PR3 and PBX1 of LUAD in the TIMER database. (G) Western blot detected the expression of p-Akt, Akt, S1PR3 and PBX1 in H460 cells with PBX1 silence or PBX1 overexpression. (H) RT-PCR was performed to determine gene abundance of the S1PR3 promoter region in the different groups, which were immunoprecipitated using an anti-PBX1 antibody in H460 cells. **** p < 0.0001, Input versus IgG and IP versus IgG, Student's t-test.

Among the signaling pathways initiated by S1PRs, Akt and Erk signaling have been reported to play roles in regulating the cell cycle and cell survival [33,34]. We therefore examined whether these two signaling pathways participate in the SPHK1/PBX1-regulated cell cycle progression. Protein analysis showed that decreased expression of PBX1 and the cyclin markers could be induced by treatment with PF-543, TY-52156 and wortmannin (a PI3K/Akt antagonist), while the MEK1/2 inhibitor AZD6244 made no difference to the cyclin markers, suggesting that Akt signaling might be involved in the SPHK1/PBX1 axis-regulated cell cycle (Figure S2A,B). It has been reported that the feedback loop between PBX1 and Akt is mutually beneficial for maintaining HF-MSCs in a highly proliferative state with multipotential capacity [26]. Meanwhile, cells with different levels of PBX1 expression appeared to have unchanged p-Akt expression, suggesting PBX1 might be downstream of Akt signaling (Figure 5G and Figure S1G). Zhou et al. elucidated the regulatory mechanism upstream of PBX1 in dNK cells, in which EVT-derived HLA-G promoted the activation of Akt1, driving the expression of PBX1 [35]; this is consistent with our finding that PBX1 was located downstream of the Akt signaling pathway.

PF-543 Suppressed Tumor Growth and Induced Cell Cycle Arrest in H460 Xenograft Tumors

To further assess the effects of SPHK1 in vivo, we subcutaneously injected equal amounts of luciferase-expressing H460 cells (2.5 × 10⁶) into BALB/c male nude mice. After 10 days, mice were randomized into two groups and treated intraperitoneally with PBS or PF-543 (20 mg/kg body weight) every day for 3 weeks following the schedule (Figure 6A). Bioluminescence intensity was monitored and relative photon flux was quantified at the indicated times.
Compared with the control group, PF-543 treatment displayed a lower bioluminescence intensity growth rate (Figure 6C,D) and diminished tumor volume (Figure 6E), but had no obvious effect on the body weight of nude mice at the specified concentration (Figure 6B). At the end point of the experiment, tumors were removed for tissue protein and IHC staining analyses. Protein results showed that PF-543 treatment significantly down-regulated PBX1 expression and the phosphorylation of Akt, along with lower expression of the cyclin markers CDK4, CDK2 and CyclinD1 (Figure 6F). IHC staining further confirmed that the levels of Ki67, SPHK1, PBX1, S1PR3 and cyclin-related proteins were markedly reduced in tumors derived from the PF-543 treatment group (Figure 6G). Overall, these in vivo results corroborated the potential role of the SPHK1/PBX1 axis in cell cycle regulation, consistent with the in vitro studies.

Discussion

Sphingolipids, a unique group of bioactive lipids, have been demonstrated to play a role in the oncogenic process, which is associated with maintaining the equilibrium between the pro-survival and apoptotic signaling of cells [36]. The propensity of S1P to oppose apoptosis pathways and to promote malignant phenotypes, coupled with the exaggerated expression of SPHK1 in some human tumor specimens and a negative correlation between SPHK1 mRNA levels and survival, has prompted the suggestion that SPHK1 serves not only as a predictive biomarker, but also as a potential target in tumor therapy. As part of the sphingolipid metabolism, SPHK1 is the rate-limiting enzyme of the sphingolipid rheostat and maintains its dynamic balance to reach relative homeostasis under normal physiological conditions [37], whereas elevated SPHK1 expression has been found in many types of cancer and leads to increased cell proliferation, migration capability, invasiveness, angiogenesis and inflammation through different oncogenic mechanisms [38]. It was reported that the activated SPHK1/Akt/NF-κB signaling pathway promoted cell proliferation and cell cycle G1/S transition and reduced the apoptosis and chemosensitivity of pancreatic cancer cells [39]. Moreover, an SPHK1-driven NF-κB/IL-6/STAT3/S1PR1 amplification loop was also essential for the development and progression of colitis-associated cancer [40]. As for NSCLC, studies showed that miR-495-3p reprogrammed the sphingolipid rheostat towards ceramide by targeting SPHK1 and induced lethal mitophagy to suppress NSCLC tumorigenesis [41]. The research here reports the accumulation of SPHK1 in NSCLC, with a potential role in cell cycle regulation exerted through the activation of the downstream effector PBX1; PBX1 in turn directly promotes the expression of S1PR3, thus activating the sphingolipid metabolic circuit.

Homeodomain transcription factor PBX1 orchestrates a complicated gene expression network and interacts synergistically, in a temporal and spatial manner, to endow cells with self-renewal, repopulation and reprogramming capacity [42]. In this study, we demonstrated that the SPHK1/PBX1 axis induced the activation of cyclin-dependent kinases, thereby promoting NSCLC tumorigenicity in vitro and in vivo, in agreement with a previous study showing that the overexpression of PBX1 in HF-MSCs promoted the progression of the cell cycle from G0/G1 to the S phase [43].
In addition to the regulatory role of the SPHK1/PBX1 axis in the cell cycle, SPHK1/S1P signaling is also known to be involved in hyperoxia-mediated ROS generation [44]. Huang et al. demonstrated a critical role for SPHK1/S1P signaling in TGF-β-induced Hippo/Yes-associated protein (YAP) 1 activation and mtROS generation, resulting in fibroblast activation and the induction of pulmonary fibrosis [45]. Meanwhile, PBX1 has also been reported to attenuate HF-MSC senescence and apoptosis by alleviating ROS-mediated DNA damage [46], as well as to regulate ROS production in lung cancer cells [47]. However, whether there is a relationship between the SPHK1/PBX1 axis and changes in intracellular oxidative stress needs to be studied further. In conclusion, these findings all support a potential oncogenic role of the SPHK1/S1P axis in lung diseases, and understanding its carcinogenic mechanism may bring new insights into the treatment of NSCLC and other diseases.

As mentioned above, the S1P-related pathway orchestrates diverse cell functions by acting in an "inside-out" manner through the five S1P receptors, S1PR1-5, or as an intracellular secondary lipid messenger. In mammals, S1PR1, S1PR2 and S1PR3 are ubiquitously expressed in all tissues, while S1PR4 and S1PR5 are tissue-specific, with S1PR4 highly expressed in lymphoid tissues and blood cells and S1PR5 mostly present in the brain, skin and natural killer cells [48]. Deregulation of the S1P signaling pathway contributes to the development and progression of a variety of cancer types, and altered expression of SPHK1 and the S1PRs is a common mechanism. S1PR1 overexpression significantly induced the expression and activity of urokinase plasminogen activator (uPA) and, thus, cell invasion in glioblastoma multiforme [49]. Similarly, the SPHK1/S1P/S1PR3 axis promoted the expansion of aldehyde dehydrogenase (ALDH)-positive cancer stem cells (CSCs) via ligand-independent Notch activation [50]. In our work, NSCLC cells with S1PR3 silenced down-regulated PBX1 expression, underwent G1 phase arrest and failed to respond to exogenous S1P stimulation, suggesting the bridging role of S1PR3 in SPHK1/PBX1 axis-mediated cell function. Zhao et al. revealed that the levels of S1PR3 were significantly increased in human lung adenocarcinoma specimens, mechanistically at least in part due to the TGF-β/SMAD3 signaling axis [51]. Our data supported that S1P catalyzed by SPHK1 activates PBX1 through S1PR3, and that PBX1 in turn can bind to the S1PR3 promoter region and promote S1PR3 transcription, thus forming a positive feedback loop between these two molecules.

The evidence is compelling in supporting an oncogenic role for the SPHK1/S1P/S1PRs signaling cascade in carcinogenesis, and efforts are now focused on targeting this pathway in a therapeutic context. For example, SK1-I, a sphingosine analogue and competitive inhibitor of SPHK1, attenuated glioblastoma growth and proliferation in cell lines and xenograft models [52]. The pro-drug FTY720, a structural analogue of S1P but a functional antagonist of S1PR1, targeted I2PP2A/SET and mediated lung tumor suppression via the activation of PP2A-RIPK1-dependent necroptosis [53]. Sphingomab, a novel anti-S1P monoclonal antibody (mAb), was used to inhibit systemic S1P signaling and attenuate lung metastasis [7].
Studies on SPHK1 inhibitors have made some progress, although no ideal SPHK1 inhibitor is currently used for treating cancer in the clinic, and other compounds, including S1PR1 and S1PR3 (VPC03090) or S1PR2 (AB1) antagonists, are under preclinical evaluation [54,55]. Although there are limited studies on the application of these inhibitors in NSCLC, there is no denying that targeting the SPHK1/S1P/S1PRs signaling cascade presents a potential treatment prospect in related diseases, including NSCLC.

Despite the results obtained, some questions still deserve attention. Firstly, elevated SPHK1 expression has been observed in multiple types of cancer, but the molecular mechanism of SPHK1 up-regulation in tumors remains unclear. The activation of SPHK1 is mediated in three known ways: phosphorylation of the Ser225 site of SPHK1 by extracellular-regulated protein kinases, external stimuli including growth factors and proinflammatory factors, and the up-regulation of transcription, especially through epigenetic regulation [37]. Xia et al. demonstrated that miR-124 down-regulated SPHK1 expression by directly targeting its 3′-untranslated region (3′-UTR) and that miR-124 expression was inversely correlated with SPHK1 expression in gastric cancer samples [56]. Thus, the dysregulation of non-coding RNAs caused by abnormal DNA methylation may be an important reason for the high expression of SPHK1 in tumors [6]. Secondly, our research revealed that SPHK1 exerts an effect on the cell cycle by regulating PBX1 expression. However, the mechanisms by which the SPHK1/PBX1 axis affects cyclin proteins and participates in cell cycle progression remain unclear. Lin et al. reported that the transcription factor E2A-PBX1 suppressed the expression of the cell cycle inhibitor CDKN2C by mediating the expression of the epigenetic regulator SETDB2, thus establishing an oncogenic pathway in ALL [57]. Similarly, the cycle regulator CyclinD1 was maintained as a target gene of the JAK2/STAT3 signaling pathway subject to the regulation of PBX1 in clear cell renal carcinoma [28]. Whether PBX1 regulates the expression of cyclin proteins in these ways in our setting needs to be further explored. Thirdly, among the downstream signaling pathways initiated by S1P, the Akt signaling pathway seemed to participate in the SPHK1/S1PR3/PBX1 axis to regulate cell cycle progression. Meanwhile, the Erk pathway was also involved in the regulation of PBX1 expression, and it remains to be determined whether the Erk pathway is involved in other PBX1-mediated malignant phenotypes and pathological processes of tumor cells. Studies have revealed that PBX3, the homologous family protein of PBX1, promoted the migration and invasion of glioblastoma and colorectal cancer via activation of the MAPK/ERK signaling pathway, which might bring new insights into the oncogenic role of PBX1 in NSCLC [58,59]. In addition, sphingolipid homeostasis is known to be highly regulated by multiple metabolic enzymes, and it remains unknown whether the inhibition of SPHK1 could cause compensatory changes in other metabolic enzymes of the sphingolipid pathway. The contribution of S1P in cells is regulated by its synthesis, catalyzed by SPHKs, and its catabolism by S1P phosphatases or S1P lyase (S1PL); thus, combining the inhibition of S1P synthesis with the promotion of its catabolism may bring potential benefits for tumor therapy [60].
In summary, this study linked the key enzyme of sphingosine metabolism with the transcription factor PBX1 for the first time, and illustrated the potential regulatory role of the SPHK1/S1PR3/PBX1 axis in the cell cycle, together with a feedback interaction loop between PBX1 and S1PR3, in NSCLC. Our study sheds new light on the molecular mechanisms of sphingosine metabolism in the development of NSCLC and may provide new ideas for its treatment.

Reagents

The main reagents used in this experiment include PF-543 citrate (

shRNA/siRNA

The lentiviral SPHK1 shRNA plasmid or shRNA control plasmid (GenePharma, Shanghai, China) was transfected into HEK293T cells together with the auxiliary plasmids pLP1, pLP2 and VSV-G to package lentivirus. Cell lines with stable SPHK1 knockdown, or with the scrambled control, were then established; single colonies were obtained after 2 weeks of selection with 2.5 µg/mL puromycin. Cells were seeded at 50% confluence in six-well plates overnight and then transfected with either 50 nM siRNA (siSPHK1, siPBX1 or siS1PR3; Tsingke, Beijing, China), 2.4 µg PBX1-expressing plasmid (RuiBo, Guangzhou, China), the negative control siRNA, or the pcNC control plasmid using Lipofectamine 2000 Transfection Reagent (Invitrogen, 11668027, Waltham, MA, USA) according to the manufacturer's instructions.

Immunoblotting Assay

Total protein was extracted using RIPA lysate, and the concentration was measured with a bovine serum albumin (BSA) kit (Thermo Fisher Scientific, Waltham, MA, USA). Equal amounts of protein were mixed with 4× loading buffer and separated by 6-15% SDS-PAGE, then transferred onto polyvinylidene fluoride (PVDF) membranes (Millipore, HATF00010, Billerica, MA, USA), which were blocked with 3% BSA for 60 min. The membranes were incubated with specific primary antibodies overnight at 4 °C. The blots were then washed with TBS + 0.1% Tween-20 (TBST) 3 times (10 min each) and incubated with peroxidase-conjugated secondary antibodies for 60 min at room temperature. TANON image software (Beijing YuanPingHao Biotech, Beijing, China) was used to detect the enhanced chemiluminescence (Millipore, WBKlS0100, USA) of the membranes.

Real-Time Quantitative PCR (qPCR)

Total mRNA was extracted using TRIzol reagent (CWBIO, CW0580S, Beijing, China), and RNA concentrations were determined using an ultraviolet spectrophotometer. cDNA was synthesized using a Prime Script™ RT Master Mix Kit (TransGen, AE301-02, Beijing, China) according to the manufacturer's instructions. qPCR was performed using an Ultra SYBR Mixture Kit (CWBIO, CW0957C, Beijing, China) and a Bio-Rad CFX Maestro system. The reaction parameters were as follows: 95 °C for 15 min, then 45 cycles of amplification in three steps: denaturation at 95 °C for 10 s, annealing at 55 °C for 20 s and extension at 72 °C for 30 s.

Cell Viability Assay

3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) reduction is one of the most frequently used methods for measuring cell proliferation and cell viability. Cells were seeded into 96-well plates at a density of 4 × 10³ cells per well in 100 µL of medium containing 10% FBS for 12 h, and fresh 1% medium containing PF-543 at different concentrations (2.5-50 µM) was incubated with the cells for 48 h. The medium was removed, and the cells were cultured with MTT (0.5 mg/mL, 100 µL MTT/well) for 2 h. Then, the absorbance of the DMSO-dissolved blue formazan crystals was read and quantitated. Values are expressed as the means ± SD; n = 5/group. The IC50 (inhibitory concentration causing a 50% response) of the drug dose-response curves for each cell line was calculated using the log(inhibitor) vs. response method in GraphPad Prism (Version 8.3.0), based on the cell viability values obtained above.
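The log(inhibitor) vs. response model used by GraphPad corresponds to a four-parameter logistic. Below is a minimal SciPy equivalent of that fit; all viability values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_dose, bottom, top, log_ic50, hill):
    """Four-parameter logistic: response as a function of log10(dose)."""
    return bottom + (top - bottom) / (1 + 10 ** ((log_dose - log_ic50) * hill))

# Invented dose-response data: PF-543 concentrations (uM) and mean
# viability (% of untreated control) for one cell line.
dose = np.array([2.5, 5, 10, 20, 30, 50])
viability = np.array([95.0, 88.0, 70.0, 51.0, 35.0, 18.0])

popt, _ = curve_fit(
    four_pl, np.log10(dose), viability,
    p0=[0.0, 100.0, np.log10(20.0), 1.0],  # initial guesses
)
print(f"IC50 ~ {10 ** popt[2]:.1f} uM")
```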
Values are expressed as the means ± SD; n = 5/group. The IC50 (inhibitor concentration causing a 50% response) of the drug dose-response curves for each cell line was calculated using the log(inhibitor) vs. response method in GraphPad Prism (Version 8.3.0) from the cell viability values obtained above. Wound-Healing Assays Cells were seeded in a 6-well plate at a density of 1 × 10⁵ cells per well. A 1000 µL micropipette tip was used to make a vertical scratch in the center of each well, and the cells were then cultured in the absence or presence of PF-543 (15 µM) in culture medium containing 1% FBS. The scratch width in each well was monitored, and images at different time points were obtained by light microscopy. The data were analyzed with ImageJ software. Colony Formation Assay Cells were seeded in a 6-well plate at a density of 2000 to 3000 cells per well and treated with 15 µM PF-543 for 72 h, followed by culturing in RPMI-1640 with 10% FBS for 10-14 days. Colonies were then fixed with 10% formalin for 5 min and stained with 0.05% crystal violet for 15 min. The data were analyzed with ImageJ software. Transwell Migration Assay Cells were starved for 24 h, and 8 × 10⁴ cells were resuspended in 200 µL of serum-free medium and transferred into the upper chamber of a 24-well plate, with 750 µL of fresh medium containing 10% FBS in the bottom chamber. After 48 h, the inserts were washed with PBS, fixed with 10% formalin for 5 min and stained with 0.05% crystal violet for 15 min. The data were analyzed with ImageJ software. Cell Cycle Analysis Cells were seeded at 50% confluence in six-well plates overnight and then treated with fresh medium containing 10% FBS and PF-543 (15 µM), S1P (5 µM) or TY-52156 (5 µM) for 48 h. Cell cycle analysis was performed with a cell cycle staining kit (Multi Sciences, CCS012, Hangzhou, China) according to the manufacturer's instructions. Finally, flow cytometry analysis of the cells was performed on a BD FACS Aria II flow cytometer (BD Biosciences, LSR Fortessa, Franklin, NJ, USA). EdU Proliferation Assay Cells were seeded at 50% confluence in six-well plates overnight and then transfected with 50 nM siSPHK1 or treated with S1P (5 µM) for 24 h. Cell proliferation was evaluated using the EdU Proliferation Kit (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions. DAPI (Solarbio, S2110, Beijing, China; 10 µg/mL) was used to stain the nuclei. Slides were then washed and mounted with anti-fade mounting medium, and the fluorescence was imaged with a confocal fluorescence microscope (Leica, TCS SP8, Allendale, NJ, USA). Representative images at 630× magnification are presented. Enzyme-Linked Immunosorbent Assay (ELISA) Cell supernatants and cell pellets were collected; the pellets were resuspended in PBS and lysed by sonication or repeated freeze-thaw cycles to release the cell contents. The intracellular and extracellular S1P contents were determined using an S1P ELISA kit (BlueGene, Shanghai, China) according to the manufacturer's instructions. Values are expressed as the means ± SD; n = 3. Chromatin Immunoprecipitation (ChIP) The ChIP assay was performed using the ChIP Chromatin Immunoprecipitation Kit (Absin, abs50034, Shanghai, China) with an anti-PBX1 antibody according to the manufacturer's instructions. The primers for amplification of the PBX1-binding-site-specific region of the S1PR3 promoter (nucleotides −422 bp to −272 bp) were forward 5′-GAATGCACGGTCGGATAA-3′ and reverse 5′-CCATGATTGAGCGAACAC-3′, with an expected product size of 151 bp. 
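As a companion to the IC50 determination described above, the following sketch reproduces the "log(inhibitor) vs. response" fit outside GraphPad Prism using SciPy. The four-parameter logistic form is the standard model behind that Prism method; the concentration and viability values are illustrative, not the study's data.

```python
# Minimal sketch of a four-parameter logistic ("log(inhibitor) vs. response")
# dose-response fit, as an open-source stand-in for the GraphPad Prism method.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_c, bottom, top, log_ic50, hill):
    # Response falls from `top` to `bottom` as concentration rises (hill > 0).
    return bottom + (top - bottom) / (1 + 10 ** ((log_c - log_ic50) * hill))

conc = np.array([2.5, 5.0, 10.0, 20.0, 50.0])          # µM, the MTT assay range
viability = np.array([92.0, 80.0, 58.0, 33.0, 12.0])   # % of control (hypothetical)

popt, _ = curve_fit(four_pl, np.log10(conc), viability,
                    p0=[0.0, 100.0, np.log10(10.0), 1.0])
print(f"IC50 ≈ {10 ** popt[2]:.1f} µM")
```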
Xenograft Tumor Model All experiments involving mice were approved by the Animal Care Committee of Nankai University, Tianjin, China (Animal Ethics Number, 2022-SYDWLL-0005000). Male BALB/c nude mice at 6 weeks of age (22-25 g) were maintained under specific pathogen-free conditions (animal certificate number, SYXK [JIN]-2019-0001). Equal numbers (2.5 × 10⁶) of luciferase-expressing H460 cells were resuspended in 0.1 mL of PBS and injected subcutaneously into the left and right flanks of the mice (n = 4 for each group). When the tumor volumes reached about 200 mm³ (volume = length × width²/2), the mice were randomly divided into two groups. The administration group was treated with PF-543 (20 mg/kg, i.g.), diluted in PBS; the control group was treated with an equal volume of PBS, and dosing was performed every day. Primary tumors were measured using Vernier calipers. Mice were given the substrate D-luciferin (PerkinElmer, 122799, Waltham, MA, USA) by intraperitoneal injection (150 mg/kg), and tumors were assessed using bioluminescence imaging (BLI; PerkinElmer, IVIS Spectrum, USA) every week. Animals were immobilized and anaesthetized with isoflurane (∼1%) during imaging. After three weeks, tumors were harvested for the experiments. NSCLC Patient Lung Tissue NSCLC patient lung tissues were obtained from the Tianjin Cancer Hospital, China. Written informed consent was obtained from each patient in this study, and the protocols were approved by the ethics committee of Tianjin Cancer Hospital (Medical Ethics Number, NKUIRB2022106). Statistical Analysis Results are shown as means ± SD of at least three independent experiments and were analyzed using GraphPad Prism (Version 8.3.0). Student's t-test was used to assess the significance of differences between two groups, and two-way ANOVA was used for multiple-group comparisons. * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001. Supplementary Materials: The supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijms232112741/s1. Author Contributions: Z.L. and Y.L. planned, performed and analysed the animal experiments. Y.L., X.H. and Z.F. contributed to the design and supervision of the study. Z.L., C.L. and Z.F. wrote and reviewed the manuscript. Corresponding author Z.T. provided clinical samples and, together with C.L., provided critical feedback, helped shape the research and analyzed the manuscript text. All authors participated in the experimental process, approved the final version of the manuscript and had full access to the data. All authors have read and agreed to the published version of the manuscript.
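The caliper-based tumor volume formula and the two-group comparison described above translate directly into a short script; a hedged sketch follows, with all measurements invented for illustration.

```python
# Sketch of the xenograft quantification above: tumor volume from caliper
# readings (length x width^2 / 2) and a Student's t-test between two groups.
# All length/width values are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

def tumor_volume(length_mm, width_mm):
    return length_mm * width_mm ** 2 / 2.0  # mm^3

control = np.array([tumor_volume(l, w) for l, w in [(14, 11), (15, 12), (13, 11), (15, 11)]])
treated = np.array([tumor_volume(l, w) for l, w in [(10, 8), (11, 9), (10, 9), (12, 8)]])

t_stat, p_value = stats.ttest_ind(control, treated)
print(f"control {control.mean():.0f} ± {control.std(ddof=1):.0f} mm^3 vs "
      f"PF-543 {treated.mean():.0f} ± {treated.std(ddof=1):.0f} mm^3, p = {p_value:.3g}")
```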
8,586.2
2022-10-22T00:00:00.000
[ "Biology", "Chemistry" ]
Roughening in Nonlinear Surface Growth Model: The aim of this paper is to examine the coarsening process in the evolution of the surface morphology during molecular beam epitaxy (MBE). A numerical approach for modeling the evolution of surface roughening in film growth by MBE is proposed. The model is based on nonlinear differential equations of Kuramoto–Sivashinsky type, namely the KS and the CKS (conserved KS) equations. In particular, we propose a "combined version" of the KS and CKS equations, which is solved as a function of a parameter r for the 1 + 1 dimensional case. The computation provides the film height as a function of space and time. From this quantity, the change of the film width over time has been studied numerically as a function of r. The main result of the research is that the surface width increases exponentially with time, and that the change in surface width for smaller r values is significantly greater over long time intervals. Introduction One of the great challenges of physics and materials science is to understand the growth and surface morphology of interfaces, both in nature and in technological applications. A recently developed, highly active field of research in statistical physics deals with the understanding of surface growth processes [1][2][3][4][5][6][7][8][9]. Researchers are challenged to explore the relationship between the structure and properties of nanostructured materials, and to develop nanoscale structures in a conscious and planned way. The industrial application of coating processes allows thin layers with prescribed properties to be formed on a solid substrate [2]. Different layer-forming techniques are used to produce films, such as ion-beam sputtering (IBS), molecular beam epitaxy (MBE), chemical vapor deposition (CVD) or physical vapor deposition (PVD). The technique of growing surfaces by MBE has received considerable attention for a wide range of technological and industrial applications. This approach provides a unique capability to grow crystalline thin films with precise control of thickness, composition and morphology. This enables scientists to build nanostructures such as pyramidal or mound-like objects. The evolution of the surface morphology during MBE growth results from a competition between the molecular flux and the relaxation of the surface profile through surface diffusion. The extraordinary richness of pattern formation during MBE is determined by processes which occur locally at the surface. During the last 30 years, extensive theoretical and experimental research activity has been devoted to this topic. A comprehensive review can be found in [3]. Theoretical models have been proposed which include dynamic scaling [4,5]. Continuum theory [6], stochastic approaches with nucleation and growth [7], and lattice gauge theory [8] have significantly improved the knowledge of the growth process phenomenon. In surface dynamic evolution, both surface coarsening and smoothing mechanisms are observed, for example, in MBE and PVD [10][11][12]. During MBE, a weak particle beam is directed at the sample so that the thin layer can be deposited in virtually atomic layers and the composition can be easily controlled. In the case of sputter deposition, the particles leave the source not thermally but through a generated plasma. During both processes, the deposition of the particles and the smoothing of the resulting surface patterns are determined by surface diffusion [9][10][11][12][13][14][15]. 
On the growing surface, self-similar structures can be observed, but the growth of periodic patterns is unstable. In fact, the growth processes of amorphous thin films can be interpreted as an attractive model system. Amorphous structures are spatially isotropic, lacking long-range structural order. Experimental studies of amorphous thin films generated by electron-beam evaporation show the formation of mound-like structures on mesoscopic length scales [2][3][4][5][6][7][8][9]16,17]. Stochastic growth equations of continuum models indicate the complexity of atomic-scale growth processes [1]. Thin films deposited by vacuum techniques such as MBE typically have a thickness in the range of 10-25 µm. In the working chamber of the MBE equipment, each element is evaporated separately under steady-state conditions. The evaporation rate of each element, and thus the composition of the layer to be created, can be controlled very precisely [10,[18][19][20][21]]. The MBE method has been developed to produce single-crystal layers of semiconductors. The substrate is usually kept at a few hundred °C. Atoms arriving at the surface (adatoms) first bind weakly to the surface, migrate, and ultimately form a perfect two-dimensional array. The growth speed is relatively slow, usually one monolayer/s. The benefits of molecular beam epitaxy are that: • The parameters can be controlled very accurately (substrate temperature, growth rate, orientation of the crystalline layer), • The formation of the layer can be continuously monitored, • It is also applicable for making heterostructured semiconductors with sharp transitions and magnetic thin films, • It is suitable for mass production. Several non-independent height-sensitive parameters are specified for the analysis of surface profiles. The measures of the surface morphology are roughness parameters of individual layers and their location in the profile. The average surface roughness and surface height are widely used in industrial practice to characterize the topography of mechanical surfaces. The effects of the surface profile and roughness on friction were studied in [22][23][24][25][26][27][28][29][30][31][32][33][34]. In non-equilibrium physical problems, nonlinear dynamics and pattern formation have attracted the attention of researchers in physical, chemical and biological processes for several decades. It has become clear that similar phenomena play a key role in nanostructured processes. While for macro- and microsystems it is a great advantage to control the processes with special tools and devices, at the nanoscale these devices are missing, or their use is prohibitively expensive [10]. This indicates that the observation of pattern formation and spontaneous self-organization is particularly promising for controlling these methods. Structural coatings with nanostructured thin films can generate or control the pattern of the deposition. The size and shape of nanostructures are spontaneously generated by internal stress or by using external stress. Our goal is to understand and to predict the temporal evolution of thin films. The aim of this paper is to develop numerical simulations of the physical model to obtain the surface structure depending on time and on the parameters involved in the physical model, and to determine the temporal change of the film width in some special cases. Physical Background In the mathematical approach, it is very important to incorporate the uncertainties of the parameters into the model. 
The main sources of uncertainty are difficult to predict; examples are atomic-level elastic coupling effects and changes of the surface state. Partial differential equations describing the characterization of irregular surfaces are provided with free boundary conditions. Such partial differential equations can only very rarely be solved analytically, and the numerical algorithms used are generally unstable; therefore, variational methods are required. The governing equations describing the phenomenon are usually strongly nonlinear differential equations. In surface coating processes, the initial state of the substrate is an almost flat surface, the particles in the vapor arrive perpendicular to the surface, and the deposition process is characterized by the deposition flux. Particles on the surface go through various surface diffusion processes before they reach their final position. The growing surface layer is composed of these particles after a certain time. The height, or morphology, H(t, x, y) of the surface layer depends on time t and the spatial coordinates x, y. The evolution of the surface structure is determined by the particles arriving at the surface and by the particles condensed on it. The process in time and space is determined by the interplay of different mechanisms such as coarsening, smoothing and shape formation. To examine the roughness of the surface, we work with the height profile h(t, x, y) = H(t, x, y) − Ft, where F is the mean deposition rate and h satisfies an equation of the form

h_t = G[h] + η,

with G a functional of the spatial derivatives of h that depends on the setting parameters of the deposition procedure and on material parameters. Our goal is to investigate a deterministic equation whose solutions properly characterize the physical phenomenon and for which the results obtained with the initial condition are likely to remain valid even after a long period of time. The equations are sometimes supplemented with stochastic terms that represent thermal or instrumental noise. In this paper we do not address the effects of noise. The processes in time t are analysed in one dimension, with one spatial coordinate x. The surface height is characterized by the height function h(t, x) measured from the theoretical plane surface. For amorphous growth processes, temporal and translational invariance in the growth direction and in the direction orthogonal to growth are assumed [3]. The isotropy of the amorphous phase implies invariance under rotation and reflection around the direction of growth; therefore, derivative terms of odd order in h(t, x) are not allowed, and the up-down symmetry (h → −h) is required. The dynamics can be described by effective one-dimensional equations. The simplest equation describing film growth is the Edwards-Wilkinson model [18]:

h_t = ν h_xx + η, (1)

where the indices denote derivatives with respect to the variable t or x, ν is a constant and η denotes the noise term. The term ν h_xx represents the surface tension. For example, when desorption is important, the growth can be described by the Kardar-Parisi-Zhang equation [4], which is considered the paradigm of stochastic roughening. For the analytical treatment of the vapor-phase deposition of thin films, a nonlinear term is added to Equation (1) to model the deposition and formation of a thin film:

h_t = ν h_xx + (λ/2)(h_x)² + η, (2)

where λ is a constant. 
In Equation (2), h_t represents the rate of adsorption, the first term on the right-hand side the relaxation of the interface, and the second term the tendency of the surface to grow locally normal to itself. In particular, the Kuramoto-Sivashinsky (KS) equation is a paradigm of a deterministic dynamical system which leads to complex spatial and temporal chaotic behavior of the physical phenomenon, with applications in flame fronts, plasma ion waves and chemical phase turbulence [19]. Its general form is

h_t = −ν h_xx − K h_xxxx + (λ/2)(h_x)², (3)

where the term h_xxxx takes the surface diffusion into account. Equation (3) was originally derived in the context of plasma instabilities, flame front propagation and phase turbulence in reaction-diffusion systems [9]. In the case of vanishing desorption and weak asymmetry, a modified form of Equation (3) is used to model amorphous thin film layers and roughening processes of the surface, where the nonlinear term is replaced by a conserved nonlinear term on the right-hand side:

h_t = −h_xx − h_xxxx + r ((h_x)²)_xx. (4)

Equation (4) is the so-called 'conserved Kuramoto-Sivashinsky' (CKS) equation [35,36], where r is a parameter. In our investigations a combined version of the KS and CKS equations will be analyzed,

h_t = −h_xx − h_xxxx + (h_x)² + r ((h_x)²)_xx, (5)

where the last term on the right-hand side equilibrates the slope-dependent adatom concentration. Equation (5) accounts for relaxation mechanisms, lateral growth, surface diffusion and desorption, and is suitable to characterize the coarsening of thin films. Benlahsen et al. [11] gave an analytic solution in closed form, which was used for examining the coarsening. Munoz-Garcia, Cuerno and Castro [12] have derived and analyzed numerically the growth model (5). Numerical Results In this paper, the solution of (5) is examined by numerical simulations. These results help us to explain the phenomena observed in experiments and to validate the mathematical model; in this way we can follow the stress effects in understanding the physical phenomena. Our choice for the initial condition applied to (5), which mimics the initial surface structure of the substrate, is

h(0, x) = cos(x/16) (1 + sin(x/16)). (6)

The solution of the one-dimensional combined CKS Equation (5) subject to (6) is determined numerically using Fourier spectral collocation in space and the fourth-order Runge-Kutta exponential time differencing scheme for the time discretization. Numerical solutions for the height profiles are calculated for different values of r with the following data: x ∈ [0, 32π], t ∈ [0, 250], N = 256, ∆t = 1/100. The surface height profile h(t, x) calculated from the nonlinear deterministic growth Equation (5) in 1+1 dimensions using the parameter r = 0.01 is shown in Figure 1. We can observe that the evolution of h starts at about t = 50 for the initial condition (6). We note that for an amplitude greater than A = 1 in (6), the evolution of h starts earlier. In Figure 1 we see the chaotic evolution of the ripples, which merge and split at different t and x. Figure 2 illustrates the cross-section of the surface structure at the fixed time t = 1000 for different values of r (r = 0.01, 0.5, 1). It can be seen that with increasing r the cross-section becomes more regular. For large r (r = 1), the cross-section at t = 1000 shows a periodicity like |sin x|. 
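To make the scheme concrete, the sketch below integrates Equation (5) with Fourier spectral collocation and the ETDRK4 exponential time-differencing scheme (in the style of Kassam and Trefethen), using the grid, time step and initial condition (6) quoted above. The form of Equation (5) is the reconstruction given here, so the nonlinear term may differ in detail from the authors' implementation.

```python
# Hedged sketch: Fourier spectral + ETDRK4 integration of the combined
# KS/CKS equation h_t = -h_xx - h_xxxx + (h_x)^2 + r*((h_x)^2)_xx
# (Eq. (5) as reconstructed here), with the paper's grid and time step.
import numpy as np

N, L_dom, r_par, dt, t_end = 256, 32 * np.pi, 0.01, 1 / 100, 250.0
x = L_dom * np.arange(N) / N
h0 = np.cos(x / 16) * (1 + np.sin(x / 16))       # initial condition (6)
k = 2 * np.pi * np.fft.fftfreq(N, d=L_dom / N)   # angular wavenumbers

Lin = k**2 - k**4                                # -h_xx - h_xxxx in Fourier space
E, E2 = np.exp(dt * Lin), np.exp(dt * Lin / 2)

# ETDRK4 coefficients via contour integral (avoids 0/0 for small |Lin|)
M = 32
rc = np.exp(1j * np.pi * (np.arange(1, M + 1) - 0.5) / M)
LR = dt * Lin[:, None] + rc[None, :]
Q  = dt * ((np.exp(LR / 2) - 1) / LR).mean(1).real
f1 = dt * ((-4 - LR + np.exp(LR) * (4 - 3 * LR + LR**2)) / LR**3).mean(1).real
f2 = dt * ((2 + LR + np.exp(LR) * (-2 + LR)) / LR**3).mean(1).real
f3 = dt * ((-4 - 3 * LR - LR**2 + np.exp(LR) * (4 - LR)) / LR**3).mean(1).real

def nonlin(v):
    """Fourier transform of (h_x)^2 + r*((h_x)^2)_xx for spectral coefficients v."""
    hx2 = np.fft.fft(np.real(np.fft.ifft(1j * k * v)) ** 2)
    return (1 - r_par * k**2) * hx2              # (.)_xx -> -k^2 in Fourier space

v = np.fft.fft(h0)
for _ in range(int(round(t_end / dt))):          # ETDRK4 time stepping
    Nv = nonlin(v)
    a = E2 * v + Q * Nv;            Na = nonlin(a)
    b = E2 * v + Q * Na;            Nb = nonlin(b)
    c = E2 * a + Q * (2 * Nb - Nv); Nc = nonlin(c)
    v = E * v + Nv * f1 + 2 * (Na + Nb) * f2 + Nc * f3

h = np.real(np.fft.ifft(v))                      # height profile h(t_end, x)
```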
The temporal evolution of the film roughness, described by its height profile h(t, x), can be seen in Figure 2. The surface roughness is characterized by the surface width w(t), which is defined by the rms fluctuation of h(t, x) at time t for a linear size L of the sample. Statistical evaluation of the parameters is carried out on the surface (mean, standard deviation, etc.). On the profile, where data points are joined by line segments, linear interpolation is applied, but a spline interpolation usually provides a result closer to the continuous profile. When the texture parameters are calculated, the approximation by summations is replaced by integrals. To ensure correctness, the parameter definitions are given for continuous functions, i.e., the parameters are expressed with integrals; in modern specification standards, parameter definitions are always given for the continuous case. For the statistical evaluation of the parameters of the surface microgeometry, it is convenient to center the height function in the evaluation of the surface texture parameters, i.e., the mean height is subtracted from the height. The use of continuous definitions ensures the correctness of the definition and does not imply any numerical approximation. In this paper, the change of roughness in time is illustrated by the function w(t), built from the deviation from the average height as follows:

w(t) = [ (1/L) ∫_0^L (h(t, x) − h̄(t))² dx ]^(1/2), (7)

where the average height function h̄(t) is calculated as

h̄(t) = (1/L) ∫_0^L h(t, x) dx.

The function w(t) is called the mean interface width. We recall that in our calculations L = 32π. Roughening One of the most important tasks in tribology is to design surfaces tailored to operation. Tribological parameters are important in studies, and many of them show strong correlation with surface operational performance. The surface texture affects the contact and temperature conditions between the contacting surfaces. These conditions define the tribological process and the wear mechanism occurring between the surfaces. We note that in papers [11,12] analytical and numerical results were given for Equation (5). In both cases r = 50 is considered. In [11] a truncated parabola was chosen as the initial condition. In [12], the initial height values were chosen uniformly at random between 0 and 1. 
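Applied to a discrete height profile such as the one produced by the solver sketched above, definition (7) becomes a one-line reduction; in practice w(t) is accumulated inside the time loop to obtain curves like those in Figures 3 and 4.

```python
# Sketch of Eq. (7) on the discrete grid: the rms fluctuation of the
# height profile about its spatial mean (the mean interface width).
import numpy as np

def interface_width(h):
    return np.sqrt(np.mean((h - h.mean()) ** 2))
```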
In comparison, the solutions show the same structure in h and in w for large t (t = 1000) in [12] for r = 50 and in our case for r = 10; the parabolic or |sin x| solutions show periodicity and have the same amplitude. The authors noted that the parabolic cells grow in width and height in time and that the coarsening process slows down until stopping completely. In [11] it was shown that the amplitude behaves as 0.5 ln t for large values of t. Several evaluation techniques are known for testing surface microgeometry [37,38]. In industrial practice, parameter-based characterization is used to provide the general roughness (Ra). There is a long list of parameters employed in industry, and the standards [39,40] define the method and parameters of evaluation. The field parameters use the data points measured in the examined area; applying field parameters, the surface heights, slopes and wavelengths can be evaluated. Figure 3 introduces the surface width calculated from the solutions of (5)-(6) with formula (7). It can be seen from Figure 3 that the surface roughening only starts after approximately 50 time units. This time depends on the quality of the initial surface. Our simulations show that when the amplitude of 1 on the right-hand side of the initial condition (6) is decreased to 0.01, i.e., h(0, x) = 0.01 cos(x/16)(1 + sin(x/16)), the surface roughening starts at approximately 120 time units. It can be stated that the smoother the starting surface, the later the coarsening of the surface topography appears. The shape of the curves is also influenced by the coefficient r, which combines the material and physical parameters in Equation (5). We can conclude that w(t) is smoother for higher values of r. The time dependence of the surface width is exhibited in Figure 4 for different values of r. We can also see that, for a given time, the surface roughness decreases with increasing parameter r. Moreover, for large t, the surface width w increases exponentially with time t. Conclusions Although several growth models have been introduced in the literature, the understanding of growth processes is very important. We address the problem of MBE growth in one spatial direction. The time-dependent growth equation introduced by the combined Kuramoto-Sivashinsky equation is investigated using the model (5). In this paper we concentrate on the surface evolution and the surface roughening. We performed simulations for the surface profile applying various values of the parameter r. The profile depends strongly on the initial condition (on the profile of the substrate at the beginning) and on the value of the parameter r involved in Equation (5). If r is small, i.e., the impact of the term ((h_x)²)_xx is small, chaotic ripple formation appears. 
The surface morphology is characterized by the surface width w(t). The roughening starts at approximately 50 time units. The evolution of the surface morphology is affected by the parameter r; Figures 2-4 demonstrate its effect. The surface width increases exponentially with time and decreases with increasing parameter value r. We performed simulations for our growth model in 1 + 1 dimensions for simplicity, but it can be straightforwardly generalized to any dimension.
4,716.6
2020-02-20T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
Effect of Post Treatment on the Microstructure, Surface Roughness and Residual Stress Regarding the Fatigue Strength of Selectively Laser Melted AlSi10Mg Structures This paper focusses on the effect of hot isostatic pressing (HIP) and a solution annealing post treatment on the fatigue strength of selectively laser melted (SLM) AlSi10Mg structures. The aim of this work is to assess the effect of the unprocessed (as-built) surface and of residual stresses on the fatigue behaviour for each condition. The surface roughness of unprocessed specimens is evaluated based on digital light optical microscopy and subsequent three-dimensional image post-processing. To holistically characterize the contributing factors to the fatigue strength, the axial surface residual stress of all specimens with unprocessed surfaces is measured using X-ray diffraction. Furthermore, the in-depth residual stress distribution of selected samples is analyzed. The fatigue strength is evaluated by tension-compression high-cycle fatigue tests under a load stress ratio of R = −1. For the machined specimens, intrinsic defects like pores or intermetallic phases are identified as the failure origin. Regarding the unprocessed test series, surface features cause the failures, which corresponds to significantly reduced cyclic material properties of approximately −60% relative to machined specimens. The post treatments evoke beneficial effects on the surface roughness and the residual stresses. Considering the aforementioned influencing factors, this study provides a fatigue assessment of the mentioned conditions of the investigated Al-material. Introduction Selective laser melting (SLM) enables the manufacturability of complexly shaped and topologically optimized components. Additive manufacturing (AM) is expected to find significant application in demanding fields such as automotive, aviation and biomedical engineering [1][2][3][4][5]. Particularly in complex structures, post-build machining is not always possible; hence, it is of utmost importance to investigate the influence of the unprocessed surface on the fatigue strength in conjunction with the effect of subsequent post treatments [6,7]. It is estimated that about 90% of all engineering failures are caused by fatigue-related damage mechanisms [8,9]. Along with Ni-based alloys, stainless steels and titanium, aluminum alloys, and AlSi10Mg especially, are very commonly used materials for powder-bed-based AM, which necessitates a proper as well as safe assessment of the material qualification regarding fatigue [10]. Current studies on stainless and tool steels as well as titanium alloys deal with the importance of surface quality, process parameters and post treatments, and with possible reasons for defect formation. For example, powder defects, insufficient energy and consequently partially melted powder particles, or material vaporization impact static and cyclic material properties [11][12][13][14][15][16]. Additionally, the manufacturability of lattice structures by AM provides huge potential in terms of lightweight design and is the subject of many research works. The interaction between the building direction, microstructure, and crack propagation is discussed in [11]. The microstructure is found to have a great influence on the fatigue crack morphology and on crack deflection effects. 
Fatigue crack initiation and the propagation rate play a major role in fatigue properties, whereby it is found that initiation is strongly linked with the surface roughness, and the crack propagation rate with the microstructure and stress level [17]. Among others, hot isostatic pressing (HIP) and solution annealing (T6) are two common procedures to enhance material properties [18][19][20]. Given the fact that HIP leads to a reduction of the volume fraction of porosity and improved fatigue resistance for sand-cast aluminum components, a corresponding HIP treatment may be beneficial to AM parts as well [21][22][23]. SLM structures generally exhibit an extremely fine microstructure due to high cooling rates [24]. A heat treatment above the solubility temperature of AlSi10Mg causes microstructural coarsening, since grain boundaries are dissolved and second-phase particles precipitate [9,[25][26][27]. These microstructural changes result in reduced fatigue properties and therefore demand a subsequent age hardening process in order to counteract those unfavourable effects [28]. The exact post treatment parameters are set up incorporating the knowledge of the specimen manufacturer. The influence of the post treatments is further investigated in terms of the surface roughness and residual stresses. The fatigue strength of engineering components decreases with increasing surface roughness: elevated surface roughness tends to generate stress concentrations and favors failure initiation [6]. In this study, the effect of the unprocessed surface is investigated and described using a notch effect factor referred to a machined condition [29,30]. The applicability of an endurance-limit-reducing factor is investigated and validated with experimental results. The impact of residual stresses on the fatigue strength is studied as well within this work. It is of utmost importance to holistically assess material qualification, since a present residual stress state can significantly alter the stress condition at the failure-initiating imperfection [31,32]. A post treatment also influences the residual stress condition in great measure. Neglecting residual stresses may lead to non-conservative design of components, which is the reason for the conducted research work. It is of technical and economical relevance to investigate the influence of residual stresses and to enhance existing concepts to properly and safely assess material qualification regarding fatigue. This study provides a method to assess the impact of surface features under consideration of residual stresses acting as mean stresses. The authors propose an approach to account for residual stresses in fatigue design and furthermore look at notch effects due to surface roughness independently, which allows a differentiated assessment of roughness features and residual stress effects. Materials and Methods Three different post treatment conditions are the subject of this work. Therefore, it was necessary to clearly distinguish between the test series. The following enumeration clarifies the abbreviations used in the present study and provides the applied treatments for each condition. A detailed description of the respective routines is given in Table 1. The first column refers to the treatment, followed by temperature, pressure and time, which provides information about the minimum holding time of the respective treatment. 
The exact post treatment parameters are defined incorporating the knowledge of the specimen manufacturer, aiming to enhance material properties. For this reason, the parameter sets used are classified: • Test series "AB": As-built condition (no post treatment applied), • Test series "HIP": Hot isostatic pressing + age hardening, • Test series "SA": Solution annealing + age hardening. In order to quantify the impact of the surface roughness, each of the above-mentioned test series (AB, HIP and SA) consisted of two batches: one lot exhibiting a machined and polished surface, denoted as "M", and a second set of specimens in the as-built (not machined) surface condition, denoted as unprocessed, "UP". In total, therefore, six test series were investigated. Nine specimens with a polished surface and five specimens with unprocessed surfaces were manufactured for each condition. The abbreviation for the surface condition is added before the post treatment, e.g., M-HIP means machined surface and HIP treated, and UP-SA stands for unprocessed surface and solution annealed. The AlSi10Mg powder used for specimen manufacturing has the chemical composition given by the powder manufacturer in Table 2 [33]. According to the manufacturer specifications, the material corresponds to the standard DIN EN 1706:2010 [34]. All specimens were built in a vertical direction on an EOS M290 system, using a Yb fiber laser with a power of 400 W. The beam diameter is set to 100 µm. The standard parameter set provided by EOS is used for printing. To ensure that all surface-related effects are eliminated for the investigation of the machined conditions, a respective number of specimens is manufactured with a certain machining allowance in order to subsequently remove the boundary layer. Following the manufacturing process, the respective post treatment was applied. Afterwards, the specimens for the machined test series were processed to the final geometry by turning and polishing, as shown in Figure 1. The geometry of the specimen does not correspond to any standard but is designed to minimize the stress concentration within the testing section caused by the narrowing shape. A numerical analysis reveals a maximum principal stress concentration of K_t = 1.045, hence 4.5%, at the thinnest point. The same specimen geometry and manufacturing parameters were used in previous work already published by the authors in [35]. SEM Investigation To characterize the impact of the respective post treatment on the microstructure, backscatter-SEM images of microsections were taken with a Carl Zeiss EVO MA 15 microscope in accordance with [36]. Both post treatments were conducted above the solubility temperature of the investigated material [37][38][39]. It is mentioned that the solution temperature of the cast alloy is above 450 °C, and therefore a subsequent age hardening at low temperatures leaves the microstructural evolution unchanged [40]. High Cycle Fatigue Assessment For all test series, a modified staircase test method was utilized [41]. The high-cycle fatigue testing was carried out under a load stress ratio of R = −1 on a RUMUL Mikrotron resonant testing rig. The test frequency was in the region of 106 Hz. Specimens were gripped with collets at both ends. The test was aborted when total fracture occurred or when the run-out criterion of 1E7 load cycles was reached. 
In order to generate more data within the finite life region, and conservatively not ruling out the possibility of pre-damage at load levels below the fatigue limit, run-outs were reinserted [42]. In the following work, selected results referring to the AB and HIP conditions have been partially published within preliminary studies in [35]. All given stress values were normalized to the nominal ultimate tensile strength (UTS) of the base material without any post treatment, given by the powder manufacturer [33]. The fatigue strength at 1E7 load cycles for a survival probability of 50% (σ_f) was statistically determined by applying the arcsin √P transformation, described in [43]. The assessment of the S/N-curve within the finite life region was done utilizing the ASTM E739 standard [44]. Mean stresses impact the fatigue strength: the endurance limit decreases with growing mean stresses, such as static loads superimposed on the cyclic loading [45]. The effect is usually depicted as the fatigue strength amplitude plotted over the mean stress. A large number of concepts have been developed in order to predict the fatigue strength for different mean stress states [46,47]. Two models, one according to Gerber [48] and another one developed by Dietmann [49], were utilized within this work to consider a certain mean stress state caused by residual stresses and its impact on fatigue. Equations (1) and (2) serve as two models to correct the endured stress amplitude depending on the present residual stress state. Both require the ultimate tensile strength σ_uts for the respective condition, which was provided by the specimen manufacturer. The parabolic Gerber concept as well as the empirical Dietmann equation showed high statistical correlation with experimental data, which is why those two models were applied. In the following, σ_a(−1) stands for the stress amplitude at a load stress ratio of R = −1, and σ_m refers to the present mean stress. Considering this, the endurable stress amplitude σ_a at a certain mean stress can be estimated as

σ_a = σ_a(−1) · [1 − (σ_m/σ_uts)²], (1)

σ_a = σ_a(−1) · √(1 − σ_m/σ_uts). (2)

Residual Stress Measurement Methodology The holistic characterization of the contributing factors to the fatigue strength causes the necessity to assess the residual stress state [31,32], especially in regard to the building process [50,51]. The analysis was performed with X-ray diffraction using an X-RAYBOT from MRX-RAYS, located in Brumath, France. A psi-mounting configuration with Cr-Kα radiation was used along with a collimator size of 2 mm in diameter. The evaluation was based on the 2θ-sin²ψ method. The measurement setup was according to the ASTM E915-96 standard [52]. The exposure time was set to 30 s for each increment, opting for 25 ψ-increments, with a tilting angle of the X-ray tube from −40° to +40°. The measurement procedure corresponds to the ASTM E2860-12 standard [53]. The residual stress analysis is performed on all unprocessed specimens to avoid falsification of the results due to machining influences. Since the fatigue strength at 1E7 load cycles is of interest, one should be aware of a possible depletion of residual stress under tensile loading. For this reason, the validation of the cyclic stability of residual stresses is necessary in order to ensure the usability of the measured stresses in the following work. Therefore, in situ residual stress measurements were conducted during fatigue testing. 
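A minimal sketch of the two corrections follows, using the standard Gerber and Dietmann forms assumed here for Equations (1) and (2); the function names and example values are placeholders, not the study's data.

```python
# Hedged sketch of the mean stress corrections assumed in Eqs. (1)-(2):
# the Gerber parabola and the Dietmann relation. Stresses are treated as
# normalized to the UTS, as done throughout the paper.
def gerber_amplitude(sigma_a_r1, sigma_m, sigma_uts):
    """Endurable amplitude at mean stress sigma_m (Gerber)."""
    return sigma_a_r1 * (1 - (sigma_m / sigma_uts) ** 2)

def dietmann_amplitude(sigma_a_r1, sigma_m, sigma_uts):
    """Endurable amplitude at mean stress sigma_m (Dietmann)."""
    return sigma_a_r1 * (1 - sigma_m / sigma_uts) ** 0.5

# Placeholder example: fully reversed strength 0.3*UTS, mean stress 0.1*UTS.
print(gerber_amplitude(0.30, 0.10, 1.0), dietmann_amplitude(0.30, 0.10, 1.0))
```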
For the assessment of the cyclic stability of the present residual stresses, the fatigue testing was stopped, the residual stresses were measured, and the testing was afterwards continued. In order to avoid falsification of the results, the specimen remained clamped in the testing rig. Surface Roughness Evaluation An engineering approach to characterize the reduction of the fatigue strength due to the surface roughness includes the maximum depth of the roughness valleys as well as the roughness valley radius. Based on a concept of Peterson, the unprocessed surface, exhibiting micro-notches due to the building process, is characterized. Considering the localized stress concentration of such features, the consequent reduction of fatigue properties can be described by the notch effect factor K_t; see Equation (3) [30]:

K_t = 1 + 2 √(S_t/ρ). (3)

This approximate solution for a shallow, assumed ideally elliptical notch is only a function of the notch depth and the radius of curvature. Therefore, this concept incorporates the maximum surface deviation S_t and the notch root radius ρ. Based on recommendations by the author, the support effect was not taken into account and set to n = 1 as a conservative approach; for this reason, K_t equals K_f. This concept finds application within this study to predict the reduced endurable stress amplitude of the unprocessed specimens, beginning with the fatigue strength of the machined ones in the mean (residual) stress free state; see Equation (5) below. Utilizing a light optical microscope and three-dimensional image processing, it was possible to determine the average maximum surface deviation (S_t) in a non-destructive way [54], as shown in Figure 2. Since the specimen geometry is round and additionally possesses a curvature within the testing area, proper filtering of the captured surface topography is necessary. In a first step, the round specimen is partitioned into 12 sections that are individually captured and represent the entire surface. As an example, Figure 3a pictures the primary profile, i.e., the geometrical structure, of one surface segment detected by the digital optical microscope. The thereby generated three-dimensional datasets were processed within a user-defined routine, as described in [55]. By means of a second-order robust Gaussian regression filter, the roughness profile is calculated applying a cut-off wavelength λ_c of 2.5 mm. The cut-off length was chosen as recommended by the authors in [55]. This results in the waviness profile as pictured in Figure 3b and the associated roughness profile, see Figure 3c, of the exemplified surface segment. The roughness profile now entirely reflects the surface topography, while the waviness profile corresponds to the specimen geometry, i.e., its form. After the areal roughness calculation, the evaluated area is separated into sub-areas, 1 × 1 mm² in size, by means of the routine and plotted onto the measured surface image. An exemplary roughness map of the areal roughness parameter S_t is shown in Figure 2. Yellow areas mark high roughness values, and blue areas mark low ones. Due to that, not only can local areal roughness parameters be linked to surface topography properties, such as the notch depth, but information about the location of the structures is also gained. Microstructural Analysis In the untreated condition, see Figure 4a, one can identify pores and grain boundaries, as also detected in [56]. The post treated conditions differ from the as-built condition, as significant changes in the microstructure are detected. 
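Under the shallow-elliptical-notch form assumed here for Equation (3), the notch factor reduces to a one-line function of S_t and ρ; the example values are hypothetical.

```python
# Sketch of the Peterson-type notch factor assumed for Eq. (3):
# K_t = 1 + 2*sqrt(S_t / rho), with support factor n = 1 so that K_f = K_t.
import math

def notch_factor(s_t, rho):
    """Stress concentration of a shallow notch of depth s_t and root radius rho."""
    return 1 + 2 * math.sqrt(s_t / rho)

print(notch_factor(s_t=120.0, rho=60.0))  # hypothetical values (µm) -> ~3.83
```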
Grain boundaries are no longer clearly visible, and precipitates are formed within the microstructure. This is observed for both post treatments; see Figures 4b and 5. By virtue of the heat influence, the post treatment causes melt pool boundary softening, implying microstructural evolution and precipitation [57]. Additionally, the porosity and the maximum extension of pores are significantly decreased for the HIP condition, as also detected in [58] and published within previous work on this topic in [35]. The changes to the microstructure found in the conditions with a heat treatment above 500 °C are investigated in detail. Iron-rich precipitates and silicon agglomerations are detected; compare [59]. These microstructural features are also found in [27] for both the HIP and SA conditions. An EDX analysis performed on a Fe-rich precipitate, the spot marked as 'a' in Figure 5, shows a chemical composition (Al70.24Si15.24Fe14.32) that calculates to Al5Si1.1Fe1.02 and is similar to the β-phase Al5SiFe, reported and found in [60][61][62]. Due to the elevated temperature above the solubility temperature, silicon crystals are precipitated at the grain boundaries, which grow to their respective size throughout the subsequent annealing [37,38,63]. An analysis at spot 'c' confirms the labelled agglomerations as Si-particles, which are well reported in [64,65]. The detected microstructural features decelerate long crack growth. The crack front interferes with these microstructural features, and the propagation is obstructed and forced to change its direction, whereby the overall resistance against fatigue crack growth is enhanced. The improved resistance against crack propagation is attributed to deflection and energy dissipation at the crack tip [25,66]. Within this study, this microstructural behavior is observed for the HIP and the SA condition; compare [35]. After the post treatment, the base material in area 'b' shows a chemical composition of Al94.27Si5.73, which differs from the as-built matrix due to precipitation. Surface Residual Stresses and Cyclic Stability For the unprocessed condition, it is highly necessary to know the residual stresses at the surface, since this is the location of the failure origin, and the condition within the failure initiation area is essential. The interaction between the surface condition, residual stresses and the microstructure, and the importance of their codependency, is also reported in [67,68]. To ensure a proper assessment of the axial residual stresses at the surface, three measurements along the circumference at intervals of 120° are performed. The measurements are conducted before testing and clamping. For further analysis, the mean value is considered to serve as a base value, with the scatter band representing a confidence level of 95%. The residual stress results are normalized to the UTS of the material and abbreviated as σ_res,ax,surf. This allows for quantifying the intensity of residual stresses as a share of the ultimate tensile strength and enables a sound assessment of the range in which the occurring stresses lie. All measured stresses are in the tensile region. The analysis reveals a significant decrease of residual stresses for both post treated conditions relative to the AB condition. It is found that HIPing reduces the axial residual stresses at the surface by 54.2% and solution annealing by 46.7%. 
Each specimen which reached the run-out criterion was measured again and showed no change. The outcome of the in situ residual stress measurements validates that testing at the fatigue limit (run-out load level) causes no notable changes of the surface residual stresses. This case is depicted by the two black lines in Figure 6. However, increasing the tensile load above the fatigue limit either leads to a relaxation of the residual stresses or to failure before measurable changes to the residual stress state occur; see the red lines in Figure 6. The findings therefore prove that residual stresses measured before testing are still present after testing at the run-out level, or even remain unchanged until failure. This allows the values measured before testing to be treated as permanently present mean stresses. All results are given in Table 3, whereby all stress values are normalized to the surface stress before testing but after the specimen is clamped. In-Depth Residual Stress Distribution To characterize the residual stress state directly at the crack initiation site for the machined specimens, it is necessary to electrolytically polish down to the depth in which the failure-responsible defects lie. The determination of the residual stresses at the crack origin is essential, since they are substantially involved in failure initiation and crack growth; the present stress is denoted in the following as σ_res,ax,surf for crack initiation at the surface and σ_res,ax,bulk for failure from internal defects. To negate the effect of machining, an in-depth progression of the residual stresses of the AB and HIP conditions is recorded. Based on the fracture surface analysis of the machined specimens, it is found that the average failure-critical imperfection lies either at the surface or at a maximum depth of about 200 µm beneath it. Considering this, a conservative assumption is made to take the mean residual stress estimated within the aforementioned region for further analysis. The in-depth progression is shown in Figure 7, in which all stress values are normalized to the respective stress measured at the surface to highlight the distribution of residual stresses over depth. The greyed-out area marks the machining allowance of 1 mm that is added in the building process. Beneath the unprocessed surface, a stress peak is observed for the HIP and AB conditions. Both show a similar progression, with significantly increased axial tensile stresses in the area in which the critical imperfections lie, indicated by the red-shaded area. The results are summarized in Table 4. The comparably high residual stresses at the crack initiation spot act as an existing mean stress; they change the present mean stress state and affect the crack initiation, propagation and consequently the fatigue strength in great measure [69,70]. Surface Roughness Parameter Evaluation For the application of the notch effect concept by Peterson, the mean values of all gathered data for S_t and ρ are taken into the calculation of K_t, since the most critical surface feature is a certain combination of notch depth and notch valley radius. Since the aim is to non-destructively determine the reduction in fatigue strength, the values for S_t and ρ are taken from the optical surface assessment and not from a subsequently performed fracture surface analysis. Empirical investigations show that the mean value of the maximum valley depth of all 12 segments describes the critical surface roughness properly. 
For a suitable assessment of the area-based roughness parameter S_t, and for comparison and validation of the optical evaluation, the maximum surface deviation is also measured on the fractured surfaces. The non-destructive optical surface evaluation is in good agreement with the mean values from the measurements on fractured specimens. The average deviation between the two methods varies between 5.8% and 7.4%, which confirms the applicability of the used evaluation routine. The results for the surface roughness parameter S_t are normalized to the mean value evaluated by the fracture surface analysis and are summarized in Table 5. It is observed that both post treatments have a beneficial impact on the surface roughness; S_t is decreased by about 14%. The specimens are printed in a vertical (axial) direction, which leads to a periodically repetitive formation of the surface shape in the building direction. This recurring surface texture for additively manufactured structures is also reported in [71]. Three-dimensional surface imaging allows the measurement of the recurring roughness valley radii (ρ) in the loading direction with only minor deviations; see Figure 8. The evaluation is based on line measurements on several selected specimens and at different locations around each specimen. It is mentioned that the notch radii cannot be measured on the fractured surface, since this would provide the notch radius within the wrong plane, namely perpendicular to the loading direction. The comparison of the investigated conditions reveals that the average roughness valley radius increases due to the post treatments, which mitigates the sharpness of the notch. High Cycle Fatigue Testing The high-cycle fatigue test results for the HIP condition are displayed in Figure 9. The solid lines denote the machined surface condition, whereby black with square markings represents the AB condition and blue with triangle markers is used for the HIP condition. Only the comparison of the machined HIP and AB conditions has been published within a previous study in [35]. The dashed lines stand for the unprocessed surface condition. The displayed S/N-curves are evaluated at a survival probability of 50%. All results are summarized in Table 6. The finite life region is denoted as FLR, and the long life region is abbreviated as LLR. In order to obtain reasonable results and ensure testing within the linear-elastic region, the peak load level for testing is below the yield strength of the material. Comparing the machined conditions, the HIP treatment leads to an increase in fatigue strength of 13.8% relative to the AB condition. A similar trend is observed for the unprocessed condition. The HIPed series exhibits a 25.3% higher fatigue strength than the AB series. For both post treatment conditions, the difference between the machined and the unprocessed surface condition is significant. The as-built surface decreases the fatigue strength by 62.2% for the HIP condition and by 65.6% for the AB condition. Hence, the assessment of the surface roughness is essential. Regarding the scattering between 10% and 90% survival probability, HIPing narrows the scatter band for each surface condition within the finite life region as well as in the long life region. It is observed that the HIP treatment also positively impacts the slope of the S/N-curves in terms of a less steep behaviour. Some of these results have already been published in [35]. The following Figure 10 shows the fatigue test results for the solution annealed condition. 
As described before, black lines and markings refer to the AB condition. Analogous to Figure 9, the green solid line presents the results for the machined condition, and the green dashed line the results of the unprocessed condition. Green circular markings are used to flag the test data. Solution annealing reveals the same trend as observed for the HIP condition. The fatigue strength of the machined SA condition lies 5.9% above the fatigue strength of the machined AB condition. In regard to the unprocessed surface condition, solution annealing enhances the fatigue strength by 25.3%. One can observe that the unprocessed surface again has a major impact on the fatigue behaviour, as machining leads to an improvement of +146%. The scattering between 10% and 90% survival probability is again decreased for the machined condition. The slope in the finite life region is again found to be less steep than for the AB condition. Fracture Surface Analysis In order to holistically characterize the fatigue behaviour of the investigated material, a fracture surface analysis is carried out for every tested specimen. It is found that different mechanisms cause the failures. Failure from Intrinsic Imperfections Investigating the fractured surfaces of the machined AB condition reveals that, in every case, near-surface pores are responsible for failure; see Figure 11a. The size and location of the imperfection are the determining criteria in terms of the fatigue strength [72][73][74]. For the machined HIP test series, the failure initiates from microstructural inhomogeneities. The debonding of Si-crystals is responsible for crack initiation, as depicted in Figure 11b. This failure behaviour has already been published within preliminary studies on this topic [35]. The post treatment of the SA condition is similar to the HIP treatment, which leads to a comparable microstructure. In contrast, however, the fracture surface analysis displays a combined failure cause of microstructural inhomogeneities and porosity, as shown in Figure 11d. The occurring porosity may be attributed to the lack of isostatic pressure during the SA treatment. To confirm the failure mechanism, an EDX analysis is performed on the fractured surface. In regard to Figure 11c, area 'a' shows a chemical composition of Al18.06Si65.41Mg16.53. Spots 'b' and 'c' consist to a great measure of silicon, which leads to the interpretation of debonding Si-crystals, as also found in [66]. In comparison, spot 'd', which lies beneath a delaminated Si-slab, is found to be base material. Failure from Surface Features The main outcome of the fracture surface analysis for all test series and each specimen exhibiting an unprocessed surface is that the surface texture is in every case failure-critical. The effect of the surface roughness dominates all other imperfections and microstructural features in terms of crack initiation and the consequential fatigue strength. This behaviour is also observed in [75]. Figure 12a,b highlight the failure origin from a roughness valley. The substantial effect of the surface roughness on the fatigue strength is well reported in [76][77][78]. The given examples are from the unprocessed AB series. No evidence of pores or microstructural inhomogeneities is found in the surrounding area for any test series. In conclusion, one can distinctively determine the surface condition as the crucial feature; it overshadows all other failure causes, which are therefore negligible in the presence of an unprocessed surface. 
Mean Stress Correction Macroscopic residual stresses of the first order may be considered to superimpose on the load stresses and therefore act as mean stresses, causing a shift of the actual load stress ratio to an effective stress ratio R_eff [79,80]. The testing is performed at a load stress ratio of R = −1, which means that the mean stress is zero. Taking the effective mean stress caused by load and residual stresses into account, the load stress R-ratio is shifted to an effective R-ratio according to Equation (4): R_eff = (σ_m + σ_RS − σ_a)/(σ_m + σ_RS + σ_a), where σ_a is the stress amplitude, σ_m the load mean stress and σ_RS the residual stress at the failure location. For the HIP condition, the present residual stresses lead to an effective stress ratio of R_eff = −0.36 for the machined and R_eff = −0.38 for the unprocessed surface condition. The effective stress ratio for the AB machined condition calculates to R_eff = 0.09, and even to R_eff = 0.1 with an unprocessed surface. Hence, residual stresses clearly alter the testing condition significantly. To assess the impact of the surface roughness independently, the stress amplitude is extrapolated back to a ratio of R = −1; the aim is to eliminate all influencing factors but one, the surface roughness, enabling its independent quantification. This correction of the stress amplitude to a mean stress of zero accounts for the influence of residual stresses and simultaneously gives a conservative estimate of the endurable fatigue strength amplitude as if no residual stresses were present. Figure 13 presents the mean stress corrected fatigue strength amplitude according to Gerber, denoted as σ_f,M,cor,G in the following. The same procedure is applied for the correction according to Dietmann, denoted as σ_f,M,cor,D, shown in Figure 14. The results are also summarized in Table 7. Comparing both concepts, the model according to Gerber is more conservative than the Dietmann one with regard to the experimental results σ_f,exp. In conclusion, the residual stress state is shown to contribute substantially to the fatigue resistance; this effect can be observed in the increase of the endurable fatigue strength amplitude for the AB and HIP conditions. The difference between AB and HIP in the residual stress free state may be attributed to beneficial microstructural changes and the different failure initiation modes of the HIP condition, as previously presented and published in [35]. Both concepts lead to similar results, estimating a benefit due to HIPing of approximately +5.8% for the machined and 23.9% for the unprocessed condition; see Table 8. The importance of assessing the surface roughness caused by the building process is obvious, since it is unequivocally found to be the fatigue strength determining factor. The fatigue test results as well as the fracture surface analysis underline the need to evaluate the surface roughness and its influence. The results for the notch factor of all conditions are given in Tables 9 and 10, in which the estimated fatigue strength based on the analytical model is abbreviated as σ_f,UP,mod and the experimental results are denoted as σ_f,UP,exp, respectively, for each unprocessed condition. As expected from the roughness parameters, the notch effect is more pronounced for the AB condition than for the post treated conditions.
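To make the mean stress correction concrete, the following sketch implements Equation (4) together with the Gerber and Dietmann corrections. The parabolas are assumed here in their common textbook forms, Gerber: σ_a/σ_A + (σ_m/R_m)² = 1 and Dietmann: (σ_a/σ_A)² + σ_m/R_m = 1, with σ_A the equivalent amplitude at R = −1 and R_m the tensile strength; the paper does not spell out these forms, and all numbers are placeholders.

```python
import numpy as np

# Minimal sketch of the mean stress correction chain. Placeholder values,
# not values from this study.
def r_effective(sigma_a, sigma_m, sigma_rs):
    """Effective stress ratio when residual stress acts as extra mean stress (Eq. 4)."""
    return (sigma_m + sigma_rs - sigma_a) / (sigma_m + sigma_rs + sigma_a)

def gerber_R_minus1(sigma_a, sigma_m, R_m):
    """Equivalent fully reversed amplitude, assumed Gerber parabola."""
    return sigma_a / (1.0 - (sigma_m / R_m) ** 2)

def dietmann_R_minus1(sigma_a, sigma_m, R_m):
    """Equivalent fully reversed amplitude, assumed Dietmann parabola."""
    return sigma_a / np.sqrt(1.0 - sigma_m / R_m)

sigma_a, sigma_rs, R_m = 80.0, 40.0, 330.0   # MPa, placeholders
print(f"R_eff = {r_effective(sigma_a, 0.0, sigma_rs):+.2f}")
print(f"Gerber:   {gerber_R_minus1(sigma_a, sigma_rs, R_m):.1f} MPa")
print(f"Dietmann: {dietmann_R_minus1(sigma_a, sigma_rs, R_m):.1f} MPa")
```

With tensile residual stresses, both corrections return an equivalent fully reversed amplitude above the measured one, and Dietmann yields the larger value, consistent with Gerber being the more conservative concept as reported above.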
Beginning with the corrected fatigue strength of the machined condition (σ_f,M,cor) and dividing it by the notch factor (K_t), which acts as a reduction factor accounting for the surface roughness, estimates the fatigue strength of the unprocessed condition; see Equation (5): σ_f,UP,mod = σ_f,M,cor/K_t. Eventually, the analytically estimated, mean stress corrected fatigue strength is compared to the experimentally determined fatigue strength, both in a residual stress freed state. The results of the analytical approach deviate in the range of +6.4% to +16.3% from the experimental results. This confirms that the applied procedure can be used to estimate the reduction of fatigue properties due to the surface roughness, starting from a machined surface condition in a residual stress freed state and utilizing mean stress corrected values according to Gerber (Table 9) and Dietmann (Table 10). Both concepts are slightly non-conservative, but the scatter band (1:T_s) in the long life region, 1:1.57 for the UP-AB and 1:1.43 for the UP-HIP condition as given in Table 6, needs to be considered as well. Consequently, the estimated mean fatigue strength lies well within the scatter of the experimental results. The presented concept is further utilized to predict the fatigue strength of the SA condition. The two other series, AB and HIP, reveal practically the same effective stress ratio due to residual stresses in their machined and unprocessed conditions; since only the residual stresses of the unprocessed SA condition are measured, the same R-ratio is therefore assumed for the machined SA condition. Applying this procedure, the fatigue strength of the machined SA condition can be properly predicted with both concepts, denoted as σ_f,M,pred,G/D. The deviation from the experimental results is calculated to only +3.4%; see Table 11. Discussion Based on the results presented in this paper, the fatigue strength of additively manufactured AlSi10Mg structures is altered by post treatments, the residual stress state and the surface condition. The fatigue strength is improved by HIPing and solution annealing, for a machined as well as an unprocessed surface, compared to the AB condition. This study also demonstrates a beneficial effect of the investigated post treatments on the microstructure and consequently on fatigue. The investigations of the surface condition reveal that the roughness significantly reduces the fatigue properties: compared to a machined surface, the as-built surface causes a reduction of about −60%. The surface roughness analysis shows that the HIP as well as the SA treatment positively influences the decisive surface related characteristics, owing to the heat input and, in the case of HIP, the applied pressure. The maximum roughness valley depth is decreased and the average roughness valley radius is enlarged compared to the AB condition, mitigating the notch sharpness. These beneficial changes to the surface topography contribute to an improved fatigue behaviour of +25.3% for both conditions compared to the AB condition. This work leads to the conclusion that the residual stress state at the respective failure origin can be considered as a present mean stress, whereby the intended load stress ratio is shifted to an effective stress ratio. Another finding of the conducted investigations is that, due to the heat input of the post treatments, residual stresses are reduced by roughly 50%.
An analysis of the in-depth progression reveals tensile residual stresses that exceed the surface values by a factor of almost three. By means of the presented methodology, a prediction of the reduced fatigue strength of unprocessed specimens, relative to the machined condition, is provided. The developed model is shown to be well applicable to the investigated test series in a residual stress free state. Although the predicted fatigue strength amplitude is slightly non-conservative, the estimate lies well within the scatter band of the experimental results in the long life region.
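A compact sketch of this prediction chain (Equation (5)), under placeholder numbers rather than the study's data, reads as follows; it simply divides the corrected machined strength by the notch factor and compares against an assumed experimental value.

```python
# Minimal sketch of the roughness-based fatigue strength prediction
# (Equation (5)): the mean stress corrected strength of the machined
# condition divided by the notch factor K_t estimates the strength of
# the unprocessed condition. Placeholder numbers, not study data.
def predict_unprocessed(sigma_f_machined_cor: float, K_t: float) -> float:
    return sigma_f_machined_cor / K_t

sigma_f_M_cor = 140.0   # MPa, corrected machined fatigue strength (placeholder)
K_t = 2.4               # notch factor from the roughness assessment (placeholder)
sigma_f_UP_mod = predict_unprocessed(sigma_f_M_cor, K_t)

sigma_f_UP_exp = 52.0   # MPa, experimental value (placeholder)
deviation = (sigma_f_UP_mod - sigma_f_UP_exp) / sigma_f_UP_exp * 100.0
print(f"predicted {sigma_f_UP_mod:.1f} MPa, deviation {deviation:+.1f} %")
```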
8,310.4
2019-10-16T00:00:00.000
[ "Materials Science" ]
Signatures of Synchrony in Pairwise Count Correlations Tatjana Tchumatchenko, Theo Geisel, Maxim Volgushev and Fred Wolf Concerted neural activity can reflect specific features of sensory stimuli or behavioral tasks. Correlation coefficients and count correlations are frequently used to measure correlations between neurons, design synthetic spike trains and build population models. But are correlation coefficients always a reliable measure of input correlations? Here, we consider a stochastic model for the generation of correlated spike sequences which replicate neuronal pairwise correlations in many important aspects. We investigate under which conditions the correlation coefficients reflect the degree of input synchrony and when they can be used to build population models. We find that correlation coefficients can be a poor indicator of input synchrony for some cases of input correlations. In particular, count correlations computed for large time bins can vanish despite the presence of input correlations. These findings suggest that network models or potential coding schemes of neural population activity need to incorporate temporal properties of correlated inputs and take into consideration the regimes of firing rates and correlation strengths to ensure that their building blocks are unambiguous measures of synchrony. INTRODUCTION Coordinated activity of neural ensembles contributes to a multitude of cognitive functions, e.g., attention (Steinmetz et al., 2000), encoding of sensory information (Stopfer et al., 1997; Galan et al., 2006), stimulus anticipation and discrimination (Zohary et al., 1994; Vaadia et al., 1995). Novel experimental techniques allow simultaneous recording of activity from a large number of neurons (Greenberg et al., 2008) and offer new possibilities to relate the activity of neuronal populations to sensory processing and behavior. Yet, understanding the function of neural assemblies requires reliable tools for quantification, analysis and interpretation of multiple simultaneously recorded spike trains in terms of underlying connectivity and interactions between neurons. As a first step beyond the analysis of single neurons in isolation, much attention has focused on pairwise spike correlations (Schneidman et al., 2006; Macke et al., 2009; Roudi et al., 2009), their temporal structure and the influence of topology (Kass and Ventura, 2006; Kriener et al., 2009; Ostojic et al., 2009; Tchumatchenko et al., 2010). Pairwise neuronal correlations are traditionally quantified using count correlations, e.g., correlation coefficients (Perkel et al., 1967). However, it remains largely elusive how correlations present in the input to pairs of neurons are reflected in the count correlations of their spike trains. What are the signatures of input correlations in the count correlations? And vice versa, what conclusions about input correlations and interactions can be drawn on the basis of count correlations and their changes? Here we address these questions using a framework of Gaussian random functions. We find that correlation coefficients can be a poor indicator of input synchrony for some cases of input correlations. In particular, count correlations computed for time bins larger than the intrinsic temporal scale of correlations can vanish for some functional forms of input correlations. These potential ambiguities were not reported in previous studies of leaky integrate-and-fire models, which focused on the analytically accessible choice of white noise input currents (de la Rocha et al., 2007; Shea-Brown et al., 2008). The paper is organized as follows: we first introduce several common spike count measures (Section "Materials and Methods") and the statistical framework (Section "Results"). Then we study the zero time lag correlations (Section "Spike Correlations with Zero Time Lag") and the influence of the temporal structure of input correlations on measures of spike correlations (Section "Temporal Scale of Spike Correlations"). We show that spike count correlations can vanish despite the presence of input cross correlations (Section "Vanishing Count Covariance in the Presence of Cross Correlations"). Finally, we discuss potential consequences of our findings for the design of population models and the experimentally measured spike correlations. MEASURES OF CORRELATION The spike train s_i(t) of a neuron i is completely described by the sequence of spike times t_i. This description is often simplified using discrete bins of size T (Figure 1). To describe pairwise spike correlations, several competing measures are used (Perkel et al., 1967; Svirskis and Hounsgaard, 2003; Schneidman et al., 2006; de la Rocha et al., 2007; Shea-Brown et al., 2008; Roudi et al., 2009). Here, we focus on the most commonly used measures of spike correlations: conditional firing rate, correlation coefficient, normalized correlation coefficient and count covariance. We will consider the relation between these measures and their dependence on (1) the underlying input correlation strength, (2) firing rate, (3) temporal structure of spike trains, and (4) size of the time bin used to compute count correlations. The spike timing correlations of two spike trains s_i(t) and s_j(t) are often quantified using the conditional firing rate function ν_cond,ij(τ) = ⟨s_i(t) s_j(t + τ)⟩/ν_i (Binder and Powers, 2001; Tchumatchenko et al., 2010), where ν_i and ν_j are the mean firing rates of neurons i and j, respectively. Correlations within a spike train are described by the auto conditional firing rate ν_cond(τ). An alternative measure based on count correlations is the correlation coefficient ρ_ij = cov(n_i(T), n_j(T))/√(var(n_i(T)) var(n_j(T))) (Perkel et al., 1967; de la Rocha et al., 2007; Greenberg et al., 2008; Shea-Brown et al., 2008; Tetzlaff et al., 2008), where n_i(T) and n_j(T) are the spike counts of neurons i and j measured in synchronous time bins of width T, see Figure 1. A related measure of pairwise correlations is the normalized correlation coefficient c_ij = cov(n_i(T), n_j(T))/(⟨n_i(T)⟩⟨n_j(T)⟩) (Roudi et al., 2009). It determines pairwise interactions J_ij in maximum entropy models of networks of N neurons with average firing rate ν (Schneidman et al., 2006; Roudi et al., 2009) (Eq. 5). In the limit of small time bins, the properties of ρ_ij and c_ij are largely determined by ν_cond,ij(0). Several experimental studies used bin sizes ranging from T = 0.1 to 1 ms, which are compatible with this T-regime of correlation coefficients (e.g., Lampl et al., 1999; Takahashi and Sakurai, 2006). The quantities presented here all measure different aspects of spike correlations and can potentially have different computational properties. Furthermore, each of the quantities can exhibit a nonlinear dependence on firing rate, input statistics or bin size. Below, we consider these measures of spike correlations, as well as their dependence on firing rate, input statistics and bin size.
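As an illustration of these definitions, the following sketch computes ρ_ij and c_ij from two spike trains for a given bin size T. The spike times are synthetic placeholders built from a shared pool of "common" spikes, not data from the paper.

```python
import numpy as np

# Minimal sketch: count-based correlation measures from two spike-time
# arrays for bin width T. Synthetic placeholder spike trains.
rng = np.random.default_rng(0)
duration, T = 100.0, 0.05                        # seconds; 50 ms bins
common = rng.uniform(0, duration, 200)           # shared spikes -> correlation
s_i = np.sort(np.concatenate([common, rng.uniform(0, duration, 300)]))
s_j = np.sort(np.concatenate([common, rng.uniform(0, duration, 300)]))

edges = np.arange(0.0, duration + T, T)
n_i, _ = np.histogram(s_i, edges)                # spike counts per bin
n_j, _ = np.histogram(s_j, edges)

cov = np.cov(n_i, n_j)[0, 1]
rho_ij = cov / np.sqrt(n_i.var(ddof=1) * n_j.var(ddof=1))  # correlation coefficient
c_ij = cov / (n_i.mean() * n_j.mean())                     # normalized coefficient
print(f"rho_ij = {rho_ij:.3f}, c_ij = {c_ij:.3f}")
```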
RESULTS To access spike correlations in a pair of neurons, we use the framework of correlated, stationary Gaussian processes to model the voltage potential V(t) of each neuron. This approach generates voltage traces with statistical properties consistent with cortical neurons (Azouz and Gray, 1999; Destexhe et al., 2003). The simplest conceivable model of spike generation from a fluctuating voltage V(t) identifies the spike times t_j with upward crossings of a threshold voltage (Rice, 1954; Jung, 1995; Burak et al., 2009). The times t_j determine the spike train s(t) = δ(V(t) − ψ_0) V̇(t) θ(V̇(t)), where ψ_0 is the threshold voltage, and δ(·) and θ(·) are the Dirac delta and Heaviside theta functions, respectively. Each neuron has a stationary firing rate ν = ⟨s(t)⟩. We model V(t) by a random realization of a stationary continuous correlated Gaussian process (Azouz and Gray, 1999; Destexhe et al., 2003) with zero mean and a temporal correlation function C(τ), which decays for large time lags τ; ⟨·⟩ denotes the ensemble average. We assume a smooth C(τ) such that the derivatives C^(n)(0) exist for n ≤ 6 and the rate of threshold crossings is finite (Stratonovich, 1964). All other properties of C(τ) can be freely chosen. This makes our formal description applicable to a large class of models, each of which is characterized by a particular choice of C(τ). For simulations using digitally synthesized Gaussian processes (Prichard and Theiler, 1994) and numerical integration of Gaussian integrals (e.g., Wolfram Research, 2009) we used a correlation function compatible with power spectra of cortical neurons (Destexhe et al., 2003) (Eq. 13). In cortical neurons in vivo the temporal width of C(τ) can range from 10 to 100 ms (Azouz and Gray, 1999; Lampl et al., 1999). We characterize the temporal width of C(τ) using the correlation time constant τ_s = √(−C(0)/C″(0)) (Eq. 14). Note that the correlation time τ_s as defined in Eq. 14 is close to a commonly used definition of the autocorrelation time, τ_a = σ^(−2) ∫_0^∞ C(τ) dτ; for C(τ) as in Eq. 13, τ_a = πτ_s/2. The correlation time τ_s and the threshold ψ_0 determine the firing rate ν = (2πτ_s)^(−1) exp(−ψ_0²/(2σ²)) (Eq. 15), where σ² = C(0) is the voltage variance. The firing rate ν is the rate of positive threshold crossings, which is equivalent to half of the Rice rate of a Gaussian process (Rice, 1954). For non-Gaussian processes the rate of threshold crossings can deviate from Eq. 15, and there is no general approach for obtaining ν in this case (Leadbetter et al., 1983). We note that the firing rate ν of a neuron depends only on two parameters, the correlation time and the threshold-to-variance ratio, but not on the specific functional choice of the correlation function. Hence, processes with the same correlation time but a different functional form of C(τ) will have the same mean rate of spikes, though their spike auto and cross correlations can differ significantly. Our framework can be expected to capture neural activity in the regime where the mean time between subsequent spikes is much longer than the decay time of the spike triggered currents. This occurs if the spikes are sufficiently far apart and the spike decision is primarily determined by the stationary voltage statistics rather than spike evoked currents. Therefore, this model should only be used in the fluctuation driven, low firing rate regime ν < 1/(2πτ_s), which is important for cortical neurons (Greenberg et al., 2008). The leaky integrate-and-fire (LIF) model (Brunel and Sergi, 1998; Fourcaud and Brunel, 2002) has a similar spike generation mechanism. To compare both models, we study the transformation of input current to spikes.
The LIF neuron driven by an Ornstein-Uhlenbeck current I(t) with time constant τ_I can be described by τ_M V̇(t) = −V(t) + I_0 + I(t), where τ_M is the membrane time constant and I_0 is the mean input current. When V(t) reaches the threshold ψ_0, the neuron emits a spike, and V(t) is reset to V_r. The LIF model mainly differs from our framework by the presence of a reset after each spike. For low firing rates, where the reset has little influence on the following spike, the threshold model and the LIF model can be expected to yield equivalent results. In Figure 1C we compare the first order firing rate approximation (first order in √(τ_I/τ_M)) of a LIF neuron driven by colored noise, which can be obtained via involved Fokker-Planck calculations (Brunel and Sergi, 1998; Fourcaud and Brunel, 2002), and the firing rate of the corresponding threshold neuron, which follows from Eq. 15 with the correlation time set by τ_I and τ_M. In general, the details of the spike generating model can have a strong effect on current susceptibility and spike correlations (Vilela and Lindner, 2009). However, we find that both models have a very similar current susceptibility for a range of input currents, and the spike correlations derived in the forthcoming sections are consistent with the corresponding correlations in the LIF model, e.g., the firing rate dependence of weak cross correlations (de la Rocha et al., 2007; Shea-Brown et al., 2008), the influence of noise mean and variance on the firing rates and spike correlations (Brunel and Sergi, 1998; de la Rocha et al., 2007; Ostojic et al., 2009), and the sublinear dependence of correlation coefficients on input strength (Moreno-Bote and Parga, 2006; de la Rocha et al., 2007). We include cross correlation between two spike trains i and j via a common component in V_i(t) and V_j(t) with weight r > 0 (Eq. 17): V_i(t) = √(1 − r) ξ_i(t) + √r ξ_c(t), where ξ_c denotes the common component and ξ_i, ξ_j are the individual noise components. In a Gaussian ensemble any expectation value is determined by pairwise covariances only. Thus all pairwise correlations are determined by the joint Gaussian probability density p(k) = exp(−k^T C^(−1) k/2)/((2π)^(n/2) √(det C)) (Eq. 18), whose matrix entries are the covariances C_xy = ⟨k_x k_y⟩, with C_ij = rC(τ). Below, we calculate the conditional firing rate ν_cond,ij(τ) (Eqs 1 and 11) for several important limits. SPIKE CORRELATIONS WITH ZERO TIME LAG The above framework allows one to derive an analytical expression for the cross conditional firing rate with zero time lag, ν_cond,ij(0). Via Eqs 5, 9 and 10, ν_cond,ij(0) can be related to c_ij, ρ_ij and J_ij. For a pair of statistically identical neurons (ν = ν_1 = ν_2), ν_cond,ij(0) in Eq. 1 can be solved by transforming the correlation matrix C (Eq. 18) into a block diagonal form via a variable transformation; the matrix C is then the identity matrix for τ = 0, and we obtain Eq. 19. Equation 19 (Figure 3A) shows, as expected, that ν_cond,ij(0) increases with increasing strength of input correlations r. Since the correlation coefficient ρ_ij and the normalized correlation coefficient c_ij are both proportional to ν_cond,ij(0) (Eqs 9 and 10), both measures also increase with increasing r, which is consistent with experimental findings (Binder and Powers, 2001; de la Rocha et al., 2007). However, the functional form of the r-dependence and the sensitivity to the firing rate ν of c_ij and ρ_ij are different (Figure 2).
The normalized correlation coefficient c_ij and the pairwise coupling J_ij are both inversely proportional to ν, and thus decrease with increasing ν for any value of r (Eqs 4 and 5; Figure 2B). Notably, we find that c_ij can be normalized as c_ij → c_ij·(νT) to yield a less ambiguous measure of the input correlation strength (Eqs 4 and 10; Figures 3C,D). Additionally, we find that the firing rate dependence of ρ_ij is different for weak and strong correlations. Equation 19 further exposes one important feature of ν_cond,ij(0), and thus of c_ij and ρ_ij for small time bins: all three measures depend on the temporal scale of the input correlations (τ_s), but not on the functional form of the input correlation C(τ). Thus, changes in ν_cond,ij(0) and in the correlation coefficient ρ_ij can be interpreted as a change of the underlying input correlation strength, if a firing rate modification can be excluded. In the linear r-regime, the analytical expression for ν_cond,ij(0) can be further simplified (Eq. 20). In this limit, ν_cond,ij(0) shows a strong dependence on the firing rate ν (Figure 3A, right, Figure 2A, top). This dependence is remarkably similar to the firing rate dependence found previously in vitro and in vivo in cortical neurons and LIF models (de la Rocha et al., 2007; Greenberg et al., 2008; Shea-Brown et al., 2008). In the limit of strong input correlations, Eq. 19 simplifies further (Eq. 21); in this regime, ν_cond,ij(0) does not depend on the firing rate ν (Amari, 2009). Furthermore, for strong input correlations and small bin sizes T the correlation coefficient ρ_ij also changes only marginally over a range of firing rates (0 < ν < 15 Hz, Figure 2A), since it depends linearly on ν_cond,ij(0). Note that as r approaches 1 the temporal width of ν_cond,ij(τ) approaches 0 and the peak ν_cond,ij(0) diverges, corresponding to the delta peak in the auto conditional firing rate ν_cond(τ) which results from the self-reference of a spike. For r ≈ 1, almost every spike in one train has a corresponding spike in the other spike train, but the two are jittered. The temporal jitter of the spikes can be characterized by the peak of the conditional firing rate ν_cond,12(τ) and by its temporal width, which scales as √(2(1 − r)) τ_s, both of which are threshold and firing rate independent in this limit. Notably, the threshold independence and the dependence on the temporal scale of input correlations are consistent with previous experimental findings on spike reliability (Mainen and Sejnowski, 1995). TEMPORAL SCALE OF SPIKE CORRELATIONS So far we considered only spike correlations occurring with zero time lag. However, spike correlations can also span significant time intervals (Azouz and Gray, 1999; Destexhe et al., 2003). The temporal structure of spike correlations, as reflected in the conditional firing rate ν_cond,ij(τ), can induce temporal correlations within and across time bins and could potentially alter count correlations. To capture correlations with a non-zero time lag, spike correlation measures are calculated for time bins T spanning tens to hundreds of milliseconds, e.g., 20 ms (Schneidman et al., 2006), 30-70 ms (Vaadia et al., 1995), 192 ms (Greenberg et al., 2008) and 2 s (Zohary et al., 1994). For time bins longer than the time constant of the input correlations, measures of correlations become sensitive to the temporal structure of ν_cond,ij(τ). Moreover, the values of ρ_ij and c_ij depend on the bin size T used for their calculation.
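To make the framework tangible, the following simulation sketch generates two voltage traces sharing a common Gaussian component with weight r (Eq. 17), detects upward crossings of the threshold ψ_0, and evaluates the count correlation coefficient ρ_ij for several bin sizes T. The smooth Gaussian process is approximated here by Gaussian-filtered white noise, and all parameter values are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Minimal sketch of the threshold-crossing framework with common input.
rng = np.random.default_rng(1)
dt, n, tau_s, psi0, r = 1e-3, 500_000, 0.02, 1.5, 0.3  # s, steps, s, sigma units, weight

kern_t = np.arange(-4 * tau_s, 4 * tau_s, dt)
kernel = np.exp(-kern_t**2 / (2 * (tau_s / np.sqrt(2))**2))  # yields C(tau) width ~tau_s

def smooth_gauss(rng):
    """Gaussian-filtered white noise, standardized to zero mean, unit variance."""
    x = np.convolve(rng.standard_normal(n), kernel, mode="same")
    return (x - x.mean()) / x.std()

xi_c, xi_1, xi_2 = (smooth_gauss(rng) for _ in range(3))
V1 = np.sqrt(1 - r) * xi_1 + np.sqrt(r) * xi_c   # <V1 V2> = r C(tau), as in Eq. 17
V2 = np.sqrt(1 - r) * xi_2 + np.sqrt(r) * xi_c

def spike_bins(V, T):
    """Upward threshold crossings, summed into bins of width T."""
    up = (V[:-1] < psi0) & (V[1:] >= psi0)
    per_bin = int(round(T / dt))
    m = (len(up) // per_bin) * per_bin
    return up[:m].reshape(-1, per_bin).sum(axis=1)

for T in (0.005, 0.02, 0.1, 0.5):                # bin sizes in seconds
    n1, n2 = spike_bins(V1, T), spike_bins(V2, T)
    print(f"T = {T*1e3:5.0f} ms  rho_ij = {np.corrcoef(n1, n2)[0, 1]:.3f}")
```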
Figure 3 shows how the dependence of ρ_ij and c_ij on the firing rate is altered by a change in bin size. Increasing the bin size leads to an increase of the calculated correlation coefficient ρ_ij, and also increases the sensitivity of ρ_ij to the firing rate. The fact that increasing T brings the calculated correlation coefficient closer to the underlying input correlation r could justify the use of long time bins in the above studies. But do correlation coefficients always increase with increasing time bins? To further clarify how the temporal structure of input correlations influences the temporal correlations within and across spike trains, we investigate the covariance of spike counts recorded at different times, cov(n_i(T, t), n_j(T, t + τ)), where n_i(T, t) and n_j(T, t + τ) are the spike counts of neurons i and j measured in time bins of the same duration T, but shifted by the time lag τ. For each time lag τ, the covariance of the spike counts can be calculated using ν_cond,ij(τ) (Eq. 1). Below, we will first address the temporal structure of auto correlations in a spike train, and then consider the cross correlations between spike trains. The auto conditional firing rate ν_cond(τ) For large time lags τ we expect the auto conditional firing rate to approach the stationary rate, but to deviate from it significantly for small time lags. Of particular importance for population models is the limit of small but finite τ, which determines the time scale on which adjacent time bins are correlated. At τ = 0, the auto conditional firing rate has a δ-peak reflecting the trivial auto correlation of each spike with itself. In the limit of small but finite time lag (0 < τ < τ_s) we find a period of intrinsic silence, where the leading order ∝ τ^4 is independent of the particular functional choice of C(τ). We solve ν_cond(τ) (Eq. 2) by transforming the correlation matrix in Eq. 18 into a block diagonal form using new variables, whereby only a few elements of the corresponding symmetric density matrix remain. For C(τ) as in Eq. 13 we obtain a simple analytical expression in the limit 0 < τ < τ_s (Eq. 23). This equation shows that ν_cond(τ) depends on the temporal structure of a neuron's input and on the firing rate, Figure 4B. Accordingly, the silence period after each spike depends on the functional form and time constant of the voltage correlation function C(τ) and on the firing rate (Figures 4B and 5A). Figure 4B illustrates ν_cond(τ) obtained using numerical integration of Gaussian probability densities (e.g., Wolfram Research, 2009), ν_cond(τ) obtained from simulations of digitally synthesized Gaussian processes (Prichard and Theiler, 1994), and the τ < τ_s approximation in Eq. 23. In this framework, the silence period after each spike mimics the refractoriness present in real neurons (Dayan and Abbott, 2001). Count correlations within a spike train Here we study how the input correlations shape the temporal structure of spike autocorrelations. In particular, we focus on how the input correlations and spike autocorrelations are reflected in count correlations within a spike train. The silence period after a spike is reflected in a vanishing ν_cond(τ) for 0 < τ < τ_s and results in a negative covariation of spike counts in adjacent time bins. We find that the relation between ν_cond(τ) and the spike count covariance is most salient for higher firing rates (Figure 4C, 10 Hz). For small time bins, the covariance mimics the functional form of ν_cond(τ) for time lags covering several time constants.
Plots of spike count covariance calculated for increasing bin sizes T reveal an important feature of count correlations: the covariance of adjacent bins persists even when the bin size is increased well beyond the time scale of intrinsic correlations (T >> τ_s), Figure 4. This suggests that avoiding statistical dependencies associated with neuronal refractoriness by choosing longer time bins (Shlens et al., 2006) might not be possible, particularly for neurons with higher firing rates. We conclude that temporal count correlations within a spike train generally need to be considered in the design of population models. Cross conditional firing rate ν_cond,ij(τ) We explore the temporal structure of spike correlations in a weakly correlated pair of statistically identical neurons (ν = ν_1 = ν_2). This is an important regime for cortical neurons in vivo (Greenberg et al., 2008; Smith and Kohn, 2008). To solve ν_cond,ij(τ) (Eq. 1), we expand the probability density using a von Neumann series of the correlation matrix C in Eq. 18 and obtain ν_cond,ij(τ) to linear order in r (Eq. 24). Equation 24 shows that weak spike correlations are generally firing rate dependent and directly reflect the structure of the input correlations C(τ). Figure 5A shows three examples of voltage correlations which have the same τ_s but a different functional form. All three functional dependencies are reflected in the cross conditional firing rate ν_cond,ij, but result in markedly different shapes of the auto conditional rate ν_cond(τ) (Figures 5A,B). In the next section we study how the functional choice of C(τ) affects the correlation coefficient. Count correlations across spike trains We now use the spike correlation function obtained above to study the pairwise count covariance (Eq. 25), which allows us to obtain the correlation coefficient for a weakly correlated pair of neurons (Eq. 26). This offers the opportunity to study how changes in the input structure affect spike count correlations. Figure 5 shows that the correlation coefficient ρ_ij depends on both the bin size T and the functional form of the input correlation function C(τ). Figure 5C illustrates that different functional forms of the underlying membrane potential correlations can lead to a strikingly different dependence of ρ_ij on the bin size. After an initial increase for all three voltage correlation functions, the correlation coefficient continues to increase slowly for C_1, remains at the same level for C_2, but decreases dramatically for C_3. This latter type of behavior was not observed in previous studies of LIF models (de la Rocha et al. (2007), Suppl.), which focused on the analytically accessible choice of white noise currents and reported a monotonically increasing correlation coefficient in the limit of large T. Below we will further consider how the dependence of ρ_ij on T is influenced by the choice of the form of the voltage correlations C(τ). We will show that some voltage correlation functions can lead to vanishing correlation coefficients in the limit of large bin size T. Vanishing count covariance in the presence of cross correlations Count covariances and correlation coefficients rely on the integral of the spike correlation function (Eqs 3 and 7). In cortical neurons, the spike correlation functions can exhibit oscillations and significant undershoots in addition to a correlation peak (Lampl et al., 1999; Galan et al., 2006); this may alter the correlation coefficients and their dependence on the bin size T.
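The following numeric sketch illustrates this point using the standard relation between the count covariance and the spike correlation function, cov_ij(T) = ν ∫ from −T to T of (T − |u|)(ν_cond,ij(u) − ν) du, evaluated for two assumed cross-correlation shapes: a plain peak and a peak with an undershoot whose integral vanishes. The shapes and amplitudes are illustrative assumptions, not the paper's C_1-C_3. With the zero-integral undershoot, the covariance saturates at a small constant while the count variance keeps growing with T, so the correlation coefficient ρ_ij(T) tends to zero.

```python
import numpy as np

# Minimal sketch: count covariance versus bin size T for two assumed
# excess-rate shapes (nu_cond,ij(u) - nu). Illustrative parameters only.
tau_s, nu, A = 0.02, 5.0, 2.0            # s, Hz, peak excess rate (Hz)

def excess_no_undershoot(u):
    return A * np.exp(-u**2 / (2 * tau_s**2))

def excess_with_undershoot(u):           # integrates to zero over the real line
    x = u / tau_s
    return A * (1 - x**2) * np.exp(-x**2 / 2)

def count_cov(excess, T, du=1e-4):
    """cov_ij(T) = nu * integral_{-T}^{T} (T - |u|) * excess(u) du."""
    u = np.arange(-T, T, du)
    return nu * np.sum((T - np.abs(u)) * excess(u)) * du

for T in (0.01, 0.05, 0.2, 1.0):
    print(f"T = {T:5.2f} s  cov_plain = {count_cov(excess_no_undershoot, T):8.4f}  "
          f"cov_undershoot = {count_cov(excess_with_undershoot, T):8.4f}")
```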
In the weak correlation regime we obtained an analytic expression for ν_cond,ij(τ) (Eqs 24 and 26). This allows us to explore analytically how a change in the functional choice of the voltage correlations influences the count correlations. To qualify as a reliable measure of synchrony, count cross correlations between two neurons should primarily reflect the correlation strength and be independent of the functional form of the input correlations. Our framework offers the possibility to test this hypothesis and to explore whether the previously reported finite correlation coefficients obtained for the LIF model using the white noise approximation (Shea-Brown et al., 2008) can be generalized to a larger class of input correlations. Here we consider spike correlations generated by a voltage correlation function with a substantial undershoot (e.g., as in Figure 1E in Lampl et al., 1999). For illustration, we could use any voltage correlation function with a large undershoot and a vanishing long-range part (Eq. 27). Defined this way, the correlation time of C_3(τ) is τ_s and its integral over all time lags vanishes, ∫ C_3(τ) dτ = 0, which is equivalent to vanishing spectral power at zero frequency. We note that the correlation coefficients and count covariances calculated for this functional form of input correlations can be arbitrarily small if T >> τ_s. This means that the absence of long-timescale variability in the inputs (∫ C(τ) dτ = 0) is equivalent to an absence of long-timescale co-variability in the spike counts. Notably, despite the vanishing cross covariance, the variability of the single spike train is maintained, and the count variance of a single spike train (Eq. 8) is finite for C_3(τ) even in infinite time bins. Equation 28 implies that experimental correlation coefficients calculated for large time bins are most susceptible to the influence of the temporal structure of correlations, and experimental studies focusing on large bin sizes [e.g., T = 192 ms (Greenberg et al., 2008) or T = 2 s (Zohary et al., 1994)] could potentially underestimate the correlation strength. For the important regime of low firing rates (Greenberg et al., 2008), where the reset has little influence on the following spike, the threshold model and the LIF model can be expected to yield equivalent results. In this case, Eq. 28 and Figure 5 suggest that finite correlation coefficients which increase with bin size T, as reported for the LIF model (de la Rocha et al., 2007), might be limited to the subset of input correlation functions without sizable undershoots. To obtain finite count cross correlations, the voltage correlation functions need to fulfil ∫ C(τ) dτ > 0 over all time lags, as C_1(τ) and C_2(τ) in Figure 5 do. Notably, spike count correlations of cortical neurons in vivo can decrease or increase as the length of the time bin increases (Averbeck and Lee, 2003; Smith and Kohn, 2008). These results are consistent with our findings (Figure 5C). Thus, in contrast to the correlation coefficients computed for small T, which are independent of C(τ) (Eqs 9 and 19), the count correlations computed for T ≥ τ_s are a potentially unreliable measure of synchrony. DISCUSSION Unambiguous and concise measures of spike correlations are needed to quantify and decode neuronal activity (Abbott and Dayan, 1999; Greenberg et al., 2008; Krumin and Shoham, 2009).
Pairwise spike count correlations are frequently used to describe interneuronal correlations (Averbeck and Lee, 2003; Kass and Ventura, 2006; Greenberg et al., 2008), and many population models are based on these measures (Schneidman et al., 2006; Shlens et al., 2006; Roudi et al., 2009). However, the quantitative determinants of count correlations have so far remained largely elusive. Here, we used a simple statistical model framework based on threshold crossings and a flexible choice of the temporal input structure to study the signatures of input correlations in count correlations. In general, the details of the spike generating model can have a strong effect on spike correlations; e.g., depending on the dynamical regime, two quadratic integrate-and-fire neurons or two LIF neurons can be more strongly correlated (Vilela and Lindner, 2009). Notably, we found that our statistical framework can replicate many important aspects of neuronal correlations, e.g., the nonlinear dependence of spike correlations on the input correlation strength (Binder and Powers, 2001) (Eq. 19), the firing rate dependence of weak spike correlations (Svirskis and Hounsgaard, 2003; de la Rocha et al., 2007) (Eq. 20), and the independence of spike reliability of the threshold (Mainen and Sejnowski, 1995) (Eq. 21). Furthermore, the spike correlations derived here are consistent with many recent results in the commonly used LIF model, e.g., the firing rate dependence of weak cross correlations (de la Rocha et al., 2007; Shea-Brown et al., 2008) (Eqs 20 and 24), the influence of noise mean and variance on the firing rates and weak spike correlations (Brunel and Sergi, 1998; de la Rocha et al., 2007; Ostojic et al., 2009) (Eqs 15, 20 and 24), and the sublinear dependence of correlation coefficients on input strength (Moreno-Bote and Parga, 2006; de la Rocha et al., 2007) (Eq. 19, Figure 3). While the analytical accessibility of the LIF model is limited by the technically demanding multi-dimensional Fokker-Planck equations, which provide solutions only in special limiting cases (Brunel and Sergi, 1998; de la Rocha et al., 2007; Shea-Brown et al., 2008), the framework presented here allows for an analytical description of spike correlations. Measurements of correlation coefficients under different experimental conditions often aim to compare the input correlation strength in pairs of neurons (Greenberg et al., 2008; Mitchell et al., 2009). But is a change in count correlations always indicative of a change in input correlations? The tractability of our framework revealed that spike count correlations can be a poor indicator of input synchrony for some cases of input correlations. Count correlations computed for time bins smaller than the intrinsic scale of temporal correlations can be independent of the functional form of the input correlations but depend on the firing rate and the input correlation strength. This suggests that a change in the correlation coefficient can be related to a change in the input correlation strength if a firing rate change and a change of the intrinsic time scale can be excluded. On the other hand, a change in correlation coefficients computed for large time bins is indicative of a change in input correlation strength only if a change in firing rate, time scale and functional form of the input correlations can be excluded. Furthermore, count correlations computed for large time bins can either increase or decrease with increasing time bin, or even vanish in a correlated pair.
This seemingly contradictory behavior is consistent with the functional dependence of spike count correlations observed in cortical neurons (Averbeck and Lee, 2003; Kass and Ventura, 2006; Smith and Kohn, 2008). Our results suggest that emulating neuronal spike trains, building efficient population models or determining potential decoding algorithms requires the analysis of full spike correlation functions in order to compute unambiguous spike count correlations. In particular, spike count coefficients computed for time bins larger than the intrinsic timescale of correlations can be an ambiguous estimate of input cross correlations in a neuronal population with a potentially heterogeneous distribution of input structures. Furthermore, the details of the spike generation model can be very influential for the transfer of current correlations to spike correlations, and the analytical results obtained here could facilitate quantitative comparisons between different types of models, and between models and real neurons, by providing a maximally tractable limiting case for future studies.
7,508.2
0001-01-01T00:00:00.000
[ "Computer Science" ]
Comparative study of magnetic and magnetotransport properties of Sm0.55Sr0.45MnO3 thin films grown on different substrates Highly oriented polycrystalline Sm0.55Sr0.45MnO3 (SSMO) thin films deposited on single crystal substrates by ultrasonic nebulized spray pyrolysis have been studied. The film on LAO is under compressive strain, while those on LSAT and STO are under tensile strain. A metamagnetic state akin to a cluster glass is present, formed due to coexisting FM and antiferromagnetic/charge ordered (AFM/CO) clusters. All the films show colossal magnetoresistance, but its temperature and magnetic field dependence is drastically different. In the lower temperature region the magnetic field dependent isothermal resistivity also shows signatures of metamagnetic transitions. The observed results have been explained in terms of the variation of the relative fractions of the coexisting FM and AFM/CO phases as a function of the substrate induced strain and the oxygen vacancy induced quenched disorder. Introduction In doped rare earth manganites of the type RE1-xAExMnO3 (RE: rare earth cations, e.g., La3+, Nd3+, Sm3+; AE: alkaline earth cations, e.g., Ca2+, Sr2+) the lowering of the average RE/AE-site cationic radius (⟨r_A⟩) decreases the electron bandwidth (W), which in turn results in increased carrier localization through the Jahn-Teller (JT) distortion of the MnO6 octahedra. At reduced W the magnetic and magnetotransport properties show strong sensitivity to even weak external perturbations, like a small magnetic field, electric field, substrate induced strain or electromagnetic radiation, and to intrinsic disorder like oxygen and cationic vacancies. [1][2][3][4][5][6] This is believed to be due to the enhanced competition between the ferromagnetic double exchange (FM-DE), which increases the kinetic energy of the itinerant electrons and hence favours carrier delocalization, and the JT distortion, which favours antiferromagnetic superexchange (AFM-SE) and carrier localization. 1,2,7,8 Hence, at reduced W, the possibility of magneto-electric phase coexistence, especially in the vicinity of half doping, appears as a natural tendency. The most prominent example in this regard is Sm1-xSrxMnO3. This compound is unique due to its proximity to the charge order/orbital order (CO/OO) instability and shows the most abrupt insulator metal transition (IMT) and the most prominent magnetocaloric effect. [9][10][11][12][13][14][15] The ground states of Sm1-xSrxMnO3 are (a) ferromagnetic metallic (FMM) for 0.3 < x ≤ 0.52 and (b) antiferromagnetic insulating (AFMI) for x > 0.52. [9][10][11] The charge ordering (CO) occurs in the range 0.4 ≤ x ≤ 0.6, and the corresponding ordering temperature (T_CO) increases from 140 to 205 K with x increasing in the above range. Colossal magnetoresistance (CMR) is observed at all compositions corresponding to the FMM ground state. Near half doping (0.45 ≤ x ≤ 0.52), very sharp (first order) transitions from the paramagnetic insulating (PMI) to the FMM state are observed. Several studies on narrow band manganites have shown that the first-order nature of the phase transition can be preserved even in the presence of quenched disorder arising due to the size mismatch between RE and AE ions. 1 Like other low bandwidth manganites, Sm1-xSrxMnO3 has a natural tendency towards phase separation/phase coexistence (PS/PE) that causes the evolution of a strong metamagnetic component around half doping (x~0.50).
This metamagnetic component makes the composition-temperature (x-T) phase diagram extremely fragile to external perturbations. Despite detailed studies on bulk polycrystalline and single crystalline Sm1-xSrxMnO3, thin films have not been investigated in much detail. In this regard we would like to mention that, as compared to the large and intermediate W manganites like La1-xSrxMnO3 and La1-xCaxMnO3, the growth of single crystalline Sm1-xSrxMnO3 thin films has been found to be rather difficult. [16][17][18][19] One of the factors that could suppress the occurrence of the PM-FM and IM transitions could be the extreme sensitivity of the magneto-electric phases (e.g., PMI, FMM and the AFM-CO insulator (AFM-COI)) to the substrate induced strain in low W compounds. In small W compounds, (i) the reduced average RE-site cationic radius and hence smaller tolerance factor (t), (ii) the smaller Mn-O-Mn bond distance and the corresponding angle, and (iii) the enhanced size mismatch induced quenched disorder (σ²) could lead to a stronger sensitivity to the impact of substrate induced strain. As regards the impact of the substrate, it is generally accepted that compressive (tensile) strain favours the FMM (AFM-COI) phase and is inimical to the AFM-COI (FMM) phase. 6,20 However, in narrow W manganites like Sm1-xSrxMnO3, substrate induced strain may not be the only factor determining the magnetoelectric phase profile. It is worth mentioning that there are some reports of anomalous behaviour wherein compressively strained thin films on LAO substrates do not show any IMT at lower film thickness (e.g., 25 nm and 50 nm), which is, however, seen in a 120 nm film. 21 In contrast, the tensile strained film on STO shows an IMT even at a film thickness of 50 nm. 21 The impact of substrate induced strain on magnetic phase coexistence and the consequent magnetotransport properties of a small W material like Sm1-xSrxMnO3 appears to be more dramatic. 19,22 In fact, it has recently been shown that even a small strain can cause appreciable modifications in the magnetoelectric phase landscape of a low W manganite like Sm1-xSrxMnO3. 23 Recently we have studied Sm0.55Sr0.45MnO3 (SSMO) thin films, wherein it was demonstrated that substrate induced strain and oxygen vacancy ordering/disordering have a significant impact on the magnetotransport properties. 23 In continuation of the above mentioned work, here we report a detailed study of SSMO thin films grown on LAO, STO and LSAT substrates, which provide compressive strain, tensile strain and the least strain, respectively. Our results clearly demonstrate that, despite the polycrystalline nature of these films, the impact of the substrate is indeed dramatic and is unambiguously manifested in the magnetic and magnetotransport properties. Experimental Details Polycrystalline thin films of Sm0.55Sr0.45MnO3 (thickness ~100 nm) on single crystal LAO (001), STO (001) and LSAT (001) substrates were synthesized by ultrasonic nebulized spray pyrolysis. 23 Stoichiometric amounts of high purity Sm, Sr and Mn nitrates (Sm/Sr/Mn = 0.55/0.45/1) were dissolved in deionized water and the solution was homogenized. Film deposition was done at a substrate temperature T_S ≈ 200 °C, and the films were annealed in air at a temperature T_A ≈ 1000 °C for 12 h, followed by slow cooling at a rate of 4 °C/min. Here we would like to point out that the high temperature annealing does not lead to any observable interdiffusion at the film-substrate interface. 24
The structural and surface characterizations were performed by X-ray diffraction (XRD, PANalytical PRO X'PERT MRD, Cu-Kα1 radiation, λ = 1.5406 Å) and atomic force microscopy (AFM), respectively. The cationic composition was studied by energy dispersive spectroscopy (EDS) attached to a scanning electron microscope. The temperature and magnetic field dependent magnetization was measured by a commercial (Quantum Design) PPMS in a magnetic field of H = 500 Oe applied parallel to the film surface. The electrical resistivity was measured by the standard four probe technique in the magnetic field range 0 ≤ H ≤ 50 kOe. Results and Discussion The XRD data (Fig. 1) show the occurrence of the (00ℓ) reflections alongside the corresponding substrate peaks (marked by S in Fig. 1) and the absence of any other diffraction maxima corresponding to the film material. This shows strong texturing and orientation along the out of plane direction. The crystal structure of SSMO (x = 0.45), as reported by Tomioka et al., 9 is orthorhombic (Pbnm), and the c-parameter corresponding to the cubic unit cell is c ≈ 3.83 Å. The out of plane lattice parameter (OPLP) of the film on the LAO substrate is c_LAO = 3.855 Å, which is larger than the corresponding bulk value. The OPLPs of the films on STO and LSAT are c_STO = 3.822 Å and c_LSAT = 3.826 Å, respectively. These estimates suggest that the film grown on LAO (a_LAO = 3.79 Å) is compressively strained, while the slightly smaller OPLPs of SSMO on the STO (a_STO = 3.905 Å) and LSAT (a_LSAT = 3.868 Å) substrates can be attributed to a small tensile strain. As mentioned earlier, tensile strain is believed to favour the AFM-COI phase, while compressive strain enhances the FMM fraction. At the lattice level, compressive strain results in an elongation of the MnO6 octahedra in the out of plane direction with a concomitant compression in the basal plane, which causes a reduction in the degree of JT distortion and hence weakens the spin-lattice coupling. On the other hand, tensile strain elongates the MnO6 octahedra in the basal plane with a concomitant compression along the out of plane direction. 6,20 Here we must point out that the impact of strain in polycrystalline films is expected to be of a localized character (due to the presence of discontinuities at the grain boundaries, where the strain can relax easily) and hence may not be unambiguously visible (as in the case of epitaxial/single-crystalline thin films) in gross structural characteristics like XRD patterns. As revealed by SEM (not shown here), the surface of these films generally consists of a mixture of large continuous layers, which are intermittently covered by small granules. This could be suggestive of locally epitaxial-like grown regions having strong texturing. The surface topography of all the films was probed by AFM. In the case of the film on LAO the surface appears to consist of large granules, while in the case of the films on LSAT and STO substrates the granule size is more uniform. As compared to single crystalline Sm0.53Sr0.47MnO3 thin films prepared by DC magnetron sputtering, 19,22 the surface roughness of these polycrystalline films is relatively higher. Representative surface topographs of the films on LAO and STO are presented in Fig. 2. The temperature dependent resistivity (ρ-T) of these films, measured at H = 0 kOe and H = 50 kOe, is plotted in Fig. 3.
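Before turning to the transport data, a quick arithmetic check of the nominal mismatch strain implied by the lattice constants quoted above may be useful; note that, as discussed, grain boundaries relax much of this strain in polycrystalline films, so these are upper-bound estimates.

```python
# Minimal sketch of the lattice mismatch estimate implied above: nominal
# in-plane strain from the substrate lattice constant and the bulk value
# c ~ 3.83 A quoted in the text.
c_bulk = 3.83                                            # A, bulk SSMO
substrates = {"LAO": 3.79, "LSAT": 3.868, "STO": 3.905}  # A, from the text

for name, a_sub in substrates.items():
    strain = (a_sub - c_bulk) / c_bulk * 100.0
    kind = "tensile" if strain > 0 else "compressive"
    print(f"{name}: {strain:+.2f} % ({kind})")
```

This reproduces the qualitative picture stated in the text: roughly −1% compressive mismatch on LAO, about +1% tensile on LSAT and about +2% tensile on STO.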
The zero field insulator-metal transition (IMT) temperature (T_IM) of the SSMO films on the LAO, LSAT and STO substrates is found to be ≈164 K, 130 K and 105 K, respectively. From the resistivity profiles of the films it is clear that the IMT of the film on LAO is the broadest and that of the film on the LSAT substrate is the sharpest, where the resistivity decreases sharply by nearly three orders of magnitude. The sharpness of the IMT is also demonstrated by the temperature coefficient of resistivity (TCR) [defined as TCR = (1/ρ)(dρ/dT) × 100 %], which is an important property from an application point of view. The peak TCR value of the films on LAO, LSAT and STO is found to be 7 %, 31 % and 9 %, respectively. Here we must point out that the film on LAO shows a large enhancement of the transition temperature as compared to polycrystalline/single crystalline bulk (T_IM ~130 K) and thin films of similar composition. 9-17 However, the abrupt (first order) transition seen in such bulk poly- and single crystalline samples 1,9,10 has been transformed into a continuous second order transition in this film. Furthermore, the hysteretic behaviour of the ρ-T curve measured in heating-cooling cycles is also absent in this film. 23 The most probable reason for the blocking of the first order phase transition in the film on LAO appears to be the presence of quenched disorder. 2,3 The films on LSAT and STO show irreversibility in the ρ-T data (results not shown), which is more pronounced in the latter; the difference in T_IM measured during the two cycles is up to ~10 K. The application of a magnetic field leads to a decrease in the resistivity, an enhancement of T_IM and a broadening of the transition. At H = 50 kOe the IMT of the films on LAO, LSAT and STO is enhanced to ≈200 K, 184 K and 128 K, respectively. The corresponding magnetic field induced enhancements of the IMT (∆T_IM) are ≈36 K, 54 K and 23 K for the films on LAO, LSAT and STO, respectively. This shows that the magnetic field induced enhancement of the IMT (10.8 K/10 kOe) is the largest in the film on LSAT. The temperature dependent zero field cooled (ZFC) and field cooled (FC) magnetization data (M-T) measured at H = 500 Oe are shown in Fig. 4. All the films show a well defined PM-FM transition, and the Curie temperature is found to be T_C ≈ 165 K, 130 K and 120 K for the LAO, LSAT and STO films, respectively. The magnetization of the LAO film starts rising at T ≈ 190 K and then shows the FM transition at T_C ≈ 165 K, which, like the IMT, is uncharacteristically broad for a low W compound like SSMO. At T < T_C, the ZFC and FC branches are observed to diverge appreciably. In the low temperature regime the ZFC magnetization decreases, a behaviour reminiscent of the cluster glass like metamagnetic state reported for compositions near half doping (x~0.5) in the FMM regime (T < T_C). 19,22 In the case of textured polycrystalline thin films the nature of the strain state is expected to be drastically different from that of single crystalline and epitaxial thin films. In a textured/oriented polycrystalline thin film the locally epitaxial regions are interrupted by the presence of grain boundaries (GBs), and around these regions the strain, irrespective of its nature (whether compressive or tensile), is expected to relax. Such strain discontinuity at the GBs would make the strain weak and spatially non-uniform. These GBs act as inhomogeneities which can cause quenched disorder (QD). [1][2][3]23 Further, in manganites it has been demonstrated that oxygen vacancies can destabilize the AFM-COI phase quite efficiently, both in single crystalline as well as polycrystalline materials. 23,26,27
Since the oxygen stoichiometry is related to the effective hole concentration, even a mild spatial inhomogeneity of the oxygen vacancies could result in a spatially varying carrier density that may also act as QD. Thus the origin of the quenched disorder can be traced to (i) the compressive strain provided by the substrate and (ii) the ordering of oxygen vacancies created by the high temperature annealing. Such vacancies are expected to be more abundant at and in the vicinity of the film surface and the film-substrate interface. Here we must emphasise that the ordering and disordering of these oxygen vacancies could play the decisive role. 23 As we have shown earlier, the abrupt IMT, which is akin to a first order phase transition, is recovered either by rapidly cooling these films after annealing in air, or by annealing the film in flowing oxygen and then cooling it slowly. 23 Thus in textured/oriented polycrystalline thin films two types of quenched disorder could possibly arise, the first due to the strain inhomogeneity and the second due to the oxygen vacancy induced carrier density inhomogeneity. Further, since in highly oriented films the grain boundary contribution to the electrical transport properties is considerably reduced, the contribution of the QD is expected to be decisive. Thus in the case of the film on LAO the QD could transform the long range AFM-COI order into a short range one and enhance the FMM fraction. This explains the observed rise in the magnetization at T > T_C as well as the blocking of the abrupt resistive transition. However, the film on STO shows a stronger decrease of T_C/T_IM, while the LSAT film shows the same T_C/T_IM as reported for polycrystalline/single crystalline bulk (T_C/T_IM ~130 K) and thin films of similar composition. The huge decrease of T_C/T_IM of the STO film could be attributed to the substrate induced tensile strain, which strengthens the JT distortion of the MnO6 octahedra and hence favours the AFM-COI phase. Thus we can conclude that spatially inhomogeneous strain as well as other factors, such as oxygen vacancies, also play a crucial role in determining the magnetotransport properties of low W manganites. The temperature dependence of the MR measured at H = 50 kOe for all the films is plotted in Fig. 5. In the case of the film on LAO the MR rises rapidly on lowering the temperature and has a peak value of 87 % at T ≈ 140 K. On further lowering the temperature the MR decreases and saturates at ≈40 % at 5 K. The fact that the peak in the MR-T curve of the film on LAO occurs much below T_C/T_IM could also be a consequence of the coupled effect of quenched disorder and compressive strain. In the film on STO the MR rises very slowly till T ≈ 160 K and then undergoes a sharp increase, reaching a peak value of ≈91 % at T ≈ 100 K. In the lower temperature region the MR of this film decays to ≈55 % at 5 K. As the temperature is lowered, the temperature dependence of the MR in the film on LSAT is similar to that in the film on LAO till T ≈ 190 K. Below this temperature the MR rises sharply and approaches ≈99 % at T ≈ 144 K. Interestingly, the MR of this particular film remains in excess of 99 % in the temperature range 144-110 K, hence causing a plateau like feature in the MR-T curve (Fig. 5). Thus it is clear that the MR in the film on the LSAT substrate approaches 99 % about 15 K above T_C/T_IM and remains nearly constant down to 110 K.
The occurrence of CMR over such a large temperature range suggests an appreciable presence of AFM-COI clusters in this temperature range. These AFM-COI clusters are transformed into FMM ones by the applied magnetic field. At this point we note that this film also shows the largest magnetic-field-induced shift of T_IM, which, as mentioned earlier, is ∆T_IM ≈ 54 K. The observed pattern in the variation of the MR-T data can be related to the different ratios of the two competing magnetoelectric phases, viz. FMM and AFM-COI, on the different substrates. The isothermal magnetic-field-dependent resistivity (ρ-H) measured at several temperatures shows many interesting features. In the lower-temperature regime, e.g., at T = 5 K, the ρ-H data of all the films show the signature of a soft metamagnetic component. The normalized isothermal resistivity [ρ(H)/ρ(50 kOe)] is plotted as a function of the applied magnetic field in Fig. 6. As seen in the plot, in the initial magnetic-field cycle the resistivity of all the films first decreases slowly as H is increased and then drops sharply beyond a critical magnetic-field value H*. This kind of feature is generally attributed to the collapse of the AFM-COI state, that is, the magnetic-field-induced AFM-COI to FMM transformation. 28 The observed values of H* are ≈17.5 kOe, 22.5 kOe and 25 kOe for the films on LAO, LSAT and STO, respectively. This clearly shows that the AFM-COI state is the strongest in the film on the STO substrate. In the subsequent cycles, although the initial value of the resistivity is not regained, the ρ-H curves of all the films show strong hysteresis; apart from a change of slope, the subsequent field cycles essentially retrace one another, reflecting the modified balance between the two phases. 17,28 The sharp magnetic-field-induced drop in the resistivity is observed only in the lower-temperature region and is absent in the ρ-H loops measured at T ≥ 50 K. Hence it can be correlated with the occurrence of a metamagnetic component, as also evidenced by the strong bifurcation of the ZFC-FC magnetization curves of these films (Fig. 4). As explained earlier, such bifurcation of the ZFC-FC curves is regarded as a generic feature of the CG-like metamagnetic state, which in the present case is caused by the coexisting FMM and AFM-COI phases. As demonstrated by the magnetization and electrical-transport data, the film on the LAO substrate has the lowest fraction of the AFM-COI phase. This explains the smallest value of H* (≈17.5 kOe) in the film on LAO. On the contrary, the film on STO has the highest AFM-COI component and hence the value of H* is the largest in this film. The sharp drop, and the fact that the virgin resistivity is not recovered in the subsequent cycles, suggests that the AFM-ordered COI clusters are melted by the applied field and that a major fraction of these is transformed permanently into FMM ones; that is, the AFM-COI to FMM transformation is not fully reversible.
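One plausible way to extract the critical field H* operationally (the paper does not state its criterion, so this is an assumption) is to locate the field of the steepest relative drop in resistivity during the initial sweep. A minimal Python sketch with a hypothetical ρ(H) trace:

```python
import numpy as np

# Hypothetical initial-cycle rho(H) trace (arbitrary units) at 5 K: a slow
# decrease followed by a sharp drop near H* ~ 22.5 kOe, mimicking the
# field-induced collapse of the AFM-COI state.
H = np.linspace(0.0, 50.0, 501)          # applied field (kOe)
rho = 0.2 + 0.8 / (1.0 + np.exp((H - 22.5) / 1.5))

# Operational estimate of H*: the field where |d(ln rho)/dH| is maximal.
H_star = H[np.argmax(np.abs(np.gradient(np.log(rho), H)))]
print(f"estimated H* = {H_star:.1f} kOe")
```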
The MR calculated from the isothermal resistance measurements is plotted in Fig. 7(a-f). At T = 5 K, the films on LAO and STO behave almost identically, showing similar hysteresis and a rather small MR of ~13 % at 50 kOe. In contrast, the MR of the film on LSAT shows a jump, in addition to hysteresis, that occurs at different magnetic fields during the field-increasing and field-decreasing cycles. Beyond these jumps the slope of the MR-H curve changes, although no saturation of the MR is observed up to 50 kOe. At 50 K (Fig. 7b) the film on LAO shows the lowest MR, ≈39 % at H = 50 kOe, and also the narrowest hysteresis. The film on STO shows MR ≈ 51 % at H = 50 kOe and a strong hysteresis is seen in the MR-H curve. In the film on LSAT the MR increases sharply as H is increased up to ~20 kOe; beyond that the slope of the MR-H curve is appreciably lowered, but no saturation-like behaviour is seen. At T = 100 K, the film on LAO shows MR ≈ 72 % (H = 50 kOe) and the hysteretic behaviour of the MR-H curve has almost vanished. The films on STO and LSAT, in contrast, still show a strongly hysteretic field dependence and much higher MR. The MR in both these films rises sharply up to H ≈ 20 kOe; however, the MR in neither film shows a saturation tendency up to H = 50 kOe. At T = 125 K, the hysteresis in the MR-H curve of the film on LAO vanishes and the MR shows a sharp rise at H ≤ 20 kOe; a saturation tendency is still not seen in this film. The MR of the film on LSAT still shows a strong hysteresis, reaches ~99 % sharply at H ≈ 20 kOe and saturates at slightly higher fields. Since the measurement temperature (125 K) is higher than the T_C/T_IM of the film on STO (PM phase), the MR in this film remains very small up to H ≈ 15 kOe and then rises sharply, with weak saturation-like tendencies appearing around H = 50 kOe. At T = 150 K, the MR-H hysteresis vanishes in all the films. Since the film on LAO is still in the FM state, its MR remains about ≈81 % at 50 kOe. The film on STO does not show any significant MR until about H = 25 kOe, but at higher fields the MR approaches ~18 %. In the case of the film on LSAT, the MR rises very sharply beyond H ≈ 10 kOe and appears to saturate at ~96 % at H = 50 kOe. The occurrence of such large MR at T > T_C and the nonlinear nature of the MR-H curve clearly suggest that AFM-COI clusters could be present in the PM regime as well, and that as the magnetic field is increased they are transformed into the FMM phase. At T = 200 K, all the films show the typical paramagnetic-regime behaviour of the MR above the IMT, with the MR increasing linearly with field and reduced to ≈47 %, 33 % and 4 % at 50 kOe for the LAO, LSAT and STO films, respectively. The MR-H data presented above clearly suggest that the temperature-dependent hysteresis is a consequence of the varying fractions of the FMM and AFM-COI phases at T < T_C, and of the AFM-COI and PMI phases at T > T_C. The occurrence of very large hysteretic MR in the film on LSAT, even at moderate magnetic fields at T < T_C, is a clear signature of the presence of AFM-COI clusters in the FMM regime, while the presence of non-hysteretic but nonlinear MR at T > T_C shows the presence of AFM-COI clusters in the PMI regime. Another possible origin of the hysteresis in the MR-H curves is the nature of the magnetic spin alignment in the two directions of the magnetic field: in increasing fields the spins are easily aligned along the field direction, but when the field is reversed more energy is required to rotate the spins into the new field direction and recover the previous magnitude of the magnetoresistance, indicating a strong coupling between the spins and the magnetic easy axis. Conclusion In summary, we have synthesized oriented, high-quality polycrystalline SSMO thin films on LAO, LSAT and STO single-crystal substrates and investigated the impact of the substrate on their magnetic and magnetotransport properties. Our results clearly show that even a subtle change in the nature and magnitude of the strain results in appreciable modifications of the magnetic and magnetotransport properties.
The large enhancement in the T_C/T_IM of the film on LAO, with a simultaneous blocking of the abrupt resistive transition, has been explained in terms of quenched disorder whose origin is traced to the inhomogeneous compressive strain and the surface/interfacial oxygen vacancies. In contrast, the decrease in the T_C/T_IM of the film on STO is caused by the enhanced AFM-COI fraction due to the tensile strain. The film on LSAT, which is the least strained, shows the sharpest PM-FM transition and the most abrupt insulator-metal transition, the strongest metamagnetic component and a CMR in excess of 99 % over a broad temperature range around T_C/T_IM. This shows that the coexistence of the FMM and AFM-COI phases is most delicately balanced in the film on LSAT. The competing FMM and AFM-COI phases cause a metamagnetic state akin to a cluster glass. The substrate- and temperature-dependent variation in the fractions of the FMM and AFM-COI phases has a strong bearing on the magnetotransport properties of these films.
5,972.4
2013-05-16T00:00:00.000
[ "Materials Science", "Physics" ]
Finite Entanglement Entropy in String Theory We analyze the one-loop quantum entanglement entropy in ten-dimensional Type-II string theory using the orbifold method by analytically continuing in $N$ the genus-one partition function for string orbifolds on $\mathbb{R}^2/\mathbb{Z}_N$ conical spaces known for all odd integers $N>1$. We show that the tachyonic contributions to the orbifold partition function can be appropriately summed and analytically continued to an expression that is finite in the physical region $0<N \leq 1$, resulting in a finite and calculable answer for the entanglement entropy. We discuss the implications of the finiteness of the entanglement entropy for the information paradox, quantum gravity, and holography. Introduction Entanglement entropy is a quantity of fundamental importance in quantum mechanics and quantum field theory, and even more so in quantum gravity. The naively defined von Neumann entropy measuring the entanglement between the inside and the outside of a black hole is divergent in quantum field theory and proportional to the horizon area in units of the short-distance cutoff. This divergence across any sharp boundary is a consequence of the fact that field values on the two sides of the boundary have strong short-distance correlations in a local quantum field theory. If this divergence is not cured in quantum gravity, then it would imply that the black hole has an infinite number of qubits and can store an arbitrary amount of information. Unitary evolution would then be impossible unless the black hole is interpreted as a remnant, with all the attendant problems of this interpretation. Finiteness of entanglement entropy is thus at the heart of the information paradox in black hole physics. Given the ultraviolet finiteness of string perturbation theory, it behooves us to ask if one can define a suitable notion of entanglement entropy in string theory and examine its finiteness order by order. A direct definition of such a quantity has proven elusive, partly because it is not clear how to define in string theory the relevant density matrix and appropriate notions corresponding to its von Neumann or Rényi entropy. One expects that it would be difficult to introduce sharp boundaries in string theory, given the soft behavior of strings at short distances. One can instead attempt an indirect definition by a generalization of Rényi entropy adapted to string theory using Z_N orbifolds [1]. The simplest example constructed in [1] is the Type-II string on M^8 × R^2/Z_N, where M^8 is 7+1-dimensional Minkowski spacetime, R^2 is the two-dimensional Euclidean plane, and the orbifold action is generated by a rotation in the plane through an angle 4π/N for N odd.
It is convenient to write the entanglement entropy as S = S^(0) + S_q (1), where S^(0) is the classical contribution and S_q is the quantum contribution from higher-genus Riemann surfaces [2,3]. The classical spacetime partition function Ẑ^(0)(N) of the Z_N orbifold theory is nontrivial and analytic in N after including a boundary contribution, and S^(0) is simply given by the Bekenstein-Hawking entropy [2] but with the tree-level, unrenormalized Newton's constant G_0. The quantum spacetime partition function Ẑ_q(N) is related to the worldsheet partition function Z_q(N) by the exponential relation Ẑ_q(N) = exp(Z_q(N)) (2), with a genus expansion Z_q(N) = Σ_{g≥1} Z_g(N) (3). The expression for Ẑ_q(N) is given as a sum over orbifold sectors and is not obviously analytic. If an analytic continuation exists, then the quantum entanglement entropy would be given [1-3] by a perturbative expansion (4). The orbifold method thus applies uniformly for computing both the classical and quantum contributions. It is significant that the orbifold method in string theory automatically supplies a classical term in (4) related to the Bekenstein-Hawking formula, and that all higher-order terms are proportional to the area. It thus appears to offer a natural gravitational generalization of the von Neumann entropy with a systematic expansion that is intrinsically holographic. The computation is formally analogous to the replica method in field theory [4] and a priori has nothing to do with black hole physics. But, unlike in field theory, it appears that in a consistent theory of gravity the classical term is inevitable in a discussion of quantum entanglement. In fact, it is of crucial importance. Heuristic arguments indicate that field-theoretic divergences in the quantum entanglement entropy S_q can be absorbed into a renormalization of Newton's constant [5,6], suggesting that the total entropy must be finite. In semiclassical gravity, a corresponding fact in the more abstract formulation is that the inclusion of gravity turns the algebra of observables from Type-III to Type-II [7,8]. As discussed in [3], one expects the stronger statement that the 'algebra of observables' should be akin to Type-I, corresponding to finite entanglement entropy. Since the quantum effective action includes not only local terms but also nonlocal terms arising from loops of massless fields, not all contributions can be attributed to the renormalization of Newton's constant in the Wilsonian action. Therefore, the entanglement entropy should be finite but nonzero order by order in string perturbation theory. Holographic considerations discussed in [3] also confirm this expectation.
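For orientation, the standard field-theoretic replica relation to which the expansion (4) is formally analogous [4] can be stated as follows (a textbook formula supplied here for context, not quoted from the paper):

$$ S \;=\; -\,\partial_n \Big[\log \operatorname{Tr}\rho^{\,n}\Big]_{n=1} \;=\; \big(1 - n\,\partial_n\big)\,\log \operatorname{Tr}\rho^{\,n}\,\Big|_{n=1}, $$

where the second form uses Tr ρ = 1, so that log Tr ρ^n vanishes at n = 1.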
It is possible that the entropy defined by (4) is a more fundamental notion than black hole entropy, if one can make sense of (4). At the quantum level, the definition of black hole entropy has potential ambiguities from the choice of the ensemble and from the contributions of the thermal bath surrounding the black hole. For supersymmetric black holes at zero temperature it is possible to define, and in some cases compute exactly [9], the quantum entropy, but these ambiguities need to be resolved. The entropy in (4) is free of these ambiguities. It is more general and geometric, since it focuses on the bifurcate Rindler horizon, which in the Euclidean continuation corresponds to the tip of the cone. It thus depends only on dividing space, or the Hilbert space, into two parts, and not on a specific spacetime geometry or its asymptotics. Moreover, (4) presumably gives the fine-grained entropy, much like the von Neumann entropy, and not the coarse-grained thermodynamic entropy as for a black hole. In this note we develop further the program initiated in [1] to compute quantum entanglement entropy using the orbifold partition functions as the starting data, and in particular to examine the finiteness of the entanglement entropy. This idea immediately encounters a possible obstacle. The spectrum of the Z_N orbifold contains several tachyons for all N > 1. As a result, the worldsheet partition function Z_q(N) suffers from severe infrared divergences [1]. It would thus appear that we have simply traded the ultraviolet divergence for an infrared divergence. However, unlike ultraviolet divergences, infrared divergences are not a matter of renormalization but contain important physics. The spectrum of tachyons has a specific structure dictated by the internal consistency of string theory. Experience suggests that it is wise to pay attention to what the string is trying to tell us. To better understand the physics of the tachyons, we note that the Euclidean plane R^2 can be regarded as Euclidean Rindler space. The spacetime partition function Ẑ_q(N) can be viewed as the thermal partition function for a Rindler observer, where H_R is the Rindler Hamiltonian which generates the translations of Euclidean Rindler time corresponding to rotations in the plane. One can regard H_R as the modular Hamiltonian of a density matrix ρ := exp(−2πH_R). One is forced to define the density matrix in this indirect way, since at present one does not know how to define notions such as a partial trace or an algebra of local observables in string theory. The partition function Ẑ_q(N) can then be viewed as a generalization of the Rényi entropy Tr(ρ^Ñ) to non-integer Ñ, with Ñ = 1/N. On physical grounds one expects good analytic behavior of this function in the right half-plane Re(Ñ) ≥ 1, but not necessarily in the left half-plane Re(Ñ) < 1, where the tachyons exist. We elaborate on this point following the observations in [10]. In quantum mechanics, the density matrix ρ is a positive Hermitian matrix normalized to Tr(ρ) = 1. The eigenvalues of ρ have to be less than or equal to unity. Therefore, the trace Tr(ρ^Ñ) exists in the region Re(Ñ) ≥ 1, and the absolute value of Ẑ_q(N) is bounded by unity. On the other hand, Tr(ρ^Ñ) in general need not be well-defined in the region Re(Ñ) < 1, where its convergence is not guaranteed. In other words, the tachyons may not be a physical threat despite their menacing comportment in the unphysical realm.
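The bound invoked above follows directly from the spectral decomposition of ρ; writing λ_i for its eigenvalues and s for the replica exponent (a standard argument, spelled out here for completeness):

$$ \big|\operatorname{Tr}\rho^{\,s}\big| \;\le\; \sum_i \lambda_i^{\operatorname{Re}(s)} \;\le\; \sum_i \lambda_i \;=\; 1 \qquad \text{for}\ \operatorname{Re}(s)\ge 1, $$

since 0 ≤ λ_i ≤ 1 implies λ_i^{Re(s)} ≤ λ_i once Re(s) ≥ 1; for Re(s) < 1 the first inequality gives no control, and the sum may diverge for an infinite-dimensional ρ.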
In string theory, the partition function (3) is represented as an integral over a moduli space. This representation provides a useful refinement. One can ask whether the tachyonic terms in the integrand can be summed and analytically continued so as to obtain a finite and sensible physical answer for the integral in the physical region. As we shall see, this is indeed the case. Entanglement Entropy in Type-II String Theory Following [1] we consider orbifolds of the Type-II superstring in light-cone gauge on flat space R^6 × R^2. The orbifold group Z_N = {1, g, ..., g^{N−1}} is generated by g = exp(4πiJ/N), where J is the generator of rotations in R^2. The one-loop partition function can be written compactly [1,3] as an integral, (8), over the fundamental domain D of the modular group SL(2,Z) in the upper-half τ plane. Here A_H is the regularized horizon area in string units. The integrand is built from the Jacobi theta-function and the Dedekind eta-function, with product representations in which q := exp(2πiτ) and y := exp(2πiz). With τ = τ_1 + iτ_2, the fundamental domain D can be taken to be the usual 'keyhole' region. It is useful to regard the partition function as a trace over a Hilbert space of the worldsheet theory, to see more clearly its physical interpretation in terms of states. The orbifold Hilbert space has N sectors labeled by the 'twists' k = 0, ..., N−1, where k = 0 corresponds to the untwisted sector. A term with a given k and ℓ in (8) can be viewed as a trace over the oscillator modes in the k-twisted sector, of the form (12), where N_L and N_R are the oscillator energy operators and ε_L and ε_R are the ground-state energies for the left and the right movers, respectively, in the k-twisted sector of the Green-Schwarz light-cone formalism. The integral over τ_1 in (8) ensures that only level-matched states with N_L = N_R contribute to the trace. Summing over the 'twines' ℓ = 0, ..., N−1 inserts the projection operator into the trace, which ensures that only Z_N-invariant states contribute. For the spacetime interpretation, we recall that the combination of oscillator and ground-state energies appearing in (12) can be identified as the mass operator of states in the string spectrum. States for which M^2 is positive, zero, or negative are respectively massive, massless, or tachyonic. One can identify 2πτ_2 with the Schwinger parameter, or the proper time of particle trajectories in spacetime. Large τ_2 corresponds to the infrared regime, whereas small τ_2 corresponds to the ultraviolet regime. We write the integral (8) as a modular integral, (14), of a modular function F(τ) with the Weil-Petersson measure over the fundamental domain D. The famously soft ultraviolet behavior of strings corresponds to the fact that the modular integral is restricted to the keyhole region. As a result, one does not expect any of the UV divergences which arise in field theory from the proper-time integral for the heat kernel near τ_2 going to zero. On the other hand, the integral does suffer from IR divergences, because the integrand grows exponentially as τ_2 goes to infinity. To analyze these divergences systematically, we note that the modular function admits a Fourier expansion in τ_1, whose zero mode is F_0(τ_2, N). The infrared divergence comes from terms in F_0(τ_2, N) that grow exponentially as τ_2 becomes large, which correspond to the propagation of tachyonic states. The integral over τ_1 and the sum over twines ℓ ensure that only level-matched and Z_N-invariant tachyons contribute.
There are only a finite number of such terms. The expression (12) is invariant under the exchange of k and N−k. Using this symmetry one can restrict attention to 1 ≤ k ≤ (N−1)/2. Using (10) and the product representations, it is easy to see that the tachyonic part of the integrand, F_0^T(τ_2, N), can be written as a sum over twisted sectors, (17), with f_k(τ_2, N) given by (18), where r_k is a non-negative integer such that (17) contains only tachyons. In other words, for a given k in the sum (17), r_k is the largest non-negative integer such that r_k(2N − 4k) < 2k. It is noteworthy that all terms in the sums (17) and (18) have unit coefficients, indicating that tachyonic states of a given k and a given mass-squared have unit degeneracy. This important fact can easily be verified in the Hamiltonian formalism as well. In the k-twisted sector, the ground state is unique, with negative energy ε_L + ε_R = −2k/N. In spacetime it corresponds to a tachyon with mass-squared M^2 = −2k/N and unit degeneracy. We refer to it as the 'leading' tachyon, in the sense that it has the most negative M^2. Raising operators acting on the ground state in each sector can only increase the total energy and can give rise to subleading tachyons as long as M^2 remains negative. After acting with a sufficient number of raising operators, M^2 eventually becomes positive, and therefore only a finite number of tachyonic terms are possible. It turns out that only the fractionally-moded oscillators of the single complex boson coordinatizing the Rindler plane R^2 are relevant, because the action of the raising operators of the other fields yields non-tachyonic states. As a result, the subleading tachyons also have unit degeneracy. Analytic Continuation of Tachyonic Contributions The spectrum is thus replete with tachyons, and the partition function (8) is badly divergent in each twisted sector for all k = 1, ..., N−1. It seems hopeless to try to make sense of the integral for N > 1. Remarkably, the precise structure of string theory allows for a summation of the tachyonic terms in the integrand. This sum can be analytically continued to the physical region Re(N) ≥ 1, where it tends to a finite limit as τ_2 tends to infinity, and the modular integral (20) is then finite. The function f_k(τ_2, N) depends on r_k with an intricate dependence on k, which makes it difficult to obtain a simple answer for the sum over k in (17). It is useful to consider instead a function f̃_k(τ_2, N) obtained by taking r_k to infinity in (18). This corresponds to adding infinitely many (fictitious) massive string states and some massless ones, which do not change the convergence properties at large τ_2. One can similarly define F̃_0^T(τ_2, N) by replacing f_k(τ_2, N) with f̃_k(τ_2, N) in (17). The k-sum can now be readily performed for all values of τ_2 to obtain a closed-form expression (19). Remarkably, this function is perfectly finite for 0 < N ≤ 1 as τ_2 → ∞, even though it diverges for N > 1. To separate the tachyonic and non-tachyonic contributions, one can rewrite the partition function (14) as a sum of two terms, (20), where the remainder F_R(τ, N) is defined by subtracting the tachyonic part from the full integrand. With this splitting, one can examine the dependence on N of each of the two terms in (20) separately.
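The mechanism behind this summation-plus-continuation can be illustrated by a toy model (an illustration of the general phenomenon only, not the actual string integrand of (19)): a geometric sum of growing exponentials, Σ_{k=1}^{N−1} e^{tk/N}, has a closed form that diverges as t → ∞ for integer N > 1 but, continued to 0 < N < 1, tends to the finite limit −1. A short Python check:

```python
import numpy as np

def tachyon_sum_closed_form(t, N):
    """Closed form of sum_{k=1}^{N-1} exp(t*k/N): a geometric series in
    x = exp(t/N), equal to x*(1 - x**(N-1))/(1 - x). The closed form is
    defined for non-integer N, which is what permits the continuation."""
    x = np.exp(t / N)
    return x * (1.0 - x ** (N - 1.0)) / (1.0 - x)

t = 50.0  # plays the role of a large tau_2, where the IR divergence lives
# For integer N > 1 the sum blows up like e^{t(N-1)/N} as t grows ...
print(tachyon_sum_closed_form(t, 5))     # huge, ~2.4e17
# ... but the same closed form continued to 0 < N < 1 stays finite:
print(tachyon_sum_closed_form(t, 0.5))   # approaches -1 as t -> infinity
```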
The integral of F_R(τ, N) has no tachyonic divergences by construction and is convergent both in the IR and in the UV. This finite integral could be performed numerically for each N to obtain finite values for all odd integers. An interpolation method like the Newton series [3] could then be used to deduce from these data an analytic continuation, or 'extrapolation', into the physical region 0 < N ≤ 1, as long as there is no unexpected oscillatory behavior. The integral of F̃_0^T(τ_2, N) is divergent in the region N > 1, but has a finite limit as τ_2 tends to infinity for 0 < N ≤ 1. We thus find that the tachyonic part of the integrand can be summed and analytically continued such that the total integral is free of IR divergences in the physical domain 0 < N ≤ 1 or N ≥ 1. It is worth emphasizing that this surprising finiteness is not accidental but depends critically on three very specific 'just so' properties of superstring theory. 1. There are exactly N−1 leading tachyons, with unit degeneracy in each twisted sector. The analytic continuation of (19) would not have the desired behavior in the N plane if there were, for example, N+1 leading tachyons, or if the multiplicities were different. 2. The total ground-state energy −2k/N in the k-twisted sector is linear in k. In the light-cone Green-Schwarz formalism, there are four complex fermions twisted by k/N, each with ground-state energy +1/12 − (1 − k/N)k/2N. There is a single complex boson twisted by 2k/N, with ground-state energy −1/12 + (1 − 2k/N)k/N for 2k < N, and three untwisted complex bosons, each with ground-state energy −1/12. The contributions quadratic in k/N cancel out, as do the constant terms: with u = k/N, one finds 4[1/12 − (1 − u)u/2] + [−1/12 + (1 − 2u)u] − 3/12 = −u per chirality, linear in k as claimed. This precise cancellation, resulting in a ground-state energy linear in k, depends on the specific structure of the superstring and is not true, for example, for the bosonic string. 3. The subleading tachyons also have energies linear in k and unit degeneracies, for the reasons explained earlier. A simple geometric sum (19) would not be possible without these properties. The taming of the tachyons in the physical region is very encouraging. One can now compute the entropy from the region near N = 1, approaching from N < 1. It is interesting that one can extract a nontrivial finite answer for a physical quantity even though in the intermediate steps supersymmetry is broken and the spectrum is afflicted with tachyons. A similar phenomenon has been noted in the open-string sector using quite different methods [10], and appears to be a general feature. There is a perverse possibility that the terms coming from massless and massive modes, which are finite for N > 1, sum up to a divergent answer for 0 < N ≤ 1. On physical grounds this seems unlikely. For any set of massive or massless fields, the physical density matrix ρ is expected to have eigenvalues less than or equal to unity. With the ultraviolet cutoff provided by string theory, it would be unphysical if Tr(ρ^N) turned out to be convergent for N < 1 but divergent for N ≥ 1. In any case, it is important to explore the integral further to rule out this eventuality and to explicitly compute the finite remainder. The situation is reminiscent of the Euler gamma function Γ(N). The representation of a function analytic in the entire N plane which reduces to the factorial at positive integers can be deduced knowing the values of the function at positive integers and its integral representation in the region Re(N) > 0. The resulting analytic function has poles at the non-positive integers in the region Re(N) ≤ 0.
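For concreteness, the integral representation alluded to here is the standard one (a textbook fact, not specific to this paper):

$$ \Gamma(N) \;=\; \int_0^{\infty} t^{\,N-1}\, e^{-t}\, dt \qquad \big(\operatorname{Re}(N) > 0\big), \qquad \Gamma(n+1) = n!\,, $$

whose unique analytic continuation to the rest of the N plane has simple poles at N = 0, −1, −2, ....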
The situation for the function Z(N) is somewhat similar but, in fact, in reverse. We are required to determine the analytic representation of Z(N) knowing something about the function in the region Re(N) > 1 at odd integers, where the function is ill-defined because of the tachyonic divergence. Fortunately, string theory provides an integral representation for this function. Even though the integral (20) diverges for N > 1, the integrand is finite for fixed τ. The tachyonic part is easy to analytically continue to the physical region. Our results indicate that the resulting integral in the region N ≤ 1 is finite, which is an essential ingredient in obtaining a sensible physical interpretation. Discussion The orbifold method thus seems to yield a finite answer for the entropy defined by (4) to one-loop order. There is no renormalization of Newton's constant in the ten-dimensional superstring. Therefore, in this case the finite part of our one-loop computation gives directly the one-loop entanglement entropy. One expects to be able to garner more information than just this single number. The analyticity property suggests that one can obtain a finite expression for Tr(ρ^N) as an analytic function of N in the physically relevant region Re(N) ≥ 1, to learn more about the short-distance degrees of freedom of string theory. The fact that Z(1) = 0 is consistent with this interpretation: using (2), it suggests that Tr(ρ) = 1, and therefore that it may be possible to define a density matrix with a well-defined trace. The finiteness of the entanglement entropy is physically very significant, for various reasons which are worth recalling. As discussed earlier, the resolution of the information paradox is closely linked to the finiteness of the entanglement entropy. Unitarity of the boundary theory in holography indicates that most probably the time evolution in the bulk is also unitary. However, it is essential to understand the resolution of the information paradox directly in the bulk gravitational theory, and understanding the finiteness of entanglement is a step towards this goal. Entanglement entropy is a critical ingredient also in the formulation of the strong subadditivity paradox for Hawking emission [11-13], and in the proof of the generalized second law of thermodynamics [14]. Holographic entanglement entropy beyond the classical formula also requires a definition of entanglement in the bulk [15-21]; and quantum entanglement entropy is relevant for defining the quantum extremal surface and for the proposals for reconstructing bulk observables and the black hole interior [22-28]. Some of the works implementing the replica method assume the absence of tachyons; it would be interesting to explore this in more detail. A string-theoretic definition of finite entanglement entropy is desirable in these various contexts.
As emphasized in [3], the entanglement entropy in flat space computed here is relevant near the horizon of any large-mass two-sided black hole with a bifurcate horizon, in particular the Schwarzschild black hole in anti-de Sitter spacetime. In this context, the ground state of the bulk spacetime theory corresponds in the boundary to an entangled state in the thermofield double [29-31]. Tracing over the left conformal field theory results in a thermal state in the right conformal field theory. The entanglement entropy of the thermofield-double state thus equals the entropy of the thermal bath, which has a systematic double expansion that can be matched to a perturbative expansion in the bulk in the string coupling g_s and the string scale l_s^2. The classical Bekenstein-Hawking entropy corresponds to the leading entropy of the thermal state, of order N^2 in the large-N limit. One expects that the quantum entanglement entropy can be matched to the order-N^0 correction, which is finite and in principle calculable. It would be interesting to make this comparison for a finite-mass, non-supersymmetric black hole. In the holographic context, the algebra of observables of the right CFT is manifestly Type-I, since it admits an irreducible representation on the right Hilbert space; this is reflected in the fact that the entanglement entropy is finite. One expects a corresponding statement in the bulk, as discussed in [3]. The algebra of observables is Type-III in the field-theory limit of the bulk, but one expects that string theory should ameliorate the situation. Finiteness of entanglement indicates that the algebra of observables of the bulk quantum gravity is indeed akin to Type-I, as in the boundary theory. In quantum gravity one cannot really define an algebra of local observables, and it is not clear what generalization will correspond to notions like the modular Hamiltonian and the entanglement entropy. See [7,8,32-35] for recent discussions. A computable and finite entanglement entropy in string theory would be a useful guide in the search for such a generalization. The orbifold has an exact conformal field theory description, which is the required data for defining off-shell string field theory on the conical background. It would be interesting if the machinery of string field theory [36,37] could be brought to bear on this important problem.
5,614
2023-06-01T00:00:00.000
[ "Physics" ]
The Potential Role of Polyphenols in Modulating Mitochondrial Bioenergetics within the Skeletal Muscle: A Systematic Review of Preclinical Models Polyphenols are naturally derived compounds that are increasingly being explored for their various health benefits. In fact, foods that are rich in polyphenols have become an attractive source of nutrition and a potential therapeutic strategy to alleviate the untoward effects of metabolic disorders. The last decade has seen a rapid increase in studies reporting on the bioactive properties of polyphenols against metabolic complications, especially in preclinical models. Various experimental models, involving cell cultures exposed to lipid overload and rodents on a high-fat diet, have been used to investigate the ameliorative effects of various polyphenols against metabolic anomalies. Here, we systematically searched and included literature reporting on the impact of polyphenols on metabolic function, particularly through the modulation of mitochondrial bioenergetics within the skeletal muscle. This is of interest since the skeletal muscle is rich in mitochondria and remains one of the main sites of energy homeostasis. Notably, increased substrate availability is consistent with impaired mitochondrial function and enhanced oxidative stress in preclinical models of metabolic disease. This explains the general interest in exploring the antioxidant properties of polyphenols and their ability to improve mitochondrial function. The current review aimed at understanding how these compounds modulate mitochondrial bioenergetics to improve metabolic function in preclinical models of metabolic disease. Introduction Polyphenols are naturally derived compounds that are widely studied for their health benefits [1]. In fact, polyphenols can be grouped into four major categories, which include phenolic acids, flavonoids, stilbenes, and lignans. Polyphenols such as gallic acid and catechins can reduce body weight and attenuate metabolic abnormalities, especially by scavenging free radical species through their abundant antioxidant properties [13]. Indeed, the bioactivity of polyphenols has been mainly attributed to their abundant antioxidant properties, which have been linked with improved metabolism, reduced inflammation, and amelioration of oxidative stress [14]. Notably, inflammation and oxidative stress are some of the key destructive components implicated in the development of metabolic anomalies and deteriorating metabolic health. Inflammation is characterized by enhanced pro-inflammatory cytokines such as tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6) [15]. On the other hand, oxidative stress arises because of an overproduction of reactive oxygen species (ROS) that triggers suppression of intracellular antioxidants such as glutathione, superoxide dismutase, catalase, and thioredoxins [16]. Recently, mitochondrial dysfunction has been reported to play an important role in the generation of oxidative stress through the altered action of the electron transport chain [17]. For example, enhanced substrate delivery, including free fatty acids (FFAs), especially under the conditions of metabolic syndrome, can impede the actions of the mitochondrial electron transport chain, resulting in the leakage of electrons and the overproduction of ROS. In fact, a few studies have correlated impaired mitochondrial bioenergetics with the generation of oxidative stress and reduced metabolic function [18].
As a result, many studies have targeted the main energy-regulating tissues with abundant mitochondria, such as the skeletal muscle, to understand how increased substrate availability reduces or otherwise affects metabolic function [19,20]. Similarly, several studies have been published focusing on understanding how polyphenols affect mitochondrial bioenergetics under conditions of metabolic stress [21,22]. Currently, there are limited reviews on this topic, or on the modulatory effect of polyphenols on skeletal muscle physiology. Thus, the current study aims to systematically extract and discuss relevant literature on the impact of polyphenols, and of plants rich in these compounds, regarding their ameliorative effects against metabolic complications through the targeting of mitochondrial bioenergetics within the skeletal muscle. Data Sources and Search Strategies The present review included preclinical trials obtained from a comprehensive search conducted on electronic databases, such as PubMed, from inception up to 30 December 2020. Two investigators, SXHM and KZ, independently conducted the search process and evaluated studies for eligibility, and a third reviewer (PVD) was consulted in cases of disagreement. The systematic search was conducted using medical subject heading (MeSH) terms such as "polyphenols", "bioactive compounds", "mitochondria", "metabolic syndrome", and "skeletal muscle"; an illustrative scripted version of such a query is sketched below. The search was restricted to English only. Mendeley reference manager version 1.19.4-dev2 software (Elsevier, Amsterdam, The Netherlands) was used to identify any duplicated studies. Inclusion and Exclusion Criteria This review includes in vitro and in vivo studies reporting on the impact of polyphenols on mitochondrial bioenergetics and related complications in the skeletal muscle. Only preclinical studies reporting evidence involving the skeletal muscle, polyphenols and/or bioactive compounds, and mitochondrial bioenergetics were included. This review is focused on better understanding the importance of polyphenols and bioactive compounds in preclinical studies; therefore, human studies, books, letters, case reports, and reviews were excluded. Data Extraction and Representation Studies from the initial search on PubMed were screened for eligibility and subsequently evaluated by full-text screening. Data were extracted independently by two investigators (SXHM and KZ), with PVD as a third investigator in case of any disagreements. Data extraction was performed in the following format: polyphenols/bioactive compounds, experimental model, effective dose and intervention period, main findings, and author details (name and year of publication). An Overview of Results The primary outcome of the study was to evaluate the impact of polyphenols on mitochondrial bioenergetics, oxidative stress, and/or any other metabolic complications within the skeletal muscle. Figure 1 shows the flow chart of the study selection. Briefly, after the initial search and the screening of titles and abstracts, 40 studies were eligible for the full-text assessment. After reviewing the full-text articles, a total of 25 studies were found to be irrelevant to the topic of interest. Therefore, 15 studies met the inclusion criteria and are discussed within the review.
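As a purely illustrative sketch (not part of the original review's methodology), a PubMed query combining the MeSH terms listed above could be scripted with Biopython's Entrez utilities; the email address and term grouping here are placeholders:

```python
from Bio import Entrez  # Biopython

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact email

# Illustrative combination of the MeSH terms named in the Methods section.
term = (
    '("polyphenols" OR "bioactive compounds") AND '
    '"mitochondria" AND "metabolic syndrome" AND "skeletal muscle"'
)

# Restrict by publication date up to 30 December 2020, mirroring the review.
handle = Entrez.esearch(db="pubmed", term=term, retmax=200,
                        mindate="1900", maxdate="2020/12/30", datetype="pdat")
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records; first PMIDs: {record['IdList'][:5]}")
```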
A Brief Overview on Polyphenolic Compounds and Their Impact on Mitochondrial Bioenergetics and Linked Metabolic Function in Various Preclinical Models In addition to giving a brief background on the source and bioavailability profile of each polyphenolic compound, both in vivo and in vitro studies are discussed per compound, systematically extracted from the literature, as represented in Tables 1 and 2. Table 1. Summary of the extracted evidence (model; dose and duration; main findings):
- Resveratrol: increased oxygen consumption accompanied by regulation of the genes for mitochondrial biogenesis, such as peroxisome proliferator-activated receptor γ coactivator 1 α (PGC1α) acetylation and activity [30].
- HFD-fed Sprague Dawley rats; 100 mg/kg b.w./day resveratrol for 8 weeks: reduced intramuscular lipid accumulation and ameliorated insulin resistance, in part by enhancing NAD-dependent deacetylase sirtuin 1 (SIRT1) activity and increasing mitochondrial biogenesis and β-oxidation [14].
- Catch-up growth-induced insulin-resistant Sprague Dawley rats; 100 mg/kg b.w./day resveratrol for 4 and 8 weeks: enhanced SIRT1 activity and improved mitochondrial number and insulin sensitivity, as well as decreased levels of reactive oxygen species and restored antioxidant enzyme activities, including superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx) [31].
- C57/BL6J mice; 25-30 mg/kg b.w./day (low dose) and 215-230 mg/kg b.w./day (high dose) resveratrol for 8 months: a 50 µM dose significantly decreased ATP levels as early as 1 h after treatment and activated AMPK independently of SIRT1; at 25 µM, resveratrol increased mitochondrial function through increased expression of PGC1α, PGC1β, and TFAM, including the transcription factor B2 (TFB2M), in a SIRT1-dependent manner, supported by an increase in mtDNA content; furthermore, resveratrol activated AMP-activated protein kinase (AMPK) and increased NAD+ levels [21].
- HFD-induced insulin-resistant Sprague Dawley rats; 100 mg/kg/day resveratrol for 8 weeks: ameliorated insulin resistance through increased SIRT1 and SIRT3 expression and elevated mtDNA content and mitochondrial biogenesis, including enhanced mitochondrial antioxidant enzymes such as SOD, CAT, and GPx [32].
- HFD-fed C57BL/6J mice; 0.02, 0.04, and 0.06 % resveratrol for 12 weeks: reduced plasma insulin and glucose concentrations, accompanied by miR-27b overexpression, which improved mitochondrial function in a SIRT1-dependent manner [24].
- HFD-induced sarcopenic obesity in Sprague Dawley rats; 0.4 % resveratrol for 20 weeks: ameliorated mitochondrial dysfunction and oxidative stress via the serine-threonine kinase LKB1 (PKA/LKB1)/AMPK pathway, evident in the increased activity of complexes I, II, and IV, raised PGC1α, TFAM, and mfn2, and decreased drp1 expression.
Moreover, there was an increase in the total antioxidative capability (T-AOC), SOD, GPx, MDA, and carbonyl protein [17].
- Reduced insulin resistance and improved mitochondrial respiration, mitochondrial oxidative capacity, and fatty acid oxidation, as evident from increased mitochondrial enzymatic activities, AMPK phosphorylation, and the expression of peroxisome proliferator-activated receptor α (Pparα) and UCP2 [34].
- Flavan-3-ols fraction derived from cocoa powder; C57BL/J mice; 50 mg/kg b.w./day flavan-3-ols for 2 weeks: enhanced lipolysis and promoted mitochondrial biogenesis, marked by increased carnitine palmitoyltransferase 2 (CPT2) expression and mitochondrial copy number [35].
- Naringenin and quercetin; high-fructose-diet-induced insulin-resistant Wistar rats; 50 mg/kg b.w./day naringenin and quercetin for 6 weeks: both naringenin and quercetin reduced plasma glucose and insulin levels, accompanied by a significant increase in SIRT1 and PGC1α expression, AMPK phosphorylation, and glucose transporter type 4 (GLUT4) translocation [26].
- Icariin; C57BL/6 mice; 10 or 40 mg/kg/day icariin for 14 days: decreased body-weight gain through increased FNDC5, PGC-1α, and p-AMPK expression levels [28].
- Flavonoids; type 2 diabetic (db/db) mice; 180 mg/kg flavonoids for 7 weeks: ameliorated insulin resistance and symptoms associated with diabetes through increased p-AMPK and PGC1α, raised m-GLUT4 and T-GLUT4 protein expression, and improved mitochondrial function [29].
3.2.1. Resveratrol Resveratrol (3,5,4′-trans-trihydroxystilbene, Figure 2) is a polyphenolic phytoalexin belonging to the stilbene family that is abundant in grape skin and seeds, but is also found in various other plant foods such as berries, peanuts, and wine [36]. This polyphenol is widely available, being synthesized by more than 70 species of plants [37]. Although it exhibits low bioavailability and solubility [37], experimental data on resveratrol have been widely reviewed; it has shown potential benefits for human health and exhibits protective effects against metabolic complications such as inflammation, oxidative stress, and aging. Moreover, resveratrol has shown promising properties in ameliorating complications linked with diseases such as diabetes and obesity. Evidence from studies by Price et al. [21] and Higashida et al. [23] demonstrated that resveratrol enhanced mitochondrial function and biogenesis in a SIRT1-dependent manner, consistent with improved mtDNA content in palmitate-treated skeletal muscle cells and HFD-fed mice. This includes increasing the protein expression of PGC1α and other mitochondrial functional genes such as TFAM, mfn2, and drp1, as well as the activity of mitochondrial complexes I-V in skeletal muscle cells [21,23,24].
Furthermore, in vivo studies suggest that resveratrol exhibits strong antioxidant properties in improving skeletal muscle function in various HFD-induced insulin-resistant models [17,31,32]. These effects were shown by a decreased level of ROS, a strong indicator of oxidative stress, which occurred concomitantly with restored antioxidant enzyme activities, including SOD, CAT, and GPx. Furthermore, Huang et al. [17] demonstrated that resveratrol ameliorated insulin resistance in HFD-induced obese Sprague Dawley rats by reducing intramuscular lipid accumulation and enhancing SIRT1 activity, in part by increasing mitochondrial biogenesis and β-oxidation in the skeletal muscle of these rats. More evidence included in this review demonstrated that resveratrol increased the phosphorylation of AMPK in the skeletal muscle of both C57/BL6J mice and HFD-induced sarcopenic obese Sprague Dawley rats [17,21]. Overall, resveratrol demonstrates a wide array of benefits in improving metabolic function, in part by effectively regulating energy metabolism and mitochondrial bioenergetics within the skeletal muscle. Gingerol Gingerol is the primary bioactive phenylpropanoid in the rhizome of ginger (Z. officinale Roscoe; Zingiberaceae) (Figure 3), which is known for its pungent taste and aroma [25]. Ginger is widely used as a spice and medicinal herb, highlighting the general interest in the potential health benefits of the bioactive compounds found in this functional food product [39]. Generally, ginger contains pungent phenolic substances known as gingerols, shogaols, paradols, and zingerone [39]. Amongst the gingerol constituents, [6]-gingerol (1-[4′-hydroxy-3′-methoxyphenyl]-5-hydroxy-3-decanone) is the major pharmacologically active component [40,41]. This polyphenol is known to display a variety of biological properties, including anticancer [42], antioxidant, anti-inflammatory [43], and antifungal effects [44]. Our literature search showed that this bioactive compound has the potential to enhance mitochondrial function in L6 rat myotubes. Briefly, it was demonstrated that treating normal L6 myotubes with 50, 100, and 150 µM (S)-[6]-gingerol for 24 h could activate AMPKα and further improve mitochondrial content number and the gene expression of PGC1α in vitro [45]. Other studies reported that this ginger polyphenol could affect metabolic function by reducing blood glucose levels in diabetic animal models and increasing glucose uptake in cultured cells in vitro [43,46]. Overall, (S)-[6]-gingerol displays potential beneficial effects on metabolic function by modulating skeletal muscle mitochondrial function, further suggesting that ginger may be effective in preventing the development of metabolic syndromes. However, additional data are required to confirm its metabolic properties, and there has been concern regarding the low solubility and poor oral absorption of [6]-gingerol, as reported elsewhere [42].
Quercetin and Naringenin Quercetin (3,3′,4′,5,7-pentahydroxyflavone, Figure 4) has the ability to exhibit robust antioxidant, anti-apoptotic, and anti-inflammatory properties in different preclinical models [48]. In our literature search, we also found that quercetin has the ability to enhance mitochondrial function [49]. Alternatively, Mutlur Krishnamoorthy and Carani Venkatraman (2017) [26] showed that treating palmitate-induced insulin-resistant L6 myotubes with 750 mM quercetin or 75 µM naringenin for 16 h could improve glucose homeostasis and mitochondrial bioenergetics by enhancing GLUT4 translocation, as well as by increasing AMPK phosphorylation and SIRT1 and PGC1α expression. Indeed, the comparative efficacy of quercetin and naringenin in ameliorating various metabolic anomalies has been the subject of increasing preclinical research [22]. Naringenin (2,3-dihydro-5,7-dihydroxy-2-(4-hydroxyphenyl)-4H-1-benzopyran-4-one) is a naturally occurring flavonoid found mostly in some edible fruits, such as citrus species [50]. This flavonoid has been the subject of ongoing research to assess its broad biological effects in preclinical models. For example, it has the ability to decrease some lipid peroxidation biomarkers and promote carbohydrate metabolism in preclinical models of metabolic syndrome [26]. Furthermore, naringenin has been shown to have antioxidant and anti-inflammatory effects [51].
A similar effect was observed in an in vivo study, in which high-fructose-diet-induced insulin-resistant Wistar rats showed decreased mitochondrial function [52]. However, this effect was reversed in rats that were also fed 50 mg/kg body weight/day naringenin and quercetin for 6 weeks [26]. Here, both naringenin and quercetin reduced plasma glucose and insulin levels and enhanced GLUT4 translocation, as well as the expression of SIRT1 and PGC1α and the phosphorylation of AMPK, in the insulin-resistant Wistar rats [26], suggesting that both these polyphenols may improve metabolic function in part by regulating energy metabolism, or by improving glucose uptake and targeting markers of mitochondrial function. Pinosylvin Pinosylvin (3,5-dihydroxy-trans-stilbene, Figure 5) is part of the stilbenoid group, a class of polyphenols found in plants, berries, and nuts. These polyphenolic compounds exhibit antimicrobial and antifungal functions in plants [55]. Recent information reveals that stilbene-based compounds might have potential as antiviral agents [56]. While widely investigated naturally occurring stilbenoids such as resveratrol are well acknowledged, emerging evidence suggests that pinosylvin is gaining attention due to its anti-inflammatory properties [57].
Pinosylvin is a natural polyphenol trans-stilbenoid that is produced by plants as a secondary metabolite to protect against microbes and insects [57]. This polyphenol is mainly found in the heartwood and leaves of Pinus sylvestris. Pinosylvin exerts various biological activities, including anti-inflammatory effects [57]. In fact, Modi et al. [27] reported that treating cultured skeletal muscle cells (L6 myotubes) with 20 or 60 µM pinosylvin for 24 h activated SIRT1 and stimulated glucose uptake through the activation of AMPK. Although the role of this stilbenoid is emerging, evidence on its effects on mitochondrial bioenergetics or function is still very limited. Figure 5. The chemical structure of pinosylvin, a stilbene-based compound found in plants, berries, and nuts [58]. Icariin Icariin is a typical flavonol glycoside known as the primary active component of Epimedii Herba (Figure 6) [28]. Icariin is commonly known as yin yang huo or goat weed [28]. Extracts of Epimedii Herba have been commonly used in Chinese herbal medicine to treat sexual dysfunction, skeletal muscle deterioration, and other diseases [28,59]. Various pharmacological effects of icariin have been reported, including immunoregulation and vasodilation through the enhanced production of bioactive nitric oxide, as well as activity against multiple cardiovascular diseases through antioxidant and anti-inflammatory action [28,60]. In this review, we found that icariin might have a beneficial effect on mitochondrial function [28], in part through effective modulation of energy-metabolism-related pathways/genes such as irisin/FNDC5 and PGC1α gene expression, and through dose-dependently increased AMPK phosphorylation in normal C2C12 cells. Interestingly, the same effect was also observed in C57BL/6 mice that were fed 10 or 40 mg/kg/day icariin for 14 days, which displayed decreased body weight and enhanced expression of FNDC5, PGC1α, and p-AMPK. Other studies reported that icariin also has a protective effect against diet-induced obesity by ameliorating insulin resistance [61,62].
Overall, the preclinical evidence summarized in this review seems to validate the anecdotal capacity of icariin to act on the skeletal muscle and modulate energy metabolism, potentially ameliorating metabolic disease-related complications.
Flavonoids, Flavanols and Proanthocyanidins
Flavones and flavonols (Figure 7) are the most prominent ketone-containing flavonoid compounds [64]. Furthermore, flavan-3-ols, also known as flavanols, are unique in containing the 2-phenyl-3,4-dihydro-2H-chromen-3-ol skeleton [65]. These compounds encompass catechin, epicatechin gallate, epigallocatechin, epigallocatechin gallate, proanthocyanidins, theaflavins, and thearubigins [2,66]. A growing literature has reported on the impact of these compounds in improving metabolic function in various preclinical models [33,67]. Reducing oxidative stress and inflammation, as well as regulating insulin signaling pathways such as PI3K/AKT and energy homeostasis mechanisms such as AMPK, are the prominent routes by which these compounds may improve metabolic function [68]. Data from this review suggest that flavonoids from mulberry (Morus alba L.) leaves can perform as well as metformin (an established glucose-lowering drug) in improving muscle glucose uptake and mitochondrial function in L6 muscle cells [31]. These actions were achieved by activating AMPK and increasing the expression of PGC-1α and GLUT4 [31]. Importantly, the actions of these flavonoids were consistent with improved mitochondrial function in the skeletal muscle of db/db mice.
Furthermore, flavan-3-ol fractions derived from cocoa powder were shown to promote lipolysis and mitochondrial biogenesis, consistent with increased β-oxidation, by regulating carnitine palmitoyltransferase 2 (CPT2) expression and mitochondrial copy number in mice with metabolic syndrome [35]. As one of the major flavonoids, proanthocyanidins were shown to improve skeletal muscle mitochondrial bioenergetics in obese Zucker fatty rats by reducing citrate synthase activity, oxidative phosphorylation complex I and II levels, and Nrf1 gene expression, which in turn translated into reduced ROS production [33]. These actions were paralleled by reduced insulin resistance, improved mitochondrial respiration, mitochondrial oxidative capacity, and fatty acid oxidation, with effective regulation of prominent energy regulation markers such as AMPK, Pparα, and UCP2 [33]. Overall, flavonoids and flavanols show great potential in improving metabolic function by effectively regulating skeletal muscle energy metabolism and mitochondrial bioenergetics in preclinical models of metabolic disease.
Summary and Future Perspective
It is now widely accepted that a healthy diet is essential to defend the human body against certain types of diseases, especially non-communicable diseases such as obesity, type 2 diabetes, and cardiovascular diseases [71]. Food sources such as fruits and vegetables have become an attractive source of nutrients and health benefits. These foods contain various biologically active compounds, including polyphenols, with potential beneficial effects on metabolic function. Accumulating preclinical evidence suggests that polyphenols can improve metabolic function by effectively regulating energy metabolism, as well as enhancing glucose uptake and mitochondrial function. Here, it was apparent that polyphenolic compounds such as gingerol, icariin, and resveratrol can target the skeletal muscle to regulate energy metabolism and improve mitochondrial function in preclinical models of metabolic syndrome. This is important to establish since it is already known that the pathogenesis of metabolic diseases like diabetes is consistent with skeletal muscle mitochondrial deficiency, leading to impaired cellular functions [34][35][36]. In addition to the effective modulation of cellular mechanisms such as insulin signaling and energy-regulating pathways through PI3K/AKT and AMPK, these polyphenols seem to target PGC1α and other mitochondrial functional genes such as TFAM, mfn2, and drp1 to improve mitochondrial bioenergetics. These findings also highlight the potential impact that naturally derived compounds and micronutrients can have on improving human health by targeting major organ tissues such as the skeletal muscle, as previously discussed [72]. The summarized data remain essential for developing precise therapeutic targets to be further tested in human subjects and for protecting against the rapid rise of metabolic diseases. Although the current study informs on essential preclinical mechanisms that may be involved in the amelioration of metabolic complications, additional experiments and elucidation are still necessary to better understand the therapeutic potential of polyphenols, especially the relevance of their metabolism and bioavailability in the human body.
Data Availability Statement: Data related to the search strategy, study selection, and extraction items will be made available upon request after the manuscript is published. Acknowledgments: The degree from which this study emanated was funded by the South African Medical Research Council (SAMRC) through its Division of Research Capacity Development under the internship scholarship program, from funding received from the South African National Treasury. The content hereof is the sole responsibility of the authors and does not necessarily represent the official views of the SAMRC or the funders. Conflicts of Interest: The authors declare no conflict of interest. Sample Availability: Samples of the compounds are available from the authors.
7,379.4
2021-05-01T00:00:00.000
[ "Medicine", "Environmental Science", "Biology" ]
The Fault Diagnosis of Rolling Bearings Is Conducted by Employing a Dual-Branch Convolutional Capsule Neural Network
Currently, many fault diagnosis methods for rolling bearings based on deep learning are facing two main challenges. Firstly, the deep learning model exhibits poor diagnostic performance and limited generalization ability in the presence of noise signals and varying loads. Secondly, there is incomplete utilization of fault information and inadequate extraction of fault features, leading to the low diagnostic accuracy of the model. To address these problems, this paper proposes an improved dual-branch convolutional capsule neural network for rolling bearing fault diagnosis. This method converts the collected bearing vibration signals into grayscale images to construct a grayscale image dataset. By fully considering the types of bearing faults and damage diameters, the data are labeled using a dual-label format. A multi-scale convolution module is introduced to extract features from the data and maximize feature information extraction. Additionally, a coordinate attention mechanism is incorporated into this module to better extract useful channel features and enhance feature extraction capability. Based on adaptive fusion between fault type (damage diameter) features and labels, a dual-branch convolutional capsule neural network model for rolling bearing fault diagnosis is established. The model was experimentally validated using both Case Western Reserve University's bearing dataset and self-made datasets. The experimental results demonstrate that the fault type branch of the model achieves an accuracy rate of 99.88%, while the damage diameter branch attains an accuracy rate of 99.72%. Both branches exhibit excellent classification performance and display robustness against noise interference and variable working conditions. In comparison with other algorithm models cited in the reference literature, the diagnostic capability of the model proposed in this study surpasses them. Furthermore, the generalization ability of the model is validated using a self-constructed laboratory dataset, yielding an average accuracy rate of 94.25% for both branches.
Introduction
Rolling bearings are pivotal components in rotating machinery and are also susceptible to damage. Their operational state directly impacts equipment performance. Reaching an accurate diagnosis of faults from bearing vibration signals poses a formidable challenge due to factors such as diverse operating conditions and the presence of noise interference. Therefore, precise fault diagnosis of bearings plays a crucial role in ensuring equipment safety and stable operation. Currently, the primary methods for diagnosing faults in rolling bearings include model-based diagnosis, signal analysis, and data-driven approaches. Model-based diagnostic techniques commonly used include residual analysis (RA), state estimation (SE), parameter identification (PI), etc.
[1]. The signal analysis method determines whether there is a fault by analyzing system or equipment signals. This approach typically involves analyzing features such as the frequency F and amplitude A of the signals to determine the type and location of faults [2]. Signal analysis mainly includes traditional analysis and diagnosis methods in the time domain [3], frequency domain [4,5], and time-frequency domain. The collected vibration signals are projected onto the time domain to extract different parameters, such as dimensional parameters (peak value, mean value, and variance) and dimensionless parameters (peak index, kurtosis, and skewness). Bearing fault diagnosis using data-driven approaches refers to predicting and diagnosing equipment faults through large-scale historical data analysis and modeling using techniques such as machine learning and data mining [6]. Data-driven methods for rolling bearing fault diagnosis can be further divided into machine learning-based methods and deep learning-based methods [7]. Machine learning-based fault diagnosis methods for rolling bearings involve feature extraction and model training using bearing fault data, enabling the classification and prediction of bearing faults. Common machine learning methods include logistic regression (LR) [8], random forest (RF) [9], and artificial neural networks (ANNs) [10]. Chen Yin et al. [11] proposed a novel approach based on improved ensemble noise-assisted empirical mode decomposition (IENEMD) and adaptive threshold denoising (ATD). This method addresses the mode-mixing problem in the original EMD while reducing noise interference. Yongbo Li et al. [12] introduced a new method for rolling bearing fault diagnosis based on adaptive multiscale fuzzy entropy (AMFE) and support vector machines (SVM). Unlike existing fuzzy entropy algorithms such as MFE, AMFE adaptively determines scales using robust Hermite local mean decomposition (HLMD). Although these methods are capable of extracting and identifying the desired fault features, they often rely heavily on complex mathematical tools to extract signal characteristics from bearing faults. Therefore, traditional diagnostic methods for extracting fault features depend excessively on signal processing knowledge.
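To make the time-domain indicators mentioned above concrete, the following NumPy sketch computes the named dimensional and dimensionless parameters from a vibration segment; it is an illustrative implementation, not code from the paper.

```python
import numpy as np

def time_domain_features(x: np.ndarray) -> dict:
    """Dimensional and dimensionless time-domain indicators named above."""
    mean = x.mean()
    std = x.std()
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    return {
        "peak": peak,                              # dimensional
        "mean": mean,
        "variance": x.var(),
        "peak_index": peak / rms,                  # dimensionless (crest factor)
        "kurtosis": np.mean((x - mean) ** 4) / std ** 4,
        "skewness": np.mean((x - mean) ** 3) / std ** 3,
    }

# Example on a stand-in segment of 2048 points
print(time_domain_features(np.random.randn(2048)))
```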
The method for diagnosing rolling bearing faults based on deep learning involves constructing a deep neural network model to automatically extract features and predict classifications of bearing fault data. Common deep learning methods include convolutional neural networks (CNNs) [13] and recurrent neural networks (RNNs) [14]. Guo Liang [15] determined the effectiveness of feature extraction through correlation analysis and utilized the useful feature set as input for an RNN, achieving superior diagnostic performance compared to self-organizing map methods. However, in practical engineering applications, vibration signals collected by sensors are often influenced by environmental noise. Additionally, the rotational speed of rolling bearings is affected by varying load conditions. Therefore, when diagnosing bearing faults under the interference of noise and varying operating conditions, common deep learning neural networks exhibit limited accuracy and generalization ability. Capsule neural networks (CapsNets) [16,17], which differ from traditional neural networks in that their neurons are composed of vectors, can effectively extract and store more detailed features from input data while minimizing information loss in feature representation. However, due to its relatively simple initial structure, CapsNet does not further extract original image data features, resulting in incomplete extraction of detailed features; thus, its feature extraction capability still requires improvement. This paper proposes a dual-branch CapsNet model for diagnosing faults in rolling bearings based on the aforementioned problem analysis. This approach transforms one-dimensional time series into grayscale image inputs and utilizes the dual branches to independently detect both fault types and damage diameters. By doing so, it effectively reduces the feature quantity in each branch while maintaining accuracy, thereby alleviating the challenge of feature extraction. Consequently, this model enhances the accuracy of multi-state classification for rolling bearings and strengthens diagnostic capability for novel faults. Furthermore, it demonstrates robustness against noise interference and varying operating conditions.
Preprocessing of Grayscale Images
In fault diagnosis methods driven by data, it is crucial to thoroughly investigate potential mapping relationships within the dataset. The process of transforming one-dimensional time series signals into grayscale images involves mapping the temporal sequence onto the horizontal axis and representing its values along with signal intensity on the vertical axis, resulting in a two-dimensional grayscale image. Grayscale values are adjusted based on signal intensity using either linear or nonlinear mappings. Figure 1 illustrates this principle, and the specific conversion steps are outlined below.
1. Re-scale the one-dimensional temporal signal to ensure its values are normalized within the range of 0 and 1;
2. Sample the normalized signal at fixed time intervals to obtain a discrete temporal signal;
3. Employ interpolation techniques to estimate the continuous temporal signal by leveraging the neighboring sampling points;
4. Each point on the continuous curve corresponds to a pixel on the grayscale image. The value of each point on the curve is then converted to the grayscale value of the corresponding pixel by selecting an appropriate grayscale mapping method.
Furthermore, grayscale images offer two distinct advantages. Firstly, this method effectively preserves the temporal characteristics of the original signal to the maximum extent. Secondly, it simplifies the preprocessing work by eliminating the need for extracting new parameter indicators. This not only saves valuable preprocessing time but also reduces the burden on researchers in terms of fault information requirements.
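As an illustration of the four conversion steps above, here is a minimal NumPy sketch; the window length of 784 points and the 28 × 28 output size follow the dataset description later in the paper, while the linear 8-bit mapping is an assumption.

```python
import numpy as np

def signal_to_grayscale(signal: np.ndarray, size: int = 28) -> np.ndarray:
    """Convert a 1-D vibration segment into a size x size grayscale image.

    Follows the four steps above: (1) min-max normalize to [0, 1],
    (2) resample at fixed intervals, (3) linearly interpolate between
    neighboring sampling points, (4) map each value to an 8-bit pixel.
    """
    # Step 1: normalize to [0, 1]
    s = (signal - signal.min()) / (signal.max() - signal.min() + 1e-12)
    # Steps 2-3: resample the segment to exactly size*size points,
    # interpolating between neighboring sampling points
    xp = np.linspace(0.0, 1.0, num=len(s))
    xq = np.linspace(0.0, 1.0, num=size * size)
    s = np.interp(xq, xp, s)
    # Step 4: linear grayscale mapping to pixel values 0-255
    return (s * 255).astype(np.uint8).reshape(size, size)

# Example: a 784-point window becomes one 28 x 28 grayscale sample
window = np.random.randn(784)  # stand-in for a measured vibration segment
image = signal_to_grayscale(window)
print(image.shape, image.dtype)  # (28, 28) uint8
```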
Capsule Neural Network
The CapsNet [18,19] is a novel network model proposed by Geoffrey Hinton's group in October 2017. This architecture comprises an input layer, a convolutional layer, a primary capsule layer, a digit capsule layer, and a fully connected layer. Unlike conventional neural networks, the fundamental unit of the capsule neural network is the capsule, which encapsulates local object features holistically. Each capsule consists of multiple neurons, and both its input and output are vector representations. While the length of the vector signifies probability, as in traditional neurons, its direction conveys additional information. A key highlight of this architecture lies in its utilization of a dynamic routing mechanism to replace the max pooling method employed in conventional convolutional neural networks. This approach effectively circumvents the information loss caused by pooling layers and results in enhanced recognition accuracy. The model is shown in Figure 2.
The loss function of CapsNet consists primarily of two components: margin loss and reconstruction loss. Margin loss is employed to penalize incorrect identification outcomes in terms of both false negatives and false positives. It is computed using the following formula:
L_h = T_h · max(0, m+ − ||ν_h||)² + λ · (1 − T_h) · max(0, ||ν_h|| − m−)² (1)
In the equation, T_h represents the classification indicator function (1 if class h exists, 0 otherwise); ν_h represents the output vector of the network; m+ is the upper bound that penalizes false positives; and m− is the lower bound that penalizes false negatives. For this study, we have chosen the empirical values m+ = 0.9 and m− = 0.1. λ is a proportion coefficient that adjusts the weight between the two terms, with a default value of 0.5.
The reconstruction loss, also known as the mean squared error (MSE) loss, primarily concerns the task of image reconstruction. Following the passage through the capsule layer, three fully connected layers are constructed to generate output values matching the dimensions of the original data points. Subsequently, the squared sum of the distances between the original and output data is computed as the evaluation measure. The total loss encompasses the margin loss plus the reconstruction loss multiplied by a scaling factor ranging from 0.0001 to 0.001, applied incrementally in a stepwise manner. Consequently, margin loss is generally regarded as the principal indicator.
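A minimal PyTorch sketch of the margin loss in Eq. (1) may help; the tensor shapes and the helper name margin_loss are illustrative, while m+ = 0.9, m− = 0.1, and λ = 0.5 follow the values stated above.

```python
import torch

def margin_loss(v_lengths: torch.Tensor, targets: torch.Tensor,
                m_pos: float = 0.9, m_neg: float = 0.1,
                lam: float = 0.5) -> torch.Tensor:
    """Margin loss of Eq. (1).

    v_lengths: (batch, num_classes) capsule output lengths ||v_h||.
    targets:   (batch, num_classes) one-hot labels T_h.
    """
    # Term active for present classes (pushes ||v_h|| above m+)
    pos = targets * torch.clamp(m_pos - v_lengths, min=0.0) ** 2
    # Down-weighted term for absent classes (pushes ||v_h|| below m-)
    neg = lam * (1.0 - targets) * torch.clamp(v_lengths - m_neg, min=0.0) ** 2
    return (pos + neg).sum(dim=1).mean()

# Example with a batch of 16 samples and 4 classes (one branch)
lengths = torch.rand(16, 4)
labels = torch.nn.functional.one_hot(torch.randint(0, 4, (16,)), 4).float()
print(margin_loss(lengths, labels).item())
```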
Inception Module
The Inception module was initially introduced in the deep learning architecture known as GoogleNet, wherein convolutional kernels of varying sizes are stacked together to enhance network width and extract comprehensive feature information. Moreover, it incorporates 1 × 1 convolutional kernels to reduce the dimensionality of the input feature maps, thereby decreasing the number of parameters and accelerating network computation and training. By incorporating activation functions, the nonlinear expressive capability of multiple layers of convolutional kernels is enhanced, while increasing the depth of the convolutional layers helps prevent vanishing gradients. The structure of Inception V2 is depicted in Figure 3.
Inception V1 primarily employs multi-scale convolutional kernels and incorporates 1 × 1 convolutions to reduce the dimensionality of feature maps, effectively decreasing computational complexity. In V2, two smaller 3 × 3 convolutional kernels are utilized instead of the larger 5 × 5 kernel in V1. This approach ensures an expanded receptive field while reducing the number of parameters, thereby preventing expression bottlenecks and enhancing linear expressive capabilities.
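The following PyTorch sketch illustrates an Inception V2-style block of the kind described above, with a 1 × 1 reduction branch and two stacked 3 × 3 convolutions standing in for a 5 × 5 kernel; the channel counts are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class InceptionV2Block(nn.Module):
    """Inception V2-style block: parallel 1x1, 3x3, and stacked 3x3 (in place
    of 5x5) branches plus a pooling branch; channel counts are illustrative."""

    def __init__(self, in_ch: int):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, 16, 1), nn.ReLU())
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, 16, 1), nn.ReLU(),        # 1x1 reduces dimensionality
            nn.Conv2d(16, 24, 3, padding=1), nn.ReLU())
        self.b3 = nn.Sequential(                        # two 3x3 ~ one 5x5 receptive field
            nn.Conv2d(in_ch, 16, 1), nn.ReLU(),
            nn.Conv2d(16, 24, 3, padding=1), nn.ReLU(),
            nn.Conv2d(24, 24, 3, padding=1), nn.ReLU())
        self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 16, 1), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate all branch outputs along the channel dimension
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

block = InceptionV2Block(1)
out = block(torch.randn(8, 1, 28, 28))
print(out.shape)  # torch.Size([8, 80, 28, 28])
```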
The Coordinate Attention (CA) Mechanism
The attention mechanisms commonly employed in constructing convolutional neural networks include the SE attention mechanism (which applies attention along the channel dimension), the ECA attention mechanism (which applies channel-wise attention weighting), the CBAM attention mechanism (composed of both channel and spatial attention), and the CA attention mechanism. When computing channel attention, the first three mechanisms typically use global max pooling/average pooling, resulting in a loss of spatial information about objects. The CA mechanism, however, integrates positional information into channel attention, thereby circumventing this issue. The schematic diagram illustrating its operational principle is depicted in Figure 4. The steps of the CA attention mechanism are as follows:
1. The input feature map is globally average pooled along both the width and height directions to obtain feature maps for each dimension;
2. The two parallel stages are merged by transposing the width and height onto the same dimension and stacking them together; this yields a feature layer of size [C, 1, H + W], which undergoes further processing using convolution + normalization + activation functions to extract additional features;
3. The number of channels is adjusted by employing a 1 × 1 convolution, followed by the application of the sigmoid function to obtain attention weights along the width and height dimensions. These weights are then multiplied with the original features to produce the output.
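A compact PyTorch sketch of the three CA steps may clarify the mechanism; the reduction ratio and module layout are assumptions, since the paper does not list the exact CA hyperparameters.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Minimal coordinate attention: pool along H and W separately, encode the
    concatenated descriptors, then re-weight the input per direction."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(channels // reduction, 4)
        self.encode = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU())
        self.attn_h = nn.Conv2d(mid, channels, 1)
        self.attn_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Step 1: global average pooling along each spatial direction
        ph = x.mean(dim=3, keepdim=True)                      # (n, c, h, 1)
        pw = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (n, c, w, 1)
        # Step 2: stack on one dimension and extract shared features
        y = self.encode(torch.cat([ph, pw], dim=2))           # (n, mid, h + w, 1)
        yh, yw = torch.split(y, [h, w], dim=2)
        # Step 3: 1x1 conv + sigmoid give per-direction attention weights
        ah = torch.sigmoid(self.attn_h(yh))                       # (n, c, h, 1)
        aw = torch.sigmoid(self.attn_w(yw.permute(0, 1, 3, 2)))   # (n, c, 1, w)
        return x * ah * aw

ca = CoordinateAttention(80)
print(ca(torch.randn(8, 80, 28, 28)).shape)  # torch.Size([8, 80, 28, 28])
```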
The capsule neural network employs a dynamic routing mechanism to replace the conventional max pooling technique of convolutional neural networks, thereby circumventing the reduced recognition rates that result from information loss in pooling layers. Algorithms based on this network model have been enhanced here to more effectively exploit fault information in rolling bearings.
An Enhanced Approach for Fault Diagnosis in Capsule Networks
Firstly, when converting to a two-dimensional feature image input, the lack of effective features poses a challenge. To enhance the computational efficiency of the network, we introduce the Inception V2 module to augment the width of the network model. This involves replacing a large 5 × 5 convolution kernel with two smaller 3 × 3 convolution kernels, thereby expanding the receptive field and reducing the parameter count while circumventing expression bottlenecks and enhancing linear expressive power. Additionally, incorporating a 1 × 1 convolution kernel in this structure helps reduce computation without compromising the output.
Secondly, to minimize information loss during training, we incorporate the CA attention mechanism, which comprehensively attends to both the channel information and the spatial positional information of the model. With regard to network depth, we adopt GoogleNet's approach by integrating multiple identical Inception V2 modules (with unchanged inputs and outputs). We append seven layers of Inception structures to increase the network depth and refine the model accuracy.
Finally, this model is designed as a dual-branch [20] network so as to separately extract feature information for the different fault types and damage diameters of rolling bearings through the two branches. By fully leveraging the fault information and mitigating the feature extraction challenges posed by a single branch, it enhances the generalization ability of the model. The enhanced version of the capsule neural network is illustrated in Figure 5.
Process for Diagnosing Faults in Rolling Bearings
The fault diagnosis flowchart of the proposed dual-branch convolutional capsule neural network is illustrated in Figure 6. The specific steps are outlined as follows:
Step 1: Establishment of the original dataset, encompassing vibration signals from rolling bearings in various states, including different fault types and damage diameters. This comprehensive dataset facilitates the verification of the model's dual-label detection capability and diagnostic accuracy.
Step 2: Dataset preprocessing, which converts the raw vibration signals from rolling bearings into grayscale images by randomly sampling segments of consecutive data points. This step transforms the data into a two-dimensional image dataset that serves as input for the dual-branch convolutional capsule neural network.
Step 3: Training of the diagnostic model, which includes dividing the two-dimensional image dataset into training and testing sets at a specific ratio. The training set is utilized to train the diagnostic model.
Step 4: Model testing, which entails conducting multiple experiments using test samples to validate the efficacy of the proposed model.
Step 5: To assess the generalization performance of the model, the framework trained on publicly available datasets is used to train and validate data obtained from actual experiments. Continuous optimization of the model is performed based on real-world conditions, enabling subsequent diagnostics and decision-making tasks.
Establishment of the CWRU Bearing Dataset
The experimental data utilized in this study were obtained from the rolling bearing dataset provided by Case Western Reserve University (CWRU) [21]. The fault bearings examined on the test rig include the drive-end bearing (SKF6205) and the fan-end bearing (SKF6203). Vibration signals were acquired using a 16-channel data recorder with a sampling frequency of 12 kHz. Furthermore, for the drive-end bearing, additional data were recorded at a sampling frequency of 48 kHz, encompassing measurements taken at speeds of 1730 r/min, 1750 r/min, 1772 r/min, and 1797 r/min. Figure 7 illustrates the experimental setup. The dataset comprises ten types of data: one normal type and nine fault types. The nine fault types consist of inner race, outer race, and rolling element faults with a diameter of 0.18 mm, together with inner race, outer race, and rolling element faults with diameters of 0.36 mm and 0.54 mm, respectively. One-hot encoding is utilized to categorize the classes using a ten-digit vector label representation. Based on the fault type (inner race, outer race, or rolling element) and damage diameter (0.18 mm, 0.36 mm, or 0.54 mm), each state is transformed from its original ten-dimensional feature vector into two four-dimensional feature vectors using binary one-hot encoding, where label 1 corresponds to the fault type and label 2 corresponds to the damage diameter. The fault types are classified as normal, inner race fault, outer race fault, or rolling element fault, whereas the damage diameters are categorized as normal (0.00 mm), 0.18 mm, 0.36 mm, or 0.54 mm. The dual-branch convolutional capsule neural network combines features extracted from both branches in four dimensions instead of ten in order to enhance its adaptive feature extraction capability and thereby improve diagnostic performance in identifying the different states. Table 1 presents the experimental samples and labels.
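The dual-label scheme can be illustrated with a short Python sketch that splits each ten-class state into the two four-dimensional one-hot labels described above; the state naming follows the 'ball_18'-style convention used in the tables, and the class orderings are assumptions.

```python
import numpy as np

# Ten CWRU states (fault type x damage diameter), named as in the tables
STATES = ["normal", "ball_18", "ball_36", "ball_54", "inner_18", "inner_36",
          "inner_54", "outer_18", "outer_36", "outer_54"]
FAULT_TYPES = ["normal", "inner", "outer", "ball"]   # label 1 (4 classes)
DIAMETERS = ["0.00", "0.18", "0.36", "0.54"]         # label 2 (4 classes), mm

def dual_labels(state: str):
    """Split one ten-class state into two four-dimensional one-hot labels."""
    if state == "normal":
        kind, diam = "normal", "0.00"
    else:
        kind, d = state.split("_")
        diam = f"0.{d}"
    label1 = np.eye(4)[FAULT_TYPES.index(kind)]      # fault type branch
    label2 = np.eye(4)[DIAMETERS.index(diam)]        # damage diameter branch
    return label1, label2

l1, l2 = dual_labels("ball_18")
print(l1, l2)  # [0. 0. 0. 1.] [0. 1. 0. 0.]
```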
The term 'ball_18' in the aforementioned table denotes a rolling element fault category with a damage diameter of 0.18 mm. The chosen vibration signals originate from the drive-end bearing (SKF6205) and were acquired at a sampling frequency of 12 kHz. The dataset comprises ten distinct categories, wherein each grayscale image sample encompasses a contiguous set of 784 data points. Every category undergoes independent random sampling of 500 instances per rotational speed (there are four rotational speeds), yielding 2000 samples for that category. Overall, there are twenty thousand samples distributed among the ten categories, and every individual sample is assigned two labels through the one-hot encoding methodology. (Table 1 columns: Data Type, The Original Label, Label 1, Label 2.) Figure 8 shows an example grayscale sample image. For enhanced computational efficiency and streamlined extraction of crucial features, all samples are standardized to dimensions of 28 × 28. During the experiments, the training set and test set are randomly divided at a ratio of 4:1 using a fixed random seed to ensure consistent results for each division. Taking the 12 kHz drive-end data from the CWRU database as an example, we categorized the rolling bearing fault types and damage diameters into ten distinct categories. To ensure comprehensive capture of the characteristic information of the bearing, the number of sampling points was set to 1024, and Figure 9 illustrates the resulting time domain waveforms.
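A small sketch of the sampling and splitting procedure described above, assuming a particular seed value (the paper only states that a fixed seed is used):

```python
import numpy as np

def make_dataset(signal: np.ndarray, n_samples: int = 500,
                 window: int = 784, seed: int = 42):
    """Randomly sample fixed-length windows from one recording and split 4:1.

    The window length (784 = 28*28) and per-speed sample count follow the
    text; the seed value itself is an assumption.
    """
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, len(signal) - window, size=n_samples)
    samples = np.stack([signal[s:s + window] for s in starts])
    idx = rng.permutation(n_samples)
    n_train = int(n_samples * 4 / 5)      # 4:1 train/test ratio
    return samples[idx[:n_train]], samples[idx[n_train:]]

train, test = make_dataset(np.random.randn(120_000))  # stand-in recording
print(train.shape, test.shape)  # (400, 784) (100, 784)
```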
Model Parameter Configuration
The experimental setup for this study involved the PyTorch deep learning framework on a Windows 10 operating system, with an Intel(R) Core(TM) i5-8300H CPU, an NVIDIA GeForce GTX1050TI GPU, and 16 GB of RAM. PyCharm was used as the editing tool, and Python version 3.7.6 was employed for implementation. During model training, optimization was performed using the Adam optimizer. The activation function for the convolutional layers was set to ReLU, while the capsule layers utilized the squash function as their activation mechanism. The hyperparameters were configured as follows: batch size = 16, maximum number of epochs = 50, learning rate = 0.001, and early stopping loss threshold = 0.0001. The primary capsule layer had a capsule vector dimension of 32, whereas the advanced capsule layer had a dimension of 256. Please refer to Table 2 for the specific parameter settings of our model. The output size of a capsule unit in Table 2 is denoted as (6 × (8)), indicating that the feature layer has a width of six and each vector possesses eight dimensions. In the capsule layer, dynamic routing is employed to transform scalar features from the previous convolutional outputs into feature vectors, which are subsequently operated on between capsule layers. Similarly, (16 × (10)) represents sixteen vectors with ten dimensions each, while (10 × (8)) signifies ten vectors with eight dimensions each. To leverage the strong fitting capability of capsule networks, dropout is incorporated in the capsule layer during training by randomly deactivating neurons at each iteration and freezing their weights. This strategy reduces network complexity and mitigates overfitting.
Analysis of Experimental Results for CWRU Bearing Data
Based on the parameters set in Section 4.2, Figure 10a,b depict the convergence curves of the loss rate and accuracy on the test set. These curves demonstrate that while the fault type branch (Figure 10a) converges after only 15 training epochs, the damage diameter branch takes up to 30 epochs to fully converge. The accuracy of the fault type branch steadily increases from 70.53% to 99.88%, with its loss rate decreasing from 0.0322 to 0.0053. The fitting of the damage diameter branch (Figure 10b) is slower, reaching a maximum accuracy of 99.72% at the last training epoch, with a final loss rate of 0.0055 from an initial value of 0.0382. This confirms that the proposed dual-branch convolutional capsule neural network model is effective when the specialized binary one-hot encoded labels are input into both branches simultaneously. The confusion matrices of the test set for one experiment are illustrated in Figure 11a,b. The horizontal axis represents the predicted fault type (damage diameter) labels, while the vertical axis represents the actual fault type (damage diameter) labels. Each row indicates the number of samples correctly classified as a specific fault type (damage diameter) or incorrectly classified as another, and each column denotes the number of test samples predicted for each fault type (damage diameter).
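Since the capsule layers use the squash function as their activation, a one-function PyTorch sketch of the standard squash nonlinearity is included here for reference; it follows the common CapsNet formulation rather than any code released with this paper.

```python
import torch

def squash(s: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    """Squash activation used in capsule layers:
    v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||),
    shrinking short vectors toward 0 and long vectors toward unit length."""
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / torch.sqrt(sq_norm + eps)

# Example: a batch of 16 samples, 6 primary capsules of 8 dimensions each
caps = torch.randn(16, 6, 8)
print(squash(caps).norm(dim=-1).max().item())  # always < 1
```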
Experimental Evaluation of Noise Addition for Performance Testing
During operation, rolling bearings are susceptible to varying levels of external noise interference. To evaluate the proposed model's anti-noise performance and generalization capability, Gaussian white noise with a specific signal-to-noise ratio (SNR) was introduced into the original samples. The SNR serves as a crucial indicator for assessing the level of noise present in a signal and is defined as follows:
SNR = 10 · lg(P_signal / P_noise) (2)
In the above equation, P_signal represents the effective power of the signal and P_noise represents the power of the noise. Taking the outer race fault signal at a rotational speed of 1750 r/min as an example, Gaussian white noise with an SNR of 3 dB is introduced into this signal. To visually demonstrate the noise addition process, the original signal, the noise signal, and the post-noise-added signal are plotted individually in the time domain. As depicted in Figure 12, upon introducing noise to the outer race fault signal, its temporal characteristics become submerged within the background noise, making it exceedingly difficult to discern the original waveform features.
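Equation (2) can be inverted to add Gaussian white noise at a target SNR, as in the following NumPy sketch; the seed and the stand-in signal are illustrative.

```python
import numpy as np

def add_noise(signal: np.ndarray, snr_db: float, seed: int = 0) -> np.ndarray:
    """Add Gaussian white noise at a target SNR, following Eq. (2):
    SNR = 10 * lg(P_signal / P_noise)."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal ** 2)                  # effective signal power
    p_noise = p_signal / (10 ** (snr_db / 10.0))     # power the noise must have
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

clean = np.sin(np.linspace(0, 100, 12_000))  # stand-in for an outer-race signal
noisy = add_noise(clean, snr_db=3.0)         # 3 dB condition from the text
```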
In the noise addition experiment, various signal-to-noise ratios (−3 dB, 3 dB, 6 dB, and 9 dB) were employed to comprehensively assess the model's robustness against diverse noise levels. Furthermore, the diagnostic accuracy of the model was evaluated separately under two conditions: dataset 1 (training set without noise, testing set with noise) and dataset 2 (both training and testing sets with noise). The recognition accuracy for the fault type and damage diameter branches is presented in Table 3. According to the comparison in Table 3, dataset 2 generally exhibits an accuracy rate roughly 1-2% higher than dataset 1 across the different signal-to-noise ratios. This observation suggests that dataset 2 (training set with noise, testing set with noise) is more suitable for the proposed method. Specifically, at a −3 dB signal-to-noise ratio, dataset 2 achieves an accuracy rate of approximately 96%. Furthermore, when compared to the −3 dB condition, at a +3 dB signal-to-noise ratio the fault type branch and the damage diameter branch improve their respective accuracy rates by approximately 1.34% and 0.74%. In this study, multi-scale convolution was employed in the pooling layer to extract comprehensive information from the fault data samples. Notably, at noise levels above 6 dB, the fault type branch demonstrates an accuracy rate exceeding 99%, while the damage diameter branch attains an accuracy rate surpassing 98%. These results indicate that the proposed method is robust against high levels of noise.
Comparative Validation of Diagnostic Efficacy across Diverse Algorithms
The proposed model is validated for its diagnostic performance and compared with traditional single-label methods, using binary one-hot encoding (two labels) on the dataset. Various algorithms are employed to diagnose the CWRU dataset. The algorithm model proposed in this paper builds on both the CNN and CapsNet architectures; therefore, several models based on these networks are selected for comparison to demonstrate the superiority of the dual-branch convolutional capsule neural network model. As the method described in this paper adopts a dual-branch approach, both the accuracy and loss rates are calculated by averaging the results from the two branches. Figure 13 illustrates the comparison of recognition accuracy among the different methodologies.
1. In reference [22], a single tag is utilized for data labeling. The vibration signals undergo a short-time Fourier transform, and the resulting time-frequency feature maps are input into a convolutional neural network (CNN) with parameters tailored for fault diagnosis. The result is method 1 in Figure 13;
2. In reference [23], data are annotated using a single tag. One-dimensional signals are converted into grayscale images and combined with a CNN for rolling bearing fault diagnosis. The result is method 2 in Figure 13;
3.
In reference [24], data are annotated using a single tag, followed by feature extraction through a two-dimensional convolutional layer. The extracted features are then fed into capsule layers for fault diagnosis, where both the primary and digit capsule layers employ dynamic routing algorithms to transform the feature vectors. The result is method 3 in Figure 13;
4. In reference [25], a single label is employed to annotate the data, while one-dimensional temporal signals are fed into a capsule neural network for feature extraction. The fault diagnosis task is accomplished by leveraging two convolutional layers within the capsule neural network. The result is method 4 in Figure 13;
5. In reference [26], data are annotated with a single tag, and a TF-RCNN model based on time-frequency regions is proposed. This model leverages multiple regions characterized by TFR features, while also incorporating an attention module to enhance the classification efficacy for different types through advanced classification strategies. The result is method 5 in Figure 13;
6. Reference [27] uses a single tag for labeling and proposes a multi-ensemble approach for rolling bearing fault diagnosis based on deep autoencoders (DAE). Multiple DAEs are trained with different activation functions to extract type-specific features, which are then merged into a feature pool. The final result is determined through majority voting among the classifiers of each sample set. The result is method 6 in Figure 13;
7. In reference [28], data were annotated with a single label, and an enhanced AlexNet model was proposed for the diagnosis of rolling bearings. The optimal pre-training was determined based on the classification diagnostic rate, and a modified calculation model was selected to reduce the parameter count and mitigate overfitting. Superior classification results were achieved by incorporating mixed concepts using classifiers. The result is method 7 in Figure 13.
The parameters used for the methods above are consistent with those in the original publications. For the CWRU dataset (with experiments conducted at a speed of 1750 r/min), training was iterated 50 times with a 7:3 dataset split. The diagnostic results are depicted in Figure 13. While previous studies [22-28] employed a single-label approach to identify the fault positions and damage levels of rolling bearings, this paper proposes a dual-label marking method that takes into account both the fault types and damage diameters of rolling bearings. The proposed method improves accuracy by 2.17%, 2.98%, 0.9%, 3.12%, 0.19%, 3.36%, and 1.06%, respectively, compared to the seven methods above, and it also has the lowest loss rate.
Verification of the Model's Generalization Performance
Due to the idealized nature of publicly available datasets, rolling bearings in real operating environments are exposed to various factors during data collection, such as working conditions, temperature fluctuations, surrounding environmental noise, and electromagnetic interference. To further validate the generalization performance of the model on diverse datasets, we conducted data collection using our laboratory's mechanical transmission simulation test rig. The test rig structure and sampling equipment are illustrated in Figure 14. The term 'ball_18' in Table 4 represents a rolling element failure with a damage diameter of 0.18 mm. Similarly, 'inner_36' indicates an inner ring failure with a damage diameter of 0.36 mm, while 'outer_54' refers to an outer ring failure with a damage diameter of 0.54 mm. The last column presents the data obtained under loaded and unloaded conditions, respectively. All fault bearings in this dataset were artificially damaged using electric spark cutting. Single-point faults with diameters of 0.18 mm, 0.36 mm, and 0.54 mm were intentionally placed on the inner race (inner), outer race (outer), and rolling elements (ball), respectively, while another category represents normal data without any discernible faults. The fault bearings used in the laboratory are illustrated in Figure 15 below. The convergence curves of the model's loss values and accuracy on our self-made dataset are illustrated in Figure 16. From these plots, it can be observed that the fault type branch (Figure 16a) essentially converges after 18 training iterations, while the damage diameter branch requires up to 28 iterations for complete convergence. The accuracy of fault type classification steadily increases from 54.55% to 95.55%, accompanied by a decrease in the loss rate from 0.0236 to 0.0020. The fitting of the damage diameter branch is not as rapid but improves steadily, reaching a maximum accuracy of 92.88%, with the corresponding loss rate decreasing from 0.0252 to 0.0030.
The accuracy on the self-made dataset's test set is only 94.25%, primarily because the vibration signals collected in the laboratory were converted directly into two-dimensional grayscale images without prior noise reduction or filtering. Nevertheless, compared with the accuracy achieved in Section 4.4's experiment at a signal-to-noise ratio of −3 dB, the self-made dataset shows an average decrease of only about 1%. Consequently, the method exhibits commendable generalization performance across diverse datasets.

Conclusions

This article investigates a diagnosis method for rolling bearings using a dual-branch convolutional CapsNet and presents the following conclusions:
1. The article proposes a novel diagnostic model for rolling bearings that identifies both fault type and damage diameter through a dual-branch structure. By effectively leveraging fault information to extract more intricate features, it significantly enhances diagnostic accuracy. Moreover, the adoption of a one-hot binary labeling method reduces dimensionality and facilitates feature extraction in each branch while maintaining high precision;
2. The model was validated on the CWRU bearing dataset and a self-made dataset. The experimental results demonstrate that both branches of the model exhibit high accuracy, achieving an average accuracy of 99.8% per branch on the CWRU dataset and 94.25% per branch on the self-made dataset. In comparison to four other fault diagnosis algorithm models in the existing literature, this model demonstrates a superior fault recognition rate and provides more comprehensive diagnostic information;
3. The model's robustness against noise and its superior generalization ability are demonstrated through experiments involving noise addition and the evaluation of generalization performance.

Future work will require further optimization of the methods outlined in this paper to enhance the stability and universality of network parameter selection under real-world operating conditions.

Figure 4. CA mechanism of attention.
Figure 5. The model of the dual-branch convolutional capsule neural network for rolling bearings.
Figure 7. Test bench and rolling bearing.
Figure 10. Branch fitting curves (CWRU). (a) Fault-type branch fitting curve. (b) Damage-diameter branch fitting curve.
Figure 11. Confusion matrices of the test set in a representative experiment (a,b). The horizontal axis represents the predicted fault-type (damage-diameter) labels, while the vertical axis represents the actual fault-type (damage-diameter) labels. Each row indicates the number of samples correctly classified as a specific fault type (damage diameter) or misclassified as other fault types (damage diameters) within the test set. Each column denotes the number of test samples predicted for each fault type (damage diameter).
In the SNR equation, SNR (dB) = 10 log10(Psignal/Pnoise), Psignal represents the signal's effective power and Pnoise represents the power of the noise.
Figure 12. Time domain of different SNR states. (a) Original signal (the fluctuations are relatively moderate). (b) Noise signal (the noise exhibits substantial fluctuations). (c) Signal after noise addition (the time series is overwhelmed by noise).
Figure 13. Identification accuracy and loss rate of each method.
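For reference, a minimal sketch of injecting Gaussian noise at a target SNR according to the definition above; the sampling rate and test signal are illustrative assumptions, not the paper's data.

```python
import numpy as np

def add_noise(signal, snr_db, rng=None):
    """Add white Gaussian noise to a 1-D vibration signal so that the result
    has the requested SNR, using SNR_dB = 10*log10(P_signal / P_noise)."""
    rng = rng or np.random.default_rng(0)
    p_signal = np.mean(signal ** 2)             # effective (mean-square) power
    p_noise = p_signal / 10 ** (snr_db / 10)    # noise power for the target SNR
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

t = np.linspace(0, 1, 12000, endpoint=False)    # 1 s at 12 kHz (illustrative)
clean = np.sin(2 * np.pi * 157 * t)             # stand-in vibration signal
noisy = add_noise(clean, snr_db=-3)             # the -3 dB case studied above
```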
Figure 14. Experimental bench structure. The experimental setup primarily comprises a motor, a drive shaft, a coupling, and a test bearing. Additionally, it incorporates data acquisition equipment and bidirectional-channel acceleration vibration sensors for capturing vibration signals from diverse bearing types. The measured data are obtained from the outermost end of the base, where a deep groove ball bearing of model 6205 is installed. According to the GB/T 276-94 standard, this bearing has an inner diameter (d) of 25 mm, an outer diameter (D) of 52 mm, and a thickness (B) of 15 mm. The sampling frequency for this experiment is set at 12.8 kHz while maintaining the motor speed at 750 r/min. To ensure adequate collection time for each fault type, each sampling duration is not less than 50 s.

Table 4 column glossary. Data types: the different fault types of rolling bearings. Training set (quantity): the number of samples of this type in the training set. Test set (quantity): the number of samples of this type in the test set. Loaded/unloaded: the loading condition of this type of rolling bearing.

Figure 15. Fault bearings in the laboratory.
Table 1. The experimental samples and labels.
Table 2. The structural parameter design.
Table 3. Noise experimental model accuracy.
Table 4. Laboratory homemade datasets and their construction.
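Since the text above notes that raw vibration signals are converted directly into two-dimensional grayscale images before entering the network, here is a minimal sketch of one common conversion (row-major reshape plus min-max scaling). The image size and normalization are assumptions; the paper does not spell out its exact settings here.

```python
import numpy as np

def signal_to_grayscale(signal, size=64):
    """Convert a 1-D vibration-signal segment into a size x size grayscale
    image by row-major reshaping and min-max normalization to [0, 255]."""
    segment = np.asarray(signal[: size * size], dtype=np.float64)
    img = segment.reshape(size, size)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return (img * 255).astype(np.uint8)

rng = np.random.default_rng(0)
image = signal_to_grayscale(rng.normal(size=4096))  # 4096 = 64*64 samples
print(image.shape, image.dtype)                     # (64, 64) uint8
```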
Domain walls in 4d $\mathcal{N}$ = 1 SYM

4d $\mathcal{N}$ = 1 super Yang-Mills (SYM) with simply connected gauge group G has h gapped vacua arising from the spontaneously broken discrete R-symmetry, where h is the dual Coxeter number of G. Therefore, the theory admits stable domain walls interpolating between any two vacua, but it is a nonperturbative problem to determine the low energy theory on the domain wall. We put forward an explicit answer to this question for all the domain walls for G = SU(N), Sp(N), Spin(N) and $G_2$, and for the minimal domain wall connecting neighboring vacua for arbitrary G. We propose that the domain wall theories support specific nontrivial topological quantum field theories (TQFTs), which include the Chern-Simons theory proposed long ago by Acharya-Vafa for SU(N). We provide nontrivial evidence for our proposals by exactly matching renormalization group invariant partition functions twisted by global symmetries of SYM computed in the ultraviolet with those computed in our proposed infrared TQFTs. A crucial element in this matching is constructing the Hilbert space of spin TQFTs, that is, theories that depend on the spin structure of spacetime and admit fermionic states — a subject we delve into in some detail.

1 Domain walls in 4d N = 1 SYM

4d N = 1 super Yang-Mills (SYM), that is, Yang-Mills theory with a massless adjoint fermion, is believed to share with QCD nonperturbative phenomena such as confinement, existence of a mass gap, and chiral symmetry breaking. 4d N = 1 SYM with simple and simply-connected gauge group G has h trivial vacua arising from the $\mathbb{Z}_{2h}$ chiral R-symmetry being spontaneously broken down to $\mathbb{Z}_2$, where h is the dual Coxeter number of G (see table 1). The vacua are distinguished by the value of the gluino condensate [1][2][3]

$\langle \operatorname{tr} \lambda\lambda \rangle = \Lambda^3 e^{2\pi i a/h}, \qquad a = 0, 1, \ldots, h-1.$

A supersymmetric domain wall that interpolates between two arbitrary vacua a and b at $x_3 \to \pm\infty$ can be defined. The $\mathbb{Z}_{2h}$ symmetry implies that the domain wall theory depends only on the difference between the vacua, on $n \equiv a - b \bmod h$. We denote the resulting 3d low energy theory on the wall by $W_n$ (see figure 1). While the n-wall tension is fixed by the supersymmetry algebra [4], it is a nonperturbative problem to determine the low energy (i.e. $E \ll \Lambda$) effective theory on the domain wall. A supersymmetric domain wall preserves 3d N = 1 supersymmetry and therefore a universal 3d N = 1 Goldstone multiplet describes the spontaneously broken translation and supersymmetry. The nontrivial dynamical question is whether anything else remains in the infrared, a topological quantum field theory (TQFT) or gapless modes, and, if so, which one(s). In this paper we put forward a detailed answer to this question for all the n-domain walls for G = SU(N), Sp(N), Spin(N) and $G_2$, and for n = 1 for arbitrary gauge group G. The proposal for G = SU(N) was put forward long ago by Acharya-Vafa [7] motivated by brane constructions.
We provide nontrivial new evidence for the SU(N) proposal and for all the new proposals in this paper. The case G = Spin(N) is particularly subtle and rich.

Our conjecture is that $W_n$ is the infrared phase of 3d N = 1 SYM with gauge group G and Chern-Simons level

$k = \frac{h}{2} - n. \quad (1.2)$

In other words, we propose the infrared description

$W_n = \text{infrared of 3d } \mathcal{N}=1\ G_k \text{ SYM}, \qquad k = \frac{h}{2} - n. \quad (1.3)$

Since the n and h − n domain walls are related by time-reversal (see figure 1), consistency of this proposal requires that the corresponding infrared phases must also be related by time-reversal, that is, by sending k → −k in the 3d theory. This requirement is indeed fulfilled by the identification between n and k in (1.2). Determining the infrared phase of 3d N = 1 SYM is also a nonperturbative problem. In [22] it was proposed that this theory flows in the infrared to a nontrivial TQFT. The domain wall theories, we conjecture, are the "quantum phases" put forward in [22][23][24]. This predicts the following domain wall theories:³

• G = SU(N). The n-domain wall theory is $W_n = U(n)_{N-n,N}$ Chern-Simons theory. This reproduces the proposal in [7].
• G = Sp(N). The n-domain wall theory is $W_n = Sp(n)_{N+1-n}$ Chern-Simons theory.
• G = Spin(N). The n-domain wall theory is $W_n = O(n)^1_{N-2-n,\,N-n+1}$ Chern-Simons theory. We review the construction of this TQFT in section 4.3.
• G = $G_2$. The theory has h = 4 vacua and two independent walls: n = 1, 2. The 2-domain wall theory is $W_2 = SO(3)_3 \times S^1$, with $SO(3)_3$ Chern-Simons theory and $S^1$ the 3d sigma model on the circle. For the n = 1 wall see below.

The expectation is that the n- and (h − n)-domain walls are related by time-reversal, that is, $W_{h-n} = \overline{W_n}$ (see figure 1). This is realized by virtue of the level-rank dualities of Chern-Simons theories [23,25,26] and the fact that time-reversal flips the sign of the Chern-Simons levels (1.4). The domain walls with n = h/2 are nontrivially time-reversal invariant. These TQFTs emerge in the infrared of 3d N = 1 $G_0$ SYM, which has vanishing Chern-Simons level and is time-reversal invariant. We also conjecture that:

• Arbitrary group G. The n = 1 domain wall theory connecting neighboring vacua is $W_1 = G_{-1}$ Chern-Simons theory. This is consistent with the proposals put forward above owing to the level-rank dualities $U(1)_N \leftrightarrow SU(N)_{-1}$, $Sp(1)_N \leftrightarrow Sp(N)_{-1}$ and $O(1)^1_{N-3,N} = (\mathbb{Z}_2)_N \leftrightarrow Spin(N)_{-1}$.

³ The notation $G_k$ for Chern-Simons theories refers to Chern-Simons theory with gauge group G at level k ∈ ℤ. The Chern-Simons theory $U(n)_{k,k'} \equiv \big(SU(n)_k \times U(1)_{nk'}\big)/\mathbb{Z}_n$ has two levels, and the theory based on O(n) has three levels (see section 4.3).

We subject the conjecture (1.3) to a number of nontrivial quantitative tests. We exactly match renormalization-group invariant partition functions computed in the 4d N = 1 domain walls with the corresponding partition functions computed in the proposed infrared 3d TQFTs. This lends nontrivial support to our domain wall proposals in 4d N = 1 SYM. We stress that one computation is performed using the 4d degrees of freedom, and the other using the proposed 3d TQFT degrees of freedom. The most basic partition function of the n-domain wall is the Witten index [27,28]

$I_n = \operatorname{tr}_{W_n} (-1)^F, \quad (1.5)$

where $\operatorname{tr}_{W_n}$ denotes the trace over the torus Hilbert space of $W_n$ with periodic boundary conditions, and $(-1)^F$ fermion parity. This partition function was first computed by Acharya and Vafa in [7] using the 4d N = 1 SYM fields. We introduce and compute additional partition functions on the domain wall theory where the Witten index is twisted by a global symmetry of SYM.
4d N = 1 SYM with gauge group G can have a charge conjugation zero-form symmetry C and a one-form symmetry Γ [29].⁴ Γ is the center of G, since the fermion in 4d N = 1 SYM is in the adjoint representation of the gauge group. The symmetries C and Γ do not commute when acting on Wilson lines, and combine into $S = \Gamma \rtimes C$ (see table 1). C acts on local operators and Wilson lines, and Γ on the Wilson lines of the theory. These symmetries are unbroken in each of the h vacua of 4d N = 1 SYM. S is the unbroken symmetry at each vacuum, while $\mathbb{Z}_{2h}$ is spontaneously broken to $\mathbb{Z}_2$. This allows us to define the following twisted Witten indices on the n-domain wall theory:⁵

$I^c_n = \operatorname{tr}_{W_n} (-1)^F c, \quad (1.6)$

where c ∈ C, and

$I^g_n = \operatorname{tr}_{W_n} (-1)^F g, \quad (1.7)$

where g ∈ Γ. Consistency of our conjecture requires that these partition functions, computed on either side of (1.3), match. We compute the Witten indices in terms of the 4d degrees of freedom in section 2, and in the 3d TQFTs in section 4. Computing the domain wall Witten indices on the 3d side of the proposal requires understanding the Hilbert space of spin TQFTs, and not merely counting the number of states on the torus, as has often been stated in the literature. We delve into the details of constructing the Hilbert space of spin TQFTs and determining the fermion parity of the states in section 3. The (twisted) Witten indices (1.5), (1.6), (1.7) map to twisted partition functions in the infrared spin TQFT. Importantly, the dimension of the Hilbert space and the index differ in general, as we shall see. In particular, the index sometimes vanishes in theories of interest. While the index can vanish, the twisted indices are non-vanishing, and supersymmetry on the domain wall is unbroken.

⁴ C is the outer automorphism group of the Dynkin diagram g of G while S is the outer automorphism group of the extended Dynkin diagram $g^{(1)}$ of the affine Lie algebra associated to G. The group Γ is defined as the quotient S/C, i.e., the symmetries of $g^{(1)}$ that are not symmetries of g.
⁵ One could also twist by any element cg ∈ S.

Table 1. Lie data for the simple Lie groups G:

    G  | SU(N) | Sp(N) | Spin(2N+1) | Spin(4N)  | Spin(4N+2)
    h  | N     | N+1   | 2N-1       | 4N-2      | 4N
    C  | Z_2   | 1     | 1          | Z_2       | Z_2
    Γ  | Z_N   | Z_2   | Z_2        | Z_2 × Z_2 | Z_4
    S  | D_N   | Z_2   | Z_2        | D_4       | D_4

Here h denotes the dual Coxeter number (defined as $\operatorname{tr}(t_{\mathrm{adj}} t_{\mathrm{adj}}) \equiv 2h\,(t,t)$, where (·,·) denotes the Killing form on g, normalized so that the highest root has (θ,θ) = 2). C, Γ are the zero-form and one-form symmetry groups of 4d N = 1 SYM with gauge group G, and $S = \Gamma \rtimes C$. $D_N$ denotes the dihedral group with 2N elements, and $S_N$ the symmetric group with N! elements. For SU(2) the zero-form symmetry group is trivial, and for Spin(8) the zero-form symmetry group is enhanced to $S_3$ and the total symmetry group to $S_4$. The $D_N$ symmetry of pure SU(N) YM was considered in [30].

We summarize here the results of our computations, performed both in terms of the 4d fields and the conjectured 3d topological degrees of freedom, for which we find perfect agreement. We find it convenient to organize the results into master partition functions, which are defined as the generating functions for the twisted Witten indices. In other words, we sum the (twisted) partition functions over all n-walls,

$Z^s(q) = \sum_{n} I^s_n\, q^n,$

where q is a fugacity parameter, and where s ∈ S is an element of the unbroken symmetry group. These partition functions have an elegant interpretation as twisted partition functions of a collection of free fermions in 0 + 1 dimensions with energies determined by the Lie data of G (see section 2).
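To make the free-fermion interpretation concrete, here is a minimal sketch (function names are illustrative) that expands the untwisted generating function $\prod_{i=0}^{r}(1 - q^{a_i^\vee})$ derived in section 2 and reads off the indices $I_n$ as coefficients:

```python
import numpy as np

def witten_indices(comarks):
    """Coefficients I_n of Z(q) = prod_i (1 - q^{a_i}): the n-wall index is
    the coefficient of q^n (r+1 free fermions with energies the comarks)."""
    poly = np.array([1], dtype=int)
    for a in comarks:
        factor = np.zeros(a + 1, dtype=int)
        factor[0], factor[-1] = 1, -1   # the factor 1 - q^a
        poly = np.convolve(poly, factor)
    return poly

# Extended-diagram comarks: SU(N) has N ones; G2 has (1, 2, 1).
print(witten_indices([1] * 4))    # SU(4): [ 1 -4  6 -4  1] -> |I_n| = C(4, n)
print(witten_indices([1, 2, 1]))  # G2:    [ 1 -2  0  2 -1] -> I_2 = 0
```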
Interestingly, the twisted partition function can be expressed as the untwisted partition function of an associated affine Lie algebra, whose extended Dynkin diagram is obtained by the "folding procedure" introduced in [31]. The master partition functions take a rather simple form:

• SU(N): here c denotes the non-trivial element of C = $\mathbb{Z}_2$, and g is any element of Γ = $\mathbb{Z}_N$, thought of as an N-th root of unity.
• Spin(N), N odd: here g denotes the non-trivial element of Γ = $\mathbb{Z}_2$.
• Spin(N), N even: here c denotes the non-trivial element of C = $\mathbb{Z}_2$.

Expanding these formulas in a series in q yields $I^s_n$ (see section 2). See also section 2.6 for the n = 1 domain wall twisted Witten indices for arbitrary simply-connected G.

The plan of the rest of the paper is as follows. In section 2 we review the calculation of the untwisted Witten index for general domain walls in 4d N = 1 SYM, develop the necessary tools to study the twisted indices, and present a detailed calculation thereof, for all the classical Lie groups. In section 3 we explain how the Hilbert space of a spin Chern-Simons theory is constructed and, in particular, how to determine the fermion parity $(-1)^F$ of the different states. In section 4 we use this refined understanding of spin Chern-Simons theories to compute the twisted partition functions of the 3d TQFTs that, conjecturally, describe the infrared dynamics of the domain walls, and show exact agreement. We end with some forward-looking comments in section 5. We delegate to appendices some technical details that are needed in the computation of the twisted partition functions in section 4 and some additional material.

2 Twisted Witten indices

In this section we study the twisted Witten indices on the 3d N = 1 domain walls, as computed in terms of the ultraviolet 4d degrees of freedom, namely the gluons and gluinos. This requires considering 4d N = 1 SYM on a two-torus and quantizing the space of zero energy states. This leads to a 2d N = (2,2) sigma model on the moduli space of flat G-connections on a two-torus, which is the weighted projective space $\mathbb{WCP}^r_{[a_0^\vee, a_1^\vee, \ldots, a_r^\vee]}$, where $a_i^\vee$ is the comark for the i-th node in the extended Dynkin diagram $g^{(1)}$ of the affine Lie algebra associated to G and r = rank(G) [32,33]. Just as 4d N = 1 SYM, this 2d theory also has h quantum vacua. A supersymmetric domain wall in 4d N = 1 SYM corresponds to a supersymmetric soliton in the 2d N = (2,2) sigma model [7].

Using the 2d N = (2,2) sigma model, Acharya and Vafa argued that the Witten index of the domain wall is encoded in the Hilbert space of r + 1 free fermions in 0+1 dimensions. Each fermion $\psi_i$ is associated to the i-th node of the extended Dynkin diagram $g^{(1)}$ of G and the energy of each fermion is $a_i^\vee$. The fermion Hilbert space is graded by the energy of the states: the sector $\mathcal{H}^n_F$ is spanned by occupation numbers $\lambda_i \in \{0,1\}$ with total energy

$\sum_{i=0}^{r} a_i^\vee \lambda_i = n. \quad (2.2)$

The Witten index of all n-domain walls is encoded in the partition function of the fermions with periodic boundary conditions on a circle, corresponding to a sum over all states weighted by the energy, $Z(q) = \sum_n I_n q^n$. This partition function is readily evaluated,

$Z(q) = \prod_{i=0}^{r} \left(1 - q^{a_i^\vee}\right), \quad (2.5)$

which implies, in particular, that the Witten index for the n and h − n walls agree up to a sign ($I_n = (-1)^{r+1} I_{h-n}$), since the fermionic Hilbert spaces for the n and h − n walls are related by particle-hole symmetry. This beautifully reproduces the expectation that the n domain wall and the h − n domain wall (cf. figure 1) are related to each other by time-reversal!

A symmetry s ∈ S of 4d N = 1 SYM acts in a simple way on the Wilson lines of the gauge theory.
A Wilson line is labeled by a representation of G with highest weight $\lambda \equiv \lambda_1 \omega_1 + \lambda_2 \omega_2 + \cdots + \lambda_r \omega_r$, where $\omega_i$ is the fundamental weight associated to the i-th node of the Dynkin diagram g. The Wilson line $W_i$ labeled by the fundamental weight $\omega_i$ transforms under c ∈ C as $c: W_i \to W_{c(i)}$, where $\omega_{c(i)}$ is the fundamental weight which is charge conjugate to $\omega_i$. An element g ∈ Γ acts by

$g: W_i \to \alpha_g(\omega_i)\, W_i, \quad (2.7)$

where $\alpha_g(\omega_i) \in \Gamma^*$ is the charge of $\omega_i$ under the center Γ of G. The action of a symmetry on the fundamental Wilson lines $W_i$ induces an action on the fermions $\psi_i$, which are labeled by a node in the extended Dynkin diagram $g^{(1)}$. We recall that C acts as an outer automorphism of g, S acts as an outer automorphism of $g^{(1)}$ and Γ = S/C.

We now proceed to compute the Witten index on the domain wall twisted by the symmetries of the system, S. This group is identified with the group of symmetries of the extended Dynkin diagram $g^{(1)}$, i.e., a given s ∈ S can be thought of as a permutation of the nodes i → s(i) that leaves the diagram $g^{(1)}$ invariant. The induced action on the effective 0+1 dimensional system of fermions is $s: \psi_i \to \psi_{s(i)}$, where $\psi_i$ is the fermion associated to the i-th node of $g^{(1)}$. Similarly, the twisted partition function computes the generating function of twisted indices,

$Z^s(q) = \sum_n I^s_n\, q^n = \operatorname{tr}\!\left[(-1)^F s\, q^E\right]. \quad (2.10)$

An efficient way to compute this partition function is as follows. Take the i-th fermion, and consider its orbit under s,

$\psi_i \to \psi_{s(i)} \to \psi_{s^2(i)} \to \cdots \to \psi_{s^{N_i}(i)} = \psi_i, \quad (2.12)$

where $N_i$ denotes the length of the orbit of the i-th node under the symmetry s, i.e., the minimal integer such that $s^{N_i}(i) \equiv i$. In the trace (2.10), the only configurations that contribute are those where the occupation number $\lambda_i$ in (2.2) is constant along the orbit, $\lambda_i = \lambda_{s(i)} = \cdots = \lambda_{s^{N_i-1}(i)}$. This means that we may restrict the sum over $\mathcal{H}^n_F$ in (2.2) to those configurations where this identity is satisfied. We enforce this by dropping all but one of these labels, and multiplying its energy by $N_i$, i.e., we replace (2.2) by a sum over one representative for each orbit, where r′ + 1 is the number of orbits of s, and $a'^{\vee}_i = N_i a_i^\vee$ is the combined energy of all the elements of the orbit of $\lambda_i$. With this, the twisted Witten index (1.6) on the domain wall can be computed as the untwisted partition function of r′ + 1 free fermions with energies $a'^{\vee}_i$,

$Z^s(q) = \prod_{i=0}^{r'} \left(1 - q^{a'^{\vee}_i}\right). \quad (2.15)$

Since $h = \sum_{i=0}^{r'} a'^{\vee}_i$ we see that $I^s_n = (-1)^{r'+1} I^s_{h-n}$, as required by time-reversal. Diagrammatically, twisting by a symmetry folds the Dynkin diagram $g^{(1)}$ according to the action of s on the nodes [31]. This yields a new affine Dynkin diagram, which has r′ + 1 < r + 1 nodes, and comarks $a'^{\vee}_i = N_i a_i^\vee$. The twisted Witten index is identical to the untwisted Witten index of the folded diagram.

A quick remark is in order. Let $\lambda_i$ be a node in the folded diagram, and let $N_i$ be the number of nodes in the original diagram that folded into $\lambda_i$. The node $\lambda_i$ is therefore a bound state of $N_i$ fermions, and thus has fermion parity $(-1)^F = (-1)^{N_i}$. Moreover, the symmetry s permutes these fermions, which generates an extra sign corresponding to the signature of the permutation. When the permutation is cyclic, which is the case relevant to this paper, the signature is just $(-1)^{N_i-1}$. All in all, the contribution of $\lambda_i$ to the twisted trace is $(-1)^F s = (-1)^{N_i + N_i - 1} = -1$. Therefore, in the folded diagram the node behaves as a regular fermion, just with more energy, and so (2.15) is correct as written: the fermionic signs are all taken care of automatically by the folding.
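A minimal sketch of the folding computation just described: the first function groups the nodes into orbits of s and assigns each orbit the combined energy $N_i a_i^\vee$, and the second expands (2.15). The function names and the permutation encoding are illustrative choices.

```python
def folded_comarks(comarks, perm):
    """Fold the extended Dynkin diagram: group nodes into orbits of the
    permutation s and give each orbit the combined energy N_i * a_i."""
    seen, folded = set(), []
    for i in range(len(comarks)):
        if i not in seen:
            j = i
            length = 0
            while j not in seen:
                seen.add(j)
                length += 1
                j = perm[j]
            folded.append(length * comarks[i])
    return folded

def expand(folded):
    """Coefficients I^s_n of Z^s(q) = prod_i (1 - q^{a'_i}), eq. (2.15)."""
    h = sum(folded)
    coeffs = [1] + [0] * h
    for a in folded:
        coeffs = [coeffs[n] - (coeffs[n - a] if n >= a else 0)
                  for n in range(h + 1)]
    return coeffs

# SU(4): comarks (1,1,1,1); the center element g^2 shifts the nodes by 2.
perm = [(i + 2) % 4 for i in range(4)]
print(folded_comarks([1, 1, 1, 1], perm))   # [2, 2]
print(expand([2, 2]))                       # [1, 0, -2, 0, 1]
```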
The twisted Witten index can also be computed by diagonalizing the action of s (2.12) by a direct sum of unitary transformations, one for each orbit, which is a symmetry of the collection of fermions. In this basis, s acts with eigenvalue $s_i$ on the i-th fermion, where $s_i$ is an $N_i$-th root of unity. The twisted partition function can, therefore, also be expressed as

$Z^s(q) = \prod_{i=0}^{r} \left(1 - s_i\, q^{a_i^\vee}\right). \quad (2.16)$

This also makes the action of time-reversal symmetry on the domain walls manifest, cf. $I^s_n = (-1)^{r+1} \det(s)\, I^s_{h-n}$, where $\det(s) = s_0 s_1 \cdots s_r = (-1)^{r+r'}$ is the parity of the permutation induced by s. We now discuss zero-form symmetries and one-form symmetries in turn.

Zero-form symmetries. 4d N = 1 SYM has charge conjugation symmetry C if and only if the (unextended) Dynkin diagram g of the Lie algebra of G has a symmetry. This corresponds to an outer automorphism of the Lie algebra of G (see table 1). Such a symmetry is present for the $A_r$, $D_r$, $E_6$ algebras, where C acts as a transposition (order-two permutation) on the nodes of the Dynkin diagram (with low-rank exceptions $A_1$, $D_4$). In this case, folding the diagram by C gives rise to what is usually called the twisted affine Dynkin diagram $g^{(2)}$ [34,35], which is constructed by identifying the nodes of g that are permuted by C (and adding the extending node). As C = $\mathbb{Z}_2$, the eigenvalues in (2.16) are trivial to determine: if a node i is fixed by c ∈ C, then its eigenvalue is $c_i = +1$. On the other hand, if the pair of nodes i, j are swapped, then the eigenvalues are ±1, which can be assigned as $c_i = +1$ and $c_j = -1$ (or vice-versa).

One-form symmetries. As discussed in section 1, 4d N = 1 SYM has a one-form symmetry group Γ given by the center of the gauge group G. Here, the eigenvalues $s_i$ in (2.16) have a very natural interpretation. An element g ∈ Γ acts as an outer automorphism of $g^{(1)}$, a permutation of the r + 1 nodes. Diagonalizing this permutation results in an eigenvalue $g_i$ on the fermion $\psi_i$ associated to the i-th node, which is the charge of the element of the center g ∈ Γ on the i-th fundamental weight of g [36][37][38][39][40], that is,

$g_i = \alpha_g(\omega_i). \quad (2.17)$

Note that this is precisely how g ∈ Γ acts on the ultraviolet Wilson loops $W_i$ (cf. (2.7)). We now proceed to compute the twisted Witten indices for G = SU(N), Sp(N), Spin(N), and $G_2$ respectively.

G = SU(N)

Consider the algebra $A_{N-1} = \mathfrak{su}_N$. The symmetries of this algebra are as follows:
• The group SU(N) has a $\mathbb{Z}_2$ zero-form symmetry, which corresponds to complex conjugation. It acts by interchanging the i-th node with the (N − i)-th node in g. The associated diagonal action can be chosen as follows: take $c_0 = +1$ for the extended node, $c_i = +1$ for the first half of the unextended nodes, and $c_i = -1$ for the second half.
• The group SU(N) has a $\mathbb{Z}_N$ one-form symmetry, whose associated charge is the N-ality (the number of boxes in the Young diagram modulo N). If g denotes a primitive root of unity, then a generic element $g^t \in \mathbb{Z}_N$ acts on the extended diagram $g^{(1)}$ as a cyclic permutation by t units, $\lambda_i \to \lambda_{i+t \bmod N}$. The center acts on a representation with highest weight λ by the phase $g^{t \sum_i i \lambda_i}$. (2.18) This result was also obtained, by an entirely different method, in [41].

We now move on to the twisted indices. Charge conjugation acts on the extended Dynkin diagram $A^{(1)}_{N-1}$ by folding it in half⁶ (in the diagrams, the blue node denotes the affine root and the integers denote the comarks $a_i^\vee$); the folded diagrams both have $\lfloor N/2 \rfloor + 1$ nodes.
From this we conclude that the folded diagram has r′ + 1 = (N + 1)/2 nodes for N odd and r′ + 1 = (N + 2)/2 nodes for N even. In the first case, one node has comark equal to 1, and the rest equal to 2; while in the second case, there are two nodes with comark 1, and the rest equal to 2. Using (2.15), the c-twisted partition function is⁷

$Z^c(q) = (1-q)\,(1-q^2)^{(N-1)/2}$ for N odd, and $Z^c(q) = (1-q)^2\,(1-q^2)^{(N-2)/2}$ for N even,

and, expanding, one obtains the c-twisted Witten indices (2.24); for instance, for N odd and n even, $I^c_n = (-1)^{n/2} \binom{(N-1)/2}{n/2}$.

⁶ For N odd, the (N ± 1)/2-th nodes would naively fold into a loop, which does not yield a valid Dynkin diagram. The correct folding is given by the theory of twisted Kač-Moody algebras [34,35]. We henceforth fold the diagrams following [34,35]. The only information we need from the diagram are the comarks.
⁷ In SU(2) the action of c is a gauge transformation and c is not a symmetry; indeed, $Z^c(q) = Z(q)$.

One can also compute the partition function in the diagonal basis, where $c_i = +1$ for the first half of the nodes, and $c_i = -1$ for the second half. Plugging this into (2.16) yields the same expression for the twisted partition function.

Let us now consider the partition function twisted by the Γ = $\mathbb{Z}_N$ one-form symmetry. If g denotes a primitive N-th root of unity, then a generic element $g^t \in \mathbb{Z}_N$ folds the extended diagram into the affine diagram of SU(gcd(N,t)), with gcd(N,t) nodes, each with energy N/gcd(N,t). This immediately yields the twisted partition function via (2.15),

$Z^{g^t}(q) = \left(1 - q^{N/\gcd(N,t)}\right)^{\gcd(N,t)}, \quad (2.26)$

and the twisted index (2.27) follows by expanding. Naturally, for t = 0 this reduces to the untwisted result. Alternatively, we may compute the same partition function in the diagonal basis. Using equations (2.16) and (2.18), the twisted partition function is given by

$Z^g(q) = \prod_{j=0}^{N-1}\left(1 - g^j q\right),$

essentially the product defining the so-called q-Pochhammer symbol. One may prove that this is in fact identical to (2.26). Expanding the product, the twisted index becomes

$I^g_n = (-1)^n\, g^{\frac{1}{2}n(n-1)}\, \binom{N}{n}_{\!g}, \qquad \binom{N}{n}_{\!g} := \frac{(g; g)_N}{(g; g)_n\, (g; g)_{N-n}}, \quad (2.29)$

where the term in parentheses denotes the so-called q-binomial coefficient. This is again identical to (2.27).

G = Sp(N)

Consider the algebra $C_N = \mathfrak{sp}_N$. The symmetries of this algebra are as follows:
• The group Sp(N) has no zero-form symmetry.
• The group Sp(N) has a $\mathbb{Z}_2$ one-form symmetry, whose charged representations are the pseudo-real ones. The non-trivial element g ∈ $\mathbb{Z}_2$ acts on the extended diagram by reversing the nodes $\lambda_i \to \lambda_{N-i}$. The center acts on a representation λ by the sign $(-1)^{\sum_i i\,\lambda_i}$. (2.30)

The comarks of $C^{(1)}_N$ are all equal to 1, so the untwisted partition function is $Z(q) = (1-q)^{N+1}$ and, expanding, the Witten index is $I_n = (-1)^n \binom{N+1}{n}$. (2.32)

4d Sp(N) N = 1 SYM has no charge conjugation symmetry. We can consider instead the index twisted by the Γ = $\mathbb{Z}_2$ one-form center symmetry, which acts on the extended Dynkin diagram by folding it in half (see footnote 6); the folded diagrams have $\lfloor N/2 \rfloor + 1$ nodes. From this we learn that the folded diagram has r′ + 1 = (N + 2)/2 nodes for N even and r′ + 1 = (N + 1)/2 nodes for N odd. In the first case, one of these nodes has energy equal to 1, and the rest equal to 2; while in the second case, they are all of energy 2.
Plugging this into (2.15), the one-form twisted partition function is

$Z^g(q) = (1-q)\,(1-q^2)^{N/2}$ for N even, and $Z^g(q) = (1-q^2)^{(N+1)/2}$ for N odd, (2.35)

and, expanding, one obtains the twisted Witten index (2.36), whose form depends on the parities of N and n. One can also compute the partition function in the diagonal basis, where $g_i = +1$ for the even nodes, and $g_i = -1$ for the odd ones. Plugging this into (2.16) yields the same expression for the twisted partition function.

G = Spin(2N + 1)

Consider the algebra $B_N = \mathfrak{so}_{2N+1}$. The symmetries of this algebra are as follows:
• The group Spin(2N + 1) has no zero-form symmetry.
• The group Spin(2N + 1) has a $\mathbb{Z}_2$ one-form symmetry, whose charged representations are the spinors. The non-trivial element g ∈ $\mathbb{Z}_2$ acts on the extended diagram by permuting the zeroth and first nodes, $\lambda_0 \leftrightarrow \lambda_1$. The center acts on a representation λ by the sign $(-1)^{\lambda_N}$, which means that the eigenvalues in (2.16) are $g_i = +1$ for i < N and $g_N = -1$.

Let us begin by computing the untwisted partition function. The comarks for $B^{(1)}_N$ are $a_0^\vee = a_1^\vee = a_N^\vee = 1$ and $a_i^\vee = 2$ otherwise. Plugging this into equation (2.5) we obtain the untwisted partition function

$Z(q) = (1-q)^3\,(1-q^2)^{N-2},$

and, expanding, the Witten index (2.39). Note that the index vanishes for N = 1 mod 4 and n = (N − 1)/2, and, by time-reversal, for n = h − n' with n' = (N − 1)/2, i.e. n = (3N − 1)/2. This clearly illustrates the crucial difference between the dimension of the Hilbert space and the index.

4d Spin(2N + 1) N = 1 SYM has no charge conjugation symmetry. We can consider instead the index twisted by the Γ = $\mathbb{Z}_2$ one-form center symmetry. The non-trivial element g ∈ $\mathbb{Z}_2$ folds the extended Dynkin diagram (2.40), and from this we learn that the folded diagram has r′ + 1 = N nodes, one of which has energy equal to 1, and the rest all energy equal to 2. Plugging this into (2.15), the one-form twisted partition function is

$Z^g(q) = (1-q)\,(1-q^2)^{N-1},$

and, expanding, one obtains the Witten index (2.42). One can also compute the partition function in the diagonal basis, where $g_i = +1$ for all the nodes except for the last one, which has $g_N = -1$. Plugging this into (2.16) yields the same expression for the twisted partition function.

G = Spin(2N)

Consider the algebra $D_N = \mathfrak{so}_{2N}$. The symmetries of this algebra are as follows:
• The group Spin(2N) has a $\mathbb{Z}_2$ zero-form symmetry. The corresponding charge is the chirality of the representation. This symmetry acts by permuting the last two nodes in the unextended Dynkin diagram. The associated diagonal action can be chosen as follows: take $c_i = +1$ for all but the last two nodes, and $c_{N-1} = +1$ and $c_N = -1$.
• The group Spin(2N) has a $\mathbb{Z}_2 \times \mathbb{Z}_2$ one-form symmetry if N is even, and $\mathbb{Z}_4$ if N is odd. They act on the extended Dynkin diagram as follows: one of the $\mathbb{Z}_2$'s for N even, and the $\mathbb{Z}_2$ subgroup of $\mathbb{Z}_4$ for N odd, acts as the permutation $\lambda_0 \leftrightarrow \lambda_1$ and $\lambda_{N-1} \leftrightarrow \lambda_N$, while fixing the rest of the Dynkin labels in the extended diagram. The other $\mathbb{Z}_2$ factor reverses the order of the extended Dynkin labels, while the generator of $\mathbb{Z}_4$ permutes the four comark-one end nodes cyclically and reverses the order of the rest of the Dynkin labels.

For N even, let $(g_1, g_2) \in \mathbb{Z}_2 \times \mathbb{Z}_2$; and, for N odd, let g ∈ $\mathbb{Z}_4$; all thought of as roots of unity. The center acts on a representation λ by phases that determine the eigenvalues in (2.16) (cf. (2.44)). The comarks of $D^{(1)}_N$ are $a_0^\vee = a_1^\vee = a_{N-1}^\vee = a_N^\vee = 1$ and $a_i^\vee = 2$ otherwise, so the untwisted partition function is $Z(q) = (1-q)^4\,(1-q^2)^{N-3}$ and, expanding, one obtains the Witten index (2.46). Note that the index vanishes when N is even and n corresponds to the time-reversal symmetric wall n = h/2 = N − 1. It also vanishes for the exceptional pairs (N, n) such that $2 + 4n + 2n^2 - 3N - 4nN + N^2 = 0$. Let us now consider the index twisted by charge conjugation.
Its action on the extended Dynkin diagram folds it into a diagram with N nodes, two of which have energy equal to 1, and the rest all energy equal to 2. Plugging this into (2.15), the zero-form twisted partition function is

$Z^c(q) = (1-q)^2\,(1-q^2)^{N-2},$

and, expanding, one obtains the twisted Witten index (2.49). One can also compute the partition function in the diagonal basis, where $c_i = +1$ for all the nodes except for the last one, which has $c_N = -1$. Plugging this into (2.16) yields the same expression for the twisted partition function.

Let us now consider the one-form-twisted partition functions. The symmetry depends on whether N is even or odd, which we consider in turn.

N even. Here the symmetry is $\mathbb{Z}_2 \times \mathbb{Z}_2$. The folded diagrams for $g_1$ and $g_2$ have N − 1 and N/2 + 1 nodes, respectively. The folding by $g_1 g_2$ is in fact identical to that of $g_2$, i.e., the second diagram. The twisted partition functions (2.52) follow by plugging the folded comarks into (2.15) and, expanding, give the twisted Witten indices (2.53).

N odd. Here the one-form symmetry is $\mathbb{Z}_4$. The comarks of the folded diagram are all 4 if we fold by a generator of $\mathbb{Z}_4$, and all 2 if we fold by the generator squared. The number of nodes is (N − 1)/2 in the first case, and N − 1 in the second case. The folded diagrams correspond to $C^{(1)}_{(N-1)/2}$ and $C^{(1)}_{N-1}$, respectively. If we let g denote a generator of $\mathbb{Z}_4$, the twisted partition functions are

$Z^g(q) = \left(1 - q^4\right)^{(N-1)/2}, \qquad Z^{g^2}(q) = \left(1 - q^2\right)^{N-1},$

and, expanding, one obtains the twisted Witten indices (2.56). As usual, one may also compute these partition functions in the diagonal basis. Using the phases (2.44) in (2.16) yields the same expressions for the twisted partition functions, as expected.

G = G2

$G_2$ has no zero-form or one-form symmetry. The comarks for $G_2^{(1)}$ are $a_0^\vee = a_2^\vee = 1$ and $a_1^\vee = 2$. Plugging this into equation (2.5) we obtain the untwisted partition function

$Z(q) = (1-q)^2(1-q^2) = 1 - 2q + 2q^3 - q^4,$

and, expanding, the Witten indices $I_0 = 1$, $I_1 = -2$, $I_2 = 0$, $I_3 = 2$. (2.58) Note that $I_3 = -I_1$, as expected from the action of time-reversal on domain walls.

Minimal wall for arbitrary gauge group

The domain wall theory for n = 1 admits a uniform description for all simply-connected groups, including the exceptional ones. Indeed, the only fermion configurations with total energy equal to 1, that is, the solutions to (2.2) with n = 1, are clearly of the form $\lambda_i = 1$ for one i such that $a_i^\vee = 1$, and $\lambda_j = 0$ for all j ≠ i. In other words, in each configuration there is only one excited fermion, which moreover necessarily has energy $a_i^\vee = 1$. All these configurations have the same fermion number, namely $(-1)^F = -1$, which means that the index is

$I_1 = -m_1,$

where $m_1$ denotes the number of nodes in the extended Dynkin diagram $g^{(1)}$ with comark equal to 1. The values of $m_1$ are given in the following table:

    G    | SU(N) | Sp(N) | Spin(2N+1) | Spin(2N) | G2 | F4 | E6 | E7 | E8
    m_1  | N     | N+1   | 3          | 4        | 2  | 2  | 3  | 2  | 1

Note that, for simply-laced G, $m_1$ is the order of Γ. The index twisted by a symmetry s ∈ S is

$I^s_1 = -m^s_1,$

where $m^s_1$ denotes the number of nodes in the extended Dynkin diagram $g^{(1)}$ with comark equal to 1 that are fixed by s. $m^s_1$ has already been computed for the classical groups SU(N), Sp(N), Spin(2N+1) and Spin(2N). For the exceptional groups, only $E_6$ and $E_7$ have a non-trivial symmetry group S (see table 1). In $E_6$, the zero-form charge-conjugation symmetry leaves invariant the extended node, which has comark 1, and permutes the other two nodes with comark 1. In $E_6$ and $E_7$, the one-form center symmetry permutes all the nodes with comark 1.
Therefore, letting c denote the non-trivial element of C, and g any non-trivial element of Γ, the indices are $I^c_1 = -1$ for $E_6$, and $I^g_1 = 0$ for both $E_6$ and $E_7$. The (twisted) indices for the exceptional groups $E_6$, $E_7$, $E_8$, $F_4$ and arbitrary n have been included in appendix B for completeness.

This concludes our discussion of the twisted Witten indices of the domain walls, as computed in terms of the 4d ultraviolet degrees of freedom. A rather nontrivial consistency test of our proposal is that the Witten indices on the domain walls we just computed are reproduced by the corresponding partition functions of our conjectured 3d TQFTs. Computing the image of the (twisted) Witten indices in these TQFTs is nontrivial and requires understanding in detail the Hilbert space of spin TQFTs and the action of $(-1)^F$ on it, a subject to which we now turn.

3 Hilbert space of spin TQFTs

The domain wall theory preserves 3d N = 1 supersymmetry and observables depend on the choice of a spin structure. Therefore, the TQFT that emerges in the deep infrared of the domain wall must also depend on a choice of a spin structure, that is, it must be a spin TQFT [42]. The data of a TQFT in 3d includes the set of anyons A (or Wilson lines) and the braiding matrix B : A × A → U(1) encoding their braiding. A spin TQFT is a TQFT that has an abelian⁸ line ψ that braids trivially with all lines in A and has half-integral spin.⁹ Transparency of ψ implies that it fuses with itself into the vacuum, that is, ψ × ψ = 1. Since ψ is transparent and has half-integral spin, the observables of a spin TQFT depend on the choice of a spin structure.

A spin TQFT can be constructed from a parent bosonic TQFT which has an abelian, non-transparent fermion ψ with ψ × ψ = 1, that is, a bosonic TQFT that has a $\mathbb{Z}^\psi_2$ one-form symmetry generated by a fermion [43][44][45][46][47]. The bosonic parent theory defines a spin TQFT upon gauging its $\mathbb{Z}^\psi_2$ one-form symmetry generated by ψ,

$\text{spin TQFT} = \frac{\text{bosonic TQFT}}{\mathbb{Z}^\psi_2}. \quad (3.1)$

This procedure is an extension of the notion of bosonic "anyon condensation" [48][49][50].¹⁰ Upon gauging, the fermion ψ in the parent bosonic theory becomes the transparent fermion ψ in the spin TQFT. The gauged one-form symmetry $\mathbb{Z}^\psi_2$ of the parent bosonic theory gives rise to an emergent zero-form symmetry $\mathbb{Z}_2$ in the spin TQFT that is generated by the fermion parity operator $(-1)^F$, and which acts on the "twisted sector". We will discuss the action of $(-1)^F$ on the Hilbert space of spin TQFTs shortly.

The lines of the parent bosonic theory A can be arranged as the disjoint union of two sets $A = A_{NS} \cup A_R$ according to their braiding with ψ. Lines in $A_{NS}$, by definition, braid trivially with ψ while lines in $A_R$ have braiding −1 with ψ. This partitions the lines of the bosonic TQFT according to their $\mathbb{Z}^\psi_2$ quantum number. The lines in each set can be organized into orbits of $\mathbb{Z}^\psi_2$, generated by fusion with ψ. The orbits can be either two- or one-dimensional. The lines in one-dimensional orbits are referred to as "Majorana lines" in that they can freely absorb the fermion ψ,

$\psi \times m = m. \quad (3.2)$

The Majorana lines, if any, are necessarily in $A_R$.¹¹ The lines of the bosonic parent theory thus split as $A = A_{NS} \cup A_R$ (3.3).

⁸ An abelian line is one that yields a single line in its fusion with any line in A.
⁹ A TQFT that has a line with half-integral spin which braids nontrivially with at least one line in the theory is not spin. There is no unambiguous way to assign a sign to the fermion as we move it around a circle since the phase it acquires depends on which lines link with the circle, unlike when the fermion is transparent.
¹⁰ More precisely, the parent bosonic TQFT must be attached to a suitable 4d SPT phase so that the combined system is non-anomalous, and the symmetry can be gauged.
¹¹ To prove this we compute the braiding of ψ with a Majorana line m and show that it necessarily equals $B(\psi, m) = e^{2\pi i (h_{\psi\times m} - h_\psi - h_m)} = e^{-2\pi i h_\psi} = -1$, where h denotes the spin of the lines and in the second equality we have used the defining relation for a Majorana line (3.2). See also [47].

The first set, referred to as the Neveu-Schwarz (NS) lines, is what is usually regarded as the set of Wilson line operators in the spin TQFT. The second set, the Ramond (R) lines, change the spin structure background. This decomposition will be useful shortly in the construction of the Hilbert space of the spin TQFT.

The Hilbert space of the spin TQFT on the spatial torus depends on the choice of spin structure. There are two equivalence classes of spin structures on the torus (or, more generally, on any Riemann surface): even and odd spin structures. Consider the even and odd spin structure Hilbert spaces $H_{NS\text{-}NS}$ and $H_{R\text{-}R}$. $H_{NS\text{-}NS}$ corresponds to choosing antiperiodic boundary conditions on the two circles while $H_{R\text{-}R}$ corresponds to periodic boundary conditions. The other two even spin-structure Hilbert spaces $H_{NS\text{-}R}$ and $H_{R\text{-}NS}$ can be obtained from $H_{NS\text{-}NS}$ by the action of the mapping class group. This group is a non-trivial extension of the modular group $SL_2(\mathbb{Z})$ by the $\mathbb{Z}_2$ fermion parity symmetry. It is known as the metaplectic group $Mp_1(\mathbb{Z})$. It does not preserve the individual spin structures but it does preserve their equivalence class. The Hilbert spaces of spin TQFTs realize a unitary representation of this group.

The states in the Hilbert space $H_B$ of a bosonic TQFT are constructed from the path integral on a solid torus by inserting lines M ∈ A along the non-contractible cycle [51]. This defines conformal blocks on the torus, represented pictorially in (3.4). The Hilbert space of the spin TQFT can be constructed from its definition as a quotient of the bosonic parent TQFT (3.1).¹² The states in $H_{NS\text{-}NS}$ are labeled by a ∈ $A_{NS}$ and are represented as in (3.5). The states in $H_{R\text{-}R}$ are constructed from conformal blocks of the bosonic parent TQFT on the torus and, in the presence of Majorana lines (3.2), from the once-punctured torus conformal blocks of the bosonic parent TQFT. By virtue of m being a Majorana line obeying the fusion rule ψ × m = m, the one-point conformal block on the torus with m along the cycle and ψ at the puncture is nontrivial, as it is allowed by the fusion rules. The states in $H_{R\text{-}R}$ are labeled by x, m ∈ $A_R$, and are represented as in (3.6).¹³

¹² More details of the explicit construction of the Hilbert space of spin TQFTs will appear elsewhere [52].
¹³ Unlike in bosonic anyon condensation, where a fixed line in the parent theory yields multiple states in the quotient theory, a Majorana line is in an irreducible representation of Cliff(1|1) and yields a unique state in the quotient (spin) TQFT.

Modular transformations preserve the odd spin structure, i.e., they map $H_{R\text{-}R}$ into itself. The negative sign in (3.6) guarantees that under modular transformations the states in $H_{R\text{-}R}$ are mapped into themselves. Note that the pair of lines a and a × ψ in the bosonic parent descend to a pair of lines in the spin TQFT, because these are distinct anyons, being distinguishable by their spin. On the other hand, the pair of states |a⟩ and |a × ψ⟩ descend to a single state in the spin TQFT.
Thus, while in a bosonic TQFT the number of states is the same as the number of lines, in a spin TQFT there are twice as many lines as there are states.

Our next task is to compute the action of fermion parity, i.e. $(-1)^F$, on the Hilbert space of the spin TQFT. The $\mathbb{Z}_2$ symmetry generated by $(-1)^F$ is the emergent zero-form symmetry that appears upon quotienting the parent bosonic theory by $\mathbb{Z}^\psi_2$ in (3.1). The charged states are therefore those constructed from the once-punctured torus in the bosonic theory,

$(-1)^F |m\rangle_{\text{spin}} = -|m\rangle_{\text{spin}}; \quad (3.7)$

$(-1)^F$ acts nontrivially on the ψ puncture in the once-punctured torus. Depending on the choice of spin structure on the "time" circle we can define $2^3 = 8$ partition functions for spin TQFTs ((3.8)-(3.9)),¹⁴ where O is an operator in the theory. We will be interested in the case when O is a symmetry of the TQFT. We note that $(-1)^F$ is only non-trivial in the R-R sector, because this is the only Hilbert space that may contain Majorana states. This is the most subtle and rich sector, and the one of interest as far as the twisted Witten indices are concerned.

¹⁴ − represents antiperiodic boundary conditions while + represents periodic boundary conditions.

Partition function of spin TQFTs

The twisted Witten index of the domain wall theory is computed by considering the odd spin structure on the spatial torus and periodic boundary conditions on the time circle. This implies that the twisted Witten indices, computed via the 4d ultraviolet fields, must be reproduced by appropriate odd-spin-structure partition functions of our conjectured infrared 3d spin TQFTs. In other words, a nontrivial check that our proposed infrared spin TQFTs describe the n-domain wall theories is proving that

$\operatorname{tr}_{H_{R\text{-}R}} (-1)^F s = I^s_n$

for symmetries s ∈ S. This requires, in particular, identifying the image of the symmetries s ∈ S in the infrared TQFT.

Let us begin by considering the untwisted partition function. Given the construction of the Hilbert space $H_{R\text{-}R}$ in (3.6) and the action of $(-1)^F$ in (3.7), we can compute the desired partition function as

$\operatorname{tr}_{H_{R\text{-}R}} (-1)^F = N_x - N_m, \quad (3.11)$

where $N_x$ denotes the number of length-2 orbits in $A_R$ and $N_m$ the number of Majorana lines.

As a first example, consider $SO(N)_1$:
• For N odd, $Spin(N)_1$ is the Ising category, which has three lines {1, σ, ψ}: the vacuum 1, the spin operator σ, and the energy operator ψ. These primaries have spins h = 0, N/16, 1/2, and fusion rules σ² = 1 + ψ, ψ² = 1, ψ × σ = σ. The theory $SO(N)_1$ is obtained by condensing ψ, that is, $SO(N)_1 = Spin(N)_1/\mathbb{Z}^\psi_2$. Using the fusion rules and the spins we see that 1, ψ are neutral under $\mathbb{Z}^\psi_2$ while σ is charged, and σ is a Majorana line.
• For N even, the parent $Spin(N)_1$ has lines {1, e, m, ψ}, with e and m charged under $\mathbb{Z}^\psi_2$. In other words, the Neveu-Schwarz sector is {1, ψ} while the Ramond sector is {e, m}, which form a single length-2 orbit.

For a second example let us now consider the spin TQFT $SO(3)_3$ Chern-Simons theory, which is the simplest non-trivial spin TQFT. The bosonic parent theory is $SU(2)_6$, since

$SO(3)_3 = \frac{SU(2)_6}{\mathbb{Z}^\psi_2},$

where the abelian line ψ is the line in $SU(2)_6$ with j = 3 and spin h = 3/2. The lines in A = {j = 0, 1/2, 1, ..., 3} which have braiding −1 with ψ are those with half-integral isospin, $A_R = \{j = \tfrac12, \tfrac32, \tfrac52\}$. Under fusion with ψ, which acts as j → 3 − j, we have the $\mathbb{Z}^\psi_2$ orbits $\{j = \tfrac12, \tfrac52\}$ and $\{j = \tfrac32\}$ in $A_R$ (3.15). Therefore, in the R-R sector of $SO(3)_3$ there is a length-2 orbit with $(j = \tfrac12, \tfrac52)$ and a Majorana line with $j = \tfrac32$. Thus, $N_x = N_m = 1$. There are $N_x + N_m = 2$ states, but one of them is a boson and the other is a fermion, which means that the partition function with periodic boundary conditions vanishes,

$\operatorname{tr}_{H_{R\text{-}R}} (-1)^F = N_x - N_m = 0.$

The vanishing of this trace will be important when discussing the 2-domain wall theory in 4d N = 1 SYM with gauge group $G_2$ (cf. section 4.4).
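The orbit counting for $SO(3)_3$ can be reproduced in a few lines, assuming only the standard $SU(2)_k$ data (spins $h_j = j(j+1)/(k+2)$ and fusion with the j = 3 line acting as j → 3 − j); variable names are illustrative.

```python
from fractions import Fraction

k = 6                                        # bosonic parent: SU(2)_6
js = [Fraction(n, 2) for n in range(k + 1)]  # isospins j = 0, 1/2, ..., 3
psi = Fraction(k, 2)                         # j = 3, spin h = 3/2

# Fusion with the abelian line psi sends j -> k/2 - j; the monodromy of
# psi with j is exp(2*pi*i*(h_{psi x j} - h_psi - h_j)) = (-1)^(2j).
A_R = [j for j in js if (2 * j) % 2 == 1]    # half-integral isospins
orbits = {tuple(sorted({j, psi - j})) for j in A_R}
N_m = sum(1 for o in orbits if len(o) == 1)  # Majorana (fixed) line: j = 3/2
N_x = sum(1 for o in orbits if len(o) == 2)  # length-2 orbit: (1/2, 5/2)
print(N_x, N_m, N_x - N_m)                   # 1 1 0 -> the trace vanishes
```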
This example clearly illustrates the importance of looking at the appropriate partition function and not merely at the dimension of the Hilbert space.

We now discuss a different way to compute the partition function that does not rely on computing $N_x$ and $N_m$ directly. The basic idea is to gauge the emergent zero-form $\mathbb{Z}_2$ symmetry generated by $(-1)^F$ in the spin TQFT to obtain the bosonic parent theory back [45,46,53,54,55],

$\text{bosonic TQFT} = \frac{\text{spin TQFT}}{\mathbb{Z}_2^{(-1)^F}}.$

Gauging this $\mathbb{Z}_2$ amounts to summing the spin TQFT over all spin structures of the three-manifold M. Taking M to be the three-torus, and summing over the $2^3 = 8$ spin structures, corresponding to either periodic or antiperiodic boundary conditions around each of the three circles, we find a relation in which $\operatorname{tr}_{\pm,\pm}$ (see (3.8)-(3.9)) denotes the trace over the Hilbert space on the spatial torus with boundary conditions ±, ±, and $\operatorname{tr}_{H_B}$ the trace over the torus Hilbert space of the bosonic parent theory. Using the fact that the dimension of the torus Hilbert space is the same in all spin structures and that $(-1)^F$ acts nontrivially only in $H_{R\text{-}R}$ (see (3.7)), we find the formula

$\operatorname{tr}_{H_{R\text{-}R}} (-1)^F = 2\dim(H_B) - 7\dim(H_F), \quad (3.21)$

where $\dim(H_B)$ is the dimension of the torus Hilbert space of the bosonic parent TQFT and $\dim(H_F)$ the dimension of the torus Hilbert space in any one spin structure of the spin TQFT. This formula offers a significant advantage in that it only requires computing the total number of states $\dim(H_F) = N_x + N_m$, and not separately $N_x$ and $N_m$, as in formula (3.11). Even simpler, one may compute $\dim(H_F) = N_a$ in the NS-NS sector directly, where all orbits are of length 2: the number of states is just half the number of lines of the spin TQFT.

As a consistency check, consider the case where $G_F$ is the product of a bosonic theory $\tilde{G}$ times a trivial/invertible spin TQFT, $G_F = \tilde{G} \times SO(N)_1$. The trace then equals $(-1)^N \dim(H_{\tilde G})$, which is precisely what one would expect, given the tensor product structure of $G_F$ and the fact that the trace over $SO(N)_1$ is $(-1)^N$. Put differently, in the Hilbert space of $G_F = \tilde{G} \times SO(N)_1$ we have $N_a = N_x$ and $N_m = 0$ for N even, and $N_a = N_m$ and $N_x = 0$ for N odd. That is, in the R-R sector, either no states are Majorana or all are, depending on the parity of N. This implies that the trace equals $(-1)^N \dim(H_F)$, which indeed equals (3.21).

There are spin TQFTs which factorize in a nontrivial fashion into the product of a bosonic TQFT and a trivial spin TQFT by virtue of a level-rank duality, as for example $U(1)_k \leftrightarrow SU(k)_{-1} \times \{1, \psi\}$. In these theories $\operatorname{tr}_{H_{R\text{-}R}} (-1)^F$ also just measures the dimension of the Hilbert space (up to possibly a sign). Indeed, the single state of $SO(N)_1$ in $H_{R\text{-}R}$ is a Majorana state, and thus has odd fermion parity, for N odd only. Therefore, when comparing the spin TQFT partition function with the Witten index of the domain wall, we will match their absolute values: if those match, the signs can also be matched by stacking a suitable trivial spin TQFT, which can be thought of as a purely gravitational counterterm [56].¹⁶

The generalization to twisted indices is straightforward. Given a symmetry $s \in S_{\text{TQFT}}$ of the TQFT, which acts $s: H_{R\text{-}R} \to H_{R\text{-}R}$, the partition function $\operatorname{tr}_{H_{R\text{-}R}} (-1)^F s$ counts the number of bosons fixed by s, minus the number of fermions fixed by s. That being said, there are some subtleties that must be kept in mind. A state fixed by s does not necessarily contribute with s = +1 to the trace: it might contribute with s = −1 instead, the reason being that the symmetry s might be realized projectively in the Hilbert space. The most common example where this may happen is charge conjugation c.
We can illustrate this in $U(1)_1$ Chern-Simons theory, the simplest theory where this phenomenon occurs. This is an invertible spin TQFT, which means that it has a unique state on any spin structure. This state is clearly fixed by c but, interestingly, it has c = −1 in the odd-spin-structure Hilbert space $H_{R\text{-}R}$. We can show this as follows. The bosonic parent theory is $U(1)_4$ Chern-Simons theory, which has four states, labeled by q = 0, 1, 2, 3. The $U(1)_1$ theory is obtained by condensing the fermion ψ, which has q = 2. The Ramond lines are easily checked to be q = 1, 3, and they are paired by fusion with ψ into a single two-dimensional orbit, since 1 × 2 = 3. Thus, the unique state in the R-R sector is (cf. (3.6))

$|1\rangle_{\text{spin}} = \frac{1}{\sqrt{2}}\left(|1\rangle - |3\rangle\right).$

This indeed satisfies $c\,|1\rangle_{\text{spin}} = -|1\rangle_{\text{spin}}$, inasmuch as c: q → −q mod 4 in the bosonic parent, which exchanges |1⟩ and |3⟩.

4 Domain wall TQFT partition functions

In this section we calculate partition functions twisted by a symmetry $s \in S_{\text{TQFT}}$,

$\operatorname{tr}_{H_{R\text{-}R}} (-1)^F s, \quad (4.1)$

of the Chern-Simons TQFTs we proposed emerge in the infrared of the domain wall theories (see section 1). Our calculations beautifully reproduce the results obtained in section 2. Namely, we will now demonstrate that the trace (4.1) agrees with the twisted Witten index on the n-domain wall $W_n$ (cf. (2.10)) as computed in terms of the original 4d fields,

$\operatorname{tr}_{H_{R\text{-}R}} (-1)^F s = I^s_n. \quad (4.2)$

We identify each symmetry s ∈ S in 4d N = 1 SYM with a symmetry $s \in S_{\text{TQFT}}$ in the infrared TQFT.

¹⁶ Stacking $SO(N)_1$ for odd N onto a 3d theory has the same effect as stacking onto a 2d theory the trivial spin TQFT known as the Arf invariant, which changes the sign of the partition function in the odd spin structure.

In 4d N = 1 SYM with gauge group Sp(N) our proposed domain wall theory corresponds to a Chern-Simons theory based on a group that is simple, connected, and simply-connected, whereas for SYM with SU(N), Spin(N) and $G_2$ gauge groups, the proposed infrared Chern-Simons theories are based on a group that is neither. We discuss both cases in turn.

Chern-Simons theory $G_k$, with G simple, connected and simply-connected, is always a bosonic TQFT. These theories are made spin by tensoring with the trivial spin TQFT $SO(N)_1$. It follows from our discussion in section 3 that the trace (4.3) of $G_k \times SO(N)_1$ reduces to the dimension of the Hilbert space of $G_k$, since all states have the same fermion parity: all bosonic, or all fermionic, depending on the parity of N. Therefore the partition function of $G_k \times SO(N)_1$ in (4.3) is, up to possibly a sign, the dimension of the Hilbert space of $G_k$ Chern-Simons theory on the two-torus.

The states in the torus Hilbert space of $G_k$ Chern-Simons theory are conformal blocks on the torus, which are labeled by the integrable representations of the corresponding affine Lie algebra $g^{(1)}$ at level k [51,57]. By definition, the representations of G that are integrable are those whose highest weight λ satisfies (λ, θ) ≤ k, with θ the highest root of G. Expanding the latter in a basis of simple coroots, and introducing an extended label $\lambda_0 := k - (\lambda, \theta)$, integrability can be expressed as

$\sum_{i=0}^{r} a_i^\vee \lambda_i = k, \qquad \lambda_i \geq 0. \quad (4.4)$

The dimension of the Hilbert space $\operatorname{tr}_{G_k}(1)$ is equal to the number of solutions to this equation. Much like the discussion in section 2, where the Witten index on the domain wall was computed through an auxiliary system of free fermions, $\operatorname{tr}_{G_k}(1)$ has a nice combinatorial interpretation in terms of a system of free bosons in 0+1 dimensions. Indeed, the number of integrable representations $\operatorname{tr}_{G_k}(1)$ is the number of ways of creating a state of energy k from r + 1 free bosons, each with energy $a_i^\vee$.
Each boson is associated with a node in the extended Dynkin diagram g^(1), and λ_i ∈ {0, 1, 2, ...} in (4.4) corresponds to the occupation number of the i-th boson. Introducing a fugacity parameter q defines a generating function, which is the partition function of the bosons on the circle. The partition function is thus

  Z(G, q) = Π_{i=0}^{r} (1 − q^{a_i^∨})^{−1}.   (4.6)

The Chern-Simons trace tr_{G_k}(1) is the coefficient of q^k in (4.6).

In a similar fashion, we define the trace twisted by a symmetry s ∈ S_TQFT of G_k Chern-Simons theory: tr_{G_k}(s). When s = c is a zero-form symmetry, this corresponds to inserting a surface operator, i.e., the symmetry defect is supported on the whole spatial torus. On the other hand, if s = g denotes a one-form symmetry, the symmetry defect is a line operator, and one must specify a homology cycle on the torus on which it is supported. The states of G_k Chern-Simons are created by wrapping on a cycle Wilson lines labeled by integrable representations λ; if g is supported on the same cycle, it acts on the states via fusion:

  g : |λ⟩ ↦ |g · λ⟩.   (4.8)

Conversely, if g is supported on the dual cycle, it acts on the states via braiding:

  g : |λ⟩ ↦ α_g(λ) |λ⟩,   (4.9)

where α_g(λ) is the charge of λ under the center of G (cf. (2.7)). More generally, one can wrap a pair of symmetry defects on both cycles, but one can always conjugate such a configuration via a modular transformation to either of the two options above. This operation, being a similarity transformation, does not affect the value of the trace. In other words, the value of tr_{G_k}(g) is independent of which cycle we define g on.

When s is a symmetry of the classical action of G_k Chern-Simons theory, it is induced by an outer automorphism of the extended Dynkin diagram g^(1), and it acts as a permutation of the nodes thereof. In that case, s induces an action on the system of bosons, which permutes them in the same way it permutes the nodes of the Dynkin diagram. As in the system of free fermions, the trace above can be obtained from the partition function of these bosons, tr_{G_k}(s) being the coefficient of q^k in Z_s(G, q), where Z_s(G, q) denotes the bosonic partition function twisted by the permutation s. One can evaluate this partition function by the same methods as in section 2, i.e., by diagram folding or directly in the diagonal basis,

  Z_s(G, q) = Π_{i=0}^{r} (1 − s_i q^{a_i^∨})^{−1},   (4.11)

where the s_i are the eigenvalues of the permutation. Note that, for s = g a one-form symmetry, diagram folding naturally corresponds to g acting as a permutation, i.e. (4.8), while the diagonal action corresponds to g acting via braiding, i.e. (4.9). Indeed, it is a well-known fact that an S modular transformation, which interchanges the two cycles, diagonalizes the fusion rules.

It should be noted that Chern-Simons theories can have "quantum symmetries". These are symmetries of the entire TQFT data that are not symmetries of the Lagrangian. Many explicit examples of these symmetries have been found in [58]. These symmetries permute the Wilson lines of the theory, in a way that does not necessarily correspond to a permutation of their Dynkin labels. As such, the free boson representation cannot be used to evaluate the twisted trace, which must instead be computed from the action of the symmetry on the Hilbert space of the TQFT. That being said, we find that the symmetries S in the ultraviolet domain wall map to classical symmetries of the infrared Chern-Simons theories, and we can compute the twisted index using (4.11).
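Since everything above reduces to coefficient extraction from a product of geometric series, it can be automated in a few lines. The following sketch (Python, written for this presentation; the comark lists used in the checks are standard Lie-algebra data) computes tr_{G_k}(1) from (4.6) and checks it against the closed-form counts for SU(n)_k and Sp(n)_k:

```python
from math import comb

def untwisted_trace(comarks, k):
    """Coefficient of q^k in prod_i 1/(1 - q^{a_i}), i.e. the number of
    integrable level-k representations: solutions of sum_i a_i * l_i = k."""
    series = [0] * (k + 1)
    series[0] = 1
    for a in comarks:  # multiply in 1/(1 - q^a), one boson at a time
        for power in range(a, k + 1):
            series[power] += series[power - a]
    return series[k]

n, k = 4, 3
# su(n)^(1): n nodes, all comarks 1  ->  binom(n+k-1, k), cf. (A.2)
assert untwisted_trace([1] * n, k) == comb(n + k - 1, k)
# sp(n)^(1): n+1 nodes, all comarks 1  ->  binom(n+k, k), cf. (4.14)
assert untwisted_trace([1] * (n + 1), k) == comb(n + k, k)
print("counts match the closed-form binomials")
```

The same routine, with signed eigenvalues inserted as in (4.11), evaluates the twisted traces.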
Some of our proposed domain wall TQFTs are Chern-Simons theories with a group G that is not connected and/or simply-connected, in which case the theory G_k (with k a set of integers that defines the Chern-Simons action) may depend on the spin structure of the underlying manifold. There are four distinct Hilbert spaces corresponding to the four spin structures on the spatial torus (see section 3), but our interest here is in the Hilbert space H_{R-R}. Now G_k can have fermionic states, which correspond to once-punctured conformal blocks of the parent bosonic theory, and (−1)^F is in general a non-trivial operator. Our goal is to compute

  tr_{H_{R-R}} ((−1)^F s) ≡ tr((−1)^F s),

where we use the latter to simplify notation. We shall next compute the twisted traces for all the Chern-Simons theories of interest. We begin by considering the simply-connected group Sp(n), and then we move on to the more subtle and interesting cases U(n), O(n). We finally make a few remarks concerning the exceptional groups. The remaining simply-connected groups SU(n), Spin(n), as well as SO(n), are studied in appendix A.

G = Sp(n)

The n-domain wall theory for 4d N = 1 SYM with G = Sp(N) is proposed to be Sp(n)_{N+1−n} Chern-Simons theory. Let us proceed to study the partition functions of Sp(n)_k. Consider the algebra C_n = sp_n. The comarks are all a_i^∨ = 1. Plugging this into (4.11) we obtain the generating function

  Z(Sp(n), q) = (1 − q)^{−(n+1)},   (4.13)

and, by expanding, the untwisted trace

  tr_{Sp(n)_k}(1) = binom(n+k, k).   (4.14)

This is the number of integrable representations of Sp(n)_k, that is, the dimension of the torus Hilbert space of this Chern-Simons theory. Let us also compute the partition function twisted by the one-form symmetry Γ = Z_2. This symmetry reverses the order of the extended labels, and the charged representations are the pseudo-real ones. Denoting by g the non-trivial element of Z_2, and using (4.11) and (2.30), we get the twisted partition function

  Z_g(Sp(n), q) = (1 − q)^{−⌈(n+1)/2⌉} (1 + q)^{−⌊(n+1)/2⌋},

whose coefficient of q^k is the twisted trace (4.16). We are now ready to test our proposal. Recall that the conjectured infrared theory corresponding to the n-domain wall of Sp(N) SYM was W_n = Sp(n)_k, with k = N + 1 − n. Using this value of the level in (4.14) and (4.16) indeed reproduces the (twisted) Witten indices computed in the ultraviolet, cf. (2.32) and (2.36).

G = U(n)

The n-domain wall theory for 4d N = 1 SYM with G = SU(N) is proposed to be U(n)_{N−n,N} Chern-Simons theory. Let us proceed to study the partition functions of U(n)_{k,n+k}. The Chern-Simons gauge group is not simply connected. The theory is defined as

  U(n)_{k,n+k} := (SU(n)_k × U(1)_{n(n+k)}) / Z_n,   (4.17)

where Z_n is the one-form symmetry generated by the line ψ = [0, k, 0, ..., 0] ⊗ (n + k). Here and in what follows, [λ_0, λ_1, ..., λ_{n−1}] denotes the Dynkin labels of an SU(n)_k representation, and (q) ∈ Z the charge of a U(1)_{n(n+k)} representation. The spin of the generator is easily computed to be

  h_ψ = k(n−1)/(2n) + (n+k)/(2n) = (k+1)/2,

so ψ is a fermion precisely when k is even: this theory is spin if and only if k is even. We now proceed to compute the relevant traces.

As the theory can be a spin TQFT, it may contain fermionic states, and (−1)^F will in general be a non-trivial operator, which we need to understand in order to compute tr((−1)^F s). In other words, we have to identify which of the states of this Chern-Simons theory are bosons, and which are fermions. Note that, unlike the general discussion of section 3, this theory is more conveniently presented as a Z_n quotient rather than a Z_2 quotient, so let us slightly generalize the discussion in that section to such quotients. In section 3 we argued that the bosonic states after a Z_2 fermionic quotient are the length-2 orbits, while the fermions are the fixed points.
We now claim that the general statement for Z_n fermionic quotients is that the bosonic states are the orbits of even length, while the fermions are the orbits of odd length. To prove this claim, consider a general spin TQFT that can be written as

  G_F = G_B / Z_n,

where G_B is some bosonic TQFT, and where Z_n is a one-form symmetry generated by a fermion ψ. Since h_{ψ^p} = p h_ψ, ψ^p is a fermion if p is odd, and a boson if p is even. This means that n is necessarily even, because ψ^n = 1 is a boson. The braiding phase with respect to ψ is always an n-th root of unity: it is the charge with respect to the Z_n symmetry. This fact allows us to partition the lines of G_B into n equivalence classes according to their n-ality j, i.e., the value of the braiding B(α, ψ) = e^{2πij/n}, j = 0, 1, ..., n − 1. The lines with j = 0 are the NS lines (so that B(α, ψ) = +1), and those with j = n/2 are the R lines (so that B(α, ψ) = −1). The rest of the lines are projected out by the Z_n quotient (unless we turn on a suitable background for the dual Z_n zero-form symmetry). Furthermore, in each sector the lines are organized into Z_n orbits,

  {α, ψα, ψ²α, ..., ψ^{|α|−1}α},   (4.19)

where |α| ∈ [1, n] denotes the length of the orbit: the minimal integer such that ψ^{|α|} × α = α. An orbit is Majorana if and only if its length is odd, for then and only then it may absorb a fermion. Indeed, the conformal block with puncture ψ^{|α|} is non-vanishing only if ψ^{|α|} × α = α. In conclusion, the fermionic states in the R-R sector of the quotient theory G_F correspond to the orbits of G_B R-lines with an odd number of elements, as claimed.

We are now in a position to study the theory U(n)_{k,n+k}. The discussion above has taught us how to identify fermionic states in the Hilbert space of the theory. Rather anticlimactically, we shall now argue that this theory has, in fact, no fermionic states at all! This means that the trace tr(−1)^F actually just counts the number of states of the theory, much like in a bosonic theory. This explains why the counting of states in [7] matched the domain wall index: because all states are bosonic. This, importantly, is not always the case for other spin TQFTs, such as O(n) (see below, section 4.3).

Let us prove that the theory has no fermionic states. U(n)_{k,n+k} is level-rank dual to U(k)_{−n,−(n+k)} as a spin TQFT. Therefore, if either k or n is odd, the theory factorizes as a bosonic theory times an invertible spin TQFT, and so the theory clearly has no Majorana states. The only non-trivial case is, therefore, that of n, k both even, which we assume in what follows. The theory in the numerator of the quotient description of U(n)_{k,n+k} in (4.17) is bosonic (recall that U(1)_K is spin for K odd and bosonic for K even; here K = n(n+k), which is even). The states of U(n)_{k,n+k} are Z_n orbits of SU(n)_k × U(1)_{n(n+k)} representations. If we manage to prove that there are no orbits of odd length, we succeed in proving that the theory has no Majorana states. In fact, we show that, more generally, all orbits have length n, i.e., all orbits are long: the generator shifts the U(1)_{n(n+k)} charge as q → q + (n+k), and ℓ(n+k) ≡ 0 mod n(n+k) only for ℓ ≡ 0 mod n, so no orbit closes before ℓ = n. This implies that the states correspond to conformal blocks with no punctures, i.e., all states are bosonic, (−1)^F ≡ +1.

Let us now use this information to compute the different U(n)_{k,n+k} partition functions. The untwisted trace is the number of conformal blocks (in any of the spin structures).
Counting this is a straightforward exercise in combinatorics: we have a factor of n(n+k) due to U(1)_{n(n+k)}, times a factor of binom(n+k−1, k) due to SU(n)_k (cf. (A.2)), and a factor of 1/n² due to the quotient by Z_n (one factor of n is due to the projecting out of lines, and the other one because the neutral lines are organized into length-n orbits). All in all, the number of states, the untwisted trace, is

  tr_{U(n)_{k,n+k}}(1) = (n(n+k)/n²) binom(n+k−1, k) = binom(n+k, n).   (4.21)

This standard argument was already used in [7]. An important aspect of this computation, much overlooked in the literature, is that this equals tr(−1)^F only because all the states have trivial fermion parity, which is nontrivially true in this theory. This shall not be the case in the orthogonal group O(n), where tr(−1)^F does not just count the total number of states, but rather the bosons minus the fermions, both sets being typically nonempty. Recall that the conjectured infrared TQFT corresponding to the n-domain wall of SU(N) is W_n = U(n)_{k,n+k}, with k = N − n. Using this value of the level in (4.21) indeed reproduces the Witten index computed in the ultraviolet, cf. (2.20).

We now proceed to computing the trace twisted by the charge conjugation symmetry c of U(n)_{k,n+k}. Consider first the case of odd k, where the theory is naturally bosonic. In this case, computing the trace amounts to counting the real representations of U(n)_{k,n+k}. A representation of U(n)_{k,n+k} can be labeled by the pair (R, q), where R is an SU(n)_k representation, and q ∈ [0, n(n+k)), subject to |R| = q mod n, where |R| denotes the number of boxes in the Young diagram of R. Representations (R, q) and (σ^ℓ · R, q + ℓ(n+k)), with σ^ℓ · R the SU(n) representation with Dynkin labels (σ^ℓ · λ)_i = λ_{i−ℓ mod n}, are identified by Z_n spectral flow. The abelian charge q is correlated with the SU(n) representation. Indeed, if n is even and R is real modulo σ^ℓ, there is a single charge q ∈ [0, n(n+k)) that makes (R, q) real; if n is odd, there are two such charges. Therefore, the number of real representations in U(n)_{k,n+k} is the number of representations of SU(n)_k that are real up to the action of σ^ℓ, divided by n (the length of the orbits), and multiplied by 2 if n is odd. Let us now count the SU(n)_k representations real up to σ^ℓ, i.e., the solutions of

  λ_i = λ_{ℓ−i mod n},  Σ_i λ_i = k.   (4.22)

The number of solutions to this equation, case by case in the parities of n, ℓ and k, is collected in (4.23); the cases needed for odd k are 2 binom((n+k−1)/2, (k−1)/2) for n even and ℓ odd (with no solutions for n even and ℓ even), and binom((n+k)/2 − 1, (k−1)/2) for n odd. For future reference, (4.23) also includes the case of even k. We now sum over all ℓ = 0, 1, ..., n − 1. For n odd, this just multiplies binom((n+k)/2 − 1, (k−1)/2) by n. If n is even, it multiplies 2 binom((n+k−1)/2, (k−1)/2) by n/2, because half the cases yield no solutions. Next, we divide by n (due to the quotient), and multiply by 2 if n is odd. This yields the number of real representations of U(n)_{k,n+k} with k odd as

  tr_{U(n)_{k,n+k}}(c) = 2 binom((n+k)/2 − 1, (k−1)/2) for n odd, and binom((n+k−1)/2, (k−1)/2) for n even.   (4.24)

The case of k even is slightly more complicated because the theory is naturally spin. For n odd we can obtain the twisted trace from the k odd case by using level-rank duality U(n)_{k,n+k} ↔ U(k)_{−n,−(n+k)}. But for n, k both even, the theory is spin, and cannot be written as a bosonic theory times a trivial spin theory, at least not using the standard level-rank duality. Thus, we have to explicitly compute the trace of c in the R-R sector. This is non-trivial because, among other things, c may act as −1 on some states (see the discussion around (3.26)), and thus it is not enough to just count real representations.
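Before attacking that case, the counts obtained so far are easy to confirm by brute force. The sketch below (Python, illustrative; labels and charges follow the quotient description (4.17)) enumerates the Z_n orbits of SU(n)_k × U(1)_{n(n+k)} representations, checks that every orbit has length n (so there are no Majorana states), and reproduces the untwisted trace (4.21):

```python
from math import comb
from itertools import product

def u_theory_orbits(n, k):
    """Z_n orbits of SU(n)_k x U(1)_{n(n+k)} lines. A line is a pair
    (extended Dynkin labels summing to k, abelian charge q) with
    q = n-ality mod n; the generator acts by one cyclic shift of the
    labels together with the spectral flow q -> q + (n+k)."""
    K = n * (n + k)
    labels = [l for l in product(range(k + 1), repeat=n) if sum(l) == k]
    lines = {(l, q) for l in labels for q in range(K)
             if q % n == sum(i * li for i, li in enumerate(l)) % n}
    orbits = []
    while lines:
        l, q = next(iter(lines))
        orbit = set()
        for _ in range(n):           # act repeatedly with the generator
            l = l[-1:] + l[:-1]      # sigma: cyclic shift of the labels
            q = (q + n + k) % K      # spectral flow of the charge
            orbit.add((l, q))
        lines -= orbit
        orbits.append(orbit)
    return orbits

n, k = 2, 2
orbits = u_theory_orbits(n, k)
assert all(len(o) == n for o in orbits)   # all orbits long -> no fermions
assert len(orbits) == comb(n + k, n)      # matches (4.21)
print(len(orbits), "states, all bosonic")
```

The second assertion is exactly the statement that tr(−1)^F coincides with the naive state count for U(n)_{k,n+k}.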
A shortcut to compute the trace of c over the odd spin structure, for k, n both even, is to sum over all spin structures:

  tr_B(c) = tr_{NS-NS}(c) + tr_{NS-R}(c) + tr_{R-NS}(c) + tr_{R-R}(c),

where tr_B denotes the trace over the bosonic parent. From this expression, and noting that (−1)^F is trivial in U(n)_{k,n+k} theories (due to the lack of Majorana lines), we can solve for the trace we are after:

  tr_{H_{R-R}}((−1)^F c) = tr_B(c) − 3 tr_{NS-NS}(c).   (4.26)

Let us begin with the first term. As this is a trace over a bosonic Hilbert space, we just have to count the real representations of SU(n)_k × U(1)_{n(n+k)}. The first factor corresponds to ℓ = 0 in (4.23), while the second factor has two real representations (namely, q = 0 and q = n(n+k)/2). The end result is (4.27). Let us now compute the second term in (4.26). This is a trace over a fermionic Hilbert space, but over the NS-NS sector, and so we only have to count fixed points, as they all contribute with c = +1. In other words, the trace is just the number of real representations of U(n)_{k,n+k}, that is, the number of solutions to (4.22), summed over ℓ = 0, 1, ..., n − 1, and divided by n due to the quotient; using (4.23), this gives (4.28). Plugging these two traces into (4.26), the twisted index for n, k even becomes (4.29), which is invariant under n ↔ k, as required by level-rank duality. This expression for the twisted partition function of U(n)_{k,n+k} with k = N − n matches the twisted Witten index computed in the ultraviolet, cf. (2.24).

Finally, we briefly sketch the computation of the trace twisted by the one-form symmetry g^t ∈ Z_{n+k} of U(n)_{k,n+k}, where g denotes a primitive root of unity and t ∈ [0, n+k). The states of U(n)_{k,n+k} are orbits of the form

  {(σ^ℓ · R, q + ℓ(n+k))},   (4.30)

where ℓ ranges from 0 to n − 1. All the orbits are of length n. The theory has a Z_{n+k} one-form symmetry that acts as q → q + tn, where t ∈ [0, n+k). A state is invariant if and only if this transformation cyclically permutes the elements of the orbit, i.e., if a representative (R, q) is mapped into itself up to spectral flow,

  (R, q + tn) ≡ (σ^ℓ · R, q + ℓ(n+k)).   (4.31)

It is clear that if tn is not of the form ℓ(n+k) for some ℓ ∈ Z, then no state is invariant, and the twisted trace vanishes. So let us assume that such an ℓ exists; it is clear that it is unique, so counting invariant orbits reduces to counting appropriate SU(n)_k representations. More specifically, the number of invariant states is

  tr_{U(n)_{k,n+k}}(g^t) = (n(n+k)/n²) Ñ,   (4.32)

where n(n+k) denotes the number of states in U(1)_{n(n+k)}, the factor of n² is due to the Z_n quotient, and Ñ denotes the number of SU(n)_k representations that satisfy R = σ^ℓ · R, with ℓ := tn/(n+k) ∈ Z. Counting such SU(n)_k representations is easy, because this is a simply-connected group, so the states are labelled by the Dynkin labels λ_0, λ_1, ..., λ_{n−1}, which can be thought of as a collection of independent bosons (cf. (4.11)). The most efficient way to count the representations invariant under σ^ℓ is to recall that the associated diagonal phase is just the charge under the center (2.18), which is a multiplicative phase, so the partition function factorizes, and Ñ is therefore extracted from the correspondingly twisted generating function. Recall that the q-binomial coefficient at a root of unity can be expressed as a regular binomial coefficient, cf. (2.29) and (2.27).

G = O(n)

The n-domain wall theory for 4d N = 1 SYM with G = Spin(N) is proposed to be O(n)¹_{N−2−n,N−n+1} Chern-Simons theory. Let us proceed to study the partition functions of O(n)¹_{k,L}.
The O(n)¹_{k,L} Chern-Simons theory is defined as [23]

  O(n)¹_{k,L} := (O(n)¹_{k,0} × (Z_2)_L) / Z_2,

where (Z_2)_L ↔ Spin(L)_{−1} denotes a Z_2 gauge theory with twist L, and the quotient denotes the gauging of a diagonal Z_2 one-form symmetry. The value of the level we shall be interested in is L = k + 3. On the other hand, the first factor is given by the following:

• If n is even, the theory O(n)¹_{k,0} is defined as the CM-orbifold of SO(n)_k. Here C denotes the charge-conjugation Z_2 zero-form symmetry that acts by permuting the last two Dynkin labels in SO(n), and M is the magnetic Z_2 zero-form symmetry that is dual to the gauged one-form Z_2 symmetry in the denominator of SO(n)_k ≡ Spin(n)_k / Z_2. As such, it permutes the lines that split in the quotient, i.e., the lines of Spin(n)_k that are fixed by fusion with the extending simple current.

• If n is odd, the group O(n) is a direct product of SO(n) and Z_2. The Chern-Simons theory O(n)¹_{k,0} itself does not necessarily factorize, because of the convention of which Z_2 subgroup the reflection represents; the choice made in [23] is given in (4.37).

Let us compute the different traces in this theory. As above, the details depend sensitively on the parity of n and k, so we consider each case separately.

Even/Even. We begin with the theory Spin(2n)_{2k}. Its integrable representations satisfy

  λ_0 + λ_1 + 2(λ_2 + ··· + λ_{n−2}) + λ_{n−1} + λ_n = 2k,

which has tr_{Spin(2n)_{2k}}(1) solutions (cf. (A.12)). We now construct SO(2n)_{2k}, i.e., we gauge a Z_2 one-form symmetry, which acts as λ_0 ↔ λ_1 and λ_{n−1} ↔ λ_n. This is a bosonic quotient. The neutral representations satisfy λ_{n−1} + λ_n = even, which has N solutions. These are divided into length-2 orbits and fixed points: the former satisfy λ_0 ≠ λ_1 or λ_{n−1} ≠ λ_n, and the latter λ_0 = λ_1 and λ_{n−1} = λ_n. The number of fixed points, F, is given in (4.40), and the number of length-2 orbits is (N − F)/2. Finally, the number of representations of SO(2n)_{2k}, given in (4.41), is invariant under n ↔ k, as expected by level-rank duality. This also agrees with expression (A.15). We now orbifold by CM, which acts by swapping the lines in 2F pairwise, and as λ_{n−1} ↔ λ_n. The representations that are fixed under CM are a subset of the length-2 orbits.

Take the states of O(2n)¹_{2k,0} as above, i.e., N_twisted and N_untwisted, and tensor by Spin(2k+3)_{−1} = {1, σ, χ}, with the resulting NS and R lines organized accordingly. We now quotient by the Z_2 one-form symmetry. This symmetry maps 1 ↔ χ, and it fixes σ; also, it permutes the lines in A pairwise, a ↔ a′, and it fixes those in (1/4)B + F. Therefore, in the NS sector all orbits have length two (recall that there are never fixed points in the NS sector), and the dimension of the Hilbert space is given in (4.50). This corresponds to the trace of 1 over the Hilbert space on any of the spatial spin structures.

Consider now the R sector. There, the one-form symmetry puts all of A into length-two orbits, while all of (1/4)B + F are fixed points. Thus, the number of fermions and bosons is given in (4.52). Note that N_boson + N_fermion agrees with the dimension of the Hilbert space as computed in the NS sector (cf. (4.50)). On the other hand, the trace in the odd spin structure, weighted by fermion parity, is

  tr((−1)^F) = N_boson − N_fermion.   (4.53)

As a consistency check, recall that one can also express the fermionic trace as in (3.19); the dimension of the bosonic Hilbert space, (4.55), indeed matches the expression above. Recall that the conjectured infrared theory corresponding to the n-domain wall of Spin(N) was W_n = O(n)¹_{k,k+3}, with k = N − 2 − n.
Using this value of the level in (4.53) indeed reproduces the Witten indices computed in the ultraviolet, cf. (2.46).

Odd/Odd. We consider O(2n+1)¹_{2k+1,2k+4} = SO(2n+1)_{2k+1} × (Z_2)_{2(n+k)}. As the theory is a tensor product, the traces factorize. For example, the Z_2 gauge theory has four states, all bosonic, tr_{(Z_2)_{2(n+k)}}(−1)^F ≡ 4, which means that the untwisted index is four times the trace of SO(2n+1)_{2k+1}, as given in (A.10). Similarly, the index twisted by the zero-form symmetry c has tr_{(Z_2)_{2(n+k)}}(c) ≡ 2, where c acts by permuting the two spinors (this is the only zero-form symmetry of this Z_2 gauge theory, cf. [58, 60]; it fixes both the identity and the vector). On the other hand, the only zero-form symmetry of SO(2n+1)_{2k+1} is fermion parity,^20 and there is in fact a natural identification c = (−1)^F (cf. [23]). Thus, the c-twisted trace weighted by fermion parity actually computes the untwisted trace, with antiperiodic (NS) boundary conditions on the time circle, where we have used (A.9). All in all, the twisted trace of O(2n+1)¹_{2k+1,2k+4} follows by combining these factors.

The index twisted by the one-form symmetry is also straightforward. This symmetry is Z_2 × Z_2 for O(4n+1)_{4k+1} and O(4n+3)_{4k+3}, and Z_4 for O(4n+1)_{4k+3} and O(4n+3)_{4k+1}. These correspond to fusion with the abelian anyons of Spin(L)_{−1}, with L = 0 mod 4 and L = 2 mod 4 respectively, which indeed have a Z_2 × Z_2 or Z_4 fusion algebra. As abelian fusion has no fixed points, all the twisted traces vanish,

  tr((−1)^F (g_1, g_2)) = tr((−1)^F g) = 0,

where (g_1, g_2) ∈ Z_2 × Z_2 and g ∈ Z_4, respectively, consistently with the corresponding ultraviolet indices for the n-domain wall of Spin(N).

^20 The Dynkin diagram of SO(N) for N odd has no reflection symmetries, i.e., its outer automorphism group is trivial. Thus, the zero-form symmetries of SO(N), if any, must be due to the global structure of the group, as its algebra has no symmetries. Indeed, the zero-form symmetry comes from π_1(SO(N)) = Z_2, but this is just the magnetic dual to the gauged Z_2 one-form symmetry, which means that the magnetic symmetry is formally just (−1)^F. If we were to gauge this symmetry, we would recover Spin(N).

Odd/Even & Even/Odd. We only need to consider one; the other follows by level-rank duality. Take O(2n+1)¹_{2k,2k+3}. Note that there are no fixed points, and all orbits are of length 2; therefore, a set of representatives can be taken as λ_tensor ⊗ 1 and λ_spinor ⊗ m. In what follows we drop the second label, as it is correlated with λ in a unique way. The numbers of tensors and spinors are as in (A.5). We now tensor the theory by a factor of Spin(2k+3)_{−1} = {1, σ, χ}, and gauge the fermionic one-form symmetry generated by f = a ⊗ χ. The Ramond sector requires h_{α×f} = h_α mod 1, which means that the lines are

  (λ_tensor, σ), (λ_spinor, 1 or χ).   (4.66)

Note that only the former can be a fixed point under the fermionic quotient, inasmuch as χ × σ = σ while χ : 1 ↔ χ. In particular, the fixed points are the λ_tensor with λ_0 = λ_1 (4.67).

G = G_2

The 2-domain wall theory for 4d N = 1 SYM with G = G_2 is SO(3)_3 × S¹, where S¹ denotes the nonlinear sigma model with S¹ target space. We already proved in section 3.1 that the theory SO(3)_3 has vanishing Witten index, and since there is a unique vacuum of the S¹ sigma model on the torus, the infrared index vanishes. This matches the Witten index computed in the ultraviolet, which is given by the coefficient of q² in (1.20). Indeed, expanding this polynomial one finds that the index vanishes. The domain wall with n = 1 (and n = 3, which is the anti-wall of n = 1) is addressed below.
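The vanishing of the G_2 index at n = 2 is easy to check explicitly. The sketch below (Python) assumes, as suggested by the free-fermion picture of section 2 and by the exceptional-group series of appendix B, that the ultraviolet generating function is the graded product Z(q) = Π_i (1 − q^{a_i^∨}) over the nodes of the extended Dynkin diagram, with the G_2^(1) comarks taken to be (1, 2, 1):

```python
def fermion_index_series(comarks, max_power):
    """Coefficients of prod_i (1 - q^{a_i}): the (-1)^F-graded count of
    states built from free fermions with energies a_i."""
    series = [0] * (max_power + 1)
    series[0] = 1
    for a in comarks:
        # descending powers: each fermionic mode is used at most once
        for power in range(max_power, a - 1, -1):
            series[power] -= series[power - a]
    return series

g2 = fermion_index_series([1, 2, 1], 4)   # (1-q)^2 (1-q^2)
print(g2)                                  # [1, -2, 0, 2, -1]
assert g2[2] == 0                          # the n = 2 wall index vanishes
assert abs(g2[1]) == 2                     # |I_1| = m_1 = 2, cf. the minimal wall
```

The coefficient of q² indeed vanishes, and the coefficients at q and q³ are equal up to the sign expected from particle-hole symmetry.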
Minimal wall for arbitrary gauge group

The n = 1 domain wall theory for 4d N = 1 SYM with arbitrary G is proposed to be G_{−1} Chern-Simons theory. As G is simply-connected, the theory is naturally bosonic, and the trace tr_{G_{−1}}(−1)^F computes the dimension of the Hilbert space, that is, the number of integrable representations at level 1. In other words, the trace is the number of solutions to (4.4) with k = 1, namely Σ_i a_i^∨ λ_i = 1, which, as in (2.59), requires λ_i = 1 for some i with a_i^∨ = 1, and λ_j = 0 for all j ≠ i. Therefore, the trace is

  tr_{G_{−1}}((−1)^F) = m_1,

where m_1 denotes the number of nodes in the Dynkin diagram of G that have comark equal to 1. This clearly reproduces the ultraviolet index (2.60), as required.

For simply-laced G, G_{−1} Chern-Simons theory is in fact an abelian TQFT, and all the lines generate one-form symmetries. The number of lines is the number of one-form symmetries, that is, the order of Γ, which indeed agrees with m_1. Equivalently, it is known that simply-laced theories at level 1 admit a K-matrix representation, where one can take K as the Cartan matrix of g. The number of states is indeed det(K) ≡ |Γ|.^21

One can define a zero-form twisted index for (E_6)_{−1}. The only node with comark 1 preserved by the charge conjugation symmetry of this theory is the extended node, and thus

  tr_{(E_6)_{−1}}(c) = 1.   (4.73)

Since (E_6)_{−1} and (E_7)_{−1} are abelian, twisting by a one-form symmetry has no fixed points, and the corresponding twisted traces vanish.

Concluding remarks and open questions

In this paper we have proposed explicit 3d topological field theories on the domain walls of 4d N = 1 SYM with gauge group G. We have found precise agreement between computations carried out in terms of the ultraviolet 4d degrees of freedom (gluons and gluinos) and the conjectured infrared topological 3d degrees of freedom. We have highlighted the importance, in identifying the infrared of the domain wall theories, of studying the Hilbert space of spin TQFTs: in particular, of computing the partition function in the R-R sector and identifying the fermionic states in the Hilbert space, as opposed to merely counting states. The nontrivial matching of the twisted Witten indices provides strong support for our proposal.

A heuristic argument can be made in favor of our proposal that the n-domain wall in 4d N = 1 SYM with gauge group G is the infrared of 3d N = 1 G_{h/2−n} SYM (see equation (1.3)).^22 Consider 4d SYM on R³ × S¹ with the YM θ-angle linear in the S¹ coordinate and winding number n around the circle. This theory can be defined while preserving half of the supersymmetry.^23 When the radius of the circle is large one can expect the theory to be gapped everywhere except at the location of the wall W_n. For small radius, the theory reduces to 3d N = 1 G_{−n} SYM with an adjoint real multiplet (the scalar is compact, as it arises from reducing the gauge field along a circle). It was argued in [61] that, with a suitable superpotential for the real multiplet, the multiplet gaps out and the theory flows to 3d N = 1 G_{h/2−n} SYM, where the shift is induced by integrating out the massive fermion in the real multiplet. Assuming that there is no phase transition as the size of the circle is reduced leads to the proposal (1.3).

^21 Note that abelian systems typically have a very large number of zero-form symmetries [58], most of which are emergent in our picture, inasmuch as the ultraviolet theory only has C as its zero-form symmetry group.
^22 We would like to thank D. Gaiotto for an interesting discussion regarding this point.
However, the lack of control over the superpotential upon reduction makes the argument suggestive but heuristic.

The n > 1 domain wall theories for the groups G = F_4, E_6, E_7 and E_8 remain to be discovered. Equivalently, the phase diagram of the corresponding 3d N = 1 G_k SYM with k < h/2 − 1 remains elusive. We collect in appendix B the twisted partition functions computed in the ultraviolet for future reference. One strategy towards the identification of the infrared domain wall theory is to search for novel level-rank dualities in G_k Chern-Simons theories that go beyond the ones that follow from conformal embeddings. In general, level-rank dualities follow from embeddings into holomorphic theories (theories with only one state), and this approach could lead to suitable level-rank dualities and in turn to explicit proposals for the remaining 3d N = 1 G_k SYM phase diagrams (and associated 4d domain walls).

In this paper we have made an intriguing connection between the Hilbert space of Chern-Simons theories on the torus and the Hilbert space of fermions in 0+1 dimensions labeled by the extended Dynkin diagram g^(1) corresponding to a Lie group G. That is, the fermionic Hilbert space H^n_F with energy n is isomorphic as a super-vector space to the R-R Hilbert space of a suitable spin TQFT, which we denote by TQFT_n[g^(1)]. For example,^24

  TQFT_n[g_2^(1)] ↔ U(2)_{3n,2−n}.   (5.2)

Another route to constructing the domain walls for G = F_4, E_6, E_7 and E_8 is therefore to identify the TQFT whose R-R Hilbert space on the torus is that of the collection of free fermions based on the corresponding affine Dynkin diagram g^(1).

^23 The Lagrangian of this theory can be written as L = ∫ d²ϑ X W_α W^α, where X is a background chiral multiplet and W_α the chiral gauge field strength. Re(X) determines the gauge coupling and Im(X) the θ-angle. The background Im(X) ∝ n x_3, with F_X ∝ i n, preserves half of the supersymmetries. The background for F_X induces a mass term for the gaugino ∝ i n λ γ_5 γ_3 λ, where λ is Majorana.
^24 In writing this we use the duality (G_2)_1 ↔ U(2)_{3,1} and the notation U(2)_{6,0} ≡ SO(3)_3 × S¹.

A Chern-Simons with unitary and orthogonal groups

In this appendix we compute several traces on the torus Hilbert space of Chern-Simons theories over simply-connected Lie groups. These traces are useful when studying more complicated theories over non-simply-connected groups.

A.1 G = SU(n)

Consider the algebra A_{n−1} = su_n. The comarks are all a_i^∨ = 1. Plugging this into (4.11) we get the generating function

  Z(SU(n), q) = (1 − q)^{−n},   (A.1)

and, by expanding, the untwisted trace

  tr_{SU(n)_k}(1) = binom(n+k−1, k).   (A.2)

This is the number of integrable representations of SU(n)_k, that is, the dimension of the torus Hilbert space of this Chern-Simons theory. This result will be useful when we discuss the Chern-Simons theory over the unitary group U(n), see section 4.2.

G = Spin(2n+1)

Consider next the algebra B_n = so_{2n+1}, whose integrable level-k representations satisfy λ_0 + λ_1 + 2(λ_2 + ··· + λ_{n−1}) + λ_n = k; the number of solutions gives the untwisted trace. This is the number of integrable representations of Spin(2n+1)_k, that is, the dimension of the torus Hilbert space of this Chern-Simons theory. For future reference, it is also useful to break up the states into tensors and spinors. In other words, we shall be interested in knowing how many of the states of Spin(2n+1) are tensorial representations, and how many are spinorial representations. These are defined by λ_n = even and λ_n = odd, respectively, which yields the counts in (A.5). For a more interesting example, let us now compute the partition function of SO(2n+1)_k = Spin(2n+1)_k / Z_2, which corresponds to the algebra so_{2n+1} extended by the simple current χ = [0, k, 0, ..., 0].
This current has spin h_χ = k/2, and so the extension is fermionic for odd k. The current acts on a given representation [λ_0, λ_1, ..., λ_n] as λ_0 ↔ λ_1. Consider first the case of even k, so that SO(2n+1)_k makes sense as a bosonic theory. The extension has two effects: first, it projects out all the spinors, and second, it organizes the tensors into Z_2 orbits. Such an orbit may have length two or one; the latter corresponds to a fixed point under spectral flow, i.e., to a tensor with λ_0 = λ_1, which splits into two primaries in the quotient. The number of fixed points corresponds to the number of solutions to λ_0 + λ_1 + 2(λ_2 + ··· + λ_{n−1}) + λ_n = k with λ_0 = λ_1 and λ_n even, i.e., binom(n + k/2 − 1, k/2). The resulting number of conformal blocks follows.

Let now k be odd, which makes SO(2n+1)_k a spin theory. The total number of states is the same in every spin structure, so we shall count the bosons and fermions in the Ramond sector (which is the richest case, as only this sector may contain fermions). The total number of states is the sum, while the Witten index is the difference. In the Ramond sector, the quotient projects out the tensors, and it organizes the spinors into Z_2 orbits. The bosons are the length-two orbits, and the fermions are the fixed points. The latter are the representations with λ_0 + λ_1 + 2(λ_2 + ··· + λ_{n−1}) + λ_n = k with λ_0 = λ_1 and λ_n odd. The resulting traces coincide for all spatial spin structures, except for the odd structure, for which the trace is weighted by fermion parity. We see that tr(1) is invariant under n ↔ k, as required by level-rank duality. Similarly, tr(−1)^F is invariant up to a sign, which is due to the difference in the framing anomalies (i.e., the precise level-rank duality [26] is SO(2n+1)_{2k+1} ↔ SO(2k+1)_{−2n−1} × SO((2n+1)(2k+1))_1, with the invertible factor contributing a global factor of (−1)^{(2n+1)(2k+1)} ≡ −1 to the trace, cf. (3.24)).

G = Spin(2n)

The number of integrable representations of Spin(2n)_k, that is, the dimension of the torus Hilbert space of this Chern-Simons theory, is given in (A.12). For future reference, it is also useful to break up the states into tensors and spinors. In other words, we shall be interested in knowing how many of the states of Spin(2n) are tensorial representations, and how many are spinorial representations. These are defined by λ_{n−1} + λ_n = even and λ_{n−1} + λ_n = odd, respectively.

Consider first the case of even k, so that SO(2n)_k makes sense as a bosonic theory. The extension has two effects: first, it projects out all the spinors, and second, it organizes the tensors into Z_2 orbits. Such an orbit may have length two or one; the latter corresponds to a fixed point under spectral flow, i.e., to a tensor with λ_0 = λ_1 and λ_{n−1} = λ_n, which splits into two primaries in the quotient. The number of fixed points corresponds to the number of solutions to λ_0 + λ_1 + 2(λ_2 + ··· + λ_{n−2}) + λ_{n−1} + λ_n = k with λ_0 = λ_1 and λ_{n−1} = λ_n, i.e., binom(n + k/2 − 2, k/2). The resulting number of conformal blocks is given in (A.15).

Let now k be odd, which makes SO(2n)_k a spin theory. The number of states is the same in every spin structure, so we shall count the bosons and fermions in the Ramond sector (which is the richest case, as only this sector may contain fermions). The total number of states is the sum, while the Witten index is the difference. In the Ramond sector, the quotient projects out the tensors, and it organizes the spinors into Z_2 orbits. The bosons are the length-two orbits, and the fermions are the fixed points.
Note that the spinors have λ_{n−1} + λ_n = odd, which is incompatible with the fixed-point condition λ_{n−1} = λ_n, and so there are no fixed points. Thus, there are no fermions at all, and the number of bosons (half the spinors) is the same for all spatial spin structures. Note that the equality of tr(1) and tr(−1)^F on all spin structures was in fact expected from the level-rank duality SO(2n)_{2k+1} ↔ SO(2k+1)_{−2n}.

B The exceptional groups

In this appendix we gather the different indices for the exceptional groups, whose domain wall theory is yet to be identified. Any given proposal for the dynamics of such walls ought to be consistent with the indices below. By particle-hole symmetry, the indices satisfy I^s_n = ± I^s_{h−n}, and therefore we only show the first h/2 indices, so as to avoid repetition. We compute the untwisted indices, and the indices twisted by the zero-form and one-form symmetries (see table 1). The symmetries c ∈ C = Z_2 and g ∈ Γ = Z_3 act on the extended Dynkin diagram of E_6 by reflecting and by cyclically rotating its three legs, respectively, while the symmetry g ∈ Γ = Z_2 acts on the extended Dynkin diagram of E_7 as a reflection. Using these diagrams we find:

• E_6:
  Z(q) = 1 − 3q + 7q³ − 3q⁴ − 6q⁵ + ···
  Z_c(q) = 1 − q − 2q² + q³ + q⁴ + 2q⁵ + ···
  Z_g(q) = 1 − 2q³ + ···   (B.3)
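The E_6 series above can be reproduced from the free-fermion picture of section 2. The following sketch (Python; the E_6^(1) comark multiset (1,1,1,2,2,2,3) and the orbit structure of the two symmetries on the nodes are standard Lie-theory inputs, stated here as assumptions of the sketch) builds each series as a product of (1 − q^{f·E}) over symmetry orbits of f nodes of energy E:

```python
def twisted_series(orbits, max_power):
    """Coefficients of prod (1 - q^{size*energy}): the (-1)^F-graded
    free-fermion partition function twisted by a node permutation; an
    orbit of f nodes of energy E behaves as one fermion of energy f*E."""
    series = [0] * (max_power + 1)
    series[0] = 1
    for size, energy in orbits:
        step = size * energy
        for p in range(max_power, step - 1, -1):
            series[p] -= series[p - step]
    return series

# E6^(1) nodes: three of energy 1, three of energy 2, one of energy 3
untwisted  = [(1, 1)] * 3 + [(1, 2)] * 3 + [(1, 3)]
charge_c   = [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2)]  # one leg fixed, two swapped
one_form_g = [(3, 1), (3, 2), (1, 3)]                  # cyclic rotation of the legs

print(twisted_series(untwisted, 5))    # [1, -3, 0, 7, -3, -6], cf. Z(q)
print(twisted_series(charge_c, 5))     # [1, -1, -2, 1, 1, 2],  cf. Z_c(q)
print(twisted_series(one_form_g, 5))   # [1, 0, 0, -2, 0, 0],   cf. Z_g(q)
```

All three outputs match the coefficients quoted in (B.3), which is a useful cross-check of the folding rules used in the twisted traces.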
22,876
2021-03-01T00:00:00.000
[ "Physics", "Engineering" ]
Artificial Intelligence for Modeling Real Estate Price Using Call Detail Records and Hybrid Machine Learning Approach Advancement of accurate models for predicting real estate prices is of utmost importance for urban development and several critical economic functions. Due to significant uncertainties and dynamic variables, real estate has been modeled as a complex system. In this study, a novel machine learning method is proposed to tackle the complexity of real estate modeling. Call detail records (CDR) provide excellent opportunities for in-depth investigation of mobility characterization. This study explores the potential of CDR for predicting real estate prices with the aid of artificial intelligence (AI). Several essential mobility entropy factors, including dweller entropy, dweller gyration, workers' entropy, worker gyration, dwellers' work distance, and workers' home distance, are used as input variables. The prediction model is developed using the machine learning method of the multi-layered perceptron (MLP) trained with the evolutionary algorithm of particle swarm optimization (PSO). Model performance is evaluated using the mean square error (MSE), the scatter index (SI), and Willmott's index (WI). The proposed model showed promising results, revealing that the workers' entropy and the dwellers' work distances directly influence the real estate price, whereas the dweller gyration, dweller entropy, workers' gyration, and the workers' home distance had a minimal effect on the price. Furthermore, it is shown that the flow of activities and the entropy of mobility are often associated with regions with lower real estate prices. Introduction Delivering insight into housing markets plays a significant role in the establishment of real estate policies and in mastering real estate knowledge [1-3]. Thus, the advancement of accurate models for predicting real estate prices is of utmost importance for several key economic functions, for example, banking, insurance, and urban development [4-6]. Due to significant uncertainties and dynamic variables, real estate has been modeled as a complex system [7]. Call detail record (CDR) data has recently become popular for studying social behavior patterns, including mobility [8-10]. The expansion of the new-generation technology standard for broadband cellular networks has further increased this data source's popularity worldwide [11]. Although the literature includes a wide range of applications of CDR, from urban planning to land management and from tourism to epidemiology, the true potential of CDR in modeling complex systems is still at a very early stage [12]. Consequently, this study explores the potential of CDRs in modeling and predicting real estate prices [13-15]. Data The call detail record (CDR) [30-32] has recently become popular for studying social behavior patterns, including mobility. The expansion of the new-generation technology standard for broadband cellular networks has further increased this data source's popularity worldwide, yet the true potential of CDR data in modeling complex systems is still at a very early stage. In this study, the CDR data was produced at the Vodafone facilities located in Budapest, Hungary. The spatiotemporal dataset consists of anonymous billing records of calls, text messages, and internet data transfers, without specifying the activity type. Thus, a record includes a timestamp, a device ID, and a cell ID.
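To fix notation for what follows, here is a minimal sketch of the assumed record layout (the field names are illustrative, not the provider's actual schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CdrRecord:
    """One anonymized billing event: call, SMS, or data transfer."""
    timestamp: datetime   # when the activity was recorded
    device_id: str        # anonymized SIM identifier
    cell_id: int          # serving cell; maps to a centroid location

# records are grouped per SIM card to build the mobility metrics below
record = CdrRecord(datetime(2018, 4, 3, 8, 15), "sim_00042", 1374)
print(record)
```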
The locations of the cell centroids are also available for geographic mapping. Worth mentioning is that the data accuracy depends on the size of the cells [33,34]: the cells located downtown are smaller and placed more densely than in underpopulated areas. In this study, the data acquisition covers the entire city during spring 2018. This contains 955,035,169 activity records from 1,629,275 SIM cards. However, many of these SIM cards have only very few activities; fewer than 400 thousand SIM cards have regular enough daily activity. Several mobility metrics are calculated using the active SIM cards, including the radius of gyration [35] and the entropy [36]. The home and work locations are estimated, and the distance between the two locations is also used as a metric. SIM card-based mobility metrics are aggregated to cells based on the subscribers who live or work in a given cell. This results in the following columns used as independent variables for the hybrid machine learning model: dweller entropy, dweller gyration, worker entropy, worker gyration, dwellers' work distance, and workers' home distance. Additionally, the dependent variable is the real estate price. The normalized real estate price values are associated with every subscriber of the CDR data based on the assumed home location, to describe socio-economic status. The real estate price data is provided by the ingatlan.com website based on the advertisements in August 2018. The data contains slightly more than 60 thousand estate locations with floor spaces and selling prices (Figure 1 shows the price distribution). The normalization is performed by dividing the floor space by the selling price. Figure 2 shows the estate advertisements over the map of Budapest; the more expensive estates are represented not just by color but by larger markers as well.
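The two mobility metrics can be sketched as follows. This is a minimal Python illustration; the exact definitions in [35,36] may differ in normalization, and the entropy here is the Shannon entropy of visit frequencies over cells, optionally rescaled to [0, 1]:

```python
from collections import Counter
from math import log2, sqrt

def radius_of_gyration(visits):
    """visits: list of (x, y) positions (e.g. cell centroids), one per record.
    Root-mean-square distance from the center of mass of the visited points."""
    n = len(visits)
    cx = sum(x for x, _ in visits) / n
    cy = sum(y for _, y in visits) / n
    return sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in visits) / n)

def location_entropy(cell_ids, normalize=True):
    """Shannon entropy of the visit distribution over cells; if normalized
    by log2(#distinct cells), the values lie in [0, 1] as in Figure 7."""
    counts = Counter(cell_ids)
    total = sum(counts.values())
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    if normalize and len(counts) > 1:
        h /= log2(len(counts))
    return h

print(radius_of_gyration([(0, 0), (4, 0), (2, 3)]))
print(location_entropy([1374, 1374, 881, 990, 881, 1374]))
```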
For modeling purposes, the CDR dataset contains mobility entropy data, including dweller entropy, dweller gyration, worker entropy, worker gyration, dwellers' work distances, and workers' home distances, as independent variables for the prediction of the estate price as the only dependent variable. Further definitions of the input and output variables are given as follows:

• Norm price: normalized real estate price;
• Dweller entropy: mean entropy of the devices whose home is the given cell;
• Dweller gyration: mean gyration of the devices whose home is the given cell;
• Worker entropy: mean entropy of the devices whose workplace is the given cell;
• Worker gyration: mean gyration of the devices whose workplace is the given cell;
• Dwellers' work distance: average home-work distance of the devices whose home cell is the given cell;
• Workers' home distance: average home-work distance of the devices whose work cell is the given cell.

Methods

The proposed methodology includes three principal sections, namely, data preprocessing, normalization, and machine learning modeling. The raw CDR data passes through a series of functions to be prepared for the modeling section. Figure 3 represents a simplified workflow of the essential data preprocessing section. According to Figure 3, data preprocessing can be divided into eleven building blocks. After cleaning the input data, the home and work locations are determined (building block 3) using the most frequent location during and outside of work hours. Then the home-work distance (building block 6), the entropy (building block 4), and the radius of gyration (building block 5) are calculated for every SIM card. Using the market selling prices, the average real estate price is determined for every cell via the polygons generated by Voronoi tessellation (building block 9) [37]. As every cell has an associated real estate price, a price level can be selected for every subscriber's home and work locations (building block 10). Finally, these indicators are aggregated into a format suitable for modeling (building block 11); a sketch of the home/work estimation step is given below.
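The following sketch illustrates building block 3 together with the [−1, +1] normalization formalized in Equation (1) below (illustrative Python; the work-hours window and tie-breaking are assumptions of the sketch, not the authors' exact rules):

```python
from collections import Counter

WORK_HOURS = range(9, 17)   # assumed definition of work hours

def estimate_home_work(records):
    """records: iterable of (timestamp, cell_id) for one SIM card.
    Work = most frequent cell during work hours, home = most frequent
    cell outside of them (building block 3)."""
    day = Counter(c for t, c in records if t.hour in WORK_HOURS)
    night = Counter(c for t, c in records if t.hour not in WORK_HOURS)
    work = day.most_common(1)[0][0] if day else None
    home = night.most_common(1)[0][0] if night else None
    return home, work

def normalize(values):
    """Min-max scaling to [-1, +1], cf. Equation (1)."""
    lo, hi = min(values), max(values)
    return [2 * (v - lo) / (hi - lo) - 1 for v in values]

print(normalize([10, 15, 20]))   # [-1.0, 0.0, 1.0]
```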
In this study, a normalization technique [38] is applied because of the dynamic range and value differences of the parameters. The technique adjusts values measured on different scales to a notionally common scale in the range from −1 to +1, with the final values generated from the minimum and maximum input values:

  x_N = 2 (x − X_min) / (X_max − X_min) − 1,   (1)

where x_N represents the normalized data in the range of −1 to +1, and X_min and X_max represent the lowest and the highest values in the dataset, respectively. Using the normalization technique significantly reduces the errors caused by differences in the parameter ranges.

This study proposes an efficient classification method based on artificial neural networks [39]. The principal ANN modeling is conducted using the machine learning method of the multi-layered perceptron (MLP) [40]. The multi-layered perceptron variation of neural networks works on the feedforward principle; it is a standard yet powerful neural network. MLP can efficiently generate the values of the output variables from the input variables through a non-linear function. MLP, one of the simplest artificial intelligence methods for supervised learning, consists of several perceptrons, or neurons [41], and uses the backpropagation algorithm, a supervised learning scheme for artificial neural networks based on gradient descent. Each perceptron models its output according to its weights and a non-linear activation function. Figure 4 represents an implementation of the model with the detailed architecture and the input variables of the MLP. According to the implemented architecture, the model includes three learning phases. The first phase obtains and inserts the seven input variables. The next phase, devoted to the hidden layers, contains several sets of hidden neurons; the number of neurons in the hidden layer can be modified and tuned to deliver higher performance, and in this study it is an important factor in improving model accuracy. The model's third layer, the so-called output layer, delivers the output variable, which is the real estate price.
The popularity of MLP has recently been increasing due to its robustness and relatively high performance [42]. The literature includes several comparative studies in which MLP models outperform other models [43]. MLP has also shown promising results in modeling a diverse range of data and applications; therefore, it was selected as a suitable modeling algorithm for this study. The essential formulation of MLP is described as follows. A hidden layer connects the input layer to the output layer, and the output value f(x) is computed as in Equations (2) and (3), where b and w represent the bias and the weights, and K and Q denote the activation functions [39-42]. The activation functions Q are given by Equations (4) and (5):

  Sigmoid(x) = 1 / (1 + e^(−x)),   (4)

  Tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)),   (5)

where Sigmoid(x) delivers a slower response compared to Tanh(x). In addition, the output vector is formulated according to Equation (6). In MLP, one input layer, one hidden layer, and one output layer have been set for the neural network during training and testing [40].

Furthermore, the basic concepts and problem-solving strategy of the particle swarm optimization (PSO) evolutionary algorithm [44] are used to enhance the MLP classifier's performance [40]. To train the MLP, the advanced evolutionary algorithm of PSO is proposed. When MLP is trained with PSO, the combination is called MLP-PSO, which provides a robust technique for modeling several non-linear real-life problems [41]. MLP-PSO has recently been used in several scientific and engineering applications with promising results. Comparative analyses of PSO's performance against other evolutionary algorithms in training neural networks have shown reliable results, with PSO in several cases outperforming the other algorithms [42,43]. The PSO, an efficient stochastic algorithm, works by searching for a global optimum. The algorithm follows a population-based search strategy, which starts with a randomly initialized population of individuals. By adjusting each individual's position, the PSO finds the global optimum of the whole population [44].
Each individual is tuned by adjusting the particles' velocities in the search space according to the particles' social and cognitive behaviors:

  V_i(t+1) = V_i(t) + c_1 · rand(1) · (lbest_i − X_i(t)) + c_2 · rand(1) · (gbest − X_i(t)),
  X_i(t+1) = X_i(t) + V_i(t+1),

where rand(1) is a random function producing values between 0 and 1. Furthermore, c_1 and c_2 remain constants with values between 0 and 2; in this study, c_1 and c_2 are set to 2 throughout the modeling [40]. The algorithm starts by initializing X_i(t) and V_i(t), which represent the population of particles and their velocities, respectively [34]. In the next step, the fitness of each particle is calculated. Further, lbest_i records the local optimum, obtained by evaluating the fitness of each particle in every generation, and gbest identifies the particle with the best fitness as the global optimum. V_i(t+1) delivers the new velocity, and X_i(t+1) generates the new positions of the particles. The algorithm iterates, with velocities kept within the allowed range, until the maximum number of iterations is reached [41].

The modeling includes two phases, training and testing: 70% of the data is used for training and 30% for testing. Furthermore, the performance of the models is evaluated using the correlation coefficient (CC), the scatter index (SI), and Willmott's index (WI) of agreement, Equations (3)-(5) [42,43], where O refers to the observed value, P refers to the predicted value, and n refers to the number of data points [43].

Results

The results and further description of statistical modeling, training, and testing are presented as follows.

Statistical Results

Statistical analysis is conducted in SPSS software v22 using ANOVA [45]. Table 1 includes the sum of squares, mean square, F value, and significance index between groups. According to Table 1, all the variables selected as independent variables have significant effects on the real estate price as the only dependent variable.

Training Results

Using three performance indexes, namely MSE, SI, and WI, Table 2 summarizes the training results of the MLP and MLP-PSO models. The numbers of neurons are 10, 12, and 14, and the population sizes vary over 100, 150, and 200.

Testing Results

Four models with various neuron numbers and population sizes are compared in Table 3. The MLP-PSO with ten neurons in the hidden layer and a population size of 100 outperforms the other configurations. Figure 5 further presents the plot diagrams of the models. Studying the range of error tolerances of the models on the testing results is also essential to identify the model with higher performance; Figure 6 visualizes the models' error tolerances. Performance evaluation of the four proposed models for the testing phase indicates that hybrid model 2, with fewer neurons in the hidden layer and a lower population size, outperforms the other models. As illustrated in Figure 6, among the four models' ranges of error tolerances, model 2 shows promising results.
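The training loop can be sketched compactly. The following illustration (Python/NumPy; the layer sizes, inertia weight, and velocity clamping are assumptions made for the sketch rather than the study's exact configuration) trains a one-hidden-layer MLP by letting PSO search the flattened weight vector that minimizes the MSE:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(weights, X, n_in=6, n_hidden=10):
    """One hidden tanh layer, linear output; weights is a flat vector."""
    i = 0
    W1 = weights[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = weights[i:i + n_hidden]; i += n_hidden
    W2 = weights[i:i + n_hidden]; i += n_hidden
    b2 = weights[i]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(weights, X, y):
    return np.mean((mlp_forward(weights, X) - y) ** 2)

def pso_train(X, y, dim, pop=100, iters=200, c1=2.0, c2=2.0, w=0.7, vmax=0.5):
    pos = rng.uniform(-1, 1, (pop, dim))
    vel = np.zeros((pop, dim))
    lbest, lcost = pos.copy(), np.array([mse(p, X, y) for p in pos])
    gbest = lbest[lcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        vel = w * vel + c1 * r1 * (lbest - pos) + c2 * r2 * (gbest - pos)
        vel = np.clip(vel, -vmax, vmax)       # keep velocities in range
        pos += vel
        cost = np.array([mse(p, X, y) for p in pos])
        improved = cost < lcost
        lbest[improved], lcost[improved] = pos[improved], cost[improved]
        gbest = lbest[lcost.argmin()].copy()
    return gbest

n_in, n_hidden = 6, 10
dim = n_in * n_hidden + n_hidden + n_hidden + 1
X = rng.uniform(-1, 1, (200, n_in))           # stand-in for the six CDR features
y = np.tanh(X @ rng.uniform(-1, 1, n_in))     # synthetic target for the demo
best = pso_train(X, y, dim)
print("final MSE:", mse(best, X, y))
```

On the study's data, X would hold the six normalized mobility features and y the normalized price per cell.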
Results
The results and further descriptions of the statistical modeling, training, and testing are presented as follows.

Statistical Results
Statistical analysis was conducted in SPSS software v22 using ANOVA [45]. Table 1 includes the sum of squares, mean square, F value, and significance index between groups. According to Table 1, all of the variables selected as independent variables have significant effects on the real estate price, the only dependent variable.

Training Results
Using three performance indexes, namely MSE, SI, and WI, Table 2 summarizes the training results of the MLP and MLP-PSO models. The numbers of neurons are 10, 12, and 14, and the population sizes are 100, 150, and 200.

Testing Results
Table 3 presents the experimental results for four models with various numbers of neurons and population sizes. The MLP-PSO with ten neurons in the hidden layer and a population size of 100 outperforms the other configurations. Figure 5 further presents the plot diagrams of the models. Studying the range of error tolerances of the models on the testing results is also essential for identifying the model with the higher performance; Figure 6 visualizes the models' error tolerances. Performance evaluation of the four proposed models for the testing phase indicates that hybrid model 2, with fewer neurons in the hidden layer and a lower population size, outperforms the other models. As illustrated in Figure 6, among the four models' ranges of error tolerances, model 2 shows promising results.

The Interactions of Variables on the Testing Results
Analyzing the outputs of the testing phase to study the effect of each independent variable on the real estate price indicated that the real estate price has an inverse relation with dweller gyration, dweller entropy, workers' gyration, and workers' home distance, and a direct relation with workers' entropy and dwellers' work distance.
It can be claimed that, according to the observations, the working and flow of activities and the entropy of mobility run from areas with lower real estate prices to areas with higher real estate prices. Figure 7a represents the dependence of the normalized property prices on the entropy and gyration of the inhabitants living in Budapest. The contour lines on the heat-map chart, showing the levels of property prices, suggest a strong relationship between property prices and the entropy and gyration of the dwellers. Additionally, entropy and gyration show a linear relationship with home prices: the higher the gyration at the same value of entropy, the higher the property price. On the other hand, it seems that people with the same level of gyration but higher entropy (visiting more places) stay in poorer zones of the city. The bottom-right part of the heat-map chart shows the lower property-price domain: the people staying in cheaper homes visit many city locations, having an entropy higher than 0.7 and a radius of gyration of less than 6 km. These cells are located in the most upbeat working region (the financial district) of the city.
The housing prices are proportionally lower at places where the population with the most diverse visiting behavior works and lives, i.e., with increasing entropy of the home cells. The area where high-entropy dwellers live and only limited-entropy people work seems to be relatively cheap; in these zones, limited job opportunities are available and the inhabitants have to visit several locations on a weekly basis. The gyration of the areas where people work and the entropy of the same locations used as homes show a remarkable interrelation with housing prices (Figure 7c). The most expensive properties are found in the cells where the inhabitants visit only a few locations (entropy < 0.25) and the gyration of the workers is relatively high (> 10 km). The higher the gyration level in the working-place cell, the lower the housing price, provided the home-cell entropy is in the middle range (0.4-0.6); dwellers who visit the same number of destinations but have a bigger radius of gyration live in cheaper neighborhoods. In regions where the inhabitants' entropy is relatively high (> 0.75), higher worker gyration corresponds to lower housing prices as long as the gyration is below 10 km; however, the properties are more expensive if the gyration exceeds 15 km. The level of gyration in a cell is significantly correlated with the distance between the workplace and the dwellers' home locations. People living far from their job locations have to spend more time traveling; therefore, their opportunities to visit several places in the city are limited. This observation is confirmed in Figure 7d, which depicts the coherence between the cell-level home-work distances, the dwellers' entropy, and the housing prices. There are no properties available in the regions where the home-work distance and entropy are both small (bottom-left corner of the heat map) or both high (top-right corner of the chart). The greater the distance between home and workplace, the higher the home prices in cells with an entropy value below 0.5. It seems that people visiting only a few locations (i.e., home, work, school, etc.) can afford to live in more expensive districts and travel more for their work. These significant characteristics are not typical of cells with a higher diversity of visited locations; people living in middle-price (0.6-0.8 million HUF) homes have higher entropy and are ready to travel long distances to their jobs. Figure 7e illustrates how housing prices can be estimated by taking into account the mean home-work distance and the entropy in the cells. The cheapest flats are found in cells whose inhabitants have diversified visiting habits and workplaces within 5-10 km from home; houses become proportionally more expensive beyond this home-work distance range. It is also interesting that, at the same level of home-work mileage, property prices are higher in cells where the mean entropy is smaller. The explanation for this trend could be that more expensive neighborhoods have more easily accessible services and facilities, so the dwellers need to visit fewer places.

Conclusions
Call detail records with mobility information help telecommunication companies map users' accurate locations and entropy activities for analyzing social, economic, and related capabilities within the smart-cities category.
The lack of an exact solution for transforming the data into practical tools for better understanding the effects of telecommunication technologies on today's life leads researchers to use additional tools, such as machine learning, for building user-friendly systems on top of telecommunication technologies. The present study develops single and hybrid machine learning techniques to analyze and estimate real estate prices from call detail records, including mobility entropy factors. These factors include dweller entropy, dweller gyration, worker entropy, worker gyration, dwellers' work distance, and workers' home distance. Modeling was performed using the machine learning method of the multi-layered perceptron trained with the evolutionary algorithm of particle swarm optimization for optimum performance. Results were evaluated using the mean square error, the scatter index, and Willmott's index. Statistical analysis indicated that all of the selected independent variables have a significant effect on the dependent variable. According to the results, the hybrid ML method estimated the real estate price with higher accuracy than the single ML method. Analyzing the outputs of the testing phase to study the effect of each independent variable on the real estate price indicated that the real estate price has an inverse relation with dweller gyration, dweller entropy, workers' gyration, and workers' home distance, and a direct relation with workers' entropy and dwellers' work distance. It can be claimed that, according to the observations, the working and flow of activities and the entropy of mobility run from areas with lower real estate prices to regions with higher real estate prices. For future research, exploring other cities of the country using the proposed model is encouraged. In addition, developing more sophisticated machine learning models to study the CDR data with higher performance is suggested. Future research on CDR data with machine learning will not be limited to real estate price prediction; further research on mobility modeling would be beneficial in a wide range of applications, for example, modeling the COVID-19 outbreak and its governance.
7,287
2020-12-01T00:00:00.000
[ "Computer Science" ]
Cavitating Flow in the Volute of a Centrifugal Pump at Flow Rates Above the Optimal Condition Cavitation is regarded as a considerable factor causing performance deterioration of pumps under off-design conditions, especially at overload conditions. To investigate the unsteady cavitation evolution around the tongue of a pump volute and its influence on the flow field within the passages of the impeller, numerical calculations and several hydraulic tests were performed on a typical centrifugal pump with a shrouded impeller. Emphasis was laid on the cavitation evolution and blade-loading distribution at flow rates above the optimal value. Results indicated that vapor is likely to emerge first at the tongue of the volute rather than at the leading edge of the blades at overload conditions. In contrast to the designed condition, the flow distribution in each passage is markedly different. The flow rate of a passage reaches its maximum just past the location of the tongue, while the minimum flow rate appears in the passage upstream. The cavitation at the tongue squeezes the flow area at the outlet of the corresponding flow passage, thereby causing a large growth in the flow rate at the impeller outlet.

Introduction
The generation of cavitation is a significant contributory factor to performance deterioration in hydraulic machinery [1][2][3]. As for volute pumps, widely used in marine and other fields, cavitation always plays a role in causing vibration and noise under off-design conditions. A number of factors are known to affect the cavitation performance, such as the blade structure, the inducer, and the installation condition [4][5][6]. Furthermore, the occurrence of cavitation in each passage of the impeller can cause a gradual decline in the energy exchange between the liquid and the blades, which can result in a sudden deterioration of performance, or even apparatus failure. Generally, in the case of centrifugal pumps, the flow separation characterized by the unsteady shedding of vortices from the leading edges of the blades is the main site of cavitation inception [7]. Nevertheless, the separated flow around the tongue can also lead to a sharp drop in local pressure, and in some special cases the cavitation first appears at the tongue of the volute rather than at the blade inlet [8,9]. Meanwhile, the head curve shows a precipitous drop, like the typical characteristic curve of centrifugal pumps; furthermore, the head drop due to cavitation does not come from cavitation in the impeller but from the cavitation evolution at the tongue. The investigation of the flow field in the vicinity of or inside a vapor cloud is complex, and most studies in the field of cavitation in centrifugal pumps have focused only on the cavitation in the impeller. Maklakov et al. [10] visualized the cavitation structure on an arbitrarily shaped airfoil by utilizing a high-speed camera and Laser Doppler Velocimetry, which showed the cloud-cavitating area consisting of two parts: attached vapor in the foreside and an unsteady two-phase mixture in the rear region. Fu et al. [11] proposed that the variation in the radial force on the impeller is strongly influenced by the rotor-stator interaction, and that cavitation causes a distortion of the radial force. Dumitrescu et al.
[12] analyzed the leading-edge separation bubbles on rotating blades at different rotation speeds, showing that the flow around the bubbles reattaches to the section surface, with a turbulent boundary layer extending from the reattachment point to the trailing edge. By means of direct visualization and PIV measurements, Rudolf et al. [13] investigated the cavitation phenomena at the tongue of the volute and found that they are similar to the cavitation on a single hydrofoil; the unsteady cavitation-cloud generation results from the unsteady flow field generated by the passing of the blades. Based on the analysis of the vibration signal using the mean and RMS amplitude features, Ahmed et al. [14] found that a low-frequency range between 1 kHz and 2 kHz was effective for monitoring cavitation in the pump. Wang et al. [15] developed a novel entropy-production diagnostic model, with phase transition, to predict the irreversible loss of cavitating flow in hydraulic machinery by including the mass transfer and slip velocity, as well as the low-frequency excitation caused by a cavitation-induced vortex, which is consistent with the frequency-domain characteristics of the interface entropy-production rate. Dular et al. [16] made transient simulations of cavitating flow under different conditions for two hydrofoils and acquired images of the vapor structures; compared with the experimental results, the numerically predicted results show similar cavity lengths. Qiu et al. [17] conducted numerical computations of a pumpjet propulsor with different tip clearances in oblique flow, and at the different cavitation numbers the differences in the hydrodynamic parameters between the different tip clearances are relatively small. Xia et al. [18] simulated the cavitation of a waterjet propulsion pump; the cavitation development trend is similar to that under a small flow-rate condition: cavitation emerges at the hub before the blade rim, and the maximum value of the vapor fraction at the blade rim is larger than that at the hub. Yang et al. [19][20][21] designed an inducer to improve the anti-cavitation performance; as the cavitation number decreases, bubbles occur first in the leakage vortex, and the leakage-vortex cavitation connects with the shear cavitation in the leakage flow, forming a stable leakage cavitation. Jiang et al. [22] found that the cavitation performance can be improved effectively by arranging a variable-pitch inducer and adopting an annular nozzle scheme. Jain et al. [23] illustrated that when a pump is operated as a turbine, it may suffer from traveling-bubble and von Karman vortex cavitation near the impeller blades, and from vortex-rope cavitation in the draft tube. Wang et al. [24] and Liu et al. [25] investigated several improved turbulence models to obtain high-resolution results for the successive stages of unsteady cavitating flow in a centrifugal pump. Hu et al. [26] conducted a visualization experiment to validate the unsteady simulation of cavitation in the centrifugal pump, in which the inception, shedding, and collapse stages of the cavitation evolution were successfully captured. The above research results indicate that vaporization preferentially emerges on the suction side of the blade leading edges in the impeller. In contrast, only a few studies have investigated the cavitation within the volute.
Therefore, this paper presents a special case in which the cavitation at the tongue causes a 3% head drop while there is no obvious cavitation at the impeller inlet. Through numerical computation and hydraulic tests, the cavitation structure in the volute, as well as the effect of its evolution on the energy exchange within the impeller, was demonstrated to provide guidance for the optimization of the volute.

Geometry and Parameters
In this article, the model pump design parameters are as shown in Table 1, and Figure 1 illustrates the 2D hydraulic assembly sketch. The main hydraulic components of this model are the impeller and the volute. The impeller transfers energy to the conveyed fluid, while the spiral volute collects the fluid from the impeller and conveys it to the outlet section.

Three-Dimensional Model and Grids
The computational domain is simplified as the inlet, impeller, volute, and outlet section. Compared with tetrahedral meshes, hexahedral meshes can easily fit the boundary of the domain and are suitable for the calculation of fluid and concentration stress on wall surfaces, which not only improves the accuracy of the calculation but also reduces the computing time. Therefore, ANSYS ICEM was used to generate hexahedral meshes for each computational domain, and 15 layers of boundary mesh were adopted near the wall of each subdomain. Figure 2 shows the mesh-independence validation of the computational domains: once the grid number exceeded 2.6 × 10^6, the numerical results showed no appreciable difference. The grids of the computational domains were then established, as shown in Figure 3.
Boundary Conditions
The transient simulation of the model pump was conducted at several flow rates above the designed value using ANSYS-CFX 17.0. Given that the cavitation phenomenon in a pump is closely linked to the static pressure at the impeller inlet, a total-pressure inlet and a mass-flow-rate outlet were set as the boundary conditions. The impeller subdomain was set as the rotational domain, with general connections to the stationary subdomains. The volume fraction of the cavity at the inlet was 10^−15, and the wall roughness was 0.02 mm. The results of the non-cavitation conditions were set as the initial values of the corresponding steady numerical simulation, and the initial values of the unsteady cavitation calculation were then taken from the corresponding steady results. The numerical calculations in this article were based on the SST k-omega turbulence model together with the homogeneous cavitation model based on the Zwart equation [27], in which the growth of the bubbles in the fluid is described by the Rayleigh-Plesset equation:

R_B (d²R_B/dt²) + (3/2)(dR_B/dt)² = (p_v − p)/ρ_f − 2σ/(ρ_f R_B),

where R_B is the radius of the bubble, p_v is the pressure inside the bubble, p is the pressure of the fluid outside the bubble, ρ_f is the fluid density, and σ is the surface tension of the interface between the fluid and the bubble.

Hydraulic Tests of the Pump
According to ISO 9906, the uncertainties of Q and H were calculated as the square root of the sum of the squared partial uncertainties, e = √(e_R² + e_S²), where e is the overall uncertainty and e_R is the stochastic uncertainty resulting from repeated measurement data; as each of the partial errors is calculated independently and follows a normal distribution, the true error is almost always less than the uncertainty. e_S is the systematic uncertainty of the instruments or measurement methods. The overall uncertainty of the flow rate Q is e(Q) = √(e_R²(Q) + e_S²(Q)). Table 2 shows the detailed calculation of the uncertainties for Q/Q_d = 1.52. The external characteristics of the pump are shown in Figure 5. The cavitation number is defined as σ = (p_in − p_v0)/(0.5 ρ v_in²), where p_in is the static pressure at the inlet of the impeller, p_v0 is the saturated vapor pressure of the fluid, set to 3574 Pa, and v_in is the average absolute velocity at the inlet of the impeller.

Cavitation Clouds at the Tongue
At operating points above the optimal value, the high-speed separated flow at the tongue of the volute induces vortex cavitation of the shear turbulence. The unsteady process of shear-layer instability and vortex shedding can be captured by numerical calculations. The strength of the separation vortex is effectively restrained as the tongue becomes shorter or the radius of the rounded corner becomes larger.
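As a numerical illustration of the relations above, the sketch below combines the ISO 9906 root-sum-square uncertainty, the cavitation number at the impeller inlet, and the inertia-controlled bubble growth rate that the Zwart model derives from the Rayleigh-Plesset equation by neglecting the second-order and surface-tension terms. Apart from p_v0 = 3574 Pa, which is taken from the text, the density and the sample inputs are illustrative placeholders.

```python
import math

RHO_F = 998.0   # fluid density, kg/m^3 (assumed: water at about 20 C)
P_V0 = 3574.0   # saturated vapor pressure, Pa (value given in the text)

def overall_uncertainty(e_r, e_s):
    # ISO 9906 combination: e = sqrt(e_R^2 + e_S^2)
    return math.hypot(e_r, e_s)

def cavitation_number(p_in, v_in):
    # sigma = (p_in - p_v0) / (0.5 * rho_f * v_in^2)
    return (p_in - P_V0) / (0.5 * RHO_F * v_in**2)

def zwart_growth_rate(p):
    # Inertia-controlled limit of the Rayleigh-Plesset equation used in the
    # Zwart model: dR_B/dt = sqrt(2/3 * (p_v - p) / rho_f), active for p < p_v
    dp = max(P_V0 - p, 0.0)
    return math.sqrt(2.0 / 3.0 * dp / RHO_F)

print(overall_uncertainty(0.9, 1.2))          # combined uncertainty, e.g. in %
print(cavitation_number(p_in=6.0e4, v_in=5))  # dimensionless cavitation number
print(zwart_growth_rate(p=2.0e3))             # bubble growth rate, m/s
```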
At large flow rates, the similarity of the flow pattern between adjacent passages is enhanced, and the isobaric surfaces of the static pressure are nearly circular. Hence, the flow distortion in the impeller is weakened, while the direct discharge of fluid from the impeller to the diffuser is obviously enhanced. The direction of the absolute velocity at the outlet of the impeller shifts to the side of the blade close to the tongue, and then to the incident flow; the leading edge of the tongue therefore presents a negative attack angle to the incident flow. Under the influence of periodic strong shear flow, the local low pressure induces cavitation, and the low-pressure area varies periodically with the rotation of the impeller. Figure 6 shows the numerical-simulation results for the tongue cavitation over 1/6 of a cycle at Q/Q_d = 1.52. The short tongue used in this article differs greatly from ordinary airfoils, but its cavitation structure has great similarities with the attached cavitation on a single airfoil. The main factors affecting the periodic cavitation evolution and shedding around a single airfoil are the inlet velocity distribution and the Strouhal number; however, the cavitation evolution near the tongue in this case was influenced by the alternation of the pressure gradient as well as by the jet-wake vortices at the impeller outlet. As can be seen from Figure 6a, at 0 T, as the monitoring blade started to leave the tongue, the cavitation clouds developed on the tongue again, at which point the blocking of the flow passage caused by the downstream cavitation-cloud structure was most obvious. Then, at 1/4 T, as the blade was leaving the tongue, an attached cavitation cloud was formed on the tongue and partially dissociated into two asymmetrical parts. At 1/2 T, the tongue was in the middle of the passage outlet, and cavitation clouds had appeared at the head of the tongue. At 3/4 T, as the trailing edge of the next blade gradually approached the tongue, the detached cavitation cloud was similar to the U-type vortex on a single hydrofoil, which results from the re-entrant jet. At 1 T, the blade had just passed the location of the tongue, and newly attached cavitation can be seen on the front edge, while the previous cavitation cloud collapsed rapidly or moved downstream. The intense turbulence caused by the cavitation greatly interfered with the exchange of energy in the wake region, and the low-pressure area expanded. With the shedding of the cavitation cloud, the cross-section near the tongue changed dramatically, causing a large energy loss in the volute. When the blade was close to the position of the tongue, vapor appeared on both sides of the tongue.
Meanwhile, the cavitation cloud was asymmetrical: the cavitation was weaker on the side near the front shroud of the impeller, while the volume fraction of the vapor near the back shroud was larger. When the tongue is in the middle of the trailing edges of two adjacent blades, the effect of rotor-stator interference on the cavitation cloud declines significantly. At this moment, the cavitation structure near the tongue is close to that on a single airfoil, and several typical cavitation patterns of the airfoil also emerge at the tongue. Figure 6b shows the velocity distribution over 1/6 of a cycle on the middle surface within the impeller. It can be seen that the velocity distribution in the impeller is uniform at overload conditions. Serious flow separation emerged around the tongue, which triggered the cavitation inception. At off-design conditions, a detached, unsteady vortex appeared near the tongue; this unsteady vortex is an important source of vibration and noise. At 0 T, a low-velocity secondary-flow area occupied nearly half of the channel downstream of the tongue. The cavitation clouds developed rapidly and squeezed the flow cross-section, so the flow velocity outside the cavitation clouds increased at the tongue. At 1/4 T, as the cavitation cloud shed, the blocked passage was partially released, the low-velocity area shrank, the vortex moved downstream, and the flow area was divided into the main-flow and backflow regions. At 1/2 T, the velocity distribution varied little, but the vortex intensity within the vapor increased. At 3/4 T, the center of the vortex moved further downstream; the previous cavitation clouds flowed downstream and formed the cavitation wake. At 1 T, a second vortex core appeared and activated the cavitation cloud with a relatively high volume fraction of vapor. Figure 6c shows that, at 0 T, as the blade had just left the range of observation, a large area of cavitation emerged in front of the volute tongue. At 1/4 T, the next blade had just appeared, and the cavitation cloud expanded rapidly and shed. At 1/2 T, cavitation appeared at the leading edge as the blade approached the tongue. At 3/4 T, the blade was located just near the tongue, and the cavitation cloud acted as attached cavities on the surface. At 1 T, as the blade had just left the tongue and moved downstream, the flow passage was blocked by the downstream cavitation-cloud structure. Compared with the simulation results, the cavitation clouds captured with high-speed photography did not appear to attach to the wall surface, and the cavitation wake was not apparent in the shooting window. Within 1/6 of a cycle, the cavitation structure changed little, and only when the trailing edge of the blade was closest to the tongue had the separated layer of the cavitation clouds formed.

Blade-Loading Distribution
In this study, the space between the upstream blade outlet and the tongue was used as a correlation parameter for the unsteady force on the blade of a traditional centrifugal pump. This parameter was used to investigate the influence of cavitation at the tongue on the blade loading at high flow rates, especially the influence of the rotor-stator interaction.
The distance from the outlet of the blade pressure side to the tongue is defined as the interference distance of the separation tongue; it is expressed in terms of α, the included angle between the outlet of the blade pressure side and the leading edge of the separation tongue, and Z, the number of blades. The radial distance coefficient (r*), axial distance coefficient (z*), and blade-load coefficient (p*) are defined by normalizing r_i by R, z_i by Z_i, and p_i by the dynamic pressure based on U, where r_i is the distance from the monitoring point to the rotation shaft of the impeller, R is the outer diameter of the impeller, z_i is the axial distance from the monitoring point to the back shroud, Z_i is the axial distance from the force-bearing point to the back shroud, p_i is the outlet static pressure at the force-bearing point, and U denotes the inlet velocity of the pump. The blade loading close to the tongue is shown in Figure 7. The blade loading of the centrifugal pump is unsteady due to the asymmetry of the volute. The pressure difference between the pressure and suction sides of the blades increased gradually as the upstream blade approached the tongue, whereas it decreased gradually when the blade left the volute and moved downstream. The static pressure at the SS inlet of the upstream blade was higher than at the corresponding PS (the blue arrow at c_b = 0.53). This phenomenon disappeared when the flow passage left the tongue, indicating that a low-pressure zone developed at the PS inlet of the flow passage at the tongue under Q/Q_d = 1.52. Under a high flow rate, the low-pressure zone in the centrifugal pump occurred at the leading edge of the SS. As shown in Figure 6, the cavitation at the tongue squeezed the flow area at the outlet of the corresponding flow passage, thereby causing a large growth in the flow rate of the fluid at the impeller outlet. Consequently, the flow rate at the impeller inlet increased, forming a local high-speed region near the PS outlet. The cavitation structure on the tongue was quickly generated and collapsed as the blade passed the tongue, thereby decreasing the squeezing effect and the flow rates at the passage outlet and inlet. However, the pressure difference decreased significantly at c_b = −0.13, which was mainly attributed to the transport inertia of the fluid. The power capability of the rotating blades of the impeller is unsteady. The pressure difference first increases and then decreases for 0.3 < r* < 1, and the main power area of the blade lies in the interval 0.3 < r* < 0.9.
At c_b = 0.53, the blocking effect of the cavities at the tongue was evident and seriously disturbed the energy exchange at the impeller; the blade mostly lost its influence on the fluid. The working capability of the impeller increased gradually as the blade approached the tongue. The maximum pressure difference between the pressure and suction sides of the blades was achieved at c_b = −0.13 and −0.27, after which the power capability of the blades declined gradually. The relationship between the flow-field distribution at the leading edge of the tongue and the flow rate at the impeller outlet was studied, and the flow coefficient was defined as the ratio q/Q, where q refers to the mass flow rate at the outlet of each impeller passage and Q refers to the mass flow rate at the inlet of the pump model. The changes in the outlet flow rates of the different passages in the impeller under Q/Q_d = 1.52 at cavitation conditions are shown in Figure 8. The outlet flow-rate distribution of the impeller is positively related to the blade-passing frequency. A regular distribution is found in the passages over a single cycle when the pump operates under the designed working conditions. Under an extremely large flow rate, the flow distributions vary significantly between the different passages, although the total outlet flow rate of the impeller is highly correlated with the blade-passing frequency. At the moment of the wave peak, the tongue is located in the middle of two adjacent blades; at the moment of the wave valley, the blade sweeps the tongue. Combining the peak values with the corresponding passage positions shows a relatively high outlet flow rate of the impeller when the passage sweeps the tongue, where the valley of the total outlet flow rate is found. Although the leakage losses at the impeller ring and in the front and back pump cavities are neglected, secondary flows (e.g., inlet backflow, passage vortex, and axial vortex) exist in the impeller, so the sum of the flow coefficients at the outlet of the impeller remains below 0.9.

Conclusions
This study observed the cavitation evolution in the volute of a centrifugal pump at overload conditions based on computational calculations and experiments. The unsteady cavitation evolution was investigated in connection with the blade-loading distribution and the flow-rate variation of each passage. The findings of this investigation are as follows: (1) At large flow rates, the unsteady shedding of the separated vortex lowers the local static pressure around the tongue, which triggers the cavitation inception. The cyclical change of the flow states in the volute, as well as the periodic cavitation, is caused by the pressure difference between the suction and pressure surfaces and by the wake at the impeller outlet. (2) When unsteady cavitation appears near the tongue, the blade-loading distribution on each blade has different characteristics.
As the frequency of the vapor-cloud shedding is the same as the blade-passing frequency, the variation cycle of the blade loading corresponds to the blade-passing frequency. The pressure difference between the two sides of the blades increases gradually as the upstream blade approaches the tongue, whereas it decreases gradually when the blade leaves the volute and moves downstream. (3) Under an extremely large flow rate, the flow distributions vary significantly between different passages, although the total outlet flow rate of the impeller is highly correlated with the blade-passing frequency. At the moment of the wave peak, the tongue is in the middle of the passage; at the moment of the wave valley, the blade sweeps the tongue.
6,462.4
2021-04-20T00:00:00.000
[ "Engineering", "Physics" ]
Symmetry breaking in vanadium trihalides In the light of new experimental evidence, we study the insulating ground state of the 3d² transition-metal trihalides VX3 (X = Cl, I). Based on density functional theory with the Hubbard correction, we systematically show how these systems host multiple metastable states characterised by different orbital ordering and electronic behaviour. Our calculations reveal the importance of imposing a precondition on the on-site d density matrix and of considering a symmetry-broken unit cell to correctly take into account the correlation effects in a mean-field framework. Furthermore, we ultimately find a ground state with the a1g orbital occupied in a distorted VX6 octahedron, driven by an optical phonon mode.

I. INTRODUCTION
The use of a Hubbard-like correction to density functional theory (DFT+U) is mainly motivated by the need to properly take into account the localized nature of the d and f orbitals and to correctly describe the observed insulating behaviour in some strongly correlated materials [1][2][3][4]. In practical implementations of the method [5][6][7][8], the difficulty of precisely accounting for the localization of electrons can numerically produce convergence to different electronic metastable phases, depending on the initial charge density used in the calculations, as in symptomatic compounds such as FeO [1] and UO2 [9]. These metastable phases can have very different energies and can constitute a trap preventing access to the real ground state of the system. Moreover, the stabilization of metastable electronic states in strongly correlated systems is sometimes due to what A. Zunger calls "simplistic" electronic structure theory: a mean-field calculation (like DFT in the local density approximation) in a unit cell with as many symmetry constraints as possible [10,11]. Exceptional confirmations of the theory are the predictions of insulating phases in "Mott insulators" even by mean-field-like DFT, provided different symmetry breakings are allowed (spin symmetry breaking, structural symmetry breaking, consideration of spin-orbit coupling (SOC)) [11], which makes it possible to explore the strong correlations and multireference character of the real, symmetry-unbroken wavefunction [12][13][14].

In this paper, we provide a further and so far unnoticed example of symmetry-breaking-induced phases in two-dimensional magnetic materials, discovering that symmetry breaking is necessary to stabilize the right insulating ground state among the different metastable ones hosted by the layered magnetic compounds VI3 [15] and VCl3 [16]. We find that, depending on the strength of the on-site Coulomb repulsion (U), SOC, and structural distortions, different orbital-ordered phases can be stabilized. The delicate balance of these low-energy interactions ultimately originates from the common structural motif of VX3 compounds: a honeycomb lattice of V cations in edge-sharing octahedral coordination with the halides [15][16][17][18] (Fig. 1a-b). The sixfold coordination of the metal cations would naturally lead to an octahedral symmetry Oh, but, owing to the partial occupation of the d orbitals in the t2g shell (vanadium is in a V3+ oxidation state with two electrons in the correlated d manifold), a Jahn-Teller distortion of the octahedra lowers the symmetry from Oh to the trigonal point group D3d (Fig. 1b) [19,20], determining the splitting of the d-state manifold according to the trigonal basis (see Fig.
1c and Table S1 in the Supplementary Material (SM)). A similar octahedral environment and structural distortion are commonly observed in 3d ABO3 perovskites [21][22][23][24]. As a general rule, the trigonal symmetry determines the splitting of the t2g orbitals into a doublet e'g (e'g− and e'g+) and a singlet a1g (Fig. 1c). The a1g orbital extends along the out-of-plane direction (see Table S1 in the SM) and in these compounds corresponds to the cubic harmonic with |lz = 0⟩ [19,25]. On the other hand, the e'g− and e'g+ orbitals belong to the eigenspaces with |lz = −1⟩ and |lz = 1⟩, respectively. Layered VX3 compounds are deeply investigated, motivated by the technological perspectives they promise and by the peculiar physical properties they have been demonstrated to host [16,26,27]. They have recently been investigated by different experimental and theoretical approaches because of contrasting evidence on the nature of their electronic ground state [16,25,28,29,30,31].

Transmission optical spectroscopy [32] and photoemission experiments [16,29,30,33,34] revealed a sizable band gap for both VI3 and VCl3. Polarization-dependent angle-resolved photoemission experiments [30] on VI3 showed evidence that the in-plane (e'g manifold) and out-of-plane (a1g) orbitals are both occupied, pointing to an a1g e'g− ground state; see Fig. 1c (here and in the following, we refer to this phase as the a1g-insulating ground state; see Fig. S1 in the SM for additional details). Indeed, X-ray magnetic circular dichroism measurements [25] confirmed the presence of a large orbital moment (⟨Lz⟩_exp ≃ −0.6), compatible with the a1g-insulating ground state, although partially quenched with respect to the theoretically predicted ⟨Lz⟩_th ≃ −1 [25,28]. Although all these experimental findings point to an a1g-insulating ground state, recent photoemission spectra of VI3 and VCl3 [29,30,34] proved particularly difficult to interpret by first-principles DFT+U calculations, in particular for the determination of the energy position of the V-d states [25,29,32,35,36]. To further complicate the scenario, different first-principles calculations [37,38] predicted both VCl3 and VI3 to be metals. The emergence of false metallic states in first-principles DFT calculations [10] could be an indication of possible metastable electronic phases which prevent obtaining the ground state. In addition, the above-mentioned difficulties in reconciling DFT+U predictions with angle-resolved photoemission spectroscopy (ARPES) experiments, together with the discrepancy between the experimental orbital angular momentum (⟨Lz⟩_exp ≃ −0.6) and the theoretical prediction (⟨Lz⟩_th ≃ −1), call for a deeper understanding of the ground-state properties of vanadium-based trihalides, resolving both the numerical and physical issues which hinder the determination of the electronic configuration of these compounds.
II. METASTABLE PHASES
The electronic structures of VI3 and VCl3 are studied using a monolayer unit cell in the D3d point group within the DFT+U approach, including SOC and considering the spin polarization along the z-axis (see the Methods section in the SM for further information). As anticipated, DFT calculations for this class of materials are complicated by competing and entangled structural, electronic, and magnetic degrees of freedom, in which the self-consistent solution of the Kohn-Sham equations can be trapped depending on the starting guess for the charge density and wavefunctions. So, in order to stabilize the different metastable states, we used a precondition on the on-site occupation of the d-density matrix for the +U functional [39] (see Fig. S1 in the SM for a summary of the d-state representation in a D3d structure and of the precondition matrices used in the calculations).

We start the discussion by presenting the electronic phase obtained in VI3 when forcing the occupation of the a1g state. The self-consistency, without SOC, ends in a phase with the orbital angular momentum directed along the z direction (⟨Lz⟩ ≃ −0.18) and the a1g orbital occupied (central panel of Fig. 2). In this configuration, VI3 is metallic due to the hybridization of the e'g doublets with iodine-derived bands (see Figs. S2, S3). We refer to this phase as the a1g-metallic phase (see the SM for a better description of the technicalities used to stabilize the different phases).

On the other hand, forcing the occupation of the a1g state and including SOC in the calculation produces an insulating phase with an occupied a1g state and an orbital angular momentum ⟨Lz⟩ ≃ −1, compatible with the occupation of the e'g− orbital (leftmost panel of Fig. 2). Finally, leaving the a1g empty in the starting guess for the density matrix, we find an insulating solution for both VI3 and VCl3 (rightmost panel of Fig. 2). In this insulating solution the e'g doublet is fully occupied and, as expected, the angular momentum is quenched, ⟨Lz⟩ ≃ 0. Indeed, Geourgescu et al. [20] claim that, in light compounds without strong SOC, such as VCl3 and TiCl2, the only insulating ground state must be the e'g-insulating phase, driven by electronic correlation (Hubbard U Coulomb repulsion). We note that the a1g- and e'g-insulating phases differ also from a structural point of view [19,25]: the a1g-insulating phase is trigonally contracted, while the e'g-insulating phase is trigonally elongated (see Fig. 1c). A deeper analysis of the self-consistent d-density matrix reveals that both the a1g-metallic and e'g-insulating phases have a block-diagonal form with real occupancies, in agreement with the symmetry constraints, while the a1g-insulating phase shows a d-density matrix with imaginary occupancies, signaling the fundamental role of SOC in stabilizing this state (see Fig. S1 in the SM).
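As a simple illustration of the orbital bookkeeping behind these phases, the following sketch counts the single-ion ⟨Lz⟩ for the two-electron (3d²) configurations discussed above, using the stated character of the trigonal orbitals (a1g ~ |lz = 0⟩, e'g− ~ |lz = −1⟩, e'g+ ~ |lz = +1⟩). It is a hypothetical illustration of this counting only, not of the DFT+U calculation itself.

```python
# Single-ion counting of the orbital moment <Lz> for V 3d^2 configurations,
# using the lz character of the trigonal t2g-derived orbitals given in the
# text: a1g ~ |lz = 0>, e'g- ~ |lz = -1>, e'g+ ~ |lz = +1>.

lz = {"a1g": 0, "eg-": -1, "eg+": +1}

def total_lz(occupied):
    # <Lz> as the sum of lz over the occupied orbitals
    return sum(lz[orb] for orb in occupied)

print(total_lz(["a1g", "eg-"]))   # a1g-insulating phase: <Lz> = -1
print(total_lz(["eg-", "eg+"]))   # e'g-insulating phase:  <Lz> = 0
```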
III. CORRELATION EFFECTS
All the investigated phases lie close in energy (on the order of ≃ 10-50 meV) and, given the already discussed delicate physical properties of this class of compounds, it is worth understanding how much the ground-state properties depend on the fine details of the calculation. Thus, we analyze the energy differences among the discovered phases as a function of the U parameter and for two choices of the exchange-correlation potential, PBE and LDA. We present the results in Fig. 3a-b (curves labelled PBE_D3d and LDA_D3d) for VI3 and VCl3, respectively, where the total energy difference between the two insulating phases (e'g and a1g) is reported as a function of the U parameter [40]. We find that the total energy difference is strongly dependent on U for VI3: for U less than ≃ 3.0 eV the e'g phase is lower in energy, while the a1g phase is the ground state for larger U. For VCl3, the e'g phase is always favored. On the other hand, the LDA functional predicts the a1g-insulating phase as the ground state for both VCl3 and VI3, irrespective of the U value. Note that, even in a light compound like VCl3, SOC makes the a1g-insulating phase accessible (it is the ground state in the LDA approximation).

FIG. 3. Energy differences between the e'g-insulating and a1g-insulating phases using the PBE and LDA functionals in a D3d unit cell. The PBE_C1 line represents the difference between the e'g-insulating phase and the symmetry-broken (SB) a1g-insulating phase in a unit cell with randomized atomic positions (see text).

IV. STRUCTURAL SYMMETRY BREAKING
The highlighted strong dependence of the total energies of the considered phases on both U and the exchange-correlation potential points towards a complicated and nearly degenerate electronic energy landscape promoted by strong electronic correlations, which can be further enriched by the coupling with the structural degrees of freedom. Indeed, following the approach of Zunger et al. [10,11], strong correlations in the exact wavefunction can be captured in a DFT+U framework by lowering the symmetry through structural distortions. To explore this possibility in an unbiased way, we randomized all the atomic positions in the unit cell and then relaxed the system towards the closest local energy minimum. The calculation ends in a new, completely symmetry-broken (SB) phase [41], which we call PBE_C1, characterized by a distortion of the halides (∼ 0.03 Å). We find that this phase has an occupied a1g orbital and a nearly quenched angular momentum ⟨Lz⟩ ≃ −0.1, and that it is energetically favored for all the U values studied (Fig. 3a-b). This result is reminiscent of a Jahn-Teller effect (similar to those found in perovskites [21][22][23][24]), in which a structural distortion promotes an electronic energy lowering. The effect can be further rationalized by studying the phonon modes at the Γ-point, which can signal structural instabilities through imaginary frequencies. We found that the a1g-insulating phase [42] has real frequencies, indicating that it is indeed a genuine metastable phase, stable against relatively small structural perturbations around the equilibrium structure. However, the calculation of the total energy displacing the atoms along the eigenvector of one optical phonon mode (mainly involving the halides, as shown in Fig.
4a-b) using larger distortions (up to ≃ 0.2 Å) reveals an electronic instability when the displacement of the atoms is 0.5 times the normalized eigenvector for VI3 and ∼ 0.6 for VCl3 (see Figure 4c-d). Above these thresholds the systems do not remain on the same Born-Oppenheimer energy surface (red curve) but move to a different one (black curve). Moving back along the eigenvector direction from this new electronic phase to recover the undistorted unit cell, both systems end in a new energy minimum, clearly highlighted in the insets of Figure 4c-d, with inequivalent distances between V and X (of about 0.03 Å), corresponding to an a1g-insulating state with ⟨Lz⟩ ≃ −0.1 and a d-density matrix with real occupancies, in agreement with the evidence from the most recent photoemission experiments [29,30,34] on VI3. It must be underlined that this phase cannot be obtained in a DFT calculation in a fully symmetric D3d unit cell, because the latter sets a univocal orbital scheme (see Fig. 2). Moreover, it is not even stabilized by SOC: a calculation without SOC on this new phase, starting from the previously converged charge density, remains equally stable.

V. CONCLUSION
The newly discovered phase of the vanadium trihalides originates from the a1g-metallic phase upon a Jahn-Teller distortion which splits the degenerate e'g doublet, opening an energy gap that is further increased by electronic correlations, possibly signaling strong correlation in the true correlated wavefunction [10][11][12][13]. Our results shed light on new physical mechanisms active in vanadium-based trihalides, overlooked and unexplored so far, which can be theoretically accessed once the electronic correlations are considered in a structurally SB unit cell. We can now interpret the available experimental reports in terms of the SB phase predicted above. X-ray natural linear dichroism and X-ray magnetic circular dichroism experiments by Sant et al. [31] found an anisotropic charge-density distribution around the V3+ ion, which we can naturally interpret as an unbalanced hybridization between the vanadium and ligand states, originating from the structural distortion and the inequivalence of the ligands. The occupation of the a1g orbital and the insulating behavior are confirmed by polarization-dependent ARPES [30], but the predicted orbital moment (⟨Lz⟩ ≃ −0.1) is too far from the experimental estimate of ⟨Lz⟩ ≃ −0.6 [25]. However, it should be noted that neither the symmetric a1g-insulating phase (⟨Lz⟩ ≃ −1) nor the e'g-insulating phase (⟨Lz⟩ ≃ 0) can explain the measured value, leaving this aspect open for further theoretical and experimental investigation. We conclude by calling for dedicated experiments to confirm the newly discovered SB phase and to possibly access the different metastable phases we discovered. In addition, the numerical technique based on preconditioning the d-density matrix and exploring symmetry-broken phases can represent a valuable computational tool for studying the low-energy phases of correlated magnetic systems.

VI. DATA AVAILABILITY STATEMENT
All data that support the findings of this study are included within the article and supplementary materials.
I. METHODS

Density functional theory calculations were performed using the Vienna ab-initio Simulation Package (VASP) [1,2], using both the generalized gradient approximation (GGA), in the Perdew-Burke-Ernzerhof (PBE) parametrization of the exchange-correlation functional [3], and the local density approximation (LDA). Interactions between electrons and nuclei were described using the projector-augmented wave method. The energy threshold for the self-consistent calculation was set to 10−5 eV and the force threshold for geometry optimization to 10−4 eV Å−1. A plane-wave kinetic energy cutoff of 450 eV was employed for both VI3 and VCl3. The Brillouin zone was sampled using a 12 × 12 × 1 Γ-centered Monkhorst-Pack grid. To account for the on-site electron-electron correlation we used the GGA+U and LDA+U approaches, with an effective Hubbard term U = 3.5 eV for VCl3 and U = 3.7 eV for VI3, consistent with the values calculated by He et al. [4] with linear response theory [5]. The VCl3 and VI3 monolayer phases were described using lattice parameters of a = 6.084 Å [6] and a = 6.93 Å [7], respectively, and a vacuum region of 15 Å. Phonons at the Γ-point were calculated by the finite-differences method [8].

II. d-DENSITY MATRIX PRECONDITION

In Fig. S1 we report an overview of the different phases described in the manuscript (name of the phase, orbital angular momentum, stabilization method, d-density matrix occupation).

FIG. S1. Overview of the metastable phases in VX3 compounds.

In D3d symmetry the d orbitals can be written, with the z axis directed along the trigonal axis, as shown in Table S1. In this representation the d-density matrix [9] is block diagonal.

TABLE S1. Relationships among the atomic d orbitals in the cubic basis (x̄, ȳ, z̄), with z̄ aligned along one of the octahedral axes, and the trigonal basis (x, y, z), with the z axis aligned along the (111) direction.

III. BAND STRUCTURES AND DENSITY OF STATES

The band structures along the symmetry lines of the 2D hexagonal Brillouin zone are reported in Fig. S2 for the different phases we discussed. The corresponding projected densities of states (PDOS) on in-plane and out-of-plane orbitals are shown in Fig. S3. The PDOS provides quantitative evidence of the orbital occupations reported in the manuscript.

FIG. 1. a) Top view of the VX3 monolayer: honeycomb lattice of V cations (red spheres) in edge-sharing octahedral coordination with the halides X = Cl, Br, I (purple spheres). b) Octahedra in the trigonally distorted phase (D3d). In metal trihalides the L1, L2, L3 distances are inequivalent. The [111] direction is the trigonal axis and corresponds to the z out-of-plane direction. c) Schematic representation of orbital symmetry breaking under structural distortion. Starting from spherical symmetry, we report the d-level splitting in cubic symmetry and under trigonal distortion. Elongation and contraction refer to the trigonal axis. The effect of the spin-orbit coupling is also reported: the e′g orbitals split because of their different angular momentum.

FIG. 2. Sketch of the possible phases that can be established in VX3 compounds. Central panel: the a1g-metallic phase with a partially quenched angular momentum and hybridized e′g orbitals. Left panel: the a1g-insulating phase with an unquenched angular momentum. Right panel: the e′g-insulating phase with a quenched angular momentum. The corresponding densities of states and band structures are reported in the SM in Figs. S2, S3.

FIG. 4. Top (panel a) and side (panel b) view of the optical phonon mode at Γ.
VI3 (panel c) and VCl3 (panel d): total energy as a function of the normalized eigenvector fraction. The filled red dots are energies obtained starting from the charge of the a1g-insulating phase, which has imaginary occupancies in the d-density matrix; the empty black dots are energies obtained starting from a SB a1g-insulating phase with real occupancies in the d-density matrix. The red dots with a black border are obtained starting from the a1g-insulating phase but converging to a SB a1g-insulating phase. The insets show a zoom close to the undistorted phase.

FIG. S2. Band structures of the different metastable states in the symmetric structure (D3d); the zero of energy is set to the valence-band maximum for the insulating phases and to the Fermi energy for the metallic ones. a) VI3 a1g-metallic phase with partially quenched angular momentum, b) VI3 a1g-insulating phase with unquenched angular momentum, c) VI3 e′g-insulating phase with quenched angular momentum, d) VCl3 a1g near-zero-gap (NZG) phase with partially quenched angular momentum (the NZG is opened by the SOC), e) VCl3 a1g-insulating phase with unquenched angular momentum, f) VCl3 e′g-insulating phase with quenched angular momentum.
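As a companion to the Methods section above, here is a sketch of the corresponding DFT+U setup expressed through ASE's VASP interface. The tags mirror the quoted parameters; the Dudarev-type LDAU scheme and the input file name are assumptions.

```python
from ase.io import read
from ase.calculators.vasp import Vasp

# Sketch of the PBE+U setup for VI3 described in the Methods section;
# 'vi3_monolayer.cif' is a hypothetical input file.
atoms = read("vi3_monolayer.cif")

atoms.calc = Vasp(
    xc="PBE",
    encut=450,                 # plane-wave cutoff, eV
    ediff=1e-5,                # SCF energy threshold, eV
    ediffg=-1e-4,              # force threshold, eV/Angstrom
    kpts=(12, 12, 1),
    gamma=True,                # Gamma-centered grid
    ispin=2,                   # spin-polarized
    ldau=True,                 # DFT+U (Dudarev scheme assumed)
    ldau_luj={"V": {"L": 2, "U": 3.7, "J": 0.0},
              "I": {"L": -1, "U": 0.0, "J": 0.0}},
)

print(atoms.get_potential_energy())
```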
4,635.4
2023-11-27T00:00:00.000
[ "Chemistry", "Physics" ]
3D Modeling of Combustion for DI-SI Engines

Abstract: Gasoline direct injection (GDI) is a promising concept for spark-ignition engines. The development of this type of engine is nevertheless delicate, and 3D computation of the combustion chamber is one way to assist its design. This, however, requires suitable models capable of describing the gasoline spray, its evaporation, and the combustion of the resulting mixture. This paper presents an ECFM model that allows combustion in GDI engines to be simulated, even under stratified operation. The model is an extension of the coherent flame model in which thermal expansion effects have been introduced and which has been coupled with a conditional fresh-gas/burnt-gas description of the thermodynamic quantities. The model was validated through computation-experiment comparisons at three operating points of the Mitsubishi GDI engine.

INTRODUCTION

Direct injection of gasoline in the cylinder is a very promising concept to reduce the fuel consumption of SI engines: fuel stratification allows very lean combustion, which significantly reduces the pumping work at low load. For DI engine design, every parameter, such as the piston shape or the injector inclination, has to be carefully adapted. CFD is a useful tool to understand the processes taking place in the combustion chamber and the correlations between parameters. It requires precise modeling of each physical phenomenon occurring in the engine, from intake to combustion and pollutant generation.

This paper presents a combustion model integrated in the IFP 3D code KMB [12], a multiblock version of Kiva-II [1]. First, the hypotheses of the model are presented; then, the equations of the model are described. A comparison between computations and measurements on the GDI Mitsubishi engine concludes the work.

MAJOR HYPOTHESES OF THE MODEL

Flame Structure

The main assumption made in this work is that the combustion of fuel occurs in a premixed regime, even for very lean or very rich combustion, and can be described with a flamelet-type combustion model. Note that this hypothesis does not imply that all the chemical reactions occur in this way.

The region of fuel consumption is supposed to be very thin, and we assume that it separates unburnt from burnt gases. We also suppose that no fuel remains in the burnt gases. This last assumption can be easily justified: the high temperature in the burnt gases leads to the decomposition of fuel molecules.

Species Diffusion

In the work presented here, all the species are supposed to have the same diffusivity. As the main contribution to species diffusion is "turbulent diffusion", which is a convection term, this is a fair assumption. The "turbulent" Schmidt number has no reason to differ from one species to another.

To be consistent with the finite-volume element approach, all the properties of the fluid are supposed to be locally homogeneous and isotropically distributed.

Chemical Species Involved

We suppose that the unburnt gases are composed only of fuel, molecular oxygen and nitrogen, carbon dioxide, and water. We could have assumed the existence of other components in the fresh gases, but it would have required one transport equation per added component, which is not cost-effective given the overall precision of the description.
The burnt gases are supposed to be composed of molecular and atomic oxygen, nitrogen, and hydrogen, carbon monoxide and dioxide, OH, and nitrogen monoxide.

Burnt Gases Chemical Reactions

As mentioned in Section 1.1, fuel is not supposed to exist in the burnt gases, but chemical reactions may occur there. All the reactions computed in the burnt gases are supposed to be bulk reactions: no local structure of the reaction zone is taken into account, and these reactions are functions only of the mean local quantities computed in the burnt gases. The reactions are solved using conditioned burnt-gas properties.

Two kinds of chemical reactions are simulated. The first set of reactions is supposed to be fast, and the various components are supposed to be at equilibrium; we consider the corresponding chemical system using the constants proposed in [9]. The second set computes NO formation and is an extended Zeldovich kinetic mechanism (the constants come from [8]). For computational effectiveness, these two systems are solved sequentially, but with modified species balances for the resolution of the kinetic set, ensuring that the same results would be obtained if the whole system were solved in a coupled way.

Fuel Properties

To approach real gasoline behaviour, especially for evaporation and combustion, a new fuel has been introduced in the KMB library. Its atomic formula is C7H13, and its thermochemical properties are chosen to be equivalent to those of a typical RON95 unleaded gasoline.

If $\tilde{Y}_F$ is the Favre-averaged fuel mass fraction, the fuel conservation equation is written as:

$\frac{\partial \bar{\rho} \tilde{Y}_F}{\partial t} + \frac{\partial \bar{\rho} \tilde{u}_i \tilde{Y}_F}{\partial x_i} = \frac{\partial}{\partial x_i}\left(\frac{\mu_t}{Sc_t}\frac{\partial \tilde{Y}_F}{\partial x_i}\right) - \overline{\dot{\omega}}_F$ (10)

where $\bar{\rho}$ is the mean density, $\tilde{u}_i$ is the i-th component of the mean fluid velocity, $\mu_t$ is the total viscosity, $Sc_t$ is a turbulent Schmidt number, and $\overline{\dot{\omega}}_F$ is the fuel consumption rate. The combustion model we use is an improved version of the Coherent Flame Model, the ECFM (E stands for Extended).

As for the CFM, the fuel consumption rate can be written as:

$\overline{\dot{\omega}}_F = \rho_u\, Y_{F,u}\, U_L\, \Sigma$ (11)

where $\rho_u$ is the mean local density of the unburnt gases, $Y_{F,u}$ is the mean fresh-gas fuel mass fraction, $U_L$ is the speed that a laminar flame would have under the local thermochemical conditions experienced by the turbulent flame, and $\Sigma$ is the flame surface density, that is, the local area of flame per unit volume. Thus, to close the problem we need to know $\rho_u$, $Y_{F,u}$, $U_L$, and $\Sigma$.

Laminar Flame Properties

The laminar flame speed is estimated with the help of an experimental correlation proposed in [11] and [8]. It requires knowledge of the ambient pressure, the local equivalence ratio, the diluent mass fraction, and the unburnt-gas temperature. The laminar flame thickness, which is also needed by the model, is obtained using [3] and requires knowledge of the laminar flame speed and the burnt-gas temperature.

Thus we need all the local conditioned burnt- and unburnt-gas properties in order to determine these two quantities.
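As a concrete illustration of Equation (11), here is a minimal sketch. The laminar-flame-speed correlation is a generic Metghalchi-Keck-style power law with placeholder coefficients, not the exact correlation of [11] and [8], and all cell values are illustrative.

```python
# Minimal sketch of the ECFM closure of Eq. (11).

def laminar_flame_speed(p, T_u, phi, y_dil, p0=1.0e5, T0=298.0):
    """Laminar flame speed U_L [m/s] from pressure p [Pa], unburnt-gas
    temperature T_u [K], equivalence ratio phi, and diluent mass fraction.
    Generic power-law form with placeholder coefficients."""
    u_l0 = 0.35 - 1.4 * (phi - 1.1) ** 2        # reference speed, placeholder fit
    alpha, beta = 2.0, -0.3                      # placeholder exponents
    dilution = 1.0 - 2.1 * y_dil                 # placeholder dilution penalty
    return max(u_l0, 0.0) * (T_u / T0) ** alpha * (p / p0) ** beta * dilution

def fuel_consumption_rate(rho_u, y_f_u, u_l, sigma):
    """Eq. (11): mean fuel consumption rate [kg/(m^3 s)]."""
    return rho_u * y_f_u * u_l * sigma

u_l = laminar_flame_speed(p=2.0e6, T_u=700.0, phi=0.9, y_dil=0.05)
w_f = fuel_consumption_rate(rho_u=8.0, y_f_u=0.06, u_l=u_l, sigma=450.0)
print(f"U_L = {u_l:.3f} m/s, fuel consumption rate = {w_f:.1f} kg/(m^3 s)")
```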
Flame Surface Density Transport Equation

The flame surface density transport equation is very similar to the one used in [6] (Equation (12); a generic textbook form is sketched after this section for orientation), where Φ(Σ) is a source term due to the spark plug, including ignition and convection effects at the spark plug, c̄ is the mean progress variable, and K is the total positive stretch. The mean progress variable is calculated with the expression given in Equation (13).

There are two main contributions to K: turbulence and the combined effects of curvature and thermal expansion. Assuming local isotropy of the flame surface density distribution, we have modelled it by Equation (14), where K_t is the turbulent stretch computed by the ITNFS model [10] and ρ_b is the density of the burnt gases. As in [6], a correction of U_L and K_t is introduced to model flame-wall interaction effects.

Ignition

From the previous equations, one can notice that the flame surface density has to be initialized at some point to start the computation of combustion. An analytical study has shown that the time at which the flame surface density is initialised and the initialized quantity are critical.

Previous work [4,12,5] relied on the LI model. Some applications have shown that this model has difficulty reproducing certain experimental trends, such as the pressure dependence. We chose to replace it by a correlation based on experimental results. This correlation depends on the local density and on the local laminar flame speed, but it depends neither on the fuel nor on the engine. The same correlation and the same model with the same constants are used for all the computations shown here. We have also used it for racing-engine combustion computations at full load and higher engine speeds. A burnt-gas volume is also initialized to be consistent with the flame surface density initialization.

General Considerations

From the previous section, we see that we can close our model if we manage to know the local properties of the burnt and unburnt gases. This section presents how this information can be obtained locally in the whole computational domain.

Species Concentrations Computation

In every cell we compute two concentrations for every species: a concentration in the unburnt gases and a concentration in the burnt gases. To do so, we use a formalism already presented in [2] and [6]. Two transport equations are introduced, one for the unburnt-gas fuel concentration and the other for the unburnt-gas oxygen concentration.

The main differences with [6] are that a source term due to the spray has to be added for the unburnt fuel mass fraction, and that a correction to the fuel source term has to be introduced in cells where combustion occurs, to be consistent with the previous hypotheses.

With these two added equations and the hypothesis of local homogeneity and isotropy, every concentration can be determined.

Temperatures

As in [2] or [6], a transport equation for the unburnt-gas enthalpy is computed, but a source term due to the evaporation of liquid fuel in the cells where combustion occurs has also been introduced, to be consistent with the hypothesis of local homogeneity.

From the unburnt-gas enthalpy and the unburnt-gas composition, the local unburnt-gas temperature is computed. The burnt-gas enthalpy, and consequently the burnt-gas temperature, is deduced from the progress variable, the unburnt-gas enthalpy, and the mean enthalpy.
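Since Equation (12) did not survive extraction, a generic coherent-flame-model form of the Σ-equation is given below for orientation. This is the textbook CFM structure, not necessarily the exact variant or constants used in the paper.

```latex
% Generic CFM flame-surface-density equation (textbook form; the exact
% variant and model constants used in the paper may differ):
\frac{\partial \Sigma}{\partial t}
+ \frac{\partial \tilde{u}_i \Sigma}{\partial x_i}
= \frac{\partial}{\partial x_i}\!\left(\frac{\nu_t}{Sc_\Sigma}
   \frac{\partial \Sigma}{\partial x_i}\right)
+ K\,\Sigma
- \beta\, U_L \frac{\Sigma^2}{1-\bar{c}}
+ \Phi(\Sigma)
```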
Configuration

The computed engine is the GDI Mitsubishi engine. Details of the computations can be found in [7]. The following table summarizes the operating conditions retained for the comparisons. The injection and ignition timings are given in crank angle degrees, 360 being the combustion top dead center. Cases 1 and 2 correspond to "homogeneous" operating conditions, while the load is stratified in case 3.

Fuel Distribution Before Ignition

The fuel distributions before ignition for the three cases are presented in Figures 1 and 2. The histograms of equivalence ratio are centered around the mean value for cases 1 and 2, which indicates that the load is rather homogeneous, even if a rich spot remains at the top of the bowl in case 2. For case 3, it can be seen that before ignition the fuel in the cylinder is distributed over a wide range of equivalence ratios, from very lean to very rich mixture.

Figure 2: Equivalence ratio for the late injection case; cumulated histogram and distribution in the symmetry plane.

Pressure Trace

The computed and measured cylinder pressures are represented for the three cases in Figure 3. As mentioned before, no coefficient has been changed from one case to another.

Figure 3: Comparison between computed and measured cylinder pressure for the three cases.

The overall agreement is rather good, even if the model slightly overpredicts the combustion speed. The delay is also overpredicted by the model in the stratified case, but this might be due to a too coarse description of the fuel distribution given by the spray modeling. The picture at the top of this figure shows the flame kernel soon after the flame was initialised in the computational domain. The flame then propagates fast in the middle of the chamber, due to the high equivalence ratio and to expansion, and slows down when approaching the cylinder walls due to the lean mixture.

NO (NITROGEN OXIDE)

The comparison between measured and computed NOx emissions is summarized in the following table:

         Comp.   Exp.
Case 1   2450    1450
Case 2   1650    1750
Case 3    550     670

For cases 2 and 3 the agreement between computation and experiment is rather good. The NOx production is slightly underpredicted by the computation in case 3, but this might be due to the underprediction of the IMEP. The difference between computed and measured NOx emissions for case 1 has not been explained yet, and further experimental and computational work would be needed to determine whether this difference is due to the kinetic constants used or to a great sensitivity of NOx emissions to the maximum cylinder pressure under lean operating conditions.

Figure 5 shows an interesting behaviour of the NO distribution in the cylinder. The location of the maximum NO concentration is not the location of the maximum temperature. This can be explained by the CO concentration distribution: the maximum temperature is situated where the CO concentration is rather high, that is, where the oxygen concentration is very low. At those locations, NO is reduced by the products, and no NO appears despite the high temperature.
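The temperature sensitivity behind the NO discussion above can be illustrated with the textbook Heywood-style initial-rate expression for thermal (Zeldovich) NO. The rate constants of [8] used in the paper may differ, and the equilibrium concentrations below are placeholders.

```python
import math

def initial_no_rate(T, o2_e, n2_e):
    """Initial thermal-NO formation rate d[NO]/dt in mol/(cm^3 s),
    textbook Heywood-style form (not the paper's exact constants).

    T    : burnt-gas temperature [K]
    o2_e : equilibrium O2 concentration [mol/cm^3]
    n2_e : equilibrium N2 concentration [mol/cm^3]
    """
    return 6.0e16 / math.sqrt(T) * math.exp(-69090.0 / T) * math.sqrt(o2_e) * n2_e

# The strong exponential sensitivity to temperature explains why NO forms
# in the hottest burnt gases, as long as oxygen is available (cf. Figure 5).
for T in (2200.0, 2400.0, 2600.0):
    rate = initial_no_rate(T, o2_e=1.0e-6, n2_e=3.0e-5)
    print(f"T = {T:.0f} K: d[NO]/dt ~ {rate:.3e} mol/(cm^3 s)")
```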
CONCLUSIONS

A model has been developed to describe combustion in DI-SI engines. This model is fully coupled with the spray model and enables stratified combustion modeling, including EGR effects and NO formation. The model relies on a conditional unburnt/burnt description of the thermochemical properties of the gases. The overall agreement between computation and experiment is rather good, even for NO prediction, and the developed model can already be used to help understand DI-SI engine behaviour and to improve engine design.

Future work will focus on small-scale heterogeneity modeling, refined post-flame chemistry (including soot formation), laminar flame speed prediction (especially for very lean or very rich combustion), and on improvements to the ignition description.

Figure 1: Equivalence ratio for the two early injection cases; cumulated histogram and distribution in the symmetry plane.

Figure 4 represents the flame surface density in the symmetry plane of the engine at three different crank angle degrees.

Figure 4: Flame surface density 20 CAD BTDC, 10 CAD BTDC, and at TDC for case 3 in the symmetry plane of the engine.

Figure 5: NO mass fraction, CO mass fraction, and temperature 90 CAD ATDC for case 3 in the symmetry plane of the engine.
3,154.2
1999-03-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Adaptive Triangular Deployment of Underwater Wireless Acoustic Sensor Networks Considering the Underwater Environment

In this paper, we propose an adaptive triangular deployment algorithm that can adjust the sensor distribution depending on the variation in communication performance in an underwater environment. To predict the distance between sensor nodes, a performance surface model is implemented by estimating the communication performance based on the spatio-temporal environmental factors affecting the communication performance of an underwater sensor node. Subsequently, the performance surface model is applied to the adaptive triangular deployment algorithm and is used to control the distance between nodes. Therefore, underwater wireless sensor networks deployed with the adaptive triangular deployment algorithm can achieve a maximum connectivity rate with an optimal number of nodes.

Introduction

The nodes constituting a wireless sensor network collect various information from the environment and transmit these data to adjacent nodes [1]. The layout of a sensor network is important for the efficient and stable functioning of the nodes. Therefore, research on deployment algorithms for determining the optimal layout of a sensor network by adjusting the positions of nodes in a given area is extremely important.

Deployment algorithms can be divided into two categories: random deployment algorithms and deterministic deployment algorithms [2]. Random deployment algorithms are suitable when the nodes are arranged without any limitation on the number of sensor nodes. After random deployment is performed, stability can be ensured by deploying additional nodes in areas where degraded system performance is observed. In the case of deterministic deployment, an algorithm deploys the nodes at predetermined positions by considering the performance of the current sensor nodes. Terrestrial sensor networks are well suited to random deployment because the cost of a node is low and the performance of a sensor node does not vary much with its position within a given area. In contrast, in an underwater environment, the performance of a sensor node is highly variable even within a given area. When sensor nodes are randomly deployed in a region where node performance is not homogeneous, the connectivity between the nodes degrades because of deteriorated communication performance in some areas, while in other areas more nodes than necessary may be deployed even though sufficient connectivity is already secured. Although network performance can be improved by deploying additional nodes in areas where the number of nodes is insufficient, doing so while redundant nodes exist elsewhere undermines the economics of network implementation. In particular, this approach is unsuitable owing to the high cost of underwater sensor nodes.
Numerous deterministic deployment algorithms have been studied, and some of them are summarized in Table 1. Particle swarm optimization (PSO) and the virtual force algorithm (VFA) are deployment algorithms based on optimization: starting from an initial random deployment, they iterate until the nodes satisfy a given condition [3,4]. Optimization algorithms have the disadvantage that the time required to deploy the nodes increases exponentially as the number of nodes and the complexity of the environment increase. Another deterministic deployment technique is to deploy nodes in various patterns, such as triangles, long belts, and diamonds [5-9]. However, conventional pattern deployment algorithms mainly considered only the coverage between nodes [5-7]. In such cases, nodes of the sensor network may be unable to transmit the collected data to adjacent nodes, and the network may lose its function. Although some studies have proposed deployments considering connectivity, their results pertain to limited conditions and only considered partial connectivity [8,9]. With partial connectivity, managing the information delivery time and the lifetime of nodes is disadvantageous, because data are more likely to be relayed through additional nodes to reach the final node. In addition, when one node fails, the connectivity of the entire network may become problematic. As it is not easy to find and replace problematic nodes after an underwater sensor network is configured, it is necessary to deploy the nodes so that communication between nodes remains possible through alternative paths. Crucially, conventional pattern deployment algorithms are unsuitable for underwater environments, where sensor performance varies rapidly depending on the environment, because they assume the same detection or communication range for all nodes.
In this paper, an adaptive triangular deployment algorithm is proposed to obtain maximum connectivity with an optimal number of nodes, based on the simulated communication performance of sensor nodes in an underwater environment. Most conventional triangular deployment algorithms deploy nodes at equal distances, which can theoretically form full connectivity with adjacent nodes [9]. However, as the performance of the nodes may vary depending on the environment they are in, if equally spaced deployment is applied, connectivity between nodes in all areas can be difficult to maintain, and there may be some areas where the nodes cannot be connected. In our proposed adaptive triangular deployment algorithm, nodes are deployed by adjusting the node spacing in consideration of the communication performance at the node positions in a given area. Therefore, it is more advantageous than the conventional triangular deployment algorithm in ensuring connectivity between nodes. The communication performance in a given area is analysed through a communication prediction simulation that reflects the environmental factors affecting a node's communication performance, and a performance surface model is implemented by mapping the communication performance. The implemented performance surface model is used as the basis for adjusting the spacing in the adaptive triangular deployment algorithm, ensuring full connectivity of the deployed nodes. Therefore, it is possible to construct an underwater sensor network that maintains high connectivity with an optimal number of nodes using an adaptive triangular deployment algorithm that considers the communication performance in an underwater environment.

The remainder of this paper is organized as follows. The generation method of the performance surface is described in Section 2. The methodology of the adaptive triangular deployment algorithm is proposed in Section 3. The adaptive triangular deployment algorithm is verified to be an efficient deployment method compared with the conventional triangular deployment algorithm via simulation comparisons in Section 4. The conclusions are summarized in Section 5.

Performance Surface Model

In previous research, a performance surface model was used to geospatially map the detection probabilities of targets for the performance evaluation of underwater sonar systems [10]. In this paper, the communication range between nodes according to the underwater environment in a given area is calculated through simulation. An underwater communication performance surface model is implemented based on the communication range calculated over the whole given area. The performance surface model is used as a basis for position control considering the connectivity between nodes in the adaptive triangular deployment algorithm. This section describes the implementation process of the performance surface in detail.
Communication Distance Calculation with the Predicted Communication Performance. The communication performance of a sensor node can be evaluated through the bit error rate (BER) [11]. Therefore, the connectivity of two nodes is secured when two adjacent nodes constituting the sensor network satisfy the BER criterion required by the system. To estimate the BER between adjacent nodes in an underwater environment, it is necessary to simulate a channel that reflects environmental information, such as the sound velocity profile, bathymetry, and bottom constituent, depending on the location of the node. The Bellhop model [12] is a suitable channel model for simulating an underwater acoustic channel reflecting such environmental information, and it can extract the channel impulse response and transmission loss, as shown in Figure 1. The signal-to-noise ratio (SNR) of the signal received at the receiving node can be calculated by including the simulated transmission loss (TL) value in Equation (1):

SNR = SL − TL − NL (1)

where SL denotes the source level and NL denotes the ambient noise level from the Wenz curves [13-15]. It is possible to represent an acoustic channel in the given underwater environment using the simulated channel impulse response characteristic and the calculated SNR value. The simulated channel impulse response and the calculated SNR are applied to the communication performance estimation algorithm, as shown in Figure 2.

Figure 3 shows a deployment scenario of an underwater acoustic sensor network installed on the seabed. To obtain the communication range of each sensor node in the underwater environment, the maximum communication range (MCR) that satisfies the minimum BER for ensuring the stability of information transmission between adjacent nodes should be calculated. Therefore, the MCR can be calculated by extracting the BER as a function of the distance from the transmitting node to the receiving node through the communication performance estimation algorithm.

Implementation of the Performance Surface Model in the Target Area. The East Sea of South Korea was selected as the target area for deploying the nodes. Figure 4 shows the sampling points for implementing the communication performance surface model in the target area. The selected target area spans 16 km × 16 km, and there are a total of 25 sampling points in this area. Assuming that each sampling point denotes the location of a transmitting node, sampling was performed over azimuth and distance, with the receiving node assumed to lie at up to 2 km, at intervals of 100 m, along 6 azimuths. According to the deployment scenario in Figure 3, the sensor network is installed on the ocean floor; thus, the depth of a node is determined by the topography of the position where the node is located. Therefore, the performance surface model is implemented assuming horizontal distances with respect to the reference node. To simulate the communication channel for each sampling situation using the Bellhop model, environmental data for the sound velocity structure, topography, and bottom constituent were used. For the sound velocity structure, monthly GDEM data were used, and ETOPO data were used for the bathymetry [16,17]. For the bottom constituent, data measured in the target area were used.
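The MCR extraction (and the azimuthal average used later as the AMCR) can be sketched as follows. The logarithmic spreading law for TL and the BER-vs-SNR "waterfall" curve are simple stand-ins for the Bellhop and estimation-algorithm outputs, and all levels are illustrative.

```python
import numpy as np

SL, NL = 170.0, 50.0          # source level / noise level, dB (illustrative)
TARGET_BER = 0.01             # 1 % target BER

ranges = np.arange(100.0, 2001.0, 100.0)   # 100 m steps up to 2 km
tl = 15.0 * np.log10(ranges)               # placeholder spreading law
snr = SL - tl - NL                         # Eq. (1)

def ber_from_snr(snr_db):
    # Placeholder monotone waterfall curve, not a modem-specific formula.
    return 0.5 * np.exp(-0.25 * np.maximum(snr_db - 60.0, 0.0))

ok = ber_from_snr(snr) <= TARGET_BER
mcr = ranges[ok].max() if ok.any() else 0.0
print(f"MCR for this azimuth: {mcr:.0f} m")

# AMCR (Eq. (2)): average the per-azimuth MCRs at a sampling point.
mcr_per_azimuth = np.array([1400.0, 1600.0, 1200.0, 1500.0, 1300.0, 1700.0])
print(f"AMCR: {mcr_per_azimuth.mean():.0f} m")
```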
Topography and bottom constituent are spatial factors affecting the underwater communication channel, and they vary with the location within the target area. The factor driving the temporal variation of the underwater channel is the sound velocity structure, which is sensitively affected by temperature variation across the seasons. The nodes constituting a sensor network must be able to communicate with adjacent nodes in harsh environments, so the sound velocity profiles of different seasons should be considered to capture temporal as well as spatial change. Therefore, performance surface models were implemented with the sound velocity structures of February and August, representing winter and summer, respectively. Moreover, a comprehensive performance surface model was implemented by taking the lower of the two seasonal performance values at each point.

The spatial SNR and BER for each azimuth and distance at the sampling points were simulated depending on the temporal and spatial changes of the underwater environment. The input parameters for the simulation are presented in Table 2, and the results for winter and summer are presented in Figures 5 and 6, respectively. Assuming a target BER of 1%, as shown in Table 2, the MCR for each azimuth at each sampling point is calculated. As each sampling point has an MCR for each azimuth, each sampling point should be represented by a single MCR in order to be simplified into a performance surface model. Therefore, the MCR is averaged over all azimuths at each sampling point using Equation (2), and this value is defined as the average maximum communication range (AMCR):

AMCR = (1/m) ∑_{i=1}^{m} MCR_i (2)

where m is the number of bearing samples. Figures 7(a) and 7(b) show the performance surface model results for the sensor nodes in winter (February) and summer (August), respectively. In summer, there are many refracted waves owing to the sound velocity structure, which degrades the communication performance through the multipath delay spread of the signal. Therefore, the communication performance is relatively poorer in summer than in winter. Figure 8 shows the final performance surface model implemented by combining the performance surface models of the two seasons in the target area; it almost follows the performance surface model of summer.

Methodology of the Adaptive Triangular Deployment Algorithm

In the conventional triangular deployment algorithm, the distance between sensor nodes is constant [9]; it is impossible to adjust these distances based on the communication performance at the positions of the nodes. Therefore, to effectively deploy sensor nodes in the underwater environment, an unequal-spacing deployment algorithm that considers the changes in communication performance is required. In this study, the adaptive triangular deployment algorithm, which is capable of adjusting the distances between the sensor nodes according to changes in communication performance, is proposed. The proposed adaptive triangular deployment algorithm is a deterministic algorithm that is suitable for implementing efficient underwater sensor networks and offers full connectivity.
The conceptual depictions of the conventional and adaptive triangular deployment algorithms are presented in Figures 9(a) and 9(b), respectively. As shown in Figure 9(a), the constant communication ranges of the nodes do not overlap when the communication performance of the nodes differs according to the environment, because the conventional method deploys nodes at constant spacing. Therefore, the connectivity between the nodes is not maintained consistently. In the adaptive method, because the communication performance of the nodes according to the environment is considered, as shown in Figure 9(b), the communication ranges of the nodes can be made to overlap via positional adjustments. The performance surface model used to adjust the node spacing in the proposed adaptive triangular deployment algorithm is represented by the AMCR, which is azimuthally averaged at each sampling point; thus, the communication ranges of two adjacent nodes may differ. In general, reciprocity between the transmitter and the receiver is assumed in a communication system. Applying reciprocity to Figure 9(b), the communication ranges of nodes "b" and "c," which are located within the communication range of node "a," include the position of node "a." Therefore, we applied reciprocity in the adaptive triangular deployment, assuming that the connectivity of two nodes is secured if the AMCR of either one contains the other node's position.

Figure 10 shows the methodology of the adaptive triangular deployment algorithm. In this algorithm, the upper boundary of the target area is selected as the reference line, and the nodes on this line are deployed one by one horizontally. Node "A" is deployed as the reference node for the target area. Node "B" is deployed within the communication range of node "A" along the reference line. The communication range of node "A" can be extracted from the performance surface model of the target area, and the connectivity between nodes "A" and "B" is secured by this method. The remaining nodes on the reference line are deployed through the same process to fill the line.

The nodes on the following line are deployed at the intersections between the communication ranges of the nodes on the preceding line. Node "a" is deployed at the intersection between the communication ranges of nodes "A" and "B." The intersection of the two ranges can be computed in vector form using Equation (3), and the process is illustrated in Figure 11. Node "b" is deployed at the intersection between the communication ranges of nodes "B" and "C." The nodes on this line are thus deployed using the same process to fill the entire line. This method is consecutively applied to all the even lines (see Figure 10).

If the aforementioned method were repeatedly applied to all the following lines, the number of nodes constituting each subsequent line would decrease by one, as shown in Figure 12; thus, it would be impossible to deploy nodes over the entire target area by this method alone. To solve this problem, node "1" in Figure 10 is located at the intersection between the communication range of node "a" and the left vertical boundary of the target area, serving as the new reference node. To ensure connectivity between node "1" and the nearest node of the upper line, node "2" is deployed at the intersection of the communication ranges of nodes "1" and "a," as shown in Figure 10. Further, node "3" is deployed at the intersection between the communication ranges of nodes "2" and "b."
The nodes on this line are thus deployed in this manner to fill the entire line, and the same method is consecutively applied to all the odd lines (see Figure 10).

Because a target area exhibits varied communication performance depending on the environment, it is sometimes impossible to determine an intersection using the proposed basic deployment method. Therefore, the following three techniques have been added to solve these problems.

First, if the condition of Equation (4) is not satisfied, no intersection exists between the ranges of two nodes:

√((x_a − x_b)² + (y_a − y_b)²) ≤ R_a + R_b (4)

where x_a, y_a and x_b, y_b are the coordinates of nodes "a" and "b," respectively, and R_a and R_b are the communication ranges of nodes "a" and "b," respectively. For example, in Figure 13, node "3" should be deployed at the intersection between the communication ranges of nodes "2" and "b" according to the odd-line deployment method. However, because the distance between nodes "2" and "b" is greater than the sum of the communication ranges of the two nodes, Equation (4) is not satisfied, and there is no intersection between the two ranges. In this case, node "a," which was deployed before node "b," is used to determine the intersection for node "3." Thus, node "3" is deployed at the intersection between the communication ranges of nodes "a" and "2."

Second, when deployment based on the performance surface model is implemented, a vertical position difference may occur between nodes on the same line. If this difference accumulates, the intersection points between the nodes may not be readily obtained. A method to solve this problem is outlined in Figure 14. Nodes "A" and "B" on the same line have a slope α; if this slope exceeds a threshold, an additional node "C" is deployed at the intersection between the ranges of nodes "A" and "B."

Third, if there is a region where the communication performance is drastically degraded, a large number of nodes may be required to fill the given area, or the nodes may converge at a certain point without filling the given area. Therefore, selective deployment is required for an area whose performance declines rapidly. Accordingly, a limitation on the minimum communication range for the deployment of a node should be defined: if a node has a communication range below this limit, it is not deployed.

The flowchart of the adaptive triangular deployment algorithm is presented in Figure 15.

Simulation of Optimal Deployment of Sensor Nodes Using the Adaptive Triangular Deployment Algorithm

The deployment of nodes using the aforementioned adaptive triangular deployment algorithm was performed by applying the performance surface model. To verify that the adaptive triangular deployment algorithm is effective for the implementation and operation of efficient and stable underwater sensor networks, it is compared with the conventional triangular deployment algorithm.
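The intersection construction of Equations (3)-(4) amounts to intersecting two circles. A minimal sketch follows, using standard two-circle geometry rather than the paper's exact vector formulation; the example ranges are illustrative.

```python
import math

def circle_intersections(xa, ya, ra, xb, yb, rb):
    """Intersection points of two communication-range circles.

    Returns a list of 0, 1, or 2 (x, y) points. Standard two-circle
    geometry; Eq. (3) expresses the same construction in vector form."""
    d = math.hypot(xb - xa, yb - ya)
    # Eq. (4): no intersection if the centers are farther apart than
    # ra + rb (or if one circle is contained in the other).
    if d > ra + rb or d < abs(ra - rb) or d == 0.0:
        return []
    a = (ra**2 - rb**2 + d**2) / (2 * d)     # distance from A to chord midpoint
    h = math.sqrt(max(ra**2 - a**2, 0.0))    # half chord length
    mx = xa + a * (xb - xa) / d
    my = ya + a * (yb - ya) / d
    if h == 0.0:                             # circles touch at a single point
        return [(mx, my)]
    ox = h * (yb - ya) / d                   # offset perpendicular to AB
    oy = h * (xb - xa) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

# Example: nodes "A" and "B" with AMCRs of 1.4 km and 1.2 km, 1.8 km apart;
# the next-line node is placed at the lower of the two intersections.
points = circle_intersections(0.0, 0.0, 1.4, 1.8, 0.0, 1.2)
next_node = min(points, key=lambda p: p[1])
print(f"candidate positions: {points}, deploy at {next_node}")
```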
In the case of the conventional triangular deployment algorithm, because equal spacing is used with a constant communication range, a distance setting for the communication range is required. Communication ranges of 1 km (case 1) and 2 km (case 2) were selected as the best- and worst-case scenarios of the conventional triangular deployment algorithm, considering that the communication range in the given area, according to the performance surface model, is approximately 1 km to 2 km. In addition, 1.5 km (case 3), corresponding to the middle of the two cases, was selected as a communication range. The deployment results for each of these cases are shown in Figures 16-18. The deployment results are overlaid on the performance surface model; the nodes are located at constant spacing regardless of the communication ranges corresponding to their positions. Figure 19 shows the result of deploying nodes via the adaptive triangular deployment algorithm (case 4). Here, the node spacing is adjusted to reflect the communication range at each position in the target area.

In Figure 20, the connectivity of the sensor nodes for each case is illustrated. In case 1, connectivity between nodes was achieved over the entire given area, but more nodes than necessary were deployed in the areas with superior communication performance. In cases 2 and 3, nodes in areas with low communication performance fail to maintain connectivity with their neighbors. The connectivity rates and the numbers of nodes used to fill the given area for the four deployment cases are compared in Table 3. Here, the connectivity rate (R_c) is the ratio of the number of nodes (N_c) having k-connectivity (k ≥ 2) to the total number of nodes (N_t) deployed, and it can be defined by Equation (5):

R_c = (N_c / N_t) × 100% (5)

In the case of the conventional triangular deployment assuming a communication range of 1 km (case 1), a total of 314 nodes are deployed in the given area, and the connectivity rate is 100%, as the communication range is satisfied over the entire target area. In this case, it is possible to implement a stable sensor network in the given area. However, this deployment is inefficient, as more nodes than necessary are deployed in areas having excellent communication performance. Assuming a range of 2 km (case 2), the required number of nodes is 85, and the connectivity rate is approximately 30%. Assuming a range of 1.5 km (case 3), the required number of nodes is 126, and the connectivity rate is about 70%. Cases 2 and 3 suffer from degraded connectivity between nodes, since the number of nodes is insufficient in the areas where the communication performance deteriorates. Therefore, these three cases do not satisfy both the efficiency and the stability requirements of the network. When the adaptive triangular deployment algorithm (case 4) is used, 124 nodes are required. In this case, although the number of nodes is almost the same as in case 3, the connectivity rate is 100%, because each node is deployed with its spacing adjusted in consideration of the communication performance. Consequently, it can be observed that the adaptive triangular deployment is an algorithm that can achieve maximum connectivity with an optimal number of nodes.
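The connectivity-rate evaluation of Equation (5) can be sketched as follows, using the reciprocity rule of Section 3 (a link exists if either node's range contains the other) to form links; the node list is illustrative only.

```python
import math

# Sketch of Eq. (5): a node counts as connected if it has k >= 2 links.
nodes = [  # (x [km], y [km], AMCR [km]); illustrative data
    (0.0, 0.0, 1.4), (1.2, 0.0, 1.3), (2.4, 0.0, 1.1),
    (0.6, 1.0, 1.2), (1.8, 1.0, 0.9), (3.6, 1.0, 0.5),
]

def linked(n1, n2):
    # Reciprocity rule: link if either node's range reaches the other.
    d = math.hypot(n1[0] - n2[0], n1[1] - n2[1])
    return d <= n1[2] or d <= n2[2]

degrees = [sum(linked(a, b) for b in nodes if b is not a) for a in nodes]
n_t = len(nodes)
n_c = sum(deg >= 2 for deg in degrees)
r_c = 100.0 * n_c / n_t
print(f"N_t = {n_t}, N_c = {n_c}, connectivity rate R_c = {r_c:.0f} %")
```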
Conclusion

In this paper, we propose an adaptive triangular deployment algorithm that can optimize connectivity in a sensor network by adjusting the spacing between nodes in consideration of the communication performance in the underwater environment. To account for the communication performance under underwater conditions, actual field data, such as the sound velocity structure, seabed topography, and seabed constituents, were collected. Using the collected data, the communication channel of the target area was simulated. A performance surface model was implemented by simulating the communication performance of the nodes according to their positions in the target area via the simulated communication channel. Conventional deployment algorithms mostly considered only the range of the nodes, or connectivity under limited conditions. Further, most deployment algorithms use equally spaced nodes without considering the performance of the nodes according to the environment. The proposed adaptive triangular deployment algorithm can apply the performance surface model to adjust the sensor intervals in consideration of the communication ranges of the nodes according to their positions. Through this method, each node is guaranteed connectivity with its adjacent nodes, and full connectivity can be achieved. The proposed adaptive triangular deployment algorithm is thus demonstrated to be a suitable methodology for efficient and stable underwater sensor networks that can secure maximum connectivity with an optimal number of nodes compared with conventional triangular deployment algorithms.

Figure 1: Simulated channel impulse response and transmission loss as obtained by the Bellhop model.

Figure 2: Block diagram of the communication performance estimation algorithm.

Figure 3: Deployment scenario of an underwater acoustic sensor network.

Figure 4: Sampling points in the target area.

Figure 5: Signal-to-noise ratio (SNR) and bit error rate (BER) in winter (February) for the target area.

Figure 6: SNR and BER in summer (August) for the target area.

Figure 7: Performance surface results of sensor nodes in winter (February) and summer (August).

Figure 8: Final performance surface model of the target area combining the two seasons.

Figure 9: Conceptual depiction of the conventional triangular deployment algorithm (a) and the adaptive triangular deployment algorithm (b).

Figure 13: Deployment method when the distance between two nodes is greater than the sum of the communication ranges of the two nodes.

Figure 14: Deployment method when the vertical slope between nodes on the same line exceeds the threshold.

Figure 15: Flowchart of the adaptive triangular deployment algorithm.

Figure 20: Connectivity of nodes for each case.

Table 1: Examples of sensor deployment algorithms.

Table 2: Communication system parameters for the simulation.

Table 3: Connectivity rates and numbers of nodes for the four deployment cases.
5,586
2019-02-25T00:00:00.000
[ "Computer Science" ]
Birotons and “Dark” Quantum Hall Hierarchies: A computational scheme is suggested to estimate neutral excitation energies in fractional quantum Hall effect (FQHE) states. The FQHE states are systematized according to the Farey-number hierarchy structure. We show that besides the widely known Laughlin-Jain hierarchy of fractional states, there exist other, "dark" hierarchies. Although hardly observable even in the highest-mobility samples, they can significantly affect the thermodynamics and spectral characteristics of the FQHE states. The known problems in the interpretation of the FQHE's experimental results are explained in terms of the coexistence of two fundamentally different transformations of the electron system: one is a neutral excitation in an FQHE state, whereas the other is a transition between two FQHE ground states, one of which belongs to the Laughlin-Jain FQHE hierarchy and the other to a "dark" hierarchy.

Introduction

Today, the FQHE provides the only experimentally accessible system for observing anyons, quasiparticles with non-Fermi and non-Bose statistics. The pioneering works on the experimental detection of quasiparticles with abelian anyonic statistics π/3 in the FQHE state of 1/3 [1,2] opened fundamentally new prospects for incorporating anyons into applied physics. At present, the FQHE states with non-abelian anyons in the fractional states 5/2 and 12/5 have been the focus of intense research [3,4]. In addition to single anyons, multiparticle anyon complexes have been observed, and their collective properties have been investigated [5]. In the very near future, further advancements in the experimental methods of studying anyon matter may well enable the observation of quasiparticles with more sophisticated abelian and non-abelian statistics than that of π/3. However, at the current stage of development, FQHE physics is facing challenges related to the thermodynamics and spectral properties of the observed fractional states. These issues raise fundamental questions about the FQHE hierarchical structure and the interrelation between different fractional states. For example, in the spectra of the neutral excitations of the 1/3, 2/5, and 3/7 fractional states, charge-density excitations with anomalously low energies compared to calculations have been observed experimentally [6]. Other significant problems arise in interpreting the conductivity activation dependencies for the known FQHE states close to the filling factor 1/2 [7].

Creating a general hierarchical structure of fractional states requires building its foundation on the lowest spin sublevel of the zero Landau level, ν < 1. However, such a task can be approached within several theoretical frameworks. The choice of a particular approach should be based either on experimental observations or on the numerical solution of the Schrödinger equation obtained by exact diagonalization for a system of a sufficiently large number of particles. In the present work, we argue that even in the best existing samples, the random potential precludes the observation of the vast majority of possible fractional states. For this reason, the exact diagonalization technique remains the only feasible method of organizing FQHE states into a hierarchical system.
All modern conceptions of FQHE states at the lowest Landau level can be summarized by the following well-established facts. First, there exist fractional states belonging to the main Laughlin hierarchy, ν = 1/m (where m is an odd integer), as well as their symmetric counterparts, ν = 1 − 1/m, emerging due to electron-hole symmetry. Second, there exists the Jain series. It generalizes the main Laughlin series, with the electron filling factors expressed as ν = m/(2nm ± 1) and their symmetric pairs of the form 1 − m/(2nm ± 1) (where n is an integer). Finally, there are several experimentally detectable "weak" fractional states lying outside the main Laughlin-Jain hierarchy, for example, 4/11, 4/13, 5/13, etc.

Hence, it is natural to ask whether the Laughlin-Jain hierarchy is a complete representation of the fractional states, with few possible exceptions, or whether there exist fractional hierarchies that are for some reason experimentally unobservable ("dark hierarchies"). Indeed, since the very discovery of the FQHE, several theoretical approaches have been developed to answer this question. The works of Haldane and Halperin suggest the following model to account for possible hierarchies of the FQHE. When a large quantity of the charged excitations of the 1/m Laughlin state carrying charge ±e/m (quasi-electrons and quasi-holes) are present in an electronic system, they themselves form a Laughlin-like child state at ν = p/q [8,9]. Since the quasiparticle charge decreases as the denominator increases, proportionally to 1/q, the Haldane-Halperin models predict a decrease in the energy gap between the ground and excited states with growing denominator of the fractional state.

An alternative hierarchical structure of the FQHE states was proposed by Jain, who developed the conception of composite fermions, quasiparticles consisting of an electron and an even number of magnetic flux quanta [10]. This model is based on mapping FQHE states onto integer QHE states of composite fermions (the Laughlin-Jain hierarchy mentioned above). This structure can be generalized: FQHE states with filling factor ν are mapped onto new fractional states with electron filling factor ν₂ > 1. Hence, successive mappings lead to one of the Laughlin states, the foundation of every hierarchy of FQHE states. Such a construction corresponds to a system with several "sorts" of composite fermions that differ in the number of attached flux quanta. Generally speaking, given an arbitrary fractional filling factor ν, both of the described theoretical approaches permit a few different ways of bringing a particular FQHE state to the top of its hierarchy. Jain himself suggested selecting the real FQHE states on the basis of energy considerations, whereas other studies argued that the electron density configurations obtained in different ways are equivalent [11].
To resolve such an uncertainty, Zang and Birman [12] combined the approaches proposed by Halperin, Haldane, and Jain and devised a method of matching every fraction to a unique path leading to the top of its hierarchy. Not only does it remove the construction ambiguity, but it also enables a qualitative estimation of the energy of the first excited state for every filling factor. In essence, the Zang-Birman arrangement is similar to Jain's, except that at every level of the hierarchy, a new composite particle "captures" two extra magnetic flux quanta. Thus, the filling factor ν of the child state is given by the mediant of the parent state, p/q, and the critical fraction, p′/q′ with q′ = 2m:

ν = p/q ⊕ p′/q′ = (p + p′)/(q + q′).

In this case, the energy gap that separates the ground state from the excited states drops for every descendant fraction within its own hierarchy. This rule does not impose principal restrictions on the energy gaps of different hierarchies; hence, there is no direct link between the value of the denominator of an arbitrary FQHE state and the energy gap. Finding the pattern in the energy gaps for a number of filling factors can provide evidence on the credibility of a given hierarchy scheme, even without conjecturing a trial wavefunction for a state at those filling factors, as done, e.g., by Jain in [10]; therefore, it is important to find the gaps.

The choice between the given hierarchical models of the FQHE depends on the development of modern computational resources. However, even for existing ultra-fast computers, calculating the dispersion dependencies of neutral excitations by solving the many-electron Schrödinger equation with the exact diagonalization method proves difficult: only for the main Laughlin states 1/3 and 2/3 does it allow covering the momentum range in which the energy gap lies. Indeed, explicitly calculating the gap even for the next fractional state of the Laughlin-Jain hierarchy, 2/5, presents a considerable computational challenge [13]. To overcome this technical difficulty, we used general assumptions about the dispersion dependencies of neutral excitations of the FQHE states at the zero Landau level, concerning the link between the magnitude of the energy gap and the lowest energy at zero momentum. We tested these assumptions numerically for several fractions and built the hierarchy of the FQHE states for ν from 1/4 to 3/4, the range most suitable for experimental study. This hierarchy was found to be in good agreement with the Zang-Birman model (Farey-number hierarchy structure [14]).

In Section 1, the computational technique is described: the exact diagonalization method is used to obtain the energy spectrum of a many-electron system interacting via Coulomb repulsion in a cell with periodic boundary conditions. The results of the numerical experiments are listed in Section 2. In Sections 3 and 4, a possible explanation for the pattern observed in the results of our numerical experiment is proposed and discussed.
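The mediant construction is easy to make concrete. A small sketch follows; the starting fraction and hierarchy path are chosen purely for illustration.

```python
from fractions import Fraction

def mediant(parent: Fraction, critical: Fraction) -> Fraction:
    """Farey mediant p/q (+) p'/q' = (p + p')/(q + q') used in the
    Zang-Birman construction. Fraction auto-reduces, so we combine
    numerators and denominators explicitly."""
    p, q = parent.numerator, parent.denominator
    pp, qq = critical.numerator, critical.denominator
    return Fraction(p + pp, q + qq)

# Illustrative descent: starting from the Laughlin parent 1/3 with the
# critical fraction 1/2 (q' = 2m, m = 1) reproduces Jain-like children.
nu = Fraction(1, 3)
for _ in range(3):
    nu = mediant(nu, Fraction(1, 2))
    print(nu)   # 2/5, 3/7, 4/9
```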
Materials and Methods

We considered a two-dimensional system of electrons with Coulomb interaction, confined to a parallelogram cell Λ ∋ z = ατ₁ + βτ₂, 0 ≤ α, β ≤ 1, with z the complex coordinate and τ₁,₂ corresponding to the two sides of the cell, in a quantizing magnetic field B perpendicular to its plane. We applied periodic boundary conditions (PBCs), that is, the state of the system is conserved under magnetic translations by τ₁,₂. The PBCs are compatible if the cell is pierced by an integer number N_s of magnetic flux quanta. In such a system, the Hamiltonian of a single electron has the well-known spectrum ħω_C(n + 1/2), ω_C = eB/m (Landau levels), each energy level having a finite degeneracy equal to N_s. Therefore, for a number of electrons N_e, we can define the following basis of states diagonalizing the kinetic part of the Hamiltonian, ∏_{j=1}^{N_e} a†_{i_j,n_j}|0⟩, with the pairwise interaction part written in the standard two-body second-quantized form,

Ĥ_int = (1/2) Σ V_{i₁n₁,i₂n₂;i₃n₃,i₄n₄} a†_{α,i₁,n₁} a†_{β,i₂,n₂} a_{β,i₃,n₃} a_{α,i₄,n₄} (1)

where a†_{α,i_k,n_k} and a_{α,i_k,n_k} stand for, respectively, the creation and annihilation operators of a spin-α electron in the state ψ_{n,i}, n being the number of the Landau level, ranging from 0 to ∞, and i specifying the state within the Landau level, ranging from 1 to N_s.

To find the energy spectrum of the given system, it is necessary to calculate the matrix elements of the Coulomb potential and to diagonalize (1). We assume that the cyclotron energy ħω_C and the Zeeman splitting are much larger than the Coulomb energy e²/(εl_B); hence, we can neglect the contribution of higher Landau levels and assume that all electrons are spin polarized; therefore, only a finite number of matrix elements is needed for exact diagonalization (in what follows, we nevertheless present expressions for the matrix elements for arbitrary pairs of Landau levels for the sake of completeness). For a multi-electron system with N_e electrons occupying N_s possible states, the allowed Fock basis comprises C(N_s, N_e) vectors, which makes exact diagonalization even for simple fractions quite a tedious task. However, an observation due to Haldane [15] helps to build a simpler scheme, as there exists a certain momentum-like operator with gcd(N_e, N_s) quantum numbers.

The periodic eigenfunctions of the Landau Hamiltonian were built by Yoshioka [16], who also applied the exact diagonalization scheme in the case of a rectangular cell, and later by Haldane and Rezayi [17]; the matrix element of the Coulomb interaction was computed in the case of a rectangular cell in [18]. However, the formula for a general parallelogram was never published, to our knowledge. Some results concerning the reciprocal vector operator were rederived here in a fashion that seems more fitting for the problem.

Magnetic Translations and One-Electron Wavefunctions

For a single electron in a constant magnetic field, the Hamiltonian is Ĥ = (1/2m)(p − (e/c)A)². We work in the Landau gauge A = (0, Bx), so one-electron states at the lowest Landau level can be represented as [16]

ψ(x, y) = f(z) exp(−x²/2l_B²), z = x + iy, (2)

with f(z) an analytic function. Because of the PBCs, we have t_m(τ₁,₂)ψ = ψ, where the magnetic translations t_m(τ) are defined in Equation (3). By the Campbell-Hausdorff formula (here, t(τ) stands for an ordinary translation by τ, and τ = τ_x + iτ_y), one obtains Equation (4). For a state to be conserved by two magnetic translations, by τ₁ and τ₂, they must commute; this requires the cell to be pierced by an integer number of flux quanta, Im(τ̄₁τ₂) = 2πl_B²N_s.

Applying (4) to (2) and taking t_m(τ₁,₂)ψ = ψ into account, we obtain condition (7) on f. This condition implies that f can be represented as a Fourier series, f(z) = Σ_{k∈Z} c_k exp(i2πkz/τ₁). To find its coefficients, we substitute the series into (7); the resulting recursion shows that the functions f(z) corresponding to LLL states with PBCs constitute a linear space of dimension N_s, one example of a basis of which is given by Equation (9). Finally, the LLL wavefunctions are given by Equation (10); similarly, the wavefunctions at the n-th Landau level are given by Equation (11), where H_n is the n-th Hermite polynomial.
To find its coefficients, we substitute the series into (7), which yields a recurrence for the $c_k$. Consequently, the functions $f(z)$ corresponding to LLL states with PBCs constitute a linear space of dimension $N_s$, from which an explicit basis and, finally, the LLL wavefunctions are obtained; similarly, at the $n$-th Landau level, the wavefunctions (11) involve the $n$-th Hermite polynomial $H_n$.

Reciprocal Vectors and Partial Translations

For an operator to constitute an observable, it should commute both with the Landau Hamiltonian and with the magnetic translations $\hat t_m(\tau_{1,2})$. A remarkable observation due to Haldane [15] was that there exist center-of-mass magnetic translation operators $T(a) = \prod_{i=1}^{N_e} \hat t_{m,i}(a)$, where $\hat t_{m,i}(a)$ acts on the $i$-th electron, for nontrivial vectors $a$ satisfying both of these properties and also commuting with each other, thus constituting a vector quantum number similar to momentum. Indeed, it follows from (4) that $T(a)$ and $T(b)$ commute up to a phase, $T(a)\,T(b) = \exp\!\left(\frac{i N_e}{l_B^2}\,\mathrm{Im}(\bar a b)\right) T(b)\,T(a)$.

What is the spectrum of these operators? Because each such operator satisfies $T(L_i)^N = 1$ for suitable translations $L_i$, with $N = \gcd(N_e, N_s)$, its eigenvalues are $\exp(2\pi i l/N)$. Two such translation operators commute; hence, they have a common eigenbasis, the eigenvalues being $\exp(2\pi i l_i/N)$ for $T(L_i)$. Consider $q$ such that $(q, L_1) = 2\pi l_1$ and $(q, L_2) = 2\pi l_2$ (i.e., $q$ belongs to a reciprocal lattice); then the spectrum of $T(L)$ is $\exp(i(q, L)/N)$. This $q$ is the reciprocal vector quantum number. Direct calculation shows that the one-electron states deduced in the previous section can be combined into eigenstates of these operators. Using this, we constructed a basis of reciprocal-momentum eigenstates, and to diagonalize (1) one only needs to handle the blocks corresponding to each eigenvalue.

We now focus on the allowed values of $q$, because these label the blocks, and their absolute values correspond to the points of the dispersion curve. As mentioned, they lie in the lattice reciprocal to $\{n_1\tau_1 + n_2\tau_2\}$ and are defined up to a translation by an element of a coarser reciprocal sublattice. Therefore, the number of allowed values ($N^2$) of the reciprocal number at a fixed filling factor can only be increased by increasing $\gcd(N_e, N_s)$, which makes the computational complexity of the problem increase dramatically. Keeping $N_s$ fixed, we can still increase the number of distinct absolute values of allowed $q$ by considering a non-square cell ("lifting the degeneracy" of pairs of points that are reflections in a diagonal), and increase the maximum absolute value of $q$ by choosing a non-rectangular cell (maximizing the diagonal of the parallelogram while keeping its area fixed). The latter may be crucial for resolving roton minima for some fractions.

Matrix Element

Finally, we need to evaluate the matrix elements in (1). Using the translational invariance of the wavefunctions, the interaction can be rewritten in terms of $\tilde V$, the periodic continuation of $V$. Being a doubly periodic function, it can also be represented by the Fourier series
$$\tilde V(\mathbf r) = \frac{1}{\sigma}\sum_{\mathbf q \in L^{-1}} V(\mathbf q)\,\exp(i(\mathbf q, \mathbf r)),$$
where $\sigma$ denotes the area of a primitive cell of the lattice $L = \{k_1\tau_1 + k_2\tau_2,\ k_1, k_2 \in \mathbb{Z}\}$, and the series is summed over the reciprocal lattice $L^{-1}$, so that $(\mathbf q, \mathbf r) = 2\pi N$, $N \in \mathbb{Z}$, for all $\mathbf q \in L^{-1}$ and $\mathbf r \in L$. To account for the weakening of the Coulomb interaction, we modified the Fourier components by introducing the geometric form factor $F(q)$, calculated using the profile of the envelope wavefunction of electrons in the lowest size-quantization sub-band of the conduction band in GaAs, and obtained a more realistic expression for $V(\mathbf q)$. Plugging in the expressions from (11) and (13) and after some algebra, we arrive at the final formula for the matrix element, in which $L_n^k$ stands for the generalized Laguerre polynomial. The function $F(q)$ was calculated numerically for the actual parameters of the experimental sample.
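As a rough numerical illustration of the reciprocal-lattice series (not the paper's full matrix element, whose form factor F(q) is sample specific), the sketch below sums the Coulomb Fourier components over a hypothetical parallelogram cell, weighted by the bare lowest-Landau-level Gaussian factor standing in for F(q), and shows how rapidly the truncated sum converges.

```python
import numpy as np

# Hypothetical non-rectangular cell; units: e^2/eps = 1, magnetic length l = 1.
a1 = np.array([5.0, 0.0])              # tau_1
a2 = np.array([2.5, 5.0])              # tau_2
sigma = a1[0] * a2[1] - a1[1] * a2[0]  # (signed) cell area

# Reciprocal basis vectors: b_i . a_j = 2*pi*delta_ij.
b1 = 2.0 * np.pi * np.array([a2[1], -a2[0]]) / sigma
b2 = 2.0 * np.pi * np.array([-a1[1], a1[0]]) / sigma

def lattice_sum(cutoff: int) -> float:
    """Sum (2*pi / (sigma*q)) * exp(-q^2/2) over nonzero reciprocal vectors."""
    total = 0.0
    for n1 in range(-cutoff, cutoff + 1):
        for n2 in range(-cutoff, cutoff + 1):
            if n1 == 0 and n2 == 0:
                continue  # q = 0 term is cancelled by the neutralizing background
            q = np.linalg.norm(n1 * b1 + n2 * b2)
            total += (2.0 * np.pi / (sigma * q)) * np.exp(-q * q / 2.0)
    return total

for cutoff in (2, 4, 8, 16):
    print(cutoff, lattice_sum(cutoff))  # converges quickly thanks to the Gaussian
```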
Results

In our study, we considered a hypothetical two-dimensional electron system with the physical parameters of GaAs/AlGaAs quantum wells to provide a direct comparison with experimental data. The calculations were carried out for a system of several electrons in toric geometry, using the electron wavefunctions introduced in [17], invariant with respect to magnetic translations. To check the correctness of the numerical results, we calculated the dispersion dependencies of the five lowest-energy neutral excitations for the main fractional states of the Laughlin-Jain hierarchy, 1/3 and 2/3, related by electron-hole symmetry (Figure 1). The spectrum of neutral excitations consists of a magneto-roton branch (MR) [19,20] and a continuum of multi-roton states (MMR). At zero momentum, the roton branch merges with the continuum and cannot be detected as a distinct excitation branch. The principal result of these calculations is that at zero momentum, the lowest-energy excitation is a biroton (magneto-graviton, BMR(0) [21]) with twice the energy of the roton minimum, which is consistent with previously reported findings [22]. The application range of our computational scheme was specified based on the condition of resolving the roton minimum with the smallest number of electrons (Figure 1). The next step in the numerical calculations involved constructing the dispersion dependency of the five lowest-energy neutral excitations for the second representative of the Laughlin-Jain hierarchy, the fractional QHE state 2/5. Once again, it can be claimed, though with less certainty than for the fractional state 1/3, that at zero momentum the lowest energy of neutral excitations corresponds to the doubled energy of the absolute minimum of the roton branch [23]. Numerical calculations for the other fractional states lead to the conclusion that at ν < 1, the spectrum of the lowest-energy neutral excitations of any FQHE state is of the same type. It consists of a multi-roton continuum and a roton branch (with the number of minima depending on the fractional state) that is damped at zero momentum. Based on the calculations, it is natural to assume that the lowest-energy excitation with zero momentum is a biroton with twice the energy of the absolute minimum of the roton branch. It corresponds to the appearance of two rotons with the minimal possible energies and opposite momenta. In our analysis, we could also consider the binding energy between the rotons in a biroton [13]. However, even for the main FQHE states 1/3 and 2/3, this energy turns out to be smaller than the numerical error of our simulations. Therefore, since calculating only the lowest energy of neutral excitations at zero momentum is far less demanding than finding the absolute minimum of a roton branch in the corresponding dispersion dependencies, this opens broad prospects for the comparative analysis of the excitation energies of different fractional states.
Discussion

The calculated biroton energies were compared to the activation dependencies of conductivity in fractional states of the Laughlin-Jain hierarchy. In this comparison, the doubled activation energy was considered approximately equal to the energy of a biroton with zero momentum [24]. We found that the activation energies closely agree with the calculation for the main fractions, 1/3 and 2/3. However, approaching the filling factor 1/2 leads to an increasing deviation between the calculated energies and the experimental values of the activation energies. For example, for the fractional states 5/11 and 6/11, the discrepancy between the calculated and experimental data already exceeds an order of magnitude (Figure 2). The observed disagreement cannot be explained by the non-locality of the electronic wavefunctions due to the finite width of the quantum wells used in the experiment. Indeed, the change in biroton energies due to the quantum well width is virtually independent of the FQHE state (Figure 2). It should also be noted that in the experiment, a linear extrapolation of the activation energy of fractional states of the Laughlin-Jain hierarchy to the filling factor 1/2 leads to non-physical, negative values [7], which does not occur in the numerical modeling (Figure 3). A plausible explanation of this effect is that, for electron filling factors in the vicinity of ν = 1/2, we are observing a temperature-induced reorganization of the electron density rather than an activation dependency. In that case, the ground state is brought to the FQHE states of a close filling factor that belong to a "dark" (experimentally undetectable) FQHE hierarchy. The transition between different fractional states should be accompanied by the appearance of bulk conductivity in the two-dimensional electronic system. This effect is insignificant for the activation energies of the main fractional states of the Laughlin-Jain hierarchy, 1/3 and 2/3. Indeed, the energy gaps between the ground states of these fractions and the "dark" fractional states of close filling factors are approximately equal to the activation energies themselves. However, as the electron filling factor approaches ν = 1/2, the energy gaps separating the fractions of the Laughlin-Jain hierarchy from the "dark" fractional states close to them are substantially reduced, leading to the decrease in "activation" energy observed in the experiment (Figure 2).
Another problem of FQHE states is the spectrum of experimentally detectable neutral excitations. Direct measurement of neutral excitations of the Laughlin-Jain hierarchy using resonant Raman scattering reproduces the pattern observed in the activation experiments [6]. The energy of neutral excitations for the FQHE state 1/3 shows excellent agreement with the data calculated for a biroton (magneto-graviton) with zero momentum. In contrast, for the following fractional states of the Laughlin-Jain hierarchy, 2/5 and 3/7, the excitation energies decrease drastically, showing no resemblance to the calculation results. One possible way of explaining the observed effect is to take into account "dark" FQHE states, namely, to interpret the optical transitions between the ground states of different FQHE states as a local redistribution of the electron density induced by the electromagnetic field of the light wave. The coexistence of two different types of optical transitions, (i) between different fractional states and (ii) within a single fractional state, was already discussed in [25].

The authors of the present paper observed analogous anomalous optical transitions in the spectrum of spin-density neutral excitations for the main Laughlin state 1/3, measured by resonant reflection for several heterostructures with GaAs/AlGaAs quantum wells [5,26,27]. The spectrum indicates direct optical transitions that excite spin birotons (spin magneto-gravitons), as well as transitions that give rise to excitations with energies lying in the bandgap of the fractional Hall dielectric ν = 1/3 (Figure 3). Unlike birotons, which exhibit the bosonic properties predicted for neutral excitations [5], the anomalous excitations show no signs of Bose statistics [26,27], suggesting transitions between the ground states of different fractional states rather than neutral excitations within the fractional QHE state 1/3.

Conclusions

To conclude the study, we constructed the hierarchy of computable fractional QHE states in the range of electron filling factors from 1/4 to 3/4 (Figure 4). The energies of these fractional states were found to be reasonably consistent with the Zang-Birman hierarchical structure (Farey-number hierarchy structure) [12]. Although this structure has much in common with the theoretical models of Halperin, Haldane, and Jain, it has a significant distinctive feature: fractional states with highly varied denominators from different FQHE hierarchies can have nearly equal energy gaps between the ground and excited states. For instance, the energies of such dissimilar fractional states as 6/19, 6/17, 8/23, 8/25, 10/29, and 10/31, belonging to different "dark" hierarchies, are found to be almost the same, as all of them lie two steps from the top of their hierarchical ladders, the Laughlin state 1/3 (Figure 4). Conversely, the energies of fractional QHE states with equal denominators and similar electron filling factors can differ substantially. For example, the respective biroton energies of the fractional states 10/31 and 11/31, corresponding to the second and third steps from the top of their hierarchical ladders, come out nearly two orders of magnitude apart: 51.8 × 10⁻³ and 0.74 × 10⁻³ in Coulomb units, respectively (Figure 3).
Despite the abundance of calculated fractional QHE states from "dark" hierarchies, the fact that they are not directly observable in magneto-transport experiments is quite understandable. The hierarchical structure proposed in the present work makes it evident that the biroton energies of fractional states in the Laughlin-Jain hierarchy are always greater than those of other fractional states of comparable filling factors belonging to "dark" hierarchies. Hence, it is energetically favorable to localize some part of the excited quasi-electrons and quasi-holes of the Laughlin-Jain fractional states and to keep the filling factor of extended states unchanged in the region of filling factors separating the neighboring fractional states of this hierarchy. Considering that even a small amount of impurities in a two-dimensional system causes localization of a macroscopic number of electron states, the filling-factor regions where the fractional states of "dark" hierarchies can be observed are not large, even in the most highly mobile samples known to date [28]. Usually, for ν < 1, these regions fall within the narrow ranges of electron filling factors [1/3; 2/5] and [3/5; 2/3], where the Laughlin-Jain fractional states are far apart in terms of the filling factor. As the electron filling factor approaches 1/2, the regions free of Laughlin-Jain fractional states shrink in size, because the "separation" (in terms of the electron filling factor) between nearby fractional states of the Laughlin-Jain hierarchy decreases as the inverse square of the fraction's denominator. Therefore, while the Laughlin-Jain hierarchy is not the only hierarchy of FQHE states, it remains dominant over the "dark" hierarchies in magneto-transport experiments (Figure 4).

The "dark" hierarchies are nevertheless essential in describing the excitation properties of FQHE states. Taking into account transitions between the FQHE states of the Laughlin-Jain hierarchy and the states of the "dark" hierarchies explains the non-physical negative values of the activation energies close to the filling factor 1/2 [7]. The anomalous excitation energies measured by resonant Raman scattering for the fractional states 2/5 and 3/7 [6], as well as the spin excitation energies for the state 1/3 measured by the authors of this paper [5,26,27], are explained similarly.

Figure 1. (a) Dispersion dependencies of the five lowest-energy excitations in the Laughlin fractional QHE states 1/3 and 2/3, calculated for nine electrons in a δ-function GaAs/AlGaAs quantum well at a magnetic field of 10 T. (b) Calculated energies of birotons with zero momentum and rotons in the Laughlin state 1/3, at a magnetic field of 10 T, plotted versus the number of electrons. The dashed line indicates the smallest number of electrons needed to observe the minimum in the dispersion of the roton branch. The solid lines are included for convenience.

Figure 2. (a) Dispersion dependencies of the five lowest-energy excitations in the Laughlin fractional QHE state 2/5, calculated for twelve electrons in a δ-function GaAs/AlGaAs quantum well at a magnetic field of 10 T.
The dashed line marks the boundary of an elementary cell in inverse space. (b) Calculated energies of birotons with zero momentum for the FQHE state 2/5 at a magnetic field of 10 T, plotted as a function of the number of electrons. (c) The doubled experimental activation energy of fractional states from the Laughlin-Jain hierarchy measured in [7] (red dots), normalized by the calculated energies of birotons with zero momentum. (d) Energy dependencies of birotons with zero momentum for the FQHE states 1/3 (black circles), 2/5 (black triangles), and 13/27 (black squares). For comparison, red circles indicate the energy dependency of the roton minimum for FQHE 1/3. The solid lines are included for convenience.

Figure 3. (a) Energies of birotons with zero momentum, at a magnetic field of 10 T, calculated for different FQHE states in the range of filling factors [1/4; 3/4], given a δ-function GaAs/AlGaAs quantum well. Fractional states of the Laughlin-Jain hierarchy (black circles) and "dark" hierarchies (red circles) are expressed in Coulomb energy units. The solid lines are included for clarity. The length of the green arrows signifies the measured charge-density excitation energies from [6], corrected for the finite quantum well width and magnetic field used in the experiment. The diamonds denote the even-denominator electron filling factors, 1/4, 3/10, 3/8, 1/2, 5/8, 7/10 (left to right). (b) Energies of neutral excitations with spin 1 (S = 1) for the Laughlin state 1/3, calculated for spin birotons and spin excitons with zero momentum, at a magnetic field of 10 T (black dots). The green dots show the measured excitation energies from [5,6,25]. Red ovals indicate the corresponding pairs of experimental and calculated neutral excitations, SE(0), the spin exciton [6], and SBMR(0), the spin biroton (magneto-graviton) [5]. Squares mark the experimentally observable excitations that have no theoretical counterpart.

Figure 4. (a) Energies of birotons with zero momentum, in a magnetic field of 10 T, calculated for different FQHE states in the range of Coulomb energies from 0.001 to 0.1 e²/ϵl₀. The fractional states of the Laughlin-Jain hierarchy and of the "dark" hierarchies are plotted as black and red circles, respectively. (b) FQHE states with energies from (a) brought into the Zang-Birman hierarchical structure (Farey-number hierarchy structure). The horizontal and vertical axes denote, respectively, the fraction's absolute value and its denominator. The dot diameter is proportional to the energy of the biroton with zero momentum of the corresponding fraction. The lines link the fractional states of individual hierarchies.
6,876.8
2022-08-08T00:00:00.000
[ "Physics" ]
Identification of Potential SARS-CoV-2 Main Protease and Spike Protein Inhibitors from the Genus Aloe: An In Silico Study for Drug Development

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes a rapidly spreading global disease with very high rates of complications and mortality. To date, there is no effective specific treatment for the disease. Aloe is a rich source of isolated phytoconstituents with an enormous range of biological activities. Since no experimental techniques were available to us to examine these compounds for antiviral activity against SARS-CoV-2, we employed an in silico approach involving molecular docking, dynamics simulation, and binding free energy calculation against essential SARS-CoV-2 proteins, the main protease and the spike protein, to identify lead compounds from Aloe that may help in novel drug discovery. Results retrieved from docking and molecular dynamics simulation suggested a number of promising inhibitors from Aloe. Root mean square deviation (RMSD) and root mean square fluctuation (RMSF) calculations indicated that compounds 132, 134, and 159 were the best scoring compounds against the main protease, while compounds 115, 120, and 131 were the best scoring ones against the spike glycoprotein. Compounds 120 and 131 achieved significant stability and binding free energies during molecular dynamics simulation. In addition, the highest scoring compounds were investigated for their pharmacokinetic properties and drug-likeness. Aloe compounds are thus promising active phytoconstituents for drug development against SARS-CoV-2.

Structure-Based Virtual Screening and Molecular Docking of Aloe Phytochemicals on SARS-CoV-2 Spike Glycoprotein and Main Protease

High-throughput virtual screening of compounds from Aloe was followed by molecular docking and MD simulation. Since ligand binding to a protein of interest is the first step in drug discovery, molecular docking is widely used to predict and identify ligands that fit into the binding pocket of a protein of interest [28]. Our screening was performed against two major drug discovery and therapeutic targets of SARS-CoV-2, the spike glycoprotein and the main protease M pro [7,12]. SARS-CoV-2 M pro is critical for the life cycle of the virus. Approximately two thirds of the SARS-CoV-2 genome is translated into the polyproteins pp1a and pp1ab, which are cleaved by M pro into nonstructural proteins involved in the production of the viral membrane, spike, and nucleocapsid proteins [29]. M pro is a dimer with a cysteine and a histidine in the active site that form a catalytic dyad conserved among coronaviruses, making it an ideal therapeutic target [12]. (The screened library comprised chromones 27.4%, coumarins 0.8%, flavonoids 4.2%, simple phenolic compounds 8%, phenyl pyran and phenyl pyrone derivatives 7%, benzofurans 2%, naphthalene derivatives 5.9%, alkaloids 1.2%, fatty acid derivatives 1.2%, and miscellaneous compounds 5.5%, with the largest class at 36.3%.)
In molecular docking studies, the ligand-receptor interaction with protein active-site residues is established by the formation of interactions including hydrogen bonds, van der Waals interactions, π-sigma bonds, π-π interactions, electrostatic interactions, and many other hydrophobic contacts. Hydrogen bonds are essential for interaction, lowering the binding energy and stabilizing the ligand-receptor docked complex. Pharmacologically, it is well known that blockade of a receptor active site by a ligand terminates its functional activity [30]. Our molecular docking approach was validated by docking hydroxychloroquine, a potent inhibitor of SARS-CoV-2 M pro. Hydroxychloroquine acts as a lysosomotropic agent that inhibits viral entry and endocytosis. Viral entry and replication are highly dependent on the acidic pH of lysosomes and endosomes, and on some host proteases that are also active at acidic pH (pH 5-5.5) [31]. Chloroquine and its analogues are diprotic weak bases that, in their unprotonated forms, readily diffuse through cellular and organelle membranes such as those of lysosomes, endosomes, and Golgi vesicles, increasing the pH from 6.3 to 6.7 [32-34]. In addition to disrupting endocytic-pathway pH, chloroquine and hydroxychloroquine have recently been found to be potent inhibitors of SARS-CoV-2 M pro, but not of viruses belonging to the Rhabdoviridae [35]. In our study, the compounds previously isolated from Aloe plants were virtually screened against the SARS-CoV-2 main protease M pro (PDB ID: 6LU7) (Figure 2) and the spike glycoprotein (PDB ID: 6M0J) (Figure 2) to find potential inhibitors of SARS-CoV-2. Using our docking approach, hydroxychloroquine interacted with SARS-CoV-2 M pro, and the docked hydroxychloroquine bound to the active site with an RMSD of 1.2 Å. Molecular docking data were filtered to remove compounds with scores > −6.5 for both the SARS-CoV-2 main protease M pro (Figure 3 and Table A1) and the spike glycoprotein (Figure 4 and Table A1). Molecular docking then proceeded by examining the interactions of these compounds with the active-site residues of these proteins and analyzing the results.
Compounds scoring lower than −5.00 kcal/mol are expected to be active. These compounds were then filtered by RMSD value [30] to evaluate the stability of the docked ligand conformers. RMSD values around 1.5 Å are considered successful and stable, while values beyond 2 Å indicate instability of the ligand conformation and docking parameters [36]. For the SARS-CoV-2 protein M pro, the binding energies observed for these compounds ranged from −7.950 to −0.339 kcal/mol, while for the spike glycoprotein, binding energies ranged from −8.088 to −5.437 kcal/mol. The top three scoring compounds for SARS-CoV-2 M pro were compound 132 (2′-oxo-2′-O-(3,4-dihydroxy-E-cinnamoyl)-(2′R)-aloesinol-7-methyl ether), compound 134 (2′-oxo-2′-O-(4-hydroxy-3-methoxy-(E)-cinnamoyl)-(2′R)-aloesinol-7-methyl ether), and compound 159 (rutin) (Table 1 docking scores and Figure 5, top panel). These three compounds showed the strongest interaction with the active site of SARS-CoV-2 M pro; the molecular 2D and 3D interaction complexes of compounds 132, 134, and 159 with SARS-CoV-2 M pro are shown in Figure 6.
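The two-stage filter described above can be expressed compactly. The table below is a mock-up with hypothetical scores (the column names and values are illustrative, not taken from the paper's Table A1), kept only to show the screening logic: drop weak binders (score > −6.5), then apply a pose-stability cut at 2 Å RMSD.

```python
import pandas as pd

# Hypothetical docking results; values are placeholders for illustration.
results = pd.DataFrame({
    "compound": [115, 120, 131, 132, 134, 159],
    "score_kcal_mol": [-7.2, -8.0, -7.6, -7.9, -7.7, -7.5],
    "rmsd_A": [1.4, 1.3, 1.6, 1.2, 1.5, 1.8],
})

# Stage 1: remove compounds with scores above -6.5 kcal/mol.
# Stage 2: keep poses whose docking RMSD indicates a stable conformation.
hits = results[(results["score_kcal_mol"] <= -6.5) & (results["rmsd_A"] < 2.0)]
print(hits.sort_values("score_kcal_mol"))
```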
Molecular Dynamics Simulation

Conventional docking approaches do not account for the inherent flexibility of the protein binding site and the many protein conformational rearrangements [37]. Computational tools for drug discovery such as molecular dynamics take into account structural flexibility and entropic effects, which produces accurate predictions of small molecule-protein binding thermodynamics and kinetics [38]. Hence, dynamical docking considers the flexibility of drug-protein binding and conformational changes, the solvation of the drug-protein complex, and temperature [38,39]. Unbiased millisecond-long simulations can predict the entire spontaneous drug-protein binding process [40]. In addition, recent developments in dynamical docking, such as enhanced sampling for dynamical docking and path-based and alchemical transformations, have greatly impacted drug discovery [38]. To validate the molecular docking results, we subjected the top scoring compounds to unbiased molecular dynamics simulation experiments. The three top scoring M pro inhibitor hits 132, 134, and 159 achieved stable binding inside the active site, with low deviations across the course of the simulations (average RMSD = 3.22, 3.32, and 3.86 Å, respectively) and convergent binding free energies (∆G = −6.9, −6.8, and −6.5 kcal/mol, respectively) (Figure 8A).
With respect to the SARS-CoV-2 spike glycoprotein, both compounds 120 and 131 were stable inside the binding site during MD simulation, scoring average RMSDs of 2.81 Å and 3.96 Å, respectively, and ∆G values of −7.4 and −6.8 kcal/mol, respectively (Figure 8B). On the other hand, compound 115 was significantly less stable (average RMSD = 6.2 Å) inside the SARS-CoV-2 spike glycoprotein binding site, and this instability translated into a low binding free energy (∆G = −4.5 kcal/mol) compared to compounds 120 and 131 (Figure 8B). RMSF expresses the average residual mobility of a structure throughout the simulation, and a higher RMSF value indicates more flexibility during MD simulation. We calculated the RMSF values for the top scoring compounds from the Aloe genus with SARS-CoV-2 M pro and the SARS-CoV-2 spike glycoprotein and plotted RMSF versus residue number (Figure 8C,D). The results indicate that compounds 159 and 120 had high RMSF values compared to the other compounds. The RMSD and RMSF values indicate that the top scoring compounds from the Aloe genus were stable and had greater random motion during the simulation. The inhibitors identified in our docking analysis that showed interaction with the SARS-CoV-2 spike protein and M pro are in agreement with previously reported results [41]. Arokiyaraj et al. found that several polyphenolic compounds from Geranii Herba, including geraniin, kaempferitrin, quercetin, gallic acid, and kaempferol, interacted with amino acid residues in the SARS-CoV-2 RBD active site, inhibiting the interaction of the SARS-CoV-2 RBD with ACE2. Arokiyaraj et al. also reported that these polyphenolic compounds interacted strongly with amino acids in and around the active site of SARS-CoV-2 M pro, leading to blockade of the nucleophilic attack toward His41 and blockade of proteolytic activity. In agreement with this, we found that quercetin interacted with the SARS-CoV-2 RBD and M pro with binding energies of −5 kcal/mol and −5.5 kcal/mol, respectively, similar to the results reported by Arokiyaraj et al. for the quercetin interaction with the RBD and M pro, −5.71 kcal/mol and −6.49 kcal/mol, respectively. In addition, we found that gallic acid interacted with the SARS-CoV-2 RBD and M pro with binding energies of −4.19 kcal/mol and −3.56 kcal/mol, respectively, similar to the binding energies reported by Arokiyaraj et al. for the gallic acid interaction with the SARS-CoV-2 RBD and M pro, −4.21 kcal/mol and −4.46 kcal/mol, respectively. These findings indicate that phenolic compounds from Aloe are potential inhibitors of the SARS-CoV-2 RBD and M pro [41].

Drug-like Properties and Pharmacokinetic Prediction of the Ligands

Drug-like properties and pharmacokinetic properties are intrinsic characteristics of drugs that may need to be optimized independently of pharmacodynamic properties during drug development. Development is a balance among molecular properties affecting the pharmacodynamics and pharmacokinetics of small molecules. Molecular properties such as membrane permeability and bioavailability are connected to basic molecular descriptors such as lipophilicity (log P, the tendency of a compound to partition into a lipid matrix versus an aqueous matrix), molecular weight (MW), topological polar surface area (TPSA), and the counts of hydrogen bond acceptors and donors in a molecule.
Lipophilicity impacts a drug's absorption, distribution, metabolism, elimination (ADME), and plasma protein binding properties. In addition, the numbers of hydrogen bond donors and hydrogen bond acceptors influence a drug's pKa (−log Ka). The solubility of small molecules impacts their bioavailability and the need for frequent dosing; hence, we investigated the ADME properties, inhibition of cytochrome P450 (CYP) enzymes, modulation of P-glycoprotein (Pgp), solubility, plasma protein binding, and permeability of the top scoring compounds in our analysis. The best scoring compounds for both SARS-CoV-2 M pro and the spike glycoprotein were tested for obedience to Lipinski's rule-of-five parameters, which state that drugs having log P ranging from 0 to 5 have a high possibility of oral absorption [42]. The data (Table 2) showed that the compounds have log P values ranging from −1.06 to 2.8, not exceeding 5.0, indicating a reasonable probability of good absorption. The number of hydrogen bond donors was variable and ranged from 4 to 10, which exceeds 5, and the number of hydrogen bond acceptors was 11-16, which exceeds 10. All compounds have numbers of atoms ranging from 40 to 43, which is within 20-70. Regarding the topological polar surface area (TPSA) of the compounds, a parameter for the prediction of drug transport properties, compounds with TPSA values greater than 140 Å² tend to be poor at permeating cell membranes. Despite violating some rules, approved anticancer and anti-infective drugs derived from natural products or their semisynthetic derivatives, such as taxol and amphotericin B, also have some violations yet are biologically effective as drugs. Therefore, these results do not preclude the development of these compounds as potential SARS-CoV-2 therapeutic agents [43]. All compounds showed medium predicted Caco-2 permeability and medium predicted MDCK cell permeability [44,45]. Moreover, all compounds showed moderate predicted plasma protein binding (PPB) (Table S1) except for compound 159, which showed weak predicted PPB, indicating predicted decreased excretion and increased predicted half-life. It is important to consider a drug's interaction with plasma proteins, transporters, and CYP450s for the successful selection of a drug candidate. CYP2C19 and CYP2C9 inhibition leads to increased drug plasma concentration, with potential side effects [46,47]. All top scoring compounds were predicted to inhibit CYP2C19 and CYP2C9. CYP2D6 metabolizes many drugs and toxins [48]. None of the top scoring compounds showed predicted inhibitory activity against CYP2D6. CYP3A4 is also involved in the metabolism of xenobiotics and is highly expressed in the liver and intestine [49]. The six top scoring compounds were predicted to inhibit CYP3A4. Drug resistance is a major concern in drug development. Multidrug resistance is regulated by a network of ATP-binding cassette (ABC) proteins that detoxify xenobiotics and act as drug transporters and efflux pumps. P-glycoprotein (Pgp; ABCB1) is the most popular and well-studied efflux pump [50,51]. Pgp has intrinsic ATPase activity to drive active transport and generate a concentration gradient, leading to the transport of drugs to the extracellular space and inhibition of drug activity [51]. Pgp is highly expressed in blood-brain barrier cells, liver, intestine, and kidney. Thus, it is important to predict a drug's binding to Pgp. Only compound 115 was predicted to inhibit Pgp, and hence it may affect the activity or excretion of other Pgp substrates.
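The descriptor checks discussed in this section can be reproduced for any structure with an open-source toolkit. The sketch below uses RDKit (an assumption; the paper used Molinspiration and PreADMET for its predictions) and quercetin, which appears in the docking comparison above, as the example input.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def druglikeness_report(smiles: str) -> dict:
    """Descriptors discussed in the text (MW, logP, HBD, HBA, TPSA) plus a
    count of Lipinski rule-of-five violations."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    props = {
        "MW": round(Descriptors.MolWt(mol), 1),
        "logP": round(Descriptors.MolLogP(mol), 2),
        "HBD": Lipinski.NumHDonors(mol),
        "HBA": Lipinski.NumHAcceptors(mol),
        "TPSA": round(Descriptors.TPSA(mol), 1),
    }
    props["lipinski_violations"] = sum([
        props["MW"] > 500,
        props["logP"] > 5,
        props["HBD"] > 5,
        props["HBA"] > 10,
    ])
    return props

# Quercetin (PubChem canonical SMILES) as a worked example.
print(druglikeness_report(
    "C1=CC(=C(C=C1C2=C(C(=O)C3=C(C=C(C=C3O2)O)O)O)O)O"))
```

As the text notes, a violation or two of the rule of five does not by itself rule out a natural-product lead, so the count is best read as a flag for follow-up rather than a hard filter.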
Compound 159 had the highest water solubility (217.207 mg/L), while the other five compounds had low water solubility; this should be considered during drug development. In addition, skin permeability is an important factor to consider during drug development, both for the potential of dermal drug delivery and for the risk assessment of drugs that may contact skin [52]. Skin permeability also increases a drug's plasma concentration and activity. It has been reported that log P values between −3 and +6 predict a drug's skin permeability [53]. SKlogD, SKlogP, and SKlogS relate to drugs' skin permeability and lipophilicity. All six top scoring compounds had skin permeability values ranging from −4.6 to −3.6, indicating that they may not be absorbed through the skin and thus should not pose a dermal exposure risk. Finally, none of the compounds had a predicted ability to pass the blood-brain barrier (BBB), and they are not expected to be neurotoxic.

Phytochemical Review of Genus Aloe

An intensive review of the literature in ScienceDirect, PubMed, and SciFinder was conducted to identify compounds from the genus Aloe.

Preparation of Protein and Active Site Prediction

In this study, two SARS-CoV-2 proteins that facilitate viral-host interaction and replication were selected from the RCSB Protein Data Bank (https://www.rscb.org/pdb, accessed on 20 February 2021). The proteins are the SARS-CoV-2 main protease (PDB ID: 6LU7, resolution = 2.16 Å) [54] and the spike glycoprotein (PDB ID: 6M0J, resolution = 2.45 Å) [42]. The 3D protein structures were prepared using the LigX option of the Molecular Operating Environment software (MOE 2014.0901). The Site Finder function was used to calculate and predict the potential active sites of the selected proteins for ligand binding. PyMol 2.3 software was used for visualization.

Preparation of Ligands

Reviewing the available literature identified 237 phytochemical compounds that have been isolated from the genus Aloe (Table A1 and Figures S1-S18). All these molecular structures were imported into MOE and subjected to 3D protonation and energy minimization using the MMFF94s force field, and a ligand database was constructed. Ligand coordinate files were extracted from PDB files and used as reference structures for root mean square deviation (RMSD) calculations.

Docking Analysis

Flexible ligand-rigid receptor docking was performed with MOE-DOCK. The placement method was Triangle Matcher; scoring was set to London dG with 30 output poses, and rescoring to GBVI/WSA dG, retaining 10 refined poses. The docking score, root mean square deviation (RMSD), and 2D and 3D interactions were recorded [55]. Docked ligands are chosen based on the most negative docking score, which represents the best-bound ligand conformations and relative binding affinities. The best-docked conformations for comparing the binding of the drugs to the SARS-CoV-2 targets were selected based on the number of hydrogen bonds, the binding energy (kcal/mol), the upper- and lower-bound RMSD, the number of interacting residues, and the forces stabilizing the receptor-ligand complex. The RMSD and RMSF of the ligand interaction with the target protein were calculated using the following formulas:
$$\mathrm{RMSD}(t_x) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left|r_i(t_x) - r_i(t_{\rm ref})\right|^2},$$
$$\mathrm{RMSF}_i = \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left|r_i(t) - r_i(t_{\rm ref})\right|^2},$$
where N is the number of atoms, t_ref is the reference time, r_i is the position of selected atom i in frame x after superimposing on the reference frame, frame x is recorded at time t_x, and T is the trajectory time over which the RMSF was calculated.
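A direct NumPy transcription of these two formulas can serve as a check; this is a sketch that assumes the coordinates have already been superimposed on the reference frame, and the toy trajectory merely stands in for the 250 stored poses described in the next section.

```python
import numpy as np

def rmsd(frames: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """RMSD(t_x) = sqrt((1/N) * sum_i |r_i(t_x) - r_i(t_ref)|^2), one value
    per frame; frames has shape (T, N, 3), ref has shape (N, 3)."""
    diff = frames - ref[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2).mean(axis=1))

def rmsf(frames: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """RMSF_i = sqrt((1/T) * sum_t |r_i(t) - r_i(t_ref)|^2), one value per atom."""
    diff = frames - ref[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2).mean(axis=0))

# Toy trajectory: 250 stored poses (every 0.1 ns over 25 ns), 50 atoms.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(scale=0.05, size=(250, 50, 3)), axis=0)
print(rmsd(traj, traj[0])[:3])  # drift of each frame from the reference
print(rmsf(traj, traj[0])[:3])  # per-atom fluctuation about the reference
```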
Poses of docked compounds are calculated automatically by the docking function of the Molecular Operating Environment software.

Molecular Dynamics Simulation

MD simulation experiments were performed as previously described [43]. Briefly, the NAMD 2.6 molecular dynamics software [45], employing the CHARMM27 force field [56], was used for the simulations. Hydrogen atoms were added to the initial protein coordinates using the psfgen plugin included in the Visual Molecular Dynamics (VMD) 1.9 software [57]. Subsequently, the protein systems were solvated using TIP3P water particles and 0.15 M NaCl. The equilibration procedure comprised 1500 minimization steps followed by 30 ps of MD simulation at 10 K with fixed protein atoms. Then, the entire systems were minimized over 1500 steps at 0 K, followed by gradual heating from 10 to 310 K using temperature reassignment during the initial 60 ps of the 100 ps equilibration MD simulation. The final step involved an NTP simulation (30 ps) using the Nose-Hoover Langevin piston pressure control at 310 K and 1.013 bar for density (volume) fitting [58]. Thereafter, the MD simulation experiments were continued for 25 ns for the entire systems (20 ns for the enzyme-ligand complexes). The trajectory was stored every 0.1 ns and further analyzed with the VMD 1.9 software. The MD simulation output over 25 ns provided structural conformers sampled every 0.1 ns (250 poses) to evaluate conformational changes of the entire protein structure for the RMSD analysis. All parameters and topologies of the compounds selected for MD simulation were prepared using the online software Ligand Reader & Modeler [59] and the VMD Force Field Toolkit (ffTK) [57]. Binding free energy calculations (∆G) were performed using the free energy perturbation (FEP) method through the web-based software Absolute Ligand Binder [60] together with the MD simulation software NAMD 2.6 [45]. Hydrogen bonds and hydrophobic interactions between protein and ligand were also analyzed using the Protein-Ligand Interaction Profiler [61].
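At the core of an FEP estimate of ∆G is the Zwanzig exponential average accumulated over the alchemical windows. The sketch below (toy data only, not output of Absolute Ligand Binder or NAMD) shows that estimator with a log-sum-exp guard against overflow.

```python
import numpy as np

KT = 0.616  # k_B * T in kcal/mol at 310 K, matching the simulation temperature

def fep_delta_g(delta_u: np.ndarray) -> float:
    """Zwanzig estimator dG = -kT * ln <exp(-dU/kT)>, with dU sampled in the
    reference ensemble; the log-sum-exp form keeps the average numerically safe."""
    x = -delta_u / KT
    xmax = x.max()
    return -KT * (xmax + np.log(np.exp(x - xmax).mean()))

# Toy energy differences for 10 alchemical windows (kcal/mol); the total
# free energy change is the sum of the per-window estimates.
rng = np.random.default_rng(1)
windows = [rng.normal(loc=-0.5, scale=0.3, size=2000) for _ in range(10)]
print(f"dG ~ {sum(fep_delta_g(w) for w in windows):.2f} kcal/mol")
```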
Drug-like Properties and ADME Prediction of the Ligands

The drug-likeness of the best scoring compounds was assessed by Lipinski's rule, and molecular property predictions were calculated with the free-access website https://www.molinspiration.com/cgi-bin/properties, accessed on 20 February 2021. ADME predictions were determined using the PreADMET estimation server [62].

Conclusions

In recent years, advances in computational resources and software tools have led to the emergence of molecular docking and scoring, together with molecular dynamics, as the first phase of drug screening and discovery. In addition, the absence of cell culture models for working safely with highly pathogenic viruses makes virtual screening, docking, and dynamics studies of great importance. The genus Aloe is a rich source of phytochemicals with a wide range of therapeutic activities. Several natural products from Aloe have shown strong antiviral activity, inhibiting the replication and entry of HSV-1 and HSV-2, human cytomegalovirus (HCMV), influenza A, and polio virus. Aloin significantly reduces the replication of oseltamivir-resistant (H1N1) influenza virus. In our study, we applied computational screening to 237 natural product compounds from the genus Aloe and identified six compounds as stable potential inhibitors of the SARS-CoV-2 main protease and spike glycoprotein. Our molecular docking analysis showed that these six compounds are stable and safe. Compounds 132, 134, and 159 were the top three scoring potential inhibitors of the SARS-CoV-2 main protease. These compounds interacted strongly with amino acids in the active site of the SARS-CoV-2 main protease. Rutin (compound 159) is known to have antiviral activity against influenza virus [63]. Compounds 115, 120, and 131 were the top scoring potential inhibitors of the SARS-CoV-2 spike glycoprotein. The results highlighted chromone derivatives as potential inhibitors of SARS-CoV-2, according to their best binding-affinity scores against the target proteins among the examined compounds. The results of this in silico investigation (docking and molecular dynamics simulation) should have a great impact on drug repurposing studies. In the future, in vitro, in vivo, and clinical studies should be conducted to further validate the effectiveness of these compounds as potential treatments for COVID-19 and to identify the compounds with the best pharmacokinetic profiles. In addition, it will be of great importance to apply newly developed algorithms and to utilize developments in steered molecular dynamics for evaluating the binding of the top scoring compounds to SARS-CoV-2 target proteins [64].

Supplementary Materials: The following are available online. Figures S1-S18: Compounds isolated from the genus Aloe; Table S1: Predicted pharmacokinetics of top scoring compounds.

Conflicts of Interest: The authors declare that they have no competing financial interests or personal relationships that could influence the work reported in this paper.

Sample Availability: Samples of the compounds are not available from the authors.
5,990.6
2021-03-01T00:00:00.000
[ "Medicine", "Chemistry" ]
Is a PD game still a dilemma for Japanese rural villagers? A field and laboratory comparison of the impact of social group membership on cooperation

Local norms and shared beliefs in cohesive social groups regulate individual behavior in everyday economic life. I use a door-to-door field experiment in which a hundred and twenty villagers recruited from twenty-three communities in a Japanese rural mountainous village play a simultaneous prisoner's dilemma game. To examine whether a set of experiences shared through interactions among community members affects experimental behavior, I compare villagers' behavior under in-community and out-community random matching protocols. I also report a counterpart laboratory experiment with seventy-two university student subjects to address the external validity of laboratory experiments. The findings are three-fold. First, almost full cooperation is achieved when villagers play a prisoner's dilemma game with their anonymous community members. Second, cooperation is significantly higher within the in-group compared to the out-group treatment in both the laboratory and field experiments. Third, although a significant treatment effect of social group membership is preserved, a big difference in the average cooperation rates is observed between the laboratory and the field.

Introduction

Social interactions and social norms play a central role in decision-making in favor of cooperation and coordination. Lifelong, frequent interactions with community members suggest that full cooperation can be supported in the equilibrium of infinitely repeated prisoner's dilemma games. However, there is a lack of experimental evidence to support full cooperation in a well-controlled experimental game. Can local norms of cooperation be replicated in an experiment if subjects bring their shared experience and beliefs into the experiment? Understanding the connection between experimental behavior and the elements of everyday economic life still remains an open question (Henrich et al., 2001). In this paper, I report the first experimental evidence of full cooperation among Japanese rural villagers in a prisoner's dilemma game. The rural areas of Japan have a number of small and cohesive closed communities, in which neighboring villagers share behavioral norms and expectations over generations. Villagers repeatedly interact over time, coordinate almost every day, and take collective action each season. Most local communities seem to have institutional features that meet the requirements for a possible full-cooperation outcome that can be supported in an infinitely repeated prisoner's dilemma game. A growing experimental literature has shown that cooperation emerges only with experience, even when cooperation is consistent with equilibrium, risk dominant, and sufficiently incentive compatible (Dal Bó & Fréchette, 2011, 2018). Villagers play super games with their community members from birth, share the lifelong history of those games, and expect continued interactions for the rest of their lives. Combined with the experimental evidence summarized by Dal Bó and Fréchette (2018), these features lead us to expect that high cooperation can be observed in those communities. Can pre-existing, shared experiences facilitate coordination on the efficient equilibrium in a prisoner's dilemma experiment? How high can we expect cooperation rates to be?
I conduct a door-to-door field experiment in a Japanese rural village to capture a moment of everyday interactions among villagers as a stage game of super games. Group identity can affect individual behavior toward cooperation and coordination (Efferson et al., 2008; Solow & Kirkwood, 2002). Using minimal groups induced in a laboratory, a series of experimental studies show that subjects behave more pro-socially with in-group members and can use a salient group identity to coordinate with in-group members on the better equilibrium in a coordination game (Chen & Chen, 2011; Chen & Li, 2009). In real social groups, members share a set of beliefs and behavioral norms formed through mutually experienced interactions. Social group identity thus provides more than just a label: it provides shared expectations based on the history of group members. Bernhard et al. (2006) use several native tribes in Papua New Guinea as social groups and show that in-group favoritism regulates altruistic norm enforcement. Goette et al. (2006, 2012) utilize randomly assigned social groups and find that subjects are more likely to cooperate when they are paired with in-group members. In this paper, I explore how strongly local norms and shared beliefs from everyday interactions among neighboring villagers affect experimental behavior. I do this by comparing cooperative behavior in a prisoner's dilemma game when villagers play with their community members and when they play with out-community members. A growing body of literature addresses the external validity of laboratory experiments with university student subjects (Snowberg & Yariv, 2021). The question of whether the results from a university-student sample can be generalized to other populations remains a fundamental methodological issue in the field of experimental economics. To test the external validity of laboratory findings, a series of experimental studies compare the behavior of standard subjects with that of other subject pools. Some of these studies simply compare experimental measures such as cooperation rates between different subject pools (Anderson et al., 2013; Bigoni et al., 2013; Burks et al., 2009). Additionally, a few compare the difference of experimental treatment effects, for example, of having an opportunity for punishment, between different subject pools (Alm et al., 2015; Bortolotti et al., 2015). For example, the literature suggests that university students are less prosocial compared to non-student subjects (Anderson et al., 2013; Bortolotti et al., 2015; Burks et al., 2009). I report a counterpart laboratory experiment with university student participants that shares the same procedures as the field experiment with villagers. I test the external validity of the experimental treatment effect as well as the cooperation rates in both experiments. I conduct a door-to-door field experiment where 120 villagers were selected from 23 communities in a Japanese rural village to play a simultaneous prisoner's dilemma game. To examine whether experiences shared through lifelong, everyday interactions among community members affect experimental behavior, I compare villagers' behavior under in-community and out-community random matching protocols. I also report a counterpart laboratory experiment with 72 university student subjects to address the external validity of laboratory experiments, or the question of whether the results from a university student sample can be generalized to other populations. The findings are three-fold.
First, almost full cooperation is achieved when villagers play a prisoner's dilemma game with their anonymous community members. Second, cooperation is significantly higher in the in-community compared to the out-community treatment in the field experiment with villagers, even though a very high cooperation rate is still observed in the out-community treatment. Third, regarding external validity, a significant treatment effect of social group membership is preserved in both the laboratory and field experiments, although a big difference in average cooperation rates is observed between them.

Institutional background: local communities in mountainous villages

The rural areas of Japan have a number of small and tightly closed communities. One of my interests is to explore how local norms of cooperation and experience shared by community members affect behavior in an experimental game. The study site, Kumakogen town, is located in the center of Ehime prefecture (33° 39ʹ N, 132° 54ʹ E), approximately 600 km southwest of Tokyo (Mitani & Shimada, 2021). The town is very mountainous and has a total area of 583.7 km², which is almost ten times bigger than the land area of Manhattan. In 2014, the resident population of the town was 9327, with 45.3% of residents older than 65 years of age, indicating that the town faces an aging and shrinking population. In total there are 219 communities in the town, and many communities are remote and isolated. A mail survey was conducted by approaching all 219 community leaders, and 115 responses were collected, with an overall response rate of 52.5%. The survey reveals some institutional background. The median community size is 14 households, which can be small enough for community enforcement to support cooperation as a possible equilibrium outcome in an infinitely repeated prisoner's dilemma game. Even though previous experimental studies show that cooperation will not emerge under anonymous random matching with identified past behavior unless the group size is very small (Camera et al., 2012; Duffy & Ochs, 2009), I consider the community size small enough for cooperation to evolve given the lifelong, frequent interactions within community members. The survey reveals information about community organizations and their collective activities. 93% of households are members of local community organizations. 95% of organizations have collective activity agreements, and more than 30% of them have an enforcement instrument using a monetary penalty. 41% of organizations own a community forest. Regarding collective decision rules, about 40% of communities employ majority voting, about 35% of them employ consensus decision-making (which requires unanimity to reach a decision), and the rest of them require a majority approval for a leader's proposal. This suggests frequent interactions among community members and sustained local norms of cooperation. The survey provides evidence that indicates cohesive interactions among community members over generations. The majority of communities do not have a single immigrant household in their current generation. Only 7% of all households immigrated into the town within the current generation. Thus, most villagers have been members of their community since their birth. Only 3% of them commute outside of the town to work. Most of the villagers are retired or work in agricultural, forestry, or public sector jobs in their residential region. Many communities are remote and isolated in mountain valleys.
For example, it takes about 2 h to drive up to some communities from the center of the town, with some communities located an hour away from a state road. Institutional characteristics of these communities are summarized as follows. First, the community size is small. The size in the sample is slightly bigger than that of the Swiss army training platoons used as social groups in Goette et al. (2012) and much smaller than that of the Israeli kibbutz communities used in Ruffle and Sosis (2006). Second, local community membership is not self-selected, and most villagers have been members since birth. Unlike in the case of the Israeli kibbutz, where prosocial people usually choose to join the community (Ruffle & Sosis, 2006), self-selection of community membership will not account for prosocial behavior in this study. Third, villagers are relatively homogeneous across communities in terms of individual, demographic, historical, and cultural characteristics. In addition, there is no indication of competition between communities, like that among tribes or college fraternities (Bernhard et al., 2006; Kollock, 1998). I use 120 villagers from 23 communities as the subject pool to explore whether local community membership impacts experimental behavior. This is done by comparing their behavior under in-community and out-community anonymous random matching. Local communities in this study area are naturally formed social groups. The institutional characteristics indicate that community members have mutually established behavioral beliefs and norms of cooperation based on their lifelong experience of social interactions and personal history. I assume that most villagers will be certain about the norms and shared beliefs of cooperative behavior where their own community is concerned, but they might be unsure when they interact with villagers outside their community. Even though most communities might have similar behavioral norms of cooperation and most villagers know it, the extent to which villagers mutually share beliefs regarding other villagers' strategies in an experimental game would vary between being paired with in-community members and being paired with out-community members. Following the community leader survey, we contacted the community leaders who revealed their willingness to participate in a door-to-door survey and collected a potential list of household volunteers. We recruited 120 participants from 23 communities based on the available number of households in each community and their available date and time. They were then assigned to either the in-community or the out-community treatment. In the in-community treatment, villagers are matched anonymously but informed that the other players in their experimental group are members of their community. The out-community treatment is the same except that villagers are informed that the others are members of different communities. Because anonymity is preserved, villagers only know whether they are paired with someone from their community or from different communities, but not with whom. Despite anonymous random matching, villagers in the in-community treatment may not face strategic uncertainty about the others' choices because of sufficiently strong shared behavioral norms. In the out-community treatment, by contrast, given the town population of about 10,000, it might be hard for villagers to believe that the others share the same behavioral norms in the experiment.
Note that all communities in the town share relatively homogeneous demographic, cultural, and historical characteristics. Goette et al. (2006, 2012) argue that membership in naturally formed social groups can be confounded with subjects' individual, demographic, and cultural characteristics. Community membership is not randomly assigned here; however, neither does it involve selection based on individual characteristics. There is little evidence of differences in demographic and cultural backgrounds among local communities in the town. There should also be little statistical difference between treatments in the beliefs and norms that subjects bring into the experiment, because all subjects are recruited from the same pool of villagers. However, the information available to a subject regarding the opponents varies across treatments. In the in-community treatment, subjects know that the anonymous opponents are three of their community members, which makes it easy for them to form beliefs about the others' choices. In the out-community treatment, on the other hand, subjects know their opponents are not their community members, so they might wonder what norms the opponents follow. The observed difference in cooperation between treatments in a prisoner's dilemma experiment can therefore be attributed to the difference in shared beliefs and experiences between treatments. I try to see whether a prisoner's dilemma game is still a dilemma for community members. Game To test whether a set of experiences shared by neighboring villagers regulates pro-social and indirect trust behaviors, I develop a simultaneous prisoner's dilemma game with four local player interactions around a circle. Four anonymous villagers are arranged in a circle, with every villager having two directly connecting villagers to their left and right. These direct neighbors are referred to as the posterior (left) and anterior (right) participants in the instructions. A villager i has villager (i − 1) as her right participant and villager (i + 1) as her left participant (for 1 ≤ i ≤ 4, let 0 be 4 and 5 be 1). Each villager participates in a one-shot game in which they are asked to choose simultaneously whether to send their whole endowment of 1000 JPY (about 9.5 USD at the time of the experiment) to the participant on the left or to keep it in their pocket. The amount sent is doubled. A player's payoff is thus determined by her own choice and the right participant's choice: she receives 2000 JPY if the right participant sends the money and 0 otherwise, in addition to the 1000 JPY endowment if she keeps it. The payoff matrix for player i is illustrated in Table 1. Keeping means defection while sending means cooperation. The payoffs share the same features as a standard two-player prisoner's dilemma in the sense that defection strictly dominates cooperation for each player. This game has a unique Nash equilibrium in which all players defect, whereas the socially optimal outcome is for all players to cooperate. If this is the stage game of an infinitely repeated game, cooperation is consistent with equilibrium and is risk dominant in the super game for continuation probabilities greater than 2/3 (see the sketch below). Pro-social and indirect trust behaviors can be observed in the game in the sense that each participant plays the dictator role and simultaneously serves as a recipient through an indirect participant.
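To make the payoff structure and the 2/3 threshold concrete, the following is a minimal sketch in Python. The stage-game payoffs implied by the design are T = 3000 (defect while the right neighbor cooperates), R = 2000 (mutual cooperation), P = 1000 (mutual defection), and S = 0 (cooperate while the neighbor defects); the threshold formulas assume grim-trigger strategies, which the paper does not spell out, so treat this as an illustration rather than the author's own derivation.

# Stage-game payoffs of the circle game: keeping pays the 1000 JPY
# endowment; money sent by the right neighbor arrives doubled (2000 JPY).
ENDOWMENT, MULTIPLIER = 1000, 2

def payoff(own_sends: bool, right_neighbor_sends: bool) -> int:
    """Payoff of player i given her own choice and her right neighbor's."""
    kept = 0 if own_sends else ENDOWMENT
    received = MULTIPLIER * ENDOWMENT if right_neighbor_sends else 0
    return kept + received

T = payoff(False, True)   # temptation: defect on a cooperator -> 3000
R = payoff(True, True)    # reward: mutual cooperation         -> 2000
P = payoff(False, False)  # punishment: mutual defection       -> 1000
S = payoff(True, False)   # sucker: cooperate with a defector  -> 0
assert T > R > P > S      # standard prisoner's dilemma ordering

# Grim trigger is subgame perfect for delta >= (T - R) / (T - P) and is
# risk dominant against always-defect for delta >= 1 - (R - P) / (T - S).
delta_spe = (T - R) / (T - P)                 # 0.5
delta_risk_dominant = 1 - (R - P) / (T - S)   # 2/3, the threshold in the text
print(delta_spe, delta_risk_dominant)

The 0.5 value also corresponds to the SPE-threshold index (π(D,C) − π(C,C))/(π(D,C) − π(D,D)) used in the appendix comparison with Snowberg and Yariv (2021).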
Treatments I use a between-subjects design with two treatments: in-community and out-community. In the in-community treatment, villagers are matched anonymously but informed that the other three players in their experimental group are members of the same community. The out-community treatment is the same except that villagers are informed that the other three players are members of different communities but residents of the town. Despite anonymous random matching, villagers in the in-community treatment may not face strategic uncertainty about the others' choices because of sufficiently strong shared behavioral norms. If this is the case, participants know that an implicitly agreed strategy would be either cooperation or defection based on the history of the super game (i.e., daily economic interactions) outside of the experiment. In the out-community treatment, by contrast, it might be hard for villagers to believe that the others share the same behavioral norms in the strategic environment induced by the experiment. Procedures The door-to-door field experiment was conducted with paper and pencil at each participant's residence. A reminder of the interview date and time was mailed to the 120 participants recruited from 23 communities, but they were not informed about the economic experiment in advance. 72 of them were randomly assigned to the in-community treatment (18 groups) while 48 were assigned to the out-community treatment (12 groups); a sketch of such an assignment appears at the end of this section. The experiment was carried out by trained undergraduate experimenters visiting each participant at their home (there were 14 experimenters in total, eight female and six male). Twelve sessions were conducted in 2 days with 101 participants in total: 19 of the recruited participants (13 in the in-community treatment and 6 in the out-community treatment) cancelled on short notice or did not show up for their appointment. All experiments were nevertheless implemented as planned; an experimenter could not know at the time of their session whether there were any cancellations in their group, and the way participants' payoffs were calculated was slightly modified in case of cancellations in their group. At each session, a maximum of 14 experimenters visited their assigned participants and started the experiment at the same time in different places (the number of participants in a session ranged from 4 to 14, with a median of 10). This procedure prevented villagers from communicating with other villagers. The experiment lasted 30-45 min, and the subjects earned on average 1901 JPY from this specific experimental game. The economic experiment was followed by a post-experiment interview. Special care was taken to ensure anonymity and minimize the experimenter effect. Experimental earnings were mailed to the home address a week after the experiment. A counterpart laboratory experiment was conducted at Kyoto University with 72 undergraduate students, using a classmate matching protocol as the in-group treatment: 24 undergraduate students were recruited from a specific freshman class of about 40 students at a specific department, while in the out-group treatment 48 undergraduate students were recruited from the general population at Kyoto University through a standard recruiting process. Experimental procedures and instructions were identical to those used in the door-to-door field experiment with villagers, except that all experiments were run on a computer in an experimental laboratory with 12 subjects at the same time.
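As an illustration of the assignment step above, here is a minimal sketch assuming simple uniform randomization; the participant IDs and grouping routine are hypothetical, and it deliberately ignores the constraint that in-community groups must be formed within a single community.

import random

def assign_treatments(participants: list[str], n_in: int = 72, seed: int = 1) -> dict:
    """Randomly split participants into the two arms and partition each
    arm into anonymous 4-player groups (18 and 12 groups here)."""
    rng = random.Random(seed)
    pool = participants[:]
    rng.shuffle(pool)
    in_arm, out_arm = pool[:n_in], pool[n_in:]
    groups = lambda arm: [arm[i:i + 4] for i in range(0, len(arm), 4)]
    return {"in_community": groups(in_arm), "out_community": groups(out_arm)}

assignment = assign_treatments([f"villager_{i:03d}" for i in range(120)])
print(len(assignment["in_community"]), len(assignment["out_community"]))  # 18 12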
Subject pools I collected data from 101 villagers and 72 students. Table 2 provides summary statistics of the participant characteristics gathered by a post-experiment questionnaire, along with the results of balance tests determining whether the mean of each characteristic is statistically the same between treatments in the field. The results indicate that the randomization was properly performed. "Trust" is a survey measure of general trust. Specifically, the question "In general, one can trust people" was asked using a 4-point scale from "disagree fully (1)," "disagree somewhat (2)," "agree somewhat (3)," to "agree fully (4)." Fig. 1 shows the results for villagers (navy bars) and students (orange bars), suggesting that villagers tend to report relatively higher measures of trust than students. The variables "Reward" and "Punish" were derived from the question "You are willing to reward (punish) others in return for kind (unfair) treatment" on a 4-point scale. Figures 2 and 3 suggest that villagers are more likely to agree with rewarding, while they disagree with punishing, when compared with students. "GFavor" was constructed by combining answers to two questions on a 4-point scale: "You are willing to help neighbors who are in need" and "You are willing to help strangers who are in need." A value of 0 is consistent with no evidence of in-group favoritism, while positive scores are consistent with in-group favoritism. Figure 4 shows no difference between villagers and students. Results The findings from the door-to-door experiment with villagers and the counterpart laboratory experiment with undergraduate students are organized into three main results. The first result shows how high cooperation rates can be when villagers play with their own community members. Result 1 Almost full cooperation was achieved when villagers played a one-shot simultaneous prisoner's dilemma game with their anonymous community members. Table 3 reports the results of the door-to-door experiment with villagers. Only two of 59 villagers (3.4%) chose not to cooperate in the in-community treatment. This suggests that the "Prisoner's Dilemma" is no longer a dilemma for neighboring rural villagers in Japan. It rather seems that they easily coordinate on the efficient equilibrium in which cooperation is the established convention. At the group level, 16 out of 18 groups in the in-community treatment achieved full cooperation (i.e., the socially optimal outcome). The cooperation rate observed in the in-community treatment is surprisingly high when compared with previous findings. Goette et al. (2006) find that 69% of Swiss Army officer trainees cooperate in a similar simultaneous prisoner's dilemma game when they interact with a member of their own platoon. However, the villagers' in-community cooperation rate is more comparable to, but still higher than, the rates in later rounds of super games of infinitely repeated prisoner's dilemma experiments or in public goods experiments with a punishment opportunity (Camera et al., 2012; Dal Bó & Fréchette, 2018; Fehr & Gächter, 2000). This finding suggests that a set of beliefs and local norms of cooperation that subjects bring to the frame of an experiment can regulate behavior in the experiment. Result 2 In the field experiment with villagers, cooperation was significantly higher in the in-community than in the out-community treatment. The effect of community membership on cooperation was robust after controlling for potential confounders. Cooperation was significantly higher in the in-community than in the out-community treatment at the 1% level (Table 3).
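As a quick illustration of this comparison using the raw counts reported in the text (57 of 59 villagers cooperating in-community vs. 34 of 42 out-community), one could run a two-sided Fisher's exact test; the paper's own test is the one reported in Table 3, so this sketch is illustrative only.

from scipy.stats import fisher_exact

# Rows: treatment; columns: [cooperate, defect]. Counts from the text:
# in-community, 2 of 59 defected; out-community, 8 of 42 defected.
table = [[57, 2],
         [34, 8]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")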
This treatment effect remained robust after controlling for individual characteristics as well as other potential confounders. Table 4 reports the estimation results from probit regressions, where the dependent variable equals 1 if the subject cooperates (a sketch of this kind of specification appears at the end of this subsection). The coefficient of the in-community treatment is positive and statistically significant at 5% in all model specifications. Models 2 and 4 include individual characteristics (such as gender, age, and the social value orientation measures presented in Table 2) as independent variables. The estimation results indicate that pro-social behavior is positively associated with general trust and age at the individual level. Models 3 and 4 include potential confounders as control variables. "Male experimenter" aims to capture an experimenter gender effect. "Cancellation" equals 1 if there was any cancellation in the group. Since four villages were merged to form the current town in 2004, village fixed effects aim to capture potential historical and cultural differences among the old village districts. The estimation results show that the treatment effect is robust to these concerns. The observed treatment effect of social group membership is consistent with previous findings on group identity and in-group favoritism (Bernhard et al., 2006; Chen & Li, 2009; Goette et al., 2006). What makes this Japanese field setting peculiar is that a very high cooperation rate was observed even when villagers played with anonymous random out-community members. Only eight of 42 villagers (19%) chose not to cooperate in the out-community treatment. Thus, 81% of villagers still cooperated even though they were paired with strangers in the town who were not part of their community. This cooperation rate is fairly high given the size of the town, with a population of about 10,000 and an area ten times that of Manhattan. This suggests that cooperation can emerge under anonymous random matching with a relatively large group size, even though cooperation would not necessarily stabilize over time if villagers continued to match with strangers. Regarding villagers' motivation for cooperation in the experiment, because we are not able to control any future interactions that may or may not occur outside the frame of the experiment, I cannot rule out super-game motivations such as punishments, rewards, and a willingness to avoid the "second-order free rider problem" (even though participants cannot identify their opponents in the experimental game, especially in the out-community treatment). Further investigation is required. Result 3 Average cooperation rates were much higher among villagers than among university students in all treatments. Although a large difference in averages was observed, a significant treatment effect of social group membership was preserved in both the laboratory and the field. Table 5 reports the results of the counterpart laboratory experiment with university students. While 41.7% of student subjects cooperated when they were paired with anonymous classmates, only 18.8% cooperated when they were paired with anonymous strangers. The difference is statistically significant at the 5% level, as shown in Table 5. Both the average cooperation rates and the treatment effect are consistent with previous studies (Cooper et al., 1996; Goette et al., 2006).
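For concreteness, the following is a minimal sketch of the kind of probit specification reported in Tables 4-6, written with statsmodels; the file name and column names (cooperate, in_community, trust, age, male_experimenter, cancellation, village) are hypothetical placeholders rather than the paper's actual variable names, and the standard-error treatment is not specified in the text.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per participant in the field experiment.
df = pd.read_csv("villagers.csv")  # placeholder file name

# In the spirit of Table 4: cooperation (0/1) regressed on the treatment
# dummy, individual characteristics, and potential confounders, with
# old-village fixed effects entered via C(village).
model = smf.probit(
    "cooperate ~ in_community + trust + age + male_experimenter"
    " + cancellation + C(village)",
    data=df,
)
result = model.fit()
print(result.summary())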
With respect to external validity concerns, this result supports a consistent finding in the literature that university students are less pro-social compared with non-student subjects (Anderson et al., 2013; Bortolotti et al., 2015; Burks et al., 2009; Snowberg & Yariv, 2021). However, Bigoni et al. (2013) report the opposite result, and the magnitude of the difference varies across studies depending on the type of non-student adult population. These results challenge the generalization of results from a student population regarding the magnitude of cooperation rates. Nonetheless, the good news is that the treatment effect is preserved in both student subjects and non-student villagers, implying that the treatment effect of social group membership is externally valid. The treatment effect of social group membership and the large difference in cooperation between villagers and students remained robust after controlling for individual characteristics. Table 6 reports the estimation results from pooled probit regressions, where the dependent variable equals 1 if the subject cooperates. The coefficient of the In-community treatment is positive and statistically significant at 5% in all model specifications. This indicates that a treatment effect is preserved both in the field and in the laboratory. The coefficient of a Field dummy is positive and statistically significant at the 1% level in all model specifications, indicating a statistically significant mean difference between the field and the laboratory. Models 7 and 8 show that the coefficient of the interaction term between In-community and Field is positive but not statistically different from 0. This implies that the effect of community membership in the field is not statistically greater than the effect of classmate matching in the laboratory. Models 6 and 8 include individual characteristics (such as gender and the social value orientation measures presented in Table 2) as independent variables. The results indicate that pro-social behavior is positively associated with general trust at the individual level. Note that Age was removed from these pooled models because Age was highly correlated with the field/laboratory indicator. A t-test with unequal variances shows a statistically significant difference at the 1% level in Age between villagers and students (t = − 40.15, p < 0.001); the youngest participant in the field was 36, while the oldest in the laboratory was 24. Figure A1 in the Appendix shows a comparison of the age distributions (kernel density estimates) of villagers and students. Table A1 in the Appendix presents the estimation results of regressions that add Age to the models reported in Table 6. The coefficients of Field in Models 9 and 11 are still significant at the 5% level after including Age in the models. However, the coefficient becomes insignificant after controlling for individual social value orientation measures, as shown in Models 10 and 12. The following two observations suggest that the observed full cooperation is likely to be unique to neighboring villagers in rural Japan and is unlikely to be replicated with non-student samples of comparable ages in urban areas. First, Table A2 in the Appendix shows cooperation rates by age group. The result indicates that cooperation rates are stable across these age groups. The cooperation rate of the younger groups is 1 or nearly 1 in the in-community treatment (1.00 for 60 years old or younger, N = 15; 0.97 for 70 years old or younger, N = 31).
Second, Table A3 shows cooperation rates by age group, calculated from the percentage of subjects choosing the cooperative strategy in prisoner's dilemma games among the representative sample of the United States population reported in Snowberg and Yariv (2021); the average SPE threshold in their PD games, i.e., (π(D,C) − π(C,C))/(π(D,C) − π(D,D)), is comparable to the one in this study. This provides no evidence that cooperation rates among the representative sample are significantly associated with age groups. These observations suggest that age is not solely responsible for the large difference in average cooperation rates observed between villagers and university students. Nonetheless, my data do not allow me to decompose the effect of Field into the effect of age and the effect of local norms shared by villagers. This is an important direction for future research. Concluding remarks A consistent body of experimental evidence on prisoner's dilemma games has shown that experimental behavior significantly deviates from the free-riding Nash equilibrium on average. However, so far, no experimental evidence has been documented for full cooperation. Local norms and shared beliefs in cohesive social groups regulate individual behavior in everyday economic life. Can we replicate this in a well-controlled economic experiment? In this paper, I report on a door-to-door field experiment where 120 villagers were recruited from 23 communities in a rural, mountainous Japanese village to play a simultaneous prisoner's dilemma game. To see whether local community membership affects experimental behavior, I compared villagers' behavior under in-community and out-community random matching protocols. I also report on a counterpart laboratory experiment with 72 university student subjects to address concerns about the external validity of laboratory experiments, i.e., the question of whether the results from a university student sample generalize to other populations. The findings are three-fold. First, almost full cooperation was achieved when villagers played a prisoner's dilemma game with their anonymous community members. Second, the cooperation rate was significantly higher in the in-community than in the out-community treatment in the field experiment with villagers, even though a very high cooperation rate was observed even in the out-community treatment. This treatment effect of social group membership on cooperation is robust to potential confounders. Despite the lack of variation in cooperation among villagers due to a very high cooperation rate, the pro-social behavior of villagers was positively associated with the general trust and age of participants. Third, regarding external validity concerns, a significant treatment effect of social group membership was preserved in both the laboratory and the field, although a large difference in average cooperation rates between villagers and university students was observed. This is in line with previous studies that compared university students to representative samples (Snowberg & Yariv, 2021).
To confirm that Japanese rural villages provide conditions that sustain cooperation, it would be highly desirable to know how much of the observed difference in cooperation rates between villagers and university students is attributable to the effect of intensive social interactions among neighboring villagers. To this end, future studies should address whether and to what extent the rate of cooperation observed among rural villagers exceeds the cooperation rate observed among non-student samples of comparable ages in urban areas. In addition, it would be important to gather further evidence from other rural areas of Japan to explore whether full cooperation is widely observed among rural villagers in Japan.
7,448
2021-07-26T00:00:00.000
[ "Economics" ]
Molecular Longitudinal Tracking of Mycobacterium abscessus spp. during Chronic Infection of the Human Lung The Mycobacterium abscessus complex is an emerging cause of chronic pulmonary infection in patients with underlying lung disease. The M. abscessus complex is regarded as an environmental pathogen, but its molecular adaptation to the human lung during long-term infection is poorly understood. Here we carried out a longitudinal molecular epidemiological analysis of 178 M. abscessus spp. isolates obtained from 10 cystic fibrosis (CF) and 2 non-CF patients over a 13-year period. Multi-locus sequence and molecular typing analysis revealed that 11 of 12 patients were persistently colonized with the same genotype during the course of the infection, while replacement of a M. abscessus sensu stricto strain with a Mycobacterium massiliense strain was observed for a single patient. Of note, several patients, including a pair of siblings, were colonized with closely related strains, consistent with intra-familial transmission or a common infection reservoir. In general, a switch from smooth to rough colony morphology was observed during the course of long-term infection, which in some cases correlated with increasing severity of clinical symptoms. To examine evolution during long-term infection of the CF lung we compared the genome sequences of 6 sequential isolates of Mycobacterium bolletii obtained from a single patient over an 11-year period, revealing a heterogeneous clonal infecting population with mutations in regulators controlling the expression of virulence factors and complex lipids. Taken together, these data provide new insights into the epidemiology of M. abscessus spp. during long-term infection of the CF lung, and the molecular transition from saprophytic organism to human pathogen. Introduction The Mycobacterium abscessus complex is a group of rapidly growing mycobacteria (RGM) that is associated with an array of human infections typically affecting the lungs, skin or soft tissues [1][2][3][4]. Chronic lung infection is most frequently observed in patients with underlying lung disease, especially those suffering from cystic fibrosis (CF) [5,6], and M. abscessus spp. infections in CF patients are linked to disease progression [2]. The M. abscessus complex is subdivided into the 3 species M. abscessus sensu stricto, Mycobacterium massiliense and Mycobacterium bolletii, which exhibit clinically relevant differences in antibiotic sensitivity profiles [7][8][9]. Chronic infections caused by environmental pathogens are characterized by the ability of the infecting organisms to adapt to a very different niche habitat, leading to changes in phenotype influencing colony morphology, inflammatory response, and antibiotic resistance [10][11][12]. Like other RGM, the M. abscessus spp. are saprophytic organisms associated with water reservoirs linked to human activity [13,14]. M. abscessus sensu stricto is innately resistant to many classes of antibiotic [15] and readily acquires resistance to clarithromycin during infection [16]. In addition, M. abscessus sensu stricto exhibits differences in colony morphology, including smooth colony variants typical of environmental and early infection isolates [6], which are replaced over a period of years with rough variants capable of invading macrophages and respiratory epithelial cells and associated with severe inflammation [17,18]. Previous studies have demonstrated the long-term persistence of single M. abscessus spp.
strains during chronic infection of the CF lung, with occasional sharing of strains between sibling pairs, implying that transmission between patients may occur [19]. A recent study provided evidence for an outbreak involving M. massiliense infection of 5 patients attending a CF clinic, implying that some strains of the M. abscessus complex may have epidemic potential [1]. The phenomenon of long-term persistence of the M. abscessus complex during chronic infection of the human lung is well established. However, the relative prevalence of the 3 M. abscessus species among long-term infected patients, and the capacity of the M. abscessus complex to transmit between patients, have not been well examined. Here, we carry out longitudinal molecular tracking of M. abscessus complex isolates from chronically infected patients with multi-locus sequence and PCR-based bacterial typing. We also combine whole genome sequencing of sequential isolates with phenotypic analysis of colony morphology and antibiotic sensitivity to examine the adaptive diversification of M. abscessus spp. during chronic infection of the human lung, revealing a genetically and phenotypically heterogeneous infecting population. In addition, the data provide molecular correlates of adaptation of M. abscessus spp. to the human lung that will inform future studies investigating M. abscessus pathogenesis. M. abscessus Complex Isolates and Culture Conditions All clinical isolates were referred to the Scottish Mycobacteria Reference Laboratory (SMRL) for identification, susceptibility testing, DNA extraction and long-term storage. Ethical approval for use of samples/data was provided by the National Health Service, South East of Scotland HSS BioResource facility operating within the Tissue Act Scotland 2006. The Scientific Officer indicated that the research does not need NHS ethical review under the terms of the Governance Arrangements for Research Ethics Committees (A Harmonised Edition). In particular, research limited to secondary use of information previously collected in the course of normal care (without an intention to use it for research at the time of collection) is generally excluded from REC review, provided that the patients or service users are not identifiable to the research team carrying out the research. The review board waived the need for written informed consent from the participants. A total of 178 M. abscessus spp. pulmonary isolates were obtained from 12 patients in Scotland between 1998 and 2010, including 10 unrelated patients and 1 sibling pair. The annual incidence of M. abscessus spp. infection is estimated at ~887/100,000 in Scottish cystic fibrosis patients, or about 8 patients annually (Scottish Mycobacteria Reference Laboratory, data not shown). Of the 12 patients, 10 suffered from cystic fibrosis and 2 had significant comorbidities, including cirrhotic alcoholic liver disease and acute myeloid leukemia, both with chronic pulmonary disease (Table 1 summarizes patient, isolate and clinical information). Genomic DNA Extraction Genomic DNA (gDNA) was extracted from bacteria following the method described by van Soolingen et al. [20] and previously used for M. abscessus spp. [21]. DNA yield was quantified using a Nanodrop ND-1000 Spectrophotometer (Thermo Scientific, USA). ERIC-PCR Typing For ERIC-PCR, 50 µl reactions included 2 µM forward and reverse oligonucleotide primers described by Sechi et al.
[22], 0.25 U of Go-Taq Flexi DNA Polymerase, 5x Green Go-Taq Flexi Buffer, 1.5 mM MgCl2, 200 µM dNTPs (Promega, Hampshire, UK) and approximately 10 ng gDNA. Thermocycling conditions were 3 min at 95°C followed by 30 cycles of 95°C for 1 min, 55°C for 1 min and 72°C for 5 min, with a final extension of 10 min at 72°C. Agarose gel electrophoresis of ERIC-PCR products was followed by comparative analysis of the resolved profiles with the BioNumerics program v5, with clustering by UPGMA with Dice similarity. Independent genomic DNA extractions resulted in indistinguishable ERIC-PCR profiles, demonstrating the method to be highly reproducible (data not shown). Multi-locus Sequence Typing (MLST) MLST was carried out as described previously [9]. Briefly, 50 µl PCR reactions specific for the 7 housekeeping loci included 0.2 µM forward and reverse oligonucleotide primers, 0.25 U of Go-Taq Flexi DNA Polymerase, 5x Green Go-Taq Flexi Buffer, 1.5 mM MgCl2, 200 µM dNTPs (Promega, Hampshire, UK) and approximately 10 ng gDNA. Thermocycling conditions were 3 min at 95°C followed by 30 cycles of 95°C for 1 min, 55-60°C for 1 min and 72°C for 1 min, with a final extension of 10 min at 72°C. Sequencing was carried out by The GenePool Sequencing Service (King's Buildings, University of Edinburgh, UK), and sequences were assembled using Geneious 5.4.2 [23]. Alleles and sequence types were determined using the Institut Pasteur M. abscessus MLST database (http://www.pasteur.fr/recherche/genopole/PF8/mlst/references_abscessus.html). Single and multi-gene phylogenies were reconstructed using a neighbor-joining method with the HKY model of nucleotide substitution, with support for nodes assessed by 1000 bootstrap replicates. Whole Genome Sequencing and Analysis Whole genome sequencing was carried out with the Illumina GA2 sequencing platform. The FASTX toolkit [24] was used to assess read quality, remove adaptor sequences, remove 3 bp from the 5′ end of each read and filter reads so that at least 80% of bases within a read were of PHRED score Q30 or above. Paired-end reads were mapped to a draft genome sequence of M. bolletii [25] using the BWA short read aligner [26]. Polymorphic sites were identified at loci covered by at least 3 reads where the alternative base was present in at least two thirds of the reads with a PHRED-equivalent score greater than 30, and inspected manually. As an additional confirmation, reads were mapped to the genome of M. abscessus sensu stricto strain ATCC 19977 (accession number CU458896), and only those variants present in both reference genomes are reported. A neighbor-joining phylogeny was reconstructed based on the polymorphic sites identified using the HKY substitution model. 1000 bootstrap replicates were used to determine support for key nodes.
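To make the variant-calling rule above concrete, here is a minimal sketch of the site filter (coverage of at least 3 reads, the alternative base supported by at least two thirds of reads, base quality above Q30); the pileup data structure and names are hypothetical, not the study's actual pipeline, and the sketch applies the coverage check after quality filtering, a detail the methods leave open.

from dataclasses import dataclass

@dataclass
class PileupSite:
    """Hypothetical per-site summary from a read alignment."""
    ref_base: str
    bases: list[str]       # observed bases at this site
    base_quals: list[int]  # PHRED-equivalent quality per observed base

def is_polymorphic(site: PileupSite, min_cov: int = 3,
                   min_alt_frac: float = 2 / 3, min_qual: int = 30) -> bool:
    # Keep only high-quality base calls (PHRED > 30).
    hq = [b for b, q in zip(site.bases, site.base_quals) if q > min_qual]
    if len(hq) < min_cov:
        return False
    alt = [b for b in hq if b != site.ref_base]
    if not alt:
        return False
    # The same alternative base must account for >= 2/3 of HQ reads.
    top_alt = max(set(alt), key=alt.count)
    return alt.count(top_alt) / len(hq) >= min_alt_frac

site = PileupSite("A", list("GGGGA"), [35, 33, 40, 38, 36])
print(is_polymorphic(site))  # True: 4 of 5 high-quality reads support G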
Molecular Population Genetic Analysis of M. abscessus spp. from Chronically Infected Patients In order to examine the long-term persistence of M. abscessus spp. during infection of the human lung, we analyzed a total of 178 sequential M. abscessus complex isolates from 12 patients and carried out multi-locus sequence type (MLST) analysis of each unique ERIC-PCR genotype identified in each patient. ERIC-PCR has been shown to be highly discriminatory for grouping outbreak strains of M. abscessus spp. and does not suffer from the problems of degradation that occur with some electrophoresis-based methods [21]. In total, 12 genotypes representing 9 distinct sequence types (ST), including 6 novel STs, and 3 M. abscessus complex species were identified among the 12 patients (Fig. 1; Table 1). For 11 of 12 patients, a single genotype was identified during long-term infection (2 to 12 years), consistent with persistent infection with the same strain. In contrast, for patient 9, the first 5 chronological isolates exhibited a genotype that was distinct from that of the subsequent 7 isolates, suggesting replacement of the original infecting strain during the course of infection (Fig. 1). MLST analysis of the isolates from patient 9 revealed that an M. abscessus sensu stricto strain was replaced by an M. massiliense strain (Table 1). Our findings are in agreement with other studies that compared sequential M. abscessus isolates from CF patients, highlighting the long-term persistence of M. abscessus and its treatment-refractory nature [6,27,28]. However, the data indicate for the first time that replacement of M. abscessus by unrelated strains may occur during chronic infection. M. massiliense ST23 is Prevalent among CF Patients Single-gene trees based on argH, glpK, gnd or murC sequences exhibited 3 distinct clades that co-segregate with the type strains M. abscessus NCTC 13031, M. massiliense DSM 45103 and M. bolletii DSM 45149 (Fig. S1 in File S1). In contrast, the trees constructed with cya, pta and purH sequences have topologies that imply horizontal transfer of these alleles among the M. abscessus complex. Of note, substantial horizontal gene transfer between members of the M. abscessus complex has previously been demonstrated [9]. Phylogenetic reconstruction based on the concatenated sequences of the 4 MLST loci that did not display evidence of horizontal transfer resulted in a tree with 3 well-supported clades, each including one of the M. abscessus species type strains (Fig. 2), facilitating species designation for the clinical isolates. Of the 12 patients, 4 were infected with M. abscessus sensu stricto, 5 with M. massiliense, 2 with M. bolletii and 1 (patient 9) was infected with M. abscessus sensu stricto initially, followed by replacement with M. massiliense (Fig. 2; Table 1). Although a number of studies have reported a higher prevalence of M. abscessus sensu stricto among clinical isolates [8,29], Zelazny et al. reported that M. massiliense was most frequently isolated from the airways of young patients with an underlying lung disease [30]. Notably, 5 of 6 M. massiliense isolates belonged to ST23, which is highly prevalent in the M. abscessus MLST database and has been reported in France, Germany, Switzerland and an outbreak of skin and soft tissue infections in Brazil. The finding that a single sequence type is associated with disease in distant countries is noteworthy and could indicate limited clonal variation among M. massiliense strains. Alternatively, ST23 may represent a M. massiliense subtype with enhanced pathogenic potential, or the capacity to resist aggressive antibiotic treatment. In the current study, isolates (ED2 and ED12) obtained from a pair of siblings had closely related genotypes of the same ST23, consistent with a common infective source or cross-infection (Fig. 1). Of note, a recent study reports the likely transmission of M. abscessus spp. between patients [1]. Furthermore, M. abscessus spp. have been reported to occur in hospital water distribution systems, and clusters of infections have previously been linked to contaminated bronchoscopes [31,32]. Huang et al. typed M. abscessus spp.
isolates from clinical and environmental sources, linking some infections to M. abscessus spp. present in water distribution systems, but the possibility of patient-to-patient transmission could not be excluded [33]. Transition from Smooth to Rough Colony Morphology during Chronic M. abscessus spp. Infection Previous studies have reported a correlation between a switch from smooth to rough colony morphology and an increase in virulence [34]. In order to determine if M. abscessus spp. undergo changes in colony morphology during long-term infection, the colony phenotype of all isolates was examined. In 5 patients, the majority of earlier isolates had a smooth colony morphology, with a small number of isolates demonstrating a rough colony phenotype. However, over several years of persistent infection, there was an increase in the proportion of isolates with a rough colony morphology, leading to a uniformly rough colony morphology for the later stage isolates (Table 1; Table S2 in File S2). From patient 1, persistently infected with M. bolletii, 21 of 25 isolates obtained between 1998 and 2005 demonstrated raised, moist, smooth colonies. The remaining four isolates, all collected between 2006 and 2009, contained a mixture of smooth and rough colonies. The timing of the emergence of rough colony phenotype isolates from patient 1 correlates with an increase in severity of clinical disease that was observed in 2005. The transition in colony phenotype has been linked to an increase in severity of infection, as the rough phenotype induces an elevated inflammatory response in the host [34]. It is feasible that in the patients with rough colony isolates only, the M. abscessus infection was asymptomatic until a smooth to rough colony phenotype switch resulted in elevated inflammation and clinical symptoms. Susceptibility of M. bolletii to Antibiotics during Long-term Infection of a Single Patient In order to examine the antibiotic sensitivity of M. abscessus spp. isolates during long-term infection of the CF lung in a patient undergoing defined antibiotic treatment regimens, we examined 6 M. bolletii isolates obtained sequentially from a single patient (Patient 1) over a 12-year period. She had received multiple courses of antimicrobials over 12 years following deterioration in her pulmonary function, including a combination of ethambutol, rifampicin and clarithromycin, and had previously been treated with amikacin (Fig. 3). The patient was turned down for lung transplantation in view of M. abscessus spp. culture-positivity. Subsequently, the patient was treated with nebulized interferon gamma but was again turned down for transplantation due to extensive pleural disease. In June 2011, she had advanced cystic fibrosis requiring ambulatory oxygen and was taking ethambutol, rifampicin, clarithromycin, moxifloxacin, cotrimoxazole and nebulized colistin. Amoxicillin/clavulanic acid, cefepime, ciprofloxacin, doxycycline, minocycline, moxifloxacin, tobramycin and trimethoprim/sulphamethoxazole demonstrated no activity against any of the isolates tested (data not shown). All isolates were found to be resistant to clarithromycin, with MICs of >16 µg/ml after 14 days of incubation, and 5 out of 6 isolates were resistant to clarithromycin with an MIC of 8 µg/ml from day 5 (Table S1 in File S1). M. bolletii encodes the erythromycin ribosome methyltransferase (erm) determinant, which facilitates inducible resistance to clarithromycin [35].
The resistance observed from as early as day 5 of reading susceptibility results is consistent with Patient 1 having received clarithromycin since 2005, with intermittent macrolide treatment prior to that. The isolates tested demonstrated intermediate resistance to amikacin and were fully resistant to cefoxitin and ciprofloxacin. Of note, this patient received treatment with amikacin prior to 2005, which might explain the reduced sensitivity to this antibiotic. All 3 M. abscessus spp. type strains demonstrated sensitivity to amikacin (data not shown). A previous study reported that all M. bolletii isolates exhibited high drug resistance to ciprofloxacin and cefoxitin, with intermediate resistance or resistance to amikacin [36]. However, M. bolletii isolates susceptible to amikacin and with intermediate resistance to cefoxitin have also been reported in the literature [35]. Taken together, these data demonstrate the high levels of innate in vitro antibiotic resistance among M. bolletii clinical isolates. Genome-wide Examination of M. bolletii Adaptation to the Human CF Lung To investigate the molecular adaptation of M. abscessus spp. to the human lung, we determined the whole genome sequence of 6 sequential isolates of M. bolletii obtained over a 12-year period from a single persistently infected patient (Table 2). Isolates were selected to represent the temporal and phenotypic diversity of the isolates obtained from patient 1 (Table 2). The core genome of the 6 strains, identified as all shared nucleotide sites, comprised 4,178,261 sites, and we identified 14,594 variable sites with respect to the reference genome of M. bolletii type strain CIP108541 [25]. A total of 34 SNPs (median coverage 21 reads) and 5 nucleotide insertions or deletions were identified as variable within the core genome among the 6 sequenced genomes (Table 3). A neighbor-joining tree based on the SNP differences identified among the 6 isolates consisted of 3 clades, with the 2 isolates obtained in 1998 represented on the same branch (Fig. 3). However, isolates from different years were more closely related to each other than to isolates obtained in the same year, indicating the existence of a heterogeneous infecting population containing multiple persisting sub-lineages (Fig. 3). The diversification of bacterial pathogens in the CF lung leading to heterogeneous infecting populations has been previously described for Burkholderia dolosa, Pseudomonas aeruginosa and Staphylococcus aureus [12,37,38]. Of the 34 SNPs identified among the 6 isolates, 24 gave rise to non-synonymous mutations (Table 3). Of note, the phoR gene was affected by 2 distinct non-synonymous mutations in 3 out of the 6 isolates (Table 3; Table S3 in File S2). In Mycobacterium tuberculosis, PhoR is the histidine kinase component of the two-component system PhoPR, which is involved in the biosynthesis of complex lipids required for growth of M. tuberculosis inside macrophages and mice [39]. The occurrence of multiple independent mutations of phoR implies that phoR attenuation may confer a selective advantage in the CF lung. In the ED1-4R isolate, a large deletion of the mmpL gene, which encodes MmpL, a putative membrane protein, was identified. Deletion of mmpL4b has been demonstrated to result in a switch from smooth to rough phenotype [40,41], suggesting that this deletion may be responsible for the rough phenotype of the isolate.
Furthermore, several non-synonymous mutations were found in genes involved in bacterial metabolism and regulation of gene expression, leading us to speculate that they may contribute to the radical transition from a saprophytic to a human pathogenic lifestyle. Finally, the acquisition of amikacin resistance by M. bolletii relative to the amikacin-sensitive reference strain led us to examine the genome sequence for the underlying mutations responsible. Previous studies have demonstrated that polymorphisms in the 16S rRNA of M. tuberculosis result in resistance to amikacin [42]. However, there were no polymorphisms identified in the 16S rRNA of the amikacin-resistant M. bolletii isolates, suggesting that an alternative mechanism of amikacin resistance may exist in M. bolletii. Concluding Comments M. abscessus spp. have emerged as an important cause of pulmonary infection among CF patients, with the capacity for long-term persistence leading to clinical deterioration over time. Our genetic typing analysis demonstrates the capacity of M. abscessus spp. isolates to persist for at least 12 years during long-term infection but also indicates that replacement of M. abscessus spp. can occur occasionally during long-term infection. The study provides an indication of the relative prevalence of the 3 M. abscessus species among a selection of infected patients in Scotland and reveals that a single globally distributed ST of M. massiliense (ST23) is also prevalent among Scottish patients. An investigation into the origin and pathogenic capacity of ST23 is warranted to understand the basis for the existence of a widespread M. massiliense clone among infected patients. Finally, the first genome-wide analysis of the evolution of M. abscessus spp. during long-term infection revealed the heterogeneous nature of the infective M. bolletii population and led to the identification of mutations which may contribute to the adaptation from saprophytic organism to human pathogen. Characterization of the genetic determinants involved in the transition of M. abscessus spp. from initial colonization to severe lung pathology could lead to the identification of novel drug targets for the control of M. abscessus spp. infections. Supporting Information File S1 This supporting file includes Figure S1 and Table S1. Figure S1: Phylogenetic analysis of M. abscessus spp. isolates based on single MLST loci. Table S1: Clarithromycin, Cefoxitin and Amikacin MICs determined for isolates from Patient 1. (PDF) File S2 This supporting file includes Table S2 and Table S3. Table S2: Genetic and phenotypic characteristics of each of the 178 study isolates. Table S3: Genome variation among sequential M. bolletii isolates from a single patient. (XLS)
4,947.6
2013-05-16T00:00:00.000
[ "Medicine", "Biology" ]
Aldosterone and renin concentrations were abnormally elevated in a cohort of normotensive pregnant women During pregnancy, the renin–angiotensin–aldosterone system (RAAS) undergoes major changes to preserve normal blood pressure (BP) and placental blood flow and to ensure a good pregnancy outcome. Abnormal aldosterone–renin metabolism is a risk factor for arterial hypertension and cardiovascular risk, but its association with pathological conditions in pregnancy remains unknown. Moreover, potential biomarkers associated with these pathological conditions should be identified. The aim was to study a cohort of normotensive pregnant women according to their serum aldosterone and plasma renin levels and to assay their small extracellular vesicles (sEVs) and a specific protein cargo (LCN2, AT1R). A cohort of 54 normotensive pregnant women at term gestation was included. We determined the BP, serum aldosterone, and plasma renin concentrations. In a subgroup, we isolated their plasma sEVs and semiquantitated two EV proteins (AT1R and LCN2). We set a normal range of aldosterone and renin based on the interquartile range. We identified 5/54 (9%) pregnant women with elevated aldosterone and low renin levels and 5/54 (9%) other pregnant women with low aldosterone and elevated renin levels. No differences were found in sEV-LCN2 or sEV-AT1R. We found that 18% of normotensive pregnant women had either high aldosterone or high renin levels, suggesting a subclinical status similar to primary aldosteronism or hyperreninemia, respectively. Both could evolve into pathological conditions by affecting maternal vascular and renal physiology and, further, BP. sEVs and their specific cargo should be further studied to clarify their role as potential biomarkers of RAAS alterations in pregnant women. Introduction During pregnancy, the renin-angiotensin-aldosterone system (RAAS) undergoes major changes. Aldosterone and renin are upregulated during pregnancy to ensure an increase in maternal volemia and good placental perfusion [1,2]. Circulating prorenin and renin increase fourfold by 10 weeks of gestation and plateau at 22 weeks of gestation [3,4]. Aldosterone and its regulator angiotensin II (Ang-II) also increase during pregnancy [5]. The actions of Ang-II, including aldosterone synthesis, are mediated through the Ang-II type-1 receptor (AT1R) [6][7][8]. AT1R is expressed in the placenta and is increased in pathologies such as preeclampsia [9,10]. However, to date, the regulation of AT1R abundance in the plasma of pregnant women is unknown. Circulating aldosterone binds to the mineralocorticoid receptor (MR) [11][12][13] and plays a crucial role in regulating body fluid volume and blood pressure (BP). Aldosterone increases approximately tenfold by term pregnancy [4]; however, the pathophysiological effects of increased aldosterone on MR are mitigated in pregnancy by progesterone, which acts as an MR antagonist [14,15] and inhibits CYP11B2 activity in adrenal tissue [16,17]. MR activation by aldosterone increases the circulating and urinary levels of lipocalin-2 protein (LCN2, also called NGAL), which has been proposed as a surrogate biomarker of MR activation [13,18,19]. Currently, experimental and clinical studies have demonstrated that changes in the concentration, size, and/or cargo of sEVs or exosomes are potential biomarkers of disease in nonpregnant and pregnant populations [20][21][22][23][24].
The presence of LCN2 in small extracellular vesicles (sEVs) has also been suggested as an early marker of renal injury and MR activation [13,19,25], but it is not known whether changes in this novel biomarker are associated with high BP, aldosterone, or renin levels in pregnancy. Likewise, whether the presence of AT1R [26,27] in circulating sEVs is associated with changes in the RAAS in pregnant women remains unknown. The aim of this study was to measure aldosterone levels and plasma renin in a cohort of normotensive pregnant women to identify women with abnormal levels of these hormones. In addition, we explored the presence of plasma sEVs and their protein cargo LCN2 and AT1R in these pregnant women. Subjects This study was performed in 54 Chilean normotensive women in the third trimester of pregnancy (≥37 weeks of gestation) from the Hospital Clínico UC-CHRISTUS, Chile. Normotension was determined according to the 2017 ACC/AHA Guidelines for High BP [28]. All the women included in this study had single pregnancies; no intrauterine infections or obstetric complications; no chronic pathologies such as kidney failure, heart failure, chronic liver damage, or endocrinopathies; and no exogenous treatment with mineralocorticoids or glucocorticoids. Subjects with pregestational or gestational diabetes, intrauterine growth restriction, fetal malformations, chronic hypertension, hypertensive disorders of pregnancy, or other maternal pathologies were excluded from this study. Women previously diagnosed with secondary causes of hypertension, such as primary aldosteronism, familial hyperaldosteronism (OMIM 103900), apparent mineralocorticoid excess (OMIM 218030), hypercortisolism, and renovascular disease, were also excluded. General maternal (i.e., age, height, weight, and BP) and neonatal (i.e., sex, gestational age, weight, and height) variables were obtained from the clinical records. All subjects included in this study signed written informed consent in accordance with the Helsinki Declaration, and the study was approved by the Ethics Committee of the Faculty of Medicine (CEC-18081004), Pontificia Universidad Católica de Chile. Classification of pregnant women based on aldosterone and renin levels Pregnant women were classified into three groups: controls with normal aldosterone and renin levels, and women with high aldosterone or high renin levels [29,30]. There were 44 control subjects with normal aldosterone and renin levels (Fig. 1). Women with serum aldosterone greater than the 75th percentile (115.3 ng/dL) and a concomitantly low renin concentration (lower than the 25th percentile: 34.9 µUI/mL) were categorized as elevated aldosterone and low renin (EALR) (Fig. 1A). Conversely, women with low aldosterone (lower than the 25th percentile: 37 ng/dL) and a concomitant renin concentration greater than the 75th percentile (62.5 µUI/mL) were categorized as low aldosterone and elevated renin (LAER) (Fig. 1B); a sketch of this classification rule appears at the end of this section. Biochemical assay Serum aldosterone (LIAISON® Aldosterone (310450)) and direct plasma renin concentration (DRC) (LIAISON® Direct Renin (code 310470)) were measured by chemiluminescent immunoassay technology on an automated chemiluminescent analyzer (Liaison XL/DiaSorin, Saluggia, Vercelli, Italy), with reported coefficients of variation of 9.5% at 6.7 ng/dL and 5.6% at 28.8 ng/dL for plasma aldosterone, and 13% at 5.8 mIU/mL and 7.3% at 107.5 mIU/mL for DRC, in the clinical laboratory of the Hospital Clínico UC-CHRISTUS.
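To make the EALR/LAER rule concrete, here is a minimal sketch of the interquartile-range classification described above; the data structure and column names are hypothetical, while the cutoffs are those reported by the authors (aldosterone 37 and 115.3 ng/dL; renin 34.9 and 62.5 µUI/mL).

import pandas as pd

# Hypothetical per-subject measurements (ng/dL for aldosterone,
# µUI/mL for direct renin concentration).
df = pd.DataFrame({"aldosterone": [130.0, 30.8, 70.2],
                   "renin": [34.1, 74.2, 45.0]})

# Cutoffs: the cohort's 25th/75th percentiles reported in the text.
ALDO_P25, ALDO_P75 = 37.0, 115.3
RENIN_P25, RENIN_P75 = 34.9, 62.5

def classify(row) -> str:
    if row.aldosterone > ALDO_P75 and row.renin < RENIN_P25:
        return "EALR"  # elevated aldosterone, low renin
    if row.aldosterone < ALDO_P25 and row.renin > RENIN_P75:
        return "LAER"  # low aldosterone, elevated renin
    return "control"

df["group"] = df.apply(classify, axis=1)
df["ARR"] = df.aldosterone / df.renin  # aldosterone-to-renin ratio
print(df)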
Plasma electrolytes (sodium and potassium) were evaluated with methods previously described [31]. Serum cortisol and cortisone were quantified using mass spectrometry (LC-MS/MS), validated according to the parameters suggested by the Food and Drug Administration and the Clinical and Laboratory Standards Institute, using deuterated internal standards of cortisol and cortisone (cortisol-d4 and cortisone-d2) on Agilent 1200 series HPLC equipment coupled to an AB Sciex 4500 QTrap mass spectrometer [32]. Quantification of sEVs by nanoparticle tracking analysis (NTA) sEV samples were diluted with PBS to obtain a concentration range between 20 and 100 particles per frame (optimally greater than 20 particles per frame). The samples were analyzed using an NS300 instrument (Malvern, UK) with NanoSight NTA 3.0 Nanoparticle Tracking and Analysis software (Version Build 0064). The videos (two per sample) were processed and analyzed as previously described, yielding the mean, mode, and median particle size together with an estimated number of particles per mL of plasma [34]. Determination of sEV morphology by transmission electron microscopy sEV shape and size were determined by transmission electron microscopy as described [34]. For this, a 15-µL sEV pellet was added onto a carbon-coated copper grid (300 mesh) for 1 min and stained with 2% uranyl acetate for 1 min. The grids were visualized at 80 kV in a Tecnai transmission electron microscope (Phillips, Finland). Identification of characteristic EV proteins, LCN2, and AT1R by western blot After washing, the membranes were incubated (1 h, room temperature) with secondary horseradish peroxidase-conjugated goat anti-rabbit (Thermo Fisher Scientific, USA) or rabbit anti-goat (Abcam, UK) antibodies as previously described [35]. Proteins were detected by enhanced chemiluminescence, and a semiquantitative densitometry analysis was performed using ImageJ software (NIH, USA). Statistical analysis Values are presented as the median [interquartile range] or mean ± standard deviation (range). Comparisons between two groups were performed by Student's t test for parametric data or the Mann-Whitney U-test for nonparametric data. A value of p < 0.05 was considered statistically significant. Data analysis and plotting were performed with GraphPad Prism 7.0 software (GraphPad Software Inc., USA). Results Identification of groups of normotensive pregnant women with either high plasma aldosterone or high renin levels All women participating in this study had normal BP (SBP < 120 mmHg and DBP < 80 mmHg), with an average mean BP of 102.5 mmHg (Table 1). We identified 44 pregnant women with normal aldosterone and renin levels, who were considered controls (Table 1) (Fig. 1). In our cohort, 9% (5/54) were identified as having EALR (Table 1) (Fig. 1A). In the EALR group, aldosterone levels were twofold higher than in controls (130 ng/dL vs. 65.6 ng/dL, p < 0.0001), and renin levels were significantly lower than in controls (34.1 µUI/mL vs. 44.7 µUI/mL, p < 0.05). The ARR was more than threefold higher in women with EALR than in controls (4.9 vs. 1.5, p < 0.0001) (Table 1). Similarly, 9% (5/54) of pregnant women were identified as having LAER (Table 1) (Fig. 1B). These five women were different from those identified in the EALR group. The data corresponding to control women are shown in Table 1 and Fig. 1B. In pregnant women with LAER, aldosterone levels were significantly lower (30.8 ng/dL vs.
65.8 ng/dL, p < 0.05), renin levels were higher (74.2 µIU/mL vs. 44.7 µIU/mL, p < 0.05) and the ARR was fourfold lower (0.36 vs. 1.5, p < 0.001) compared with control pregnant women (Table 1). Serum cortisol, cortisone, and the cortisol-to-cortisone ratio were similar among the groups (Table 1). Regarding the neonatal variables, we did not observe any differences in the EALR and LAER groups compared with control neonates (Table 1). [Table 1 notes: Weight, body mass index (BMI), SBP and DBP were determined in the 3rd trimester. Comparisons between two groups (EALR or LAER vs. control) were performed by Student's t test for parametric data or the Mann-Whitney U-test for nonparametric data. *p < 0.05, **p < 0.001, ***p < 0.0001 vs. corresponding values in the control group. Clinical and biochemical data are presented as median and interquartile range. EALR, women with elevated aldosterone and low renin; LAER, women with low aldosterone and elevated renin; ARR, aldosterone-to-renin ratio; CCR, cortisol-to-cortisone ratio.] Plasma electrolytes in normotensive pregnant women Plasma electrolytes Na+ and K+ and the respective Na+/K+ ratio were determined. Although the EALR group showed a trend toward higher sodium and lower potassium levels, both were similar to the control and LAER groups (Kruskal-Wallis test, p = NS). Unfortunately, no urine samples were obtained from these women to measure urinary Na+ or K+ electrolytes. Identification and characterization of plasma extracellular vesicles in the EALR and LAER groups Plasma sEVs were isolated from 18 pregnant women in our cohort, including 10 controls, 4 EALR, and 4 LAER. The particle size ranged between 50 and 150 nm for all samples (Fig. 2A). The morphology of the isolated sEVs was determined by transmission electron microscopy. sEVs showed the characteristic round donut-shaped morphology, with sizes between 30 and 150 nm (Fig. 2B). The presence of the sEV marker TSG101 was confirmed by western blot in the isolated sEVs (Fig. 2C). No significant differences were observed in sEV concentration and mode among the study groups. Determination of AT1R and LCN2 proteins in plasma sEVs from pregnant women A qualitative and semiquantitative western blot analysis of AT1R and LCN2 protein was performed in plasma sEV lysates from EALR and LAER pregnant women compared with controls. [Table notes: the Mann-Whitney U-test was used to identify differences between groups. Data are presented as mean ± S.D. (range). EALR, women with elevated aldosterone and low renin; LAER, women with low aldosterone and elevated renin. *p < 0.05.] AT1R and LCN2 proteins from sEVs of EALR women showed no significant differences compared with controls (Fig. 2D), although LCN2 protein abundance tended to be higher in the EALR group than in controls (p = 0.057) (Fig. 2D). In addition, the protein abundances of AT1R and LCN2 in sEVs were similar between the LAER and control groups (Fig. 2E). Discussion In the present study, we evaluated the levels of aldosterone and renin in a cohort of 54 normotensive pregnant women. We also described for the first time the presence of AT1R and LCN2 proteins in sEVs isolated from the maternal circulation, which have been considered potential biomarkers of RAAS and MR activity, respectively. Currently, there is no established normal range defining lower and upper thresholds for circulating concentrations of aldosterone and renin (DRC) during pregnancy, which would be helpful to determine when these physiological changes become pathophysiological.
Plasma renin activity (PRA) has traditionally been used to calculate the ARR; however, measurement of DRC has become increasingly popular. DRC assays are still in evolution, and a conversion factor from PRA (ng/mL/h) to DRC (mU/L) of 8.2-12 is generally accepted [36]. We propose the use of the interquartile range (25th-75th percentile) to define a normal range for renin and aldosterone in a normotensive group of pregnant women. Our results confirmed that aldosterone and renin levels are higher in pregnant than in nonpregnant women [3,4]. Although certain groups of women showed elevated levels of either aldosterone or renin (the EALR and LAER groups), none of these pregnant women showed clinical symptoms of high BP during pregnancy. In the EALR group (Fig. 1A), we suggest that these women have increased renin-independent aldosteronism, a condition similar to that seen in primary aldosteronism [28,37-39], in which aldosterone secretion is independent of renin and can generally be attributed to autonomous aldosterone secretion, due, for example, to adrenal hyperplasia, aldosterone-producing cell clusters, or an aldosterone-producing adenoma [40-42]. In this regard, previous studies in normotensive populations have shown the existence of a continuum of renin-independent aldosteronism [30] and a higher risk of developing arterial hypertension in normotensive subjects with an elevated ARR [29]. Considering these findings, we suggest that normotensive pregnant women with EALR have an increased likelihood of developing hypertension or CV diseases in the future, especially in the presence of a second factor or challenge (e.g., ischemia/reperfusion, excessively high salt intake, electrolyte imbalance, exogenous/endogenous metabolites, placental sEVs, miRNAs, etc.). Hence, complementary and longitudinal studies in these pregnant women are strongly encouraged. With respect to the women from the LAER group (Fig. 1B), we suggest that the hyperreninemic phenotype, caused by an increase in circulating renin derived mainly from the placenta and maternal decidua, could favor the synthesis and activity of Ang-II [43,44]. In this regard, Lumbers et al. postulated that the oversecretion of placental renin and of placental exosomes enriched with renin and other functional components of the renin-angiotensin system (e.g., renin, Ang-II, AT1R, AT1R-AA, miRNAs) could directly affect maternal vascular physiology and BP [44]. With respect to the plasma electrolytes Na+ and K+, we did not find differences between groups. These results are similar to previous studies in normotensive pregnant women [45,46] and suggest that the aldosterone and renin changes in these women are not a compensatory response to sodium loading or electrolyte imbalance, but rather reflect alterations of normal aldosterone or renin physiology. Recent evidence concerning the role of sEVs (exosomes) in the pathophysiology of pregnancy highlights the impact of sEVs and their cargo on the metabolism and function of recipient cells and tissues. Studies on the cargo of sEVs and MR activation have been carried out in humans and in animal models [47,48]. However, studies on sEV concentration and cargo in normotensive pregnancies with alterations in aldosterone-renin metabolism have not been performed to date. In this study, we detected for the first time the presence of AT1R and LCN2 proteins in plasma sEVs from normotensive pregnant women.
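The PRA-to-DRC conversion and the ARR itself reduce to simple arithmetic; a minimal sketch follows. The conversion factor is the range quoted above, and the example values are the group medians reported in the Results; note that the paper's ARR values (4.9 and 0.36) were presumably computed per subject and then summarized, so ratios of group medians differ slightly.

def pra_to_drc(pra_ng_ml_h, factor=8.2):
    # Approximate DRC (mU/L) from PRA (ng/mL/h); any factor in the
    # accepted 8.2-12 range may be used, 8.2 here is an arbitrary choice.
    return pra_ng_ml_h * factor

def arr(aldosterone_ng_dl, drc):
    # Aldosterone-to-renin ratio from serum aldosterone (ng/dL) and DRC.
    return aldosterone_ng_dl / drc

print(arr(130.0, 34.1))  # EALR group medians -> ~3.8
print(arr(30.8, 74.2))   # LAER group medians -> ~0.42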
sEVs from EALR and LAER pregnancies did not show any changes in the protein abundance of EV-AT1R or EV-LCN2; however, there was a trend toward increased levels of EV-LCN2 in the EALR group. Although this finding was not significant, it might become so in a larger cohort of pregnant women, because increased EV-LCN2 expression has previously been associated with MR activation by aldosterone [18,19]. Moreover, a significant increase in the sEV mode was found in the LAER group compared with the controls, suggesting that in LAER the nanovesicles found in plasma are larger and may have a different biogenesis [49] (Table 2). In summary, we found normotensive pregnant women with either high circulating aldosterone (9%) or high plasma renin (9%) levels. Despite both normal BP and normal plasma electrolytes, these abnormal ARRs suggest subclinical states similar to primary aldosteronism and to hyperreninemia, respectively. In these women, both are subclinical conditions that could evolve into pathological conditions by altering maternal vascular and renal physiology and, subsequently, BP [44,50]. In this respect, the presence of AT1R and LCN2 proteins in plasma sEVs isolated from pregnant women should be further explored as potential biomarkers of subclinical conditions associated with a dysregulated RAAS. Author contributions V.P., A.L., C.A.C., C.E.F. and A.T.-C. designed the study; collected, analyzed, and interpreted the patient data; and wrote the first draft of the manuscript. All authors contributed to the discussion, reviewed the manuscript, and approved the final version.
3,977.4
2021-11-26T00:00:00.000
[ "Medicine", "Biology" ]
Flat firms, complementary choices, employee effort, and the pyramid principle I review Markus Reitzig's book, Get Better at Flatter, and offer some critical observations on why managers might want to flatten their firms and on Reitzig's advice to them. I also introduce the pyramid principle, a simple theory of why firms might end up taller than they would want to be. Interest in "flattening" firms, that is, reducing the number of layers between the CEO and the lowest level on the organization chart, began more than 40 years ago (e.g., Toffler 1981; Peters and Waterman 1982). It probably began in part as a response to the conglomerates that had been built up during the 1960s and their lagging performance in the 1970s. Though an increasingly distant memory, these behemoths combined hundreds of businesses, remotely related if at all. ITT, for example, initially a telephone and telegraph company, acquired divisions that did everything from operating hotels to offering insurance to baking bread. Flattening these firms usually involved not just reducing the number of managerial layers but also restructuring them. Firms would emerge from these processes not just flatter but also smaller and more focused. Most of these conglomerates have long since disappeared, but managerial interest in flatter firms has persisted. Markus Reitzig's essay, and even more so the book upon which that essay is based (Reitzig 2022), provides a how-to manual for these managers. Importantly, however, Reitzig does not stop at how-to; he also answers the questions of when and why managers might want to streamline their hierarchies. The proponents of flatter firms often present them almost as getting something for nothing. Flat firms continue to produce the same goods and services. But, with fewer layers of middle managers, they presumably produce those things at less cost, more efficiently. Why, then, have all firms not adopted these flat structures? Reitzig (2022) points out that flat organizational structures do not necessarily benefit all firms. Middle managers matter. They mentor, monitor, and motivate their employees. They determine who needs to do what. They resolve conflicts between employees. They filter and communicate relevant information both from their own bosses to those they supervise and vice versa. All of these activities contribute to organizations operating efficiently and effectively. Flatter firms, those with fewer of these middle managers, Reitzig argues, appear most useful for meeting three goals: (1) Flat structures stimulate innovation by allowing employees to interact with more diverse alters (for recent empirical evidence, see Lee 2022). (2) They enable faster adaptation. Hierarchy helps to ensure accountability and reliability but simultaneously retards the speed at which the organization can respond to changes in technology and customer preferences (Hannan and Freeman 1984). (3) Flat structures, by providing people with more interesting and satisfying jobs, may also help to recruit and retain employees. Complementary choices For me, one of the great strengths of Reitzig's analysis comes from his attention to complementary choices. Managers, he maintains, cannot simply eliminate managerial layers without simultaneously changing other dimensions of the organization design. [Acknowledgment: Earlier drafts of this review benefited from the comments of two anonymous reviewers.] For example, flattening the firm usually also requires a change in compensation schemes. Middle managers help to monitor and motivate employees.
As they become fewer in number, the span of control (the number of employees they oversee) increases. Managers have less time to monitor each report. To mitigate the agency problems that might ensue, firms that succeed at becoming flatter usually adopt compensation practices that more closely align the financial interests of employees with those of the firm (Reitzig 2022). Removing layers of hierarchy also typically means devolving decisions. Middle managers have less time to direct those they supervise. To prevent them from becoming a bottleneck in organizational processes, successful flattening therefore generally means giving employees the ability and authority to act more autonomously (Reitzig 2022). The complementary choices do not end there. Flattening the firm may require a reorganization of reporting lines, an expansion of routines, and even adjustments in the corporate culture. Indeed, Reitzig (2022) discusses interactions between the depth of the corporate hierarchy and nearly every other aspect of organization design. In his how-to format, Reitzig presents these complementary choices as a practical matter. But I believe that his arguments raise a larger point that has been underappreciated in the literature on organization design. For more than 20 years, I have been using the People, Architecture, Routines, and Culture (PARC) framework to teach organization design (for an excellent review, see Saloner et al. 2001). Most discussions of the framework treat it as a set of semi-independent policies. Managers can pick and choose different policies to tune the organization to some optimal balance of coordination and individual incentives. Reitzig's discussion of these choices in the context of flattening the firm, however, highlights the fact that they are not independent. They are complementary. Rather than having a near-infinite set of feasible combinations available to them, managers instead choose among a few clusters of coherent organizational design choices. This view of organization design as a set of complementary choices interestingly brings it in line with current thought in other closely related areas. Human resource management, for example, has increasingly focused on clusters of policies that maximize performance (e.g., MacDuffie 1995; Baron et al. 2001). Strategy has similarly come to view competitive advantage as a function of adopting sets of coherent choices (e.g., Milgrom and Roberts 1990; Rivkin 2000; Porter and Siggelkow 2008). The fact that flat organizational designs fit best with firms trying to emphasize innovation and adaptability, moreover, further suggests that these complementary design choices may also complement specific strategic positions. Form follows function. Employee effort Although I appreciate the book and the essay, and I recommend them without reservation, I find myself disagreeing with one of the claims. Reitzig (2022) argues that flat firms require more of their employees, not just in the sense of being proactive but also in the effort that they exert. They therefore must recruit people willing to work harder. At first blush, this conclusion seems straightforward. Fewer middle managers means that others must now cover the responsibilities of those eliminated, in addition to everything that they had already been doing. Each employee must do more.
However, I believe that this conclusion rests on two implicit assumptions: first, that flattening leads to fewer employees; second, a conservation of activities and of the time required for them. In other words, the flattened firm must still do everything that had been done before but with fewer people. But those assumptions may not hold true. As firms flatten their hierarchies, they may reduce their ranks of middle managers but expand in other areas, such as sales or client relationship management. The total number of employees therefore may remain unchanged. Even if the headcount does decline, if firms restructure themselves in the ways that Reitzig advises, many of the activities the middle managers did will disappear. Incentives will substitute for monitoring. Routines and modularization will reduce the need for resolving conflicts. Reducing the number of nodes between the sources and destinations of information will eliminate a friction in its movement. Despite having fewer people, a flatter firm may not need them to exert any extra effort. Although the flatter firm might operate more efficiently, these changes do not come without cost. Monitoring ensures not just that employees exert effort but also quality control. Conflict resolution capacity allows the firm to engage reliably in more complex and more tightly integrated production processes. Lean and agile may mean being more fragile. The pyramid principle Finally, although it falls somewhat outside the scope of Reitzig's research, I feel as though his essay raises a question: Why do managers have a perennial interest in flattening their firms? If managers understand the benefits of being flat, why have more of them not built their firms flat from the beginning? We need a theory by which firms systematically end up with more hierarchy than ideal.¹ Let me offer one, which I will call the pyramid principle. Assume that each bottom-level employee in an organization can produce a certain amount of whatever good or service (e.g., Williamson 1967; Keren and Levhari 1979). Let us further assume that firms begin with a fixed span of control (at every level). As one might surmise from Reitzig's discussion, this span of control stems from a large number of interdependent choices. Changing the span of control therefore comes at considerable cost. In terms of the imagery of a pyramid, the area covered by its base represents the total production of the firm. Span of control essentially defines the steepness of the sides. As the firm produces more, that is, as its base becomes larger, its height must rise if the angle of its sides remains constant. However, to understand why firms might end up with too many layers, we need two additional assumptions. First, assume that adopting a larger span of control costs more. Reitzig (2022) provides justification for this idea (see also Lee 2022): developing routines and investing in technology that might allow managers to supervise more people incurs a semi-fixed cost (or at least the marginal costs decrease rapidly with scale). In essence, then, the benefits of a broad span of control increase at an increasing rate with firm size, but its costs remain relatively fixed. As a result, smaller organizations prefer smaller spans of control. If we also assume that firms begin small, then we have a world in which firms initially choose small spans of control. They persist with these small spans because the interdependent design choices mean that any change would come at considerable cost.
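The arithmetic behind the pyramid principle is easy to make explicit. The sketch below, with hypothetical function names, computes the minimum number of managerial layers above the bottom level for a given headcount and span of control; height grows roughly as the logarithm of headcount, and a broader span flattens the same firm.

import math

def hierarchy_height(headcount, span):
    # Minimum layers above the bottom level, assuming every group of
    # `span` people needs one supervisor; height ~ log_span(headcount).
    levels = 0
    group = headcount
    while group > 1:
        group = math.ceil(group / span)
        levels += 1
    return levels

for n in [5, 25, 125, 625]:
    print(n, hierarchy_height(n, 5))   # -> 1, 2, 3, 4 layers

print(hierarchy_height(625, 12))       # broader span: only 3 layers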
As they grow, their hierarchies therefore become ever more inefficient. At some point, the benefits of flattening the firm will outweigh the costs of the change. Reitzig (2022) then instructs managers as to how to increase the span of control. In the meantime, however, the firm has more than the (static) equilibrium number of layers. This simple theory suggests some directions for future research on organizational hierarchies. For example, far from being flat, startups should have a hierarchical height proportional to their logged headcount. They should grow taller as they add employees. The slope of this relationship between height and headcount, however, should flatten out eventually, as firms begin to invest in the routines and technology that enable larger spans of control. Interestingly, Lee (2022) finds just such a pattern among startups in the video game industry, with the shift in the slope falling somewhere around 200 employees. Startups might also vary in their early spans of control. Being richer in resources, for example, may permit some to invest in complementary technology earlier. Or, recruiting early employees from large, established firms might allow startups to import effective routines for coordination. Those able to adopt larger early spans of control may then enjoy a competitive advantage as they begin to scale (e.g., Lee 2022). Get Better at Flatter provides both a nuanced perspective on organization design and much-needed guidance for managers. But it also reminds me of how little empirical attention has been given to the topic. I hope that the book will inspire not just managers but also researchers to give greater attention to the pros and cons of organizational hierarchy.
2,809.8
2022-03-01T00:00:00.000
[ "Economics" ]
lncRNA ZFAS1 promotes lung fibroblast-to-myofibroblast transition and ferroptosis via functioning as a ceRNA through the miR-150-5p/SLC38A1 axis Pulmonary fibrosis (PF) is a lethal fibrotic lung disease. The role of lncRNAs in multiple diseases has been confirmed, but the role and mechanism of lncRNA zinc finger antisense 1 (ZFAS1) in the progression of PF need further elucidation. Here, we found that lncRNA ZFAS1 was upregulated in bleomycin (BLM)-induced PF rat lung tissues and transforming growth factor-β1 (TGF-β1)-treated HFL1 cells, and positively correlated with the expression of solute carrier family 38 member 1 (SLC38A1), an important regulator of lipid peroxidation. Moreover, knockdown of lncRNA ZFAS1 significantly alleviated TGF-β1-induced fibroblast activation, inflammation and lipid peroxidation. In vivo experiments showed that inhibition of lncRNA ZFAS1 abolished BLM-induced lipid peroxidation and PF development. Mechanistically, silencing of lncRNA ZFAS1 attenuated ferroptosis and PF progression through lncRNA ZFAS1 acting as a competing endogenous RNA (ceRNA) and sponging miR-150-5p to downregulate SLC38A1 expression. Collectively, our studies demonstrated the role of the lncRNA ZFAS1/miR-150-5p/SLC38A1 axis in the progression of PF and may provide a new biomarker for the treatment of PF patients. Accumulating evidence has confirmed that long noncoding RNAs (lncRNAs) are involved in many pathophysiological processes of PF, including cell proliferation, migration, epithelial-mesenchymal transition (EMT), and immunoregulation [7,8]. For example, Song et al. found that abnormal expression of lncRNAs and protein-coding genes was associated with the progression of PF through ceRNA interactions [9]. Wu et al. showed that lncRNA CHRF promoted the progression of PF by downregulating the inhibitory effect of miR-489 on the expression of MyD88 and Smad [10]. Zhao et al. reported that lncRNA PFAR acted as a ceRNA to promote the FMT process and myofibroblast differentiation by regulating the miR-138/YAP1 axis [11]. In addition, lncRNA zinc finger antisense 1 (lncRNA ZFAS1) was originally identified as a novel tumor-related lncRNA that promotes cell proliferation, migration, and EMT [12,13]. Recently, upregulation of lncRNA ZFAS1 was shown to induce EMT and ECM deposition by promoting the expression of ZEB2 [14], suggesting the potential involvement of lncRNA ZFAS1 in PF progression. However, the role of lncRNA ZFAS1 in the progression of PF requires further study and confirmation. Ferroptosis is a newly characterized iron-dependent form of non-apoptotic regulated cell death triggered by lipid reactive oxygen species (ROS). Interestingly, several studies found that ROS levels were upregulated in the process of FMT induced by TGF-β1 [15,16], which was triggered by inflammatory cytokine secretion in PF [17-19]. Moreover, previous studies confirmed that iron overload can lead to PF, which is related to increased lipid peroxidation and decreased glutathione peroxidase 4 (GPX4) activity in lung tissues [20]. For example, upregulation of GPX4 decreased myofibroblast differentiation, ROS levels and ferroptosis in a TGF-β1-induced PF cell model [21]. Furthermore, increasing evidence has confirmed that glutamine metabolism contributes to the formation of oxidizable lipids, which can lead to ferroptosis [22,23], and the level of glutathione in PF lung tissues is downregulated [24,25].
In addition, SLC38A1 is an important regulator of glutamine uptake and metabolism in lipid peroxidation [26]. Therefore, we speculated that ferroptosis plays an important role in the progression of PF, while the role of SLC38A1 in PF through ferroptosis regulation remains elusive. In this study, the effect of lncRNA ZFAS1 on the process of FMT and ferroptosis was determined. Mechanistically, we further explored whether lncRNA ZFAS1 regulates fibroblast activation and lipid peroxidation by sponging miR-150-5p and regulating SLC38A1 in the progression of PF. Overall, our study provides theoretical evidence for the mechanisms of the lncRNA ZFAS1/miR-150-5p/SLC38A1 axis in PF progression and simultaneously provides a new biomarker and target for the diagnosis and treatment of PF. Upregulation of lncRNA ZFAS1 in PF is positively correlated with SLC38A1 expression Previous studies showed that abnormally expressed lncRNAs are involved in the development and progression of PF by regulating downstream genes [11]. In this study, the expression of lncRNA ZFAS1 and SLC38A1 in lung tissues was determined by RT-qPCR. As shown in Figure 1A, 1B, lncRNA ZFAS1 and SLC38A1 mRNA were highly expressed in lung tissues of the PF rat model induced by BLM compared with the control group (both P<0.001). Moreover, Spearman's correlation analysis revealed a remarkably positive correlation between lncRNA ZFAS1 expression and SLC38A1 expression in lung tissues of the BLM-induced PF rat model (r=0.792, P<0.01, Figure 1C). Consistent with the in vivo studies, the expression levels of lncRNA ZFAS1 and SLC38A1 were higher in TGF-β1-treated HFL1 cells than in the NC group (both P<0.001, Figure 1D, 1E). Furthermore, we evaluated the subcellular location of lncRNA ZFAS1 in HFL1 cells. The FISH analysis showed that lncRNA ZFAS1 was mainly distributed in the cytoplasm (Figure 1G). Similarly, RT-qPCR showed that the lncRNA ZFAS1 transcript was preferentially localized to the cytoplasm rather than the nucleus (Figure 1F). Taken together, the results showed that lncRNA ZFAS1 was upregulated in PF and positively correlated with SLC38A1, which indicated that overexpression of lncRNA ZFAS1 and SLC38A1 may play an important role in regulating the progression of PF. Knockdown of lncRNA ZFAS1 inhibits the FMT process in TGF-β1-induced HFL1 cells Accumulating evidence has confirmed that FMT is closely related to the development of PF [2,27]. First, we transfected HFL1 cells with lncRNA ZFAS1 shRNA and found that lncRNA ZFAS1 expression was significantly decreased compared with that in the negative control (sh-NC) group (P<0.001, Figure 2A). Moreover, BrdU staining and wound healing assays showed that knockdown of lncRNA ZFAS1 significantly reversed the TGF-β1-induced proliferation and migration of HFL1 cells (all P<0.01, Figure 2B-2E), while no significant difference was observed between the TGF-β1+sh-ZFAS1 group and the control group. Furthermore, the role of lncRNA ZFAS1 in regulating the TGF-β1-induced FMT process was evaluated. Western blot analysis showed that knockdown of lncRNA ZFAS1 significantly decreased the protein levels of α-SMA, collagen I, and FN1 (P<0.01, P<0.001, Figure 2F, 2G) but upregulated E-cadherin expression (P<0.01). Similarly, immunofluorescence staining showed that silencing of lncRNA ZFAS1 abolished the inducing effect of TGF-β1 treatment on the expression of α-SMA (Figure 2H) but promoted the expression of E-cadherin (Figure 2I).
Overall, knockdown of lncRNA ZFAS1 significantly attenuated the TGF-β1-induced FMT process in vitro. Knockdown of lncRNA ZFAS1 reduces inflammatory cytokine secretion, ROS levels and ferroptosis in HFL1 cells Recent studies have found that lipid peroxidation promotes myofibroblast differentiation and ferroptosis, leading to the progression of PF [16,21]. In this study, we evaluated the effect of lncRNA ZFAS1 on inflammation, ROS levels and ferroptosis in HFL1 cells. As shown in Figure 3A, the expression of inflammatory cytokines (TNF-α, IL-6, IL-1β) was significantly upregulated in HFL1 cells treated with TGF-β1 compared with the control group (P<0.01, P<0.001), while knockdown of lncRNA ZFAS1 decreased their expression (all P<0.01, Figure 3A). Moreover, knockdown of lncRNA ZFAS1 or ferroptosis inhibitor (Fer-1) treatment significantly decreased the TGF-β1-induced high levels of ROS (all P<0.01, Figure 3B). [Figure 1 legend: (A, B) detection of the expression of lncRNA ZFAS1 and SLC38A1 in lung tissues; (C) Spearman analysis was used to analyze the association between lncRNA ZFAS1 and SLC38A1 expression in the lung tissues of BLM-induced pulmonary fibrosis cases; (D, E) the expression of lncRNA ZFAS1 and SLC38A1 in HFL1 cells treated with TGF-β1 or control was determined by RT-qPCR; (F) RT-qPCR was used to measure the expression of lncRNA ZFAS1 in the nucleus and cytoplasm of HFL1 cells; (G) FISH was performed to evaluate the location of endogenous lncRNA ZFAS1 (green) in HFL1 cells; U6 and GAPDH were used as nuclear and cytoplasmic localization markers, respectively; DNA (blue) was stained with DAPI. ***P<0.001, compared with the control group; ###P<0.001, compared with the NC group.] Furthermore, we detected the expression of GPX4 and MDA to evaluate the effect of lncRNA ZFAS1 on lipid peroxidation in ferroptosis. Western blot analysis showed that the protein level of GPX4 was significantly enhanced by Fer-1 administration and by lncRNA ZFAS1 silencing in TGF-β1-treated HFL1 cells (P<0.01, Figure 3D, 3E). Moreover, the level of MDA in the TGF-β1-treated group was significantly enhanced compared with that in the control group (P<0.001, Figure 3C), but Fer-1 treatment or lncRNA ZFAS1 silencing abrogated this effect (both P<0.001). In addition, there was no significant difference between the control group and the TGF-β1+sh-ZFAS1 group or the TGF-β1+Fer-1 group. Taken together, these results demonstrated that knockdown of lncRNA ZFAS1 decreased the promoting effect of TGF-β1 on inflammation and lipid peroxidation in HFL1 cells. miR-150-5p is a target of lncRNA ZFAS1, and miR-150-5p directly targets SLC38A1 in TGF-β1-induced HFL1 cells Accumulating evidence has confirmed that the lncRNA-miRNA-mRNA network plays an important role in the progression of PF [8,28]. In this study, the bioinformatics tool StarBase was applied to analyze potential interactions between lncRNA ZFAS1 and miRNAs. The results showed that miR-150-5p was predicted to be a potential target of lncRNA ZFAS1, and the potential binding site between miR-150-5p and lncRNA ZFAS1 is shown in Figure 4A. Moreover, previous studies have shown that miR-150-5p is associated with fibrosis [29]. As expected, RT-qPCR showed that miR-150-5p was expressed at low levels in the lung tissues of rats with PF compared with the control group (P<0.001, Figure 4B).
Similarly, Spearman's correlation analysis revealed a remarkably negative correlation between the expression of lncRNA ZFAS1 and miR-150-5p in lung tissues of the PF rat model (r=-0.785, P<0.01, Figure 4C). Consistently, the expression of miR-150-5p was downregulated in TGF-β1-treated HFL1 cells (P<0.01, Figure 4D). In addition, FISH analysis showed that lncRNA ZFAS1 and miR-150-5p co-localized in the cytoplasm of HFL1 cells (Figure 4E). Of note, Chen et al. found that lncRNA ZFAS1 binds to miR-150-5p in an AGO2-dependent manner [30]. As expected, the RIP results also showed that, compared with those in the control group (IgG), the expression levels of lncRNA ZFAS1 and miR-150-5p were highly increased in the AGO2 pellet (P<0.001, Figure 4F). Furthermore, the dual-luciferase reporter assay results showed that the luciferase activity of the WT reporter was lower in the lncRNA ZFAS1-WT+miR-150-5p group than in the lncRNA ZFAS1-WT+NC group (P<0.01, Figure 4G), with no effect on the luciferase activity of the MUT reporter. In addition, RNA pulldown showed that miR-150-5p-WT, but not the mutant, precipitated lncRNA ZFAS1 (Figure 4H), demonstrating their direct interaction. In light of this, we confirmed that miR-150-5p is a target of lncRNA ZFAS1 and that lncRNA ZFAS1 negatively regulates the expression of miR-150-5p. In addition, we found that miR-150-5p may directly target SLC38A1, as predicted by the StarBase database (Figure 5A). To further confirm that miR-150-5p specifically binds to the 3'UTR of SLC38A1 mRNA to regulate the expression of SLC38A1, we performed a dual-luciferase reporter assay. The results showed that the luciferase activity in the SLC38A1-WT+miR-150-5p mimic group was lower than that in the SLC38A1-WT+miR-NC group (P<0.01, Figure 5B), but there was no significant difference when the miR-150-5p mimic or NC was co-transfected with SLC38A1-MUT. In addition, we used western blot to test the expression level of SLC38A1 when miR-150-5p mimics were transfected into HFL1 cells treated with TGF-β1. The results indicated that the expression of SLC38A1 was significantly decreased in the miR-150-5p mimic group compared with the NC group (P<0.01, Figure 5C, 5D). Furthermore, Spearman's correlation analysis revealed a remarkably negative correlation between SLC38A1 expression and miR-150-5p expression in lung tissues of PF rats (r=-0.844, P<0.01, Figure 5E). In conclusion, these results suggest that SLC38A1 is a direct target of miR-150-5p, which negatively regulates SLC38A1 expression. [Figure 4 legend: (B) the expression of miR-150-5p in lung tissues was analyzed by RT-qPCR; (C) Spearman correlation was used to analyze the association between miR-150-5p and lncRNA ZFAS1 expression in the lung tissues of BLM-induced pulmonary fibrosis; (D) RT-qPCR was used to detect the expression of miR-150-5p in HFL1 cells; (E) FISH was performed to evaluate the location of endogenous lncRNA ZFAS1 and miR-150-5p in HFL1 cells; (F) RT-qPCR was applied to detect the expression of lncRNA ZFAS1 and miR-150-5p in the AGO2 pellet; (G) a dual-luciferase reporter gene assay was used to verify the targeting relationship between lncRNA ZFAS1 and miR-150-5p; (H) RNA pulldown assay showed that biotin-labeled miR-150-5p-WT interacted with lncRNA ZFAS1. ***P<0.001, compared with the control group; ##P<0.01, compared with the NC group; ΔΔΔP<0.001, compared with the IgG group; P<0.001, compared with the bio-NC group.]
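The Spearman correlations reported throughout (r = 0.792 for ZFAS1 vs. SLC38A1, r = -0.785 for ZFAS1 vs. miR-150-5p, r = -0.844 for miR-150-5p vs. SLC38A1) can be reproduced on any paired expression table with SciPy. A minimal sketch with hypothetical relative-expression values follows; only the function call mirrors the paper's analysis.

from scipy import stats
import numpy as np

# Hypothetical paired 2^-ddCt values for six lung-tissue samples.
zfas1  = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7])
mir150 = np.array([0.8, 0.4, 1.1, 0.3, 0.6, 0.5])

rho, pval = stats.spearmanr(zfas1, mir150)
print(f"Spearman rho = {rho:.3f}, p = {pval:.4f}")  # strongly negative here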
lncRNA ZFAS1 regulates FMT and ferroptosis via the miR-150-5p/SLC38A1 axis Next, we further explored the role of the lncRNA ZFAS1/miR-150-5p/SLC38A1 axis in the process of FMT and ferroptosis in the PF cell model induced by TGF-β1. First, we used SLC38A1 shRNA to decrease the expression of SLC38A1 in HFL1 cells, and the transfection efficiency was determined by western blot (P<0.01, Figure 6A, 6B). BrdU staining and wound healing assays showed that SLC38A1 knockdown decreased TGF-β1-induced cell viability and migration (all P<0.01, Figure 6C, 6D). However, there was no significant difference between the TGF-β1 treatment only group and the rescue groups (TGF-β1+si-SLC38A1+miR-150-5p inhibitor group or TGF-β1+si-SLC38A1+ZFAS1 group) (Figure 6C, 6D). Moreover, western blot results showed that the protein levels of α-SMA, collagen I and FN1 were decreased in the TGF-β1+si-SLC38A1 group compared with the TGF-β1 treatment only group (P<0.01, Figure 6F, 6G), while the expression of E-cadherin was increased (P<0.01). Furthermore, immunofluorescence staining showed that silencing SLC38A1 reversed the promoting effect of TGF-β1 treatment on the expression of α-SMA (Figure 6E) and upregulated the expression of E-cadherin (Figure 6H). However, the above effects were alleviated in the rescue groups. Overall, knockdown of SLC38A1 significantly inhibited TGF-β1-induced fibroblast activation. In addition, the loss-of-function effect of si-SLC38A1 on ferroptosis in TGF-β1-treated HFL1 cells was investigated. Western blot analysis showed that SLC38A1 suppression significantly increased the protein level of GPX4 compared with that in the TGF-β1 group (P<0.01, Figure 7A, 7B). In addition, ROS levels and MDA production were decreased upon SLC38A1 knockdown in TGF-β1-treated HFL1 cells (all P<0.01, Figure 7C, 7D). However, there was no significant difference between the TGF-β1 treatment group and the rescue groups (Figure 7). Taken together, the results showed that overexpression of lncRNA ZFAS1 increased the promoting effect of TGF-β1 on ROS levels and ferroptosis in HFL1 cells by downregulating the inhibitory effect of miR-150-5p on SLC38A1 expression. Knockdown of lncRNA ZFAS1 blocks BLM-induced PF via regulation of the miR-150-5p/SLC38A1 axis To further verify the role of the lncRNA ZFAS1/miR-150-5p/SLC38A1 axis in the progression of PF in vivo, H&E and Masson staining were performed. The results showed that BLM administration caused thickening of the pulmonary interalveolar septa, inflammatory cell infiltration, and increased collagen deposition compared with the control group (Figure 8A, 8B), whereas knockdown of lncRNA ZFAS1 attenuated these effects. Moreover, immunohistochemistry revealed that lncRNA ZFAS1 silencing decreased the expression of α-SMA in the lung tissues of the BLM-induced PF rat model (Figure 8C) but enhanced E-cadherin expression (Figure 8C). Consistent with the immunohistochemistry results, western blot showed that lncRNA ZFAS1 silencing decreased the expression of α-SMA, collagen I and FN1 in the lung tissues of the BLM-induced rat model (all P<0.01, Figure 8D) and increased the expression of E-cadherin (P<0.01, Figure 8D). Besides, knockdown of lncRNA ZFAS1 significantly increased the expression of miR-150-5p in lung tissues of the BLM-induced PF rat model (P<0.01, Figure 8E) but decreased the mRNA expression of SLC38A1 (P<0.01, Figure 8F).
Moreover, silencing of lncRNA ZFAS1 significantly decreased the expression of serum inflammatory cytokines in the BLM-induced rat model (all P<0.01, Figure 8G). Furthermore, the effect of lncRNA ZFAS1 on ROS levels and the expression of MDA and GPX4 in lung tissues was determined. The results showed that BLM administration significantly enhanced ROS levels and MDA generation compared with the control group (P<0.001, P<0.01, Figure 8H, 8I) but decreased GPX4 protein levels (P<0.01, Figure 8J). However, knockdown of lncRNA ZFAS1 reduced the inducing effect of BLM treatment on lipid peroxidation and ferroptosis (all P<0.01, Figure 8H-8J). Overall, knockdown of lncRNA ZFAS1 significantly attenuated BLM-induced PF and ferroptosis by regulating the miR-150-5p/SLC38A1 axis. DISCUSSION In this study, our results demonstrated that lncRNA ZFAS1 was highly expressed in lung tissues of the BLM-induced PF rat model. Moreover, knockdown of lncRNA ZFAS1 significantly inhibited BLM-induced PF by suppressing the FMT process and lipid peroxidation. Mechanistically, silencing of lncRNA ZFAS1 restricted BLM-induced PF and decreased the migration and FMT of HFL1 cells treated with TGF-β1 by sponging miR-150-5p. Furthermore, knockdown of SLC38A1 significantly attenuated the TGF-β1-induced FMT process and lipid peroxidation. Of note, SLC38A1, a key positive regulator of lipid peroxidation, was found to be a target gene of miR-150-5p. Collectively, these results suggest that knockdown of lncRNA ZFAS1 ameliorated BLM-induced PF, with lncRNA ZFAS1 acting as a ceRNA and sponging miR-150-5p to downregulate SLC38A1 expression, which provides a new therapeutic target for the treatment of PF. Accumulating evidence has confirmed that lncRNAs are long RNA transcripts without protein-coding ability that are widely involved in various aspects of cellular processes [31,32]. Previous studies suggest that abnormal lncRNA expression is closely related to the development and prognosis of many human diseases [33,34]. Hence, dysregulated lncRNAs may function as important biological markers for various diseases, including PF. lncRNAs exert their functions in different ways, such as modulation at the transcriptional, translational and post-translational levels. Recently, lncRNAs have attracted extensive attention as ceRNAs that bind to miRNAs through a "sponge" adsorption mechanism [35]. Thus, in addition to miRNAs mediating mRNA degradation and transcriptional inhibition, lncRNAs can reversely regulate miRNAs by competing with mRNAs for miRNA binding. At present, the role of lncRNAs as ceRNAs in the biological behaviors of human cancer cells has been reported broadly [36,37]; however, only a few studies have reported a similar role of lncRNAs in PF. For example, Yan et al. found that lncRNA MALAT1 promoted silica-induced PF by sponging miR-503 to upregulate the expression of PI3K [38]. Qian et al. confirmed that lncRNA ZEB1-AS1 promoted BLM-induced PF through ZEB1-mediated EMT via competitively binding miR-141-3p [39]. Li et al. found that lncRNA RFAL accelerated PF progression via CTGF by competitively binding miR-18a [40]. In the present study, we explored for the first time the role of lncRNA ZFAS1 in the progression of PF. As expected, our data revealed that lncRNA ZFAS1 was overexpressed in lung tissues of BLM-induced PF, and overexpression of lncRNA ZFAS1 accelerated the progression of PF by sponging miR-150-5p to upregulate SLC38A1 expression.
In addition, previous studies confirmed that lncRNA ZFAS1 acts as an oncogene to promote the growth and metastasis of multiple solid tumors by mediating the EMT and proliferation of cancer cells [30,41]. In recent years, abnormal expression of ROS has been associated with several diseases, including PF [42], and ROS plays an important role in regulating cell apoptosis, differentiation and viability. For example, high levels of ROS can promote TGF-β1-induced fibrosis and the FMT process [43]. Similarly, ROS mediates oxidative damage through upregulation of inflammatory cytokine secretion in the progression of PF [44,45]. Besides, the ROS level is an important marker of lipid peroxidation, and lipid peroxidation leads to ferroptosis by upregulating ROS levels in PF [16]. Similar results have been reported: the level of ROS was elevated in TGF-β1-induced HFL1 cells, but treatment with a ferroptosis inhibitor abolished this effect [21]. In the present study, our results showed that treatment with TGF-β1 significantly increased the levels of ROS and inflammatory cytokines, but Fer-1 administration or lncRNA ZFAS1 knockdown alleviated their expression. Published studies have shown that miRNAs target downstream genes to alleviate the progression of PF by affecting ROS levels [46]. For example, Fierro-Fernández et al. found that miR-9-5p suppressed ROS levels, pro-fibrogenic transformation and fibrosis by targeting NOX4 and TGFBR2 [47]. In addition, previous studies revealed that miRNAs play an important role in regulating the FMT process, fibroblast apoptosis and inflammation to mediate the progression of PF [48-51]. In this study, our data showed that knockdown of miR-150-5p significantly promoted the expression of inflammatory cytokines, fibroblast activation and ROS levels through its target SLC38A1. Ferroptosis is a lipid- and iron-dependent form of cell death that is distinct from apoptosis, pyroptosis and autophagy. Recently, studies found that ferroptosis biomarkers (ROS, MDA and GPX4) have been detected in the tissues of fibrosis-related diseases [52-54]. In the present study, we found that the expression of ROS and MDA was upregulated in HFL1 cells treated with TGF-β1, but GPX4 expression was decreased. However, treatment with Fer-1 alleviated the promoting effect of TGF-β1 on lipid peroxidation. Moreover, other studies confirmed that upregulation of GPX4 protects against the TGF-β1-induced FMT process and oxidative stress [16,21]. Besides, it has been demonstrated that regulation of the key proteins of the glutamate/cystine antiporter system Xc- (SLC38A1, SLC1A5, SLC3A2 and SLC7A11) can also block ferroptosis in cells; for instance, miR-137 targets SLC1A5 to decrease ferroptosis [55], and upregulation of SLC7A11 promoted ferroptosis [56]. In this study, we confirmed that SLC38A1 was upregulated in lung tissues of the BLM-induced PF rat model. Silencing of SLC38A1 inhibited the FMT process, lipid peroxidation and inflammatory cytokine secretion. In summary, we found that overexpression of lncRNA ZFAS1 and SLC38A1 was positively correlated with PF progression. Further study revealed that knockdown of lncRNA ZFAS1 significantly decreased fibroblast activation, lipid peroxidation, inflammation, and PF, acting as a ceRNA to downregulate SLC38A1 by sponging miR-150-5p.
Therefore, our findings suggest that the lncRNA ZFAS1/miR-150-5p/SLC38A1 axis plays an important role in the development and progression of PF and may provide a novel biomarker for the diagnosis and prognosis of PF. Animal model Male Sprague-Dawley rats (200-220 g) were purchased from Weitonglihua Company (Beijing, China) and maintained in a pathogen-free facility. All animal experiments were reviewed and approved by the Animal Ethics Committee of Kunming Medical University. After one week of adaptive feeding, a total of 30 rats were randomly divided into 3 groups (n=10 rats/group): a control group, a bleomycin (BLM) group, and a BLM+sh-ZFAS1 group. Rats in the BLM group received 5 mg/kg BLM (Nippon Kayaku, Japan) dissolved in phosphate-buffered saline (PBS), administered intratracheally to establish the PF model. Rats in the control group were treated with 0.05 mL PBS. Rats in the BLM+sh-ZFAS1 group were injected intraperitoneally with 30 μL lncRNA ZFAS1 shRNA adeno-associated virus 5 (Vigene Biosciences, USA) for 3 weeks prior to the injection of 5 mg/kg BLM sulfate. After 4 weeks, all rats were sacrificed, and lung tissues were collected for further experiments. Immunohistochemical staining was performed to observe the histomorphology and examine the expression of E-cadherin (1:1000, ab1416, Abcam, UK) and α-SMA (1:2000, ab32575, Abcam, UK) according to previous studies [39]. Hematoxylin-eosin (H&E) staining and Masson staining were performed to observe the morphological changes of lung tissues according to our previous studies [57]. Cell culture and transfection Human fetal lung fibroblast 1 (HFL1) cells were purchased from the Shanghai Institutes for Biological Sciences of the Chinese Academy of Sciences and cultured according to the manufacturer's instructions. Media were supplemented with 1% penicillin-streptomycin and 10% FBS (Gibco; Thermo Fisher Scientific, Inc.) at 37 °C and 5% CO2. Subsequently, the cultured cells were randomly divided into three groups after culturing for 12 h. The TGF-β1-treated group was treated with TGF-β1 (6 ng/mL; Sigma-Aldrich, USA). The control group received the same amount of solvent. The TGF-β1+ferrostatin-1 (Fer-1)-treated group was treated with Fer-1 (1 μM; Sigma-Aldrich, USA) prior to TGF-β1 treatment. RT-qPCR analysis The nuclear and cytoplasmic fractions were isolated using NE-PER™ Nuclear and Cytoplasmic Extraction Reagents (Thermo Fisher Scientific, USA) according to the manufacturer's instructions. Total RNA was isolated from tissues and cultured cells using TRIzol reagent (Qiagen GmbH) and reverse-transcribed into cDNA using the PrimeScript™ RT Reagent Kit with gDNA Eraser (Takara Bio, Inc.). Subsequently, qPCR was performed on an Applied Biosystems 7500 Real-Time PCR system (Thermo Fisher Scientific, USA) using SYBR Green PCR Master Mix (Takara, Japan) under the following conditions: initial activation at 95 °C for 5 s, then 35 cycles of denaturation at 94 °C for 15 s, annealing at 55 °C for 25 s, and extension at 70 °C for 30 s. The sequences of the qPCR primers are presented in Table 1. U6 and GAPDH were used as internal controls. The 2^-ΔΔCt method was used to calculate the relative expression of IL-6, IL-1β, TNF-α, GPX4, lncRNA ZFAS1, miR-150-5p and SLC38A1. qPCR was performed in triplicate. BrdU staining Cells were seeded into 96-well plates at 1×10^4 cells per well and cultured in a 37 °C incubator with 5% CO2 for 1 h to allow the cells to adhere.
Then, the cells were incubated with 5-bromodeoxyuridine (BrdU) for 1 h and stained with anti-BrdU (ab1893, Abcam, UK) following the manufacturer's protocol. All stained images were observed and captured with a scanning microscope (Olympus, Japan). Wound healing assay HFL1 cells (1.5×10^6 cells/well) were treated with different reagents, seeded in six-well plates and cultured until they reached confluence. Wounds were made in the cell monolayer by scratching with a 20 μL pipette tip. The plates were washed once with fresh medium after 24 h in culture to remove non-adherent cells. Following this wash, the cells were imaged. The migration distance was measured according to the following formula: Migration rate (%) = (W0h − W24h)/W0h × 100%. Fluorescence in situ hybridization (FISH) Cells were fixed with 4% formaldehyde for 20 min, incubated with 0.2 mol/L HCl for 0.5 h, and then incubated with 5 μg/mL proteinase K for 15 min. The cells were acetylated in a specific solution and hybridized with a FITC-labeled lncRNA ZFAS1 probe (5 μg/mL) for 1 day. Subsequently, the cells were washed twice with 2× SSC containing 0.01% Tween-20 at 55 °C. Afterwards, FITC-labeled probes were detected using standard immunofluorescence protocols. RNA immunoprecipitation (RIP) The RIP assay was conducted using the Magna RIP RNA-Binding Protein Immunoprecipitation Kit (Millipore, USA) according to the manufacturer's protocol. RIP was performed using HFL1 cell lysates and either anti-Ago2 (ab32381, Abcam, UK) or rabbit IgG (ab172730, Abcam, UK) as the antibody. Subsequently, RT-qPCR was used to detect the expression of the purified RNA. Determination of MDA generation The tissues or cells were mixed with a 9-fold volume of PBS, dispersed into a single-cell suspension, freeze-thawed 3 times, and centrifuged at 12,000 rpm for 10 min. The supernatant was collected as the protein sample, and the concentration was determined with a BCA protein quantification kit (Beyotime Biotechnology, China) according to the protocol. MDA content was detected with a Lipid Peroxidation (MDA) Assay Kit (ab118970, Abcam, UK) according to the manufacturer's protocol. Determination of the ROS level The OxiSelect In Vitro ROS/RNS Assay Kit (Cell Biolabs, USA) was used to detect the level of ROS in lung tissue and cell samples. The ROS level in each group was analyzed in triplicate using commercial kits according to the manufacturer's instructions. Dual-luciferase reporter gene assay The cells were seeded in 24-well plates at a density of 60%. According to the manufacturer's instructions, the reporter construct containing the lncRNA ZFAS1 wild-type (WT) or mutant (MUT) 3'UTR was co-transfected into cells with miR-150-5p using Lipofectamine 3000 reagent. After 48 h, the cells were collected and luciferase activity was measured with a Dual-Luciferase Assay System (Promega, USA). The target verification methods for SLC38A1 and miR-150-5p were similar to those described above. Statistical analysis All data were collected and presented as the mean ± standard deviation. The data were analyzed using SPSS 22.0 software (IBM, USA). Spearman correlation analysis was performed to analyze the correlations among lncRNA ZFAS1, miR-150-5p and SLC38A1 in lung tissues of the PF model using GraphPad Prism (version 8.0.2). P<0.05 was considered to indicate a statistically significant difference.
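The 2^-ΔΔCt calculation used in the RT-qPCR analysis above is compact enough to state explicitly. A minimal sketch follows; the Ct values are hypothetical illustrations, and GAPDH stands in for whichever internal control (GAPDH or U6) applies to the transcript.

def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # Livak 2^-ddCt method: normalize the target Ct to the internal
    # control within each condition, then compare conditions.
    dct_sample = ct_target - ct_ref            # treated cells
    dct_control = ct_target_ctrl - ct_ref_ctrl # control cells
    ddct = dct_sample - dct_control
    return 2.0 ** (-ddct)

# Illustrative Ct values for ZFAS1 (target) vs. GAPDH (reference):
print(fold_change_ddct(22.0, 16.0, 24.5, 16.2))  # ~4.9-fold upregulation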
AUTHOR CONTRIBUTIONS Zhaoxing Dong and Tao Zhang designed the study; Ting Li, Yongjun Liu, and Shuhan Zhao performed the experiments; Yanni Yang, Wen Lei, and Wenjuan Wu analyzed and interpreted the data; Yanni Yang, Ting Li, Yongjun Liu, Zhengkun Li, and Jin Li contributed analytical tools; Yanni Yang, Ting Li, and Yongjun Liu drafted the paper.
6,490.4
2020-05-26T00:00:00.000
[ "Biology", "Chemistry" ]
The Quadrupole Moment of Compact Binaries to the Fourth post-Newtonian Order: From Source to Canonical Moment As a crucial step towards the completion of the fourth post-Newtonian (4PN) gravitational-wave generation from compact binary systems, we obtain the expressions of the so-called "canonical" multipole moments of the source in terms of the "source" and "gauge" moments. The canonical moments describe the propagation of gravitational waves outside the source's near zone, while the source and gauge moments encode explicit information about the matter source. Those two descriptions, in terms of two sets of canonical moments or in terms of six sets of source and gauge moments, are isometric. We thus construct the non-linear diffeomorphism between them up to the third post-Minkowskian order, and we exhibit the concrete expression of the canonical mass-type quadrupole moment at the 4PN order. This computation is one of the last missing pieces for the determination of the gravitational-wave phasing of compact binary systems at 4PN order. near zone. We refer to [23-25] for discussions and details about the matching procedure we employ to link the near zone to the external zone. It is important to emphasize that the two descriptions in terms of canonical moments $\{\mathrm{M}_L, \mathrm{S}_L\}$ or in terms of source and gauge moments $\{\mathrm{I}_L, \mathrm{J}_L, \mathrm{W}_L, \mathrm{X}_L, \mathrm{Y}_L, \mathrm{Z}_L\}$ are physically equivalent, i.e. describe the same physical matter source, if and only if the canonical moments are related in a precise way to the source and gauge moments up to arbitrarily high orders [23]. Hence there is a coordinate transformation linking the two descriptions of the same source, which is a non-linear deformation of the linear gauge transformation parametrized by the gauge moments $\{\mathrm{W}_L, \mathrm{X}_L, \mathrm{Y}_L, \mathrm{Z}_L\}$. In this paper, we complete a missing step towards the knowledge of the 4PN orbital phasing for compact binary inspirals by obtaining the relation between the canonical mass quadrupole moment $\mathrm{M}_{ij}$ and the corresponding source one $\mathrm{I}_{ij}$ at the 4PN order. Such a relation was known previously at the leading 2.5PN order [23] and at the next-to-leading 3.5PN order [25]. Up to the 3.5PN order, the correction terms in the canonical quadrupole are quadratic in the multipole moments; the cubic corrections start at 4PN, and are thus the aim of the present computation. Let us recapitulate the complete result for the canonical quadrupole moment up to the 4PN order:¹
$$\mathrm{M}_{ij} = \mathrm{I}_{ij} + \frac{4G}{c^5}\Big[\mathrm{W}^{(2)}\,\mathrm{I}_{ij} - \mathrm{W}^{(1)}\,\mathrm{I}^{(1)}_{ij}\Big] + \text{(3.5PN quadratic terms)} + \text{(4PN cubic and tail terms)}\,, \qquad (1.1)$$
where the displayed bracket is the leading 2.5PN correction; the 3.5PN quadratic corrections involve couplings of the gauge moments to the source moments, such as $\mathrm{Z}_{a|\langle i}\,\mathrm{I}_{j\rangle a}$, and the 4PN part contains the new cubic terms, including hereditary tail integrals. Here $\mathrm{M}$ is the constant (ADM) total mass, $\mathrm{W}_{ij}$ and $\mathrm{Y}_{ij}$ for instance denote the quadrupoles associated with the series of gauge moments $\mathrm{W}_L$ and $\mathrm{Y}_L$, $\mathrm{W}$ and $\mathrm{W}_i$ are the monopole and dipole of $\mathrm{W}_L$, and so on. All those quantities are evaluated at time $t$ except when specified otherwise. [Footnote 1: Hereafter, we denote with a capital letter $L$ a multi-index with $\ell$ indices, e.g. $\mathrm{I}_L = \mathrm{I}_{i_1 i_2 \cdots i_\ell}$, and with angular brackets the symmetric-trace-free (STF) projection, e.g. $\mathrm{W}_{a\langle i}\,\mathrm{I}_{j\rangle a} \equiv \mathrm{STF}_{ij}[\mathrm{W}_{ai}\mathrm{I}_{ja}]$; we systematically use STF harmonics [26-29]; in the spirit of [22], we use the shorthands $\mathrm{J}_{i|L} \equiv \varepsilon_{i i_\ell a}\,\mathrm{J}_{aL-1}$ and $\mathrm{Z}_{i|L} \equiv \varepsilon_{i i_\ell a}\,\mathrm{Z}_{aL-1}$ for current-type moments; the superscript $(n)$ denotes $n$ time derivatives; $(-)^n$ stands for $(-1)^n$; and $c$ is the speed of light and $G$ the gravitational constant.] In addition, the source is considered to be stationary in the remote past, so that all time derivatives of the moments vanish before some instant $-\mathcal{T}$. This means in particular that the hereditary integrals are well defined.
The expression (1.1) is valid in the center-of-mass frame, for which the mass dipole moment $\mathrm{I}_i$ vanishes. The 4PN cubic terms are new with this paper, which extends the previously computed 3.5PN quadratic relation [23,25,30]. Note that the analogous relations for the mass octupole and current quadrupole moments (currently known only at the leading 2.5PN order) do not receive 3PN corrections and can be found in [25]. Interestingly, the relation between the canonical and source/gauge moments is not local, as is clear from the 4PN tail integrals appearing in the last line of (1.1). The associated scale $r_0$ is unphysical and should naturally disappear from any physical result, such as the 4PN flux. This will be a stringent test for our computation, which will however have to wait for the complete calculation of the 4PN radiative-type moment, directly observable at future null infinity. Notice however that the tail terms in (1.1) vanish in the case of circular orbits; see Eqs. (5.7) in [30]. Another worthy remark is that the subtleties arising from the use of an IR dimensional regularization scheme for the mass quadrupole have been completely tackled and solved in [19,20]. We can thus safely perform the computation in three dimensions, using the standard Hadamard regularization scheme. Finally, after the result (1.1), only one last step remains before getting the 4PN mass quadrupole: the three-dimensional computation of cubic non-linear terms called "tails-of-memory", entering the relation between the radiative quadrupole moment and the canonical one at 4PN order. This is left to future work. The plan of this paper is as follows. After reminders about the multipolar-post-Minkowskian (MPM) formalism in Sec. II, we describe the general method for relating the canonical moments to the source and gauge moments up to any post-Minkowskian (PM) order in Sec. III (extending earlier works [23,25]). Finally, Sec. IV is devoted to the practical implementation that led to the result (1.1), together with the required formulas for retarded integrals of non-linear source terms. The essential, but technical, near-zone expansion of tail integrals is presented in detail in App. A. The verification of our main result (1.1) via an alternative procedure is relegated to App. B. II. THE MULTIPOLAR-POST-MINKOWSKIAN EXPANSION The MPM formalism [29,31-33] obtains the general solution of the Einstein field equations outside a matter source in the form of a post-Minkowskian expansion, with each PM coefficient expanded as a formal multipolar series. The gothic metric deviation from the Minkowski metric, $h^{\mu\nu} \equiv \sqrt{-g}\,g^{\mu\nu} - \eta^{\mu\nu}$, where $g^{\mu\nu}$ is the inverse of the usual covariant metric $g_{\mu\nu}$, $\eta^{\mu\nu}$ that of the Minkowski metric, and $g \equiv \det(g_{\mu\nu})$, obeys Einstein's vacuum field equations in harmonic coordinates,
$$\Box h^{\mu\nu} = \Lambda^{\mu\nu}\,, \qquad \partial_\nu h^{\mu\nu} = 0\,. \qquad (2.1)$$
Here $\Box$ is the flat d'Alembertian operator and the gravitational source term $\Lambda^{\mu\nu}$ is at least quadratic in $h$ and its first and second partial derivatives. In this paper, we will reason to any PM order for the general method, but in practical computations, as we are interested in the 3PM interaction, we will only use the leading quadratic $\sim h\,\partial^2 h + \partial h\,\partial h$ and sub-leading cubic $\sim h\,\partial h\,\partial h$ pieces of the source term. A. The generic MPM algorithm The "generic" non-linear MPM solution of the field equations is sought in the form of a non-linear expansion in the field perturbation, labeled by the gravitational constant $G$.
Formally, it may be represented to arbitrarily high orders by the asymptotic series
$$h^{\mu\nu}_{\rm gen} = \sum_{n=1}^{+\infty} G^n\, h^{\mu\nu}_{{\rm gen}\,n} \,. \tag{2.2}$$
The starting point is the most general solution of the linearized Einstein vacuum equations in harmonic coordinates, $\Box h^{\mu\nu}_{{\rm gen}\,1} = \partial_\nu h^{\mu\nu}_{{\rm gen}\,1} = 0$, which can be written in terms of six sets of symmetric-trace-free (STF) multipole moments $\{I_L, J_L, W_L, X_L, Y_L, Z_L\}$, dispatched between a simpler linear solution called "canonical" and a linear gauge transformation, as
$$h^{\mu\nu}_{{\rm gen}\,1} = h^{\mu\nu}_{{\rm can}\,1} + \partial\varphi^{\mu\nu}_1 \,. \tag{2.3}$$
We employ the shorthand notation $\partial\varphi^{\mu\nu}$ for the linear gauge transformation. Denoting functionals of multipole moments by means of capital calligraphic letters, the functional dependences of the two terms in (2.3) are
$$h^{\mu\nu}_{{\rm can}\,1} = \mathcal H^{\mu\nu}_{{\rm can}\,1}\big[I_L, J_L\big] \,, \qquad \varphi^\mu_1 = \Phi^\mu_1\big[W_L, X_L, Y_L, Z_L\big] \,. \tag{2.4}$$
For obvious reasons, the moments $\{I_L, J_L\}$ (mass-type $I_L$ and current-type $J_L$) are called the source moments, while the moments $\{W_L, X_L, Y_L, Z_L\}$ are the gauge moments. The linear canonical solution, evaluated at field point $\mathbf x$ and at time $t$, reads explicitly [27–29, 34]
$$\begin{aligned} h^{00}_{{\rm can}\,1} &= -\frac{4}{c^2} \sum_{\ell \geq 0} \frac{(-)^\ell}{\ell!}\, \partial_L \!\left[ \frac{I_L(t-r/c)}{r} \right] , \\ h^{0i}_{{\rm can}\,1} &= \frac{4}{c^3} \sum_{\ell \geq 1} \frac{(-)^\ell}{\ell!} \left\{ \partial_{L-1}\!\left[ \frac{I^{(1)}_{iL-1}(t-r/c)}{r} \right] + \frac{\ell}{\ell+1}\, \varepsilon_{iab}\, \partial_{aL-1}\!\left[ \frac{J_{bL-1}(t-r/c)}{r} \right] \right\} , \\ h^{ij}_{{\rm can}\,1} &= -\frac{4}{c^4} \sum_{\ell \geq 2} \frac{(-)^\ell}{\ell!} \left\{ \partial_{L-2}\!\left[ \frac{I^{(2)}_{ijL-2}(t-r/c)}{r} \right] + \frac{2\ell}{\ell+1}\, \partial_{aL-2}\!\left[ \frac{\varepsilon_{ab(i} J^{(1)}_{j)bL-2}(t-r/c)}{r} \right] \right\} , \end{aligned} \tag{2.5}$$
where $r = |\mathbf x|$ represents the radial distance to the origin located in the source. From the harmonic coordinate condition $\partial_\nu h^{\mu\nu}_{{\rm can}\,1} = 0$, the mass monopole $I$ and current dipole $J_i$ must be constant, while the mass dipole $I_i$ varies linearly with time. In applications, we choose a center-of-mass frame for which $I_i = 0$. With these exceptions, the moments are arbitrary functions of time encoding the properties of the source. The linearized gauge vector is defined by
$$\begin{aligned} \varphi^0_1 &= \frac{4}{c^3} \sum_{\ell \geq 0} \frac{(-)^\ell}{\ell!}\, \partial_L \!\left[ \frac{W_L(t-r/c)}{r} \right] , \\ \varphi^i_1 &= -\frac{4}{c^4} \sum_{\ell \geq 0} \frac{(-)^\ell}{\ell!}\, \partial_{iL} \!\left[ \frac{X_L(t-r/c)}{r} \right] - \frac{4}{c^4} \sum_{\ell \geq 1} \frac{(-)^\ell}{\ell!} \left\{ \partial_{L-1}\!\left[ \frac{Y_{iL-1}(t-r/c)}{r} \right] + \frac{\ell}{\ell+1}\, \varepsilon_{iab}\, \partial_{aL-1}\!\left[ \frac{Z_{bL-1}(t-r/c)}{r} \right] \right\} . \end{aligned} \tag{2.6}$$
The gauge moments $\{W_L, X_L, Y_L, Z_L\}$ are arbitrary functions of time without restriction.

The MPM construction is defined by induction on the PM order $n$ [29]. Suppose that for some given $n \geq 2$ one has obtained the first $n-1$ PM coefficients $h^{\mu\nu}_{{\rm gen}\,m}$, $\forall m \leq n-1$. Then the next-order coefficient $h^{\mu\nu}_{{\rm gen}\,n}$ is constructed as follows. It satisfies
$$\Box h^{\mu\nu}_{{\rm gen}\,n} = \Lambda^{\mu\nu}_{{\rm gen}\,n} \equiv \Lambda^{\mu\nu}_n\big[h_{{\rm gen}\,1}, \cdots, h_{{\rm gen}\,n-1}\big] \,, \tag{2.7a}$$
where the source term $\Lambda^{\mu\nu}_{{\rm gen}\,n}$, being at least quadratic in $h$, depends only on the previous iterations as indicated. We first construct a particular retarded solution of the wave equation $\Box h^{\mu\nu}_{{\rm gen}\,n} = \Lambda^{\mu\nu}_{{\rm gen}\,n}$ as
$$u^{\mu\nu}_{{\rm gen}\,n} = \mathop{\rm FP}_{B=0}\, \Box^{-1}_{\rm ret} \left[ \left(\frac{r}{r_0}\right)^{B} \Lambda^{\mu\nu}_{{\rm gen}\,n} \right] , \tag{2.8}$$
where $\Box^{-1}_{\rm ret}$ denotes the usual retarded inverse d'Alembertian operator, and the symbol ${\rm FP}_{B=0}$ refers to a specific operation of taking the finite part (FP) in the Laurent expansion when the complex parameter $B$ tends to zero. This finite part involves the multiplication of the source term by the regularization factor $(r/r_0)^B$, where we introduce an arbitrary constant length scale $r_0$. Such an FP operation is required for dealing with source terms made of multipolar expansions like (2.5), which are singular at the origin $r = 0$. More generally, the regularized retarded integral operator ${\rm FP}\,\Box^{-1}_{\rm ret} \equiv {\rm FP}_{B=0}\, \Box^{-1}_{\rm ret}\,(r/r_0)^B$ is well defined when acting on a source term admitting an expansion when $r \to 0$ of the form, for any $N \in \mathbb N$,
$$f = \sum_{a_{\min} \leq a \leq N}\; \sum_{\ell}\; \sum_{0 \leq p \leq p_{\max}} \hat n_L \, r^a\, (\ln r)^p \, F_{L,a,p}(t) + o(r^N) \,, \tag{2.9}$$
where $\hat n_L = {\rm STF}[n_{i_1} n_{i_2} \cdots n_{i_\ell}]$, with $n_i = x_i/r$, is the STF spherical harmonic of order $\ell$, while the sum boundaries $a_{\min}$ and $p_{\max}$ are integers (depending on the PM order $n$). For any function in the class (2.9), we have $\Box\big[{\rm FP}\,\Box^{-1}_{\rm ret} f\big] = f$. At the end of the recursive process, the structure (2.9) itself turns out to be proved by induction.
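The finite-part prescription just introduced can be checked on a toy radial integral. The following sympy sketch is our own illustration (the integrand is invented): it shows how an explicit factor $B$ (or $B^2$) in front of a source picks out precisely the simple (or double) pole generated by the singular behaviour at $r = 0$:

```python
import sympy as sp

B, eps, rp = sp.symbols('B epsilon rp', positive=True)

# Toy radial integral with regularization factor r'^B: a simple pole 1/B appears
# when the unregularized integrand behaves like 1/r' near the origin.
I = sp.integrate(rp**(B - 1), (rp, 0, eps))        # equals eps**B / B

# One explicit factor B in front (the "b = 1" case): the finite part at B = 0
# is 1, independently of the radius eps of the small ball.
print(sp.limit(B * I, B, 0))                       # 1

# With a logarithm in the source, a double pole 1/B**2 appears, so a factor
# B**2 (the "b = 2" case) is needed to reach a finite, eps-independent limit.
J = sp.integrate(rp**(B - 1) * sp.log(rp), (rp, 0, eps))
print(sp.limit(B**2 * J, B, 0))                    # -1
```

This is the mechanism by which the quantities defined below with explicit factors $B$ or $B^2$ end up probing only the $r \to 0$ expansion of their sources.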
Because of the regularization scheme, the object $u^{\mu\nu}_{{\rm gen}\,n}$ does not satisfy the harmonic gauge condition, but a simple calculation using the fact that the source term is divergenceless, $\partial_\nu \Lambda^{\mu\nu}_{{\rm gen}\,n} = 0$, gives
$$w^\mu_{{\rm gen}\,n} \equiv \partial_\nu u^{\mu\nu}_{{\rm gen}\,n} = \mathop{\rm FP}_{B=0}\, \Box^{-1}_{\rm ret} \left[ B \left(\frac{r}{r_0}\right)^{B} \frac{n_i}{r}\, \Lambda^{\mu i}_{{\rm gen}\,n} \right] . \tag{2.10}$$
Due to the explicit factor $B$, this term is non-zero only when the integral generates a pole $\propto 1/B$ in the Laurent expansion as $B \to 0$. In turn, the pole arises only from the singular behaviour of the source term as $r \to 0$, which is of the type (2.9). Furthermore, the coefficient of the pole is necessarily a homogeneous retarded solution of the wave equation, since the source term has no pole; hence $\Box w^\mu_{{\rm gen}\,n} = 0$. At this stage, we apply the MPM "harmonicity" algorithm to construct from $w^\mu_{{\rm gen}\,n}$ another homogeneous retarded solution, say
$$v^{\mu\nu}_{{\rm gen}\,n} \equiv \mathcal V^{\mu\nu}\big[w_{{\rm gen}\,n}\big] \,, \tag{2.11}$$
satisfying $\partial_\nu v^{\mu\nu}_{{\rm gen}\,n} = -\,w^\mu_{{\rm gen}\,n}$ together with $\Box v^{\mu\nu}_{{\rm gen}\,n} = 0$. The formulas specifying the above harmonicity algorithm $w^\mu \longrightarrow \mathcal V^{\mu\nu}[w]$ are given by, e.g., Eqs. (2.11)–(2.12) in [33]. Finally, the metric at the PM order $n$, now satisfying the full Einstein vacuum equations in harmonic coordinates at that order, is naturally defined as
$$h^{\mu\nu}_{{\rm gen}\,n} = u^{\mu\nu}_{{\rm gen}\,n} + v^{\mu\nu}_{{\rm gen}\,n} \,. \tag{2.12}$$
This construction yields the most general solution $h^{\mu\nu}_{\rm gen}$ of the Einstein field equations in the vacuum region external to any isolated matter system [29]. It is obtained in the form of a functional of six sets of multipole moments,
$$h^{\mu\nu}_{\rm gen} = \mathcal H^{\mu\nu}_{\rm gen}\big[I_L, J_L, W_L, X_L, Y_L, Z_L\big] \,. \tag{2.13}$$

B. The canonical MPM algorithm

However, we also know that the general field of an isolated system in GR can be described by two and only two sets of STF multipole moments, which we call the "canonical" moments and denote $\{M_L, S_L\}$. Notably, these moments describe the GW propagation far away from the source, and they parametrize the two GW tensorial modes of GR [29]. The description of the external field of the source in terms of $\{M_L, S_L\}$ starts at linear order just with the canonical metric (2.5) instead of the generic metric (2.3); but, in this different set-up, the linear approximation is parametrized by the canonical moments $M_L$ and $S_L$:
$$h^{\mu\nu}_{{\rm can}\,1} = \mathcal H^{\mu\nu}_{{\rm can}\,1}\big[M_L, S_L\big] \,. \tag{2.14}$$
The canonical MPM algorithm then proceeds by induction over the PM order $n \geq 2$ exactly in the same way as before, i.e. following the synthetic steps
$$u^{\mu\nu}_{{\rm can}\,n} = \mathop{\rm FP}_{B=0}\Box^{-1}_{\rm ret}\Big[\Big(\tfrac{r}{r_0}\Big)^B \Lambda^{\mu\nu}_{{\rm can}\,n}\Big] \,, \quad w^\mu_{{\rm can}\,n} = \partial_\nu u^{\mu\nu}_{{\rm can}\,n} \,, \quad v^{\mu\nu}_{{\rm can}\,n} = \mathcal V^{\mu\nu}\big[w_{{\rm can}\,n}\big] \,, \quad h^{\mu\nu}_{{\rm can}\,n} = u^{\mu\nu}_{{\rm can}\,n} + v^{\mu\nu}_{{\rm can}\,n} \,. \tag{2.15}$$
This results in a full non-linear metric, which is a functional of the canonical moments only, and represents as well the most general solution of the Einstein field equations outside the source:
$$h^{\mu\nu}_{\rm can} = \mathcal H^{\mu\nu}_{\rm can}\big[M_L, S_L\big] \,. \tag{2.16}$$
The next section presents the general method to obtain the relationships linking the canonical moments $\{M_L, S_L\}$ to the set of source and gauge moments $\{I_L, J_L, \cdots, Z_L\}$, valid in principle to any PM order. We then apply this general method to the practical case of the cubic interaction to derive the 4PN relation (1.1).

III. RELATING THE CANONICAL MOMENTS TO THE SOURCE AND GAUGE MOMENTS

A. General method

In Sec. II, we constructed two full non-linear MPM solutions, Eqs. (2.2) and (2.16), which both represent the most general solution of the Einstein field equations in the vacuum region outside the source. Requiring that these two metrics describe the exterior field of the same physical system, we now impose that they are physically equivalent, i.e. that they just differ by a coordinate transformation. This implies unique relations between the canonical moments $\{M_L, S_L\}$ and the source and gauge moments $\{I_L, J_L, W_L, X_L, Y_L, Z_L\}$, which can be viewed as two physically equivalent sets of moments for describing the source as seen from its exterior.
We thus look for a coordinate transformation $x'^\mu = x^\mu + \varphi^\mu(x)$ such that
$$h^{\mu\nu}_{\rm gen}(x') + \eta^{\mu\nu} = \frac{1}{J}\,\frac{\partial x'^\mu}{\partial x^\rho}\,\frac{\partial x'^\nu}{\partial x^\sigma}\,\Big[ h^{\rho\sigma}_{\rm can}(x) + \eta^{\rho\sigma} \Big] \,, \tag{3.1}$$
where $J \equiv \det(\partial x'/\partial x)$ is the Jacobian of the transformation. Eq. (3.1) immediately follows from the definition $h^{\mu\nu} = \sqrt{-g}\, g^{\mu\nu} - \eta^{\mu\nu}$ and the transformation law of tensors [36]. Introducing the coordinate shift $\varphi^\mu$ such that $x'^\mu = x^\mu + \varphi^\mu(x)$, we can rewrite the statement (3.1), using the non-linear correction $\delta_\varphi h^{\mu\nu}_{\rm can}(x)$ to the metric $h^{\mu\nu}_{\rm can}(x)$ induced by the shift, as
$$h^{\mu\nu}_{\rm gen}(x) = h^{\mu\nu}_{\rm can}(x) + \delta_\varphi h^{\mu\nu}_{\rm can}(x) \,. \tag{3.2}$$
It is implicit that we work perturbatively in both the vector $\varphi^\lambda$ and the metric $h^{\lambda\rho}_{\rm can}$, so that we have for instance $h^{\mu\nu}_{\rm gen}(x') = \sum_{n\geq 0} \frac{1}{n!}\,\varphi^{\lambda_1} \cdots \varphi^{\lambda_n}\, \partial_{\lambda_1} \cdots \partial_{\lambda_n} h^{\mu\nu}_{\rm gen}(x)$. To linear order, the correction reduces to $\partial\varphi^{\mu\nu} = \partial^\mu\varphi^\nu + \partial^\nu\varphi^\mu - \eta^{\mu\nu}\partial_\rho\varphi^\rho$. Let us then pose, to any order,
$$\delta_\varphi h^{\mu\nu}_{\rm can} = \partial\varphi^{\mu\nu} + \Omega^{\mu\nu} \,, \tag{3.3}$$
where $\Omega^{\mu\nu}$ denotes a functional of $\varphi^\lambda$ and $h^{\lambda\rho}_{\rm can}$, as well as their derivatives, which is at least quadratic and can be computed perturbatively up to any order using Eq. (3.1). The harmonic gauge condition satisfied by both metrics $h^{\mu\nu}_{\rm gen}$ and $h^{\mu\nu}_{\rm can}$ implies that (as a consequence of the identity $\partial_\nu\,\partial\varphi^{\mu\nu} = \Box\varphi^\mu$)
$$\Box\varphi^\mu = -\,\partial_\nu \Omega^{\mu\nu} \,. \tag{3.4}$$
Let us now look for the coordinate shift in the form of a full PM expansion series,
$$\varphi^\mu = \sum_{n\geq 1} G^n\, \varphi^\mu_n \,, \tag{3.5}$$
and, conjointly, for the relations between the canonical moments and the source/gauge moments in the same PM form,
$$M_L = \sum_{n\geq 1} M^n_L \,, \qquad S_L = \sum_{n\geq 1} S^n_L \,. \tag{3.6}$$
Here $M^n_L$ and $S^n_L$ denote some $n$-th order non-linear functionals of the six types of source and gauge moments $I_K, \cdots, Z_K$ (with, say, $K = i_1 \cdots i_k$), which are to be determined. At linear order, as we have seen with Eq. (2.3), by definition of the two linear approximations for the two generic and canonical metrics,
$$h^{\mu\nu}_{{\rm gen}\,1} = \mathcal H^{\mu\nu}_{{\rm can}\,1}\big[I_L, J_L\big] + \partial\varphi^{\mu\nu}_1 \,, \tag{3.8}$$
where the functional $\mathcal H^{\mu\nu}_{{\rm can}\,1}$ is explicitly given by (2.5) and the gauge vector is parametrized by the gauge moments $\{W_L, \cdots, Z_L\}$ [see its expression (2.6)]. Eq. (3.8) means that the relation (3.2) is satisfied at leading order provided that (i) $\varphi^\mu_1$ coincides with the linear gauge vector (2.6), and (ii) the canonical moments reduce at this order to the source moments, $M^1_L = I_L$ and $S^1_L = J_L$. Thus, we see that the looked-for coordinate transformation will be a non-linear deformation of the linear gauge transformation associated with the shift (2.6).

Our recurrence hypothesis will be that all the $\varphi^\mu_m$'s in (3.5) are known up to a given PM order $n-1$, as are the functional relations (3.6) up to the corresponding order. Hence we assume that we have already determined the $\varphi^\mu_m$'s and the functionals $M^m_L$, $S^m_L$ in such a way that the equations (3.2)–(3.3) hold up to order $n-1$. This means that, for all $m \leq n-1$,
$$h^{\mu\nu}_{{\rm gen}\,m} = h^{\mu\nu}_{{\rm can}\,m} + \partial\varphi^{\mu\nu}_m + \Omega^{\mu\nu}_m \,.$$
Recall that $\Omega^{\mu\nu}_m$ is a non-linear, at least quadratic, functional of the coordinate shift and metric, and is therefore known following our induction hypothesis. Indeed, using (3.3) and (3.8), we get $\Omega^{\mu\nu}_1 \equiv 0$. Crucial in our hypothesis is the assumption that the canonical metric depends on the moments so far determined, i.e. the coefficients $h^{\mu\nu}_{{\rm can}\,m}$ are evaluated with the moments $M_L$ and $S_L$ truncated at order $n-1$ in (3.6).

B. Implementation to nPM order

With our recurrence hypothesis, let us see how, at the next PM order, $\varphi^\mu_n$ together with the functionals $M^n_L$ and $S^n_L$ are uniquely determined. We thus want to find $\varphi^\mu_n$ and $M^n_L$, $S^n_L$ such that
$$h^{\mu\nu}_{{\rm gen}\,n} = h^{\mu\nu}_{{\rm can}\,n} + \partial\varphi^{\mu\nu}_n + \Omega^{\mu\nu}_n \,, \tag{3.13}$$
where $h^{\mu\nu}_{{\rm gen}\,n}$ and $h^{\mu\nu}_{{\rm can}\,n}$ are defined precisely by the two MPM constructions in Sec. II, and where $h^{\mu\nu}_{{\rm can}\,n}$ is now a functional including $M^n_L$ and $S^n_L$. First of all, note that $\Omega^{\mu\nu}_n$ depends on $\varphi_k$ and $h_{{\rm can}\,k}$ for $k \leq n-1$, and is already known by our induction hypothesis. Second, apply the harmonic coordinate conditions on Eq. (3.13): this shows that, while $\varphi^\mu_n$ is one of our unknowns, its d'Alembertian $\Delta^\mu_n \equiv \Box\varphi^\mu_n$ is already determined, as we have [see Eq. (3.4)]
$$\Delta^\mu_n = -\,\partial_\nu \Omega^{\mu\nu}_n \,. \tag{3.14}$$
The explicit expressions of $\Omega^{\mu\nu}_n$ and $\Delta^\mu_n$ at quadratic and cubic orders are displayed in Eqs. (4.2), (4.7) and (4.8).
At this stage, we must use the specific definitions of the generic and canonical metrics given in Sec. II. Applying the d'Alembertian operator to (3.13), we find that the two source terms $\Lambda^{\mu\nu}_{{\rm gen}\,n}$ and $\Lambda^{\mu\nu}_{{\rm can}\,n}$ are related by
$$\Lambda^{\mu\nu}_{{\rm gen}\,n} = \Lambda^{\mu\nu}_{{\rm can}\,n} + \partial\Delta^{\mu\nu}_n + \Box\,\Omega^{\mu\nu}_n \,.$$
Hence, the particular retarded solutions of the two algorithms, $u^{\mu\nu}_{{\rm gen}\,n}$ and $u^{\mu\nu}_{{\rm can}\,n}$, defined respectively in (2.8) and (2.15), satisfy
$$u^{\mu\nu}_{{\rm gen}\,n} = u^{\mu\nu}_{{\rm can}\,n} + \mathop{\rm FP}_{B=0}\,\Box^{-1}_{\rm ret}\Big[\Big(\tfrac{r}{r_0}\Big)^B \big(\partial\Delta^{\mu\nu}_n + \Box\,\Omega^{\mu\nu}_n\big)\Big] \,. \tag{3.16}$$
Next, we introduce a linear-looking gauge transformation whose vector is defined by the retarded integral of $\Delta^\mu_n$, as
$$\phi^\mu_n = \mathop{\rm FP}_{B=0}\,\Box^{-1}_{\rm ret}\Big[\Big(\tfrac{r}{r_0}\Big)^B \Delta^\mu_n\Big] \,. \tag{3.17}$$
This vector satisfies $\Box\phi^\mu_n = \Delta^\mu_n$ but is not yet our looked-for vector $\varphi^\mu_n$; it a priori differs from it by a homogeneous retarded solution of the wave equation. Thanks to (3.17), we can advantageously rewrite (3.16) as
$$u^{\mu\nu}_{{\rm gen}\,n} = u^{\mu\nu}_{{\rm can}\,n} + \partial\phi^{\mu\nu}_n + \Omega^{\mu\nu}_n + X^{\mu\nu}_n + Y^{\mu\nu}_n \,. \tag{3.18}$$
The last two terms are the most interesting: they come from the non-commutation of the finite part of the retarded integral ${\rm FP}_{B=0}\,\Box^{-1}_{\rm ret}$ with the partial derivative, due to the differentiation of the regularization factor $(r/r_0)^B$ therein. They are thus given by
$$X^{\mu\nu}_n = \mathop{\rm FP}_{B=0}\,\Box^{-1}_{\rm ret}\Big[\Big(\tfrac{r}{r_0}\Big)^B \Box\,\Omega^{\mu\nu}_n\Big] - \Omega^{\mu\nu}_n \,, \qquad Y^{\mu\nu}_n = \mathop{\rm FP}_{B=0}\,\Box^{-1}_{\rm ret}\Big[\Big(\tfrac{r}{r_0}\Big)^B \partial\Delta^{\mu\nu}_n\Big] - \partial\phi^{\mu\nu}_n \,. \tag{3.19}$$
Observing that, for the class of multipole-expanded functions $f$ we are concerned with [see Eq. (2.9)], the statement $\Box({\rm FP}\,\Box^{-1}_{\rm ret} f) = f$ is always correct, we see that $X^{\mu\nu}_n$ and $Y^{\mu\nu}_n$ represent the "commutators" of the operators appearing inside the square brackets:
$$X^{\mu\nu}_n = \big[{\rm FP}\,\Box^{-1}_{\rm ret},\, \Box\big]\,\Omega^{\mu\nu}_n \,, \qquad Y^{\mu\nu}_n = \big[{\rm FP}\,\Box^{-1}_{\rm ret},\, \partial\big]\,\Delta^{\mu\nu}_n \,. \tag{3.20}$$
Since the differentiation of the regularization factor $(r/r_0)^B$ produces an extra factor $B$, the quantities $X^{\mu\nu}_n$ and $Y^{\mu\nu}_n$ are non-zero only when the integral develops a pole $\sim 1/B$. In that case, they are necessarily homogeneous (retarded) solutions of the wave equation: $\Box X^{\mu\nu}_n = \Box Y^{\mu\nu}_n = 0$. It is easy to see that, if $X^{\mu\nu}_n$ and $Y^{\mu\nu}_n$ were actually zero, the canonical moments $\{M_L, S_L\}$ would simply agree with their source counterparts $\{I_L, J_L\}$. As a result, the non-trivial relations between those moments entirely follow from the evaluation of the two quantities $X^{\mu\nu}_n$ and $Y^{\mu\nu}_n$. In our practical calculations, we reshuffle the commutators (3.20) and use equivalent expressions, Eqs. (3.21), which exhibit an explicit factor $B$ in front of the source terms, generated by the differentiation of the regularization factor $(r/r_0)^B$.

Next, we carry on the MPM algorithm by computing the divergence of (3.18). With Eq. (3.14), we obtain $w^\mu_{{\rm gen}\,n} = w^\mu_{{\rm can}\,n} + W^\mu_n$, where we have posed $U^{\mu\nu}_n \equiv X^{\mu\nu}_n + Y^{\mu\nu}_n$ and $W^\mu_n \equiv \partial_\nu U^{\mu\nu}_n$. It is in fact necessary and sufficient to apply the harmonicity algorithm $\mathcal V^{\mu\nu}$ only to the divergence of the sum of the two commutators (3.21). We have
$$W^\mu_n = \partial_\nu U^{\mu\nu}_n \,, \qquad V^{\mu\nu}_n = \mathcal V^{\mu\nu}\big[W_n\big] \,, \qquad H^{\mu\nu}_n = U^{\mu\nu}_n + V^{\mu\nu}_n \,. \tag{3.22}$$
The final step consists in remarking that $H^{\mu\nu}_n \equiv U^{\mu\nu}_n + V^{\mu\nu}_n$ is not only divergenceless, by definition of the harmonicity algorithm $\mathcal V^{\mu\nu}$, but is also a retarded homogeneous solution of the wave equation, since both $U^{\mu\nu}_n$ and $V^{\mu\nu}_n$ are separately such. Hence $H^{\mu\nu}_n$ satisfies the linearized vacuum Einstein field equations, i.e. $\Box H^{\mu\nu}_n = \partial_\nu H^{\mu\nu}_n = 0$, to which we know the general solution. Namely, it can be decomposed in a unique way as
$$H^{\mu\nu}_n = \mathcal H^{\mu\nu}_{{\rm can}\,1}\big[M^n_L, S^n_L\big] + \partial\psi^{\mu\nu}_n \,, \tag{3.23}$$
where $\mathcal H^{\mu\nu}_{{\rm can}\,1}$ denotes the linearized functional (2.5) of the moments, but computed with certain moments $M^n_L$ and $S^n_L$, and where $\partial\psi^{\mu\nu}_n$ is some linear-looking gauge transformation. The associated gauge "vector" $\psi^\mu_n$ is parametrized in a unique way by some moments $\{W^n_L, X^n_L, Y^n_L, Z^n_L\}$. The vector $\psi^\mu_n$ represents the homogeneous solution to be added to $\phi^\mu_n$, as given by Eq. (3.17), in order to recover Eq. (3.13) with the shift
$$\varphi^\mu_n \equiv \phi^\mu_n + \psi^\mu_n \,. \tag{3.24}$$
Finally, in the canonical metric we replace the previously determined set of moments $\{M^{n-1}_L, S^{n-1}_L\}$ by the more accurate, newly determined set $\{M^n_L, S^n_L\}$, modulo higher-order PM terms at least of order $\propto G^{n+1}$, which we can discard to order $n$.
Hence we have proved that the $n$PM contribution to the canonical moments $\{M_L, S_L\}$ is determined, as is the $n$PM piece of the coordinate shift $\varphi^\mu$, and our recurrence hypothesis is verified at the next order $n$. The practical implementation at quadratic and cubic orders is described in Sec. IV. As a verification, we have also followed an alternative approach, presented in App. B.

C. Extraction of physical multipole moments

A straightforward procedure permits reading off the expressions of the $n$-th order corrections to the canonical moments $\{M^n_L, S^n_L\}$ from Eq. (3.23), where the left-hand side $H^{\mu\nu}_n = U^{\mu\nu}_n + V^{\mu\nu}_n$ follows from the algorithm (3.22). In fact, since we have seen that $U^{\mu\nu}_n = X^{\mu\nu}_n + Y^{\mu\nu}_n$ is a retarded vacuum solution of the wave equation, we can directly, and uniquely, give the desired moments as functions of the ten sets of retarded STF moments composing $U^{\mu\nu}_n$. We thus pose the corresponding decomposition of $U^{\mu\nu}_n$ and follow the steps (3.22), successively computing $W^\mu_n$, $V^{\mu\nu}_n$ and $H^{\mu\nu}_n$, which is finally put into the form (3.23), on which we read off the physical moments $M^n_L$ and $S^n_L$ [Eqs. (3.27)]. For completeness, we also give the corrections $\{W^n_L, X^n_L, Y^n_L, Z^n_L\}$ to the gauge moments composing the gauge vector $\psi^\mu_n$ [Eqs. (3.28)].

IV. PRACTICAL IMPLEMENTATION

Let us apply the previously described procedure to determine the quadratic and cubic corrections $M^2_{ij}$ and $M^3_{ij}$ to the mass-type quadrupole moment $M_{ij}$ at the 4PN order [see Eqs. (3.6)]. Previous investigations [23, 25] focused on quadratic interactions and determined the mass quadrupole at the leading 2.5PN and sub-leading 3.5PN orders; such corrections are recalled in Eq. (1.1). However, in order to obtain the 4PN correction, we need to derive the cubic interactions. A preliminary dimensional analysis shows that, at the 4PN order and in the center-of-mass frame (where the mass dipole $I_i$ is vanishing), the only multipole interactions between the source and gauge moments are cubic and necessarily of the three types
$$M \times W \times I_{ij} \,, \qquad M \times M \times W_{ij} \,, \qquad M \times M \times Y_{ij} \,, \tag{4.1}$$
where $M$ is the constant ADM mass, $I_{ij}$ is the source mass quadrupole moment, and where the monopole $W$ and the two quadrupoles $W_{ij}$ and $Y_{ij}$ are gauge moments. Note that, out of the center-of-mass frame, a large number of additional interactions would appear, such as interactions involving the mass dipole $I_i$, but those are not needed in concrete applications.

A. Controlling the cubic source terms

The first step towards the practical determination of the cubic couplings is to successively construct the quadratic and cubic quantities $\Omega^{\mu\nu}_n$ and $\Delta^\mu_n$ that enter Eqs. (3.21) for $n = 2, 3$. To quadratic order we have the expression (4.2) for $\Omega^{\mu\nu}_2$. This is valid "on-shell", as we have used the facts that $\Box h^{\mu\nu}_{{\rm can}\,1} = \Box\varphi^\mu_1 = 0$, which hold at linear order. One can directly verify [see Eq. (3.14)] that the result agrees with [32, 37]. The quadratic couplings between source and gauge moments have been computed in [23, 25]. Notably, this yields [see (3.27a)] the 2.5PN correction $\tfrac{4G}{c^5}\big[W^{(2)}I_{ij} - W^{(1)}I^{(1)}_{ij}\big]$ to the quadrupole, and we have already given the 3.5PN contribution in (1.1). What remains to be computed, for insertion into the cubic quantities $\Omega^{\mu\nu}_3$ and $\Delta^\mu_3$, are the quadratic couplings contributing to the quadratic coordinate shift $\varphi^\mu_2$. As we have seen in (3.24), the shift is composed of two parts: the first one, $\phi^\mu_2$, has been defined in Eq. (3.17) and is computed by the usual techniques (see for instance Appendix A of [33]). The second part, $\psi^\mu_2$, is extracted from (3.23) as quadratic corrections to the four types of gauge moments; the general result has been provided in Eqs. (3.28).
Working out $\varphi^\mu_2 = \phi^\mu_2 + \psi^\mu_2$ for the needed quadratic interactions $M \times W$, $M \times W_{ij}$, $M \times Y_{ij}$ and $I_{ij} \times W$, we find the expressions (4.5). Note the presence of non-local tail integrals, involving $Q_m(y)$, the Legendre functions of the second kind, here defined with a branch cut from $-\infty$ to $1$ and related to the Legendre polynomials $P_m(y)$ by
$$Q_m(y) = \frac{1}{2}\, P_m(y)\, \ln\!\left(\frac{y+1}{y-1}\right) - \sum_{j=1}^{m} \frac{1}{j}\, P_{j-1}(y)\, P_{m-j}(y) \,. \tag{4.6}$$
We inject the expressions (4.5), together with the expressions for the canonical metric, notably $h^{\mu\nu}_{{\rm can}\,2}$ corresponding to the interaction $M \times I_{ij}$, which is also non-local and given by (B3) in [32], into the cubic $\Omega^{\mu\nu}_3$ and $\Delta^\mu_3$ that define (3.21). For convenience, we split $\Omega^{\mu\nu}_3$ into quadratic-type and purely cubic interactions, $\Omega^{\mu\nu}_3 \equiv \Omega^{\mu\nu}_{12} + \Omega^{\mu\nu}_{21} + \Omega^{\mu\nu}_{111}$, where the three pieces are defined in (4.7): $\Omega^{\mu\nu}_{12}$ and $\Omega^{\mu\nu}_{21}$ collect the couplings of a linear piece ($h_{{\rm can}\,1}$ or $\varphi_1$) with a quadratic one ($h_{{\rm can}\,2}$ or $\varphi_2$), while $\Omega^{\mu\nu}_{111}$ collects the purely cubic couplings of linear pieces. Recall that our computations are done on-shell, using the wave equations satisfied at linear order, $\Box h^{\mu\nu}_{{\rm can}\,1} = \Box\varphi^\mu_1 = 0$, and at quadratic order, $\Box h^{\mu\nu}_{{\rm can}\,2} = \Lambda^{\mu\nu}_{{\rm can}\,2}$. An important point is that $\Omega^{\mu\nu}_{111}$ only contains $h_{{\rm can}\,1} \times \varphi_1 \times \varphi_1$ and $\varphi_1 \times \varphi_1 \times \varphi_1$ terms, but no $h_{{\rm can}\,1} \times h_{{\rm can}\,1} \times \varphi_1$ sector. This fact allows us to discard it entirely in the practical implementation, since at the 4PN order we have only the three cubic interactions (4.1), which contain at most one gauge moment. Similarly, the cubic piece $\Delta^\mu_3$ of the coordinate-shift source is given by (4.8); there we have inserted $\Box\phi^\mu_2 = \Delta^\mu_2$, which comes from (3.17), and used the fact that $\Box\psi^\mu_2 = 0$. The latter quantities satisfy $\partial_\nu\Omega^{\mu\nu}_3 + \Delta^\mu_3 = 0$, consistently with the divergencelessness of the cubic source; in fact, the identity even holds separately for each sector [Eqs. (4.9)].

At this stage, we control the cubic source terms that are required to evaluate the "commutators" $X^{\mu\nu}_3$ and $Y^{\mu\nu}_3$ defined by the formulas (3.21). Using canonical relations between the Legendre functions, we find that the terms to be computed fall into two and only two classes,
$$I_{\rm inst} = \mathop{\rm FP}_{B=0}\,\Box^{-1}_{\rm ret}\left[ B^b \left(\frac{r}{r_0}\right)^{B} \frac{\hat n_L}{r^p}\, F(t - r/c) \right] , \tag{4.10a}$$
$$I_{\rm tail} = \mathop{\rm FP}_{B=0}\,\Box^{-1}_{\rm ret}\left[ B^b \left(\frac{r}{r_0}\right)^{B} \frac{\hat n_L}{r^p}\, G(t - r/c) \int_1^{+\infty} \!{\rm d}y\; Q_m(y)\, F(t - yr/c) \right] , \tag{4.10b}$$
with $b = 1$ or $2$. Here, the functions $F$ and $G$ represent some products of (source or gauge) multipole moments, the function $G$ possibly being constant when dealing with the ADM mass. The second type of term, $I_{\rm tail}$, integrates over a tail term, which comes from the tail present in the quadratic metric for the interaction $M \times I_{ij}$ [see (B3) in [32]] and from those which appeared in the coordinate shift [see Eqs. (4.5)]. Thanks to the factors $B$ or $B^2$, we know that the integrals (4.10) depend only on the behaviour of the source terms when $r \to 0$. Indeed, the regularization process ${\rm FP}_{B=0}$ has been introduced to cope with the singular behaviour of the source when $r \to 0$, and hence only that limit can generate poles $1/B$ or $1/B^2$ which can compensate the factors $B$, $B^2$ and lead to a finite part when $B \to 0$. Therefore we are entitled to restrict ourselves to a ball of finite, and even infinitesimal, size (say $r < \varepsilon$) and to replace the source terms in (4.10) by their formal Taylor expansion when $r \to 0$ (i.e., the near-zone or PN expansion $r/c \to 0$). Once the near-zone expansion of the source is known, the integration can be performed with standard techniques. The result for the simpler case $I_{\rm inst}$ (with purely instantaneous source term) was already given in Eq. (A.18) of [33]; it is non-zero only when $p \geq \ell + 3$ and $b = 1$ (only a simple pole $1/B$ can appear in this case), and reads as displayed in (4.11). Next, we deal with the more difficult integral $I_{\rm tail}$. By the previous argument, we can replace the function $G(t - r/c)$ by its formal Taylor expansion when $r \to 0$. The main problem is therefore the control of the expansion series when $r \to 0$ of the tail integral $\mathcal F_m \equiv \int_1^{+\infty} {\rm d}y\, Q_m(y)\, F(t - yr/c)$. We devote App. A to this not-so-easy question.
The general result for this expansion, stated in (4.12), is a formal power series in $r/c$: its instantaneous terms involve the coefficients $c^m_j$ and $\beta^m_i$ defined in (A5) and (A16), it contains logarithms $\ln(r/c)$ as well as the usual harmonic numbers $H_n$, and it ends with a genuine tail integral over the remote past of $F$. Thus, we have to multiply Eq. (4.12) by the Taylor expansion of $G(t - r/c)$ and integrate term by term. Note that all those formal expansions are convergent, due to the stationarity of the source in the remote past. As we see from the structure of the expansion (4.12), the final integration boils down to the control of just one type of term; however, the function $H(t)$ therein can be either an instantaneous function or a non-local tail integral of the type appearing in (4.12). Furthermore, still because of the tail term, we must add the case where there is a logarithm $\ln r$ in the source [see again (4.12)]. Thus, we consider, with $a = 0$ or $1$,
$$\mathcal J = \mathop{\rm FP}_{B=0}\,\Box^{-1}_{\rm ret}\left[ B^b \left(\frac{r}{r_0}\right)^{B} \frac{\hat n_L}{r^p}\, (\ln r)^a\, H(t - r/c) \right] . \tag{4.13}$$
We give a few details on the calculation of $\mathcal J$. Writing (4.13) in ordinary three-dimensional form, we have, since as we said the integration is limited to an infinitesimal ball $r' < \varepsilon$,
$$\mathcal J = -\frac{1}{4\pi}\, \mathop{\rm FP}_{B=0}\, B^b \int_{r' < \varepsilon} \frac{{\rm d}^3\mathbf x'}{|\mathbf x - \mathbf x'|}\, \left(\frac{r'}{r_0}\right)^{B} \frac{\hat n'_L}{r'^{\,p}}\, (\ln r')^a\, H\big(t - |\mathbf x - \mathbf x'|/c\big) \,. \tag{4.14}$$
Using the formal STF expansion of the multipolar factor when $r' \equiv |\mathbf x'| \to 0$, together with the angular integration performed using Eq. (A29a) in [29], we end up with radial integrals of the type
$$B^b \int_0^{\varepsilon} {\rm d}r'\; r'^{\,B + \ell + 2 + 2j - p}\, (\ln r')^a \,.$$
Here the $r' = 0$ boundary vanishes by analytic continuation in $B$. With the factor $B^b$ in front, the latter integral is zero unless there is a pole, and the pole can come only when $p$ is of the form $p = \ell + 3 + 2j$ (where $j \in \mathbb N$). Hence the results are immediate. When $a = 0$ (source without $\ln r$), the integral is zero when $b = 2$, as there is only a simple pole $1/B$ in this case; the non-zero result for $b = 1$ is given in (4.17a). One can check that this is perfectly consistent with the result for $I_{\rm inst}$ in (4.11), in the sense that one can Taylor expand the source term of $I_{\rm inst}$ when $r \to 0$ and recover the same result by applying Eq. (4.17a) to each term of the Taylor series. By contrast, when there is a $\ln r$, the integral is also non-zero when $b = 2$, because of the presence of a double pole $1/B^2$ [Eq. (4.17b)]. Finally, with those results in hand, we can implement all the terms up to cubic non-linear order, using the xAct library of the Mathematica software [38]. Such a computation ends up with the final result at the 4PN level already recapitulated in Eq. (1.1). In App. B, an alternative procedure is described, which permitted checking it independently.

Appendix A: Near-zone expansion of the tail integral

This appendix is devoted to the determination of the near-zone (or PN) expansion when $r/c \to 0$ of the tail integral that enters the source of the integral $I_{\rm tail}$ (4.10b). This will permit justifying the claim of Eq. (4.12) concerning the structure of the near-zone expansion, and will provide the explicit coefficients we need for the practical computation. Thus we look for the expansion of
$$\mathcal F_m(r, t) \equiv \int_1^{+\infty} {\rm d}y\; Q_m(y)\, F(t - yr/c) \,. \tag{A1}$$
While an explicit expression of the Legendre function of the second kind $Q_m(y)$ was given in Eq. (4.6), we preferably use here the general expression of the Legendre function for generic $\mu \in \mathbb C \setminus \{-1, -2, \cdots\}$ in terms of the hypergeometric function $F \equiv {}_2F_1$,
$$Q_\mu(y) = \frac{\sqrt{\pi}\;\Gamma(\mu + 1)}{\Gamma\big(\mu + \tfrac{3}{2}\big)\,(2y)^{\mu+1}}\; {}_2F_1\!\left( \frac{\mu+1}{2},\, \frac{\mu+2}{2};\, \mu + \frac{3}{2};\, \frac{1}{y^2} \right) , \tag{A2}$$
with $|y| > 1$, $|\arg y| < \pi$. This representation of the Legendre function reads explicitly, for any real argument such that $y > 1$,
$$Q_\mu(y) = \sum_{j=0}^{+\infty} c^\mu_j\; y^{-\mu - 1 - 2j} \,, \qquad c^\mu_j = \frac{\sqrt{\pi}\;\Gamma(\mu + 1 + 2j)}{2^{\mu + 1 + 2j}\;\Gamma(j+1)\,\Gamma\big(\mu + j + \tfrac{3}{2}\big)} \,. \tag{A3}$$
It can naturally be regarded as an asymptotic expansion when $y \to +\infty$ on the real axis. However, we stress that it is valid as soon as $y \in\, ]1, +\infty[$, since the series representation of the hypergeometric function is well defined in that case.
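As a quick numerical cross-check of the representations of the Legendre function of the second kind (our own sketch, using only the standard closed forms for $m = 0, 1, 2$ on $y > 1$):

```python
import math

# Closed forms Q_m(y) = (1/2) P_m(y) ln((y+1)/(y-1)) - (lower-order terms), cf. (4.6):
def Q0(y): return 0.5 * math.log((y + 1.0) / (y - 1.0))
def Q1(y): return 0.5 * y * math.log((y + 1.0) / (y - 1.0)) - 1.0
def Q2(y):
    P2 = 0.5 * (3.0 * y * y - 1.0)
    return 0.5 * P2 * math.log((y + 1.0) / (y - 1.0)) - 1.5 * y

y = 3.7
# The Q_m obey the same three-term recurrence as the Legendre polynomials:
# (m+1) Q_{m+1}(y) = (2m+1) y Q_m(y) - m Q_{m-1}(y); checked here for m = 1.
print(abs(2.0 * Q2(y) - (3.0 * y * Q1(y) - Q0(y))) < 1e-12)   # True

# Large-y falloff Q_m ~ c_0^m / y^{m+1}, consistent with the series (A3):
# for m = 2 the leading coefficient is 2/15 = 0.1333...
print(Q2(100.0) * 100.0**3)
```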
Defining $\tau \equiv yr/c$, one can then rewrite (A1) as the series
$$\mathcal F_m = \sum_{j=0}^{+\infty} c^m_j \left(\frac{r}{c}\right)^{m + 2j} F^j_m \,, \qquad F^j_m \equiv \int_{r/c}^{+\infty} {\rm d}\tau\; \tau^{-m - 1 - 2j}\, F(t - \tau) \,, \tag{A4}$$
where the coefficients are just given by (A3b) in the particular case where $m \in \mathbb N$, i.e.
$$c^m_j = \frac{(m + 2j)!}{2^j\, j!\, (2m + 2j + 1)!!} \,. \tag{A5}$$
Starting from the expression of $F^j_m$, we perform a series of integrations by parts in order to increase the power of $\tau$ until we reach a logarithm of $\tau$. The fully integrated terms contain the functions $F^{(n)}(t - r/c)$, which we replace by their Taylor expansions when $r \to 0$. The last integral, containing $\ln\tau$, is split according to $\int_{r/c}^{+\infty} = \int_0^{+\infty} - \int_0^{r/c}$. In the integral from $0$ to $r/c$, we are allowed to expand the integrand when $\tau \to 0$, since by definition $r/c \to 0$ for the PN expansion; the integral from $0$ to $+\infty$ just gives a tail integral in the ordinary sense. Therefore, we obtain an asymptotic expansion when $r/c \to 0$ of the form (A6). Very importantly, the value $i = m + 2j$ is to be excluded from the first summation. The last term is the tail integral, in which $H_n$ denotes the usual harmonic number. Resumming over $j$ yields the expression (A7), in which we already see the type of structure claimed in Eq. (4.12). The coefficients $\alpha^m_i$ in the first sum (corresponding to instantaneous terms) are still at this stage given by an infinite series,
$$\alpha^m_i = \sum_{\substack{j \geq 0 \\ m + 2j \neq i}} \frac{c^m_j}{m + 2j - i} \,. \tag{A8}$$
Despite that, the result (A7) can be dealt with as it is in practical calculations. However, as it turns out, the coefficient (A8) can be resummed in analytic closed form, and this yields a more interesting and powerful expression of the near-zone expansion of $\mathcal F_m$. To obtain such a form of $\alpha^m_i$, we take advantage of the fact that the coefficients $c^m_j$ can be generalized to any $\mu \in \mathbb C \setminus \{-1, -2, \cdots\}$ through the general definition of the Legendre function, see Eqs. (A3). Therefore, one can extend the definition of $\alpha^m_i$ to any generic value of $\mu$ by posing
$$\alpha^\mu_i \equiv \sum_{j \geq 0} \frac{c^\mu_j}{\mu + 2j - i} \,, \tag{A9}$$
where the coefficients $c^\mu_j$ are now given by (A3b). We assume that $\mu$ is non-integral, so that the condition $i \neq m + 2j$ is no longer necessary and has been dropped. Now, from the very definition of the coefficients $c^\mu_j$ in (A3), the fact that the hypergeometric series is absolutely convergent for $|y| > 1$, and the identity $\int_1^{+\infty} {\rm d}y\; y^i \times y^{-1-\mu-2j} = (\mu + 2j - i)^{-1}$, valid for $\Re(\mu) > i$, we prove that the coefficient $\alpha^\mu_i$ is actually given by
$$\alpha^\mu_i = \int_1^{+\infty} {\rm d}y\; y^i\, Q_\mu(y) \,. \tag{A10}$$
Furthermore, the latter integral is known in closed form [39]; hence we arrive at the closed-form expression (A12) for $\alpha^\mu_i$, in which $\binom{i}{k}$ is the usual binomial coefficient. Its validity may be extended to all non-integral values of $\mu$ by analytic continuation. Still, this expression is to be connected to the actual result (A8) we search for, as the latter excludes the value $i = m + 2j$ from the summation. However, posing $\mu = m + \varepsilon$, one can subtract and then re-add the contributions $i = m + 2j$ in (A9); in this way we rewrite $\alpha^m_i$ as a limit when $\varepsilon \to 0$ [Eq. (A13)]. In the case $i = m + 2j$, an explicit pole $1/\varepsilon$ has to be added, which cancels the pole present in $\alpha^{m+\varepsilon}_{m+2j}$ so that the limit is finite. Furthermore, since the coefficient $c^{m+\varepsilon}_j$ is finite when $\varepsilon \to 0$, with limit $c^m_j$ given by (A5), we can expand it to first order in $\varepsilon$ [Eq. (A14)]. Finally, the last step is to use the closed-form expression (A12) for $\alpha^{m+\varepsilon}_i$. The extra terms in (A14) are easily computed with the help of (A3b) and (A5). We verify that the limit $\varepsilon \to 0$ is indeed finite, and we obtain the result (A15), where the new coefficient $\beta^m_i$, defined in (A16), reads explicitly in terms of binomial coefficients, with the convention that $\binom{i}{k} = 0$ whenever $i < k$.
Hence our final result for the near-zone expansion of $\mathcal F_m$ reads as displayed in (A17).

Appendix B: Alternative derivation of the canonical moments

In the alternative procedure, the "difficult" quadratic-type pieces of the source are isolated with the notations $\delta U^{\mu\nu}_{{\rm quad}\,n} = \delta X^{\mu\nu}_{{\rm quad}\,n} + \delta Y^{\mu\nu}_{{\rm quad}\,n}$ [see Eqs. (B5)]; the term $\delta V^{\mu\nu}_{{\rm quad}\,n}$ is computed from the harmonicity algorithm, $\delta V^{\mu\nu}_{{\rm quad}\,n} = \mathcal V^{\mu\nu}\big[\delta W_{{\rm quad}\,n}\big]$, with $\delta W^\mu_{{\rm quad}\,n} \equiv \partial_\nu\, \delta U^{\mu\nu}_{{\rm quad}\,n}$. The source of the coordinate shift $\delta\phi^\mu_{{\rm quad}\,n}$, which involves the $(n-1)$-th order piece of the metric $h^{\mu\nu}_{{\rm can}\,n-1}$, is of the same undesirable type as $\delta\Lambda^{\mu\nu}_{{\rm quad}\,n}$. However, in the current procedure, the canonical moments will be read off from the gravitational waveform, which is not sensitive to linear-looking gauge transformations. On the other hand, although the commutator contributions do contain terms proportional to $h^{\mu\nu}_{{\rm can}\,n-1}$ or $\varphi^\mu_{n-1}$, their brute-force integration is not required. Thus, it will be possible to determine $M^n_L$ and $S^n_L$ without integrating any such term.

Once $\delta h^{\mu\nu}_{{\rm quad}\,n}$ is obtained, we compute the rest of the metric, $h^{\mu\nu}_{{\rm rest}\,n} \equiv h^{\mu\nu}_{{\rm gen}\,n} - \delta h^{\mu\nu}_{{\rm quad}\,n}$, by solving the equation $\Box h^{\mu\nu}_{{\rm rest}\,n} = \delta\Lambda^{\mu\nu}_{{\rm rest}\,n} \equiv \Lambda^{\mu\nu}_{{\rm can}\,n} - \delta\Lambda^{\mu\nu}_{{\rm quad}\,n}$ with the help of the standard MPM algorithm. By construction, the source term $\delta\Lambda^{\mu\nu}_{{\rm rest}\,n}$ depends neither on $h^{\mu\nu}_{{\rm can}\,n-1}$ nor on $\varphi^\mu_{n-1}$, and is thus free of the difficult contributions we wanted to avoid. At this stage, the $n$-th order waveform may be built from $h^{\mu\nu}_{\rm gen} = G\, h^{\mu\nu}_{{\rm gen}\,1} + \cdots + G^n h^{\mu\nu}_{{\rm gen}\,n}$ by taking the limit $R \to +\infty$ at constant asymptotically null time $U = T - R/c$, which corresponds to "radiative" coordinates, $T = t - \frac{2GM}{c^3}\ln(r/r_0)$ and $R = r$ [5]. In those coordinates, the leading-order contribution to the metric, $H^{\mu\nu}_{{\rm gen}\,n}$, admits the same expression as the one in harmonic gauge, but with the logarithms $\ln r$ effectively replaced by $\ln r_0$, and with $(t, r)$ replaced by their radiative counterparts $(T, R)$; see, e.g., Ref. [21]. This yields in particular [up to terms $O(R^{-2})$]
$$H^{\mu\nu}_{{\rm gen}\,n} = \Big[ h^{\mu\nu}_{{\rm rest}\,n} + \partial\delta\phi^{\mu\nu}_{{\rm quad}\,n} + \Omega^{\mu\nu}_{1,n-1} + \Omega^{\mu\nu}_{n-1,1} + \delta U^{\mu\nu}_{{\rm quad}\,n} + \delta V^{\mu\nu}_{{\rm quad}\,n} \Big]_{\ln r \to \ln r_0} \,,$$
where $\partial\delta\phi^{\mu\nu}_{{\rm quad}\,n}$ is the linear gauge transformation associated with $\delta\phi^\mu_{{\rm quad}\,n}$. The $n$-th order waveform $h^{{\rm TT}\,n}_{ij}$ is then the transverse-trace-free projection of the $1/R$ term in $H^{\mu\nu}_{{\rm gen}\,n}$. As the TT projection of linear gauge transformations vanishes at order $1/R$, the vector $\delta\phi^\mu_{{\rm quad}\,n}$ is actually not required. The result for $h^{{\rm rad}\,n}_{ij}$ is a certain functional of the source/gauge moments,
$$h^{{\rm rad}\,n}_{ij} = \mathcal H^{{\rm rad}\,n}_{ij}\big[I_L, J_L, \cdots, Z_L\big] \,, \tag{B8a}$$
and, for the canonical moments, we must also have at the same time
$$h^{{\rm rad}\,n}_{ij} = \mathcal H^{{\rm rad}\,n}_{ij}\big[M_L, S_L\big] \,. \tag{B8b}$$
The expressions of $M^n_L$ and $S^n_L$ are finally found by guesswork. We assume that they are sums of terms involving the source/gauge moments, with consistent index structures and physical dimensions but arbitrary coefficients. Those are fixed by identifying Eq. (B8a) with the outcome that ensues from inserting our ansatz into Eq. (B8b). Of course, if we wished to iterate the process to the next order $n+1$, we would eventually need to tackle the difficult integrals of source terms containing $h^{\mu\nu}_{{\rm can}\,n-1}$, which arise in the calculation of $h^{\mu\nu}_{{\rm can}\,n}$ and $\delta\phi^\mu_{{\rm quad}\,n}$. Nonetheless, we have managed to push this step to the very end.
This strategy is particularly relevant for determining the canonical moments at cubic order, for two reasons: (i) the latter task does not demand computing $h^{\mu\nu}_{{\rm can}\,3}$ nor $\delta\phi^\mu_3$, which means that all the retarded integrals we have to consider are sourced by functions of $h^{\mu\nu}_{{\rm can}\,1}$, $\varphi^\mu_1$, or their derivatives; (ii) the corresponding integrands have the form $f(t - r/c)\,\hat n_L/r^k$, with $k \in \mathbb N \setminus \{0, 1\}$, whose finite-part retarded integrals are explicitly known [33]. In practice, we build the cubic source $\Lambda^{\mu\nu}_3$, subtract the "difficult" part $\delta\Lambda^{\mu\nu}_3$, and apply the MPM algorithm to the rest, which leads to $h^{\mu\nu}_{{\rm rest}\,3}$. At last, we compute the commutators $\delta U^{\mu\nu}_{{\rm quad}\,3}$ with the method developed in Sec. IV B, from which we can infer $\delta V^{\mu\nu}_{{\rm quad}\,3}$. The cubic waveform follows from the effective metric
$$h^{\mu\nu}_{{\rm eff}\,3} = G\, h^{\mu\nu}_{{\rm gen}\,1} + G^2\, h^{\mu\nu}_{{\rm gen}\,2} + G^3 \Big[ h^{\mu\nu}_{{\rm rest}\,3} + \Omega^{\mu\nu}_{12} + \Omega^{\mu\nu}_{21} + \delta U^{\mu\nu}_3 + \delta V^{\mu\nu}_3 \Big] \,.$$
Our final result (1.1) for the canonical quadrupole moment was obtained following the general method of Secs. III A–III B, and has then been entirely checked using this approach.
10,568.6
2022-04-24T00:00:00.000
[ "Physics" ]
Application of Graph Theory in Grain and Oil Deployment

The deployment of grain and oil is related to the daily needs of the people and the stability of society. In this paper, we take the shortest path problem and the minimum cost maximum flow problem in graph theory as the theoretical basis. Through the establishment of a distance matrix between the reserve stations and the deployment warehouses, or between pairs of reserve stations, we use the Floyd algorithm to calculate the shortest path between any two points in the matrix, so as to determine the optimal emergency deployment plan. Through the establishment of a mathematical model of the reserve stations and deployment warehouses, we use the minimum cost maximum flow theory to solve the model and obtain the deployment programs of grain and oil under normal circumstances. Through the combination of the shortest path and the minimum cost maximum flow, we give the deployment plan under a general emergency situation, and thus provide a new way to deploy the supply of grain and oil in each case.

Introduction

Plentiful grain and oil reserves in each region are important for safeguarding food consumption demand, regulating the supply and demand balance of domestic food, stabilizing grain prices, and responding to major natural disasters or other emergencies, so as to protect the security and stability of society. Each year, after the harvest season, the national authorities carry out the acquisition of grain and oil supplies and then, according to the needs of each locality, allocate the distribution, so as to ensure that each place has plentiful grain and oil reserves. Grain and oil materials need to be transported across the country. However, facing complex road conditions and the different demands for grain and oil reserve materials in different places, if we do not make an overall plan, transport costs will increase during deployment, local grain reserves will experience shortages or excesses, and the life of the people and the security and stability of society will be affected. Therefore, under the premise of meeting the minimum grain and oil needs of different areas, how to convey the grain and oil materials to the various reserve repositories as much as possible at the lowest transportation cost is worthy of study; and, in the case of an emergency situation, how to deploy the grain and oil to meet the emergency needs faster and better is also worthy of study.
Currently, there is little research on applying the combination of the shortest path and the minimum cost maximum flow theory to grain and oil deployment. The existing literature has done some related research on this issue, but only on one aspect at a time. Shortest path theory is one of the most widely used network theories, and many problems can be optimized by using it. Given a known network, it seeks the path whose total arc weight is minimum among all the paths from a specified node $v_i$ to another node $v_j$ [1]. In the study of the shortest path problem, [2] studied the shortest path problem in road transport networks, converting an optimization problem of choices into a shortest path problem and then solving it. [3] studied the application of the shortest path problem to tourist route optimization and obtained the shortest path from any view spot to another, including the case where two view spots on the way are prescribed. [4] proposed a solution procedure for the elementary shortest path problem with resource constraints, presented computational experiments of the algorithm for their specific problem, and embedded it in a column generation scheme for the classical vehicle routing problem with time windows. [5] tried to answer the question of which algorithm for the one-to-one shortest path problem runs fastest on large real road networks, thereby addressing the key problem of computing shortest paths between different locations on a road network, which appears in many applications. [6] considered the variant of the shortest path problem in which a given set of paths is forbidden to occur as a subpath of an optimal path, established that the most efficient algorithm for its solution, a dynamic programming algorithm, has polynomial time complexity, and showed that this algorithm can be extended, without increasing its time complexity, to handle non-elementary forbidden paths. [7] investigated the time-dependent reliable shortest path problem, commonly encountered in congested urban road networks; two variants of the problem were considered, and the proposed solution algorithms have potential applications in both advanced traveler information systems and stochastic dynamic traffic assignment models. In the study of the minimum cost maximum flow problem, [8] studied the application of the minimum cost maximum flow algorithm to path planning, and found a method to solve complex dynamic programming problems that are generally difficult to solve by traditional algorithms. [9] [10] studied the application of the minimum cost maximum flow model to railroad freight and connecting cargo flights. Studies in which both methods are used for the deployment of grain and oil are scarce. [11] studied the resource scheduling scheme according to the optimal transport routes, the selection of transport modes and the distribution of emergency supplies, in order to deploy the most supplies from distribution centers to urgent areas within a limited time. [12] [13] studied the optimization problem of oil distribution based on the minimum cost flow theory. [14] obtained an optimal route in the more realistic situation of scheduling maximum flows at a minimum cost from a source to a destination; several special cases of the problem have been intensively studied in the literature and solved by various proposed techniques. [15] developed fast algorithms for
previously unstudied, specially structured minimum cost flow problems that have applications in many areas, such as locomotive and airline scheduling, repositioning of empty rail freight cars, highway and river transportation, congestion pricing, shop loading, and production planning.

The previous research literature has laid a foundation for further research on the deployment of materials, but it has not yet combined the shortest path problem with the minimum cost maximum flow problem and applied the combination to the study of grain and oil deployment, nor has it used different models to analyze different problems according to different situations. This paper will establish a comprehensive mathematical model of the deployment of grain and oil supplies based on research into the shortest path problem and the minimum cost maximum flow problem, and will analyze and present the deployment plans of grain and oil supplies in an emergency situation, under normal circumstances, and in a special circumstance. This can provide some references for the actual deployment of grain and oil supplies.

[16] hold the opinion that emergency logistics is a special logistics activity whose purpose is to provide emergency supplies for unexpected natural disasters, public health emergencies and other unexpected events, and to pursue the goal of maximizing time benefits and minimizing the losses from emergencies. Emergencies are the cause of emergency logistics; natural disasters, mistaken decisions, the complex international environment, the protection of consumer rights, and causes attributable to third parties are the main reasons why emergency logistics demand is generated [17]. Generally, emergency logistics has several typical features: suddenness, uncertainty, urgency, and weak economy [18]. Thus, emergency logistics differs from general logistics in that its requirements for timeliness are relatively high. We are generally unable to accurately estimate the duration, scope and intensity of sudden emergency events. Emergency logistics is characterized by urgency: when emergencies arise, if the logistics activities are carried out step by step as usual, the demand for emergency logistics cannot be met. So the economic benefit of logistics will no longer be the central objective of logistics activities. In any case, sudden major natural disasters such as earthquakes and snowstorms, or emergencies such as war, are bound to require a large amount of emergency supplies, especially grain and oil supplies. Therefore, logistics systems are needed to play their role in the emergency response, and the benefit of logistics can be achieved through logistics efficiency. We can then pursue the goal of maximizing time benefits and minimizing the losses from emergencies while meeting the needs of the emergency areas for emergency supplies.

Strategy Analysis of Emergency Deployment

Sudden emergencies will cause a shortage of emergency supplies in the short term. In order to minimize the risks caused by the lack of emergency supplies, ensuring the rapid deployment of emergency supplies becomes very important. Therefore, it is particularly important to deploy a certain quantity of the most needed supplies to the emergency area within the shortest possible time. Some researchers have previously done a certain amount of research on deployment strategy in emergency logistics.
[19] used dynamic programming and integer programming to make the distribution strategy of emergency supplies after an earthquake, establishing a distribution network between non-affected areas and emergency areas, so as to deliver the much-needed emergency supplies to the emergency areas in the shortest time, improve material distribution efficiency and reduce the losses caused by disasters. [20] analyzed the features of the deployment of emergency supplies in large-scale emergency events, and designed and explained the whole-process model of such deployment in three stages: scheduling preparation, scheduling implementation and scheduling assessment. [21] researched the emergency supplies scheduling problem with multiple rescue points under constraints, constructing an emergency rescue model that minimizes the number of rescue points under the premise of the earliest emergency rescue time. [22] considered a variety of practical problems in emergency areas, put forward a single-objective path selection model for the purpose of minimizing time, and used the improved Dijkstra algorithm to solve the model; then, on this basis, they constructed a complex multi-objective path selection model for the purposes of minimizing time and minimizing the complexity level of the path, and solved the model with the ant colony algorithm.

In this paper, the shortest path theory will be used to make an appropriate deployment program of grain and oil in an emergency situation. The shortest path problem can be used to solve the problem of how vehicles transport grain and oil supplies from a source point to a sink point in the minimum time, and it can be used to find the shortest path [23]. Typically, each province, in the deployment management of grain and oil, adopts a regional distribution system to deploy the grain and oil from the deployment warehouses to the reserve stations according to the division of administrative regions. Although this is quite feasible, in the face of emergency situations the grain and oil supplies deployed from deployment warehouse to reserve station cannot fully meet the emergency needs, and this will cause great losses. By using the minimum cost maximum flow theory, we can establish transport corridors between reserve stations, and between deployment warehouses and reserve stations. Then we can select a deployment warehouse owning grain and oil supplies to deploy them to the reserve stations lacking grain and oil supplies, according to the shortest distance between points, so as to respond quickly in an emergency event. We then use the Floyd algorithm to get the shortest distance between any two points. When an emergency event occurs, we can properly provide grain and oil supplies from the deployment warehouses or the reserve stations owning grain and oil to the emergency areas, according to the grain and oil inventory of each reserve station and the shortest distance between each two points, with the purpose of ensuring a sufficient grain and oil supply at the destination.
Model Establishment

1) Figure 1 shows the establishment of the shortest-path network diagram of grain and oil supplies deployment. Regard each reserve station and each deployment warehouse as a node of the abstract network graph, so as to establish an abstract network graph connecting every point. Regard the time that a vehicle spends between any two connected points as the "distance" between the two points. Let $v_i$ and $s$ denote the reserve stations and the deployment warehouse, and let $d$ denote the weight of the abstract network graph, i.e. the distance between two points. A one-way arrow means that the grain and oil supplies can only be deployed from one point to the other; a double-headed arrow means that the grain and oil supplies can be deployed in both directions between the two points. The shortest-path network diagram of grain and oil supplies deployment is as shown in Figure 1.

2) After obtaining the distances between connected points, we write the corresponding distance matrix $D = (d_{ij})_{n \times n}$ according to the network diagram. Here $d_{ij}$ represents the distance between two points; $\infty$ means that there is no direct deployment route between a reserve station and the deployment warehouse, or between two reserve stations; and $0$ is the distance from a reserve station or deployment warehouse to itself.

Model Solution

Then we can use the Floyd algorithm to solve the model. The idea of the Floyd algorithm is as follows. Assume that the network weight matrix is $D = (d_{ij})_{n \times n}$, where $l_{ij}$ represents the weight of the arc $(v_i, v_j)$, i.e. the direct distance from $v_i$ to $v_j$, and let $A$ represent the set of arcs.

1) Consider only the two points $v_i$ and $v_j$, deleting all other points; we get a subnetwork $D(0)$ which only contains $v_i$ and $v_j$, for which the shortest length is simply $d^{(0)}_{ij} = l_{ij}$.

2) Add $v_1$, and the arcs associated with $v_i$, $v_j$ or $v_1$ in $D$, to $D(0)$; we get $D(1)$. If we denote by $d^{(1)}_{ij}$ the shortest length of the path from $v_i$ to $v_j$ in $D(1)$, then $d^{(1)}_{ij} = \min\big\{ d^{(0)}_{ij},\; d^{(0)}_{i1} + d^{(0)}_{1j} \big\}$.

3) Add $v_2$, and the arcs associated with $v_i$, $v_j$, $v_1$ or $v_2$ in $D$, to $D(1)$; we get $D(2)$. If we denote by $d^{(2)}_{ij}$ the shortest length of the path from $v_i$ to $v_j$ in $D(2)$, then $d^{(2)}_{ij} = \min\big\{ d^{(1)}_{ij},\; d^{(1)}_{i2} + d^{(1)}_{2j} \big\}$.

Recursively, adding the $n$ points to the subnetwork one by one, we finally get the shortest length of the path from $v_i$ to $v_j$ in $D$: $d^{(n)}_{ij} = \min\big\{ d^{(n-1)}_{ij},\; d^{(n-1)}_{in} + d^{(n-1)}_{nj} \big\}$, which is the length of the shortest path from $v_i$ to $v_j$. Thus we obtain the shortest path between any two points, that is, the shortest deployment time between any two points. Then we can make an appropriate deployment program of grain and oil in an emergency situation according to the shortest deployment time between any two points, and ensure in time that the demand of the areas for grain and oil is met.
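The Floyd recursion just described translates directly into code. The following is a minimal sketch (our own; the representation of $\infty$ by float('inf') and the path-reconstruction helper are implementation choices, not part of the paper):

```python
INF = float('inf')

def floyd(D):
    """All-pairs shortest distances by the Floyd algorithm.
    D[i][j]: direct arc weight, INF if no arc, 0 on the diagonal.
    Returns (dist, nxt); nxt[i][j] is the successor of i on a shortest i->j path."""
    n = len(D)
    dist = [row[:] for row in D]
    nxt = [[(j if dist[i][j] < INF else None) for j in range(n)] for i in range(n)]
    for k in range(n):              # step k: allow v_k as an intermediate point
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def shortest_path(nxt, i, j):
    """Recover the node sequence of a shortest path from i to j."""
    if nxt[i][j] is None:
        return []
    path = [i]
    while i != j:
        i = nxt[i][j]
        path.append(i)
    return path
```

Exactly the same routine serves both variants of the strategy used later in the paper: with travel times as weights it returns the fastest emergency routes, and with unit transportation costs as weights it returns the cheapest ones.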
Problem Description

When there is no emergency situation, or after the acquisition of grain and oil, each region holds some grain and oil reserves. In this condition, when we carry out the deployment of grain and oil, under the premise of meeting the minimum grain and oil needs of the different areas, how to convey the grain and oil materials to the various reserve repositories as much as possible at the lowest transport cost should be taken into consideration. In this case, exploring the application of the minimum cost maximum flow theory to the deployment of grain and oil is more meaningful. In this paper, the minimum cost maximum flow theory will be used to make an appropriate deployment program of grain and oil under normal circumstances. The minimum cost maximum flow theory can be used to solve the problem of how vehicles transport grain and oil supplies from a source point to a sink point at the minimum cost. This overall arrangement will, as far as possible, ensure an adequate supply of grain and oil for each area in the steady state, while keeping the cost of the deployment process minimal; and this can ensure social harmony and stability to a certain extent.

By the minimum cost maximum flow theory, we can establish the transport corridors between reserve stations, and between deployment warehouses and reserve stations; we can establish the maximum transport capacity of each transport route; and we can establish the unit transportation cost of each transport route. Then, using the principles of the minimum cost and maximum flow to find the maximum flow from a source point to a sink point, or from one point to another, we can get the best deployment plan.

Model Establishment

In the deployment process of grain and oil supplies, consideration is needed to ensure the minimum grain and oil needs of the different areas, and the lowest transport costs should be used to convey the grain and oil materials from the deployment warehouses to the reserve stations as much as possible, so as to realize the minimum total transportation cost and the largest volume of transported grain and oil supplies in the whole process.

We assume that:

1) $b_{ij}$ is the unit transportation cost of grain and oil supplies from deployment warehouse $i$ to reserve station $j$.

2) $f_{ij}$ is the total amount of grain and oil supplies shipped from deployment warehouse $i$ to reserve station $j$.

3) $\alpha_i$ is the total amount of grain and oil supplies held at deployment warehouse $i$.

4) $\beta_j$ is the minimum amount of grain and oil supplies required at reserve station $j$ to ensure its needs.

5) $c_{ij}$ is the maximum traffic capacity of the road from deployment warehouse $i$ to reserve station $j$.

We regard the maximum amount deployed and the minimum total transportation cost as the objective functions, and regard the amounts of grain and oil supplies at the deployment warehouses, the maximum traffic capacities of the roads, and the minimum required amounts of grain and oil supplies at the reserve stations as the constraints, to establish the model. The model is as follows:
$$\max \sum_i \sum_j f_{ij} \,, \quad \min \sum_i \sum_j b_{ij} f_{ij} \,, \qquad \text{s.t.} \quad \sum_j f_{ij} \leq \alpha_i \,, \quad \sum_i f_{ij} \geq \beta_j \,, \quad 0 \leq f_{ij} \leq c_{ij} \,, \tag{1}$$
together with flow conservation at the intermediate points, only outflow at the source, and only inflow at the sink. In the formula:

1) Constraint 1 represents that the amount of grain and oil supplies deployed from deployment warehouse $i$ should not exceed the total amount of grain and oil supplies reserved in that warehouse.

2) Constraint 2 represents that the amount of grain and oil supplies deployed to reserve station $j$ should be at least the minimum required amount of grain and oil supplies of that station.
3) Constraint 3 represents that the traffic from deployment warehouse $i$ to reserve station $j$ should be within the maximum traffic capacity of the road.

4) Constraint 4 represents that the flow out of each intermediate point equals the flow into it. 5) Constraint 5 represents that, for the source, there is no inflow, only outflow. 6) Constraint 6 represents that, for the sink, there is no outflow, only inflow.

Model Solution

1) Because the minimum cost maximum flow problem is solved for a single source point and a single sink point, when we deploy grain and oil involving a single deployment warehouse and multiple reserve stations, we add a virtual point to the network graph; when multiple deployment warehouses and multiple reserve stations are involved, we add a virtual source point and a virtual sink point to the network graph.

Figure 2 shows the minimum cost maximum flow network diagram of grain and oil supplies deployment when there is a single deployment warehouse and multiple reserve stations. In this situation, $s$ is the source point representing the deployment warehouse; $A_1, A_2, A_3, V_1, V_2, V_3, V_4, V_5, V_6$ are the reserve stations; and $t$ is a virtual point added to the network graph, representing a virtual reserve station (the virtual sink). A one-way arrow means the deployment of grain and oil supplies; in the label $(d_{ij}, c_{ij})$, $d_{ij}$ represents the unit transportation cost of grain and oil supplies and $c_{ij}$ the maximum traffic capacity of the road from deployment warehouse $i$ to reserve station $j$.

Figure 3 shows the minimum cost maximum flow network diagram of grain and oil supplies deployment when there are multiple deployment warehouses and multiple reserve stations. In this situation, we add $s$ as a virtual source point representing the deployment warehouses; $V_1, V_2, V_3, V_4, V_5, V_6$ are the reserve stations; and we add $t$ as a virtual sink point representing a virtual reserve station. A one-way arrow means the deployment of grain and oil supplies; in the label $(d_{ij}, c_{ij})$, $d_{ij}$ represents the unit transportation cost and $c_{ij}$ the maximum traffic capacity of the road from deployment warehouse $i$ to reserve station $j$. Further, the weights of the arcs incident to the virtual source and the virtual sink are respectively $(d_{a1}, c_{a1}), \ldots, (d_{a7}, c_{a7})$ and $(d_{b1}, c_{b1}), \ldots, (d_{b7}, c_{b7})$.

2) Principles of the minimum cost and maximum flow

a) The minimum cost and maximum flow. For a network $D = (V, A)$, in which $c_{ij}$ is the capacity of the arc $(v_i, v_j)$ and $b_{ij}$ its unit cost, the minimum cost maximum flow problem consists in seeking a maximum flow $f = (f_{ij})$ such that the total cost of the flow, $b(f) = \sum_{(v_i, v_j) \in A} b_{ij} f_{ij}$, is minimum among all maximum flows.

b) Algorithm idea. If $f$ is the minimum cost feasible flow among all feasible flows of value $v(f)$, and $\mu$ is the minimum cost augmenting path among all augmenting paths with respect to $f$, then we can adjust $f$ by the adjusting value $\theta$ along $\mu$ and obtain a feasible flow $f'$; this $f'$ is the minimum cost feasible flow among all feasible flows of value $v(f) + \theta$. When $f'$ is a maximum flow, it is the minimum cost maximum flow we want to get. Therefore, starting from a minimum cost feasible flow $f$, we must find the minimum cost augmenting path. To this end, we construct a weighted directed network graph $W(f)$, whose points are the same as the points of $D$, and whose arcs and weights $w_{ij}$ are assigned according to the arcs $(v_i, v_j)$ of the network $D$: an unsaturated forward arc carries the weight $w_{ij} = b_{ij}$, and a backward arc with positive flow carries the weight $w_{ji} = -b_{ij}$ (arcs that cannot carry additional flow are assigned weight $+\infty$). Consequently, the problem of finding the minimum cost augmenting path with respect to a feasible flow is transformed into the problem of finding the shortest path from the source point to the sink point in this weighted network diagram. Denote by $f^{(k)}$ the minimum cost flow after $k$ adjustments.

c) Algorithm steps. Firstly, take as the initial feasible flow the zero flow $f^{(0)} = 0$. Secondly, for the current flow $f^{(k)}$, construct the weighted network $W(f^{(k)})$. Thirdly, if there is no shortest path (of finite weight) from the source to the sink in $W(f^{(k)})$, then $f^{(k)}$ is the minimum cost maximum flow; if there is a shortest path, then it corresponds to the augmenting path $\mu$ in the network $D$, and we transfer to step 4. Fourthly, according to the augmenting path $\mu$, adjust $f^{(k)}$ by the adjusting value $\theta$, equal to the smallest admissible adjustment along $\mu$; we then get a feasible flow $f^{(k+1)}$ of value $v(f^{(k)}) + \theta$, and go back to step 2.
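For completeness, the successive-shortest-path scheme above is also available off the shelf; the following sketch uses networkx on a small invented network (the node names, capacities and unit costs are ours, purely for illustration):

```python
import networkx as nx

# Tiny network: warehouse s, reserve stations v1 and v2, virtual sink t.
G = nx.DiGraph()
G.add_edge('s', 'v1', capacity=40, weight=2)   # weight = unit transport cost b_ij
G.add_edge('s', 'v2', capacity=30, weight=5)   # capacity = road capacity c_ij
G.add_edge('v1', 'v2', capacity=15, weight=1)
G.add_edge('v1', 't', capacity=25, weight=0)   # zero-cost arcs into the virtual sink
G.add_edge('v2', 't', capacity=40, weight=0)

flow = nx.max_flow_min_cost(G, 's', 't')       # maximum flow of minimum total cost
print(flow)
print(nx.cost_of_flow(G, flow))                # here: total flow 65 at cost 220
```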
With this algorithm, we obtain the minimum cost and maximum flow from the deployment warehouses to the reserve stations. Then we can make an appropriate deployment program of grain and oil under normal circumstances, ensuring the adequate grain and oil requirements of the different areas at minimum cost.

Application of the Combination of Shortest Path and Minimum Cost Maximum Flow Theory in Grain and Oil Deployment

Circumstance Analysis

When the urgency of the emergency situation is at a lower level, if the whole deployment of grain and oil supplies only takes time into consideration and uses the shortest path theory alone, then, because we only seek to deploy the grain and oil supplies in the shortest possible time in order to meet the demand of the area, without considering the transportation costs, the transportation cost will be very high. In order to better optimize the deployment plan, we need to take transportation costs into consideration; in this case, we can not only guarantee the required grain and oil supplies, but also reduce the transportation costs as much as possible in the whole deployment process. Conversely, if the whole deployment of grain and oil supplies is optimized using the minimum cost maximum flow theory alone, then, although we can transport grain and oil supplies to the reserve stations as much as possible at the lowest cost, we cannot guarantee the timeliness of the supply; that is, we cannot transport the grain and oil supplies to the desired places in time, which will affect the demand for grain and oil supplies of those places. Consequently, in this case, how to make a better deployment program, conveying the grain and oil supplies to the destination reserve stations as much as possible to meet the emergency needs while reducing the whole transportation cost as much as possible, is worth studying.

Strategic Planning

To begin with, we can use the shortest path theory to deploy one part of the grain and oil supplies. Firstly, we can regard time as the distance entries of the matrix and use the Floyd algorithm to find the shortest path, in order to deploy grain and oil supplies in the shortest time; then we can regard the unit transportation costs as the distance entries of the matrix and use the Floyd algorithm to find the shortest path, in order to deploy grain and oil supplies with the minimum transportation cost, so as to meet the requirement for grain and oil supplies within a certain period of time. Then we can use the minimum cost maximum flow theory to deploy the other part of the grain and oil supplies. Only by using this strategic plan can we ensure that the overall cost of the whole deployment process is minimal.

1) Regarding time as the distance entries of the matrix, we can obtain the shortest path between any two points, i.e. the shortest deployment time between any two points. Then we can develop deployment programs based on the shortest deployment time, so as to meet the needs of the destination for grain and oil supplies in time within a certain period.

2) Regarding the unit transportation costs as the distance entries of the matrix, we can obtain the shortest path between any two points, i.e. the minimum unit transportation cost of deployment between any two points. Then we can develop deployment programs based on the smallest unit transportation costs, in order to reduce the transportation costs of deployment.
Example One

A province of China has a large grain and oil deployment warehouse, denoted S, and nine regional grain and oil reserve stations, V_1, V_2, ..., V_9. Normally, this deployment warehouse can assure the grain and oil supply requirements of the nine reserve stations. However, when an emergency shortage of grain and oil occurs, the supplies deployed from the deployment warehouse to the reserve stations cannot fully meet the needs of the emergency situation, and this causes great losses. We should therefore establish transport corridors between reserve stations, and between the deployment warehouse and the reserve stations. We can then select the deployment warehouse holding grain and oil supplies to deploy them to the reserve stations lacking supplies according to the shortest distances between points, so as to respond quickly in an emergency event. Table 1 shows the shortest transportation times and the transport-path network between them [24,25].

1) Path network diagram establishment. Regarding the travel time of a vehicle between any two connected points as the "distance" between the two points, we establish the path network diagram connecting every point. Figure 4 shows the path network diagram.

2) Distance matrix establishment. We can establish the distance matrix according to the path network diagram. Finally, we obtain the shortest path between any two points from the solving results. We can then make an appropriate deployment program for grain and oil in emergency situations according to the shortest deployment time between any two points, and ensure the demand of the areas for grain and oil in time.

Example Two

An area of China has seven grain and oil deployment warehouses, A_1, A_2, ..., A_7, whose grain and oil reserves are, respectively, 1.08 million tons, 960,000 tons, 1.32 million tons, 1.26 million tons, 1.14 million tons, 1.38 million tons and 1.14 million tons. In addition, there are three regional grain and oil reserve stations, V_1, V_2 and V_3, whose minimum grain and oil requirements are, respectively, 2.76 million tons, 2.28 million tons and 1.32 million tons. When there is no emergency situation, that is to say, under normal circumstances, grain and oil supplies are deployed from the seven deployment warehouses to the three reserve stations [26]. Table 2 shows the amount of grain and oil supplies held by the seven deployment warehouses, the minimum grain and oil requirements of the three reserve stations, and the unit cost of transporting supplies from a deployment warehouse to a reserve station.

2) Model establishment

3) Model solution. We use the minimum cost and maximum flow theory to solve the model. Table 4 shows the solving results of the model. From the table we obtain the optimal deployment scheme, and the minimum total transportation cost is 983.7 million yuan.
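To make the formulation concrete, the sketch below casts Example Two as a minimum cost maximum flow network with a virtual source and virtual sink, here using the networkx library rather than the paper's own solver. The supplies and minimum requirements follow the text (in thousands of tons), but the unit transportation costs are placeholders, since Table 2 is not reproduced above; the resulting cost will therefore not match the paper's 983.7 million yuan.

```python
# Example Two as a min-cost max-flow network (placeholder unit costs).
import networkx as nx

supply = {"A1": 1080, "A2": 960, "A3": 1320, "A4": 1260,
          "A5": 1140, "A6": 1380, "A7": 1140}        # thousand tons
demand = {"V1": 2760, "V2": 2280, "V3": 1320}        # thousand tons
cost = {(a, v): 10 + 2 * ((i + j) % 5)               # hypothetical unit costs
        for i, a in enumerate(supply) for j, v in enumerate(demand)}

G = nx.DiGraph()
for a, s_cap in supply.items():                      # virtual source -> warehouses
    G.add_edge("s", a, capacity=s_cap, weight=0)
for v, d_cap in demand.items():                      # stations -> virtual sink
    G.add_edge(v, "t", capacity=d_cap, weight=0)
for (a, v), w in cost.items():                       # warehouse -> station arcs
    G.add_edge(a, v, capacity=min(supply[a], demand[v]), weight=w)

flow = nx.max_flow_min_cost(G, "s", "t")             # optimal deployment scheme
print("total cost:", nx.cost_of_flow(G, flow))
print("deployment from A1:", flow["A1"])
```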
Conclusions

In the deployment process of grain and oil supplies under emergency situations, this paper uses the shortest path theory to develop a deployment plan that responds quickly to the suddenness and uncertainty of emergencies and the shortage of emergency supplies. That is, based on the shortest path theory, we deploy the grain and oil supplies from the source point to the sink point along the shortest path, ensuring the timely deployment of emergency supplies.

Under normal conditions, we can use the minimum cost maximum flow theory to develop the deployment plan of grain and oil supplies. That is, under the premise of meeting the minimum grain and oil needs of the different areas, we can convey as much grain and oil material as possible to the various reserve repositories at the lowest transport cost. We can also use the minimum cost maximum flow theory to develop different deployment plans according to the minimum grain and oil demand of each area in different periods.

When the urgency of the emergency situation is at a lower level, we can combine the shortest path theory with the minimum cost maximum flow theory to make the deployment plan of grain and oil supplies. This can not only meet the needs of the destination but also reduce the whole transportation cost as much as possible.

Consequently, this application study of graph theory in grain and oil deployment can provide some references for the actual deployment of grain and oil supplies.

Figure 2. The minimum cost and maximum flow network diagram (including the virtual sink point).

Figure 3. The minimum cost and maximum flow network diagram (including the virtual source point and the virtual sink point).

Table 1. Transportation time and transport-path direction.
7,606.2
2015-07-21T00:00:00.000
[ "Mathematics", "Engineering" ]
Hypothalamic AgRP neurons exert top-down control on systemic TNF-α release during endotoxemia

Loss of appetite and negative energy balance are common features of endotoxemia in all animals and are thought to have protective roles by reducing nutrient availability to host and pathogen metabolism. Accordingly, fasting and caloric restriction have well-established anti-inflammatory properties. However, in response to reduced nutrient availability at the cellular and organ levels, negative energy balance also recruits distinct energy-sensing brain circuits, but it is not known whether these neuronal systems have a role in its anti-inflammatory effects. Here, we report that hypothalamic AgRP neurons, a critical neuronal population for the central representation of negative energy balance, have parallel immunoregulatory functions. We found that when endotoxemia occurs in fasted mice, the activity of AgRP neurons remains sustained, but this activity does not influence feeding behavior and endotoxemic anorexia. Furthermore, we found that endotoxemia acutely desensitizes AgRP neurons, which also become refractory to inhibitory signals. Mimicking this sustained AgRP neuron activity in fed mice by chemogenetic activation (a manipulation known to recapitulate core behavioral features of fasting) results in reduced acute tumor necrosis factor alpha (TNF-α) release during endotoxemia. Mechanistically, we found that endogenous glucocorticoids play an important role: glucocorticoid receptor deletion from AgRP neurons prevents their endotoxemia-induced desensitization, and importantly, it counteracts the fasting-induced suppression of TNF-α release, resulting in prolonged sickness. Together, these findings provide evidence directly linking AgRP neuron activity to the acute response during endotoxemia, suggesting that these neurons are a functional component of the immunoregulatory effects associated with negative energy balance and catabolic metabolism. Functionally, mimicking fasting-induced AgRP neuron activity by selectively increasing the activity of these neurons via chemogenetics is sufficient to reduce circulating TNF-α in otherwise fed mice, but the same manipulation does not reverse endotoxemic anorexia. Our data further indicate that endogenous glucocorticoids play an important role in maintaining AgRP neuron activity during endotoxemia and, importantly, that the loss of glucocorticoid signaling in these neurons reduces the anti-inflammatory effect of fasting, resulting in an increased TNF-α secretion and an overall prolonged sickness response.

The 470 nm and 405 nm excitation lights were modulated at 211 Hz and 511 Hz, respectively. Light was sent to a Fluorescence Mini Cube (FMC4_AE(405)_E(460-490)_F(500-550)_S; Doric Lenses) through low autofluorescence optical fiber patch cords (MFP_400/430/LWMJ-0.48_1m_FCM-FCM_T0.05; Doric Lenses). A pigtailed rotary joint (FRJ_1x1_PT_400/430/LWMJ-0.57_1m_FCM_0.06m_FCM; Doric Lenses) was used to allow for free movement of the mice. The 470 nm (Ca2+-dependent fluorescence) and 405 nm (Ca2+-independent isosbestic fluorescence) signals were collected through the same optic fiber cable, spectrally separated by a dichroic mirror, amplified and quantified using a Doric Fluorescence Detector (Doric Lenses). Data were collected via the Doric Neuroscience Studio Software (Doric Lenses) and demodulated using a lock-in detection algorithm. The 470 and 405 nm signals were processed and normalized to baseline independently to define ΔF/F = (F − Fbaseline)/Fbaseline using MATLAB (MathWorks), where Fbaseline is the mean of the fluorescence detected during the pre-stimulus period. The isosbestic ΔF/F was subtracted from the 470 nm ΔF/F, and data were down-sampled to 1 Hz in MATLAB (MathWorks).
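As a rough illustration of this processing chain (the authors used MATLAB), the following Python/numpy sketch computes ΔF/F for the 470 nm and isosbestic 405 nm channels, subtracts the isosbestic trace, and down-samples to 1 Hz; the raw sampling rate, the signal arrays and the pre-stimulus window below are illustrative stand-ins, not values from the paper.

```python
# dF/F normalization with isosbestic correction and 1 Hz down-sampling.
import numpy as np

def dff(trace, baseline_mask):
    """(F - Fbaseline) / Fbaseline, with Fbaseline = mean pre-stimulus signal."""
    f0 = trace[baseline_mask].mean()
    return (trace - f0) / f0

fs = 100                                   # raw sampling rate, Hz (assumed)
t = np.arange(0, 600, 1 / fs)              # 10 min recording
sig470 = np.random.rand(t.size)            # stand-in for the demodulated 470 nm signal
sig405 = np.random.rand(t.size)            # stand-in for the isosbestic 405 nm signal

pre = t < 60                               # pre-stimulus period: first minute
corrected = dff(sig470, pre) - dff(sig405, pre)   # subtract isosbestic dF/F

# Down-sample to 1 Hz by averaging within 1 s bins
corrected_1hz = corrected[: corrected.size // fs * fs].reshape(-1, fs).mean(axis=1)
```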
Mice used for photometry recording were validated before the initiation of the full experimental sequence. Presentation of food to a fasted mouse results in a rapid and substantial inhibition of AgRP neuron activity, and only subjects that exhibited this response were used for further testing (inclusion criteria: detection of spontaneous neuronal activity before, and at least 10% inhibition after, food contact/consumption). Post hoc validation of transgene expression and fiber placement was performed histologically at the end of each experiment.

INTRODUCTION

Loss of appetite and negative energy balance are common features of sickness in all animals, and although these responses are thought to have a protective role during acute infection, 1 there is a limited understanding of the mechanistic basis for this notion. Reduced nutrient levels and the catabolic metabolism associated with negative energy balance are crucial in mediating stress resistance and tissue protection from inflammatory and pathogen-induced damage. [2][3][4] However, under physiological conditions, in response to reduced nutrient availability at the cellular and organ levels, negative energy balance recruits distinct brain circuits whose activity changes upon variations in energy balance. 5 In particular, negative energy balance activates a discrete neuron population in the arcuate nucleus of the hypothalamus (ARC) defined by the expression of the agouti-related peptide (hereafter AgRP neurons). 6 Converging evidence indicates that AgRP neurons represent a critical neuronal substrate encoding the central representation of negative energy balance: AgRP neurons are naturally activated by fasting and inhibited during satiety; [6][7][8] AgRP neuron ablation causes cessation of feeding and lethal starvation, 9,10 whereas selective, artificial AgRP neuron activation promotes voracious feeding, 11,12 behavioral patterns, [13][14][15] and neuronal responses in cortical regions 16 similar to those occurring naturally during periods of food deprivation. Therefore, it is clear that negative energy balance and the neuronal mechanisms encoding its central representation are tightly connected. However, it is not known whether the negative energy balance associated with sickness modulates the immune response exclusively via reduced nutrient availability or also via a centrally mediated response. This question is particularly relevant when considering the long-standing proposal that the central nervous system exerts top-down control of immune and inflammatory responses, [17][18][19] as well as current progress in the neuroimmunology of inflammatory diseases. 20 Here, to investigate this knowledge gap, we used the molecularly and anatomically defined AgRP neurons as an entry point to the central representation of negative energy balance. We studied the dynamics and functions of AgRP neurons during acute endotoxemia and found that even though AgRP neuron activity does not affect appetite and nutrient intake, it does influence the inflammatory response.

RESULTS

Endotoxin-driven anorexia is independent of the activity of AgRP neurons

To monitor the activity of AgRP neurons in vivo during endotoxemia, we used fiber photometry (Figures 1A-1C).
We injected an adeno-associated virus (AAV) expressing the Cre-inducible calcium indicator jGCaMP7s into the ARC of AgRP CRE/+ mice and measured real-time, activity-dependent fluorescence via an indwelling optic fiber (Figures 1A-1C). As a reductive and tractable approach to induce endotoxemia, we used lipopolysaccharide (LPS), a wall component of Gram-negative bacteria, whose immunogenicity is rapid and triggers a well-characterized cytokine response. LPS does not cause any detectable inhibition of AgRP neurons in food-deprived mice (Figures 1D-1F and S1A), despite being potently anorectic. This indicates that an immunogenic challenge, such as LPS, does not inhibit the activity of AgRP neurons, whose activity remains sustained in food-deprived mice despite profound anorexia. We then tested the anorectic factor oleoylethanolamide (OEA), a lipid messenger released postprandially by small-intestinal enterocytes, which inhibits feeding without inducing immune activation or aversion. 21,22 Contrary to LPS, an anorectic dose of OEA produces a rapid and stable inhibition of AgRP neurons (Figures 1D-1F). Thus, anorexia induced by sickness or by physiological signaling seems to have a differential impact on the activity of AgRP neurons. To test this further, we artificially increased the activity of AgRP neurons using chemogenetics and tested whether LPS or OEA affects feeding when it is driven by increased AgRP neuron activity. We used mice expressing the stimulatory DREADD allele hM3Dq, together with a mCherry reporter protein, selectively in AgRP neurons (AgRP CRE/+ ::LSL-hM3Dq-T2A-mCherry; hereafter AgRP-hM3Dq; Figure 1H). As expected, a systemic injection of the hM3Dq actuator, clozapine-N-oxide (CNO), elicits activation of AgRP neurons (Figure S1C) and a robust feeding response in otherwise sated AgRP-hM3Dq mice, whereas it has no effect in littermate controls that do not express hM3Dq (Figure S1D). In line with previous studies, 23 chemogenetic activation of AgRP neurons is not sufficient to counteract the anorexia induced by endotoxemia (Figure 1I). By contrast, OEA fails to suppress feeding when AgRP neurons are artificially activated via chemogenetics (Figure 1J). Together, these data highlight a fundamental difference between the reductions in feeding caused by endotoxemia and those caused by physiological signaling, indicating that these two forms of appetite suppression are different in nature. Whereas anorexia induced by the satiation factor OEA occurs with concomitant inhibition of AgRP neurons, anorexia induced by endotoxemia does not and can occur despite the sustained activity of these neurons.

Endotoxemia desensitizes AgRP neurons to inhibition

In physiological conditions, the activity of AgRP neurons induced by food deprivation is rapidly inhibited when food is identified, as well as by post-ingestive nutrient signaling. [24][25][26] Thus, we asked how AgRP neurons respond to these inhibitory signals during endotoxemia. We confirm that AgRP neurons display a rapid inhibition upon sensory detection of food (Figure 2A). However, we found that during endotoxemia this anticipatory inhibition is almost absent (Figure 2A). While the significance of the AgRP neuron inhibition upon food contact remains unresolved, 27 it is considered an appetitive response predicting the amount of food consumed in the ensuing meal. 28,29
Consequently, the blunted AgRP neuron inhibition in endotoxemic mice could be an epiphenomenon due to a lack of appetitive drive toward food as a result of the endotoxemia-induced inflammatory response. Thus, we asked what the effect of nutrient ingestion on AgRP neuron activity would be if this suppression of appetitive drive were bypassed. To answer this question, we provided mice with nutrients via direct intragastric delivery. We found that while a 0.3-kcal bolus of glucose potently inhibits AgRP neurons in control mice, strikingly, the same bolus is minimally effective in inhibiting these neurons in endotoxemic mice (Figures 2B-2D): an indication that during endotoxemia, AgRP neurons become refractory to inhibition triggered by post-ingestive signals. Furthermore, we found that AgRP neuron inhibition upon intragastric glucose delivery returns to normal if mice are challenged again with a second dose of LPS 3-5 days later (Figure 2E), when LPS fails to trigger systemic inflammation due to the development of endotoxin tolerance. 30,31 Thus, these data further indicate that the endotoxemia-induced desensitization of AgRP neurons is primarily driven by the acute inflammatory response. We also asked whether endotoxemia promotes a general impairment of AgRP neuron responsiveness to metabolic signals and tested ghrelin, a gut-released hormone that signals energy deficit and potently activates AgRP neurons. We found, however, that AgRP neurons remain responsive to ghrelin administration and display robust activation in both endotoxemic and control mice (Figures 2F-2H). Together, these data indicate that endotoxemia rapidly desensitizes AgRP neurons to prevent their inhibition, but that the neurons remain able to respond to signals of energy deficit.

AgRP neuron activity during endotoxemia reduces acute TNF-α secretion

Our data, so far, provide converging evidence that AgRP neurons remain active when food-deprived mice experience endotoxemia but that their activity does not sustain feeding. Most fundamentally, these observations raise the question as to why these neurons would remain active during a state of endotoxemia-driven anorexia. In response to negative energy balance, animals forage for food and take risks, increasing the likelihood of injury and infection. We envision that the occurrence of microbial exposure and immune challenges during a state of negative energy balance might have endowed AgRP neurons with immunoregulatory functions, possibly conferring an evolutionary advantage. To test this idea, we activated AgRP neurons artificially via chemogenetics in otherwise well-fed mice and assessed the acute inflammatory response to LPS by measuring serum levels of tumor necrosis factor alpha (TNF-α), an early-secreted cytokine and hallmark of endotoxemia that coordinates the immune response and sustains sickness behavior. We found that an acute chemogenetic activation of AgRP neurons in AgRP-hM3Dq mice is sufficient to reduce circulating TNF-α levels in response to LPS (Figures 3B and S2B). This effect seems specific to TNF-α, as AgRP neuron activation does not affect circulating levels of the other early cytokines IL-1β and IL-6 (Figures 3C and 3D). Notably, several previous studies have established that systemic TNF-α release during endotoxemia can be neuronally regulated.
[17][18][19] We obtained a similar reduction in LPS-induced TNF-α secretion in separate cohorts of AgRP CRE/+ mice in which the hM3Dq allele was expressed in AgRP neurons post-developmentally using a Cre-inducible viral vector (Figure S2), ruling out the possibility of developmental expression of the hM3Dq in a cell population other than hypothalamic AgRP neurons. Furthermore, following AgRP neuron activation we found reduced LPS-induced Tnf mRNA (Figure 3E) and TNF-α protein (Figure 3F) levels in the liver, a principal source of serum TNF-α during endotoxemia. 17,32 We found no difference in the spleen (Figure 3H). Thus, the effect of chemogenetic activation of AgRP neurons on acute TNF-α secretion could be mediated, at least in part, by an attenuated hepatic response to endotoxemia. During negative energy balance, increased AgRP neuron activity can promote energy conservation by reducing thermogenesis, resulting in reduced body temperature. 14,33 However, chemogenetic activation of AgRP neurons reduces LPS-induced TNF-α levels even under thermoneutral conditions (32 °C; Figure S3B), which fully prevent the AgRP neuron-driven reduction of body temperature (Figures S3D and S3E). This indicates that this effect is independent of body temperature. Thermoneutrality also blunts the increase in locomotor activity (a proxy of foraging behavior) that follows chemogenetic activation of AgRP neurons (data not shown), further indicating that the reduction in TNF-α levels is independent of changes in physical activity. Together, these data indicate that AgRP neuron activity inhibits TNF-α production during the early-phase inflammatory response to endotoxemia. Therefore, the desensitization of AgRP neurons that we report to occur acutely during endotoxemia can be adaptive in nature, as the resulting sustained tone of the neurons might contribute to controlling TNF-α levels. Notably, our findings that AgRP neurons have anti-inflammatory properties also concur with a previous report showing that AgRP-specific knockdown of the key cellular metabolic sensor Sirtuin 1 (which impairs AgRP neuron excitability) results in a pro-inflammatory state and disease susceptibility. 34

Endogenous glucocorticoids mediate endotoxemia-induced desensitization of AgRP neurons and enable their anti-inflammatory effects

Next, we sought to understand the mechanism underlying the endotoxemia-induced AgRP neuron desensitization. It is well established that endogenous glucocorticoid levels increase acutely during endotoxemia. 35 AgRP neurons express the Nr3c1 gene that encodes the glucocorticoid receptor (GR), whose activation results in an increased AgRP tone. [36][37][38] Thus, we hypothesized that increased glucocorticoid levels could be one of the mechanisms driving the observed endotoxemia-dependent AgRP neuron desensitization. To test this hypothesis, we used Agrp CRE/+ ::Nr3c1 flx/flx mice (hereafter AgRP GR-KO), which bear a conditional deletion of the Nr3c1 gene in AgRP neurons, 37 and we transduced these neurons with the Cre-inducible calcium indicator jGCaMP7s to perform photometry recordings (Figures 4A-4C). As described above, we provided food-deprived mice with a 0.3-kcal bolus of glucose via intragastric delivery during normal or endotoxemic conditions. Strikingly, we found that in AgRP GR-KO mice, glucose gavage inhibits AgRP neurons to the same extent following either saline or LPS injection (Figures 4D-4F).
Thus, GR expression in AgRP neurons is necessary for the endotoxemia-dependent desensitization, revealing endogenous glucocorticoid action on these neurons as an important underlying mechanism. Finally, we capitalized on the AgRP GR-KO mice and sought to test the hypothesis that the endotoxemia-induced desensitization of AgRP neurons could be adaptive in nature, providing a mechanism to sustain fasting-induced AgRP neuron tone and contribute to controlling TNF-α levels. We reasoned that since the endotoxemia-dependent desensitization of AgRP neurons is absent in AgRP GR-KO mice, they should display an impaired fasting-induced suppression of TNF-α secretion during endotoxemia. To test this, we measured LPS-induced circulating TNF-α levels in fed and fasted AgRP GR-KO mice and littermate controls. As expected, we found that fasting significantly reduces TNF-α levels in response to endotoxemia in wild-type controls (Figure 4H). AgRP GR-KO mice, however, have similar TNF-α levels in both fed and fasted conditions (Figure 4H). Circulating levels of IL-1β and IL-6 do not differ significantly (Figure 4I). Furthermore, we found that AgRP GR-KO mice display a longer endotoxemia-induced anorexia when challenged with LPS (Figure S4A). There is no difference, however, between AgRP GR-KO mice and controls in spontaneous or fasting-induced feeding under physiological conditions (Figures S4B and S4C), indicating that glucocorticoid action is dispensable for the appetite-promoting functions of AgRP neurons. 39 Therefore, the prolonged endotoxemia-induced anorexia in AgRP GR-KO mice likely reflects an increased sickness response due to the loss of endogenous glucocorticoid signaling in AgRP neurons. These data add further support to the hypothesis that glucocorticoid signaling sustains the anti-inflammatory effect of AgRP neurons, and they might suggest a possible additional bottom-up mechanism through which glucocorticoids exert their pleiotropic effects on inflammation.

DISCUSSION

Here, we report that AgRP neuron activity remains sustained when endotoxemia occurs during a state of negative energy balance, such as fasting, and that AgRP neuron activity does not influence feeding behavior. We further found that systemic inflammation rapidly desensitizes AgRP neurons to inhibitory signals, preventing their inhibition. Functionally, mimicking fasting-induced AgRP neuron activity by selectively increasing the activity of these neurons via chemogenetics is sufficient to reduce circulating TNF-α in otherwise fed mice, but the same manipulation does not reverse endotoxemic anorexia. Our data further indicate that endogenous glucocorticoids play an important role in maintaining AgRP neuron activity during endotoxemia and, importantly, that the loss of glucocorticoid signaling in these neurons reduces the anti-inflammatory effect of fasting, resulting in an increased TNF-α secretion and an overall prolonged sickness response. Multiple lines of evidence have highlighted the impact of nutritional state on peripheral immunity, with food deprivation and caloric restriction exerting well-established anti-inflammatory adaptations. 40 These effects were first observed, and are particularly prominent, in the context of endotoxemia following exposure to bacterial products, 2-4 as experimentally mimicked in our studies using LPS.
However, even though the anti-inflammatory effects of fasting and the adaptive nature of negative energy balance during sickness responses have been generally attributed to reduced nutrient availability and its impact on host and pathogen metabolism, our results suggest that these effects are at least in part the result of a central, AgRP neuron-driven mechanism and not completely dependent on nutrient availability per se. To the best of our knowledge, our findings provide the first evidence directly linking AgRP neuron activity to the peripheral inflammatory response, pointing to AgRP neurons as a critical functional component of the anti-inflammatory effects associated with negative energy balance and catabolic metabolism.

STAR+METHODS

Detailed methods are provided in the online version of this paper.

(Figure 4 legend, continued) (I) LPS-induced circulating levels of IL-1β and IL-6 in fasted AgRP GR-WT and AgRP GR-KO mice. Data are presented as mean ± SEM or as individual biological replicates in box-and-whisker plots. 3V, third ventricle; ARC, arcuate nucleus of the hypothalamus; ME, median eminence. See also Figure S4.
4,267.2
2022-09-01T00:00:00.000
[ "Medicine", "Biology" ]
Modelling and Fault Current Characterization of Superconducting Cable with High Temperature Superconducting Windings and Copper Stabilizer Layer

With the high penetration of Renewable Energy Sources (RES) in power systems, the short-circuit levels have changed, creating the requirement for altering or upgrading the existing switchgear and protection schemes. In addition, the continuous increase in power (accounting both for generation and demand) has imposed, in some cases, the need for the reinforcement of existing power system assets such as feeders, transformers, and other substation equipment. To overcome these challenges, the development of superconducting devices with fault current limiting capabilities for power system applications has been proposed as a promising solution. This paper presents a power system fault analysis exercise in networks integrating Superconducting Cables (SCs). These studies utilized a validated model of SCs with second generation High Temperature Superconducting tapes (2G HTS tapes) and a parallel-connected copper stabilizer layer. The performance of the SCs during fault conditions has been tested in networks integrating both synchronous and converter-connected generation. During fault conditions, the utilization of the stabilizer layer provides an alternative path for transient fault currents, and therefore reduces heat generation and assists with the protection of the cable. The effect of the quenching phenomenon and the fault current limitation is analyzed from the perspective of both steady state and transient fault analysis. This paper also provides meaningful insights into SCs with respect to fault current limiting features, and presents the challenges associated with the impact of SCs on power system protection.

Introduction

Transmission System Operators (TSOs) are responsible for the security of power grids and for maintaining the balance between power generation and demand. However, new trends have emerged in power systems, pushing for a change in the way that networks are controlled and giving the TSOs plenty of new challenges to face in order to maintain the reliability and the security of power exchanges. The traditional power grids are gradually evolving towards power networks with high penetration of large-scale Renewable Energy Sources (RES) at both the distribution and transmission levels. In addition, more of the network's equipment is reaching its capacity limits, while at the same time the utilities face several converging challenges caused by demand growth. All these factors bring about new challenges for future power systems, requiring the development of bulk power corridors as interconnections between different countries, and the upgrading of existing networks. Consequently, in order to avoid technologically, economically, and socially challenging solutions, such as the building of new substations [1], there is a need to investigate new technologies which can overcome these restrictions and increase the electrical capacity and flexibility of the network. In addition, the penetration of RES changes significantly the fault levels and the resulting fault current signatures. Such changes imply the need for upgrading the existing switchgear and protection systems. As a result, the utilization of Resistive Superconducting Fault Current Limiters (RSFCLs) has been proposed in [2][3][4][5] as a viable solution towards addressing the challenge of managing short circuit currents in power-dense systems. However, RSFCLs are very expensive.
Therefore, many researchers have focused on the integration of fault current limiting features into other power system devices, in an attempt to take advantage of the unique features of superconducting materials while fulfilling the cost requirements [6]. The studies presented in this paper promote the utilization of SCs with a parallel-connected copper stabilizer layer. The main scope of this research is to study the fault current and voltage signatures resulting from the utilization of SCs. Emphasis is given to the fault current limitation feature (as an extension to the cable's primary function as a lossless transmission medium during steady state operation), in conjunction with the assessment of the potential benefits of the copper stabilizer layer during transient phenomena. The obtained results provide useful information regarding the fault analysis of future power grids integrating SCs and high amounts of RES, which can be considered a prerequisite step for designing effective protection schemes.

Characteristics of Superconducting Cables

In recent years, the deployment of Superconducting Cables (SCs) in power system applications has become widely accepted due to their unique characteristics. Several prototype projects have been carried out worldwide which proposed the utilization of different configurations of SCs as a viable solution for bulk power transmission [7][8][9][10][11][12]. Compared to conventional cables, SCs are characterized by a plethora of technically attractive features, such as higher current-carrying capability [13], higher power transfer at lower operating voltages and over longer distances [1,14], lower losses due to their lower resistance compared to that of overhead transmission lines [15], and a more compact size due to their high current density. Therefore, the installation of SCs is considered a promising solution against congestion, especially in high power density areas such as metropolitan meshed networks. Furthermore, their fundamental property of transferring power over long distances at low voltage levels renders them the most effective way to interconnect renewable energy sources, such as offshore wind farms, to the power grid. The superconducting behavior appears after cooling the superconductor below a characteristic temperature, known as the critical temperature T_C, which has a specific value for each superconducting material [16][17][18][19]. The maximum current that can be conducted through the superconductor without encountering an increase in the resistance value is called the critical current I_C. However, superconductors lose their superconductivity if the magnetic field reaches its critical value H_C or if the temperature increases beyond T_C. This phenomenon is called quench. These remarkable physical properties make SCs capable of conducting currents with approximately zero electrical resistance during steady state, while their variable resistance, which depends on the load current, in conjunction with the introduction of a highly resistive layer into the superconducting wire, such as copper, results in fault current limitation in short-circuit situations. The contribution of SCs to fault current limitation is determined by the design.
Challenges Associated with the Installation of High Temperature Superconducting (HTS) Cables

The discovery of HTS materials created the opportunity of applying superconductivity principles to electric power devices such as superconducting machines and SCs. The major advantage of HTS materials is that their high critical temperature values, T_C, are attainable using liquid nitrogen, LN2, as a coolant, with a boiling temperature of 77 K [20][21][22]. For the presented case studies, the Yttrium Barium Copper Oxide (YBCO) material has been chosen, with T_C(YBCO) = 93 K, which belongs to the second generation of HTS tapes (2G); its transition from the superconducting state to the normal state lasts only a few milliseconds, which makes it attractive considering the fault current limitation capability [8]. In addition, one of the most challenging tasks to be achieved is the connection between HTS cables and existing conventional circuits [23][24][25]. It is important to understand that the direction and the magnitude of power flows could be affected by the installation of HTS cables, due to their low impedance. During steady state conditions, HTS cables operate in the superconducting state, presenting a current path with approximately zero resistance and, as a consequence, naturally attracting the power flow. These significant changes in the current distribution and the rearrangement of power flows must be considered in order to maintain power system stability. Furthermore, the installation of HTS power cables impacts the short-circuit level of the power system. The changes in the short-circuit level, and consequently in the fault currents, affect the performance and design of power system protection schemes. The incorporation of the parallel copper layer and of fault current limiting features in SCs has made them increasingly appealing for power system applications [26,27]. In steady state conditions SCs transmit bulk power with low losses. Under fault conditions, when the fault current flowing through the HTS tapes exceeds the critical current I_C, the superconducting tapes automatically quench and switch to the normal resistive state. As the fault current increases, the resistance and the temperature of the cable increase as well, as interdependent variables. The transition from the superconducting state to the normal resistive state during short circuit conditions can occur within milliseconds (i.e., within a single AC cycle). Consequently, the integration of the fault current limiting feature into the HTS cable can limit the short-circuit current to a certain point, helping to protect the system [27]. This property of SCs creates new challenges for power system protection, as the calculation of the expected short-circuit level must be conducted in accordance with the variable resistance of the installed SCs. The paper is organized as follows: Section 2 presents the detailed mathematical development of the utilized cable model, based on well-known equations which describe the behavior of superconductors. The model is developed using Matlab and Simulink software and is applied to a power system which contains wind farms and synchronous generators. In Section 3, different fault scenarios are carried out, which aim to investigate the cable performance during transients and verify the practical feasibility of the proposed SC model.
Modelling of SCs with 2G HTS Wires

Various numerical models of HTS cables have recently been proposed, which use the Finite Element Method (FEM) or finite-difference time-domain (FDTD) analysis to understand the non-linear electromagnetic properties of superconductors [28][29][30][31]. The investigation of the electromagnetic and thermal properties of HTS cables is an effective way to predict and optimize cable performance under different operating conditions. However, for power system studies such as fault analysis, the performance of numerical models using FEM and FDTD is compromised by their computational complexity [30]. Thus, a simplified time-dependent model of a multilayer HTS cable is analyzed in this research, providing a solid foundation for the utilization of SCs in power system applications.

Configuration and Design Specifications

Several design topologies of SCs have been developed to minimize the capital and operating costs. The different configurations can be classified based on the superconducting layer layout for each phase and the voltage level. One design, known as the triaxial configuration, involves three different phases attached onto a single former, contained in a single cryostat [1], as shown in Figure 1. The three phases are separated by a dielectric layer which provides electric insulation. The circulating liquid nitrogen flows between the copper screen and the inner cryostat wall to cool down the entire cable to a temperature range of 65-77 K [31]. This configuration offers a higher current-carrying capacity and has the lowest inductance compared to other cable designs. Regarding the position of the insulation layer, SCs can be separated into two categories, namely the warm dielectric (WD) and the cold dielectric (CD), with the latter being the preferred design due to its lower losses and higher current capacity [32]. In this paper, a CD triaxial SC with YBCO wires has been modelled. The detailed structure of the SC tape is shown in Figure 2.

The typical structure of the YBCO tape consists of the YBCO layer, the copper stabilizer layer, the silver stabilizer layer, the Hastelloy substrate and the buffer layer, which is placed between the substrate and the YBCO layer [33]. The YBCO layer, which is the only layer responsible for conducting the load current during steady state operation, is manufactured as a film of very small thickness, protected by copper stabilizer layers on both sides. In the superconducting wire, a stabilizer layer (such as copper) is connected in parallel with the HTS layer to maintain stability, reduce the heat generation and the temperature during high-current faults, and protect the cable from thermal-induced damage. This technique has been introduced and adopted by major manufacturers [34][35][36][37].
For the fault analysis, due to the parallel structure of the layers, the total fault current flowing through each phase must satisfy Equation (1),

I_total = I_HTS + I_Copper (1)

where I_total is the total current, I_HTS is the current in the YBCO layer and I_Copper is the current flowing in the copper layer. Specifically, as illustrated in Figure 3, in steady state, during which the HTS tapes are in the superconducting state, the load current flows only through the HTS layer, due to its very low impedance compared to that of the copper stabilizer, as expressed in Equation (2),

I_total ≈ I_HTS (2)

In this case, during steady state, I_Copper is approximately zero. In transient conditions, once the fault current exceeds the value of the critical current I_C, the HTS tapes quench and their resistivity increases exponentially. Furthermore, the temperature of the HTS tapes is affected by the generated heat. The temperature increases gradually and exceeds the value of the critical temperature T_C, indicating the transition to the normal state. Once the HTS tapes enter the normal state, the variable resistance, which is a function of the current density J and the temperature T, reaches values much higher than that of the copper layer. Hence, the transient current is diverted into the copper stabilizer layer, which acts as a by-pass circuit, as expressed in Equation (3),

I_total ≈ I_Copper (3)

where I_Copper is the diverted fault current flowing through the stabilizer layers, while a very small current (approximately zero) flows through the HTS layers. The effect of the stabilizer layer is therefore important for the transient studies. Based on the analysis presented above, and according to the study conducted in [37], the boundary of the critical current I_C determines whether or not the superconducting tape quenches. Thus, exceeding the threshold of I_C can be considered the impelling factor that leads to quench, while the threshold of the critical temperature T_C determines whether the superconductor will enter the highly resistive normal state; it can therefore be defined as a criterion for the degree of quenching. To further study the performance of the integrated HTS cables, it is of major importance to investigate in more detail the transition period from the superconducting to the normal state. To study the quenching process, special focus should be given to the current distribution among the layers and to the resistance variations with respect to the accumulated heat and the current amplitude. In the following part, the design of a simplified model of a multilayer HTS power cable is presented.

Equivalent Circuit

Each phase of the cable consists of (i) several HTS tapes connected in parallel, in order to cope with the large operating current, and (ii) two copper stabilizer layers connected in parallel with the HTS layer. The rest of the cable layers shown in Figure 2 have been neglected for simplicity, as the temperature increase mainly affects the resistance of the HTS and copper stabilizer layers. The number of tapes and layers has been selected after taking into consideration the value of the designed critical current I_C, while the geometric characteristics of the tapes have been determined based on the maximum quenching voltage [38]. In particular, the rated current I_rated during steady state operation has been considered equal to 80% of the critical current I_C [39]. Therefore, the number of tapes n can be calculated as

n = I_C / I_C_initial_per_tape

where I_C_initial_per_tape corresponds to the initial value of the critical current for each YBCO tape and has been estimated based on validated manufacturers' data presented in [8].

The equivalent impedance of each phase depends on the current distribution among the HTS and copper layers. Figure 4 shows the equivalent circuit of the three-phase triaxial SC. The resistance of the HTS layers is introduced as a variable resistance, with an initial value of approximately zero, which represents the quench phenomenon. The PI section model has also been used, in order to implement the self- and mutual inductances and the capacitance of the cable. The resistance of the copper stabilizer has been modelled as a variable resistor. Once the current increases above the critical value I_C, the resistance of the HTS tapes starts to increase and the current flows in both the superconducting and copper layers. During this process, heat is generated in the tape, resulting in a dramatic temperature rise. Once the temperature exceeds T_C, the cable reaches the normal state mode and the current flows through the copper layer.
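As a small numerical illustration of these sizing and equivalent-circuit relations, the sketch below computes the tape count from the 80% rating rule and the parallel HTS/stabilizer resistance; the current values are illustrative, and the tape-count formula is the reconstruction given above, not a quoted design equation.

```python
# Tape-count sizing and parallel layer resistance (illustrative values).
import math

def design_tapes(i_rated, ic_per_tape):
    """Number of tapes, assuming I_rated = 0.8 * I_C (design rule above)."""
    ic_design = i_rated / 0.8
    return math.ceil(ic_design / ic_per_tape)

def parallel_r(r_hts, r_cu):
    """Equivalent resistance of the HTS layer in parallel with the stabilizer."""
    if r_hts == 0:              # superconducting state: all current in the HTS layer
        return 0.0
    return r_hts * r_cu / (r_hts + r_cu)

n_tapes = design_tapes(i_rated=3000, ic_per_tape=150)   # -> 25 tapes
print(n_tapes, parallel_r(r_hts=5.0, r_cu=0.05))        # quenched tape: R_eq ~ R_Cu
```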
Modelling Methodology

The following Section presents the detailed equations that have been used for the development of the SC model and the subprocess that has been followed to calculate the resistance of each layer.
The modelling method followed is based on the equations proposed by the authors in [40], in which the modelling of transformers with superconducting windings is presented. To describe the HTS and copper layers, the operation of the multilayer SC has been divided into three modes (referred to as three distinct stages for simplicity) with respect to the current distribution and the values of the equivalent resistance.

Stage 1 refers to the superconducting mode, in which the applied current is lower than the critical current I_C and the temperature is below the critical temperature T_C, where T_operating is the operating temperature of 70 K. Stage 2 refers to the flux flow mode, when the quench starts, and is determined by the boundary conditions that the applied current exceeds I_C while the temperature remains below T_C. At this stage the HTS tapes start to quench and their resistivity increases sharply as a function of the current density and the accumulated heat. In the final mode, stage 3, described by the boundary conditions (9) and (10), the HTS layer completely loses its superconductivity and enters the normal state.

The main parameters that affect the resistance value and the operation mode of the HTS tape layers are the critical current density J_C and the critical temperature T_C [7]. The relationship between the temperature T, the current density J_C and the critical current I_C is given by the following equations,

J_C(T) = J_C0 * ((T_C − T) / (T_C − T_0))^a (11)

J_C0 = I_C_initial / s_HTS (12)

s_HTS = n * w_HTS * t_HTS (13)

where J_C0 is the critical current density (A/m^2) at the initial operating temperature T_0 = 70 K; T_C = 92 K is the critical temperature of the HTS superconducting tape; the density exponent a is 1.5; I_C_initial corresponds to the initial value of the critical current; s_HTS is the cross-section area of the superconductor; w_HTS is the width of the HTS material, t_HTS is its thickness and n is the number of tapes. As can be seen from Equations (11)-(13), the value of the critical current density J_C(T), and by extension the value of the critical current, decreases drastically as the temperature T(t) rises. This temperature dependence of the critical current density is known in the literature as 'critical current density degradation' [41]. The effect of the resulting degradation must be taken into consideration in the design of large-current-capacity AC SCs and their cooling systems.

To better understand the operation of the HTS cable, it is crucial to estimate the resistance of the HTS and copper stabilizer layers and the equivalent resistance of the SC at every stage. Initially, at stage 1, the HTS tape is in the superconducting state. The resistivity of the HTS tape is ρ_0 = 0 (Ω·m) and therefore its total resistance is approximately zero. The copper stabilizer resistance has been considered constant, and the total equivalent resistance of the cable is equal to the HTS layer resistance, as the main current flows only through it. At stage 2, when the applied current exceeds the value of the critical current, the resistivity of the HTS tape increases exponentially as a function of the current density and the temperature, according to Equation (15),

ρ_HTS(J, T) = (E_C / J) * (J / J_C(T))^N (15)

where E_C = 1 µV/cm is the critical electric field, and the coefficient N has been selected to be 25, while for YBCO tapes it should be within the range of 21 to 30 [42]. The copper stabilizer resistance corresponds to a constant value, similar to that of stage 1.
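The following Python sketch illustrates the stage 2 relations: critical current density degradation with temperature and the E-J power-law resistivity. The parameter values follow the text, while the functional forms are the standard ones assumed in the reconstruction of Equations (11) and (15) above.

```python
# Stage 2: J_C(T) degradation and E-J power-law resistivity of the HTS tape.
import numpy as np

E_C = 1e-4                  # critical electric field, V/m (1 uV/cm)
N = 25                      # power-law exponent for YBCO (typical range 21-30)
T_C, T_0, ALPHA = 92.0, 70.0, 1.5

def jc(T, jc0):
    """Critical current density degradation with temperature, Eq. (11)."""
    if T >= T_C:
        return 0.0
    return jc0 * ((T_C - T) / (T_C - T_0)) ** ALPHA

def rho_hts(J, T, jc0):
    """Flux-flow resistivity from E = E_C * (J / J_C(T))^N, i.e. rho = E / J."""
    j_crit = jc(T, jc0)
    if J <= 0:
        return 0.0
    if j_crit == 0.0:        # normal state: resistivity capped elsewhere
        return np.inf
    return E_C * (J / j_crit) ** N / J

# Resistivity grows steeply once J exceeds J_C(T):
print(rho_hts(J=1.2e8, T=75.0, jc0=1.0e8))
```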
The total resistance of the superconductor is then obtained from the equation for the equivalent resistance of parallel electrical circuits,

R_SC = (R_HTS * R_Cu) / (R_HTS + R_Cu), (16)

where R_SC is the total resistance of the SC during stage 2. When T > T_C, stage 3 is initiated, which corresponds to the normal resistive mode. The HTS layer resistance reaches values much larger than the copper stabilizer resistance, and for modelling purposes a maximum limit has been set for the HTS resistance at stage 3. In this case, however, the resistivity (Ω·m) of the copper changes with the temperature rise and is determined by Equation (17). The maximum value that the copper resistivity can reach is calculated for T = 250 K, which has been selected as the upper temperature limit [43]. During the normal mode, the equivalent resistance of the cable is determined solely by the value of the copper stabilizer resistance, as the transient current is diverted into the copper layers.

Thermal Transfer Analysis during the Quenching

The superconducting tapes are immersed in liquid nitrogen (LN2), which is used as a refrigerant to cool the SCs below a certain temperature. When the resistance of the HTS tapes is zero (stage 1), the amount of power dissipated is not significant. When a fault occurs, the resistance increases and heat is generated in the superconductor. The generated heat increases the superconductor temperature, and part of it is absorbed by the LN2 circulation system (heat transfer with the external environment has been neglected). The dissipated power is a function of the fault current and can be calculated by Equation (18),

P_diss(t) = I_fault(t)^2 * R_SC, (18)

where t is time and R_SC is the equivalent resistance of the superconductor. The cooling power that can be removed by the LN2 cooler is given by Equation (19),

P_cooling(t) = h * A * (T(t) - T_0), (19)

where T(t) is the temperature, A is the total area covered by the cooler, and h is the heat transfer coefficient. The heat transfer coefficient is a function of the temperature and is considered the major factor determining the effectiveness of the cooling system and the recovery of the cable, as it represents the heat transfer process between the superconducting tapes and the LN2; Equations (20)-(23) present the calculation of h based on the temperature variation [44]. Subtracting Equation (19) from Equation (18) gives the net power P_SC. Equation (24) is the thermal equilibrium equation, which gives the part of the dissipated power that leads to temperature rise in the superconductor during the quenching process,

P_SC = P_diss - P_cooling. (24)

Finally, Equation (25) gives the temperature T(t) of the superconducting tapes at each iteration step,

T(t) = T_0 + (1 / C_p) ∫ P_SC dt, (25)

where T_0 is the initial temperature of the HTS materials and C_p (J/K) is the heat capacity.
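A minimal sketch of this heat balance is given below. The heat transfer coefficient h is taken as a constant placeholder instead of the temperature-dependent correlation of Equations (20)-(23), and all numerical values are illustrative rather than the cable parameters of the study.

```python
def temperature_step(T, i_fault, r_sc, h, area, cp, dt, t_bath=70.0):
    """One time step of the quench heat balance:
    P_diss = I^2 * R_SC              (Eq. 18)
    P_cool = h * A * (T - T_bath)    (Eq. 19, with a constant h)
    P_SC   = P_diss - P_cool         (Eq. 24)
    T_new  = T + P_SC * dt / Cp      (discretized Eq. 25)."""
    p_diss = i_fault**2 * r_sc
    p_cool = h * area * max(T - t_bath, 0.0)
    return T + (p_diss - p_cool) * dt / cp

# Illustrative run: a sustained 5 kA fault through a 10 mOhm section.
T = 70.0
for _ in range(1000):                       # 0.1 s at dt = 0.1 ms
    T = temperature_step(T, i_fault=5e3, r_sc=0.01, h=1e3,
                         area=2.0, cp=5e3, dt=1e-4)
print(round(T, 2))                          # temperature after 0.1 s
```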
For stage 2, when the quenching starts, the current starts to flow through the copper layer; however, as the temperature rise at this stage is not very high, the variation of the copper heat capacity with temperature is neglected. The heat capacity of the YBCO material can be calculated by Equation (26) and the volume of the cable by Equation (27),

v = l * w * th * n, (27)

where d is the density of the material, T is the temperature, v is the volume, l is the length, th is the thickness, w is the width, and n is the number of tapes. At stage 3, when the resistance of the HTS layer has reached very high values due to the increased temperature, the fault current flows through the copper layers. In this case the heat capacity in Equation (28) is substituted by the total heat capacity of the superconductor; Equation (29) gives the heat capacity of the copper layer, where C_Cu is the heat capacity of the copper and d_Cu is the density of the copper.

The classification of the quenching process and the corresponding characteristics of each stage are listed in Table 1. In terms of current distribution, I_applied = I_HTS at stage 1, I_HTS > I_Cu at stage 2, and I_Cu > I_HTS with I_fault = I_Cu at stage 3.

Matlab has been used to model Equations (11)-(29) in order to compute the resistance values of the HTS tapes R_HTS, the copper stabilizer R_Cu, and the temperature variation ∆T of the superconductor. The calculation process is shown in Figure 5. T_0, I_C_initial and J_C_initial are the initial values of the operating temperature, critical current and critical current density for the first iteration, respectively. Once I_rms yields a current density J_i that exceeds the critical value J_C, the HTS tapes start to quench. During the quenching process the values of I_rms, P_diss, P_cooling, P_SC, and T_(i+1) are updated in each time step T_step. The calculation process terminates once T_(i+1), R_HTS, and R_Cu reach their maximum values, indicating that the superconductor has entered the normal state.

Figure 5. Flowchart corresponding to the calculation process for the resistance values of the HTS tapes R_HTS, the copper stabilizer R_Cu, the equivalent R_eq, and the variation of the temperature ∆T of the superconductor.

Simulation Results

In this Section the model development is completed by integrating a lumped model of a 5 km long SCs into a simulated power system which contains converter-connected generators and a synchronous generator (SG). The fault analysis is carried out by analyzing the stages of the quenching process, and the corresponding plots of the fault current signatures, resistance values, and temperature have been obtained. For the purposes of the simulation-based fault analysis, the system under test has been built in Matlab and Simulink; Figure 6 shows the components of the tested system, and Table 2 presents the main components of the power system.
The network consists of an equivalent voltage source connected at Bus 1 with a nominal voltage of 275 kV, representing the equivalent connected transmission system. Two generation units, (i) a wind farm connected via a Voltage Source Converter (VSC) and (ii) a Synchronous Generator (SG), are connected at Bus 11. The SG has been modelled as a standard salient-pole synchronous machine with an automatic voltage regulator (AVR) and a power system stabilizer. The wind farm consists of 100 variable-speed wind turbines based on permanent-magnet SGs connected via VSCs and operating under a Direct Quadrature Current Injection (DQCI) control algorithm. The 132 kV/10 km transmission lines transfer power to 132 kV/33 kV transformers. The 33 kV triaxial SCs connects Bus 7 and Bus 11 and, owing to its high power density, is capable of transferring up to 202 MVA. For the steady state, the resistance of the HTS tapes has been considered approximately zero, while the positive- and zero-sequence inductances and capacitances have been obtained from [23]. Regarding the final stage of the quenching process, the normal state, a maximum resistance value has been set for both the HTS and copper stabilizer layers for simulation purposes. The idea behind this assumption was to model the change of the HTS and copper layer resistance according to the current and temperature changes, and to examine the current distribution among the different layers during the quench phenomenon. These assumptions can be considered reasonable, as the HTS layers become highly resistive during the fault, which results in the flow of the short-circuit current through the stabilizer. Therefore, for the normal state, a high resistance value has been selected for the HTS layer, based on the studies conducted in [40], while the maximum resistivity of the copper stabilizer layer has been calculated using Equation (17).

In the following part, systematic iterative simulations have been performed, which include (i) 3-Phase-to-ground faults at two different fault locations, (ii) a Phase-to-Phase-to-ground fault, and (iii) a Phase-to-ground fault. In all cases the faults initiate at t = 5.06 s and last for 120 ms. To obtain a high-fidelity insight into the transient phenomena of SCs, a sampling frequency of f = 2 MHz has been used (for both simulations and records).

Fault Analysis of the SCs

Initially, a 3-Phase-to-ground fault with fault resistance R_f = 0.01 Ω was triggered at 50% of the HTS cable's length at t = 5.06 s, and it was cleared after 120 ms. Figure 7 shows the stages of the quenching process, Figure 8a illustrates the fault current signatures contributed by the wind farm and the SG at Bus 11, and Figure 8b presents the corresponding voltage signatures. The resulting fault current distribution among the different layers of the three phases is shown in Figure 8c-h. In the superconducting state (stage 1) and the flux flow state, which is a moderately resistive state, the current flows through the HTS layers, presenting high peaks due to the low resistance of the superconductor when the fault occurs.
However, as the fault current exceeds the critical value I_C in the flux flow mode (stage 2), the temperature rises continuously and the resistance of the superconductor increases rapidly to very high values, reaching the normal state (stage 3). Therefore, as can be seen from Figure 8d,f,h, the main current has been diverted to the copper stabilizer layers, indicating that the normal state has been reached, while the HTS layers conduct approximately zero current. Figures 9 and 10 illustrate the changes in the resistance values and the temperature rise, respectively. Initially, the temperature is 70 K for the three phases and the equivalent resistance of the superconductor is approximately zero. Once the temperature exceeds the critical value of 92 K, at t = 5.064 s, the HTS tapes enter the normal state and their resistance starts to increase rapidly. For stage 2, the equivalent resistance of the superconductor is calculated based on Equation (16). The current distribution starts to change, and the fault current is diverted to the stabilizer layers. Subsequently, in the normal state (stage 3), the equivalent resistivity is equal to the maximum resistivity of the copper stabilizer layer obtained by Equation (17). Therefore, the proposed design achieves current sharing between the HTS and stabilizer layers, improving the performance of the cable and self-protecting it from being destroyed.
As discussed earlier, the installation of the SCs impacts the magnitude of the fault currents. Indeed, from the fault current waveforms plotted in Figure 8, at the time of the fault event at t = 5.06 s the highest first current peak is approximately 15 kA. As the value of the layers' resistance increases immediately, the magnitude of the fault currents decreases. Specifically, at t = 5.064 s, when the resistances and the temperature reach high values, the fault current starts flowing through the stabilizer layers, presenting peaks of approximately 5.5 kA. During the current elimination within the first fault cycle, some peaks appear in the 3-Phase fault voltages. Moreover, it is noticeable that after t = 5.069 s, and before the fault clearance at t = 5.18 s, the magnitude of the fault currents at Bus 11 is limited while the phase voltages show higher magnitudes compared to the steady state. This is explained by the large equivalent resistance inserted by the SCs. Hence, it is evident that SCs provide effective limitation of fault currents in systems containing SGs and converter-interfaced generators. Such fault current limiting capability is an attractive feature for protecting networks with varying short-circuit levels.

Furthermore, the high voltage magnitudes during transient conditions raise new challenges for voltage-assisted protection schemes. Normally, during fault events the voltage magnitude is anticipated to be reduced. In this case, however, when the fault occurs at t = 5.06 s the 3-Phase voltages decrease for a few milliseconds; then, as the equivalent resistance of the superconductor increases, the fault current decreases while the 3-Phase voltages present high peaks. The introduction of a high equivalent resistance leads to voltage spikes across the superconductor. Faults at the SCs can thus be considered high-impedance faults in nature, jeopardizing the operation of the existing protection schemes.
Additionally, the fault currents, the voltage signatures, and the current distribution characteristics for a Phase-A-to-ground and a Phase-A-B-to-ground fault at 50% of the HTS cable's length with R_f = 0.01 Ω are reported in Figures 11 and 12, respectively. The faulted phases of the proposed SCs have been found to behave in a similar way to the previous case of the 3-Phase-to-ground fault. The characteristics of the superconductor resistance follow the same trend as those presented in Figure 9 for the faulted phases. However, the equivalent resistance of the HTS layers of the non-faulted phases remains at 0 Ω, as they do not quench and continue to operate in the superconducting state. The temperature rise of the faulted phases can be described based on Figure 10, while for the non-faulted phases the operating temperature remains constant at 70 K prior to and during the fault, and the current flows only through the HTS layer. Therefore, the specific design target of current limitation can be verified for different fault types with approximately zero fault resistance. Once the fault current density exceeds the critical value J_C(T) (Equation (11)) within the first fault cycle, the resistivity of the HTS layer increases according to Equation (15) and the fault current diverts to the copper stabilizer layer.
The feasibility of the parallel stabilizer layer can thus be confirmed for the 3-Phase-to-ground, Phase-A-to-ground, and Phase-A-B-to-ground faults by observing the current distribution characteristics in Figures 8, 11, and 12, respectively. As the quenching process evolves, the temperature of the SCs increases, reaching values higher than the critical T_C; during the normal resistive mode, the equivalent resistance of the SCs is determined solely by the value of the stabilizer layer given by Equation (17). The further increase in temperature raises the resistivity of the copper stabilizer layer (Equation (17)), which leads to a further reduction in fault current. The fault current limiting capability is verified in Figure 8 for a 3-Phase-to-ground fault, where the first peak of the fault current at t = 5.06 s is approximately 15 kA; however, within the first cycle, and before the fault clearance at t = 5.18 s, the peak of the fault current is reduced to 1.8 kA. The same behaviour is observed for the Phase-A-to-ground and Phase-A-B-to-ground faults, as depicted in Figures 11 and 12, respectively: the first peak of the fault current flowing through the HTS layer reaches 15 kA for the faulted phases, while the resulting fault current flowing through the stabilizer layer is limited to approximately 1.7 kA.
Current Limitation

In this Section, the analysis aims to evaluate the transient performance of the SCs in contrast with a conventional copper cable installed in the same power system. For this reason, emphasis has been given to the calculation of the current-limitation capability as a percentage of the prospective fault current flowing through a conventional copper cable during the quenching process. In particular, a 3-Phase-to-ground fault with a fault resistance of R_f = 0.01 Ω was applied at 50% of the SCs length, and the same fault was repeated for the case of the conventional copper cable. The fault currents captured by the SCs model during the simulations have been compared with the prospective fault currents through the conventional copper cable, highlighting the merits arising from utilizing superconductors. Figure 13 demonstrates the RMS value of the fault currents at Bus 11 during a 6-cycle 3-Phase-to-ground fault for both cases.
Similarly to the previous Section, the fault is initiated at t = 5.06 s and cleared after 120 ms. When the fault occurs at t = 5.06 s, the RMS value of the current for the SCs is slightly higher than that of the conventional cable, as at the initial quenching state (stage 2) the resistance of the HTS tapes has not yet reached high values. It is well established that the short-circuit magnitude is determined by the X/R ratio of the circuit. The RMS values of the fault currents therefore start to decrease at t = 5.065 s, due to the high resistance and the significant temperature increase. To quantify the fault current limitation provided by the fault current limiting function, a current limitation percentage relative to the prospective current through a conventional cable has been introduced in Equation (30),

current limitation (%) = (I_conv - I_SC) / I_conv × 100, (30)

where I_conv is the RMS value of the fault current flowing through the conventional copper cable and I_SC is the fault current flowing through the SCs under the same type of fault. In particular, for the SCs the RMS values of the limited fault currents during the whole quenching process (stage 2 and stage 3) have been calculated and compared with the prospective current values. Figure 14 shows the current limitation percentage per phase, verifying and supporting the practical feasibility of the proposed cable design. The current limitation presents a slight difference among the phases due to the difference in the phase angle of each phase at the fault instant.
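Equation (30) and the one-cycle RMS evaluation it relies on can be sketched as below. The sinusoidal waveforms and amplitudes are synthetic stand-ins for the simulated currents, and a coarser sampling rate than the 2 MHz used in the study keeps the sketch fast.

```python
import numpy as np

def cycle_rms(x, n):
    """Sliding one-cycle RMS of a sampled waveform (n samples per cycle)."""
    kernel = np.ones(n) / n
    return np.sqrt(np.convolve(x**2, kernel, mode="same"))

def limitation_percent(i_conv, i_sc):
    """Eq. (30): limitation as a percentage of the prospective current."""
    return (i_conv - i_sc) / i_conv * 100.0

fs, f = 1e4, 50.0                            # coarse sampling, 50 Hz system
t = np.arange(0.0, 0.12, 1.0 / fs)           # a 120 ms fault window
i_conv = 15e3 * np.sin(2 * np.pi * f * t)    # prospective current (stand-in)
i_sc = 1.8e3 * np.sin(2 * np.pi * f * t)     # limited current (stand-in)
n = int(fs / f)
print(limitation_percent(cycle_rms(i_conv, n).max(),
                         cycle_rms(i_sc, n).max()))  # -> 88.0 for these amplitudes
```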
It is evident that the installation of SCs can lead to a fault current reduction of up to 62.5% of the prospective current flowing through a conventional copper cable, considering the same 3-Phase-to-ground fault.

Simulation Analysis of Fault Resistance Effect on the Quenching Process

In order to achieve the maximum benefit of the designed cable, its performance should be comprehensively evaluated under a wide range of power system conditions. In the available technical literature, several studies [45-50] have investigated the impact of the fault resistance R_f on superconducting current limiters. However, there are no studies available assessing the impact of the fault resistance on SCs and the fault current limitation they provide. Therefore, in this Section the quenching process of the SCs is analyzed with respect to a gradual increase in the fault resistance value. Simulation studies, comprising 3-Phase-to-ground faults applied at 50% of the SCs length with different values of R_f, were conducted to study the relationship between R_f and the quenching process. Figures 15-20 show the corresponding waveforms of the quenching stage, the fault current signatures among the layers, and the resistance and temperature of the cable for R_f1 = 1 Ω, R_f2 = 5 Ω and R_f3 = 10 Ω, respectively.
Based on the results depicted in Figure 16, when the fault resistance is R_f1 = 1 Ω, the HTS tapes quench in the first half fault cycle and enter the normal state (stage 3), as can be seen in Figure 15a. The resistance and the temperature reach their maximum values at t = 5.065 s, as shown in Figures 19a and 20a, respectively. Therefore, the current starts to flow through the stabilizer layer at t = 5.065 s, 5 ms after the fault occurs. For the case of R_f2 = 5 Ω, the HTS tapes quench after one fault cycle (at t = 5.082 s), and it is noticeable from Figure 15b that stage 2 lasts for a slightly longer period: with a fault resistance of R_f1 = 1 Ω the SCs operate within stage 2 for 5.5 ms, while for R_f2 = 5 Ω stage 2 lasts for 18 ms. Furthermore, for fault resistance R_f2 = 5 Ω, the first fault current peaks depicted in Figure 18c,e,g are lower than the fault current peaks extracted during the fault with R_f1 = 1 Ω, as a larger fault resistance results in lower fault currents. In the case of R_f3 = 10 Ω, the HTS tapes of the faulted phases quench but reach only stage 2, without entering the normal state. Consequently, the maximum value of the SCs equivalent resistance is low, R_eq = 0.058 Ω, and the fault currents flow through the HTS layers. This behaviour indicates that the increase in the fault resistance value affects the quenching degree and, consequently, the current sharing between the HTS and stabilizer layers. In particular, as already analyzed (and also reported in [28]), the temperature increase plays a key role in the resulting value of the equivalent resistance and the quenching degree, which in turn is determined by the generated resistive heat and by the magnitude and duration of the fault current. From Figure 20a it is evident that for a 3-Phase-to-ground fault with R_f1 = 1 Ω the temperature of the superconductor exceeds the critical value T_C, reaching the maximum value of 250 K. When a 3-Phase-to-ground fault with R_f2 = 5 Ω occurs, the temperature also exceeds the critical value T_C = 92 K, but it is noticeable from Figure 20b that it reaches the maximum value of 250 K with a delay, which affects the quenching process. In the last case of R_f3 = 10 Ω, the quenching boundary condition I_fault > I_C has been met, but the temperature does not reach the critical value T_C, resulting in "incomplete quenching": the resistance of the HTS tapes reaches only low values, affecting the value of the equivalent resistance and the current distribution among the layers, and resulting in a small percentage of fault current limitation, with the fault current flowing mainly through the HTS layers.

The results revealed that the fault resistance has a considerable impact on the SCs performance for the same type of fault at the same fault location. For instance, a further increase in the fault resistance can lead to much lower fault currents, even below the critical current I_C, preventing the SCs from quenching. This has been confirmed by Figure 18, where, during a 3-Phase-to-ground fault with R_f3 = 10 Ω, the first peak of the fault current is below the critical current I_C, and therefore there is no quenching or fault current sharing between the two layers.
Consequently, low values of fault resistance result in higher fault currents (with respect to the critical current I_C), which lead to SCs quenching during the first half cycle and therefore to greater fault current limitation capability (Figure 16). High fault resistance affects the quenching degree and jeopardizes the fault current limiting capability of the cable. This can be explained by the reduced allocation of fault current among the different layers during the current limitation mode, as the fault current is predominantly limited by the fault resistance value.

Conclusions and Verifications

The comprehensive fault current characterization presented in this paper used a simplified, validated SCs model and highlighted the following key outcomes. The operation of the SCs can be divided into three different stages: (i) the superconducting stage during the steady-state operation of the power system, in which the SCs present approximately zero resistance; (ii) the quenching process, which includes the partially resistive flux flow stage, reached when the fault current exceeds the critical current I_C while the temperature remains below the critical value T_C; and (iii) the highly resistive normal state, reached once the temperature exceeds the critical value T_C. Furthermore, it has been found that during a fault the stabilizer layer can be used as a parallel path for the transient current, reducing heat generation and temperature rise and protecting the cable from being damaged. To better investigate the feasibility of the copper stabilizer layer, the change of the copper layer resistivity has been modelled during the normal resistive mode, while the resistance of the HTS layer was considered constant (i.e., set to its maximum value).

The performance of the SCs in limiting fault currents was assessed through a number of fault scenarios. Simulation results revealed the impact of the SCs on fault current magnitudes under different types of faults and, as a consequence, on the short-circuit level of the power system. Specifically, it has been observed that within the first electrical cycle the magnitude of the fault current is reduced from 15 kA to approximately 1.8 kA. The installation of SCs therefore introduces a challenge for the existing protection schemes due to their variable resistance, which leads to lower fault currents and higher voltage magnitudes during transient conditions. In order to obtain a deeper insight into the fault current limiting capability of SCs, a comparative analysis has been conducted between the SCs and a conventional copper cable under the same fault conditions. The analysis revealed that for 3-Phase-to-ground faults the SCs model offers fault current limitation in the range of 60% with respect to the prospective values. Therefore, the deployment of SCs increases transmission efficiency, owing to the low resistance during the steady state, and suppresses fault currents. The obtained results are well aligned with relevant studies such as [19,36,40]. The performance of SCs during transient conditions is determined by certain power system characteristics, such as the prospective fault currents and the fault resistance. In particular, simulation results showed that an increase in the fault resistance value impacts the feasibility of the SCs, as it affects the quenching degree. It was revealed that the higher the fault resistance, the lower the prospective current and the percentage of current limitation.
This was confirmed through the case of a fault resistance equal to R_f3 = 10 Ω, where the fault current is predominantly limited by the fault resistance, restraining the increase in the resistivity of the HTS tapes (which do not enter the normal state) and the quenching process.

Future Work

Following the fault current characterization and the evaluation of the SCs' performance during fault conditions, it has been identified that there are many challenges that protection schemes must take into consideration. As the simulation results revealed, the variable resistance of SCs, the reduced fault currents, the higher voltage magnitudes during the quenching stages, and the impact of high fault resistance values that jeopardize the quenching process are all factors anticipated to challenge fault detection and classification methods, and by extension the applicability of conventional protection schemes (e.g., over-current, distance protection, etc.) for protecting SCs feeders. Considering the protection of future power grids integrating SCs and inverter-connected generation, more research shall be steered towards the development of novel protection solutions which capture the particularities and distinctive features of SCs. For example, potential merits can arise from the utilization of learning-based methods for fault diagnosis on SCs, such as deep-learning techniques, which take advantage of the sequential relationships in the data and are able to handle long-term dependencies and correlated features that are important for fault diagnosis.
Sphingomyelin Modulates the Transbilayer Distribution of Galactosylceramide in Phospholipid Membranes

The interrelationships among sphingolipid structure, membrane curvature, and glycosphingolipid transmembrane distribution remain poorly defined despite the emerging importance of sphingolipids in curved regions and vesicle buds of biomembranes. Here, we describe a novel approach to investigate the transmembrane distribution of galactosylceramide in phospholipid small unilamellar vesicles by 13C NMR spectroscopy. Quantitation of the transbilayer distribution of [6-13C]galactosylceramide (99.8% isotopic enrichment) was achieved by exposure of vesicles to the paramagnetic ion Mn2+. The data show that [6-13C]galactosylceramide prefers (70%) the inner leaflet of phosphatidylcholine vesicles. Increasing the sphingomyelin content of the 1-palmitoyl-2-oleoyl-phosphatidylcholine vesicles shifted galactosylceramide from the inner to the outer leaflet. The amount of galactosylceramide localized in the inner leaflet decreased from 70% in pure 1-palmitoyl-2-oleoyl-phosphatidylcholine vesicles to only 40% in 1-palmitoyl-2-oleoyl-phosphatidylcholine/sphingomyelin (1:2) vesicles. The present study demonstrates that sphingomyelin can dramatically alter the transbilayer distribution of a monohexosylceramide, such as galactosylceramide, in 1-palmitoyl-2-oleoyl-phosphatidylcholine/sphingomyelin vesicles. The results suggest that sphingolipid-sphingolipid interactions, which occur even in the absence of cholesterol, play a role in controlling the transmembrane distributions of cerebrosides.

Sphingolipids participate in a number of important cellular processes that require membrane budding, fission, or vesiculation (1,2). Examples include infectious processes involving bacterial toxin and envelope virus entry into cells (3,4), exosomal antigen presentation (5), and processes related to the terminal stages of apoptosis (6). Many recent investigations, by this laboratory and others, have focused on the in-plane lateral interactions among sphingolipids, cholesterol, and other membrane lipids (7-10). As a result, significant new insights into sphingolipid organization in membranes have emerged, including the identification and characterization of sphingolipid-enriched, liquid-ordered microdomains, often referred to as rafts (11-13). With so much emphasis on lipid lateral interactions, studies of sphingolipid transmembrane distribution have been relatively few (14), and the interrelationships among sphingolipid structure, membrane curvature, and glycosphingolipid transmembrane distribution remain poorly understood (15). Much of what is currently known about the mechanical forces affecting membrane curvature has been achieved by investigations of phosphoglyceride model membranes (16). The elastic constants associated with a fluid membrane are the bending elastic modulus and the spontaneous curvature (17,18). The bending elastic modulus is the resistance of membranes to curvature, or the bending rigidity, whereas the spontaneous curvature is the inherent curvature of an unconstrained membrane section and changes with lipid structure. Because biomembranes are largely bilayers, each leaflet contributes to the overall stiffness in nonlocalized ways that arise from the different strains to which molecules in each leaflet are subjected as the bilayer bends. An outward curvature results in expansion of the outer leaflet of the bilayer along with a compression of the inner leaflet.
The different strains in each leaflet produce mechanical stress gradients within the membrane. The stress gradients can significantly increase lateral diffusivity (19,20) and be a driving force for the transbilayer migration of lipid molecules between leaflets (20,21). The altered lipid packing and stress gradients in highly curved membranes can be relieved by the generation of asymmetries in the lipid transbilayer distributions that depend on the overall molecular shape of the different individual lipids (22). For instance, a well-characterized lipid mass imbalance (2:1) exists between the outer and inner leaflets of phosphatidylcholine (PC) small unilamellar vesicles (SUVs) as a consequence of packing the roughly cylindrically shaped PC amphiphiles into a highly curved bilayer vesicle (23,24). The resulting transbilayer lipid mass imbalance can be maintained almost indefinitely by keeping the SUVs in the liquid-crystalline phase state to minimize transient packing defects that promote slow relaxation processes (25,26). In SUVs composed of equimolar egg phosphatidylethanolamine (PE) and egg PC, PE is enriched in the inner leaflet, whereas the PC is enriched in the outer leaflet. The smaller and less hydrated headgroup of PE imparts a cone-like molecular shape, which is better suited than PC's cylindrical shape for inner-leaflet localization in highly curved bilayers (27). Geometric accommodation of lipid shape also has provided a similar, logical explanation for the transbilayer distributions of lyso-PC/PC mixtures (28). However, lipid geometric shapes alone do not satisfactorily account for the transbilayer distributions observed when PC SUVs contain low mole fractions of either PE or phosphatidylglycerol (PG). In this case, disproportionately higher amounts of PE or PG are observed in the SUV outer leaflet, putatively because of generalized lattice-packing effects (29,30). Together, these studies show that investigating lipid transbilayer distributions in vesicles provides an effective means to gain insights into the interrelationship between lipid structure and membrane curvature. The present study was motivated by the need to better understand and define: 1) the transbilayer distribution of simple sphingolipids in phospholipid membranes and 2) the impact of changing vesicle composition on sphingolipid transmembrane distribution. Here, we describe a novel means to quantify the transbilayer distribution of [13C]galactosylceramide (GalCer) by 13C NMR. Interestingly, we find that [13C]GalCer preferentially localizes to the inner leaflets of POPC SUVs. In response to increasing sphingomyelin (SPM) content, the GalCer transmembrane distribution shifts markedly toward the outer SUV leaflet even though PC and SPM have chemically identical polar headgroups. The results suggest that SPM-GalCer interactions, even in the absence of cholesterol, play an important role in controlling cerebroside transbilayer distributions.

Vesicle Preparation-SUVs were prepared by sonication using a modification of the established procedure of Huang and Thompson (25). The total amount of lipid in each preparation was kept constant (200 µmol). The lipids were dissolved in 15 ml of CHCl3:CH3OH (2:1) in a 50-ml round-bottom flask. For preparations containing GalCer, a drop of water was added to aid solubilization. A lipid film was obtained by slowly evaporating the solvents at 37 °C on a rotary evaporator, followed by freeze-drying in vacuo for 6 h.
The lipid film was hydrated in 2 ml of D2O, then dispersed by vortexing with intermittent warming to 37 °C, and the dispersion was sonicated under nitrogen for 30-60 min until translucent. After removal of titanium debris by centrifugation at 50,000 × g for 60 min, the vesicles were used immediately for NMR analysis. Vesicle stability and impermeability to ions were ascertained by 31P NMR by monitoring phospholipid chemical shifts and signal intensities as a function of time. By these criteria, all vesicles used in the present study remained stable and ion-impermeable for several days.

Localization of Phospholipids in SUVs by 31P NMR-POPC and SPM were localized and quantified in the inner and outer vesicle leaflets of SUVs by 31P NMR using 1 mM praseodymium (Pr3+) ions as the paramagnetic shift reagent (28). Proton-decoupled 31P NMR spectra were recorded at 121.42 MHz on a Varian UNITY 300 instrument (Varian Assoc., Palo Alto, CA) using a 5-mm variable-temperature probe (37.0 ± 0.1 °C). Standard single-pulse experiments entailed a 90° pulse of 15 µs, an acquisition time of 1.6 s, and a pulse delay of 2 s, with the decoupler gated on during acquisition only. At a spectral width of 10,000 Hz, 32,000 data points were collected, whereas 1,600 and 6,400 transients were used for samples obtained in the absence and presence of Pr3+, respectively. Data were then zero-filled and Fourier-transformed after applying 0.1-Hz exponential line broadening. Peak areas were digitally integrated. Spectra were referenced relative to the external standard, concentrated H3PO4, having a chemical shift (δ) of 0.00 ppm.

Localization of GalCer in Phospholipid SUVs by 13C NMR-GalCer was localized and quantified in the inner and outer leaflets of POPC and POPC/SPM vesicles by 13C NMR using 5 mM Mn2+ as the quenching agent. Proton-decoupled 13C NMR spectra of [6-13C]GalCer-containing SUVs were acquired at 75.423 MHz in the absolute mode at 37 °C. Standard single-pulse measurements entailed a 90° pulse of 9 µs, a pulse delay of 1 s, and an acquisition time of 1.8 s. At a spectral width of 16,500 Hz, 59,900 data points were collected, and 24,000 transients were used. Data were zero-filled and Fourier-transformed after applying 1-Hz exponential line broadening. Peak areas were digitally integrated. The integral of the resonance at δ 61.361 represented the total [6-13C]GalCer in the vesicles (see "Results" for details). To quantify the GalCer localized in the SUV inner leaflet, 5 mM Mn2+ was added to quench the [6-13C]GalCer resonance associated with the outer bilayer leaflet. The difference between the integrals of the [6-13C]GalCer resonances observed at δ 61.361 in the absence (total GalCer) and in the presence (inner GalCer) of Mn2+ ions provided quantitation of the ion-accessible GalCer in the outer vesicle leaflet.

Novel Approach to Measure Glycolipid Transbilayer Distribution in Phospholipid Vesicles by 13C NMR-Phospholipid transbilayer distribution between the inner and outer leaflets of vesicles can be accurately determined by 31P NMR (32-34). However, this approach is not suitable for monitoring the transbilayer distribution of glycosphingolipids because of the lack of phosphate in the headgroups of these lipids. Thus, the localization of the sugar headgroups of glycosphingolipids incorporated into vesicles was analyzed by 13C NMR spectroscopy. A comparison of the 13C NMR spectra of phospholipids (35) and of bovine brain GalCer in solution (CDCl3:CD3OD:D2O, 50:50:…; Fig.
1B) indicated that two signature resonances derived from C-6 and C-1 of galactose (δ 61.445 and 104.083, respectively) did not overlap with any of the phospholipid resonances and thus might be used for quantitative analysis. However, preliminary experiments with vesicles composed of PC, SPM, and GalCer (40:40:20 mol %) indicated that only the 61.361-ppm resonance derived from C-6 of galactose could be clearly detected, whereas the C-1 resonance was broadened almost beyond recognition (data not shown). This finding is consistent with C-1 being more motionally restricted by virtue of being part of the pyranose ring and buried in the membrane interfacial region. In contrast, the C-6 carbon is not part of the rigid ring system, can rotate more freely, and projects farther into the aqueous phase (36). Thus, the 13C NMR resonance of the galactose C-6 carbon was deemed best suited for quantifying the transbilayer distribution of GalCer in vesicles. Attempts to shift the GalCer C-6 resonance (δ 61.361) using paramagnetic shift reagents (Pr3+ and Yb3+) yielded unsatisfactory results, leading us to adopt an alternate approach to quantify GalCer in the inner and outer vesicle leaflets of SUVs. Our approach was based on using Mn2+ ions as a bilayer-impermeant relaxing reagent to measure the distribution of cholesterol in the inner and outer leaflets of lipid vesicles. A similar strategy has previously been used to compare the accessibility of cholesterol in the outer leaflet of ester- and ether-linked phospholipid SUVs (39). Titration experiments revealed that 5 mM Mn2+ was optimal for efficiently quenching the resonance at 61.361 ppm derived from GalCer in the vesicle outer leaflet without affecting the resonances of the inner leaflet. Because of the relatively low signal intensity of the C-6 resonance of GalCer compared with the phospholipid resonances in SUVs, natural-abundance 13C NMR required that the vesicles contain a relatively high content of GalCer (>20 mol %) to achieve adequate signal-to-noise ratios for quantifying the transbilayer distribution of GalCer. To monitor GalCer transbilayer distribution over a wide range of mole fractions, including those typical of biological membranes, isotopic enrichment at the C-6 position of galactose was deemed the best strategy to assure acceptable sensitivity. Thus, we synthesized [6-13C]GalCer as described in detail in the Supplemental Information and outlined in Scheme 1. The resulting preparation was 99.8% isotopically enriched and increased the intensity of the GalCer C-6 resonance almost 100-fold (Fig. 1A).

Transmembrane Distribution of GalCer in POPC Vesicles-To determine GalCer distribution in phospholipid vesicles, POPC SUVs containing 1 mol % [6-13C]GalCer were prepared and analyzed by 13C NMR at 37 °C. Fig. 2 shows that the C-6 resonance of galactose was well separated from the POPC resonances and that its signal-to-noise ratio (~20:1) was well suited for quantitative analysis. Fig. 3 (left panel) shows the 50-75-ppm region of the 13C NMR spectrum of the POPC vesicles containing 1 mol % [6-13C]GalCer before and after addition of 5 mM Mn2+. It is noteworthy that GalCer strongly preferred the inner leaflet (Fig. 3, right panel), with 70% of the GalCer molecules being inaccessible to Mn2+ ions (Table I). The same high preference of GalCer for the inner leaflet of POPC SUVs also was observed when the GalCer content was increased to 2 mol % (Fig. 3, right panel; Table I).
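The leaflet arithmetic behind these percentages is straightforward; the sketch below, with invented integral values, shows the bookkeeping applied to the integrals measured with and without Mn2+.

```python
def leaflet_fractions(integral_total, integral_with_mn):
    """Leaflet quantitation from the C-6 resonance at 61.361 ppm:
    without Mn2+ the integral reports total GalCer; with Mn2+ only the
    inaccessible inner-leaflet GalCer still contributes."""
    inner = integral_with_mn / integral_total
    return inner, 1.0 - inner

# Invented integrals (arbitrary units), not measured data.
inner, outer = leaflet_fractions(integral_total=100.0, integral_with_mn=70.0)
print(f"inner {inner:.0%}, outer {outer:.0%}")   # inner 70%, outer 30%
```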
To determine the transbilayer distribution of POPC in SUVs containing 1 or 2 mol % GalCer, ³¹P NMR measurements were performed in the presence and absence of Pr³⁺ (see "Experimental Procedures"). The outer-to-inner leaflet phosphorus ratios were 1.94 and 1.92, respectively, and were similar to the 1.90 ratio of pure POPC SUVs (Table I).

SPM Alters the Transmembrane Distribution of GalCer-To investigate the effect of increasing SPM on the transbilayer distribution of GalCer, SUVs with a constant amount of [6-¹³C]GalCer and varying amounts of POPC and SPM (e.g. 2:1, 1:1, or 1:2) were prepared. The transbilayer distributions of [6-¹³C]GalCer and of each phospholipid were then assessed by ¹³C and ³¹P NMR spectroscopy, respectively. Fig. 4 (left panel) shows the ³¹P NMR spectra of vesicles composed of equimolar POPC and egg SPM containing 1 mol % GalCer. As the top spectrum illustrates, the ³¹P resonances of POPC (−0.900 ppm) and SPM (−0.246 ppm) were partially resolved from each other in the absence of Pr³⁺, indicating differing local environments for their phosphocholine headgroup moieties. Addition of Pr³⁺ caused the POPC and SPM resonances of the SUV outer leaflet to shift downfield, resulting in the four distinct resonance peaks shown in the lower spectrum of Fig. 4. By comparison with SUVs containing different amounts of SPM and POPC (e.g. 1:2 and 2:1), the SPM and POPC resonances were assigned to the inner (−0.246 and −0.900 ppm, respectively) and outer (7.596 and 5.203 ppm, respectively) leaflets. The peak assignments agree well with earlier reports (35–38). The larger Pr³⁺-induced downfield shift of outer leaflet SPM (compared with PC) and the distinct ³¹P resonances of SPM and POPC in the absence of lanthanide ions were consistent with earlier findings (32, 38, 40). Quantitative assessment indicated that the content of phospholipid (both POPC and SPM) in the outer leaflet of SUVs far exceeded that in the inner leaflet, consistent with the known mass distribution of PC in SUVs of ≈25-nm diameter (Fig. 4). However, small but reproducible differences in the transbilayer distributions of POPC and SPM could be distinguished (Table I). SPM showed a slightly greater preference for the outer leaflet at the expense of POPC, and this tendency became more pronounced as the SPM mole fraction increased. It is also noteworthy that the overall outside-to-inside ³¹P integrated signal ratios of the SPM/POPC vesicles remained very close to the 2:1 ratio expected for SUVs at all SPM compositions, consistent with the vesicles having average diameters of ≈25 nm (23, 25, 26). Previous studies have indicated that sonication of SPM results in formation of unilamellar vesicles similar in size to those generated by sonication of PCs (40). Having established the transbilayer distribution of SPM and POPC when mixed in SUVs, we next determined the effect of SPM on the transbilayer distribution of 1 mol % [6-¹³C]GalCer by ¹³C NMR (Fig. 5). We found that increasing the SPM content in POPC SUVs shifted GalCer from the inner to the outer leaflet. The amount of GalCer localized to the inner leaflet decreased from 70% in pure POPC vesicles to only 40% in POPC/SPM (1:2) vesicles. We believe that this represents the first evidence showing that SPM can dramatically alter the transbilayer distribution of a simple monohexosylceramide, such as GalCer, in phospholipid vesicles.

DISCUSSION

We have quantified the transbilayer distribution of GalCer in phospholipid vesicles by ¹³C NMR spectroscopy.
When used in combination with ³¹P NMR approaches that monitor phospholipid transbilayer distribution, the strategy provides novel insights into the effect of changing SPM content on the transbilayer distribution of simple glycosphingolipids. The results reveal two notable findings. First, at low mole fractions, GalCer strongly prefers the inner leaflet of POPC SUVs. Second, increasing the SPM content of the POPC SUVs shifts the transbilayer distribution of GalCer toward the outer leaflet. Ramifications of these observations are discussed below. In POPC SUVs containing low mole fractions of GalCer (1 or 2 mol %), 70% of the glycolipid is localized in the inner leaflet. This corresponds to a doubling of the GalCer inner membrane concentration with respect to POPC, which is 34% localized in the inner leaflet (Table I). The mass distribution of PC in SUVs (1:2 inner leaflet-to-outer leaflet) is well established (7–9). What is remarkable about the transmembrane distribution of GalCer is its strong preference for the inner leaflet when present at 1 or 2 mol % in POPC SUVs. Earlier studies of PE and PG (see Introduction) revealed just the opposite behavior in that these lipids strongly preferred the outer leaflet of PC SUVs (29, 30). Only when present at 10 mol % (or more) in PC SUVs did PE and PG assume transmembrane distributions that can be rationalized by the structural parameters associated with their overall shape, charge, and hydration (22, 29, 30). The preferential localization of low mole fractions of PE and PG to the outer leaflet of PC SUVs has been explained as a general lattice response linked to the "looser" molecular packing of the outer leaflet of PC SUVs (30). Our results clearly show that GalCer does not conform to the PE/PG transmembrane localizations in fluid PC bilayers previously reported for highly curved phosphoglyceride vesicles. This may be a consequence of GalCer having a completely uncharged and moderately hydrated polar headgroup compared with the zwitterionic or ionic headgroups of phosphoglycerides (36, 41). The second major finding of this study is that increasing the SPM content of POPC SUVs dramatically shifts the transbilayer distribution of GalCer toward the outer leaflets. This shift occurs even though SPM and POPC have chemically identical phosphocholine headgroups. Not surprisingly, our ³¹P NMR measurements in the presence and absence of the paramagnetic shift ion, Pr³⁺, show that SPM and POPC localize quite similarly in SUVs, with SPM showing only a slight preference for the outer leaflet (38, 40). However, the impact of increasing SPM content on GalCer transmembrane distribution is clear and dramatic (Table I). It is likely that the remarkable shift in the transbilayer localization of GalCer toward the outer leaflets of SPM/POPC SUVs reflects changes in the in-plane interactions that occur between GalCer and SPM relative to those that occur between GalCer and POPC in highly curved vesicles. The structural features of GalCer, SPM, and POPC likely to play a role in this behavior are the following. First, consider the lipid hydrocarbon region. POPC has the naturally prevalent PC motif consisting of sn-1 chain saturation and sn-2 chain unsaturation. Both egg SPM and [6-¹³C]GalCer have the naturally prevalent sphingolipid motif consisting of sphingosine and a saturated acyl chain. Compared with POPC's oleoyl chains, the mostly palmitoyl acyl chains (≈85%) of egg SPM would be expected to pack better with the palmitoyl chains of [6-¹³C]GalCer.
With regard to the polar headgroup region, one might mistakenly assume no difference between PC and SPM because of the chemical identity of their phosphocholine headgroups. However, it is clear that, when PC and SPM are mixed together in SUVs, the ³¹P NMR resonances of these lipids display different chemical shifts, indicating that the local environments near their respective phosphate groups are not identical. One explanation may be that SPM, but not PC, forms intramolecular hydrogen bonds involving the hydroxyl group at carbon 3 of the sphingoid base and either the bridge oxygen or ester oxygen of phosphate (40, 42–44). This capability may contribute to the metastable behavior and different structural conformations known to occur in SPMs (Refs. 43–46 and references therein). In addition, the ceramide regions of SPM and GalCer contain amide-linked acyl chains, which are thought to participate in intermolecular hydrogen bonding lattices via bridging water molecules, as well as 4,5-trans double bonds that further modulate intermolecular interactions (10, 42). The combined differences in the headgroup and interfacial regions of SPMs and PCs appear likely to affect their interactions with GalCer. Altogether, our results emphasize that subtle structural and conformational changes to the interfacial zone of bilayer matrix lipids, such as PC and SPM, can significantly affect the transbilayer distribution of simple sphingolipids, such as cerebroside, in curved membranes.

Physiological Relevance and Implications-Lipid transmembrane asymmetry is of fundamental importance to the health of cells, and a loss of this asymmetry has severe detrimental effects. During late apoptotic events, as well as under many other pathological conditions such as diabetes, malaria, and sickle cell disease, a loss of phosphatidylserine (PS) asymmetry occurs (47). A defect in the aminophospholipid translocase or activation of the phospholipid scramblase causes abnormal exposure of PS on the exoplasmic leaflet from its normal cytoplasmic orientation (48–50). PS externalization during the lipid scrambling process has recently been linked to the inward translocation of external SPM (6). This "flopped" SPM pool is hydrolyzed by cytosolic sphingomyelinase to ceramide as part of the execution phase of apoptosis. SPM depletion from the plasma membrane leads to a redistribution of cholesterol to intracellular sites and/or the efflux of cholesterol to external acceptors such as serum lipoproteins and cyclodextrins (51, 52). An important consequence of SPM transmembrane migration and associated ceramide generation is a triggering of membrane destabilization and an increase in membrane fission processes involving membrane blebbing and vesicle shedding. Other striking examples implicating sphingolipid transmembrane distributions in the triggering of membrane vesiculation in cell and model membranes also have been reported (1, 53). Gaining insights into the effects of high curvature on membranes is an area of increasing interest in cell biology because of the importance of membrane fission events in generating transport vesicles (54). The ability of endophilin, a presynaptically enriched protein that binds the GTPase dynamin and synaptojanin, to generate very highly curved membrane tubules underscores the potential importance of tubulovesiculation processes to membrane trafficking events in the cell (55).
Investigations of sphingolipid transbilayer distributions in curved membranes of defined composition, such as those reported here, are likely to provide a valuable foundation for a better understanding of cellular processes initiated by or utilizing curved membrane regions where sphingolipids are important.
Quantum well stabilized point defect spin qubits

Defect-based quantum systems in wide bandgap semiconductors are strong candidates for scalable quantum-information technologies. However, these systems are often complicated by charge-state instabilities and interference by phonons, which can diminish spin-initialization fidelities and limit room-temperature operation. Here, we identify a pathway around these drawbacks by showing that an engineered quantum well can stabilize the charge state of a qubit. Using density-functional theory and experimental synchrotron x-ray diffraction studies, we construct a model for previously unattributed point defect centers in silicon carbide (SiC) as a near-stacking fault axial divacancy and show how this model explains these defects' robustness against photoionization and room-temperature stability. These results provide a materials-based solution to the optical instability of color centers in semiconductors, paving the way for the development of robust single-photon sources and spin qubits. A promising approach towards solving this problem is to engineer defect ionization energies by locally manipulating the band gap of the host material. In particular, a quantum well can lower the ionization energy of a point defect's dark state so that the excitation laser will preferentially repopulate the point defect's bright state. The mechanism is schematically depicted in Fig. 1D. Here, we demonstrate that the combination of a quantum well and a spin qubit in SiC can stabilize the latter against photoionization. Additionally, we demonstrate that such configurations are readily observable in SiC. The generalization of these exemplified local structures to other semiconductor hosts could provide a material-based solution to the optical instability of color centers and pave the way toward robust point-defect-based single-photon emitters and spin qubits.

Results

Stacking faults and polytype inclusions are natural sources of quantum wells in semiconductors (21–23). Such extended defects may additionally incorporate color centers that can exhibit distinct properties compared to their bulk counterparts (24). As both extended defects (25–28) and applicable color centers (17, 20, 29) are commonplace in SiC, we take SiC as an exemplary host material for studying complex defect structures. In particular, we investigate divacancy qubits in the vicinity of a single stacking fault in 4H-SiC, in order to understand the consequences of their interaction. 4H-SiC is the most commonly used polytype of SiC. The primitive cells of 4H-SiC and other relevant polytypes are depicted in Fig. 2A-C. We consider a Frank-type stacking fault defect (30), a 1FSF(3,2) stacking fault in the Zhdanov notation, whose structure can be obtained by inserting a single Si-C double layer in cubic stacking order into a perfect 4H-SiC primitive cell (see Fig. 2D). This configuration was assigned to the so-called "carrot" defect in 4H-SiC (23, 31). Stacking faults in 4H-SiC often form quantum-well-like states that can be observed by photoluminescence (21–23). The considered stacking fault configuration was assigned to the 482 nm PL-emission line (23). First, we theoretically confirm that the considered stacking fault forms a quantum well.
The band structures of perfect 4H- and 6H-SiC, as well as of the defective 4H-SiC structure including a 1FSF(3,2) stacking fault, are depicted in Fig. …. Hereinafter, we refer to the high and low symmetry configurations as axial and basal plane divacancies, respectively. Recently, each of the divacancy configurations was assigned (32) to the PL1-PL4 divacancy related qubits (17) in 4H-SiC. In our study, we consider two sets of divacancy configurations within a single model: 1) divacancies in the near-stacking-fault region, i.e. at most 5 Å away from the stacking fault, and 2) bulk-like divacancy configurations, i.e. at least 14 Å away from the stacking fault. The near-stacking-fault and the bulk-like configurations are marked as ab-ssf and ab-4H in Fig. 2D, respectively. In this notation, a and b can respectively represent silicon and carbon vacancies in both hexagonal-like and cubic-like environments. Note that, due to the presence of the stacking fault, we distinguish three cubic-like lattice sites in the near-stacking-fault region, named k1, k2, and k3, and two hexagonal-like lattice sites in the near-stacking-fault region, named h1 and h2. In Fig. 6 and Table S1 in Supplementary Materials, we provide the calculated zero-field-splitting parameters for all the considered axial and basal plane configurations, respectively. In agreement with the hyperfine splitting results, the ZFSs of the axial configurations form three well-distinguishable groups, see Fig. 6. Note that the k2k2-ssf configuration exhibits the largest zero-field splitting, which differs from all the bulk-like axial configurations. Considering the basal plane configurations in Table S1 in Supplementary Materials, a similar trend can be observed as for the hyperfine splitting parameters, i.e. the kh-4H and k3h2-ssf as well as the hk-4H and h2k2-ssf pairs exhibit comparable zero-field-splitting parameters. Additional divacancy-related centers have been observed, such as PL5'-PL6' (13), that cannot be explained by the possible configurations in the perfect bulk 4H-SiC host (the PL1-PL4 centers (32)). In many respects, these unexplained, yet generally observable configurations follow the properties of bulk divacancy configurations. On the other hand, they show outstanding stability in room-temperature ODMR measurements (17, 33) and photoionization studies (11). In the latter case, the luminescence of the additional centers, such as the PL6 axial divacancy related room-temperature qubit, does not change upon applying an additional repump laser of varying wavelength (11). This is in contrast to the bulk divacancy configurations, which exhibit a three orders of magnitude increase in PL intensity upon applying an appropriate repump laser (11). This observation indicates that the PL6 qubit remains in its bright state under continuous excitation, while bulk divacancy configurations turn into the dark state with higher probability (11). We propose that the additional divacancy related qubits are related to divacancy-quantum well structures created by a single stacking fault. Indeed, we found three distinguishable configurations, two basal plane and one axial configuration, that can account for the PL5 and PL7 basal plane and PL6 axial divacancy configurations reported in ODMR measurements in 4H-SiC. Furthermore, the calculated hyperfine, zero-field-splitting, and ZPL magneto-optical parameters obtained for the k2k2-ssf configuration agree well with the experimental data reported for the PL6 divacancy related qubit (see Fig. 4, Fig. 6, and Supplementary Materials).
Consequently, we assign the k2k2-ssf combined stacking fault-divacancy configuration to the PL6 center. The outstanding stability of the PL6 qubit may be attributed to the mechanism discussed above and depicted in Fig. 1D. The two additional observed basal plane divacancy configurations, k1h1-ssf and k2k1-ssf, may be related to the additional two basal plane ODMR centers, PL5 and PL7 (33). Due to the fewer experimental data available for the PL5 and PL7 qubits, we cannot conclusively assign them at this point.

Discussion

The study of point defects embedded in extended defects (39) has received much less attention than that of pure point defects (24). These structures, however, may broaden the palette of point defect qubits and may provide a new avenue for engineering their properties for superior functionality. Through the example of divacancy qubits in close vicinity of a single stacking fault in 4H-SiC, we draw attention to an alternative way of engineering point defect qubits into robust quantum bits. In particular, we demonstrated that the quantum well of a stacking fault can give rise to a mechanism that stabilizes point defect qubits without the application of an additional re-pumping laser. Furthermore, by identifying the PL6 room-temperature qubit as a divacancy-stacking fault structure, we demonstrated that defects in quantum wells are important and readily observable in ODMR and PL measurements in SiC. The particular stability of the PL6 center exemplifies the stabilization mechanism of stacking fault quantum wells, but our results can be generalized to a wide variety of point defect qubits and single-photon emitters in semiconductors. Stacking faults can also appear in other semiconductors. For example, diamond, an important material for optically addressable spin qubits, also contains stacking faults (40). Thus, incorporating point defects into quantum wells could be an important strategy for discovering a large and robust class of new spin qubits.

Methods

Following the methodology of our previous works (32, 45), the basal plane size as well as the k-point grid density are optimized for all the magneto-optical parameters calculated in this study. The large axial size of the supercell allows us to calculate and compare near-stacking-fault and farther, bulk-like divacancies using the same model. To obtain the most accurate ground state hyperfine tensors of first neighbor ¹³C and second neighbor ²⁹Si nuclei, the HSE06 functional is used on a PBE-relaxed supercell of 704 atoms with 3×3×1 k-point sampling. To obtain the ground state ZFS, we use a 1584-atom supercell with a 2×2×1 k-point set, PBE Kohn-Sham wavefunctions, and our in-house implementation for the ZFS tensor calculation (46). In our computational study we concentrate on the most reliable ground state hyperfine and ZFS data; however, to supplement the discussion and our conclusions, we calculate the ZPL energies as well, see Supplementary Materials.
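As a small illustration of the kind of post-processing behind the reported zero-field-splitting values, the sketch below diagonalizes a hypothetical traceless ZFS tensor and extracts the scalar parameters with one common convention, D = (3/2)D_zz and E = (D_xx − D_yy)/2; the tensor entries are placeholders, not output of the calculations described here.

```python
import numpy as np

# Sketch: extracting D and E from a zero-field-splitting tensor.
# The tensor below is a hypothetical traceless example in MHz; real input
# would come from the spin-spin dipolar integration (cf. ref. 46).
zfs = np.array([
    [-450.0,   10.0,    0.0],
    [  10.0, -430.0,    0.0],
    [   0.0,    0.0,  880.0],
])

vals = np.linalg.eigvalsh(zfs)
dxx, dyy, dzz = vals[np.argsort(np.abs(vals))]  # |Dxx| <= |Dyy| <= |Dzz|
D = 1.5 * dzz                 # axial parameter
E = 0.5 * (dxx - dyy)         # transverse (rhombic) parameter
print(f"D = {D:.0f} MHz, E = {E:.1f} MHz")
# With these placeholders D is ~1.3 GHz, the order of magnitude known
# for divacancy-related qubits in SiC.
```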
Nonlinearity-induced topological phase transition characterized by the nonlinear Chern number

As first demonstrated by the characterization of the quantum Hall effect by the Chern number, topology provides a guiding principle to realize robust properties of condensed matter systems immune to the existence of disorder. The bulk-boundary correspondence guarantees the emergence of gapless boundary modes in a topological system whose bulk exhibits nonzero topological invariants. Although some recent studies have suggested a possible extension of the notion of topology to nonlinear systems such as photonics and electrical circuits, the nonlinear counterpart of a topological invariant has not yet been understood. Here, we propose the nonlinear extension of the Chern number based on nonlinear eigenvalue problems in two-dimensional systems and reveal the bulk-boundary correspondence beyond the weakly nonlinear regime. Specifically, we find nonlinearity-induced topological phase transitions, where the existence of topological edge modes depends on the amplitude of oscillatory modes. We propose and analyze a minimal model of a nonlinear Chern insulator whose exact bulk solutions are analytically obtained and indicate the amplitude dependence of the nonlinear Chern number, for which we confirm the nonlinear counterpart of the bulk-boundary correspondence in the continuum limit. Thus, our result reveals the existence of genuinely nonlinear topological phases that are adiabatically disconnected from the linear regime, showing the promise of expanding the scope of topological classification of matter towards the nonlinear regime.

Topology is utilized to realize robust properties of materials that are immune to disorders [1,2]. A prototypical example of topological materials is the quantum Hall effect [3,4], which was discovered in a two-dimensional semiconductor under a magnetic field. In such a two-dimensional system, the Chern number characterizes the topology of the band structure and the corresponding gapless boundary modes. This bulk-boundary correspondence lies at the heart of the robustness of topological devices utilizing boundary modes. Recent studies have also explored topological phenomena in a variety of platforms, such as photonics [5], electrical circuits [6], ultracold atoms [7], fluids [8], and mechanical lattices [9].
While band topology has been well-explored in linear systems, nonlinear dynamics is ubiquitous in classical [10–15] and interacting bosonic systems [16,17]. For example, nonlinear interactions can naturally emerge in the mean-field analysis of bosonic many-body systems, as is seen in the Gross-Pitaevskii equations of ultracold atoms. Recent studies have also analyzed the nonlinear effects on topological edge modes [11,12,18–27] and revealed unique topological phenomena intertwined with solitons [28–34] and synchronization [35–37]. Nonlinearity can further modify the conventional notion of topological phase; recent studies have revealed that one-dimensional systems can exhibit nonlinearity-induced topological phase transitions, where the existence of topological edge modes depends on the amplitude of the oscillatory modes [38–42]. While these previous studies have indicated the existence of topological edge modes in nonlinear systems, one cannot straightforwardly extend the topological invariants to nonlinear systems because they have no band structures. In addition, despite the advantage that nontrivial topology in two-dimensional systems requires no additional symmetries other than the U(1) and translation symmetries, as is the case for the quantum Hall effect, nonlinear topology in two-dimensional systems [18,29,30,34] is much less understood than that in one-dimensional systems.

In this paper, we introduce the notion of the nonlinear Chern number and reveal its relation to the bulk-boundary correspondence. To define the nonlinear Chern number of two-dimensional systems, we consider the nonlinear extension of the eigenvalue problem [40,42,43] and make an analogy to band structures. While it is not obvious that the nonlinear eigenvalue problem elucidates the bulk-boundary correspondence in nonlinear topological insulators, we theoretically prove the bulk-boundary correspondence of the nonlinear Chern number in weakly nonlinear systems. Furthermore, in stronger nonlinear regimes where nonlinear terms are larger compared to the linear band gap, we find that the nonlinearity-induced topological phase transition can occur in two-dimensional systems, as depicted in Fig. 1. We construct a minimal model of a nonlinear Chern insulator, for which we can obtain exact bulk solutions and thus can analytically show the amplitude dependence of the associated nonlinear Chern number without any approximations. In the continuum limit of the model, we analytically show the nonlinear bulk-boundary correspondence, which states that the nonlinear Chern number predicts the existence of edge modes even under strong nonlinearity. In addition, we numerically confirm the bulk-boundary correspondence in the lattice model at the parameters where the lattice model well-approximates the behavior of the continuum model. We also discuss that the nonlinearity-induced edge modes can be observed in quench dynamics in experimental setups such as nonlinear topological photonics. Since the nonlinearity-induced topological phases are disconnected from the linear limits under adiabatic deformations, our results show the existence of genuinely nonlinear topological phases.
FIG. 1. Schematic of the nonlinear Chern insulators and the nonlinearity-induced topological phase transition. While the topology of a noninteracting linear system can be characterized by the Chern number that is computed from its eigenvectors, the topology of a nonlinear system is classified by the nonlinear Chern number, which utilizes the nonlinear extension of the eigenequation. In weakly nonlinear regions (i.e., small amplitude), the nonlinear Chern number predicts the existence of edge modes corresponding to those in linear systems. Specifically, when nonlinear systems exhibit edge-localized steady states, both the nonlinear and linear Chern numbers are nonzero, as shown in the upper part of the figure. If we inject higher energy into the system and consider the eigenmodes with large amplitudes, the nonlinear band structure can become gapless. At such a gapless point, a nonlinearity-induced topological phase transition can occur, where topological boundary modes appear with a nonzero nonlinear Chern number. The nonlinearity-induced topological phases exhibit boundary modes that cannot be predicted from the linear Chern number. Therefore, such topological phases are genuinely unique to nonlinear systems.

The scope of this paper applies to a broad class of two-dimensional systems with U(1)-gauge and spatial translation symmetries, which can be realized in a variety of experimental setups. Just as the Thouless-Kohmoto-Nightingale-den Nijs formula [4] triggered the research of a variety of topological materials, the nonlinear Chern number is expected to open up a research stream of nonlinear topological materials, including their systematic classification. From the experimental point of view, one can realize nonlinear Chern insulators with the U(1)-gauge symmetry in, e.g., photonics [11,12,18,20,22,25,27,29–33], ultracold atoms [16,17,27], and electrical circuits [6,35,36], where both linear band topology and nonlinear effects have been investigated. In particular, since Kerr nonlinearity [10] is fairly common in photonic systems, it should be possible to extend the current topological photonic devices to nonlinear ones.

Nonlinear eigenvalue problem and nonlinear Chern number.-The conventional band topology is based on eigenvalue problems in linear systems. To define a topological invariant, one should consider eigenvalues E_j(k) and eigenvectors |ψ_j(k)⟩ of the Bloch Hamiltonian H(k) corresponding to the Fourier component of the real-space Hamiltonian of a periodic system. Then, one can define the Chern number as C_j = (1/2πi) ∫ ∇_k × ⟨ψ_j(k)|∇_k|ψ_j(k)⟩ d²k. The Chern number is unchanged under continuous deformations of the Hamiltonian unless the Bloch Hamiltonian is degenerate. In this sense, the band topology is robust against perturbations, which is of practical use to realize topological edge modes in experimental setups. Thus, to define the topological invariant in nonlinear systems, it is essential to extend the notion of band structures.

We here consider the nonlinear extension of the eigenvalue equations [40,42,43] and define the nonlinear Chern number by using the nonlinear eigenvectors. We start from the general nonlinear dynamics, i∂_t Ψ_j(r) = f_j(Ψ; r), (1) where Ψ_j(r) is the state variable and f_j(•; r) is a nonlinear function of the state vector Ψ. In lattice systems, we use the notation that r is a representative point in each unit cell of the lattice, as shown in Fig. 2a.
Then, j represents the internal degrees of freedom that include, e.g., sublattices and effective spin degrees of freedom. When we consider continuum systems, r simply represents the location and j corresponds to the internal degrees of freedom such as spins. For example, the Gross-Pitaevskii equation in continuous space is given by f(Ψ; r) = −∇²Ψ(r)/(2m) + VΨ(r) + (4πa/m)|Ψ(r)|²Ψ(r), with V being a potential and m and a being constants; here the nonlinear function f(•; r) depends on Ψ(r) and its derivatives and has no internal degrees of freedom. Since the quantum Hall system has the U(1) and translation symmetries, we impose them on the nonlinear equation to study the analogy of such a prototypical topological insulator. Concretely, the U(1) symmetry is represented as f_j(e^{iθ}Ψ; r) = e^{iθ}f_j(Ψ; r), which is satisfied by, e.g., the Kerr-like nonlinearity κ|Ψ_j(r)|²Ψ_j(r) [10]. The translational symmetry in lattice systems is defined as f_j(Ψ; r + a) = f_j(Φ; r), with a being a lattice vector and Φ being the translated state variables Φ_j(r) = Ψ_j(r + a). The translational symmetry in continuum systems is defined by the same equations, while f_j(Φ; r) still retains an r dependence due to, e.g., the periodic potential. We also focus on conservative dynamics analogous to Hermitian Hamiltonians, where the sum of squared amplitudes Σ_{j,r} |Ψ_j(r)|² is preserved.

Corresponding to the nonlinear dynamical system in Eq. (1), the nonlinear eigenvector and eigenvalue are defined as the state vector Ψ with components Ψ_j(r) and the constant E that satisfy f_j(Ψ; r) = EΨ_j(r). (2) We term this equation a nonlinear eigenequation and analyze its bulk-boundary correspondence below. We note that we can regard the nonlinear eigenvector as a periodically oscillating steady state Ψ_j(r; t) = e^{−iEt}Ψ_j(r) of the nonlinear system when the eigenvalue is real. Since the sum of the squared amplitudes is conserved under the U(1) symmetry, we here focus on special solutions with fixed sums of squared amplitudes Σ_{j,r} |Ψ_j(r)|² (resp. Σ_j ∫ dⁿr |Ψ_j(r)|²) in lattice (resp. continuum) systems, where we take the sum over both the location and the internal degrees of freedom.

To extend the Chern number to nonlinear systems, we introduce the eigenvalue problems in the wavenumber space, which are analogous to the linear eigenequation of the Bloch Hamiltonian. In a lattice system with the translation symmetries, we assume an ansatz state [40] that we name the Bloch ansatz: Ψ_j(r) = e^{ik·r}ψ_j(k). We note that in linear systems the Bloch theorem guarantees that every eigenvector is given by the form of the Bloch ansatz. On the other hand, in nonlinear systems, there can be nonlinear eigenvectors outside the description of the Bloch ansatz, including bulk-localized ones. Despite the existence of such localized modes, we here only focus on nonlinear bulk eigenvectors described by the Bloch ansatz and show that even such periodic bulk solutions can exhibit topological phenomena unique to nonlinear systems, i.e., the nonlinearity-induced topological phase transition. Under this ansatz and the U(1) symmetry, one can rewrite the nonlinear eigenequation as f_j(k, ψ(k)) = E(k)ψ_j(k), parametrized by k (see Supplementary Information for the detailed derivation).
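As a concrete illustration of how such wavenumber-space eigenproblems feed into a Chern number, the following sketch evaluates the Chern number of a two-band lattice model on a discretized Brillouin zone via the Fukui-Hatsugai-Suzuki link-variable construction. The test Hamiltonian is the (linear) QWZ model introduced later in the text; the grid size and mass values are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_qwz(kx, ky, mass):
    """Two-band QWZ Bloch Hamiltonian with mass term `mass`."""
    return (np.sin(kx) * sx + np.sin(ky) * sy
            + (mass + np.cos(kx) + np.cos(ky)) * sz)

def chern_number(mass, n=60):
    """Lower-band Chern number via the Fukui-Hatsugai-Suzuki method:
    accumulate the Berry phase of U(1) link variables around each
    plaquette of an n x n grid covering the Brillouin zone."""
    ks = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(h_qwz(kx, ky, mass))[1][:, 0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            u1, u2 = u[i, j], u[(i + 1) % n, j]
            u3, u4 = u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n]
            total += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                              * np.vdot(u3, u4) * np.vdot(u4, u1))
    return round(total / (2.0 * np.pi))

# The usual QWZ phase diagram: C changes at mass = -2, 0, +2.
for m in (-3.0, -1.0, 1.0, 3.0):
    print(f"mass = {m:+.1f}  ->  C = {chern_number(m)}")
```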
To capture the nonlinearity-induced topological phase transition depending on amplitudes, we focus on a special solution of the nonlinear eigenvector at each k whose sum of the squared amplitudes Σ_j |ψ_j(k)|² = w is fixed independently of the wavenumber k. We note that the assumption of such fixed-amplitude Bloch-ansatz solutions is consistent with the perturbation calculation of the nonlinear eigenvectors (see Supplementary Information for the detail). By using fixed-amplitude nonlinear eigenvectors, we define the nonlinear Chern number as C_NL(w) = (1/2πi) ∫_BZ ∇_k × ⟨ψ(k)|∇_k|ψ(k)⟩ d²k. (3) We note that this definition reduces to the conventional linear Chern number if f defined in Eq. (2) is a linear function. It is also noteworthy that, since special solutions of nonlinear eigenvectors should exist at arbitrary w in ordinary nonlinear systems, we can define the nonlinear Chern number at any positive w except for gap-closing points. One can prove that the nonlinear Chern number is an integer by embedding the nonlinear eigenvectors into an eigenspace of a linear Bloch Hamiltonian (see also Supplementary Information). Since the eigenvector can be changed by the amplitude w, the nonlinear Chern number also depends on w, as shown in Fig. 1. Therefore, the nonlinear Chern number can predict the nonlinearity-induced topological phase transition driven by the change of the amplitude in nonlinear systems, which is a qualitatively novel phenomenon absent in linear systems [22,24,38–42].

In continuum systems with a periodic potential, the Bloch ansatz should read Ψ_j(r) = e^{ik·r}ψ_j(k, r), with ψ_j(k, r) being a periodic function of r whose period is equal to that of the periodic potential. Then, the wavenumber-space representation of the nonlinear eigenequation becomes f_j(k, ψ(k); r) = E(k)ψ_j(k, r), and the nonlinear Chern number can be written as C_NL(w) = (1/2πi) ∫_BZ ∇_k × [Σ_j ∫_S d²r ψ*_j(k, r)∇_k ψ_j(k, r)] d²k, where S represents the unit cell of the periodic system. The squared amplitude is defined as w = Σ_j ∫_S d²r |ψ_j(k, r)|² in this continuum case.

Compared to a previous study of the nonlinear topological invariant in one-dimensional systems [42], the nonlinear Chern number has no higher-order correction terms. This is because we assume the U(1) symmetry, which guarantees that one can describe periodic steady states by using a single frequency mode. Therefore, there is no need to consider the multi-mode expansion of nonlinear eigenvectors that leads to the nonlinear correction terms. In fact, the one-dimensional topological invariant defined in the previous study also reduces to the conventional form under the U(1) symmetry, which is consistent with our results.

We note that the Bloch ansatz does not describe bulk-localized solutions that can be obtained in strongly nonlinear systems. Since such strong nonlinearity can also generate edge-localized modes [34], it is not straightforward to identify whether the edge modes originate from the bulk topology or from nontopological nonlinear effects, which makes the bulk-boundary correspondence unclear in strongly nonlinear regimes. Therefore, in the following sections, we mainly focus on weakly and more strongly nonlinear systems where the nonlinear terms are smaller than the linear terms (see also Supplementary Information for the Bloch-wave-like solutions of the bulk modes in this parameter region).
Nonlinear Chern number calculated from exact solutions of a lattice model.-To investigate the bulk-boundary correspondence, i.e., the correspondence between the nonzero nonlinear Chern number and the existence of edge-localized steady states, we propose and analyze the nonlinear extension of the Qi-Wu-Zhang (QWZ) model [44] defined in the real space (see Supplementary Information for the detail of the real-space description of the model). By using the Bloch ansatz, we rigorously obtain its wavenumber-space description, f(k, ψ(k)) = [sin k_x σ_x + sin k_y σ_y + (u + κw + cos k_x + cos k_y)σ_z]ψ(k), (4) where w is the squared amplitude and u and κ are dimensionless parameters of the linear and nonlinear mass terms. We here introduce the staggered Kerr-like nonlinearity ±κw to the linear Chern-insulator model [44].

To calculate the nonlinear Chern number, we focus on special solutions where the squared amplitude w is fixed independently of the wavenumber k. Then, we can regard Eq. (4) as a linear equation and thus can diagonalize it as the linear QWZ model. Analytically diagonalizing the right-hand side of Eq. (4), we obtain the exact bulk solutions of the nonlinear eigenvalues and eigenvectors. Then, using the exactly obtained nonlinear eigenvectors, we calculate the nonlinear Chern number and obtain the phase diagram in Fig. 2b (see also Supplementary Information for the details of the calculation and its numerical confirmation). The amplitude dependence of the nonlinear Chern number indicates the existence of the nonlinearity-induced topological phase transition in the nonlinear QWZ model. We note that the above nonlinear eigenvectors are exact solutions of the nonlinear QWZ model under the periodic boundary conditions, and thus our result reveals the existence of the nonlinearity-induced topological phase transition without any approximations. Such an analytical demonstration of the nonlinearity-induced topological phase transition is achieved by considering nonlinear equations of the form (5), with ψ_j and k being the state variables and the wavenumber (see also Supplementary Information), while the addition of off-diagonal or non-uniform diagonal nonlinear terms prevents us from obtaining the exact solutions. It is also noteworthy that the obtained exact solutions are consistent with the results of the perturbation and self-consistent calculations of the nonlinear eigenvalue problem (see also Supplementary Information).

Bulk-boundary correspondence in weakly nonlinear systems.-We first numerically confirm the bulk-boundary correspondence in weakly nonlinear systems. We simulate the dynamics of the nonlinear QWZ model (Eq. (4)) with weak nonlinearity, where the nonlinear Chern number is the same as that in the linear limit κw → 0. In the topological phase C_NL = ±1 (Fig. 2d), we find a long-lived localized state that corresponds to a topological edge mode in the QWZ model. Meanwhile, in the case of C_NL = 0 (Fig. 2c), the edge-localized initial state spreads into the bulk, which indicates the absence of edge modes. We also confirm the bulk-boundary correspondence from the perspective of the nonlinear band structure, as shown in Fig. 3 (see Supplementary Information for the details of the numerical method).

In fact, the bulk-boundary correspondence between the nonlinear Chern number and the gapless edge modes can be established in general weakly nonlinear systems. We mathematically show the bulk-boundary correspondence under weak nonlinearity compared to the linear band gap. We describe the detail of the theorem and its proof in Supplementary Information.
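Because a fixed-amplitude solution turns Eq. (4) into a linear QWZ model with the shifted mass u + κw, the amplitude dependence of the nonlinear Chern number can be scanned by reusing the chern_number() routine from the earlier sketch with this effective mass (run the two snippets together). The parameter values below are illustrative, not those of Fig. 2b.

```python
# Sketch: nonlinearity-induced topological phase transition in the
# nonlinear QWZ model. For fixed squared amplitude w, Eq. (4) reduces to
# the linear QWZ model with effective mass u + kappa*w, so C_NL(w) can be
# read off with chern_number() from the previous sketch.
u, kappa = -2.5, 1.0                    # illustrative parameters
for w in (0.0, 0.25, 0.75, 1.0, 1.5):
    print(f"w = {w:4.2f}  ->  C_NL = {chern_number(u + kappa * w)}")
# The effective mass crosses -2 at w = 0.5: the nonlinear band gap closes
# there and the nonlinear Chern number jumps from 0 to a nonzero value,
# i.e., a nonlinearity-induced topological phase transition.
```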
Nonlinearity-induced topological phase transition by stronger nonlinearity.-We next show that nonlinearity-induced topological phase transitions occur in the stronger nonlinear regime, where the nonlinear Chern number becomes nonzero and topological edge modes appear at a critical amplitude. To analyze the behavior of topological edge modes, we derive the effective theory of the low-energy dispersion of the nonlinear QWZ model around the critical amplitude. For example, if we focus on the critical amplitude w_c = (−2 − u)/κ, the nonlinear band structure of the model closes the gap at (k_x, k_y) = (0, 0). Then, around the critical amplitude and the gap-closing point, w ≈ w_c and (k_x, k_y) ≈ (0, 0), we retain the leading-order terms of the wavenumber-space description of the nonlinear QWZ model. Finally, by substituting the wavenumbers with derivatives, we derive the real-space description of the continuum model, i∂_t Ψ(r) = Ĥ(Ψ(r))Ψ(r), (7) with m = u + 2 and Ψ(x, y) = (Ψ_1(x, y), Ψ_2(x, y))^T being the state-vector function at the location (x, y). This state-dependent Hamiltonian has a similar structure to the Dirac Hamiltonian except for the nonlinear mass term κ(|Ψ_1(x, y)|² + |Ψ_2(x, y)|²), and thus we term it the nonlinear Dirac Hamiltonian. Starting from the other critical amplitudes w_c = −u/κ and w_c = (2 − u)/κ, one can derive similar state-dependent Hamiltonians (see also Supplementary Information). In general, the nonlinear Dirac Hamiltonian should describe the low-energy dispersion of nonlinear topological insulators, and thus its localized modes unveil the existence of topological edge modes in various continuum systems. By using the Bloch ansatz Ψ_i(x, y) = ψ_i(k) exp(i(k_x x + k_y y)) (without explicit (x, y) dependence because of the continuous translational symmetry), one can determine the nonlinear Chern number of the nonlinear Dirac Hamiltonian as C_NL = −sgn(m + κw)/2, (8) where w = ∫_S dS [|Ψ_1(x, y)|² + |Ψ_2(x, y)|²]/|S| is the average squared amplitude of plane waves in this nonlinear system. We note that the nonlinear Chern number of the nonlinear QWZ model corresponds to the sum of those of the nonlinear Dirac Hamiltonians obtained from the expansion around the gap-closing points (k_x, k_y) = (0, 0), (0, π), (π, 0), (π, π).

FIG. 4. When m is positive and κ is negative, the nonlinear Dirac Hamiltonian exhibits the nonlinearity-induced topological phase transition. We obtain an unconventional localized mode if the amplitude is larger than a critical value. In this localized mode, there exist nonvanishing amplitudes even in the limit of x → ∞. Therefore, the localized mode can be unphysical in a truly semi-infinite system because we cannot normalize such a nonvanishing mode, while it still exists and is physically relevant in experimentally realizable finite systems. We set m, κ, and D as m = 0.…

We analytically show that the Chern number predicts the existence of localized modes in continuum systems by calculating the localized modes of the nonlinear Dirac Hamiltonian. We assume the ansatz Ψ_i(x, y) = e^{ik_y y}Ψ′_i(x), which is periodic in the y direction, and consider the x > 0 region with the open boundary at x = 0.
We calculate the localized mode with the wavenumber k_y and the eigenvalue E = k_y. Constructing an analogy to the linear case, one can use an ansatz analogous to the linear edge mode and analytically obtain the solution of this equation, whose local squared amplitude is w(x) = 1/(−(κ/m) + De^{−2mx}), (9) where D is the integral constant and −(κ/m) + De^{−2mx} must be positive. Relating D to the squared amplitude averaged over the system size L, we find the bulk-boundary correspondence, i.e., the correspondence between the positive nonlinear Chern number in Eq. (8) and the existence of a left-localized mode in Eq. (9). We note that L can take an arbitrary value because the local amplitude w(x) = 1/(−(κ/m) + De^{−2mx}) of the left-localized mode also exists in the topological parameter region, m + κw(x) < 0.

FIG. 5. The legend shows the lattice constants h. We can confirm that the size of the gap decreases as the system size becomes larger. However, if the lattice constant is so large that the discretized system cannot imitate the behavior of the continuum nonlinear Dirac Hamiltonian, we find a sudden decrease in the size of the band gap. This sudden decrease contradicts the estimation that the band gap is proportional to 1/(L + 1/2) and indicates the existence of irregular gapless modes. Since such gapless modes appear at smaller amplitudes than the critical point of the nonlinear Chern number, the numerical result indicates that the bulk-boundary correspondence can be ensured only after taking both the continuum and thermodynamic limits.

Figure 4 summarizes the behaviors of gapless modes in the nonlinear Dirac Hamiltonian at different parameters. In the case of m < 0, where the Chern number is C = 1/2 in the linear limit, we obtain the localized states at the left side, as in the linear case. These localized states are consistent with the bulk-boundary correspondence in weakly nonlinear systems shown in the previous section. If we consider m > 0, κ < 0, we still obtain the localized state, while the amplitude remains |m/κ| in the limit of x → ∞. The residual amplitude |m/κ| corresponds to the transition point of the nonlinear Chern number, which satisfies m + κw = 0. Therefore, the localized state at m > 0, κ < 0 indicates the nonlinearity-induced topological phase transition associated with the amplitude-dependent Chern number. We note that the nonvanishing amplitude in the limit of x → ∞ indicates that it is impossible to normalize the edge mode. Meanwhile, in finite systems, such a nonvanishing localized mode can be normalized and thus can robustly emerge. We can also check that no localized modes appear in the case of positive m and κ, where the nonlinear Chern number is C_NL = −1/2 at any amplitudes.

In lattice systems, we numerically validate the existence of the amplitude-dependent gapless modes and their correspondence to the nonlinear Chern number. We reveal that the finite-size effect leads to nonzero gaps of the localized modes, while such band gaps converge to zero in the thermodynamic limit L → ∞. We also find that the discreteness of edge modes can alter the phase boundary, and thus lattice systems exhibit the bulk-boundary correspondence at the parameters where they well-approximate the continuum models. To show this, we consider the rediscretization of the continuum nonlinear Dirac Hamiltonian in Eq. (7) and reveal the correspondence between the parameters in the nonlinear QWZ model (Eq. (4)) and the lattice constant (see Supplementary Information for the detail). For a sufficiently small lattice constant h, one can assume that the discretized Hamiltonian reproduces the behavior of the continuum nonlinear system.
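To make the shape of these localized modes tangible, the short sketch below evaluates the local squared amplitude w(x) = 1/(−(κ/m) + De^{−2mx}) of Eq. (9) in the two regimes discussed above; the values of m, κ, and D are illustrative and are not the ones used for Fig. 4.

```python
import numpy as np

# Sketch: local squared amplitude w(x) of the left-localized mode of the
# nonlinear Dirac Hamiltonian for x > 0. Parameters are illustrative.

def w_profile(x, m, kappa, D):
    denom = -(kappa / m) + D * np.exp(-2.0 * m * x)
    return np.where(denom > 0, 1.0 / denom, np.nan)  # must stay positive

x = np.linspace(0.0, 10.0, 6)

# Case 1: m < 0 (topological already in the linear limit).
# exp(-2*m*x) grows with x, so w(x) decays: a conventional edge mode.
print(w_profile(x, m=-1.0, kappa=1.0, D=1.0))

# Case 2: m > 0, kappa < 0 (nonlinearity-induced phase).
# w(x) decays toward |m/kappa| as x -> infinity: the nonvanishing tail
# sits exactly at the amplitude where m + kappa*w changes sign.
print(w_profile(x, m=1.0, kappa=-1.0, D=-0.5))
```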
To numerically confirm the bulk-boundary correspondence in the lattice model, we calculate the minimum of the absolute values of the nonlinear eigenvalues at different sizes. We fix the parameters m = −1 and κw/L = 1, where the nonlinear band structure is gapless in the corresponding continuum model. Figure 5 shows the size and lattice-constant dependencies of the energy gaps. We obtain nonzero gaps even at the topological parameters C_NL ≠ 0 due to the finite-size effect. However, the gap becomes smaller as the system size becomes larger. Specifically, if the lattice constant h is small enough to reproduce the behavior of the continuum Hamiltonian, we confirm that the gap is proportional to 1/(L + 1/2), with L being the system size, which corresponds to the analytical solution of the eigenvalue equation of the linear QWZ model (see Supplementary Information). Thus, the gap will close in the continuum and thermodynamic limit. Meanwhile, we find sudden decreases in the sizes of the band gaps at h > 0.14. The inconsistency between the numerical results at large lattice constants and the analytical estimation in the continuum limit indicates that the bulk-boundary correspondence can be modified by strong nonlinearity (see Supplementary Information for additional numerical calculations).

Observation protocol of nonlinear edge modes via quench dynamics.-In realistic setups, it is difficult to directly prepare a single nonlinear edge mode because it has a complicated amplitude distribution, and thus one needs fine-tuning of the strength of the excitation of each site. Instead, one can observe the topological properties via quench dynamics, as in the sketch below. In quench dynamics, one only has to excite the edge sites at homogeneous amplitudes and observe the dynamics without any other external interactions, as depicted in Fig. 6a.
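A minimal sketch of this quench protocol for the real-space nonlinear QWZ model follows. The hopping matrices (σ_z + iσ_x)/2 and (σ_z + iσ_y)/2 reproduce the linear QWZ Bloch Hamiltonian, and a staggered Kerr-like term supplies the nonlinearity; the lattice size, time step, and initial amplitude are illustrative choices rather than the parameters used for Fig. 6.

```python
import numpy as np

# Sketch: quench dynamics of the real-space nonlinear QWZ model, integrated
# with fourth-order Runge-Kutta. Open boundary in x, periodic in y; the
# left edge (x = 0) is excited uniformly. Illustrative parameters only.

L, u, kappa = 12, -1.0, 0.1
dt, steps = 0.005, 2000

hop_x = 0.5 * np.array([[1, 1j], [1j, -1]], dtype=complex)  # (sz + i*sx)/2
hop_y = 0.5 * np.array([[1, 1], [-1, -1]], dtype=complex)   # (sz + i*sy)/2

def apply_h(psi):
    """Return H(psi) psi; psi has shape (L, L, 2) = (x, y, orbital)."""
    out = np.zeros_like(psi)
    # on-site mass u*sigma_z plus staggered Kerr-like nonlinearity
    out[..., 0] += (u + kappa * np.abs(psi[..., 0]) ** 2) * psi[..., 0]
    out[..., 1] -= (u + kappa * np.abs(psi[..., 1]) ** 2) * psi[..., 1]
    # hopping along x (open boundary) and its Hermitian conjugate
    out[1:] += np.einsum('ab,xyb->xya', hop_x, psi[:-1])
    out[:-1] += np.einsum('ab,xyb->xya', hop_x.conj().T, psi[1:])
    # hopping along y (periodic boundary) and its Hermitian conjugate
    out += np.einsum('ab,xyb->xya', hop_y, np.roll(psi, 1, axis=1))
    out += np.einsum('ab,xyb->xya', hop_y.conj().T, np.roll(psi, -1, axis=1))
    return out

def rk4_step(psi):
    f = lambda p: -1j * apply_h(p)
    k1 = f(psi)
    k2 = f(psi + 0.5 * dt * k1)
    k3 = f(psi + 0.5 * dt * k2)
    k4 = f(psi + dt * k3)
    return psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

psi = np.zeros((L, L, 2), dtype=complex)
psi[0, :, 0] = 1.0 / np.sqrt(L)          # homogeneous edge excitation

for _ in range(steps):
    psi = rk4_step(psi)

print("weight on the edge column:", np.sum(np.abs(psi[0]) ** 2))
```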
To confirm the correspondence between the existence of the nonlinear edge modes and the localized states in quench dynamics, we numerically simulate quench dynamics of the nonlinear QWZ model (Eq. (4)) at various parameters. Figures 6b and 6c show the time evolution of the quench dynamics with and without nonlinear edge modes. We also obtain the phase diagrams in Figs. 6d and 6e, which are classified by the amplitude at the edge of the sample in the long-time limit. Here, we consider two initial states equivalent to the nonlinear edge modes at u + κw = ±1 (see also Supplementary Information). In weakly nonlinear regimes, the localized states remain in the topological cases and vanish in the trivial cases. We can also confirm the shift of the phase boundary by the change of the amplitude, which indicates the existence of the nonlinearity-induced topological phase transitions. We note that strong nonlinearity induces trap phases [34,41,45] where the localized initial states remain without topological origins (see also Supplementary Information). It is also noteworthy that the phase boundary obtained from quench dynamics does not exactly match that of the nonlinear Chern number, because the perfectly localized initial condition is quite different from the Bloch wave and thus quench dynamics cannot fully reproduce the topological properties obtained from the Bloch ansatz.

Regarding experimental realizations, previous studies have discussed topological photonics using ring-resonator arrays [46,47]. By focusing on a pseudo-spin sector in light propagating through ring-resonator arrays, one can realize the photonic counterpart of a Chern insulator. If we utilize different materials in the ring resonators at different sublattices, one can possibly tune the sign of the Kerr nonlinear effect [48] at each site. Thus, topological photonic metamaterials using different resonators can be a candidate system to realize the nonlinear topological insulators analyzed in the present paper.

Discussion.-We introduced the nonlinear Chern number in two-dimensional systems by considering the nonlinear eigenvalue problem (Eq. (2)). We theoretically proved the bulk-boundary correspondence of the nonlinear Chern number in weakly nonlinear regimes, which guarantees that if the nonlinearity is small enough compared to the bulk band gap, the bulk topology can predict the existence of the edge modes. More importantly, we investigated the bulk-boundary correspondence in the stronger nonlinear regime, where the nonlinearity is larger than the linear band gap while it is smaller than the linear couplings. We showed the existence of the nonlinearity-induced topological phase transition, which depends on the amplitude and thus has no counterpart in linear systems. We proposed a minimal model of a nonlinear Chern insulator, whose exact bulk solutions indicate the existence of the nonlinearity-induced topological phase transition detected by the nonlinear Chern number. We analytically showed that the nonlinear Chern number exactly predicts the nonlinearity-induced topological phase transition in the nonlinear Dirac Hamiltonian, implying the nonlinear bulk-boundary correspondence. In a nonlinear lattice system, we numerically checked that the bulk-boundary correspondence is recovered in the continuum and thermodynamic limit.
Our results indicate the existence of unique topological phenomena beyond the weakly nonlinear regime. There remain intriguing issues to fully establish the topological classification of nonlinear systems, including further strongly nonlinear cases where the nonlinearity is even stronger than the linear couplings. Since such strong nonlinearity can induce bulk-localized modes [18,34,41,45] that are outside the description of the Bloch ansatz, it is unclear whether or not the nonlinear Chern number still fully works.

From the perspective of many-body quantum physics, nonlinear systems can be regarded as the result of the mean-field approximation of interacting many-body systems [16,17]. The nonlinear Chern number should correspond to the topology of the excited states of interacting systems, which we can observe by applying external fields oscillating at the corresponding frequency. In fact, the nonlinear eigenvalues correspond to the energies per particle of the many-body wavefunctions in the mean-field analysis (see Supplementary Information), which are not ground states except for that of the minimum eigenvalue. Therefore, the nonlinear topology would reveal unexplored topological phases in quantum many-body systems at the level of the mean-field approximation.

Supplementary Materials

Justification of the Bloch ansatz via the perturbation analysis.

If the nonlinear term is small compared to the linear band gap, one can regard the nonlinear effect as a perturbation to the linear band structure. Under such an assumption, one can perturbatively calculate the nonlinear eigenvectors. To show the perturbation-calculation protocol of the nonlinear eigenvalue problem, we rewrite the nonlinear eigenequation as H₀Ψ + κH_NL(Ψ)Ψ = EΨ. We consider the perturbation expansion in κ, Ψ = Ψ^(0) + κΨ^(1) + … and E = E^(0) + κE^(1) + …. One can determine Ψ^(0) and E^(0) from the eigenvalues and eigenvectors of the linear Hamiltonian H₀. Then, the first-order perturbation is calculated from the eigenequation of (H₀ + κH_NL(Ψ^(0))), i.e., (H₀ + κH_NL(Ψ^(0)))(Ψ^(0) + κΨ^(1)) = (E^(0) + κE^(1))(Ψ^(0) + κΨ^(1)).

In translation-invariant systems, the nonperturbed eigenvector Ψ^(0) is described by a Bloch wave Ψ^(0) = e^{ikx}ψ^(0), due to the Bloch theorem for linear systems. Since the Bloch wave exhibits no site-dependence of the amplitude, one can assume that the nonlinear term κH_NL(Ψ) is also uniform, and thus the whole effective Hamiltonian H₀ + κH_NL(Ψ) still has the translational symmetry. Therefore, in weakly nonlinear systems, one can expect that all of the nonlinear eigenvectors are described by the Bloch ansatz. We note that in strongly nonlinear regimes, there can be localized modes that cannot be described by the Bloch ansatz [34,41,45]. However, the periodic solutions obtained from the Bloch ansatz are still exact nonlinear eigenvectors under the periodic boundary conditions.

Real-space description of the nonlinear QWZ model.

To investigate the existence or absence of topological edge modes in lattice systems, we construct a minimal lattice model of a nonlinear Chern insulator, which we term the nonlinear QWZ model (see Eq. (4) for the wavenumber-space description). Its real-space dynamics is described by a tight-binding equation in which j, l and x, y represent the internal degrees of freedom and the location, respectively, Ψ_j(x, y) is the j-th component of the state vector at the location (x, y), and (σ_i)_{jl} is the (j, l)-component of the i-th Pauli matrix. This lattice model introduces the staggered Kerr-like nonlinearity −(−1)^j κ|Ψ_j(x, y)|²Ψ_j(x, y) to the linear QWZ model [44].

Exact bulk solutions of the nonlinear QWZ model.
To obtain the phase diagram in Fig. 2b, we analytically solve the nonlinear eigenequation in Eq. (4). If we focus on special solutions where the squared amplitude |Ψ_1(k)|² + |Ψ_2(k)|² = w has no k-dependence, the nonlinear eigenequation (Eq. (4)) exactly corresponds to a linear one. Therefore, by solving the corresponding linear eigenequation, we obtain the exact bulk eigenvalues E_±(k_x, k_y) and eigenvectors, where c_±(k) = √((u + κw + cos k_x + cos k_y + E_±(k_x, k_y))² + sin² k_x + sin² k_y) is a normalization constant. By using these nonlinear eigenvectors, we analytically obtain the nonlinear Chern number, as summarized in Fig. 2b. We note that one can generally obtain exact bulk solutions if the nonlinear equation has the form in Eq. (5) (see also the following section).

Derivation of exact bulk solutions.

If the nonlinear dynamics is described in the momentum space by a linear Bloch Hamiltonian supplemented by a diagonal nonlinearity of the form in Eq. (5), one can obtain its exact bulk nonlinear eigenvectors. Here, ψ_n(k) is the state variable in the momentum space k, and n, m, and j label the unit cell or the internal degrees of freedom, such as the spin. The basic idea is to construct a set of special exact solutions self-consistently by requiring that the quantity w defined by w = Σ_j |ψ_j(k)|² has no k dependence. Then, the nonlinear eigenequation becomes a linear equation, which is diagonalizable by using eigenfunctions ψ^(α) together with the eigenenergies E^(α). We note that c(k)ψ^(α)(k) is also an eigenfunction of the same eigenequation in the linear case, where c(k) is an arbitrary function. When we set c(k) independently of k, we obtain a fixed-amplitude solution. It follows from Supplementary Eqs. (S12) and (S16) that Supplementary Eq. (S14) is the nonlinear eigenvector of Supplementary Eq. (S12). Hence, we have solved Supplementary Eq. (S12) self-consistently by requiring Supplementary Eq. (S11). Supplementary Equation (S14) presents a set of exact solutions parametrized by the real number w in terms of solutions of the linear equation (S12).

Perturbation analysis of the bulk modes of the nonlinear QWZ model.

While we calculate the exact solutions of the bulk modes of the nonlinear QWZ model in the main text, we can also obtain the same bulk modes from a perturbation analysis or from self-consistent calculations. Specifically, if we conduct the perturbation analysis described in the previous section, the calculation stops at the first-order perturbation and yields the same bulk modes as the exact solutions.
The zeroth-order solutions, i.e., the linear solutions of the bulk modes of the QWZ model, are the Bloch eigenvectors of the linear QWZ Hamiltonian, where we fix the norm as $\|\Psi^{(0)}\|^2 = w$. Then, substituting these solutions into the state-dependent Hamiltonian of the nonlinear QWZ model, one can obtain the first-order-perturbation solutions. Because the nonlinear terms depend only on the norm of the nonlinear eigenvector, the substituted effective Hamiltonian is independent of the wavenumber. The eigenvalues and eigenvectors of this Hamiltonian are the same as those of the nonlinear QWZ model (Eqs. (S7) and (S8)), and thus the first-order perturbation calculation is consistent with the exact solutions. The self-consistent calculation is equivalent to the higher-order perturbation calculation (see also the following section). Moreover, if one substitutes the first-order solutions into the state-dependent Hamiltonian of the nonlinear QWZ model, one obtains the same effective Hamiltonian as in Eq. (S21). Therefore, the nonlinear eigenvectors and eigenvalues obtained from the self-consistent calculation are the same as those obtained from the first-order perturbation and the exact solutions.

Numerical simulations of the real-space dynamics of the nonlinear QWZ model. In Fig. 2, we numerically calculate the dynamics of the nonlinear QWZ model by using the fourth-order Runge-Kutta method. We consider a 20 × 20 square lattice, where each lattice point has two internal degrees of freedom. We impose the open boundary condition in the x direction and the periodic boundary condition in the y direction. We set the time step to dt = 0.005. The simulation starts from initial states localized at the left edge. We use the parameters u = 3, κ = 0.1, w = 1 in Fig. 2c and u = −1, κ = 0.1, w = 1 in Fig. 2d. In these figures, we plot the square root of the sum of the squared absolute values of the first and second components.

Self-consistent calculation of nonlinear band structures. To obtain the nonlinear band structures in Fig. 3, we numerically solve the nonlinear eigenvalue problem by using the self-consistent method. We first rewrite the nonlinear dynamics as $i\partial_t\Psi = f(\Psi) = H(\Psi)\Psi$, where $H(\Psi)$ is a state-dependent effective Hamiltonian. Then, we conduct the self-consistent calculation by the following procedure: (i) We numerically diagonalize $H(\vec 0)$ and set the initial guess of the eigenvalue and eigenvector, $\Psi_0$ and $E_0$, by adopting a pair of the obtained eigenvalues and eigenvectors of $H(\vec 0)$. We fix the norm of $\Psi_0$ to be $\|\Psi_0\|^2 = w$. (ii) We substitute the guessed eigenvector $\Psi_i$ after $i$ iterations into $H(\Psi)$ and diagonalize $H(\Psi_i)$. (iii) We choose the obtained eigenvalue that is the closest to the previous guess $E_i$, and the corresponding eigenvector, as the next guess $E_{i+1}$, $\Psi_{i+1}$. (iv) We iterate steps (ii) and (iii) until the distance between $\Psi_i$ and $\Psi_{i+1}$ becomes smaller than a threshold, $\|\Psi_{i+1} - \Psi_i\| < \epsilon$, or the iteration reaches a fixed number. We also perform these calculations starting from all the eigenvectors of $H(\vec 0)$ and obtain a set of nonlinear eigenvectors and eigenvalues of $f(\Psi)$ (a minimal sketch of this procedure is given below).
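The following is a minimal Python sketch of steps (i)-(iv); the function name, the convergence handling, and the illustrative two-level H(ψ) are our own choices under the stated procedure, not the authors' code:

```python
import numpy as np

def self_consistent_eig(H, psi0, E0, w, eps=1e-10, max_iter=500):
    """Steps (i)-(iv): iterate psi -> eigenvector of H(psi) closest to the previous guess."""
    psi = np.sqrt(w) * psi0 / np.linalg.norm(psi0)   # fix the norm, ||psi||^2 = w
    E = E0
    for _ in range(max_iter):
        evals, evecs = np.linalg.eigh(H(psi))        # step (ii): diagonalize H(psi_i)
        idx = np.argmin(np.abs(evals - E))           # step (iii): eigenvalue closest to E_i
        psi_new = np.sqrt(w) * evecs[:, idx]
        overlap = np.vdot(psi, psi_new)              # align the global U(1) phase
        if abs(overlap) > 0:
            psi_new *= overlap.conjugate() / abs(overlap)
        if np.linalg.norm(psi_new - psi) < eps:      # step (iv): convergence check
            return psi_new, evals[idx]
        psi, E = psi_new, evals[idx]
    return psi, E

# illustrative two-level model with a staggered Kerr-like nonlinearity (hypothetical parameters)
u, kappa, w = -1.0, 0.1, 1.0
H0 = np.array([[u, 1.0], [1.0, -u]], dtype=complex)
H = lambda psi: H0 + kappa * np.linalg.norm(psi)**2 * np.diag([1.0, -1.0])
evals0, evecs0 = np.linalg.eigh(H(np.zeros(2)))
psi, E = self_consistent_eig(H, evecs0[:, 0], evals0[0], w)
```

The explicit phase alignment keeps successive iterates comparable, since eigensolvers return eigenvectors only up to a global phase.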
In Fig. 3, we consider the parameterized state-dependent Hamiltonian that corresponds to the nonlinear Chern insulator under the assumption of the y-periodic Bloch ansatz and the open boundary condition in the x direction. The state-dependent Hamiltonian is written in terms of the difference operators $\Delta_x$ and $\Delta_x^2$, defined as $\Delta_x\Psi(x) = [\Psi(x+1) - \Psi(x-1)]/2$ and $\Delta_x^2\Psi(x) = \Psi(x+1) + \Psi(x-1) - 2\Psi(x)$, respectively. Then, we calculate the nonlinear eigenvalues of this state-dependent Hamiltonian at $k_y = n\Delta k$, $n = -N, -N+1, \cdots, N-1, N$, and $\Delta k = \pi/N$ with N = 50.

To calculate the nonlinear band gap in Fig. 5, we also use the self-consistent method. However, to stably obtain the band structure of topological gapless modes, we start from the eigenvectors of $H((\sqrt{w/2L}, \sqrt{w/2L}, \cdots)^T)$ instead of those of $H(\vec 0)$, where L is the system size. This is because $H(\vec 0)$ has no topological gapless modes, and thus an initial guess corresponding to the gapless modes of $H(\Psi)$ cannot be obtained from $H(\vec 0)$. We also note that the self-consistent calculation is only stable at weak nonlinearity. To calculate the band structures of systems with stronger nonlinearity than those analyzed in the main text, one should instead use the Newton method, whose details are described in the following section.

Quasi-Newton method to solve strongly nonlinear eigenvalue problems. Here, we explain the (quasi-)Newton method for solving nonlinear eigenvalue problems in strongly nonlinear regimes. While we use the self-consistent calculation in weakly nonlinear systems, it often fails to converge in strongly nonlinear regimes. Instead, we should use the (quasi-)Newton method [51], which approximately calculates the roots of algebraic equations. To apply the Newton method, we consider together the nonlinear eigenvalue problem and the normalization condition. If we consider an n-component vector Ψ, there are n + 1 unknown variables, consisting of the components of the eigenvector $\Psi_i$ and the eigenvalue E. The nonlinear eigenequation provides n algebraic equations, and the normalization condition provides one more. Therefore, the total number of algebraic equations is equal to the number of unknown variables, and thus we can solve the eigenvalue problem by calculating the roots of the algebraic equations. The Newton method solves the algebraic equations by using the information of the residual and the local gradient. We rewrite the equations as $F(\Psi, E) = 0$, where the first n components of F are given by $f(\Psi) - E\Psi$ and the last component is the normalization residual $\|\Psi\|^2 - w$. We first set an initial guess of the eigenvalue $E_0$ and eigenvector $\Psi_0$. In our calculations, we randomly draw the initial guess of the eigenvalue from the uniform distribution on [−2, 2], and the real and imaginary parts of each component of the initial guess of the eigenvector from the uniform distribution on [−1, 1]. Then, we renormalize the initial guess of the eigenvector. If the guess is close to the genuine solution $\Psi_g, E_g$, we can linearize the residual around the guess, where $J_F$ is the Jacobian matrix of F. Thus, $(\Psi_1, E_1) = (\Psi_0, E_0) + \alpha J_F^{-1} F(\Psi_0, E_0)$, with α being the update step, becomes a better approximation of the nonlinear eigenvector and eigenvalue. Iterating this update procedure of the approximated eigenvector and eigenvalue, we can numerically calculate the nonlinear eigenvector and eigenvalue. In our calculations, we fixed the update step as α = 0.01. If we start from different initial guesses, we can obtain other sets of eigenvectors and eigenvalues, which can cover all the eigenvectors of the nonlinear equation.
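A minimal sketch of this root-finding formulation, packing the real and imaginary parts of Ψ together with E into one real vector of length 2n + 1; the residual layout and parameter values are illustrative assumptions, the damped update is written with the conventional minus sign, and the Jacobian is approximated by the Broyden update described in the next paragraph:

```python
import numpy as np

def residual(x, Hfun, w, n):
    """F(Psi, E): n complex rows f(Psi) - E*Psi, plus the normalization row ||Psi||^2 - w."""
    psi = x[:n] + 1j * x[n:2 * n]
    E = x[2 * n]
    r = Hfun(psi) @ psi - E * psi
    return np.concatenate([r.real, r.imag, [np.vdot(psi, psi).real - w]])

def quasi_newton_eig(Hfun, n, w, alpha=0.01, tol=1e-9, max_iter=50000, seed=0):
    rng = np.random.default_rng(seed)
    psi0 = rng.uniform(-1, 1, n) + 1j * rng.uniform(-1, 1, n)  # random initial guess
    psi0 *= np.sqrt(w) / np.linalg.norm(psi0)                  # renormalize
    x = np.concatenate([psi0.real, psi0.imag, [rng.uniform(-2, 2)]])
    B = np.eye(2 * n + 1)                    # Broyden: start from the identity matrix
    Fx = residual(x, Hfun, w, n)
    for _ in range(max_iter):
        dx = -alpha * np.linalg.solve(B, Fx)                   # damped quasi-Newton step
        x = x + dx
        Fx_new = residual(x, Hfun, w, n)
        y = Fx_new - Fx
        B += np.outer(y - B @ dx, dx) / (dx @ dx)              # Broyden rank-1 update
        Fx = Fx_new
        if np.linalg.norm(Fx) < tol:
            break
    return x[:n] + 1j * x[n:2 * n], x[2 * n]
```

Restarting from many random initial guesses, as described above, then sweeps out the different branches of nonlinear eigenvectors.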
Since the Jacobian matrix of F is difficult to compute analytically in our model, we numerically construct an approximation of the Jacobian matrix, which is known as the quasi-Newton method. Specifically, we use an update algorithm called the Broyden method [52]. We denote the approximated Jacobian matrix at the i-th step of the quasi-Newton method by $B_i$, and we set $B_0$ to the identity matrix, $B_0 = I$. Then, the approximated Jacobian matrix is updated from the change of the residual $y_i$ and the change of the variables $\Delta\Psi'_i$ at each step. Finally, we substitute $B_i$ for $J_F$ in Supplementary Eq. (S26) and update the guess of the eigenvectors and eigenvalues.

Theorem of the bulk-boundary correspondence in weakly nonlinear systems. We mathematically show the bulk-boundary correspondence in weakly nonlinear systems. Here we write the nonlinear eigenequation as $f(\vec\Psi) = E\vec\Psi$, where $\vec\Psi$ is the nonlinear eigenvector whose components correspond to the state variables at the locations r and the internal degrees of freedom j. The claim of the theorem is as follows: suppose $f(\vec\Psi) = E\vec\Psi$ is a nonlinear eigenvalue problem on a two-dimensional lattice system that satisfies the following assumptions. (1) When we rewrite the nonlinear function f as $f(\vec\Psi) = H(\vec\Psi)\vec\Psi$, there exists a positive real number c < 1 that satisfies $\|H(\vec\Psi) - H(\vec 0)\| < gc/2$ for any complex vector $\vec\Psi$, with g being the band gap of $H(\vec 0)$ and $\|\cdot\|$ being the operator norm. (2) There exists a positive real number c′ < 1 such that any pair of complex vectors Ψ and Ψ + ∆Ψ with norm w satisfies a corresponding bound, proportional to c′, on $\|H(\Psi + \Delta\Psi) - H(\Psi)\|$. (3) For any complex vector $\vec\Psi$, one can rewrite the nonlinear function as $f(\vec\Psi) = H(\vec\Psi)\vec\Psi$, where H is a Hermitian matrix. (4) The nonlinear equation satisfies the U(1) symmetry, $f(e^{i\theta}\vec\Psi) = e^{i\theta}f(\vec\Psi)$. We also assume that the number of nonlinear eigenvectors is equal to that of the linear eigenvectors of $H(\vec 0)$. Then, the nonlinear eigenequation $f(\vec\Psi) = E\vec\Psi$ exhibits robust gapless boundary modes if and only if its nonlinear Chern number is nonzero.

To prove the theorem, we first show the following proposition. Proposition 1: In a nonlinear eigenvalue problem that satisfies the assumptions in the theorem, the self-consistent calculation converges. Furthermore, there exist an eigenvector $\vec\Psi_0$ and eigenvalue $E_0$ of $H(\vec 0)$ that are close to the converged solution. To show this proposition, we utilize the perturbation theorem of eigenvectors in linear systems: suppose H is a nondegenerate Hermitian matrix and g is the minimum difference between its two eigenvalues. If ||A|| < g/2 in terms of the operator norm, then for an arbitrary eigenvector $\vec\Psi$ of H + A, there exists an eigenvector $\vec\Psi_0$ of H whose distance from $\vec\Psi$ is bounded in terms of ||A|| and g. We iteratively use this theorem and evaluate the distance between the guesses of eigenvectors $\vec\Psi_i$ and $\vec\Psi_{i+1}$ at each step.

We also show that the resulting eigenvalue and eigenvector, $E_\infty$ and $\vec\Psi_\infty$, are indeed a pair of nonlinear eigenvalue and eigenvector of $f(\vec\Psi)$, which is summarized in Proposition 2: the converged solutions of the self-consistent calculation, $E_\infty$ and $\vec\Psi_\infty$, satisfy the nonlinear eigenequation. We show this proposition by using simple inequalities and limit evaluations. By using these propositions, we finally show the following lemma to prove the theorem. Lemma: for an arbitrary eigenvector $\vec\Psi$ of a nonlinear eigenvalue problem, there is a nonlinear eigenvector $\vec\Psi_\epsilon$ of the slightly deformed problem with $\|\vec\Psi - \vec\Psi_\epsilon\| \le C\epsilon$, where w is the norm of $\vec\Psi$ and $\vec\Psi_\epsilon$, and C is a constant independent of the eigenvector $\vec\Psi$ and of the constant ε. The lemma indicates that in weakly nonlinear systems, one can map the nonlinear eigenvalues and eigenvectors onto those of a linear eigenvalue problem. Thus, we can show the bulk-boundary correspondence in weakly nonlinear systems.

Proof of the main theorem.
In this section, we prove the theorem in the main text. As discussed in the previous section, we use two propositions and a lemma to prove the theorem. In Propositions 1 and 2, we show that the nonlinear eigenvectors are obtained as the convergent values of the self-consistent calculation. We also show the lemma by using a technique similar to that in Proposition 1. The lemma indicates the existence of a continuous path that connects nonlinear and linear eigenvectors. Therefore, one can conclude the bulk-boundary correspondence in weakly nonlinear systems via that in linear systems. In this section, we write the nonlinear eigenequation as $H(\vec\Psi)\vec\Psi = E\vec\Psi$, where E and $\vec\Psi$ are the nonlinear eigenvalue and eigenvector, and $H(\vec\Psi)$ represents the effective Hamiltonian that depends on the nonlinear eigenvector.

Proof of the perturbation theorem of linear eigenvectors. We utilize the perturbation theorem of linear eigenvectors to prove Proposition 1. The perturbation theorem is as follows: suppose H is a nondegenerate Hermitian matrix and g is the minimum difference between its two eigenvalues. If ||A|| < g/2 in terms of the operator norm, then for an arbitrary eigenvector $\vec\Psi$ of H + A, there exists an eigenvector $\vec\Psi_0$ of H whose distance from $\vec\Psi$ is bounded in terms of ||A|| and g.

The perturbation theorem is proved as follows. Let $\vec\Psi$ and E be an eigenvector and eigenvalue of H + A. From Weyl's perturbation theorem [53], there exists only one pair of an eigenvector and an eigenvalue of H, $\vec\Psi'_0$ and $E_0$, whose eigenvalue lies within g/2 of E. We write the differences of the eigenvectors and eigenvalues as $\Delta'\vec\Psi = \vec\Psi - \vec\Psi'_0$ and $\Delta E = E - E_0$, respectively. Then, we separate $\Delta'\vec\Psi$ into the components parallel and perpendicular to $\vec\Psi'_0$, with $(H - E_0)^-$ being the generalized inverse matrix of $H - E_0$. The operator norms of $\Delta E - A$ and $(H - E_0)^-$ then yield a bound on the norm of $\Delta\vec\Psi$. Fixing the phase appropriately, the norm of the difference is bounded accordingly, which indicates the existence of the eigenvector $\vec\Psi_0$ of H close to $\vec\Psi$.

Proof of Proposition 1. Next, we prove Proposition 1. We assume that the nonlinear eigenvalue problem $f(\vec\Psi) = E\vec\Psi$ satisfies the conditions stated in the theorem, in particular the rewriting $f(\vec\Psi) = H(\vec\Psi)\vec\Psi$ of condition (1). Then, Proposition 1 reads as follows: in a nonlinear eigenvalue problem that satisfies the assumptions in the theorem, the self-consistent calculation converges. Furthermore, there exist an eigenvector $\vec\Psi_0$ and eigenvalue $E_0$ of $H(\vec 0)$ close to the converged solution. We denote the eigenvector obtained in the i-th step of the self-consistent calculation by $\vec\Psi_i$. Mathematically, $\vec\Psi_i$ is defined as the eigenvector of $H(\vec\Psi_{i-1})$ whose eigenvalue is the closest to the eigenvalue of $\vec\Psi_{i-1}$ and that minimizes the distance between $\vec\Psi_{i-1}$ and $\vec\Psi_i$. We also denote the band gap of $H(\vec\Psi_i)$ as $g_{i+1}$.
To prove Proposition 1, we iteratively use the perturbation theorem of linear eigenvectors and evaluate the distance between $\vec\Psi_{i-1}$ and $\vec\Psi_i$. In the following, we explain how to evaluate the distance at each step. From the first condition, we obtain the inequality $\|H(\vec\Psi_0) - H(\vec 0)\| < gc/2$. Combining this inequality and the perturbation theorem, we obtain an upper bound on the distance between $\vec\Psi_0$ and $\vec\Psi_1$. The linear perturbation theorem [53] also indicates $g_1 > g(1-c)$. We next use the second condition, which bounds $\|H(\vec\Psi_1) - H(\vec\Psi_0)\|$. Combining these inequalities and the perturbation theorem of linear eigenvectors, we obtain a bound on the distance between $\vec\Psi_1$ and $\vec\Psi_2$, where we use 0 < c < 1 in the last inequality. From the conditions c < 1 and c′ < (1−c)/2 < 1 and the linear perturbation theorem, we also obtain a bound on $g_2$. We again use the second condition, apply the perturbation theorem, and thereby obtain the corresponding bounds at the next step. To continue the calculations, we utilize the inequality bounding $g_i$ at each step. By iterating the evaluation of the bounds of the band gap $g_i$ and the distance between $\vec\Psi_{i-1}$ and $\vec\Psi_i$, we find that the distances between consecutive guesses decrease geometrically, which implies that the distance between $\vec\Psi_i$ and $\vec\Psi_0$ is also bounded by a constant. Therefore, this infinite series is a Cauchy sequence, and thus $\vec\Psi_i$ converges to $\lim_{i\to\infty}\vec\Psi_i$. Supplementary Equation (S38) also presents the upper bound of the distance between $\lim_{i\to\infty}\vec\Psi_i$ and $\vec\Psi_0$. We can also obtain the bound on the difference between the nonlinear eigenvalue and the corresponding linear eigenvalue from half of the right-hand side of Supplementary Eq. (S36).

We note that we have loosely (but rigorously) evaluated the bounds in Supplementary Eqs. (S34) and (S35). To obtain a tight inequality, one would need to solve a complicated recurrence relation, and thus we leave it to future work. While the obtained bound is loose, Proposition 1 still guarantees that the algorithm of the self-consistent calculation can solve the nonlinear eigenvalue problem in weakly nonlinear systems. It is also noteworthy that one can prove the convergence of the self-consistent calculation in the case (1−c)/2 ≤ c′ < 1 in exactly the same way. The condition c′ < (1−c)/2 is needed only for proving the lemma.

Proof of Proposition 2. While Proposition 1 guarantees the convergence of the self-consistent calculation, there still remains a possibility that the convergent solution is not a pair of genuine nonlinear eigenvalue and eigenvector. Thus, we need to prove Proposition 2, which guarantees that the obtained solution satisfies the nonlinear eigenequation.

Proposition 2: The converged solutions of the self-consistent calculation satisfy the nonlinear eigenequation. Let $\vec\Psi_i$ and $E_i$ be the obtained eigenvector and eigenvalue at the i-th step of the self-consistent calculation, respectively. We also consider their limits, $\vec\Psi = \lim_{i\to\infty}\vec\Psi_i$ and $E = \lim_{i\to\infty}E_i$. Then, Proposition 2 is equivalent to the statement that $f(\vec\Psi) - E\vec\Psi$ vanishes. The norm of the left-hand side is upper-bounded by a sum of three terms. Since we assume that $H(\vec\Psi)$ is always a bounded operator, the first term converges to zero. By using the second condition in the theorem, we obtain an upper bound on the second term (with $g_n$ being the minimum difference of the eigenvalues of $H(\vec\Psi_n)$), and the second term also converges to zero. Lastly, one can check the convergence of the third term to zero from the definitions of E and $\vec\Psi$. Therefore, the right-hand side of Supplementary Eq. (S39) converges to zero, which implies that the left-hand side also converges to zero in the limit of $n\to\infty$.
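The contraction structure underlying the convergence argument of Proposition 1 can be summarized as follows (an illustrative restatement; in the actual proof the contraction rate is built from the constants c and c′):

$$
\lVert\vec\Psi_{i+1} - \vec\Psi_i\rVert \le r\,\lVert\vec\Psi_i - \vec\Psi_{i-1}\rVert, \quad 0 < r < 1 \;\Longrightarrow\; \lVert\vec\Psi_{i+m} - \vec\Psi_i\rVert \le \frac{r^i}{1-r}\,\lVert\vec\Psi_1 - \vec\Psi_0\rVert,
$$

so the iterates form a Cauchy sequence, and the total drift from the linear eigenvector $\vec\Psi_0$ is bounded by $\lVert\vec\Psi_1 - \vec\Psi_0\rVert/(1-r)$.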
Proof of Lemma. The proof of the lemma is similar to that of Proposition 1. The major difference is that we use the nonlinear eigenvector $\vec\Psi$ of $H + f(\vec\Psi)$ as the initial guess of the nonlinear eigenvector of $H + (1-\epsilon)f(\vec\Psi)$. In this subsection, we define $\vec\Psi'_i$ as the solution obtained at the i-th step of the self-consistent calculation starting from this initial guess. We also denote the minimum difference of the eigenvalues of $H + f(\vec\Psi)$ by g′, which is lower-bounded by Supplementary Eq. (S36). Evaluating the minimum difference of the eigenvalues at each step, we obtain an inequality bounding the distance between consecutive guesses, and we then obtain the corresponding bound on the band gaps. As in the proof of Proposition 1, we iteratively evaluate the bound of $\|H(\vec\Psi'_i) - H(\vec\Psi'_{i-1})\|$ for i ≥ 3. From the condition c′ < (1−c)/2, we obtain $gc'/2g' < 1$. Therefore, Supplementary Equation (S45) indicates the convergence of the self-consistent calculation. We can also derive the upper bound of the distance between $\vec\Psi$ and $\vec\Psi_\epsilon$, whose right-hand side is the product of ε and a constant that is independent of the eigenvector $\vec\Psi$.

Proof of Theorem. To prove the bulk-boundary correspondence in weakly nonlinear systems that satisfy the conditions in the theorem, we show that the nonlinear eigenvectors can be continuously connected to linear eigenvectors. Here, we consider the nonlinear eigenvectors whose nonlinear eigenvalues are the closest to a certain eigenvalue E of H among all the nonlinear eigenvalues. Since we assume that the number of nonlinear bands is equal to that of the linear bands of H, the nonlinear eigenvector of $H + \epsilon f(\vec\Psi)$ should match that obtained from the self-consistent calculation starting from the nonlinear eigenvector of $H + \epsilon f(\vec\Psi)/(1-\delta)$, which is considered in the lemma. Therefore, the lemma indicates that $\vec\Psi_\epsilon$ is a continuous vector function of ε and thus provides a continuous path connecting the nonlinear eigenvector of $H + f(\vec\Psi)$ and the linear eigenvector of H. Since the nonlinear Chern number is a topological invariant that cannot change its value under a continuous deformation of the nonlinear eigenvectors, the nonlinear Chern number coincides with the linear Chern number of H. One can also construct adiabatic connections between the edge modes of linear systems and the nonlinear eigenvectors in nonlinear systems with open boundaries and nonzero nonlinear Chern numbers. Therefore, the bulk-boundary correspondence in weakly nonlinear systems is proved from that in linear systems.

Discretization of the nonlinear Dirac Hamiltonian. To confirm the bulk-boundary correspondence in the nonlinear QWZ model with stronger nonlinearity, we discretize the nonlinear Dirac Hamiltonian in Eq. (7). Naively taking the lattice constant h, one seems to achieve the discretization of the Dirac Hamiltonian, where $\Delta_i$ are the difference operators, defined as the central differences $\Delta_i\Psi(x) = [\Psi(x + h\hat e_i) - \Psi(x - h\hat e_i)]/(2h)$.
However, at the gap-closing point of the mass parameter, the obtained lattice Hamiltonian closes gaps at four points in the wavenumber space, $(k_x, k_y) = (0, 0), (0, \pi), (\pi, 0), (\pi, \pi)$, which have different signs of the topological charges around them. Thus, we cannot obtain the expected topological features, such as the nonzero Chern number and the localized modes, in this naive discretization. To avoid this inconsistency, we introduce the wavenumber-dependent mass terms proportional to $\Delta^2/h$, where $\Delta^2$ is the discretized Laplacian. The added mass terms correspond to the Wilson fermion action [49,50] in lattice field theory. We finally obtain the discretized Hamiltonian in the wavenumber space, where w is the squared amplitude $w = |\psi_1(k)|^2 + |\psi_2(k)|^2$. As expected from the derivation of the nonlinear Dirac Hamiltonian in the main text, this discretized Hamiltonian is equivalent to that of the nonlinear QWZ model in Eq. (4) except for the existence of the lattice constant h. In the h → 0 limit, the obtained lattice Hamiltonian reproduces the behavior of the nonlinear Dirac Hamiltonian.

Numerical calculations of the quench dynamics. We have numerically solved the nonlinear Schrödinger equation (Eq. (S6)) with the initial condition localized at the left edge, as shown in Fig. 6a. As the initial conditions, we study two cases: one corresponds to the edge-mode solution at u = −1 of the linear model (κ = 0), and the other corresponds to the edge-mode solution at u = 1 of the linear model (κ = 0). The derivation is shown in Supplementary Note 10. For the numerical calculation, we used the "NDSolve" function in Mathematica. We consider a 10 × 10 lattice under the open boundary condition in the x direction and the periodic boundary condition in the y direction. We solve the nonlinear Schrödinger equation (Eq. (S6)) under the two initial conditions in Eqs. (S49) and (S50). The results of the time evolution are summarized in Figs. 6d and e. We define the phase indicator P through the amplitude remaining at the edge sites after the time evolution up to T = 10. We have P ∼ 0 in the trivial phase, as shown in Fig. 6b, which implies the absence of localized edge states. On the other hand, we have P ∼ 1 in a topological phase, as shown in Fig. 6c, which implies the presence of a localized edge mode. In order to elucidate the topological phase diagram, we plot P in the u-κw plane in Figs. 6d and e. We find that the phase indicator is 1 along the blue line, where the exact solution is valid. This shows that the exact solution is realized by the quench dynamics.

Derivation of a nonlinear eigenvalue problem in the wavenumber space. We use the wavenumber-space description of a nonlinear eigenequation to introduce the nonlinear Chern number. Here, we clarify the relationship between the real-space and wavenumber-space descriptions of the nonlinear eigenequation. For clarity, we consider a one-dimensional lattice system with translation symmetry and a periodic boundary. In general, one can describe the eigenequation of such a one-dimensional nonlinear system as $\vec F(\vec\Psi(x)) = E\vec\Psi(x)$, where E and $\vec\Psi(x)$ represent the nonlinear eigenvalue and eigenvector, respectively. Due to the periodicity, $\vec F$ does not explicitly depend on the location x. To derive the wavenumber-space description of the nonlinear eigenequation, we have assumed the Bloch ansatz $\vec\Psi(x) = e^{ikx}\vec\psi(k)$. We have also assumed the U(1) symmetry, $\vec F(e^{i\theta}\vec\Psi) = e^{i\theta}\vec F(\vec\Psi)$, where θ is a real phase factor. Substituting the Bloch-wave ansatz into Supplementary Eq. (S52), we finally derive the wavenumber-space description of the nonlinear eigenequation, in which both sides are independent of the location x.
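Schematically, the reduction to the wavenumber space works as follows (a restatement of the steps above with assumed notation):

$$
\vec\Psi(x) = e^{ikx}\,\vec\psi(k), \qquad \vec F(e^{i\theta}\vec\Psi) = e^{i\theta}\vec F(\vec\Psi) \;\Longrightarrow\; \vec F\big(e^{ikx}\vec\psi(k)\big) = e^{ikx}\,\vec F\big(\vec\psi(k); k\big),
$$

so the common factor $e^{ikx}$ cancels from both sides of $\vec F(\vec\Psi(x)) = E\vec\Psi(x)$, and the eigenequation closes on the unit-cell variable alone, $\vec F(\vec\psi(k); k) = E(k)\,\vec\psi(k)$.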
One can also derive the wavenumber-space description of the nonlinear eigenequation corresponding to a continuum nonlinear system (cf. the nonlinear Dirac Hamiltonian in Eq. (7) in the main text). When a nonlinear system has continuous translation symmetry, the real-space description of an eigenequation takes a general continuum form. To derive its wavenumber-space description, one should assume the Bloch-wave ansatz and impose the U(1) symmetry as before. Then, the eigenequation (Supplementary Eq. (S57)) reduces to its wavenumber-space description. In continuum systems with discrete translation symmetry, the real-space description of an eigenequation is analogous. We assume the Bloch ansatz $\vec\psi(x) = e^{ikx}\vec\psi'(x)$, where $\vec\psi'(x)$ is a periodic function with the same period as that of $\vec F(x; \cdot)$, together with the U(1) symmetry. Then, we rewrite the eigenequation (Supplementary Eq. (S62)) and obtain its wavenumber-space description. In both lattice and continuum cases, the size of the eigenvector in the wavenumber space is smaller than that in real space, which is of practical advantage in the calculation of the nonlinear Chern number. We also note that one can derive the wavenumber-space description of nonlinear eigenequations in higher dimensions via the same calculations as in the one-dimensional cases.

In linear systems, one can derive the wavenumber-space descriptions of Hamiltonians from the Fourier transformation of the linear equations. In contrast, the Fourier transformation of a nonlinear equation reveals the interaction between eigenmodes with different wavenumbers; for example, the Fourier transformation of a Kerr-type nonlinearity $|\psi(x)|^2\psi(x)$ becomes a convolution of the Fourier components. However, since we assume the Bloch-wave ansatz $\vec\psi(x) = e^{ikx}\vec\psi'$, or equivalently the twisted boundary condition in a unit cell, in the derivation of the wavenumber-space description of the nonlinear eigenequation, the integrand in Supplementary Eq. (S69) contributes only when the wavenumbers coincide with the Bloch wavenumber k.

Quantization of the nonlinear Chern number. The nonlinear Chern number defined in Eq. (3) in the main text is quantized. We can show the quantization by embedding the nonlinear eigenvectors into eigenspaces of a linear Bloch Hamiltonian. For clarity, we consider two-band cases in the following.

Let ψ(k) be the nonlinear eigenvector of the nonlinear eigenvalue equation F(ψ) = Eψ derived from the Bloch ansatz at wavenumber k. Then, one can uniquely determine ϕ(k) so that ϕ(k) is perpendicular to ψ(k) (we here ignore the phase ambiguity). By using ψ(k) and ϕ(k), one can construct a linear Bloch Hamiltonian that has the eigenvector ψ(k) with the corresponding eigenvalue E(k) = −1. The Chern number of the lower band of this linear Hamiltonian is then equivalent to the nonlinear Chern number of F(ψ) = Eψ. Therefore, the nonlinear Chern numbers must be integers, as in linear systems. We note that in many-band cases, one can also construct a linear Bloch Hamiltonian that has the same eigenvectors, and thus the nonlinear Chern number must be quantized.
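One natural choice for this embedding (our notation; the supplement's display equation is reconstructed here under this assumption) is the flattened Bloch Hamiltonian

$$
H_{\mathrm{lin}}(k) = \phi(k)\,\phi^\dagger(k) - \psi(k)\,\psi^\dagger(k),
$$

which is Hermitian, with ψ(k) as its lower band of eigenvalue −1 and ϕ(k) as its upper band of eigenvalue +1. The linear Chern number of the lower band, e.g.,

$$
C = \frac{1}{2\pi}\int_{\mathrm{BZ}} d^2k\; i\left(\langle\partial_{k_x}\psi|\partial_{k_y}\psi\rangle - \langle\partial_{k_y}\psi|\partial_{k_x}\psi\rangle\right),
$$

is then built from the same ψ(k) as the nonlinear Chern number, which forces the latter to take integer values.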
Amplitude distributions of nonlinear bulk modes. Here, we numerically calculate the nonlinear eigenstates of the nonlinear Qi-Wu-Zhang (QWZ) model of nonlinear Chern insulators (Eq. (4) in the main text), as in Fig. 3 in the main text. Supplementary Figure S1 shows the amplitude distribution of a nonlinear bulk eigenstate. The bulk eigenstate at $k_y = \pi/5$ resembles a sine curve, which justifies the use of the Bloch-wave ansatz to approximate the bulk modes and derive the nonlinear Chern number in this parameter region. In contrast, the bulk modes around $k_y = 0$ are localized, as shown in Supplementary Figs. S1c,f, because of the nonlinear interactions between linear bulk modes with different wavenumbers. These results are consistent with Proposition 1, which indicates that the nonlinear eigenstates are almost unchanged from the corresponding linear modes in weakly nonlinear regimes, while nonlinearity stronger than the linear band gap can lead to drastic changes in the nonlinear eigenvectors.

While numerical techniques cannot stably obtain periodic solutions around $k_y = 0$, periodic solutions still exist in such a stronger-nonlinearity region. To capture the topological properties of nonlinear systems, we focus only on such periodic solutions. We also note that weak nonlinearity has a negligible impact on the existence of gapless edge modes because their eigenvalues are isolated from those of the bulk modes.

Numerical calculation of the Chern number in the nonlinear QWZ model. In this section, we numerically demonstrate the quantization and the amplitude dependence of the nonlinear Chern number. We calculate the nonlinear eigenvectors of the wavenumber-space description of the nonlinear QWZ model (Eq. (4) in the main text) as in Fig. 3 in the main text. Then, applying the Fukui-Hatsugai-Suzuki method [54] to the obtained nonlinear eigenvectors, we calculate the nonlinear Chern number (a sketch of this computation is given below).

Supplementary Figure S2a shows the result of the numerical calculation of the nonlinear Chern number. We confirm that the nonlinear Chern numbers always take integer values, as in linear cases. The figure also indicates the existence of nonlinearity-induced topological phase transitions, where the nonlinear Chern numbers change at the critical amplitudes w = 0.5, 2.5, 4.5. While the associated nonlinear Dirac Hamiltonians have different signs in their wavenumber-dependent terms, the nonlinear Chern numbers are determined by the signs of the mass terms, similarly to Eq. (8) in the main text. The sum of the nonlinear Chern numbers of the four nonlinear Dirac Hamiltonians is equal to that of the nonlinear QWZ model.
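A minimal sketch of the Fukui-Hatsugai-Suzuki lattice computation, assuming the nonlinear eigenvectors have already been collected on a discretized Brillouin zone (the array layout and the function name are our own):

```python
import numpy as np

def fhs_chern(psi):
    """Lattice Chern number via the Fukui-Hatsugai-Suzuki method [54].

    psi: complex array of shape (Nx, Ny, dim); psi[i, j] is the normalized
    (nonlinear) eigenvector at the (i, j)-th point of a periodic k-mesh.
    """
    # U(1) link variables U_mu(k) = <psi(k)|psi(k + e_mu)>, normalized to unit modulus
    ux = np.einsum('xyi,xyi->xy', psi.conj(), np.roll(psi, -1, axis=0))
    uy = np.einsum('xyi,xyi->xy', psi.conj(), np.roll(psi, -1, axis=1))
    ux /= np.abs(ux)
    uy /= np.abs(uy)
    # lattice field strength on each plaquette; np.angle keeps it in (-pi, pi]
    f12 = np.angle(ux * np.roll(uy, -1, axis=0)
                   * np.roll(ux, -1, axis=1).conj() * uy.conj())
    return int(np.rint(f12.sum() / (2 * np.pi)))
```

Because each plaquette contribution is folded into the principal branch, the summed flux is an integer multiple of 2π for a sufficiently fine mesh, mirroring the exact quantization discussed above.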
System-size dependence of nonlinear band gaps in the nonlinear QWZ Hamiltonian in the continuum limit. We analytically estimate the system-size dependence of nonlinear band gaps in the nonlinear QWZ model (Eq. (4) in the main text) in the continuum limit at the critical amplitude where the nonlinearity-induced topological phase transition occurs. We impose the open boundary condition in the x direction and assume a uniform eigenvector in the y direction. Then, the eigenvalue equation of the discretized system is equivalent to that of the nonlinear Su-Schrieffer-Heeger (SSH) model [23,28,38,40-42,55] under a properly chosen unitary transformation of the effective Hamiltonian. We use the same parameters m, κ, and h as those in Eq. (7) in the main text. At the critical amplitude $|\vec\psi(2x-1)|^2 + |\vec\psi(2x)|^2 = |m|/\kappa$, the strengths of the intercell and intracell hoppings of the nonlinear SSH model become the same on average, so we can estimate the band gap of the discretized nonlinear Dirac Hamiltonian with L unit cells from that of a simple one-dimensional chain with 2L lattice points. The eigenvectors of the one-dimensional chain are described by sine curves ψ(x) = sin(kx). To satisfy the open boundary conditions ψ(0) = 0 and ψ(2L+1) = 0, k must be a multiple of π/(2L+1), i.e., k = nπ/(2L+1) with n being an integer, n = 1, ..., 2L. The corresponding eigenvalues are $E = 2a\cos(n\pi/(2L+1))$ with a being a real constant, and thus we estimate the band gap around E = 0 as $2\pi a/(2L+1) + O((2L+1)^{-2})$. This estimation indicates that the nonlinear Dirac Hamiltonian is gapless at the critical amplitude in the thermodynamic limit, which is consistent with the bulk dispersion of the nonlinear Dirac Hamiltonian.

We also numerically estimate the nonlinear band gaps in the thermodynamic limit at different amplitudes. We calculate the minima of the absolute values of the eigenvalues at different amplitudes and system sizes. Then, we fit the squares of the minima by $a/(2L+1)^2 + b$, where a and b are the fitting parameters. As discussed in the previous paragraph, b should become zero at the critical amplitude, while b should be positive below the critical amplitude, which indicates the existence of a band gap in the thermodynamic limit. Supplementary Figure S3 shows the obtained fitting parameters b at different amplitudes w. As expected, we obtain positive b when the amplitudes are smaller than the critical one, and b is zero at the critical amplitude. Since the fitting function does not completely capture the finite-size scaling of the spectral gaps at noncritical amplitudes, b becomes negative at amplitudes larger than the critical one. However, this numerical result still indicates that the nonlinear QWZ model is gapped (gapless) when the nonlinear Chern number is zero (nonzero).

We note that the nonlinear Dirac Hamiltonian obtained from the low-energy effective theory of the nonlinear QWZ model is different from that in previous studies [19,56] because they have considered nonlinear terms with the same sign for the first and second components.
As discussed in the main text, we can expect the emergence of the nonlinearity-induced topological phase transition, where topological edge modes appear at the critical amplitude. In fact, numerically calculating the dynamics of the nonlinear QWZ model above the critical amplitude, we obtain a long-lived localized state, shown in Supplementary Fig. S4. Here, we conduct the Runge-Kutta simulation of the lattice model as in Fig. 2 in the main text, setting the time step as dt = 0.005 and the parameters as u = −2.1 and κ = 1.

We also numerically calculate the nonlinear band structure corresponding to the lattice model. Since the self-consistent calculation cannot stably obtain the eigenvalues in strongly nonlinear regimes, here we instead use the quasi-Newton method to solve the nonlinear eigenvalue problems. We impose the open boundary condition in the x direction and the periodic boundary condition in the y direction and calculate the nonlinear eigenvalues at each phase parameter $k_y$ to obtain the nonlinear band structure. Supplementary Figure S5 shows the nonlinear band structure at the same parameters as in Supplementary Fig. S4. One can confirm the existence of the gapless modes. We note that if the amplitude is smaller than that considered in Supplementary Figs. S4 and S5, there are no gapless edge modes, which indicates that the strongly nonlinear lattice system exhibits the nonlinearity-induced topological phase transition.

The numerical results in Supplementary Figs. S4 and S5 indicate the existence of topological edge modes in lattice systems with stronger nonlinearity, while one can see an inconsistency between the parameters at which the edge modes appear and at which the nonlinear Chern number changes. We confirm such an inconsistency in the numerical calculations of the eigenvalues at $k_y = 0$ and different amplitudes. Supplementary Figure S6a shows the numerical results at the parameters u = −2.5 and κ = 10 and various amplitudes. In this case, the nonlinear Chern number changes at the amplitude w = 0.5, while the nonlinear band becomes gapless around w = 0.15, which implies the breakdown of the bulk-boundary correspondence. We check that the anomalous gapless modes in Supplementary Fig. S6b are also localized at the edge of the sample, as are the topological gapless modes in Supplementary Fig. S6c.

The emergence of the anomalous gapless modes is induced by a discontinuous change of the amplitude. At the amplitude w = 0.15, the local on-site term at x = 0 can become $u + 1 + \kappa|\Psi(x = 1)|^2 = 0$, which leads to the existence of perfectly localized modes exhibiting a sudden decrease in their amplitude. Comparing the nonlinear QWZ model and the nonlinear Dirac Hamiltonian (Eq. (7) in the main text), such a sudden change of the amplitude is unphysical in the nonlinear Dirac Hamiltonian. However, because the effect of the momentum term becomes much stronger than that of the mass term in the corresponding parameter region, we need to take a small lattice constant to approximate the nonlinear Dirac Hamiltonian by the nonlinear QWZ model. Therefore, using the same lattice constant as in the weakly nonlinear regime, one cannot accurately calculate the high-frequency components of the wavefunction in this parameter region.
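For concreteness, the fourth-order Runge-Kutta time stepping used for such lattice simulations of $i\partial_t\Psi = H(\Psi)\Psi$ can be sketched as follows; the state layout and the edge-weight diagnostic (a simple stand-in for the phase indicator P, whose exact definition involves a time average) are illustrative assumptions:

```python
import numpy as np

def rk4_step(apply_h, psi, dt):
    """One RK4 step for i d(psi)/dt = H(psi) psi, i.e. d(psi)/dt = -1j * H(psi) psi.

    apply_h: callable mapping the full state psi to H(psi) psi (model specific).
    """
    f = lambda p: -1j * apply_h(p)
    k1 = f(psi)
    k2 = f(psi + 0.5 * dt * k1)
    k3 = f(psi + 0.5 * dt * k2)
    k4 = f(psi + dt * k3)
    return psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def edge_weight(psi):
    """Fraction of the total squared amplitude on the x = 0 column of a (Lx, Ly, 2) state."""
    dens = np.abs(psi) ** 2
    return dens[0].sum() / dens.sum()

# time stepping with dt = 0.005 as in the text; apply_h would encode the
# nonlinear QWZ couplings and boundary conditions and is left abstract here
```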
Exact solutions of the perfectly localized edge modes in the nonlinear QWZ model. As discussed in the previous section, there can be perfectly localized modes unique to lattice systems. We here derive the exact solutions of such perfectly localized modes in the nonlinear QWZ model (Eq. (4) in the main text), considering localized modes described by a perfectly localized ansatz. The result is consistent with previous papers [34,41], which revealed that strong nonlinearity induces bulk localization and thus eliminates topological edge modes.

FIG. S8. Nonlinear band structure and edge modes under the maximum-based normalization at weak nonlinearity. a, We impose the open boundary condition in the x direction and the twisted boundary condition in the y direction. We obtain the gapless edge modes corresponding to the nonzero nonlinear Chern number, as in the case of the fixed norm. We use the parameters u = −1 and κw = 0.1. We note that some bulk modes are not obtained due to the limitation of the numerical method. b, We show an example of an eigenvector of an edge mode, which corresponds to the red circle in panel a. We can check its localization at the edge.

Effects of definitions of normalization conditions. While we fix the L² norm of the nonlinear eigenvector as the normalization condition, one can consider various types of normalization conditions to restrict the solutions of a nonlinear eigenequation. Specifically, some previous studies [40,42] have fixed the maximum absolute value of the components of the nonlinear eigenvector. In this section, we compare the nonlinear band structures under such a normalization condition to those obtained when we fix the L² norm. Under both normalization conditions, one can confirm the bulk-boundary correspondence in weakly nonlinear systems and in strongly nonlinear continuum systems.

We numerically calculate the eigenvalues of the nonlinear QWZ model (Eq. (4) in the main text) under the normalization condition $\max_x(|\psi_1(x)|^2 + |\psi_2(x)|^2) = w$. Supplementary Figure S8 shows the obtained nonlinear band structure under weak nonlinearity. We confirm that the existence and absence of gapless edge modes correspond to the nonzero and zero nonlinear Chern number, respectively. When the nonlinearity is weak compared to the band gap in the linear limit (κ = 0 in Eq. (4) in the main text), the nonlinear terms can be considered as a perturbation and thus do not alter the topology of the system, independently of the normalization condition. Therefore, weakly nonlinear systems also exhibit the bulk-boundary correspondence under the normalization condition fixing the maximum of the amplitudes.

We also numerically calculate the nonlinear eigenvalues at $k_y = 0$ under stronger nonlinearity. Supplementary Figure S9a shows the obtained spectra in a 10 × 1 supercell structure under the open boundary condition in the x direction and the twisted boundary condition in the y direction. We use the parameters u = −2.5 and κ = 1. Since the mean of the amplitude is less than the maximum (see Supplementary Fig. S9b), the nonlinear band is still gapped at w = 0.5, where the bulk bands close the gap and the nonlinear Chern number becomes nonzero. Instead, we obtain gapless modes around w = 1.5 and confirm their localization at the edge of the system, as shown in Supplementary Fig. S9c. The emergence of the gapless edge modes indicates the nonlinearity-induced topological phase transition, while the bulk-boundary correspondence is broken under the maximum-based normalization due to mechanisms similar to those under the norm-fixed normalization condition.

Exact bulk solutions of the modified nonlinear QWZ model.
While we focus on the nonlinear QWZ model with the nonlinear terms $-(-1)^j\kappa(|\Psi_1(x, y)|^2 + |\Psi_2(x, y)|^2)\Psi_j(x, y)$, replacing the nonlinear terms with the on-site ones $-(-1)^j\kappa|\Psi_j(x, y)|^2\Psi_j(x, y)$ has no significant effect on the emergence of topological edge modes and the nonlinearity-induced topological phase transitions. We here analytically calculate the bulk solutions of the modified nonlinear QWZ model.

FIG. 2. Phase diagram and dynamics of the nonlinear Qi-Wu-Zhang (QWZ) model. a, The figure shows a schematic of the nonlinear QWZ model. The model has two sublattices (the black circles) at each lattice point encircled by the blue ellipse. The green lines represent the linear couplings. We use the notation $\Psi_i(x, y)$ to represent the state variable at each sublattice, where (x, y) is the location of the representative point of each lattice point, denoted by the red cross. b, We analytically obtain the phase diagram of the prototypical model of nonlinear topological insulators. The horizontal axis represents the parameter of the mass term, and the vertical axis corresponds to the strength of the nonlinearity. The color of each separated region represents the difference in the nonlinear Chern number. c, We numerically show the absence of the edge modes in the topologically trivial parameter region. We simulate the dynamics of the prototypical model of a nonlinear Chern insulator starting from an initial state localized at the left edge. We impose the open boundary condition in the x direction and the periodic boundary condition in the y direction. The figure shows the snapshot at t = 1. The color shows the absolute value of the components of the state vector at each site. The parameters used are u = 3, κ = 0.1, and w = 1, which corresponds to the red square in panel b. d, We numerically show the existence of the long-lived localized state in the weakly nonlinear topological insulator. The figure shows the snapshot of the simulation at t = 1. The sites at the left edge show large amplitudes, which indicates the existence of the edge-localized state. The parameters used are u = −1, κ = 0.1, and w = 1, which corresponds to the blue circle in panel b.

FIG. 3. Nonlinear band structure and edge modes of the weakly nonlinear QWZ model. a, We numerically calculate the nonlinear band structure of the topologically trivial system under the open boundary condition in the x direction and the periodic boundary condition in the y direction. One can confirm the absence of gapless modes. The parameters used are u = 3, κ = 0.1, and w = 1. b, The nonlinear band structure of the topologically nontrivial system is numerically calculated. There are gapless modes that connect the upper and lower bulk bands. The parameters used are u = −1, κ = 0.1, and w = 1. c, The spatial distribution of the gapless mode is presented. The eigenvalue of the localized mode corresponds to the red circle in the band structure in panel b.
FIG. 4. Phase diagram and localized modes of the nonlinear Dirac Hamiltonians. a, We illustrate the phase diagram of the nonlinear Dirac Hamiltonian, which demonstrates the nonlinear bulk-boundary correspondence in the continuum model. The vertical axis represents the amplitude, and the horizontal axes correspond to the parameters of the nonlinear Dirac Hamiltonian. The blue curved surface shows the phase boundary that separates a trivial phase without boundary modes and a topological phase exhibiting localized modes at the left boundary. The red surfaces present the boundaries where the sign of the parameters of the Dirac Hamiltonian changes. The red lines show the phase boundaries in the surfaces of w = 1 and w = 2. In the linear limit (w = 0), the topological phases are separated by the m = 0 axis, while the nonlinearity modifies the boundary of the topological phases. b-d, Each panel shows the representative shape of the localized mode in each of the topologically nontrivial parameter regions. b, When m is negative and κ is positive, we obtain a localized mode in the small-amplitude region, which is regarded as a counterpart of a conventional topological edge mode. We set m, κ, and D as m = −0.5, κ = 1, and D = 3. c, When m is positive and κ is negative, the nonlinear Dirac Hamiltonian exhibits the nonlinearity-induced topological phase transition. We obtain an unconventional localized mode if the amplitude is larger than a critical value. In this localized mode, there exist nonvanishing amplitudes even in the limit of x → ∞. Therefore, the localized mode can be unphysical in a truly semi-infinite system because we cannot normalize such a nonvanishing mode, while it still exists and is physically relevant in experimentally realizable finite systems. We set m, κ, and D as m = 0.5, κ = −2, and D = −3. d, When both m and κ are negative, a localized mode appears independently of the amplitude, as in linear topological insulators. We set m, κ, and D as m = −0.5, κ = −1, and D = 3.
FIG. 5. The bulk-boundary correspondence for the lattice model in the continuum limit. We calculate the band gaps of the nonlinear QWZ model at different sizes and lattice constants. We impose the open boundary condition in the x direction and the periodic boundary condition in the y direction. We fix the parameters m = −1 and κw = 1.0, where the nonlinear band gap is closed in the corresponding continuum model. The band gaps versus $10^2/(L + 1/2)$ (L is the size of the system) are plotted. The legend shows the lattice constants h. We can confirm that the size of the gap decreases as the system size becomes larger. However, if the lattice constant is so large that the discretized system cannot imitate the behavior of the continuum nonlinear Dirac Hamiltonian, we find a sudden decrease in the size of the band gap. This sudden decrease contradicts the estimation that the band gap is proportional to 1/(L + 1/2) and indicates the existence of irregular gapless modes. Since such gapless modes appear at smaller amplitudes than the critical point of the nonlinear Chern number, the numerical result indicates that the bulk-boundary correspondence can be ensured only after taking both the continuum and thermodynamic limits.

FIG. 6. Phase diagram of the quench dynamics of the nonlinear QWZ model. a, The schematic of the experimental protocol of the quench dynamics is shown. First, one excites the edge sites by, e.g., applying lasers to the edge resonators in nonlinear topological photonic insulators. Then, one observes the nonlinear dynamics without external fields and confirms the existence or absence of a long-lived localized state. b, We plot the time evolution of the quench dynamics in a trivial phase. We use the parameters u = −2.5, κ = 1.5, and w = 1, which correspond to the white square in panel d. We confirm the absence of localized edge modes. c, We plot the time evolution of the quench dynamics of nonlinear edge modes. We use the parameters u = −2.5, κ = 0.25, and w = 1, which correspond to the white circle in panel d. We confirm that a localized state remains for a long time, which indicates the existence of edge modes. d and e, We simulate the quench dynamics and plot the amplitude remaining at the edge sites in the long-term limit after a quench in the u-κw plane. The initial configuration is taken as the edge mode in the linear limit at u = −1 (u = 1) in panel d (e). The light-blue lines indicate the parameters where we can obtain exact edge-localized solutions. The gray lines represent the phase boundary derived from the nonlinear Chern number. These lines agree with the right (left) boundary of the topological phase in panel d (e), and thus the quench dynamics shows the shift of the phase boundary by the nonlinearity.
FIG. S1. Bulk nonlinear eigenvectors of the nonlinear Qi-Wu-Zhang (QWZ) model. a, Nonlinear band structures under the open boundary condition in the x direction and the twisted boundary condition in the y direction. The red circle and the green square correspond to the nonlinear eigenvectors in panels b and c, respectively. The parameters used are u = −0.1 and κw = 0.1. b, Wave function of a sine-wave bulk mode under the open boundary condition in the x direction. The red (blue) lines show the real part of the first (second) component of the wave function, and the red (blue) dashed lines show the imaginary part of the first (second) component (these correspondences are the same in panels c, e, and f). c, Wave function of a localized bulk mode under the open boundary condition in the x direction. The localization indicates that there can be eigenstates outside the Bloch-wave ansatz. d, Nonlinear band structures under the periodic boundary condition in the x direction. We consider a 10 × 1 supercell structure and impose the twisted boundary condition in the y direction. The red circle and the green square correspond to the nonlinear eigenvectors in panels e and f, respectively. e, Wave function of a sine-wave bulk mode under the periodic boundary condition in the x direction. f, Wave function of a localized bulk mode under the periodic boundary condition in the x direction. The localization and periodicity are not altered by the boundary condition.

FIG. S3. Recovery of the bulk-boundary correspondence in the continuum and thermodynamic limit. a, Band gaps at each size and strength of nonlinearity. The squared gaps versus $10^4/(L + 1/2)^2$ (L is the size of the system) are plotted. The legend shows the correspondence between the color and the strength of nonlinearity κw. The other parameters used are h = 0.01 and m = −1. We can confirm that the gaps decrease as the size becomes larger. b, Squared gaps in the thermodynamic limit estimated from the least-squares fitting. We fit the squared gaps with the function $a/(L + 1/2)^2 + b$ and plot the obtained b's at different amplitudes. κw = 1.0 is the transition point, where the band gap is closed and the gapless edge modes appear.

FIG. S6. Nonlinear eigenvalues and the existence of anomalous gapless modes in a stronger nonlinear regime. a, Amplitude dependence of nonlinear eigenvalues at $k_y = 0$. We calculate the nonlinear eigenvalues of the nonlinear QWZ model (Eq. (4) in the main text) in a 10 × 1 supercell structure at u = −2.5 and κ = 10. The transition point is w = 0.5 with w being the squared amplitude, while gapless edge modes appear around w ≃ 0.15, as pointed out by the red arrow. We note that some eigenvalues disappear at large amplitudes due to the limitation of the numerical technique. b, Amplitude distribution of an anomalous gapless mode. We confirm the localization of the gapless mode at the edge of the sample. The eigenvalue corresponds to that pointed to by the red arrow on the left side of panel a. c, Amplitude distribution of a topological gapless mode. We confirm the localization of the topological mode at both edges of the system. One can understand this localized edge mode as the superposition of two edge modes localized at the left and right edges, respectively. The eigenvalue corresponds to that pointed to by the green arrow on the right side of panel a.

FIG. S7. Recovery of the bulk-boundary correspondence in the infinite-amplitude limit. We calculate the nonlinear eigenvalues of the modified model of a nonlinear Chern insulator in Supplementary Eq.
(S98). We impose the open boundary condition in the x direction and the twisted boundary condition in the y direction and plot the nonlinear eigenvalues at $k_y = 0$. We use the parameters u = −3, κ = 2, and c = 10. The nonlinear Chern number becomes $C_{\rm NL} = -1$ around the squared amplitude w ≃ 1, while the band gap is not completely closed at that point. However, one can confirm that the band gap becomes smaller as the amplitude becomes larger, which indicates the existence of genuinely gapless modes and the recovery of the bulk-boundary correspondence in the infinite-amplitude limit.

FIG. S9. Nonlinear eigenvalues and eigenvectors under the maximum-based normalization. a, We calculate the nonlinear eigenvalues at various maximum amplitudes. We impose the open boundary condition in the x direction and the twisted boundary condition in the y direction. The transition point of the nonlinear Chern number should correspond to w = 0.5 with w being the maximum of the squared amplitude, while the nonlinear band has a gap at that parameter. We instead obtain gapless modes around w = 1.5, which indicates the breakdown of the bulk-boundary correspondence under the maximum-based normalization. b, Eigenvector of a bulk mode at w = 0.5, which is pointed to by the red arrow on the left side of panel a. The red (blue) curves show the real part of the first (second) component of the wave function, and the red (blue) dashed curves show the imaginary part of the first (second) component. We can confirm that the eigenvector exhibits sine curves, which indicates the absence of edge modes at this amplitude. c, Eigenvector of an edge mode at w = 1.5, which is pointed to by the red arrow on the right side of panel a. As in panel b, the red (blue) curves show the real part of the first (second) component of the wave function, and the red (blue) dashed curves show the imaginary part of the first (second) component. We confirm the localization at the edge of the sample.
20,414.8
2023-07-31T00:00:00.000
[ "Physics" ]
Review of Small-Signal Converter-Driven Stability Issues in Power Systems

New grid devices based on power electronics technologies are increasingly emerging and introduce two new types of stability issues into power systems, which are different from traditional power system stability phenomena and not well understood from a system perspective. This paper intends to provide the state of the art on this topic with a thorough and detailed review of the converter-driven stability issues in partially or fully power electronics-based grids. The underlying and fundamental mechanisms of the converter-driven stability issues are uncovered through different types of root causes, including converter controls, grid strength, loads, and converter operating points. Furthermore, a six-inverter two-area meshed system is constructed as a representative test case to demonstrate these unstable phenomena. Finally, the challenges of coping with the converter-driven stability issues in future power electronics-based grids are identified to elucidate new research trends.

I. INTRODUCTION

Electric power systems today are undergoing a transformation from large-machine-predominant slow electromechanical dynamics to more small- or medium-sized semiconductor-induced fast electromagnetic dynamics, due to the increasing penetration of power electronics converters (PECs) in generation, transmission, distribution, and load [1]-[3]. Such an evolution will provide high flexibility, full controllability, sustainability, and improved efficiency for future power grids; however, it also imposes new challenges to power system stability. As indicated by the major results of the work of the IEEE Task Force in [4], in addition to the impacts on classic power system stability issues (rotor angle stability, voltage stability, and frequency stability) [5], two new stability classes, resonance stability and converter-driven stability, are also introduced by the PECs.

For the classical categories of power system stability, many studies have been conducted to analyze the impacts of PECs, as listed in Table 1, including impacts on rotor angle stability [6]-[15], voltage stability [10], [16], [17], and frequency stability [18]-[21]. The interactions between PECs and synchronous machines are also studied, such as the interactions between synchronous machines and various grid-forming control approaches in [22]. It can be seen that the impacts of PECs on classic power system stability can be either beneficial or detrimental. The detrimental impacts are mainly due to the reduction of system inertia and improper converter control design, while the benefits are mainly due to the faster control dynamics and stronger output regulation of the converters.

For the two new categories of PEC-induced power system stability, the unstable phenomena and possible causes are briefly described in [4]. The resonance stability issues are mainly caused by the effects of flexible alternating current transmission systems or high-voltage direct current (HVDC) transmission systems on torsional aspects (i.e., torsional resonance), and the effects of doubly fed induction generator (DFIG) controls on electrical aspects (i.e., electrical resonance), which encompass sub-synchronous resonance (SSR). The causes of resonance stability have been identified, and solutions have also been proposed accordingly.
For example, devices such as static var compensators can be used to damp torsional resonance, and supplemental controllers in DFIG control can help to damp the electrical resonance. The converter-driven stability issues may exhibit in different forms from classic power system stability issues, as indicated by documented incidents of unstable operation in power electronics-based grids (PEGs) from field tests, e.g., sub-synchronous oscillations induced between wind turbine generators (WTGs) and series-compensated lines in the ERCOT region [23] or harmonic instability issues in photovoltaic (PV) farms [24], [25]. The converter-driven stability is further classified as slow- or fast-interaction based on the frequency of the instability [4]. The slow-interaction converter-driven stability refers to the stability issues driven by the slow dynamic interactions between the slow outer control loops of converters and other slow-response components in power systems, typically around the system fundamental frequency; the fast-interaction converter-driven stability (also referred to as harmonic stability [26]) involves the problems caused by fast dynamic interactions between the fast inner control loops of converters and other fast-response components in power systems, typically in the range of hundreds of hertz to several kilohertz. The converter-driven instability may arise for many different reasons, such as converter-interfaced generation (CIG) controls, grid strength, converter-interfaced loads (CIL), operating conditions, power transfer limits, and other similar factors [27], [28]. For example, the fast control dynamics of the CIGs may result in rapid frequency changes or transiently distorted voltage/current waveforms, which may lead to the over-reaction of protections fitted to the inverters and cause system tripping [29]. Therefore, it is of significance to fully understand and identify the exact causes of the converter-driven instabilities so that proper system and converter operation can be designed accordingly. This paper aims at exploring the underlying fundamental mechanisms of converter-driven stability issues in power systems. First, the state of the art on different types of instability issues caused by typical converters in power systems is summarized; then, different stability analysis approaches, such as the passivity-based approach or eigenvalue analysis, are applied to systematically analyze the root causes, including the converter-control-induced issues (i.e., control delay, inner and outer control loops, and converter switching actions) and the grid-condition-induced issues (i.e., grid strength, loading conditions, and the system operating conditions). Next, simulation studies are performed using a two-area meshed network test case. In the end, some open research issues and challenges of the converter-driven stability are discussed accordingly. II. MECHANISMS OF CONTROL DYNAMICS-INDUCED CONVERTER-DRIVEN STABILITY ISSUES The dynamics of the entire power grid are determined by the dynamics of each piece of equipment in the system. Therefore, the characteristics of each device in the system need to be investigated. In conventional power grids driven by physical laws, general models for synchronous generators (SGs) can be obtained in a quasi-static format, since the transients of interest are within a narrow band (0.1 Hz to 5 Hz [30]) and the fundamental frequency fluctuations are negligible (due to the large inertia of the rotor [31]).
However, in PEGs driven by converter controls, there is no generic model yet, since PECs highly depend on manufacturers and are effective over wide control regions. Moreover, the frequency variations cannot be neglected due to the low system inertia. Hence, this section attempts to cover the most used PECs with a root cause analysis for converter-driven stability issues over a wideband control range in power systems. The converter-interfaced generations and loads in power systems generally use voltage-sourced converters (VSCs), which can be further classified as current-type VSCs as shown in Fig. 1 and voltage-type VSCs as shown in Fig. 2. The current-type VSCs (also termed grid-following inverters, GFLs) have been used in many applications, such as PVs, energy storage systems (ESSs), and Type-4 WTGs at the generation side or fast-charging stations at the load side. The output current i_L is usually controlled with a proportional-integral (PI) controller in the synchronous frame or with a proportional-resonant controller in the stationary frame. Additionally, a PLL unit is used to obtain the angle θ of the converter terminal voltage in the stationary frame or of measured signals in the synchronous frame. The voltage-type VSCs (also termed grid-forming inverters, GFMs) are designed to establish system voltage and frequency autonomously [32]. A typical P-f and a Q-v droop control are adopted to realize power synchronization. The voltage control regulates the output voltage, and the current control provides damping for the LC resonance and limits the overcurrent. Additionally, the converter-interfaced transmissions normally have a rectifier station and an inverter station with either a line-commutated converter (LCC) as shown in Fig. 3 or a VSC as shown in Fig. 4. The LCC-HVDC has been widely used in long-distance transmission with two common LCC control loops, i.e., constant extinction angle control (CEAC) and constant dc voltage control (CDVC). The VSC-HVDC is also a preferred transmission solution, especially in offshore wind farms, with two- or three-level VSCs or modular multilevel converters. The rectifier side of VSC-HVDC normally has PLL control, active power/dc voltage control, and reactive power/ac voltage control loops [33], and the inverter side structure is like that of current-type VSCs. To limit the scope of this paper, the dc-link dynamics are ignored, considering only the dc/ac and ac/dc stages. The control-induced converter-driven stability issues (fast- and slow-interaction) arising from these four kinds of converters in power systems will be discussed from the following aspects: control delay, inner/outer control, and switching actions. It should be noted that these causes are coupled and may mix to cause converter-driven instabilities. A. CONVERTER CONTROL DELAY (FAST INTERACTION) The PECs may cause current harmonics in power systems, as shown in Fig. 5 with 830 Hz harmonics in a wind farm [40]. The unstable sources can be identified with the bus participation factors (PFs) calculated from the multi-input-multi-output transfer function matrix model of the power system and eigenvalue sensitivity analysis. Specifically, the converters with larger PFs would introduce harmonic resonances into the system. The fundamental mechanism behind the phenomena can be further revealed by the passivity-based stability criterion: a system described by a rational transfer function is passive if it has no right-half-plane poles and the real part of its frequency response is non-negative (i.e., its phase stays within ±90°) at all frequencies [42].
Therefore, if a converter impedance is non-passive at some frequencies when connected to an otherwise passive system, instability may occur within these non-passive regions (NPRs). Accordingly, the output admittance Y_o1(s) of current-type VSCs with LCL filters is derived, and the converter passivity is examined to identify the root causes. The results show that there is a high-frequency (HF) NPR caused by the interaction between the LC resonance frequency f_r and the system control delay T_d. The delay here is assumed to be k times the switching period T_sw, where k is typically 1.5 and can be reduced to 0.5 with more advanced digital control. The conclusions are: (1) when f_r < f_sw/(4k), the HF-NPR in GFLs with an LCL filter is (f_r, f_sw/(4k)), as shown in Fig. 6(a); (2) when f_r = f_sw/(4k), there is no HF-NPR; and (3) when f_r > f_sw/(4k), the HF-NPR is (f_sw/(4k), f_r) [43]-[46]. According to the conclusions above, the instability causes for the system in [40] can be examined more deeply: the converter switching frequency f_sw is 4 kHz and the control delay is 0.5 T_sw, so f_sw/(4k) = 2 kHz. Besides, f_r is 729 Hz, which is smaller than 2 kHz. Therefore, the harmonic instability issues would happen within (729 Hz, 2 kHz), which matches the current waveforms with the 830 Hz resonance shown in Fig. 5. Following the same approach, the HF-NPR of current-type VSCs with an L filter is identified to be (f_sw/(4k), 3f_sw/(4k)) using the output admittance Y_o2(s), as shown in Fig. 6(b) [35], [42], [47]. The HF-NPR of voltage-type VSCs is also identified to be (f_sw/(4k), 3f_sw/(4k)) through an examination of the phase angles of the converter output impedance Z_o(s), as shown in Fig. 6(c), which is analogous to the L-filtered current-type VSCs [37], [48], [49]. When the control delay is small enough, e.g., k = 0.5, the converter can be passive up to the Nyquist frequency 0.5 f_sw, which means there would be no harmonic stability issues when connecting the converter to another passive grid. Furthermore, if the converter is implemented with silicon-carbide devices instead of silicon devices, with a higher switching frequency, the converter passivity can be guaranteed over a wider frequency range and system stability can be improved. Therefore, to eliminate the control-delay-related converter-driven stability issues, one direct method is to use advanced controllers to achieve small control delays. Beyond that, system stability can also be enhanced by passivity compensation methods. For example, for current-type VSCs, there are voltage feedforward control [35], [43], [46], lead-lag control [45], active damping [41], [43], [46], passivity-based robust control [44], and adaptive bandpass-filter-based compensation control [50]; for voltage-type VSCs, there are adaptive notch-filter-based compensation control [50] and voltage feedforward control with a virtual impedance control block [48], [49]. Note that virtual impedance control may also affect the system's slow-interaction converter-driven stability. Therefore, the outer loop needs to be refined accordingly.
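To make the delay rule above concrete, the following minimal Python sketch (ours, not from the paper) evaluates the HF-NPR cases for an LCL-filtered grid-following converter and checks whether an observed resonance falls inside the region; the numbers are the example values quoted above (f_sw = 4 kHz, k = 0.5, f_r = 729 Hz), and case (3) is written under the symmetry assumption stated above.

def hf_npr_lcl(f_r, f_sw, k):
    """Return the HF-NPR (f_lo, f_hi) in Hz for an LCL-filtered GFL; None means no HF-NPR."""
    f_c = f_sw / (4.0 * k)  # critical frequency set by the control delay T_d = k * T_sw
    if f_r < f_c:
        return (f_r, f_c)   # case (1)
    if f_r == f_c:
        return None         # case (2): no HF-NPR
    return (f_c, f_r)       # case (3), assumed by symmetry with case (1)

npr = hf_npr_lcl(f_r=729.0, f_sw=4000.0, k=0.5)
print("HF-NPR:", npr)       # -> (729.0, 2000.0) Hz
f_obs = 830.0               # resonance reported in the field case [40]
if npr and npr[0] < f_obs < npr[1]:
    print("830 Hz lies inside the HF-NPR, so harmonic instability is plausible")

Running the sketch reproduces the (729 Hz, 2 kHz) window derived in the text and confirms that the measured 830 Hz resonance falls inside it.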
B. INNER LOOP CONTROL (FAST INTERACTION) In addition to the control delays as a root cause of system harmonic instability issues, the inner loop control bandwidth also has an impact, since the control delays typically add negative damping into the alternating current control (ACC) loops of PECs [49], [51]. For example, in a system with multiple paralleled LCL-filtered current-type VSCs, the interactions among ACC loops with larger control bandwidths cause interactive circulating currents to arise, because the resonance frequency tends to shift into the negative damping region caused by control delays when the control bandwidth is increased [52]. A direct solution is to limit the inner current control bandwidth, which may sacrifice the current control dynamics. Apart from this, a multi-sampling approach can be used [53]. However, harmonic instability driven by switching actions would then be introduced, causing a distorted grid current with low-frequency aliasing. Hence, a repetitive filter to eliminate the multi-sampling-induced harmonics is also needed. C. CONVERTER SWITCHING ACTIONS (FAST INTERACTION) For parallel converters with asynchronous carriers, the pulse-width modulation (PWM) block generates sideband harmonics which may cause system harmonic instability [54], [55]. Fig. 7 shows the harmonic current waveforms in a system with two parallel current-type VSCs. To eliminate the f_sw sideband harmonics, a global synchronization of all PWMs through a communication-based central controller is needed. Another way is to add active or passive damping into the system to damp the high-frequency oscillations. In addition, the increasing parasitic resistance at higher frequencies due to the skin effect of the output inductor L can provide additional passive damping, which is good for system stability. Therefore, the effects of controllers on system stability above the Nyquist frequency (f_sw/2) may be negligible in some cases [56]. D. OUTER LOOP CONTROL (SLOW INTERACTION) Slow-interaction converter-driven instabilities are also observed in power systems, as shown in Fig. 8; these are also called sub-synchronous oscillations (SSO) [57]. The main reason for SSO has been identified as the interaction between the outer control loops of the converters and the grid strength (defined by the short-circuit ratio, SCR). 1) PLL CONTROL For current-type VSCs, the slow-interaction converter-driven instability is mainly due to the asymmetrical PLL dynamics, i.e., regulating only the q-axis PCC voltage, which introduces positive feedback into the system [58]-[60]. By examining the closed-loop poles of current-type VSCs, it is found that there is one pair of complex poles (P_1,2) with low-frequency dynamics related to system fundamental frequency sideband oscillations [58]. The root-locus approach is applied to analyze the locations of the poles to study the impact of the PLL (proportional gain K_pll_P and integral gain K_pll_I), as shown in Fig. 9. For the PLL control parameters, a decrease of the proportional gain K_pll_P (star line in Fig. 9) and an increase of the integral gain K_pll_I (circle line) will move the SSO mode-related pole toward the unstable region. It is also observed that a reduction of the ACC integral gain K_CC_I (square line) has minor impacts on system SSO stability, while the impact of the ACC proportional gain K_CC_P on system SSO is negligible. The impedance-based Nyquist stability analysis approach leads to the same conclusions, as discussed in [59], [60]. Additionally, it is found in [63] that the ACC loop may accelerate the equivalent motion of the PLL in the first swing, which worsens system transient stability by enlarging the mismatch between the accelerating and decelerating areas in the power-angle curve of the analogized synchronous machine model of the current-type VSC.
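The destabilizing trend of the PLL gains can be illustrated with a generic, textbook-style small-signal model of a synchronous-reference-frame PLL (a simplified stand-in, not the exact model of [58]): linearizing v_q ≈ V·(θ_grid − θ_pll) around the operating point gives the closed-loop characteristic polynomial s^2 + V·Kp·s + V·Ki = 0, whose damping falls as Kp decreases or Ki increases. The gain values below are arbitrary illustrative choices.

import numpy as np

V = 1.0  # PCC voltage magnitude in p.u. (assumed)

def pll_poles(Kp, Ki):
    # Roots of s^2 + V*Kp*s + V*Ki = 0 for the linearized SRF-PLL loop
    return np.roots([1.0, V * Kp, V * Ki])

for Kp, Ki in [(180.0, 3200.0), (60.0, 3200.0), (60.0, 9600.0)]:
    p = pll_poles(Kp, Ki)
    zeta = -p[0].real / abs(p[0])  # damping ratio of the dominant pair
    print(f"Kp={Kp:6.1f}  Ki={Ki:7.1f}  poles={np.round(p, 1)}  damping={zeta:.2f}")

The printed damping ratio drops from 1.00 to 0.53 to 0.31 across the three gain sets, mirroring the root-locus observation that lowering K_pll_P or raising K_pll_I pushes the SSO-related pole pair toward instability.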
The PLL control blocks in LCC-HVDC and VSC-HVDC have similar impacts on system stability. For the LCC-HVDC, a study based on the small-signal model and eigenvalue analysis was conducted in [38] to investigate the impacts of the PLL and LCC controllers. First, the PLL bandwidth has significant impacts on system stability. Too large a PLL control bandwidth will cause system SSO, especially under weak ac grids. Considering the PLL gain stability boundary with different types of LCC controls, the stable region of the PLL gain is larger with CDVC than with CEAC. Second, in the CEAC controller G_γ, a smaller proportional gain K_P and a larger integral gain K_I help improve system stability, while in the CDVC controller G_dcv, a larger K_P and a smaller K_I enhance system stability. Third, there is a close coupling between the PLL and LCC control loops, which indicates that the instability caused by a larger PLL gain can be eliminated by properly tuning the LCC controllers. For the VSC-HVDC, based on the eigenvalue analysis of the corresponding small-signal model, the PLL impacts on system stability can be obtained as shown in Fig. 10. It is seen that when the SCR is larger than 1.32, there will be no stability issues for any value of K_pll_P. However, in a system with lower SCR, there is a maximum K_pll_P limit for system stability. Note that K_pll_I is assumed to be c times K_pll_P for simplicity. Another study on a wind-farm-connected HVDC transmission is conducted with the impedance-based stability analysis in [39]. It is also found that increasing the voltage loop crossover frequency or reducing the PLL control bandwidth can improve the system's slow-interaction converter-driven stability. The PLL-related converter-driven instabilities can be directly addressed by tuning the PLL control parameters, e.g., reducing the PLL bandwidth to limit the effective frequency range of the harmful positive feedback. Another approach is to add active damping, e.g., virtual impedance [34], [64] or feedforward control [61], [65]. 2) DROOP CONTROL In voltage-type VSCs, droop control strategies are normally adopted for power regulation and system synchronization. A complex-value-based output impedance model in the stationary frame is built in [62] to study the impacts of the control loops. It is revealed that the interactions of the droop control loops and the voltage control loop tend to cause system instability issues. Moreover, a comparative study is conducted in [66] to investigate the differences between multi-loop droop (with an inner V-I loop, as shown in Fig. 2) and single-loop droop (without an inner V-I loop). Fig. 11 shows the resulting small-signal stability boundaries under different grid equivalent impedances [66]. It is found that the voltage loop makes the converter prone to being less damped and to losing stability more easily, since the stable region is reduced with inner V-I control. Besides, it is found in [36] that a larger voltage control bandwidth may enhance system SSO stability. Additionally, the Q-v droop impact on system stability is weaker than that of the P-f droop. Based on these findings, the droop-induced slow-interaction converter-driven instability can be mitigated by tuning the parameters of the more sensitive control blocks, i.e., the P-f droop and the voltage control. III. MECHANISM OF GRID CONDITION-INDUCED CONVERTER-DRIVEN STABILITY ISSUES In addition to the various converter control loops, converter-driven stability issues also depend on system interactions and operating conditions.
A. GRID STRENGTH (SLOW- AND FAST-INTERACTIONS) As shown in Fig. 9, Fig. 10, and Fig. 11, the slow-interaction converter-driven stability not only relies on the converter control loops but also depends on the grid strength. In converters with a PLL control block, the instabilities are more likely to occur under weak grid conditions. As shown in Fig. 9, an increase of L_g (diamond line), i.e., a weaker grid, will also make P_1 a right-half-plane (RHP) pole and cause SSO instability. Note that a weak grid is defined as an ac power system with a low SCR and/or inadequate mechanical inertia by IEEE Standard 1204-1997 [67]. It is also worth mentioning that in LCC-HVDC systems, a weak system means an SCR < 2.5, while in VSC-HVDC systems the SCR boundary between 'weak' and 'strong' systems is suggested to be 1.3-1.6, as implied by Fig. 10 [33]. However, in converters with droop control, the smaller the grid impedance, the smaller the allowed maximum P-f droop gain, as shown in Fig. 11. That means the SSO instability tends to happen in a strong grid under the same droop gains in voltage-type VSCs, which coincides with the results in [36], [62]. The fast-interaction stability may also be affected by the grid strength. For example, if the magnitude of the grid-side impedance intersects with that of the converter impedance in the HF-NPR, and the phase difference at the intersection does not meet the stability criterion, then harmonic instability issues will appear [48]. A grid impedance away from the HF-NPR can help eliminate the harmonic instability issues. B. CONVERTER-INTERFACED LOADS (FAST- AND SLOW-INTERACTION) The converter-interfaced loads have very different frequency and voltage characteristics from conventional resistive loads or motor loads. Under some circumstances, the CILs can be considered as current-type VSCs, as discussed in Section II-A. For simplicity, several studies have shown that the CILs exhibit constant-power characteristics when the control bandwidth is high enough [68]-[70]. Therefore, negative incremental impedances are introduced by the constant power loads (CPLs) across the entire frequency range, and both fast- and slow-interaction converter-driven stability are affected by this negative damping. Similar findings have been obtained in a microgrids study in [71], with different solutions such as passive damping, active damping, or more advanced control strategies. One should note that although the CPL assumption is dynamic-wise (i.e., it simplifies the load dynamics), it may not always be the worst-case condition for system stability from a control standpoint [72].
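The negative incremental impedance of an ideal CPL follows from one line of calculus: holding P = V·I constant gives dI/dV = −P/V^2, so the small-signal resistance is r_inc = dV/dI = −V^2/P. A minimal numeric illustration (ours, with hypothetical operating points):

def cpl_incremental_resistance(V, P):
    # Incremental (small-signal) resistance of an ideal constant-power load
    # at operating point (V, P): r_inc = dV/dI = -V^2 / P
    return -V * V / P

for V in (380.0, 400.0, 420.0):
    r = cpl_incremental_resistance(V, P=100e3)  # 100 kW load, hypothetical
    print(f"V = {V:5.1f} V -> r_inc = {r:7.3f} ohm (negative, i.e., undamping)")

The negative sign is the whole story: wherever this load dominates the terminal impedance, it subtracts damping from the network, which is why both fast- and slow-interaction stability are degraded.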
C. OPERATING CONDITIONS (FAST- AND SLOW-INTERACTION) System operating conditions also affect the converter-driven stability, including both fast- and slow-interactions. For example, a theory of harmonics created by resonance in [73] shows that the harmonics may not appear in normal operation, but may suddenly occur and grow if the operating conditions change, as shown in Fig. 12. The main reason for this phenomenon is that the converter impedance depends on both the operating points and the harmonic components. To solve this kind of issue, the focus should be on utilizing passive elements or control strategies to provide more damping to reshape the system impedance. Moreover, the slow-interaction converter-driven stability is also affected by system operating conditions, as shown in Fig. 9: a larger current reference I_ref (bar line) induces SSO with a higher oscillation frequency. Hence, a proper design of the converter impedance characteristics under different operating conditions should be examined to guarantee system stability. IV. CASE STUDIES OF INSTABILITY PHENOMENA IN PEGs To illustrate the different types of instability phenomena described above, a notional scaled-down two-area system interconnected by VSC-HVDC, as shown in Fig. 13, was built in MATLAB/Simulink. In each area, a three-bus system is investigated, where G_x1 and G_x2 work as voltage-type generators and G_x3 works as a current-type generator/load (x represents Area 1 or Area 2). G_x1 provides the voltage reference for each sub-system. The system is designed to be stable first. Then, based on the review of the possible causes of system instability issues, some typical impact factors are studied by changing the corresponding parameters, such as the inner control, the outer control, or the grid strength. Note that the control and hardware parameters for stable operation are regarded as benchmark conditions (denoted with the subscript 'BM' in the following text). Three case studies are conducted in this paper through both time-domain simulations and Norton admittance matrix (NAM)-based stability analysis with the characteristic loci of the system eigenvalues [74]. The reasons the NAM-based approach is adopted in these case studies are summarized as follows. First, there are generally two types of modeling approaches for system stability analysis: the state-space approach and the impedance-based approach [26], [75]. The state-space approach is suitable for modeling the system's low-frequency dynamics and can be used to identify the oscillation modes through eigenvalue analysis. However, if the fast dynamics in the system are considered, the model becomes a high-order matrix which may be difficult to compute; additionally, information about the entire system is required to derive the model. The impedance-based approach, in contrast, analyzes system stability through the interactions between different subsystems; it needs only the terminal characteristics and can be used to identify the impact of each subsystem on system stability. Therefore, an impedance-based approach is adopted in this paper. Second, the impedance-based stability criteria can be further categorized into three types: Nyquist-based stability analysis, loop-based stability analysis, and NAM-based stability analysis [74]. The Nyquist-based approach analyzes system stability through an open-loop model at one partition point; therefore, the open-loop RHP poles need to be checked first, and the analysis results are sensitive to the partition point. The loop-based approach analyzes the system stability through the closed-loop model, so there is no need to check the open-loop RHP poles and it is insensitive to the system partition point. However, it depends on the circuit operation, and it cannot be used to identify the weak point in the system. The NAM-based approach analyzes the system stability through the closed-loop model with the overall system structure, so there is no need to check the open-loop RHP poles; it is also insensitive to both the system partition point and the circuit operations. It can further be used to identify the weak point and the oscillation frequency in the system by analyzing the characteristic loci of the system return-ratio matrix [76], [77]. Therefore, the NAM-based approach is adopted in this paper.
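The characteristic-loci idea can be sketched in a few lines of Python (a skeletal illustration with a toy 2x2 return-ratio matrix standing in for a real network model; the matrix below is hypothetical, not the test system of Fig. 13): at each frequency the eigenvalues of the return-ratio matrix L(jω) are computed, and encirclements of, or close approaches to, the critical point −1 + j0 flag instability and the associated oscillation frequency.

import numpy as np

def characteristic_loci(L_of_s, freqs_hz):
    # Eigenvalue loci of a frequency-dependent return-ratio matrix
    return np.array([np.linalg.eigvals(L_of_s(1j * 2 * np.pi * f)) for f in freqs_hz])

def L_toy(s):
    # Hypothetical 2x2 return ratio: two first-order channels with weak coupling
    return np.array([[1.0 / (1 + s / 500.0), 0.2],
                     [0.1, 1.0 / (1 + s / 2000.0)]])

freqs = np.linspace(1.0, 1000.0, 500)
loci = characteristic_loci(L_toy, freqs)
d = np.abs(loci + 1.0)                  # distance of each locus point to -1
i, j = np.unravel_index(np.argmin(d), d.shape)
print(f"closest approach to -1: {d[i, j]:.3f} at about {freqs[i]:.0f} Hz")

The frequency at which a locus passes closest to −1 is the candidate oscillation frequency, which is how an unstable mode such as the 420 Hz one in Case I below is read off the loci.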
A. CASE I: IMPACT OF INNER CONTROL PARAMETERS In Case I, Area 1 and Area 2 work independently with the VSC-HVDC disconnected, i.e., no power flows between Area 1 and Area 2. The transmission lines in both Area 1 and Area 2 are kept the same as in the benchmark system, but the inner control of G_13 is changed to 5 times the benchmark parameters to obtain a faster inner loop design. Consequently, a 480 Hz harmonic instability issue is observed on B_13, and the NAM-based stability analysis also predicts such an oscillation through the characteristic loci, as shown in Fig. 14 (420 Hz + 60 Hz). To eliminate this instability issue, the control bandwidth of the inner loops should be limited, as reviewed in Section II. With a slower inner loop, the system can be stabilized, as shown in Fig. 15. Note that in the following case studies, only the unstable waveforms are given, considering the page limits. B. CASE II: IMPACT OF OUTER CONTROL PARAMETERS First, the PLL control parameters of G_13 in Area 1 are changed to K_pll_P = 0.01 * K_pll_P,BM and K_pll_I = 5 * K_pll_I,BM, and the other parameters are kept the same as in the benchmark system. All the parameters in Area 2 also remain the same as in the benchmark system, and the VSC-HVDC is disconnected. It can then be found that, due to the improper PLL parameter design, there are low-frequency oscillations in Area 1, as shown in Fig. 16. The phase voltage of B_13 shows a 68 Hz resonant frequency, which matches the analysis result. The PLL control blocks in the VSC-HVDC have a similar impact on system stability as those in current-type VSCs. When power flows from Area 2 to Area 1 through the VSC-HVDC connection, and the parameters in both Area 1 and Area 2 are kept the same as in the benchmark system except that the PLL parameters in the VSC-HVDC are changed to K_pll_P = 0.05 * K_pll_P,BM, it can be observed in Fig. 17 that low-frequency oscillations arise in both the inverter station and the rectifier station. To remove the slow-interaction instability issues, an increase of K_pll_P and a decrease of K_pll_I can help, as reviewed in Section II. C. CASE III: IMPACT OF GRID STRENGTH In Case III, Area 1 and Area 2 work independently with the VSC-HVDC disconnected. The transmission line parameters in Area 2 stay unchanged compared with the benchmark system, so it remains stable, while L_113 is increased to 5 times L_113,BM and L_123 to 5 times L_123,BM in Area 1 (i.e., a weaker connection). It can then be seen from Fig. 18 that a 216 Hz harmonic issue occurs in Area 1, and the impedance-based stability analysis approach also predicts this harmonic resonant frequency. According to the review in Section III, a stronger grid connection is expected to remove this instability issue. The other causes reviewed in Section II and Section III, such as the control delay or the loads, can also be studied following the same method used in the case studies above.
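As a side note on reading the Case I numbers: a mode at f_dq in the synchronous (dq) frame appears at f_dq + f0 (and possibly f_dq − f0) in the stationary abc frame, which, as the "(420 Hz + 60 Hz)" notation suggests, is why the loci-predicted 420 Hz mode shows up as a 480 Hz component in the measured phase quantities. A one-line check, assuming the 60 Hz fundamental of the test system:

f0, f_dq = 60.0, 420.0  # fundamental and dq-frame mode frequencies (Hz), from Case I
print(f"abc-frame components: {f_dq + f0:.0f} Hz and {abs(f_dq - f0):.0f} Hz")
# -> 480 Hz (matches the oscillation observed on B_13) and 360 Hz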
V. OPEN RESEARCH ISSUES AND CHALLENGES With the understanding of the impacts of PECs on power system stability, future all-power-electronics-based grids can be envisioned, but there are still some challenges going forward. A. STABILITY ANALYSIS AND IMPROVEMENTS OF LARGE-SCALE PEGs Many papers have studied the converter-driven stability issues in small-scale PEGs following the common practice: building system models → applying stability analysis approaches → developing stability improvement methods → conducting simulation/experimental validations [74], [78]. The system model is normally a state-space model or an impedance model, and the corresponding stability analysis is eigenvalue-based analysis or the Nyquist criterion. The stability improvement method is usually to improve the converter control or to add extra damping. The analysis results can be simulated with PSCAD, MATLAB, or other software, and it is also feasible to build a hardware platform for small-scale PEGs for further analysis. However, for large-scale PEGs, there is no such study yet. Although high PE penetrations (e.g., 80%) in large-scale systems have been studied, the stability analysis mainly focuses on the classic power system stability range of 0.1 Hz to 5 Hz [21]. Directly applying the approaches for small-scale systems to large-scale PEGs raises several issues. (1) A very large state-space matrix or NAM model has to be built first, and when applying the stability analysis approaches, the matrix may not be solvable due to the huge computational burden of the large matrix dimensions. One may use the Nyquist stability criterion to study the impedance ratio L_AC = Z_source/Z_load, which is normally a first- or second-order matrix, by simply dividing the system into the source subsystem (Z_source) and the load subsystem (Z_load). However, this approach is sensitive to the partition point and can only reveal the interactive stability of two subsystems at this given point. Therefore, the NAM model is preferred, since it can preserve the structure of the entire system and is less sensitive to circuit operations [74], [79]. (2) It is time-consuming to simulate a large-scale PEG on a personal computer. For example, in a case study with 32 Type-III WTGs (48 generators in total) in PSCAD, investigating 8 seconds of system response using average models for the PECs at one operating point takes about 20 hours on a regular Intel Core i7-7700 CPU @ 3.60 GHz, let alone using converter switching models. Besides, it is also challenging to build a hardware platform for a large-scale power system. The solutions for the challenges in studying large-scale PEGs can be considered from either a top-down or a bottom-up angle [80]. The top-down approach takes a global view of the system. First, a generic converter model covering a wide variety of PECs is expected, which would keep all the important intrinsic characteristics of the PECs while simplifying the calculation process. Some recent studies have developed generic models for PECs, such as a generic model for wind power plants [81], [82], or a data-driven power electronic converter modeling approach [83]. Second, the stability analysis approach should be improved to relax the huge computational burden for a large-scale system, such as the partition-based nodal admittance matrix model for small-signal stability analysis of large-scale PEGs in [77]. Third, for the system simulation, a more powerful multi-core workstation computing in parallel can be adopted to speed up the process. The bottom-up approach, in contrast, starts with the local converter.
It is desired that decentralized control for smart converters [84] can ensure system stability. Passivity-based control can be applied in converter design to enhance system stability. The existing works mainly aim at improving fast-interaction converter-driven stability, but a general solution for slow-interaction stability regarding converter synchronization is still unclear, since the low-frequency behavior highly depends on system operating points. Therefore, a decentralized converter control for large-scale system stability under variable working conditions is desired. B. STABILITY ANALYSIS CONSIDERING SYSTEM NONLINEARITIES The converter-driven stability analysis for either small-scale or large-scale PEGs above is mainly focused on small-signal stability with system linearization. However, a PEG is inherently a nonlinear system [85], owing to, e.g., large disturbances, power/current limits, or control saturations. To study the system's large-signal stability considering all the nonlinearities, a common approach is to use time-domain simulation tools to reflect the system response under disturbances. Typically, many simulations under different types of disturbances (e.g., faults, generation or load dispatch) are needed to characterize the system behavior. There have been some studies focused on large-signal stability analysis of PEGs, such as the converter-level large-signal stability analysis of GFMs or GFLs in grid-connected conditions [86]-[89], or the system-level large-signal analysis of dc microgrids [90]. However, a systematic large-signal stability analysis approach for ac PEGs is still lacking. Therefore, a system-level large-signal stability analysis method for future PEGs considering all the nonlinear effects, especially for large-scale PEGs, should be developed. VI. CONCLUSION Power electronics-based grids represent the trend for future electric power systems. New system stability issues, such as harmonic stability or subsynchronous oscillations, could arise along with the impacts on classical power system stability. This paper presents a comprehensive analysis of the converter-driven stability issues (fast- and slow-interactions) in power systems with root cause analysis. The results show that the converter control, grid strength, CILs, and system operating conditions all affect system stability. The case studies of a two-area PEG verified these instabilities with illustrative and intuitive explanations. Control and design challenges for future PEGs are also presented. NOTICE OF COPYRIGHT This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
8,115.8
2022-01-01T00:00:00.000
[ "Engineering", "Physics" ]
The fear factor: Xenoglossophobia or how to overcome the anxiety of speaking foreign languages The article approaches a ubiquitous yet rarely adequately addressed problem area of learning and teaching foreign languages. It concentrates on xenoglossophobia, the fear of speaking foreign languages. Why do avoidance strategies as well as phobias develop during childhood, especially in the foreign language classroom, whenever it comes to the productive usage of the English language? Psychological, pedagogical, didactical as well as language-related and neuroscientific findings are analysed and interpreted in order to help answer central questions like the above. The theoretical indications are further supported by a fundamental pilot study based on the productive language usage of foreign language students (n=108) and the corresponding reflective and prospective analysis. The second part of the article brings all these findings together and outlines language-didactical, meaningful, positive preventive, diagnostical, and therapeutic opportunities for intervention in the foreign language classroom. The brain develops in response to experience, good as well as bad (Perry & Marcellus, 2020; Shonkoff & Phillips, 2000): neurons and synapses, the connections among them, change in an activity-dependent way. This very early use-dependent developmental process is the first key to understanding the impact of fear on the process of language acquisition, especially foreign language learning (Engels et al., 2007). The more a certain neural system is activated, the more it builds an internal representation of the experience corresponding to that specific neural activation. This is the basis for learning and memory. Age, however, makes a difference due to the plasticity or receptiveness to environmental input: the sensitive brain of a child is more malleable but also more vulnerable to experience than a mature brain. In order to develop normally, different regions of the brain require specific kinds of experience, e.g. visual input while the visual system is organising. These times during development are called critical or sensitive periods. In all, with optimal experiences, the brain develops healthy, flexible, and diverse capabilities. Disruptions of the timing, intensity, quality or quantity of normal developmental experiences, however, cannot be avoided and are common as well. Nonetheless, they might have a devastating impact on the brain's development and function when occurring too often, regularly, and not falling back below a certain intensity level. They can then insidiously create bad feelings like fear, anxiety, or worry. Fear is an emotional, physical response or reaction to a clearly foreseen and present danger of harm and affects the ability to focus and think. Anxiety arises as a negative emotional response to anticipated events and has cognitive as well as controlling elements (Horwitz, 2010). The cognitive aspect is what interferes with academic performance; therefore, the ability to concentrate, focus and think is also affected. Worry generates potential negative outcomes, often as a result of an elaborate cognitive process. It has more to do with thinking than with feeling and negatively influences the ability to concentrate and think. In this paper, worry, as a clearly subliminal, non-dangerous, and normal but also only temporary human state of mind, can be neglected and will not be further addressed.
By introducing the amygdala first, it can be summarised that this part of the brain creates, maintains, or modifies anxiety and fear responses. Situated within the limbic system, the amygdala works from a less conscious part of the brain. The amygdala makes way for the direct impact of fear and anxiety: it receives information from the thalamus before the cortex does and can activate the sympathetic nervous system due to triggers that might be unknown to the cortex. This, for example, can be a stimulus that has previously been associated with emotional reactions and is therefore part of the emotional memories. When anxiety starts there, interventions based in the cortex, like logical thinking and reasoning, will not help reduce the upcoming or existing feeling. Amygdala-based anxiety can mostly be identified by certain characteristics: it occurs suddenly, creates strong physiological responses, and seems out of proportion to the situation. Ultimately, the amygdala does not need any involvement of the cortex, which makes the whole process so difficult to comprehend. The prefrontal cortex, the so-called Thinking Brain, is responsible for cognitive action like reasoning, conscious memories, awareness, detailed information, and concentration. From a functional point of view, the differences towards the amygdala are big, yet the connections are close: anxiety responses can also be initiated in the cortex by alerting the amygdala to potential or imagined dangers. And, what is more, amygdala reactions can, with a little time gap, be relativised by the cortex when the danger turns out to be harmless on a second glance. The cortex, however, provides another anxiety pathway, independent of external information: thinking or ruminating hypothetically about a prospective but unreal future along with a bad-case scenario is often the basis of a self-fulfilling prophecy. What one imagines possibly going wrong, combined with catastrophic images in one's mind, will probably happen. The two areas are contrasted in a nutshell in Table 1 below.
Table 1. Contrastive overview of language-related fear centres
AMYGDALA: responsible for emotions; generates, receives, or changes anxiety and anxious reactions; operates unconsciously; physical alarm system, perpetually scans for incoming signals of danger; affects the nervous system, hormones, and the prefrontal cortex (PFC); generates anxiety without the influence of the PFC and even overrules it; anxieties are 'learned' and negatively connoted.
PREFRONTAL CORTEX: Thinking Brain; responsible for cognition; responsible for consciousness; functions as executive (evaluation and analysis); promotes reasoning and logical thinking; conscious memory (retrievable); contains detailed information; capable of learning to control the amygdala (aim of prevention and therapy); produces hypothetical bad-case scenarios.
Age plays a crucial role Age, again, plays an important role in the development of the two areas' relationship: the prefrontal cortex is not completely developed until the age of between 20 and 25, so that it cannot precisely suppress emotions such as fear or anxiety yet (cf. Figure 2). That has to do with a process called myelination: the communication velocity between the neurons is determined by the thickness of the myelin sheaths around their connections. These connections are nerve fibres, known as axons, 1 millimetre to 1 metre long, which transmit electrical nerve stimuli between each other at an average speed of approx. 250 mph.
Myelination, the gradual coating of the axons, allows brain areas that are far apart to connect, e.g. the limbic system and the cortex. This development has a designated direction, from back to front all the way to the cortex (cf. Figure 2) (Gogtay et al., 2004). Similar to the development of the prefrontal cortex, myelination is a maturation process that continues until the age of around 27. Only then are all functions in the prefrontal cortex fine-tuned and connected, concentration and focusing fully available, and emotions controllable in most cases. Therefore, teenagers usually prioritise their emotional processing over their cognitive processing due to the fact that they use their limbic system more strongly during their adolescence, as it has already developed further. The mind is literally not capable of controlling the rest of the brain, which can be fruitful for creative language processes (Böttger & Költzsch, 2019). However, strategic, long-term as well as short-term planning and interventions are not possible, or only to a limited extent (Böttger & Sambanis, 2017). This has an impact on the handling of fear and anxiety: children and teenagers are hence especially vulnerable and require a special form of protection. The importance of understanding Generally, understanding where and how fear and anxiety begin, i.e. knowledge of the language of the amygdala, is the first means of taking correct steps to interrupt the process of increasing amygdala activation and negative bodily responses (cf. Pittman, 2015). A mental intervention is possible when fear and anxiety signals (beginning in the cortex with uncomfortable thoughts) provide only little but nonetheless precious time in order to change one's mental predispositions. This presupposes already learned, self-effective mental and physical strategies like self-programming, self-talking or taking deep breaths. This in turn may help in preventing the activation of the amygdala and decrease upcoming cortex-based anxiety at the same time. Repetition and experience concerning this procedure are essential for sustainable success. Definition and description Anxiety can produce physiological responses interfering with the ability to concentrate and learn, especially languages. Specific thoughts or preoccupations can suddenly, dependent on individual triggers, invade learning processes and redirect attention to a parallel and negative world of thoughts. Anxiety is capable of motivating avoidance strategies which in turn can lead to self-exclusion from the learning community, to a delay of studying and simply the avoidance of doing homework or class assignments. Anxiety can, as a result, interfere with the teacher's professional ability to accurately assess a student's knowledge or skills. If the affected learning process is language learning, it is a matter of a very specific type of anxiety, xenoglossophobia, the fear of speaking foreign languages. Xenoglossophobia derives from the Greek: phobos meaning 'fear', xeno meaning 'foreign', glosso meaning 'language' or 'tongue'.
Psychiatrically speaking, xenoglossophobia belongs to the group of specific phobias (Horwitz, 2001). It describes the abnormal and exaggerated fear of foreign languages. In the course of the disorder, people tend to avoid not only the study of foreign languages but also speakers of said languages. They feel embarrassed when they have to speak English in classrooms. Travelling to foreign countries is also avoided for similar reasons. Hence, xenoglossophobia constantly restricts daily life and can lead to the development of more in-depth states of anxiety or even depression. This can be seen in the form of physical reactions or stress response symptoms like a pounding heart, rapid breathing, stomach distress etc. The effects for the language classroom are multiple: for example, when students cannot sit still on their chairs in a classroom, when they nervously play with their pens, or when they cannot utter sounds calmly, naturally and clearly in classroom conversations, they show anxiety. They are also often not willing to start any conversation, do not like to participate in communication, remain silent in discussions, and speak fast and finish quickly when facing large audiences, such as in front of their classmates. How xenoglossophobia develops Xenoglossophobia does not arise by itself. Speaking is a very complex neuromuscular activity, which needs to be learned over years and must constantly be practised, especially regarding foreign languages (Böttger, 2014). The phonological production, or rather the articulation, can be traced in the image below (cf. Figure 3). It begins in Broca's area (1) and moves on to the motor cortex, in which the motoric process of articulation (2) and the motoric operational control (3) take place. The auditory feedback and the phonological monitoring, a type of cognitive self-control and correction of one's own speaking, occur in the upper temporal lobe (4). In addition, there are a number of smaller muscles that are required to enable speech production through the human lips. That such a long pathway contains various sources of error goes without saying.
Figure 3. Oral language production in the brain (Böttger, 2014)
A process simultaneous to speaking, namely speech perception or, much simpler, listening, is decoded in the language-processing and listening centres of the brain. The sound, including one's own voice, arrives in the ear and reaches the primary listening centre in the auditory cortex (5). From there the vocal message is sent to Wernicke's area (6) for phonological, grammatical, and semantic detection as well as decryption. Besides the neuronal processing, non-linguistic factors are also involved in the decryption process, i.e. conditions and information from the situation in which a statement is made. Redundancy plays an important role in decoding spoken language. It describes the difference between the amount of information a message could theoretically have and the amount that it actually possesses. Spoken language has a very high level of redundancy; it is generally estimated at around 50 percent of what has been said. During phone calls and in face-to-face communication, parts of the phonological information are often disturbed, drowned out by external noise or even not transmitted at all, in parts or as a whole.
Native speakers are so familiar with the coding system of their language that they can reconstruct a message from an incomplete transmission. However, anyone who learns a foreign language from early on must first acquire this ability. It is most important for receptive communication, particularly essential when understanding translated TV programmes, listening to recordings or information transmitted through loudspeakers (e.g. at airports) and telephone calls in the foreign language. For non-natives, nonetheless, the decoding process in a foreign language, carried out automatically and simultaneously with speaking itself, poses various risks due to even more sources of error and thus possible causes of anxiety. Speaking requires memory capacity as well as specific strategies. Both the amygdala and the prefrontal cortex are involved in this remembering process. Memorising and reproducing language is highly communicative, and especially the ability to remember something has further significance as a speech strategy in its own right: only those who can recall what the interlocutor has said are able to establish actual communication in conversation, which, apart from the exchange of content, also contains intercultural and affective components, for example, initial conversation strategies of asking questions, agreeing, and the like. Memorising means committing to memory and retaining language material such as words, sentences, etc. The linguistic content which is to be recalled must first be understood in order to be able to deal with it. Although the brain is quite capable of memorising even without knowledge of the content, this is not sufficient for participation in communicative situations. Communicative speaking altogether is a very complex structure consisting of the sub-competences hearing/listening, understanding, remembering, and speech production. A PILOT STUDY: XENOGLOSSOPHOBIA AND ITS IMPACT ON PROSPECTIVE FOREIGN LANGUAGE TEACHERS As said above, xenoglossophobia is not only a phenomenon experienced in childhood or teenage years but rather remains up until adulthood (also cf. Fondo, 2019). What is more, the fear of speaking foreign languages is not only found in certain social groups but, interestingly enough, also in those which tend to be viewed as free of these anxieties, e.g. foreign language teachers. A fundamental pilot study examined 108 prospective foreign language teachers studying English. As there are scarcely any findings regarding anxiety among foreign language speakers or learners respectively, and especially few among experienced foreign language learners and speakers, the aim of this study was to take initial steps into this specific field of anxiety in order to give first indications. Within a quantitative online survey, the participants were asked rudimentary questions concerning their use of the English language as well as feelings, concerns, or fears connected with the matter. Below is the complete question catalogue of the survey questionnaire.
1. Do you have to think before speaking English?
2. Do you make plans before speaking English, e.g. in class?
3. Up until the present day, are you scared to make mistakes while using English?
4. In general, have you avoided communicational situations in English?
5. Have you ever avoided an English-speaking phone call by not calling, by not answering, or by writing a text message instead?
6. Are there English words that you avoid in conversations?
7. Are there parts of a sentence that you avoid while speaking?
8. Up until the present day, are you sometimes nervous before coming into a class held in English?
9. Do you notice physical reactions (e.g. blushing, perspiration) while speaking English in front of a group?
10. Do you sometimes condition yourself (e.g. pep talk) before speaking English in class or before giving a presentation in English?
11. Looking back, did you feel uncomfortable speaking English at school, especially when called upon suddenly?
12. Are you scared to say something wrong in English?
13. Do you avoid speaking English in class to prevent making a mistake even though you might know the correct answer?
14. Do you sometimes shorten your sentences on purpose in order to avoid verbalising long sentences in English?
15. Do you know someone who would have answered the previous questions with yes?
Remarkably, even these students, who tend to feel confident while speaking English due to their field of studies, experience xenoglossophobia equally. The answers of the pilot study are shown in the graphs in Figure 4 below. 53% of the students that participated in the study conceded that they have to think before speaking English. This is fully in line with contemporary language acquisition theory findings, which state that non-bilinguals learn and/or use a foreign language after the age of 6 only against the background of their first or mother tongue (Böttger, 2020, p. 37). The internal 'language plan' before the act of speaking in the foreign language is thus generally slightly delayed. Fears reinforce this 'lag'. Moreover, this aspect is reinforced by a second finding: 31% acknowledged that they additionally had to make some kind of plan before using the foreign language. The third survey result also substantiates parts of language acquisition theory: the great fear of making mistakes among two thirds of all respondents (64%) makes them weigh up whether or not to say something for a much longer time, actually too long for fluent communication. However, not only did these prospective English teachers overthink their use of the foreign language, but over half of them also admitted to avoiding situations in which they needed to use it. Avoiding speech acts, or simplification of speech, is not only considered a source of error (Böttger, 2020) but is also the maximum negative fear reaction. While 54% of the students stated that they had already avoided communicational situations in which English was necessary, 33% did not pick up the phone but instead wrote a message due to the oral foreign language barrier. The lack of any communicative support through non-verbal signals such as gestures and facial expressions has an anxiety-increasing effect (Figure 5). 51% of all students acknowledged that they avoid certain English words during English conversations. This is mostly due to personal uncertainty regarding the correct pronunciation or meaning of the word. Nonetheless, only 12% confirmed the avoidance of complete parts of a sentence while speaking English. With regard to longer linguistic units, the previously described effect is significantly enhanced when there is even more uncertainty with respect to the correct sentence structure or the correct grammatical structure. Additionally, half of all students surveyed stated that they are nervous before coming into a class held in English, and 61% noted that they had perceived physical reactions, such as blushing or perspiration, before speaking English in front of groups.
The physically measurable excitement, which may be a still subliminal fear or a positive excitement similar to the 'pre-start feeling' known from sporting competitions, is positively influenced by self-conditioning in almost half of the respondents (48%) (as shown in Figure 6). When taking a closer look into the classroom, the so-called 'cold calls' negatively influenced approximately half of all respondents. The fact that 47% of the students admitted that they had felt uncomfortable speaking English at school might confirm the assumption that the fear of speaking a foreign language stems from their previous education during their childhood or their teenage years, respectively. Hypothetically, this number is likely to be much higher for learners who are not as English-affine as those interviewed. This could possibly be traced back to the fact that students are scared to say something incorrect, which 69% of the participants in the study admitted. 62% confessed that they did not answer certain questions in class, even though they knew the correct answer, due to the fear of being grammatically or linguistically incorrect. Even 28% confessed to shortening their sentences on purpose in order to minimise their number of errors in verbalisation of the foreign language. The last question shows the iceberg effect: 94% of those questioned conceded to knowing somebody anxious concerning their oral use of the foreign language English. This circumstance must be investigated in more depth in follow-up examinations, as the fear potential seems to be much greater than previously assumed. This first pilot study to explore the scientific field will be followed by a second in-depth study in the year of the publication of this article. This will involve a significant expansion of the group of respondents in all aspects (number, learning experience with foreign languages, etc.) (cf. Figure 6). As reality therefore shows, xenoglossophobia does not only relate to a certain group of people but is, on the contrary, a type of anxiety which can affect everyone, even those who should feel more confident in the foreign language. Interestingly enough, the participants of this study are in general highly aware of their language proficiency and are capable of reflecting upon their language skills more in-depth due to their professional training. Hence, it is of interest to explore the matter further regarding non-language students as well as people who do not work in any language-related contexts. On the one hand, it can be assumed that the level of anxiety rises even further, as such subjects would not be as experienced in the foreign language and would therefore be more fearful of making use of it. On the other hand, it is probable that the level of anxiety would drop, due to the fact that future language teachers possess a high standard of critical self-reflection and are also used to comparing themselves with other experienced language users, in turn damaging their personal linguistic self-confidence. The results of this study therefore only depict the anxieties of this specific social group, which is why prospective studies are urgently required in order to shed more light on the precise reality of xenoglossophobia. Sources of xenoglossophobia Language anxiety and foreign language learning are interrelated. Many factors lead to xenoglossophobia in teaching, learning or even related situations (Li & Wang, 2019).
The following table was created following the categories of Turula (2002) (cf. Buitrago et al., 2008, p. 28) - academic, cognitive, social, and personal reasons - and was compiled by the authors after interviews and informal conversations with neuroscientists, psychologists, psychiatrists as well as academic language teaching staff of universities, teacher trainers, teachers, and students in 2020. It does not claim to be complete and is to be expanded continuously.

Figure 6. Pilot study: xenoglossophobia regarding foreign language students (n=108), questions 11-15 (left to right).

In order to deal with the effects of language anxiety on a didactical basis, focusing on possible anxiety prevention precautions, therapy interventions, and also methodological task formats, it is necessary to first identify the sources in depth and then, as a second prospective step in order to draw pedagogical, didactical, and even methodological conclusions, to analyse them deeply (Alnuzaili & Uddin, 2020; Abinaya, 2016). The latter, however, is far beyond the scope of this study and will be the main topic of a book by the authors forthcoming in 2021. Turula (2002) summarises the sources of xenoglossophobia as shown in Table 2 below.

Table 2. Sources of xenoglossophobia in the ELT classroom (Turula, 2002, adapted and expanded)

Psychological sources:
- lack of affective support
- false perception of emotions
- frustration
- stressful, negative, nearly hostile environment
- lack of self-confidence, low self-esteem
- adolescent, peer-related behaviour
- restrictions due to schools and teachers
- time pressure
- loss of cognitive control
- feeling of being observed and evaluated
- individual expectations
- unresolved conflicts
- feeling of excessive demands
- lack of mindfulness

Methodological sources:
a) Motivational:
- monotonous lessons
- boring topics and learning content
- little student involvement
- difficult tasks
b) Pedagogical:
- promotion of competition
- calling up of students in class ('cold calling')
- lack of speaking practice
- classroom organisation
- evaluation and grading
- negative, non-supportive feedback
- negative linguistic experiences/failures
- discouraging learning context

Cognitive sources:
a) Metacognitive:
- indifference to the learning process and learning formats
- excessive testing and evaluation
b) Cognitive:
- lack of linguistic capacity
- lack of content-related/subject-specific competences
- lack of lexical and grammatical skills
- complex structures and long sentences in task descriptions and dialogues
- lack of planning and goal definition
- continuous monitoring of personal speech
- grammatical difficulties
- deficiencies regarding vocabulary
- lack of competences in the mother tongue/first language
- linguistic interferences
- perfectionism
- neurobiological or mental diseases

Social-affective sources:
- prejudice of peers and teachers
- hypothetical judgements by speaking partners (native speakers)
- risk of public embarrassment
- lack of interest regarding the opinion of the speaker
- social-affective isolation
- cultural background
- negative experiences

The table contains psychological, methodological, cognitive, and social-affective aspects of the topic. These are interdependent as well: psychological effects can also be found within the methodological area (feedback) as well as in the cognitive column (self-monitoring of one's own speech) and may additionally influence socio-affective matters such as the risk of exposure through mistakes.
Another cross-connection could be a lack of self-confidence (psychological) (Ferreira Marinho et al., 2017) and mindfulness (Böttger, 2018), the lack of speaking experience (and opportunities) in English (methodological), vocabulary deficits (cognitive) and cultural background (social).

First steps towards overcoming xenoglossophobia

From a psychological perspective, two kinds of therapy are proven treatments for overcoming the fear of speaking English: cognitive behavioural therapy and confrontation therapy. The first option is one of the most modern psychotherapeutic methods. Here, negative attitudes, thoughts, evaluations, and convictions are identified and slowly but sustainably changed. For example, the inner negative belief 'I don't speak English well enough' can be replaced by a new, much more positive self-belief, such as 'My English is perfectly sufficient for what I want to accomplish'. Other possible coping conceptions for preparing a communication situation can be: I can get through this; This won't last forever; I've handled this before; Just remember to breathe; Take your time, there's no rush, time is on my side; I will learn from this, it will get easier each time; I'm doing the best I can; Staying calm shows that I am in control. Single helpful words to be repeated over and over for self-programming are relax, peace, calm, or breathe. This, however, requires several therapeutic steps and at least the support of a psychologist. The negative conviction then takes a back seat and is literally overwritten by the new conviction. The second option, the confrontational procedure (regarding English), takes place in a protected setting, also mentored by a psychological specialist. Being confronted with the anxiety-causing situation, those affected learn that they can endure the fear and, very crucially, that there are no negative consequences at all. As a result, the fear of speaking English can be forgotten again over time.

Resolving xenoglossophobia

Ideally, the influence of foreign language teachers accompanies either process. The anxiety and fear references mentioned above must be discussed intensively in the future in order to approach a didactical pattern of anxiety-free teaching and learning. In the meantime, the most important task is to track down these fears in their entirety, to record and discuss them, to put them in a suitable order of importance, and thus to produce the first cornerstones of a fear-free foreign language didactics, as illustrated in Figure 7 below.

Figure 7. Didactical action pattern of anxiety-free teaching.

Things to consider

In order to prevent xenoglossophobia, three fundamental didactical insights and basics must be considered.

1. Teachers must acquire obligatory knowledge about the difficult procedure of speaking in order to be able to professionally, sensitively, and fairly analyse and evaluate an oral performance. It is necessary to understand that mother tongue and foreign language learning, apart from a few points of contact, are fundamentally different. The main distinction is that learning to listen and speak, i.e. to assign phoneme sequences and supra-segmental elements such as prosody and intonation to the meanings of everyday use, of social and, later, of individual life, happens very early in life. Along with acquiring the mother tongue, awareness and concept development take place.
This natural process can never be fully repeated later on, when people learn all other languages literally on top of their mother tongue, unless they have grown up bilingually. From then on, approximately from the age of five, what should be said is planned in the mother tongue and translated internally, which in turn takes some time of thinking and getting ready to talk. The time factor when learning the mother tongue is thus significantly different from that when learning the first foreign language. This is actually a self-explanatory fundamental idea of English teaching and learning, yet one mostly not considered in institutionalised language education systems with their narrow schedules, learning progressions, and linearity. In a nutshell, providing enough time to speak avoids or reduces stress and anxiety.

2. Speaking is dependent upon experience, more specifically upon exercise and rehearsal. Intensive pronunciation training is an important and yet often neglected aspect of English instruction that leads to self-confidence in speaking. Successful pronunciation includes emphasis, intonation, and sounds as well as fluency. Even in an early phase of vocabulary learning, the words to be repeated must be imitated correctly in order to avert fossilisation. Fossilised word pronunciation is extremely difficult to relearn correctly, and deficits in this respect can later become communication barriers. For instance, the meaning of a word can vary if a single sound in it is pronounced differently. If a word is generally spoken in a vague manner, this can strain the communication partner's attention so strongly that the content of what has been said cannot be understood, or can only be understood through inquiries. Failure, again, may first lead to avoidance of communication and subsequently to fear and anxiety.

3. Last but not least, the role model offered by the teachers themselves as professional personalities is of importance. Ultimately, their English-speaking competence determines their learners' success regarding their speaking capabilities.

CONCLUSION

All in all, it seems that emotional learning as well as academic learning has to be considered when it comes to language learning. On the one hand, xenoglossophobia can be decreased through positive exposure provided by, for example, non-restrictive speaking opportunities, enough time, or a lot of practice. Positive language learning experiences with neutral or rewarding learning outcomes, opportunities for successful oral performances, and less subjectively threatening assignments have to be built on. On the other hand, this form of anxiety can in many cases simply be avoided by abandoning punitive and harsh communicative responses and instead focusing on aspects such as: (1) corrective feedforward instead of feedback, presented in an informative, not instructive way, avoiding indirect or direct ridiculing or teasing of students through irony or even sarcastic language; (2) a high ratio of positive to negative comments; (3) effective communication and social problem solving; (4) emphasising strengths and weaknesses in language use in a balanced way. These aspects are the non-negotiable building blocks of xenoglossophobia prevention.
7,312.2
2020-06-27T00:00:00.000
[ "Linguistics" ]
Machine Learning Approach to Understand Worsening Renal Function in Acute Heart Failure

Acute heart failure (AHF) is a common and severe condition with a poor prognosis. Its course is often complicated by worsening renal function (WRF), which exacerbates the outcome. The population of AHF patients experiencing WRF is heterogeneous, and some novel possibilities for its analysis have recently emerged. Clustering is a machine learning (ML) technique that divides a population into distinct subgroups based on the similarity of cases (patients). Given that, we decided to use clustering to find subgroups inside the AHF population that differ in terms of WRF occurrence. We evaluated data from the three hundred and twelve AHF patients hospitalized in our institution who had creatinine assessed four times during hospitalization. Eighty-six variables evaluated at admission were included in the analysis. The k-medoids algorithm was used for clustering, and the quality of the procedure was judged by the Davies-Bouldin index. Three clinically and prognostically different clusters were distinguished. The groups had significantly (p = 0.004) different incidences of WRF. Inside the AHF population, we successfully discovered three groups that varied in renal prognosis. Our results provide novel insight into the AHF and WRF interplay and can be valuable for future trial construction and more tailored treatment.

Introduction

Acute heart failure (AHF) remains a significant problem with a high mortality and a massive financial burden for healthcare providers [1,2]. AHF is a multidimensional state with a complex interplay between the cardiovascular and other systems, including the renal system. The pathological condition of simultaneous dysfunction of the kidneys and heart, in which the disorder of one organ induces damage to the second, is called cardiorenal syndrome [3]. One of the clinical manifestations of cardiorenal syndrome is worsening renal function (WRF), which can be defined as, e.g., an increase in serum creatinine and/or a decrease in urine output in a specified period [4]. WRF is a frequent complication overlapping with AHF, especially in intensive cardiac care units [5], and is associated with prolonged hospitalization and diminished survival [4]. The population of AHF patients endangered by WRF is heterogeneous, and so is the postulated impact of WRF on prognosis: some authors have presented contrary evidence that WRF has a negative, neutral, or even positive effect [4,6,7]. Considering this uncertainty, we presumed that the current lack of well-established classifications describing the risk of WRF leaves significant clinical differences between AHF patients unreflected. Thus, we decided to analyse the heterogeneity of the AHF population by resorting to novel methods of data analysis, aiming to describe different risk groups of WRF and, further, its impact on prognosis. Importantly, we have only included variables that are standard-of-care parameters routinely assessed during AHF patient monitoring. Data science algorithms, especially machine learning (ML), enable novel, clinically important insight into existing data and can distinguish previously unrecognized patterns [8]. Clustering is an unsupervised ML technique that organizes a set of data into internally similar subgroups. We presumed that this technique, which has been successfully leveraged in marketing [9], could prove its value in cardiovascular research as well.
Considering these advances, we decided to implement clustering in the AHF population to better understand the occurrence and significance of WRF.

Study Population

We retrospectively analysed three hundred and twelve acute heart failure (AHF) patients from two registries conducted in our institution between 2010-2012 and 2016-2017. Our previous papers described the eligibility criteria in both registries [10]. Heart failure diagnosis was stated according to the current ESC guidelines by a responsible physician [11,12]. To ensure the creatinine course in every patient and avoid missing values in the analysis, we only included patients who had serum creatinine assessed at four points, i.e., at admission, after 24 and 48 h of hospitalization, and at discharge.

Worsening of the Renal Function Evaluation

As there was a significant lack of data about diuresis and GFR or the parameters indispensable for its calculation, we based the diagnosis of worsening renal function (WRF) and acute kidney injury (AKI) on creatinine assessment only. AKI was defined according to the KDIGO guidelines as a ≥0.3 mg/dL increase of serum creatinine in 48 h [13]. WRF was defined as a ≥0.3 mg/dL increase of serum creatinine at any point during hospitalization. We decided to analyse both of these phenomena in order to capture as many renal endpoints as possible. Throughout the paper we will stick to using the term WRF, as it is the broader qualification.
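To make these endpoint definitions concrete, the following minimal Python sketch derives the AKI and WRF flags from the four creatinine measurements described above. It is an illustration only, not the authors' code: the column names are hypothetical, and the comparison against the admission baseline for WRF is our reading of the definition.

```python
# Hedged sketch: derive AKI/WRF flags from four serum creatinine values
# (admission, 24 h, 48 h, discharge). Column names are hypothetical.
import pandas as pd

RISE = 0.3  # mg/dL increase defining AKI (within 48 h) and WRF (any point)

def renal_flags(row: pd.Series) -> pd.Series:
    course = [row["crea_adm"], row["crea_24h"], row["crea_48h"], row["crea_dsc"]]
    # AKI (KDIGO-style): >=0.3 mg/dL rise between any two measurements
    # taken within the first 48 h of hospitalization
    first48 = course[:3]
    aki = any(later - earlier >= RISE
              for i, earlier in enumerate(first48)
              for later in first48[i + 1:])
    # WRF: >=0.3 mg/dL rise over the admission value at any later point
    wrf = any(value - course[0] >= RISE for value in course[1:])
    return pd.Series({"AKI": aki, "WRF": wrf})

df = pd.DataFrame({"crea_adm": [1.0, 0.9], "crea_24h": [1.2, 0.9],
                   "crea_48h": [1.4, 1.0], "crea_dsc": [1.1, 1.3]})
print(df.join(df.apply(renal_flags, axis=1)))
```

On this reading, every AKI case is also a WRF case, which matches the paper's use of WRF as the broader qualification.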
Clustering and Data Analysis

Variables included in the analysis are shown in Table 1. Initially, we chose 86 variables regarding the patient's clinical status, i.e., HF subtype, aetiology, comorbidities, symptomatology, and biochemical presentation. All parameters were assessed at patient admission to the hospital. Variables were manually screened to eliminate potential errors (e.g., anomalies or single out-of-range values). The dataset was imported into RapidMiner and autocleaning was performed. Variables with over 90% stability, more than 10% missing values, or a pairwise correlation of at least r = 0.6 were slated for removal, but none of the variables fulfilled these criteria. Missing values were replaced by average values, as clustering algorithms cannot proceed with missing values. Further, nominal values were converted into numerical ones, and all numerical parameters were normalized to range from 0 to 1, so each variable had the same impact on the calculated distance. Clustering is a widely used descriptive data analysis method on the border between statistical analysis and data mining, with a relatively long history. The goal of clustering (also called segmentation) is to identify groups of similar examples. Thus, the critical issue in clustering is a proper definition of similarity or distance. There are several clustering methods and algorithms that can be divided into various types, such as hierarchical versus partitional, exclusive versus overlapping versus fuzzy, and complete versus partial [14]. We used the k-medoids algorithm in our experiments. K-medoids is a partitional method that creates non-overlapping clusters. The number of resulting groups must be specified in advance. The algorithm repeatedly re-assigns the examples into the given number of clusters by minimizing their distance to a centroid and recomputes the centroids. Unlike k-means clustering, where cluster centroids are computed by averaging the values of the examples in a given cluster, each cluster in k-medoids clustering is represented by an existing, most representative example (the medoid). This makes the results of k-medoids clustering easier to interpret. The implementation in RapidMiner conveniently offers the option to tune the hyperparameters of the algorithm automatically; in our case, we adjusted the number of clusters and the similarity measure. The process of the cluster calculation performed in RapidMiner is displayed in Figure 1, and the file is attached in Supplementary Materials File S1.

Abbreviations: FO2Hb-fraction of oxygenated haemoglobin, FHHb-fraction of deoxyhaemoglobin in total haemoglobin, ctHb-total haemoglobin, Lac-lactates, mOsm-milliosmoles, HGB-haemoglobin, HCT-haematocrit, RBC-red blood cell count, MCV-mean corpuscular volume, MCH-mean corpuscular haemoglobin, MCHC-mean corpuscular haemoglobin concentration, RDW-red cell distribution width, WBC-white blood cell count, LYMPH-lymphocyte percentage, MONO-monocytes, NEUTR-neutrophils, PLT-platelet count, Ast-aspartate aminotransferase, Alt-alanine transaminase, CRP-C-reactive protein, GGTP-gamma-glutamyl transpeptidase, NT-proBNP-N-terminal prohormone of brain natriuretic peptide, INR-international normalized ratio, Fe-total iron amount in blood, TIBC-total iron-binding capacity, Tsat-transferrin saturation, sTfR-soluble transferrin receptor, IL-6-interleukin 6, eGFR-estimated glomerular filtration rate.

We assessed the quality of clustering using the Davies-Bouldin index [15]. This index evaluates the quality of clustering by considering the intra-cluster distance (which should be low) and the inter-cluster distance (which should be high). The lower the value of the Davies-Bouldin index, the better the clustering. Associations between clusters and clinical variables were evaluated. Normality was checked using the Kolmogorov-Smirnov, Shapiro-Wilk, and Lilliefors tests. Parameters with normal distributions are shown as means ± standard deviations. Non-normal variables are displayed as medians and interquartile ranges. Categorical variables are shown as numbers and percentages (Table 2). Statistical significance was evaluated using analysis of variance; p below 0.05 was considered statistically significant. Clustering was performed in RapidMiner 9.1 (RapidMiner GmbH, Dortmund, Germany), and the statistical assessment was conducted in STATISTICA 12 (StatSoft Polska Sp. z o.o., Krakow, Poland).

Clustering

The population was segmented into three clusters, enumerated from 0 to 2, comprising 158, 110, and 44 patients, respectively.

Cluster 0

Cluster 0 was the most numerous one. It comprised the highest proportion of chronic HF with reduced ejection fraction, with coronary artery disease as the underlying cause. Patients usually had a history of PCI/CABG and electrical device implantation. COPD and insulin-dependent diabetes were most frequently reported. The clinical status comprised common pulmonary congestion, moderate limb oedema, and the lowest heart rate. In laboratory parameters, they presented the lowest Ast, Alt, ferritin, IL-6, and NT-proBNP.

Cluster 1

Compared with the other clusters, this group was composed predominantly of older women. They presented with a first manifestation of HF, with preserved ejection fraction and a high comorbidity burden, i.a., diabetes and hypertension. Their clinical presentation was reflected by the most frequent NYHA IV class, the least frequent lower limb oedema and pulmonary congestion, and the highest blood pressure. In laboratory measurements, they reached the lowest haemoglobin, HCO3, bilirubin, and GGTP and the highest serum sodium and potassium concentration, serum osmolarity, glucose, Ast, Alt, IL-6, and ferritin.
Cluster 2

The last group was the youngest, with the highest proportion of males and the lowest ejection fraction. They reported the highest prevalence of stroke history and presented with frequent ascites and hepatomegaly. In laboratory parameters, they achieved the highest HGB, HCT, MCV, bilirubin, GGTP, Fe, NT-proBNP, urine creatinine, and urea and the lowest albumin. They were also the most frequent active alcohol users and smokers. The most important clinical features of each cluster are shown in Table 3 and Figure 2.

(Table 3 excerpt, cluster 1. Highest: % of females, age, ejection fraction, % of de novo HF and preserved EF, valvular and hypertensive aetiology, hypertension, diabetes, RR, mOsm, Na, K, glucose, Ast, Alt. Lowest: ascites, hepatomegaly, HGB, HCT, MCH, pH, HCO3, urine creatinine and urea. Non-significant, highest: NYHA IV, limb oedema I, JVP I, no pulmonary oedema, pCO2, IL-6, ferritin, creatinine, urine Na. Summary: first manifestation of HFpEF in an older woman, with high inflammatory markers, creatinine and osmolarity, the highest AKI and WRF occurrence, and moderate one-year mortality.)

Outcome

The global one-year mortality in the studied group was 24% (74 events). Mortality did not differ significantly between the clusters (p = 0.2); from cluster 0 to cluster 2 it was 22% vs. 22% vs. 34%. Cox regression was performed, but none of the clusters' hazard ratios reached statistical significance (p = 0.35, p = 0.75, p = 0.09), and neither did the Kaplan-Meier estimation (p = 0.21). The clusters differed in terms of the time of hospitalization and AKI and WRF occurrence. Patients in cluster 2 were the least likely to develop AKI or WRF and were hospitalized for the longest time. The outcomes and findings are summarised in Table 4. Abbreviations: WRF-worsening of the renal function, AKI-acute kidney injury, HF-heart failure.

Discussion

WRF and AKI in AHF are common complications associated with ominous outcomes [4]. The occurrence of AKI has been estimated at 9-13% of AHF patients [16,17]. The underlying causes of WRF in AHF are complex and not fully understood; the most prominent hypotheses include the impact of, i.a., congestion [18]. Given this lack of specific evidence, we decided to analyse the heterogeneity of the AHF population in the context of WRF occurrence and the possible clinical phenotypes which determine it. ML-based analysis is gaining popularity in cardiovascular research [19]. There have been some notable attempts to implement ML in the HF population [20][21][22][23][24][25][26]. Yagi tried to identify distinct phenotypes among AHF patients who experienced WRF [27]. Nevertheless, our study is the first to incorporate clustering into the analysis of the HF population with the aim of distinguishing subgroups that vary in terms of WRF. The clustering technique was able to distinguish three interesting clinical subtypes with different pathophysiology and implications for the outcome.

Cluster 0

This cluster represents a population of older men with chronic HF. We can assume that these patients represent a population with a relatively long history of cardiovascular treatment, as they are frequently secured with an electrical device and have undergone coronary intervention. They are also burdened with comorbidities, i.e., end-stage insulin-dependent diabetes and COPD. As these patients represent a chronic and fragile population, therapeutic interventions should be targeted at stable heart failure and comorbidities management [28][29][30].

Cluster 1

Cluster 1 is mainly composed of females.
It is the oldest population, with a first manifestation of HF, non-ischaemic aetiology, and preserved ejection fraction. They present signs of minimal congestion. In the biochemical assessment, patients in cluster 1 reached the highest serum creatinine, sodium, potassium, and osmolarity. This phenotype corresponds with the described HFpEF phenotype [31]. Cluster 1 achieved the highest concentration of selected inflammatory biomarkers (IL-6, ferritin), and high activation of inflammatory pathways has been reported to be unique to HFpEF [32]. Recent studies showed that higher osmolarity correlates with the incidence of WRF in AHF [33]. Importantly, this group reached the highest incidence of AKI and WRF but moderate mortality; our consideration of its explanation is presented in the next paragraph. As the HFpEF population currently suffers from a lack of evidence-based treatment, therapeutic interventions should focus on comorbidities management and lifestyle changes [2]. Some hope for efficient pharmacotherapy is provided by the recent trials on SGLT-2 inhibitors [34][35][36].

Cluster 2

Cluster 2 seems to be the most interesting. It consists almost exclusively of men. They represent the youngest population, with chronic HF, the lowest ejection fraction, and an aetiology described as "other". The patients suffered from the smallest burden of comorbidities, which can be explained by their young age and probable underdiagnosis due to low commitment to their health management. These patients can be described as having a toxic aetiology. They show the highest frequency of active smokers and alcohol users and have the highest values of GGTP and bilirubin, which reflect impaired liver function [37]. Moreover, they reached the highest mean value of MCV, which might be associated with alcohol abuse [38]. In the clinical assessment, they manifest frequent and massive peripheral oedema, i.e., the highest incidence of limb oedema III, hepatomegaly, and ascites, but somewhat limited pulmonary congestion. This discrepancy between the severity of oedema in different vascular areas should be further evaluated. Laboratory signs of congestion, e.g., NT-proBNP, are also the highest among the clusters. Notably, patients in cluster 2 achieved the lowest pCO2, which can be a sign of heightened chemosensitivity, a predictor of an unfavourable outcome [39]. Notably, the cluster with the lowest incidence of AKI and WRF (cluster 2) was the one with the highest one-year mortality (non-significant). In our opinion, that can be explained by two intertwined hypotheses. First, creatinine is a late marker of kidney function [40] and has limited value in assessing renal damage [41]. Some authors distinguish true and pseudo-WRF based on the concentration of so-called new renal biomarkers, i.e., NGAL, KIM-1, and cystatin C [42]. Considering this, an isolated increase in serum creatinine can be insufficient for an accurate kidney assessment. Secondly, creatinine can rise during decongestive therapy [43,44]. It has been reported that a transient rise of creatinine during decongestive treatment can even be a promising sign, as it reflects the thoroughness of the decongestion [45]. Thus, increased creatinine during diuretic treatment does not necessarily indicate genuine kidney injury, which would worsen the outcome; it can instead be a sign of diminishing volume overload.
The incompleteness of decongestion has been shown to be an important prognostic factor for mortality in AHF [46], which, in our case, could explain why the cluster with the lowest WRF incidence reaches the highest mortality. The proposed novel classification may complement the classical ways of AHF patient profiling and has significant clinical implications. Each of the extracted clusters has a different suggested pathophysiology and, therefore, a different therapeutic pathway that can be addressed: e.g., cluster 0, uptitration of evidence-based HFrEF medical therapy; cluster 1, comorbidities management; and cluster 2, substance abuse counselling and harm reduction. Focusing on these aspects should lead to more accurate treatment tailoring and, eventually, optimization of therapy. The efficiency of the proposed cluster-based approach to therapy adjustment should be evaluated in prospective studies. Notably, clustering does not reveal baffling relationships; the uncovered connections are clear to an experienced cardiologist. The value of the presented analysis is that it provides tangible evidence for the existence of such phenogroups. Potentially, clustering could immediately categorize a patient into one of the groups and suggest to a physician a relevant course of action, which can sometimes be omitted due to overwork or lack of experience.

Limitations

Our study is not free from limitations. Our data come from single-centre registries gathered between 2010-2012 and 2016-2017. Patients in these registries were treated according to the ESC guidelines current at the time, which did not yet include modern drugs such as SGLT-2 inhibitors. This influences the potential extrapolation of our results to the present AHF population. Further, we did not assess the novel kidney markers, which would have increased the thoroughness of the renal status evaluation. However, the presented assessment model mirrors commonly used, well-understood variables. Importantly, we only included patients who had their creatinine evaluated at four time points, including discharge; thus, we only included patients who survived the hospitalization. We also prespecified the number of clusters, as we wanted to avoid over-fragmentation of the data; however, pre-specifying the number of clusters as three follows previous papers on clustering in HF [27,47,48]. All the issues mentioned above should be addressed in further trials.

Conclusions

Machine learning techniques provide fresh insights into existing medical datasets. We were able to distinguish three clinically and prognostically different phenotypes. Importantly, these phenotypes differ in terms of AKI and WRF occurrence. These groups constitute valuable insight into the AHF and WRF interplay and may be leveraged for future trial construction and more tailored treatments. Our data provide further evidence for the hypothesis that serum creatinine concentration should be analysed in a broader context in the population of decongested patients and that its increase is not necessarily prognostically worrying. Notably, we used the k-medoids algorithm instead of the more popular k-means algorithm because k-medoids represents cluster centroids as existing data points (patients in our case). This makes the results more interpretable. The k-medoids algorithm is also more robust to outliers than the k-means algorithm [49], which is meaningful for medical data.
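As a concrete illustration of the pipeline described in the methods (mean imputation, min-max normalization, k-medoids, Davies-Bouldin scoring), here is a minimal Python sketch. It is not the RapidMiner process used in the study: the KMedoids class from the third-party scikit-learn-extra package stands in for RapidMiner's implementation, and the data are synthetic.

```python
# Hedged sketch of the clustering pipeline: impute -> normalize -> k-medoids
# -> Davies-Bouldin quality check. Synthetic data; not the study's process.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import davies_bouldin_score
from sklearn_extra.cluster import KMedoids  # from the scikit-learn-extra package

rng = np.random.default_rng(0)
X = rng.normal(size=(312, 86))          # 312 patients x 86 admission variables
X[rng.random(X.shape) < 0.02] = np.nan  # sprinkle in some missing values

X = SimpleImputer(strategy="mean").fit_transform(X)  # replace missing with averages
X = MinMaxScaler().fit_transform(X)                  # scale every variable to [0, 1]

# Tune the number of clusters by the Davies-Bouldin index (lower is better)
for k in (2, 3, 4, 5):
    labels = KMedoids(n_clusters=k, metric="euclidean", random_state=0).fit_predict(X)
    print(k, round(davies_bouldin_score(X, labels), 3))
```

Each fitted cluster centre here is an actual row of X, i.e., a real patient, which is the interpretability advantage of k-medoids discussed above.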
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biom12111616/s1, File S1: RapidMiner process. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available within the article. Further data are available on request from the corresponding author.
4,342.8
2022-11-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Chaperone Proteins Select and Maintain [PIN+] Prion Conformations in Saccharomyces cerevisiae Background: Prion proteins adopt different conformations, known as variants, each with a distinct phenotype. Results: Deletion of specific chaperone genes (HSC82, AHA1, CPR6, CPR7, SBA1, TAH1, SSE1) alters established [PIN+] variants in S. cerevisiae. Conclusion: Chaperone proteins have a role in determining prion variants. Significance: Chaperone activity helps to regulate the cell prion phenotype. Prions are proteins that can adopt different infectious conformations known as "strains" or "variants," each with a distinct, epigenetically inheritable phenotype. Mechanisms by which prion variants are determined remain unclear. Here we use the Saccharomyces cerevisiae prion Rnq1p/[PIN+] as a model to investigate the effects of chaperone proteins upon prion variant determination. We show that deletion of specific chaperone genes alters [PIN+] variant phenotypes, including [PSI+] induction efficiency, Rnq1p aggregate morphology/size and variant dominance. Mating assays demonstrate that gene deletion-induced phenotypic changes are stably inherited in a non-Mendelian manner even after restoration of the deleted gene, confirming that they are due to a bona fide change in the [PIN+] variant. Together, our results demonstrate a role for chaperones in regulating the prion variant complement of a cell. The mammalian prion protein, PrP, was originally identified as the causative agent of a group of neurodegenerative disorders collectively known as the transmissible spongiform encephalopathies (1). When the prion-determining domain (PrD) of PrP misfolds, PrP can switch from a non-infectious conformation (PrPC) to a conformation prone to forming self-propagating, β-sheet-rich amyloid polymers (PrPSc) (1,2). These amyloids spread by catalyzing the transformation of PrPC into PrPSc, eventually forming large aggregates (for review, see Ref. 3). Although genetic polymorphisms of the PrP gene do affect its disease pathology (4,5), distinct sets of symptoms arising from genetically identical PrPSc, called strains, have been described (6,7). Protease treatment of different strains of PrPSc aggregates revealed that they differed by the size and composition of their amyloid core region, suggesting that it is specific conformations of the amyloid that give rise to PrPSc strains (8). Prions have also been characterized in non-mammalian model organisms, including bakers' yeast, Saccharomyces cerevisiae, where they have been demonstrated to occur frequently in the wild (9-13). Yeast prions can also form different strains, called variants (14-16). Studies have shown that different prion conformations distinguish each prion variant (17-19). The prion protein Rnq1p can adopt different variants, each with a distinct phenotype. Although the function of its non-prion conformation remains unknown, the prion conformation of Rnq1p, [PIN+], acts to help another yeast prion, Sup35p/[PSI+], adopt its own prion state (20,21). Rnq1p has a prion-determining domain between amino acids 153 and 405, with a non-prion N-terminal domain (11). [PIN+] variants are commonly distinguished by the efficiency with which they promote [PSI+] induction, e.g., [PIN+]low, [PIN+]medium, and [PIN+]high (16). Another [PIN+] variant-linked phenotype is observed when GFP-tagged Rnq1p is overproduced in vivo (22). [PIN+]high strains form multiple Rnq1-GFP foci per cell, whereas [PIN+]medium and [PIN+]low strains generally form only a single Rnq1-GFP focus. Another phenotypic difference between [PIN+] variants is the size and stability of their amyloid aggregates.
[PIN+]high contains less stable aggregates that break down into smaller subparticles when heated, as opposed to [PIN+]low or [PIN+]medium aggregates, which remain stable when heated (16,23,24). When two [PIN+] variants are introduced into the same cell, either through mating or cytoduction, the diploid and all haploid progeny adopt the phenotype of the dominant variant. [PIN+]high is dominant over [PIN+]medium, which is in turn dominant over [PIN+]low (16). These multiple, distinct phenotypes make [PIN+] an ideal model system with which to study the etiology of prion variants. Mutations in a prion protein can affect its amyloid structure and, through that, the type of variant it adopts (25,26). Still, variants can arise from prion proteins with identical sequences (14-16, 27), suggesting that other cellular factors may influence the variant conformation that a given prion will adopt. Chaperone proteins are strong candidates for such variant-regulating factors. Chaperones are known to affect the conformation of a wide array of client proteins (for review, see Ref. 28) and have been implicated in other aspects of prion biology, including the de novo formation, propagation, and curing of prions (for review, see Refs. 29-31). Additionally, changes to chaperone activity and levels have been shown to affect prion variants. Yeast strains expressing N- and C-terminal truncations of the primary stress response transcriptional regulator, Hsf1p, which were shown to increase Hsp104p and decrease Hsp90 levels, respectively, preferentially formed specific [PSI+] variants upon de novo induction (32). Likewise, strains over- or underexpressing SSE1, which is important to Hsp70 activity, gave rise to specific [PSI+] variants when induced (33). To date, only one genetic mutation has been reported to lead to a change in a pre-existing variant. Sondheimer et al. (27) demonstrated that deletion of the Sis1p G/F domain altered the Rnq1-GFP aggregation pattern in a manner stably propagated even after reintroduction of wild-type Sis1p. Here, we report the findings of our investigations into the actions of chaperone proteins upon already established [PIN+] variants. We found that disruption of several chaperone genes gives rise to shifts in [PIN+] variant-linked phenotypes. Genetic analysis showed that the phenotypic shifts are inherited in a non-Mendelian manner, confirming that a bona fide change in [PIN+] variant was achieved. Our findings provide evidence that chaperones can affect established prion variants and highlight a potential role for chaperones in regulating prion-linked phenotypes through their modulation of prion variants.

EXPERIMENTAL PROCEDURES

Yeast Strains, Culture, and Genetic Manipulation-S. cerevisiae strains used in this study are listed in supplemental Table S1. All yeast strains were cultured at 30°C with the exception of diploids undergoing sporulation, which were cultured at 25°C. Media were as follows: YEPD (1% yeast extract, 2% peptone, 2% glucose); CSM (0.67% yeast nitrogen base without amino acids, 2% glucose, 1× Complete supplement mixture (Bio 101, Vista, CA)); CSM auxotrophic marker growth medium (same as CSM but containing 1× CSM minus the appropriate auxotrophic selection (−ADE, −HIS, −LEU, −URA, −LYS −URA, −TRP −URA)); CSM auxotrophic marker induction medium (same as CSM but with 2% galactose in place of 2% glucose). For solid media, the same recipes were used with 2% agar added.
Deletions were made using a HIS3 cassette as described (34) and confirmed by PCR. Isogenic MATα strains were made using YCpGAL::HO and mating-type switching (35). Mating, diploid sporulation, and dissection of haploids were performed as previously described with modified sporulation medium (0.3% potassium acetate, 0.02% raffinose) (36). Haploid progeny of dissections were tested for the presence of a gene deletion by culture on CSM−HIS plates.

Yeast Plasmids and Cloning-Plasmids used in this study were constructed as follows. pYES2.0-SUP35NM-GFP was constructed by ligating the sequence encoding enhanced GFP (37) in-frame with the 3′-end of base pairs 1-762 of the SUP35 gene into pYES2.0 (Invitrogen). pYES2.0-GFP was similarly constructed with only the GFP-encoding sequence. pYES2.0-Rnq1-GFP was constructed by amplifying the sequence coding for Rnq1p C-terminally tagged with GFP from genomic DNA of a commercially available GFP-tagged library strain (Invitrogen) and ligating it into pYES2.0. pGREG535-Rnq1 was constructed by amplifying RNQ1 and inserting it into pGREG535 according to the Drag & Drop protocol (38). Plasmids were verified by sequencing. Escherichia coli and yeast were transformed with plasmid using standard chemical transformation protocols (34,39).

Assay for Nonsense Suppression-To quantify [PSI+] induction efficiency based upon nonsense suppression, Sup35NM-GFP or GFP alone was overproduced by culturing strains harboring pYES2.0-SUP35NM-GFP or pYES2.0-GFP, respectively, in liquid CSM medium for 24 h followed by subculturing in liquid CSM induction medium for 48 h. When testing primary deletions and diploids, cell cultures were normalized to an A600 of 1.0, and when testing tetrads, cultures were normalized to an A600 of 0.25. After normalization, 10-fold serial dilutions were made. 5 μl of each dilution were spotted onto both CSM and CSM−ADE plates, and colony-forming units (cfu) were counted after 2 and 4 days, respectively. Percent [PSI+] induction efficiency was calculated as cfu(CSM−ADE)/cfu(CSM) × 100. Four to six independent experiments were performed for each strain.

Assay for Plasmid Retention-To determine if a given strain retained pYES2.0-SUP35NM-GFP, induced strains were plated for single colonies on both CSM and CSM−ADE, and colonies from both platings were then replica-plated onto CSM and CSM−URA media. Percent plasmid retention was calculated as cfu(CSM−URA)/cfu(CSM) × 100. Between 100 and 200 colonies were compared for each strain.

Assay for Prion Curing-To test induced [PSI+] strains for curability, colonies growing on CSM−ADE medium plates were streaked onto curing plates (YEPD + 3 mM guanidine HCl) and allowed to grow at 30°C for 3 days. Putatively "cured" colonies were then selected based on their pigmentation, restreaked onto CSM and CSM−ADE plates, and allowed to grow at 30°C for 2 and 4 days, respectively. At least four colonies were tested for each strain.

Characterization of Rnq1-GFP Aggregates in Vivo-Strains carrying pYES2.0-Rnq1-GFP were cultured for 24 h in liquid CSM growth medium, then subcultured in liquid CSM induction medium for 24 h. The number of Rnq1-GFP foci per cell was then quantified by acquiring random wide-field micrographs of cells using an Olympus IX-80 fluorescence microscope. The frequency of multiple foci was expressed as the percentage of all cells containing multiple Rnq1-GFP foci. 800-1000 cells were counted and categorized over the course of 4 independent experiments.
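To make the cfu arithmetic of the nonsense suppression and plasmid retention assays above explicit, here is a minimal sketch; the colony counts and the dilution bookkeeping are invented for illustration and are not the authors' data.

```python
# Hedged sketch: percent [PSI+] induction efficiency and plasmid retention
# from colony-forming unit (cfu) counts, as defined in the text above.
# All counts below are invented for illustration.

def cfu_per_ml(colonies: int, dilution_factor: float, spot_volume_ml: float = 0.005) -> float:
    """Back-calculate cfu/ml from colonies counted on one 5-ul spot."""
    return colonies * dilution_factor / spot_volume_ml

# Colonies counted at the 10^3-fold dilution on each medium
total_cfu = cfu_per_ml(colonies=250, dilution_factor=1e3)  # CSM: all viable cells
ade_cfu = cfu_per_ml(colonies=12, dilution_factor=1e3)     # CSM-ADE: ADE-competent ([PSI+]) cells

induction_efficiency = ade_cfu / total_cfu * 100  # cfu(CSM-ADE)/cfu(CSM) x 100
print(f"[PSI+] induction efficiency: {induction_efficiency:.1f}%")

# Plasmid retention: colonies replica-plated from CSM that also grow on CSM-URA
retention = 186 / 200 * 100  # cfu(CSM-URA)/cfu(CSM) x 100
print(f"plasmid retention: {retention:.0f}%")
```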
Protein Analysis-Strains carrying pGREG535-Rnq1 were cultured for 24 h in liquid CSM growth medium, then subcultured in liquid CSM induction medium for 8 h. Protein samples were prepared and pelleted as described (24). The pellet fraction, enriched for insoluble HA-Rnq1p, was heated briefly at 55°C and analyzed by semi-denaturing detergent-agarose gel electrophoresis (SDD-AGE) (24,40). Blots were probed with anti-HA antibody (F-7 Sc7392, Santa Cruz Biotechnology, Santa Cruz, CA) and detected with HRP-conjugated anti-mouse IgG (GE Healthcare).

Yeast Two-hybrid Analysis-Yeast two-hybrid analysis was performed using the yeast strain HF7c and the plasmids pGAD424 (prey) and pGBT9 (bait) (Clontech, Mountain View, CA) following standard protocols (41). Genes encoding Rnq1p or chaperone proteins were ligated in-frame into pGAD424 and pGBT9 for expression. Interactions were scored by growth of cells on CSM−HIS medium after 4 days of incubation relative to growth on YEPD. At least three independent experiments were done for each tested pair of proteins.

RESULTS

We measured the relative [PSI+] induction efficiencies of deletion strains (Fig. 1A) using a nonsense suppression assay (Fig. 1B) as described under "Experimental Procedures." In brief, we overproduced Sup35NM-GFP using the galactose-driven expression vector pYES2.0-SUP35NM-GFP and quantified cfu on CSM−ADE medium plates relative to cfu on CSM medium plates. ADE-competent colonies were confirmed to be prion-linked because they lost ADE competence after treatment with guanidine HCl (supplemental Fig. S1). The possibility that the extent of [PSI+] induction in a strain depended on its level of plasmid retention was also excluded, as both wild-type and deletion strains retained plasmid at levels between 90 and 95% whether or not the cells were ADE-competent (data not shown).

We first quantified the localization of Rnq1-GFP in all the deletion strains affected in [PSI+] induction efficiency (Fig. 2A). When GFP-tagged Rnq1p was overproduced from the galactose-driven plasmid pYES2.0-Rnq1-GFP, ~75% of focus-containing wild-type [PIN+]low and wild-type [PIN+]medium cells contained a single Rnq1-GFP focus, whereas ~25% contained more than one focus. ~50% of wild-type [PIN+]high focus-containing cells contained multiple foci, in agreement with previous findings (22).

[Figure 2 legend: A, Rnq1-GFP was overproduced in deletion strains of interest, and the frequency of multiple Rnq1-GFP foci in foci-containing cells was calculated as a percentage of total cells. Error bars represent S.E. Student's t tests were performed to compare the frequency of multiple Rnq1-GFP foci in cells containing foci with that of their parental wild-type strain. **, p < 0.01; ***, p < 0.001. B, the Rnq1p aggregate size in deletion strains of interest was compared by SDD-AGE. Samples were incubated at room temperature (top) or 55°C (bottom) before loading.]

We next characterized the sizes of Rnq1p amyloid subparticles in the deletion strains (23). Strains with a single focus display a relatively narrow range of large subparticles, whereas strains with multiple foci display subparticles ranging in size from monomer to as large as, or larger than, those found in strains with a single focus. Accordingly, we produced HA-tagged Rnq1p from the plasmid pGREG535-Rnq1 in the wild-type [PIN+]low, [PIN+]medium, and [PIN+]high strains and in these strains harboring a gene deletion.
We then prepared samples enriched for proteins in an insoluble prion state and incubated them at room temperature or 55°C, as Rnq1p aggregates purified from strains with multiple Rnq1p foci, specifically [PIN+]high, have been shown to be more sensitive to changes in temperature and to degrade partially at moderately elevated temperatures, whereas other Rnq1p variant aggregates do not (24). Finally, we compared the sizes of the purified HA-Rnq1p amyloid aggregates using SDD-AGE (Fig. 2B). Aggregates from several of the deletion strains also proved to be sensitive to temperature, degrading in part to a monomer at 55°C (Fig. 2B). It is important to note that the pattern of protein migration displayed by these partly degraded protein aggregates is easily distinguished from that produced by protein aggregates isolated from a [pin−] strain. [pin−] aggregates were monomeric and of low molecular weight, whereas partially degraded aggregates from [PIN+] strains still displayed species of high molecular weight. Our results suggest that deletion of specific chaperone genes leads to changes in the physical properties of the prion amyloid, thereby affecting its heat sensitivity. Our results show a correlation between chaperone gene deletion strains displaying altered [PSI+] induction efficiency and those displaying changes in other [PIN+]-linked phenotypes. The tah1Δ strains were exceptions: they were unaffected in the sizes of their HA-Rnq1p aggregates vis-à-vis the sizes of the aggregates in their corresponding wild-type [PIN+] variant background. Also of note is that although the sse1Δ strain in the [PIN+]low background showed multiple Rnq1-GFP foci per cell, its aggregates were limited to a narrow range of large sizes.

Deletion-induced Phenotypes Are Inherited in a Non-Mendelian Manner-The [PIN+] variant-linked phenotypic changes we observed in strains deleted for chaperone genes could be due directly to the effects of gene deletion or could be due simply to changes in the cell chaperone complement and to the overproduction of tagged protein resulting from our methodology. For example, loss of a chaperone could impair the ability of the cell to deal with the overexpression of tagged Rnq1p, leading to formation of denatured, non-prion aggregates that are detectable in our assays as multiple Rnq1-GFP foci and/or low molecular weight aggregates in SDD-AGE. To eliminate this trivial explanation and to confirm that it is indeed deletion of the chaperone genes that gives rise to the observed [PIN+] variant shifts, we reintroduced a wild-type copy of a deleted chaperone gene by crossing a deletion strain with an isogenic wild-type strain. This wild-type strain was [pin−] so as to avoid any complication related to introducing dominant [PIN+] variants (16). If the effects of deletions were specific to the deletion of the gene, then restoring the chaperone gene would be expected to restore the original [PIN+] variant phenotype. If, on the other hand, a permanent change in the [PIN+] variant had occurred in the deletion strain, then the phenotypic changes should persist after reintroduction of the chaperone gene. Fig. 3 shows that, for the most part, [PSI+] induction efficiencies, the frequency of Rnq1-GFP foci, and the sizes of Rnq1p aggregates of diploid strains did not revert to the phenotypes of the original wild-type strains from which the deletion strains were derived. Instead, these diploid strains maintained the phenotypes of the deletion strains.
The only exceptions were diploids arising from matings with sse1Δ in the [PIN+]low background and with tah1Δ in both the [PIN+]low and [PIN+]medium backgrounds. The increased frequency of Rnq1-GFP foci in the original haploid strains deleted for SSE1 or TAH1 was eliminated in the diploid strains, suggesting that these changes were [PIN+]-independent. It should be noted that under our experimental conditions we were unable to statistically distinguish [PIN+]low from [PIN+]medium based upon [PSI+] induction efficiencies (supplemental Fig. S2). As such, we could categorize our results only as being consistent with either [PIN+]low/medium or [PIN+]high levels. There was the remote possibility that spurious mutations had been introduced into our deletion strains and had produced dominant effects mimicking an apparent variant switch. If this were the case, it would be expected that this trait would be co-inherited in a Mendelian manner, as opposed to the non-Mendelian manner of a true change in variant. To eliminate the possibility of introduced spurious mutations, we sporulated and dissected our diploid strains, yielding wild-type and deletion-carrying haploid progeny. The presence of the chaperone gene deletion cassette (HIS3) was detected by growth on CSM−HIS medium and showed a 2:2 ratio in the progeny, as expected (data not shown). We then measured the [PSI+] induction efficiency of the dissected tetrads by nonsense suppression assay (Fig. 4). The specificity of this assay was demonstrated by mating wild-type strains carrying different [PIN+] variants with a strain deleted for HSP104, which is required for both [PSI+] and [PIN+] maintenance (20,42). The wild-type haploid progeny were inducible at levels similar to their parental strains, whereas the hsp104Δ haploids were unable to be induced. When the chaperone gene deletion strains were mated with the wild-type [pin−] strain, we found that the haploid progeny of 24 tetrads all maintained the same [PSI+] induction efficiency as the parental deletion strain, regardless of the presence or absence of the chaperone gene of interest. These levels were consistent with either [PIN+]low/medium or [PIN+]high levels. Haploid progeny were found to retain pYES2.0-Sup35NM-GFP at a rate consistent with the parental strains, with no strain losing or retaining the plasmid at a markedly higher level than any other strain. Also, putative [PSI+] colonies that arose after [PSI+] induction were shown to be curable by growth on medium containing guanidine HCl (supplemental Fig. S3). The Rnq1p aggregates of wild-type and mutant haploid progeny derived from the mating of deletion strains that gave rise to [PIN+] variant-related phenotypic changes were characterized using SDD-AGE (Fig. 5). We found that the deletion-induced changes in aggregate size were stable even after tetrad dissection. Deletion strains that exhibited no change in aggregate size were found to be consistent with the wild-type strain in which they were made (data not shown). Taken together, these results confirm that the deletion-induced phenotypic shifts are inherited in a non-Mendelian manner and reflect bona fide changes in the [PIN+] variant.

[Figure 4 legend: [PSI+] induction efficiency by nonsense suppression assay on −ADE medium. Wild-type strains were mated with hsp104Δ strains to demonstrate genetic specificity for this assay. Wild-type (+) and deletion-carrying (−) haploid progeny are presented. Twenty-four tetrads were analyzed per diploid.]

Rnq1p Interacts Physically with Chaperone Proteins-We performed a yeast two-hybrid analysis to investigate possible physical interactions between our chaperones of interest and Rnq1p (Fig. 7). With growth on CSM−HIS medium as a reporter of interaction, we detected a previously documented interaction between Rnq1p and Tah1p (43).
We also identified two novel interactions: Cpr7p and Sba1p with Rnq1p. No interaction was detected between the remaining chaperones and Rnq1p. When we characterized the [PIN+] variant of our yeast two-hybrid strain (HF7c) by SDD-AGE and localization of Rnq1-GFP (supplemental Fig. S5), we found HF7c to be [pin−], suggesting that the interactions we detected occur when Rnq1p is in its non-prion conformation.

DISCUSSION

Although recombinant S. cerevisiae prion proteins do not require cofactors to misfold into multiple infectious variants in vitro, they need the activity of different chaperone proteins to propagate stably in vivo. Most prominent among these chaperones are Hsp104p, Sis1p, and members of the Ssa subfamily (for review, see Refs. 29-31). Together these chaperones facilitate the fragmentation of growing amyloid fibrils, thereby exposing more fibril growing ends and generating more infectious prion seeds. It has been proposed that conformational differences between variants alter their susceptibility to fragmentation and their rate of fibril growth and that the equilibrium between these processes determines the stability of prion propagation and the strength of the prion phenotype (18,44,45). For example, compared with [PSI+]weak, [PSI+]strong has a smaller amyloid core that fragments more easily, allowing for the generation of more growing ends and a higher rate of Sup35p incorporation into a greater number of prion seeds (18). These seeds in turn lead to increased nonsense suppression and more stable propagation of [PSI+]. Chaperones have also been implicated in variant determination, with alteration of Hsf1p or Sse1p activity affecting the de novo induction of the [PSI+] variant and truncation of Sis1p leading to stable changes in the established [PIN+] variant (27,32,33). Our study provides further evidence that chaperones are important for the selection of prion variants. In principle, deletion of AHA1 or the other chaperone genes could have altered [PSI+] induction without any change in the [PIN+] variant (46). However, such a scenario is unlikely in our case, because the changes we observed in [PSI+] induction in our chaperone gene deletion strains were also accompanied by changes in the size and localization pattern of Rnq1p, consistent with a shift in [PIN+] variant. Additionally, these phenotypic shifts persist after reintroduction of the deleted chaperone gene and subsequent sporulation of diploids into wild-type and deletion-carrying haploid progeny. Together, our results demonstrate that the chaperone gene deletion-induced phenotypes we observed are due to stable shifts in the [PIN+] variant. Like other chaperone gene deletions, deletion of TAH1 or SSE1 in the [PIN+]low or [PIN+]medium backgrounds gave rise to changes in [PIN+] variant-dependent phenotypes. However, in contrast to what was observed for the other chaperone gene deletion strains, not all of the tested phenotypes were altered in TAH1 or SSE1 gene deletion strains, and of those phenotypes that were altered, not all were maintained after the wild-type gene was restored. This suggests that at least some of the changes observed in the tah1Δ and sse1Δ strains are not the result of shifts in the [PIN+] variant and that any putative variant shift that may have occurred in these deletion strains does not correspond to a previously characterized variant (16, 22, 32).
[Figure 5 legend: Wild-type strains were mated with hsp104Δ strains to demonstrate genetic specificity for this assay. Wild-type (+) and deletion-carrying (−) haploid progeny are presented. The hsp104Δ samples and wild-type controls were run on a continuous gel and exposed for a longer period. Twenty-four tetrads were analyzed per diploid.]

We detected physical interactions of Rnq1p with Cpr7p, Sba1p, and Tah1p but not with the other chaperones. This suggests that some of these chaperones may act upon Rnq1p independently of Hsp90. Cpr7p, for example, has intrinsic proline isomerase activity (52,53). Rnq1p contains three proline residues: one in the N-terminal region, a region shown to affect prion propagation (54), and two in a putative loop region between the β-sheets of the amyloid core (55). Isomerization of these prolines could potentially affect Rnq1p conformation. Additionally, as numerous physical interactions between these chaperones and other chaperone complexes have been documented, changes in the levels of these chaperones may affect how Hsp40s, Hsp70s, and/or Hsp104p interact with Rnq1p, resulting in variant change. Cpr7p, for example, has been shown to interact with Hsp104p in a manner that is not essential for its thermotolerance activity (56,57). Additionally, inhibition of Hsp90 ATPase activity has been shown to increase the levels of both Hsp104p and Hsp70 (58). The effect of SSE1 deletion that we report here also implicates Hsp70s in variant change, as SSE1 encodes an important Hsp70 nucleotide exchange factor (59). Fan et al. found that manipulation of Sse1p levels affected the [PSI+] variant (33), although our sse1Δ strain did not display the same disposition toward an unstable weak [PSI+] variant. The difference between our results and theirs could be due to [PIN+] variant-specific effects, as Fan et al. (33) did not characterize the [PIN+] variant of their strain. Interestingly, mutations in Sis1p have been reported to give rise to an increase in Rnq1-GFP foci (27), similar to what we observed. It may be that our gene deletions indirectly impaired the activity of Sis1p and/or its association with Rnq1p. How the loss of specific chaperones alters the levels and activities of other chaperones, in addition to their interaction with Rnq1p, will be an important avenue of future investigation to clarify the mechanisms underlying the variant changes that we observed. Chaperones could mediate [PIN+] variant changes by altering the conformation of Rnq1p. For example, chaperones could regulate the folding of monomeric Rnq1p in such a way as to predispose it to adopt a specific variant upon de novo formation or upon encountering prion seeds. More likely, because we observed changes to established variants, chaperones could work together to affect the conformation of existing amyloid polymers. For this to be effective, only the growing ends of polymers need be remodeled. Alternatively, if multiple or unstable variants are present at the same time in the cell, as has been shown to occur for [PSI+] (60), changes in the chaperone environment could alter the rates of amyloid polymer fragmentation and/or prion seed generation in a variant-specific manner. In this way, one variant could be selected over others. In closing, we have demonstrated that altering the chaperone complement of a cell can alter existing prion variants without the introduction of exogenous prion material.
By modulating existing prion variants, the cell can maintain prion seeds while mitigating the potential negative effects of stronger prion phenotypes. Also, when environmental pressures demand, the existing prion variant could be quickly altered to provide a more advantageous phenotype to the cell. In light of recent findings reporting the prevalence of prions in wild strains of yeast, as well as the apparent survival advantages that they bestow (13), prion variant regulation represents a powerful mechanism for modulating a cell's response and adaptability to changing environmental conditions.
The Structure, Magnetic and Absorption Properties of Zn-Ti Substituted Barium-Strontium Hexaferrite Prepared by Mechanochemical Process

Synthesis and characterization of (Ba,Sr)0.5Fe12-xZnxTixO19 M-type hexaferrite were conducted using a mechanochemical process. The Zn-Ti content x was varied as x = 0.0, 0.2, 0.6, and 1.0. The samples were characterized using an X-ray diffractometer (XRD), a scanning electron microscope (SEM), a vibrating sample magnetometer (VSM), and a vector network analyzer (VNA). The XRD patterns were further analyzed with a Rietveld analysis program. The Rietveld analysis indicated that the substitution of Zn-Ti ions led to an expansion of the hexagonal lattice parameters and unit cell volume (Vcell), while the atomic density decreased with increasing Zn-Ti content. The VSM measurements showed that Zn-Ti substitution changed the magnetic properties, such as the intrinsic magnetic coercivity (Hci), remanence (Mr), and saturation magnetization (Ms). The value of Hc dropped significantly upon Zn-Ti substitution and continued to decrease with increasing Zn-Ti content for x = 0.6 and x = 1.0. SEM observation revealed that all particles had a nearly hexagonal platelet shape, with most particle sizes in the range of 200-600 nm. The variation of reflection loss with frequency was measured by VNA for (Ba,Sr)0.5Fe12-x(ZnTi)xO19 hexaferrite with x = 0.0-1.0. The optimum reflection loss (RL) was found to be -48 dB at 14 GHz in (Ba,Sr)0.5Fe11.6(ZnTi)0.2O19 hexaferrite (x = 0.2).

Introduction

Barium hexaferrite is one of the M-type hexaferrites. It has a large saturation magnetization, high coercivity, high Curie temperature, large uniaxial magnetocrystalline anisotropy, and excellent chemical stability, which makes it a promising candidate for the development of microwave absorbing materials [1-3]. Barium hexaferrite has a significantly larger crystalline anisotropy than the cubic spinel or garnet ferrites owing to its lower crystal symmetry, so its resonance frequency can reach the GHz range [4-6]. Furthermore, the position of the resonance can be tuned over a wide frequency range by ion substitution in barium hexaferrite. Many reports are available on the synthesis and characterization of barium hexaferrite substituted with various cations, such as Sr, Pb, and La for Ba and Mn-Ti, Co-Ti, and Ni-Cr for Fe [7-11]. These studies revealed that the structural, magnetic, and microwave absorbing properties of barium hexaferrite are strongly influenced by the substitution of multivalent cations. This research therefore focuses on the effect of ion substitution on the structure and the magnetic and absorbing behavior of (Ba,Sr)0.5Fe(12-2x)(ZnTi)xO19 (x = 0.0, 0.2, 0.6, 1.0) hexaferrite. The main objectives are to fabricate (Ba,Sr)0.5Fe(12-2x)(ZnTi)xO19 (x = 0.0, 0.2, 0.6, 1.0) using the mechanochemical method and to investigate the influence of the Zn-Ti doping concentration on the structural, magnetic, and microwave absorbing performance. The synthesis was done through a mechanochemical process. The obtained materials were then characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), and vibrating sample magnetometry (VSM) to study the samples' microstructure and magnetic properties.
The reflection and transmission of microwaves were characterized using a vector network analyzer (VNA, ADVANTEST R3770) over a frequency range of 7 to 15 GHz.

Experimental Method

Titanium dioxide (TiO2), zinc oxide (ZnO), hematite (α-Fe2O3), barium carbonate (BaCO3), and strontium carbonate (SrCO3) were used as starting materials. A series of (Ba,Sr)0.5Fe12-xZnxTixO19 samples with x = 0.0, 0.2, 0.6, and 1.0 were synthesized by a mechanochemical method using high-energy milling (HEM, CertiPrep 8000 mixer) [12,13]. The weighed powders for each doped sample were mixed together in a stainless steel vial. The milling of each sample was performed for 5 hours, with a ball-to-sample weight ratio of 10:1. The sample was then dried in an oven at 80 °C, followed by sintering at 1000 °C for 5 hours. The crystalline structure was determined with an X-ray diffractometer (XRD, Philips PANalytical Empyrean) using Cu-Kα radiation (λ = 1.5405 Å) over a 2θ range of 20-70°. The XRD data were analyzed using the General Structure Analysis System (GSAS) program [14]. The morphology of the samples was characterized by scanning electron microscopy (SEM, JEOL JSM-6510). The magnetic measurements were carried out with a vibrating sample magnetometer (VSM, Oxford 1.2 T) at room temperature. The reflection loss of the absorber was measured with a microwave network analyzer (ADVANTEST R3770) in the frequency range of 7-15 GHz.

Structure Properties

Figure 1 shows the XRD patterns of the samples. From the SEM images (Figure 2), it can be seen that the particles inside the samples grew with relatively uniform shapes. In Figure 2(a), the particles have a nearly spherical shape, with a fairly even size distribution below 600 nm. In Figures 2(b), 2(c), and 2(d), the particles show a nearly platelet shape, with sizes below 1000 nm. Figure 3 shows the Rietveld refinement results for the XRD patterns of the (Ba,Sr)0.5Fe12-xZnxTixO19 samples without and with Zn-Ti doping (x = 0.0, 0.2, 0.6, and 1.0). The refinement produced a very good quality of fit, with R factors and goodness-of-fit values χ2 of less than 1.2, as shown in Table 1. In Figure 3, the red plus symbols (+) represent the observed data, the green line is the calculated pattern, the Miller indices are marked by bars (|), and the difference between observation and calculation is represented by the purple line. Based on Figures 3(a) and 3(b), (Ba,Sr)0.5Fe12-xZnxTixO19 for x = 0.0 and x = 0.2 was identified as a single-phase hexagonal structure, indicating that the doping elements had been substituted into the structure successfully. The diffraction patterns are in accordance with JCPDS #84-1531, with space group P63/mmc. A small change is found for x = 0.6 and x = 1.0, whose patterns show peak broadening and additional peaks corresponding to impurity phases, which may be due to differences in the molarities of the precursors. The impurity phase contents obtained from the GSAS analysis are listed in Table 1. Based on Table 1, phase purities of about 70% and 60% were achieved with Zn-Ti substitution for x = 0.6 and x = 1.0, respectively. Earlier studies reported that impurity phases such as Fe2O3 begin to dissolve at temperatures greater than 1000 °C, which helps the aggregation of particles and significant grain growth during the sintering process [1,12].
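As a side note on the structural analysis, the lattice parameters discussed below can be cross-checked without a full Rietveld refinement: for a hexagonal cell, 1/d² = (4/3)(h² + hk + k²)/a² + l²/c², so a and c follow from a handful of indexed reflections via Bragg's law. The sketch below illustrates this with hypothetical peak positions for the (110), (107), and (114) reflections of an M-type hexaferrite; the values are illustrative placeholders, not data from this study.

```python
import numpy as np

WAVELENGTH = 1.5405  # Cu-K-alpha wavelength in angstroms

def d_spacing(two_theta_deg: float) -> float:
    """Bragg's law: d = lambda / (2 sin(theta))."""
    theta = np.radians(two_theta_deg / 2)
    return WAVELENGTH / (2 * np.sin(theta))

def hexagonal_lattice(peaks):
    """Solve 1/d^2 = (4/3)(h^2 + hk + k^2)/a^2 + l^2/c^2 for a and c
    by least squares over a list of (h, k, l, two_theta) peaks."""
    A, y = [], []
    for h, k, l, tt in peaks:
        A.append([(4.0 / 3.0) * (h * h + h * k + k * k), l * l])
        y.append(1.0 / d_spacing(tt) ** 2)
    (inv_a2, inv_c2), *_ = np.linalg.lstsq(np.array(A), np.array(y), rcond=None)
    return 1 / np.sqrt(inv_a2), 1 / np.sqrt(inv_c2)

# Hypothetical 2-theta positions for an M-type hexaferrite (illustrative only)
peaks = [(1, 1, 0, 30.3), (1, 0, 7, 32.2), (1, 1, 4, 34.1)]
a, c = hexagonal_lattice(peaks)
print(f"a = {a:.4f} A, c = {c:.4f} A")  # roughly a ~ 5.89 A, c ~ 23.2 A for BaM
```

With consistent input the least-squares solve is essentially exact; with real, noisy peak positions it provides a quick sanity check on the refined values.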
The variation of the lattice parameters (a, c), unit cell volume, and atomic density as a function of Zn-Ti content is shown in Figure 4. The values of a and c increase slightly with increasing Zn-Ti content, as indicated in Figure 4(a). Zn-Ti doping also increases the unit cell volume (Vcell) quite significantly, especially for x = 1.0, while the atomic density decreases, as depicted in Figure 4(b). This indicates that the structure becomes slightly elongated along the a-axis and the c-axis. The dopant ions thus change the lattice parameters, the cell volume, and the atomic density. The atomic substitution of Fe2+/Fe3+ by Zn2+/Ti4+ has a significant impact on the lattice parameters: energy perturbation and lattice distortion are expected when Fe2+/Fe3+ are replaced by Zn2+/Ti4+ ions, owing to the difference in their ionic radii and orbitals (Zn2+ = 0.74 Å and Ti4+ = 0.68 Å, compared with Fe3+ = 0.645 Å). This changes the anisotropy energy and hence the magnetic behavior, particularly the coercivity, which decreases with Zn-Ti doping, as shown in Figure 5.

Figure 5 shows the hysteresis loops of the (Ba,Sr)0.5Fe12-x(ZnTi)xO19 powders with Zn-Ti doping of x = 0.0, 0.2, 0.6, and 1.0. The characteristic magnetic parameters are listed in Table 2, and the variations of the Ms, Mr, and Hc values are shown in Figures 5(b), 5(c), and 5(d). The substitution of Zn-Ti ions causes a significant decrease in the intrinsic coercivity Hci and has little effect on the saturation magnetization Ms at low substitution levels. Based on Table 2, Hc was observed to be 4.00 kOe in the unsubstituted sample. A small amount of Zn-Ti doping (x = 0.2) did not significantly affect the structure but had a great influence on the magnetic properties: the Ms value remained almost constant, while Hc rapidly decreased from 4.00 kOe to 2.35 kOe. As x increased to 1.0, Ms decreased to 74 emu/g and Hc to 1.00 kOe. The remanent magnetization Mr also decreased with increasing substitution, owing to the reduced net alignment of grain magnetization caused by weak inter-grain exchange interactions. The high Hc of the undoped sample is due to the uniaxial magnetocrystalline anisotropy along the c-axis; the c-axis is very sensitive to the magnetic properties and anisotropy of Ba-hexaferrite and is responsible for the coercivity. The increase in c with increasing x alters the distances between the magnetic ions, so the exchange interactions vary, which in turn changes the magnetic properties.

For the x = 1.0 sample (Figure 6(d)), the reflection loss exhibits three peaks, of approximately -32 dB at 8 GHz, -34 dB at 11 GHz, and -44 dB at 14 GHz. The optimum reflection loss, -48 dB at a frequency of 14 GHz, was found in (Ba,Sr)0.5Fe11.6(ZnTi)0.2O19 hexaferrite for x = 0.2 (Figure 6(b)). The Zn-Ti substitution at the Fe site affects the domain wall pinning and changes the magnetic anisotropy, so that the damping of domain rotation and domain wall displacement increases, leading to resonance enhancement and a shift of the resonance frequency [6,9,15]. However, an excessive amount of ion doping may result in weak microwave absorption, owing to the increase in electrical resistivity. A sketch of the reflection-loss calculation is given below.
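The paper reports reflection loss directly from the VNA measurements; for readers who want to relate RL to material parameters, the sketch below evaluates the standard single-layer, metal-backed transmission-line model. The permittivity, permeability, and thickness values are assumptions chosen for illustration, not measured values from this study.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def reflection_loss_db(freq_hz, d_m, eps_r, mu_r, z0=377.0):
    """Reflection loss of a metal-backed single-layer absorber
    (transmission-line model): RL = 20 log10 |(Zin - Z0)/(Zin + Z0)|,
    with Zin = Z0 sqrt(mu_r/eps_r) tanh(j 2 pi f d sqrt(mu_r eps_r) / c)."""
    gamma = 1j * 2 * np.pi * freq_hz * d_m * np.sqrt(mu_r * eps_r) / C
    z_in = z0 * np.sqrt(mu_r / eps_r) * np.tanh(gamma)
    return 20 * np.log10(np.abs((z_in - z0) / (z_in + z0)))

# Hypothetical complex permittivity/permeability and thickness (illustrative only)
f = np.linspace(7e9, 15e9, 401)   # 7-15 GHz, as in the VNA measurement
eps_r = 12 - 2.5j                 # assumed relative permittivity
mu_r = 1.4 - 0.6j                 # assumed relative permeability
rl = reflection_loss_db(f, d_m=2.0e-3, eps_r=eps_r, mu_r=mu_r)
print(f"minimum RL: {rl.min():.1f} dB at {f[rl.argmin()]/1e9:.2f} GHz")
```

In this model the RL minimum deepens where the input impedance of the absorber layer approaches that of free space, which is the impedance-matching effect invoked in the conclusion below.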
A suitable amount of Zn-Ti doping is therefore required to improve the microwave absorption of M-type barium hexaferrite.

Conclusion

M-type (Ba,Sr)0.5Fe12-xZnxTixO19 hexaferrite (x = 0.0, 0.2, 0.6, and 1.0) has been synthesized through a mechanochemical process. The structural, magnetic, and absorption properties of the synthesized compositions were studied by XRD, SEM, VSM, and VNA. The refinement of the XRD patterns showed that the M-type hexaferrite is single phase for x = 0.0 and x = 0.2, and that the phase purity decreases with increasing Zn-Ti concentration for x = 0.6 and x = 1.0. The Zn-Ti substitution led to changes in the structural and magnetic properties: the lattice parameters increased, with elongation along the a- and c-axes, and the unit cell volume (Vcell) also increased. The magnetic coercivity, remanent magnetization, and saturation magnetization decreased with increasing Zn-Ti substitution. Zn-Ti doping can enhance the impedance matching and microwave attenuation; the optimum reflection loss was found to be -48 dB at 14 GHz in (Ba,Sr)0.5Fe11.6(ZnTi)0.2O19 for x = 0.2.
Farmers' Willingness to Pay for Services to Ensure Sustainable Agricultural Income in the GAP-Harran Plain, Şanlıurfa, Turkey

Sustainable agriculture is necessary for farmers to have a sustainable income. This research aims to determine the willingness to pay (WTP) of farmers in the GAP-Harran Plain for services that would ensure sustainable agricultural income, the factors affecting their willingness, and the minimum amount they would be willing to pay. The main material of the research was obtained by means of face-to-face surveys of farmers selected by a simple random sampling method in the GAP-Harran Plain. The sampling volume was determined with a 95% confidence limit and a 5% error margin. Heckman's two-stage model was used for the analysis. According to the results of the research, 22.61% of the participants showed WTP, and the average amount they were willing to pay was 180.82 TL/hectare (ha) ($31.86/ha), which is 3.08% of the calculated average annual agricultural income. About 41.22% of the participants showed no WTP; they believed that the public sector is responsible for these services and that they should consequently be provided free of charge. About 23.14% of the participants showed WTP only for the services that they needed. The average WTP over all participants was calculated as 40.9 TL/ha ($7.21/ha), or 1.2 million $/year for the GAP-Harran Plain. This amount is a minimum and may increase several-fold with a demand-based variety of service delivery. The factors that statistically affect WTP were determined to be age, education, experience, the number of household members working in agriculture, the amount of land, agricultural income, non-agricultural income, membership status in agricultural cooperatives, and product pattern. The results provide useful information to guide researchers, decision-makers, and policy-makers.

Introduction

Agriculture is of great importance to all countries in meeting their food needs. Agricultural production, the main source of life, faces sustainability problems for various reasons. It is a source of rural livelihood and employment, and it is becoming more important as a driving force for rural economic growth and poverty reduction in developing countries [1]. For sustainable agriculture, farmers must obtain a sustainable, satisfactory income from their production [2]. According to the Global Risks Report of 2018, the most serious global risks that may occur include weapons of mass destruction, extreme weather events, and natural disasters. Providing services and practices for sustainable agricultural income leads to a sustainable provision of agricultural activities and contributes significantly to a reduction in agricultural concerns. The responsibility here is not only the public sector's; the private sector and farmers also have responsibilities. In a study of EU countries, it is stated that producers, i.e., farmers, have responsibilities in the context of sustainable development [20]. These responsibilities cover not only the planning and participation dimensions but also the sharing of costs. While this is important to all countries, it is most important to developing countries. Therefore, it is vital to evaluate the participation of farmers in the valuation of services for sustainable agricultural income and to reveal the factors affecting this participation and its economic dimension.
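The headline figures reported in the abstract above are internally consistent. As a quick check, using only numbers reported in this paper (including the 2019 average exchange rate of 5.676 TL/$ given in the findings section):

```latex
% Consistency check of the reported WTP figures
0.2261 \times 180.82~\mathrm{TL/ha} \approx 40.9~\mathrm{TL/ha}
  \quad \text{(mean WTP over all participants)} \\
40.9~\mathrm{TL/ha} \div 5.676~\mathrm{TL/\$} \approx 7.21~\$/\mathrm{ha} \\
7.21~\$/\mathrm{ha} \times 166{,}000~\mathrm{ha} \approx 1.2~\text{million \$/year for the GAP-Harran Plain}
```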
The Southeastern Anatolia Project (GAP) is the most important integrated sustainable regional development project in the history of the Republic, aiming to increase the sustainable income and welfare of the people of the region. The project aims to activate the agricultural potential of the region based on its water and soil resources. Şanlıurfa is the most important agricultural city of the GAP Region, and the Harran Plain is the most important plain of the project. The expected increase in agricultural production and farmers' income, both in Şanlıurfa and in the Harran Plain, has not been sufficiently achieved for many reasons, one of which is the lack of detailed field surveys based on the needs and attitudes of farmers. The aim of this research is to determine the attitudes of farmers in the GAP-Harran Plain towards paying for services that would provide sustainable agricultural income. The factors affecting these attitudes, and the farmers' willingness to pay (WTP), expressed in Turkish lira per hectare for these services, were also investigated. This research is the first of its kind in the GAP Region in this context; the farmers' WTP for service delivery for sustainable agricultural income in the GAP-Harran Plain was calculated for the first time in this research. In this sense, the results of this study provide useful data and information that can serve as a basis for other studies, both in other regions of Turkey, where work on this subject has been lacking until now, and in other countries with similar socio-economic conditions.

Literature Review

The studies reviewed below investigate, in essence, farmers' WTP for various aspects of sustainable agricultural activities. Since the main source of sustainable income in agriculture is a sustainable agricultural environment and sustainable agricultural activities, these findings are assumed to be relevant to sustainable income as well. In a study on sustainable agriculture conducted in Oklahoma, USA, an economic analysis of drought-resistant crop cultivation strategies was carried out to reduce the effects of climate change on agricultural production. It was determined that farmers are concerned about crop types, productivity, and marketing, and that culture affects their WTP for sustainable agriculture [21]. A study conducted in Tanzania focused on the importance of advanced technologies adapted to local conditions for farmers under sustainable agricultural policies, noting that it is not clear which factors affect the willingness and demands of small-scale farmers for such technologies. An improved agricultural extension service, besides seeds and fertilizers, was effective in increasing WTP under a sustainable agricultural policy based on farmers' income [22]. In a study conducted in Wuhan, China, it was determined that farmers are not satisfied with the ecological environment of existing agricultural lands, which affects sustainability, and that they express a WTP to protect agricultural lands. The research proposed factoring WTP into policies on the protection of agricultural land for a sustainable farming income [23]. In a study conducted in Şanlıurfa, GAP, Turkey, farmers' WTP for the sustainable use of resources for sustainable income was determined as $48.8/ha.
The factors affecting WTP were determined to be the location of farmers, the number of agricultural workers available in a household, the quantity and ownership status of land, and the income derived from agriculture [24]. In a study conducted in Inner Mongolia, China, the participants' WTP for ecosystem protection was determined as $25.11. The effective factors were age, education, household income, and concerns about income sustainability; younger and more educated respondents had higher WTP, as did those who were concerned about protection [25]. In a study conducted in the Upper Hun River Basin, Northeast China, it was found that large losses were experienced as a result of the degradation of the basin ecosystem due to human socio-economic activities. Emphasizing the importance of the protection and sustainability of the basin, a WTP of $3.2 million was calculated for preventing income losses in the basin [26]. In a study on watershed management in Ethiopia, 90% of rural residents showed WTP to reduce land degradation, with an emphasis on the sustainable use of farmland for income [7]. In Taiwan, farmers' WTP for meteorological information services supporting safe agricultural production, and thereby safeguarding income, was determined as $56.06 to $90.92; one of the factors affecting WTP was the acreage of the land [27]. In West Java, Indonesia, smallholder farmers showed a WTP of $2.67/ha for agricultural insurance and sustainable agricultural production in the face of climate-change risk; the quantity of land, expectations based on crop pattern, and agricultural extension services were determined to be the effective factors [28]. In a study conducted in Greece, it was found that farmers show WTP for sustainable, income-based agricultural irrigation [29]. A study carried out in South Korea also found that participants showed WTP for sustainable integrated agricultural production [30]; in another study by the same author, demographic data were found to be influential factors determining WTP for farming [31]. In a study on the Ganjiang River Basin, China, the importance of protecting the river basin ecosystem for sustainable use was noted, and it was stated that this protection should not be left to the government alone but should involve other stakeholders in the planning. Using Heckman's two-stage model, the participants' WTP was calculated as $47.62 for sustainable use of the ecosystem; the effective factors were the level of education, nature of work, and water quality and quantity [32]. In a study conducted in Uganda, 89% of farmers were found to have WTP for sustainable agricultural activities for sustainable income; the most important factors were land ownership type, land quantity, and income [33].

Research Area

Turkey is divided into seven geographical regions. The Southeastern Anatolia Region, despite having the most water and land resources, is the second least developed of Turkey's regions. The region has a semi-arid climate [34]. It also has geopolitical importance, as it forms Turkey's border with Syria and Iraq. Its main sources of livelihood are agriculture and agriculture-based industry [35]. The Southeastern Anatolia Project (for which the Turkish acronym is GAP) is the most comprehensive and costliest project in Turkey; it is located in the southeast, in the Euphrates-Tigris basin and the plains of Upper Mesopotamia.
The GAP project covers about 11% of Turkey in terms of land area and population [36]. The location of Şanlıurfa, GAP, and Turkey is given in Figure 1 [24]. The main objectives of GAP are to increase the income level and life quality of the local people by utilizing the water and soil resources of the region, to eliminate the development gap between this region and other regions, to increase productivity and employment opportunities in the rural area, and to contribute to economic development and social stability goals at the national level.
Within the scope of the project, there are 22 dams, 19 hydroelectric power plants (HEPPs), and 1.8 million ha of irrigation areas, with a project budget of $32 billion [36]. Harran Plain is the most important plain in the Southeastern Anatolia Region and is located in Şanlıurfa. Harran Plain has an area of 151.7 thousand ha; the first GAP irrigations started on 30 thousand ha in 1995 (Figure 2), and today the entire plain is under irrigated agriculture. Including the Upper Harran irrigation area, which lies just above the Harran Plain and has been irrigated since 2005, the irrigated area is around 166 thousand ha. Harran Plain has 88.5% gravity and 11.5% pressurized irrigation. While there is excessive use of water in the upper parts of the plain, there is water shortage in the lower parts towards the south. This creates a risk of salinity in the upper parts and a drought effect in the lower parts; in both cases, yield and income losses occur in agricultural production [37,38].

Materials

The research was based on primary sources of data. The main material of this research was obtained from face-to-face surveys of farmers in the GAP-Şanlıurfa-Harran Plain. The number of farmers registered in the state farmer registration system in the Şanlıurfa-Harran Plain was 15,824 in 2019. Farmers were selected by a simple random sampling method, using the sample volume table [39], with a 95% confidence limit and a 5% margin of error. The sample size was 376, and 379 questionnaires were used in the analyses. The surveys were conducted in 2019 and included questions to measure the farmers' willingness to pay for sustainable agricultural income, their demographic and socio-economic characteristics, and their ability to pay.
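The reported sample size is consistent with standard finite-population formulas at a 95% confidence level and a 5% margin of error. Assuming the sample volume table in [39] is of the common Krejcie-Morgan type (an assumption, since the table itself is not reproduced here):

```latex
% Krejcie-Morgan-type sample size: chi^2 = 3.841 (95%), P = 0.5, d = 0.05
n = \frac{\chi^{2} N P (1-P)}{d^{2}(N-1) + \chi^{2} P (1-P)}
  = \frac{3.841 \times 15{,}824 \times 0.25}{0.05^{2} \times 15{,}823 + 3.841 \times 0.25}
  \approx 375,
```

which is in line with the reported sample of 376.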
Methods

During the research, zero observations were obtained because some participants were not willing to pay for services that would enhance sustainable agriculture. Estimating the model with the data containing these zero observations would have biased the coefficient estimates, while estimating the model after removing the zero observations would have caused inconsistency. This situation is known as sample selection bias, and the two-stage estimation method was used to eliminate it [40]. In the Heckman model, using probit instead of logit in the first stage and ordinary least squares (OLS) instead of other models in the second stage eliminates these errors [41]. Several papers use this model to estimate WTP while avoiding the problem of selection bias. This research was modeled in two stages to determine the participants' WTP for sustainable agricultural income and, for those who agreed to pay, the factors that affect the WTP amount. Heckman's two-step model was used in the analysis, with a probit model in the first stage and OLS in the second stage. The Heckman correction is a statistical method applied to correct the errors in non-randomly selected samples or randomly truncated dependent variables [42]. It is a two-step approach in which the bias resulting from the use of non-randomly selected samples to predict behavioral relationships is corrected as a specification error [43,44]. Due to the two-stage nature of the research, Heckman's two-stage estimation method was applied to the data. The first step of the method uses a typical binary-choice probit model to determine whether farmers were willing to pay for sustainable agricultural income or not. For the first stage, the model can be written as [32,45,46]

P_i = β_0 + β_1 X_1 + ... + β_n X_n + ε_i,  (1)

where P_i represents the declared variable, coded 1 if the participant was willing to pay and 0 otherwise; β_0 is the constant; β_1 ... β_n are the regression coefficients of the variables; X_1 ... X_n are the explanatory variables; and ε_i is the error term.
The second stage of the model is the ordinary least squares (OLS) approach, which explains the factors affecting the payment levels of the farmers who show WTP for sustainable agricultural income. For the second stage, the model can be written as [32,45,46]

Y_i = β_0 + β_1 X_1 + ... + β_n X_n + ε_i,  (2)

where Y_i represents the declared variable, i.e., the level of WTP of the participants; β_0 is the constant; β_1 ... β_n are the regression coefficients of the variables; X_1 ... X_n are the explanatory variables; and ε_i is the error term. In Equation (2), the inverse Mills ratio estimated from the first stage is included among the regressors to correct for selection bias.

Research Findings

The surveys involved only male farmers due to the patriarchal structure of the research area. The research area is generally conservative, with a tribal structure and a strong sense of belonging; norms and region-specific cultural values predominate [55]. For example, the oldest man in the family and the chief of the tribe are always the final decision-makers, even in the approval or rejection of marriage decisions. Young people in the region are mostly uncomfortable with this situation and often prefer urban life when they have the opportunity, as city life offers them more income, freedom, and opportunities to implement their own decisions. Although young people are more educated and open-minded than the final decision-makers and mostly do not approve of tribal decisions, they consent to them out of a sense of belonging [56]. Except for the spouses of the family elders and of the chief of the tribe, women do not have much say. Early marriage is common among women; divorce, on the other hand, is not approved of or welcomed. Education levels are generally low, especially among women [15]. The government has been implementing several policies for the positive development of the socio-economic structure of the region. Cotton is a product of strategic importance, both in Şanlıurfa and across Turkey [6,57-59]. Six of Turkey's 81 provinces produce 85% of its total cotton; Şanlıurfa is the top cotton-producing province of Turkey, with 40% of total production occurring in the research area [59]. Cotton is known as white gold in Şanlıurfa. The variables used in this research were selected according to the socio-economic characteristics of the research area. Of all the participants, 94.9% were married, 2.9% were single, and 2.2% were widowed. In 2019, $1 was on average 5.676 TL (Turkish lira) [60]. The descriptive statistics of the participants are given in Table 1. Participants were asked about their WTP for the services needed to generate sustainable agricultural income; the results are given in Table 2. According to the responses received, the amount declared by those who showed WTP is 3.08% of their average annual agricultural income. The farmers who showed no WTP perceived these services as public services and believed that they should be provided free of charge. The farmers who understood the importance of sustainable agriculture for a regular income were conditionally in favor of paying, provided there was a service they needed that satisfied their demands. Heckman's first-stage model results regarding the surveyed farmers' willingness to pay for sustainable agricultural income are given in Table 3. Pseudo-R2, the measure of goodness of fit of the model, is 18%; this value indicates how effectively the variables in the model explain the dependent variable.
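As a concrete illustration of the two-stage procedure described above, the sketch below runs a probit selection equation on the full sample, forms the inverse Mills ratio, and adds it as a regressor to an OLS equation on the paying subsample. The variable names and the synthetic data are hypothetical stand-ins, not the study's survey data, and a real application would also include an exclusion restriction in the selection equation.

```python
# A minimal sketch of Heckman's two-step estimator, as described above.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 379                                        # survey size, as reported
X = sm.add_constant(rng.normal(size=(n, 3)))   # stand-ins for age, land, income
# Synthetic selection (WTP yes/no) and outcome (WTP amount, TL/ha)
u = rng.normal(size=n)
willing = (X @ np.array([-0.5, 0.6, 0.4, 0.3]) + u > 0).astype(int)
amount = X @ np.array([50.0, 20.0, 30.0, 10.0]) + 40 * u + rng.normal(scale=25, size=n)

# Stage 1: probit on the full sample (who is willing to pay?)
probit = sm.Probit(willing, X).fit(disp=False)
xb = X @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)              # inverse Mills ratio, lambda_i

# Stage 2: OLS on the selected subsample, with lambda_i as an extra regressor
sel = willing == 1
X2 = np.column_stack([X[sel], imr[sel]])
ols = sm.OLS(amount[sel], X2).fit()
print(ols.params)   # last coefficient: the selection-correction (Mills) term
```

A significant coefficient on the Mills term would indicate selection bias; in the paper's Table 4 it is not significant, which supports the reliability of the OLS results.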
Heckman's second-stage model results are given in Table 4. The Wald chi-square value of the model is 64.14, with a p-value of 0.000, so the model is statistically significant as a whole. Moreover, the Mills ratio is not significant (p > 5%); it can therefore be concluded that there is no selection bias in the model and that the results obtained are reliable. The reference groups of the variables, taken as the base levels, are: 18-30 years for age, literate for education, 1-10 years for experience, 1-4 people for household size, 1-4 people for agricultural workers, 5 ha and less for land amount, property owner for land ownership type, 25,000 TL and under for income, yes for non-agricultural income, yes for agricultural cooperative membership, and mixed for crop type. Table 4 shows the relationship between the factors that affected the payment level of the variables used in the model and their subgroups.

Discussion

According to the model results in Table 3, there was a statistically significant relationship between the independent variables (age, education, experience, the number of household members working in agriculture, the quantity of land, agricultural income, non-agricultural income, membership status in agricultural cooperatives, and the crop pattern) and the dependent variable, the farmers' WTP for sustainable agricultural income. The average age of the participants was 46.73 years. Young farmers between 18 and 30 years old were taken as the reference in the age variable groups; a statistically significant relationship was detected for farmers between 41 and 50 years of age (p < 5%). By contrast, a study conducted in China found that young people showed more WTP for an income-generating sustainable ecosystem [25], and a study conducted in South Korea found that farmers' demographic data effectively determined WTP for integrated agricultural production [31]. Literate farmers were taken as the reference in the education variable groups. There was a statistically significant relationship for high school graduates (p < 10%) and university graduates (p < 5%). The participation of adult individuals in education has been found to be inversely proportional to the need for training: as the level of education increases, individuals' willingness to learn more also increases [2]. An individual with a high level of education needs less information but has a high demand for knowledge, because a higher level of knowledge motivates the individual to learn new things [61]. A study conducted in Oklahoma, USA found that culture is an effective determinant of WTP for a sustainable ecosystem [62], and another study in China found that educated people showed more WTP for income derived from a sustainable ecosystem [25]. The average farming experience of the participants was 24.77 years. Farmers with 1 to 10 years of experience were taken as the reference in the experience variable groups, but no statistically significant relationship was detected for any of the subgroups (p > 10%). The average household size was 8.15 people. Households with 1 to 4 people were taken as the reference group. As household size increased, concerns about sustainable income also increased.
This reflects the conditions of agricultural production, which depends on natural conditions, climate, and soil. However, no statistical significance was determined (p > 10%). On average, 4.80 members of each household work in agriculture. Households with 1 to 4 people working in agriculture were taken as the reference group for this variable. There was a statistically significant relationship for households with 5 to 9 members working in agriculture (p < 10%). There was a negative relationship for households with 10 or more members working in agriculture, a result that is remarkable and was not predicted: in a family with more than 10 members working in agriculture, the labor force can work not only on the family's own farm but also for other households for a fee, so participants from these families had extra sources of income. Similar results were obtained in another study conducted in the same region on the sustainable use of natural resources [24]. The average land size per participant was 19.5 ha. Landowners with 5 ha and below were taken as the reference group for the land amount variable. A statistically significant relationship was detected for all subgroups: p < 5% for the 5.1-10 ha and 10.1-20 ha subgroups, and p < 1% for the over-20-ha subgroup. Similar results were obtained in another study conducted in the same region on income from sustainable farming [24]. In a study conducted in Wuhan, China, it was found that farmland and productivity strongly affect farmers' WTP to protect agricultural lands for sustainable income [23]. A study in Taiwan found that one of the most effective factors affecting WTP for safe agricultural production is the acreage of farmland [27]. Another study, conducted in West Java, Indonesia, found that the quantity of land was an effective factor in WTP for sustainable agricultural production [28], and a study conducted in Uganda considered the size of the land an effective factor in WTP for sustainable agricultural activities for sustainable income [33]. Land ownership in the research area takes the form of ownership, rental, and partnership. For the land ownership variable, farmers who used only their own land were taken as the reference. There was an inverse relationship between those who owned property and those who did not, but it was not statistically significant (p > 10%). In the study conducted in Uganda, land ownership type was determined to be an effective factor in WTP for sustainable agricultural activities for sustainable income [33]. The average annual agricultural income per participant was calculated as 110,543 TL ($19,476). An annual income of 25 thousand TL and below was taken as the reference for the income variable. There was a statistically significant relationship for the group with an annual income of over 100 thousand TL (p < 10%). As the level of income rises, individuals become more likely to pay to sustain the same level of income [61]. In a study conducted in Iran, farmers' income was identified as the most important factor affecting WTP for sustainable agriculture and for reducing climate-driven flood risk to secure income [63].
In other studies, income was determined to be an effective factor in WTP for a sustainable agricultural environment and activities for sustainable income [24,25,33]. On average, 0.85 members of each household worked outside agriculture, and the average annual income of participants from non-agricultural sources was calculated as 4,584 TL. Those who have non-agricultural income were taken as the reference group. There was a statistically significant relationship at the level of p < 5%. A study conducted in Kenya found that non-agricultural income has a positive effect on farmers' access to sustainable irrigation for more income [64]. Membership in agricultural cooperatives was taken as the reference group for the membership variable. There was a statistically significant relationship at the level of p < 5%. Membership in a cooperative keeps a farmer safe with regard to income and gives the farmer a sense of solidarity. In Şanlıurfa, the sense of belonging that comes with membership in agricultural cooperatives is evident among farmers [65]. Member farmers enjoy privileges that non-members do not, such as input supply and marketing support, so in this sense the result is consistent. A study conducted in northern Italy determined that farmers' membership in farmer organizations was an effective factor in the production of sustainable olive oil [66]. Farmers who practiced mixed cropping (cotton, wheat, corn, barley, etc.) were taken as the reference group in the crop type variable. No statistically significant relationship was detected for any other subgroup of the crop type variable (p > 10%). In another study, conducted in West Java, Indonesia, expectations based on product pattern were found to be an influential factor affecting WTP for sustainable agricultural production [28]. In studies conducted in Oklahoma, USA; Tanzania; and Uganda, crop and seed varieties were found to be effective factors affecting WTP for sustainable agricultural activities for sustainable income [21,22,33].

Table 4 shows the relationship between the factors that affected the payment level of the variables used in the model and their subgroups. All other subgroups in the age variable showed more WTP relative to the reference group, but no statistically significant relationship was found for any of them (p > 10%). In the education variable, all other subgroups showed more WTP relative to the reference group. There was a statistically significant relationship in all of the subgroups (middle school, high school, and university graduates), at p < 5% for middle and high school graduates and p < 10% for university graduates. In the experience variable, those with 11-20 years of farming experience showed more WTP relative to the reference group, whereas those with 21-30 years and over 30 years of experience showed less; no statistically significant relationship was detected for any of the subgroups (p > 10%). As the number of people in a household increased, WTP decreased relative to the reference group. There was a statistically significant relationship for each subgroup of the household variable: p < 5% for households of 5-9 people and p < 1% for households of 10 or more people. Large households often mean high living expenses for families.
Therefore, even where such households showed WTP, the large number of household members must have negatively affected their ability to pay. The subgroups of the number-of-household-members-working-in-agriculture variable showed more WTP relative to the reference group; a statistically significant relationship was found between the subgroup with 5-9 people working in agriculture and the reference group (p < 5%). There was a positive relationship between the number of people working in agriculture and the ability to pay. All subgroups in the land quantity variable showed more WTP relative to the reference group, and a statistically significant relationship was found for all of them: p < 10% for farmers with 5.1-10 ha of land, and p < 5% for those with 10.1-20 ha and above 20 ha. In other words, as the acreage of land increased, so did the ability to pay, and therefore the WTP. There was a statistically meaningful relationship between the two subgroups of the ownership variable, i.e., non-landowners and the reference group of landowners. This relationship was inverse and negative, with a significance of p < 1%, which was expected: those who did not own land signed seasonal partnership or tenancy agreements and had less income than owners, which affected their ability to pay. The subgroups of the income variable showed more WTP relative to the reference group, as expected, because as farmers' income increased, so did their ability to pay. However, no statistically significant relationship was detected for any of the subgroups (p > 10%). Remarkably, the declared WTP decreased as the farmers' income level increased, which was unexpected before the research; it may be that farmers felt more secure with increased income and tended not to think that paying more money would lead to losses. In the non-agricultural income variable, those who did not have non-agricultural income showed more WTP relative to the reference group (those with non-agricultural income), but no statistically significant relationship was detected between the two groups (p > 10%). Farmers whose sole source of income was agriculture showed greater WTP for sustainable income. Taking the mixed croppers (cotton, wheat, corn, and barley) as the reference group in the crop type variable, those who cultivated only cotton or only wheat showed less WTP relative to the reference group, unlike the other mono-croppers. There was a statistically significant, negative relationship for cotton-only farmers (p < 1%). This result is unexpected under normal conditions, because cotton is a product with high yield, income, and added value. However, unusual climate events took place in Şanlıurfa in 2019. While the long-term (1929-2018) average annual precipitation of Şanlıurfa is 451 mm, the rainfall in the first three months of the 2018-2019 production season was over 470 mm [34], which caused late planting of cotton. On the other hand, the temperature in August 2019 was 21% below the long-term average. All of this reduced cotton yields and thus caused losses of income.
While the long-term average cotton yield in Şanlıurfa was between 5.5 and 6 tons/ha, it was between 2.5 and 3 tons/ha in 2019. Moreover, although the input costs of cotton production increased by around 30% on average, cotton prices decreased by about 26% [67]. The average quantity of land used for mixed cropping, the reference group, was 30.01 ha; the average quantity of land used for cotton-only cropping was 11.56 ha; and the overall average quantity of land was 13.65 ha. Mixed-crop cultivators therefore showed a greater ability to pay, owing to the advantages of crop diversity and land size. Indeed, the way these unforeseen circumstances affected cotton yield is in itself an argument for the importance of sustainable income in agriculture. For the reasons stated, the results obtained are meaningful and consistent. According to the calculation made taking account of all participants (Table 4), the WTP for service delivery for sustainable agricultural income in the GAP-Harran Plain irrigation area was calculated as $1.2 million, by multiplying 166 thousand hectares by $7.21/ha.

Conclusions

One of the problems of the socio-economic sustainability of countries is the income gap between the agricultural and non-agricultural sectors. Many studies have shown that agricultural sustainability bridges this gap and enhances individual and social welfare. Sustainable agricultural production benefits both producers and consumers. Agricultural sustainability cannot be achieved only through the most efficient use of agricultural production resources; it is also necessary to educate the farmers who use these resources and to give them responsibility. In other words, sustainable agriculture requires not only a technical approach but also social, economic, and political approaches. The requirements of agricultural sustainability may differ for each community and region, because many combinations of local parameters, conditions, and factors affect them. Therefore, a general, global approach to sustainable agricultural production may not give accurate and feasible results. GAP is the flagship project of the Republic of Turkey, and agriculture is its most important sector. Many investments and projects have been undertaken for the development of the region within the scope of GAP, but, for many reasons, the targeted progress has not been sufficiently achieved. Although much research has been done on agricultural production and income, the expected benefits cannot be said to have been satisfactorily delivered, because there has been no comprehensive field study on farmers' sustainable agricultural income and the factors affecting it in the GAP Region. In far-reaching integrated development projects, farmers' thoughts, expectations, and participation in the process significantly affect the level of project success. According to the results obtained, 41.22% of the participants were not in favor of taking responsibility for cost-sharing. This result is meaningful and remarkable, and special attention should be given to this group. It should be explained to this group that waiting for the public sector to provide all services may delay the provision of the expected services and thus cause losses of income. About 23.14% of the participants believed in the necessity of providing services to boost sustainable agricultural income but had not felt sufficient income-increasing effects from the services provided so far.
Providing services based on the needs of this group would foster and increase their WTP. The WTP for service delivery for sustainable agricultural income in the GAP-Harran Plain was calculated as $1.2 million. This amount is a minimum and could increase several-fold with a variety of accurate and acceptable services delivered according to the demands of the farmers. This research determined the general WTP of farmers for sustainable agricultural activities. The next stage is to determine, through local and subject-based studies, which groups of farmers are willing to pay and the factors affecting their willingness or otherwise. The results obtained from such studies, together with those of the current research, will provide useful information to guide researchers, decision-makers, and agricultural policymakers in the public and private sectors. This study is the first of its kind in the GAP Region.
A Pen-Eye-Voice Approach Towards The Process of Note-Taking and Consecutive Interpreting: An Experimental Design

Interpreting is a cognitively demanding language-processing task. Investigating the process of interpreting helps to explicate what happens inside the black box of interpreters' minds, with implications for how the human mind processes language under taxing conditions. Since the interpreting process involves multitasking, it is challenging to develop an experimental design to investigate it. In the case of consecutive interpreting (CI), it is particularly challenging because different methods need to be applied to tap into the two phases of CI, which involve different combinations of sub-tasks. This paper advocates the use of a triangulation of pen recording, eye tracking and voice recording to investigate the process of note-taking and CI.

INTRODUCTION

Interpreting is an intriguing, challenging, and complex language-processing task. Ever since interpreting research began to be established as a field of study in its own right in the mid-1970s (Pöchhacker, 2004, p. 81), there has been a strong interest in uncovering what happens in interpreters' minds while they perform this extraordinary task. Researchers with a background in psychology have attempted to shed light on how the human mind processes language under severe stress and while engaging in heavy multi-tasking by investigating the cognitive processes in interpreting (e.g. Barik, 1973; Christoffels, 2004; Christoffels & De Groot, 2004, 2005; De Groot, 1997; Gerver, 1974a, 1974b, 1976; Goldman-Eisler, 1972; Köpke & Signorelli, 2012). Researchers from within the field of interpreting, in turn, have approached the topic from an inter-disciplinary perspective that benefits from theoretical and empirical findings in the cognitive sciences (e.g. Lambert, 1988; Moser-Mercer, 1997; Seeber, 2011, 2013; Shlesinger, 2000). However, most of the process-oriented research approaching interpreting from a cognitive perspective focuses on simultaneous interpreting (SI), while consecutive interpreting (CI) is often neglected (Chen, 2017a). CI is an interesting activity from both a cognitive and a linguistic point of view. Similar to SI, it requires a high level of bilingual language processing and challenges the interpreter's cognitive system by requiring multi-tasking under strict time constraints. But CI also introduces a new challenge: note-taking (note 1). In addition to listening to the source speech and producing a target speech, CI requires the interpreter to perform the subtasks of note-writing and note-reading, making the process of note-taking and CI a particularly interesting topic of research. A potentially important reason for the lack of research on the process of CI is the inadequacy of process-oriented methods in the field of interpreting studies. This paper introduces an experimental design to investigate note-taking and CI, with the aim of collecting detailed data on both the process (the two CI phases) and the product (the interpreting performance) of CI. The design triangulates pen recording (mainly targeting Phase I of CI), eye tracking (mainly targeting Phase II of CI) and voice recording (mainly targeting the interpreting performance).
CI was the dominant mode of interpreting until the mid-20th century. It gradually gave way to SI, which was made possible by the development of electronic equipment, in multilateral and multilingual conference settings. However, CI remains the preferred mode in the context of "bilateral interactions with only two languages involved and in settings where confidentiality, intimacy and directness of interaction are given priority over time efficiency", such as high-level diplomatic encounters, business negotiations, ceremonial speeches and press conferences (Dam, 2010, p. 76). CI remains an important component in most interpreter training programmes. Its significance is manifested in the large quantities of master's theses on the subject (note 2). Even in places where the market is largely dominated by SI, training in CI is believed to be a good way of preparing students for SI (Gile, 2001). Furthermore, CI is frequently introduced to language students as a way of reinforcing language skills (e.g. Henderson, 1976; Hill, 1979; Paneth, 1984). Given the important role CI plays in the above contexts, there exists a considerable limitation in the literature in that process-oriented cognitive investigations have rarely been carried out on CI. As noted above, CI demands a high level of bilingual language processing and multi-tasking under strict time constraints, with note-taking as its additional sub-task. In Phase I of CI, interpreters listen to and analyse the source speech, keep parts of the speech in their working memory, and write down notes. In Phase II, interpreters read back their notes, retrieve information from their working memory, and produce a target speech. Both phases depend heavily on note-taking, this unique and distinctive feature of CI. Note-taking has been a topic of interest in interpreting research for over half a century (see Chen (2016) for a review). The well-developed volume of literature on consecutive note-taking started with a series of books and articles introducing various note-taking systems and principles. They were published in different languages, each generating a profound influence in its own country, and some even reached beyond (e.g. Allioni, 1989; Becker, 1972; Gillies, 2005; Gran, 1982; Ilg, 1988; Kirchhoff, 1979; Matyssek, 1989; Rozan, 1956/2002). Recommendations were made on such skills as noting the idea and not the word, how to use symbols, how to use abbreviations, and how to note links, negations, and emphasis.
The note-taking systems seem to be well-developed, but when it comes to teaching and learning note-taking skills, both teachers and students find it challenging. A couple of studies (e.g. Alexieva, 1994; Gile, 1991) found that note-taking diverted students' attention and even led to a degradation in interpreting performance. Researchers who have approached the topic from cognitive and linguistic perspectives (Kirchhoff, 1979; Kohn & Albl-Mikasa, 2002; Seleskovitch, 1975) found that there was a concurrent storage of information in notes and in memory, and a competition for cognitive resources between note-taking and other activities in the interpreting process. This has motivated subsequent research to target more specific note-taking features and to examine them empirically. Some of the most important variables investigated are: the choice of form (e.g. Dai & Xu, 2007; Dam, 2004a), the choice of language (e.g. Abuín González, 2012; Dai & Xu, 2007; Dam, 2004a, 2004b; Szabó, 2006), and the relation between note-taking and interpreting performance (e.g. Cardoen, 2013; Dai & Xu, 2007; Dam, 2007; Dam, Engberg, & Schjoldager, 2005). The choice of form refers to the choice between language and symbol, and the choice between abbreviation and full word; the choice of language refers to the choice between source and target language, and the choice between native and non-native language (Dam, 2004a). These studies have contributed valuable empirical data for a deeper understanding of the topic. For example, there is a general preference for language over symbol (Dai & Xu, 2007; Dam, 2004a, 2004b; Lung, 2003), and a source language dominance in the notes taken by student interpreters (Abuín González, 2012; Andres, 2002; Dai & Xu, 2007; Lim, 2010; Lung, 2003). However, most of the studies are product oriented, which means that they only look at the final product of note-taking (the notes produced), without an in-depth analysis of the interpreting process. Nevertheless, some studies have taken a process-oriented approach to note-taking and CI; two examples are Andres (2002) and Orlando (2011). Andres (2002) used time-coded videos to analyse the time span between the moment a source speech unit was spoken and the moment it was noted down. It was the first study to record the note-taking process in detail. Orlando (2010) used the Livescribe Smartpen to record the process of note-taking. The questionnaire results he collected from students showed encouraging potential for the technology to be applied in teaching and learning. However, both methods have important limitations: video recording involves determining the start of note-taking by manually checking the video and its timestamp; the Smartpen does not report the moment-to-moment changes in pen position in fine detail (e.g. coordinates). This paper attempts to revisit the topic of note-taking and CI and propose an experimental design that could potentially address some of the limitations of previous research. The design combines product analysis with process investigation, drawing on the conjoint approaches of pen recording, eye tracking and voice recording.
A PEN-EYE-VOICE DESIGN This paper introduces an exploratory design involving pen recording, eye tracking and voice recording with the purpose of gaining further insights into the process of note-taking and CI. Pen recording mainly targets Phase I of CI, in which interpreters listen to the source speech and write down notes. Eye tracking mainly targets Phase II of CI, in which interpreters read back their notes and produce a translated speech. Voice recording mainly serves the purpose of recording the interpreting performance, while also documenting a retrospection on the note-taking process. Pen Recording The apparatus used for pen recording was the Wacom Cintiq 13HD (a 13-inch LCD tablet with a resolution set at 1366×768 pixels) and the Wacom Pro Pen. The system was chosen because it aims to cater to graphic designers, who have very high requirements in terms of the precise control of the pen on the tablet surface. It is ergonomically designed to mimic natural writing and painting. Another reason for choosing this system is that it is compatible with the Eye and Pen software (note 3), one of the core software products powering the experiment. The software ran on a laptop computer which was linked to the pen recording apparatus. The software carried out three tasks: controlling the experiment, collecting the pen data, and processing the pen data. Controlling the experiment The experiment and its procedures were programmed into the software, which then controlled the progress of the experiment and interacted with the participant. For example, in Phase I of CI, when finishing one page of note-taking, the participant could use the pen to click on a button displayed on the tablet screen called "New Page" (Figure 1) and the software would create a new blank page for note-taking. The participant could use as many pages as needed. When the listening and note-writing phase was finished, the participant only needed to click a button called "Begin Interpreting" (Figure 1) and the software would automatically turn to the first page of notes written by the participant. Then the participant could read back the notes and produce a target speech. In this phase, new buttons such as "Turn Page" (which turns to the next page of written notes) and "Next Part" (which plays the next segment of the source speech) would appear on the screen, so that the participant could interact with the software to navigate through the pages of written notes. The tablet screen would only react to the tip of the digital pen, so the participant could write as naturally as possible and did not need to worry about triggering any buttons by touching the screen with their hands. Collecting the pen data The software collected the spatial and temporal data about the pen as it moved across the tablet surface. For example, data was recorded for each pen stroke in terms of the distance (how far the pen travelled across the surface), duration (for how long the pen was in touch with the tablet), and speed (how fast the pen was moving). Spatial data was reported in centimetres and temporal data was reported in milliseconds. The software also kept a session log for each trial, documenting the time every action took place during the recording (e.g.
the source speech segment started playing, the participant started writing, etc.). This function was crucial for the calculation of one type of data, the ear-pen span, which is the time span between the moment a speech unit is heard and the moment it is written down in notes. The ear-pen span is a useful indicator of cognitive processing during note-taking. Processing the pen data The software has many functions for displaying and analysing the recorded pen data (Figure 2 is a screenshot of the software in the analysis mode). The most useful function for the current design is the "Word separation" tool, which semi-automatically separates the written texts into words (in the case of this design, note units). Although manual work was required to correct the separations, this function allowed very accurate data to be reported for each individual note unit (e.g. start and end time, duration, distance, speed, etc.). Labels could be created for each note unit so that qualitative data could be added to each note and exported for further analysis. For example, for note unit no. 13 (see bottom left of Figure 2), Texts 1 to 6 documented the form and language of the note unit as well as its content, meaning, and corresponding source speech unit. The labels indicated that this note unit was language ("L" in Text 1), in English ("E" in Text 2), and an abbreviation ("A" in Text 3). It contained three letters, "svs" (Text 4), meaning "services" (Text 5) and corresponding to the word "services" (Text 6) in the source speech. In this way, the exported file contained both quantitative and qualitative data (Figure 3). Eye Tracking The apparatus There were a few prerequisites for selecting the type of eye tracker to be used in this experimental design. First, the eye tracker needed to allow the interpreter to speak freely, thus eliminating the use of eye trackers that require chin rests. Second, the eye tracker needed to be usable in a handwriting situation. In particular, the eye camera(s) could not be masked by the participant's forearms in movement. Head-mounted eye trackers, which are mounted directly on the participants, could meet these requirements. Third, for the comfort of the participant and the ecological validity of the experiment, a lightweight eye tracker that could be attached to the participant easily was preferable. The eye tracker used in this design was the SensoMotoric Instruments (SMI) Eye Tracking Glasses 2 (ETG2). It is a light-weight (47 g), head-mounted eye tracker in the shape of a pair of glasses. The eye tracker uses dark pupil tracking. It has a tracking accuracy of 0.5° over all distances and a sampling rate of 60 Hz. The eye tracker has a built-in high-definition camera for scene recording. This camera recorded both the video and the audio during the entire note-taking and interpreting process. The SMI software iView ETG and BeGaze were used with default settings for eye data recording and analysis respectively. The experiment took place in a sound-proof studio with constant artificial illumination to avoid any distractions or disruption to the recording of eye data.
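To make the ear-pen span described above concrete, the following is a minimal sketch of how it can be computed from session-log timestamps. The log format shown is an assumption for illustration; the real Eye and Pen export differs, but the arithmetic is the same: the span is the time between the onset of a speech unit and the first pen-down of its corresponding note unit.

```python
# Hypothetical session-log extracts: speech-unit onsets and first pen
# contact per note unit (seconds into the task). The note-to-speech
# correspondences come from the cued retrospection described below.
speech_onsets = {"services": 12.430, "delivery": 13.180}
note_pen_down = {"svs": 14.950, "dlv": 16.020}
note_to_speech = {"svs": "services", "dlv": "delivery"}

def ear_pen_span(note_unit: str) -> float:
    """Return the ear-pen span (seconds) for one note unit."""
    source_word = note_to_speech[note_unit]
    return note_pen_down[note_unit] - speech_onsets[source_word]

for unit in note_to_speech:
    print(f"{unit}: EPS = {ear_pen_span(unit):.3f} s")
```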
Semantic Gaze Mapping Semantic Gaze Mapping is an analysis function of the software BeGaze. It can map gaze data from videos to static pictures. The pages of notes taken by the interpreters were saved as pictures by the Eye and Pen software. These pictures were imported into BeGaze and used as reference images. The gaze data on the scene video (a video of what the participant saw during interpreting) were then mapped onto the reference images. After all relevant eye data were mapped onto the images, AOIs were drawn on the images instead of on the scene video, which could increase the accuracy and efficiency of analysis. An AOI was drawn for each note unit and labelled according to the note's form and language (e.g., an English abbreviation). This allowed further analysis to be carried out when comparing the eye movement data between two note-taking choices (e.g. the dwell time on Chinese vs English notes). Voice recording Voice recording during the interpreting process was collected via the eye tracker (the audio files were extracted), and voice recording during retrospection was collected via a laptop computer. The voice data were used for several different purposes in this experimental design. First and foremost, the interpreting performance was recorded. The audio recordings were later transcribed and provided to a group of raters for evaluation. This generated performance scores used for exploring the relationship between note-taking and interpreting performance. Second, voice recording was used during cued retrospection. Immediately after the interpreting tasks, the participants were provided with their notes for cued retrospection. They were asked to provide as much information as they could remember about the note-taking process, including but not limited to: what each note unit was; what it stood for; whether it was symbol or language, and if language, whether it was abbreviation or full word, Chinese or English. This is an important step because note-taking in CI is highly individualised, and the handwriting of interpreters could sometimes be difficult for others to decipher. Third, the source speech audio files were used together with the session logs kept by the Eye and Pen software to calculate the ear-pen span, an important indicator of cognitive processing. Tasks To make the data more generalizable, two CI tasks covering both directions of interpreting (between Chinese and English) were designed to account for both the source/target language status and the native/non-native language status. The two tasks, English to Chinese (E-C) and Chinese to English (C-E), were carefully created through a series of procedures to control for variance. First, two English scripts (on similar topics) were created by the author and edited by an experienced university lecturer in Australia (a native English speaker) to make them: (1) as comparable as possible, and (2) suitable to be read out loud and recorded as a speech. The edited scripts were analysed using CPIDR, a computer programme that can automatically determine propositional idea density. The results show that they were similar in terms of length (number of words and propositions) and idea density. One of the scripts was used to create the E-C task. Second, the other script was translated by the author into her A language (Chinese), and refined by two Chinese-speaking editors at a local Chinese radio station to make it suitable to be read out loud and recorded as a speech. This script was used to create the C-E task.
Third, the edited English and Chinese scripts were recorded as audio by a native Australian English speaker (the university lecturer) and a native Mandarin Chinese speaker (a radio personality) respectively. The recordings were carried out in professionally soundproofed rooms. The speakers were required to record a natural speech at a steady pace. They were allowed to restart any sentence at any time when needed. These false starts were later edited (see step four). Fourth, the recorded audio files were imported into Audacity, a sound-editing programme, for further refinement to address such issues as false starts, unfinished sentences, and background noises (e.g. turning a page). After editing, both tasks were about five minutes long and divided into three segments each. The Experimental Setup The digital tablet was linked to a laptop computer powered by the Eye and Pen software, which controlled the experiment procedures, interacted with the participant, and recorded the pen data. The eye tracker (a pair of eye tracking glasses) was linked to another laptop which recorded the eye data. The Eye and Pen software controlled the playback of the source speech and the eye tracker recorded the sound during the entire interpreting process, so the two pieces of data could be synchronised via the sound track. The experimental setup is shown in Figure 4. Procedures The experiment took place in four main stages: practice, task performance, retrospection and post-experiment questionnaire. The practice session was designed to familiarise the participants with the experimental procedures and the apparatus, especially the digital pen and the eye tracker. The task performance session involved two CI tasks: C-E and E-C. The order of the tasks was randomised so that about half of the participants started with the C-E task and the other half started with the E-C task. Rest was allowed between tasks if needed. The retrospection session was cued by the written notes and participants were instructed to recall whatever they could remember about the note-taking process. This was mainly designed to help the researcher accurately identify the note units and to collect additional qualitative data for the interpretation of the results. The questionnaire was designed to collect such information as the participants' familiarity with the task topics, how they felt about using the digital pen and the eye tracker, and other feedback about the experiment. DATA AND ANALYSIS The main sources of data that were collected using the designed experiment are summarised in Table 1. Because note-taking in interpreting is a highly individualised activity, the number of note units produced in different categories both within and between the interpreters differed. Wherever applicable, the data should be standardised by calculating the mean. For example, if the number of Chinese notes written by a participant is n, then the ear-pen span of Chinese notes (EPS_c) of that participant is calculated as EPS_c = (1/n) Σ_{i=1..n} EPS_i, where EPS_i is the ear-pen span of the i-th Chinese note. The standardisation creates paired samples of the same size. In this way, paired-samples t-tests can be used to compare between the note-taking choices in different forms (language vs. symbol; abbreviation vs. full word) and languages (Chinese vs.
English). Pearson's correlation can be used to explore the relationship between note-taking measures and interpreting performance. A pilot and a main study using this design have been reported elsewhere (Chen, 2017b, 2017c). The studies show that the ample empirical data collected during the experiment can reveal important traces of processing and effort in note-taking and CI. A unique array of indicators can be generated for estimating the physical, temporal and cognitive demands of note-writing and note-reading, indicators that could be useful in future studies on related topics. DISCUSSION This paper introduces an experimental design to cater to process research on note-taking and CI. CI is a special and complicated language processing task which involves the simultaneous sub-tasks of listening and writing in Phase I and reading and speaking in Phase II. Previous studies have so far mainly focused on the product of note-taking (written notes) and CI (the interpreting performance), without investigating the process. This design triangulates the methods of pen recording, eye tracking and voice recording to allow for a combined analysis of process and product. Detailed data can be collected during the process of note-taking and CI, particularly the pen movements (distance, duration and speed) and ear-pen span during note-writing, and the eye movements during note-reading. It has to be admitted that the design has several limitations. First, there is a considerable amount of manual work involved in the initial processing of the pen data (see section 3.1.3) and the eye data (see section 3.2.2), as well as in preparing the data for analysis (see section 4). Second, the digital pen and tablet selected in this design for data accuracy and precision come at a cost to ecological validity. Other apparatus, for example a digital pen with ink which can write on real paper, can be used to increase the ecological validity of the experiment, on condition that the data quality can be guaranteed. Third, the eye tracker selected in this design for ecological validity and availability reasons is a low-speed one (recording at 60 Hz). In addition, this eye tracker is not supported by the Eye and Pen software, resulting in post-hoc data synchronisation. Other eye trackers could be explored to see if they perform better than the selected one. The pen-eye-voice experimental design points out some future directions for interpreting research, especially process-oriented research, and potentially contributes to language processing research in general. Hopefully researchers will join the effort in using a triangulation of methods to investigate the intriguing and challenging topic of interpreting. END NOTES 1. In this paper, CI refers to long consecutive where systematic note-taking is used. 2. Interested readers can find the theses reported in various issues of the Conference Interpreting Research Information Network Bulletin (CIRIN Bulletin) at www.cirinandgile.com. 3. More detailed information about the software can be found on http://eyeandpen.net/en/. Figure 1. A screenshot of the tablet in the recording mode. Figure 3. A sample data output of pen recording. Figure 4. The experimental set-up. Table 1. Data that can be collected using the experimental design in this paper.
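As an illustration of the standardisation and paired-samples comparison described in the Data and Analysis section, the following sketch computes per-participant mean ear-pen spans by note language and runs a paired t-test. The per-note spans are invented placeholders; in practice they come from the pen-recording export.

```python
from statistics import mean
from scipy import stats

# Ear-pen spans (seconds) per participant, split by note language (placeholders).
eps_chinese = [[2.1, 1.8, 2.5], [1.9, 2.2], [2.8, 2.4, 2.0, 2.6]]
eps_english = [[2.9, 3.1], [2.6, 2.4, 3.0], [3.2, 2.7]]

# Standardise: one mean EPS per participant per category, giving paired
# samples of equal size (one pair per participant).
eps_c = [mean(spans) for spans in eps_chinese]
eps_e = [mean(spans) for spans in eps_english]

t_stat, p_value = stats.ttest_rel(eps_c, eps_e)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```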
5,405.4
2018-04-30T00:00:00.000
[ "Computer Science" ]
Deep Learning-Based Wavelet Threshold Function Optimization on Noise Reduction in Ultrasound Images Introduction The continuous development of medical imaging technology has made medical imaging an essential part of clinical diagnosis and efficacy evaluation. Doctors can intuitively acquire the feature information of lesions through images, then evaluate the disease more accurately to achieve early detection and diagnosis, and strive for the best therapeutic effect for patients [1]. Medical images can be divided into different imaging techniques according to different imaging modes. The recognized representative imaging techniques include magnetic resonance imaging (MRI), X-ray, ultrasound, and nuclear medicine imaging [2]. Ultrasound imaging, as a widely used examination method in clinics, has the advantages of safety, noninvasiveness, convenience, and low cost. Especially in the observation of fetal growth and development in pregnant women and the diagnosis of abdominal organ lesions, it has high clinical practical value as well as great development prospects [3]. The ultrasound imaging process is as follows: ultrasound is transmitted by the device, interacts with tissues and organs, and is reflected back to the receiving end of the device to complete the imaging process [4]. At present, in the clinical application of ultrasound technology, it is found that intrinsic speckle noise appears in the ultrasound image due to the interference characteristics of the ultrasound pulse, resulting in damage to the details of the medical ultrasound image and blurring of edge information, and seriously affecting the quality of the ultrasound image [5]. This interference has a great impact on diagnosis and follow-up processing by doctors. Therefore, from the clinical point of view, it is necessary to further explore algorithms for removing speckle noise, so as to provide a theoretical basis for the accurate diagnosis of diseases by clinicians. The convolutional neural network (CNN) is an essential part of deep learning and one of the earliest networks that could be trained effectively with the backpropagation algorithm. The CNN is a typical feedforward neural network and performs well in large-scale image processing, especially in scenes with similar information characteristics. The precondition of image recognition is image denoising and image edge detection, which are also of great value to image post-analysis and processing.
Through the image denoising process, the interference noise can be removed so that the image quality as well as the visual effect can be better [6]. In addition, image matching, segmentation, and edge detection, as well as other processing, are based on image denoising, so the quality of image denoising determines the follow-up processing of the whole image. The wavelet transform is a multiscale signal analysis method widely used in many fields such as digital image processing and computer science. In the wavelet threshold denoising algorithm, the noise wavelet coefficients can be directly reduced. In denoising, the edge information of the image is effectively enhanced to achieve a better enhancement effect [7]. In this exploration, on the basis of fully understanding the theoretical knowledge of the wavelet transform, the wavelet threshold denoising algorithm is optimized, and its application in ultrasound imaging technology (UIT) and its impact on imaging results are analyzed. It is not rare to apply the wavelet threshold algorithm to image processing, but research on this algorithm is not deep in the field of medical imaging. In this exploration, from the perspective of image denoising, the wavelet threshold denoising algorithm is applied in UIT, so as to obtain the key information of medical images and provide help for clinicians in reading and following up on the diagnosis of diseases. Principle of Medical Ultrasound Imaging. Medical ultrasound diagnosis technology originated around the 1940s. Ultrasound refers to sound waves with frequencies higher than 20 kHz, which far exceed the upper limit of human hearing. Ultrasound is widely used in the field of medical imaging because it easily carries media information and can be transmitted in many different media [8]. Moreover, it has good directivity and strong penetrating force. Medical ultrasound imaging technology (UIT) takes the ultrasonic signal as the carrier, takes the reflection principle of the ultrasonic signal as the theoretical basis, extracts information from the signal reflected by the human body through processing, and finally completes the imaging. The probe transmits the ultrasonic signal to the human body, and then the signal propagates in the human tissue. Different tissues present different resistances to the ultrasonic signal. Ultrasound signals are reflected and scattered on the tissue surface, and the echo signals are automatically received by the probe. After a series of signal enhancement steps, a clearer and more detailed ultrasound image can be obtained. According to the different characteristics of different tissues in vivo, combined with pathological knowledge and clinical experience, the spatial location, nature, and severity of the lesion can be accurately evaluated [9]. In the process of transmission, the sound wave will be attenuated due to sound absorption. The relationship between sound intensity and transmission distance can be expressed as I = I_0 exp(−αz). (1) In (1), I_0 represents the original sound intensity of the ultrasound, α is the attenuation coefficient of the ultrasound, and z is the propagation distance of the ultrasound. Generally, the physical processes that cause acoustic attenuation include viscous loss, heat conduction loss, and various forms of molecular relaxation. Due to the different physical processes of attenuation, the variation range of the attenuation coefficient is large.
Ultrasound imaging is mainly accomplished by the reflection of ultrasound. Because the magnitude of the reflection coefficient and the intensity of the reflected sound wave differ across boundaries, the contrast of the image is produced. When ultrasound is injected vertically from one medium into another, the reflection coefficient can be expressed as R = (z_2 − z_1)/(z_2 + z_1). (2) In (2), R is the reflection coefficient of the ultrasound, and z_1 and z_2 represent the acoustic resistance of the two different media, respectively. The equation reveals that the greater the difference in acoustic resistance between the two media, the greater the reflectivity of the acoustic wave. Therefore, for the complex imaging environment of human tissues and organs, the medium is inhomogeneous. The reflected sound wave is related to the difference in acoustic resistance between adjacent media and the shape of the target to be detected. The contact between different transmission media forms a reflection interface. According to the transmission characteristics of sound waves, different phenomena occur when sound waves pass through different interfaces. For a small reflection interface, whose size is less than the acoustic wavelength, sound waves scatter anisotropically, and the energy of the reflected sound waves is also attenuated. For a large reflection interface, whose size is larger than the acoustic wavelength, reflection of the acoustic wave occurs, and the direction of reflection depends mainly on the incident direction. Medical UIT can be divided into four types according to the principle of ultrasound imaging and the different scanning methods [10]. The origin and application of A-mode ultrasonography in medical diagnosis are the earliest. However, due to the limitation of the imaging principle, the one-dimensional waveform of A-mode ultrasound imaging is not intuitive enough. Therefore, after the appearance of B-mode ultrasound, A-mode ultrasound imaging is no longer used. B-mode UIT uses the ultrasound probe to transmit ultrasound to the human body, and then the ultrasound signals reflected from human tissues are displayed by brightness modulation. In the two-dimensional grayscale image of B-mode ultrasound imaging, the brightness of a light spot reflects the strength of the echo signal. The brighter the light spot, the stronger the echo signal. Figure 1 shows the basic principle of the B-mode ultrasound imaging instrument. M-mode UIT uses slow-scanning circuit technology to complete the diagnosis of heart disease. Its main function is to measure moving organs, so M-mode ultrasonography is used in the clinical diagnosis of the heart, fetal heart, and arteries and vessels. The D-mode ultrasound diagnostic instrument is commonly referred to as the Doppler ultrasound diagnostic instrument. Color Doppler imaging can monitor hemodynamic parameters in real time. According to the color coding, the direction, velocity, and nature of blood flow at the target position are judged [11].
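As a quick numeric illustration of equations (1) and (2) above, the following sketch evaluates the attenuation and reflection formulas; the input values are illustrative placeholders, not clinical constants.

```python
import math

def intensity(i0: float, alpha: float, z: float) -> float:
    """Attenuated intensity after propagating distance z, eq. (1)."""
    return i0 * math.exp(-alpha * z)

def reflection_coefficient(z1: float, z2: float) -> float:
    """Reflection coefficient at a boundary between two media, eq. (2)."""
    return (z2 - z1) / (z2 + z1)

print(intensity(i0=1.0, alpha=0.5, z=2.0))      # ~0.368: strong attenuation
print(reflection_coefficient(z1=1.5, z2=1.7))   # ~0.0625: similar media reflect weakly
```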
Noise Models in Ultrasound Images. The noise in an ultrasound image mainly includes Gaussian noise caused by system instability, circuit noise in the imaging process, and speckle noise caused by the principle of ultrasound imaging itself. Speckle noise can seriously affect the quality of the ultrasound image and interfere with manual evaluation or machine recognition. It is generally held that the signals collected by UIT are divided into two parts: signals reflected by in vivo tissues, which carry diagnostic value, and noise signals interfering with them. The noise signal includes additive noise and multiplicative noise. Multiplicative noise originates from the random scattering signal, which is related to the principle of ultrasonic imaging technology. Additive noise refers to the noise generated by the system. The initial signal obtained by the ultrasound imaging system is f_pre, and its general model can be expressed as follows: f_pre = g_pre · n_pre + w_pre. (3) In (3), f_pre represents the initial signal obtained by ultrasound imaging technology. The function g_pre denotes the noise-free signal, n_pre denotes the multiplicative noise, and w_pre denotes the additive noise. In the process of ultrasonic signal propagation, random scattering occurs within a very small resolution cell, resulting in the multiplicative noise n_pre. Multiplicative noise is the main part of the interference noise, while the effect of the additive noise can be neglected. Therefore, the model can be simplified as follows: f_pre = g_pre · n_pre. (4) Multiplicative noise n_pre is also called speckle noise because it appears in the image as a speckle pattern. The probability distribution of speckle noise varies with different ultrasound imaging systems because of different scattering in the unit resolution cell. In order to adapt to the grayscale display range of the screen of the ultrasound imaging system, the signals collected by the ultrasound imaging system are processed by logarithmic transformation. In this case, the multiplicative model of equation (4) is changed into an additive model, which is expressed as log f_pre = log g_pre + log n_pre. (5) At this time, the signal log(f_pre) is the general medical ultrasound image. Log is equivalent to a compression function, which compresses signals with a large range of variation into a smaller range. The noise model can not only be as close as possible to the actual distribution of the ultrasonic signal but also integrate with the corresponding denoising algorithm to make the denoising effect on the ultrasound image more satisfactory. Image Denoising Based on the CNN. The network framework of low-illumination image processing based on the U-Net convolution network has three modules: a preprocessing module, a U-Net convolution network module, and a super-resolution reconstruction module. According to the RGB (red, green, and blue) color space model, the original raw image is input into four channels in the preprocessing module, and the dimension is reduced to 1/2. Then, the black level is extracted, and the image is magnified at different ratios. The U-Net convolution network module realizes image pixel semantic segmentation, and the output has 12 channels. There are 23 convolutional layers in the whole U-Net network. The left part is the contraction path for capturing context information, and the right part is the expansion path for accurately locating segmentation results, called the "contraction path" and "expansion path," respectively. The two paths symmetrically form a "U" shape.
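The simplified noise model of equations (4) and (5) above can be illustrated with a short simulation. The Gamma distribution used for the speckle term below is an assumption for illustration; real speckle statistics vary by imaging system.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.uniform(0.1, 1.0, size=(64, 64))            # noise-free "tissue" image g_pre
n = rng.gamma(shape=4.0, scale=0.25, size=g.shape)  # multiplicative speckle n_pre, mean ~1

f = g * n            # eq. (4): f_pre = g_pre * n_pre
log_f = np.log(f)    # eq. (5): log f = log g + log n (additive model)

print(np.allclose(log_f, np.log(g) + np.log(n)))    # True: speckle is now additive
```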
Due to the sub-pixel operation, improved image resolution can be realized by recombining the pixels of the image. The sub-pixel convolution method of pixel recombination is used to improve the resolution without distortion. The process of sub-pixel convolution mainly consists of recombining single pixels on a multichannel feature map into one feature map. The sub-pixel convolution operation is carried out on the low-resolution image, and the final result is the high-resolution image. Basic Principle of Wavelet Transform Denoising. Denoising an image by wavelet transform means that, after decomposing the noisy wavelet coefficients, wavelet coefficients can be calculated which are infinitely close to the pure image wavelet coefficients. The wavelet coefficients are composed of pure wavelet coefficients k_{i,j} and noise wavelet coefficients p_{i,j}. The sum of the pure wavelet coefficients and the noise wavelet coefficients is the wavelet coefficient x_{i,j} of the noisy image decomposition. Through the threshold function, the noisy image is processed, and the thresholded wavelet coefficients y_{i,j} are obtained. If y_{i,j} − k_{i,j} ≈ 0, the maximization of noise removal can be achieved, and a purer image can be reconstructed from the wavelet coefficients. The principle of wavelet threshold denoising is to select an appropriate wavelet using the properties of the wavelet transform. Through multilevel decomposition of the noisy image, the corresponding high-frequency and low-frequency signal components are extracted from the decomposition wavelet coefficients. Combined with the corresponding threshold function, the high-frequency coefficients of the wavelet are thresholded. Effective low-frequency signal coefficients of observable value are retained. The effective low-frequency signal and the thresholded high-frequency wavelet coefficients are reconstructed to obtain the denoised image. Combined with the denoising evaluation indexes, the denoised image is quantitatively and qualitatively analyzed to evaluate the denoising effect and the quality of the denoised image. Because of the high correlation of the wavelet transform, the effective signal of the image is concentrated in the wavelet coefficients with large magnitudes, while the noise wavelet coefficients are randomly distributed across the whole domain. Through threshold calculation, a satisfactory denoising effect can be achieved. Optimization of Wavelet Threshold Function (WTF) Algorithms. In wavelet transform denoising, the threshold function directly affects the final denoising effect of the image. When the threshold value is small, noise coefficients larger than the threshold value are retained as effective signal, which makes the denoising of little significance. The denoised image is still disturbed by much uncancelled noise. When the threshold value is too large, effective information with small coefficients is treated as noise and cancelled. Although the denoised image will be very smooth, more useful details will be lost in the denoising process. The traditional WTF is a classical threshold function proposed in 1994, also known as the unified (universal) threshold function [12]. In its theoretical derivation, M is used to represent the total number of wavelet coefficients in the corresponding wavelet domain, and σ_n is the standard deviation of the noise.
The derived equation is expressed as T = σ_n √(2 ln M). (6) In this threshold function, the threshold T is greatly affected by the number of wavelet coefficients. When M is large, the large threshold may remove some useful information with smaller coefficients [13]. Chang et al. proposed an optimal threshold selection method. This threshold function is derived from the Bayesian maximum a posteriori probability. σ_n² is used to denote the variance of the noise, and σ_{g,j} is used to denote the standard deviation of the noise-free image g in layer j of the wavelet domain. α_j is an adjustable coefficient, that is, the adjustable coefficient in layer j of the wavelet domain, usually α_j = 1. Its equation is expressed as T_j = α_j σ_n² / σ_{g,j}. (7) In this exploration, a new threshold function is proposed to weaken the effect of the total number M on the threshold in equation (6), so that the wavelet function performs better. Based on the characteristics of the wavelet transform, the noise in the image is dense in the high-frequency region, so the threshold in the high-frequency region should be greater than that in the low-frequency region. In equation (7), on the basis of the variance ratio σ_n²/σ_{g,j}, the adjustable coefficient α_j should take an appropriate value according to the wavelet coefficients of each layer; that is, the higher the number of layers, the smaller the adjustable coefficient. Therefore, in this exploration, a more refined WTF is proposed. σ_n is used to represent the standard deviation of the noise, and σ_{g,j} is used to represent the standard deviation of the noise-free image g in layer j of the wavelet domain. The adjustable parameters satisfy k_1 + k_2 = 1, and the equation is expressed as follows: T_j = (k_1 + k_2 α_j) σ_n² / σ_{g,j}. (8) In equation (8), the adjustment coefficient α_j of layer j in the wavelet domain is 1/2^(j−1), j = 1, 2, ..., J, where J denotes the maximum level of wavelet decomposition. The WTF of equation (8) can thus be written as T_j = (k_1 + k_2/2^(j−1)) σ_n² / σ_{g,j}. (9) Evaluation Indexes and Methods for the Denoising Effect. Qualitative evaluation: after image denoising, it is necessary to consider the effect and accuracy of the image. Since the effect of image processing affects the extraction and analysis of image information, it is particularly important to evaluate the effect of image denoising. First, the image is evaluated directly by visual inspection; an image usually becomes blurred under the interference of noise, so the quality of image denoising can be roughly judged by human visual observation alone. However, subjective evaluation has some limitations. It can only directly reflect the clarity of the image and cannot extract and analyze the regional information of the image. According to the statistics of human visual sensory acuity, the criteria for subjective qualitative evaluation of images are divided into five levels, and the specific indicators of each level are shown in Table 1. Quantitative evaluation: after preliminary qualitative evaluation, it is usually necessary to quantitatively evaluate the processed image to judge the quality of the denoising effect. First, the mean square error (MSE) of the image is the pixel-wise difference between the pure image and the processed image. The magnitude of the mean square error reflects the degree of image distortion: the smaller the value of MSE, the better the denoising effect.
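The three threshold rules, as reconstructed in equations (6), (7), and (9) above, can be sketched as follows. The noise estimates and coefficient counts are placeholders, and the equal split of k_1 and k_2 is one admissible choice under the constraint k_1 + k_2 = 1.

```python
import math

def universal_threshold(sigma_n: float, m: int) -> float:
    """Eq. (6): T = sigma_n * sqrt(2 ln M)."""
    return sigma_n * math.sqrt(2.0 * math.log(m))

def bayes_threshold(sigma_n: float, sigma_gj: float, alpha_j: float = 1.0) -> float:
    """Eq. (7): T_j = alpha_j * sigma_n**2 / sigma_{g,j}."""
    return alpha_j * sigma_n ** 2 / sigma_gj

def optimized_threshold(sigma_n: float, sigma_gj: float, j: int,
                        k1: float = 0.5, k2: float = 0.5) -> float:
    """Eq. (9): T_j = (k1 + k2 / 2**(j-1)) * sigma_n**2 / sigma_{g,j}, k1 + k2 = 1."""
    assert abs(k1 + k2 - 1.0) < 1e-9
    return (k1 + k2 / 2 ** (j - 1)) * sigma_n ** 2 / sigma_gj

print(universal_threshold(0.1, 256 * 256))   # grows with coefficient count M
print(bayes_threshold(0.1, 0.3))             # layer-adaptive baseline
print(optimized_threshold(0.1, 0.3, j=3))    # shrinks at deeper decomposition levels
```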
Then, the peak signal-to-noise ratio (PSNR) is one of the methods widely used in image quality evaluation. The bigger the PSNR, the better the image quality. However, because the sensitivity of human vision is limited, an image that scores well on PSNR may still be judged poor by the naked eye; nevertheless, PSNR correlates relatively well with image quality. The structural similarity index measurement (SSIM) is an index that evaluates the image in terms of structure, brightness, and contrast [14,15]. Simulation Experiments and Results. To quantitatively evaluate the denoising effect of the algorithm proposed in this exploration, an evaluation experiment on a simulated image is carried out first. The original image selected in this exploration is shown in Figure 2(a). Hard threshold function denoising, soft threshold function denoising, compromise function denoising, and the optimized wavelet threshold function denoising are carried out, respectively, as shown in Figures 2(b)-2(e). The denoising effect of the hard threshold and soft threshold differs from that of the improved threshold function. In this paper, the optimized threshold expression performs better than traditional filter denoising on the noisy image, and the improved threshold denoising function is also better than other WTF algorithms under different noise variances. In the simulation experiment, the basic threshold functions and the optimized WTF are used to denoise the image. From the perspective of qualitative analysis through naked-eye observation, the denoising effect of the hard threshold function and the soft threshold function is slightly worse than that of the optimized WTF. However, further quantitative evaluation is needed to analyze the denoising effect. In this exploration, the denoising effect of the threshold functions is evaluated comprehensively by PSNR, MSE, and SSIM. A quantitative comparison of the various threshold functions is shown in Table 2. From the results of each index in the table, the noise removal situation and the denoising effect of the various denoising functions can be analyzed. The denoising effect of the hard threshold function is better than that of the soft threshold function but lower than that of the compromise threshold function. The denoising effect of the optimized WTF in this exploration is better than that of the traditional threshold functions. Application Results of the CNN and WTF in Ultrasonic Medical Image Denoising. In order to fully analyze the denoising effect of the CNN and the optimized wavelet function algorithm, ultrasound images of two different regions, arteries and kidneys, are selected and analyzed. The contrast of artery tissue before and after denoising by the WTF is shown in Figures 3(a) and 3(b), and the effect on the kidney before and after denoising is shown in Figures 3(c) and 3(d). Figure 3 suggests that, compared with the original image, the medical ultrasound image processed by the wavelet threshold algorithm has significantly reduced noise around the target and has better imaging quality and a clearer effect, whether for the detailed imaging of the artery or the imaging of organs such as kidney tissue.
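A minimal sketch of this quantitative evaluation step, computing MSE, PSNR, and SSIM with scikit-image; the images below are synthetic stand-ins for the ultrasound data.

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

rng = np.random.default_rng(1)
clean = rng.uniform(0.0, 1.0, size=(128, 128))                       # reference image
denoised = np.clip(clean + rng.normal(0.0, 0.02, clean.shape), 0, 1) # candidate result

mse = mean_squared_error(clean, denoised)
psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)
ssim = structural_similarity(clean, denoised, data_range=1.0)
print(f"MSE={mse:.5f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```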
Discussion The improvement of modern medical technology makes UIT play a vital role in clinical diagnosis and disease treatment. However, due to the limitations of the principle of ultrasound imaging, speckle noise is unavoidable in medical ultrasound images, which seriously affects the quality of ultrasound images and greatly interferes with doctors' diagnoses. Therefore, removing image noise, as a key link in image processing, can recover the original image and enable better image analysis. The removal of speckle noise not only affects the results of image analysis and diagnosis but is also the basis for the later development of automatic diagnosis technology for ultrasound imaging instruments. As a mathematical processing tool, the wavelet transform can divide the data to be analyzed into different frequency components. With the development of wavelet transform theory, the wavelet transform has become one of the commonly used tools in many scientific fields. The principle of the wavelet transform has been widely applied in the field of image denoising. The method of wavelet denoising is simple, and the quality of image processing is good. The WTF algorithm is not only simple to calculate but also easy to implement, so it has been widely used in wavelet denoising. Aiming at the problem of medical ultrasound image noise removal, an optimized denoising algorithm is proposed in this exploration, i.e., the optimized WTF denoising algorithm. Three objective indexes, MSE, PSNR, and SSIM, are selected to evaluate the denoising effect. Many scholars have confirmed that the wavelet threshold algorithm has a strong denoising effect on remote sensing images, fuzzy video image processing, etc., which is consistent with the conclusion of this paper. The simulation results show that the optimized WTF is superior to the other traditional thresholding functions in all aspects. It can denoise the image as much as possible without losing image information. In addition, in this exploration, the optimized function is applied to actual medical image processing, to denoise arterial tissue and kidney images, respectively. The results show that the quality of the denoised image is better than that of the original image, and the extraction of effective information is more accurate. In summary, the optimized WTF algorithm can obtain a better visual effect while removing a large amount of noise. It has important value in assisting doctors in disease diagnosis and can be widely used in clinics. In this exploration, the focus of the research is on removing the speckle noise in the image, concentrating on the optimization of the WTF algorithm, while the optimization of the bilateral filtering algorithm is not deep enough and needs to be explored further in future research. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Figure 1: Basic principle diagram of the B-mode ultrasound imaging instrument. Figure 2: Denoising comparison of different thresholding functions of images: (a) original image; (b) hard threshold function denoising; (c) soft threshold function denoising; (d) compromise function denoising; (e) optimized wavelet function threshold denoising in this exploration. Table 1: Evaluation indexes of image denoising. Table 2: Quantitative comparison of various threshold functions for denoising (dB).
5,447.4
2021-01-01T00:00:00.000
[ "Computer Science" ]
Hybrid Task Coordination Using Multi-Hop Communication in Volunteer Computing-Based VANETs Computation offloading is a process that provides computing services to vehicles with computation-sensitive jobs. Volunteer Computing-Based Vehicular Ad-hoc Networking (VCBV) is envisioned as a promising solution for performing task executions in vehicular networks using an emerging concept known as vehicle-as-a-resource (VaaR). In VCBV systems, offloading is the primary technique used for the execution of delay-sensitive applications which rely on surplus resource utilization. To leverage the surplus resources arising in periods of traffic congestion, we propose a hybrid VCBV task coordination model which performs the resource utilization for task execution in a multi-hop fashion. We propose an algorithm for the determination of boundary relay vehicles to minimize the requirement of placing multiple road-side units (RSUs). We propose algorithms for primary and secondary task coordination using hybrid VCBV. Extensive simulations show that the hybrid technique for task coordination can increase the system utility while the latency constraints are addressed. Introduction With the rapid advancements in technologies and ongoing urbanization, the number of vehicles and applications is growing rapidly. According to a recent Green Car report [1], the number of vehicles on the road was 1.2 billion in 2014 and is set to reach 2 billion by 2035. This huge number of vehicles results in a tremendous increase in traffic, especially during peak hours, which is an extensive global phenomenon. In the United States, people travelled 6.9 billion extra hours due to traffic congestion in 2014 [2]. During such rush hours, vehicles stuck in congestion can access remote servers to fulfill the requirements of task execution. Using wireless communication, these vehicles are able to act as nodes in autonomous self-organized networks, known as vehicular ad-hoc networks (VANETs). In these networks, vehicles can connect using a dedicated short-range communication (DSRC) service for vehicle-to-vehicle (V2V) and vehicle-to-road-side-unit (V2R) communication [3]. Mobile Cloud Computing (MCC) is a promising paradigm that provides vehicles with an opportunity to offload computational or storage tasks to remote cloud servers. It provides ubiquitous access to incorporated resources offered by a variety of cloud computational and storage technologies. Users gain the opportunity of executing computationally-intensive tasks whose performance would otherwise be hindered by the computational capability of a single user [4]. Vehicular Cloud Computing (VCC) is a similar paradigm that additionally uses the computational capabilities of vehicles in the form of vehicular clouds (VCs). Usually, accessing remote clouds has some disadvantages such as high latency and infrastructure costs. These high latencies are not convenient for delay-sensitive applications. Offloading to remote clouds is not practicable for services and applications that depend solely on time and place. For such place-bound services, the best position for computation is proximal to users [5]. Edge computing is an architecture that brings computation and storage capabilities to the edge of the network, in user proximity. It reduces the latency incurred due to distant clouds and can fulfil the requirements of delay-sensitive applications. It also reduces the amount of data moved through the network [6].
Mobile Edge Computing (MEC) has brought an opportunity to deploy servers with significant computational resources at the edge, in the proximity of users. With the emergence of 5G radio access networks, MEC provides a promising solution for lowering latency in task offloading. It also benefits task offloading to MEC servers from vehicles equipped with wireless and cellular connectivity [7]. Vehicular Edge Computing (VEC) similarly brings computation to the edge of the network, enabling multiple vehicles to offload their tasks to servers at RSUs. Contrary to MEC, the distinctive feature of VEC is the dynamic topology changes in vehicular networks due to the speed of vehicles. In VEC, RSUs act as VEC servers which are responsible for collecting, storing, and processing data, where vehicles have different communication, computation, and storage resources. Due to the constrained resources or critical nature of the applications, vehicles offload computation-intensive and delay-sensitive tasks to the VEC servers, which can substantially lower the latency and efficiently relieve the burden on backhaul networks [8]. Like edge computing, fog computing also provides services at devices near end users. Fog computing avoids unnecessary network hops and provides improvements in latency for delay-sensitive applications [9]. Vehicular Fog Computing (VFC) is an emerging paradigm that came into existence with the integration of fog computing and vehicular networks [10]. There are no separate dedicated servers but dynamic clusters of vehicles to decrease the latency while taking advantage of abundant computational resources. It relies on the strategy of collaborating with nearby vehicles, instead of depending on remote dedicated servers. This strategy reduces deployment costs and delays. According to [11], the three layers of the VFC architecture are the abstraction, policy management, and application services layers. VFC provides cooperation between cloud computing and fog computing in vehicular networks, realizing benefits for both user vehicles and intelligent transportation systems (ITS). Additionally, the user experience can be improved without any surplus load on V2V communication through the use of smart fog nodes at significant data sensing points [12]. When processing is pushed from the edge of the network to the user layer involving actuators and sensors, it further decreases the latency and increases the self-reliance of the system [13]. The use of processing capabilities within user devices at the user layer has been termed mist computing [14]. This represents the first computing location in the user network. It has also been labelled Things Computing, since it extends computing and storage processing to the things. Volunteer computing is an approach to distributed computing where users volunteer their idle computing resources to help in solving computation-intensive scientific problems. The basic motive for volunteer computing was to find a free-of-cost model for solving computation-intensive problems. It also solves the problem of the wastage of surplus resources in any computing device. Therefore, volunteer computing is seen as the premium option for utilizing resources in any connected computing devices. When vehicles are stuck in congestion for a long time, accessing remote servers from various vehicles induces a great load on the Internet and remote servers for task offloading.
Volunteer Computing-Based Vehicular Ad-hoc Networking (VCBV) is a new approach that is used for task execution and resource utilization in VANETs [15]. In this article, we propose a hybrid task execution method in VCBV that exploits infrastructure and ad-hoc coordination simultaneously for task execution and resource utilization. Hybrid task execution utilizes the resources of vehicles in a multi-hop fashion, which increases resource utilization by adding more resources, including those lying out of range of the job coordinator. We consider a congestion scenario where most of the resources are underutilized and task offloading to third-party service providers is at its peak, due to leisure time for drivers and passengers. In this scenario, where tasks are initiated from an RSU, coordinated with volunteer vehicles, and extended in an ad-hoc fashion, we formulate the problem and design the algorithms to solve the computation offloading and resource utilization issues. The main contributions of our work are summarized as follows. (1) We propose a hybrid task coordination model for job execution and surplus resource utilization. This model consists of infrastructure and ad-hoc task coordination operating simultaneously. (2) We propose a method to identify the boundary relay vehicles to enhance the region of resource utilization without using additional RSUs. (3) We design and validate the primary and secondary task coordination algorithms. The rest of this article is structured as follows: In Section 2, we discuss the background of task offloading in vehicles and related paradigms. Section 3 introduces hybrid VCBV coordination. In Section 4, we describe the system model along with the communication and computation models. Problem formulation regarding cost avoidance is presented in Section 5, and the proposed models and algorithms are explained in Section 6. The performance analysis is presented in Section 7 before the article is concluded in Section 8. Related Works With significant advances in technologies, new applications such as augmented/virtual reality and autonomous driving have developed. These applications have high computational requirements for execution. Unfortunately, the computational and storage resources in a single vehicle are not capable of performing these executions in a timely manner. The task offloading concept has been introduced to address these limitations in vehicles. In this concept, computation-intensive tasks are fully or partly migrated from vehicles to resource-rich remote servers/vehicles. In this section, the offloading of tasks is reviewed. We describe task offloading hosts in two categories, i.e., dedicated servers and clusters of vehicles with surplus resources. The first category, where tasks are offloaded to remote servers, includes MCC, MEC, and VEC, whereas the second category includes VCC and VFC. MCC, the integration of cloud computing with mobile computing devices, provides computing and storage services taking full advantage of cloud computing. The basic functionality of computation offloading is the decision about whether a task should be offloaded or not, and the server to which it would be offloaded [16]. Connectivity and availability of clouds are two requirements for effective task offloading, while the level of bandwidth resources and network access latency affect the offloading decision. Offloading computational tasks to distant clouds may bring additional communication overhead affecting the quality of service (QoS).
Algorithms using a game-theoretic approach have been developed to let the user decide whether to execute a task on the device itself, a cloudlet, or a remote cloud [17]. Wu et al. [18] proposed an energy-efficient algorithm based on Lyapunov optimization, which optimizes energy efficiency by switching the offloading between local, cloud, and cloudlet computing. Guo et al. [19] presented an efficient strategy for dynamic offloading and resource scheduling to optimize consumed energy and latency; the problem was formulated to minimize energy consumption and application completion time. A real testbed was used for experimentation and validation, showing the efficiency of the proposed scheme over existing schemes. However, offloading to the remote cloud under increased load can degrade performance, making the strategy unsuitable for delay-sensitive applications. Pursuing more than one optimization objective has also been considered to improve the efficiency of computational offloading: a multi-site offloading solution was proposed that addresses two objectives, average execution time and energy. For this multi-objective task offloading scheme, an algorithm was designed to address energy and execution time while taking bandwidth conditions into account [20].

To address the higher latency incurred by distant clouds, MEC places servers in proximity to users. The key idea behind MEC is to provide services at base stations, where computation-intensive and energy-consuming tasks are offloaded for execution. Usually, cellular communication services such as 4G or 5G are used to connect to the MEC server. Both partial and full offloading options are utilized: in partial offloading, some parts of the application are offloaded to the server, whereas in full offloading all parts of an application are offloaded to the MEC server. Since MEC uses proximal servers to minimize the delay incurred by distant clouds, it is also suitable for computation offloading in vehicular networks [21]. The use of MEC in vehicular networks can improve interactive responses during computational offloading for delay-sensitive applications. However, the additional offloading load from dense vehicle traffic, on top of that from mobile devices, may lead to a sub-optimal makespan at MEC servers [22,23].

In VEC [24,25], computational and processing tasks are likewise offloaded from vehicles to proximal servers. Earlier research has addressed reputation management [26] and low-latency caching [27]. In [28], a multi-objective VEC task scheduling algorithm is proposed for task offloading from user vehicles to MEC vehicles; extensive simulations show reduced task execution time with high reliability. A mobility-aware task offloading scheme [29] and a collaborative computation offloading and resource allocation optimization scheme [30] have been proposed for computation offloading in MEC. Dai et al. [31] considered task offloading and load balancing jointly: JSCO, a low-complexity algorithm, was proposed to address the problem of server selection and task offloading, and numerical analysis demonstrated the effectiveness of the solution. The main problem area explored was the reduced link duration of users with static servers. The load on communication and computation resources can be effectively managed through the use of scheduling algorithms in distributed environments.
Fog computing and vehicular network approaches can be combined to utilize the surplus resources of vehicles through vehicular fog nodes. In VFC, computational task offloading can be performed using moving or parked vehicular fog nodes. Hou et al. [32] presented the concept of VFC, in which vehicles are utilized as infrastructure; their approach is based on the collaborative utilization of the communication and computation resources of several edge or end-user devices. Due to the wide geographical distribution of fog computing, VFC is a better option for delay-sensitive applications in vehicular networks [33]. In [34], VFC is shown as comprising three layers, namely the cloud, cloudlet, and fog layers, which cooperate for network load balancing. Resource allocation in VFC is a major challenge since the resources are geographically distributed; it is therefore necessary to allocate resources appropriately to minimize service latency. For applications with diverse QoS requirements, the admission control problem has been solved using a game-theoretic approach, and a scheduling algorithm achieves the QoS requirements and scalability [35]. In another work [36], public service vehicles are used as fog nodes for task offloading using a semi-Markov decision process; to increase the long-term reward and obtain an optimal allocation of resources, an application-aware policy is used for offloading. Zhou et al. [37] presented a model that minimizes the load on the base station by using the underutilized resources of vehicles, with the help of an efficient incentive mechanism and a pricing-based stable matching algorithm. Vehicular fog computing has also been employed to park vehicles efficiently [38]: a scheme guides vehicles to parking places using fog nodes and smart vehicles, efficiency is achieved with the help of parked and moving vehicles with surplus resources, and the vehicles participating in service offloading are incentivized with monetary rewards.

In task offloading, the total delay, comprising communication and computation delays, can be critical for delay-sensitive jobs. For VFC systems that provide offloading services, the long-term reward is very important; it depends on resource availability, heterogeneity, and transmission and computation delays. Wu et al. [39] formulated task offloading as a semi-Markov decision process (SMDP) comprising the components required for task offloading; with the help of an iterative algorithm based on the 802.11p standard, the target of maximal reward was achieved. Vehicles with automated driving capabilities must have accuracy and sensing coverage; to overcome the limitations of the computing resources of a single vehicle, Du et al. [40] proposed Li-GRU, and their simulations show improvements in the sensing and coverage of a single vehicle. Parallel computing is an effective way to complete tasks on time: resource-aware parallel offloading [41] was proposed to find suitable nodes for task offloading, and its effectiveness was validated through simulations.

In this article, the idle resources of vehicles stuck in traffic are utilized using hybrid VCBV to execute jobs offloaded to a central entity (an RSU) from vehicles, pedestrians, or Internet of Things (IoT) devices. The objective is to fully utilize these resources in a multi-hop fashion without deploying additional infrastructure, while avoiding the monetary costs payable to third-party vendors.
Hybrid Volunteer Computing Based VANET

Volunteer computing (VC) is a type of distributed computing in which any computing device can voluntarily share its surplus computing resources to perform computation-intensive tasks. Using volunteer computing, resource-intensive tasks can be performed without expensive computing infrastructure, and VC has previously been applied successfully in a variety of domains to solve computation-intensive tasks [42]. The number of vehicles on roads is growing rapidly, and the resources of vehicles in the form of on-board units (OBUs) - the small computers mounted on vehicles for communication and computation - are often left idling; they can be utilized with the help of volunteer computing. To utilize the surplus resources in VANETs, volunteer computing and VANETs have been merged into a new architecture named VCBV [15]. The computing power of vehicles can be utilized without requiring connectivity to the Internet, whether vehicles are parked or idling in congestion. Amjid et al. [43] used volunteer computing over VANETs to support autonomous vehicles, utilizing resources through a centralized job manager; a number of algorithms, differentiated by node registration, were evaluated for job completion rate, latency, and throughput using NS2 and SUMO. However, hybrid coordination that uses infrastructure and ad-hoc networking simultaneously for resource utilization has not yet been considered. Further, the impact of volunteer computing in VANETs on the makespan and monetary cost of a job has not been evaluated.

In this article, we use hybrid VCBV to utilize the resources of vehicles in congestion. The major advantage of this type of computing is that resources are utilized within the VANET, thereby reducing latency. In hybrid VCBV, the RSU maintains a queue of jobs received from pedestrians, vehicle drivers, passengers, or even IoT devices; DSRC communication is used for the initial offloading to the RSU. The RSU orders the jobs and selects which jobs to coordinate. It receives willingness notifications from volunteers located in its communication range and partitions the selected jobs into the appropriate number of tasks. In the hybrid VCBV scenario, an RSU can select another job coordinator, which can be another RSU or a willing volunteer vehicle; this second coordinator is known as the secondary coordinator and can be reached through boundary relay vehicles. The primary difference between hybrid and other types of VCBV is that hybrid uses both RSU and ad-hoc task coordination simultaneously, as shown in Figure 1.

Hybrid VCBV System Model

In this section, we present our proposed hybrid VCBV architecture and elaborate on the system model in detail. The important notations used in this paper are presented in Table 1.

Network Model

The scenario considered in this paper is one of vehicles in congestion that voluntarily process tasks. The network model of hybrid VCBV is illustrated in Figure 2. In the scenario, there is a primary job initiator/coordinator, a secondary initiator/coordinator, and volunteers. The details are as follows.

Primary Job Initiator. In hybrid VCBV, a vehicle, RSU, pedestrian, or IoT device having some job to be performed acts as a job initiator. The job initiator sends a job (or jobs) to the RSU for further coordination. The job initiator and task coordinator might be the same or different depending upon the situation.
Primary Task Coordinator. In hybrid VCBV, an RSU is usually the primary task coordinator, receiving jobs from the primary job initiator. It schedules the jobs according to priority/incentives and obtains willingness notifications from volunteers. After receiving the willingness notifications, it partitions each selected job into the required number of tasks and coordinates the tasks among suitable volunteers.

Volunteer Vehicles. Volunteer vehicles are the vehicles present in the communication range of a task coordinator that are willing to participate in volunteer computing. In the aforementioned scenario, these vehicles are in congestion and can be used as volunteer resources to perform computational tasks. A job is partitioned into tasks according to the available volunteer resources. We assume there are $n$ vehicles in the communication range of the job initiator (RSU/vehicle) willing to serve as volunteers, and denote the set of vehicles as $V = \{1, 2, 3, \ldots, n\}$.

Secondary Job Initiator. Boundary relay nodes chosen from the $n$ volunteers can play the role of secondary job initiators to maximize resource utilization and minimize the makespan incurred during job execution. If the distance of vehicle $i$ from the primary job coordinator is larger than the distance between the coordinator and every other volunteer vehicle, then vehicle $i$ is termed a boundary relay node, as shown in Figure 3. Let $R_r$ be the communication range of the RSU, $D_{ir}$ the distance between vehicle $i$ and the RSU, and $\delta_{ir} = R_r - D_{ir}$. Node $i$ is a boundary relay node if $\delta_{ir}$ is the minimum positive value over all $i \in V$. From all boundary nodes, the two nodes $i$ and $j$ with the maximum distance $D_{ij}$ between them are selected as secondary job coordinators.

Secondary Task Coordinator. Either the secondary job initiator obtains the willingness of volunteers in its communication range and acts as coordinator, or it forwards the task to another vehicle or an RSU, which then acts as task coordinator. This type of coordinator is termed a secondary task coordinator; it accumulates further volunteers, resulting in increased resource utilization and an optimized makespan.

Communication Model

In the scenario we have presented, it is assumed that vehicles are stopped and use the IEEE 802.11p standard for V2V and V2R communication, providing 3 Mbps to 27 Mbps data rates over a 10 MHz bandwidth [44]. Request-to-send (RTS) and clear-to-send (CTS) mechanisms are used to reduce collisions during task transmission and result gathering. The data transmission rate for V2V and V2R links follows Shannon's formula,

$R_t = b \log_2(1 + \mathrm{SNR})$,

where $R_t$ is the data transmission rate of the wireless channel, $b$ is the allocated bandwidth, and SNR is the signal-to-noise ratio,

$\mathrm{SNR} = \dfrac{P}{I + \sigma}$,

where $P$ is the received signal power of the channel, $I$ is the interference, and $\sigma$ is the noise power. The received power $P$ decays with the distance $d$ between the two communicating entities according to the path loss component $\alpha$. The data transmission latency between the RSU and a volunteer vehicle $i$ is

$t_i^{tr} = \dfrac{tp_i^{IS}}{R_t}$,

where $tp_i^{IS}$ is the task input size allocated to vehicle $i$.
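To make the communication model concrete, here is a minimal sketch of the Shannon-rate and upload-latency computations; the numeric power, interference, and noise values are illustrative assumptions, not parameters taken from the paper.

```python
import math

def shannon_rate(bandwidth_hz: float, snr: float) -> float:
    """Achievable data rate R_t = b * log2(1 + SNR), in bits/s."""
    return bandwidth_hz * math.log2(1.0 + snr)

def snr(received_power_w: float, interference_w: float, noise_w: float) -> float:
    """Signal-to-noise(-plus-interference) ratio SNR = P / (I + sigma)."""
    return received_power_w / (interference_w + noise_w)

def tx_latency(task_input_bits: float, rate_bps: float) -> float:
    """Time to upload a task of tp_i_IS bits over a link of rate R_t."""
    return task_input_bits / rate_bps

# Illustrative values: 10 MHz 802.11p channel, assumed received powers.
p_rx, interf, noise = 1e-9, 2e-11, 5e-11     # watts (assumed)
rate = shannon_rate(10e6, snr(p_rx, interf, noise))
print(f"rate = {rate/1e6:.1f} Mbit/s, "
      f"upload of a 1000 kbit task = {tx_latency(1_000_000, rate)*1e3:.2f} ms")
```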
Task Model

Here we present the task model for hybrid VCBV. Each job can be partitioned into a number of distinct tasks of the same size, which may be carried out on OBUs. Every task is represented by a tuple $tp_i = \langle tp_i^{ID}, tp_i^{IS}, tp_i^{CR} \rangle$, where $i$ is the ID of the vehicle from the set $V$ willing to participate in task execution, $tp_i^{ID}$ is the separate identity allotted to each partitioned task, $tp_i^{IS}$ is the input size (in bits) of the task sent, and $tp_i^{CR}$ is the computational resources required (CPU cycles per bit) to complete task $tp_i$. Task processing mainly relies on the input size ($tp_i^{IS}$) and the computational requirement ($tp_i^{CR}$), also known as the complexity factor. This factor is crucial for capturing distinct computational requirements: some tasks, such as applying filters to an image, normally require fewer CPU cycles than running a face-detection algorithm on a video [45].

Vehicle Computation Model

The makespan incurred for a job consists of three types of delay for a single task, namely transmission time, computation time, and result collection time. Transmission time depends upon the transmission rate of the channel and the size of the task. The computation time of a task relies on two elements: the computational requirements of the task and the computational capability $f_i$ of the volunteer vehicle. The third type of delay is the result collection time from the volunteer back to the RSU, which depends on the size of the output data. The time taken for a task to complete its execution on a volunteer vehicle is

$t_i^{ex} = \dfrac{tp_i^{IS} \cdot tp_i^{CR}}{f_i}$.

The total time to transmit and execute a task on a volunteer vehicle is

$T_i = t_i^{tr} + t_i^{ex} + t_i^{res}$,

where $t_i^{res}$ is the result collection time. Since the tasks run in parallel on the volunteers, the total makespan for a job $j$ completed with the help of $n$ vehicles is

$T_j^{VC} = \max_{i \in V} T_i$,

and the average execution time over all $m$ jobs is

$\bar{T}^{VC} = \dfrac{1}{m} \sum_{j=1}^{m} T_j^{VC}$.

Cloud Computation Model

Offloading from a vehicle to the cloud includes transmission from the vehicle to the RSU and then from the RSU to the cloud. Vehicles use DSRC for connectivity to the RSU, and backhaul links such as fiber and core networks are used to offload jobs from the RSU to cloud servers placed thousands of miles away [30]. Transmission time includes offloading the input tasks and getting back the results. The total time to offload and execute a job $j$ on the cloud is

$T_j^{CC} = \alpha\,(tp_j^{IS} + D_o) + \beta\, tp_j^{IS}\, tp_j^{CR}$,

where $\alpha$ and $\beta$ are constants (reflecting the RSU-to-cloud bandwidth and the cloud computation capability) and $D_o$ is the output data size. For the aforementioned scenario, it is assumed that all jobs are already at the RSU, so the vehicle-to-RSU transmission term is omitted; the average cloud execution time over all $m$ jobs is obtained in the same way as for volunteer execution.

Edge Computation Model

Edge servers are placed at RSUs installed alongside the roads; they play the role of wireless access points and are smaller but closer computation and data centers compared to cloud servers. After receiving a job from a vehicle, the RSU places the job in a queue and executes it in turn. In the aforementioned scenario, we assume that all jobs are present in the queue of the RSU, so the computation time for job $j$ at the edge is

$T_j^{EC} = \gamma\, tp_j^{IS}\, tp_j^{CR}$,

where $\gamma$ is a constant reflecting the edge computation capability.

System Utility Function

In this subsection, we define the system utility function ($S_{uf}$), which depends upon latency and monetary cost, the two important metrics for task offloading. Since low latency and low cost are the efficiency requirements for task offloading, the system utility function increases monotonically with a decrease in latency or in the cost paid, and it represents user satisfaction. The function combines a latency term and a monetary cost term, where $P_c$ is the price coefficient and $\theta$ and $\psi$ are the corresponding weight constants.
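A minimal sketch of the computation models above, assuming (as our reading of the parallel model) that a job's makespan is set by its slowest volunteer; the task parameters mirror the simulation values quoted later, and the helper names are ours.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: int
    input_bits: float      # tp_i_IS
    cycles_per_bit: float  # tp_i_CR (complexity factor)

def task_time(t: Task, rate_bps: float, cpu_hz: float,
              output_bits: float) -> float:
    """Transmission + computation + result-collection time for one task."""
    t_tx = t.input_bits / rate_bps                    # offload the input
    t_ex = t.input_bits * t.cycles_per_bit / cpu_hz   # execute on the OBU
    t_res = output_bits / rate_bps                    # return the results
    return t_tx + t_ex + t_res

def job_makespan(tasks, rate_bps, cpu_hz, output_bits):
    """Tasks run in parallel on volunteers; the job ends with the slowest."""
    return max(task_time(t, rate_bps, cpu_hz, output_bits) for t in tasks)

tasks = [Task(i, 1_000_000, 1500) for i in range(20)]  # 20 equal tasks
print(f"makespan = {job_makespan(tasks, 27e6, 1e9, 200_000):.2f} s")
```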
Avoiding Costs Paid to Third-Party Vendors

We formulate the optimization problem of lowering the makespan for task execution while considering the monetary cost at the same time. According to the communication and computation models explained above, the system optimization problem relies on these two factors. Strategies lacking balanced resource allocation can degrade the performance of the model and raise the offloading latency above that of local computing. The optimization minimizes the makespan against a benchmark given by the total job execution time on a single vehicle. The optimization objectives are: (1) minimize the job execution time; (2) minimize the cost paid to third-party vendors; and (3) restrict the makespan to the benchmark. The solution to our problem is based on achieving these objectives while identifying the possible constraints; any coordination algorithm that fulfils the objectives while handling the constraints is considered suitable. The proposed algorithm must satisfy both computation and communication constraints: (C1) the computations performed by a vehicle cannot exceed the resources it owns; (C2) the link expiration time (LET) between the job coordinator and a volunteer vehicle must not be less than the time taken by the volunteer to complete the task execution; and (C3) the task transmission time of offloading to volunteers or to the cloud must not exceed the computation time at the edge server.

Proposed Offloading and Resource Allocation Model

In this section, hybrid VCBV is proposed for resource allocation during task execution. We consider a congested road as shown in Figure 4. The solution to the above problem encompasses a strategy of multi-hop task coordination to fully utilize the surplus resources of vehicles beyond the range of an RSU. A decomposition technique is used to fragment the problem for solution and optimization: to maximize the system utility, the problem is divided into boundary relay vehicle determination (BRVD), hybrid VCBV task coordination (HVTC), and secondary task coordination (STC). We design algorithms for resource utilization using hybrid VCBV, without using any edge or cloud server.

Boundary Relay Vehicle Determination Algorithm

To achieve resource utilization in VCBV, multi-hop access to volunteers is used. In task coordination, boundary relay vehicles are determined after identifying the willingness of volunteer vehicles in the communication range of the RSU. These boundary relay vehicles are used to reach the volunteer vehicles that are out of range of the RSU; they are chosen for secondary task coordination in order to enlarge the region of task coordination. On a congested road, vehicles on both sides of an RSU can play the role of boundary relay vehicles, and each side of the RSU has exactly one boundary node, which plays the role of secondary task coordinator. Algorithm I determines the boundary relay nodes from the set of volunteers $V$. It first computes the distance between the RSU and all the volunteers during the beaconing process; on each side of the RSU, the vehicle with the maximum distance that is still within the communication range of the RSU is the boundary relay vehicle for primary task coordination.

Algorithm I: Proposed BRVD algorithm for hybrid VCBV.
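The BRVD idea can be sketched in a few lines: among in-range volunteers, the boundary relay on each side of the RSU is the vehicle minimising the positive margin δ_ir = R_r − D_ir. The one-dimensional signed-position convention and the example values are assumptions for illustration.

```python
def boundary_relays(positions: dict, rsu_pos: float, rsu_range: float):
    """Pick one boundary relay vehicle on each side of the RSU.

    positions: vehicle_id -> position along the road (metres).
    Returns (left_id, right_id); None if a side has no in-range volunteer.
    """
    best = {"left": (None, float("inf")), "right": (None, float("inf"))}
    for vid, pos in positions.items():
        dist = abs(pos - rsu_pos)
        margin = rsu_range - dist          # delta_ir = R_r - D_ir
        if margin <= 0:                    # out of communication range
            continue
        side = "left" if pos < rsu_pos else "right"
        if margin < best[side][1]:         # smallest positive margin wins
            best[side] = (vid, margin)
    return best["left"][0], best["right"][0]

vehicles = {1: -280.0, 2: -120.0, 3: 40.0, 4: 260.0, 5: 295.0}
print(boundary_relays(vehicles, rsu_pos=0.0, rsu_range=300.0))  # -> (1, 5)
```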
Hybrid VCBV Task Coordination Algorithm

As mentioned before, to execute the jobs and utilize the surplus resources of vehicles stuck in traffic congestion, we use hybrid VCBV task coordination. This type of coordination leverages infrastructure and ad-hoc coordination simultaneously. To enhance resource utilization and optimize the system utility, hybrid task coordination combines primary and secondary task coordination. Based on the problem analysis and constraints, the HVTC algorithm is used to maximize the system utility. We decouple the optimization problem into resource allocation, primary task coordination, determination of boundary relay vehicles, and secondary task execution. The algorithm obtains the willingness of $n + 2$ vehicles, of which $n$ vehicles are volunteers and two are boundary relay nodes. It picks three jobs for simultaneous task coordination: the first job is executed through primary task coordination with the resources available in the communication range of the job coordinator, while the second and third jobs are offloaded to the boundary relay vehicles for secondary task coordination and allocated to volunteers outside the range of the primary task coordinator. The RSU is responsible for collecting and aggregating the results from the primary and secondary task coordination nodes.

Algorithm II: Proposed HVTC algorithm for hybrid VCBV.

Secondary Task Coordination

This type of coordination is executed in two modes, depending upon the availability of sufficient volunteers. In the first case, the boundary relay vehicle obtains the willingness of $n$ volunteer vehicles and acts as the secondary task coordinator. In the second case, failing to obtain willingness from sufficient volunteers, the boundary relay vehicle offloads the job to another vehicle willing to act as a coordination node. The STC algorithm shows the whole process of task coordination; a simplified sketch of the hybrid flow is given below.
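The hybrid flow can be summarised as follows; the Relay class, the helper names, and the fixed three-job split are illustrative stand-ins rather than the paper's actual algorithm listings.

```python
from dataclasses import dataclass, field

@dataclass
class Relay:
    vid: int
    out_of_range_volunteers: list = field(default_factory=list)
    def recruit(self):
        """Gather willing volunteers beyond the RSU's communication range."""
        return self.out_of_range_volunteers

def partition(job, n):
    """Split a job into n equal-sized tasks (placeholder naming scheme)."""
    return [f"{job}-t{k}" for k in range(n)]

def execute(task, vehicle):
    """Stand-in for offloading a task to a volunteer and collecting results."""
    return f"{task}@v{vehicle}"

def hybrid_coordination(jobs, volunteers, relays):
    results = {}
    primary = jobs.pop(0)                       # first job: primary coordination
    results[primary] = [execute(t, v) for t, v in
                        zip(partition(primary, len(volunteers)), volunteers)]
    for relay in relays:                        # next jobs: secondary coordination
        if not jobs:
            break
        job = jobs.pop(0)
        vols = relay.recruit()                  # volunteers beyond RSU range
        results[job] = [execute(t, v) for t, v in
                        zip(partition(job, len(vols)), vols)]
    return results                              # the RSU aggregates all results

relays = [Relay(1, [101, 102]), Relay(5, [201, 202, 203])]
print(hybrid_coordination(["J1", "J2", "J3"], list(range(10, 30)), relays))
```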
Performance Evaluations

In this section, simulation experiments conducted in NS3 and Python are described. The proposed HVTC is evaluated and compared with the following schemes, of which only RVC uses volunteer computing for the execution of jobs.

• The Entire Local Computing (ELC) scheme, where all the jobs are executed locally on the vehicles. We take ELC as the benchmark for the offloading decision: any job whose expected makespan exceeds that of ELC is rejected for offloading.

• The Entire Cloud Computing (ECC) scheme, where all the jobs are offloaded to cloud servers for execution. ECC is modelled using the eDors algorithm [19], which optimizes consumed energy and latency using dynamic offloading and resource scheduling at the cloud.

• The Entire Edge Computing (EEC) scheme, where all the jobs are executed at edge servers. In VEC, these edge servers are placed at RSUs and termed VEC servers. We use JSCO [31], a low-complexity algorithm, to model EEC.

Simulation Setup

In the simulations, an RSU is placed near a 1000 m straight road congested with vehicles, as shown in Figure 3. The VEC server and cloud server have computational capabilities of 2 × 10^10 and 1.5 × 10^11 CPU cycles per second, respectively; the vehicles have a computational capability of 1 × 10^9 CPU cycles per second [46]. The backhaul link capacity to the cloud ($R_t^{R2C}$) is 10^7 bits/s, and the output data size ($D_o$) is 200 Kb. $\alpha$, $\beta$, and $\gamma$ are constants that depend on the available RSU-to-cloud communication bandwidth, the cloud computation capability, and the edge computation capability, respectively.

In hybrid VCBV, an RSU obtains the willingness of volunteer vehicles by sending beacon frames (BFs) and receiving beacon frame responses (BFRs). A BF contains information regarding the task to be offloaded; from the BF, the volunteer vehicles acquire the information on the required resources and send a BFR to the RSU indicating the availability and willingness of the volunteer. After obtaining the willingness of sufficient volunteers, the RSU sends the task (input) data for executing the computational procedure, and after execution of the assigned task, the results are sent back to the job coordinator. For RVC and HVTC, we consider one RSU located alongside a two-lane unidirectional road in an urban environment. In our simulation, we consider n = 20 vehicles, which may increase in real situations depending upon the willingness of volunteers. We assume these vehicles are in congestion, and resource utilization may reach a higher level depending upon the availability and willingness of volunteers. RVC considers 20 vehicles for coordination and performs its task coordination after obtaining the willingness of these volunteer vehicles, whereas HVTC considers 22 vehicles for primary coordination, of which 2 vehicles act as boundary relay vehicles that then take part in secondary task coordination. We use NS-3.27 to determine the communication costs for initialization, task offloading, and the return of results; these results are used for the numerical analysis. Table 2 shows the parameter settings for experimentation, including the computation resource cost at the edge ($0.03/GHz [30]) and the task computational requirements $tp_i^{CR}$ (1500 CPU cycles per bit [47]).

Performance Comparisons

In this section, we evaluate the performance of ELC, ECC, EEC, RVC, and HVTC in terms of average execution time and system utility for three different scenarios. In the first scenario, these parameters are compared for different numbers of tasks. In the second scenario, they are analyzed for a fixed number of tasks but different task sizes. In the third scenario, the analysis is conducted for varied task computational requirements, with the input size and number of tasks kept constant.

Different Numbers of Tasks

In this scenario, the size of a task is fixed at 1000 Kbits and the number of tasks varies from 10 to 50. We first compute the average execution time and system utility function for ELC; the performance of ELC is taken as the benchmark for all other computing algorithms. In Figure 5, we observe the benchmark values for average execution time and system utility: any task with a higher average execution time than ELC will be rejected for offloading by any of the computing algorithms. The total task execution time increases with the number of tasks, but the average execution time remains constant across different numbers of tasks owing to the fixed computation requirements and similar OBU types. It is observed in Figure 6 that the average execution time for a small number of tasks is lower when using cloud or edge mechanisms than with the VCBV algorithms; this good performance is due to the higher computation resources of cloud and edge computing compared to the OBUs within vehicles. As the number of tasks increases, the performance of ECC and EEC decreases due to communication and computation constraints. Both RVC and HVTC use volunteer computing for resource allocation but differ in the number of hops, and HVTC shows better performance than RVC. The reason is the better resource allocation in HVTC due to multi-hop communication: even for a smaller number of tasks, HVTC uses three times more resources.
It uses the same number of volunteers as RVC in primary coordination alone; multi-hop resource allocation then increases the number of volunteers, resulting in lower computation time. This technique optimizes the makespan but occupies more communication resources during the offloading process. Figure 7 shows the simulation results of the system utility function for the computing algorithms over a varying number of tasks. The system utility of any computing algorithm depends on the makespan and the cost paid to third-party vendors for cloud and edge provision: the lower the makespan and monetary cost, the higher the system utility of the algorithm. The reason to use system utility for comparison is to highlight the importance of free-of-cost computing services. RVC and HVTC perform better and have higher system utilities than ECC and EEC because they use volunteer computing and incur no monetary cost.

Varied Task Size

For a near-optimal solution, effective computation offloading relies on the makespan, which comprises communication and computation delays. The computation cost can be optimized by drawing more resources from the cloud, the edge, or volunteers; similarly, the communication cost depends on the size of the input and output data. We have performed experiments to determine the effect of varied task size on communication and computation costs. First, we perform local computation and analyze the effect of varied data size on average execution time and system utility. Figures 8 and 9 show the benchmark performance for average execution time and system utility for various task sizes: with increasing input data size, the average execution time increases while the system utility decreases. For the simulations, we fix the number of tasks at 20 and vary the task input data size from 400 Kb to 1000 Kb. From Figure 10, it can be observed that EEC and ECC have higher average execution times than the volunteer computing-based algorithms; the increase in task size affects the performance of edge and cloud due to increased communication and computation requirements, and a task with a smaller input size has a lower execution time. According to Figure 11, ECC and EEC have smaller system utility than the volunteer computing-based algorithms.

Varied Computational Requirements

Task offloading in vehicular networks is usually performed for two reasons. The first is when the processing requirements of a task exceed the computational capacity of a vehicle; the second is when a deadline must be met that is not achievable with ELC due to its higher makespan. The decision to offload a task or not usually depends on the ratio of computing to communication costs; a third factor on which the makespan incurred by task offloading depends is the task computational requirements. In this scenario, the task size and the number of tasks are fixed at 1000 Kbits and 20, respectively, and we vary the task computational requirements from 150 to 1500 CPU cycles per bit. These computational requirements cover different types of workloads, from data processing to video processing tasks [49]. From Figure 12, it is observed that tasks with computation requirements of less than 300 CPU cycles per bit have a better average execution time locally than when offloaded to other devices: here the communication cost incurred by offloading exceeds the time required for ELC. A back-of-the-envelope check of this break-even point is given below.
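Offloading pays only when the saved computation time per bit exceeds the added communication time per bit. Using the OBU and edge capabilities quoted in the setup and an assumed effective link rate of 3 Mbit/s (the low end of 802.11p), the break-even complexity lands near the observed threshold; the link rate is our assumption.

```python
def breakeven_cycles_per_bit(rate_bps, f_local_hz, f_remote_hz):
    """Smallest tp_CR for which offloading beats local execution.

    Local:   time per bit = c / f_local
    Offload: time per bit = 1 / rate + c / f_remote
    Equality gives c* = 1 / (rate * (1/f_local - 1/f_remote)).
    """
    return 1.0 / (rate_bps * (1.0 / f_local_hz - 1.0 / f_remote_hz))

# OBU at 1e9 cycles/s, edge server at 2e10 cycles/s (from the setup above),
# assumed effective 802.11p rate of 3 Mbit/s:
print(f"{breakeven_cycles_per_bit(3e6, 1e9, 2e10):.0f} cycles/bit")
# -> ~351 cycles/bit, consistent with the ~300 threshold seen in Figure 12
```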
RVC and HVTC show better performance than the other offloading techniques. Since HVTC incurs additional offloading overheads, it performs almost the same as RVC for tasks with low computational requirements. EEC also performs well in this scenario because its execution time depends mainly on the computational requirements. Figure 13 shows the system utility values for varied task computational requirements. ELC shows better system utility because it involves neither offloading nor monetary costs: even with lower computational capabilities, it performs well for tasks with low computational requirements. HVTC has the highest system utility except for tasks with computational requirements of less than 300 CPU cycles per bit; like ELC it involves no monetary cost, but it does incur a communication cost for offloading.

Conclusions

In this article, we have proposed a hybrid volunteer computing-based model in vehicular networks to minimize latency and maximize system utility. We achieve this by utilizing the surplus resources in vehicular networks; in particular, the surplus resources of vehicles in congestion are considered for efficient utilization. The volunteer model not only optimizes latency but also reduces the monetary costs of task offloading to third-party vendors. We analyze the task coordination model in a single- and multi-hop fashion using boundary relay nodes, which minimizes the need for additional infrastructure. Extensive simulations validate the performance of the hybrid coordination model and show that hybrid VCBV not only achieves lower latency but also a higher system utility than existing schemes. It saves the financial costs of employing task offloading services, utilizes surplus resources, and achieves a lower makespan given sufficient availability and willingness of volunteers. The VCBV model supplements edge and cloud technologies and minimizes third-party reliance. Our proposed model considers the resource utilization of vehicles stuck in congestion in an urban environment; in future work, we will consider the resource utilization of vehicles moving on highways using game theory.

Conflicts of Interest: The authors declare no conflict of interest.
Monitoring temporal opacity fluctuations of large structures with muon radiography: a calibration experiment using a water tower

The use of secondary cosmic muons to image the density distribution of geological structures has developed significantly during the past ten years. Recent applications demonstrate the method's interest for monitoring magma ascent and volcanic gas movements inside volcanoes, and muon radiography could also be used to monitor density variations in aquifers and the critical zone in the near surface. However, the time resolution achievable by muon radiography monitoring remains poorly studied: it is biased by fluctuation sources exterior to the target, and statistically affected by the limited number of particles detected during the experiment. The present study documents these two issues within a simple and well-constrained experimental context: a water tower. We use the data to discuss the influence of the atmospheric variability that perturbs the signal, and propose correction formulas to extract the muon flux variations related to the water level changes. Statistical developments establish the feasibility domain of muon radiography monitoring as a function of target thickness (i.e. opacity). Objects with a thickness of ≈50 ± 30 m water equivalent correspond to the best time resolution. Thinner objects have a degraded time resolution that strongly depends on the zenith angle, whereas for thicker objects (like volcanoes) the time resolution does not.

Using the secondary cosmic ray muon component to image geological bodies like volcano lava domes has been the subject of increasing interest over the past ten years. Much like medical X-ray radiography, muon radiography aims at recovering the density distribution, ρ, inside targets by measuring their screening effect on the natural flux of cosmic muons. This approach was first tested by George 1 to measure the thickness of the geological overburden of a tunnel in Australia, and later by Alvarez et al. 2, who imaged the Egyptian pyramid of Chephren in search of a hidden chamber.
The method then lay dormant until recent years when, thanks to progress in electronics and particle detectors, field instruments were designed and constructed by several research teams worldwide 3,4. Muon radiography experiments have successfully been performed on volcanoes, where the hard muon component is able to cross several kilometres of rock 3,5-12. Applications to archaeology 13, civil engineering (tunnels, dams) and environmental studies (near-surface geophysics) are the subject of active research, and monitoring density changes in the near surface constitutes an important objective in hydrology and soil sciences.

The material property that can be recovered with muon radiography is the opacity, $\varrho$, which quantifies the amount of matter encountered by the muons along their travel path, $L$, across the volume to image,

$\varrho = \int_L \rho \, \mathrm{d}l .$

Generally, the opacity is expressed in [g cm^-2] or, equivalently, in centimetres water equivalent [cm.w.e.]. Muons lose their energy in matter through ionisation processes 14 at a typical rate of 2.5 MeV per opacity increment of 1 g cm^-2. They are relativistic leptons produced in the upper atmosphere at an altitude of about 16 km 15,16, and reach the ground after losing about 2.5 GeV to cross the 10 m.w.e. of opacity represented by the atmosphere. Muons travel along straight trajectories across low-density materials, including water, concrete and rocks, and scattering is significant only in high-density materials like lead and uranium 14. However, low-energy muons (E ≤ 1 GeV) scatter strongly in almost all materials. Muon radiography of kilometre-size objects like volcanoes involves the hard muonic component with energy above several hundreds of GeV. In such cases, the incident muon flux may reasonably be considered stationary, azimuthally isotropic and dependent only on the zenith angle 7,15, and simple flux models can be used to determine the screening effects produced by the target to image 7,17.

The situation is different in environmental and civil engineering applications, where the bodies have low opacities (i.e. several tens of m.w.e.) that can be crossed by the soft muonic component (i.e. several GeV), which can no longer be considered stationary and isotropic. The main causes of non-stationarity and anisotropy of the soft muon component 15,16 are pressure variations at ground level 18, and geomagnetic storms together with solar coronal mass ejections (CME) 19,20. Both the time-variations of atmospheric pressure and CME involve time constants of several hours or days 21, which are of critical importance when monitoring fast density changes in low-opacity bodies. During CME and the associated magnetic storms, one generally observes Forbush decreases that correspond to a deficit of cosmic rays of 1% or 2% at ground level, but variations up to 10% have been reported 22. The atmospheric pressure variations typically produce relative muon flux changes of 0.1% hPa^-1, i.e. variations of several percent during perturbed meteorological conditions. Consequently, monitoring subtle density changes in low-opacity targets necessitates a precise correction of the muon flux time-variations induced by both atmospheric pressure variations and possible intense geomagnetic events. The present study aims at contributing to the development of procedures to monitor density changes in low-opacity bodies. We apply and discuss a simple way to suppress atmospheric pressure effects from muon counting data.
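To fix orders of magnitude for the loss rate quoted above, the following sketch estimates the minimum energy a muon needs to cross a given opacity; the purely linear ionisation-loss approximation is an assumption that ignores radiative losses, which become important at high energies.

```python
DEDX_MEV_PER_GCM2 = 2.5   # ionisation loss rate quoted in the text

def min_energy_gev(opacity_mwe: float) -> float:
    """Minimum muon energy (GeV) to cross an opacity given in m.w.e.

    1 m.w.e. = 100 g cm^-2, and the loss is ~2.5 MeV per g cm^-2
    (linear continuous-slowing-down approximation).
    """
    opacity_gcm2 = opacity_mwe * 100.0
    return DEDX_MEV_PER_GCM2 * opacity_gcm2 / 1000.0

for target, mwe in [("atmosphere", 10), ("water tank (full)", 5),
                    ("volcano lava dome", 1000)]:
    print(f"{target:20s} {mwe:6.0f} m.w.e. -> E_min ~ {min_energy_gev(mwe):7.2f} GeV")
```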
We present a controlled experiment performed on a water tank tower whose opacity fluctuates in a significant range (3 m.w.e. < $\varrho$ < 5 m.w.e.), where atmospheric effects are expected to significantly perturb the incident cosmic muon flux. We made measurements during a period of several weeks while the opacity remained steady at its maximum level before fluctuating. Meanwhile, the water level in the tank, the atmospheric pressure, and the geomagnetic activity were monitored in order to evaluate their relative importance in producing muon flux variations across the water volume. Finally, a discussion of the time resolution in muon radiography monitoring is presented, with particular emphasis on the case of low-opacity targets.

The SHADOW experiment

The SHADOW experiment measured the time-variations of the muonic component while the water level varied in a water tower. For this purpose, we placed a muon telescope (its description is given in the Methods section below) along the tower symmetry axis and below the tank. We oriented the instrument vertically (i.e. central zenith angle θ = 0°) as shown in Fig. 1, so that the apparent opacity depends only on the zenith angle (azimuthal invariance) and on time (when the water level h(t) is changing). The water tower is located in Tignieu-Jameyzieu, France, a village 20 kilometres east of Lyon (altitude 230 m above sea level, X UTM = 31 669490, Y UTM = 5067355). The distance between the upper and the lower matrices is set at 195 cm to cover a zenith angle range 0° ≤ θ ≤ 22.3°, such that all 961 lines of sight of the telescope pass through the water. The solid angle spanned by the telescope equals Ω_int = 0.161 sr, and the total effective acceptance $\mathcal{T}_{int}$ = 630 cm^2 sr. The data acquisition started on November 21st, 2014 and stopped on January 22nd, 2015. While the muon flux was measured under the tank, the water level was monitored to a several-cm accuracy every 5 minutes by the company in charge of the tower (Syndicat Intercommunal des Eaux de Pont-de-Chéruy, SIEPC). Atmospheric pressure and geomagnetic activity were also monitored during the experiment. It cannot be excluded that geomagnetic activity produced small variations, at the fraction-of-a-percent level 24, of the muon flux measured during the SHADOW experiment; however, such variations are expected to occur only a few times in the data time-series, and this sparsity prevents a detailed quantitative study to identify the corresponding signals. In the next section, we use hourly averages of these data series to document the relationship between the muon flux time-variations and those of both the atmospheric pressure and the water level in the tank.

Constant water level: contribution of atmospheric effects

We first consider the data acquired during the first three weeks of the measurement period, from November 22nd to December 13th 2014, when the water level in the water tower remained almost constant at its maximum level h_0 = 496 cm (Fig. 2c). Meanwhile, the atmospheric pressure varied by ±15 hPa with respect to a reference pressure p_0 = 1016.8 hPa (Fig. 2a). The muon flux shown in Fig. 2b not only fluctuates randomly, as expected for a Poissonian process, but also contains long-period variations with an amplitude of less than 3%. These long-period variations are clearly anti-correlated with those of the atmospheric pressure (Fig. 2a). Since the water level is mainly constant during the considered period, we expect the muon flux time variations to be principally caused by atmospheric effects.
The graph in Fig. 3 represents the hourly averages of the muon flux as a function of the atmospheric pressure from Saint-Exupéry airport. In this graph, only the data points for which the water level satisfies 495 cm ≤ h ≤ 496 cm are retained. A least-squares fit to these points gives a negative slope β_p = −0.0012 (0.0001) hPa^-1, where the value in parentheses is the half-width of the 95% confidence interval. We performed the fit by assigning to the relative flux averages a standard deviation σ_Φ = 0.0081 derived from the statistics of the events' arrival times, and a standard deviation σ_p = 1 hPa to the atmospheric pressure data. The standard deviation of the linear fit residuals, σ_r = 0.0093, falls near σ_Φ and indicates that no higher-order fit is required. Consequently, in the remainder, we represent the atmospheric influence on the relative muon flux with the linear relationship

$\dfrac{\Delta\Phi}{\Phi_0} = \beta_p\,\Delta p$, (2)

where Δp = p − p_0. Other studies 26-28 also find linear relationships, with coefficients falling near β_p = −0.001 hPa^-1. We do not expect the barometric coefficient derived in the present study to be strictly equal to those obtained in other experiments, since this coefficient is sensitive to the site location and especially to the telescope altitude 29,30; however, they should be of the same order of magnitude. The correction formula (2) says that a Δp = 10 hPa increase of the atmospheric pressure induces a relative muon flux decrease of 1.2%. Applying this correction to the muon flux data (black curve of Fig. 2b) efficiently reduces the long-period variations (light red curve of Fig. 2b).

Time-varying water level

We now consider the data from the second measurement period, during which we observe large water level variations (Fig. 4). It begins on December 13th 2014 and ends on January 22nd 2015. The largest decrease in the water level is nearly 200 cm with respect to h_0 (Fig. 4c). During the same period, the muon flux variations appear clearly anti-correlated with the water level (Fig. 4b), and the largest relative flux deviation reaches 15% when the water level is minimum (≈320 cm). Meanwhile, the atmospheric pressure variations (Fig. 4a) also produce conspicuous effects on the muon flux, for instance the flux bump that occurs around December 28th during a low-pressure event. The circles in Fig. 5 represent the muon flux data versus the water level. Applying the atmospheric correction (2) to the muon flux reduces the scattering of the data points and enhances the correlation between the flux and the water level (black dots in Fig. 5). The standard deviation of the uncorrected points, σ = 0.019, is reduced to σ = 0.011 for the pressure-corrected data. The standard deviation of the uncorrected points regularly increases from σ = 0.011 to σ = 0.021 as the water level increases from 3 to 5 metres, while the standard deviation of the pressure-corrected points remains constant over the whole range of water levels. We explain this feature by the fact that high water levels are more frequent than low levels: consequently, the atmospheric pressure fluctuates over a wider range during the periods of high water level, causing larger muon flux variations. A linear fit to the pressure-corrected points is displayed by the red line in Fig. 5, with a negative slope β_h = −0.0009 (0.00002) cm^-1 and an intercept ΔΦ_w = 0.444 (0.007). The standard deviations assigned to the water level and muon flux are respectively σ_h = 1.7 cm and σ_Φ = 0.0082. The standard deviation of the residuals is not reduced when fitting a second-order polynomial, and we adopt a linear relationship to represent the water level influence on the muon flux,

$\dfrac{\Delta\Phi}{\Phi_0} = \beta_h\, h + \Delta\Phi_w$. (3)
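A minimal numpy sketch of the two-step analysis above - fit β_p on near-constant-level data, correct the flux with equation (2), then fit β_h - using synthetic stand-ins for the hourly series; the real analysis weights the fits with the quoted standard deviations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the hourly averages (placeholders, not real data)
p = 1016.8 + rng.normal(0.0, 8.0, 500)                    # pressure [hPa]
h = np.clip(496 - rng.exponential(40.0, 500), 320, 496)   # water level [cm]
phi_rel = 1.0 - 0.0012*(p - 1016.8) - 0.0009*(h - 496) \
          + rng.normal(0.0, 0.008, 500)                   # relative muon flux

# Step 1: barometric coefficient from near-full-tank data (equation 2)
mask = h > 490
beta_p = np.polyfit(p[mask] - 1016.8, phi_rel[mask], 1)[0]

# Step 2: remove the pressure effect, then fit the water-level slope (eq. 3)
phi_corr = phi_rel - beta_p*(p - 1016.8)
beta_h, intercept = np.polyfit(h, phi_corr, 1)

print(f"beta_p = {beta_p:+.5f} /hPa, beta_h = {beta_h:+.6f} /cm")
```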
Discussion of the SHADOW data analysis

The data analyzed in the previous section show that linear relationships (equations 2 and 3) may safely be used to represent the dependence of the relative muon flux on the atmospheric pressure (Fig. 3) and water level (Fig. 5) variations. Owing to the fact that the opacity fluctuations produced by Δp = 1 hPa and Δh = 1 cm are identical (i.e. they represent the same mass of matter), it could be expected that the β coefficients in equations (2) and (3) should be the same. This hypothesis is not supported by our experimental results, which indicate that |β_p| is significantly larger than |β_h|. We explain the discrepancy between the experimental values of β_p and β_h by the fact that the atmosphere is not only a screen of matter for the muon flux, like the water in the tank, but also the place where the muons originate 15,16,31. Consequently, the muon flux at ground level depends on both the pressure and the temperature profiles in the atmosphere. For instance, if the atmosphere is warmer, the muon production altitude is higher (roughly at the isobaric level p = 100 hPa) and the muon transit times increase; the muons are then more likely to decay before reaching the ground, and the relative muon flux decreases 15,31. This is the so-called negative temperature effect. However, an increase in temperature at the production level also decreases the air density, reducing the likelihood of pion interactions before their decay into muons; muon production then increases, a phenomenon known as the positive temperature effect. Both the pressure and temperature effects on the flux of muons at ground level may be summarized by 32,33

$\dfrac{\Delta\Phi}{\Phi_0} = \beta_p^{*}\,\Delta p + \beta_T^{*}\,\Delta T$,

where T is the temperature at the production level and β*_p and β*_T are adjustable coefficients for the pressure and temperature effects, respectively. The coefficient β*_p is always negative, while β*_T may be either positive or negative depending on the prevailing temperature effect. For the soft muon component (≤10 GeV), which constitutes the main part of the particles detected by our telescope in this experimental context, the negative temperature effect dominates and β*_T is expected to be negative. The correlation analysis recently performed by Zazyan et al. 34 shows that the pressure and temperature effects are positively correlated. Consequently, for the present measurement conditions, β*_p and β*_T are both negative and time-correlated. When the atmospheric effect is considered alone, as in equation (2), the β_p coefficient actually accounts for both the pressure and temperature effects; this explains why the experimental value found for β_p in (2) is larger in magnitude than the value of β_h in (3).

Statistical feasibility and limits of opacity monitoring

We now address some statistical issues concerning the monitoring of opacity variations like those produced by the water-level variations measured during the SHADOW experiment. Let us assume that N = N_1 + N_2 particles are detected by the telescope during a time period T, where N_1 and N_2 are the numbers of particles counted during the first and second halves of T, respectively. We want to determine under which conditions N_1 and N_2 may be considered different at the confidence level α.
The particle flux difference ΔN = N_2 − N_1 obeys a Skellam distribution, defined as the difference between two Poisson processes with means μ_1 and μ_2 = μ_1(1 + ε), where ε is the relative flux variation. Requiring that ΔN be statistically significant at the confidence level α yields inequality (6), which involves the average flux φ_0, the variation ε and the confidence level α. When inequality (6) becomes an equality, we get T = T_min, the minimum acquisition time necessary to resolve a flux difference for a given set of parameters (φ_0, ε, α). When ε is fixed, T_min is the best time resolution achievable to observe temporal relative flux variations larger than ε; when T_min is fixed, we derive the smallest relative flux variation ε detectable on time-scales longer than T_min. Note that if N_1, N_2 ≫ 10, the Poisson laws can be approximated by Gaussians and equation (6) simplifies to

$T_{\min} \simeq \dfrac{4\,\tilde{\alpha}^{2}}{\varphi_{0}\,\varepsilon^{2}}$, (10)

where $\tilde{\alpha}$ is the Gaussian quantile associated with the confidence level α through the error function (≈1.96 for α = 0.05). We numerically compute T_min from equation (6) with a confidence level α = 0.05 and represent it in Fig. 6 for a range of measured muon fluxes and variation thresholds ε = 1, 0.1, 0.01 and 0.001 (i.e. 100%, 10%, 1% and 0.1%). The approximation (10) is suitable for our range of applications; T_min is underestimated starting from ε ≳ 0.5, which implies N ≈ 20.

Figure 6 shows that, to detect a daily variation of 2% in the muon counts (ε = 0.02), as typically observed in the SHADOW experiment, an average flux φ_0 > 2 s^-1 must be measured. This solution is represented by the black cross labelled "water tank" in Fig. 6. The lower-left domain delimited by the curved black arrow in Fig. 6 represents the solution domain for time scales and opacity variations of the SHADOW experiment category; this is the region where flux variations can be resolved at a high confidence level. The horizontal branch of the arrow is limited by the experiment duration, and the vertical branch is placed at a level corresponding to the maximum flux that can be measured by the telescope. This latter quantity increases with the telescope acceptance, e.g. by increasing the angular aperture (i.e. reducing the distance between the detection matrices), by grouping several lines of sight, or by using instruments with a larger detection surface. The feasibility domain for a typical volcano experiment is also represented in Fig. 6 and delimited by the blue curved arrow. Note that for this kind of experiment we have a longer acquisition time and a tiny measured flux, since the total opacity of the geological body facing the telescope is much larger than for the SHADOW experiment: about 1000 m.w.e. for a volcanic lava dome versus 5 m.w.e. for the water tank.

We can rewrite equation (6) in a form more suitable for radiography applications by replacing the flux fluctuations with opacity fluctuations through equations (12) and (13), in which the muon flux φ is explicitly written as a function of the telescope acceptance $\mathcal{T}$, the opacity $\varrho_0$ and the zenith angle θ. As before, ε represents the variation of opacity relative to the average opacity $\varrho_0$; we warn the reader that a given ε-variation of $\varrho$ corresponds to a much larger relative variation of φ. Putting equations (12) and (13) into equation (9), we obtain the feasibility condition (14), where T_min is the minimum duration of the measurement period necessary to resolve the sought opacity variation. Note that the feasibility formula of Lesparre et al. 7 is the first-order development of equation (14). A subset of the T_min solutions of equation (14) is represented in Fig. 7 for the confidence level α = 0.05, zenith angles θ = 0°, 30° and 60°, and opacity variations ε = 100%, 10%, 1%. An acceptance $\mathcal{T}$ = 10 cm^2 sr, typical of our telescopes, has been used in the computation.
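Under the Gaussian approximation (10), T_min is straightforward to evaluate; the quantile handling below (ᾶ ≈ 1.96 for a two-sided 95% level) is our reading of how the confidence level enters.

```python
def t_min_seconds(flux_hz: float, eps: float, z: float = 1.96) -> float:
    """Minimum acquisition time to resolve a relative flux change eps.

    Gaussian approximation: with N1 = flux*T/2 and N2 = N1*(1 + eps),
    requiring |N2 - N1| >= z * sqrt(N1 + N2) gives
    T_min ~ 4 z^2 / (flux * eps^2),
    where z is the Gaussian quantile (1.96 for 95% confidence).
    """
    return 4.0 * z**2 / (flux_hz * eps**2)

# SHADOW-like case: 2% flux variation, average flux of 2 muons/s
t = t_min_seconds(2.0, 0.02)
print(f"T_min ~ {t:,.0f} s ~ {t/86400:.2f} day")   # well below one day
```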
Roughly, a one order of magnitude variation of ε induces a T_min change of two orders of magnitude, consistent with the ε^-2 dependence of T_min. Observe that there is an optimal opacity range where the measurement time, i.e. the achievable time resolution, is minimal for resolving a given opacity variation. The optimal opacity range depends on the zenith angle and goes roughly from 40 to 100 m.w.e. for θ = 0°, and from 20 to 40 m.w.e. for θ = 60°. Under low-opacity conditions, measurements at high zenith angles are advisable to optimize the time resolution. This is particularly conspicuous for the SHADOW experiment, where the average opacity $\varrho_0$ ≈ 5 m.w.e. and ε ≈ 10%: for these parameters, Fig. 7 shows that T_min > 1 day is necessary at θ = 0° to resolve the fluctuations, while T_min > 0.2 day is sufficient at θ = 60°. The strong dependence of the time resolution on the zenith angle disappears at larger opacities, $\varrho_0$ > 500 m.w.e., like those encountered in volcano muon radiography.

Discussion

Muon radiography is a powerful method to monitor opacity/density variations inside geological bodies. Noticeable advantages of the method are the possibility to remotely radiograph unapproachable dangerous volcanoes and to image the density distribution of large volumes from a single view-point 8,10,36. Muon radiography is entering an era of precision measurements, not only for structural imaging but also for dynamical monitoring purposes. Monitoring experiments performed on active volcanoes demonstrate the usefulness of such measurements to constrain the evolution of eruption crises 12. However, as shown above, monitoring opacity variations is subject to external sources of bias, and to statistical and experimental constraints that limit the achievable resolution. Understanding these limits is of primary importance to improve the method and to assess the feasibility and validity of muon radiography monitoring.

Experimental constraints are partly dictated by statistical considerations, and mainly come from the telescope acceptance, which limits the maximum flux and thereby fixes the right boundary of the resolution domain in Fig. 6. This boundary may be moved rightward by increasing the acceptance $\mathcal{T}$ of the instrument. Recalling that $\mathcal{T}$ is expressed in [cm^2 sr], the acceptance may be augmented by several means: 1) increasing the solid angle encompassed by the instrument by reducing the distance between the detection matrices; 2) increasing the detection surface by coupling several telescopes (actually our telescopes may be merged into a single one); 3) grouping lines of sight to increase both the detection surface and the solid angle, at the price of reducing the angular resolution of the radiographies. In the present study, the latter solution was retained, and all lines of sight were merged to obtain an effective acceptance of 630 cm^2 sr.

Statistical constraints bound the resolution domain of a given experiment (Fig. 6), and the main concern when doing measurements is to ensure that the monitored phenomena fall inside the boundaries. As discussed in the next paragraph, the telescope configuration may be adapted to comply with the objectives of the ongoing experiment. As shown in the preceding sections, the statistical constraints are quite different depending on whether the opacity is high or low; this is conspicuous in Fig. 7, where the feasibility solutions for T_min strongly differ in the low- and high-opacity domains.
It is remarkable that high-opacity variations are resolved equally well whatever the zenith angle while, instead, the resolution for low opacities strongly depends on this angle. Another conspicuous feature of Fig. 7 is the existence of an optimal medium-opacity range, ϱ ≈ 50 ± 30 m.w.e., where the telescopes offer their best performance. These two effects are due to the nature of the cosmic-muon energy spectrum 15,17,37, and changing the telescope acceptance has no effect on the optimal opacity values but only changes T_min by translating the solution curves of Fig. 7 either upward (decrease of acceptance) or downward (increase of acceptance). Methods The muon count series analysed in the present study were acquired with one of our standard telescopes, shown in Fig. 8 4,10,38. The picture was taken during an open-sky calibration phase, where the muon count serves to determine the efficiency of the scintillator bars forming the detection matrices. Each matrix is formed by an assemblage of two sets of 16 bars arranged perpendicularly to obtain a 16 × 16 array of square 5 × 5 cm² pixels. The telescope's upper and lower matrices allow 31 × 31 pixel combinations, i.e. 961 distinct lines of sight. The distance between the matrices may be changed to adapt the solid angle spanned by the trajectories. In the present study, the distance was tuned to encompass the entire water tank (Fig. 1). Once geometrically configured, the telescope is totally characterised by its acceptance function 𝒯_i [cm² sr], which relates the muon count N_i to the muon flux ∂φ [s⁻¹ cm⁻² sr⁻¹] received by the telescope in its i-th line of sight, N_i = T × 𝒯_i × ∂φ_i, where T is the acquisition duration, S_i [cm²] is the detection surface function of the line of sight, 𝒯_i is the corresponding integrated acceptance, and ∂φ_i is the muon flux in the central direction of the line of sight. It must be understood that ∂φ [s⁻¹ cm⁻² sr⁻¹] is the differential muon flux that reaches the instrument after crossing the target. Consequently, ∂φ depends both on the open-sky differential flux ∂φ(ϱ = 0, ϕ, θ) and on the muon absorption law inside matter. These are determined through experiments 26,27,37,39-42, theoretical works 17, or Monte-Carlo simulations 43,44, depending on the precision expected and the available information. (The three curves of Fig. 7, blue, red and green, correspond to the observation zenith angles 0°, 30° and 60°, respectively, and are computed for 𝒯 = 10 cm² sr using the modified Gaisser model from Tang et al. 17.) Figure 9 shows the telescope acceptances 𝒯_i for i = 1, …, 961 used in the SHADOW experiment. This acceptance function is determined experimentally to account for the defects of the detection matrices, mainly imperfect optical couplings at the scintillator bar outputs and on the multichannel photomultiplier front; the latter causes the distortions visible in the 3D plot of Fig. 9. The maximum acceptance, 𝒯_max = 2.80 cm² sr, is obtained for the line of sight perpendicular to the detector planes, corresponding to (x, y) = (0, 0), where the x and y coordinates represent the horizontal offsets between the pixels defining a given line of sight of the telescope (one pixel in the upper detection matrix and the other one in the lower matrix). The acceptance integrated over the entire detection surface of the instrument equals 𝒯_int = 630 cm² sr for a solid-angle aperture Ω_int = 0.161 sr; a toy numerical illustration of the counting relation follows.
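A minimal sketch of the counting relation N_i = T × 𝒯_i × ∂φ_i with illustrative numbers; the acceptance map and the flux value below are placeholders (only the quoted maximum of 2.80 cm² sr is taken from the text), not the measured SHADOW values.

# Sketch of the acceptance relation; all numbers except the quoted maximum
# acceptance are invented for illustration.
import numpy as np

T = 86_400.0                             # acquisition duration: one day [s]
acceptance = np.full((31, 31), 0.5)      # T_i over the 961 lines of sight
acceptance[15, 15] = 2.80                # quoted maximum, central line of sight
dphi = np.full((31, 31), 5e-3)           # flux behind the target (assumed)

expected_counts = T * acceptance * dphi  # expected N_i per line of sight
print(expected_counts[15, 15])           # ~1210 muons on the central axis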
In practice, the acceptance computation is performed by measuring the "open-sky" muon flux coming from the zenith. The number of detected particles, N, may be increased by grouping several adjacent lines of sight belonging to a subset ℰ, summing their counts and acceptances. This results in an increased acceptance and thus a better time resolution. The counterpart is a degradation of the angular resolution, induced by the merging of the small solid angles spanned by the trajectories. In the present study, the entire solid angle spanned by the telescope trajectories was grouped to obtain a total acceptance 𝒯_total = 630 cm² sr. Such a large acceptance dramatically improves the time resolution, which falls to the order of tens of minutes in the case of the SHADOW experiment; the grouping operation is sketched below.
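A sketch of this grouping, reusing T, acceptance and expected_counts from the previous sketch; the subset ℰ is taken here to be all 961 lines of sight, mirroring the total-acceptance choice described above (the numbers remain illustrative).

# Group the lines of sight in a subset E by summing counts and acceptances;
# the mean flux of the group is then N_total / (T * acc_total).
mask = np.ones((31, 31), dtype=bool)   # subset E: here, all lines of sight
N_total = expected_counts[mask].sum()  # grouped muon count
acc_total = acceptance[mask].sum()     # grouped acceptance [cm^2 sr]
phi_avg = N_total / (T * acc_total)    # mean differential flux of the group
print(acc_total, phi_avg)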
6,567.4
2015-04-09T00:00:00.000
[ "Physics" ]
Characterization of Influenza A Virus Infection in Mouse Pulmonary Stem/Progenitor Cells Pulmonary stem/progenitor cells, which can differentiate into downstream cells to repair tissue damage caused by influenza A virus, have also been shown to be target cells of influenza virus infection. In this study, mouse pulmonary stem/progenitor cells (mPSCs) with the capability to differentiate into type I or type II alveolar cells were used as an in vitro cell model to characterize the replication and pathogenic effects of influenza viruses in PSCs. First, mPSCs and the immortalized cell line mPSCs Oct4+ were shown to be susceptible to PR8, seasonal H1N1, 2009 pandemic H1N1, and H7N9 influenza viruses and to generate infectious virus particles, although with a lower virus titer, which could be attributed to reduced vRNA replication and nucleoprotein (NP) aggregation in the cytoplasm. Nevertheless, a significant increase of interleukin (IL)-6 and interferon (IFN)-γ at 12 h and of IFN-β at 24 h post infection in mPSCs suggests that mPSCs might function as a sensor to modulate immune responses to influenza virus infection. In summary, our results demonstrated that mPSCs, as one of the target cells of influenza A viruses, could modulate early proinflammatory responses to influenza virus infection. INTRODUCTION Influenza virus can cause acute and severe respiratory diseases. According to the WHO, about 3-5 million people are severely infected with influenza virus, and among those, 290,000-650,000 die annually (World Health Organization [WHO], 2017). Compared to most seasonal influenza viruses, pandemic influenza viruses prefer to infect the lower respiratory airways and often cause severe lung disease, such as pneumonia and acute respiratory distress syndrome (ARDS) (Kim et al., 2002). The lower respiratory tract consists of the trachea, bronchi, bronchioles, and alveoli. Alveoli are the ends of the respiratory tree and act as the basic units of ventilation of the lung. The alveoli consist of an epithelial layer for gas exchange comprising two types of cells, alveolar type-I pneumocytes (AT-I) and type-II pneumocytes (AT-II). AT-I cells are responsible for gas exchange, whereas AT-II cells maintain the surface tension of the alveolus through secretion of surfactant proteins, such as SP-A, SP-B, SP-C, and SP-D (Whitsett and Alenghat, 2015). Previous studies have shown that human AT-I (Chan et al., 2009; Yu et al., 2011), human AT-II (Chan et al., 2005), murine AT-I (Kebaabetswe et al., 2013; Rosenberger et al., 2014), and murine AT-II (Kebaabetswe et al., 2013) cells can be infected by influenza A viruses. High-level expression of cytokines, such as interferon (IFN)-β, interleukin (IL)-6, or IL-1β, and chemokines, like RANTES, IFN-γ-induced protein 10 (IP-10), and macrophage inflammatory protein 1β (MIP-1β), was reported in infected AT-I and AT-II cells. However, little has been reported on the outcome of influenza A virus infection in lung stem/progenitor cells. Upon influenza virus infection, adult stem/precursor cells can proliferate and differentiate into various downstream cells to repair the damage in the lung and even function as immune regulators (Park et al., 2010; Maul et al., 2011; Volckaert and De Langhe, 2014; Branchfield et al., 2016). Among the many different pulmonary stem/progenitor cells (PSCs) in the respiratory system, AT-II (Barkauskas et al., 2013) and bronchioalveolar stem cells (BASC) (Kim et al., 2005) have been reported as potential adult lung stem/progenitor cells.
Recent studies in the mouse model demonstrated that influenza virus infection could induce the migration and differentiation of stem/progenitor epithelial cells of the upper respiratory tract to compensate for damage in the lower respiratory tract (Kumar et al., 2011; Zuo et al., 2015). Another study found that club cells could differentiate into AT-I and AT-II cells to repair impaired alveolar regions after virus infection (Zheng et al., 2012). To investigate the importance of pulmonary stem/precursor cells in influenza virus infection, adult PSCs have been isolated from different animal models. Chicken lung mesenchymal stromal cells have been shown to be infected by avian H9N5 and H1N1 influenza strains (Khatri et al., 2010). Moreover, Oct4+ swine stem/progenitor lung epithelial cells could be infected by human, swine, and avian influenza viruses (Khatri et al., 2012). In C57BL/6 mice, influenza virus was found to block the self-renewal of pulmonary epithelial stem/progenitor cells through blockage of the fibroblast growth factor receptor 2b (Fgfr2b) signaling pathway (Quantius et al., 2016). Based on these findings, we hypothesized that pulmonary stem/progenitor epithelial cells may not only be responsible for the repair of lung tissues but also be one of the important target cells of influenza virus infection. In this study, we aimed to demonstrate the infection of PSCs by influenza virus and to establish mouse pulmonary stem/progenitor epithelial cell lines to characterize the influence of influenza virus infection on these cells. Previously, we identified a rare population of mouse pulmonary stem/progenitor cells (named mPSCs) (Ling et al., 2006). The mPSCs could proliferate to form epithelial cell colonies accompanied by stromal cells cocultured under serum-free conditions, expressed low levels of pluripotent transcription factors (e.g., Oct4, Sox2, and Nanog), and could differentiate into type-I pneumocytes in vitro (Ling et al., 2006, 2014). Our studies also demonstrated that mPSCs express a specific cell-surface marker, the coxsackievirus/adenovirus receptor (CAR), which could be applied as a selective marker to isolate mPSCs (CAR+/mPSCs) from the culture as a pure population by the fluorescence-activated cell sorting (FACS) technique (Ling et al., 2014). In this study, we first demonstrated that mPSCs are susceptible to influenza virus infection and then immortalized CAR+/mPSCs (mPSCs Oct4+ cell lines: G2L, E3L, and G4L clones) through overexpression of the pluripotent transcription factor Oct-4 by retroviruses. The mPSCs Oct4+ cell lines were also susceptible to infection by the mouse-adapted influenza virus PR8 strain, seasonal H1N1, pandemic H1N1, and H7N9 influenza A viruses, with a lower efficiency of virus replication, possibly due to impaired vRNA replication in mPSCs Oct4+ and an irregular distribution of nucleoprotein (NP) during viral replication. Finally, elevated expression of pro-inflammatory cytokines, including IFN-β, IFN-γ, IL-6, IL-12p40p70, monocyte chemotactic protein 5 (MCP5), stem cell factor (SCF), soluble tumor necrosis factor receptor I (sTNFRI), and vascular endothelial growth factor (VEGF), was detected in PR8-infected cells at 12 h post infection (hpi) compared to mock-infected cells. Our results indicate that mPSCs are susceptible to influenza virus infection and can trigger host defense through the release of pro-inflammatory cytokines.
The immortalized mPSCs have the potential to serve as an in vitro model to examine the pathogenesis of influenza virus infection in mouse lungs. Virus Infection The influenza viruses used in this study were prepared as described previously (Hoffmann et al., 2000). The virus titers were determined by plaque assay to calculate the multiplicity of infection (MOI) used for infection. For virus infection, the supernatants of the cell cultures were first removed, and the cells were incubated with the virus input for 1 h. After that, the cells were washed with phosphate-buffered saline (PBS) before fresh medium was added to the cell cultures. Culture supernatant was harvested at the indicated times. Isolation of Mouse Primary Stem Cells Newborn ICR mice were obtained from the National Laboratory Animal Center, National Applied Research Laboratories (Taipei, Taiwan). Primary pulmonary cells were isolated as described previously (Ling et al., 2006). Briefly, lung tissue was collected from newborn ICR mice at 0-2 days postpartum. After washing with Hank's buffer, the tissue was cut into 5-mm pieces and digested with 10 mg/mL protease in Joklik's MEM medium (M8028, Sigma-Aldrich, United States) at 4 °C for 16 h. Tissues were transferred into Joklik's MEM medium with 10% FBS and filtered through a 100-µm nylon cell strainer. The cells were washed and resuspended in MCDB-201 medium (M6770, Sigma-Aldrich, United States), then plated on collagen I-coated plates (10 µg/cm²; 354236, BD Bioscience, United States) in MCDB-201 medium containing 1% Insulin-Transferrin-Selenium-A supplement (ITS-A, 51300-044, Gibco, United States), 1 ng/ml human EGF (PHG0311L, Gibco, United States), and 1% antibiotic solution (15240-062, Gibco, United States). After removal of the non-adherent cells at 48 h, cultures were maintained until further experiments. The detailed protocol has been described in previous studies (Ling et al., 2006, 2014). Immortalization of mPSCs by Oct4 Overexpression Using Retrovirus Transduction Mouse PSCs were transduced with retroviral vectors encoding Oct4. Briefly, the pMX-mOct4 retroviral vector and VSV-G were transfected into HEK293T cells using TurboFect transfection reagent (R0532, Thermo Scientific, United States). After 48 h, virus supernatants were collected and filtered through a 0.45-µm filter. A total of 5 × 10⁴ mPSCs were seeded per well in six-well plates and then incubated with the harvested retroviruses and 8 µg/ml polybrene for 24 h. The transduced mPSCs were then cocultured with inactivated MEFs, with fresh murine embryonic stem (mES)/MCDB201 (1:1) medium added every day. Colony formation of transduced mPSCs was observed after 28 days of induction (Gu et al., 2016). Unlike in the previous study (Gu et al., 2016), the colonies were picked up and expanded on collagen I-coated plates. The mPSCs Oct4+ G2L, E3L, and G4L clones were cultured in mES/MCDB201 medium for more than 20 consecutive passages. Quantification of Cellular mRNA and Viral vRNA, mRNA, and cRNA Total RNA of mock- or virus-infected MDCK cells and mPSCs Oct4+ was extracted with the NucleoSpin RNA kit (740955, MACHEREY-NAGEL, Germany) according to the manufacturer's instructions. Complementary DNAs of the cellular and the three viral RNA species were synthesized using SuperScript III reverse transcriptase (18080-044, Invitrogen, United States). The real-time PCR reaction contained 5 µl of cDNA, 300 nM of each primer, and 10 µl Fast SYBR Green Master mix (4385612, Applied Biosystems, United States).
The PCR program consisted of incubation at 95 °C for 20 s, followed by 40 cycles of 3 s at 95 °C and 30 s at 60 °C. The universal and real-time PCR primer sequences are listed in Supplementary Table S1. Data analysis was performed using the Applied Biosystems 7500 Real-Time PCR software (version 7500SDS v1.5.1). At least three independent experiments were performed for all data presented. The relative expression of each RNA was normalized to its internal control, GAPDH, and the fold change relative to the normalized value at 0 hpi is shown. Cytokine and IFN-β Expression Detection A total of 8 × 10⁵ mPSCs Oct4+ cells were infected with PR8 at an MOI of 10, and at 12 hpi, cells and culture supernatants were collected for protein expression and cytokine detection, respectively. The cytokine profiles were determined by mouse cytokine antibody arrays (ab133993, Abcam, United Kingdom) according to the manufacturer's instructions. Briefly, the supplied membranes were blocked with the provided blocking buffer for 45 min and then incubated at 4 °C overnight with 1 ml of harvested supernatant. Membranes were washed with the provided buffers and incubated at 4 °C overnight with the supplied biotin-conjugated anti-cytokine antibodies. The arrays were then washed and incubated with horseradish peroxidase-conjugated streptavidin at 4 °C overnight. After a final wash, the chemiluminescence reaction was detected using the supplied detection buffers. Semiquantification by relative densitometry was performed using the Image Lab software (version 6.0, Bio-Rad, United States) and normalized to the positive control signals on each membrane for comparison of multiple arrays. The cytokine expression in virus-infected mPSCs Oct4+ and mock-infected cells was normalized to the respective internal controls, and then the fold change of cytokine expression in virus-infected mPSCs Oct4+ relative to mock-infected cells was calculated. For determination of IFN-β expression, the VeriKine-HS mouse IFN beta serum ELISA kit was used (42410, PBL Assay Science, United States). Statistical Analysis All data are representative of at least three independent experiments. Data are expressed as mean ± SD. All statistical analyses were performed using SPSS (ver. 12.0, Chicago, IL, United States). A two-sample t-test was used for the comparison of continuous variables, with correction for unequal variances when appropriate. A p < 0.05 was considered statistically significant. Transmission Electron Microscopy The mPSCs, mPSCs Oct4+, and MDCK cells were infected with PR8 at an MOI of 10. At 12 hpi, the cells were trypsinized and resuspended in PBS. After centrifugation at 1,000 rpm for 5 min (Model 3300, Kubota, Japan), the cell pellets were collected and fixed with cold 10% glutaraldehyde for at least 1 h. Cell pellets were picked up and dehydrated with a graded ethanol series (100983, Merck, Germany). Finally, the cells were embedded in Spurr's resin (EM0300, Sigma-Aldrich, United States), and the embedded cell blocks were cut into ultrathin sections. Twenty-five sections were collected per sample and placed on copper grids. The cells were stained with 1% uranyl acetate (541093, Thomas Scientific, United States) and 1% lead citrate (C6522, Sigma-Aldrich, United States). The grids were observed with a JEOL JEM-1400 electron microscope (JEOL, Japan). The experiments were facilitated by the electron microscopy service of the Joint Center for Instruments and Researches, College of Bioresources and Agriculture, National Taiwan University, Taiwan.
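The relative-expression normalization described in the quantification subsection above (normalize each RNA to GAPDH, then express it as a fold change relative to 0 hpi) amounts to the standard 2^-ΔΔCt computation. A minimal sketch; the Ct values below are invented for illustration, not data from the paper.

# Sketch of the 2^-ddCt relative-expression computation (Ct values invented).
def fold_change(ct_target, ct_gapdh, ct_target_0hpi, ct_gapdh_0hpi):
    d_ct = ct_target - ct_gapdh              # normalize to internal control
    d_ct_0 = ct_target_0hpi - ct_gapdh_0hpi  # same at the 0 hpi reference
    return 2.0 ** (-(d_ct - d_ct_0))         # fold change relative to 0 hpi

print(fold_change(ct_target=22.1, ct_gapdh=18.0,
                  ct_target_0hpi=26.1, ct_gapdh_0hpi=18.0))  # 16-fold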
Sucrose Gradient Ultracentrifugation MDCK and mPSCs Oct4+ cells were infected by PR8 at an MOI of 10. Culture supernatant of virus-infected cells was harvested at 36 hpi and clarified by centrifugation at 1,500 rpm for 5 min at room temperature. Viruses in the clarified supernatant were concentrated by ultracentrifugation through 20% sucrose at 20,000 rpm for 2 h at 4 °C (SW28 rotor, Optima L100-K, Beckman Coulter, Inc., United States). Concentrated viruses were resuspended in 1 ml TNE buffer (50 mM Tris-HCl, pH 7.4, 100 mM NaCl, and 0.1 mM EDTA) and then loaded onto a 9-ml linear sucrose gradient (30-60% w/v). Ultracentrifugation was performed at 30,000 rpm for 16 h at 4 °C (SW41 rotor, Optima L100-K, Beckman Coulter, Inc., United States). Ten or 20 fractions were taken from the gradients. Virus titers in each fraction were quantified by plaque assay. Western Blotting Mouse PSCs and mPSCs Oct4+ were harvested and lysed with RIPA buffer. The cell lysates and the viruses in the gradient fractions were subjected to 12% SDS-PAGE and then transferred to a PVDF membrane for Western blot analysis. Monoclonal rabbit anti-Oct4 antibody (sc-9081, Santa Cruz Biotechnology, United States, 1:200), polyclonal anti-GAPDH antibody (GTX100118, GeneTex International Corporation, United States, 1:1,000), polyclonal anti-NP antibody (GTX125989, GeneTex International Corporation, United States, 1:10,000), and polyclonal anti-NS1 antibody (GTX125990, GeneTex International Corporation, United States, 1:1,000) were used as primary antibodies, and HRP-conjugated anti-mouse IgG (626520, Invitrogen, United States, 1:5,000) and HRP-conjugated anti-rabbit IgG (G21234, Invitrogen, United States, 1:5,000) were used as secondary antibodies. Protein signals were detected with the Western Lightning Plus-ECL system (NEL105001EA, PerkinElmer, United States) and the ChemiDoc XRS+ system (Bio-Rad, United States). Data were analyzed with the Image Lab software (version 6.0, Bio-Rad, United States). Virus Binding, Penetration, and Entry Assay MDCK and mPSCs Oct4+ cells were preincubated at 4 °C for 30 min, and the medium was replaced with serum-free DMEM before virus infection. The cells were infected with virus at an MOI of 10 at 4 °C for 1 h. The supernatant containing unbound viruses was removed, and the cells were washed three times with PBS. The virus-bound cells were harvested for the binding assay. For the penetration assay, the cells were preincubated at 4 °C for 30 min. After virus infection, the cells were incubated at 37 °C for 1 h to allow virus penetration into the cell membrane. The cells were washed once with acidic PBS (pH 3.0) and twice with neutral PBS (pH 7.0) to remove viruses that had not penetrated the cell membrane. For the entry assay, the medium was replaced with serum-free DMEM, and the cells were infected with virus at an MOI of 10 at 37 °C for 1 h. The supernatant containing unbound virus was removed, and the cells were washed once with acidic PBS (pH 3.0) and twice with neutral PBS (pH 7.0) to remove viruses that had not penetrated the cell membrane. Viral RNA on/in the cells was extracted with the NucleoSpin RNA kit (740955, MACHEREY-NAGEL, Germany) according to the manufacturer's instructions. Viral RNA was quantified by RT real-time PCR. The percentages for these assays were calculated as (virus copies in/on the cells / virus input copies) × 100%.
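The percentage formula at the end of the assay description is simple enough to state in code. A minimal sketch; the copy numbers are invented for illustration (the 38.8% output merely echoes the scale of the binding result reported later).

# Sketch of the binding/penetration/entry percentage (copy numbers invented).
def assay_percentage(copies_on_or_in_cells, input_copies):
    # (virus copies in/on the cells / virus input copies) x 100 %
    return 100.0 * copies_on_or_in_cells / input_copies

print(assay_percentage(3.88e6, 1e7))  # 38.8, a binding-assay-like value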
Susceptibility of mPSCs to Influenza Virus Infection Mouse PSCs were isolated from the lungs of neonatal ICR mice and cultured for 7 days before virus infection. The primary mPSCs are surrounded by mesenchymal stromal cells and have been shown to exhibit pulmonary stem/progenitor properties, including expression of progenitor markers and differentiation potential (Ling et al., 2006; Gu et al., 2016). High-level expression of the influenza virus receptors α2,3-linked sialic acid (α2,3 SA) and α2,6-linked sialic acid (α2,6 SA) was detected on mPSCs (Figure 1A). The expression of CAR and CD54 on mPSCs was also confirmed by FACS (Figure 1A). The susceptibility of mPSCs to influenza virus infection was then examined. As shown in Figure 1B, changes in cell morphology and the development of a cytopathic effect (CPE) were observed in the primary mPSCs upon influenza virus infection at an MOI of 10. Compared to the mock group, CPE was first observed at 12 hpi. The colony became less compact at 24 hpi and finally collapsed at 36 hpi. The infection was specific to mPSCs, since the virus-infected cells were costained with anti-M1 and anti-Oct4 antibodies (Figure 1C). Only a mild change in cell morphology at 24 hpi was observed when a lower virus input was used (Supplementary Figure S1). Infection of mPSCs by influenza virus was also confirmed by transmission electron microscopy (TEM) (Supplementary Figure S2). Virus budding from influenza virus-infected mPSCs was observed, and the released virus particles had a morphology similar to those released from MDCK cells. In addition, the susceptibility of mPSC-differentiated AT-I cells and an AT-II cell line (MLE15) to influenza virus infection was investigated. CPE development and viral NP protein expression in AT-I cells and the AT-II cell line were observed at 24 hpi (Supplementary Figures S3A,B). FIGURE 1 | Infection of mouse pulmonary stem/progenitor cells (mPSCs) by influenza A virus. (A) The expression of α2,3-linked sialic acid (α2,3 SA) and α2,6-linked sialic acid (α2,6 SA) on mPSCs. The expression of α2,3 SA and α2,6 SA in mPSCs was determined by immunofluorescence assay (IFA) and fluorescence-activated cell sorting (FACS). The expression of CAR and CD54, which were previously demonstrated to be expressed by mPSCs, is also shown. The histograms of α2,3 SA, α2,6 SA, CAR, and CD54 expression of mPSCs are shown in purple, and the negative-staining cells are labeled as green lines. Scale bar of the IFA image, 100 µm. Establishment and Characterization of Immortalized mPSCs Using Oct4 Overexpression Due to the relatively low yield of mPSC isolation from mouse lungs, immortalized Oct4+ mPSC cell lines (mPSCs Oct4+) were established by retroviral transduction of Oct4. The Oct4-transduced mPSCs gave rise to colonies after 28 days of incubation, and these colonies were picked up and expanded on collagen I-coated plates for more than 20 consecutive passages (Figure 2A). Among them, three colonies, designated the mPSCs Oct4+ G2L, E3L, and G4L clones, were picked and further analyzed. Increased Oct4 expression in the mPSCs Oct4+ G2L, E3L, and G4L clones was observed by RT-PCR and Western blot (Figures 2B,C). In these cell lines, the expression level of the stem cell marker Sox2 was higher than that in mPSCs, whereas the other stem cell marker, Nanog, and the pulmonary epithelial cell markers Nkx2.1, Id2, SPC, and E-cadherin had expression levels comparable to those in mPSCs (Figure 2B).
The expression patterns of Oct4, Sox2, SPC, and E-cadherin in the mPSCs Oct4+ E3L clone were also demonstrated by immunofluorescence analysis (Figure 2D). These results indicate that the mPSCs Oct4+ G2L, E3L, and G4L clones exhibited marker expression similar to that of mPSCs after immortalization by Oct4 overexpression. Susceptibility of mPSCs Oct4+ to Influenza Virus Infection The potential of the Oct4-overexpressing immortalized mPSCs Oct4+ as an appropriate cell model for characterizing influenza virus infection of pulmonary epithelial cells was investigated. First, the mPSCs Oct4+ clones also expressed the influenza virus receptors α2,3 SA and α2,6 SA. The proportions of α2,3 SA- and α2,6 SA-positive cells were 48 and 45.1%, respectively, in the mPSCs Oct4+ E3L clone after 20 passages (Figure 3A). The expression of CAR and CD54 was also detected on the mPSCs Oct4+ E3L clone (Figure 3A). Sustained expression of the sialic acid receptors on the mPSCs Oct4+ E3L clone from passages 5 to 20 was observed (Supplementary Figure S4). Next, the susceptibility of the mPSCs Oct4+ E3L clone to influenza virus infection was evaluated. Cells were infected by PR8 at an MOI of 10. A significant CPE induced by PR8 infection was observed at 48 hpi (Figure 3B). The expression of viral M1 proteins was detected in virus-infected cells at 12-48 hpi (Figure 3C). The release of virus particles from the E3L clone was also confirmed by TEM. The morphology of most virus particles was similar to those from MDCK cells and mPSCs (Supplementary Figure S2). The virus particles in the supernatants from the virus-infected mPSCs Oct4+ E3L clone and MDCK cells were separated by linear sucrose gradient (30-60%) ultracentrifugation. To examine the distribution of virus in the gradients, viral infectivity and the amounts of viral hemagglutinin (HA) and NP proteins were analyzed in all fractions by plaque assay and Western blot, respectively. The densities of the viral particles from the supernatants of MDCK cells and the mPSCs Oct4+ E3L clone were estimated to be 1.20 and 1.18 g/ml, respectively (Supplementary Figure S5A). The distribution of HA and NP proteins was consistent with that of the virus titers across the gradient for the mPSCs Oct4+ E3L clone but not for MDCK cells (Supplementary Figure S5B): for MDCK cells, more HA protein was detected in fraction 3 than in fraction 6, the fraction with the highest virus titer. The mPSCs Oct4+ E3L Clone Can Support Influenza Virus Replication To characterize virus replication in the mPSCs Oct4+ E3L clone, virus growth kinetics were determined. In the mPSCs Oct4+ E3L clone, virus growth was observed up to 24 hpi, reaching a plateau at 12 hpi (Figure 4A). A similar trend of virus replication was observed in mPSCs and MDCK cells when a high virus input was used. Compared to MDCK cells, reduced virus replication was observed when a lower virus input was used to infect the mPSCs Oct4+ E3L clone (Figure 4B). The virus growth kinetics in AT-I and MLE15 cells were also determined (Supplementary Figure S3C). Whereas viruses exhibited a growth trend in AT-I cells similar to that in the mPSCs and the mPSCs Oct4+ E3L clone, relatively lower virus growth was observed in MLE15 cells (Supplementary Figure S3C). Life Cycle of Influenza Virus in the E3L Clone The life cycle of influenza viruses in the mPSCs Oct4+ E3L clone was characterized to determine plausible reasons for the reduced virus replication in the E3L clone compared with MDCK cells at low virus input.
First, binding and entry assays were performed to determine whether the differential virus replication was attributable to reduced viral entry. However, significantly increased binding (38.8 vs. 31.9%, P = 0.007) and penetration (51.8 vs. 39.4%, P < 0.001) were observed in the E3L clone compared to MDCK cells (Supplementary Table S2). There was no difference in entry between the two cell types (52.39 vs. 48.76%, P = 0.21). Next, we characterized the profiles of the three viral RNA species in the E3L clone. In the early stage of infection (3 hpi), the expression of all three viral RNA species was higher in MDCK cells than in mPSCs Oct4+. However, higher mRNA and cRNA expression was observed in mPSCs Oct4+ than in MDCK cells at 6 hpi (Figures 5B,C), yet vRNA expression in mPSCs Oct4+ remained lower than that in MDCK cells throughout the study period (Figure 5A). According to previous studies, unlike mRNA and cRNA, whose synthesis depends on the original polymerase complexes of the imported vRNP, vRNA replication requires newly synthesized polymerase complexes (Jorba et al., 2009; York et al., 2013). Therefore, the expression of the major vRNP protein, NP, was determined. Although higher expression of NP was detected at 4 hpi in MDCK cells than in the mPSCs Oct4+ E3L clone, no significant difference between the E3L clone and MDCK cells was observed at 8 and 12 hpi (Figure 5D). FIGURE 2 | Generation and characterization of mouse pulmonary stem/progenitor cells (mPSCs) Oct4+, an Oct4-transduced immortalized mPSC cell line. (A) Experimental scheme for the establishment and selection of immortalized mPSCs. Scale bar, 100 µm. (B) Expression profiles of pulmonary progenitor/stem epithelial cell markers in mPSCs Oct4+ clones. Semiquantitative RT-PCR analysis was conducted to determine the expression of pulmonary progenitor/stem epithelial cell markers in the mPSCs Oct4+ G2L, E3L, and G4L clones. The Oct4 expression in mPSCs and total lung was relatively low and required up to 35 PCR cycles for visible signals. Murine embryonic stem (mES) cells were used as a positive control for the stem cell markers, whereas extract from total lung was the positive control for the pulmonary epithelial cell markers. (C) Expression of the OCT4 protein in the mPSCs Oct4+ G2L, E3L, and G4L clones, determined by Western blot. mES cells were used as a positive control. (D) Expression of OCT4, SOX2, E-cadherin (E-cad), and Surfactant Protein C (SPC) in the mPSCs Oct4+ E3L clone, determined by IFA. Scale bar, 50 µm. FIGURE 3 | Infection of the mPSCs Oct4+ E3L clone by influenza A virus. (A) The expression of α2,3 SA and α2,6 SA on the mPSCs Oct4+ E3L clone, determined by IFA and FACS. The expression of CAR and CD54, which were previously demonstrated to be expressed by mPSCs, is also shown. The histograms of α2,3 SA, α2,6 SA, CAR, and CD54 expression are shown in purple, and the negative-staining cells are labeled as green lines. Scale bar of the IFA image, 50 µm. For the plaque assay, at least three independent experiments were performed, and the virus titers are presented as mean ± SD. *p < 0.05; **p < 0.01; ***p < 0.001.
Next, the distribution of the viral proteins PB2, PB1, PA, and NP in infected cells was investigated by IFA. The expression of NP was mainly restricted to the nucleus at 4 hpi in both MDCK cells and the E3L clone. However, at 8 hpi, a unique aggregation of NP proteins was observed in the cytosol of the E3L clone, and this pattern persisted at 12 hpi (Figure 6A). Furthermore, aggregation of NP proteins was also observed in the cytosol of mPSCs at 12 hpi (Supplementary Figure S7). Unlike NP, the distributions of the influenza polymerase complex proteins PB2, PB1, and PA were similar in MDCK cells and the E3L clone and mainly restricted to the nucleus (Figure 6B). Cytokines and Chemokines Released From mPSCs Oct4+ After PR8 Infection Upon influenza virus infection, airway epithelial cells initiate a rapid onset of innate immune responses to counter the infection. The profiles of cytokines and chemokines released from influenza virus-infected mPSCs, the mPSCs Oct4+ E3L clone, mPSC-differentiated AT-I cells, and MLE15 cells were determined at 12 hpi. Compared to the mock group, a more than 1.5-fold increase in the protein expression of IL-6, IFN-γ, MCP-5, SCF, sTNFRI, VEGF, and IL12p40/p70 was observed in the E3L clone (Figure 7A). The strongly enhanced expression of IL-6 was observed in both mPSCs and the mPSCs Oct4+ E3L clone. Additionally, a more than 1.5-fold increase of SCF and GCSF was detected in mPSCs, the mPSCs Oct4+ E3L clone, and MLE15 cells. The IFN-β protein levels in the culture supernatants of influenza-infected mPSCs, the mPSCs Oct4+ E3L clone, AT-I, and MLE15 cells were also examined by ELISA (Figure 7B). A drastic increase of IFN-β, from 0.69 pg/ml at 12 hpi to 10.65 pg/ml at 24 hpi, was observed in the virus-infected mPSCs Oct4+ E3L clone. An increase of IFN-β in virus-infected mPSCs and AT-I cells was also observed. However, no IFN-β expression was detected in virus-infected MLE15 cells. Furthermore, IFN-β mRNA expression was determined by RT-PCR (Figure 7C). Compared to virus-infected MDCK cells, the mRNA expression of IFN-β in mPSCs, the E3L clone, and AT-I cells was similar at the early stage (3 and 6 hpi) but increased significantly at a later stage (12 hpi). Consistent with the IFN-β protein levels, the mRNA expression of IFN-β was not increased in virus-infected MLE15 cells from 0 to 24 hpi (Figure 7C). DISCUSSION Previously, progenitor cells isolated from the lungs of PR8-infected C57BL/6 mice were shown to be more susceptible to PR8 infection than AT-I, AT-II, and bronchial epithelial cells (Quantius et al., 2016). In addition, these virus-infected stem/progenitor cells tend to lose their ability to repair damage due to blockage of Fgfr2b-dependent renewal through inhibition of β-catenin-mediated transcription (Quantius et al., 2016). It was also demonstrated that Oct4+ lung stem/progenitor cells from different species could be infected by human H1N1, swine H1N1, and avian H7N2 influenza viruses (Khatri et al., 2012). These findings suggest that lung stem/progenitor cells can be infected by influenza viruses and might play a role in influenza virus-induced pathogenesis. In this study, Oct4+ Sca1+ SSEA1+ mPSCs were isolated from newborn ICR mice to examine their susceptibility to influenza virus infection, and these cells were further immortalized by transduction of mouse Oct4 for subsequent in vitro analysis.
We demonstrated that these mPSCs Oct4+ cells are susceptible to influenza virus infection and release similar profiles of cytokines/chemokines after viral infection. In addition, given the demonstrated expression of cytokeratin-7 and peroxiredoxins 2 and 6 and the relatively high level of CyP450 activities in a previous study (Gu et al., 2016), our mPSCs might be closely associated with the lineage of club cells, which have been reported as target cells of influenza virus infection; a significant loss of such cells in C57BL/6 mice infected with influenza viruses has also been reported (Kumar et al., 2011; Hirano et al., 2016). These results suggest that our mPSCs could serve as in vitro target cells of influenza A viruses with which to study the modulation of early pro-inflammatory responses to influenza virus infection. In this study, the profile of growth factors, cytokines, and chemokines released from influenza virus-infected mPSCs Oct4+ was determined. Previously, increased levels of MCP-1, IL-6, TNF-α, IFN-γ, IL-5, IL-4, and IL-9 in bronchoalveolar lavage fluid (BALF) from PR8-challenged C57BL/6 mice were reported (Buchweitz et al., 2007). Increased expression of MCP-1, IL-6, TNF-α, RANTES, KC, and MIP2 was detected in primary airway epithelial cells isolated from C57BL/6 mice (Tate et al., 2011). Similar to those studies, our results showed that, compared with uninfected cells, increased levels of IL-6 and IFN-γ as well as IL12p40p70, MCP-5, SCF, sTNFRI, RANTES, and VEGF were observed at 12 hpi. Elevated expression of IFN-β was also determined by ELISA at 12 and 24 hpi. Interestingly, increased levels of the anti-inflammatory cytokine soluble tumor necrosis factor receptor I (sTNFRI) were detected in PR8-infected mPSCs Oct4+. In previous studies, sTNFRI was shown to interfere with the interaction between TNF-α and its membrane-bound receptor (mTNFR) and subsequently to downregulate TNF-α-induced inflammation (Levine, 2008). Elevated levels of sTNFRI were also found in mouse models (Lowder et al., 2006; Gabriel et al., 2009) and humans (Hoffmann et al., 2015) after H1N1 or H7N9 influenza virus infection, respectively. An increased level of VEGF in the culture supernatants of PR8-infected mPSCs Oct4+ was also noticed. VEGF, as a vascular permeability factor, plays a crucial role in lung maintenance, inducing proliferation of the systemic vasculature, stimulating growth of AT-II, and promoting secretion of surfactant protein (Barratt et al., 2014). FIGURE 5 | Influenza life cycles in the mouse pulmonary stem/progenitor cells (mPSCs) Oct4+ E3L clone and Madin-Darby canine kidney (MDCK) cells. Quantification of (A) viral RNA (vRNA), (B) mRNA, and (C) complementary RNA (cRNA). The amounts of vRNA, mRNA, and cRNA in the PR8-infected mPSCs Oct4+ E3L clone and MDCK cells were determined by real-time PCR. The relative ratios at each time point were first normalized to the internal control GAPDH and then normalized to 0 hpi. At least three independent experiments were performed, and the ratios are presented as mean ± SD. N.D., none detected; *p < 0.05; **p < 0.01; ***p < 0.001. (D) Nucleoprotein (NP) expression in the influenza virus-infected mPSCs Oct4+ E3L clone and MDCK cells. The mPSCs Oct4+ E3L clone and MDCK cells were infected by PR8 at a multiplicity of infection (MOI) of 10. The cell lysates were harvested at 0, 4, 8, and 12 hpi, and NP expression was determined by Western blot. The expression of PCNA was used as the input control.
VEGF also has chemotactic ability, attracting alveolar macrophages, which express VEGF receptor 2 (VEGFR2) (Clauss et al., 1996; Ferrara et al., 2003). The increased expression of cytokines in PR8-infected mPSCs Oct4+ suggests that mPSCs may play an important role as an immune sensor upon PR8 infection, attracting the infiltration of immunoregulatory cells. To better characterize virus replication in mPSCs Oct4+, a series of assays was performed in this study. Compared with the commonly used MDCK cell line, a similar trend of virus growth was observed, albeit with relatively lower virus propagation, in PR8-infected mPSCs and mPSCs Oct4+ cells. A significantly higher percentage of virus binding (38.82 vs. 31.9%, P = 0.00685) and penetration (51.75 vs. 39.41%, P = 0.00024) was observed in PR8-infected mPSCs Oct4+ than in MDCK cells. However, the entry percentages were similar (52.39 vs. 48.76%, P = 0.21322) between PR8-infected mPSCs Oct4+ and MDCK cells. The inconsistency between the binding/penetration and entry results might indicate that virus envelope uncoating, vRNP import into the nucleus, or even RNA replication is faster in MDCK cells than in mPSCs Oct4+. To investigate the difference in viral RNA replication and transcription efficiency between these two cell types, the expression of viral vRNA, cRNA, and mRNA was quantified by RT-PCR. A more dramatic difference in viral vRNA replication in mPSCs Oct4+ was observed at 6 and 12 hpi (Figure 5A). It was recently reported that viral vRNA and cRNA replication is initiated by soluble polymerase complexes in trans, whereas mRNA synthesis is initiated by the resident polymerase complexes in cis (Eisfeld et al., 2015). Although vRNA and cRNA are synthesized by the same polymerase complexes, different promoter sequences are used to initiate their replication (Azzeh et al., 2001). A previous study indicated that p65, a component of NF-κB, is involved in regulating vRNA synthesis from the cRNA promoter (Kumar et al., 2008). NF-κB signaling can be activated by proinflammatory cytokines, such as TNF-α and IL-1, or by Toll-like receptors (Lawrence, 2009), and plays a role in regulating the expression of cytokines, chemokines, and adhesion molecules. Therefore, the high expression of sTNFRI in the culture supernatant of PR8-infected mPSCs Oct4+ might downregulate the TNF-α-mediated activation of the NF-κB signaling pathway and subsequently lead to impaired vRNA replication. FIGURE 7 | (A) The culture supernatants were harvested from mock-infected or PR8-infected cells at 12 hpi. Profiles of the released cytokines and chemokines in the culture supernatants were determined by the cytokine array. The fold changes of each cytokine or chemokine between virus- and mock-infected cells are shown. (B) Release of interferon (IFN)-β from the PR8-infected mPSCs, mPSCs Oct4+ E3L clone, AT-I, and MLE15 cells. The cells were infected with PR8 at an MOI of 10. The concentration of IFN-β in the culture supernatants collected at 0, 12, 24, 40, or 48 hpi was determined by ELISA. (C) Induction of IFN-β mRNA expression in the mPSCs, mPSCs Oct4+ E3L clone, AT-I, MLE15, and Madin-Darby canine kidney (MDCK) cells after PR8 infection. Total RNA was extracted from PR8-infected cells at 0, 3, 6, 12, and 24 hpi. The mRNAs of IFN-β and GAPDH were quantified by RT-PCR. The expression of IFN-β mRNA was normalized to GAPDH mRNA, and the ratios shown are normalized to 0 hpi. At least three independent experiments were performed, and the ratios are presented as mean ± SD. N.D., none detected.
The quiescent nature of embryonic stem cells might restrict viral protein translation and subsequently impair viral protein production (Wash et al., 2012). In our study, we found that, compared to MDCK cells, less NP expression was detected in the influenza-infected mPSCs Oct4+ E3L clone at an early time point (4 hpi), yet similar viral protein expression between the two cell types was observed later, at 8 and 12 hpi. Nevertheless, a unique distribution pattern of NP was observed in the influenza-infected mPSCs Oct4+ E3L clone: compared to MDCK cells, aggregated NP was observed in the cytosol of the E3L clone at 8 and 12 hpi. A previous study showed that drug-induced NP aggregation in the cytosol can block the migration of NP into the nucleus and subsequently impair virus replication (Kao et al., 2010). The host proteins phospholipid scramblase 1 (PLSCR1) (Luo et al., 2018) and Moloney leukemia virus 10 (MOV10) (Zhang et al., 2016) have also been reported to interact with importin α and inhibit the import of viral vRNP into the nucleus. Therefore, the aggregation of NP in mPSCs Oct4+ might block vRNP nuclear import and thereby interfere with vRNA replication. In addition, the nuclear import of vRNP is also controlled by the phosphorylation status of NP (Zheng et al., 2015): in that study, the authors found that phosphorylation of S9 and Y10 on the nuclear localization signal (NLS) blocked the interaction between NP and importin α, causing vRNP retention in the cytosol. However, the possible mechanism of NP aggregation in the cytosol of mPSCs Oct4+ requires further study. In summary, mPSCs Oct4+ provide a cell model for understanding influenza virus infection in mPSCs. This immortalized mPSC cell line can be infected by the mouse-adapted PR8 strain, 2009 pandemic H1N1, 2009 seasonal H1N1, 2016 pandemic H1N1, and 2013 avian H7N9 influenza viruses. Impaired vRNA replication and NP aggregation in the cytosol might be correlated with the lower virus propagation. The profile of released cytokines might indicate a potential role of PR8-infected mPSCs Oct4+ as an immune sensor. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher. AUTHOR CONTRIBUTIONS T-LC, P-HL, and Y-TC performed the virus infection experiments. S-YG was mainly responsible for generating the mPSCs and the selection of the related clones. S-YC and T-YL supervised the experimental design and participated in the discussion. T-LC, T-YL, and S-YC drafted the manuscript. All authors read and approved the final manuscript. SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmicb.2019.02942/full#supplementary-material FIGURE S1 | Effects of different virus inputs on the development of cytopathic effects (CPE) in mPSCs. mPSCs were infected with PR8 at three different MOIs: 10, 1, and 0.1. The morphology changes of the same colony between 0 and 36 hpi were recorded by microscopy; scale bar, 100 µm. FIGURE S2 | Detection of influenza virus release from infected mPSCs, mPSCs Oct4+ E3L clone, and MDCK cells by transmission electron microscopy (TEM). The mPSCs, mPSCs Oct4+ E3L clone, and MDCK cells were infected with PR8 at an MOI of 10. At 12 h post infection, the cells were harvested for TEM analysis. Cell morphology was observed under 8,000×, 30,000×, and 80,000× magnification. N, cell nucleus; Cy, cytosol.
Scale bars, 2 µm, 0.5 µm, and 200 nm, respectively. The black arrows indicate virus particles. The black arrowheads indicate viruses with abnormal morphology. (C) Replication of influenza virus in AT-I and MLE15 cells. Culture supernatants were collected at the indicated time points, and the virus titers in the culture supernatants of AT-I and MLE15 cells were quantified by plaque assay. At least three independent experiments were performed, and the virus titers are presented as mean ± SD. FIGURE S4 | The expression of α2,3-linked sialic acid (α2,3 SA) and α2,6-linked sialic acid (α2,6 SA) on the mPSCs Oct4+ E3L clone after serial passages. The expression of α2,3 SA and α2,6 SA on the mPSCs Oct4+ E3L clone after 5, 12, and 20 passages was determined by FACS. The histograms of α2,3 SA and α2,6 SA expression are shown in purple, and the negative-staining cells are labeled as green lines.
9,356.2
2020-01-21T00:00:00.000
[ "Medicine", "Biology" ]
The Protective Effects of Ultraviolet A1 Irradiation on Spontaneous Lupus Erythematosus-Like Skin Lesions in MRL/lpr Mice We investigated the effects of ultraviolet A1 (UVA1) irradiation on the spontaneous lupus erythematosus- (LE-) like skin lesions of MRL/lpr mice, using a disease prevention model. UVA1 irradiation significantly inhibited the development of LE-like skin lesions, without obvious changes in the systemic disease, including renal disease and serum antinuclear antibody levels. Besides the massive infiltration of mast cells in the LE-like skin lesions, more mast cells infiltrated the nonlesional skin of the UVA1-irradiated group than that of the nonirradiated group. Although apoptotic cells were remarkably seen in the dermis of UVA1-irradiated mice, such cells were hardly detectable in the dermis of the nonirradiated mice without skin lesions. Further analysis showed that some of those apoptotic cells were mast cells. Thus, UVA1 might exert its effects, at least in part, through the induction of apoptosis of pathogenic mast cells. Our results support the clinical efficacy of UVA1 irradiation for the skin lesions of lupus patients. Introduction Collagen disease, one of the representative autoimmune diseases, is often accompanied by photosensitivity. In particular, photosensitivity is included in the diagnostic criteria of systemic lupus erythematosus (SLE) [1,2], and it has been reported that, in Japan, about 60% of SLE patients show photosensitivity [3,4]. In addition, exposure to sunlight is a well-known risk factor that induces or exacerbates not only skin lesions but also systemic symptoms in SLE patients. To investigate the effects of sunlight on SLE, some studies using MRL/lpr mice, one of the major lupus-prone strains, have been performed. MRL/lpr mice are characterized by dysregulated apoptosis due to defective signaling through the Fas antigen [5] and by autoimmune symptoms including fatal nephritis, autoantibody production, and lymphadenopathy. Furthermore, some MRL/lpr mice spontaneously develop LE-like skin lesions with alopecia and scab formation on the upper back region [6-8]. In cultured cells from MRL/lpr mice, the cytotoxicity of ultraviolet (UV) irradiation was significantly elevated [9]. In line with this, UVB exposure significantly exacerbated the development of LE-like skin lesions in MRL/lpr mice [10]. Moreover, UVA could also exacerbate the skin lesions in SLE patients [11]. On the contrary, several recent studies demonstrated that UVA1 irradiation has therapeutic effects for SLE patients [12-15]. Ultraviolet light can be divided into three parts based on wavelength: UVA (320-400 nm), UVB (290-320 nm), and UVC (200-290 nm). Moreover, UVA is made up of two parts: UVA1 (340-400 nm) and UVA2 (320-340 nm). UVC is absorbed by the ozone layer and does not reach the ground, whereas UVA and UVB have diverse biological effects. For example, UVA and UVB induce suntan without an erythema response and suntan with sunburn, respectively [16,17]. Although the biological effects of UVA2 are similar to those of UVB, only UVA1 possesses particular characteristic biological functions. Despite the previous studies with MRL/lpr mice [9,10,18], to the best of our knowledge, there have been no studies examining the UVA1 effects on the LE-like skin lesions of SLE model mice, and the biological effects of UVA1 on SLE remain to be investigated.
In the present study, we investigated the effects of UVA1 irradiation on MRL/lpr mice, particularly on their LE-like skin lesions, using a disease prevention model. Simultaneously, we also studied the dermal infiltration of mast cells, histamine receptor (HR) expression [19], and the mRNA expression of cytokines [20], which were previously examined in the spontaneous LE-like skin lesions of MRL/lpr mice [19,20]. 2.2. Mice. MRL/lpr mice, aged 4-6 weeks at the start of each experiment, were purchased from Japan SLC Inc. (Hamamatsu, Japan) and housed individually in cages under specific-pathogen-free conditions. All animal experiments were approved by the Committee on Animal Care and Use at Wakayama Medical University. The MRL/lpr mice were divided into the following three groups: (1) a nonirradiation group (n = 22), (2) UVA1 irradiation with 5 J/cm² (n = 18), and (3) UVA1 irradiation with 10 J/cm² (n = 18). The doses of 5 and 10 J/cm² were chosen for the following reasons: first, the effective doses of UVA1 treatment reported for human SLE patients were low, such as 6 and 12 J/cm² [12-14]; second, it took about 60 minutes to deliver 10 J/cm² of UVA1 using our equipment, and irradiation beyond 10 J/cm² therefore seemed too stressful for the mice. Under deep anesthesia, all mice were shaved at the upper dorsal skin twice a week, because the spontaneous LE-like skin lesions appear at this site at older ages. Mice were irradiated with UVA1 (5 J/cm² or 10 J/cm²) five times a week for 4 months. The nonirradiated mice were left untreated after shaving. The LE-like skin lesions of the mice were macroscopically evaluated. The mortality was also recorded during the experimental period. Measurements of Antinuclear Antibody (ANA) in Sera. Serum samples, collected 4 months after the first irradiation, were subjected to ANA analysis with a commercial kit (Fluoro HEPANA test, MBL Co., Ltd, Nagoya, Japan), as described previously [21]. Light Microscopic Observation of the Skin. At 4 months after the first irradiation, the mice were euthanized by cervical dislocation, and skin specimens were taken from the upper back regions of all mice. The specimens were fixed in 4% formaldehyde buffered with PBS (pH 7.2), embedded in paraffin, and stained with hematoxylin and eosin (HE). Toluidine blue staining was also performed to assess the infiltration of mast cells. For the evaluation of dermal infiltrating mast cells, 5 microscopic fields were randomly selected and the number of mast cells was counted. The degree of mast cell infiltration was expressed as the average number of mast cells in the 5 microscopic fields (magnification, ×100). All measurements were performed without prior knowledge of the experimental procedures. Light Microscopic Observation of the Kidney. Kidney specimens taken from all MRL/lpr mice were subjected to HE and periodic acid-Schiff (PAS) staining. The degree of glomerulonephritis was then evaluated according to previously described methods [22], with a slight modification. All measurements were performed without prior knowledge of the experimental procedures. Immunohistochemical Study. Skin specimens were frozen immediately in Tissue-Tek O.C.T. Compound (Sakura Finetechnical Co., Ltd, Tokyo, Japan) and stored at −80 °C until use. Cryosections (6 µm thick) were prepared using a Cryostat CM1900 (Leica Microsystems, Heidelberg, Germany), and the sections were reacted with anti-H1R, anti-H2R, or anti-H3R pAbs, followed by incubation with biotinylated secondary Abs and visualization of the immune reactions.
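Returning to the dose rationale given at the start of this section: the relation dose [J/cm²] = irradiance [W/cm²] × time [s] lets one back out the irradiance implied by the statement that 10 J/cm² took about 60 minutes. A minimal sketch; the irradiance value is an inference from that statement, not a figure reported in the paper.

# Sketch of the dose-time arithmetic; the irradiance is INFERRED from
# "10 J/cm^2 in about 60 min", not reported in the paper.
def exposure_time_min(dose_j_per_cm2, irradiance_w_per_cm2):
    return dose_j_per_cm2 / irradiance_w_per_cm2 / 60.0

irradiance = 10.0 / 3600.0        # ~2.8e-3 W/cm^2 implied by 10 J/cm^2 in 1 h
print(exposure_time_min(5.0, irradiance))  # ~30 min for the 5 J/cm^2 dose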
Macroscopic LE-like skin lesions developed in some of the nonirradiated mice (Table 1; Figure 1: (a) nonlesional skin and (b) skin lesion of nonirradiated mice). On the other hand, macroscopic skin lesions were never seen in any of the UVA1-irradiated (5 J/cm² or 10 J/cm²) mice (Table 1, Figure 1(c)). As reported previously [6-8], nonirradiated MRL/lpr mice with macroscopic skin lesions showed characteristic histopathological alterations such as acanthosis with hyperkeratosis, liquefaction, mononuclear cell infiltration into the dermis, and slight extravasation of erythrocytes in the upper dermis (Figure 1(e)). Moreover, even 4 nonirradiated mice without macroscopic skin lesions showed moderate histopathological changes of the kind mentioned above, while only 5 mice in the UVA1-irradiated groups (5 J/cm²: 2 mice; 10 J/cm²: 3 mice) showed LE-like histopathological changes. With respect to the development of LE-like skin lesions, there was a significant difference between the nonirradiated group and all UVA1-irradiated groups, and between the nonirradiated and the UVA1-irradiated (only 5 J/cm²) groups (Table 1, P < .05). These observations implied that UVA1 irradiation significantly protected mice against the development of spontaneous LE-like skin lesions. Effects of UVA1 Irradiation on the Mortality, Renal Disease, and Serum ANA Levels. To assess the systemic effects of UVA1 irradiation on MRL/lpr mice, we evaluated the mortality, the development of renal disease, and the serum ANA levels. The mortality of the UVA1-irradiated MRL/lpr mice, especially of the 10 J/cm²-irradiated group, tended to be lower than that of the nonirradiated mice, without statistical significance (Table 2). MRL/lpr mice almost always develop renal disease, as evidenced by proteinuria, which is similar to human lupus nephritis in SLE. UVA1 irradiation had no influence on proteinuria (Figure 2). Furthermore, there was no significant difference in renal histopathological alterations among the three groups (data not shown). Similarly, there was no significant difference in the serum ANA levels or the immunoglobulin deposits in the skin among the three groups (data not shown). These data suggested that, under our experimental procedures, the effects of UVA1 irradiation on the systemic symptoms of MRL/lpr mice were not significant. In comparison, some studies on human SLE patients have reported that UVA1 therapy improved systemic symptoms and parameters [12-15]. Considering the tendency toward improved mortality with UVA1 irradiation, further studies of other parameters reflecting systemic immune functions might reveal some systemic effects of UVA1 in our system. Effects of UVA1 Irradiation on Dermal Infiltration of Mast Cells. Consistent with the previous study [19], we could detect massive infiltration of mast cells in the LE-like skin lesions of MRL/lpr mice, compared with the nonlesional skins of both nonirradiated and UVA1-irradiated mice (Figures 3(b), 4). Furthermore, in the nonlesional skins, mast cell recruitment was more evident in UVA1-irradiated mice than in nonirradiated ones (Figures 3(a), 3(c), 4). These observations implied that mast cells might play a role in the protective effects of UVA1 irradiation as well as in the development of spontaneous LE-like skin lesions. Figure 2: Degree of proteinuria in MRL/lpr mice of the three groups (nonirradiated, UVA1 5 J/cm², and UVA1 10 J/cm²). There were no significant differences among the three groups; UVA1 irradiation had no effect on the degree of proteinuria. The data represent means ± 1 SD.
negative regulatory functions in immunity [23]. Mast cells produce IL-10, which can play a negative, immunomodulatory role. Indeed, UVB irradiation can induce IL-10 production in mast cells [24]. When we examined IL-10 localization in the skin samples by immunofluorescence, IL-10-positive signals were detected in the cytoplasm of mast cells in the LE-like skin lesions of nonirradiated mice (Figure 5(c), indicated with arrows). Unexpectedly, the mast cells recruited in the skin of UVA1-irradiated mice never showed IL-10-positive signals (Figure 5(f)). These observations implied, at least, that UVA1 irradiation did not induce IL-10 production in dermal mast cells in the present study. Effects of UVA1 Irradiation on the Expression of Histamine Receptors. Histamine, an inflammatory mediator mainly derived from mast cells, has been closely related to the development of LE-like skin lesions of MRL/lpr mice through HRs [19,25]. Tachibana et al. suggested that impaired histamine metabolism in MRL/lpr mice contributes to the development of LE-like skin lesions [25]. In fact, gene expression of H2R was significantly enhanced in the LE-like skin lesions of MRL/lpr mice, and H2R-expressing mononuclear cells could be observed immunohistochemically [19]. The diverse biological effects of histamine are mediated through four different histamine receptors, H1R, H2R, H3R, and H4R [26,27]. We also examined the expression of the histamine receptors in both the LE-like skin lesions and the nonlesional skin of nonirradiated and UVA1-irradiated mice by immunohistochemical and RT-PCR analyses. Similar to our previous results [19], some infiltrating mononuclear cells expressed H2R in the LE-like skin lesions, and the expression of H2R mRNA was significantly up-regulated (data not shown). However, in the nonlesional skin of MRL/lpr mice, there was no significant change in H2R mRNA expression between the nonirradiated and UVA1-irradiated mice. Effects of UVA1 Irradiation on the mRNA Expression of Cytokines. In addition, we analyzed the mRNA expression of TNF-α, IFN-γ, IL-4, IL-10, and TGF-β. In the LE-like skin lesions, we observed more pronounced mRNA expression of TNF-α, IL-10, and TGF-β than in the nonlesional skin of nonirradiated mice, as reported in the previous study [20]. However, in the nonlesional skin, there were no significant changes between the nonirradiated and UVA1-irradiated mice (data not shown). In brief, within our investigations, we could not detect any significant changes in cytokines or HRs associated with the protective effects of UVA1 irradiation. Induction of Apoptosis of Dermal Cells by UVA1 Irradiation. The apoptosis of infiltrating cells has been recognized as one of the main mechanisms responsible for the efficacy of UVA1 treatment [16,17,28]. In the present study, TUNEL-positive cells were detected in the dermis. In the nonlesional skin, there was a significant difference in their number between UVA1-irradiated (10 J/cm²) and nonirradiated mice (P < .05), with no significant difference between the UVA1 5 and 10 J/cm² groups (data represent means ± 1 SD from 5 samples of each group). Our study demonstrated that mast cells might play a role in the development of spontaneous LE-like skin lesions of MRL/lpr mice.
Although we also speculated that UVA1 irradiation might induce mast cells that actively inhibit the development of LE-like skin lesions, for example through the production of IL-10, we could not confirm such mast cells in the skin of UVA1-irradiated mice. Rather, our findings implied that UVA1 irradiation might induce the apoptosis of dermal pathogenic mast cells. Consistent with our results, apoptosis of proliferating mast cells was reported to be induced by UV in vitro, whereas resting skin mast cells were resistant to UV light-induced apoptosis [29]. However, as shown in Figure 7, UVA1 irradiation also induced the apoptosis of dermal cell types other than mast cells. The identity of these apoptotic cells in the dermis of UVA1-irradiated mice remains unclear and requires further investigation, because several recent lines of evidence have offered new insights into autoimmunity in MRL/lpr mice through the impaired regulation of Langerhans cells [30] or regulatory T cells [31,32]. Besides mast cells, these various immunocytes may be involved in the development of the spontaneous LE-like skin lesions of MRL/lpr mice, and possibly in the protective effects of UVA1 irradiation on them. In this context, the apoptosis of mast cells by UVA1 irradiation should be considered to contribute, but only in part, to the effects of UVA1. In human LE, there have been only a few reports investigating mast cells in the skin lesions [33]. A study of 7 Japanese cases of chronic cutaneous LE revealed a 1.5-fold increase in infiltrating mast cells compared with control skin [34]. Although direct evidence is lacking, mast cells are considered good candidates for mediating the tissue damage caused by autoantibodies and immune complexes in SLE [35]. Conclusions. In the present study, we demonstrated that UVA1 irradiation prevented the spontaneous development of macroscopic LE-like skin lesions in MRL/lpr mice. At the same time, UVA1 irradiation had no harmful systemic effects. As one of the mechanisms underlying the protective effects of UVA1 irradiation, we suggest the induction of apoptosis of dermal pathogenic mast cells. These observations support the possibility that UVA1 phototherapy could be a rational treatment for the skin lesions of SLE patients.
Study of the Dielectric Behaviour of Cr-Doped Zinc Nano Ferrites Synthesized by Sol-Gel Method A series of Cr-Zn nano ferrites with the general composition CrxZnFe2−xO4 (0 ≤ x ≤ 0.5) has been synthesized successfully in nanocrystalline form using the sol-gel method. The samples were sintered at 900 °C for 3 hours. The effect of chromium substitution on the dielectric properties of Zn-ferrites is reported in this paper. The analysis of XRD patterns revealed the formation of a single-phase cubic spinel structure for all the Cr-Zn ferrite samples. The FTIR spectra show two strong absorption bands in the range of 400-600 cm−1, which corroborate the spinel structure of the samples. The average grain size was found to be in the nanometer range, of the order of 43-63 nm, as obtained from TEM images. The lattice parameter and crystallite size decrease with increasing Cr concentration (x). The investigation of the dielectric constant (ε′), dissipation factor (D) and ac conductivity (σac) was carried out at a fixed frequency of 1 kHz and over the frequency range of 100 Hz to 1 MHz at room temperature using an LCR meter. The plots of dielectric constant (ε′) versus frequency show the normal dielectric behavior of spinel ferrites. The value of the ac conductivity (σac) increases with increasing frequency for all the compositions. The appearance of a peak for each composition in the dissipation factor versus frequency curve suggests the presence of relaxing dipoles in the Cr-Zn nano ferrite samples. It is also found that the shifting of the relaxation peak towards the lower-frequency side with increasing chromium content (x) is due to the strengthening of dipole-dipole interactions. The composition and frequency dependence of the dielectric constant, dielectric loss and ac conductivity are explained on the basis of Koops' two-layer model, the Maxwell-Wagner polarization process, and Debye relaxation theory. Introduction Ferrites have paramount advantages over other types of magnetic materials due to their high electrical resistivity and low eddy-current losses [1] [2]. For the most favorable combination of low cost, high stability, high quality and lowest volume, ferrites are considered the best core-material choice over a wide range of frequencies [3]. Among all magnetic materials, ferrites are the most useful because, in addition to their magnetic properties, they are also good electrical insulators. The dielectric properties of ferrites depend on several factors, such as the method of preparation, heat treatment, sintering conditions, chemical composition, cation distribution, pH, nature and type of substituent, the ratio of Fe3+/Fe2+ ions, frequency and crystallite size [4]-[6]. The important parameters for any dielectric substance are its dielectric constant or dielectric permittivity (ε′), which reflects its ability to store charge in a capacitor, and the dissipation factor (D), which is a measure of the energy dissipation in the material. Extrinsic losses are associated with crystal defects, e.g., porosity, grain boundaries, micro-cracks, random crystal orientations, and impurities. It is evident from previous studies [7] that the losses in sintered polycrystalline ceramics are greatly affected by these extrinsic factors. All these parameters can play a key role in modifying the dielectric behavior of spinel ferrites for the desired applications.
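The abstract reports crystallite sizes in the tens of nanometers, obtained from XRD and TEM. A common way to estimate crystallite size from XRD peak broadening is the Scherrer equation; the paper does not state which relation it used, so the sketch below is an illustration only, assuming Cu Kα radiation and an invented peak width for a spinel (311) reflection.

```python
import math

def scherrer_size_nm(wavelength_nm: float, fwhm_deg: float,
                     two_theta_deg: float, k: float = 0.9) -> float:
    """Crystallite size D = K*lambda / (beta*cos(theta)); beta is the
    peak full width at half maximum converted to radians."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical inputs: Cu K-alpha (0.15406 nm), (311) spinel peak near 35 deg,
# 0.18 deg FWHM -- these numbers are not taken from the paper.
print(f"D = {scherrer_size_nm(0.15406, 0.18, 35.0):.0f} nm")
```

With these invented inputs the estimate lands near 46 nm, i.e., in the same 43-63 nm range the authors report.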
ZnFe2O4 has the normal spinel structure [8], in which the Zn2+ ions occupy the tetrahedral sites and all the Fe3+ ions occupy octahedral sites. ZnFe2O4 exhibits good dielectric and magnetic properties; hence these ferrites are of great importance from the applications point of view, being widely used in many ferrite devices and in the production of electronic and magnetic components, converters, and electromagnetic-wave absorbers [9]. Polycrystalline ferrites are very good dielectric materials with numerous applications at microwave frequencies. The study of dielectric properties gives valuable information about the behavior of localized electric charge carriers and can explain the phenomena of dielectric polarization and electrical conduction in the material. The experimental conditions used in the preparation of these materials have a strong impact on the properties of the resultant ferrite nanoparticles. For this reason, several methods have been used for the preparation of nanoparticles, such as the co-precipitation method [10] [11], sol-gel technique [12]-[14], hydrothermal method [15] [16], microwave sintering method [17], spray-spin-heating-coating method [18] and auto-combustion method [19]. Of all these, the sol-gel method is a promising technique for the synthesis of nano ferrites on a bulk scale due to the production of homogeneous particles. The sol-gel method is the most convenient technique for synthesizing nanoparticles because of its simplicity, inexpensive precursors, short preparation time, and better control over crystallite size and other properties of the materials [20]. The current effort has been focused on studying the composition- and frequency-dependent dielectric properties of Cr-Zn ferrites synthesized through the sol-gel technique. Synthesis Mixed Cr-Zn ferrites with the general formula CrxZnFe2−xO4 (where x = 0.0, 0.1, 0.2, 0.3, 0.4 and 0.5) were prepared using the sol-gel method. The detailed structural analysis and magnetic properties of the synthesized Cr-Zn nano ferrite samples are reported in our earlier paper [21]. The synthesized powder was pre-sintered at 900 °C for 3 hrs and cooled slowly to room temperature. The pre-sintered samples were mixed with a small quantity of polyvinyl alcohol (PVA) as an organic binder. The mixture was dried and pressed into disk-shaped pellets of 10 mm diameter and 2 mm thickness by applying a pressure of 3 tons. The samples were sintered again at 950 °C for 5 hrs and allowed to cool naturally. For the dielectric measurements, silver paint was applied on both sides of the pellets and air-dried to ensure good electrical contact. Dielectric Measurements The dielectric measurements were carried out at room temperature using an LCR meter (Waynekerr Model: 43100) over the frequency range 100 Hz to 1 MHz. The values of the capacitance (C) and dissipation factor (D) were read directly at different frequencies. The dielectric constant (ε′) was calculated using the relation [22] ε′ = Ct/(ε0A), where C is the capacitance of the sample, t is its thickness, A is its surface area, and ε0 is the permittivity of free space. From the dielectric constant and dissipation factor (D), the ac conductivity (σac) of the ferrite samples can be calculated using the relation [23] σac = ωε0ε′D, where ω = 2πf is the angular frequency. Dielectric Properties The dielectric properties of the Cr-Zn ferrites have been examined as a function of composition and frequency. Dielectric studies of these samples may be useful for widening their range of applications.
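The two measurement relations above translate directly into a few lines of code. A minimal Python sketch converting LCR-meter readings (C, D) into ε′ and σac, using the pellet geometry stated in the Synthesis section; the sample readings in the example are invented, purely for illustration.

```python
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

# Pellet geometry from the paper: 10 mm diameter, 2 mm thickness.
AREA = math.pi * (0.010 / 2) ** 2  # m^2
THICKNESS = 0.002                  # m

def dielectric_constant(capacitance_f: float) -> float:
    """eps' = C*t / (eps0 * A) for a parallel-plate sample."""
    return capacitance_f * THICKNESS / (EPS0 * AREA)

def ac_conductivity(freq_hz: float, eps_prime: float, dissipation: float) -> float:
    """sigma_ac = omega * eps0 * eps' * D, returned in S/m."""
    return 2 * math.pi * freq_hz * EPS0 * eps_prime * dissipation

# Hypothetical LCR readings (C in farads, D dimensionless) at 1 kHz:
c, d, f = 50e-12, 0.4, 1e3
eps = dielectric_constant(c)
print(f"eps' = {eps:.1f}, sigma_ac = {ac_conductivity(f, eps, d):.2e} S/m")
```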
Frequency Dependence of Dielectric Constant (ε′) The frequency dependence of the dielectric constant (ε′) for the CrxZnFe2−xO4 spinel ferrite system (where x = 0-0.5 in steps of 0.1) was studied at room temperature in the frequency range of 100 Hz to 1 MHz. Figure 1 displays the variation of the dielectric constant (ε′) as a function of frequency at room temperature. It can be observed that all the samples show strong frequency dependence. The dielectric constant decreases with increasing frequency, very rapidly at lower frequencies and more slowly at higher frequencies; as the frequency increases further, the dielectric constant becomes almost independent of frequency. This is the normal dielectric behavior observed in most spinel ferrites [24]-[26]. It can be explained by Maxwell-Wagner interfacial polarization, which is also in agreement with Koops' phenomenological theory (Koops, 1951) [27] [28]. According to this model, the ferrite is composed of well-conducting grains separated by poorly conducting grain boundaries. On application of an electric field, the electrons reach the grain boundaries through hopping, and if the resistance of the grain boundary is high enough, electrons pile up at the grain boundaries and produce polarization. However, as the frequency of the applied external field is increased beyond a certain value, the hopping frequency can no longer follow the field variation. This decreases the probability of the electrons reaching the grain boundary, and as a result the polarization decreases, which in turn lowers the dielectric constant. The large value of the dielectric permittivity (ε′) at low frequency is due to the predominance of Fe2+ ions, oxygen vacancies, grain-boundary defects, etc., while the decrease of ε′ with frequency is due to the lagging of the species contributing to the polarizability behind the applied electric field. At higher frequencies ε′ remains constant, which is attributed to the contribution of electronic polarizability only.
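The dispersion described here, a high low-frequency ε′ relaxing to a plateau, can be mimicked at a qualitative level by a single Debye-type relaxation. The sketch below uses invented parameters; the real samples are better described by Maxwell-Wagner interfacial polarization, so this single-relaxation form only illustrates the trend.

```python
import numpy as np

def debye_eps_prime(f, eps_inf, eps_s, tau):
    """Real permittivity of a single Debye relaxation:
    eps'(f) = eps_inf + (eps_s - eps_inf) / (1 + (2*pi*f*tau)**2)."""
    w = 2 * np.pi * f
    return eps_inf + (eps_s - eps_inf) / (1 + (w * tau) ** 2)

f = np.logspace(2, 6, 5)  # 100 Hz .. 1 MHz, the range used in the paper
# eps_s, eps_inf and tau below are placeholders, not fitted values:
print(debye_eps_prime(f, eps_inf=20.0, eps_s=400.0, tau=1e-4))
```

Running this shows ε′ near its static value at 100 Hz and collapsing toward the high-frequency limit by 1 MHz, matching the qualitative shape of Figure 1.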
Variation of Dissipation Factor (D) with Frequency The variation of the dissipation factor (D) with frequency at room temperature is depicted in Figure 2 for CrxZnFe2−xO4 with x = 0.0 to x = 0.5. All the samples exhibit peaking behavior. According to Iwauchi [29], there is a strong correlation between the conduction mechanism and the dielectric behavior of ferrites. The exchange of electrons between ferrous (Fe2+) and ferric (Fe3+) ions on the octahedral sites may lead to a local displacement of electrons in the direction of the applied field, and these electrons determine the polarization. The dielectric loss in ferrites originates mainly from electron hopping and defect dipoles. Electron hopping contributes to the dielectric loss only in the low-frequency range; its response decreases with increasing frequency, and hence the dielectric loss decreases in the high-frequency range. Meanwhile, a dielectric loss peak can be seen in Figure 2 for all the Cr-Zn ferrite samples. The appearance of the relaxation peak can be explained by Debye relaxation theory [30]. The loss peak is observed when the applied electric field is in phase with the hopping frequency of the dielectric material. The condition for observing a maximum in the dielectric loss of a dielectric material is ωτ = 1, where ω = 2πf, f is the frequency of the applied electric field, and τ is the relaxation time. The strength and frequency of the relaxation depend on the characteristic properties of the dipolar relaxation. Variation of AC Conductivity with Frequency Figure 3 shows the ac conductivity of the Cr-Zn ferrite system sintered at 950 °C for 5 hrs. The ac conductivity of the samples was studied in the frequency range of 100 Hz to 1 MHz. All the curves exhibit significant dispersion with frequency, which is an important characteristic of ferrites. The electrical conductivity in ferrites is mainly due to the hopping of electrons between ions of the same element present in more than one valence state and distributed randomly over crystallographically equivalent lattice sites [31]. On application of an ac electric field, this hopping of electrons is boosted, resulting in increased ac conductivity; increasing the frequency of the applied field further enhances the hopping phenomenon, and hence the ac conductivity increases. Composition Dependence of Dielectric Behavior The dielectric constant, dissipation factor and ac conductivity of the Cr-Zn ferrite system were measured at a fixed frequency of 1 kHz. The values are recorded in Table 1. It is evident from the table that all the dielectric parameters are strongly affected by the Cr content (x). The dielectric constant increases for all the Cr-substituted Zn-ferrite samples except the CrxZnFe2−xO4 (x = 0.4) sample. The variation of the dielectric constant as a function of composition is shown in Figure 4.
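The Debye condition ωτ = 1 lets one read a relaxation time directly off each loss-peak position. A small sketch follows; the peak frequencies are hypothetical (the figure data are not reproduced in the text) and were chosen only to illustrate how the reported shift of the peak to lower frequency with Cr content corresponds to a longer relaxation time.

```python
import math

def relaxation_time(f_peak_hz: float) -> float:
    """Debye condition at the loss maximum: omega*tau = 1 => tau = 1/(2*pi*f_peak)."""
    return 1.0 / (2 * math.pi * f_peak_hz)

# Hypothetical peak frequencies read off dissipation-vs-frequency curves,
# decreasing as the Cr content increases:
for f_peak in (10e3, 3e3, 1e3):
    print(f"f_peak = {f_peak:8.0f} Hz  ->  tau = {relaxation_time(f_peak):.2e} s")
```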
Cr-doped Zn-ferrites have higher conductivity than the undoped one. The increase in ac conductivity upon chromium substitution may be due to an increase in Fe2+ ions at the tetrahedral sites, which can increase the hopping of charge carriers between the Fe2+ and Fe3+ ions and hence the ac conductivity of the Cr-Zn ferrites [32]. Figure 2 shows that the height and width of the relaxation peaks increase unevenly across the Cr-doped zinc ferrite samples. The increase in peak height with Cr3+ substitution is due to the increase in the conductivity of the sample, arising from the larger number of Fe3+/Fe2+ ion pairs available for the conduction process. The increase in peak width is due to the strengthening of the dipole-dipole interactions, which hinders dipole rotation. From Figure 2, it is also important to note that the relaxation peaks of all the Cr-substituted Zn-ferrite samples, except the Cr0.2ZnFe1.8O4 and Cr0.3ZnFe1.7O4 samples, shift towards the lower-frequency side with increasing Cr content. The strength and frequency of the relaxation depend on the characteristic properties of the dipolar relaxation. From Table 1, it is clear that the Cr-Zn ferrites possess low dielectric losses. The low dielectric loss makes these samples especially attractive for high-frequency applications. Conclusion Cr-doped Zn-ferrites were synthesized by the sol-gel method. The dielectric properties have been examined as functions of frequency and composition. The room-temperature dielectric constant decreases rapidly with increasing frequency, indicating normal dielectric behavior for all the samples. All the substituted samples have a higher dielectric constant than the basic Zn-ferrite composition without chromium. Relaxation peaks were observed for all the samples in the dissipation-versus-frequency curves, and the relaxation peak shifts to the low-frequency side with increasing Cr content (x). The ac conductivity was found to increase with increasing frequency and Cr concentration. The substitution of Cr3+ for Fe3+ ions has a significant impact on the dielectric behavior of the Cr-Zn ferrite system. Figure 4. Variation of the dielectric constant (ε′) as a function of Cr content (x) for the Cr-Zn nanoferrite samples at 1 kHz.
Phytochemical Composition and Enzyme Inhibition Studies of Buxus papillosa C.K. Schneid: The current research work is an endeavor to study the chemical profiling and enzyme-inhibition potential of different-polarity solvent (n-hexane, dichloromethane (DCM) and methanol (MeOH)) extracts from the aerial and stem parts of Buxus papillosa C.K. Schneid. All the extracts were analyzed for HPLC-PDA phenolic quantification, while both (aerial and stem) DCM extracts were studied for UHPLC-MS phytochemical composition. The inhibitory activity against clinically important enzymes having a crucial role in different pathologies, such as skin diseases (tyrosinase), inflammatory problems (lipoxygenase, LOX) and diabetes mellitus (α-amylase), was studied using standard in vitro bioassays. UHPLC-MS analysis of the DCM extracts, conducted in both negative and positive ionization modes, led to the tentative identification of 52 important secondary metabolites. Most of these belonged to the alkaloid, flavonoid, phenolic and triterpenoid classes. The HPLC-PDA polyphenolic quantification identified the presence of 10 phenolic compounds. Catechin was present in significant amounts in the aerial-MeOH (7.62 ± 0.45 µg/g extract) and aerial-DCM (2.39 ± 0.51 µg/g extract) extracts. Similarly, higher amounts of epicatechin (2.76 ± 0.32 µg/g extract) and p-hydroxybenzoic acid (1.06 ± 0.21 µg/g extract) were quantified in the aerial-DCM and stem-MeOH extracts, respectively. Likewise, all the extracts exhibited moderate inhibition against all the tested enzymes. These findings explain the wide usage of this plant in folklore medicine and suggest that it could be further studied as a source of novel bioactive phytocompounds and for the design of new pharmaceuticals. Introduction The conventional herbal medicinal system has played a central role in preventing and treating different diseases since time immemorial, and medicinal herbs are rich sources for the discovery of novel drugs [1]. During the past few decades, the number of prescribed medicines developed from different higher plants is estimated to have expanded by about 25%. It is also noteworthy that, if traditional medicines are used in a scientific way, they can be considered safer than modern medicines [2,3]. Since ancient times, plant-derived natural products have been regarded as the origin of modern drugs. Currently, many of these natural phytocompounds have been scientifically approved as antioxidant, antimicrobial and anticancer agents [4][5][6]. Natural products obtained from medicinal plants are receiving much more attention nowadays and are considered worldwide a worthy source for the development of new drug molecules [7]. Understandably, it is reported that plant-based natural medicines are gaining ground in the pharmaceutical industry, with about 1300 herbal drugs utilized in Europe. Likewise, in the USA until 2005, the number of nature-based drugs was comparatively high among the total prescribed drugs [8]. Some recent studies have highlighted that, in the developed countries, about 80% of the population still depends upon traditional drug therapies for the treatment of common ailments; one-fourth of the prescribed drugs in these countries are derived from natural flora [9].
Keeping in view this upsurge in herbal therapies and natural-product usage, the secondary metabolites and/or phytochemicals of natural sources are currently being investigated and explored for the discovery of novel drug leads [10]. In this frame of reference, much of the scientific community has directed its research interests towards the characterization and pharmacological effects of medicinal plants and their isolated chemical constituents [11,12]. At present, unexplored medicinal plants from various folklore medicinal systems can be considered inspiring sources of bioactive phytochemicals for drug design as well as for functional-food development plans [13]. The genus Buxus (family: Buxaceae) is commonly known as boxwood. This genus comprises about 70 different species and is native to China, Europe, America and Asia, among other regions. Most of the plants of this genus are distributed in northern America and Eurasia, including Pakistan, Turkey and China [14][15][16]. This genus is known to contain many important classes of phytochemicals, including steroids, alkaloids and terpenoids, among others [17][18][19]. Most of the plant species of this genus have reported folklore uses for the treatment of skin infections, malaria, human immunodeficiency virus (HIV) infections, cancer, depression, tuberculosis, etc. [20][21][22]. Buxus papillosa C.K. Schneid., most commonly called Shamshad, is a shrubby plant commonly found in the northern areas of Pakistan. Different extracts of this plant have been extensively utilized in the indigenous medicinal system for treating common ailments, including skin infections, malaria and rheumatism [23]. As a continuation of our plant-based research, we have previously reported the phytochemical composition (UHPLC-MS analysis) and pharmacological effects of the aerial and stem parts of this plant [24]. Because data on the chemical profiling of some of the solvent extracts are still missing, this study was conducted to provide in-depth phytochemical profiling and the enzyme-inhibitory potential of this plant. The phenolic phytochemical composition was established using a high-pressure liquid chromatography photodiode array (HPLC-PDA) method, while the DCM extracts were analyzed by ultra-high-performance liquid chromatography-mass spectrometry (UHPLC-MS). All the extracts were also analyzed for inhibitory activity against clinically important enzymes having important roles in common ailments, for example, diabetes mellitus (α-amylase), inflammation (LOX) and skin diseases (tyrosinase). The current research work may be considered a first report on such phytochemical and enzyme-inhibition studies of the aerial and stem parts of this plant. HPLC-PDA Analysis Phenolic secondary metabolites are among the most abundant phytocompounds in plants, possess different pharmacological effects, and are regarded as the most common antioxidants present in the human diet [25]. Phenolic compounds have previously been appraised in different biological assays, including antioxidant, anticancer, antimicrobial and anti-inflammatory assays. Similarly, flavonoids are one of the most common groups of phenolics and are regarded as natural bioactive molecules for the design of novel bioactive products [26]. The present research work describes the HPLC-PDA polyphenolic quantification of the n-hexane, DCM and MeOH extracts from B. papillosa aerial and stem parts.
A list of 22 important standard phenolic phytochemicals was tested for quantification; the studied extracts were found to contain 10 of these compounds. The results for the quantified phenolics are presented in Table 1, and their chemical structures are shown in Figure 1. Likewise, the HPLC chromatograms for each individual extract are given in the Supplementary Material section (Figures S1-S5). From the results, it can be seen that the B. papillosa aerial-DCM extract contained a higher amount of phenolics in comparison with the other extracts. The highest amount of catechin (7.62 ± 0.45 µg/g extract) was found in the aerial-MeOH extract, while naringin was below the limit of detection (BLD, <0.1 µg/mL) in this extract (Figure S1). Likewise, the aerial-DCM extract contained catechin (2.39 ± 0.51 µg/g extract), epicatechin (2.76 ± 0.32 µg/g extract), syringic acid (0.77 ± 0.06 µg/g extract), benzoic acid (0.47 ± 0.04 µg/g extract) and vanillic acid (BLD) (Figure S2). Interestingly, none of the tested phenolic standards were present in the aerial-hexane extract, which may be due to the non-polar nature of this extract. Similarly, p-hydroxybenzoic acid (1.06 ± 0.21 µg/g extract), 3-hydroxybenzoic acid (0.59 ± 0.06 µg/g extract) and gallic acid (BLQ, below limit of quantification, i.e., <0.2 µg/mL) were quantified in the stem-MeOH extract (Figure S3). The stem-DCM extract was found to contain two compounds, syringic acid (0.24 ± 0.02 µg/g extract) and 3-hydroxy-4-methoxybenzaldehyde (0.38 ± 0.04 µg/g extract) (Figure S4). Naringin (0.45 ± 0.04 µg/g extract) was the only phenolic detected in the stem-hexane extract (Figure S5). Overall, this phenolic profiling confirms the presence of important secondary metabolites, so these plant extracts can be explored further for the isolation of bioactive molecules of therapeutic importance. nd: not detected; BLD: below limit of detection (<0.1 µg/mL); BLQ: below limit of quantification (<0.2 µg/mL). Chlorogenic acid, quercetin, p-coumaric acid, rutin, sinapinic acid, naringenin, t-ferulic acid, harpagoside, 2,3-dimethoxybenzoic acid, o-coumaric acid, t-cinnamic acid and carvacrol were not detected in any of the four extracts. UHPLC-MS Secondary Metabolites Analysis The analysis of the phytochemical composition of plant extracts is given much importance because it is not justifiable to evaluate the biological properties of an extract without knowledge of its chemical composition. Moreover, the pharmacological effects of a medicinal plant depend directly on the bioactive compounds naturally present in the plant. As a whole, phytochemical profiling is mandatory for an accurate estimation of the biological properties of plant extracts [27]. In the current research, UHPLC-MS analysis (in both positive and negative ionization modes) of the DCM extracts of B. papillosa aerial and stem parts was performed. As indicated in Table 2, the UHPLC-MS analysis of the B. papillosa aerial-DCM extract tentatively identified 29 different phytocompounds. Their total ion chromatograms (TICs) are shown in Figure 2.
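The HPLC-PDA thresholds defined in the Table 1 footnote above (BLD < 0.1 µg/mL, BLQ < 0.2 µg/mL) amount to a simple classification rule. A minimal Python sketch of that rule; the example readings are hypothetical and the units are simplified for illustration.

```python
# Classify readings against the paper's thresholds:
# BLD < 0.1 ug/mL (below limit of detection),
# BLQ < 0.2 ug/mL (below limit of quantification).

LOD = 0.1  # limit of detection, ug/mL
LOQ = 0.2  # limit of quantification, ug/mL

def classify(conc_ug_per_ml: float) -> str:
    if conc_ug_per_ml < LOD:
        return "BLD"
    if conc_ug_per_ml < LOQ:
        return "BLQ"
    return f"{conc_ug_per_ml:.2f} ug/mL"

# Hypothetical readings, not the paper's measured values:
for name, conc in [("naringin", 0.05), ("gallic acid", 0.15), ("catechin", 7.62)]:
    print(f"{name:12s} -> {classify(conc)}")
```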
The classes of compounds profiled were mainly flavonoid (7,3′,4′-tetrahydroxyflavanone 7-alpha-L-arabinofuranosyl-(1->6)-glucoside, wightin, 8-methoxycirsilineol and dalpaniculin), alkaloid (calystegine A7, 3beta,6beta-dihydroxynortropane, Nb-stearoyltryptamine and 2,4,8-eicosatrienoic acid isobutylamide), phenolic ((+)-syringaresinol O-beta-d-glucoside, p-salicylic acid, 4-(2-hydroxypropoxy)-3,5-dimethyl-phenol and flavidulol D) and glucoside (ethyl 7-epi-12-hydroxyjasmonate glucoside, podorhizol beta-d-glucoside and morin 3,7,4′-trimethyl ether 2′-glucoside) derivatives. Three other compounds, namely 1-hydroxy-3-methoxy-7-primeverosyloxyxanthone (a xanthone glycoside derivative), panaxydol linoleate (a polyacetylenic derivative) and cyclopassifloside II (a triterpene derivative), were also tentatively identified. Likewise, the tentative secondary-metabolite composition of the stem-DCM extract, as determined by UHPLC-MS analysis and presented in Table 3, revealed the presence of a total of 32 phytocompounds. The TICs for this extract are shown in Figure 3. As presented in Table 3, most of these phytocompounds belonged to the alkaloid and flavonoid groups of secondary metabolites. The alkaloids tentatively detected were 14,19-dihydroaspidospermatine, prosopinine, uplandicine, cyclobuxine D, cyclovirobuxine C, terminaline, solanocapsine, evocarpine and camptothecin, whereas the flavonoids were chrysosplenoside D, melisimplexin, wightin, 8-methoxycirsilineol and dalpaniculin. Likewise, the other compounds identified belonged to triterpene/triterpenoid, organic acid, phloroglucinol, polyacetylenic, phenol, carotenoid, vitamin and sesquiterpenoid derivatives (Table 3). To our knowledge, this is the first tentative UHPLC-MS phytochemical study of the DCM extracts of this plant.
Enzyme Inhibition Potential Inhibition of the key enzymes directly involved in common pathologies may be considered a prime therapeutic strategy for controlling and managing worldwide health problems [28]. In the present research work, we evaluated the inhibitory potential of the B. papillosa aerial and stem extracts against the tyrosinase, α-amylase and LOX enzymes, which play important roles in common disorders, i.e., skin problems, diabetes and inflammatory disorders. The results of the enzyme inhibition assays are gathered in Table 4. Tyrosinase, a copper-containing enzyme, is considered important because of its role in the biosynthesis of melanin [29]. All the extracts were tested in the tyrosinase inhibition assay, and the results are given in Table 4. From the results, it was noted that all extracts presented moderate activity. Among these, the stem-DCM extract was the most active, with an inhibition of 27.14 mg KAE/g extract. Similarly, the α-amylase enzyme plays an important role in controlling glycemia, and in the management of type 2 diabetes the control of postprandial glycemia is regarded as an effective strategy [30]. As presented in Table 4, weak anti-α-amylase activity was recorded, with both hexane extracts showing higher inhibition (aerial-hexane: 0.13 mmol ACAE/g extract; stem-hexane: 0.12 mmol ACAE/g extract) than the other extracts. Likewise, the enzyme lipoxygenase is considered important in the process of inflammation, specifically in the biosynthesis of leukotrienes, which are regarded as major triggers of inflammation and allergic reactions [31]. The results of the LOX inhibition assay are given in Table 4; the aerial-hexane extract showed the highest percentage inhibition, 42.6%, while the other extracts presented weak percentage inhibition. Overall, all the extracts exhibited moderate inhibition potential against the tested enzymes, and this observed activity may be correlated with the phytochemical profile of these plant extracts, specifically the phenolic and flavonoid classes of compounds, as a positive connection between enzyme-inhibition potential and these phytocompounds has been noted previously [32]. Research reports on such enzyme-inhibition assays are lacking for this plant species; thus, the current preliminary results may serve as a starting point for extensively exploring this plant for the discovery of enzyme-inhibitory compounds of natural origin. However, further research work to isolate the compounds responsible for the observed enzyme-inhibitory activities is highly recommended. Plant Material and Extraction B. papillosa aerial and stem parts were collected from the peripheral areas of Malakand, Pakistan. The plant was authenticated by Dr. Abdul-Rehman Niazi (In charge, Herbarium), Department of Botany, Punjab University, Lahore, Pakistan. For future reference, a plant specimen voucher (number LAH # 7517) was deposited at the Department of Botany herbarium, Punjab University, Lahore, Pakistan. Both plant parts were shade-dried and then ground into a fine powder. The powdered plant material was extracted by maceration (at room temperature) using n-hexane, DCM and MeOH successively, for 3 days with each solvent.
The solvent was evaporated under reduced pressure in a rotary evaporator to afford the concentrated extracts. Phytochemical Composition The HPLC-PDA polyphenolic quantification of all the extracts was carried out to quantify 22 phenolic standards according to standard methods reported previously [33,34]. Likewise, the tentative secondary-metabolite composition of the DCM extracts was studied by the previously described UHPLC-MS analysis [35]. The details of both methods are given in the Supplementary section. Enzyme Assays The inhibitory assays of the studied extracts were conducted against three enzymes, namely α-amylase, tyrosinase and lipoxygenase. All the enzyme assays were conducted using previously reported standard methods [29,36,37]. Acarbose, kojic acid and quercetin were used as standards for the α-amylase, tyrosinase and lipoxygenase enzymes, respectively. The α-amylase and tyrosinase inhibition activities were expressed as acarbose equivalents (mmol ACE/g extract) and kojic acid equivalents (mg KAE/g extract), respectively, whereas for the lipoxygenase enzyme the percentage inhibition was calculated and the IC50 value was recorded using quercetin as a positive control. The details of all these assays are provided in the Supplementary file. Statistical Analysis All the assays were carried out in triplicate. The results are expressed as mean values and standard deviation (SD). The differences between the extracts were analyzed using one-way analysis of variance (ANOVA) followed by Tukey's significant-difference post hoc test, and a value of p < 0.05 was considered significant. This treatment was carried out using the SPSS v. 14.0 program. GraphPad Prism software (San Diego, CA, USA, version 6.03) was used to calculate the IC50 values. Conclusions The present communication serves as a thorough tentative evaluation of the phytochemical composition and enzyme-inhibitory capability of Buxus papillosa aerial and stem parts. The chemical composition established by HPLC-PDA analysis resulted in the quantification of important phenolic compounds such as syringic acid, 3-OH benzoic acid, p-hydroxybenzoic acid, 3-OH-4-MeO benzaldehyde and naringin, among others. Similarly, other phytochemical derivatives belonging to the flavonoid, terpenoid, alkaloid and glucoside groups were also tentatively identified by UHPLC-MS analysis. In the enzyme assays, moderate inhibitory activity was observed, which can be correlated with the phytochemical composition. Overall, this plant species could be further explored as a potential source for the design of new natural-product-derived drug leads.
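The statistical treatment described above (triplicates, one-way ANOVA, Tukey's post hoc test) can be reproduced outside SPSS/GraphPad. A minimal sketch in Python with SciPy; the triplicate values are invented placeholders, not the paper's data, and scipy.stats.tukey_hsd requires a reasonably recent SciPy (1.8 or later).

```python
# One-way ANOVA followed by Tukey's HSD, mirroring the analysis described
# in the Statistical Analysis section. Values below are placeholders.
from scipy import stats

aerial_dcm  = [27.1, 26.8, 27.4]   # e.g., tyrosinase inhibition, mg KAE/g
aerial_meoh = [20.3, 21.0, 19.8]
stem_dcm    = [27.0, 27.3, 27.1]

f, p = stats.f_oneway(aerial_dcm, aerial_meoh, stem_dcm)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
if p < 0.05:
    # Pairwise comparisons, analogous to Tukey's post hoc test in SPSS
    print(stats.tukey_hsd(aerial_dcm, aerial_meoh, stem_dcm))
```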
Impact of Infrastructure Availability to the Level of Slum Area in Banyumanik District Slums, especially in big cities such as Semarang City, occur not only in the inner city but have also spread to the suburbs of Semarang City, one of them being Banyumanik District. Based on the Decree of the Mayor of Semarang no. 050/801/2014 on the Determination of Slum Locations in Semarang City, there are 6 (six) urban villages in Banyumanik District which have slum areas. The determination of the slum areas is based on the Semarang City Slum Inventory Study from 2010 to 2014. From that study, it is known that the main problem causing the development of slum settlements is related to the availability of environmental infrastructure: road network, sanitation, clean water, and drainage. This research was conducted to determine the slum level of the slum areas located in Banyumanik District based on the availability of environmental infrastructure. The analysis included descriptive analysis, to explain the characteristics of the slum settlements and the availability of the existing environmental infrastructure, and scoring analysis, to determine the impact of the availability of this infrastructure on the slum level. The results of the analysis show that the slum settlement in RT 01/RW IV of Jabungan has the highest slum level in Banyumanik District. INTRODUCTION Development that occurs in a region or city can be identified from various conditions, both physical and non-physical, related to the social condition of the community. One form of development that can be seen physically is settlement expansion, as an implication of urban population growth, changes in socioeconomic conditions, and the interactions contained in them [1]. The phenomenon of population growth is not balanced by the provision of city infrastructure. Therefore, the development that occurs actually results in a decrease in environmental quality, potentially creating slum settlements. Most of the time, slums grow naturally and cannot be avoided. Slums of various conditions are found not only around the inner city but also in the periphery. Slums develop due to the housing demand of people coming from other areas [2]. Migrants initially aim to find better jobs and livelihoods in the city. However, high competition and the lack of skills required by formal jobs cause them to work in the informal sector. Working mainly in the informal sector limits their purchasing power for a comfortable house, in terms of both location and building condition. As a result, slums emerge with a variety of poor conditions, including limited physical structures, building quality that does not meet health requirements, limited environmental infrastructure, and so on [3]. Semarang City, the capital of Central Java Province, is one of the big cities in Indonesia that is in the process of growth and development. Currently, Semarang City has developed into a large city with a variety of functions: it is a regional administrative center and is devoted to trade and services. This is in line with the vision of the RPJMD (medium-term development plan) of Semarang 2016-2021, "Semarang City of Great Trade and Services Toward Increasingly Prosperous People" [4]. This certainly has positive implications for Semarang City itself. On the other hand, it presents a challenge, because development brings more diverse activities that demand the fulfillment of supporting infrastructure, one of which is the need for housing.
Housing needs are met through planned and unplanned housing, under various conditions. When an occupied house no longer meets the prerequisites of a healthy house, it becomes part of a slum neighborhood. Slum areas have a limited level of infrastructure and basic environmental services, especially limited access to clean water facilities, drainage and sanitation [5][6]. This determination is influenced by the availability of environmental infrastructure in the settlement area, which includes: road network, clean water, sanitation, drainage, and garbage services [7]. INFRASTRUCTURE AVAILABILITY AT SLUM AREA Referring to the Slum Settlement Management Program of Semarang City, the main problem causing slums is related to the availability of road networks, drainage, clean water and sanitation [8]. According to Law No. 1/2011 on Housing and Residential Areas, infrastructure is the basic physical equipment of a residential environment, with specific standards, for healthy, safe and comfortable living. The regulation Guidelines for the Identification of Slums in the Metropolitan Buffer Area [7] explains that a slum settlement area must have a road network, drainage network, clean water network, wastewater network and solid waste network. The existence of environmental infrastructure has a vital role in supporting the activities of a settlement. In addition, infrastructure as a physical facility is a potential factor in determining the success of a region's development [9]. METHODS The method used to describe the characteristics of the slums in Banyumanik District is descriptive statistics, covering the number and physical condition of the houses and the socioeconomic condition of the community. In addition, descriptive analysis is also conducted to determine the characteristics of the availability of environmental infrastructure in the slum areas. These conditions were identified by observing the phenomenon of slum growth found in large cities, including Semarang. Based on the tendency of slum settlements, there is a decrease in the quality of the settlement environment, which is influenced by the limited availability of environmental infrastructure [10]. Meanwhile, scoring analysis is used to assess the availability and service of infrastructure in each urban village that has slum areas in Banyumanik; through these assessments, the analysis determines the slum level in the study area. Examples of the scoring criteria include the following: for the piped water system, areas with a service level below 30% score 30, areas with a service level of 30%-60% score 20, and areas with a service level above 60% receive the lowest score; for the garbage service, areas with a service level below 50% score 30, areas with a service level of 50%-70% score 20, and areas with a service level above 70% receive the lowest score (a minimal computational sketch of this scoring appears at the end of this article). SLUM CHARACTERISTIC The determination of slum locations in Semarang is based on the results of previous studies, the Inventory and Identification of Slums in Semarang City 2010-2014, and their existence is administratively legal and does not conflict with the spatial regulations of Semarang City. Referring to the results of the inventory, in Banyumanik District there are 27.5 ha of slum area consisting of 321 housing units [11]. Land Area and Building Area The average slum land area in Banyumanik District varies from 50 m2 to 150 m2, while the ratio of building floor area to land area is quite varied, ranging from 50% to 80%, with some reaching 100%.
Distance Between Buildings The average distance between buildings is 2-3 m (RT 01/RW V of Jabungan, RT 05/RW II of Ngesrep, and RT 03/RW III of Padangsari). In addition, many buildings have no gap at all (RT 01/RW IV of Jabungan, RT 04/RW II of Jabungan, RT 05/RW I of Tinjomoyo, RT 06/RW I of Srondol Kulon, and RT 05/RW II of Gedawang). Condition of the Floor The condition of the floor was assessed by inspecting each room in the house. In general, most of the houses have cemented/stuccoed floors, while some are tiled or ceramic. • Floors of cement/stucco are found in RT 5/RW II of Ngesrep, RT 3/RW V and RT 4/RW II of Jabungan, and RT 5/RW I of Tinjomoyo. The proportion of houses with cement/stucco floors is about 50% of the slum dwellings located in Banyumanik District. • Floors of tiles (around 31.25%) are found in RT 3/RW III of Padangsari, RT 6/RW I of Srondol Kulon and RT 5/RW II of Gedawang. • Floors of ceramics (around 18.75%) are found in RT 1/RW IV of Jabungan and RT 5/RW II of Gedawang. Condition of the Wall Based on the results of the slum building identification, the walls of the houses are made from plastered bricks, unplastered bricks, brick-wood/bamboo, and wood/bamboo. • Unplastered brick walls (12.5%) are found in RT 3/RW III of Padangsari. • Walls of brick-wood/bamboo (50%) dominate in Banyumanik's slum areas; houses with these walls are in RT 5/RW II of Ngesrep, RT 3/RW V and RT 4/RW II of Jabungan, and RT 5/RW I of Tinjomoyo. • Walls made from wood/bamboo (25%) are found in RT 1/RW IV of Jabungan and RT 6/RW I of Srondol Kulon. Income and Education Level According to the findings of the study, the income level of the population in the slum locations tends to be low (household income of less than the Semarang City minimum wage, i.e., less than Rp 1,500,000.00). The number of households with an income of less than Rp 1,500,000.00 is about 60% of all existing households. The low income is related to the low level of education; most residents cannot work in the formal sector and instead work as farmers, traders, factory workers and manual laborers. In terms of education, about 66% of the population only graduated from junior high school, 22% graduated from high school, 6% graduated from elementary school, and the rest graduated from a diploma program/university. Nutritional Status of Children Under Five and Morbidity Rate The nutritional status, determined as the percentage of children under five years old with malnutrition, is less than 10%, except in RT 01/RW IV of Jabungan (31-50%). This is influenced by economic conditions and by the level of understanding of the importance of adequate nutrition for children under five. The morbidity rate is the percentage of people affected by certain diseases per year [13]. The average morbidity rate is less than 5%, except in RT 02/RW V of Ngesrep, where the dengue fever and diarrhea rates are 6-10%, and in RT 01/RW V of Tinjomoyo, where the diarrhea rate ranges from 6 to 10% [14]. Clean Water Service In general, all households are served by clean water infrastructure, either from PDAM or from shallow wells, in decent condition. However, some water sources are not suitable for consumption, e.g., in the slum located in RT 1/RW V of Jabungan. In general, the number of households not served by clean water averages 10%-30%. The number of households that have not received clean water service is <10% in RT 5/RW II of Ngesrep, RT 3/RW III of Padangsari, RT 1/RW V of Jabungan, and RT 5/RW II of Gedawang. Meanwhile, 50% of the area located in RT 1/RW IV of Jabungan does not have clean water service.
Sanitation Almost all households already have private restrooms, and there are still some public toilets in certain areas. Four units in RT 3/RW V of Jabungan were built with community funding; two units in RT 1/RW IV of Jabungan were built by a private company; two units in RT 4/RW II of Jabungan were built with a governmental grant; and four units in RT 5/RW I of Tinjomoyo were also built with governmental aid. The public toilets in RT 3/RW V of Jabungan and RT 5/RW I of Tinjomoyo are managed by community groups. The number of households without access to restrooms in RT 01/RW V of Jabungan, RT 05/RW II of Ngesrep, RT 03/RW III of Padangsari, RT 05/RW I of Tinjomoyo, RT 06/RW I of Srondol Kulon, and RT 05/RW II of Gedawang is less than 10% of the population, while in RT 4/RW II of Jabungan it is about 31-50%, and the highest is in RT 1/RW IV of Jabungan, at about 61-70% of the population. Solid Waste There is one garbage dump in RT 5/RW II of Ngesrep, provided through government aid. In the other areas, garbage is still disposed of on vacant land. Concerning the number of households without solid waste service, in RT 5/RW II of Ngesrep it is <10%, in RT 1/RW IV and RT 4/RW II of Jabungan it is about 11-30%, and in the other areas it is over 70%. Drainage Channel Some slum areas in Banyumanik District do not have proper drainage systems because drainage networks do not exist at all, such as in RT 3/RW III of Padangsari, RT 1/RW V of Jabungan, RT 6/RW I of Srondol Kulon, and RT 5/RW II of Gedawang. The existing drainage channels are 15-50 cm wide. In some areas the drainage is not clogged, but because of the small dimensions and poor condition, some channels cannot accommodate rainwater, causing floods/puddles when it rains. A drainage network exists in the Ngesrep area, 50 cm wide and 20 cm deep, made of bricks. In Padangsari, the existing drainage channel has a diameter of 20 cm and a depth of 10 cm, with concrete pavement, and less than 10% of the drainage length is clogged. In RT 4/RW II of Jabungan, the drainage channel is about 30 cm wide and 20 cm deep, with an earthen bed; in this location, 11-25% of the drainage length does not flow smoothly. Road Network The road network in Banyumanik District is dominated by asphalt and paving blocks, with an average road width of 1-4 meters. RT 04/RW I of Jabungan has the worst road condition, with more than 70% of the roads damaged, followed by RT 05/RW I of Jabungan, where 51-70% of the roads are damaged.
IMPACT OF INFRASTRUCTURE AVAILABILITY TO THE LEVEL OF SLUM AREA IN BANYUMANIK DISTRICT Referring to the Guidelines for the Identification of Slums in Urban Buffer Areas [7], the assessment of the infrastructure availability that affects the slum level covers the clean water, sanitation, solid waste, drainage and road networks. SUMMARY The 6 (six) slum areas in Banyumanik District have diverse infrastructure availability and services. PDAM's water supply service has not reached all slum areas, so some of them utilize shallow well water. In general, however, the service has been able to meet the needs of the community, except in the slum area found in RT 1/RW IV of Jabungan, where approximately 50% of the people have clean water service. Sanitation infrastructure has, in general, reached the entire region: most of the people already have private restrooms, and some areas have communal latrines provided through government aid and a private company. The garbage disposal system is supported by the presence of 1 (one) garbage dump in RT 5/RW II of Ngesrep. The drainage channels do not yet cover all of the existing territories, and even in the areas that already have a drainage network, the dimensions and conditions do not support optimal use. The road network is a crucial issue that needs to be considered, given that in some locations the level of road damage exceeds 50%. From this identification, the slum area located in RT 04/RW IV of Jabungan has the most limited availability of infrastructure, which makes that location the one with the highest slum level among the slum areas in Banyumanik District. [Table 1: Research variables, including the number of private trash bins, the household waste disposal system, and the number and location of garbage dumps. Table 2: The criteria used for the analysis. Table 3: Number of slum houses in Banyumanik District. Table 5: The condition of the roads in Banyumanik District. Table 6: Criteria and assessment of slum areas.]
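As promised in the Methods section, here is a minimal sketch of the scoring analysis in Python. It uses the 30/20-point bands quoted from Table 2; the score for the best-served band and the additive aggregation rule are assumptions, since Table 2 is not fully reproduced in the text.

```python
# Minimal sketch of the slum-level scoring, assuming the 30/20 bands from
# Table 2; the 10-point value for the best-served band and the summation
# across criteria are assumptions, not taken from the paper.

def score_piped_water(service_pct: float) -> int:
    if service_pct < 30:
        return 30          # worst service -> highest slum score
    if service_pct <= 60:
        return 20
    return 10              # assumed value for the best-served band

def score_garbage(service_pct: float) -> int:
    if service_pct < 50:
        return 30
    if service_pct <= 70:
        return 20
    return 10              # assumed

def slum_level(water_pct: float, garbage_pct: float) -> int:
    """Aggregate infrastructure scores; a higher total means more slum-like."""
    return score_piped_water(water_pct) + score_garbage(garbage_pct)

# Example: ~50% clean-water service and ~25% garbage service coverage
print(slum_level(water_pct=50.0, garbage_pct=25.0))  # -> 50
```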
COMMON ZEROS PRESERVING MAPS ON VECTOR-VALUED FUNCTION SPACES AND BANACH MODULES Let X, Y be Hausdorff topological spaces, and let E and F be Hausdorff topological vector spaces. For certain subspaces A(X, E) and A(Y, F) of C(X, E) and C(Y, F) respectively (including the spaces of Lipschitz functions), we characterize surjections S, T : A(X, E) → A(Y, F), not assumed to be linear, which jointly preserve common zeros in the sense that Z(f − f′) ∩ Z(g − g′) ≠ ∅ if and only if Z(Sf − Sf′) ∩ Z(Tg − Tg′) ≠ ∅ for all f, f′, g, g′ ∈ A(X, E). Here Z(·) denotes the zero set of a function. Using the notion of point multipliers we extend the notion of zero set to the elements of a Banach module and give a representation for surjective linear maps which jointly preserve common zeros in the module case. 2010 Mathematics Subject Classification: Primary: 46J10, 47B48; Secondary: 46J20. Introduction The study of linear maps preserving some properties related to the elements of the spaces under consideration is one of the most active areas of research in recent years. The characterization of linear maps preserving common zeros between vector-valued function spaces is one of these problems, which may be considered a vector-valued version of the Banach-Stone and Gleason-Kahane-Żelasko theorems. These maps send (in two directions) every pair of functions with disjoint zero sets to functions with the same property. For completely regular spaces X, Y, Hausdorff topological vector spaces E, F, and subspaces A(X, E) and A(Y, F) of C(X, E) and C(Y, F), respectively, a characterization of linear bijections T : A(X, E) → A(Y, F) satisfying Z(f_1) ∩ ... ∩ Z(f_n) = ∅ if and only if Z(Tf_1) ∩ ... ∩ Z(Tf_n) = ∅, for any n ∈ N and f_1, ..., f_n ∈ A(X, E), has been given in [17]. Here Z(·) denotes the zero set of a function. In the case where X, Y are metric spaces and E, F are normed spaces, a complete description of linear bijections T : A(X, E) → A(Y, F) between certain subspaces of C(X, E) and C(Y, F), satisfying the weaker condition Z(f) ∩ Z(g) ≠ ∅ if and only if Z(Tf) ∩ Z(Tg) ≠ ∅ for all f, g ∈ A(X, E), is given in [7], where some extensions of the previous results are also obtained. In [18], among other things, the authors considered maps preserving zero set containments, a topic which dates back to [11]. Indeed, they characterized linear bijections T : C(X, E) → C(Y, F) such that Z(f) ⊆ Z(g) if and only if Z(Tf) ⊆ Z(Tg), when X, Y are either realcompact or metric spaces and E, F are locally convex spaces. Moreover, it is worth mentioning that another related notion, originally introduced in [18], is that of non-vanishing preserving maps, in the sense that Z(f) = ∅ if and only if Z(Tf) = ∅. For some results concerning non-vanishing preserving maps (on lattices) one can see [4,6,5,8,9,15,18,19,20]. We also refer to [14] for some relevant concepts in the case of scalar-valued continuous functions. In this paper we consider a pair of maps, not assumed to be linear, jointly preserving common zeros in the sense defined below. Let X, Y be Hausdorff topological spaces, E, F be Hausdorff topological vector spaces, and A(X, E) and A(Y, F) be subspaces of C(X, E) and C(Y, F), respectively. We say that a pair of maps S, T : A(X, E) → A(Y, F), not assumed to be linear, jointly preserves common zeros if Z(f − f′) ∩ Z(g − g′) ≠ ∅ if and only if Z(Sf − Sf′) ∩ Z(Tg − Tg′) ≠ ∅ for all f, f′, g, g′ ∈ A(X, E). A map T : A(X, E) → A(Y, F) preserves common zeros if the pair T, T jointly preserves common zeros. In Section 3, we study such maps for certain subspaces of vector-valued continuous functions on Hausdorff spaces with values in a Hausdorff topological vector space and obtain generalizations of the results in [7].
In particular, for the spaces of Lipschitz functions we give an extension of [5, Theorem 6]. In Section 4 we define the zero set for the elements of Banach modules and consider linear surjective maps which jointly preserve common zeros between Banach modules. We show that such maps admit representations similar to weighted composition operators.

Preliminaries Let X be a Hausdorff topological space and let E be a Hausdorff topological vector space. We denote the space of all E-valued continuous functions on X by C(X, E) and we set C(X) = C(X, ℂ). A subspace A(X, E) of C(X, E) is said to be completely regular if for each x ∈ X and each closed subset C of X with x ∉ C, there exists f ∈ A(X, E) with f(x) ≠ 0 and f = 0 on C. For a bounded metric space (X, d), α ∈ (0, 1], and a Banach space E, Lip_b^α(X, E) denotes the space of all bounded functions f : X → E satisfying the Lipschitz condition of order α, that is,

p_α(f) = sup{ ‖f(x) − f(y)‖ / d(x, y)^α : x, y ∈ X, x ≠ y } < ∞,

and lip_b^α(X, E) denotes the subspace of all f ∈ Lip_b^α(X, E) such that ‖f(x) − f(y)‖ / d(x, y)^α → 0 as d(x, y) → 0. We use the notation lip_b^α(X) for lip_b^α(X, ℂ), which is, indeed, a commutative unital Banach algebra. Clearly lip_b^α(X, E) ⊆ Lip_b^α(X, E). For a Banach algebra A with non-empty character space σ(A), by σ(A) ∪ {0} we mean the character space of the unitization of A. Let X be a Banach left A-module. Following [3], for a point ϕ ∈ σ(A) ∪ {0}, a linear functional ξ ∈ X* is said to be a point multiplier at ϕ if ⟨ξ, a · x⟩ = ϕ(a)⟨ξ, x⟩ for all a ∈ A and x ∈ X. The submodules of X with codimension one will be referred to as hyper maximal (left) submodules of X. It is easy to see that the kernel of each point multiplier on X is a closed hyper maximal submodule of X and, conversely, each closed hyper maximal submodule of X is the kernel of some non-trivial point multiplier on X (see for example [3] for the unital case). We denote the set of all non-trivial point multipliers on X which are in the unit ball of X* by σ_A(X) and the set of all closed hyper maximal submodules of X by Δ_A(X). Then ν_A : σ_A(X) → σ(A) ∪ {0} is the natural map which associates to each point multiplier ξ on X the unique point ϕ ∈ σ(A) ∪ {0} satisfying ⟨ξ, a · x⟩ = ϕ(a)⟨ξ, x⟩ for all a ∈ A and x ∈ X. The Gelfand radical Rad_A(X) of X is the intersection of all closed hyper maximal submodules of X, and X is called hyper semisimple if Rad_A(X) = {0}. Consider the following subset of Π_{P∈Δ_A(X)} X/P:

X̃ = { (x_P + P)_{P∈Δ_A(X)} : sup_{P∈Δ_A(X)} ‖x_P + P‖ < ∞ }.

Then X̃ is a Banach space under the norm defined by ‖(x_P + P)_P‖ = sup_{P∈Δ_A(X)} ‖x_P + P‖, which is actually a left Banach A-module in a natural way (see [3] for the case that A is unital). Furthermore, the map G_X : X → X̃ defined by G_X(x) = x̃, where for each x ∈ X, x̃ = (x + P)_{P∈Δ_A(X)}, is a norm decreasing map which is injective if X is hyper semisimple.

Jointly common zeros preserving maps between certain subspaces of vector-valued functions In the main theorem of this section (Theorem 3.1), we assume that X, Y are Hausdorff topological spaces, E, F are Hausdorff topological vector spaces, and A(X, E) and A(Y, F) are subspaces of C(X, E) and C(Y, F), respectively, satisfying the following property: (Z) For each x ∈ X and y ∈ Y and neighborhoods U and V of x and y, respectively, there are functions f ∈ A(X, E) and g ∈ A(Y, F) with x ∈ Z(f) ⊆ U and y ∈ Z(g) ⊆ V. We recall that a pair of, not necessarily linear, maps S, T : A(X, E) → A(Y, F) jointly preserves common zeros if for any f, f′, g, g′ ∈ A(X, E),

Z(f − f′) ∩ Z(g − g′) ≠ ∅ if and only if Z(Sf − Sf′) ∩ Z(Tg − Tg′) ≠ ∅.

Clearly, under the additional assumption S0 = T0 = 0, such maps preserve non-vanishing functions in the sense that for each f ∈ A(X, E), Z(f) = ∅ if and only if Z(Sf) ∩ Z(Tf) = ∅. Theorem 3.1. Let A(X, E), A(Y, F) satisfy property (Z), and let S, T : A(X, E) → A(Y, F) be surjective, not necessarily linear, maps.
If S, T jointly preserve common zeros, then there exist subsets X₀ and Y₀ of X and Y respectively, containing all zero points of A(X, E) and A(Y, F) respectively, a function h : Y₀ → X₀ (which is a homeomorphism whenever A(X, E), A(Y, F) are completely regular), subspaces E_y ⊆ E, F_y ⊆ F, and bijections J_y : E_y → F_y, G_y : E_y → F_y for each y ∈ Y₀, such that Sf(y) = S0(y) + J_y(f(h(y))), Tf(y) = T0(y) + G_y(f(h(y))) for all f ∈ A(X, E) and y ∈ Y₀. If, furthermore, S, T are linear, then for all y ∈ Y₀, J_y and G_y are linear.

Proof: Since the pair S − S0 and T − T0 jointly preserves common zeros as well, without loss of generality we may assume that S0 = T0 = 0. We prove the theorem through the following steps:

Step 1: S and T are injective maps. Since the conditions are symmetric with respect to S and T, it suffices to show that T is injective. Let f, f′ ∈ A(X, E) with Tf = Tf′. For each x ∈ X and neighborhood U of x there exists, by property (Z), a function g ∈ A(X, E) with x ∈ Z(g) ⊆ U. Since Z(g) ∩ Z(g) ≠ ∅, we have Z(Sg) ∩ Z(Tg) ≠ ∅ and, in particular, Z(Sg) ≠ ∅; hence Z(Sg) ∩ Z(Tf − Tf′) = Z(Sg) ≠ ∅, which gives Z(g) ∩ Z(f − f′) ≠ ∅ and so Z(f − f′) ∩ U ≠ ∅. Since x and U are arbitrary and Z(f − f′) is closed, f = f′.

For each x ∈ X put I_x(S) := ⋂{ Z(Sf) : f ∈ A(X, E), x ∈ Z(f) } and I_x(T) := ⋂{ Z(Tf) : f ∈ A(X, E), x ∈ Z(f) }.

Step 2: Given x ∈ X, the intersections I_x(S) and I_x(T) are singletons and equal whenever both are non-empty. We show that if y₁ ∈ I_x(S) and y₂ ∈ I_x(T) then y₁ = y₂. Assume on the contrary that y₁ ≠ y₂ and let V₁, V₂ be disjoint neighborhoods of y₁ and y₂ in Y, respectively. Then, by property (Z), there are elements g, l ∈ A(Y, F) with y₁ ∈ Z(g) ⊆ V₁ and y₂ ∈ Z(l) ⊆ V₂. Therefore, for an arbitrary neighborhood U of x, Z(S⁻¹g) ∩ U ≠ ∅ and, since U is arbitrary, the closedness of Z(S⁻¹g) implies that x ∈ Z(S⁻¹g). A similar discussion shows that x ∈ Z(T⁻¹l), a contradiction. Clearly the above argument shows that the two intersections are the same and singletons whenever both are nonempty. We now consider the subset X₀ := { x ∈ X : I_x(S) = I_x(T) ≠ ∅ } and let k : X₀ → Y send each x ∈ X₀ to the unique point k(x) with I_x(S) = I_x(T) = {k(x)}; the subset Y₀ of Y and the function h : Y₀ → X₀ are defined analogously with respect to S⁻¹ and T⁻¹.

Step 3: Suppose that g, g′ ∈ A(Y, F) and (g − g′)(k(x)) = 0. Let U be an arbitrary neighborhood of x and f ∈ A(X, E) be chosen such that x ∈ Z(f) ⊆ U. A similar argument shows that for each y ∈ Y₀, I_{h(y)}(S) = I_{h(y)}(T) = {y}, that is, k(h(y)) = y. Assume now that A(X, E) is completely regular. To prove the continuity of h, assume on the contrary that there exists a net (y_α) in Y₀ such that y_α → y for some y ∈ Y₀ while h(y_α) does not converge to h(y). Then there exists a subnet (h(y_β)) of (h(y_α)) such that h(y_β) ∈ X\U for some neighborhood U of h(y). Using the complete regularity of A(X, E), we can find a function f ∈ A(X, E) such that f(h(y_β)) = 0 for each β while f(h(y)) ≠ 0. Thus Sf(y_β) = Sf(k(h(y_β))) = 0 and, consequently, Sf(y) = 0, since Sf is continuous. Hence f(h(y)) = 0, a contradiction. This argument shows that h is continuous. Similarly, k is continuous whenever A(Y, F) is completely regular. Now for each y ∈ Y₀ we consider the non-trivial subspaces E_y := {f(h(y)) : f ∈ A(X, E)} and F_y := {g(y) : g ∈ A(Y, F)} of E and F, respectively. We also define the maps J_y : E_y → F_y and G_y : E_y → F_y by J_y(e) = Sf(y) and G_y(e) = Tf(y) for all e ∈ E_y, where f ∈ A(X, E) is such that f(h(y)) = e. It is easy to see that J_y and G_y are well defined.

Step 4: For each y ∈ Y₀, J_y and G_y are bijective maps. For the injectivity, assume that e, e′ ∈ E_y and J_y(e) = J_y(e′). Then, choosing f, f′ ∈ A(X, E) with f(h(y)) = e and f′(h(y)) = e′, we have Sf(y) = Sf′(y), and an argument similar to the one in Step 3 yields (f − f′)(h(y)) = 0, that is, e = e′. Surjectivity follows from the surjectivity of S. Similarly, G_y also is a bijective map. Clearly, for each y ∈ Y₀, J_y, G_y are linear whenever S and T are linear.

Step 5: Let f₀ ∈ A(X, E) with Z(f₀) = {x} for some x ∈ X. We apply a similar argument as the one given in Step 2. Clearly Z(Sf₀) ≠ ∅. Assume that y₁, y₂ ∈ Y are distinct points in Z(Sf₀). Let g and l be defined as in Step 2. Then, arguing as in Step 2, one reaches a contradiction, so Z(Sf₀) is a singleton. Notice that in the theorem above, if A(X, E) and A(Y, F) contain the constant functions, then E_y = E and F_y = F for each y ∈ Y₀.
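To make the representation in Theorem 3.1 concrete, here is a simple weighted-composition pair on C([0,1], ℝ²) (an illustration constructed for this exposition, not an example taken from the paper):

```latex
% A pair (S,T) of weighted composition type jointly preserving common zeros.
\[
  h(y) = 1 - y, \qquad
  J_y \equiv \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \qquad
  G_y \equiv \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix},
\]
\[
  Sf(y) = J_y\, f(h(y)), \qquad Tf(y) = G_y\, f(h(y)),
  \qquad f \in C([0,1],\mathbb{R}^2),\ y \in [0,1].
\]
% Since J_y and G_y are invertible, (Sf - Sf')(y) = 0 iff (f - f')(h(y)) = 0,
% and similarly for T; hence
\[
  Z(Sf - Sf') \cap Z(Tg - Tg') = h^{-1}\bigl( Z(f - f') \cap Z(g - g') \bigr),
\]
% which is non-empty exactly when Z(f - f') \cap Z(g - g') is.
```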
Remark. Using the theorem above for the case where S = T, we conclude that each surjective, not necessarily linear, map T : A(X, E) → A(Y, F) preserving common zeros has the same description given in this theorem. In particular, if T is linear and all points in X and Y are zero points of A(X, E) and A(Y, F), respectively, then Tf(y) = G_y(f(h(y))) for each f ∈ A(X, E) and y ∈ Y.

Corollary 3.2. Let A(X, E) and A(Y, F) be completely regular subspaces satisfying property (Z), and let S, T : A(X, E) → A(Y, F) be surjections jointly preserving common zeros. If all points in X and Y are zero points of A(X, E) and A(Y, F) respectively, then there exist a homeomorphism h : Y → X, subspaces E_y ⊆ E, F_y ⊆ F, and bijections J_y, G_y : E_y → F_y, y ∈ Y, such that Sf(y) = S0(y) + J_y(f(h(y))), Tf(y) = T0(y) + G_y(f(h(y))) for all f ∈ A(X, E) and y ∈ Y.

According to [7, Lemma 2.3], if E and F are vector lattices and A(X, E), A(Y, F) are vector sublattices of C(X, E) and C(Y, F), then for any vector lattice isomorphism S : A(X, E) → A(Y, F) we have ⋂_{i=1}^{n} Z(f_i) ≠ ∅ if and only if ⋂_{i=1}^{n} Z(Sf_i) ≠ ∅ for any n ∈ ℕ and f_1, . . . , f_n ∈ A(X, E). So the next corollary is a generalization of [5, Theorem 6].

Corollary 3.3. Let X and Y be bounded metric spaces, E and F be Banach spaces. Assume, further, that either Y has no isolated points, or one of E and F has finite dimension. If T, S : Lip_b^α(X, E) → Lip_b^α(Y, F), 0 < α ≤ 1, are additive (resp. linear) surjections jointly preserving common zeros, then T, S are continuous and there exist an α-bi-Lipschitz homeomorphism h : Y → X, and equicontinuous families {J_y}_{y∈Y} and {G_y}_{y∈Y} of real-linear (resp. linear) bijections from E onto F such that Sf(y) = J_y(f(h(y))), Tf(y) = G_y(f(h(y))) for all f ∈ Lip_b^α(X, E) and y ∈ Y.

Proof: Since for Lip_b^α(X, E) (resp. Lip_b^α(Y, F)) every point of X (resp. Y) is a zero point, it follows that X₀ = X and Y₀ = Y, where X₀, Y₀ are the subsets defined in Theorem 3.1. It is also clear that E_y = E and F_y = F for all y ∈ Y. Hence, by Theorem 3.1, for each y ∈ Y there exist additive bijections J_y, G_y : E → F such that Sf(y) = J_y(f(h(y))) and Tf(y) = G_y(f(h(y))) for all f ∈ Lip_b^α(X, E) and y ∈ Y. We note that S (and T) are ℚ-homogeneous, that is, S(rf) = rS(f) for each r ∈ ℚ and f ∈ Lip_b^α(X, E). An argument similar to [2, Corollaries 5.11 and 5.12] shows that S has a closed graph. Now, since S is additive and ℚ-homogeneous, one can check easily that (as in the Closed Graph theorem, see, e.g., [10]) S is continuous. Similarly, T is continuous. Hence T and S are, indeed, real-linear continuous maps, which, in particular, yields the real-linearity of J_y and G_y for all y ∈ Y. Therefore, there exists M > 0 such that ‖J_y(e)‖ ≤ M‖e‖ for all y ∈ Y and e ∈ E, which shows that the family {J_y} of real-linear maps is equicontinuous. To finish the proof it suffices to show that h and k are α-Lipschitz functions. Since, by the above argument, S⁻¹ is continuous, the family {J_y⁻¹} of real-linear maps is equicontinuous, and so there exists N > 0 such that

sup d(h(y), h(y′)) / d(y, y′)^α ≤ N,

where the supremum is taken over all distinct points y, y′ ∈ Y. Therefore h : Y → X satisfies the Lipschitz condition of order α. Analogously it can be shown that k is an α-Lipschitz function.

Remark. (i) The above result is also valid for the case where Lip_b^α(X, E) and Lip_b^α(Y, F) are replaced by lip_b^α(X, E) and lip_b^α(Y, F), respectively. (ii) Boundedness of the metric spaces X and Y cannot be removed from the above corollary. Indeed, if X is the discrete metric space ℕ, Y is ℕ equipped with the Euclidean metric, and E = F = ℂ, then clearly Lip_b^α(Y, F) = ℓ∞ and, by [21, Example 1.6.4], Lip_b^α(X, E) = ℓ∞. Now the identity operator T : Lip_b^α(X) → Lip_b^α(Y) preserves common zeros, while T is not continuous and also X and Y are not Lipschitz homeomorphic.
For a metric space X and a normed space E, let C_b^u(X, E) be the normed space of all bounded uniformly continuous functions from X into E, equipped with the sup norm. Then clearly each point of X is a zero point for C_b^u(X, E).

Corollary 3.4. Let X, Y be metric spaces and E, F be normed spaces, and let S, T : C_b^u(X, E) → C_b^u(Y, F) be linear surjections jointly preserving common zeros. Then Sf(y) = J_y(f(h(y))), Tf(y) = G_y(f(h(y))) for all f ∈ C_b^u(X, E) and y ∈ Y, for some uniform homeomorphism h : Y → X and linear bijections J_y, G_y : E → F, y ∈ Y.

Proof: The description of T, S is immediate from Theorem 3.1. Using the fact that f ∘ h ∈ C_b^u(Y) whenever f ∈ C_b^u(X), and the argument given in [16, Theorem 2.3] and the remark after it, one can show that h is uniformly continuous.

Remark. In the above corollary one can check easily that S is continuous if and only if sup_{y∈Y} ‖J_y‖ < ∞, and in this case ‖S‖ = sup_{y∈Y} ‖J_y‖. In particular, if E or F is finite dimensional, then T and S are continuous.

For a completely regular space X and a Hausdorff topological vector space E, C_b(X, E) denotes the space of all bounded continuous functions from X into E.

Corollary 3.5. Suppose that X and Y are completely regular spaces consisting of G_δ-points, and E and F are Hausdorff topological vector spaces. Let T, S be surjections jointly preserving common zeros between the spaces C(X, E) and C(Y, F) (Case 1) or between C_b(X, E) and C_b(Y, F) (Case 2). Then there exist a homeomorphism h : Y → X and bijections J_y, G_y : E → F, y ∈ Y, such that Sf(y) = S0(y) + J_y(f(h(y))) and Tf(y) = T0(y) + G_y(f(h(y))) for all f and all y ∈ Y. Furthermore, if S, T are linear and E or F is finite dimensional, then T and S are continuous when the spaces in Cases 1 and 2 are equipped with the compact-open topology and the sup norm topology, respectively.

Proof: Let x ∈ X and take a sequence {V_n} of neighborhoods of x such that {x} = ⋂_{n=1}^{∞} V_n. Since X is a completely regular space, given n ∈ ℕ, there exists f_n ∈ C_b(X) with 0 ≤ f_n ≤ 1, f_n(x) = 1, and f_n = 0 on X \ V_n. Then one can check that f = Σ_{n=1}^{∞} 2^{−n}(1 − f_n) belongs to C_b(X) with Z(f) = {x}, so that every point of X is a zero point. Clearly C_b(X, E) and C_b(Y, F) are completely regular, hence it follows from Corollary 3.2 that there exist a homeomorphism h : Y → X and bijections J_y, G_y : E → F for each y ∈ Y such that Sf(y) = S0(y) + J_y(f(h(y))), Tf(y) = T0(y) + G_y(f(h(y))) for all f ∈ C_b(X, E) and y ∈ Y. In particular, if S, T are linear then all bijections J_y, G_y are linear. The continuity of S, T is obtained by applying a similar argument as in [1, Corollary 4.2].

Remark. (i) Note that all points in a first countable completely regular space are G_δ-points. But the converse is not true; that is, there is a completely regular space consisting of G_δ-points which is not first countable (see [12, 4M]). (ii) The following example, borrowed from [11, Example 2.4], shows that in the above results, when E (and so F) is not a finite dimensional space, common zeros preserving maps need not be continuous (see also [1, Remarks (2)]).

Jointly common zeros preserving maps between Banach modules In this section, using the notion of point multipliers, we introduce the notion of zero set for the elements of a Banach module X over a Banach algebra A with σ(A) ≠ ∅, and then we give a module version of Theorem 3.1 for mappings preserving common zeros between certain Banach modules. We recall that ν_A : σ_A(X) → σ(A) ∪ {0} is the natural map which sends each nontrivial point multiplier ξ on X in the unit ball of X* to the unique point ν_A(ξ) = ϕ ∈ σ(A) ∪ {0} satisfying ⟨ξ, a · x⟩ = ϕ(a)⟨ξ, x⟩ for all a ∈ A and x ∈ X. Since each closed hyper maximal submodule P of X is the kernel of some ξ ∈ σ_A(X), and clearly point multipliers with the same kernels have the same image under ν_A, we may also use the notation ν_A(P) instead of ν_A(ξ) for ξ ∈ σ_A(X) with ker(ξ) = P. Now consider the following equivalence relation on Δ_A(X).
For P, Q ∈ Δ_A(X), P ∼ Q iff ν_A(P) = ν_A(Q). There are some examples of left Banach modules X in which the equivalence classes are singletons. For instance, for a compact Hausdorff space K, the Banach C(K)-module C(K)* has this property [3, p. 317].

Definition 4.1. For an element x ∈ X, we define the zero set of x by Z(x) = { ϕ ∈ ν_A(σ_A(X))\{0} : ⟨ξ, x⟩ = 0 for all ξ ∈ ν_A^{−1}{ϕ} }.

We should note that there exists a nontrivial point multiplier on X at 0 if and only if A · X is not dense in X. The definition shows that for each x ∈ X, a nonzero point ϕ in the range of ν_A is in Z(x) if and only if x ∈ P for all P ∈ [P₀], where P₀, with ν_A(P₀) = ϕ, is arbitrary and [P₀] is its equivalence class. If A is unital then, considering A as a Banach module over itself, we have σ_A(A) = { λϕ : ϕ ∈ σ(A), 0 < |λ| ≤ 1 }, and so the defined zero set Z(a) of an element a ∈ A is the usual zero set of its Gelfand transform â as a function on σ(A). Clearly, for a compact plane set K and a uniformly closed subalgebra A of C(K) which contains the constants and separates the points of K, the subspace Af of C(K), for any non-vanishing function f ∈ C(K), is a Banach A-module. It is easy to see that the range of ν_A is σ(A). Since f ∈ Af, for each nontrivial point multiplier ξ on Af at some ϕ ∈ σ(A) we have ⟨ξ, af⟩ = ϕ(a)⟨ξ, f⟩ (a ∈ A), which shows that the zero set Z(af) is the same as the usual zero set of â. The following examples show that the defined zero set may be very different according to the module action.

Example 4.2. For a compact Hausdorff space K and a Banach space E, let X = C(K, E) be the Banach space of all continuous E-valued functions on K endowed with the supremum norm. We also recall that for e ∈ E and f ∈ C(K), e ⊗ f ∈ C(K, E) is defined by (e ⊗ f)(x) = f(x)e, x ∈ K. Then clearly C(K, E) is a Banach C(K)-module and, for each Λ ∈ E*, Λ ∘ ϕ_x, where ϕ_x is the evaluation functional at x ∈ K, is a point multiplier at ϕ_x. On the other hand, for each point multiplier ξ at some ϕ_x, x ∈ K, since by the proof of Lemma 1 in [13] the linear span of { e ⊗ f : e ∈ E, f ∈ C(K) } is dense in C(K, E), it is easy to see that the functional Λ ∈ E* defined by Λ(e) = ⟨ξ, e ⊗ 1⟩ satisfies ⟨ξ, F⟩ = Λ(F(x)) for all F ∈ C(K, E), where Λ belongs to (E*)₁, the unit ball of E*. Moreover, ν_{C(K)} maps each Λ ∘ ϕ_x to ϕ_x; that is, the range of ν_{C(K)} is σ(C(K)). Therefore, for a function F ∈ C(K, E) we have Z(F) = { ϕ_x : x ∈ K, F(x) = 0 }; that is, the newly defined zero set of each element of the C(K)-Banach module C(K, E) is its zero set in the usual sense.

Now let A be a Banach algebra and let E be a left Banach A-module, and endow C(K, E) with the pointwise module action (a · F)(x) = a · F(x). It is easy to see that under this module structure on C(K, E), for every x ∈ K and point multiplier ξ on E at some point ϕ ∈ σ(A), η = ξ ∘ ϕ_x is a point multiplier on C(K, E) at ϕ. Assume, in addition, that E is not hyper semisimple and that the natural map ν_A^E : σ_A(E) → σ(A) ∪ {0} for E is onto σ(A) (for instance, every non-semisimple commutative unital Banach algebra satisfies these properties as a Banach module over itself). Then clearly the associated natural map for C(K, E) is also onto σ(A). Now, for a non-zero e₀ ∈ Rad_A(E) and the constant function F = e₀ ⊗ 1, the usual zero set { x ∈ K : F(x) = 0 } of F is empty, while its zero set Z(F) as an element of C(K, E) is the whole σ(A), since for each point multiplier ξ on E we have ⟨ξ, e₀⟩ = 0, and for each point multiplier η on C(K, E) its restriction η|_E is either zero or a nontrivial point multiplier on E.
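The point-multiplier property of the functionals Λ ∘ ϕ_x in Example 4.2 amounts to a one-line computation, written out here for convenience (with ϕ_x also denoting evaluation of C(K, E)-functions at x):

```latex
% For f in C(K) and F in C(K,E):
\[
  \langle \Lambda \circ \phi_x,\; f \cdot F \rangle
  = \Lambda\bigl((fF)(x)\bigr)
  = \Lambda\bigl(f(x)\,F(x)\bigr)
  = f(x)\,\Lambda\bigl(F(x)\bigr)
  = \phi_x(f)\,\langle \Lambda \circ \phi_x,\; F \rangle ,
\]
% which is precisely the defining identity
% <xi, a.x> = phi(a) <xi, x> of a point multiplier at phi = phi_x.
```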
In the main theorem of this section (Theorem 4.5) we assume that A is a Banach algebra with non-empty character space and that X is a left Banach A-module with σ_A(X) ≠ ∅, satisfying the following property: (H) X contains an element with empty zero set, and for each ϕ ∈ ν_A(σ_A(X))\{0} there exists x ∈ X such that Z(x) = {ϕ}. We also assume that B is a Banach algebra and Y is a left Banach B-module satisfying similar conditions. Here is an example of a Banach A-module satisfying these conditions: For a compact metric space K and a closed nonempty proper subset F of K, let A = { a ∈ C(K) : a = 0 on F } and X = C(K), endowed with the supremum norm and with multiplication as the module action. Identifying each x ∈ K with its corresponding evaluation functional ϕ_x, since σ(A) = K\F it is easy to see that ν_A maps σ_A(X) onto (K\F) ∪ {0}, and for each ξ ∈ σ_A(X) with ν_A(ξ) ≠ 0 there exists x ∈ K\F such that ξ|_A = λϕ_x|_A for some λ ∈ ℂ. Clearly, for each f ∈ X, the zero set of f in the sense of Definition 4.1 satisfies Z(f) ⊆ Z_u(f)\F, where Z_u(f) = { x ∈ K : f(x) = 0 } denotes the usual zero set. Moreover, for each a ∈ A we have Z(a) = Z_u(a)\F. Since for each x ∈ K\F there exists f ∈ X with f = 1 on F ∪ {x} and |f| < 1 on K\(F ∪ {x}), it follows that Z(1 − f) = Z_u(1 − f)\F = {x}. Therefore, for each x ∈ K\F, {x} is the zero set (in the defined sense) of an element of X. Obviously, X contains an element with empty zero set.

As before, we say that a pair of linear maps S, T : X → Y jointly preserves common zero sets if for two elements x, y ∈ X, Z(x) ∩ Z(y) ≠ ∅ iff Z(Sx) ∩ Z(Ty) ≠ ∅. We note that maps S and T with this property are automatically injective whenever X is hyper semisimple and A · X is dense in X. Indeed, suppose that x ∈ X with Tx = 0. Since each ϕ ∈ ν_A(σ_A(X)) is nonzero, we can take, by property (H), x′ ∈ X such that Z(x′) = {ϕ}. By the assumption, Z(Sx′) ∩ Z(Tx) ≠ ∅, and so Z(x) ∩ Z(x′) ≠ ∅. Hence ϕ ∈ Z(x), which implies that x = 0 by the hyper semisimplicity of X and the density of A · X in X.

Before stating the theorem we should note that for each x ∈ X and Q₀ ∈ Δ_A(X), (x + Q)_{Q∈[Q₀]} may be considered as an element (x_Q + Q)_{Q∈Δ_A(X)} of X̃, where x_Q = x for all Q ∈ [Q₀] and x_Q = 0 for the other points Q ∈ Δ_A(X). Finally, let us put Δ̃_A(X) := { P ∈ Δ_A(X) : ν_A(P) ≠ 0 }, which is the same as Δ_A(X) when A · X is dense in X.

Theorem 4.5. Let X and Y satisfy property (H), and let S, T : X → Y be surjective maps jointly preserving common zeros. Then there exist a bijection h : Δ̃_B(Y)/∼ → Δ̃_A(X)/∼, submodules E_P and F_P of X̃ and Ỹ, respectively, and linear bijections J_P, G_P : E_P → F_P, P ∈ Δ̃_B(Y), such that (Sx + P′)_{P′∈[P]} = J_P((x + Q)_{Q∈h([P])}) and (Tx + P′)_{P′∈[P]} = G_P((x + Q)_{Q∈h([P])}) for all x ∈ X and P ∈ Δ̃_B(Y).

Proof: For each x ∈ X with Z(x) = {ϕ}, ϕ ∈ ν_A(σ_A(X))\{0}, the zero set Z(Tx) consists of a unique point; similarly there is a unique point in Z(Sx). Furthermore, we should note that for each ϕ ∈ ν_A(σ_A(X))\{0}, the unique point in Z(Tx) does not depend on the element x ∈ X with Z(x) = {ϕ}, since for each x′ ∈ X with Z(x′) = {ϕ} we clearly have Z(Sx) ∩ Z(Tx′) ≠ ∅ and Z(Tx) ∩ Z(Sx′) ≠ ∅. A similar argument implies the other inclusion. For P ∈ Δ̃_B(Y) consider the following subsets of X̃ and Ỹ: E_P := { (x + Q)_{Q∈h([P])} : x ∈ X } and F_P := { (y + P′)_{P′∈[P]} : y ∈ Y }. Then E_P and F_P, which are subsets of X̃ and Ỹ respectively, are submodules of these modules. Now define the maps J_P, G_P : E_P → F_P by J_P((x + Q)_{Q∈h([P])}) = (Sx + P′)_{P′∈[P]} and G_P((x + Q)_{Q∈h([P])}) = (Tx + P′)_{P′∈[P]}. Using the definition of Z(·) and Step 2, we see that J_P and G_P are well-defined. Step 4: For each P ∈ Δ̃_B(Y), J_P and G_P are linear bijections.
7,562.8
2016-07-01T00:00:00.000
[ "Mathematics" ]
“Banking and income inequality of the American community: an analysis” Community banks in American urban areas are found to have a significant effect on the local distribution of income. Banking activity is seen both to decrease inequality, by increasing the median level of income, and simultaneously to increase inequality, by increasing the size of either tail of the income distribution. The net effect of banks providing liquidity to the American local economy and increasing access to the banking infrastructure is to decrease income inequality in these communities.

Introduction This paper examines the impact of community banking activities on the complex relationships between income levels and their distribution in urban American counties. The contribution of this paper lies in specifying the linkages between banking activity and income creation and how those linkages impact the local distribution of income. Recent concerns about the current growth in income inequality have resulted in politicians and social leaders calling for a reduction in income inequality, but this concern has been largely rhetorical. It is felt that growing inequality in the American economy reflects an increasing reliance on financial markets to accomplish economic objectives. This trend towards market-driven incentives to promote innovation and entrepreneurship necessarily results in a less equal distribution of income (Piketty and Saez, 2003). The impact of banking activity on the distribution of income may be particularly strong in developing nations (Ali and Medhekar, 2013).

Banking and inequality Research on the impact of banks on the distribution of income has been mixed. Piketty (2013) has found that, over the long term, the return on capital in developed countries is greater than the rate of economic growth, resulting in an increase in income inequality. Stiglitz (2015) similarly argues that a positive relationship between growth and inequality results from monopoly rents on finite resources. In contrast, Shuai (2015) finds a negative relationship between income growth and inequality. Other researchers can find no consistent relationship between inequality and growth (Lundberg and Squire, 2003). A considerable body of research has been unable to confirm a positive relationship between economic growth and equality as a result of the complexity of the issues involved (Dollar et al., 2015).

The face of inequality. The most commonly used measure of income inequality is the Gini ratio: the ratio of the area between the actual cumulative distribution of income (the Lorenz curve) and the cumulative distribution that would obtain if all income were equally distributed, to the entire area under the line of perfect equality. A Gini ratio of zero expresses perfect equality and a Gini ratio of one represents the maximum possible inequality. In a list of 141 countries, the Gini ratio ranged from .632 in Lesotho to .230 in Sweden, with the United States 41st in the distribution (CIA, 2014). The Current Population Survey puts the Gini ratio in the United States for 2013 at .476, up from .454 in 1993, the earliest comparable year (DeNavas-Walt and Proctor, 2013). In the 807 counties covered in the 2013 U.S. Census American Community Survey, the average Gini ratio for counties was .449 with a standard deviation of .036.

Socio-economic determinants of economic growth and inequality. The subject of the relationship between the financial system and the distribution of income has received much study, but numerous issues remain ambiguous.
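As a quick illustration of the Gini computation described above (a generic textbook formula, not the Census Bureau's exact estimation procedure), the ratio can be computed from a vector of household incomes as follows:

```python
import numpy as np

def gini(incomes):
    """Gini ratio: area between the line of perfect equality and the Lorenz
    curve, relative to the area under the line of perfect equality."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Closed-form equivalent of the Lorenz-area definition.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# A perfectly equal community has Gini 0; concentrating income raises it.
print(gini([50_000] * 100))                                     # -> 0.0
print(round(gini(np.random.lognormal(10.5, 0.8, 10_000)), 3))   # roughly 0.4-0.5
```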
A particular difficulty in regional analysis is the presence of spillover effects arising from endogeneity among the different variables (LeSage, 2014). We recognize that the location of a bank and its footprint are distinct, and we are careful in the creation of our liquidity variables to account for the liquidity created in adjoining counties, so as to capture, to an extent, the spillover effect whereby liquidity created in the next county impacts the income level in our county of interest. The relationship between economic growth and the distribution of income is dependent on a host of sociological and economic factors. Fortunately, much work in this area has already been accomplished (Zhang and Hu, 2011). Following this tradition, a framework for determining income creation is postulated in this study as:

MDHY = f(%MH, %Ed < 9, %BACH, %FLFP, %MLFP),    (1)

where MDHY = median household income, %MH = percentage of married households, %Ed < 9 = percentage of the population with less than 9 years of education, %BACH = percentage of the population with a Bachelor's degree, %FLFP = female labor force participation rate, and %MLFP = male labor force participation rate.

Family formation. Family formation, particularly within the covenant of marriage, has a well-documented effect on lifetime earnings and wealth (Zissimopoulos et al., 2015). This effect is generally found to reflect the advantages of the division of labor within the household and advantages in creating human capital, both of which result in higher productivity and increased earnings.

Education. Research on the interdependence of education and income has a venerable history (Sauer and Zagler, 2012). Zeira (2009) found a positive relationship between economic growth and education, based on the fact that economic growth requires technological change, and technological change is driven by an educated workforce. Recent research has suggested that limitations on the supply of highly educated workers lead to changes in the wage structure that contribute to inequality (Huber and Stephens, 2014).

Labor force participation. This study uses the labor force participation (LFP) of males and females as separate factors in determining household income. Labor force participation is an elastic concept: when labor market conditions improve, individuals who were out of the labor force because they were not actively seeking employment move into the labor force because of better perceived opportunities for employment (the additional worker effect). Conversely, there is a discouraged worker effect, in which individuals cease to seek employment in the face of deteriorating labor market conditions (Dagsvik et al., 2013). LFP is higher for individuals living in families than for individuals living in single or multiple person households. Marriage, fertility decisions and the presence of young children in the family are also seen to influence LFP decisions (Kondo, 2011). Social mores concerning LFP tend to develop over time and become entrenched in regional behaviors.

The banking sector and income growth In addition to the traditional formulation of income determinants presented in Equation 1, we wish to take account of the impact of the banking sector on economic growth. Banking activities are critical to creating economic growth (Ali, 2003). Economic growth requires the creation and deployment of capital to finance economic activity. Community banks may be expected to play an important role in this process.
They may have greater insight into local market conditions and knowledge of the strengths and weaknesses of small businessmen and entrepreneurs in their area than regional or national banks might have (Leyshon et al., 2006). Community banks have the opportunity to finance job-creating economic activities that would not otherwise occur, thus increasing income levels in the locality. The requisite knowledge to undertake such capital deployment is facilitated by a close relationship between the bank and individuals in the community. Consequently, creating liquidity and providing access to banking facilities may be considered a social responsibility of the banking system (Wise and Ali, 2008). Banks play a critical role in the creation and deployment of capital through the process of liquefying financial assets and liabilities. They facilitate this process by financing illiquid assets with liquid liabilities, thereby transforming risky loans into relatively riskless deposits (DeAngelo and Stulz, 2015). The impact of liquidity creation on economic growth and the distribution of income has not been thoroughly studied, owing to difficulties in measuring bank liquidity. Berger and Bouwman (2009) have recently studied the process of liquidity creation and developed a workable measure of liquidity which is used in this study.

Methodology This study uses survey data developed by the Bureau of the Census, American Community Survey, for 2013. The American Community Survey samples about 3.5 million households every year to supplement the Census Bureau's decennial census program. The Survey includes a wide range of self-reported social, economic and housing data. While the one-year tabulation for 2013 covers 817 separate counties, this study utilizes the data from the 437 urban counties for which the relevant data are available in the 2013 survey.

Gini ratios. A difficulty in using the Gini ratio to examine income inequality is the fact that the same Gini ratio can reflect two entirely different distributions of income (Hagerbaumer, 1977). While a given change in the level of income may leave the shape of the Lorenz curve unaffected, a more likely occurrence is that the slope of the Lorenz curve will change, depending on whether the impact of a change in modal income levels is felt on the lower or upper end of the distribution (Ceriani and Verme, 2015). This problem may be addressed by examining the causes of changing income levels. An additional difficulty is that the larger and more heterogeneous the unit of observation, the less likely the Gini is to reflect the actual inequality in the distribution of income, because of offsetting variations among population subgroups (Frosini, 2012).

Banking activity. The impact of banking activity on economic growth and the distribution of income arises from (1) the extent to which banks provide access to the banking infrastructure and (2) the extent to which their liquidity creation activities foster economic growth (Ali, 2003). This paper uses a measure of liquidity creation developed by Berger and Bouwman (2009) to assess the impact of the banking system on the relationship between the distribution of income and economic growth. It uses their preferred measure of liquidity creation ("cat fat") to investigate the impact of community bank activity on the distribution of income in counties covered by the ACS.

Access to the banking infrastructure. A healthy local economy provides access for individuals to the local banking system.
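The Berger and Bouwman (2009) measure referenced above classifies each balance-sheet item (and, for the "fat" variants, off-balance-sheet item) as liquid, semiliquid, or illiquid and applies weights of +1/2, 0, and −1/2. A stylized sketch of that computation follows; the category list is simplified and the dollar figures are invented for illustration:

```python
# Stylized "cat fat" liquidity creation, after Berger and Bouwman (2009):
# +0.5 for illiquid assets and liquid liabilities, 0 for semiliquid items,
# -0.5 for liquid assets and illiquid liabilities/equity; off-balance-sheet
# items ("fat") are classified the same way.
WEIGHTS = {
    "illiquid_assets": +0.5,        # e.g., commercial loans, bank premises
    "semiliquid_assets": 0.0,       # e.g., residential mortgages, consumer loans
    "liquid_assets": -0.5,          # e.g., cash, securities
    "liquid_liabilities": +0.5,     # e.g., transaction deposits
    "semiliquid_liabilities": 0.0,  # e.g., time deposits
    "illiquid_liabilities": -0.5,   # e.g., subordinated debt, equity
    "illiquid_guarantees": +0.5,    # off-balance-sheet, e.g., loan commitments
}

def cat_fat(balance_sheet: dict) -> float:
    """Dollar liquidity created by one bank (positive = net creation)."""
    return sum(WEIGHTS[k] * v for k, v in balance_sheet.items())

bank = {  # illustrative figures in $ millions; total assets = 1,000
    "illiquid_assets": 400, "semiliquid_assets": 300, "liquid_assets": 300,
    "liquid_liabilities": 600, "semiliquid_liabilities": 250,
    "illiquid_liabilities": 150, "illiquid_guarantees": 200,
}
print(f"cat fat: ${cat_fat(bank):.0f}M, CF/TA: {cat_fat(bank)/1000:.2f}")
```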
Participation in the banking system tends to create feedback linkages supporting economic growth. Fully banked individuals maintain checking and savings accounts and borrow from local banks to finance their homes and purchase consumer durables. The quality and quantity of contact with local entrepreneurs and small businessmen creates important knowledge that allows the local banks to fund the fixed and working capital needs of individuals and local businesses in the community. Unbanked individuals have no formal relationships with their community banks and are thus isolated, to that extent, from active participation in the economic life of their community.

Liquidity creation. Banks have traditionally been seen as "risk transformers", using riskless deposits to finance risky loans. While this process usually creates liquidity, these two functions do not necessarily move in tandem. Research on the interaction of these functions had been limited by the difficulty in measuring liquidity creation until the ground-breaking work of Berger and Bouwman (2009). Their approach yielded four different measures of liquidity: "cat fat", "mat fat", "cat nonfat", and "mat nonfat". Berger and Bouwman (2009, p. 3797) prefer "cat fat" as the most appropriate way to measure liquidity, because the type of asset has a greater impact on its liquidity than its maturity date and because off-balance-sheet and on-balance-sheet items are functionally similar.

Modal income levels. Understanding the implications of a rise in general income levels for income inequality thus requires considering the determinants of the general level of income; these are presented in Equation 1 of Table 1. Note to Table 1: The dependent variable is annual median household income. The independent variables in percentages are, respectively, the percentages of MH (married households), ED < 9 (population with less than 9 years of education), BACH (population with a Bachelor's degree), FLFP (female labor force participation), MLFP (male labor force participation), NB (population not banked), FB (population fully banked), and CF/TA ("cat fat" as a % of total assets). CF is the absolute amount of "cat fat" created in the local economy. t-values are reported in parentheses. **, * denote significance at the 1% and 5% levels, respectively.

The positive and significant coefficient for marriage in Table 1, Equation 1 confirms the findings of the literature on family formation cited above. The negative coefficient between median family income and those with less than 9 years of formal education, and the positive coefficient between median family income and those with a Bachelor's degree, are also consistent with prior research on the relationship between education and income.

Banking and income creation. The impact of the banking sector on median income is hypothesized to occur by (1) providing individuals with access to banking functions that allow them to participate fully in the local economy and (2) creating liquid assets and liabilities that encourage the creation of capital in the local community and provide for the risk shifting and risk pooling that encourage capital spending. The direct impact of these factors on median income is presented in Table 1, Equation 2.

Banking access and income growth. A comparison of regressions 2 and 3 in Table 1 is suggestive of a strong interrelationship between liquidity creation and the extent of local banks' relationships within the community.
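An income-determination regression of the kind reported in Table 1 can be sketched with ordinary least squares; the data below are synthetic (generated so the signs roughly match those reported for Equation 1), and the coefficients are illustrative, not the paper's estimates:

```python
# Minimal sketch of the Equation (1) regression on synthetic county data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 437  # number of urban counties used in the study

X = np.column_stack([
    rng.uniform(30, 70, n),   # %MH: married households
    rng.uniform(2, 20, n),    # %Ed<9: less than 9 years of education
    rng.uniform(10, 50, n),   # %BACH: Bachelor's degree
    rng.uniform(45, 70, n),   # %FLFP
    rng.uniform(55, 80, n),   # %MLFP
])
# Synthetic MDHY with signs consistent with the reported results.
mdhy = (20_000 + 300*X[:, 0] - 400*X[:, 1] + 600*X[:, 2]
        + 150*X[:, 3] + 150*X[:, 4] + rng.normal(0, 5_000, n))

model = sm.OLS(mdhy, sm.add_constant(X)).fit()
print(model.summary(xname=["const", "%MH", "%Ed<9", "%BACH", "%FLFP", "%MLFP"]))
```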
A negative relationship between those fully banked and median income and a positive relationship between the unbanked and median income in regression 3 would seem perverse. However, this relationship changes its sign (in the case of the unbanked) or becomes insignificant (in the case of the fully banked). This lack of consistency may reflect the complexity of the interrelationship between banking, income and the other determinants of income. A significant positive coefficient between the level of median income and fully banked individuals is not found, possibly because this factor is overwhelmed by the other determinants of median income. It is certainly possible that in areas with lower incomes, individuals are less likely to enter into banking relationships. However, even in low-income areas, there will be more individuals who are underbanked rather than having no relationship at all with a bank. An environment which reduces the number of unbanked individuals draws more capital into the hands of financial intermediaries and fosters economic growth. Regression 2 in Table 1 may reflect the fact that the absence of an independent variable for liquidity creation removes the positive impact of liquidity creation on income, which obscures the positive impact of the fully banked on income.

Liquidity creation and income growth. Another important impact banks have on the level of income in the community is through the creation of liquidity, which provides capital to maintain and grow the level of economic activity. Liquidity creation in the local economy has both demand-side and supply-side attributes. The demand side is characterized by the demand for liquid assets such as demand deposits and lines of credit by both businesses and consumers. Supply is created when banks finance illiquid assets such as mortgages and long-term commercial and industrial loans. Liquidity is created through the banks transforming the risky illiquid assets into almost riskless liquid assets (Horvath et al., 2016). The effect of making both banking assets and liabilities more liquid facilitates both consumption and investment spending, both of which can have a positive impact on income levels. Where access to money and capital is easier, less costly and more timely, economic processes are encouraged. The results in Table 1 show the positive impact of liquidity creation ("cat fat") on the level of median income, confirming earlier thinking about the positive impact of liquidity creation on income levels. The relative extent of liquidity creation (CF/TA) by local banks, measured in Equation 2 of Table 1, refers to an issue discussed by financial theorists under the headings of the alternative "financial fragility-crowding out" and "risk absorption" hypotheses (Berger and Bouwman, 2009). Whatever the cause of this banking behavior, a lower ratio would suggest less aggressiveness in creating liquidity, a higher ratio more aggressiveness. The absence of a significant coefficient for this variable suggests that it is not the strategic posture of banks that impacts local economic activity, but the actual amount of liquidity created ("cat fat"). Table 2 reveals that, when median household income is held constant, the two tails of the distribution are positively, but not perfectly, correlated. This implies that there are factors in the income creation process that can impact inequality independent of their impact on median income.
Specifically, this means that when median income increases, reducing overall inequality, simultaneous economic forces operate at either end of the distribution to increase inequality. The significant positive partial correlation in Table 2 between the two tails of the distribution indicates that while some socio-economic forces may impact both tails, others may affect one tail but not the other. The specific impact of any change will depend on the idiosyncratic characteristics of that particular economic force. However, overall, it is clear that changes in the modal level of income will outweigh any impact on the tails of the distribution.

Reconciling the impact of banking on inequality. While the socio-economic variables enter as expected in Table 1, Equation 4, the variables reflecting banking activities do not. The lack of significance of both "cat fat" and the incidence of those fully banked in influencing median income levels may be explained by either the absolute dominance of socio-economic variables in impacting median levels of income or an endogenous relationship between the banking and socio-economic variables (e.g., less educated individuals are more likely to be unbanked).

Factors impacting income inequality. The simple regression of the Gini coefficient on median household income presented in Table 3, Equation 1 below suggests that while the impact of income levels on income inequality is significantly negative (higher income levels reduce inequality), the effect is small. This result conceals the impact of income creation on the tails of the Lorenz curve. From the partial correlations presented in Table 2 it can be seen that both tails are significantly and positively related to income inequality. Thus, the overall negative effect of an increase in income on income inequality conceals shifts in the distribution of income that may have important social consequences. Table 3 also reports the results of more comprehensive regressions examining the relationship of the socio-economic variables discussed above with the Gini coefficient. Equation 2 in Table 3 synthesizes the impact of this process on income inequality in terms of the eight socio-economic and banking variables discussed above, which explain more than three-quarters of the variance in inequality among the ACS counties included in this survey. The relative presence of individuals with a Bachelor's degree is seen to impact primarily the upper end of the income distribution, resulting in greater income inequality in the local area. The absence of a significant positive coefficient with Gini for the relative number of individuals with less than 9 years of education is unexpected and suggests that the structure of educational attainment in a given area has its most powerful effect at the higher end of the income distribution. Female labor force participation is seen to impact the upper end of the income distribution more than the lower end of the distribution, consistent with the latest findings in this area.

Table 3. Influences on income inequality. Note: The dependent variable is the Gini coefficient. The independent variables in percentages are, respectively, the percentages of HY < 10k (households < $10,000 income), HY > 200k (households > $200,000 income), FLFP (female labor force participation), FB (population fully banked), ED < 9 (population with less than 9 years of education), and BACH (population with a Bachelor's degree). CF is the absolute amount of "cat fat" created in the local economy.
t-values are reported in parentheses. **, * denote significance at the 1% and 5% levels, respectively. Neither the liquidity creation activities of banks nor the presence of fully banked individuals is seen in Equation 2, Table 3 to have a significant impact on income inequality. However, the lack of significance of liquidity and the incidence of fully banked individuals may be concealed by the large negative impact of median household income on inequality. Equation 3 in Table 3 shows the positive and significant impact of both "cat fat" creation and the incidence of the fully banked on income inequality. While increases in either of these variables could increase income inequality by decreasing income in the lower portion of the Lorenz curve or increasing it in the upper portion of the curve, it is more likely that the impact will be felt on the upper reaches of the Lorenz curve, because both factors will increase the level of income through expanding opportunities for the more affluent component of the population. Table 3 shows that increases in liquidity or banking participation are associated with greater income inequality; this association can be investigated by regressing these variables on the tails of the distribution, as is done in Table 4. The positive impact on inequality as measured by the Gini is seen to result from the impact of increases in both liquidity creation and bank participation on the lower tail of the distribution, as evidenced by the positive coefficient for the percentage of households with less than $10,000 of income and the absence of this effect for households with over $200,000 of income. A significant impact from both liquidity creation and bank participation on the upper tail of the distribution is noticeably lacking. The result of this test is that "cat fat" and the percent of fully banked individuals contribute to income inequality primarily through their impact at the lower end of the Lorenz curve. It may be concluded from the above analysis that liquidity creation and the incidence of banking use in a community simultaneously impact both the modal level of income and the tails of the distribution, with the net effect that an increase in either of these variables decreases income inequality.

The indirect impact of bank activity on income inequality. Exactly why Equation 3 in Table 3 yields these results can be explored further. An alternative approach to testing the hypothesis that the banking variables impact both modal levels of income (decreasing inequality) and the tails of the distribution (increasing inequality) is provided by examining the partial correlations of the indicated variables, holding median income constant. These are presented in Table 5. Note: Controlling for median household income. Gini is the Gini coefficient, HY < 10k are households with income under $10,000, HY > 200k are households with income over $200,000, % FB is the percent of the population fully banked, % NB is the percent of the population not banked, and CF is "cat fat", our measure of liquidity creation. ** p = .01, * p = .05. Table 5 reveals the importance of the tails of the income distribution in determining income inequality. The positive, but less than perfect, correlation between the two tails of the distribution in Table 5 indicates that the factors impacting inequality can operate independently at either end of the distribution. "Cat fat" and the incidence of those fully banked are seen to be significantly and positively related to income inequality.
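Partial correlations of the kind reported in Tables 2 and 5 can be computed by correlating the residuals from regressions on the control variable; a generic sketch, assuming a pandas DataFrame with the paper's column names and synthetic values:

```python
# Partial correlation of two variables controlling for a third, via residuals.
import numpy as np
import pandas as pd

def partial_corr(df: pd.DataFrame, x: str, y: str, control: str) -> float:
    """corr(x, y | control): correlate the parts of x and y that are
    orthogonal to the control variable."""
    def residuals(col):
        z = np.column_stack([np.ones(len(df)), df[control]])
        beta, *_ = np.linalg.lstsq(z, df[col], rcond=None)
        return df[col] - z @ beta
    rx, ry = residuals(x), residuals(y)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical county-level frame using the paper's variable names.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "MDHY": rng.normal(55_000, 12_000, 437),
    "HY<10k": rng.uniform(2, 15, 437),   # % households under $10,000
    "HY>200k": rng.uniform(1, 12, 437),  # % households over $200,000
})
print(partial_corr(df, "HY<10k", "HY>200k", control="MDHY"))
```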
A connection is also found between liquidity creation and the incidence of those fully banked, suggesting the commingled impact of banks on their communities.

Variations in the impact of banking on the distribution of income between developed and developing countries. Can the findings of a positive impact of the banking system in a developed country such as the USA be generalized to a developing nation such as Bangladesh? Sarkar et al. (2015) found a strong positive relationship between the banking sector's financing of agriculture and agricultural output in Bangladesh, and that banking credits also facilitate financial inclusion in Bangladesh. The current trend toward increasing deregulation of the banking sector in Bangladesh suggests increasing competition among banks that is creating behavior similar to that characterizing banks in developed countries (Uddin et al., 2015). However, increasing competition and other dynamic changes in the banking sector may mitigate against a positive relationship between banking activity and decreasing income inequality (Ali, 2003). This is clearly an area requiring further investigation.

Conclusions Research findings. The findings of this paper provide evidence that the banking functions of creating liquidity and increasing access to the financial infrastructure have two effects: (1) increasing the general level of income (increasing income equality), and (2) decreasing the slope of the lower portion of the Lorenz curve and increasing the slope of the upper portion of the Lorenz curve (increasing income inequality). The net effect of community banks creating liquidity and access to the financial infrastructure is found to decrease inequality in the distribution of income.

Limitations of the study. In America, the distribution of income reflects not only market activities but also the operation of a host of non-market factors. Chief among these are governmental taxing and subsidy policies. The impact of these policies on the actual distribution of income may overwhelm the impact of market forces on this distribution. The excessive reliance on financial markets in America may be generating a dynamic force which, over time, will increase the adverse impact of banking on income inequality. It is difficult to capture such an effect in cross-sectional analysis. To the extent that government policies do not support market outcomes characterized by greater equality, achieving a more equal distribution of income may remain an elusive goal.

Recommendations for further research. The topic of the impact of the banking sector on the distribution of income is an important one. Insofar as governmental policy fosters an approach to economic growth which relies on the banking system to provide capital and equal access to individuals, the long-run impact of such policies will be controversial. Addressing such controversial issues successfully will be more effective if research can provide information on exactly how the liquidity creation process works (especially in developing countries) and can identify the specific linkages between job creation, economic output and the activities of the banking sector. Access to the banking sector may not, by itself, influence economic activity. There are, no doubt, other factors in this process which must be considered.
6,001.8
2016-04-28T00:00:00.000
[ "Economics" ]
High speed, wide velocity dynamic range Doppler optical coherence tomography (Part I): System design, signal processing, and performance

Improvements in real-time Doppler optical coherence tomography (DOCT), acquiring up to 32 frames per second at 250 × 512 pixels per image, are reported using signal processing techniques commonly employed in Doppler ultrasound imaging. The ability to measure a wide range of flow velocities, ranging from less than 20 μm/s to more than 10 cm/s, is demonstrated using a 1.3 μm DOCT system with flow phantoms in steady and pulsatile flow conditions. Based on full implementation of a coherent demodulator, four different modes of flow visualization are demonstrated: color Doppler, velocity variance, Doppler spectrum, and power Doppler. The performance of the former two, which are computationally suitable for real-time imaging, is analyzed in detail under various signal-to-noise and frame-rate conditions. The results serve as a guideline for choosing appropriate imaging parameters for detecting in vivo blood flow. © 2003 Optical Society of America

OCIS codes: (110.4500) Optical coherence tomography; (170.3880) Medical and biological imaging; (170.1650) Coherence imaging; (100.6950) Tomographic image processing

References
1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, "Optical coherence tomography," Science 254, 1178-81 (1991).
2. Z. Ding, Y. Zhao, H. Ren, J. S. Nelson, and Z. Chen, "Real-time phase-resolved optical coherence tomography and optical Doppler tomography," Opt. Express 10, 236-45 (2002), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-10-5-236.
3. V. Westphal, S. Yazdanfar, A. M. Rollins, and J. A. Izatt, "Real-time, high velocity-resolution color Doppler optical coherence tomography," Opt. Lett. 27(1), 34-6 (2002).
4. A. M. Rollins, M. D. Kulkarni, S. Yazdanfar, R. Ung-arunyawee, and J. A. Izatt, "In vivo video rate optical coherence tomography," Opt. Express 3, 220-29 (1998), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-3-6-219.
5. M. Laubscher, M. Ducros, B. Karamata, T. Lasser, and R. Salathe, "Video-rate three-dimensional optical coherence tomography," Opt. Express 10, 429-35 (2002), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-10-9-429.
6. S. Yazdanfar, M. D. Kulkarni, and J. A. Izatt, "High resolution imaging of in vivo cardiac dynamics using color Doppler optical coherence tomography," Opt. Express 1, 424-31 (1997), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-1-13-424.
7. J. A. Jensen, Estimation of Blood Velocities Using Ultrasound (Cambridge, 1996).
8. G. J. Tearney, B. E. Bouma, and J. G. Fujimoto, "High-speed phase- and group-delay scanning with a grating-based phase control delay line," Opt. Lett. 22, 1811-3 (1997).
9. V. X. D. Yang, M. L. Gordon, A. Mok, Y. Zhao, Z. Chen, R. S. C. Cobbold, B. C. Wilson, and I. A. Vitkin, "Improved phase-resolved optical Doppler tomography using the Kasai velocity estimator and histogram segmentation," Opt. Commun. 208, 209-214 (2002).
10. Y. Zhao, Z. Chen, C. Saxer, Q. Shen, S. Xiang, J. F. de Boer, and J. S. Nelson, "Doppler standard deviation imaging for clinical monitoring of in vivo human skin blood flow," Opt. Lett. 25, 1358-60 (2000).
11. J. F. de Boer, C. E. Saxer, and J. S. Nelson, "Stable carrier generation and phase-resolved digital data processing in optical coherence tomography," Appl. Opt. 40, 5787-90 (2001).
12. S. I. Fox, Human Physiology (W.C. Brown, Dubuque, Iowa, 1987).
13. B. E. Bouma and G. J.
Tearney, Handbook of Optical Coherence Tomography (Marcel Dekker, 2002).

Introduction Optical coherence tomography (OCT) [1] and its Doppler variants, optical Doppler tomography (ODT) [2] or color Doppler optical coherence tomography (CDOCT) [3], are analogous to ultrasound imaging, with light used instead of sound. High speed microstructural OCT imaging has been demonstrated at various pixel-resolution and frame-rate combinations, using a rapid scanning optical delay line (RSOD) [4] to achieve 250 × 125 pixels at 32 frames per second (fps) or a smart-pixel detector array [5] to acquire 58 × 58 × 58 pixels at 25 fps. To image microvasculature blood flow, the velocity sensitivity can be increased by decreasing the frame rate. Most demonstrated systems [2,3] use averaging and/or correlation algorithms to obtain the velocity information, thus limiting the effective number of lines per second available for color Doppler imaging to a fraction of that in B-mode, depending on the number of axial scans used for averaging and/or correlation (typically 4 ~ 16). The high spatial resolution (1 ~ 20 µm) of OCT and high velocity sensitivity (~ 50 µm/s) of ODT allow in vivo imaging of microvasculature, which can be useful for studying angiogenesis, embryo cardiac dynamics in developmental biology [3,6], and vascular response to treatments, applications that are beyond the reach of current clinical ultrasound systems.

There are considerable differences between ultrasound and OCT (and their Doppler extensions), such as the underlying physical interactions, the different wavelengths involved, and the much higher velocity of light than sound. Nevertheless, from the signal-engineering point of view, there are many similarities. Since ultrasound signal processing is a mature field with many validated techniques for real-time imaging [7], it can guide the design of high-speed OCT/ODT hardware and software processors. Here, we present a comprehensive approach to the design and implementation of a hardware processor (an analog coherent demodulator) and software processors that perform signal amplitude and phase estimation and motion artifact rejection. This system allows blood flow visualization in a number of modes, including color Doppler, velocity variance, Doppler spectrum, and power Doppler. We will refer to the general technique as Doppler optical coherence tomography (DOCT), and describe the signal-processing algorithms for each mode in detail. The performance of the DOCT system will be characterized by a number of stationary and flow phantoms. This information will allow appropriate trade-off decisions for detecting blood flow in different in vivo imaging applications.

DOCT system A schematic of the DOCT system is shown in Fig.
1(a). A broadband light source (JDS, Ottawa, ON, Canada) with a polarized output power of 5 mW at center wavelength λ₀ = 1.3 µm and bandwidth Δλ = 63 nm is coupled into a single-mode fiber optic interferometer. A rapid scanning optical delay line and a phase modulator (JDS, Ottawa, ON, Canada) form the reference arm. The light is focused to a spot size of ~ 25 µm in the sample arm, which can be scanned in three dimensions via high-speed linear translators (Newport, Irvine, CA) and/or a galvanometer (Cambridge Technology, Cambridge, MA). Since the RSOD and the phase modulator are polarization sensitive, multiple polarization controllers (OZ Optics, Ottawa, ON, Canada) are used to optimize the signal transmission in the reference arm and to match polarization states between the reference and sample arms. An optical circulator (O-Net, Boston, MA) provides balanced optical detection to reject the high DC background and increase the signal-to-noise ratio (SNR). A wide bandwidth (~ 10 MHz), custom-made transimpedance amplifier detects the AC interference signal. This signal, high-pass filtered at 30 kHz, is passed through a depth-gain-compensation amplifier, analogous to the time-gain-compensation amplifier employed in ultrasound, to increase the gain as a function of imaging depth. A coherent demodulator yields both the amplitude and phase of the AC signal, as explained later. Care must be taken to minimize the frequency response differences between the in-phase and quadrature channels of the analog hardware. Residual differences in DC bias and AC gain can be balanced digitally after the signals are analog-to-digital (A/D) converted. These steps are critical to reduce background noise in both structural and flow imaging and to ensure accurate estimation of flow velocity.

One of the most important challenges in real-time DOCT imaging with high spatial/velocity resolution is achieving a high axial scan frequency, analogous to the pulse repetition frequency (PRF) in ultrasound. Commercial clinical ultrasound imaging systems (transducer center frequency 3 ~ 8 MHz) typically have PRFs in the range of 4 ~ 10 kHz for array transducers, whereas high frequency (20 ~ 100 MHz) systems may exceed 20 kHz for single element transducers [7]. For a given image size (number of pixels), the B-mode imaging frame rate is proportional to the PRF. In the case of flow imaging, signal averaging between axial scan lines is often implemented, and higher PRFs with lower frame rates typically result in better flow images.
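The proportionality between axial scan frequency (the PRF analog) and B-mode frame rate is simple arithmetic; a small helper (assuming one axial scan per displayed line and no dead time, which idealizes the real system) makes the numbers in the next paragraph concrete:

```python
def frame_rate(axial_scan_hz: float, lines_per_image: int,
               both_directions: bool = False) -> float:
    """B-mode frame rate: one axial scan per image line, doubled when both
    forward and reverse scans of the delay line are recorded."""
    rate = axial_scan_hz / lines_per_image
    return 2 * rate if both_directions else rate

# 1-4 kHz delay lines give 2-8 fps for 500-line images (forward scans only);
# an ~8 kHz resonant scanner reaches ~32 fps using both scan directions.
print(frame_rate(1_000, 500), frame_rate(4_000, 500))   # 2.0 8.0
print(frame_rate(8_050, 500, both_directions=True))     # ~32 fps
```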
Fourier-domain rapid scanning optical delay lines have been used in a number of high-speed OCT systems, offering axial scan frequencies from ~ 1 kHz (galvanometer [8]) to 4 kHz (resonant scanner [4]), corresponding to frame rates of 2 ~ 8 fps for 500-line images, if only the forward scans are used to form the images. The frame rate can be doubled if both the forward and reverse scans are recorded. To achieve real-time (e.g., 32 fps) imaging with high resolution (e.g., 500 lines per image), the axial scan frequency must be increased to at least 8 kHz (Table 1). Only resonant scanners can achieve such high frequencies. The resultant OCT signal bandwidth varies sinusoidally as [4]:

B(t) = (2π f_a D ∆λ / λ₀²) |sin(2π f_a t)|,   (1)

where D is the axial scan depth, f_a is the axial scan frequency, ∆λ = 63 nm is the bandwidth of the light source (with a Gaussian spectrum), and λ₀ = 1.3 µm is the source center wavelength. We have constructed two RSODs operating at 8.05 kHz (D = 1.3 mm) and 12.95 kHz (D = 1.0 mm), resulting in maximum signal bandwidths of 2.4 and 3.0 MHz, respectively, assuming a Gaussian source spectrum (Fig. 2). The RSODs are adjusted to provide group delay only, by centering the light spectrum on the scanning mirror. A phase modulator operating at 4.3 MHz provides the phase delay using a saw-tooth waveform triggered by the RSOD (Fig. 1). This arrangement ensures a stable carrier frequency (f_C = 4.3 MHz) [11], which is essential for hardware coherent demodulation, and allows the use of small mirrors with low inertia in the RSOD to achieve high resonant frequencies. Compared to a sinusoidal waveform, the saw-tooth phase modulation also suppresses the sidebands, which contribute to noise in the OCT signal. The full implementation of the coherent demodulator involves mixing the detected OCT signal with not only cos(2π f_C t) but also sin(2π f_C t), yielding the in-phase (I) and quadrature (Q) components, as shown in Fig. 1(b). Since the I and Q signals are shifted to the base-band, low-pass filters (LPF) instead of band-pass filters (BPF) are used. The LPF used in our system has a bandwidth of approximately 1.55 MHz and greater than 70 dB stop-band rejection. Such a LPF is easier to construct than a BPF with equivalent performance, and is required to avoid aliasing before A/D conversion. The required base-band filtering bandwidth is 1.2 and 1.5 MHz for the 8.05 and 12.95 kHz RSODs, respectively (Fig. 2). The complex base-band OCT signal can then be expressed as

S = I + jQ.   (2)

To increase the data throughput from the A/D card to computer memory, an A/D conversion clock pulse train triggered by the RSOD is used, with data acquisition occurring during the central ~ 80% of the depth scan. The acquired signals can be expressed as 2D arrays I_m,n and Q_m,n, with m and n denoting the indices in the depth and lateral directions, respectively. These signals are transferred by direct memory access without computer CPU intervention, reducing the computational overhead.

Digital signal processing

In structural imaging mode, the intensity of the OCT signal is proportional to the local reflectivity at a particular pixel. Spatial averaging in the depth and lateral directions is performed to improve the SNR. The value of a given pixel in the structural image is calculated as:

⟨|S|²⟩ = (1/MN) Σ_m Σ_n (I_m,n² + Q_m,n²),   (3)

where the sums run over an M × N window in the depth and lateral directions. Due to the large dynamic range involved, the logarithm of the OCT signal is displayed in the structural image. Simultaneous to the structural OCT imaging, blood flow information can be presented in one of several modes, as follows.
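A short sketch of Eq. (3) in numpy (the window sizes and the dB scaling are illustrative choices, not the system's exact settings):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def structural_image(i, q, m_win=4, n_win=8):
    """Structural image of Eq. (3): <|S|^2> over an M x N window,
    shown on a log scale."""
    power = i**2 + q**2                                # |S|^2 per pixel
    avg = uniform_filter(power, size=(m_win, n_win))   # <|S|^2> over M x N
    return 10.0 * np.log10(avg + np.finfo(float).eps)  # log display (dB)
```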
Color-Doppler mode

The mean velocity at any pixel can be evaluated by the Kasai autocorrelation equations [7,9]:

v = f_D λ₀ / (2 n_t cos θ),   (4)

f_D = (f_a / 2π) arctan(Y / X),   (5)

with X = Σ_m Σ_n (I_m,n I_m,n+1 + Q_m,n Q_m,n+1) and Y = Σ_m Σ_n (I_m,n Q_m,n+1 − Q_m,n I_m,n+1), where f_D is the Doppler frequency shift, n_t is the tissue index of refraction (~ 1.4), θ is the Doppler angle, and X and Y will also be used for estimating the variance of the velocity estimate. The arctangent function evaluates the mean phase shift between axial scans (Fig. 3) and is calculated in all four quadrants, with outputs ranging from −π to +π. The calculation of the mean phase change from Eq. (5) is numerically efficient, especially in fixed-point arithmetic, because it relies mostly on summations and multiplications, with only one division and one arctangent operation using floating-point representation to maintain accuracy (Table 2 compares the number of calculations for a single pixel with the average number per pixel when computations are shared, estimated over an entire image with a total number of pixels ≫ MN).

The velocity estimation is performed within an M × N window (Fig. 4(a)), where M is the depth window length (often called the gate length in the ultrasound literature) and N is the number of axial (depth) scans used in the autocorrelation (often called the ensemble length). Additional computational efficiency can be achieved if the M × N windows for adjacent pixels overlap when the flow image is processed, because the calculations for the numerator and denominator in Eq. (5) within the overlapping portions (αM and βN) can be shared between pixels. The accuracy of the velocity estimation improves with larger M and N, i.e., with more spatial averaging, as shown in Fig. 4(c). For a given axial scan frequency (f_a) and a given number of axial scans displayed per frame (K_X), the frame rate is:

F = f_a / [K_X N (1 − β)],   (6)

when only inward or outward axial scans are used. Clearly, a higher frame rate can be obtained by increasing β, at the expense of increased blurring in the lateral direction due to the overlapping of sequential axial scans. In this work, the values of αM and βN were determined as follows. Because the A/D conversion rate is more than four times higher than the LPF bandwidth (i.e., over-sampled), M is set to 4 and α to 0.75, which does not result in significant axial blurring. The number of pixels per axial scan is set to K_Y = 512. For the 8.05 kHz RSOD, the β and N values are set such that the resultant horizontal resolution is 500 pixels for all frame rates except 32 fps (Table 3). At high frame rates (16 and 32 fps), high β and low N values are chosen to maintain the horizontal resolution. At lower frame rates (2 ~ 8 fps), low β and high N values are used to reduce lateral blurring and improve the velocity resolution.

Although aliasing can be phase unwrapped in principle, it is difficult in practice in the presence of noise. In addition, because the velocity estimation accuracy deteriorates with increasing flow velocity (Fig. 10(b)), there is a physical upper limit for phase unwrapping techniques. The non-aliased velocity range is

v_max = ± f_a λ₀ / (4 n_t cos θ),   (7)

corresponding to a full axial range of 3.9 mm/s for f_a = 8.05 kHz and 6.3 mm/s for 12.95 kHz. The velocity resolution is determined by the phase stability of the interferometer and electronics [3,11]:

v_min = ∆φ f_a λ₀ / (4π n_t cos θ),   (8)

where ∆φ is the phase error of the system measured using a stationary reflector as the target. At N = 8 and f_a = 8.05 kHz, ∆φ is ~ 0.06 radians (Fig. 4(c)), corresponding to a minimum detectable velocity of ± 17 µm/s.
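The following sketch implements Eqs. (4)-(5) with numpy/scipy; the uniform-filter averaging stands in for the shared-window bookkeeping described above, the constants are the example values quoted in the text, and the function name and exact sign convention are our own:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def kasai_velocity(s, m_gate=4, n_ens=8, f_a=8.05e3, lam0=1.3e-6,
                   n_t=1.4, cos_theta=1.0):
    """Kasai autocorrelation velocity estimate, Eqs. (4)-(5).

    s : 2D complex array of base-band OCT data, S = I + jQ, with
        depth along axis 0 and axial-scan index along axis 1.
    Returns the mean axial velocity map in m/s, plus the X and Y
    sums for reuse in the variance estimate of Eq. (14).
    """
    # Lag-one autocorrelation between adjacent axial scans.
    r1 = np.conj(s[:, :-1]) * s[:, 1:]
    # Average the real and imaginary parts over the M x N window.
    x = uniform_filter(r1.real, size=(m_gate, n_ens))  # denominator X
    y = uniform_filter(r1.imag, size=(m_gate, n_ens))  # numerator  Y
    f_d = (f_a / (2.0 * np.pi)) * np.arctan2(y, x)     # Eq. (5)
    v = f_d * lam0 / (2.0 * n_t * cos_theta)           # Eq. (4)
    return v, x, y
```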
The velocity dynamic range,

VDR = 20 log₁₀(v_max / v_min),   (9)

is 40.5 dB for the 8.05 kHz system. This can be improved by increasing M or N, at the expense of spatial blurring or frame rate. For the 8.05 kHz system, the VDR can be as high as 63.9 dB, with a minimum detectable velocity of ± 2 µm/s, when N = 128 (Fig. 4(c)).

During in vivo imaging, the biological sample can exhibit bulk motion, i.e., the entire sample can move relative to the DOCT system and create motion artifacts. We have shown previously [9] that these artifacts can be removed by calculating the velocity histogram along an axial line. The histogram of the axial line with K_Y pixels is defined as:

h_i = number of pixels whose estimated velocity falls in the bin [i ∆v_h, (i+1) ∆v_h),   (10)

where ∆v_h is the velocity step (bin) size. The choice of the number of bins h_B is based on two considerations: (a) h_B should be a power of 2, to allow binary-tree sorting for computing the histogram efficiently, and (b) the velocity step size should match the expected velocity resolution:

h_B ≈ 2 v_max / v_min, rounded to a power of 2.   (11)

Equation (11) ensures that the histogram has sufficient velocity resolution. The histogram h_i reaches a maximum at the bulk tissue motion velocity:

V_btm = v_i*, where i* = arg max_i h_i.   (12)

Note that two conditions must be met: (a) the sample acceleration must be small, i.e., the change in the bulk motion velocity must be smaller than the velocity resolution during one axial scan, and (b) the blood flow must occupy only a small portion of the axial scan depth. The second condition depends on the velocity distribution and the amount of bulk tissue in the depth scan. For example, for plug flow within bulk tissue that occupies the entire scan depth, the histogram method breaks down when blood flow occupies more than 50% of the axial scan. For flow conditions with a broader velocity distribution (e.g., laminar), this constraint is more relaxed.

The corrected flow velocity, accounting for aliasing due to the bulk tissue motion, is:

v_corr = v − V_btm, wrapped back into the non-aliased range of Eq. (7).   (13)

The velocities estimated by Eqs. (4) and (5) and corrected for bulk tissue motion with Eqs. (10)-(13) are color-coded (see Fig. 4(b)) and form the color Doppler display mode of the DOCT system.

Velocity variance mode

Even with a high axial scan frequency, the non-aliased velocity range (Eq. (7)) is insufficient to measure high-speed blood flow in arterioles and larger vessels [12]. In these cases, it is beneficial to use a display mode with a wider velocity measurement range. One approach is to compute the velocity standard deviation or variance [10,13], which can vary monotonically with increasing flow velocity (see below). If S* is the complex conjugate of S, the normalized velocity variance is [7]:

σ² = 1 − |⟨S_n S*_n+1⟩| / ⟨S_n S*_n⟩ = 1 − √(X² + Y²) / ⟨|S|²⟩.   (14)

Equation (14) can be evaluated efficiently, since X, Y and ⟨|S|²⟩ have all been calculated for the color Doppler and structural display modes. Hence, obtaining the velocity variance display only slightly increases the overall computation. Velocity variance mapping eliminates aliasing and can greatly increase the velocity measurement range, but the blood flow orientation is lost.
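A compact sketch of the variance map of Eq. (14) and the histogram-based bulk-motion correction of Eqs. (10)-(13); the bin count and the wrap-around handling are our illustrative choices:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_map(s, x, y, m_gate=4, n_ens=8):
    """Normalized velocity variance, Eq. (14), reusing X and Y from
    the Kasai estimator and the windowed <|S|^2> from Eq. (3)."""
    power = uniform_filter(np.abs(s[:, :-1])**2, size=(m_gate, n_ens))
    return 1.0 - np.sqrt(x**2 + y**2) / (power + np.finfo(float).eps)

def remove_bulk_motion(v_line, v_max, h_b=128):
    """Histogram-based bulk-motion correction for one axial line,
    Eqs. (10)-(13)."""
    hist, edges = np.histogram(v_line, bins=h_b, range=(-v_max, v_max))
    i_peak = np.argmax(hist)                          # Eq. (12)
    v_btm = 0.5 * (edges[i_peak] + edges[i_peak + 1])
    v_corr = v_line - v_btm                           # Eq. (13)
    # Re-wrap into the non-aliased range of Eq. (7).
    return (v_corr + v_max) % (2.0 * v_max) - v_max
```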
Power Doppler mode

Another approach to velocity mapping is power Doppler, in which the area under the Doppler spectrum (excluding the DC component due to stationary tissue) is calculated [7]:

PD = Σ over the M × N window of (I′² + Q′²),   (15)

where I′ and Q′ are high-pass-filtered versions of I and Q, obtained using an infinite-impulse-response (IIR) filter to remove the signal from bulk tissue. The power Doppler signal is related to the volume of moving blood within the imaging volume and results in the loss of blood velocity and directionality information [7]. Since in vivo bulk tissue motion can place the stationary-tissue OCT signal outside the IIR filter stop-band (especially with the high velocity sensitivity associated with DOCT), V_btm must be calculated from the Kasai estimator and the velocity histogram. The IIR filter must then be dynamically generated, different for each axial scan line, essentially creating a band-pass filter centered at V_btm to remove the stationary-tissue signal and provide an artifact-free power Doppler image. In OCT imaging, this approach increases the computational complexity considerably, in contrast to ultrasound, where power Doppler requires less computation than color Doppler. Hence, for this work, power Doppler OCT is calculated only in post-processing, rather than in real time.

Doppler spectrum mode

A clinically important display mode for Doppler ultrasound is the so-called sonogram or spectral display [7], essentially a joint time-frequency analysis of the flow signal using a transform method such as the short-time fast Fourier transform (ST-FFT). The spectral display is usually calculated at a particular location within a blood vessel, often identified by color Doppler imaging, to illustrate the velocity (or Doppler frequency) distribution as a function of time. The Doppler spectrum of that location at time nT_a (T_a is the period of one axial scan, equivalent to 1/PRF) is then calculated by FFT over a window length N_FFT, which is selected using criteria similar to those for h_B:

P(f, nT_a) = | Σ_k=1..N_FFT W_k S((n − N_FFT + k) T_a) exp(−j 2π f k T_a) |²,   (16)

where S((n − N_FFT)T_a), ..., S(nT_a) is the complex OCT signal sequence from time (n − N_FFT)T_a to nT_a, M is the range gate size, and W is a window function. This display mode is especially useful for imaging pulsatile flow. Associated with the spectral display, an audio output of the Doppler frequency distribution is often presented to the physician in Doppler ultrasound [7]. Audio presentation can also be accomplished in DOCT (Fig. 5). The audio signals are digital-to-analog converted at 8 kHz (matching the axial scan frequency of ~ 8.05 kHz) with 16-bit resolution in stereo, and are played along with the spectral display (Figs. 14 and 15 below).

Software

The detailed software processing algorithms were first implemented in LabVIEW (National Instruments, Austin, TX), operating in post-processing to validate the code. The LabVIEW program can be used for real-time imaging when the frame rate is less than 2 fps. Subsequently, a multi-threaded C++ program utilizing DirectX display and sound subroutines (Microsoft, Redmond, WA) was developed for a personal computer with dual Xeon (Intel, CA) processors. The data processing and image display time was minimized by code optimization and by balancing the computational load on each processor, which allowed real-time signal acquisition, processing, and display of the structural and flow images.
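Returning to the Doppler spectrum mode, here is a minimal sketch of the sliding-window estimate of Eq. (16); the Hann window, the hop size and the window length are our illustrative choices, not the system's actual parameters:

```python
import numpy as np

def doppler_spectrum(s_gate, n_fft=64, f_a=8.05e3, hop=16):
    """Sliding-window Doppler spectrum (sonogram), Eq. (16), at one
    location.

    s_gate : 1D complex sequence S(n T_a), already averaged over the
             M-pixel range gate at the chosen depth.
    Returns (frequencies, spectrogram) with time along axis 1.
    """
    w = np.hanning(n_fft)
    starts = range(0, len(s_gate) - n_fft + 1, hop)
    frames = np.array([w * s_gate[k:k + n_fft] for k in starts])
    spec = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1))**2
    freqs = np.fft.fftshift(np.fft.fftfreq(n_fft, d=1.0 / f_a))
    return freqs, spec.T  # rows: Doppler frequency, columns: time
```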
In summary, we have constructed a high-speed DOCT system that can perform axial scans 2 ~ 8 times faster than previously reported, using a full implementation of a coherent demodulator. The digital signal processing is flexible, so that the high axial scan frequency can provide a high frame rate, high velocity sensitivity, or an optimized compromise between the two. The combination of hardware and software processing strategies for velocity estimation allows efficient computation for real-time operation.

Performance results and discussion

A key parameter describing the overall speed of an imaging system is the pixel-wise throughput rate. Implementing mature ultrasound signal-processor designs, the current DOCT system achieves 4.1 × 10⁶ pixels/s at f_a = 8.05 kHz and 6.6 × 10⁶ pixels/s at f_a = 12.95 kHz, when digitizing only the forward scans. This throughput is comparable to that of the smart-pixel detector array [5], while remaining compatible with catheter-based OCT designs that require high frame rates. At the present time, the software processing speed limits the fraction of the digitized data that can be utilized. With two 1.7 GHz Xeon processors, the software can process 3.1 × 10⁶ pixels/s (~ 75%). The efficiency can approach 100% when using two 2.4 GHz processors, resulting in few dropped frames.
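As a quick arithmetic check (ours, not from the original text): each axial scan carries K_Y = 512 pixels, so digitizing only the forward scans gives f_a × K_Y = 8.05 kHz × 512 ≈ 4.1 × 10⁶ pixels/s and 12.95 kHz × 512 ≈ 6.6 × 10⁶ pixels/s, matching the quoted throughput figures; the software rate of 3.1 × 10⁶ pixels/s is indeed ~ 75% of the 8.05 kHz stream.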
Compared to the simplified auto-correlation method using a demodulating logarithmic amplifier with limiter output [3], the phase noise of the full implementation of the coherent demodulator is considerably lower, allowing detection of slower blood flow. In addition, the range for velocity detection is doubled, from ± f_a/4 to ± f_a/2. The decreased noise and the increased velocity detection range both contribute to a wider VDR. Furthermore, the demodulating logarithmic amplifier and its limiter output are also more sensitive to noise [3], limiting the SNR for both structural and flow imaging.

In the following sections we present example images and examine the performance of the DOCT system using stationary, steady-flow, and pulsatile-flow phantoms. The results presented here were obtained using the 8.05 kHz RSOD (imaging with the 12.95 kHz system will be reported separately).

Background noise in color Doppler and velocity variance modes

Even with no blood flow present, some background noise exists in the color Doppler and velocity variance images, arising from phase instability (Fig. 4) and from physical factors such as Brownian motion in solutions. In addition, regions with low OCT signal SNR produce noisier velocity estimates (Fig. 7). With lateral scanning, especially at high frame rates, mechanical vibration of the sample-arm fiber optics and inherent speckle modulation drastically increase the background noise. To evaluate this, we imaged a stationary target (0.5% Intralipid solution) and calculated the root-mean-square (RMS) values of the background Doppler frequency and normalized velocity variance at various frame rates and SNR levels (Figs. 6 and 7). The noise background in the Doppler frequency shift reaches a minimum, determined by the phase stability, when no lateral scanning is involved (i.e., 0 fps, equivalent to M-mode ultrasound). At low frame rates (2 fps) and high SNRs (> 10 for the Doppler frequency, and > 100 for the velocity variance), the measured Doppler frequency background noise levels approached this minimum. Slow (~ 100 µm/s), non-pulsatile blood flow can be imaged at 0 ~ 2 fps, yielding a VDR of ~ 40 dB. At intermediate frame rates (4 ~ 8 fps), the background noise levels increased due to vibration and speckle modulation, decreasing the VDR to 30 ~ 32 dB. These conditions are probably adequate to image steady or low-pulsatility (< 2 Hz) blood flow at 0.1 ~ 2 mm/s. Dynamic cardiac imaging demands high temporal resolution and fast frame rates (16 ~ 32 fps); however, the much-reduced VDR (15 ~ 24 dB) means that only high-speed flow can be reliably detected. Because the speckle modulation is related to the lateral scanning speed, for a given frame rate a reduction in the lateral field of view can reduce the background noise. Finally, the velocity variance required a much higher SNR than the Doppler frequency to achieve the same background level, as expected, since the variance estimation is affected more by the presence of OCT signal noise. In summary, Fig. 7 establishes the minimum detectable flow velocities at different frame rate and SNR conditions.

Steady flow velocity calibration in color Doppler and velocity variance modes

The accuracy of the flow velocity measured by DOCT was verified using a calibrated infusion pump (Harvard Apparatus, Holliston, MA) on a flow phantom. The flow phantom was a plastic tube with an inner diameter of 0.75 mm, through which 0.5% Intralipid solution was pumped. The flow was laminar, and a parabolic profile was assumed (and seen in color Doppler mode), such that the peak flow velocity could be calculated from the volumetric flow rate set by the infusion pump. Imaging was performed at 0 fps (i.e., M-mode). The Doppler angle was 79.5° (81.4° after correction for refraction, assuming a refractive index of 1.33). The flow rate was adjusted from 10 µL/min to 3 mL/min, corresponding to peak velocities ranging from 0.7 mm/s to 22.6 cm/s. After accounting for the Doppler angle, this corresponded to axial velocity components of approximately 0.1 mm/s ~ 3.4 cm/s. Two examples are shown in Figs. 8 and 9, illustrating low flow rate (not aliased) and high flow rate (aliased) conditions, respectively.

The accuracy of the color Doppler mode was assessed by comparing the peak velocity set by the flow pump with the measured velocity (Fig. 10). The results suggested that color Doppler mode DOCT could accurately detect flow velocities within the non-aliased velocity measurement range, which was about ± 1.9 mm/s. The increased scattering of the data points with increasing velocity is due to the greater variance at higher flow velocities (Fig. 11), and is not due to error in the infusion pump set velocity (< 0.5%).

In color-Doppler mode, the non-aliased velocity measurement range is given by Eq. (7), and at 81.4° the maximum measurable velocity is ~ 1.3 cm/s, shown by the dashed line in Fig. 11. Using the velocity variance as a measure of flow speed, DOCT could estimate flow speeds well beyond the Doppler aliasing range, in this case above 10 cm/s. The increase in the flow speed detection range was more than 7-fold compared to color Doppler estimation at 81.4°. Because the aliasing range is Doppler angle dependent, this increase was also angle dependent and ranged from 2- to 50-fold.
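As a cross-check of the phantom calibration numbers used in this section (our own arithmetic; the tube radius, Doppler angle and laminar-profile assumption are taken from the text), the axial peak velocity can be computed from the pump's volumetric rate:

```python
import numpy as np

def axial_peak_velocity(q_ul_per_min, radius_m=0.375e-3, theta_deg=81.4):
    """Peak axial velocity component for laminar flow in a tube.

    For a parabolic profile the peak velocity is twice the mean,
    v_peak = 2 Q / (pi r^2); DOCT measures its projection onto the
    optical axis, v_peak * cos(theta).
    """
    q = q_ul_per_min * 1e-9 / 60.0               # uL/min -> m^3/s
    v_peak = 2.0 * q / (np.pi * radius_m**2)     # m/s
    return v_peak * np.cos(np.radians(theta_deg))

# 60 uL/min in the 0.75 mm tube gives ~ 0.68 mm/s axial peak velocity,
# matching the caption of Fig. 8; 150 uL/min gives ~ 1.7 mm/s,
# matching Fig. 14.
```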
The inverted Gaussian fit in Fig. 11 can serve as a calibration curve for obtaining speed information from measured velocity variances. The gain in measurement range comes at the cost of a loss of directionality and a reduced accuracy in the velocity estimation. Since the variance measurement does not distinguish the flow direction, only the speed, not the velocity, can be obtained. Due to the strong SNR dependence, the speed estimation by variance loses reliability when the SNR in the structural image is reduced (Fig. 7(b)). In addition, at high flow speeds (e.g., > 10 cm/s), the error in the speed estimation by variance can be as high as 100%. Hence, there is an upper limit to velocity (and speed) estimation by phase-shift measurements between axial scans, regardless of the method (color Doppler or velocity variance). This limit can be overcome using information along the axial scan, similar to ST-FFT methods [6], which estimate the velocity from the Doppler frequency shift of a single axial scan.

One key objective in the DOCT development was to increase the frame rate while preserving the velocity detection sensitivity, such that dynamic flow profiles can be visualized. In the next section, we demonstrate DOCT imaging of flow phantoms under pulsatile conditions.

Pulsatile flow imaging by color Doppler, velocity variance, and Doppler spectrum modes

The discontinuity in the stepping-motor motion of the infusion pump (Fig. 8) was exploited to provide pulsatility in the flow phantom. Using larger step sizes at a low repetition rate, reproducible pulsatile flow (peak velocities 1 ~ 2 mm/s) was obtained (Fig. 12). Cross-sectional videos of the pulsatile flow phantom were acquired at various frame rates (Fig. 13). Once a region of interest in the flow (e.g., the center of the flow phantom) was identified in the color-Doppler mode, the pulsatility could be characterized in detail using the Doppler spectrum mode, as shown in Fig. 15. For comparison, we include the result from steady flow conditions in Fig. 14, with the flow rate set at 150 µL/min, or 1.7 mm/s peak velocity at the center of the phantom.

The Doppler spectrum, indicating the velocity distribution at the center of the flow phantom, is plotted as a function of time in Fig. 14(a). The mean Doppler frequency was approximately 3.6 kHz, as expected for a 1.7 mm/s flow velocity. The vertical spread of this 3.6 kHz signal is wider than the expected measurement error at this velocity (see Fig. 10(a)). This was due to the fast pulsatility of the flow, as explained above and confirmed by the vertical streaks in Fig. 14(b). The audio output, which can be calculated in real time, does reflect this artifact, since it is not a pure tone. It should be pointed out that the Doppler spectrum contains more information than the color-Doppler mode, since the latter displays only the mean velocity, not the entire velocity distribution. This will be important in investigating high-shear-rate flow conditions, when large velocity gradients exist within the focal volume.

When pulsatility was deliberately introduced, Fig. 15 clearly demonstrated the variation of the velocity distribution over time for the ~ 9 Hz pulsatile flow. The vertical spread at any instant in time in Fig. 15(a) is smaller than that of Fig. 14(a), since the pulsatility was slow enough to be resolved.
The audio output confirms this observation, since it sounds like a clear tone with varying frequency. We believe the combination of structural, color-Doppler, and Doppler spectrum modes can offer a valuable system for clinical use that would provide every step in microvascular imaging: identification of the vessel, location of a region of interest for temporal analysis, and calculation not only of the mean velocity, but also of the velocity distribution, including the pulsatile profile.

Comparison of power-Doppler and velocity variance imaging modes

A typical set of images measuring steady flow in a rectangular flow phantom is shown in Fig. 16. The color-Doppler image contained several rings due to aliasing phase wrap-around. In contrast, these rings were completely removed in the power-Doppler image (although the directional flow information was also lost). To compare the power Doppler and velocity variance imaging modes, a similar set of images including the velocity variance is shown in Fig. 17. While both the velocity variance and power-Doppler images remove the aliasing effect (both losing the directional information), there is one important difference between them. The variance image is brighter in the center, corresponding to higher-velocity flow (Fig. 17(c)). This gradient is not present in the power-Doppler image (Fig. 16(c)), since power Doppler does not measure the flow velocity but provides a "flow" versus "no-flow" determination by thresholding.

The power-Doppler image in Fig. 16 was processed using a high-pass IIR filter centered at zero velocity, since no bulk tissue motion was present. The IIR filter was identical for each axial scan and provided a sharp cut-off at ~ 500 Hz. As explained earlier, if motion artifacts are expected, a band-pass IIR filter (probably different for each axial scan) must be used, which will significantly reduce the processing speed.

Conclusions

In conclusion, we have demonstrated a DOCT system capable of high-speed imaging with a wide velocity dynamic range using a full implementation of hardware coherent demodulation. A number of software processors for blood flow visualization, including color-Doppler, velocity variance, power-Doppler, and Doppler spectrum (including audio output) modes, were demonstrated. Real-time imaging, at 16 fps with 500 × 512 pixels per frame and at 32 fps with 250 × 512 pixels per frame, allowed visualization of 9 Hz pulsatile flow without motion artifacts. The wide velocity dynamic range of the system, covering 2 µm/s to 10 cm/s, allowed detection of bi-directional flow up to ± 1.9 mm/s along the optical axis in color-Doppler mode. The minimum detectable velocity was ± 17 µm/s when 8 axial scans were averaged, and ± 2 µm/s when 128 axial scans were used. When used in velocity variance mode, the upper limit for flow speed detection was more than 10 cm/s, significantly higher than previous reports using phase-shift methods [2,3].

Fig. 2.
Signal and filter frequency response during coherent demodulation. Solid line: calculated frequency response for the 8.05 kHz RSOD when scanning 1.3 mm of depth. Dashed line: frequency response of the 12.95 kHz RSOD when scanning 1.0 mm of depth. After mixing with the 4.3 MHz carrier frequency, the signals are down-shifted to the base-band and up-shifted to 8.6 MHz. Blue line: low-pass filter response, approximating a matched filter for the base-band signals and completely removing the up-shifted signal around 8.6 MHz. The I and Q signals are simultaneously digitized at ~ 10 MHz (for f_a = 8.05 kHz) or ~ 16 MHz (for f_a = 12.95 kHz) with 12-bit accuracy.

Fig. 3. (a) The mean phase shift can be calculated by evaluating the phase angle of the individual OCT signals and then computing the phase differences between axial scans. (b) The result is equivalent to computing the phase angle of the mean (⟨X⟩, ⟨Y⟩) vector. The computation in (b) is numerically less complex than in (a), and so (b) is more suitable for real-time processing.

Fig. 4. (a) Spatial averaging masks for improving the SNR in structural and flow images. Shaded areas indicate regions with shared computation between pixels. (b) Calculated axial velocity using two-quadrant arctan (red) and four-quadrant arctan (blue) functions, showing the aliasing effect and the color map used in color Doppler OCT. (c) Measured phase noise processed from different ensemble lengths (N = 2 to 128, and M = 1). Each data set (dots) represents the normalized distribution of the phase noise measured from 20,000 pixels using a stationary diffuse reflectance target, with a Gaussian fit (line).

Fig. 5. Schematic for generating the audio output to accompany the Doppler spectrum display. The directional information is encoded in the stereo left and right channels. H: digital Hilbert transform performing a 90° phase shift. DAC: digital-to-analog converter.

Fig. 7. The SNR varied with depth; therefore, the image was divided into 5 horizontal regions of interest (ROI) to investigate the velocity noise as a function of SNR. Each data point is the mean value within a ROI containing 25,000 pixels. (a) Background noise levels in the color Doppler mode at 2 ~ 32 fps and the corresponding structural image SNR conditions. (b) Background noise levels in the velocity variance mode at various frame rates and the corresponding structural image SNR conditions. Note: 0 fps indicates no lateral scanning, i.e., analogous to M-mode ultrasound.

Fig. 8.
(a) Structural, (b) color Doppler, and (c) normalized velocity variance images of a 0.5% Intralipid flow phantom, acquired at 0 fps. At 0 fps, the x-axis represents time, not distance. Flow rate = 60 µL/min, corresponding to 0.68 mm/s peak velocity after accounting for the Doppler angle. Vertical streaks in (b) are due to the discrete stepping-motor motion of the infusion pump, prominent at low flow rates. Scale bar = 0.5 s.

Fig. 9. (a) Structural, (b) color Doppler, and (c) normalized velocity variance images of a 0.5% Intralipid flow phantom, acquired at 0 fps. At 0 fps, the x-axis represents time, not distance. Flow rate = 1.5 mL/min, corresponding to 1.7 cm/s peak velocity after accounting for the Doppler angle. Notice the aliasing pattern in (b) and the disappearance of the vertical streaks. Scale bar = 0.5 s.

Fig. 10. Comparison of the measured peak velocity in color Doppler mode (measuring the velocity component along the optical axis) and the expected flow velocity, as set on the infusion pump. Each data point is the mean of 25,000 pixels at the center of the phantom. (a) Low flow rate conditions without aliasing. Solid line: unity slope. Error bars: standard deviation within the 25,000 pixels. (b) With flow velocities greater than 1.9 mm/s, aliasing occurs. Solid line: expected relationship, following Fig. 4(b). Notice the increased scattering of the data points at higher velocity, which forms the basis of velocity variance imaging.

Fig. 12. (a) Structural, (b) color Doppler, and (c) normalized velocity variance images of a 0.5% Intralipid flow phantom, acquired at 0 fps. The x-axis is time, and the flow goes from right to left. The flow rate was adjusted to be pulsatile at ~ 9 Hz, with the peak velocity just below the aliasing velocity limit. Scale bar = 250 ms.

Fig. 13. (77, 172, 359, 737 and 1423 kB, respectively) Left to right: 2, 4, 8, 16, and 32 fps videos of the pulsatile flow phantom, each containing structural (top), color Doppler (middle), and velocity variance (bottom) images. Each video is 2 seconds long. The diameter of the tube is ~ 0.75 mm. Notice the peak velocity drifting laterally in the 2 and 4 fps videos, which is an artifact of low frame rates. The slow pulsatility (~ 1 Hz) observed in the 8 fps video is an aliasing artifact, since the actual pulse rate is ~ 9 Hz. At 16 fps, the pulsatility can be visualized properly. At 32 fps, although the temporal resolution is improved, the reduced velocity sensitivity degrades the image quality markedly.

Fig. 14. Click to hear the associated audio output [33 kB]. (a) Doppler spectrum and (b) color Doppler images of the flow phantom driven at a steady flow rate of 150 µL/min, corresponding to 1.7 mm/s peak velocity after Doppler angle correction. The Doppler spectrum is obtained at the center of the phantom, along the white line shown in (b).

Fig. 15. Click to hear the associated audio output [33 kB]. (a) Doppler spectrum and (b) color Doppler images of the flow phantom driven under pulsatile conditions. The ~ 9 Hz pulses had peak flow rates of ~ 2 mm/s. Notice the small reflection observed in (a), likely due to bias and gain mismatch in the I and Q channels.

Fig. 16. (a) Structural, (b) color-Doppler, and (c) normalized power Doppler OCT images of a glass-channel flow phantom of 0.25% Intralipid with a peak velocity of ~ 6 mm/s. Note the aliasing effect in (b) due to phase wrap-around, and the loss of flow directionality in (c).

Table 1.
Requirements on the axial scan frequency and analog-to-digital sampling rates for 500 × 500 pixels per frame.

Table 2. Computational complexity of the velocity estimation for a single pixel.

Table 3. Ensemble length and overlap values for various frame rates.
Organic transistors on paper: a brief review

Organic transistors are being developed for a variety of flexible electronics applications. They are usually fabricated on polymeric substrates, but considering the significant negative impact of plastic waste on the global environment and taking into account the many desirable properties of paper, there have also been efforts to use paper as a substrate for organic transistors. In this review we provide a brief overview of these efforts.

Introduction

Organic transistors are transistors in which the semiconductor is a conjugated organic material. This can be a polymer, a small-molecule semiconductor, or a combination of two or more materials. Transistors can be classified according to the device architecture and to the mechanism by which the electric current flowing through the transistor is modulated. For organic transistors, the most commonly implemented architecture is the thin-film transistor (TFT), in which the semiconductor and all other device components are deposited onto the substrate in the form of thin layers, while the most commonly exploited mechanism for the modulation of the electric current is the field effect. The latter requires that the semiconductor be separated from a metallic gate electrode by an electrically insulating layer, the gate insulator. This can be a dielectric or an electrolyte. When a voltage is applied between the gate electrode and the semiconductor, a thin sheet of mobile electronic charges is formed in the semiconductor in the close vicinity of the interface with the gate insulator. This charge layer balances the charge (of opposite polarity) located on the gate electrode. By adjusting the gate-source voltage, the charge density in the semiconductor channel, and thereby its electric conductance, can be modulated over a wide range. With two metal contacts attached to the semiconductor (the source contact and the drain contact), the electric current flowing through the transistor can thus be efficiently controlled by the gate-source voltage.

In n-channel field-effect transistors, the gate-source voltage is usually positive and the drain current is due to negatively charged carriers (electrons), while in p-channel field-effect transistors, the gate-source voltage is usually negative and the drain current is due to positive charge carriers (holes). Depending on the materials employed for the semiconductor and the source and drain contacts, the transfer of one type of charge carrier between the contacts and the semiconductor and/or the flow of one type of carrier through the semiconductor is usually more efficient than for the other, and as a result, organic transistors are usually either n-channel or p-channel transistors. This is the desired behavior for all practically relevant applications. Ambipolar behavior, i.e., the conduction of both electrons and holes in the same transistor depending on the polarity of the applied voltages, is highly undesirable, as it is necessarily associated with large off-state drain currents, prohibitive power consumption and poor signal integrity, and thus needs to be avoided by proper materials selection.
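For readers unfamiliar with how the gate-source voltage sets the drain current, the following sketch evaluates the textbook gradual-channel (square-law) model of a field-effect transistor; this is a generic illustration with invented parameter values, not a model taken from any of the papers reviewed here.

```python
import numpy as np

def tft_drain_current(v_gs, v_ds, mu_cm2=0.5, c_i_nf=700.0,
                      w_um=100.0, l_um=20.0, v_th=-1.2):
    """Drain current of a p-channel TFT in the gradual-channel
    (square-law) approximation; all parameter values are invented
    for illustration.
    """
    mu = mu_cm2 * 1e-4              # cm^2/(V s) -> m^2/(V s)
    c_i = c_i_nf * 1e-9 / 1e-4      # nF/cm^2 -> F/m^2
    k = mu * c_i * (w_um / l_um)
    v_ov = v_gs - v_th              # gate overdrive (negative when on)
    if v_ov >= 0:                   # off state (p-channel convention)
        return 0.0
    if v_ds <= v_ov:                # saturation: |V_DS| >= |V_GS - V_th|
        return -0.5 * k * v_ov**2
    return -k * (v_ov - 0.5 * v_ds) * v_ds  # linear regime
```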
Another popular implementation of organic transistors is the organic electrochemical transistor (OECT), in which the electric current flowing through the organic semiconductor (usually a conducting or semiconducting polymer) is modulated not by an electric field, but by means of a reversible chemical (redox) reaction of the semiconductor. This reaction is controlled by an electric voltage applied to an electrolyte in contact with the semiconductor and results in the controlled injection and extraction of ions into and out of the organic semiconductor. Due to their inherently low operating voltages, OECTs are particularly useful for bioelectronic applications.

A wide range of semiconducting, insulating and metallically conducting materials and a wide range of deposition and patterning techniques are available or have been developed for the fabrication of organic transistors. The particular choice of these materials and processes is usually dictated by a variety of factors and is often a compromise involving device performance, parameter uniformity, long-term stability, manufacturing throughput, process reproducibility, and waste management. One aspect often associated with the large-scale manufacturing of organic transistors is the use of solution-based deposition and patterning techniques and of sheet-to-sheet or roll-to-roll printing approaches. A particularly useful aspect of organic transistors is that they can typically be fabricated at relatively low process temperatures, usually below about 200 °C and often even below about 100 °C. This makes it possible to fabricate organic transistors on a variety of unconventional substrates, including plastics and paper.

Paper is particularly intriguing, as it is a naturally renewable, biodegradable, easily recyclable and rather inexpensive and ubiquitous material. Paper is manufactured in a wide variety of categories (e.g., as printing paper, wrapping paper, writing paper, drawing paper, specialty paper) and is thus available with a wide range of properties and specifications. For example, while the thickness of most types of paper ranges from 50 to 200 µm, organic transistors have also been fabricated on paper as thin as 800 nm and as thick as 2.5 mm. Most types of paper are optically opaque, but organic transistors have also been fabricated on optically transparent paper, with potentially useful implications for certain optoelectronic applications. In the dry state, paper is usually an electrical insulator, but due to its generally hygroscopic behavior, paper may also be an electrolyte, with potentially desirable or undesirable consequences for electronic devices fabricated on paper.

One of the challenges associated with the use of paper as a substrate for electronic devices is its often significant surface roughness. This challenge can be addressed in a variety of ways, for example by fabricating the transistors in a device architecture that is less sensitive to the substrate roughness, by applying a smoothening surface coating prior to device fabrication, or by using some type of engineered or specialty paper with inherently small surface roughness, such as nanocellulose paper. Organic transistors on paper were first reported a little less than 20 years ago, and while the performance of early organic transistors fabricated on paper was substantially inferior to that of organic transistors fabricated on plastic substrates, the past ten years or so have brought much progress in this direction.
The purpose of this review is to briefly summarize this progress. First, we would like to point to a few previous publications that have reviewed organic transistors in general and the use of paper in the fabrication of organic transistors and other types of electronic devices in particular. In a recent tutorial, Lamport et al. have summarized the most important aspects related to the basic device architecture and the current-voltage characteristics of organic field-effect transistors, with a focus on the contact resistance and a number of experimental techniques for extracting physical materials and device parameters. 1 Wang et al. have provided a comprehensive overview of small-molecule and polymeric semiconductors developed for and employed in organic transistors, with a focus on materials that have shown carrier mobilities greater than 1 cm² V⁻¹ s⁻¹ in either p-channel or n-channel organic transistors. 2 The intricate relations between the microstructure, the charge-transport efficiency and the charge-carrier mobility in organic semiconductors, particularly in high-mobility solution-deposited donor-acceptor polymers, have been reviewed by Sirringhaus. 3 Guo et al. have examined the status of the design, modeling and large-scale manufacturing of analog and digital integrated circuits and active-matrix displays and imagers based on high-mobility and high-frequency organic TFTs on plastic substrates. 4 Li et al. have reviewed the various aspects of employing organic transistors for chemical and biomolecule sensing. 5 A comprehensive review of organic electrochemical transistors has recently been published by Rivnay et al. 6 Mihai Irimia-Vladu has extensively discussed the use of natural and nature-inspired materials, including paper, silk, leather, vinyl, gelatin and certain synthetic polymers, such as polydimethylsiloxane, parylene and polyvinyl alcohol, in the fabrication of electronic devices, with a clear focus on the important aspects of biocompatibility, biodegradability and sustainability which these materials have to offer. 7-10 Tobjörk and Österbacka have summarized the structural and electrical properties of paper, evaluated several printing techniques potentially useful for the fabrication of electronic devices on paper (gravure, flexography, offset, screen, inkjet, aerosol jet), and reviewed the early reports of active and passive electronic components fabricated on paper, with an emphasis on low-voltage devices, particularly electrochemical transistors and electrochromic displays. 11 In 2016, Lin et al. reviewed the fabrication of energy-storage and energy-harvesting devices, particularly supercapacitors, piezoelectric power generators and printed antennas, on paper. 12 Most recently, Ha et al. examined the various ways in which paper can be employed either as a substrate or as a functional material (e.g., as an antireflection coating, conductive electrode, gate dielectric, diffusion barrier, etc.) for a wide variety of electronic devices, including transistors, solar cells, light-emitting diodes, batteries, supercapacitors, and antennas. 13 Unlike these earlier reviews, we will concentrate in the following exclusively on the use of paper as a substrate for organic transistors.

Organic electrochemical transistors on paper

The first organic transistors fabricated on paper were organic electrochemical transistors (OECTs). 14
This is certainly related to the relative ease with which OECTs can be fabricated and to the fact that the performance of OECTs tends to be less affected by the surface roughness of the substrate, which can be quite significant in the case of many types of paper. The paper-based OECTs reported initially (in 2002) by Andersson et al. 14 and later (in 2008) by Mannerbro et al. 15 were fabricated on commercially available glossy photo paper coated with a layer of polyethylene (PE) and were based on the conducting polymer polyethylenedioxythiophene/polystyrene sulfonic acid (PEDOT:PSS) in contact with an electrolyte. In addition to individual OECTs, Andersson et al. also fabricated 40-pixel active-matrix electrochromic displays in which each pixel was controlled by an OECT. 14 Mannerbro et al. evaluated the dynamic performance of 5-stage ring oscillators based on OECTs in which both the PEDOT:PSS and the electrolyte had been deposited by inkjet printing. These ring oscillators showed a signal propagation delay of about 20 s per stage at a supply voltage of 1 V (see Fig. 1). 15 The general simplicity of the OECT fabrication process, the insensitivity of the performance to the substrate roughness, the low operating voltages and their potentially very large transconductance 16 make OECTs particularly useful for applications in sensing 17 and biological interfacing. 6 Fundamental drawbacks of OECTs are their relatively small on/off current ratio (typically smaller than 10⁶) and their relatively small transit frequency (usually a few tens of kilohertz).

Organic field-effect transistors on paper

The first organic field-effect transistors fabricated on paper were reported by Eder et al. in 2004. 18 Commercially available hot-pressed cotton-fiber paper was chosen as the substrate, and its surface was sealed prior to the fabrication of the TFTs with a layer of polyvinylphenol (PVP) with a thickness of a few hundred nanometers. On this surface, the TFTs were fabricated in the bottom-gate, bottom-contact architecture using a combination of vacuum deposition (for the gate electrodes, source/drain contacts and semiconductor layer), spin-coating (for the PVP gate dielectric), photolithography, and wet and dry etching. The small-molecule material pentacene was employed as the semiconductor. TFTs with a channel length of 50 µm had a carrier mobility of 0.2 cm² V⁻¹ s⁻¹ and an on/off current ratio of 10⁶, both notably smaller compared to pentacene TFTs fabricated on plastic substrates. From a 5-stage unipolar ring oscillator based on TFTs with a channel length of 10 µm, a signal propagation delay of 12 ms per stage was obtained at a supply voltage of 50 V, which was inferior by about two orders of magnitude compared to the signal delay measured on similar ring oscillators fabricated on plastics.

Also in 2004, Kim et al. described the fabrication of bottom-gate, bottom-contact polymer TFTs on commercially available photo paper, sealed with a stack of vapor-deposited parylene having a thickness of 5 to 20 µm and silicon dioxide deposited by electron-beam evaporation having a thickness of 50 nm. 19,20 This double-layer coating was shown to significantly reduce the surface roughness of the paper. The gate dielectric was a combination of a 40 nm-thick layer of polyimide deposited by spin-coating and a 210 nm-thick layer of electron-beam-evaporated SiO₂. Regioregular poly(3-hexylthiophene) (P3HT) was used as the semiconductor and deposited either by spin-coating or microcontact printing.
The TFTs had a channel length of 25 µm, a carrier mobility of 0.086 cm² V⁻¹ s⁻¹ (similar to the highest mobilities reported up to that point for P3HT TFTs), and an on/off current ratio of 10⁴.

Bollström et al. 21 developed a multilayer coating system consisting of four different materials deposited successively onto the paper surface: a pre-coating layer of ground calcium carbonate (GCC), a smoothing layer of aluminum silicate hydroxide (kaolin), a barrier layer of acrylic or styrene acrylic copolymer latex blended with precipitated calcium carbonate (to produce a polar surface), and a calendered top-coating layer of kaolin. On this coated paper, the authors fabricated P3HT TFTs in the top-gate architecture using inkjet-printed silver source and drain contacts and polyvinylphenol (PVP) as the gate insulator. The hygroscopic nature of the PVP resulted in a very large gate-insulator capacitance, allowing these TFTs to be operated with voltages of about 1 V. However, due to the significant leakage currents, the TFTs in this initial report had a very small on/off current ratio (about 10). On the same type of paper, Pettersson et al. 22 later fabricated P3HT TFTs in which an ion-gel electrolyte obtained by gelation of a triblock copolymer (poly(styrene-block-ethylene oxide-block-styrene); PS-PEO-PS) in an ionic liquid (1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide; [EMIM][TFSI]) 23 was employed as the gate insulator. Owing to the large capacitance of the ion-gel electrolyte, these TFTs also had very low operating voltages (2 V), but a significantly improved on/off current ratio (about 10⁶). On a 3-stage unipolar ring oscillator, the authors measured a signal propagation delay of 35 ms per stage at a supply voltage of 3 V. 22

In 2011, we showed that the large surface roughness of paper does not necessarily prevent the use of very thin gate dielectrics in the fabrication of organic TFTs. 24 Employing a hybrid gate dielectric consisting of a 3.6 nm-thick layer of oxygen-plasma-grown aluminum oxide (AlOx) and a 2.1 nm-thick self-assembled monolayer (SAM) of an alkylphosphonic acid, we fabricated bottom-gate, top-contact p-channel and n-channel TFTs and unipolar and complementary inverters directly on the surface of four different types of banknotes. The large capacitance of the thin AlOx/SAM gate dielectric allowed these TFTs to operate with gate-source and drain-source voltages of 3 V, similar to the operating voltages of electrochemical and electrolyte-gated transistors, while offering the potential for higher switching frequencies. Except for the phosphonic acid SAM, all materials were grown or deposited in vacuum, and all patterning was performed using shadow masks, thus without the need for photoresists and subtractive patterning. The TFTs had channel lengths ranging from 10 to 30 µm. The p-channel TFTs were fabricated using the small-molecule semiconductor dinaphtho[2,3-b:2′,3′-f]thieno[3,2-b]thiophene (DNTT) and had a carrier mobility of 0.57 cm² V⁻¹ s⁻¹, an on/off current ratio of 10⁵ and a subthreshold slope of 0.11 V per decade. Hexadecafluorocopperphthalocyanine (F16CuPc) was used for the n-channel TFTs, providing an electron mobility of 0.005 cm² V⁻¹ s⁻¹, an on/off current ratio of 10⁴ and a subthreshold slope of 0.26 V per decade. Unipolar inverters showed switching frequencies of about 2 kHz.
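As a back-of-the-envelope check of why such a thin AlOx/SAM stack supports 3 V operation, one can estimate its areal capacitance as two dielectrics in series; the permittivities below (ε_r ≈ 9 for AlOx, ≈ 2.5 for the alkyl SAM) are textbook estimates of our own, not values given in the review:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def series_capacitance_nf_cm2(layers):
    """Areal capacitance of stacked dielectrics (series connection).

    layers: list of (relative permittivity, thickness in m) tuples.
    """
    c_inv = sum(d / (eps_r * EPS0) for eps_r, d in layers)
    return (1.0 / c_inv) * 1e9 / 1e4  # F/m^2 -> nF/cm^2

# AlOx (eps_r ~ 9, 3.6 nm) + alkylphosphonic-acid SAM (eps_r ~ 2.5, 2.1 nm)
print(series_capacitance_nf_cm2([(9.0, 3.6e-9), (2.5, 2.1e-9)]))
# ~ 700 nF/cm^2 -- large enough to induce a useful charge density
# at gate-source voltages of only ~ 3 V
```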
The fabrication of electronic devices on banknotes is partially motivated by the possibility of implementing active security and anti-counterfeiting features directly on the surface of the banknotes. In addition to TFTs and digital circuits, this would likely also require some type of memory devices. In 2012, Khan et al. reported on the fabrication of ferroelectric memory TFTs on a banknote. 25 The bottom-gate, top-contact TFTs were fabricated using polydimethylsiloxane (PDMS) as a planarization layer, PEDOT:PSS for the gate electrodes, the ferroelectric copolymer poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) as the gate dielectric (all deposited by spin-coating), and vacuum-deposited pentacene as the semiconductor. The TFTs had a channel length of 60 µm, a carrier mobility of 0.12 cm² V⁻¹ s⁻¹, an on/off current ratio of 10³, a memory window of about 8 V, and a retention time of several hours.

The first organic transistors on paper that showed a carrier mobility greater than 1 cm² V⁻¹ s⁻¹ were reported in 2012 by Li et al. 26 Perhaps more important than the large carrier mobility was the fact that these TFTs also had a very large on/off current ratio of 10⁸. The bottom-gate, top-contact TFTs were fabricated on commercially available photo paper coated with a 3 µm-thick layer of vapor-polymerized parylene. A 500 nm-thick gate dielectric of the fluoropolymer Cytop and a blend of the small-molecule semiconductor 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) and the insulating polymer poly(methyl methacrylate) (PMMA) were successively deposited by spin-coating. The large carrier mobility was in part due to the formation of large crystalline domains in the semiconductor layer, resulting from the phase separation in the solution-deposited C8-BTBT/PMMA blend.

Zhang et al. fabricated organic TFTs and circuits on a 320 nm-thick stack of polyacrylonitrile (PAN) and polystyrene (PS) that served as both the substrate and the gate dielectric; this plastic sheet with the TFTs and circuits was then laminated onto the surface of a banknote. 27 The p-channel pentacene and n-channel bis(octyl)-perylene tetracarboxylic diimide (PTCDI-(C8H17)2) TFTs had hole and electron mobilities of 0.52 cm² V⁻¹ s⁻¹ and 0.23 cm² V⁻¹ s⁻¹, respectively. A 5-stage complementary ring oscillator showed a signal propagation delay of 59 ms per stage at a supply voltage of 50 V.

Peng et al. screen-printed silver-nanoparticle-based gate electrodes directly onto the surface of commercially available laser-printing paper to fabricate bottom-gate, top-contact TFTs with a vapor-deposited parylene gate dielectric (680 nm or 2 µm thick), vacuum-deposited DNTT as the semiconductor, and screen-printed silver-nanoparticle-based source and drain contacts. 28,29 With a channel length of 85 µm, these TFTs showed carrier mobilities between about 0.3 and 0.6 cm² V⁻¹ s⁻¹, on/off current ratios up to 10⁸, a subthreshold slope of 0.9 V per decade and a transit frequency of 50 kHz, quite similar to the performance of TFTs fabricated on a plastic substrate, and with excellent uniformity across an array of 64 TFTs (see Fig. 2).

Zocco et al. compared the performance of pentacene TFTs fabricated on glass and on two types of commercially available paper, Hewlett-Packard photo paper and Sappi High Gloss specialty paper. 30 A 320 nm-thick parylene layer was used as the gate dielectric.
The TFT performance turned out to be very similar on all three substrates, with carrier mobilities of 0.11, 0.09 and 0.05 cm² V⁻¹ s⁻¹ on the glass, the photo paper and the specialty paper, respectively. These results again show that parylene can be a very suitable surface-coating and gate-dielectric material for the fabrication of high-performance organic TFTs on paper.

The largest carrier mobility published to date for organic transistors on paper is 2.5 cm² V⁻¹ s⁻¹; these TFTs were reported by Minari et al. in 2014. 31 On commercially available photo (inkjet) paper coated with a 3 µm-thick layer of parylene, the authors fabricated top-gate TFTs based on the small-molecule semiconductor C8-BTBT. The semiconductor layer was deposited by drop-casting and formed a polycrystalline layer consisting of large crystalline domains. Gold nanoparticles functionalized with conjugated molecular ligands and patterned using a combination of photolithography and solution-coating were employed to form the source and drain contacts and the gate electrodes, with a stack of two fluoropolymers with a total thickness of 500 nm serving as the gate dielectric. The TFTs had a channel length of 100 µm, and in addition to a record mobility of 2.5 cm² V⁻¹ s⁻¹, they showed an on/off current ratio of 10⁶ and a subthreshold slope of 1.4 V per decade (see Fig. 3).

Fig. 3 C8-BTBT TFTs fabricated by Minari et al. on paper, with a carrier mobility of 2.5 cm² V⁻¹ s⁻¹, the largest carrier mobility reported to date for organic transistors on paper. Prior to TFT fabrication, the paper was coated with a 3 µm-thick layer of parylene. Gold nanoparticles functionalized with conjugated molecular ligands and patterned using a combination of photolithography and solution-coating were employed to form the source and drain contacts and the gate electrodes. Reprinted with permission. 31 Copyright 2014, Wiley-VCH.

Rather than sealing the entire paper surface with a blanket smoothing layer prior to transistor fabrication, Grau et al. employed gravure printing to apply a surface coating only in those regions in which the TFTs were to be fabricated, thus preserving the natural properties of the paper in the remaining areas. 32 Gravure printing is an established, mass-production-capable, high-quality, multi-purpose roll-to-roll printing technique and was utilized here both to locally coat the paper with 6 µm-thick polyvinylphenol (PVP) and to print the 200 nm-thick PVP gate dielectric of the bottom-gate, bottom-contact polymer TFTs. Inkjet printing was used to define the Ag-nanoparticle-based gate electrodes and source/drain contacts. The semiconducting polymer poly(2,5-bis(3-tetradecylthiophen-2-yl)thieno[3,2-b]thiophene) (pBTTT) was deposited by spin-coating. The TFTs had a channel length of 25 µm, a carrier mobility of 0.086 cm² V⁻¹ s⁻¹, an on/off current ratio above 10⁴ and a subthreshold slope of 18 V per decade.

Organic transistors on nanocellulose, supercalendered and specialty paper

When paper is manufactured by traditional papermaking techniques, the microstructure of the cellulose fibers is mostly preserved. Depending on the type of wood from which the cellulose is obtained, these fibers have a diameter of 10 to 20 µm, which is the main reason for the large surface roughness of regular paper. An alternative approach to papermaking is to first disintegrate the cellulose fibers by high-pressure homogenization into their constituent fibrils, called micro- or nanofibrillated cellulose (MFC, NFC), cellulose nanofibers (CNF), cellulose nanocrystals (CNC) or nanocellulose.
These can then be pressed into thin sheets to produce cellulose nanopaper. 33 Due to the small diameter of the nanofibers or nanocrystals (from a few nanometers to a few tens of nanometers), nanopaper is significantly smoother than regular paper. Organic TFTs fabricated on nanopaper have been reported by several groups. 34-38 Chinga-Carrasco et al. 34 explored the effect of treating the cellulose nanofibers prior to homogenization by carboxymethylation or 2,2,6,6-tetramethylpiperidine-1-oxyl-mediated oxidation, and modified the nanopaper surface with a hexamethyldisilazane coating. On this surface, the authors fabricated top-gate p-channel polymer TFTs based on inkjet-printed silver-nanoparticle source/drain contacts, spin-coated poly(3,3′′-didodecylquaterthiophene) (PQT-12) and PVP as the semiconductor and gate insulator, and drop-cast PEDOT:PSS gate electrodes. Due to the hygroscopic nature of the PVP, these TFTs had very low operating voltages (2 V), but also a very small on/off current ratio, similar to the TFTs reported earlier by Bollström et al. 21

Huang et al. 35 applied a hot-pressing process to produce nanopaper sheets with a preferred thickness. On these sheets, the authors fabricated bottom-gate, top-contact n-channel TFTs using a 1 µm-thick PMMA gate dielectric deposited by spin-coating and a vacuum-deposited layer of the small-molecule semiconductor bis(pentadecafluorooctyl)-naphthalene tetracarboxylic diimide (NTCDI-(CH2C7F15)2). These TFTs had a channel length of 100 µm, an electron mobility of 0.004 cm² V⁻¹ s⁻¹ and an on/off current ratio of 200 (the latter possibly limited by the lack of patterning of the semiconductor layer).

Fujisaki et al. 36 took advantage of a modified protocol that preserved the native chemical structure of the cellulose in the nanofibers, yielding nanopaper with greatly improved thermal stability. The authors coated their 20 µm-thick nanopaper with a 2 µm-thick olefin polymer and fabricated bottom-gate, bottom-contact p-channel polymer TFTs with a channel length of 10 µm using a 300 nm-thick fluoropolymer gate dielectric; these TFTs had a carrier mobility of 1.3 cm² V⁻¹ s⁻¹, on/off current ratios up to 10⁸ and a subthreshold slope of 0.84 V per decade (see Fig. 4).

Instead of nanofibers, Wang et al. 37 utilized nanocrystals, which tend to be shorter than nanofibers (tens or hundreds of nanometers, rather than several microns) and thus tend to give a smoother surface of the nanopaper produced from them. Atomic layer deposition (ALD) was then used to coat the nanopaper with a thin aluminum-oxide layer, on which the authors fabricated top-gate TFTs based on a phase-separating blend of TIPS pentacene and poly[bis(4-phenyl)(2,4,6-trimethylphenyl)amine] (PTAA). The gate dielectric was a stack of 35 nm-thick Cytop (deposited by spin-coating) and 40 nm-thick Al2O3 (deposited by ALD). Owing to the relatively small thickness and large capacitance (31 nF cm⁻²) of this double-layer gate dielectric, it was possible to operate these TFTs with relatively low voltages of 10 V. The TFTs had a channel length of 180 µm, a carrier mobility of 0.23 cm² V⁻¹ s⁻¹, an on/off ratio of 10⁴ and a subthreshold slope of 0.9 V per decade.

Dai et al. 38 exploited the fact that the 2,2,6,6-tetramethylpiperidine-1-oxyl-mediated oxidation process introduces a significant density of mobile sodium ions into the nanocellulose, which makes the nanopaper also an electrolyte.
To fabricate TFTs, the authors used a sheet of 40 µm-thick nanopaper with a capacitance of 220 nF cm⁻² as both the substrate and the gate insulator, with the gate electrodes located on one surface and the organic semiconductor and the source/drain contacts located on the other surface of the substrate. Both p-channel and n-channel TFTs were fabricated, using C8-BTBT, PQT-12 and NTCDI-(CH2C7F15)2 as the semiconductors. The TFTs had a channel length of 100 µm, carrier mobilities between 0.01 and 0.07 cm² V⁻¹ s⁻¹, and on/off current ratios of about 10³. One drawback of manufacturing nanocellulose is that the process of disintegrating the native cellulose fibers into nanocellulose is associated with a relatively large energy consumption. An alternative is supercalendering, a technique in which conventionally manufactured paper is flattened at the end of the papermaking process by passing it through stacks of hard and soft cylindrical rollers. Paper produced by supercalendering is called glassine and is often used as an interleaving paper to protect fine art or delicate objects from contact with other materials. Its smooth surface also makes glassine useful for flexible electronics. In 2015, Hyun et al. reported on the fabrication of electrolyte-gated polymer TFTs on glassine paper in a side-gate architecture, using screen-printed graphene to define the gate electrodes and the source and drain contacts on the substrate surface, aerosol-jet-printed poly(3-hexylthiophene) (P3HT) as the semiconductor, and a drop-cast ion-gel electrolyte. 39 Due to the large capacitance of the electrolyte (22 µF cm⁻²), the TFTs had a low operating voltage of 2 V. For TFTs with a channel length of 60 µm, the authors demonstrated a carrier mobility of 0.14 cm² V⁻¹ s⁻¹, an on/off ratio of about 10³, and excellent bending stability of the TFTs (see Fig. 5; reprinted with permission. 39 Copyright 2015, Wiley-VCH). In response to the specific demands of flexible and printed electronics in terms of the substrate properties, a number of paper manufacturers have developed and commercialized specialty paper characterized by a small surface roughness. One example is PowerCoat HD from Arjowiggins Creative Papers, which was introduced in 2014. On this substrate, Wang et al. 40 and later Raghuwanshi et al. 41 fabricated organic TFTs based on a phase-separating blend of TIPS pentacene and either PTAA or polystyrene. The gate dielectric was a stack of an insulating polymer (either 45 nm-thick Cytop or 160 nm-thick PVP, deposited by spin-coating) and an insulating metal oxide (40 nm-thick Al2O3 and/or HfO2, deposited by atomic layer deposition). The TFTs had a channel length of 90 or 180 µm, operating voltages of 10 V, a carrier mobility of about 0.4 cm² V⁻¹ s⁻¹ and an on/off ratio of 10⁵, and they displayed excellent long-term stability. Another brand of smooth specialty paper for printed electronics is p_e:smart from the Felix Schoeller Group, also introduced around 2014. On this paper, Mitra et al. fabricated top-gate polymer TFTs in which all functional layers were deposited by inkjet printing. 42
Two different commercially available silver-nanoparticle inks were printed to define the source/drain contacts and the gate electrodes, an epoxy/nanosilica ink was used for the 4 µm-thick gate dielectric, and the amorphous polymer poly[bis(4-phenyl)(2,4,6-trimethylphenyl)amine] was employed as the semiconductor. The TFTs had a channel length of 50 µm, a carrier mobility of 0.087 cm² V⁻¹ s⁻¹ and an on/off ratio of 10² (see Fig. 6). For certain applications, such as conformable sensor arrays, a substrate with a thickness of less than 1 µm may be required. Lei et al. thus prepared sheets of paper with a thickness of 800 nm and a size of a few square centimeters by reacting microcrystalline cellulose with hexamethyldisilazane, depositing the product onto a solid substrate by spin-coating and hydrolyzing the film in acetic acid vapor. 43 On the 800 nm-thick substrates, the authors fabricated bottom-gate, top-contact TFTs based on a decomposable (natural-dye-based) semiconducting polymer with a channel length of 50 µm, a carrier mobility of 0.21 cm² V⁻¹ s⁻¹ and an on/off current ratio of 10⁵. Paper is usually manufactured from cellulose, a linear polysaccharide forming the main structural component of the cell walls of green plants, including wood and cotton. A potential alternative to cellulose for papermaking is starch, which consists of linear and branched polysaccharides and is contained in large quantities in various agricultural crops, such as rice, wheat, corn and potatoes. In 2018, Jeong et al. reported on the preparation of thin, smooth and optically transparent substrates by gelatinization of potato starch (blended with a small amount of chemically crosslinked polyvinyl alcohol to enhance the mechanical properties) and on the fabrication of bottom-gate, top-contact TFTs on these substrates. 44 The authors used a vapor-deposited parylene layer with a thickness of 870 nm as the gate dielectric and evaluated three organic semiconductors: pentacene, DNTT 46,47 and poly(dimethyltriarylamine) (PTAA). The best performance was obtained using pentacene and DNTT, for which a carrier mobility of about 0.3 cm² V⁻¹ s⁻¹ and an on/off current ratio above 10⁵ were obtained. Lee et al. 45 recently described the fabrication of bottom-gate, top-contact pentacene TFTs on Bristol board, a smooth and relatively thick type of paperboard manufactured primarily for applications in fine arts and print media. The TFTs were fabricated directly on the surface of the 2.5 mm-thick paperboard without a coating layer. The TFTs had a channel length of 150 µm, an on/off current ratio of 10³ and a subthreshold slope of 0.3 V per decade. As the gate insulator, the authors employed a 630 nm-thick stack of gelatin and gelatin mixed with iron. Due to the electrolytic and hygroscopic properties of gelatin, its permittivity varies over several orders of magnitude depending on the humidity and the frequency at which the measurements are performed, which makes it difficult to extract a meaningful value for the carrier mobility of the TFTs from their current-voltage characteristics. (The authors measured the gate-insulator capacitance at a frequency of 1 MHz and then used the capacitance determined from this measurement to calculate a carrier mobility of 8 cm² V⁻¹ s⁻¹ for the TFTs, but since the current-voltage characteristics of the TFTs were measured under quasi-static conditions, this value significantly overestimates the true carrier mobility.)
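To make the overestimation argument concrete, the short Python sketch below extracts a saturation-regime mobility from a single bias point and shows that the result scales inversely with the gate-insulator capacitance used in the calculation. All device numbers here are hypothetical placeholders, not values from Lee et al.; only the scaling is the point.

```python
# Sketch: in saturation, I_D = (W / (2L)) * mu * C_i * (V_GS - V_th)^2,
# so the mobility extracted from measured current scales as 1/C_i.
# An electrolytic insulator whose capacitance is measured at 1 MHz
# (far below its quasi-static value) therefore yields an inflated mobility.

def extracted_mobility(i_d, v_gs, v_th, c_i, w, l):
    """Mobility from one saturation-regime point, in cm^2 V^-1 s^-1."""
    return 2.0 * l * i_d / (w * c_i * (v_gs - v_th) ** 2)

# Hypothetical device and bias point (illustrative only).
W, L = 1000e-4, 150e-4                 # channel width/length in cm
I_D, V_GS, V_TH = 1e-5, -10.0, -2.0    # A, V, V

C_hf = 10e-9   # F/cm^2: capacitance measured at 1 MHz (too small)
C_qs = 1e-6    # F/cm^2: effective quasi-static capacitance (electrolytic)

print(f"apparent mobility (1 MHz C):   {extracted_mobility(I_D, V_GS, V_TH, C_hf, W, L):.2f}")
print(f"mobility with quasi-static C:  {extracted_mobility(I_D, V_GS, V_TH, C_qs, W, L):.4f}")
# The ratio equals C_qs / C_hf: a 100x capacitance error gives a 100x mobility error.
```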
Low-voltage organic TFTs and circuits on paper

Finally, we would like to briefly address the issue of low-voltage operation of organic TFTs and circuits fabricated on paper. Given that organic transistors are being developed primarily for mobile systems that will likely be powered by small batteries or solar cells, the maximum available supply voltage will be on the order of a few volts. One possibility to address this issue is the use of a thin, high-capacitance gate dielectric that allows the transistors to operate with small gate-source voltages. For this purpose, we have developed a hybrid AlOx/SAM gate dielectric with a thickness of about 5 nm and a capacitance of about 600 nF cm⁻² that allows a charge-carrier density close to 10¹³ cm⁻² to be induced in the organic semiconductor layer at gate-source voltages of about 2 to 3 V. 24,46 Fig. 7 shows photographs and measured electrical characteristics of p-channel and n-channel organic TFTs and circuits fabricated on commercially available cleanroom paper and on a banknote using this approach. The TFTs and circuits operate with supply voltages between 2 and 4 V, with small static power consumption (<100 pW per stage) and with signal propagation delays of a few microseconds per stage in 11-stage complementary ring oscillators. 47
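As a quick plausibility check of the numbers quoted above, the sketch below recomputes the induced carrier density from the parallel-plate relation n = C·V/e and maps a ring-oscillator stage delay to an oscillation frequency via f = 1/(2·N·τ). The 3 µs stage delay is an assumed illustrative value within the "few microseconds" range stated in the text, not a reported measurement.

```python
E_CHARGE = 1.602e-19  # elementary charge in coulombs

# (1) Carrier density induced by the hybrid AlOx/SAM gate dielectric.
c_diel = 600e-9   # F/cm^2, capacitance quoted in the text
v_gs = 2.5        # V, mid-range of the 2-3 V quoted in the text
n = c_diel * v_gs / E_CHARGE
print(f"induced carrier density: {n:.2e} cm^-2")  # ~9e12, close to 1e13

# (2) Ring-oscillator arithmetic: f_osc = 1 / (2 * N * tau_stage).
n_stages, tau = 11, 3e-6   # tau is an assumed illustrative stage delay
f_osc = 1.0 / (2 * n_stages * tau)
print(f"oscillation frequency: {f_osc / 1e3:.1f} kHz")
```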
6. Summary and outlook

Table 1 provides a summary of the characteristic properties and performance parameters reported in the literature for organic transistors fabricated on paper. By comparing the device-performance parameters listed in Table 1 with those commonly reported for organic transistors fabricated on plastic substrates, it can be seen that the performance of organic transistors on paper still lags behind that of the best organic transistors on plastic substrates. For example, while hole and electron mobilities of about 5 and 1 cm² V⁻¹ s⁻¹ are routinely achieved for p-channel and n-channel organic transistors fabricated on plastic substrates, respectively, the mobilities are currently smaller by a factor of about 2 to 5 for organic transistors fabricated on paper. For organic transistors fabricated on plastic substrates, on/off current ratios, subthreshold slopes and signal propagation delays of 10⁹, 62 mV per decade and 138 ns have been reported, 48 while the best values for organic transistors on paper are currently 10⁸, 90 mV per decade and 2 µs. 26,29,36,46,47 But the same comparison also suggests that the rate at which the performance of organic transistors on paper has been improved over the years is comparable to the rate at which the performance of organic transistors on plastic substrates has been improved. It is therefore not unreasonable to anticipate that paper may eventually replace plastics, at least in some applications, as the preferred substrate for organic electronics. One aspect for future work will be the further reduction of the operating voltage of the transistors, because operating voltages greater than about 10 V are unrealistic for most applications. There are more than a hundred publications in which operating voltages of 1 V or less have been reported for organic transistors fabricated on glass, silicon or plastic substrates, 49 while for organic transistors on paper, only one such report exists. 17 This may reflect the difficulty of minimizing the gate-dielectric thickness without introducing prohibitively large gate leakage on substrates with significant surface roughness, but this must be considered a solvable problem. Another interesting challenge will be the fabrication of organic permeable-base transistors 50 on paper. Organic permeable-base transistors are usually fabricated in a vertical architecture, which means that tight control of the thicknesses of the various layers in the transistors is even more critical than in planar field-effect transistors, and this will certainly lead to some interesting issues. Finally, a variety of aspects related to the integration of organic transistors into circuits and systems will need to be addressed, including device-parameter uniformity, passive components (capacitors, resistors), robust circuit design, signal integrity, memory, reliability and packaging. 51,52

Conflicts of interest

There are no conflicts to declare.

Table 1 Summary of characteristic properties and performance parameters of organic transistors fabricated on paper. The first column refers to the list of references at the end of this article (OECT: organic electrochemical transistor, FET: field-effect transistor, EGFET: electrolyte-gated field-effect transistor, FeFET: ferroelectric memory field-effect transistor, inv.: inverter, RO: ring oscillator).
AUTONOMOUS LEARNING OF ENGLISH AS A FOREIGN LANGUAGE IN A CULTURALLY INTEGRATED B-LEARNING ECOSYSTEM

The low level of oral skills in learning English as a foreign language seems to be related to the lack of spaces and opportunities to interact in dynamic learning environments, and to the fact that texts and study materials are not related to the context in which the student lives. This research answers the question of whether a b-learning ecosystem with devices for monitoring learning, integrated into cultural dimensions of the students' environment, improves the learning of oral skills in English. The proposal is based on advances in research on learning ecosystems and embodied cognition. A system is designed from the specification of learning spaces integrated into a spiral structure. An online learning environment integrates with an ecosystem with elements of Boyacá cuisine to develop communicative interactions and autonomous learning activities. The proposal is validated by taking as population grade 11 students from the Colombian system and two equivalent samples of selected students, based on previous performance in the English subject in the current school year. Statistical analysis of the results supports the positive answer to the research question and supports the importance of the cultural integration of b-learning ecosystems in foreign language learning.
INTRODUCTION

Research trends in the areas that have been grouped under the name of cognitive science are promoting new perspectives for the understanding of the different learning processes vital for human development. One of these is the learning of a second language, which may well be classified as an enhancer of new learning and which develops through cultural interaction (Heyes, 2018). The specific case of learning English is on the list of needs for the 21st century in the Colombian educational prospective (Grimaldo et al., 2019). In the Colombian case, the levels of English learning are below those desired, and satisfying this need requires changes in conventional institutional practices. Indeed, research shows that the development of skills to produce sentences and ideas with fluency and appropriate intonation requires spaces and opportunities for interaction and use of the foreign language; if the study materials are related to the context in which the student lives, the learning of the foreign language can be better anchored (Barrera, 2014; Velásquez, 2015). This research emerges from the interest in developing alternatives for improving the learning of English as a foreign language through the identification of relevant research in cognitive science; in particular, on embodied cognition, learning ecosystems and information technology. Learning ecosystems have special potential to integrate the learners' own cultural elements with new learning content and to promote competency-generating experiences, cooperative work and learning (Maldonado et al., 2019).

2. BACKGROUND AND FRAMEWORK

Embodied cognition and language

The learning of a second language can be approached from theories and studies that converge in the explanation of cognitive development from the sensorimotor experience (Wellsby & Pexman, 2014).

Learning ecosystems

Another theory that supports and provides solid foundations to approach language learning from an environmental perspective is ecology. According to Maldonado et al. (2019, p. 03), "Ecology shows learning as a property of every living being with autonomous movement, which emerges from its interaction with its environment and is sustained by networks of information exchange". Interaction with the environment is the basis of experience, and its elements promote the development of people's capacities to adapt and act on the environment in which they live. An ecological approach to language learning is related to the dynamics of the environment, as described by Van Lier (2004, p. 11) and Maldonado et al. (2019, p. 26). In the case of English as a foreign language, the learning niche is the environment whose elements are linked to the physical setting. From the ecological perspective, the student is immersed in an environment full of potential meanings that are activated as the student acts and interacts within and with it; there the apprentice finds the necessary information and sufficient stimulation to act and react in the face of various situations (Van Lier, 2000).

Communicative Affordance

The development of oral ability in the English language finds its full progress with the communicative approach, which builds on the notion of communicative competence (Wade, 2009; Hymes, 1974).
In the scenario we propose, the student activates communicative affordances at each level of each cycle, allowing him to have direct contact with his environment and peers, which facilitates the acquisition of the knowledge and information available. According to Gibson (1979), such possibilities for action offered by the environment are affordances.

B-learning Systems

B-learning is a didactic strategy that makes it easier for the student to acquire learning in an interactive way; thanks to the extensive tools available in platforms such as Moodle or Google Classroom, it allows the evaluation of learning and the planning of activities, and the student is aware of the evolution of his own learning and of his acquired skills to work independently, and is able to control and monitor his progress.

Autonomous Learning

An agent is autonomous in relation to an environment when each change in that environment offers alternative possibilities of action to which the agent can adapt. Perceiving the change in the environment, being aware of the possibilities of action, deciding on an option and controlling its execution are basic components of autonomy (Maldonado, 2012). According to Morales (2008), McGroarty (1993) and Olsen and Kagan (1992), as the student takes responsibility for his own learning, he also motivates and encourages peers to learn through the interaction and socialization of acquired knowledge in a setting where students are distributed in pairs or in small groups. Every student may learn to plan, monitor and evaluate their learning, which requires active and direct participation (Richards and Rodgers, 2001). Hubbard (2004) "indicates that autonomy is related to the ability of students to acquire the language deliberately and systematically outside the classroom with or without the guidance of the teacher, tutor or classmate" (cited in García et al., 2012, p. 12). Therefore, according to Healey (1999), "the main objective of the training is that the student can learn to manage time, place, the way forward to achieve the goal and to realize the success of their performance" (quoted in García et al., 2012, p. 12).

METHODOLOGY

The development of oral competence in learning English as a foreign language is a necessity, given the problems identified in the evaluations of Colombian schools, which run in parallel with the lack of learning ecosystems that stimulate the exercise of active oral communication. The research referred to in the previous sections suggests possibilities for structuring an ecosystem with a b-learning approach from the computer perspective and anchoring the knowledge in cultural dimensions of the environment in which students live to promote autonomous learning.

Issue

This study examines the structural design of a b-learning ecosystem and its pedagogical value for the development of oral competence. The project is developed in two stages: in the first, the b-learning ecosystem is designed and implemented, and in the second, its pedagogical value in the development of oral competence in learning English is tested.

Design of the b-learning Ecosystem for autonomous learning

This stage included three processes:

a. Design of learning spaces. A learning space is made up of a set of concepts that relate to actions. The set of concepts is the base or foundation from which the student consolidates and projects their own development and from which a process of collaboration of actors in the learning scenario is interwoven (Maldonado, 2012). This learning space is framed in a regular basic training study program in the Colombian educational system. It is formally identified as the English subject of 11th grade.
In the context of this subject, the topic chosen is typical culinary recipes from Boyacá, which determines the specific components of learning.

b. Design of affordance levels. Five interrelated levels are delimited, forming a system for verbal production in such a way that each level generates an advance in differentiation that facilitates the next level (Illustration 01). The levels are the following: 1. Vocabulary: grammatical categories are identified and objects are associated with names. 2. Propositional: the word-object association (affordance) is integrated into propositions; perception is of a dynamic structure with the learner as acting subject (propositional affordance). 3. Dialogal: integrates the previous ones in the dialogue as a new interactive affordance. 4. Narrative: integrates complex units to represent processes as systemic units. 5. History: integrates the previous levels in the action of making accounts of lived or imagined processes.

c. Finally, in the writing category, it is proposed to structure the instructional text of a typical recipe from the department of Boyacá, following the structure of the recipe text for chicken sancocho worked in the previous category. As the final product of the level, a podcast is uploaded with a brief description of the chosen recipe.

Level III: dialogal affordance. For level three, a description process was followed through a sequence of questions and answers between an announcer and an interlocutor.

Level V: story level. For the fifth level, the proposed concepts are related to the structure of a story; the student creates and narrates the entire process of selection, preparation and presentation of his typical dish, through a descriptive text both orally and in writing; to create this story, he followed a sequence of steps or actions that allowed him to build the text with meaning and coherence.

Table 5. Combine paragraphs to get a story with a start, middle, and end.

The absolute value of the mean of the control group is 0.289 higher than that of the experimental group, but the comparison using Student's t-test finds that the ratio between systematic variance and error is -1.8617, with an associated error probability of 0.0732, greater than the minimum acceptable value of 0.05. Consequently, the two groups are accepted as equivalent samples from the same population.

Instruments

To collect the information and data for analysis in this research, the following instruments were used: 1. A written test on knowledge of Boyacense culinary art and gastronomy, consisting of 15 multiple-choice questions with a single answer, applied in two ways: in the control group, on paper and in person; in the experimental group, virtually through the Quizizz platform. 2. A rubric for the evaluation of the oral presentation, in which students expose the process of making a typical recipe in front of their peers and an invited audience.

Experimentation

The two selected groups followed the same course content, but with different methodologies.

RESULTS

The oral competence of the students in learning the English language is considered the dependent variable, since the two groups, both control and experimental, focused on the development of these competences in the English language but with different methodologies. The b-learning ecosystem validated in the experimental group is considered the independent variable.

Application of Student's t-test to find the mean difference.
Table 7. Descriptive parameters for the two groups.

                      Control Group   Experimental Group
Group mean            17.2069         25.7419
Standard deviation    14.9126         18.1787
Sample size           29              31

Table 7 shows the descriptive analysis of the results in oral competence for the two groups. The conventional group shows a lower mean than the b-learning group; its standard deviation is also lower, which indicates lower performance but also less dispersion than the data of the experimental group. The t-test shows that the mean of the b-learning group is greater than that of the conventional group and that the difference is statistically significant.

Analysis of results in the written test

As a complement to the research hypothesis test, Student's t-test was applied to the results of a written test, with the number of correct responses taken as the dependent variable. However, there is no statistical support to affirm significant differences related to gender.

DISCUSSION

The results obtained demonstrate that the proposal implemented as a b-learning ecosystem that integrates elements of the students' culture for learning English as a second language is a better pedagogical solution than the conventional system followed in the institution. Pedagogy is a discipline that integrates knowledge derived from other disciplines in the search for better solutions to the training problems of students and, in this way, contributes to the improvement of educational quality. The design and implementation of the validated proposal configures learning spaces that relate language concepts with actions, forming pairs, each of which is considered a projected learning state and, therefore, a goal to achieve (Heller et al., 2006). Likewise, autonomous language learning is encouraged, allowing the student to direct and monitor their progress in the learning process. The implemented ecosystem favors the development of meaningful learning experiences in students, since by interacting and creating student-student, student-environment and student-teacher relationships, the cognitive processes of language learning are enriched and complemented. Since students learned the second language in a real context, they faced situations where they were exposed to and required to use the foreign language, and to achieve assertive communication they had to put their knowledge into practice. The scenarios and virtual learning environments supported these interactions. This work has limitations typical of an investigation carried out with given groups in an institution during the normal development of its mission. The expansion of the sample and the development of more learning units over a longer period, with students from other levels, could contribute positively to testing the external validity of the results. This entire development process is aimed at meeting the educational needs of the 21st century, anchoring the training of students in their real and daily lives, and providing instruments and strategies that allow them to seek solutions to the problems they face every day.

7. ACKNOWLEDGMENT

I want to thank the Master in ICT applied to the Educational Sciences at the Universidad Pedagógica y Tecnológica de Colombia.
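As a numerical companion to the group comparison above, the sketch below re-runs Student's t-test from the summary statistics in Table 7 only (means, standard deviations and sample sizes). This is a re-computation for illustration, assuming the reported figures, not the authors' original analysis script.

```python
# Sketch: two-sample t-test from summary statistics (Table 7).
from scipy.stats import ttest_ind_from_stats

# Table 7 values.
mean_ctrl, sd_ctrl, n_ctrl = 17.2069, 14.9126, 29   # conventional group
mean_exp, sd_exp, n_exp = 25.7419, 18.1787, 31      # b-learning group

t, p_two_sided = ttest_ind_from_stats(mean_exp, sd_exp, n_exp,
                                      mean_ctrl, sd_ctrl, n_ctrl,
                                      equal_var=True)
print(f"t = {t:.3f}, two-sided p = {p_two_sided:.4f}")
# A one-sided test of the directional hypothesis (experimental > control)
# halves the two-sided p-value when t is positive.
print(f"one-sided p = {p_two_sided / 2:.4f}")
```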
A Study of Water Treatment Chemical Effects on Type I″ Pitting Corrosion of Copper Tubes

It is known that one of the causes of pitting corrosion of copper tubes is residual carbon on the inner surface. It was confirmed that type I″ pitting corrosion of the copper tube is suppressed by keeping the residual carbon amount at 2 mg/m² or less, which is lower than that of type I′ pitting corrosion, or by removing the fine particles that are the corrosion product of galvanized steel pipes. The developed water treatment chemical was evaluated using three types of copper tubes with residual carbon amounts of 0 mg/m², 0.5 mg/m², and 6.1 mg/m². The evaluation was conducted for three months in an open-circulation cooling water system and compared with the current water treatment chemical. Under the current water treatment chemical conditions, only the copper tube with a residual carbon amount of 6.1 mg/m² showed a significant increase in the natural corrosion potential after two weeks, and pitting corrosion occurred. No pitting corrosion and no increase in the natural corrosion potential were observed in any of the copper tubes that were treated with the developed water treatment chemical. In addition, the polarization curve was measured using the cooling water from this field test, and the anodic polarization of the two cooling waters was compared. For copper tubes with a large amount of residual carbon, the current density near 0 mV vs. the Ag/AgCl electrode (SSE) increased when the developed water treatment chemical was added.

Introduction

Copper has a high thermal conductivity and is easy to process, so it is used for heat exchangers in refrigerators and air conditioning units. Copper tubes are resistant to corrosion in freshwater but may experience pitting corrosion. It has been reported that one of the causes of this pitting corrosion is the effect of the residual carbon on the inner surface of the copper tube [1][2][3][4]. The pitting corrosion of copper tubes is categorized as type I or type II. Type I pitting corrosion is further categorized into two types: type I′ pitting corrosion, which occurs when using groundwater, and type I″ pitting corrosion, which occurs when using heat storage or cooling water [5]. To prevent these types of corrosion, it has been proposed that, in the case of type I′ corrosion, the concentration of free carbonic acid be reduced to less than 15 ppm or the mean amount of residual carbon on the copper tube inner surface (hereafter referred to as "residual carbon amount") be reduced to 5 mg/m² or less [6]. On the other hand, type I″ is suppressed by reducing the residual carbon amount to 2 mg/m² or less, which is lower than that of type I′, or by filtering out fine particles that are the corrosion products of the main galvanized steel pipe [7]. However, due to the production cost of copper tubes as industrial products, the development of a water treatment chemical capable of suppressing pitting corrosion even in the presence of residual carbon is desired, rather than removing the residual carbon [8]. In this study, the corrosion suppression effect of the newly developed water treatment chemical was evaluated in a field test of an open-circulating cooling water system using copper tubes with different amounts of residual carbon. In addition, a comparison with a current water treatment chemical was performed by polarization measurement using cooling water from this field test. Table 1 shows the types of test specimens used in the field test.
The specimen material was phosphorus-deoxidized soft copper tube (JIS H3300 C1220). The residual carbon amount in the tube was controlled to three different levels: 0 mg/m² (hereafter referred to as "C_0"), 0.5 mg/m² (hereafter referred to as "C_0.5") and 6.1 mg/m² (hereafter referred to as "C_6.1"). The C_0 and C_0.5 specimens had an outer diameter of 15.88 mm, a wall thickness of 1.0 mm, and a length of 200 mm. The C_6.1 specimen had an outer diameter of 15.88 mm, a wall thickness of 0.8 mm, and a length of 200 mm. They were not subjected to degreasing or other treatments. The test specimens of the field test, which were taken out every month, were cut in half along the length, and the inner surface was inspected. After removing the scale and corrosion products from each specimen with dilute sulfuric acid, the pit depth was measured with a digital microscope (VHX-5000; Keyence). The specimens used for the polarization measurements were also phosphorus-deoxidized copper tube (JIS H3300 C1220), with four levels of residual carbon: C_0, C_0.5, 6.6 mg/m² (hereafter referred to as "C_6.6") and 13.0 mg/m² (hereafter referred to as "C_13.0"). The C_6.6 and C_13.0 specimens had an outer diameter of 15.88 mm and a wall thickness of 0.8 mm. All of these test specimens were half-cut tubes with a length of 100 mm, coated with silicone resin except for a 1 cm² test area on the inner surface, and they were degreased before the test using acetone. Figure 1 shows a schematic diagram of the test equipment installed in the cooling tower of the open-circulating cooling water system. There were two refrigerators of the same type, one of which was treated with the newly developed water treatment chemical (hereafter referred to as the "development chemical"), and the other with a current water treatment chemical (hereafter referred to as the "current chemical"). The development chemical was manufactured by Kurita Water Industries and contained an oxidant-type biofouling control agent, an azole-type anti-corrosive agent and phosphonic acid; the current chemical, manufactured by another company, contained a biofouling control agent, an azole-type anti-corrosive agent, and chelating agents. The test was conducted for three months, and the average water quality during the test period is shown in Table 3. Four sets of the three types of test specimens were installed in each of the two systems; the corrosion potential of one set was monitored every two weeks with reference to an Ag/AgCl electrode (SSE), and of the other three sets, one was taken out every month for corrosion inspection. The test equipment was designed to circulate water through the test specimens at a flow rate of 0.1 m/s and then return it to the cooling tower. Corrosion products were observed in the test specimens after one month, and these corrosion products became larger and increased in number over time. Similar to this test, pitting corrosion has been reported to occur in copper tubes with a residual carbon amount of 3 mg/m² or more in a field test of a cooling water system, which is consistent with these results [9]. The natural corrosion potential of specimen C_6.1 in system B increased significantly within two weeks. After that, although it temporarily decreased, it remained at approximately 160 mV vs. SSE, which is a noble value compared to that of the other copper tubes with low residual carbon amounts. It has been reported that the potential for pitting corrosion affected by residual carbon is approximately 100 mV vs. SCE (approximately 147 mV vs. SSE; broken line in Figure 5) [7]. Specimen C_6.1 B reached this potential after two weeks, and pitting was considered to have occurred during this period.
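The 100 mV vs. SCE to roughly 147 mV vs. SSE conversion quoted above follows from the offset between the two reference electrodes. The sketch below shows the arithmetic; the electrode potentials used are nominal 25 °C handbook values vs. SHE and depend on the filling solution, so the offset is approximate.

```python
# Sketch: converting a potential from one reference-electrode scale to
# another: E(vs B) = E(vs A) + (E_A - E_B), where E_A and E_B are the
# electrode potentials vs. the standard hydrogen electrode (SHE).
E_SCE_VS_SHE = 0.244   # saturated calomel electrode, V vs. SHE (nominal)
E_SSE_VS_SHE = 0.197   # Ag/AgCl in saturated KCl, V vs. SHE (nominal)

def sce_to_sse(e_vs_sce):
    """Convert a potential measured vs. SCE to the Ag/AgCl (SSE) scale."""
    return e_vs_sce + (E_SCE_VS_SHE - E_SSE_VS_SHE)

print(f"{sce_to_sse(0.100) * 1e3:.0f} mV vs. SSE")  # ~147 mV, as quoted
```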
Anodic Polarization Curve in Cooling Water

The natural corrosion potential in test water A was more noble than that in test water B. In addition, the current density also showed a difference near 0 mV vs. SSE for residual carbon amounts of 6.6 mg/m² or more, and the current density in test water A remained higher. Figure 8 shows the relationship between the residual carbon amount and the maximum current density in the anodic polarization measurement. It has been reported that a current density near 0 mV vs. SSE suggests the formation of an oxide film [10]. These results indicated that copper tubes in test water A were more likely to form an oxide film than those in test water B. Regardless of the residual carbon amount, the current densities of the anodic polarization curves of both test waters A and B increased significantly from approximately +120 mV vs. SSE. Therefore, under the present measurement conditions, the film breakdown potential of type I″ was considered to be approximately +120 mV vs. SSE. In test water B, the natural corrosion potential at 6.1 mg/m² reached +180 mV vs. SSE after two weeks, and pitting corrosion appeared. Thus, there was good agreement between this consideration and the field test results. As described above for the investigation results for the copper tubes and the measurement results of the natural corrosion potential, no pitting corrosion and no significant increase in potential were observed with the development chemical, regardless of the residual carbon amount. It was found that pitting corrosion was greatly affected by the residual carbon amount and the water treatment chemical used. It is currently believed that the pitting corrosion inhibition of this development chemical is due to a composite film consisting of phosphonic acid, the azole-type anticorrosive agent contained in the water treatment chemical, and dissolved components in the water [11]. This is still under investigation and will be clarified in the future.

Conclusions

The corrosion suppression effect of the developed water treatment chemical was evaluated in a field test of a cooling water system, and a comparison with a current water treatment chemical was performed by polarization measurements using copper tubes with different amounts of residual carbon. The results were as follows. 1) Regardless of the residual carbon amount, the test specimens with the developed water treatment chemical did not exhibit any pitting corrosion or a significant increase in the natural corrosion potential. 2) With the current water treatment chemical, no pitting corrosion was observed at a residual carbon amount of 0.5 mg/m² or less, but at 6.1 mg/m², a significant increase in the natural corrosion potential and the occurrence of pitting corrosion were observed. 3) Regarding the effect of the residual carbon amount, regardless of the water treatment chemical, 0 mg/m² and 0.5 mg/m² showed relatively close behavior, and 6.6 mg/m² and 13.0 mg/m² showed relatively similar behavior; the polarization behavior tended to differ between residual carbon amounts of 0.5 mg/m² or less and 6.6 mg/m² or more. 4) Regarding the difference in water treatment, for copper tubes with a large amount of residual carbon, the current density near 0 mV vs. SSE increased when the developed water treatment chemical was added. 5) Under these measurement conditions, the film breakdown potential of type I″ pitting corrosion was considered to be approximately +120 mV vs. SSE. 6) The corrosion inhibition mechanism of the development chemical will be clarified in the future.
Estimation of small area proportions under a bivariate logistic mixed model

A variety of data is of geographic interest but is not available at a small area level from large-scale national sample surveys. Small area estimation can be used to estimate parameters of target variables at detailed geographical scales based on relationships between the target variables and relevant auxiliary information. Small area estimation of proportions is a topic of great interest in many fields of study where binary variables are widespread, such as labour force, business, and social exclusion surveys. The univariate generalised mixed model with logit link function is widely adopted in this context. The small area estimation literature has shown that multivariate small area estimators, where correlations among response variables are taken into account, provide more efficient estimates than the traditional univariate approaches. However, the estimation problem for multivariate proportions has not been studied yet. In this article, we propose a bivariate small area estimator of proportions based on a bivariate generalised mixed model with logit link function. A simulation study and an application are presented to evaluate the good properties of the bivariate estimator compared to its univariate setting. We found that the extent of the improved efficiency of the bivariate over the univariate approach is associated with the degree of correlation of the area-specific random effects and with the intraclass correlation, whereas it is not strongly related to the area sample size.

Introduction

Large-scale national sample surveys are usually designed to produce precise and accurate estimates for large population domains, for example large geographical areas. However, many phenomena, such as poverty, well-being, and social exclusion, present spatial heterogeneity. Thus, policy makers in charge of implementing policies at the sub-national level ask for disaggregated estimates. Direct estimates obtained for these areas may return large variability due to small sample sizes (Rao and Molina 2015). In the last decade, there has been increasing attention to the development of efficient small area estimators based on different approaches. In particular, mixed models have become prominent in this field (Rao and Molina 2015; Rao 2003; Jiang and Lahiri 2006). Since many social phenomena are multidimensional, and thus naturally correlated (Betti and Lemmi 2013), we argue, in line with the literature, that this property can be used to further improve the efficiency of the small area estimates. This is carried out by taking the correlation in the data into account at the model estimation stage by including correlated random-area effects (Moretti et al. 2020a, 2021; Benavent and Morales 2016; Ubaidillah et al. 2019; Fabrizi et al. 2005; Moretti et al. 2020b; Guha and Chandra 2021). Importantly, this problem also finds its motivation in the statistical modelling literature. In fact, as pointed out by Klein et al. (2015), although the vast majority of regression models are implemented for each single response variable separately, modelling multivariate correlated response variables simultaneously can be extremely relevant, given that it is possible to gain detailed information on the joint stochastic behaviour of multivariate response vectors accounting for complex regression effects. Specifically, Klein et al.
(2015) propose a unified Bayesian framework for multivariate structured additive distributional regression analysis considering a large class of continuous, discrete and latent multivariate response distributions. This point is also stressed in Gueorguieva (2001), where it is discussed that there are important advantages of estimating the multivariate model over fitting separate models. In particular, these include better control over the type I error rates and potential gains in efficiency in the parameter estimates. In addition, practitioners can answer multivariate questions. The focus of this article is on the unit-level approach, where we assume that the auxiliary variables are known for all units of the sample. Fuller and Harter (1987) introduce the multivariate variance components model in small area estimation, later used by Datta et al. (1999) to estimate small area mean vectors of multiple characteristics and, particularly, in empirical best linear unbiased and empirical Bayes prediction. Molina (2009) studies a multivariate mixed model under a logarithmic transformation, and Baíllo and Molina (2009) focus on a particular case of the multivariate nested error regression model for uncorrelated random effects. Recently, Moretti et al. (2020a) investigate the multivariate small area estimation problem for latent well-being indicators, and Moretti et al. (2020b) use a parametric bootstrap to estimate the mean squared error of a multivariate Empirical Best Linear Unbiased Predictor (EBLUP) of small area means. However, all these articles focus on continuous variables only. Therefore, an important gap in the literature is how to deal with the multivariate small area estimation issue in the presence of non-continuous response variables, in our case binary variables. In fact, these types of variables are widely diffused in social surveys. For example, there are poverty and well-being indicators that are based on binary variables (Betti and Lemmi 2013). Some social indicators estimated from Labour Force Surveys by Official Statistics are also constructed on dichotomous variables (see e.g. Chambers et al. (2016)). Therefore, there is the need to estimate small area proportions as target parameters. The aim of this article is to provide a small area estimation approach to compute estimates of a multidimensional characteristic depending on correlated dichotomous response variables. Particularly, our target characteristic is a vector of small area proportions based on these binary response variables. The response is a vector of observations of K binary variables, taking values 0 or 1, for a unit nested in a small area. For example, from a sample survey two binary variables can be constructed, i.e., an employment indicator and a poverty indicator. Hence, the goal could be estimating two proportions, i.e., the proportion of unemployed persons and the proportion of people living in a household with income below a certain poverty line. In this article, we focus on the bivariate (two response variables only) small area estimation problem. Our approach extends the traditional univariate Generalised Linear Mixed Model (GLMM) with logit link function, i.e. the logistic mixed model. A pioneering work on the use of logistic mixed models in univariate small area estimation is MacGibbon and Tomberlin (1987).
The reason why we are focusing on an extension of this model is firstly motivated by the fact that the univariate model is extensively adopted and studied in national statistical agencies for a variety of estimation problems in labour force and, more widely, social surveys. However, so far, there has not been attention to the multivariate extension in this context, which shows potential from a modelling perspective. Second, the properties of small area predictors based on the univariate GLMM are well studied in both the small area estimation literature (Chandra et al. 2018; Chambers et al. 2016) and the statistical modelling literature (Coull and Agresti 2000; Rabe-Hesketh and Skrondal 2001; Berridge and Crouchley 2011). Finally, it allows for taking into account unit-level information available in the sample (auxiliary variables). Under this framework, once the model parameters are estimated, an Empirical Plug-in Predictor (EPP) under a GLMM is used to provide small area estimates of proportions. This is widely adopted in Official Statistics (Chandra et al. 2018; Molina and Strzalkowska-Kominiak 2020; Chandra et al. 2012; Salvati et al. 2012; Rao and Molina 2015). As pointed out by Chandra et al. (2018), the EPP is not the most efficient predictor under the model compared to empirical best predictors. We refer to Jiang and Lahiri (2001) for a detailed study on the Empirical Best Predictor (EBP) that minimises the Mean Squared Error (MSE) for binary response variables. However, since the EBP does not have a closed-form expression, it has to be computed via numerical approximations, which is not a straightforward exercise. For instance, the Office for National Statistics (in the United Kingdom) and the Australian Bureau of Statistics prefer the use of approximations such as the EPP (Chandra et al. 2018; Chambers et al. 2016). There are also other applications in Official Statistics, such as in the United States, where small area predictors are evaluated under the traditional univariate GLMM and EPPs are used (Slud 1999, 2004). Thus, the EPP under a GLMM is used in practice as a good alternative to the EBP (Jiang 2003). We also refer to Molina et al. (2007) and López-Vizcaíno et al. (2013) for other studies that evaluate this type of small area predictor. It is important to acknowledge that there are other modelling strategies that can be implemented in the case of correlated multivariate binary variables. For example, the multivariate probit model has also been proposed (Edwards and Allenby 2003). However, the main drawback here is that the computations involve high-dimensional integrals which cannot be solved analytically. Numerical integration methods have been proposed, but the literature has shown that these are not very accurate in the case of probit models and can be slow in high dimensions. Hence, simulation-based approaches are often implemented (Cappellari and Jenkins 2006). To overcome these problems, the multivariate logit modelling approach is often used (Bel et al. 2018). This is the focus of our research. Considering other techniques to treat compositional data, Aitchison (1982) developed a unified approach to the statistical analysis of compositional data, in which a range of methods are proposed. Interestingly, Hijazi and Jernigan (2009) investigate the Dirichlet covariate model as an alternative to the logratio techniques.
This model is of particular interest given that it is possible to simultaneously assess the effects of the covariates on the relative contributions of the different components of a particular measure (Gueorguieva et al. 2008). The remainder of this article is organised as follows. In Sect. 2, we describe the small area estimation problem and the multivariate GLMM we use to provide the predictor. In Sect. 3, we describe a parametric bootstrap approach to estimate the mean squared error of the bivariate predictor. In Sect. 4, we present a model-based simulation study and discuss its results. In Sect. 5, we show an application. We conclude the article in Sect. 6 with a final discussion and future research directions.

Notation and small area problem

We consider a target finite population U of size N partitioned into D non-overlapping small areas U_1, ..., U_D, with N_d denoting the population size of area d. From U we select a random sample s of size n, with n_d denoting the sample size in small area d, such that \sum_{d=1}^{D} n_d = n. Let y_{di} = (y_{di1}, y_{di2})^T denote the vector of the values of the k = 1, 2 variables of interest for unit i in area d, and suppose that y_{dik} is binary, i.e., y_{dik} = 0 or 1. The population parameter of interest is the vector of proportions for area d, denoted by p_d = (p_{d1}, p_{d2})^T, where the generic element related to variable k is given as follows:

p_{dk} = N_d^{-1} \sum_{i \in U_d} y_{dik} = N_d^{-1} \Big( \sum_{i \in s_d} y_{dik} + \sum_{i \in r_d} y_{dik} \Big),   (1)

where s_d denotes the sample elements and r_d the out-of-sample elements in area d. The direct estimator for the k-th small area proportion p_{dk} is given by:

\hat{p}_{dk}^{\,Dir} = \Big( \sum_{i \in s_d} w_{di} \Big)^{-1} \sum_{i \in s_d} w_{di}\, y_{dik},   (2)

where w_{di} denotes the survey weight for unit i in area d. We refer to Särndal et al. (2003) for details of the variance of (2). Estimator (2) is based on area-specific sample information only; thus, it becomes unstable when the sample size in area d is small. In particular, the direct estimates may return large variability. In addition, the estimator cannot be computed for areas with zero sample sizes. Hence, model-based small area estimation methods that 'borrow strength' across areas and from auxiliary information via the use of statistical models are used to produce accurate and precise small area estimates of the target parameter given by (1) (Rao and Molina 2015).
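A minimal sketch of the direct estimator (2), computed area by area for a binary outcome with survey weights, is given below. The toy data and variable names are illustrative only.

```python
# Sketch: weighted direct estimator of a small area proportion,
# p_dk = sum_{i in s_d} w_di * y_dik / sum_{i in s_d} w_di.
import numpy as np

def direct_proportion(y, w, area):
    """Return {area_id: direct estimate} for a binary outcome y."""
    est = {}
    for d in np.unique(area):
        mask = area == d
        est[d] = np.sum(w[mask] * y[mask]) / np.sum(w[mask])
    return est

# Toy data: two areas, binary outcome, unequal survey weights.
y = np.array([1, 0, 1, 1, 0, 0])
w = np.array([2.0, 1.0, 1.5, 1.0, 2.0, 1.0])
area = np.array([1, 1, 1, 2, 2, 2])
print(direct_proportion(y, w, area))  # unstable when n_d is small
```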
The bivariate binomial-logit mixed model

Statistical models with random area-specific effects, taking into account between- and within-area variability, are often used to build indirect small area estimators. As we stated in the Introduction, the small area EPP for proportions under a GLMM with logit link function is widely adopted in the literature and in Official Statistics. We refer to Chandra et al. (2012, 2018), Molina and Strzalkowska-Kominiak (2020), Rao (2003) and Rao and Molina (2015) for detailed discussions around this topic. In Sect. 2.1, we assumed that y_{dik} is binary, i.e., y_{dik} = 0 or 1. Suppose x_{dik} denotes the vector of observed values of p unit-level auxiliary variables (including the intercept) for unit i in area d related to y_{dik}, which is the observation of variable k for unit i in area d. Let \pi_{dik} be the probability that unit i in small area d assumes a value equal to 1 for variable k. We assume that the following bivariate GLMM with logistic link function relates \pi_{dik} to y_{dik}, for i = 1, ..., N_d, d = 1, ..., D and k = 1, 2 (Coull and Agresti 2000; Rabe-Hesketh and Skrondal 2001; Berridge and Crouchley 2011):

\mathrm{logit}(\pi_{dik}) = \log\!\Big( \frac{\pi_{dik}}{1 - \pi_{dik}} \Big) = x_{dik}^T \beta_k + u_{dk},   (3)

u_d = (u_{d1}, u_{d2})^T \sim N(0, \Sigma_u),   (4)

where \beta_k is a p-dimensional vector of regression coefficients for response k, and u_{dk} is the random area effect for area d and response k; it measures the difference between the average of the variable for area d and its average in the entire sample. Therefore, the random area effects take into account the variability that is not explained by the fixed effects. We assume that these follow a bivariate Normal distribution, as in (4), where \Sigma_u denotes a 2 × 2 unknown positive-definite variance-covariance matrix whose off-diagonal elements are the covariances between u_{dv} and u_{dj} with v ≠ j. Furthermore, we assume that y_{dik} | u_{dk} \sim \mathrm{Binomial}(1, \pi_{dik}) with \pi_{dik} = E(y_{dik} | u_{dk}). Thus, it holds that (Gueorguieva 2001; Coull and Agresti 2000; Rabe-Hesketh and Skrondal 2001):

\pi_{dik} = \frac{\exp(x_{dik}^T \beta_k + u_{dk})}{1 + \exp(x_{dik}^T \beta_k + u_{dk})}.

That is, the characteristic for unit i in area d related to a specific variable k is Bernoulli distributed conditionally on the random effects (Jiang et al. 2019). Note that when the covariances between u_{dv} and u_{dj} with v ≠ j are equal to 0, model (3)-(4) is equivalent to two separate GLMMs for the two response variables (Gueorguieva 2001). Model (3)-(4) can be written for the sample elements i = 1, ..., n_d without loss of generality. Hence, the model parameters are estimated on a random sample s drawn from U (Rao and Molina 2015). In order to estimate the model parameters, we follow the Maximum Likelihood (ML) approach. We refer to McCulloch (1994, 1997) and Booth and Hobert (1999) for the theory and to Berridge and Crouchley (2011) for its implementation. In addition, we also refer to Rabe-Hesketh and Skrondal (2008), Skrondal and Rabe-Hesketh (2004) and Gueorguieva (2001) for other practical implementations.

Small area predictor

We can now present the Empirical Plug-in Predictor of the small area proportions for area d under model (3)-(4) introduced in Sect. 2.2. This is given as follows:

\hat{p}_{dk}^{\,EPP} = N_d^{-1} \Big( \sum_{i \in s_d} y_{dik} + \sum_{i \in r_d} \hat{\pi}_{dik} \Big), \quad \hat{\pi}_{dik} = \frac{\exp(x_{dik}^T \hat{\beta}_k + \hat{u}_{dk})}{1 + \exp(x_{dik}^T \hat{\beta}_k + \hat{u}_{dk})},   (5)

where \hat{\beta}_k and \hat{u}_{dk} denote the estimates of the regression coefficients and the predictions of the random effects, respectively. In practice, the auxiliary variables are available for the sample units, while only area-specific aggregates are available for the population, e.g. from the Census or administrative data. Thus, (5) cannot be applied, and a modification is available in the literature (Rao and Molina 2015; Chandra et al. 2018). For economy of space, we write the estimator only for the generic proportion k; assuming small sampling fractions, it is given by:

\hat{p}_{dk}^{\,EPP1} = \frac{\exp(\bar{x}_{dk}^T \hat{\beta}_k + \hat{u}_{dk})}{1 + \exp(\bar{x}_{dk}^T \hat{\beta}_k + \hat{u}_{dk})},   (6)

where \bar{x}_{dk} denotes the vector of means of the auxiliary variables for the population in area d (e.g., from the Census).
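A minimal sketch of the plug-in step in (5) is shown below: sampled responses are kept as observed, and fitted probabilities are plugged in for the non-sampled units. The values of beta_hat and u_hat_d stand for estimates from a fitted bivariate logit mixed model; here they are illustrative numbers, not output from a real fit.

```python
# Sketch: empirical plug-in predictor (EPP) of one proportion in one area.
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def epp(y_sampled, X_nonsampled, beta_hat, u_hat_d, N_d):
    """EPP as in eq. (5): observed y for s_d, fitted probabilities for r_d."""
    pi_hat = expit(X_nonsampled @ beta_hat + u_hat_d)
    return (y_sampled.sum() + pi_hat.sum()) / N_d

rng = np.random.default_rng(1)
N_d, n_d = 100, 5
y_s = np.array([1, 0, 1, 0, 0])  # sampled responses in area d
X_r = np.column_stack([np.ones(N_d - n_d), rng.uniform(-1, 20, N_d - n_d)])
print(epp(y_s, X_r, beta_hat=np.array([0.05, 0.1]), u_hat_d=0.2, N_d=N_d))
```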
Mean squared error estimation via parametric bootstrap

In this section, we describe a parametric bootstrap algorithm used to estimate the Mean Squared Error of \hat{p}_{dk}^{EPP1}, denoted by MSE(\hat{p}_{dk}^{EPP1}). This type of bootstrap for GLMMs with logit link function is well known and widely studied in the literature; the reader may refer to González-Manteiga et al. (2007), where its properties are also evaluated. There are also applications of this algorithm in the literature, such as in Chandra et al. (2018) and Hobza et al. (2018). Moreover, the algorithm has been extended in small area estimation under bivariate mixed models, e.g. by Moretti et al. (2020b), following the same ideas. The parametric bootstrap algorithm steps are the following:

1. Estimate the GLMM given in Sect. 2.2 on the random sample s, obtaining the estimates \hat{\Sigma}_u and \hat{\beta}_k for k = 1, 2.
2. Generate bootstrap random area effects u_d^{*} = (u_{d1}^{*}, u_{d2}^{*})^T \sim N(0, \hat{\Sigma}_u) independently for d = 1, ..., D.
3. Generate a bootstrap population of responses y_{dik}^{*} from the fitted model, i.e. from \mathrm{Binomial}(1, \pi_{dik}^{*}) with \mathrm{logit}(\pi_{dik}^{*}) = x_{dik}^T \hat{\beta}_k + u_{dk}^{*}, and compute the bootstrap 'true' proportions p_{dk}^{*}.
4. From the bootstrap population, select the bootstrap sample s^{*} with the same area sample sizes n_d as in the original sample.
5. Re-fit the model to the bootstrap sample obtained in Step 4 and obtain the bootstrap EPP1 according to (6), denoted by \hat{p}_{dk}^{\,EPP1*(b)}.
6. Repeat Steps 2-5 B times; the bootstrap estimator for the MSE of \hat{p}_{dk}^{EPP1} is then given by:

\mathrm{mse}(\hat{p}_{dk}^{\,EPP1}) = B^{-1} \sum_{b=1}^{B} \big( \hat{p}_{dk}^{\,EPP1*(b)} - p_{dk}^{*(b)} \big)^2.   (7)

There are other bootstrap algorithms that are helpful in the case of dependent data; this is an area of ongoing research in small area estimation. Some applications of the block bootstrap can be found in Mokhtarian and Chambers (2013). The wild bootstrap is also studied in Rojas-Perilla et al. (2020) for cases with mild model failures, such as non-normality after using transformations (see also Feng et al. (2011)).
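The sketch below walks through Steps 2-6 above in code for a single response. For brevity, the GLMM re-fit of Step 5 is replaced by a crude placeholder that reuses the original estimates; in a real implementation the model would be re-estimated on each bootstrap sample with a GLMM package. All data and parameter values are illustrative.

```python
# Sketch: parametric bootstrap MSE for the EPP (one response variable).
import numpy as np

rng = np.random.default_rng(0)
expit = lambda z: 1.0 / (1.0 + np.exp(-z))

D, N_d, n_d, B = 10, 100, 5, 200
beta_hat = np.array([0.05, 0.1])                   # illustrative fitted coefficients
Sigma_hat = np.array([[0.3, 0.1], [0.1, 0.3]])     # illustrative fitted covariance
x = np.column_stack([np.ones(N_d), rng.uniform(-1, 20, N_d)])  # fixed covariates

sq_err = np.zeros(D)
for b in range(B):
    u_star = rng.multivariate_normal(np.zeros(2), Sigma_hat, size=D)  # step 2
    for d in range(D):
        pi_star = expit(x @ beta_hat + u_star[d, 0])                  # step 3
        y_star = rng.binomial(1, pi_star)
        p_true = y_star.mean()                       # bootstrap "true" proportion
        s_idx = rng.choice(N_d, n_d, replace=False)  # step 4: y_star[s_idx] would
        # feed the re-fit in step 5; here a placeholder reuses the original
        # estimates instead of re-estimating the GLMM.
        p_hat = expit(x.mean(axis=0) @ beta_hat + u_star[d, 0])       # step 5 (placeholder)
        sq_err[d] += (p_hat - p_true) ** 2
print((sq_err / B).round(4))                         # step 6: eq. (7), per area
```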
Model-based simulation study

In this section, we present the results of a model-based simulation study designed to evaluate the performance of the bivariate predictor of small area proportions compared to the univariate case under different scenarios. In addition, we evaluate the performance of the bootstrap MSE estimator described in Sect. 3; given the computational burden, this has been carried out for some scenarios only. In order to choose the settings of this simulation, we follow model-based simulation studies in small area estimation (see Chambers et al. (2016) and González-Manteiga et al. (2007)). All the computations in this section are produced in R. The mixed model parameter estimates are computed using the software developed by Crouchley (2012).

Simulation parameters

In this section, we list all the parameters used to generate the population in Sect. 4.2 (first bullet point), so that the experiment can be replicated. Two binary response variables are generated in the population according to the GLMM introduced in (3) and (4) with the following parameters:

• regression coefficients β_1 = (0.05, 1)′ and β_2 = (0.05, 2)′, where the first element of each vector is the intercept;
• area-specific random effects generated from a bivariate Normal distribution (according to the assumption in models (3) and (4)): u_d ∼ N(0, Σ_u), with variance-covariance matrix

$$\boldsymbol{\Sigma}_{u} = \begin{pmatrix} \sigma_{u1}^{2} & \rho_{u}\,\sigma_{u1}\sigma_{u2} \\ \rho_{u}\,\sigma_{u1}\sigma_{u2} & \sigma_{u2}^{2} \end{pmatrix},$$

where σ²_u1 and σ²_u2 denote the variances of the random effects related to responses 1 and 2, respectively, and ρ_u denotes the correlation coefficient. We choose two realistic levels of correlation (small and large), i.e. ρ_u = {0.09, 0.40}. The values of the variances σ²_u1 and σ²_u2 are chosen as a function of the intraclass correlation, as is common practice in model-based simulation studies in small area estimation (see e.g. Moretti and Whitworth (2020), Moretti et al. (2020a) and Burgard and Münnich (2014)). Indeed, the variability of small area estimators depends on this coefficient (Moretti and Whitworth 2020; Moretti et al. 2020a), and it affects the accuracy of mixed model parameter estimates (see Goldstein (2011)).

The intraclass correlation coefficient, denoted by ICC_k for variables k = 1, 2, gives information on the partition of the total variance into its between-area and within-area components. In particular, it measures the degree of homogeneity of the units belonging to the same area and lies between 0 and 1. The intraclass correlation of variable k = 1, 2 in the case of generalised mixed models with logit link function is given as follows (Guo and Zhao 2000):

$$ICC_{k} = \frac{\sigma_{uk}^{2}}{\sigma_{uk}^{2} + \pi^{2}/3},$$

and the values used here are outlined in Table 1. As pointed out in Moretti and Whitworth (2020), in the social sciences the intraclass correlation does not often take very large values; for example, for economic well-being indicators Moretti et al. (2021) note an ICC close to 0.20. In medical or agricultural applications, by contrast, the intraclass correlation coefficient can reach large values (Koo and Li 2016; Pleil et al. 2018). Regarding ICCs in health indicators we also refer to Castelli et al. (2013), where large ICCs, e.g. about 0.40, are noted.

The population size in each small area d is N_d = 100, and the number of areas is equal to D = 50. We keep this scale small for computational reasons. The auxiliary variable is generated from a Uniform distribution, i.e. x_di ∼ Unif(−1, 20), and it is kept fixed over the simulations. Table 1 shows the scenarios that we consider in this simulation study.

Simulation steps

The simulation consists of the following steps, where l = 1, ..., L, with L = 500, indexes the repetitions:

1. Generate the population values of the two binary responses under the model and parameters of Sect. 4.1.
2. Draw a sample and compute the direct estimates as well as the univariate and bivariate EPP1 estimates of the small area proportions.
3. For some scenarios only (A, C, F, see Table 1 for the details), evaluate the bootstrap MSE estimator described in Sect. 3.
4. Calculate the following measures of performance in order to evaluate the estimators for k = 1, 2 in both the univariate and bivariate case (here, ŷ_dk^(l) denotes any estimator of the true small area proportion ȳ_dk^(l), for proportion k and area d):

Absolute Relative Bias (ARB):

$$ARB_{dk} = \left| L^{-1}\sum_{l=1}^{L}\frac{\hat{y}_{dk}^{(l)} - \bar{y}_{dk}^{(l)}}{\bar{y}_{dk}^{(l)}} \right| \times 100,$$

Relative Root Mean Squared Error (RRMSE):

$$RRMSE_{dk} = \frac{\sqrt{L^{-1}\sum_{l=1}^{L}\left(\hat{y}_{dk}^{(l)} - \bar{y}_{dk}^{(l)}\right)^{2}}}{L^{-1}\sum_{l=1}^{L}\bar{y}_{dk}^{(l)}} \times 100,$$

% Relative Reduction in terms of RMSE (RelRed%):

$$RelRed_{dk} = \frac{RMSE_{dk}^{\,biv} - RMSE_{dk}^{\,univ}}{RMSE_{dk}^{\,univ}} \times 100. \qquad (14)$$

Equation (14) can be seen as a measure of efficiency, since our hypothesis is that the RMSE of the bivariate small area estimator is smaller than that of its univariate counterpart (Moretti et al. 2020a). In order to present summary statistics, the median across the D small areas is shown as a robust central tendency measure that avoids the impact of extreme values in some small areas (Giusti et al. 2014). The mean values across the small areas are also presented in parentheses in the outputs; in this case, the same notation as above is used but the index d is dropped.
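To make these settings and measures concrete, here is a small R sketch (consistent with the authors' use of R, though not their code): the first function inverts the ICC formula above to obtain the random-effect variance for a target ICC, and the remaining functions compute the performance measures from simulation output stored as L × D matrices, which is an assumed layout.

```r
# Variance implied by a target ICC under the logit model:
# ICC_k = sigma2_uk / (sigma2_uk + pi^2 / 3)  =>  solve for sigma2_uk.
sigma2_from_icc <- function(icc) icc * (pi^2 / 3) / (1 - icc)
sigma2_from_icc(0.20)  # random-effect variance giving ICC = 0.20

# Performance measures of Sect. 4.2. est and truth are L x D matrices of
# estimated and true proportions (rows: repetitions, columns: areas).
arb    <- function(est, truth) abs(colMeans((est - truth) / truth)) * 100
rmse   <- function(est, truth) sqrt(colMeans((est - truth)^2))
rrmse  <- function(est, truth) rmse(est, truth) / colMeans(truth) * 100
relred <- function(est_biv, est_univ, truth) {
  (rmse(est_biv, truth) - rmse(est_univ, truth)) / rmse(est_univ, truth) * 100
}
```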
Results

In this section, we present the results of the simulation study. In order to present results relevant to users, we split this section according to the focus of the discussion. In particular, we investigate how the correlation in the variance-covariance matrix of the random effects, the intraclass correlation and the sample size impact the quality measures.

Table 2 shows the evaluation of the bivariate model parameter estimates of β_k and Σ_u. In order to evaluate their quality, we show the averages of the estimates across the L simulations. The biases were negligible, i.e. very close to zero, hence they have been omitted. σ̂²_u1, σ̂²_u2 and ρ̂_u are the averages across the simulations of the elements of Σ_u. These results can be compared to the true values used to generate the population, given in Table 1 above.

Role of the correlation ρ_u (Scenarios A and B)

In this section, we present the results on the impact of the correlation coefficient ρ_u between the random effects on the quality measures. In particular, we focus on scenarios A and B (Table 1), with ρ_u = {0.09, 0.40}. Table 3 shows the median across the small areas of the RRMSE, and Table 4 shows the median across the small areas of the absolute relative bias of the estimators for both the ρ_u = 0.09 and ρ_u = 0.40 cases. Mean values are shown in parentheses.

Overall, looking at Table 3 and in line with the literature, we can see smaller RRMSE values when model-based small area estimates are used instead of direct estimates. We can also see that the bivariate predictor provides estimates with smaller RRMSE than the univariate one, in line with the bivariate small area estimation literature for continuous response variables (see e.g. Datta et al. (1999) and Moretti et al. (2020a)). In addition, it can be seen from Table 4 that both predictors provide small area estimates with negligible ARB: both the bivariate and univariate predictors produce estimates with a very small bias. We also calculated the relative bias; these values are likewise close to zero, hence negligible, and we therefore present the ARB only.

Table 5 shows the median values across the areas of the percentage relative reduction in terms of root mean squared error of the small area estimates for varying ρ_u, i.e. ρ_u = {0.09, 0.40}; mean values are shown in parentheses. For example, in Table 5 RelRed(ŷ_1) denotes the % relative difference in terms of RMSE of the bivariate predictor over the univariate predictor for proportion k = 1. Larger gains in efficiency are obtained when ρ_u = 0.40. When ρ_u becomes smaller, ρ_u = 0.09, the bivariate estimator still performs well in terms of efficiency relative to the univariate estimator. If ρ_u = 0, the bivariate case corresponds to the univariate case and the performances of the estimators would be the same (see e.g. Datta et al. (1999)).

Role of the intraclass correlation ICC_k (Scenarios B, C, D, E)

In this section, we present the results on the impact of the intraclass correlation on the quality measures of the estimators. In particular, we consider scenarios B, C, D and E of Table 1, where different levels of intraclass correlation are selected. We show in Table 6 the % relative reductions in terms of RMSE of the small area estimators under different levels of ICC, indicated again in the table so that the results can be compared easily. It can be seen that the largest gains in efficiency of the bivariate predictor over the univariate predictor are obtained when the intraclass correlation is large: in this case, the variables borrow more strength from each other, achieving a larger reduction in terms of RMSE. The smaller the intraclass correlation, the greater the improvement of the model-based estimates over the direct estimates; thus, we expect the univariate estimates to achieve a large gain in efficiency relative to the direct estimates already, and the further reductions in terms of RMSE of the bivariate estimates compared to their univariate counterparts are modest here. Nevertheless, as shown in Table 6, there are still gains in efficiency from using the bivariate estimator over its univariate counterpart in the case of smaller ICC (see scenario E). We present in Table 7 the ARB of the small area estimators under different levels of ICC. We can still see that the small area estimates produced by the model-based estimators show a negligible ARB; smaller biases are observed in the case of larger ICC.

Role of the area sample size n_d (Scenario F)

We now present the results of scenario F, which, we remind the reader, is the scenario where the sample size in area d, n_d, varies across areas, i.e. between 1 and 10. This allows us to evaluate the impact of n_d on the small area estimates. As a further exercise, we also ran a small-scale simulation study where n_d varied between 20 and 50, and we observed similar patterns to those found here under scenario F.
Those results are omitted to save space. We found a moderate relationship between the percentage relative reductions in terms of RMSE and the small area sample size: the estimated Pearson correlation coefficients between n_d and RelRed(ŷ_1) and RelRed(ŷ_2) are modest, equal to −0.163 and −0.129, respectively. This means that when the area sample size becomes larger, the percentage relative reductions in terms of RMSE of the bivariate predictor over the univariate predictor become smaller; however, this relationship is not strong. The median values across the areas of the % relative reductions are equal to −11.607% and −41.959% for k = 1 and k = 2, respectively.

We present the ARB and RRMSE of the model-based small area estimators in Table 8. This shows that the estimates present a negligible bias and that there is a gain in efficiency from using the bivariate estimator over the univariate predictor, consistent with the results of the sections above. We also depict in Figs. 1 and 2 the RRMSE of the model-based estimators, univariate versus bivariate, for the small area means of the responses k = 1, 2, respectively. The estimates are ordered by growing sample size in area d. The dotted line shows the RRMSE for the bivariate estimator, whereas the continuous line shows the RRMSE of the univariate estimator. As noted above, we can see gains in efficiency when the bivariate predictor is used, and these are larger for the second response, given that its ICC is larger than that of response k = 1 (Fig. 2).

Evaluations of the bootstrap MSE

In this section, we present the evaluation of the bootstrap MSE estimator. Since simulation studies to evaluate bootstrap MSE estimators are computationally heavy, we focus on some of the scenarios only, i.e. A, C, and F. Table 9 shows the Empirical MSE (EMSE), the average values of the bootstrap MSE across the simulations and their ARB. Median (and average) values across the small areas are presented as summary statistics, as in the tables above. By looking at Table 9 it can be seen that the bootstrap algorithm presented in this article provides good estimates of the EMSE of the bivariate small area estimators: the average of the bootstrap MSE across the simulations approximates the EMSE well. In addition, the ARB values are negligible (i.e., very close to zero). This is in line with other studies on bootstrap MSE estimation in bivariate small area estimation (see Moretti et al. (2020b)).

Final remarks on the simulation study

This simulation study shows good performances of the bivariate small area predictor under all the scenarios considered. By good performance, we mean that the bivariate approach does not introduce bias in the estimates while providing estimates with smaller variance; in other words, the bivariate small area estimates are more efficient than the univariate small area estimates. Larger gains in efficiency are obtained when the correlation ρ_u in Σ_u is larger, but even when the correlation is smaller, equal to 0.09, we can still see good gains in efficiency, i.e. a smaller mean squared error. The bivariate predictor also provides more efficient estimates when the intraclass correlation increases; when the ICC is small, we notice a smaller gain, but the results are still satisfactory.
We do not find a strong relationship between the small area sample size and the gains in efficiency; indeed, the relationship was moderate under the scenarios considered.

Application

In this section, we present an application in which the performances of the univariate EPP are compared to those of the bivariate EPP. For the estimators, i.e. the EPP and the bootstrap MSE for the univariate case, we refer to Chandra et al. (2018), Molina and Strzalkowska-Kominiak (2020), and Rao and Molina (2015).

Data

Data from Lehtonen and Veijanen (2016) are used; they are available from Pratesi (2016). The data were derived from the AMELIA dataset (see Burgard et al. (2017) and Lehtonen and Veijanen (2016) for the details). AMELIA is a synthetic dataset that allows for comparative and reproducible research. The aim of the project was to generate a synthetic and realistic dataset based on European Union Statistics on Income and Living Conditions (EU-SILC) variables. Although it is not a real dataset, it mimics the statistical properties of the underlying real data (Burgard et al. 2017). In particular, the sample size is equal to n = 2000 and we use the "Districts" (D = 40) as small areas, with sample sizes ranging between 25 and 84 and an average sampling fraction f_d = 0.002.

We create two binary variables whose proportions are the target parameters for which we produce the small area estimates. Based on the variable RB210 (basic activity status), we create a binary variable called 'Employed', Y_1, taking value 1 if the unit is employed and 0 otherwise. We also create another variable, Y_2, called 'Poor'. This variable takes value 1, denoting that the unit is poor, if the income of the unit is below the poverty line, calculated as 60% of the median income (Chatterjee 2011).

Small area estimates

The target parameters are the proportion of employed people and the proportion of people with an income below the poverty line. We compute the direct estimates and their standard deviations using the survey weights according to (2); these are denoted by p̂_dk^Dir. We also check the normality of the random effects estimated from both the univariate and bivariate models: the Kolmogorov-Smirnov test (with α = 0.05) is performed on the area-specific random effects predicted for both the univariate and bivariate GLMMs. The null hypothesis of the test is that the data are normally distributed. The results, with p-values in parentheses, for the univariate case are 0.161 (0.224) and 0.091 (0.864) for employed and poor, respectively, and for the bivariate case 0.068 (0.986) and 0.103 (0.755) for employed and poor, respectively. Given that the p-values are larger than α = 0.05, we cannot reject the null hypothesis, and we can say that the distributions of the random effects are not statistically different from the normal distribution.

Figures 3 and 4 show the Relative Root Mean Squared Error (RRMSE) % of the small area estimates for the employed and poor proportions, respectively, ordered by growing sample size. The RRMSE of the direct estimates can be approximated by the coefficient of variation (standard deviation divided by the direct estimate) (Rao and Molina 2015). It can be seen that, in line with the simulation study, the use of the bivariate GLMM provides more efficient estimates than the univariate model for both small area proportions. The median percentage relative reductions in terms of RMSE across the areas are 48.8% for employed and 26.4% for poor, showing important gains in efficiency. We can also see that the RRMSE% estimates of the small area proportions obtained via the bivariate predictors are all below 20% and are thus reliable by the standards of many statistical agencies (see, for example, Commonwealth Department of Social Services (2015)).
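The data construction and checks described above are straightforward to express in R. The sketch below uses mock data in place of the AMELIA sample; the objects amelia and u_hat are illustrative assumptions, as is the Hajek form of the weighted direct estimator.

```r
set.seed(1)
# Mock stand-in for the AMELIA sample: income, survey weight, district
amelia <- data.frame(
  income   = rlnorm(2000, meanlog = 10),
  w        = runif(2000, 100, 500),
  district = factor(sample(1:40, 2000, replace = TRUE))
)

# 'Poor' indicator: income below 60% of the median income
poverty_line <- 0.6 * median(amelia$income)
amelia$poor  <- as.integer(amelia$income < poverty_line)

# Hajek-type weighted direct estimates of the poor proportion per district
p_dir <- with(amelia, tapply(w * poor, district, sum) / tapply(w, district, sum))

# Kolmogorov-Smirnov normality check on predicted random effects
u_hat <- rnorm(40, sd = 0.3)  # placeholder for predictions from the GLMM
ks.test(as.numeric(scale(u_hat)), "pnorm")
```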
Conclusion

In this article, we studied the bivariate small area estimation problem for proportions. This is an important problem in applications, since many variables are binary in nature; for example, many variables related to the labour force, deprivation, poverty and health are binary. Here, we focus on the problem of providing small area estimates based on sample surveys that are not representative for small domains. We recognise that the use of geo-referenced administrative data is still important for studying social phenomena; however, there might be privacy and confidentiality issues regarding access to such data at the small area level, and the availability of administrative data varies across countries. Future research will take this into account. Sample surveys, such as the European Statistics on Income and Living Conditions (EU-SILC), remain very important for studying social phenomena, since they contain crucial information on poverty and social exclusion variables, and they can be used to estimate a large variety of poverty indicators, such as the Laeken indicators (see also Betti and Lemmi (2013)).

In this work, we compared the univariate empirical plug-in predictor (EPP) under a unit-level generalised linear mixed model (GLMM) with logit link function to its bivariate extension. As mentioned in the Introduction, the univariate predictor is used by statistical agencies given its good properties and simplicity. The performances of the small area estimators were compared via a model-based simulation study and an application. Our results show that the use of the bivariate generalised mixed model provides more efficient small area estimates of proportions compared to its univariate counterpart in all scenarios considered. We found that good gains in efficiency can also be seen when the correlation of the area-specific random effects is small, e.g. ρ_u = 0.09. This is an important result, since in applications correlations may be small. Of course, as expected, larger gains are obtained when the correlation is large. It is however important to stress that the performance of the multivariate estimator also depends on the intraclass correlation coefficient: the gains in efficiency become larger when the intraclass correlation increases. Thus, a larger correlation between the random effects does not always guarantee a large reduction in the mean squared error. We did not find a large effect of the area sample size on the quality measures considered; in fact, the relationship is rather modest. We can also see that the RRMSE% values of the small area proportions obtained via the bivariate predictors are all below 20%, and thus reliable for many statistical agencies, for example Statistics Canada (Spagnolo et al. 2018). Our findings are in line with those from multivariate small area estimation for continuous outcome variables (Moretti et al. 2020a, b); these results have been studied theoretically in Datta et al. (1999) for the Normal case. However, (Normal) continuous variables are rarely present in real data, especially in the poverty and well-being field.
Our results lay the basis for extending the use of bivariate generalised mixed models in small area estimation of social indicators, given the different types of variables that are available in social surveys (e.g., binary, count, ordinal, etc.). In practice, when multivariate regression models are applied, users need to consider model selection issues, taking into account the choice of a response distribution and of the predictors. The reader may refer to Klein et al. (2015), where the authors discuss guidelines that facilitate model choice in the presence of multivariate models. In summary, regarding model choice in multivariate distributional regression, the Deviance Information Criterion (DIC) (Spiegelhalter et al. 2002), normalized quantile residuals (Dunn and Smyth 1996), and proper scoring rules (Gneiting and Raftery 2007) have been studied in the literature. In order to evaluate sensitivity to distributional choices, we also recommend performing simulation studies, which can be based on the real data that users aim to use in their applications.

The focus of this article was the bivariate case; however, extensions to more than two outcomes can be derived, although computational problems then need to be considered. In the multivariate small area estimation literature, Moretti et al. (2020a) consider four responses for the continuous case and show good performances (in terms of bias and mean squared error) of the multivariate approach compared to the univariate approach. To overcome issues where the aim is to estimate many single indicators, they propose the use of data dimensionality reduction techniques, such as factor analysis; the multivariate small area estimation problem can thus be reduced to a small number of variables (two variables in their application on well-being). Their work could usefully be extended to the binary variable scenario. In case one is interested in estimating one indicator only, other small area estimation techniques can be used to improve the small area estimates obtained via traditional methods; for example, spatial models that borrow strength from related small areas can produce more reliable estimates (Pratesi and Salvati 2008).

We argue that the use of bivariate small area estimators is very useful for data users. In fact, the available auxiliary variables may not be good enough to explain the between-area variation: users might be restricted in the choice of these variables for privacy and confidentiality reasons, and thus need to rely on covariates with limited explanatory power. As pointed out in Benavent and Morales (2016), the use of complex statistical modelling that takes into account additional relationships between variables can produce small area estimates of higher quality, i.e. in terms of precision, than simpler models such as univariate models.

Since multivariate small area estimation of proportions is a field under investigation, there are still areas that need to be explored. In particular, in this article we considered a parametric bootstrap approach to estimate the mean squared error of the small area predictors; this is a well-known algorithm applied in small area estimation (see Rao and Molina (2015)). However, analytical methods based on linearization techniques still need to be studied in small area estimation under multivariate GLMMs, so that comparisons with the bootstrap mean squared error can be carried out, as is common practice in small area estimation. This topic is highly challenging under multivariate GLMMs.
Model-robust approaches related to parametric bootstrap methods under GLMMs in the multivariate setting are also an interesting area of future research. Future research will also consider other bootstrap approaches under this model, e.g., the wild and block bootstraps, which have been studied in the literature for other types of models (see Mokhtarian and Chambers (2013); Rojas-Perilla et al. (2020)). The use of generalised models with other link functions in small area estimation is another interesting topic for further work. Spatial extensions of this model, in which the random area effects depend on a spatial process, also need to be considered, as they may improve the small area estimates. In the multivariate small area estimation literature there is some work on the use of spatial models, but only for the area-level case (see for example Porter et al. (2015)). There is potential to use these models in the context of unit-level multivariate small area estimation as well.
Spontaneous Healing of Mycobacterium ulcerans Lesions in the Guinea Pig Model Buruli Ulcer (BU) is a necrotizing skin disease caused by Mycobacterium ulcerans infection. BU is characterized by a wide range of clinical forms, including non-ulcerative cutaneous lesions that can evolve into severe ulcers if left untreated. Nevertheless, spontaneous healing has been reported to occur, although knowledge on this process is scarce both in naturally infected humans and in experimental models of infection. Animal models are useful since they mimic different parts of the spectrum of human BU disease and have the potential to elucidate the pathogenic/protective pathway(s) involved in disease/healing. In this time-lapsed study, we characterized the guinea pig, an animal model of resistance to M. ulcerans, focusing on the macroscopic, microbiological and histological evolution throughout the entire experimental infectious process. Subcutaneous infection of guinea pigs with a virulent strain of M. ulcerans led to early localized swelling, which evolved into small, well-defined ulcers. These macroscopic observations correlated with the presence of necrosis, an acute inflammatory infiltrate and an abundant bacterial load. By the end of the infectious process, when the ulcerative lesions had healed, M. ulcerans viability decreased and the subcutaneous tissue organization returned to its normal state after a process of continuous healing characterized by tissue granulation and reepithelialization. In conclusion, we show that experimental M. ulcerans infection of the guinea pig mimics the process of spontaneous healing described in BU patients, displaying the potential to uncover correlates of protection against BU, which can ultimately contribute to the development of new prophylactic and therapeutic strategies.

Introduction

Buruli Ulcer (BU) is a necrotizing skin disease caused by Mycobacterium ulcerans. M. ulcerans pathogenesis is mediated by mycolactone, a potent polyketide-derived macrolide that triggers apoptotic cell death [1,2]. Initial pre-ulcerative BU lesions are indolent, with no systemic symptoms, which often results in a delay in health-care seeking. If lesions are left untreated, they can evolve into severe extensive ulcers with characteristic undermined edges and a necrotic sloughed center [3]. In the most extreme cases, M. ulcerans may invade the bone [4,5]. Spontaneous healing of active lesions is thought to be frequent [6][7][8], although the rate at which this occurs has not been specifically addressed. Therefore, all forms of BU are subjected to the standard first-line treatment, consisting of daily administration of streptomycin and rifampicin for an 8-12-week period [9]. Additional surgical debridement and skin grafting may also be required in the case of extensive lesions. The healing process of severe lesions, whether spontaneous or antibiotic-induced, often results in atrophic scarring, contractures, and severe functional disabilities, ultimately having a negative impact on socioeconomic development [10]. This is why prevention, early detection and treatment have been a priority for the control of BU. Therefore, a better knowledge of the nature of protective responses against M. ulcerans could improve patient management. In that sense, animal models of infection have been extensively used, not only to characterize M.
ulcerans disease progression [11], but also to dissect immunological pathways [12][13][14][15][16] and to evaluate the efficiency of antibiotic regimens [17][18][19][20][21] or vaccine candidates [22][23][24][25][26][27]. The most common animal model used for the study of M. ulcerans infection has been the mouse, in which experimental infection was shown to be progressive, leading to an exponential increase in bacterial burdens, a continuous destruction of infiltrating cells, expansion of necrosis into healthy tissue and ulceration of the epidermis, similar to what is observed in active, progressive human BU [12,28]. The guinea pig model has also been used for the study of M. ulcerans infection [1,11,[29][30][31]; however, definitive experimentation on the nature of disease progression in this animal model has been limited. Therefore, in this study, we proposed to perform a detailed characterization of the progression of M. ulcerans infection in the guinea pig model, focusing on the macroscopic, microbiological and histological evolution.

Spontaneous healing of ulcerative lesions and clearance of M. ulcerans in infected guinea pigs

The mouse has been the animal model of choice to characterize progressive M. ulcerans disease [11], to dissect immunopathologic pathways [12][13][14][15][16] and to evaluate the efficiency of antibiotic regimens [17][18][19][20][21] or vaccine candidates [22][23][24][25][26][27]. Although less explored, the guinea pig model has also been used to study M. ulcerans infection, with published reports focusing mainly on the evolution of the macroscopic features and on the histopathology of early time points [11,[29][30][31]. In this study, we decided to explore this overlooked animal model by performing a time-lapsed study of the macroscopic, histological, and microbiological evolution of M. ulcerans infection in the ear or the back of the guinea pig. Initially, subcutaneous infection with M. ulcerans 98-912 in the ear or back of guinea pigs led to early focal indurated erythema (Fig 1A, 1B, 1G and 1H), reminiscent of a nodule-like structure. During the course of infection, these lesions progressed to small ulcers with overlying scabs (Fig 1B, 1C, 1I and 1J). Between days 18-26, the scabs detached, revealing a healed and reepithelialized lesion, with occasional scarring (Fig 1D-1F and 1K). An important observation was that these macroscopic alterations were accompanied by bacterial clearance, regardless of the site of infection (Fig 2A and 2B).

Histological analysis of the healing process in M. ulcerans-infected tissue of guinea pigs

In line with the macroscopic observations described above, histological analysis of the infected skin of the guinea pig showed signs of a dynamic infectious process culminating in tissue repair and bacterial clearance (Figs 3 and 4). During the first week of infection, the underlying dermis was edematous (Figs 3A and 4B) and focally necrotic (Figs 3A, 3B and 4A-4F), with diffuse infiltration of neutrophils and histiocytes (Fig 4E). Bacteria were mainly concentrated in clumps within the necrotic foci, but could also be found co-localized with phagocytic cells (Fig 4C and 4F). These necrotic areas expanded progressively during the first week of infection, resulting both in an increase in cells with apoptotic morphology (Fig 4D) and in an initial epidermal destruction (Fig 4B).
However, from the second week of infection onwards, signs of tissue repair became evident (Fig 3C and 3D), with accumulation of fibroblasts in the subcutaneous tissue (Fig 4G and 4J), intense neovascularization (Fig 4H and 4K) and granulation tissue that gradually replaced the early acute cellular infiltrate (Fig 4G, 4H, 4J and 4K). Analysis of the infected tissue also indicated an ongoing process of skin reepithelialization, evidenced by the formation of an eschar composed of degenerating epidermis and collagen over the ulcerated wound, and also by the thickening of the adjacent epidermal layer (Fig 3C and 3D). Interestingly, few bacilli were scattered throughout the granulation tissue, being mainly found near the wound surface (Fig 4I and 4L). By the end of the experimental period of infection (day 26-35), reepithelialization was complete (Fig 3E and 3F), although fibroplasia (increased deposition of collagen fibers) (Fig 4M, 4N, 4P and 4Q) and epidermal hyperplasia (elongation of rete ridges) (Fig 3E and 3F) were still evident. In accordance with the CFU counts shown in Fig 2, rare or no bacilli could be observed in the infected tissue by day 26 post-infection and until the end of the experimental period (Fig 4O and 4R).

Discussion

M. ulcerans is a human pathogen which has recently been shown to naturally infect other mammals in Australia [32][33][34][35]. Regarding experimental models of infection, early studies established the capacity of M. ulcerans to infect a variety of mammals, namely mice and guinea pigs [11,29,30]. It has been well demonstrated that experimental M. ulcerans infection in the murine model mimics the main features of active, progressive BU [36] (S1 Fig). Indeed, infection with virulent M. ulcerans strains induces a persistent acute inflammatory response throughout infection [12]; however, mycolactone destroys the recruited inflammatory infiltrates, generating necrotic acellular areas with extracellular bacilli released by the lysis of infected phagocytes [12,28]. These necrotic areas gradually expand through the progressive invasion of healthy tissues, eventually resulting in ulceration of the epidermis [12,28]. The guinea pig model, on the other hand, has been used only occasionally in BU research, namely to characterize the pathogenicity of M. ulcerans and its toxin [1,11,[29][30][31]. It has been described that, following intradermal infection, guinea pigs developed initial localized swelling followed by the appearance of ulcerative lesions. Histologically, these lesions were characterized by necrosis, clumps of AFB and diffuse infiltration of polymorphonuclear cells and histiocytes [29,30]. Interestingly, these ulcerative lesions ultimately healed, although further analysis of the healing phase was not explored [29,30]. In the present study, we confirm the previous observations in the guinea pig model and extend these findings with a time-lapsed histological and microbiological analysis of M. ulcerans-infected tissues. At early time points, infected guinea pig tissue showed focal necrotic areas harbouring intense bacterial loads, compromised epidermal layers, and recruitment of neutrophils and mononuclear cells to the infectious foci. This acute inflammatory profile was gradually replaced by granulation tissue, and it is during this transition phase that we observe a decrease in M. ulcerans viability. By the end of the infectious process, we observed complete reepithelialization of the wound surface and scarce/absent bacilli in the tissue.
This study contributes a time-lapsed view of this healing process.

Fig 3. HE staining of infected guinea pig ears collected at different time points. Magnification of images 4x. At day 4 (A) and day 8 (B), edema (dotted line) and necrosis (full line), associated with a surrounding inflammatory infiltrate, were the predominant features. By day 8, necrosis had expanded throughout the tissue, resulting in epidermal ulceration (triangles). Day 14 (C) and day 18 (D) were characterized by the presence of an eschar (arrows) over the wounded surface and epidermal hyperplasia (asterisks). By day 26 (E) and day 35 (F), complete reepithelialization was evident, as well as epidermal hyperplasia and elongation of rete ridges (asterisks). doi:10.1371/journal.pntd.0004265.g003

Similar observations have been made in BU patients who spontaneously resolved M. ulcerans infection before developing extensive destructive lesions [6][7][8]. Connor and Lunn were the first to propose three histopathological stages in human M. ulcerans infection: the stage of necrosis, the organizing stage, and the healing stage [6], all of which were identified in the guinea pig tissue during experimental M. ulcerans infection. Additionally, Revill et al. reported the spontaneous healing of small BU lesions in 29% of the clinically diagnosed patients receiving placebo during a randomized study of clofazimine therapy [7]. More recently, Gordon et al. reported a case in which a patient became IS2404 PCR, ZN and culture negative one month after laboratory confirmation of the initial BU diagnosis [8]. This decrease in bacterial viability also coincided with the presence of a mixed inflammatory cell infiltrate and granulation tissue [8]. Taking these similarities into consideration, studies in the guinea pig model have the potential to disclose the protective mechanisms underlying resistance to BU in humans. More recently, other animal models of M. ulcerans infection have been proposed to better understand the early host-pathogen interactions and the pathogenesis of BU, namely the monkey [37] and pig models [38]. The cynomolgus monkey developed papules that progressed to ulcers with undermined borders after M. ulcerans infection. Histological analysis of biopsies from ulcer edges showed necrosis, robust inflammatory infiltrates, granulomatous-like responses, mild edema, and extracellular acid-fast bacilli [37]. Similarly to the guinea pig, the ulcers eventually healed spontaneously, with no signs of systemic infection [37]. On the other hand, in the porcine model, infection with M. ulcerans leads to the development of nodular lesions that subsequently progress to ulcers [38], similar to what we describe here for the guinea pig. However, in the porcine model only low infectious doses of M. ulcerans led to spontaneous healing of the ulcerative lesions, while high doses resulted in progressive infection during the experimental period [38]. Among these animal models of resistance, the guinea pig presents several advantages, namely its smaller size and lower cost and maintenance requirements, making it a more accessible model for studying mechanisms of protection against M. ulcerans infection. In conclusion, both the mouse and the guinea pig are useful experimental models since they mimic different parts of the spectrum of human M. ulcerans disease, thereby contributing to elucidating the pathogenic and protective pathway(s) involved in disease progression and healing, respectively.
Here we show that further studies on the cellular and molecular components of the protective response in the guinea pig model have the potential to uncover correlates of protection against BU.

Methods

Bacteria

M. ulcerans 98-912 (ITM collection, Antwerp, Belgium) is a mycolactone D-producing strain isolated in China from an ulcerative lesion [39]. M. ulcerans 1615 is a mycolactone A/B-producing strain isolated in Malaysia from an ulcerative case. The isolates were grown on Middlebrook 7H9 medium (Becton, Dickinson and Company) with agar at 32°C for approximately 6-8 weeks. For the preparation of the inoculum, M. ulcerans was recovered, diluted in phosphate-buffered saline (PBS) and vortexed using glass beads.

M. ulcerans experimental infection

Female Hartley guinea pigs and Balb/c mice were obtained from Charles River (Barcelona, Spain) and were housed in specific-pathogen-free conditions with food and water ad libitum. Before experimental infection, guinea pigs were anesthetized with 40 mg/kg of ketamine and 0.5 mg/kg of medetomidine via intraperitoneal (i.p.) injection. Guinea pigs were infected subcutaneously (s.c.) in the dorsal area (after removal of the fur) or in the ear with 100 μl of M. ulcerans strain 98-912 or 1615. After infection, anesthesia was reverted with 1 mg/kg of atipamezol via i.p. injection. At different time points post-infection, guinea pigs were sacrificed by i.p. injection of 150 mg/kg of pentobarbital. In parallel, mice were anesthetized with 75 mg/kg ketamine and 1 mg/kg medetomidine administered via i.p. injection and were then infected s.c. in the ear with 10 μl of M. ulcerans inoculum. Anesthesia was reverted with 1 mg/kg of atipamezol via i.p. injection. Mice were sacrificed by asphyxiation with increasing concentrations of CO2. For each experiment, the susceptible mouse model was used to confirm the virulence of the M. ulcerans strain.

Bacterial load determination

Infected tissue was excised, homogenized and diluted in PBS. Suspensions were decontaminated with 1 M hydrochloric acid (HCl) containing 0.001% phenol red solution for 15 min at room temperature, followed by neutralization with 1 M sodium hydroxide (NaOH). Suspensions were centrifuged and the pellet was resuspended in PBS. Serial dilutions of the suspension were plated on 7H9 medium with 1.5% agar. Bacterial colonies were counted after 6-8 weeks of incubation at 32°C. Following ASTM guidelines, the reported value for the limit of detection for microbiological purposes should be "< the dilution value" if no colonies are recovered; therefore, the limit of detection for our experiments is 1 log10 CFU/ml. All countable colonies, even those below the countable range, were counted and reported as an estimated count.

Histological studies

Infected tissue was fixed in buffered formalin and embedded in paraffin. Light-microscopy studies were performed on tissue sections stained with hematoxylin and eosin (HE) or Ziehl-Neelsen (ZN). Images were obtained with an Olympus BX61 microscope.

Statistical analysis

Differences between the means of experimental groups were analyzed by one-way ANOVA followed by Tukey's post hoc test. Differences with a p-value below 0.05 were considered significant.

Ethics statement

This study was approved by the Portuguese National Authority for animal experimentation, Direcção Geral de Alimentação e Veterinária. Animals were kept and handled in accordance with the guidelines for the care and handling of laboratory animals in European Union Directive 86/609/EEC.
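For readers who want to reproduce the bacterial-load arithmetic and the group comparison described in the methods above, a brief R sketch follows. R is an assumption here (the paper does not name its statistics software), and all numbers are illustrative.

```r
# CFU/mL implied by serial-dilution plating: colonies counted on a plate,
# divided by (dilution factor x volume plated). Illustrative values only.
colonies   <- 37
dilution   <- 1e-3   # plate from the 10^-3 dilution
vol_plated <- 0.1    # mL plated
cfu_per_ml <- colonies / (dilution * vol_plated)
log10(cfu_per_ml)    # reported on the log10 scale

# One-way ANOVA with Tukey's post hoc test, as in the statistical analysis
set.seed(7)
df  <- data.frame(logcfu = rnorm(15, mean = 5),
                  group  = gl(3, 5, labels = c("day4", "day14", "day26")))
fit <- aov(logcfu ~ group, data = df)
TukeyHSD(fit)
```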
Unraveling the Metabolic Changes in Acute Pancreatitis: A Metabolomics-Based Approach for Etiological Differentiation and Acute Biomarker Discovery

Acute pancreatitis (AP) remains a challenging medical condition, where a deeper metabolic insight could pave the way for innovative treatments. This research harnessed serum metabolomics to discern potential diagnostic markers for AP and to distinguish between its biliary (BAP) and alcohol-induced (AAP) forms. Leveraging high-performance liquid chromatography coupled with mass spectrometry, the metabolic signatures of 34 AP patients were contrasted against those of 26 healthy participants, and then between the different etiologies of AP. The results identified metabolites primarily from the glycerophospholipid, glycerolipid, fatty acyl, sterol lipid, and pteridine-and-derivative classes, with the Human Metabolome Database aiding in classification. Notably, these metabolites differentiated AP from healthy states with high AUROC values above 0.8. Another set of metabolites revealed differences between BAP and AAP, but these results were not as marked as the former. This lipidomic analysis provides an introduction to the metabolic landscape of acute pancreatitis, revealing and identifying changes in multiple lipid classes and metabolites. Future research could build on these findings to discover new diagnostic biomarkers and therapeutic strategies, enhancing the management of acute pancreatitis.

Introduction

Acute pancreatitis (AP) is a prevalent and severe gastrointestinal disorder that globally affects between 5 and 80 out of 100,000 subjects, with an observed increase in incidence over the years [1,2]. Described as an autodigestive disease, AP results from inflammation of the exocrine pancreas, which is responsible for secreting digestive enzymes into the duodenum. The pathogenesis of AP remains largely elusive, with premature intracellular protease activation and oxidative stress among the suggested mechanisms underpinning disease onset [3]. Once activated, these intracellular proteases induce acinar cell injury, causing cell membrane disruption, edema, interstitial hemorrhaging, and necrosis, and igniting an inflammatory response marked by leukocyte infiltration [4].

The most prevalent etiologies of AP to date are alcohol and gallstones, reported a decade ago to account for around 20% and 40-50% of cases, respectively, with geographic differences [5,6]. Different types of AP demand divergent treatment strategies; for instance, patients with obstructive biliary stones necessitate stone removal. Consequently, there is an increasing need for rapid and accurate methods to diagnose AP by its specific pathogenesis [7,8]. The specificity and accuracy of the standard diagnostic criteria of increased serum amylase and lipase enzymes fall short of being absolute [9][10][11].

Emerging from the "omics" frontier, metabolomics, a component of systems biology, offers promising insights into the diagnosis and pathogenesis of various diseases, including AP.
Metabolomics facilitates the comprehensive profiling of small endogenous metabolites present in blood or urine, providing a robust platform for mapping disease-specific perturbations in biochemical pathways [12][13][14]. This potential capability to identify predictive biomarkers associated with disease stages, severity, or etiology recommends metabolomics as a powerful tool in the diagnosis of AP. Preliminary evidence suggests that metabolite profiling could serve not only as a diagnostic instrument for AP but also as a means of phenotypic detection and pharmaceutical innovation [15].

A better understanding of AP's pathologic mechanism at the metabolic level is a precursor to novel drug discovery. Metabolomics, with its capacity to offer specific and sensitive biomarkers, could elucidate the potential associations between metabolites and physiological or pathological changes, thus distinguishing different states of the biological system [15,16]. As the final products of cellular regulatory processes, metabolite levels are considered the definitive response of the biological system to genetic or environmental alterations [1].

The present study aims to harness the power of serum metabolomics to identify potential diagnostic biomarkers for AP compared to healthy subjects and to differentiate between its two major forms, biliary AP (BAP) and alcohol-induced AP (AAP).

Study Period and Participants

This study was a prospective observational cohort study, conducted at a tertiary department of gastroenterology, and was performed in compliance with relevant guidelines and regulations. Ethical approval for this study was obtained from the Emergency County Timisoara Hospital Committee for Ethics (decision number 206, 7 September 2020). All patients provided written informed consent.

Study Participants

Patients admitted with acute pancreatitis (AP) between 1 October 2020 and 31 November 2020 were considered for inclusion. AP was confirmed when at least two of the following criteria were met: consistent abdominal pain, a serum lipase level 3-fold higher than the normal level, or typical aspects of AP on a CT scan. These patients were allocated to the P group of our study.

Patients were excluded if they were concurrently diagnosed with SARS-CoV-2 infection, were referred from other hospitals late after the onset of the disease, were under 18 years old, or were pregnant. A control group comprising hospital employees without known gastrointestinal disease, matched for age and sex distribution, who agreed to participate in this study, was also recruited and allocated to the C (control) group.

Data Collection

Patients' demographic and clinical data were collected at admission and included age, gender, vital signs, complete blood count, serum chemistry, and disease severity scores (BISAP and Ranson).

The Bedside Index for Severity in Acute Pancreatitis (BISAP) score, comprising variables like blood urea nitrogen (BUN) level, impaired mental status, systemic inflammatory response syndrome (SIRS), age > 60 years, and the presence of pleural effusion, was assessed at admission, utilizing the most severe parameters within the first 24 h [17].

The Ranson score was also calculated, with five admission parameters (age, white blood cell count, blood glucose, AST, LDH) and six parameters assessed 48 h post-admission (hematocrit, increase in BUN, serum calcium, arterial PO2, base deficit, and fluid sequestration) [18].
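For illustration, the BISAP computation described above can be written as a small R function. Note that the paper lists the score components but not all of their cut-offs, so the BUN threshold below (> 25 mg/dL) is the conventional published value, and the function itself is a hypothetical sketch rather than the authors' scoring code.

```r
# Hedged sketch of BISAP scoring: one point each for BUN > 25 mg/dL,
# impaired mental status, SIRS, age > 60 years, and pleural effusion.
# impaired_mental, sirs, pleural_effusion: 0/1 indicators.
bisap <- function(bun, impaired_mental, sirs, age, pleural_effusion) {
  (bun > 25) + impaired_mental + sirs + (age > 60) + pleural_effusion
}

# Example: BUN 32 mg/dL, SIRS present, age 67 -> score 3
bisap(bun = 32, impaired_mental = 0, sirs = 1, age = 67, pleural_effusion = 0)
```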
In addition, we gathered data on etiology (biliary, denoted BAP; alcohol-induced, denoted AAP; and non-biliary/non-alcoholic), disease severity (categorized as mild (MAP), moderate (MSAP), or severe AP (SAP) according to the Revised Atlanta Criteria), hospital stay duration, and patient mortality [19].

Alcohol consumption assessment was based on self-reported consumption, using the AUDIT C questionnaire, where for females scores ≥ 3 are consistent with alcohol misuse and for males scores ≥ 4 are consistent with alcohol misuse [20]. Binge drinking was defined using the criteria of the National Institute on Alcohol Abuse and Alcoholism (NIAAA), where for a typical adult this pattern corresponds to consuming 5 or more drinks (male), or 4 or more drinks (female), in about 2 h [21].

Follow-up was performed 90 days after admission, by clinic visit or by phone.

Blood Collection and Processing

The blood samples were collected on the second day of admission. Blood serum samples were collected according to standardized procedures, in accordance with the ethical standards of the institutional and national research ethics committee and with the 1964 Helsinki Declaration and its later amendments.

The blood was collected in vacutainer tubes without anticoagulant, kept at room temperature for 30 min to allow clotting, and centrifuged for 10 min at 3000 rpm (4 °C) to separate clear serum. After separation, the blood serum was stored at −80 °C. To a volume of 0.3 mL of serum, 0.7 mL of a mixture of methanol and acetonitrile (1:1) was added to precipitate proteins. The mixture was vortexed for 1 min and kept at −20 °C overnight; after thawing, it was vortexed again for 1 min. After mixing, the vials were centrifuged at 12,500 rpm (4 °C) for 10 min, and the supernatant was collected and filtered through 0.25 µm PTFE filters. All samples were processed in duplicate. Several QC samples obtained from each group were run in parallel in order to calibrate the HPLC separations.

HPLC-QTOF-ESI+ MS analysis of blood serum: Aliquots of 3 µL of serum were subjected to ultrahigh-pressure chromatography on a Thermo Scientific HPLC UltiMate 3000 system equipped with a Dionex UltiMate 3000 quaternary pump system (UHPLC), a Dionex UltiMate 3000 photodiode array detector, a column oven, and an autosampler. Serum metabolites were separated using a Thermo Scientific (Waltham, MA, USA) C18 reverse-phase column (Acquity UPLC C18 BEH, Dionex, Sunnyvale, CA, USA) (5 µm, 2.1 × 75 mm) at 25 °C and a flow rate of 0.3 mL/min. The mobile phase was a gradient of eluent A (water containing 0.1% formic acid) and eluent B (methanol-acetonitrile, 1:1, containing 0.1% formic acid). The gradient program consisted of 97% A (min 0), 93% A (min 3), 75% A (min 6), 50% A (min 8), 15% A (min 10), and 97% A (min 15), followed by 3 min at 97% A. The total running time was 18 min. Mass spectrometry was performed on a Bruker Daltonics MaXis Impact QTOF instrument operating in positive ion mode (ESI+). The mass range was set between 50 and 1000 m/z. For the measurements, the nebulizing gas pressure was set at 2.8 bar, the drying gas flow at 12 L/min, and the drying gas temperature at 300 °C. Before each chromatographic run, a calibrant solution of sodium formate was injected. Instrument control, acquisition, and data processing were performed using Chromeleon, TofControl 3.2, Hystar 3.2, and Data Analysis 4.2 (Bruker Daltonics, Billerica, MA, USA).
Statistical Analysis

Standard laboratory data were analyzed using MedCalc software (version 19.3). Continuous variables were expressed as mean ± standard deviation (SD) and categorical variables as percentages. To compare mean values between two independent groups, the independent-samples t-test (or Welch's t-test, as appropriate) was applied; the chi-square test of independence was used to compare proportions. A two-tailed p-value of less than 0.05 was considered indicative of statistical significance.

The Bruker Data Analysis 4.2 software attached to the instrument was used to process the acquired data, after 3 replicates for each sample. The mean m/z vs. peak intensity values were considered for each sample. First, from the base peak chromatogram (BPC), an advanced bucket matrix was generated using the Dissect or the Find Molecular Features (FMF) algorithm. The matrix released by Data Analysis contained the retention time, the peak areas and intensities, and the signal/noise (S/N) ratio for each component, together with its m/z value. The mass spectra were also recorded. Generally, the number of separated compounds ranged between 350 and 400. In this first step, a matrix for all samples was obtained and saved as an Excel file. To eliminate the small signals with S/N values under 5, a first filtration was performed, and a second matrix containing m/z values and peak intensities was saved; this was filtered in a second step to eliminate the small intensities (<1000). The number of peaks remaining was 180-200. Only metabolites that were detected in more than 60% of the samples were included in the statistical analysis, so to produce an adequate alignment of the peaks' m/z values, the online software from bioinformatica.isa.cnr.it/NEAPOLIS was applied. The third, aligned matrix allowed the calculation of mean intensity values and standard deviations for the control group and for group P.

The aligned matrix (containing 120 m/z values), as a .csv file, was uploaded to the specialized online platform MetaboAnalyst 5.0 (https://www.metaboanalyst.ca/MetaboAnalyst/; accessed on 20 March 2021). The statistical analysis included principal component analysis (PCA) and partial least squares discriminant analysis (PLSDA) combined with VIP ranking and cross-validation. On the same platform, the random forest algorithm was applied to obtain a prediction of molecules ranked by their contributions to classification accuracy (mean decrease accuracy). Finally, using biomarker analysis, the receiver operating characteristic (ROC) curves were obtained and the values of the areas under the ROC curves (AUROCs) were determined. The molecules identified were therefore ranked according to their sensitivity/specificity, and AUROC values higher than 0.800 were considered significant. Finally, the enrichment analysis allowed the identification of the specific classes of lipids affected in acute pancreatitis compared to controls.

The identification of molecules that can be considered potential biomarkers was performed by taking the precursor (m/z) values obtained and matching them against LC-MS database searches, considering a maximum error of 1 ppm. The two most relevant databases for the identification were the LIPID MAPS® Lipidomics Gateway and the Human Metabolome Database (https://hmdb.ca/; accessed on 22 March 2021). As a preliminary check, we verified the m/z values in our lab against some pure standards for confirmation.
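As an illustration of the filtering and screening steps just described, the following R sketch mimics the pipeline on mock data. The authors used Data Analysis 4.2 and MetaboAnalyst 5.0, so this code, including the pROC package and all object names, is an assumed stand-in rather than their actual workflow.

```r
# Mock feature-filtering and univariate screening of a metabolomics matrix.
# X: samples x features intensity matrix; group: C (control) or P (patient).
library(pROC)

set.seed(42)
X     <- matrix(rlnorm(60 * 120, meanlog = 8), nrow = 60)  # mock intensities
group <- factor(rep(c("C", "P"), c(26, 34)))

# Keep features detected (intensity >= 1000) in more than 60% of samples
detected <- X >= 1000
X <- X[, colMeans(detected) > 0.6, drop = FALSE]

# Unsupervised overview: PCA on log-scaled intensities
pca <- prcomp(log10(X), center = TRUE, scale. = TRUE)

# Univariate screening: Welch t-test and AUROC per feature
pvals  <- apply(X, 2, function(f) t.test(f ~ group)$p.value)
aurocs <- apply(X, 2, function(f) as.numeric(auc(roc(group, f, quiet = TRUE))))
head(sort(aurocs, decreasing = TRUE))  # best-discriminating features
```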
Results

During the study period, a total of forty-five patients diagnosed with acute pancreatitis (AP) were initially considered for inclusion in this study. Following the application of the exclusion criteria, eleven patients were excluded: six were diagnosed with a concomitant SARS-CoV-2 infection, three presented late after the onset of the disease, and two were under 18 years old.

Consequently, a total of thirty-four patients with AP were finally included in this study and constituted the patient group (group P). The P group consisted of 52% (18/34) males, with an average age of 57 ± 16 years. A control group (group C) was also included in this study, comprising twenty-six healthy individuals with no known gastrointestinal diseases. The control group was made up of 53% (14/26) males, with a mean age of 54 ± 6 years.

The detailed characteristics of the patients are presented in Tables 1 and 2. The length of the hospital stay was 6 ± 3 days. The total mortality rate was 5%. The etiology was BAP in 61.75% (21/34), AAP in 20.60% (7/34), and non-BAP/AAP in 17.65% (6/34). The severity of AP was classified as follows: MAP 41% (14/34), MSAP 52% (18/34), and SAP 5% (2/34).

Statistical analysis considered 120 m/z values, and the number of identified molecules was 69 (see Supplementary File Table S1). The unidentified molecules were named "NI1-NI52". The PLSDA score plot is represented in Figure 1a for the first two components, with a covariance of 24.7%, while Figure 1b represents the ranking of the first 15 molecules which may explain the discrimination between groups C and P. The statistical significance of the discrimination was certified by the cross-validation algorithm, which showed high accuracy, an R2 value close to 1 and significant, high Q2 values (>0.8), indicating very good validation and predictability for this model (Figure S1).

Significant Metabolites Based on Multivariate Analysis

From the initial number of 123 common metabolites in both groups, based on significant t- and p-values < 0.05, 69 molecules were identified based on the parental ion value (m/z), as presented in Table S1 together with their PubChem codes (https://pubchem.ncbi.nlm.nih.gov/; accessed on 20 March 2021), as possible biomarkers involved in the metabolic pathways of acute pancreatitis.
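The discrimination parameters reported next (fold change, log2FC and p-value per metabolite, cf. Table 3) can be computed as in the following hedged R sketch, which reuses the mock X and group objects from the earlier pipeline sketch; it is illustrative, not the MetaboAnalyst computation itself.

```r
# Per-metabolite discrimination parameters between groups P and C:
# fold change, log2 fold change, Welch t-test p-value and direction.
fc     <- apply(X, 2, function(f) mean(f[group == "P"]) / mean(f[group == "C"]))
log2fc <- log2(fc)
pvals  <- apply(X, 2, function(f) t.test(f ~ group)$p.value)
tab3   <- data.frame(fc, log2fc, p = pvals,
                     direction = ifelse(log2fc > 0, "increased in P",
                                                    "decreased in P"))
head(tab3[order(tab3$p), ])  # most significant features first
```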
Table 3 includes the parameters that explain the discrimination between these groups (fold change, log2FC, p-values) and the direction of change (decrease or increase in the P group compared to the C group). Besides the information presented above, the random forest (RF) algorithm was able to indicate more accurately the predictive value (as potential biomarkers) of the metabolites that differentiated the C and P groups. Figure 2 presents the m/z values of the first 15 molecules to be considered as predictive by the RF analysis, according to mean decrease accuracy (MDA) values higher than 0.002. According to this analysis, increases in group P of metabolites such as LPC (16:1), dihydrobiopterin, LPC (18:0), sterol, all-trans-retinol, and C18:1 were noticed, while metabolites like LPC (20:3), PE (30:3), and DG37:6 decreased in this group.

Biomarker analysis was also conducted, building the receiver operating characteristic (ROC) curve as a useful tool to evaluate the sensitivity vs. specificity of each molecule considered as a potential biomarker. Table 4 summarizes the metabolites with the highest AUROC values. The graphical representation of the first 10 molecules to be considered as putative biomarkers of acute pancreatitis is presented in Supplementary File Figure S2.

The most significant metabolites were LPC (20:3), all-trans-retinyl oleate, and LPE (P-16:0/0:0), which decreased in the P group, and LPC (16:1) and LPA (20:5), with increased levels in the P group. Also, dihydrobiopterin showed increases in the P group. These results are in good agreement with the RF analysis and offer a better image of the metabolic disturbances in pancreatitis.

The enrichment and pathway analysis complemented the above-mentioned results and showed the sub-classes of molecules that are relevant in this study. Figure 3 includes the graph of this analysis. Therefore, sterol lipids and glycerophospholipids (mainly lyso-derivatives) were mostly affected by pancreatitis, as well as fatty acids and prenol lipids.

Discrimination Analysis by PLSDA and VIP Scores

The multivariate analysis using PLSDA and VIP scores higher than 1 (presented in Figure 4) shows good discrimination (covariance of 23.9% for the first two components), but due to the small size of the AAP group, the cross-validation algorithm showed lower accuracy (0.65), an R2 value of 0.45, and nonsignificant Q2 values for the first two components, indicating a lower predictability of this model compared to the P vs.
Discrimination Analysis by PLSDA and VIP Scores

The multivariate analysis using PLSDA and VIP scores higher than 1 (presented in Figure 4) shows good discrimination between the BAP and AAP groups (covariance of 23.9% for the first two components), but due to the small size of the AAP group, the cross-validation algorithm showed lower accuracy (0.65), an R2 value of 0.45, and nonsignificant Q2 values for the first two components, indicating a lower predictability of this model compared to the P vs. C group comparison (see Section 3.2). This model thus had lower R2 and Q2 values, reflecting nonsignificant differences between these groups. The VIP scores higher than 1 (Figure 4b) show the ranking of the first 15 molecules. According to the t-test, only seven molecules showed p-values < 0.1, namely MG (0:0/18:0/0:0), myristyl linolenate, LPC (24:1), (S)-3-hydroxystearic acid, PC (P-18:0/16:0), all-trans-retinyl oleate, and LPC (O-16:0).

Biomarker Analysis

Table 5 shows the metabolites with the highest AUC values (>0.72) and p-values < 0.1. According to these data, the monoglyceride of stearic acid and esters like lauryl stearate and myristyl linoleate may be considered putative biomarkers for the differentiation between the BAP and AAP groups.

Discussion

The pathophysiology of acute pancreatitis has been a topic of extensive research over the past decade. We conducted a thorough metabolomic analysis, illuminating significant changes in the lipidomic profile associated with acute pancreatitis. The findings contribute substantially to the understanding of this disease, particularly the profound alterations observed in a wide array of lipids and metabolites implicated in its progression and inflammatory response.

Fatty acyls, including all-trans-retinyl oleate and (S)-3-hydroxystearic acid, were found at elevated levels. While the precise role of these lipids in acute pancreatitis remains ambiguous, they are suspected to play a crucial role in inflammatory responses [16]. In particular, retinoids such as all-trans-retinyl oleate have known immunomodulatory functions [22], and hydroxystearic acids have recently been linked to lipid-induced inflammation [23].
A noteworthy change was also observed in sterol lipids, including metabolites like 18:2 cholesterol ester and 1a,25-dihydroxypentyl cholecalciferol. Considering the crucial functions of sterol lipids in maintaining membrane integrity and regulating immune responses, such alterations could signify an adaptive response to inflammation and tissue injury in acute pancreatitis [26-28].

Sphingolipids, such as Cer (d18:0/15:0) and Cer (t18:0/19:0(2OH)), were also identified as biomarkers of AP, aligning with increasing evidence of their involvement in inflammatory and apoptotic pathways [24]. In the context of acute pancreatitis, disruptions in sphingolipid metabolism may exacerbate the inflammatory response and tissue damage and potentially lead to organ failure [29].

Further, elevated levels of prostaglandin E2, a potent inflammatory mediator, were detected. Though the role of prenol lipids in acute pancreatitis needs further investigation, this rise may reflect the severe inflammation characteristic of this disease. This study also noted significant changes in several other metabolites, including 9-hexadecenoylcholine and β-neuraminic acid, suggesting their potential as markers of acute pancreatitis [30].

Comparatively, in two metabolomics studies by Huang et al. and Xiao et al., a consensus was observed in terms of elevated levels of glycerophospholipids and fatty acyls in acute pancreatitis [10,15]. In Huang et al.'s study, multiple lipids, including glycerophospholipids, were identified as crucial contributors to the metabolic alterations of acute pancreatitis. The researchers reported that these lipid perturbations might be associated with an exacerbated inflammatory response and tissue damage in acute pancreatitis, corroborating our findings [15].

Notably, the global metabolic phenotyping of acute pancreatitis conducted by Villasenor et al. presented a broad array of metabolites implicated in the pathogenesis of acute pancreatitis. That study particularly highlighted the elevation of lipid metabolites in the patient group. While the authors did not specify the exact lipid species, the overarching theme of lipid perturbations in acute pancreatitis echoes the lipid-centric findings of our research [2].

All these findings can lead to a deeper understanding of the lipid-mediated inflammation and tissue damage seen in acute pancreatitis. Lipid dysregulation, as suggested by these studies, potentially contributes to the hyper-inflammatory state observed in acute pancreatitis. The highlighted glycerophospholipids, for instance, play a crucial role in the formation and functionality of cell membranes and can trigger inflammatory pathways when dysregulated [31]. Additionally, fatty acyls, like retinyl oleate, modulate inflammation and immunity, which are crucial aspects of acute pancreatitis pathophysiology [32].

The highlighted sterol lipids, particularly the cholesterol esters, contribute to cell membrane integrity, and their alterations might relate to the cellular damage observed in acute pancreatitis. Our identification of glycerolipids, such as DG (17:1/20:5/0:0) [iso2] and TG (57:3), also reinforces the concept of lipid dysregulation. Dihydrobiopterin, a pteridine metabolite, is known for its involvement in nitric oxide synthesis, which can contribute to the inflammatory process [33].
Lou et al. also conducted a thorough investigation of the metabolomic changes in acute pancreatitis but focused on plasma extracellular vesicles. Extracellular vesicles (EVs) play crucial roles in cellular communication and pathophysiological processes, including inflammation and immune responses. The researchers' quantitative metabolic analysis of plasma EVs provides a unique perspective on the diagnostic potential of these vesicles in severe acute pancreatitis. Comparing the specific metabolites identified in the two studies, both show profound alterations in glycerophospholipids. Similarly, fatty acyls such as all-trans-retinyl oleate and (S)-3-hydroxystearic acid were found in both studies. The consensus on these metabolites underlines the crucial part they may play in the inflammatory response and disease progression. The analyses of glycerolipids by the two research groups also mirror each other, further supporting their potential significance in acute pancreatitis [32]. However, some differences between the findings of the two research groups exist: our study covers a wider range of lipid classes, such as sterol lipids and sphingolipids, while Lou et al.'s analysis, focused on plasma EVs, might not cover the same breadth [34].

The identification of metabolites capable of differentiating between the various causes of acute pancreatitis is a targeted development in the personalization of diagnosis and treatment for this condition. Huang et al.'s study is the only one we found that concentrates on using metabolomics to distinguish between the etiologies of acute pancreatitis [15]. Their work focused on the lipidomic changes related to three different types of acute pancreatitis: alcoholic, biliary, and hypertriglyceridemic. However, the specific metabolites implicated in their study are not reported in a way that permits a direct comparison. It is worth noting that their findings underscored the importance of lipidomic alterations in acute pancreatitis of different etiologies.

In summary, both studies affirm the role of lipidomic profiling in the investigation of acute pancreatitis etiologies. While a direct comparison is limited by the unavailability of the specific metabolites identified by Huang et al., it is evident that the metabolomic approach taken by both studies holds great promise for delineating the heterogeneity of acute pancreatitis etiologies.

Despite this study offering a snapshot of metabolite alterations in acute pancreatitis, it is crucial to validate these findings in larger cohorts and over the disease's course. Future investigations should examine whether these metabolic changes are causative or consequential, exploring their roles in disease pathogenesis and their potential as biomarkers for disease diagnosis, prognosis, or therapeutic response [35].
Moreover, studies delving into the mechanistic pathways of these metabolites might unveil novel therapeutic strategies. For instance, targeting specific metabolites might help mitigate the inflammatory response and tissue injury in acute pancreatitis [16,36,37]. This study significantly adds to our knowledge of the connection between metabolic changes and acute pancreatitis, paving the way for a nuanced, multi-targeted approach to managing this complex disease. However, as the several studies published so far on this subject reported their findings in different ways, systematization is required; the situation at present resembles the Renaissance geographic discoveries: a multitude of possible findings, a multitude of ways to find them, and, above all, a lack of real knowledge for interpreting these findings.

Our comprehensive metabolic profiling, combined with those of other researchers, provides novel insights into the pathophysiology of acute pancreatitis. The distinct lipid perturbations highlight the potential role of lipid-mediated mechanisms in acute pancreatitis and suggest potential diagnostic markers. However, further studies are needed to validate these findings and to explore the exact mechanisms by which these lipids contribute to acute pancreatitis pathophysiology in larger, more diverse cohorts of patients. One source of potential bias in our study is the size of the study groups and the relatively low number of severe AP and AAP patients. The disparity in group sizes could introduce biases into the statistical analysis and may limit the power of this study.

Furthermore, although the control group was matched for age and sex distribution, using hospital employees as controls introduces an inherent selection bias. Hospital employees may have different lifestyles, dietary habits, or other unmeasured confounders that might influence the metabolic profile compared to the general population. Such biases could affect the generalizability of the identified metabolites as potential biomarkers for acute pancreatitis.

It is also noteworthy that while matching for age and sex is crucial, other potential confounders, such as underlying comorbidities, medication usage, and genetic predispositions, were not accounted for. These factors might play a role in metabolic variations and could mask or exaggerate the true differences solely attributable to acute pancreatitis.

Conclusions

This lipidomic analysis provides an introduction to the metabolic landscape of acute pancreatitis, revealing changes in multiple lipid classes and metabolites. Future research could build on it to discover new diagnostic biomarkers and therapeutic strategies, enhancing the management of acute pancreatitis.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biom13101558/s1. Table S1: Identification of 69 molecules based on the parental ion value (m/z) and the PubChem codes of each molecule (https://pubchem.ncbi.nlm.nih.gov/; accessed on 22 March 2021). Figure S1: Cross-validation graph, according to PLSDA analysis. Figure S2: Representation of the variation in peak intensity for the first 9 molecules from groups C and P, which were selected by biomarker analysis, having AUROC values over 0.8.

Informed Consent Statement: Informed consent was obtained from all subjects involved in this study.
Figure 1. (a) PLSDA score plot, showing the discrimination between the C and P groups. (b) Ranking of VIP scores for the first 15 molecules which may explain the discrimination between groups C and P.

Figure 2. The ranking of the first 15 molecules, according to RF analysis, as predictive biomarkers for acute pancreatitis (group P) vs. controls (group C).

Figure 3. The graph of enrichment analysis, showing the main classes of lipids that may be considered as putative biomarkers for acute pancreatitis.

Figure 4. The PLSDA score plot (a) and the VIP scores (b) higher than 1 for discriminating between the BAP and AAP groups. (a) PLSDA score plot, showing the discrimination between the BAP and AAP groups. (b) Ranking of VIP scores for the first 15 molecules which may explain the discrimination between groups BAP and AAP.

Figure 5. Panels (a-c) represent, in arbitrary units, the significant differences between the levels of the three most discriminant molecules.

Table 1. Characteristics of the patients with AP enrolled in this study compared to controls.

Table 2. Laboratory data of patients with AP and severity prediction scores.

Table 3. Fold change (FC), log2FC, and p-values for the molecules selected as responsible for the discrimination between groups C and P. Column 4 shows the increase or decrease in these molecules in group P vs. C.

Table 4. The AUROC, p-values, log2FC, and identification of the first 19 molecules considered as potential biomarkers. Negative log2FC indicates increases in metabolite levels in the P group, while positive values indicate decreases in P vs. C.
7,457.8
2023-10-01T00:00:00.000
[ "Chemistry", "Medicine" ]
Designing topological interface states in phononic crystals based on the full phase diagrams

The topological invariants of a periodic system can be used to define the topological phase of each band and determine the existence of topological interface states within a certain bandgap. Here, we propose a scheme based on the full phase diagrams and design topological interface states within any specified bandgaps. As an example, we propose a kind of one-dimensional phononic crystal. By connecting two semi-infinite structures with different topological phases, interface states within any specific bandgap, or any combination of bandgaps, can be achieved in a rational manner. The existence of interface states in a single bandgap, in all odd bandgaps, in all even bandgaps, or in all bandgaps is verified in simulations and experiments. The scheme of full phase diagrams we introduce here can be extended to other kinds of periodic systems, such as photonic crystals and designer plasmonic crystals.

Introduction

Topological physics is growing rapidly in condensed matter physics, from the quantum Hall effect [1] to topological insulators [2,3] and Weyl semimetals [4]. For one-dimensional (1D) periodic systems, the simplest topologically nontrivial phase exists in polyacetylene [21]. It is found that the band topology of this kind of system can be characterized by Zak phases, which are quantized topological invariants as long as the unit cell possesses inversion symmetry [22]. In recent years, the topological description has been introduced in various classical counterparts such as 1D photonic crystals and phononic crystals (PCs) [23,24], and has been successfully applied to predict the existence of interface states from the Zak phases of bulk bands. In the system of a 1D PC, for example a cylindrical waveguide with periodically alternating structures, the macroscopic controllability makes it a capable platform for realizing advanced concepts such as band inversion and topological phase transition. Recently, interesting topological phenomena, such as topological interface and edge states [24-26] and the valley-Hall effect [27], have been observed in acoustic systems.

In this work, we focus on topologically induced interface states within an arbitrary bandgap (or bandgaps), achieved by judiciously designing the Zak phases of each constituent PC. The existence of interface states can be equivalently predicted by the Zak phases, surface impedances, and transmission spectra. As an example, all possible existences of the interface states in the lowest four bandgaps are observed numerically and experimentally, including interface states in any single bandgap, in all odd bandgaps, in all even bandgaps, and in all bandgaps. The transmission spectra and spatial distributions of the pressure field for the interface states are measured and exhibit excellent agreement with the simulated results.

Results

Topological properties of 1D PCs. The 1D periodic system under study is shown in Fig. 1. (Caption fragment: plotted by red dashed and black solid lines, respectively. The dimensions for this structure (denoted by S1) are rA = 1.3 cm, rB = 1.7 cm, dA = 8 cm, dB = 6 cm. The frequencies of the lowest four band-edge modes at k = 0 are indicated by orange (2.321 kHz), green (2.541 kHz), blue (4.664 kHz), and pink (5.046 kHz) dots, respectively. d, Simulated pressure eigenfields in the unit cell of S1, corresponding to the four labeled band-edge modes in c.)
These four eigenmodes can be classified into even modes (the first and last) or odd modes (the second and third) according to their field distributions with respect to the inversion center.

The band structure of the 1D PC with sound-hard boundaries can be obtained by the transfer matrix method (TMM) [28,29]; for a unit cell made of two tube segments it takes the standard form

cos(ka) = cos(ω dA / va) cos(ω dB / va) − (1/2)(SA/SB + SB/SA) sin(ω dA / va) sin(ω dB / va),   (1)

where k is the Bloch wave vector, ω is the angular frequency, va is the sound speed in air (343 m/s), a is the lattice constant (a = dA + dB), and SA = π rA² and SB = π rB² are the cross-sectional areas of tube-A and tube-B.

The topological property of the bulk bands of the 1D PC can be represented by their Zak phases, which are defined as [24]

θn^Zak = ∫_{−π/a}^{π/a} [ i ∫_{unit cell} (1/(2ρ va²)) u*_{n,k}(z) ∂k u_{n,k}(z) dz ] dk,   (2)

where θn^Zak is the Zak phase of the n-th bulk band, u_{n,k}(z) is the cell-periodic part of the Bloch pressure eigenfunction, and ρ is the density of air. In general, θn^Zak can take any value if the choice of the unit cell is arbitrary. However, when the unit cell is chosen to be inversion symmetric, with the inversion center at the middle of tube-A or tube-B, it can be proved that θn^Zak is quantized as either 0 or π. The Zak phase can take different values (0 or π) for different choices of the unit cell [22,23]. The quantized θn^Zak characterizes the topology of the corresponding band.

When two semi-infinite PCs are connected at an interface, the condition for the existence of an interface state in the n-th bandgap is that the impedances on both sides of the interface satisfy ZL + ZR = 0 [23]. Here, ZL and ZR are respectively the surface impedances of the left-hand and right-hand PCs, and each can be expressed through the reflection coefficient rL (rR) from the left (right) side of the interface as

Z_{L,R}/Z0 = (1 + r_{L,R})/(1 − r_{L,R}),   (3)

where Z0 is the impedance of free space. As is known, for a semi-infinite phononic crystal the surface impedance Z is purely imaginary inside a bandgap, i.e. Z/Z0 = iξ. According to the bulk-interface correspondence for the 1D PC, the sign of ξ^(n) within the n-th gap can be related to the Zak phases of the bulk bands below this gap, as long as there is no band crossing below this gap [23]:

sgn[ξ^(n)] = (−1)^n exp( i Σ_m θm^Zak ),   (4)

where the sum runs over all bulk bands below the n-th gap. In the band diagrams, the sign of ξ^(n) within each bandgap is marked by colored strips, with cyan indicating negative values. The sign of ξ^(n) can be identified not only by Eq. (4) but also by the parities of the band-edge states below and above the corresponding bandgap [23]. The band-edge states are marked by the dots at the Brillouin zone boundary (k = ±π/a) and center (k = 0), and their parities are represented by their colors, blue for odd and orange for even modes. If the eigenmode at the lower edge of the bandgap has odd parity while the one at the upper edge is even, then sgn(ξ^(n)) < 0; otherwise, if the eigenmode at the lower edge is even and the one at the upper edge is odd, then sgn(ξ^(n)) > 0 [23].

Topologically induced interface states in the n-th gap can be achieved by different combinations of structures Si and Sj (i, j = 1, 2, 3, 4, and i ≠ j) as long as ξ^(n) of the left-hand and right-hand PCs take opposite signs, i.e. the colors in the n-th bandgap for Si and Sj are different. As shown in Fig. 2, for the configuration S1+S2 (or S3+S4), there should exist interface states in all the odd bandgaps, whereas for the configuration S1+S3 (or S2+S4), there are interface states in the even bandgaps. For S1+S4 (or S2+S3), the two PCs fall into opposite phases in all bandgaps; consequently, interface states appear in all the bandgaps. Note that when the system is impinged by a plane wave, the inference for these three cases is not restricted to the first four bandgaps but is applicable to all higher-order bandgaps as long as the weak-dispersion limit is satisfied [30].
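The gap structure implied by Eq. (1) can be scanned numerically: real Bloch solutions require |cos(ka)| ≤ 1, so frequency intervals where the right-hand side leaves [−1, 1] are bandgaps. The sketch below uses the S1 geometry quoted above; the frequency grid and the use of Python are our own choices, not part of the paper.

```python
# Band/gap scan of the 1D acoustic PC from the TMM dispersion relation, Eq. (1).
import numpy as np

va = 343.0                          # sound speed in air, m/s
rA, rB = 0.013, 0.017               # S1 tube radii, m
dA, dB = 0.08, 0.06                 # S1 tube lengths, m
SA, SB = np.pi * rA**2, np.pi * rB**2

def rhs(f):
    """Right-hand side of cos(ka) = F(omega) for frequency f in Hz."""
    pA = 2 * np.pi * f * dA / va
    pB = 2 * np.pi * f * dB / va
    return (np.cos(pA) * np.cos(pB)
            - 0.5 * (SA / SB + SB / SA) * np.sin(pA) * np.sin(pB))

f = np.linspace(10.0, 6000.0, 120000)
in_band = np.abs(rhs(f)) <= 1.0
edges = f[np.flatnonzero(np.diff(in_band.astype(int)))]
print("gap edges (Hz):", np.round(edges, 1))
# The second and fourth gaps open at k = 0 (cos(ka) = +1); for S1 their edges
# should land near the quoted 2.321/2.541 kHz and 4.664/5.046 kHz band-edge modes.
```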
Achieving interface states from the full phase diagram. Analogous to band inversion in electronic systems [31,32], the closing and reopening of a bandgap implies that a topological phase transition occurs in the PC system. The loci of the band-crossing points separate the different phases of a certain bandgap, which can be characterized by the sign of ξ^(n) within this bandgap. With the condition of band crossing [23,30], band inversion can be achieved when the geometric parameters in Eq. (1) satisfy: (i) rA = rB, in which case the structure degenerates to the trivial case, i.e. a non-structured tube with a constant cross-section; or (ii) dA/dB = (dm + Δd)/(dm − Δd) = n1/n2, where n1 and n2 are integers and dm = (dA + dB)/2 and Δd = (dA − dB)/2, in which case the band crossing occurs at the (n1+n2)-th bandgap [23]. According to these two criteria, all the phase transition lines in the phase diagram of a bandgap can be obtained. With the full phase diagrams in the (Δr, Δd) space, we can construct topologically induced interface states in arbitrary bandgap(s), as summarized in Table 1.

(Caption fragment: The simulated transmission spectra of two connected PCs (6 unit cells for both the left and right PCs), for S1+S2, S1+S3, and S1+S4, respectively. h-j, The corresponding measured transmission spectra, where the transmission peaks arising from the interface states in each bandgap are highlighted by colored dots.)

Further, the existence of topological interface states is verified experimentally; the experimental set-up is shown in Fig. 5a. The sample contains two connected PCs: the right-hand one, in red, is manufactured with the geometric parameters of S1, and the left-hand one, in blue, is manufactured with the geometric parameters of S2, S3, or S4 (only the photo of S1+S2 is shown). The junction is marked by the green arrow, and the PCs on either side are truncated at the boundaries of an intact unit cell. Meanwhile, a loudspeaker and four microphones are used to generate input signals and to measure the transmission spectra, respectively. The measured transmission spectra for the three configurations are respectively shown in the lowest panels of Fig. 5. The transmission peaks marked by colored dots in the corresponding bandgaps confirm the existence of the interface states. The experimental results verify that interface states appear in all odd bandgaps for the configuration S1+S2, in all even bandgaps for the configuration S1+S3, and, moreover, in all bandgaps for the configuration S1+S4. The frequencies of the interface states agree well with our theoretical predictions and simulated results.

Discussion

In summary, we have proposed a scheme to design interface states at the junction of two 1D PCs within any specified bandgap, or combination of bandgaps, based on the full phase diagrams.

Data availability. The data which support the figures and other findings within this paper are available from the corresponding authors upon request.

Additional Information. Supplementary information is available in the online version of the paper.
2,242.8
2018-04-28T00:00:00.000
[ "Physics" ]
Mapping the Impact of Non-Tectonic Forcing Mechanisms on GNSS Measured Coseismic Ionospheric Perturbations

Global Navigation Satellite System (GNSS) measured Total Electron Content (TEC) is now widely used to study the near- and far-field coseismic ionospheric perturbations (CIP). The generation of near-field (~500-600 km surrounding an epicenter) CIP is mainly attributed to the coseismic crustal deformation. The azimuthal distribution of near-field CIP may contain information on the seismic/tectonic source characteristics of rupture propagation direction and thrust orientations. However, numerous studies cautioned that before deriving the listed source characteristics based on coseismic TEC signatures, the contribution of non-tectonic forcing mechanisms needs to be examined. These mechanisms, which are operative at ionospheric altitudes, are classified as (i) the orientation between the geomagnetic field and the tectonically induced atmospheric wave perturbations, (ii) the orientation between the GNSS satellite line of sight (LOS) geometry and the coseismic atmospheric wave perturbations, and (iii) the ambient electron density gradients. So far, the combined effects of these mechanisms have not been quantified. We propose a 3D geometrical model, based on acoustic ray tracing in space and time, to estimate the combined effects of the non-tectonic forcing mechanisms on the manifestations of GNSS measured near-field CIP. Further, this model is tested on earthquakes occurring at different latitudes, with a view to quickly quantifying the collective effects of these mechanisms. We presume that this simple and direct 3D model will foster a proper perception among researchers of the tectonic source characteristics derived from the corresponding ionospheric manifestations.

Imprints of seismic forcing on the ionosphere during earthquake events are extensively studied using the Global Positioning System (GPS) Total Electron Content (TEC) measurement technique [1-7]. GPS is the part of the Global Navigation Satellite System (GNSS) operated by the US. The GPS recorded coseismic ionospheric perturbations (CIP) are considered very useful in providing the spatio-temporal manifestations of tectonic forcing in the ionosphere. Tracing back to the source of the CIP, the CIP evolution surrounding the epicenter depends strongly on the source characteristics, in terms of the crustal deformation pattern that evolves mainly during rupture propagation [5,8]. Ground-based measurement of crustal deformation has the stringent requirement that the sounding probe be available at or very near the deformed area, e.g. a GPS receiver in precise positioning mode. With the advent of space-based technology like InSAR, it is easy to acquire this information, provided successive satellite passes coincide over the earthquake occurrence region [9]. But for offshore earthquakes, InSAR also fails to provide reliable measurements of crustal deformation. In other cases, one may have to rely on model-based estimation constrained by the available ground measurements, e.g. for the 11 March 2011 Tohoku earthquake [10]. Therefore, an alternative tool needs to be explored that can provide reliable information about the tectonic source characteristics. In recent times, efforts are underway to identify the tectonic source characteristics, such as rupture propagation direction, crustal deformation pattern and thrust orientations, based on the azimuthal distribution of near-field CIP as recorded by GPS [2,5-7,11,12].
So, is it always possible to characterize the tectonic source using GNSS measured near-field CIP? It seems possible, but therein lies the difficulty, in the form of the non-tectonic forcing mechanisms that act upon the CIP evolution at ionospheric altitudes [5-7,12].

Methodology

Modeling of seismo-acoustic rays in 3D space and time. The upper atmosphere responds profoundly to tectonic energy within 1 to 10 mHz. This frequency domain contains both acoustic and gravity waves. The acoustic wave propagation is significantly affected by the temperature of the medium, as can be understood from Eq. (1) [18], which estimates the acoustic wave velocity:

v = sqrt(γRT/M),   (1)

where γ is the ratio of specific heat capacities, R is the universal gas constant, T is the temperature and M is the molecular mass density. Accordingly, the seismo-acoustic wave propagation in the overlying atmosphere depends highly on the ambient atmospheric temperature. The varying temperature and density cause the refraction of the seismo-acoustic waves with atmospheric altitude by changing their velocity. We model the propagation of the seismo-acoustic rays (k) in space and time based on this wave refraction phenomenon. For this, the neutral atmospheric temperature and density are obtained from the NRLMSISE-00 model [19]. The acoustic rays are modeled up to the peak ionospheric electron density altitude obtained from the IRI-2016 empirical model [20].

3D model to quantify the effects of non-tectonic forcing mechanisms on the manifestations of GNSS measured CIP. We first model the Geomagnetic field-Neutral wave orientation Factor (GNF). This factor is similar to the ionospheric coupling factor (α) of Calais et al. [13]. To do so, we extract the values of magnetic inclination and declination corresponding to the geographic location of a tectonic source and the nearby region based on the IGRF-12 model [21]. Using these values, we estimate the geomagnetic field unit vector, denoted b. The unit vector b and the seismo-acoustic rays, modeled using the method described above, are the main ingredients for estimating the GNF, which is calculated as [15,16]

GNF(λ, φ, h) = |k(λ, φ, h) · b(λ, φ, h)|,   (2)

where k is the unit wave vector of the seismo-acoustic ray at latitude λ, longitude φ and altitude h.

In the second step, we estimate the non-tectonic forcing effects arising from the ambient electron density variations. For this, we use the International GNSS Service (IGS) TEC maps, which provide a snapshot of the global ionosphere every 15 min, 1 hr, and 2 hrs [22,23]. These maps are prepared by the IGS iono working group using the global IGS TEC observations (http://cdaweb.gsfc.nasa.gov). In the maps, TEC is estimated as vertical TEC at an ionospheric piercing point (IPP) altitude of 400 km and on a latitude × longitude grid of 2.5° × 5.0° within the spatial domain of −87.5° to 87.5° latitude and −180° to 180° longitude. The satellite elevation cutoff is 10°. We extract the TEC values for our preferred geographical grid at 15 min intervals and interpolate them to obtain the variations at finer grid points. The interpolated TEC values are then normalized and denoted the Electron Density Factor, EDF(λ, φ, h_IPP). Since the obtained TEC variations are at the fixed IPP altitude of 400 km, h is presented as h_IPP.

In the third step, the effects of the moving satellite geometry are computed. The orientation between the satellite line of sight (LOS) and the vertically propagating seismo-acoustic rays has significant effects in terms of the phase integration of the propagating waves.
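Eq. (2) is a unit-vector dot product, so a minimal sketch needs only the IGRF angles and a ray direction. The local east-north-up (ENU) convention and the example inclination/declination below are our assumptions for illustration.

```python
# GNF = |k . b| (Eq. 2), sketched in a local east-north-up (ENU) frame.
import numpy as np

def b_unit(declination_deg, inclination_deg):
    """Geomagnetic unit vector from declination D and inclination I (ENU);
    positive inclination dips below the horizontal (northern hemisphere)."""
    D, I = np.radians(declination_deg), np.radians(inclination_deg)
    return np.array([np.cos(I) * np.sin(D), np.cos(I) * np.cos(D), -np.sin(I)])

def gnf(k_unit, declination_deg, inclination_deg):
    return abs(np.dot(k_unit, b_unit(declination_deg, inclination_deg)))

# Rays launched 30 deg from zenith, toward the pole and toward the equator,
# at a northern low-latitude site (assumed D = 0 deg, I = +40 deg).
s, c = np.sin(np.radians(30)), np.cos(np.radians(30))
k_pole, k_equator = np.array([0.0, s, c]), np.array([0.0, -s, c])
print(gnf(k_pole, 0, 40), gnf(k_equator, 0, 40))   # ~0.17 vs ~0.94
# The equatorward ray aligns far better with B, matching the equatorward
# preference of the GNF discussed for the Trial source below.
```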
Inspired by Georges and Hooke (1970) [6], Bagiya et al. [16] invoked an elementary version of a satellite geometry factor (SGF) to account for the wave phase cancellation effects during varying GPS satellite geometry. This factor was formulated as

SGF(λ, φ, h) = |k(λ, φ, h) · r(λ, φ, h)|,   (3)

where r is the unit vector along the satellite LOS, whose direction is set by the satellite azimuth and the satellite zenith angle χ. We modify the algebraic expression for the SGF and reformulate it as follows:

SGF(λ, φ, h) = 1 − |k(λ, φ, h) · r(λ, φ, h)|.   (4)

We consider the absolute values of k(λ, φ, h) · r(λ, φ, h) in Eq. (4), and thus the SGF can vary from 0 to 1. The requirement for the algebraic transformation of Eq. (3) is discussed further in the text. We estimate the SGF for the near-field GNSS stations for all possible satellite LOS in view.

Finally, we integrate the effects of the GNF, SGF, and EDF by algebraic multiplication and propose a simplified NTFM factor as follows:

NTFM(λ, φ, h) = GNF(λ, φ, h) × SGF(λ, φ, h) × EDF(λ, φ, h_IPP).   (5)

Here × denotes the algebraic multiplication. The computations described under Eqs. (1), (2), (4), and (5) essentially constitute the 3D NTFM model. Since electron density measurements at each ionospheric altitude over a specific geographical location are not always available, we use the IGS map derived TEC for computing the NTFM factor at the various ionospheric altitudes.

Coseismic crustal displacement fields based on the static solution. The coseismic crustal deformation, in terms of horizontal and vertical displacement fields, is computed based on the daily position estimates of the permanent GPS sites. The GAMIT/GLOBK software package is used to estimate the daily positions of the GPS sites [24,25]. In the GAMIT processing, a loosely constrained daily solution is computed, which is further processed with GLOBK to obtain an accurate solution. The analysis has been performed using GPS measured positioning time series over 41 days (20 days prior to the event, the event day, and 20 days after the event) in each case. The difference in position coordinates on the day before and the day after the earthquake provides the respective coseismic displacement fields.

Coseismic ionospheric perturbations (CIP) from GPS-TEC observations. GPS data in RINEX format from permanent GPS sites have been analyzed based on the following formula to estimate the slant Total Electron Content (sTEC):

sTEC = (1/40.3) × [f1² f2² / (f1² − f2²)] × (L1 λ1 − L2 λ2),   (6)

where f1 and f2 are the carrier frequencies (1575.42 MHz and 1227.60 MHz, respectively), L1 and L2 are the carrier phases and λ1 and λ2 are the corresponding wavelengths. The data sampling interval is 30 s. An elevation mask of 30° is applied to exclude low-elevation satellite measurements. The sTEC is further converted to vertical TEC using the mapping function M as follows [26]:

vTEC = sTEC × M(E), with M(E) = sqrt(1 − [R_E cos(E) / (R_E + h)]²),   (7)

where R_E denotes the Earth's mean radius, E is the satellite elevation angle, and h is the IPP height. To highlight the CIP, the obtained vertical TEC is passed through a bandpass filter of 1 to 10 mHz, which removes the long-period trends in the vertical TEC.

Implementation of the 3D NTFM model for a Trial earthquake source. We consider a Trial earthquake source at northern low latitude (Fig. 1) and compute the effects of the non-tectonic forcing mechanisms at various ionospheric altitudes over the epicenter through our proposed 3D model. The Trial earthquake is assumed to occur on 02 March 2010 at 12:00 UT; it should be noted that this was a geomagnetically quiet day. To start with, we estimate the altitudinal variations of the acoustic wave velocity in the atmosphere over the Trial source location and present them in Fig. S1, along with the neutral atmospheric temperature profile obtained from the NRLMSISE-00 model.
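Eqs. (6) and (7) convert the dual-frequency carrier phases to vertical TEC. The sketch below follows those expressions; the 350 km shell height matches the IPP altitude used later in the analysis, while the array inputs and the handling of the unknown phase ambiguity (left to the subsequent bandpass step) are assumptions.

```python
# Slant TEC from GPS L1/L2 carrier phases (Eq. 6) and thin-shell mapping (Eq. 7).
import numpy as np

C = 299_792_458.0                  # m/s
F1, F2 = 1575.42e6, 1227.60e6      # carrier frequencies, Hz
LAM1, LAM2 = C / F1, C / F2        # wavelengths, m
RE, H = 6371e3, 350e3              # Earth radius and assumed IPP shell height, m

def slant_tec(L1_cycles, L2_cycles):
    """Relative sTEC in TECU (1 TECU = 1e16 el/m^2); carrier-phase TEC carries
    a constant ambiguity, which the 1-10 mHz bandpass filtering later removes."""
    phase_m = L1_cycles * LAM1 - L2_cycles * LAM2
    return (1.0 / 40.3) * (F1**2 * F2**2 / (F1**2 - F2**2)) * phase_m / 1e16

def vertical_tec(stec, elevation_deg):
    """Project sTEC to vTEC with the single-shell mapping function of Eq. (7)."""
    E = np.radians(elevation_deg)
    return stec * np.sqrt(1.0 - (RE * np.cos(E) / (RE + H)) ** 2)
```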
Using the estimated acoustic wave velocity profile, we model the propagation of the seismo-acoustic rays in space and time based on the method described above. Figure 1 demonstrates the modeled propagation of six different rays launched at zenith angles (measured from the vertical) of ~58°, ~38.8°, ~31.6°, ~28.7°, ~27.1°, and ~26.2°, which are the respective threshold angles (beyond which the rays refract downward) for ray propagation to ionospheric altitudes of 120 km, 150 km, 200 km, 250 km, 300 km, and 350 km. The inset shows the variation of the threshold angle and the maximum horizontal distance as functions of atmospheric altitude. It should be noted that the NTFM model computation is performed using rays modeled at every 0.1° of launch angle within the estimated threshold angle at a given altitude. The 2D manifestation of the rays modeled at every 0.1° for all possible launch angles (up to ~58° from the zenith) can be seen in Figs. S3 and S4. Further, at an altitude of 350 km the seismo-acoustic rays can propagate to a radial distance of ~680 km from the ionospheric projection of the Trial earthquake source; we therefore restrict the proposed model computation to a distance of ~600 km surrounding the tectonic source.

We now compute the GNF, EDF, and SGF. The GNF and SGF are computed based on the obtained ray traces in 3D space and time. The GNF depends on two parameters (Eq. 2): the geomagnetic field, and the atmospheric temperature through the acoustic wave velocity. The acoustic wave velocity, as mentioned, is a function of the ambient temperature, which varies with time of day, season, solar activity, and latitude. The geomagnetic field, in contrast, exhibits only secular variations in time but varies significantly with latitude. This can be verified from Fig. S2, which shows the latitudinal variations of the magnetic inclination (dip) over the globe obtained from the IGRF-12 model; the location of the Trial epicenter is also shown in the figure. According to the variations of magnetic inclination, we classify the region between 0° and ~40° as low, ~40° to ~60° as mid, and ~60° and above as high latitude [27-29].

Figure 2 shows the GNF estimated at the ionospheric peak density altitude of 350 km over the Trial seismic source. The red star in the figure demonstrates the ionospheric projection of the Trial epicenter. The 2D schematic in Fig. S3 represents the interaction between the seismo-acoustic rays and the geomagnetic field for the Trial seismic source. Since the source is considered in the northern hemisphere, the geomagnetic field vectors are inclined down from the horizontal, as shown in the figure. The angle between k and b is ~180° towards the equator (case I), and thus the absolute value of GNF ≅ 1 (Eq. 2). Near the Trial source, the angle is more than 90° (case II), while both vectors become almost perpendicular further poleward (case III). The GNF approaches 0 for perpendicular geometry. We present the absolute values of GNF in Fig. 2. It has to be noted that the GNF varies from 0 to 1, representing respectively unfavorable to favorable conditions for the manifestation of ionospheric perturbations from the geomagnetic field geometry point of view. Therefore, the GNF favors the equatorward evolution and propagation of CIP in the Trial case.
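The ray construction above follows Snell's law for a stratified medium: sin θ / v is invariant along a ray, and a ray turns downward at the altitude where sin θ would exceed 1. A minimal sketch with a toy temperature/composition profile (standing in for NRLMSISE-00, which the model actually uses) is given below.

```python
# Acoustic ray tracing in a stratified atmosphere via Snell's law (sketch).
import numpy as np

def sound_speed(z_km, gamma=1.4, R=8.314):
    """Toy T(z) and M(z) standing in for NRLMSISE-00 (illustrative values only)."""
    T = np.interp(z_km, [0, 15, 50, 90, 150, 350], [288, 217, 271, 187, 635, 1000])
    M = np.interp(z_km, [0, 90, 150, 350], [0.0289, 0.0289, 0.0256, 0.0183])
    return np.sqrt(gamma * R * T / M)           # Eq. (1)

def trace_ray(theta0_deg, dz_km=1.0, z_top_km=350.0):
    """March upward layer by layer; stop at the turning point or the top."""
    invariant = np.sin(np.radians(theta0_deg)) / sound_speed(0.0)  # sin(theta)/v
    x = 0.0
    for z in np.arange(dz_km, z_top_km + dz_km, dz_km):
        s = invariant * sound_speed(z)          # sin(theta) in this layer
        if s >= 1.0:                            # ray refracts downward here
            return z - dz_km, x
        x += dz_km * np.tan(np.arcsin(s))       # horizontal advance
    return z_top_km, x

print(trace_ray(25.0))   # with this toy profile a ~25 deg ray reaches ~350 km,
                         # close to the ~26.2 deg threshold quoted for 350 km
```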
Figure 1. Propagation of seismo-acoustic waves in 3D space with time from the Trial seismic source assumed at 25°N 85°E. The propagation of six rays modeled at six different launch angles is shown. The first ray is launched at an angle of ~58°, which is the threshold angle at 120 km altitude: rays with launch angles higher than this refract downward, while those with lower angles propagate further upward. Similarly, the second ray is launched at an angle of ~38.8°, the threshold propagation angle at 150 km altitude, and so on. The inset shows the variation of the threshold angle and maximum horizontal distance with atmospheric altitude. The figure is prepared using the Generic Mapping Tools (GMT) 5.4.4 [43].

The latitudinal gradient of the ambient ionospheric density plays an important role in setting the amplitude of ionospheric perturbations. Where the electron density is higher, the perturbations can evolve with higher amplitudes; reduced density results in perturbations with smaller amplitudes (e.g. during nighttime). To estimate the EDF, we extract TEC from the IGS map on a latitude-longitude grid of 18°N-31°N and 78°E-92°E on 02 March 2010 at 12:00 UT and subsequently interpolate it to obtain the TEC variations at a finer resolution. The EDF is estimated by normalizing the TEC, and therefore the variations are positive and less than 1 (Fig. 2). The figure demonstrates that the electron density gradient is relatively higher south of the Trial epicenter, and thus the CIP in the south may evolve with higher amplitudes. It should be noted that TEC in the IGS maps is computed at an ionospheric altitude of 400 km. In view of the unavailability of electron density measurements at specific ionospheric altitudes, the obtained TEC variations are considered to represent the spatial variation of ionization density at other altitudes as well. Though the ionospheric electron density changes with altitude, the latitudinal density gradient remains more or less the same at all altitudes; therefore, the density gradient derived in terms of the normalized EDF at a given ionospheric altitude may reflect the gradient at other altitudes also. However, during geomagnetic storms the ionospheric-plasmaspheric density changes significantly, so this assumption requires extra caution [30-32].

Next, we compute the SGF using Eq. (4). The GPS satellites orbit in near-circular paths at an altitude of ~20,200 km. On account of such a high orbital altitude and very small eccentricity (<0.02), the satellite azimuth-elevation angles (i.e. the LOS) vary in the same manner at any geographical location at a given time. However, the orientation of the satellite LOS with respect to the seismo-acoustic rays varies with the station-source location. Thus, the SGF has to be estimated at each ground station adjoining the source. The orientation of a satellite LOS with respect to the vertically propagating seismo-acoustic waves can have two extremes. The schematic in Fig. S4 shows the realistic interaction between the vertically propagating seismo-acoustic rays and the satellite LOS in 2D for a station located 400 km from the Trial source. If both vectors (LOS and wave) are oriented at angles ≅ 180°, then the absolute value of the Eq. (3) SGF ≅ 1, and the perturbation phases integrate the CIP amplitudes to zero (case II in Fig. S4). The favorable satellite LOS geometries occur at orientations near 90° with respect to the wave vector (case I in Fig. S4), and the Eq. (3) SGF values ≅ 0 on these occasions. It is noteworthy that the maximum favorable GNF value is 1, while the Eq. (3) SGF is most favorable at 0.
This flip-flop behavior is due to the geometry of the respective vectors involved in the computation of these factors. Since we aim to combine both factors into a single model estimate of the non-tectonic forcing mechanisms surrounding a seismic source, the original mathematical expressions of the two factors are not mutually adequate. Thus, we transform the algebraic expression for the SGF into Eq. (4), which computes values closer to 1 for favorable satellite geometry.

As mentioned, the SGF has to be computed for individual GNSS stations. We assume three stations located at 2°, 4° and 6° south of the Trial source and compute the SGF for all possible satellite LOS from each station (Fig. 2). From Fig. 2, for a station situated near the Trial epicenter (200 km), the GNSS satellite geometry moderately supports the evolution of CIP. The SGF gradually becomes favorable as the station-source distance increases. It should be noted that the SGF computation would manifest in the same fashion for stations located to the north, east, and west. The maximum station-source distance is restricted to within 6° (~600 km), since beyond this distance the seismo-acoustic rays start refracting downward from the ionospheric peak density altitude of 350 km. Further, since the ionospheric peak density altitude is 350 km in the Trial case, we could consider stations out to 6°; in the case of a lower ionospheric peak density altitude, the stations would have to be closer than this.

In the final step, we estimate the NTFM factor at an ionospheric altitude of 350 km using Eq. (5) and present it in Fig. 2. We segregate the model output into three domains: (i) [1, 0.5], highly favorable to favorable; (ii) [0.5, 0.3], favorable to moderately favorable; and (iii) [0.3, 0], moderately favorable to poor. The NTFM factor in Fig. 2 suggests that the evolution of ionospheric perturbations is favored equatorward, especially for GPS stations located at 4° to 6° latitudinal distance from the source. The CIP growth might be hampered towards the north, mainly due to the poor GNF and SGF. Further, for a station located near the source (2°), the CIP evolution towards the equator might be moderate due to the moderate SGF. It should be noted that Fig. 2 presents the modeled NTFM factor along with the GNF, EDF, and SGF at the single altitude of 350 km. We extend this analysis to various atmospheric altitudes, from 120 km to 300 km, and present the modeled 3D NTFM factor in Fig. 3. It should be noted that only ionospheric altitudes of ~110-120 km and above can host ionospheric perturbations, because the ionization density is very small below these altitudes; the NTFM factor computation is therefore started at 120 km. The next atmospheric altitude is considered at 150 km, and then at every 50 km the factor is computed for stations located at 2°, 4°, and 6° latitude from the Trial source (Fig. 3). In Fig. 3, for a station located close to the source (at 2°), the non-tectonic effects are more favorable at lower ionospheric altitudes than at higher ones, whilst for a station located at 4° the NTFM factor favors the evolution of ionospheric perturbations at higher ionospheric altitudes. The favorable NTFM factor gradually weakens as the station distance increases beyond 4° from the source, as can be verified from the station located at 6°. It is important to note that the NTFM factor and its horizontal extent vary significantly with altitude.
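Since Eq. (5) is a pointwise product of three factors already normalized to [0, 1], the combination and the three-domain segregation used above reduce to a few lines; the grids below are hypothetical placeholders, not model output.

```python
# NTFM factor (Eq. 5) and the three-domain segregation used in the text (sketch).
import numpy as np

def ntfm(gnf, sgf, edf):
    """All inputs normalized to [0, 1]; the product inherits the same range."""
    return gnf * sgf * edf

def domain(value):
    if value >= 0.5:
        return "highly favorable to favorable"        # [1, 0.5]
    if value >= 0.3:
        return "favorable to moderately favorable"    # [0.5, 0.3]
    return "moderately favorable to poor"             # [0.3, 0]

# Hypothetical lat-lon grids of the three factors around a source:
rng = np.random.default_rng(0)
GNF, SGF, EDF = (rng.random((61, 61)) for _ in range(3))
grid = ntfm(GNF, SGF, EDF)
print(domain(grid.max()), domain(grid.mean()))
```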
The variations in the horizontal extent are attributed to the atmospheric refraction of the seismo-acoustic rays (Figs. 1, S3 and S4). Since the GNF and SGF computations are based on the propagation of the seismo-acoustic rays, the horizontal extent of the NTFM model output evidently reflects the wave propagation characteristics at the various atmospheric altitudes. Further, the altitudinal variations of the NTFM factor (favorable and vice versa) depend on the manifestations of the GNF and SGF. The geomagnetic field inclination and declination do not vary significantly with altitude, but the propagation angles of the seismo-acoustic rays vary considerably; therefore the GNF, computed from these two parameters (Eq. 2), varies with altitude. For the SGF, the seismo-acoustic wave vectors, the satellite LOS, and the station-source distance are the key parameters. In the Trial case, for a station located at 2°, the favorable satellite lines of sight happen to be those with higher elevations (83°-90°) that align suitably with the upward propagating seismo-acoustic rays (parallel to the wavefronts) and thus favor the evolution of CIP. For stations located at 4° and 6°, the favorable lines of sight occur for elevation ranges of ~46° to ~90° (Fig. S4) and ~33° to ~90°, respectively. The varying alignments of such satellite geometries with the vertically propagating seismo-acoustic rays significantly control the altitudinal behavior of the NTFM factor through the SGF.

Validation. In this section, we validate our proposed NTFM model based on the near-field coseismic ionospheric response to actual earthquake events that occurred at different latitudes, namely the Iquique (IQ), Illapel (IL), Nepal (NP), Sumatra (SU), and New Zealand (NZ) earthquakes. The first is the Mw 8.2 IQ, Chile, earthquake of 1 April 2014 [34], whose hypocenter was at 19.61°S 70.77°W at a depth of ~25 km (https://earthquake.usgs.gov). Extensive studies on the source characteristics, constrained by various slip models and data, suggest that the seismic rupture propagated south of the epicenter during this offshore event [34-38]. From a geomagnetic coordinate point of view, the IQ earthquake can be categorized as a near-equatorial earthquake (~−15.16° magnetic inclination); Figure S2 may be referred to for more information on this. The geographical map in Fig. 4a shows the location of the IQ epicenter and the earthquake fault mechanism (thrust). The figure also contains the static horizontal and vertical displacement fields estimated using nearby GPS geodetic observations. The displacement fields, presented with horizontal and vertical arrows, exhibit profound westward and downward movements at the coast; the maximum westward and downward movements, of ~57.08 cm and ~15.91 cm respectively, were observed at the atjn station. Further, GPS stations south of the epicenter also observed significant displacements. The slip variations, along with the estimated displacement fields, corroborate the maximum energy propagation south of the epicenter. Therefore, the coseismic ionospheric response should be more intense in the south during this event.

Figure 4b-e demonstrate the ionospheric variations over the IQ source region. GPS satellites with Pseudo Random Noise (PRN) codes 01, 20, and 23, from the stations shown in Fig. 4b, were found to be suitable to observe the near-field CIP. This can be verified from the IPP tracks of these satellites in Fig. 4b: the satellites adequately cover the ionosphere over and around the IQ earthquake fault region. Figure 4c-e show the temporal evolution of the CIP as observed by these PRNs. The CIP are derived as detrended vertical TEC.
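The detrending step just mentioned (the 1-10 mHz bandpass of the methodology) can be sketched with a zero-phase Butterworth filter; the 30 s sampling follows the text, while the filter order and the use of filtfilt are our assumptions.

```python
# Extract CIP as 1-10 mHz bandpassed vertical TEC (sketch).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1.0 / 30.0                    # 30 s sampling interval
LOW, HIGH = 1e-3, 10e-3            # acoustic-gravity wave passband, Hz

def detrend_vtec(vtec, order=4):
    nyq = FS / 2.0
    b, a = butter(order, [LOW / nyq, HIGH / nyq], btype="bandpass")
    return filtfilt(b, a, vtec)    # zero-phase: preserves CIP arrival times

# Example: 2 h of synthetic vTEC = slow trend + a 4 mHz CIP-like oscillation.
t = np.arange(0.0, 7200.0, 30.0)
vtec = 20.0 + 1e-3 * t + 0.2 * np.sin(2 * np.pi * 4e-3 * t)
cip = detrend_vtec(vtec)           # recovers the 0.2 TECU, 4 mHz component
```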
Each time series is presented with the same color as that of the respective satellite track. PRNs 23 and 01 captured clear and intense CIP far north and south of the epicenter; however, the satellites recorded relatively feeble CIP closer to the epicenter. Interestingly, the CIP recorded by PRN 01 from the far-north stations areq, atic and nzca were more intense. In contrast, PRN 20 recorded larger CIP amplitudes south of the epicenter. Given that most of the seismic energy propagated south of the epicenter, the CIP should have evolved more strongly in the south than in the north; however, the maxima of the CIP amplitudes, as recorded by PRNs 23 and 01, were observed far north of the epicenter.

We attempt to quantify the captured ionospheric imprints of the seismic energy during the IQ event in light of the manifestation of the non-tectonic forcing mechanisms surrounding the earthquake source region. We run the proposed 3D NTFM model to compute the GNF, EDF, and SGF and their combined effects as the NTFM factor, and present the model output at an IPP altitude of 350 km in Fig. 5. The red star denotes the ionospheric projection of the IQ epicenter. From the estimated GNF variations, the geomagnetic field favored the CIP evolution more in the north than in the south during the IQ earthquake; moreover, the region far north was relatively more favorable for the evolution of CIP from the geomagnetic field geometry point of view. The EDF exhibited relatively higher density in the south at the earthquake occurrence time. The modeled SGF variations in Fig. 5 suggest that the GPS satellite geometry from stations closer to the IQ epicenter (200 km and less) obstructed the CIP evolution; in contrast, the SGF considerably favored the CIP evolution for stations located 200-400 km both north and south of the epicenter. Finally, the combined NTFM factor in Fig. 5 suggests that stations located within 200 km of the IQ epicenter were not adequate to capture the seismic energy imprints during the event, whereas distant stations could capture this energy in a more profound manner. We assume that the poor manifestation of the NTFM factor for the nearby GNSS stations (<200 km) did not allow the maximum released seismic energy to evolve in the ionosphere. In the north, the CIP amplitudes remained higher due to the favorable non-tectonic forcing mechanisms. Significant CIP amplitudes were observed in areq-PRN 01 and areq-PRN 23; from Fig. 5, the NTFM factor lay in the favorable to moderately favorable domain for the evolution of CIP at the IPP locations of these PRNs. Near the IQ epicenter, comparatively smaller CIP amplitudes were observed by iqqe-PRN 01, iqqe-PRN 20, and iqqe-PRN 23, in accordance with the poor NTFM factor. However, irrespective of similar or more favorable non-tectonic forcing mechanisms, the CIP amplitudes remained smaller from the jrgn station than from areq; this can be attributed to the larger distance between the ionospheric projection of the IQ epicenter and the IPPs from the jrgn station. Thus, our proposed 3D NTFM model, in light of the prevailing non-tectonic forcing mechanisms, effectively explains the ionospheric variations induced by an earthquake nucleating at near-equatorial latitude.

The IL earthquake is another great subduction earthquake; it ruptured the offshore region of central Chile, with a hypocenter located at 31.57°S 71.67°W at a depth of ~22.3 km (https://earthquake.usgs.gov).
The geographical map in Fig. 6a shows the IL earthquake epicenter location, the fault mechanism (thrust), and the static horizontal and vertical displacement fields estimated using the geodetic observations from nearby GPS sites. From the figure, significant trenchward (westward) horizontal displacement is evident at all GPS sites. The maximum horizontal displacement, of ~1.43 m, was observed at the pfrj GPS station. A vertical uplift of ~2.67 cm was recorded at the cnba GPS site, while the other GPS sites subsided by a few millimeters to centimeters. This suggests that the offshore coseismic slip distribution was significant below the pfrj and cnba GPS stations [39]. From the estimated displacement fields and the reproduced slip distribution, it is obvious that most of the seismic moment was released north of the epicenter during this event.

The IL earthquake was triggered in the geomagnetic low latitude region; the magnetic inclination at the epicenter location is ~−32.43° (see Fig. S2). We thus classify this earthquake as a low-latitude earthquake. The TEC observed by PRNs 25, 12, and 24 from the GPS stations shown in Fig. 6b was scrutinized for coseismic ionospheric variations during the IL event. The IPP trajectories of these PRNs at the earthquake occurrence time are also shown in the figure. From Fig. 6c-e, all three PRNs could capture discernible CIP surrounding the epicenter; however, higher CIP amplitudes were observed north of the epicenter.

To quantify the effects of the non-tectonic forcing mechanisms on the observed CIP manifestations, we run our 3D NTFM model for this earthquake source located in the low latitude region. The model output, estimated at an IPP altitude of 350 km, in terms of the GNF, EDF, SGF, and NTFM factor, is presented in Fig. 7. The GNF behavior demonstrates that the geomagnetic field favored the CIP evolution more in the north during the IL earthquake; the ambient electron density gradient was also higher towards the north. The modeled SGF variations corroborate the earlier results for the Trial and IQ earthquakes and suggest that GPS stations located within ~200-400 km of the epicenter are more suitable for observing the seismic energy imprints in the ionosphere. The estimated NTFM factors for the cmpn, sant and pmqe stations indicate that the CIP evolution in the north was highly favored by the non-tectonic forcing mechanisms, whereas the CIP could not develop in the south, mainly due to the poor GNF and EDF. More clarity on this can be obtained by examining the NTFM factor manifestation along the respective satellite tracks from the cmpn, sant and pmqe GPS stations in Fig. 7. From Figs. 6 and 7, the CIP evolution preferentially follows the seismic energy propagation during the IL event.

The NP earthquake ruptured the Main Himalayan Thrust (MHT) and was the largest earthquake to occur in the central Himalayas since the 1934 Bihar-Nepal earthquake. The earthquake nucleated at 28.23°N 84.73°E at a depth of ~8.2 km on 25 April 2015 at ~06:11 UT (https://earthquake.usgs.gov). Figure 8a shows the location of the NP epicenter and the fault mechanism (thrust); the arrows demonstrate the static horizontal and vertical displacement fields estimated using nearby GPS geodetic observations. A horizontal movement of ~1.75 m towards the south-southwest and an uplift of ~1.17 m were observed at the kkn4 station, while the chlm GPS station subsided by ~0.54 m. From Fig.
From Fig. 8a, most of the seismic energy during the NP earthquake propagated east-southeast of the epicenter [5,9]. Based on the magnetic inclination variations (Fig. S2), the NP earthquake can be categorized as a low-to-mid latitude event (magnetic inclination ~44.31°). From Fig. 8b-e, the GPS satellites with PRN codes 16, 23, and 26 were favorably located over the NP fault region during the earthquake period and could capture significant ionospheric signatures from the GPS stations located around the epicenter. It should be noted that PRNs 16, 23, and 26 observed significant CIP amplitudes southeast of the epicenter; this can be verified from the IPP tracks of these PRNs in Fig. 8b and the temporal evolution of CIP recorded by the respective PRNs in Fig. 8c-e, especially the CIP time series of ptna-16, bhgp-23, and ptna-23. However, the CIP observed north and southwest of the epicenter did not exhibit comparable amplitudes. The observed coseismic ionospheric signatures during the NP earthquake are then examined for the effects of the non-tectonic forcing mechanisms. We estimate the GNF, EDF, SGF, and the total effect of all three based on the proposed model and present the output at an IPP altitude of 350 km in Fig. 9. The GNF favors the CIP evolution south of the NP epicenter, i.e., equatorward, and impedes the evolution in the north. The GNF estimated during the NP earthquake corroborates well that of the Trial source. From the estimated EDF over the NP seismic source, the electron density was higher southeast of the epicenter. For the SGF estimation, we select the GPS stations chlm, smkt, and ptna. The modelled SGF was favorable for the manifestation of ionospheric signatures from stations located at epicentral distances of ~200 km and beyond (i.e., smkt and ptna), whereas the SGF was moderate to poor for stations closer than this. The estimated SGF also corroborates well that of the Trial source. The NTFM model output in Fig. 9 demonstrates that the non-tectonic forcing mechanisms favored the CIP evolution south of the epicenter during the NP earthquake, whereas the CIP in the north were not supported by these mechanisms. Therefore, for a source located in the northern low latitudes, if seismic energy propagates south of the epicenter (equatorward), then the induced ionospheric perturbations can be well captured by the stations located south of the seismic source. It should be noted that, although the NTFM factor was favorable south of the NP epicenter, the CIP preferentially evolved in the southeast, which substantiates the seismic rupture propagation in the same direction.
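The recurring finding that the SGF favors stations roughly 200-400 km from the epicenter can be turned into a simple screening step. The sketch below uses the standard haversine distance; the 200/400 km bounds come from the text, while the station coordinates are made up for illustration.

import numpy as np

def epicentral_distance_km(lat1, lon1, lat2, lon2):
    # Great-circle (haversine) distance in km.
    R = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp = p2 - p1
    dl = np.radians(lon2 - lon1)
    a = np.sin(dp / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2)**2
    return 2 * R * np.arcsin(np.sqrt(a))

def sgf_favourable(dist_km, inner=200.0, outer=400.0):
    # Flag whether an IPP (or station) falls inside the annulus that the
    # modelled SGF finds favourable for observing CIP.
    return (dist_km >= inner) & (dist_km <= outer)

# Example: NP epicenter against two hypothetical station/IPP coordinates
ep_lat, ep_lon = 28.23, 84.73
for name, (la, lo) in {"near": (28.6, 85.1), "annulus": (26.0, 86.0)}.items():
    d = epicentral_distance_km(ep_lat, ep_lon, la, lo)
    print(name, round(float(d), 1), "km, favourable:", bool(sgf_favourable(d)))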
We extend a similar analysis to the SU (near-equatorial latitude) and NZ (mid-latitude) earthquakes. The SU earthquake was one of the largest strike-slip events ever recorded. The rupture process of this earthquake was reported to be quite complex, with a sequence of ruptures propagating westward of the epicenter [40,41]. The strike-slip faults that ruptured were mainly trending west-northwest to east-southeast and north-northwest to south-southwest. Figure S5 shows the coseismic ionospheric response to this event as observed by PRNs 32, 16, and 06 from various Sumatran GPS Array (SuGAR) stations. It can be noted that the PRNs orbiting north and south of the epicenter recorded discernible CIP, whereas the CIP remained quite feeble near the epicenter (e.g., umlh-06, bthl-16). The NTFM factor, along with the GNF, EDF, and SGF during the SU event, is shown in Fig. S6. Similar to the IQ earthquake, the NTFM factor during the SU event favored the CIP evolution north and south of the epicenter and hampered it near the epicenter. Interestingly, the CIP amplitudes were consistently larger towards the north, corroborating the fact that the maximum vertical displacement during this event was observed north of the epicenter [4,42]. Despite the complex rupture process during the SU event, the coseismic ionospheric response could reflect the maximum tectonic energy propagation reasonably well. Bagiya et al. [7] analyzed the NZ earthquake and explained the coseismic ionospheric response based on the proposition of tectonic thrust orientations as distinct sources responsible for the peculiar azimuthal propagation of CIP surrounding the Kaikoura epicenter. This novel proposition was derived by taking the non-tectonic forcing mechanisms into account; however, they could not propose any mechanism to quantify all three non-tectonic effects collectively. It may be recalled that the NZ earthquake was a mid-latitude event. We reanalyzed this event using our proposed model and considering the Campbell Coseismic Thrust Zone as the source of CIP [7]. We found that during the NZ event, too, the non-tectonic forcing mechanisms support the equatorward evolution of seismically induced ionospheric perturbations. This can be realized from the evolution of CIP over the NZ fault region presented in Fig. S7; the NTFM model output during the NZ earthquake is presented in Fig. S8 to provide better clarity on this. In the final step, we evaluate the correlation between the peak-to-peak amplitude variations of CIP and the corresponding variations of the NTFM factor at an IPP altitude of 350 km for each satellite-station pair shown in Figs. 4, 6, 8, S5 and S7, and present the results in Fig. 10. The correlation analysis is presented for the different latitude conditions; the figure therefore contains the three sets of events analyzed in this case study, i.e., near-equatorial, low, and mid latitude. The region around the ionospheric projection of each epicenter is divided into concentric circles with a 100 km span, and the correlation is estimated in each ring from a radial distance of 100 km onward. The ionospheric perturbations corresponding to each event are presented with a specific symbol for easy classification. It can be noted that the correlation between the CIP amplitudes and the corresponding NTFM factor holds fairly well in each distance range; however, the amplitudes weaken beyond a radial distance of 400 km in the present case study. We propose that beyond this threshold distance, the direct propagation of epicentral energy may not be adequate to induce significant ionospheric signatures even if the NTFM factor is favorable. This allows us to provide a first estimate of the threshold distance for CIP evolution from the viewpoint of direct epicentral energy propagation and the impact of non-tectonic forcing around the epicenter. The correlation analysis for radial distances <100 km was not feasible owing to a lack of sufficient CIP observations. Further, during the NZ earthquake, very few CIP were observed beyond 400 km, so no correlation analysis is presented there.
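The binned correlation of Fig. 10 can be reproduced schematically as follows. The sketch groups satellite-station pairs into 100-km-wide rings around the epicenter's ionospheric projection and computes the linear-regression correlation coefficient R between CIP amplitude and NTFM factor in each ring; the input arrays here are synthetic stand-ins, not the observed values.

import numpy as np

def binned_correlation(dist_km, cip_amp, ntfm_pct, edges=range(100, 800, 100)):
    # Correlation coefficient R between peak-to-peak CIP amplitude and
    # NTFM factor (percent) inside 100-km-wide concentric rings.
    edges = np.asarray(list(edges), dtype=float)
    out = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (dist_km >= lo) & (dist_km < hi)
        if sel.sum() < 3:            # too few CIP observations in this ring
            out[(lo, hi)] = None
            continue
        out[(lo, hi)] = np.corrcoef(ntfm_pct[sel], cip_amp[sel])[0, 1]
    return out

# Synthetic demonstration: amplitude loosely proportional to the NTFM factor
rng = np.random.default_rng(0)
d = rng.uniform(100, 700, 200)
ntfm = rng.uniform(10, 100, 200)
amp = 0.02 * ntfm * np.exp(-d / 800.0) + rng.normal(0, 0.2, 200)
for ring, r in binned_correlation(d, amp, ntfm).items():
    print(ring, None if r is None else round(float(r), 2))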
From the present study, it should be noted that for earthquakes occurring at near-equatorial latitudes, the NTFM factor favors the evolution of CIP north and south of the epicenter (>200 km) and not near the epicenter; the coseismic ionospheric variations during the IQ and SU earthquakes verify this. Further, for earthquakes nucleating at low and mid latitudes, the NTFM factor favors the CIP manifestation equatorward; the coseismic ionospheric responses to the NP, IL and NZ earthquakes substantiate this. For a quick review of this study, the response of the NTFM model for earthquakes occurring at different latitudes is summarized in Table 1.

Conclusion

A 3D model to map the combined effects of the non-tectonic forcing mechanisms of the geomagnetic field, GNSS satellite geometry, and ambient electron density gradient is proposed for the first time. The 3D NTFM model can compute these effects at various ionospheric altitudes, depending on the propagation characteristics of seismo-acoustic rays. The model not only successfully explains the ionospheric manifestations during seismic events occurring at different latitudes but also cautions that any correlation between the seismic source manifestations at the ground and the corresponding ionospheric perturbations could be erroneous if the effects of these mechanisms are not quantified. Further, the threshold distance from the viewpoint of direct epicentral energy propagation and the impact of non-tectonic forcing around the epicenter is modeled for the first time. It should be noted that the proposed 3D model is specifically designed for the spatial analysis of GPS-TEC derived seismically induced ionospheric perturbations. The preparation of a GUI-based online version of the proposed model is in progress to facilitate researchers in quantifying the non-tectonic effects on ionospheric perturbations during earthquakes occurring at various latitudes; this model will shortly be available online at iigm.res.in. It is believed that the proposed study will assist researchers in identifying the non-tectonic effects on GNSS-measured CIP without performing any further quantitative analysis.

Data availability

GPS-TEC data over the Chile and Nepal regions are obtained from http://gps.csn.uchile.cl/ and https://www.unavco.org/data/, respectively.

Figure 9. 2D manifestation of GNF, EDF, SGF, and their collective effects as the NTFM factor at an IPP altitude of 350 km surrounding the NP earthquake epicenter. The SGF is estimated at the GPS stations chlm, smkt, and ptna, located at respective distances of ~50 km, ~195 km, and ~300 km from the epicenter. The figure is prepared using GMT 5.4.4 [43].

Figure 10. Correlation between the peak-to-peak CIP amplitude variations and the corresponding values of the NTFM factor computed at the respective IPP altitudes for near-equatorial, low and mid latitude earthquakes. The correlation is estimated at radial distances of (a) 100-200 km, (b) 200-300 km, (c) 300-400 km, (d) 400-500 km, (e) 500-600 km and (f) 600-700 km from the ionospheric projection of the respective epicenters. R is the correlation coefficient, computed from a linear regression fit to the data. The NTFM factor variations are presented in percentage. The symbol color represents the distance of each individual CIP from the ionospheric projection of the respective epicenter. IQ, SU, IL, NP, and NZ denote the Iquique, Sumatra, Illapel, Nepal, and New Zealand earthquakes, respectively.
9,206.4
2019-12-01T00:00:00.000
[ "Geology", "Environmental Science", "Physics" ]
Nonlinear correction schemes for the phase 1 LHC insertion region upgrade and dynamic aperture studies

The phase 1 LHC interaction region (IR) upgrade aims at increasing the machine luminosity essentially by reducing the beam size at the interaction point. This requires a total redesign of the full IR. A large set of options has been proposed, with conceptually different designs. This paper reports on a general approach for the compensation of the multipolar errors of the IR magnets in the design phase. The goal is to use the same correction approach for the different designs. The correction algorithm is based on the minimization of the differences between the IR transfer map with errors and the design IR transfer map. Its performance is tested using the dynamic aperture as a figure of merit. The relation between map coefficients and resonance terms is also given as a way to target particular resonances by selecting the right map coefficients. The dynamic aperture is studied versus magnet aperture using recently established relations between magnetic errors and magnet aperture.

I. INTRODUCTION

The design of the interaction region (IR) of a circular collider is one of the most critical issues for the machine performance. Many constraints have to be satisfied at the same time, and the parameter space to be studied is huge [1,2]. The strong focusing required to increase the luminosity generates large values of the beta functions at the triplet quadrupoles. This in turn enhances the harmful effects of the magnets' field quality on the beam dynamics. It is therefore customary to foresee a system of nonlinear corrector magnets to perform a quasilocal compensation of the nonlinear aberrations. This is the case for the nominal LHC ring, for which corrector magnets are located in the Q1, Q2, and Q3 quadrupoles, the latter including nonlinear corrector elements. The strategy for determining the strength of the correctors was presented in Ref. [3] and is based on the compensation of those first-order resonance driving terms that were verified to be dangerous for the nominal LHC machine. The proposed approach is based on a number of assumptions that are valid for the nominal LHC machine, but not necessarily true for the proposed upgrade scenarios [4,5], such as perfect antisymmetry of the IR optics between the two beams circulating in opposite directions. Indeed, some LHC upgrade options may not respect the antisymmetry of the IR optics between the two beams, and the set of dangerous resonances might not be the same as for the nominal LHC, or might even differ among the LHC upgrade options. Furthermore, it might be advisable to use a method that takes into account all possible sources of nonlinearities within the IR, such as the field quality of the separation dipoles and also collective beam effects like the long-range beam-beam interactions. For these reasons a more general correction algorithm should be envisaged, allowing a direct and straightforward application to any of the upgrade options or, more generally, to any section of an accelerator. The proposed method is based on the analysis of the nonlinear transfer map for a given section of a particle accelerator. It is therefore conceived for the design phase, when the magnetic errors have been measured. For an experimental setting of the corrector circuits, other methods have been proposed [6-9].
The essential details about the nonlinear effects of the elements comprised in the section of the machine under consideration are retained in the nonlinear transfer map over one turn. For this reason the one-turn transfer map was proposed as an early indicator of single-particle instability, with a reasonable correlation with the dynamic aperture [10-12]. This method relies on developments in normal form theory, e.g., [13], which have also been the basis of other local correction schemes in the past, e.g., Refs. [14,15] for the Superconducting Super Collider arcs or Ref. [16] for the Large Hadron Collider (LHC). Different free parameters to minimize the selected figure of merit, as well as different observables to benchmark the effectiveness of the proposed correction, such as analytical and numerical smears or detuning with amplitude, were studied. In the next sections the proposed method is described and some applications to phase 1 LHC upgrade layouts are given, including an analysis of the effectiveness of the methods using the dynamic aperture (DA) as a figure of merit.

II. MATHEMATICAL BACKGROUND

The transfer map between two locations of a beam line is expressed in the form

$\vec{x}_f = \sum_{jklmn} \vec{X}_{jklmn}\, x_0^j\, p_{x0}^k\, y_0^l\, p_{y0}^m\, \delta^n,$  (1)

where $\vec{x}_f$ represents the vector of final coordinates $(x_f, p_{xf}, y_f, p_{yf}, \delta_f)$, the initial coordinates are denoted with the zero subindex, and $\vec{X}_{jklmn}$ is the vector containing the map coefficients for the four phase-space coordinates and the momentum deviation $\delta$, considered as a parameter. The MAD-X [17] program together with the polymorphic tracking code (PTC) [18] provides the computation of the quantities $\vec{X}_{jklmn}$ up to the desired order. To assess how much two maps, $X$ and $X'$, deviate from each other, the following quantity is defined:

$\chi^2 = \sum_{jklmn} \|\vec{X}_{jklmn} - \vec{X}'_{jklmn}\|^2,$  (2)

where $\|\cdot\|$ stands for the quadratic norm of the vector. To disentangle the contribution of the various orders to the global quantity $\chi^2$, the partial sum $\chi^2_q$ over the map coefficients of order $q$ is defined, namely,

$\chi^2_q = \sum_{j+k+l+m+n=q} \|\vec{X}_{jklmn} - \vec{X}'_{jklmn}\|^2.$  (3)

The definition of $\chi^2_q$ can easily be extended to introduce a weighting of the different terms, using characteristic distances and divergences to compute the weights, or simply to select the terms of most relevance. The applications described in this paper use equal weights for all terms. Furthermore, $\chi^2_q$ is split into a chromatic contribution $\chi^2_{q,c}$ (the terms with $n \ge 1$) and an achromatic contribution $\chi^2_{q,a}$ (the terms with $n = 0$). It is immediate to verify that $\chi^2_q = \chi^2_{q,c} + \chi^2_{q,a}$. Throughout this paper only the achromatic part will be considered, since it typically dominates the particle stability in circular machines.
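As a concrete illustration, the partial sum χ²_q can be computed directly from the map coefficients once they are parsed from MAD-X/PTC output. The sketch below assumes the coefficients are held in dictionaries keyed by the exponent tuple (j, k, l, m, n); this data layout is our own choice, not MAPCLASS's.

import numpy as np

def chi2_q(X, Xp, q, achromatic_only=True):
    # Partial sum chi^2_q between two transfer maps.  X, Xp map the
    # exponent tuple (j, k, l, m, n) to the 4-vector of coefficients
    # (x, px, y, py).  Order q means j+k+l+m+n == q; the achromatic
    # part keeps only the n == 0 terms.
    total = 0.0
    for key in set(X) | set(Xp):
        j, k, l, m, n = key
        if j + k + l + m + n != q:
            continue
        if achromatic_only and n != 0:
            continue
        a = np.asarray(X.get(key, np.zeros(4)), dtype=float)
        b = np.asarray(Xp.get(key, np.zeros(4)), dtype=float)
        total += float(np.sum((a - b) ** 2))
    return total

# Tiny example: a single sextupolar (order-2) aberration in x
X_design = {(1, 0, 0, 0, 0): [1.0, 0.0, 0.0, 0.0]}
X_errors = {(1, 0, 0, 0, 0): [1.0, 0.0, 0.0, 0.0],
            (2, 0, 0, 0, 0): [3e-4, 0.0, 0.0, 0.0]}
print(chi2_q(X_design, X_errors, q=2))   # 9e-08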
A. Relation to resonance driving terms

This section gives an illustrative first-order relation between achromatic map coefficients and resonance driving terms. This allows comparing the new approach with other correction algorithms based on the minimization of important resonance terms. Eventually these relations could also be used to target particular resonances by minimizing the right collection of map coefficients. However, in practice MAD-X and PTC are used to provide the map coefficients to the desired order, including all feed-down and feed-up effects. Transverse coupling is assumed to be a perturbation; however, adequate coordinate transformations could take stronger coupling into account. In the Hamiltonian formalism, Eq. (1) is written as

$\hat{x}_f = e^{:h:}\, M\, \hat{x}_0,$

where $h$ is the nonlinear Hamiltonian, $M$ represents the linear transport, and $\hat{x}$ denotes the normalized coordinates. The Hamiltonian $h$ is expanded in the resonance driving terms as

$h = \sum_{pqrs} h_{pqrs}\, \hat{x}_+^p\, \hat{x}_-^q\, \hat{y}_+^r\, \hat{y}_-^s,$

$\hat{x}_\pm$ being the complex normalized coordinates, $\hat{x}_\pm = \hat{x} \pm i\hat{p}_x$ (and similarly for $\hat{y}_\pm$). Using the previous expressions in Eq. (6) yields an equivalent of Eq. (1) expressed in terms of resonance terms instead of map coefficients, where c.c. stands for complex conjugate and $\Delta\mu_x$ is the phase advance between the two locations. To relate resonance terms and map coefficients it is then enough to take the right derivatives to isolate the term of interest. For example, after some algebra the sextupolar map coefficient $X^x_{20000}$ is found to depend on the resonance driving terms $h_{3000}$ and $h_{2100}$, which drive the (3,0) and (1,0) resonances, respectively. It can be proved that the coefficient $X^x_{pq000}$ depends on the same terms as $X^x_{(p+q)0000}$. The number of resonances involved in the relation increases linearly with the order of the map coefficient. Therefore, minimizing local map coefficients implies a minimization of a collection of resonances. Hence, this approach might be useful when the knowledge of the full accelerator is limited. For completeness, a more general coupling map coefficient can be expressed as a function of resonance terms, as a sum of contributions of the form $r\,h_{r(p-r)s(q-s)} + \mathrm{c.c.}$, showing the features already described.

A. Algorithm

The basic assumption is that the multipolar field errors of the IR magnets are available as the results of magnetic measurements. The ideal IR map $X$ without errors is computed using MAD-X and PTC to the desired order and stored for later computations. Including the magnetic errors of the IR elements perturbs the ideal map. To cancel or compensate this perturbation, distributed multipolar correctors need to be located in the IR. Throughout this paper we assume correctors adjacent to the triplet quadrupoles; the corrector choice is based on performance. The map including both the errors and the effect of the correctors is denoted $X'$. The corrector strength is determined by simply minimizing $\chi^2_q$ for these two maps. For efficiency, the minimization is accomplished order by order (see, e.g., Ref. [19] for a description of the dependence of the various orders of the nonlinear transfer map on the nonlinear multipoles). In such an approach the sextupolar correctors are used to act on $\chi^2_2$, the octupolar ones on $\chi^2_3$, and so on. The code MAPCLASS [20], already used in [21], has been extended to compute $\chi^2_q$ from MAD-X output. The correction is achieved by the numerical minimization of $\chi^2_q$ using any of the existing algorithms in MAD-X for this purpose.
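A minimal sketch of the order-by-order strategy is given below. The function eval_map, which would wrap a MAD-X/PTC evaluation of the IR map for given corrector strengths, is a hypothetical interface introduced only for illustration, and chi2_q is the partial-sum routine sketched earlier.

import numpy as np
from scipy.optimize import minimize

def correct_order_by_order(eval_map, design_map, corrector_orders, chi2_q):
    # eval_map(strengths) -> map-coefficient dict for the IR with errors
    # and with the correctors powered at `strengths` (hypothetical
    # stand-in for a MAD-X/PTC run).  corrector_orders maps each order q
    # to the indices of the strengths vector acting at that order
    # (sextupolar correctors on q=2, octupolar on q=3, ...).
    n = sum(len(ix) for ix in corrector_orders.values())
    strengths = np.zeros(n)
    for q in sorted(corrector_orders):
        idx = list(corrector_orders[q])

        def objective(s, q=q, idx=idx):
            trial = strengths.copy()
            trial[idx] = s
            return chi2_q(eval_map(trial), design_map, q)

        res = minimize(objective, strengths[idx], method="Nelder-Mead")
        strengths[idx] = res.x   # freeze this order, move to the next
    return strengths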
B. Performance evaluation

The evaluation of the performance of the method previously described is carried out using two of the three layouts proposed for the upgrade of the LHC insertions (see, e.g., Refs. [2,4,5,22] for details on the various configurations under consideration). The field quality of the low-beta triplets is considered to follow the assumption reported in Ref. [23]. This implies that the various multipole components $b_n$, $a_n$, given by

$B_y + i B_x = B_2 \sum_n (b_n + i a_n) \left(\frac{x + i y}{R_{\mathrm{ref}}}\right)^{n-1},$  (15)

where $B_x$, $B_y$ represent the transverse components of the magnetic field and $B_2$ the main field at the reference radius $R_{\mathrm{ref}}$, scale down linearly with the reference radius, taken at a given fraction of the magnet aperture $\Phi$, according to [23]

$(b_n, a_n)(\kappa\Phi, \kappa R_{\mathrm{ref}}) = \kappa^{-1}\,(b_n, a_n)(\Phi, R_{\mathrm{ref}}),$  (16)

where $(b_n, a_n)(\Phi, R_{\mathrm{ref}})$ stands for the random field components of order $n$ and $\kappa$ represents any scaling factor of the magnet aperture and the reference radius. As a natural consequence, large-bore quadrupoles will feature a better field quality than smaller-aperture ones. The multipolar components used for the simulations discussed in this paper are listed in Table I.

TABLE I. Random part of the relative magnetic errors of the low-beta quadrupoles at 17 mm radius [24]. The components $b_n$ and $a_n$ stand for normal and skew multipolar errors, respectively.

An example of the order-by-order correction is shown in Fig. 1 for the so-called lowmax configuration [2,5,22]. A total of 60 realizations of the LHC lattice are used in the computations. It is worth stressing that, even though the random errors are Gaussian distributed with zero mean and sigma given by the values in Table I rescaled to the appropriate value of the magnet aperture, the limited statistics used to draw the values for a single realization implies that in reality nonzero systematic errors are included in the simulations. One corrector per IR side and per type (normal and skew component) is used. Different locations of the nonlinear correctors can be used for the minimization of $\chi^2_q$. The configuration having the lowest $\chi^2_q$ after correction is selected for additional studies (see the next section). The difference between a nonoptimized positioning and the best possible one is illustrated in Fig. 2, where the results of the proposed correction scheme for a symmetric configuration (see Refs. [2,4,22]) are shown. The configuration corresponding to the gray dots achieves slightly better corrections over the ensemble of realizations and is therefore selected for further studies. Both configurations use normal and skew sextupole and dodecapole correctors. The worse configuration uses correctors between Q2A and Q2B and between Q2B and Q3, while the better configuration uses correctors between Q2A and Q2B and after Q3; the worse configuration also required considerably larger strengths. It is worth mentioning that the aperture of the triplet magnets Q2,3 in the two scenarios, lowmax and symmetric, is 130 mm, while Q1 is 90 mm for lowmax and 130 mm for symmetric.

A. Assessment of the nonlinear correction algorithm

The main goal of the error compensation is to increase the domain in phase space where the motion is quasilinear, thus improving the single-particle stability. It is customary to quantify the stability of single-particle motion using the concept of dynamic aperture. The DA is defined as the minimum initial transverse amplitude becoming unstable before a given number N of turns. The standard protocol used to compute the DA for the LHC machine is based on N = 10^5 and a sampling of the transverse phase space (x, y) via a polar grid of initial conditions of the type (r cos θ, 0, r sin θ, 0) with θ ∈ [0, π/2]. In practice, five values of θ are used, and the scan in r is such that each 2σ amplitude interval is covered with 30 initial conditions. The momentum offset is set to 3/4 of the bucket height, which equals 2.7 × 10^{-4} in relative momentum deviation at top energy.
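The polar grid of initial conditions described above can be generated as in the following sketch; the five angles and the 30 conditions per 2σ interval follow the text, while the amplitude range r_max and the handling of the extreme angles are our own choices for illustration.

import numpy as np

def da_initial_conditions(n_angles=5, r_max=20.0, conds_per_2sigma=30):
    # Polar grid of initial conditions (x, px, y, py) in units of the
    # beam sigma: several phase-space angles in (0, pi/2) and 30 initial
    # conditions per 2-sigma amplitude interval, as in the text.
    angles = np.linspace(0.0, np.pi / 2, n_angles + 2)[1:-1]  # skip 0, pi/2
    dr = 2.0 / conds_per_2sigma
    radii = np.arange(dr, r_max + dr / 2, dr)
    grid = [(r * np.cos(t), 0.0, r * np.sin(t), 0.0)
            for t in angles for r in radii]
    return np.array(grid)

ics = da_initial_conditions()
print(ics.shape)   # (1500, 4) initial conditions to track for 1e5 turns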
Even though the correction settings are computed for on-momentum particles, we still use the LHC standard protocol, which includes an energy offset, since the DA is dominated by geometric aberrations anyway.

FIG. 2. (Color) Evaluation of the various orders of $\chi^2_q$ (upper plot) before (blue markers) and after (gray and red markers) correction. The red markers represent a nonoptimized (in terms of corrector locations) compensation scheme. Sixty realizations of the random magnetic errors are used. The layout is the symmetric one, whose optical functions are also reported (lower plot).

As far as the magnetic field errors used in the numerical simulations are concerned, the as-built configuration of the LHC is used. The information concerning the measured errors, as well as the actual slot allocation of the various magnets, is taken into account in the numerical simulations. The errors in the results of the magnetic measurements are included in the numerical simulations by adding random errors to the various realizations of the LHC ring. On the other hand, the field quality of the low-beta triplets from Table I and the scaling law from Ref. [23] are used. It is worth mentioning that the layouts under study are not finalized yet. In particular, the details for the implementation of the separation dipoles D1 and D2 are not fixed. As a consequence, no estimate concerning their field quality was taken into account in the modeling of the LHC ring. As for the evaluation of the correction schemes, sixty realizations of the random multipolar errors in the triplets are used, and the value of the DA represents the minimum over the realizations. The accuracy of the numerical computation of the minimum DA is considered to be at the level of ±0.5 σ. In Fig. 3 the DA for the two LHC upgrade options, lowmax and symmetric, as a function of phase-space angle is plotted with and without nonlinear correction schemes. The correction algorithm proved to be particularly successful in the case of the symmetric layout. Indeed, for this configuration about 2.5 σ are recovered thanks to the correction of the nonlinear b3 and b6 errors. The improvement in the case of the lowmax layout is less dramatic, as it allows recovering 2.5 σ for small angles only. It is also important to stress that the baseline DA is not the same for the two layouts, as the lowmax is already well above 14.5 σ without any correction. Furthermore, not only the optics is different for the two options, but also the triplets' aperture. The former implies a different enhancement of the harmful effects of the triplets' field quality, while the latter has a direct impact on the actual field quality because of the scaling law [23]. Note that the case with larger DA, namely lowmax, also features lower $\chi^2_q$ values for all orders after correction, implying that this quantity might be a good indicator of particle stability. It is clear that the DA for lowmax is already well beyond the targets used for the design of the nominal LHC even without nonlinear correctors. The situation for the symmetric option is slightly worse, and a correction scheme might be envisaged.

B. Digression: Dynamic aperture vs low-beta triplet aperture

A third layout proposed as a candidate for the LHC IR upgrade is the so-called compact [2,5,22]. It features very-large-aperture triplet quadrupoles, namely 150 mm diameter for Q1 and 220 mm for Q2 and Q3. Thanks to the proposed scaling law, the field quality is excellent and the resulting DA is beyond 16 σ. Hence, no correction scheme is required for this layout. Nonetheless, a detailed study of the dependence of the dynamic aperture on the magnet aperture is carried out. The overall LHC model is the same as the one described in the previous sections, the main difference being the scan over the aperture of Q1 and, simultaneously, over the apertures of Q2 and Q3.
The optics is assumed to be constant, which implies that the configurations corresponding to magnet apertures larger than the nominal ones cannot be realized in practice. The results are shown in Fig. 4. The minimum, average, and maximum (over the realizations) DA are shown for the two types of scans. The horizontal lines represent the asymptotic value of the DA and are obtained by using a huge (and unrealistic) value for the triplet aperture. The dependence on the aperture of Q1 is rather mild and an asymptotic value is quickly reached. Furthermore, there exists a rather wide range of apertures for which the DA is almost constant: for Φ > 110 mm the asymptotic value of the DA is reached. A steady drop of the DA is observed for Φ < 100 mm and, in general, the three curves behave in the same way. The asymptotic value of the DA for the scan of the Q2 and Q3 apertures is reached only for apertures much larger than 280 mm. This is due to the larger value of the beta function in Q2,3 than in Q1, which enhances the impact of the quadrupoles' field quality on the beam dynamics. The spread between the asymptotic values of the minimum, average, and maximum DA is smaller than for the scan over the aperture of Q1. The way the asymptotic value is approached is remarkably the same for both types of scans and was studied in more detail to assess whether it could be explained by a general scaling law. The hypothesis is that, owing to the scaling law, Eq. (16), for large values of the magnet aperture the lowest-order multipole, i.e., the sextupolar one, dominates the beam dynamics; therefore, the asymptotic behavior of the DA should scale inversely with the third power of the aperture Φ. This hypothesis was applied not only to the compact layout, but also to the symmetric one, to ensure the independence of the conclusion from the details of the layout under study. To avoid the potential numerical instabilities related to the use of the minimum DA, the average over the realizations was used for this study. Figure 5 shows the DA for the two layouts and the two sets of quadrupole apertures, together with a fit of the function f(Φ) = aΦ^{-3} + b. In all cases the inverse-cubic asymptotic behavior is in very good agreement with the numerical data. It is worth stressing that all the points shown in Fig. 5 were used for the computation of the fit curves. The asymptotic character of the scaling law implies that it should hold only for sufficiently large magnet apertures. This, indeed, explains why the agreement between the fit and the numerical data in the intermediate regime is not excellent (see Fig. 5, top).
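The inverse-cubic fit can be reproduced with a standard least-squares routine, as sketched below on hypothetical (aperture, DA) points; only the functional form f(Φ) = aΦ^{-3} + b is taken from the text.

import numpy as np
from scipy.optimize import curve_fit

def f(phi, a, b):
    # Asymptotic model for the average DA vs. magnet aperture phi: the
    # lowest-order (sextupolar) error dominates at large aperture, so
    # the DA approaches b as a * phi**-3.
    return a * phi**-3 + b

# Hypothetical (aperture [mm], average DA [sigma]) points for illustration
phi = np.array([90., 110., 130., 150., 180., 220., 280.])
da = 16.0 - 2.0e6 * phi**-3 + np.random.default_rng(1).normal(0, 0.05, phi.size)

popt, pcov = curve_fit(f, phi, da)
a_fit, b_fit = popt
print(f"a = {a_fit:.3g}, asymptotic DA b = {b_fit:.2f} sigma")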
V. CONCLUSIONS

A general algorithm for the correction of multipolar errors in a given section of a circular accelerator has been developed. It is based on the computation and comparison of map coefficients obtained from standard accelerator codes such as MAD-X and PTC. The algorithm aims at minimizing the difference between a target transfer map and the actual one. Both order-by-order and global optimization strategies are possible. Of course, the algorithm can also be used to optimize the location of the corrector elements. In its present form, the nonlinear magnetic field errors are the only source of nonlinearities included in the transfer map. On the other hand, other sources of nonlinear effects, such as beam-beam kicks from long-range encounters, could also be included in the correction algorithm. The efficiency of such an approach should be tested in practice with dedicated studies. Direct relations between map coefficients and resonance terms have been computed. These relations could be used to extend the correction method to target specific resonances by selecting the right collection of map coefficients. The correction algorithm was successfully tested on two layouts for the proposed IR upgrade of the LHC machine. The quality of the correction was also assessed by means of numerical simulations aimed at computing the dynamic aperture. In the two cases under consideration, a sizable increase of the dynamic aperture due to the correction scheme is observed. Moreover, the case with larger DA also features lower $\chi^2_q$ values for all orders, implying that this quantity is a good indicator of particle stability. In the numerical simulations used to evaluate the dynamic aperture, a new scaling law for the magnetic field errors as a function of the low-beta quadrupole aperture was used. The impact of this assumption on the value of the dynamic aperture was assessed in detail with a series of dedicated studies in which the triplet aperture was scanned. A smooth dependence of the dynamic aperture on the magnet aperture is found, and a power law is fitted to the numerical data with very good agreement. These results could be used as an additional criterion for the definition of the required aperture of the triplet quadrupoles. Indeed, one could derive the minimum aperture for which the dynamic aperture does not require any correction. Such a condition should then be taken into account together with those related to the needed beam aperture and to energy deposition issues.
5,276.6
2009-01-21T00:00:00.000
[ "Physics" ]
An Improvement of the Differential Transformation Method and Its Application for Boundary Layer Flow of a Nanofluid

The main feature of the boundary layer flow problems of nanofluids or classical fluids is the inclusion of the boundary conditions at infinity. Such boundary conditions cause difficulties for any of the series methods when applied to solve this kind of problem. To overcome these difficulties, authors usually resort either to Padé approximants or to commercial numerical codes. However, intensive work is needed to perform the calculations using the Padé technique. Owing to the importance of nanofluid flow as a growing field of research and the difficulties caused by using Padé approximants to solve such problems, a suggestion is proposed in this paper to map the semi-infinite domain into a finite one with the help of a transformation. Accordingly, the differential equations governing the fluid flow are transformed into singular differential equations with classical boundary conditions, which can be solved directly by using the differential transformation method. The numerical results obtained by using the proposed technique are compared with the available exact solutions, and excellent accuracy is found. The main advantage of the present technique is the complete avoidance of Padé approximants in treating the infinity boundary conditions.

Introduction

Nanotechnology is an advanced technology which deals with the synthesis of nanoparticles, the processing of nanomaterials and their applications. It is well known that 1 nm (nanometer) = 10^{-9} meter. Particles with sizes in the 1-100 nm range are generally called nanoparticles. Nanotechnology has been widely used in industry, since materials with sizes of nanometers possess unique physical and chemical properties. Fluids containing suspended nanoscale particles are called nanofluids. The term "nanofluid" was first used by Choi [1] to describe a fluid in which nanometer-sized particles are suspended in conventional heat transfer base fluids. Fluids such as oil, water, and ethylene glycol mixtures are poor heat transfer fluids, since the thermal conductivity of these fluids plays an important role in the heat transfer coefficient between the heat transfer medium and the heat transfer surface. Numerous attempts have been made to improve the thermal conductivity of these fluids by suspending nano/micro or larger-sized particle materials in liquids. An innovative technique to improve heat transfer is to use nanoscale particles in the base fluid [1]. Therefore, the effective thermal conductivity of nanofluids is expected to enhance heat transfer compared with conventional heat transfer liquids (Masuda et al. [2]). This phenomenon suggests the possibility of using nanofluids in advanced nuclear systems (Buongiorno and Hu [3]). Choi et al. [4] showed that the addition of a small amount (less than 1% by volume) of nanoparticles to conventional heat transfer liquids increased the thermal conductivity of the fluid by up to approximately two times. A comprehensive survey of convective transport in nanofluids was made by Buongiorno and Hu [3] and, very recently, by Kakaç and Pramuanjaroenkij [5]. It may also be important to mention that a valuable book on nanofluids was published recently by Das et al. [6]. In addition, various interesting results in this regard can be found in [7-17]. Khan and Pop [18] were the first to investigate the boundary-layer flow of a nanofluid past a stretching sheet.
The main feature of the boundary layer flow of nanofluids or classical fluids is the inclusion of the boundary conditions at infinity. Such conditions cause difficulties for any of the series methods when applied to solve this kind of problem. This is because the infinity boundary condition cannot be applied directly to the series solution; Padé approximants have to be established before applying the boundary condition at infinity. Many authors [19-32] have resorted either to the Padé technique or to numerical commercial codes to solve boundary value problems in unbounded domains. However, the Padé technique requires massive computational work to obtain accurate approximate solutions. Searching for a direct method to treat the boundary condition at infinity has long been a goal of researchers solving boundary value problems in unbounded domains. Such a direct method is proposed in this paper. The main idea is to transform the physical domain from unbounded into bounded by means of a transformation. Accordingly, a new system arises which is subject to classical boundary conditions, where the boundary conditions at infinity disappear as a result of the new transformation. The transformed system can be solved directly by the differential transformation method (DTM) [33-45] without any need for Padé approximants. In this paper, the governing system of ordinary differential equations describing the boundary-layer flow of a nanofluid past a stretching sheet is analyzed through the proposed improved version of the DTM. The main advantage of the present method is that it not only avoids the use of Padé approximants, but also gives the series solution in a straightforward manner.

Basic Equations

The basic equations of the steady two-dimensional boundary layer flow of a nanofluid past a stretching surface with the linear velocity u_w(x) = ax, where a is a constant and x is the coordinate measured along the stretching surface, are as given by Kuznetsov and Nield [15] and Nield and Kuznetsov [16] and later by Khan and Pop [18], subject to appropriate boundary conditions at the surface and at infinity. A complete physical description of the present problem was well presented by Khan and Pop [18], as follows. The flow takes place at y ≥ 0, where y is the coordinate measured normal to the stretching surface. A steady uniform stress leading to equal and opposite forces is applied along the x-axis so that the sheet is stretched, keeping the origin fixed. It is assumed that, at the stretching surface, the temperature T and the nanoparticle fraction C take constant values T_w and C_w, respectively. The ambient values of T and C, attained as y tends to infinity, are denoted by T_∞ and C_∞, respectively. Here u and v are the velocity components along the x and y axes, respectively, p is the fluid pressure, ρ_f is the density of the base fluid, α is the thermal diffusivity, ν is the kinematic viscosity, a is a positive constant, D_B is the Brownian diffusion coefficient, D_T is the thermophoretic diffusion coefficient, and τ = (ρc)_p/(ρc)_f is the ratio between the effective heat capacity of the nanoparticle material and the heat capacity of the fluid, with ρ being the density, c the volumetric expansion coefficient, and ρ_p the density of the particles.
Khan and Pop [18] looked for a similarity solution of the governing equations with their boundary conditions by introducing the similarity variables

η = y (a/ν)^{1/2},  ψ = (aν)^{1/2} x f(η),  θ(η) = (T − T_∞)/(T_w − T_∞),  φ(η) = (C − C_∞)/(C_w − C_∞),

where the stream function ψ is defined in the usual way as u = ∂ψ/∂y and v = −∂ψ/∂x. Hence, a set of ordinary differential equations was obtained in [18]:

f''' + f f'' − (f')² = 0,   (4)
θ'' + Pr [f θ' + Nb φ' θ' + Nt (θ')²] = 0,   (5)
φ'' + Le f φ' + (Nt/Nb) θ'' = 0,   (6)

subject to the boundary conditions

f(0) = 0,  f'(0) = 1,  θ(0) = 1,  φ(0) = 1,  f'(∞) = 0,  θ(∞) = 0,  φ(∞) = 0,   (7)

where primes denote differentiation with respect to η and the four parameters are defined by

Pr = ν/α,  Le = ν/D_B,  Nb = (ρc)_p D_B (C_w − C_∞) / [(ρc)_f ν],  Nt = (ρc)_p D_T (T_w − T_∞) / [(ρc)_f T_∞ ν],

with q_w and q_m denoting the wall heat and mass fluxes, respectively, entering the Nusselt and Sherwood numbers. According to Kuznetsov and Nield [15], Re^{−1/2} Nu and Re^{−1/2} Sh are known as the reduced Nusselt number Nur and the reduced Sherwood number Shr, respectively, where Re = u_w(x) x / ν is the local Reynolds number based on the stretching velocity u_w(x). It should be noted that the exact solution of (4) with the boundary conditions given in (7) was first obtained by Crane [46] and is given as f(η) = 1 − e^{−η}. Substituting f(η) into (5)-(6), the given system reduces to a system of two coupled differential equations for θ and φ, subject to the boundary conditions θ(0) = 1, φ(0) = 1, θ(∞) = 0, φ(∞) = 0.

Transformed Equations

In order to solve boundary value problems in unbounded domains by using the DTM, authors usually resort to Padé approximants because of the boundary condition at infinity, and only an approximate solution is available in this case. As is well known, the Padé approximant requires a huge amount of computational work to find the approximate solution. In this regard, we think that if it is possible to transform the unbounded domain into a bounded one, then the BVPs may be solved easily without any need for Padé approximants. A first step in this direction is to transform the unbounded domain of the independent variable η ∈ [0, ∞) into a bounded one ξ ∈ [0, 1). Such a transformation is found as ξ = 1 − e^{−η}; accordingly, the governing equations have to be rewritten in terms of the new variable ξ. The effectiveness of this procedure is discussed in the next subsections, where it is shown that very accurate numerical solutions can be obtained. In view of the mentioned transformation, the reduced system with its boundary conditions is transformed into a new system in the bounded domain with classical boundary conditions, where primes now denote differentiation with respect to ξ.

Analysis and Results

In this section, the application of the DTM is discussed without resorting to Padé approximants. Applying the DTM to the previously mentioned system yields a recurrence scheme; equations (17a), (17b), (18), and (19) are used to obtain very accurate approximate numerical solutions, where two different cases are derived and discussed in the next two subsections.
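To illustrate how the transformation turns the condition at infinity into a classical one before the two cases are discussed, the sketch below solves the temperature equation of the first case (Nt = Nb = 0) numerically in the ξ variable. It assumes the reduced form θ'' + Pr f θ' = 0 with Crane's f(η) = 1 − e^{−η}, which under ξ = 1 − e^{−η} becomes the singular equation (1 − ξ)Θ'' + (Pr ξ − 1)Θ' = 0 on [0, 1) with Θ(0) = 1 and Θ(1) = 0. A collocation solver is used here instead of the DTM, purely to check the transformed boundary-value problem.

import numpy as np
from scipy.integrate import solve_bvp

Pr = 2.0
xi_max = 0.999   # stay slightly clear of the singular point xi = 1

def rhs(xi, y):
    # y[0] = Theta, y[1] = Theta'; from (1-xi)*Theta'' + (Pr*xi-1)*Theta' = 0
    return np.vstack([y[1], (1.0 - Pr * xi) * y[1] / (1.0 - xi)])

def bc(ya, yb):
    # Theta(0) = 1 and Theta(1) = 0 (the former condition at infinity)
    return np.array([ya[0] - 1.0, yb[0]])

xi = np.linspace(0.0, xi_max, 200)
y0 = np.vstack([1.0 - xi, -np.ones_like(xi)])   # simple linear initial guess
sol = solve_bvp(rhs, bc, xi, y0)
print(sol.status, sol.y[0][:3])   # Theta decays monotonically from 1 to 0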
Case 1: At Nt = 0 and Nb = 0. At Nt = 0 and Nb = 0, the boundary value problem for φ becomes ill-posed, and consequently the boundary value problem for θ is solved subject to the boundary conditions in (15). Accordingly, a simple recurrence scheme is obtained from (17a), in which

Δ = 1 / (4515 Pr⁴ − 158682 Pr³ + 1514564 Pr² − 5753736 Pr + 10628640).

Here, it may be useful to mention that an exact solution for the current case was obtained very recently by the first author in [47] in terms of the generalized Gamma function Γ(a, z₀, z₁) = ∫_{z₀}^{z₁} t^{a−1} e^{−t} dt, which can be expressed through the incomplete Gamma function as Γ(a, z₀, z₁) = Γ(a, z₀) − Γ(a, z₁). There is no doubt that the availability of the exact solution gives the opportunity to validate the accuracy of the suggested approach. For the purpose of illustration, the approximate solutions are compared in Figures 1, 2, and 3 with the exact one given by (24) at different values of the Prandtl number. These primary results reveal that the 10-term approximate solution is sufficient to obtain numerical solutions of high accuracy for a certain range of the Prandtl number, mainly Pr = 1, 2, 3. However, at Pr = 4 the 10-term approximate solution is not accurate, as can be seen in Figure 4. This observation leads to the conclusion that, with increasing Pr, more terms are needed in the approximate series solution. For example, at Pr = 5 the 20-term approximate solution is found sufficient in Figure 5, while the 30-term approximate solution is found identical to the exact one at Pr = 10 in Figure 6. The main advantage of the suggested approach remains the avoidance of the Padé approximants that have long been used to treat the boundary condition at infinity.

Case 2: At Nt = 0, Pr = Le = 1, and Nb ≠ 0. Substituting Nt = 0, Pr = 1, and Le = 1 into (17a) and (17b) yields a reduced recurrence scheme. Using the recurrence scheme (25) with the transformed initial conditions (18) and (19) for k = 0, 1, 2, ..., 4, we get a system of algebraic equations in Θ(1), Θ(2), ..., Θ(6) and Φ(1), Φ(2), ..., Φ(6). The solution of this system leads to a 6-term approximate solution for the θ equation, where Θ(1), Θ(2), ..., Θ(6) are expressed in terms of Nb but are omitted here because of their length; the 6-term series solution for the φ equation is given explicitly in closed form. The exact solutions are obtained in [47]. The truncated series solution Θ₆(ξ) is compared with the exact one in Figures 7-9 at several values of Nb. As observed from Figures 7 and 8, the approximate solution coincides with the exact one at certain values, Nb = 0.1 and Nb = 0.3; however, it only approaches the exact curve at Nb = 0.5, where more terms are needed. In addition, the approximate solution Φ₆(ξ) is found identical to the exact curve, as shown in Figure 10.

Conclusions

A system of ordinary differential equations describing the boundary layer flow of a nanofluid past a stretching sheet is investigated in this paper via a new approach. The suggested approach is based on transforming the boundary conditions at infinity into classical conditions prior to the application of the differential transformation method. A transformation is successfully used to map the unbounded physical domain into a bounded one. In addition, the current results are validated through various comparisons with the available exact solutions. In comparison with the Padé technique, the new method of solution is found not only straightforward but also effective in obtaining accurate numerical solutions, while the Padé approximant is completely avoided.
2,986.2
2013-05-16T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Detecting Buried Human Bodies Using Ground-Penetrating Radar

Located in an area of dense tectonic activity, Indonesia has to cope with the constant risk of earthquakes. The high frequency of earthquake occurrence causes crustal instability and leads to other natural disasters such as landslides. Sometimes the landslide avalanches cover highly populated areas, destroying buildings and causing victims. Unfortunately, the treatment of affected buildings and the search for landslide victims still rely on conventional methods. The purpose of this study is to detect buried human bodies using the GPR method, so as to increase the effectiveness and efficiency of searching for disaster victims under landslide avalanches. Ground-penetrating radar (GPR) is one of the geophysical methods that can be used to study the shallow subsurface of the earth. GPR has been used successfully to locate graves and forensic evidence. However, more controlled research is needed to improve the effectiveness and efficiency of detecting disaster victims buried under landslide or earthquake avalanches. A detailed GPR survey was conducted in the Cikutra graveyard, Bandung, with corpses buried one week to two months before the survey. The radar profiles from this survey showed clear amplitude-contrast anomalies emanating from the corpses. The strongest amplitude contrasts are observed at the most recent grave compared with the older graves. We obtained the amplitude contrast at around 1.2 m depth, which is consistent with the depth of the buried corpses. In addition, the results of forward modeling of a homogeneous subsurface and of corpses in the subsurface are presented.

Introduction

Indonesia is located on the Ring of Fire and is one of the densest centers of seismic activity in the world. Earthquakes and landslides occur frequently and cause many victims. In Indonesia, the search for victims buried under landslide or earthquake avalanches still uses conventional methods, such as sniffer dogs or random searching in areas of human activity. The conventional methods take too much time and sometimes leave victims unfound; their use is due to the absence of an applied technology able to locate buried victims effectively and efficiently. The purpose of this study is to prove that the GPR method can be used to detect human bodies underground, making it applicable to searching for disaster victims in a more effective and efficient way. Victims buried under landslide or earthquake avalanches are approximated here by corpses buried in a graveyard. The study site is located within the Cikutra burial ground, in the northern area of Bandung city, at longitude 107°38'13.4"E and latitude 6°53'23.5"S. The survey was conducted on October 16, 2015, at eleven adjacent graves. Corpses buried one week to two months before the survey were available; recently buried bodies were chosen to approximate the real disaster case, in which the search for victims is carried out as soon as the disaster has happened. The burial process includes wrapping the corpse in a shroud and then placing the body in the grave. The topography of the study site was relatively flat, with an elevation increase of 10 cm directly above each grave. The Cikutra graveyard consists of very dry soil (Figure 1) that mostly contains small air cavities or cracks resulting from overexposure to the sun and the minimal amount of vegetation.
Figure 1. Soil condition at the survey area.

GPR is a geophysical method that transmits electromagnetic (EM) waves from antennas into the ground. GPR can accurately map the shallow subsurface by detecting changes in electrical properties. Bed boundaries and other discontinuities in the ground reflect the wavelet back to the surface, where it is recorded (Conyers, 2013). GPR measures the time between EM wave transmission and reception, referred to as the two-way travel time and commonly measured in nanoseconds, which is a function of reflector depth and EM wave propagation velocity (Robinson et al., 2004). As the radar waves are transmitted through various materials on their way to the buried target, their velocity changes depending on the physical properties of the material through which they travel (Neal, 2004). If the travel time of the EM wave is measured and the velocity through the ground is known, the distance between the antenna, as the source of the GPR wave, and the reflection point can be accurately measured (Conyers and Goodman, 1997). For GPR to work effectively, the target must exhibit electrical properties (dielectric constant and electrical conductivity) that contrast with those of the host subsurface (Neal, 2004). Subsurface discontinuities where reflections occur are usually created by changes in the electrical or magnetic properties of the rock, sediment or soil (Conyers and Goodman, 1997). Given the contrast in electrical properties between dry soil, human blood, and human bones, it is expected that a GPR survey in a graveyard will show clear anomalies indicating the buried human body locations. The potential of GPR for detecting human bone is demonstrated by Mellett (1992), who was able to locate a buried homicide victim 0.5 m deep using a 500 MHz antenna. Hammon et al. (2000) forward modeled GPR responses from human remains at different frequencies and found that a skull was detectable at 0.8 m depth in dry sand. Damiata et al. (2013) could detect human bone inside a well-preserved buried grave. The principle of the search is based on the contrast of the electric permittivity and the magnetic permeability of the bone with its surroundings. The circle in Figure 2 shows a radargram that isolates the hyperbolas associated with the excavated grave. Arrows point to reflections from various types of reflectors, such as stratigraphic layering, an enclosure wall and a chest cavity (after Damiata et al., 2013).

Method

The GPR equipment used for this survey was a MALA RAMAC X3M 800 MHz shielded antenna powered by a separate battery. The 800 MHz antenna was chosen because it covers the observation depth of the survey, resolving well down to 6 meters depth. Data were recorded and managed through a laptop connected to the equipment running the RamacGroundVision software. The displayed output indicates the continuity of data sampling and was used for survey quality control. The acquisition depth of the survey was also known directly, which served as a control parameter to check whether the depths of the survey targets were resolved. The recorded length of each survey line was consistent with the length measured manually. A trial survey was conducted over a short distance to estimate the appropriate velocity and sampling frequency. The trigger interval was set to 0.05 m.

Figure 3. Survey area.
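A minimal sketch of the travel-time-to-depth conversion described above is given below; the 14 ns arrival time is an illustrative value chosen so that, with the dry-soil velocity reported later in this paper, the reflector falls at the ~1.2 m burial depth.

def depth_from_twt(twt_ns, v_m_per_ns):
    # Convert GPR two-way travel time (ns) to reflector depth (m),
    # assuming a constant propagation velocity above the reflector.
    return 0.5 * twt_ns * v_m_per_ns

# With the dry-soil velocity of ~0.17 m/ns, a reflection arriving at
# ~14 ns (illustrative) corresponds to roughly the 1.2 m burial depth:
print(depth_from_twt(14.0, 0.17))   # ~1.19 m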
Before the actual survey, a forward modeling step was conducted by generating a synthetic model of a buried human body using MatGPR Release 3 (Tzanis, 2013). Forward modeling is a useful process in geophysics to estimate the geophysical response expected from the actual survey, based on the calculation of a synthetic earth model. The electrical properties used in the forward modeling are the permittivity and conductivity of the soil and of buried human bone. The relative permittivity of the soil is taken as 3, with a conductivity of 0.05 S/m; the relative permittivity of human bone is taken as 23, with a conductivity of 0.7 S/m (Foster, 2000). A background conductivity of 0.05 S/m is added. The frequency of the GPR wave used in the forward modeling is 800 MHz, in agreement with the actual GPR wave used in the survey. The survey line was chosen with a maximum offset of 12 m. Figure 4 displays the chosen line, which crossed eleven graves. The head parts of the corpses are on the north and the leg parts on the south side of each grave. The acquisition was measured from west to east. The topography along the survey line was relatively flat, with an elevation increase of 10 cm directly above each grave. A board was used to help the instrument move from one grave to the next because of the elevation variation (Figure 5). The atmospheric condition during the survey was clear, with no cloud, rain, or lightning, so only a minimal amount of EM wave noise could be expected during the survey. The radargrams were then processed to remove the random noise and enhance the amplitude response of interest; moreover, most of the high-amplitude air wave and ground wave arrivals that usually disfigure the top of radargrams were effectively removed. The processing sequence defined in ReflexW is shown in Table 1: static correction, subtract mean (dewow), band-pass filtering, background removal, and stacking. The static correction is applied to bring the signal to the correct datum; in this research the datum reference used is mean sea level. Subtract mean (dewow) is applied to all traces to increase the data resolution by selecting only the spike wavelet. A 2.5 ns time window is used because it yields the best spiking wavelet form. The chosen time window value is affected by lithological factors, such as soil moisture and rock type, and also by acquisition parameters such as the GPR wave frequency and the trigger interval; the best time window value will differ for different survey conditions. After the subtract mean process, a band-pass filter is applied to separate the primary signal from the noise. The band-pass filter used at this processing stage is 150 MHz-260 MHz, because this range results in a better radargram profile. After the band-pass filter step, the background noise was not removed completely from the radargram, so a background removal process was applied. The last step of the processing is stacking, aimed at increasing the signal-to-noise ratio to obtain the best radargram profile.

Discussion and Results

At the forward modeling stage, two synthetic models were used. The first model is a homogeneous subsurface and the second model is a non-homogeneous subsurface with three corpses (Figure 6a). The homogeneous subsurface model is intended to distinguish the GPR response of a corpse from that of the homogeneous background soil. The GPR frequency used for the forward modeling was 800 MHz. The velocity of the radar wave in soil was derived from the electrical properties of the dry soil by MatGPR Release 3.0 as 0.17022 m/ns.
Meanwhile, the velocity of the radar wave in human bone was derived from the electrical properties of human bone as 0.061642 m/ns. The GPR response for the homogeneous subsurface model shows no anomalies in the radargram (Figure 6b), in agreement with the homogeneous subsurface condition of the first model. In the second synthetic model, the corpses are located at 1.2 m depth (Figure 6c). The GPR response detects the existence of the corpses very clearly; the part of the subsurface model deeper than 2.5 m is not resolved in the radargram. The amplitude contrasts are shown at a travel time of 3 ns (Figure 6d). Because the target depth in the actual survey is also 1.2 m, we can expect that the 800 MHz GPR equipment will produce a strong anomaly. The contrasting electrical properties used in the forward modeling are the permittivity, conductivity, and velocity, while the permeability of the subsurface components in the two synthetic models is the same.

(a) The red arrow shows the corpse appearance in the radargram; (b) the black arrow shows the effect of the board.

The radargram profiles show that components of the subsurface materials are visible. These components are unconsolidated rock, undisturbed sand, and amplitude contrasts. The unconsolidated rock appears as a zero-amplitude wave caused by background soil that has not yet been compacted in the upper part of the radargram profiles; the soil is not well consolidated because the graves were filled less than two months before the survey. The undisturbed sand appears as a layering pattern in the lower part of the radargram profile because it is well consolidated compared with the upper soil. The amplitude contrast at around 1.2 m depth is interpreted as the location where the corpses are buried (marked by the blue circle in Figure 8a), consistent with the known depth of the buried bodies at the Cikutra graveyard, 1.2 m. The empty grave did not show a contrast amplitude anomaly (marked by the red circle in Figure 8a). The black arrow (Figure 8b) shows a discontinuity caused by the air under the board used to help the instrument move from one grave to another during the acquisition, in agreement with its location in the field. The discontinuities that lie between grave locations are listed in Table 2. The amplitude contrasts are produced by the contact of the EM waves with human blood and human bone. The strongest amplitude contrast is interpreted as the electrical-property contrast between human blood, human bone, and the background soil. According to Foster (2000), the relative permittivity of human bone is 23 with a conductivity of 0.7 S/m, and the relative permittivity of human blood is 58-62 with a conductivity of 1.4 S/m. In this survey the strongest amplitude is shown by the two-week-old corpse compared with the graves filled by older corpses. The most recent corpse is estimated to contain more blood than the older corpses. For the most recent grave, because human blood has a high conductivity contrast relative to the background soil, the amplitude contrast is better displayed in the radargram profile. But the oldest corpse in this survey still gave an amplitude-contrast anomaly compared with the empty grave; this is because the electrical-property contrast between human bone and soil is lower than that between human blood and soil. It can therefore be expected that the detection of victims in a disaster area can be conducted until at least two months after the disaster occurs.
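The velocities and amplitude contrasts discussed above follow from the quoted relative permittivities under a low-loss approximation, as the sketch below shows; the normal-incidence reflection coefficient is used here only as a rough proxy for the amplitude contrast and ignores the quoted conductivities, so it is illustrative rather than definitive.

import numpy as np

C = 0.2998  # speed of light, m/ns

def velocity(eps_r):
    # EM wave velocity in a low-loss medium from its relative permittivity.
    return C / np.sqrt(eps_r)

def reflection_coeff(eps1, eps2):
    # Normal-incidence amplitude reflection coefficient at the boundary
    # between two low-loss, non-magnetic media.
    return (np.sqrt(eps1) - np.sqrt(eps2)) / (np.sqrt(eps1) + np.sqrt(eps2))

eps_soil, eps_bone, eps_blood = 3.0, 23.0, 60.0  # blood mid-range of 58-62
print(velocity(eps_soil))                # ~0.173 m/ns, close to 0.17022 used
print(velocity(eps_bone))                # ~0.0625 m/ns, close to 0.061642
print(abs(reflection_coeff(eps_soil, eps_blood)))  # ~0.63: blood, strong contrast
print(abs(reflection_coeff(eps_soil, eps_bone)))   # ~0.47: bone, weaker contrast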
To prove that the method also works for detecting older corpses, further research must be conducted. This survey line crossed only the abdomen of each corpse; further research is also needed to establish whether the method works when the survey line crosses other parts of the human body. The anomaly parameter of this survey differs from that used by Damiata et al. (2013) to show forensic evidence; the anomaly parameter used in this survey is the amplitude contrast. Nevertheless, this research still gives good results, confirmed by field information. Conclusion The 800 MHz GPR equipment can map the subsurface well down to 2.5 m depth. The amplitude-contrast anomaly at around 1.2 m depth is interpreted as the location of the buried bodies, consistent with the known burial depth of 1.2 m at Cikutra graveyard. The air effect caused by a hole in the subsurface can produce discontinuities in the radargram profile. The amplitude contrast was strongest at the most recent grave compared with the older graves, because the human blood in the younger corpse gives a bigger contrast with the soil than the contrast of human bone with the soil. This supports the initial hypothesis that the GPR method can be used to detect a human body buried underground or under disaster avalanches. However, further research is needed in an actual landslide or earthquake area to investigate the GPR reflection pattern at a landslide site. Moreover, additional research to remove the effect of subsurface holes, and surveys with different GPR wave frequencies, are important.
3,574.2
2016-04-08T00:00:00.000
[ "Geology" ]
Endothelial Progenitor Cell in Cardiovascular Diseases The last decade has seen huge interest in the field of regenerative biology, with particular emphasis on the use of isolated or purified stem and progenitor cells to restore structure and function to damaged organs. Circulating endothelial progenitor cells (EPCs) have been studied as a potential cell source that contributes to neovascularization via postnatal vasculogenesis (Asahara et al., 1997). EPCs are reported to naturally home to and integrate into sites of physiological vessel formation in vivo and to incorporate into the vasculature of tumors and of ischemic skeletal and cardiac muscle (Asahara et al., 1999). Furthermore, accumulating evidence demonstrates a relationship between the frequency of circulating EPCs and cardiovascular disease risk (Hill et al., 2003). In the following, we review the putative role of EPCs in endothelial repair and provide evidence for their influence on atherosclerosis. Introduction The last decade has seen huge interest in the field of regenerative biology, with particular emphasis on the use of isolated or purified stem and progenitor cells to restore structure and function to damaged organs. Circulating endothelial progenitor cells (EPCs) have been studied as a potential cell source that contributes to neovascularization via postnatal vasculogenesis (Asahara et al., 1997). EPCs are reported to naturally home to and integrate into sites of physiological vessel formation in vivo and to incorporate into the vasculature of tumors and of ischemic skeletal and cardiac muscle (Asahara et al., 1999). Furthermore, accumulating evidence demonstrates a relationship between the frequency of circulating EPCs and cardiovascular disease risk (Hill et al., 2003). In the following, we review the putative role of EPCs in endothelial repair and provide evidence for their influence on atherosclerosis. Identification of EPCs Despite the availability of effective preventive measures, coronary artery disease (CAD) remains a leading cause of morbidity and mortality in most industrialized countries. Convincing evidence indicates that the integrity and functional activity of the endothelial monolayer play an important role in atherogenesis. The traditional view suggests that endothelial integrity is maintained by neighboring mature endothelial cells, which migrate and proliferate to replace injured endothelial cells. However, a series of clinical and basic studies prompted by the discovery of bone marrow-derived EPCs have provided new insights into these processes and demonstrated that the injured endothelial monolayer is regenerated partly by circulating EPCs. Putative circulating endothelial progenitors were first described in the adult human by Asahara et al. (Asahara et al., 1997) in 1997. They used the presence of CD34 to sort cells from the adult peripheral blood mononuclear component, based on the knowledge that this antigen is carried by both the angioblasts and the haemopoietic stem cells responsible for vasculogenesis in embryonic life. By culturing MNCs (mononuclear cells) enriched or depleted in these CD34+ cells, they showed that the CD34+ component is able to give rise to spindle-shaped cells after 3 days, which become attached to fibronectin. Such culture led to an up-regulation of endothelial lineage markers such as CD31, Flk-1 and Tie2, and to loss of the pan-leucocyte CD45 antigen, in these attaching cells. Asahara et al.
(Asahara et al., 1999) went on to deliver labelled, CD34+-enriched MNCs into mouse and rabbit models of hindlimb ischemia and demonstrated neovascularization in the relevant limb, with apparent incorporation of labelled cells into capillary walls. In separate experiments, they delivered murine labelled MNCs enriched for Flk-1 and similarly found incorporation into capillaries and small arteries in the mouse hindlimb ischemia model. Carriage of CD31 and lectin binding was observed in these incorporated cells. These landmark studies suggest that circulating EPCs in adult peripheral blood could differentiate into cells of endothelial lineage and enhance revascularization through vasculogenesis. Issues of definition for EPCs Since these important findings, an enormous amount of research has been undertaken into EPCs; however, in attempting to collate and interpret the results, a major limiting factor is that no simple definition of EPCs exists at present, and various methods to define EPCs have been reported. This reflects the unresolved issue of how EPCs should best be defined. Antigen-based definitions of EPC The first method of classifying EPCs is based on the expression of cell-surface antigens, typically using flow cytometry to quantify the relevant populations. Endothelial cells (ECs) display a characteristic combination of such antigens, including CD34, KDR (kinase insert domain-containing receptor, a type of VEGFR2), VE-cadherin, vWF (von Willebrand factor) and E-selectin. In order to distinguish mature endothelial cells from circulating endothelial progenitors, some groups have additionally used other antigens which are lost during maturation of endothelial lineage cells, most commonly CD133 (also termed AC133) (Hristov & Weber, 2004). The combination of CD34, KDR and CD133 has been used by several investigators, although many others have used only two of these three. Unfortunately, even with the use of all three, this phenotype is not entirely specific, since the same cluster of antigens may also be found on haemopoietic stem cells (Adams & Scadden, 2006; Verfaillie, 2002). This relates to the probable origin of haemopoietic and EC lines from a common precursor, termed the haemangioblast. As haemopoietic stem cells differentiate, the CD34, KDR and CD133 antigens are down-regulated and disappear. Furthermore, the use of CD133 to make the distinction from mature ECs will also lead to the exclusion of 'more mature' EPCs which may have lost this marker while not yet being terminally differentiated. To complicate matters further, while the use of antigenic combinations may have logical appeal, whether this approach actually identifies a group of precursors capable of producing ECs has recently been challenged (Case et al., 2007). Culture-based definitions of EPC The second commonly employed definition of EPCs derives from in vitro culture work. Asahara et al. described in vitro culture of CD34-enriched MNCs leading to the formation of spindle-shaped attaching cells within 3 days (Asahara et al., 1997). Co-culture of CD34-enriched and CD34-depleted cells gave rise, within 12 hours, to multiple clusters containing round cells centrally and sprouts of spindle-shaped cells at the periphery. This cluster appearance was reminiscent of the blood islands previously described, wherein angioblasts surround haemopoietic stem cells as the initial stage of vasculogenesis (Flamme & Risau, 1992).
Various culture preparations have been used to encourage endothelial lineage proliferation from human blood-derived MNCs. There has been considerable variation in the details of the techniques used: for example, some have replated the adherent cells after 2 days of initial culture, whereas others have used the non-adherent cells at this time (Vasa, 2001). Endothelial cell lineage was then confirmed by indirect immunostaining with DiI-acLDL and co-staining with BS-1 lectin. However, controversy exists with respect to the identification and the origin of EPCs isolated from peripheral blood mononuclear cells by cultivation in medium favoring endothelial differentiation. Early and late outgrowth EPCs EPCs can be isolated, cultured and differentiated ex vivo from circulating mononuclear cells (MNCs) and exhibit characteristic endothelial properties and markers. Currently, two types of EPCs, namely early and late outgrowth EPCs, can be derived and identified from peripheral blood. Early EPCs appear after 3-5 days of culture, are spindle-shaped, have peak growth at approximately 2 weeks and die by 4 weeks. These have been variously termed 'early EPCs' (Hur et al., 2004), 'attaching cells' (Asahara et al., 1997) and CACs (circulating angiogenic cells) (Rehman et al., 2003). The second type of EPC appears only after longer culture, of approximately 2-3 weeks, forming a cobblestone monolayer with near-complete confluence; it can show exponential population growth without senescence over 4-8 weeks and live for up to 12 weeks. These were termed 'late EPCs' by Hur et al. (Hur et al., 2004) or OECs (outgrowth endothelial cells) by Lin et al. (Lin et al., 2000). Early and late outgrowth EPCs (OECs) share some endothelial phenotype similarities but show different morphology, proliferation rate, survival features and functions in neovascularization. For clarity, we will use the terms early EPCs and OECs in the present review. In contrast to OECs (described below), early EPCs do not participate in tube-forming assays, have only weak invasive ability on gels and produce only low levels of NO. They do, however, demonstrate some features in keeping with an endothelial lineage, such as acetylated LDL uptake and lectin binding. In addition, early EPCs do not develop into OECs upon prolonged culture. Among antigenic markers, CD14 (a monocytic marker) has been found by several groups on early EPCs (Romagnani et al., 2005; Urbich et al., 2003). Early EPCs lack the impressive replicative ability of OECs but are prolific producers of several growth factors, cytokines and chemokines, including VEGF, HGF (hepatocyte growth factor), G-CSF (granulocyte colony-stimulating factor) and GM-CSF (granulocyte/macrophage colony-stimulating factor). The lineage origin of these two culture-derived endothelial-type cells has been examined. Expression of the pan-leucocyte antigen CD45 is relatively greatest in MNCs, lower in early EPCs and lowest in OECs. It appears that early EPCs are mostly derived from a CD14+ population of MNCs, implying a monocytic, rather than true endothelial, lineage (Yoon et al., 2005). In contrast, OECs derive exclusively, or almost exclusively, from the CD14− population of MNCs (Yoon et al., 2005). It has been suggested that the MNCs from which OECs are derived may represent 'true' circulating endothelial precursors (angioblasts).
OECs have many similarities to mature ECs in terms of surface antigens (including KDR, vWF and VE-cadherin) and high levels of NO (nitric oxide) production by eNOS (endothelial NO synthase). They are able to participate effectively in tube-forming assays in vitro. However, OECs differ from mature ECs in having far greater proliferative ability in vitro and greater angiogenic potential in vivo. A small population of OECs with the highest proliferative potential was able to produce more than 200 progeny per replated cell. These features make OECs attractive candidates for therapeutic use in ischemia-related neovascularization. Endothelial progenitor cells and atherosclerosis The discovery of endothelial progenitors within adult peripheral blood presents another possible means of vascular maintenance, namely a reservoir of circulating cells which can home to sites of injury and restore endothelial integrity, thus allowing continued normal function. Hill et al. (Hill et al., 2003) studied men without known cardiovascular disease but with varying degrees of estimated cardiovascular risk. Endothelial function was determined using brachial artery flow-mediated vasodilation, and EPC numbers were measured using their CFU assay. An inverse correlation was found between the number of CFUs and the overall Framingham risk score of the participants. Furthermore, they found a positive correlation between the number of EPCs and endothelial function as assessed by brachial artery reactivity. These findings are compatible with the hypothesis that an adequate pool of EPCs in the blood may be a key requirement for appropriate endothelial function. It appears that bone marrow-derived EPCs play a pivotal role in the maintenance of the adult vascular endothelium. However, the basis of this correlation between EPC levels and endothelial function remains to be determined. Although the critical role of circulating EPCs in the pathogenesis of atherosclerotic diseases is substantiated by several observations, the relationship between circulating EPCs and coronary artery disease (CAD) remains a subject of debate. Several studies have examined the association between circulating EPCs and CAD or the risk factors predisposing to it. Vasa et al. reported that circulating EPC levels were significantly reduced in patients with CAD compared with those without CAD (Vasa et al., 2001). Wang and coworkers observed decreased numbers and activity of EPCs in patients with stable CAD, with EPC levels negatively correlated with the severity of coronary stenosis assessed by the Gensini score (Wang et al., 2007). Fadini et al. also reported that EPCs were significantly reduced in subjects with increased intima-media thickness (Fadini et al., 2006), implying that depletion of EPCs may be an independent predictor of subclinical atherosclerosis. However, Guven et al. showed that increased EPC levels were associated with the presence of significant CAD, and that EPC numbers correlated with maximum angiographic stenosis severity (Guven et al., 2006). The apparently conflicting results between studies may have many explanations, including fundamental differences in the methodologies used to identify circulating EPCs, heterogeneity of the patient populations, and the effect of disease stage on the biological properties of circulating EPCs.
Based on the angiographic classification by Syntax score, our recent work has shown that severe CAD patients (with a higher Syntax score) have lower circulating EPC numbers than mild CAD patients and subjects with normal angiographic results (unpublished data). Moreover, circulating EPC levels were negatively correlated with the Syntax score in patients with angiographic evidence of CAD. These findings are consistent with a recent study showing that a lower level of circulating EPCs predicts CAD progression (Briguori et al., 2010), suggesting a critical role for EPCs in the pathogenesis of CAD. Anti-atherosclerotic actions of EPC Rapid and complete restoration of endothelial integrity and function prevents the development and growth of a neointimal lesion; an inadequate response to injury, however, will instead allow the formation of an atheromatous lesion. The discovery of circulating endothelial progenitors has led to the theory that they are important mediators of this repair arm, and hence that a depletion or dysfunction of these cells would result in an imbalance between endothelial injury and repair, favoring atherosclerosis. Schmidt-Lucke et al. (Schmidt-Lucke et al., 2005) followed up a group of 120 individuals, comprising normal subjects and patients with either stable or unstable coronary artery disease. They found that major cardiovascular events, CABG (coronary artery bypass grafting) or ischemic stroke were significantly more frequent in the subgroup with lower baseline levels of circulating CD34/KDR double-positive cells. This association persisted after accounting for conventional cardiovascular risk factors. Werner et al. (Werner et al., 2005) studied CD34/KDR double-positive cell numbers in a cohort of 519 patients diagnosed with coronary artery disease by angiography. After adjustment for confounding variables, higher levels of EPCs were associated with a reduced risk of death from cardiovascular causes and of the occurrence of a first cardiovascular event at 12 months of follow-up. The authors also followed up outcomes when patients were grouped by baseline levels of CFUs (i.e. a culture-based definition of colony formation). Higher CFU formation was associated with a reduced occurrence of a first major cardiovascular event and reduced revascularization at follow-up. However, as discussed above, recent work on the CFU assay suggests that it assesses the in vitro activity of cells which may be relevant to vascular function but which are not actually EPCs themselves (Rohde et al., 2007; Hur et al., 2007). Moreover, there is relevant animal-based work in this area of progenitor cells and endothelial function. Wassmann et al. (Wassmann et al., 2006) studied endothelial function in ApoE-knockout mice on a high-cholesterol diet, with atherosclerotic plaques and demonstrable endothelial dysfunction of aortic rings ex vivo. They showed that the intravenous administration of spleen-derived MNCs improved endothelium-dependent vasodilation. In addition, a rabbit model of balloon injury to the carotid arteries has been used. Peripheral blood MNCs were cultured in endothelial growth medium for 2 weeks, producing endothelial-phenotype cells carrying CD31 and eNOS, and these culture-modified cells were delivered immediately after balloon injury. Compared with saline-treated controls, local treatment with EPCs led to accelerated re-endothelialization and improved endothelial function.
Whether the improvement in endothelial function is directly due to increased numbers of new ECs, an indirect effect on pre-existing cells, or a paracrine effect of the implanted EPCs remains unclear; however, an increase in vascular NOS activity was documented and is likely to mediate the effect. Therapeutic implications and perspective A crucial target in the treatment or prevention of atherosclerosis is to promote and maintain the integrity and health of the endothelium. Since EPCs play a role in maintaining an intact and functional endothelium, decreased and dysfunctional EPCs may contribute to endothelial dysfunction and susceptibility to atherosclerosis. Enhancement of the regenerative capacity of the injured endothelium seems one way to reduce the incidence of atherosclerotic lesions (Hristov & Weber, 2007). Transplantation of human cord blood-derived EPCs was reported to contribute to neovascularization in various ischemic diseases, and EPC transplantation onto diabetic wounds has a beneficial effect, achieved mainly through direct paracrine action on keratinocytes, fibroblasts and endothelial cells rather than through physical engraftment into host tissues (vasculogenesis). In the TOPCARE-AMI (i.e., "Transplantation of Progenitor Cells and Regeneration Enhancement in Acute Myocardial Infarction") trial (Assmus et al., 2002), intracoronary infusion of cultured human EPCs in patients with recent myocardial infarction was associated with improvements in global left ventricular function and microvascular function. In addition, an EPC-conditioned medium was shown to be therapeutically equivalent to EPCs, at least for the treatment of diabetic dermal wounds (Kim et al., 2010). There are several ways to increase the levels of circulating EPCs and improve their function through pharmacological strategies and lifestyle modification. Notably, it was shown that angiotensin-converting enzyme (ACE) inhibitors such as ramipril (Min et al., 2004), and angiotensin II (AT II) inhibitors such as valsartan (Bahlmann et al., 2005), increased EPC levels in patients, probably by interfering with the CD26/dipeptidylpeptidase IV system. Our recent data showed that moderate intake of red wine significantly enhanced circulating EPC levels and improved EPC functions by modifying NO bioavailability (Huang et al., 2010). Other studies indicated that the phosphatidylinositol 3-kinase/Akt/endothelial nitric oxide synthase/NO (PI3K/Akt/eNOS/NO) signaling pathway, and the interaction between hyperglycemia and hyperlipidemia in diabetic patients with vascular diseases, are potential therapeutic targets for reversing the impaired function of EPCs (Wang et al., 2011). Neutralization of the p66 ShcA gene, which regulates the apoptotic response to oxidative stress, prevented high glucose-induced EPC impairment in vitro (Di et al., 2009). Molecules acting on EPCs can also be used to positively condition cultured EPCs before therapeutic transplantation. Thus, because the chemokine SDF-1α is known to mobilize EPCs, and because EPCs are known to carry receptors for SDF-1α, it was demonstrated that SDF-1α-primed EPCs exhibit increased adhesion to HUVECs, resulting in more efficient incorporation of EPCs into sites of neovascularization (Zemani et al., 2008). Conclusions In conclusion, EPCs are biomarkers of endothelial repair with therapeutic potential, since low EPC levels predict endothelial dysfunction and a poor clinical outcome.
Various studies have focused on the important role of EPCs in vasculogenesis and angiogenesis of ischemic tissue in peripheral artery disease as well as in acute myocardial infarction, but only a few studies have concentrated on the role of EPCs in the prevention and therapy of atherosclerosis. Acknowledgements This study was supported in part by research grants from the UST-UCSD International Center of Excellence in Advanced Bio-engineering NSC-99-2911-I-009-101 from the National Science Council; VGH-V98B1-003 and VGH-V100E2-002 from Taipei Veterans General Hospital; and a grant from the Ministry of Education "Aim for the Top University" Plan.
4,432.4
2012-04-20T00:00:00.000
[ "Biology", "Medicine" ]
Fractional model of magnetic field penetration into a toroidal soft ferromagnetic sample We propose an original approach to solve the coupled problem of alternating magnetic field penetration inside a toroidal soft ferromagnetic sample and frequency-dependent magnetic hysteresis. The local repartition of ferromagnetic losses depends on the instantaneous material properties and on the frequency of the excitation field waveform. A correct solution of the model with respect to this repartition requires a two-dimensional resolution of the diffusion equation including local dynamic hysteresis. The resulting model gives valuable local information but requires complex parameter setting, high computational capacity and long simulation times. Due to the toroidal shape, a one-dimensional solution of the diffusion equation is clearly insufficient and would lead to inaccurate simulation results. Consequently, a large number of discretization nodes and extended simulation times must be considered in the two-dimensional configuration. In our alternative solution, starting from a lumped model, we add a fractional time derivative accounting for the dynamic hysteresis losses. This leads to an accurate formulation of the problem with a reduction in complexity and simulation times. Introduction Local ferromagnetic losses through the cross-section of a toroidal magnetic core cannot be physically measured. The understanding and modeling of such losses is of major interest whenever accurate simulations of complete electromagnetic systems are needed; commonly used differential breakers are an example of this case. The simulation of the magnetic field evolution and propagation should be considered with the proper geometrical constraints and with the presence of the surrounding electronics. Fairly high response levels (amplitude, frequency) can be reached in transient phases. A correct treatment should also take into account material properties. In particular, the magnetic material law is necessary to obtain correct simulations of the whole system [1-4]. A number of research teams have already successfully implemented a dynamic hysteresis model in the diffusion equation of the magnetic excitation field [5-9]. The one-dimensional resolution of such a coupled problem leads to accurate simulations. Unfortunately, it implies working on samples of specific geometry, for instance ferromagnetic sheets, whose thickness is much smaller than the two remaining dimensions. In this article, a similar approach is used in a two-dimensional setting, which is necessary as soon as we work with a toroidal geometry. The simultaneous resolution of the diffusion equation and of the dynamic hysteresis model provides very interesting information, such as the repartition of the magnetic hysteresis losses through the cross-section of the tested sample; it gives an accurate loss cartography of the tested toroidal sample. This sensitive and accurate simulation provides good results over a large frequency bandwidth. Unfortunately, such coupled techniques have some disadvantages: the approach is not modular, it lacks flexibility, it requires a large memory allocation and it usually needs excessive simulation times. An alternative option is to use a lumped model, without space consideration, focusing on the dynamic hysteresis behavior.
Note that some parameters should be fitted to the geometrical constraints. In terms of frequency dependence or time derivative, the model can be a first-order model (the product of a constant and the time derivative of the induction field) and/or something a little more complicated, such as taking into account the excess losses by another contribution varying proportionally to the square root of the frequency. However, as frequently demonstrated, these simple hysteresis models give correct behavior over a relatively limited frequency bandwidth [9]. In this article, we study frequently used configurations of a toroidal soft magnet with an applied oscillating magnetic field H. A schematic of the experimental configuration is shown in Fig. 1. To reveal the dynamical hysteretic behaviour, we propose to use a lumped model in which the space consideration is replaced by a fractional derivative taking into account the dynamic losses. The fractional order, using the system memory to parametrize the delayed response of the deeper regions, gives an additional degree of freedom in the simulation: by adjusting it, we can accurately fit or simulate the hysteresis-loop area versus frequency curve over an extremely large frequency bandwidth [9-11]. To describe this alternative approach and to draw the main conclusions, this article has the following structure. After the current introduction (Sect. 1), a theoretical presentation of the two simulation schemes used during this study is provided (Sect. 2). The first approach is the coupled two-dimensional finite-difference discretization technique with a dynamic hysteresis model; the second is a lumped model with a fractional-derivative hysteresis. In the second part of the article, a large number of simulation/measurement comparisons are given, and the advantages and disadvantages of both simulations are discussed. In Sect. 3, experimental results are presented and compared with simulations. Finally, Sect. 4 closes the paper with conclusions. 2D finite differences scheme for the simulation of the magnetic field diffusion The diffusion equation is solved through the square cross-section of a toroidal magnetic core. We assume a unidirectional and inhomogeneous surface excitation field [varying along the diameter of the magnetic core (see Fig. 1)]. In addition, we assume a constant and homogeneous electrical conductivity. Given the dimensions of the studied magnetic core (namely its width and thickness), a two-dimensional study is carried out. Symmetry considerations allow the studied area to be reduced to half the width of the magnetic core, which constitutes a significant reduction of the calculation time. The magnetic field diffusion results from Maxwell's equations [12]: ∇²H − ∇(div H) = σ ∂B/∂t, (2.1) where H is the applied magnetic field, B is the induction and σ is the electrical conductivity. As the magnetic field, in our configuration, is always perpendicular to the cross-section of the toroidal sample (see Fig. 2), div(H) = 0, and using Cartesian coordinates Eq. 2.1 becomes ∂²H_z/∂x² + ∂²H_z/∂y² = σ ∂B_z/∂t. (2.2) By using a finite-difference temporal discretization of the time derivative of the magnetic induction in Eq. 2.2, the following expression appears: ∂²H_z/∂x² + ∂²H_z/∂y² = σ [B_z(t) − B_z(t − Δt)]/Δt. (2.3) By considering a linear relation between B and H, one can obtain the analytical solution of Eq. 2.3. Unfortunately, this simple consideration gives simulation results far from the experimental ones: as soon as the frequency of the excitation field exceeds a few hertz, frequency effects appear.
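As a minimal illustration of the linear case of Eq. 2.3, the sketch below advances a one-dimensional explicit finite-difference version of the diffusion equation with a constant permeability B = μH. It is a toy scheme written for this discussion, not the authors' solver, and the permeability value is a placeholder:

```python
import numpy as np

# Toy 1D magnetic diffusion: sigma * dB/dt = d2H/dx2, with B = mu * H (linear).
sigma = 1.4e7           # electrical conductivity, (Ohm m)^-1
mu = 4e-4               # constant permeability (placeholder value), H/m
L, nx = 5e-3, 51        # 5 mm width, number of nodes
dx = L / (nx - 1)
dt = 0.2 * sigma * mu * dx**2   # below the explicit stability limit sigma*mu*dx^2/2

H = np.zeros(nx)
f = 50.0                # excitation frequency, Hz
t = 0.0
for _ in range(20000):
    t += dt
    H[0] = H[-1] = 100.0 * np.sin(2 * np.pi * f * t)  # surface excitation field, A/m
    lap = (H[2:] - 2 * H[1:-1] + H[:-2]) / dx**2      # discrete Laplacian
    H[1:-1] += dt * lap / (sigma * mu)                # dH/dt = lap / (sigma * mu)

# With these values the skin depth is ~1 mm, so the centre lags and is attenuated
# relative to the surface: the field does not penetrate the sample instantaneously.
print("field at the centre vs surface:", H[nx // 2], H[0])
```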
If we want to obtain correct simulation results under such external conditions, the frequency dependence and the hysteresis phenomenon due to domain wall motions through the tested sample must be taken into account. A simple consideration of the dynamic magnetic hysteresis gives: H(t) = H_stat(B) + β dB/dt, (2.4) where H_stat(B) is a fictitious contribution derived from a quasi-static consideration of hysteresis. Namely, a quasi-static hysteresis model is used to provide this contribution; many quasi-static hysteresis models exist, and the specific one we used is described in the next subsection. The parameter β depends on the material and the temperature and, in our case, is fitted from experiment. It is independent of the geometry and of the excitation waveform. Former experimental validations have shown that this dynamic material law constitutes a good description of the dynamic (damping) effects due to Bloch wall motions [6-11]. This material law represents a statistical behavior of the wall motions; it can therefore be considered isotropic and characteristic of the material. Equation 2.4 combines two hysteresis contributions: a quasi-static contribution described hereafter, and a dynamic contribution, the product of the constant β and the time derivative of the induction field B. Quasi-static hysteresis model The quasi-static hysteresis model we used is based on the following assumption: under quasi-static excitation, Bloch domain wall movements behave like mechanical dry frictions. A static (frequency-independent) equation based on this assumption is proposed for the numerical simulation of these movements. A major hysteresis loop is obtained by translating a non-hysteretic curve. The sign of this translation is equal to the sign of the time derivative of the induction B, and its amplitude is equal to the coercive field, H_c: H(t) = f⁻¹(B) + sign(dB/dt) · H_c, (2.5) where f(H) (and reciprocally f⁻¹(B)) is a non-hysteretic, saturated function determined using an experimental major hysteresis loop (H_max > H_c). B = B_z and H = H_z are defined at a single point along the z direction. The function f is characterized by the material parameters σ₁ and γ, obtained by comparing simulation and measurement on a quasi-static major hysteresis loop (Eq. 2.6). Note that Eqs. 2.5 and 2.6 describe an analogue of a single-parameter dry-friction phenomenon. This is obviously not enough to describe correctly the whole range of behaviors of the tested sample, which results from a set of much larger contributions. However, the model can be considerably improved by taking into account a set of similar dry frictions with coefficients distributed over some specific domains. In our case, each component is characterized by its own coercive field H_ci and its own weight, depending on the position, in the final reconstitution of the magnetization and induction fields. More realistic cycles, including minor loops, are obtained by introducing a distribution of the basic element (a spectrum), characterized by the corresponding coercive fields and weights (Eq. 2.7), where k is the number of components and Spectrum(i) represents the distribution of the various elementary dry frictions. The local material law coupled into the diffusion equation then reads H(x, y, t) = H_stat(B(x, y, t)) + β ∂B(x, y, t)/∂t. (2.8) Two dimension finite differences resolution Thanks to the simplicity of the two-dimensional case and the symmetries of the proposed problem (half width of the magnetic core, as illustrated in Fig. 3), a finite-difference formulation of the diffusion equation is carried out. The finite-difference method is, in this case, accurate enough to obtain correct numerical results.
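Before moving on to the two-dimensional discretization, the single dry-friction construction of Eq. 2.5 can be illustrated in a few lines of code. The anhysteretic function f⁻¹ is taken here as a simple tan-based saturation purely for illustration (the paper fits its own f from a measured major loop), and all parameter values are placeholders:

```python
import numpy as np

# Single dry-friction quasi-static loop, Eq. 2.5: H = f_inv(B) + sign(dB/dt) * Hc.
Hc = 40.0                 # coercive field, A/m (placeholder)
Bsat = 1.5                # saturation induction, T (placeholder)

def f_inv(B):
    """Assumed anhysteretic inverse law B -> H (tan-based saturation, illustrative)."""
    return 200.0 * np.tan(0.5 * np.pi * np.clip(B / Bsat, -0.999, 0.999))

t = np.linspace(0.0, 1.0, 2001)
B = 1.3 * np.sin(2 * np.pi * t)           # imposed sinusoidal induction, T
dBdt_sign = np.sign(np.gradient(B, t))    # sign of dB/dt selects the loop branch
H = f_inv(B) + dBdt_sign * Hc             # quasi-static excitation field, A/m

# The pairs (H, B) trace a major loop of width 2*Hc around the anhysteretic curve.
offsets = H - f_inv(B)                    # +Hc on the rising branch, -Hc on the falling
print("branch offsets (A/m):", offsets.min(), offsets.max())
```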
A two-dimensional discretization is performed in the Ox and Oy directions. In the case of a 25-node discretization, the finite-difference resolution leads to a matrix system of equations in which the H_i are the locally defined magnetic field values and M, S₁ and S₂ are the system matrices. Fractional model Numerical resolutions (finite-difference approximations) lead to precise information (the local distribution of the magnetic losses). Unfortunately, if no restrictive assumptions are made, then whatever numerical solving method is adopted, the non-linear behavior of the system always implies long calculation times and uncertain convergence [13]. To overcome this inconvenience, an alternative solution is proposed hereafter. Under weak-frequency conditions, a quasi-static model [14-16], the Jiles-Atherton model [17-19] or a fractional model [12,20] (with the quasi-static contribution previously described [12]) provides accurate results for the evolution of the average magnetic induction B versus the magnetic excitation field H. Such particular external conditions mean homogeneous distributions of the induction through the cross-section of the sample and, consequently, homogeneous distributions of the magnetic losses. Unfortunately, for these simple models, as soon as the quasi-static external conditions expire, huge differences appear. Small improvements can be obtained by adding to the lumped model a simple dynamic contribution, the product of a damping constant ρ and the time derivative of the induction field B; this product is homogeneous to an equivalent excitation field H. Here again, even if this addition provides a relative improvement, correct simulation results are obtained over an unfortunately too narrow frequency bandwidth. It seems that a simple viscous loss term ρ dB/dt leads to an overestimation in the high-frequency part of the curve of hysteresis-loop area versus frequency. Another correction of the lumped model must be made to reach correct simulation results over a large frequency bandwidth. A mathematical operator dealing with the low-frequency and high-frequency components in a different way than a straight time derivative is required. Such operators can be found in the framework of fractional calculus; they are the so-called non-integer or fractional derivatives. Fractional derivation generalizes the concept of derivative to complex and non-integer orders. A fractional time derivative dⁿB/dtⁿ can be added to our lumped model thanks to the Grünwald-Letnikov or Riemann-Liouville definitions [21-25]. Both of them are particular cases of a general fractional-order operator: the first represents the n-order derivative, while the other represents the n-fold integral. In this sense, the class of functions described by the Riemann-Liouville definition is broader (the function must only be integrable) than the one defined by Grünwald and Letnikov. However, for a function from the Grünwald-Letnikov class, both definitions are equivalent. In the present paper, we use the Riemann-Liouville form for n ∈ [0, 1]: dⁿf(t)/dtⁿ = [1/Γ(1 − n)] d/dt ∫₀ᵗ f(τ)/(t − τ)ⁿ dτ, (2.9) where Γ is the Euler gamma function. According to the above definition (Eq. 2.9), the fractional derivative of a function f(t) can also be considered as the time derivative of the convolution of f(t) with t⁻ⁿ/Γ(1 − n), where n is the order of the fractional derivation. The additional time derivative present in the formula coincides with the occurrence of a positive argument of the gamma function, Γ(·), ensuring its convergence to a finite value.
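Numerically, the Grünwald-Letnikov form is the easiest route to dⁿB/dtⁿ: it is a weighted sum over all past samples of the signal, which makes the memory effect discussed next explicit. A minimal sketch written for this discussion (placeholder signal; not the authors' code), validated against a known closed-form result:

```python
import math
import numpy as np

def gl_weights(n: float, K: int) -> np.ndarray:
    """Grunwald-Letnikov weights w_k = (-1)^k * binom(n, k), via the recurrence."""
    w = np.empty(K + 1)
    w[0] = 1.0
    for k in range(1, K + 1):
        w[k] = w[k - 1] * (k - 1.0 - n) / k
    return w

def gl_derivative(f: np.ndarray, h: float, n: float) -> np.ndarray:
    """Fractional derivative of order n of the sampled signal f (time step h)."""
    w = gl_weights(n, len(f) - 1)
    out = np.empty_like(f)
    for i in range(len(f)):
        out[i] = np.dot(w[: i + 1], f[i::-1]) / h**n   # sum over the whole past
    return out

# Sanity check against the exact result D^n t = t^(1-n) / Gamma(2-n) at t = 1.
h, n = 1e-3, 0.52                      # n = 0.52, the order fitted in this paper
t = np.arange(0.0, 1.0 + h, h)
approx = gl_derivative(t, h, n)[-1]
exact = 1.0 / math.gamma(2.0 - n)
print(f"GL: {approx:.4f}  exact: {exact:.4f}")
```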
It is clear that the fractional derivative includes memory of the previous states. From a spectral point of view, an interesting consequence of the fractional derivative is the modification of the frequency spectrum f(ω) of the time-domain function f(t), which is multiplied by (jω)ⁿ instead of jω as for a usual first-order derivative. As a consequence, the fractional derivative provides an interesting alternative, able to satisfy the balance requirements, previously discussed, between the low- and high-frequency components of f(ω) [12,26]. The fractional derivative is introduced into our lumped model through the dynamic contribution: we replace the term β dB/dt by ρ dⁿB/dtⁿ and add this contribution to the quasi-static contribution of Eq. 2.5. Both contributions are included in the final version of the dynamic hysteresis lumped model (Eq. 2.12). Experimental validation 3.1 Simulation results A specific experimental setup was built in order to compare both simulation models; an illustration of this setup is given in Fig. 1. The tested sample is a low-cobalt iron electrical alloy referenced SV142b. Both its thickness and its width are 5 mm, and its conductivity is 1.4 × 10⁷ (Ω m)⁻¹. Due to the high electrical conductivity of the tested sample, even for relatively weak variations of the excitation frequency, large differences appear in the loss distribution (Fig. 4). The magnetic excitation amplitude decreases in the radial direction (Ampère's theorem); as a consequence, a weakly asymmetric distribution of the local magnetic losses is obtained in the same direction. Figure 5a-d shows the superimposition of simulated and measured averaged dynamic loops when the surface excitation field is a sine wave of frequency 1, 50, 200 or 500 Hz. Experimental results All measurements have been made following the international standard on toroidal samples, i.e., with an imposed sinusoidal induction field B of maximum value 1.3 T. The lumped-model simulation is calculated using both contributions: the static contribution provided by a quasi-static model (developed elsewhere), and the dynamic contribution provided by the fractional model, which includes the dynamic consideration of wall movements. The finite-difference model is run using a 10 × 20 node discretization. The simulation dynamic parameters are displayed in Table 1. The macroscopic temporal induction field is determined by calculating the average value of the local induction field over all nodes. In Fig. 6, we check the accuracy of the fractional lumped model under a non-sinusoidal waveform: the magnetic excitation field is composed of a fundamental sine at 50 or 100 Hz plus a third-harmonic sine component. The good correlation between simulations and experimental results validates the approach developed here and confirms that the coupled diffusion-equation/wall-movement model can be described by the lumped fractional-derivative model. Note that in the above simulations the fractional order n = 0.52 gives consistent results for any thickness of the sample, with respect both to the numerical solution of the corresponding diffusion equation and to the experimental data. Conclusions In this paper, we succeeded in replacing the complex 2D coupled model of magnetic hysteresis and diffusion by a lumped model including a fractional-derivative contribution. The local repartition of ferromagnetic losses depends on the instantaneous material properties and on the frequency of the excitation field waveform.
A correct solution modeling this repartition precisely implies a two-dimensional resolution of the diffusion equation including local dynamic hysteresis. The resulting model gives valuable information but requires complex parameter setting, high computational capacity and long simulation times. Due to the toroidal shape, a one-dimensional solution of the diffusion equation is clearly insufficient and would lead to inaccurate simulation results. A two-dimensional resolution must be considered and, consequently, a large number of discretization nodes is necessary, resulting in prohibitive simulation times and heavy memory requirements. These toroidal magnetic cores are mainly employed industrially in current sensors and differential breaker applications, where the excitation frequency is unpredictable and high frequency levels are common, so precise models including frequency-dependent hysteresis are required; indeed, as soon as the frequency exceeds the usual 50 Hz, the dynamic hysteresis losses increase rapidly. A lumped model including only an integer-order derivative provides correct behavior over a limited frequency bandwidth; a large improvement is obtained by replacing this integer-order derivative with a fractional one. The fractional lumped model gives a precise macroscopic behavior B(H); it leads to a very accurate formulation of the problem with a strong reduction in complexity and simulation times. In future work, our fractional model will be tested on other coupled diffusion and hysteresis phenomena, such as those of ferroelectric materials, or in other physical domains such as thermodynamics [27].
4,004
2017-01-25T00:00:00.000
[ "Physics" ]
The particle physics reach of high-energy neutrino astronomy We discuss the prospects for high-energy neutrino astronomy to study particle physics in the energy regime comparable to and beyond that obtainable at current and planned colliders. We describe the various signatures of high-energy cosmic neutrinos expected in both neutrino telescopes and air shower experiments, and discuss these measurements within the context of theoretical models with a quantum gravity or string scale near a TeV, supersymmetry, and scenarios with interactions induced by electroweak instantons. We attempt to assess the particle physics reach of these experiments. Introduction In recent endeavours, exploration of high-energy particle physics has largely been conducted in accelerator experiments, and for good reason: accelerator laboratories provide controlled, high-luminosity environments in which very precise levels of measurement can be reached. Despite these advantages, astrophysics experiments have also revealed a great deal of particle physics, beginning with Anderson's discovery of the positron in 1932, then the muon, the pion, etc, predating accelerator experiments, and continuing to the observation of neutrino masses and mixings. It is clear that astrophysics has much to offer in studying the fundamental aspects of particle physics. Particle physics has entered an exciting era. The standard model (SM) of the strong and electroweak interactions has been experimentally verified to high precision, while the mechanisms for electroweak symmetry breaking and mass generation remain largely unknown. Theoretical arguments and indirect experimental evidence imply the existence of new physics near the electroweak scale, below a few TeV (for a recent review on the status of the SM and beyond, see e.g. [1]). The leading candidates for theoretical models beyond the SM include weak-scale supersymmetry (SUSY) (for an introductory review on SUSY, see e.g. [2]), strongly interacting dynamics (for a recent review on strong dynamics in the electroweak sector, see e.g. [3]) and low-scale string or quantum gravity [4,5]. It is encouraging that all of the above scenarios often lead to observable signatures in next-generation colliders such as the CERN Large Hadron Collider (LHC) and an e⁺e⁻ linear collider. The field of high-energy neutrino astronomy finds itself in a position to contribute to two very different areas of science: astronomy and particle physics (for recent reviews of high-energy neutrino astronomy, see e.g. [6]). On the one hand, the next generation of neutrino telescopes may reveal the origins of the highest energy cosmic rays, help us understand the progenitors of γ-ray bursts and provide other insights into some of the greatest outstanding astrophysical puzzles. On the other, very-high-energy cosmic neutrinos present a unique opportunity to study the interactions of elementary particles at energies comparable to and beyond those obtainable in current or planned colliders. This is the main advantage of such experiments over traditional collider experiments. Currently, the highest energy achieved in collider experiments is at Fermilab's Tevatron, with E_CM ≈ 2 TeV. This centre-of-mass energy roughly corresponds to a PeV neutrino striking a nucleon at rest, E_ν = E_CM²/(2 m_N). Even the LHC at CERN will only reach energies that correspond to 100 PeV cosmic neutrinos. It is certainly plausible, as we will discuss, that there is a neutrino flux at energies well beyond 1 EeV.
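The fixed-target conversion E_ν = E_CM²/(2 m_N) quoted above is easy to check numerically; the following sketch (written for this review) reproduces the Tevatron and LHC equivalences:

```python
M_N = 0.938  # nucleon mass in GeV

def lab_neutrino_energy(e_cm_gev: float) -> float:
    """Lab-frame neutrino energy giving CM energy e_cm on a nucleon at rest."""
    return e_cm_gev**2 / (2.0 * M_N)

for name, e_cm in [("Tevatron", 2_000.0), ("LHC", 14_000.0)]:
    e_nu = lab_neutrino_energy(e_cm)
    print(f"{name} (E_CM = {e_cm / 1e3:.0f} TeV): E_nu = {e_nu:.2e} GeV = {e_nu / 1e6:.0f} PeV")

# Tevatron (E_CM = 2 TeV):  E_nu = 2.13e+06 GeV =   2 PeV
# LHC      (E_CM = 14 TeV): E_nu = 1.04e+08 GeV = 104 PeV
```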
Even crude measurements of neutrino cross-sections at extremely high energies would provide powerful tests of fundamental physics at and beyond a scale of 1-10 TeV. Additionally, sources of high-energy neutrinos may be observed from distances of hundreds or thousands of Mpc, providing baselines for tests of neutrino oscillations or decays that could not be carried out using accelerator, atmospheric or solar neutrinos. The experimental status of high-energy neutrino astronomy is developing rapidly. Current technologies such as the AMANDA-II [7] and RICE [8] experiments at the South Pole have proven successful, but with too little sensitivity to reach many of the most interesting physics goals. Fortunately, several new experiments are soon to enter the field. IceCube [9] will expand the effective area of AMANDA-II by more than a factor of 20 while also improving both the angular and the energy resolution. ANTARES [10], in the Mediterranean, will use a similar technique, but with sensitivity to neutrino-induced muons of lower energy (down to 10 GeV). Radio techniques will be employed in the balloon-based ANITA [11] experiment, which has its first flight scheduled in the next year or two. High-energy cosmic-ray experiments, such as the Pierre Auger observatory [12] and space-based observatories such as OWL [13] or EUSO [14], will also be sensitive to ultrahigh-energy cosmic neutrinos. Other proposals, such as using acoustic techniques [15] or natural salt domes as a Cerenkov medium [16], or expanding IceCube into a multi-kilometre ultrahigh-energy experiment [17], have been discussed as well. With the development of these many and varied techniques, a new window into fundamental particle physics will be opened [6], possibly predating the LHC experiments. This paper is a mini review of the potential of high-energy neutrino astronomy for studying particle physics beyond the SM. We first present the current theoretical predictions for the high-energy cosmic neutrino flux in section 2. We then discuss methods for exploring particle physics via high-energy neutrino astronomy in section 3, paying particular attention to future neutrino telescopes and air shower observatories. We summarize in section 4 the predicted experimental signatures of various new physics scenarios, including models with low-scale quantum gravity, low-scale string resonances, black holes and p-branes, electroweak instantons and SUSY. We draw our conclusions in section 5. The high-energy cosmic neutrino flux Just as the performance of an accelerator experiment crucially depends on its luminosity, the particle physics reach of high-energy neutrino astronomy will depend on the incoming flux of cosmic neutrinos. Here, we briefly review some of the arguments for various neutrino fluxes from cosmic accelerators. The spectrum of cosmic rays has been well measured up to energies near 10²⁰ eV (10⁸ TeV in the laboratory frame), where the experiments become limited by poor statistics. The spectrum consists of a series of power laws whose slope changes at energies known as the 'knee' and the 'ankle' (see figure 1). The standard process believed to be responsible for the observed spectrum of cosmic rays is the acceleration of charged particles via second-order Fermi acceleration (for a discussion of Fermi acceleration, see [18]). In Fermi's original paper on the subject, he proposed that cosmic rays were accelerated by reflecting off the time-varying magnetic fields associated with galactic clouds moving with randomly distributed velocities.
Although this process does stochastically accelerate charged particles, it does so very slowly and is generally not capable even of countering the energy losses from ionization and other processes. If instead we consider a compact region of dense plasma, however, the random motion of the matter and the associated magnetic fields can be sufficient to accelerate charged cosmic rays to very high energies. Although there is no strong evidence as of yet, it is likely that supernova remnants accelerate most of the cosmic rays up to the knee in the spectrum, occurring around 10¹⁵ eV, by this mechanism. A generic feature of Fermi acceleration is a power-law spectrum, dN/dE ∝ E⁻ᵅ, where α ≃ 2 [19]. The maximum energy to which a cosmic-ray source may accelerate particles can be estimated by a simple argument. First, we assume that, to accelerate a proton to a given energy in a given magnetic field, the size of the accelerator must be larger than the gyroradius of the particle's orbit: R > r_gyro = E/(qB). This condition yields a maximum energy of E_max ≈ γ q B R, where γ is the Lorentz factor of the cosmic accelerator. To produce cosmic rays with energies near the highest observed (∼10²⁰ eV), very compact objects are required. For the most compact objects, we can consider R ∼ GM/c², the Schwarzschild radius of the object. For such a source, we find a maximum energy of E_max ≈ γ q B (GM/c²). With only micro-Gauss galactic magnetic fields, we must turn to extragalactic sources to accelerate cosmic rays to energies above the EeV scale. Extragalactic sources potentially capable of accelerating particles to such energies include the relativistic jets of active galactic nuclei and gamma-ray bursts. As protons are accelerated to very high energies in such sources, they may undergo photomeson interactions with the surrounding radiation fields. In such interactions, both charged and neutral pions are produced. These pions then decay, producing neutrinos and γ rays, respectively. Given the observed cosmic-ray flux, this process essentially guarantees the existence of the accompanying neutrinos and γ rays. Alternatively, bounds can be placed on the cosmic neutrino flux by relating it to the cosmic-ray spectrum. Using this method, Waxman and Bahcall [20] have placed an upper bound on the flux of each neutrino flavour (equation (4)). This assumes that the sources in question are optically thin, or transparent, to protons. If sources are optically thick to protons, however, the bound on the neutrino flux can be based only on the γ-ray observations by EGRET and is thus weaker by a factor of about 40 [21]. Furthermore, if some sources were truly 'hidden', implying that neither nucleons nor photons could escape, no upper bound could be placed on the corresponding neutrino flux. For a further discussion of these and similar arguments, see [22]. In addition to the neutrino production from cosmic-ray interactions in or near cosmic accelerators, ultrahigh-energy cosmic rays produce neutrinos during propagation over cosmological distances [23]. Protons of energy above a few times 10¹⁹ eV can scatter off cosmic microwave background photons with a centre-of-mass energy roughly equal to the Δ resonance (1.232 GeV). Again, both charged and neutral pions can be produced this way, yielding neutrinos and γ rays. The neutrino flux corresponding to this process is called the 'cosmogenic neutrino flux'.
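The quoted threshold of a few times 10¹⁹ eV can be estimated from the kinematics of p + γ → Δ. The sketch below is an order-of-magnitude check written for this review: it uses a typical CMB photon energy and a head-on collision, and photons in the high-energy tail of the blackbody spectrum bring the effective threshold down toward the quoted value:

```python
M_P = 0.938      # proton mass, GeV
M_DELTA = 1.232  # Delta(1232) mass, GeV

def gzk_threshold(e_gamma_gev: float) -> float:
    """Proton energy for which a head-on p + gamma collision reaches s = m_Delta^2."""
    return (M_DELTA**2 - M_P**2) / (4.0 * e_gamma_gev)

E_CMB = 6.3e-13  # mean CMB photon energy (~6.3e-4 eV), in GeV
e_p = gzk_threshold(E_CMB)
print(f"E_p ~ {e_p:.1e} GeV = {e_p * 1e9:.1e} eV")
# ~2.5e20 eV for a mean CMB photon; the high-energy tail of the 2.7 K spectrum
# lowers the effective threshold to a few times 10^19 eV, as quoted in the text.
```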
Unlike the flux of neutrinos from cosmic accelerators, the spectrum of cosmogenic neutrinos depends only on the spectrum of ultrahigh-energy protons and the distribution of their sources, and thus can be reliably calculated [24]. [Figure 2: the Waxman-Bahcall bound [20] (solid line) and the cosmogenic neutrino flux (dashed curve for ν and dotted curve for ν̄) [27]. Note that, after neutrino oscillations are considered, these fluxes will contain all three neutrino flavours (ν_e, ν_μ, ν_τ) in equal quantities. Figure taken from [28].] The cosmogenic neutrino flux is often thought of as a 'guaranteed' source of ultrahigh-energy neutrinos, assuming that the cosmic-ray primaries at the highest observed energies are protons and not heavy nuclei [25]. There are ways in which neutrino fluxes larger than those described here could be produced, for example in models of non-accelerator cosmic-ray origins, i.e., models in which the highest energy cosmic rays are produced in the decays or annihilations of superheavy objects. In such models, the resulting neutrino flux is not constrained by the arguments given here. Although such scenarios are certainly interesting, we will not consider them in this paper. For a further discussion of cosmic neutrino fluxes and their constraints, see [26]. Throughout the remainder of this paper, we will primarily consider two representative choices for the cosmic neutrino spectrum. The first is a flux equal to the bound set by Waxman and Bahcall [20], which we call the Waxman-Bahcall flux of equation (4). The second is the cosmogenic neutrino flux, as calculated in [27]. Each of these is shown in figure 2. Particle physics with high-energy neutrino astronomy To identify potential signatures of new physics in high-energy neutrino interactions, one must first understand the phenomenology predicted by the SM, in particular the features of charged and neutral current interactions between high-energy neutrinos and target nuclei. The SM predicts the cross-sections for neutrino-nucleon interactions, up to uncertainties in the parton distribution functions at extremely small values of the momentum fraction x, at energies beyond those probed by any planned neutrino telescopes [29]. In this section, we describe the experimental features predicted by the SM in neutrino telescopes and air shower experiments. Neutrino telescopes Neutrino telescopes are essentially arrays of detectors distributed over a large volume of a Cerenkov medium, such as water or ice. These detectors may be sensitive to optical Cerenkov radiation, as are AMANDA-II, IceCube and ANTARES, or to radio, as is RICE. We focus on optical Cerenkov detectors here, although the treatment of shower detection is easily generalized to include radio. Muons produced in the charged current interactions of muon neutrinos can travel several kilometres through a detector medium, producing a 'track' of Cerenkov light that can be observed and accurately reconstructed by neutrino telescopes.
The rate of muon events observed in a large-volume neutrino telescope is given by N_μ = T A_eff N_A ∫ dΩ dE_ν dy [dN_ν/(dE_ν dt dΩ)] (dσ/dy) P_S(E_ν, θ_z) R_μ(E_μ, θ_z), (5) where θ_z is the zenith angle of an event (θ_z = 0 is vertically down-going), N_A is the Avogadro number, dN_ν/(dE_ν dt dΩ) is the flux of muon neutrinos (per unit energy, per unit time, per solid angle), dσ/dy is the differential neutrino-nucleon cross-section (where y is defined such that E_μ = (1 − y)E_ν), P_S(E_ν, θ_z) is the survival probability of a neutrino travelling through the Earth, A_eff is the effective area of the detector (≈1 km² for IceCube), T is the observation time, and R_μ(E_μ, θ_z) is either the muon range or the length of material (i.e. ice) between the detector and the Earth's surface, whichever is smaller. The muon range is defined as the distance a muon propagates in the medium surrounding the detector before falling below a cutoff energy. The muon range is given by [30] R_μ = (1/β) ln[(α + β E_μ)/(α + β E_μ^cut)], (6) where E_μ^cut is the minimum muon energy required to produce an event. This value is selected to reduce the number of background events while retaining as many signal events as possible. In optical Cerenkov neutrino telescopes, muons with energies as low as 10-100 GeV can be observed, although cuts well above this energy are often imposed when searching for high-energy neutrinos. In ice, α ≈ 2 × 10⁻⁶ TeV cm² g⁻¹ and β ≈ 4.2 × 10⁻⁶ cm² g⁻¹ [30]. For a PeV muon in ice and a 100 GeV muon energy threshold, the range is approximately 1.7 km. For muons with energies of 10 PeV, 100 PeV or 1 EeV, the range increases to 7, 13 and 18 km, respectively. Thus, for very energetic muon neutrinos, the target volume of the experiment becomes a long cylinder rather than a box. This is particularly relevant for neutrinos coming from a direction near the horizon. Unlike in accelerator experiments, the flux (or luminosity) of a cosmic neutrino beam may be unknown. Therefore, simply counting the number of events will not provide sufficient information to measure a neutrino cross-section; instead, information from the angular and energy distributions of events must be used [31]. For a cross-section of about 2 × 10⁻⁷ mb, a particle's interaction length as it travels through the Earth is equal to the Earth's diameter. According to the SM prediction, this cross-section is reached near E_ν ∼ 100 TeV. Thus, as the neutrino-nucleon cross-section is increased from its SM value, the effect of absorption in the Earth becomes more pronounced and fewer of the observed events come from neutrinos travelling through the Earth. A crude way to measure the neutrino-nucleon cross-section could, therefore, be a comparison of the up-going and down-going (or Earth-skimming, etc) events in a high-energy neutrino telescope; a more sophisticated analysis of the angular distribution of events as a function of energy would, however, be more useful. In addition to muon tracks, neutrino telescopes are sensitive to electromagnetic and hadronic showers. These events can be produced by all three neutrino flavours in neutral current interactions, or in some charged current interactions; for example, electromagnetic showers are produced in the charged current interactions of electron neutrinos. The rate of shower events is calculated from an expression similar to equation (5), but with the muon range, R_μ, together with the effective area, A_eff, replaced by the effective volume of the detector. Also, the shower energy is given by E_sh = yE_ν for neutral current events and E_sh = E_ν for electron neutrino charged current events.
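Returning to the Earth-absorption argument above, the cross-section at which the interaction length equals one Earth diameter follows from σ = 1/(N_A X), with X the column depth along the diameter. A rough check written for this review, using the Earth's mean density (the true column depth is somewhat larger because of the dense core):

```python
N_A = 6.022e23       # Avogadro's number (nucleons per gram for A ~ 1 g/mol)
RHO_EARTH = 5.51     # mean density of the Earth, g/cm^3
D_EARTH = 1.2742e9   # Earth diameter, cm

column_depth = RHO_EARTH * D_EARTH   # column depth along a diameter, g/cm^2
sigma = 1.0 / (N_A * column_depth)   # cross-section giving one interaction length
print(f"sigma ~ {sigma:.1e} cm^2 = {sigma / 1e-27:.1e} mb")
# ~2.4e-34 cm^2 ~ 2e-7 mb, matching the value quoted in the text.
```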
The minimum energy a shower must have to be observed by an optical Cerenkov neutrino telescope is of the order of a few TeV. For radio detectors, the shower threshold is much higher, in the PeV-EeV range.

Finally, very large volume neutrino telescopes, such as IceCube, are also capable of observing events uniquely associated with tau neutrinos. Tau neutrinos that interact via the charged current in the detector medium produce a shower and a charged tau lepton. Below a few PeV, the tau lepton's lifetime is sufficiently short that it decays promptly, producing a second shower essentially spatially coincident with the first one. Such an event is indistinguishable from a single shower. At higher energies, however, the tau lifetime can be long enough to distinguish these two showers. For example, at 10 PeV, a tau travels, on average, about 500 m before decaying. If both showers occur within the detector volume, such an event is called a 'double bang' and is a clear signature of a tau neutrino [32]. If the first of these showers occurs outside of the detector, with only the second shower being observed, the event is called a 'lollipop': the observed shower and the minimum ionizing track produced by the tau constitute the candy and the stick, respectively. Again, this is a clear signature of a tau neutrino.

Standard cosmic accelerators produce neutrinos via charged pion decay (see section 2). Pion decays produce neutrino flavours in the ratio φ_νe : φ_νμ : φ_ντ = 1 : 2 : 0. Over the long baselines such neutrinos travel before reaching the Earth, neutrino oscillations modify this ratio to φ_νe : φ_νμ : φ_ντ ≅ 1 : 1 : 1, i.e. nearly equal quantities of all three flavours. Considering only SM neutrino interactions, these incoming flavour ratios can be translated into ratios of observed muon tracks, electromagnetic and hadronic showers, and tau unique events [33]. By measuring the ratios of these event types observed in IceCube, the presence of interactions beyond the SM may be tested.
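As a rough cross-check of the ∼500 m tau path length quoted above, here is a minimal sketch using PDG values for the tau mass and lifetime:

```python
# Mean tau decay length: L = gamma * c * tau_lifetime = (E / m_tau) * c * tau.
C = 2.998e8             # m s^-1, speed of light
TAU_LIFETIME = 2.9e-13  # s, PDG tau lifetime
M_TAU = 1.777           # GeV, PDG tau mass

def tau_decay_length_m(e_tau_gev):
    return (e_tau_gev / M_TAU) * C * TAU_LIFETIME

# At 10 PeV (1e7 GeV) this gives ~490 m, consistent with the text.
print(f"{tau_decay_length_m(1e7):.0f} m")
```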
Neutrinos in air shower experiments

Very high-energy cosmic neutrinos can occasionally interact with particles in the Earth's atmosphere, producing extended air showers observable in high-energy cosmic-ray experiments such as AGASA, HiRes or the next-generation Pierre Auger Observatory [12] (for a review of ultrahigh-energy air shower experiments, see e.g. [34]). Although the characteristics of neutrino-induced showers do, in principle, differ from those initiated by hadronic cosmic rays [35], significantly more hadronic showers are expected, making showers from neutrino primaries difficult to identify conclusively. Primary particles with a near-horizontal trajectory, however, provide an opportunity to distinguish neutrinos from hadronic events. In contrast to hadronic cosmic rays (which interact at the top of the atmosphere), neutrino primaries have considerably smaller cross-sections and thus interact with nearly equal probability throughout the atmosphere. If a shower is observed that was initiated deep inside the atmosphere, it can be associated with a neutrino (or other weakly interacting particle) primary. For there to be sufficient column depth to make this distinction (typically 3000-4000 g cm⁻² is required), only primaries within about 15° of the horizon can be considered [36,37]. The class of cosmic-ray events that can be associated with neutrino primaries are called 'deeply penetrating, quasi-horizontal showers'.

To calculate the rate of neutrino-induced deeply penetrating, quasi-horizontal showers in an air shower experiment, one must estimate the acceptance for neutrino detection. This quantity is essentially the effective target mass multiplied by the accessible solid angle. It is often given in km³ water equivalent steradians (km³ we sr), where 1 km³ water equivalent is the target mass contained in 1 km³ of water or ice. Anchordoqui et al [37] estimate the acceptance of the AGASA experiment for deeply penetrating, quasi-horizontal showers to be 0.05 km³ we sr at 10⁸ GeV, rising to 1.0 km³ we sr at 10¹⁰ GeV and above. They estimate the acceptance of Auger to be larger than that of AGASA by a factor of 20, 20 and 50 at 10⁸, 10¹⁰ and 10¹² GeV, respectively. These estimates consider showers within 15° of the horizon and with a maximum height of 15 km. For a discussion of the HiRes acceptance, see [38]. The number of neutrino events observed as deeply penetrating, quasi-horizontal showers is given by

$$ N = N_A\, T \int dE_\nu\, \frac{dN_\nu}{dE_\nu\, dt\, d\Omega}\; \sigma(E_\nu)\, A(E_\nu), \qquad (7) $$

where N_A is Avogadro's number, dN_ν/dE_ν dt dΩ is the flux of neutrinos (per unit energy, per unit time, per solid angle), σ(E_ν) is the neutrino-nucleon scattering cross-section, A(E_ν) is the acceptance of the detector (which includes the accessible solid angle) and T is the length of time observed. This expression assumes that roughly all of the energy of the neutrino goes into the produced shower. If this is not the case, as with neutral current interactions, a differential cross-section should be used, as in equation (5), and the acceptance should be written as a function of the shower energy rather than the neutrino energy. At the very high energies at which air shower experiments are most effective (0.1 EeV and higher), a reasonable and conservative flux of neutrinos to consider is the cosmogenic flux (see section 2). This flux peaks at about 0.1 EeV, but is substantial at 1 EeV and above. Inserting this flux, an experimental acceptance and the neutrino-nucleon cross-section into equation (7), we can predict the number of neutrino-induced deeply penetrating, quasi-horizontal events that would be observed in an air shower experiment.

In addition to deeply penetrating quasi-horizontal showers, it may be possible to identify showers produced by Earth-skimming tau neutrinos using the fluorescence detectors of the Auger experiment [39]. Earth-skimming, ultrahigh-energy tau neutrinos produce tau leptons in charged current interactions. Since, at ultrahigh energies, the tau decay length is comparable to its interaction length, a shower produced in the tau decay can be observed as an extended air shower if the tau is produced not too deep beneath the Earth's surface. The rates for this class of events are expected to be rather small, however, and we will not study this signature in further detail here.
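As a toy numerical illustration of the rate integral of equation (7): the E⁻² flux normalization, the constant 1 km³ we sr acceptance and the power-law cross-section below are placeholder assumptions chosen only to make the integral concrete; they are not fits to the fluxes or experiments discussed here.

```python
import numpy as np

N_A = 6.022e23  # Avogadro's number, nucleons per gram

def flux(e_gev, norm=1e-8):
    """Placeholder E^-2 flux, dN/(dE dt dOmega dA), GeV^-1 cm^-2 s^-1 sr^-1."""
    return norm / e_gev**2

def sigma_nu_n(e_gev):
    """Rough power-law stand-in for the SM neutrino-nucleon cross-section (cm^2)."""
    return 1e-33 * (e_gev / 1e9) ** 0.36

def acceptance_g_sr(e_gev):
    """Toy constant acceptance of 1 km^3 we sr, in g sr (1 km^3 w.e. = 1e15 g)."""
    return np.full_like(e_gev, 1.0e15)

T = 3.15e7                   # one year of exposure, in seconds
e = np.logspace(8, 11, 400)  # 0.1 EeV to 100 EeV, in GeV
integrand = N_A * flux(e) * sigma_nu_n(e) * acceptance_g_sr(e)
print(f"toy expectation: {T * np.trapz(integrand, e):.4f} events per year")
```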
Signatures of new physics

In high-energy cosmic neutrino experiments, new physics enters through the neutrino-nucleon scattering cross-section σ_νN(E_ν), as in equations (5) and (7). It is given by

$$ \sigma_{\nu N}(E_\nu) = \sum_i \int dx\; f_i(x, Q)\; \hat\sigma_i(\hat s = x s), \qquad (8) $$

where σ̂_i is the scattering cross-section for the neutrino-parton subprocess, which reflects the fundamental dynamics of neutrino interactions, the sum runs over all contributing partons i, and f_i are the parton distribution functions. The cross-sections are insensitive to the choice of momentum transfer Q and, if there is a TeV-scale threshold, also to uncertainties in the parton distribution functions at low x. For instance, the highest energy neutrino fluxes, which are large at E_ν ∼ 10¹⁰ GeV, probe x ∼ (1 TeV)²/10¹⁰ GeV² ≈ 10⁻⁴, within the range of validity of the parton distribution functions, which we take to be CTEQ5 [40].

There are many interesting scenarios in which neutrino-nucleon interactions would be substantially enhanced over the SM prediction at high energies. In this section, we summarize the literature on these models, describing how their signatures may be observed in high-energy cosmic neutrino experiments.

Contributions of KK gravitons

Perhaps the most significant proposal is that of TeV-scale gravity with extra spatial dimensions, in which the four-dimensional Planck scale is related to the fundamental D-dimensional Planck scale M_D by the volume of the extra dimensions, M_PL² ∼ V_n M_D^{n+2}, where V_n ∼ R^n is the volume of the extra dimensions if they are flat with a compactification scale R [4], and V_n ∼ ∫ e^{−k x_i} d^n x if the ith extra dimension is 'warped' with a curvature k [5]. An immediate implication of this scenario is a natural understanding of the largeness of the Planck scale in comparison with the electroweak scale: it can be interpreted as due to the large volume of the extra dimensions, while the fundamental D-dimensional Planck scale M_D may be low, possibly at the TeV scale. In models with large and flat extra dimensions, often called the large extra dimensions (ADD) scenario [4], the fundamental Planck scale is assumed to be of the order of 1 TeV, and a large number of Kaluza-Klein (KK) graviton states with sub-eV mass spacing 1/R become accessible. Their couplings to the SM particles at an energy E are enhanced to (ER)^n/M_PL² ∼ E^n/M_D^{n+2}, and thus potentially large effects on high-energy processes may occur [41]. High-energy neutrinos can exchange these KK gravitons with quarks or gluons in target nucleons, resulting in an enhanced cross-section. For a scale of gravity near 1 TeV, neutrinos above PeV energies begin to interact largely through the effects of new physics. In the Randall-Sundrum scenario, an anti-de Sitter dimension with a non-factorizable warped geometry is introduced [5]. Again, KK gravitons become accessible at the TeV scale [42], enhancing the neutral current neutrino interaction rate above this scale. The effect on the neutral current neutrino-nucleon cross-section in these scenarios is shown in figure 3 [43]. The solid lines represent ADD models with the quantum gravity scale taken to be 1 TeV; the three curves correspond to different choices for the unitarization scheme of the partial wave amplitudes proposed in [43], as compared with some other calculations [44]. Short-dashed lines represent the Randall-Sundrum model with a varied AdS scale (Λ = 1-3 TeV) and KK graviton masses (500 GeV-1 TeV). Dotted lines represent the prediction of the SM. In the scenarios considered here, neutrino telescopes expect to observe more neutral current events per charged current event than predicted by the SM. This effect sets in right above the threshold near M_D, providing a clear indication of new physics. Furthermore, the ratio of down-going to up-going events will be enhanced over the SM prediction, as more neutrinos are absorbed as they propagate through the Earth [43]. The behaviour of the energy spectrum due to these effects is shown in figure 4.

TeV string resonances

At energies above the compactification scale 1/R, the extra dimensions and KK effects may become observable if M_D is not too high, as discussed in the previous section. At even higher energies, near the string scale M_S, string effects dominate over gravitational effects, based on string perturbation arguments [45,46].
The string scale is related to the D-dimensional gravity scale through the string coupling, and stringy effects enter high-energy scattering amplitudes through the Veneziano form factor

$$ S(s, t) = \frac{\Gamma(1 - \alpha s)\, \Gamma(1 - \alpha t)}{\Gamma(1 - \alpha s - \alpha t)}, $$

where α = M_S⁻² is the string tension. This amplitude develops simple poles at √s = √n M_S with n = 1, 2, . . ., leading to resonances in the matrix elements. The physical effects of these resonances have been explored [46,47], including their signatures in cosmic neutrino experiments [48,49]. We present the neutrino-nucleon cross-sections due to Veneziano amplitude resonances in figure 5 [49]. The solid curve shows the prediction for the SM neutral current process, while the dashed and dot-dashed curves represent string excitations with and without the gluon contribution, respectively. We see that neutrino-gluon scattering can be the dominant process, 5-10 times larger than the neutrino-quark-induced processes. It is interesting to note that, even for processes that vanish in the SM at tree level, there can still be substantial stringy contributions to their amplitudes at high energies. Generally speaking, the energy dependence of the cross-sections for string resonances is weaker than that for KK states. The event rates expected in IceCube and Auger have been calculated for these models [48,49]. With SM interactions alone, only about 0.2 (0.7) shower events per year are expected in such an experiment from a cosmogenic (Waxman-Bahcall) neutrino flux. These rates can be enhanced by a factor of 5-6 due to string excitations with M_S = 1 TeV, or by a factor of about 1.5-1.7 with M_S = 2 TeV.

Microscopic black hole production

At trans-Planckian energies, E ≫ M_D, it has been argued that black hole production will be the leading process [50] (for a recent review of black holes in theories with extra dimensions, see e.g. [51]). This is because the energy dependence of the black hole production cross-section grows faster than that of sub-Planckian processes, and the number of non-perturbative states grows faster than the number of perturbative string states. The cross-section for black hole production can be naively estimated from the geometric description,

$$ \sigma(E_{\rm CM}) \approx \pi\, r_{\rm sch}^2(E_{\rm CM}), $$

where r_sch(E_CM) is the Schwarzschild radius of a black hole formed with a mass equal to the centre-of-mass energy of the collision. In 4 + n dimensions, the Schwarzschild radius of a black hole of mass M_BH is given by

$$ r_{\rm sch} = \frac{1}{M_D} \left[ \frac{M_{\rm BH}}{M_D}\, \frac{2^n \pi^{(n-3)/2}\, \Gamma\!\left(\frac{n+3}{2}\right)}{n+2} \right]^{\frac{1}{n+1}}. $$

Although some studies support the validity of the geometric cross-section argument [52], it is possible that a substantial fraction of the total energy will be radiated away in the form of gravitational waves, reducing the mass of any black hole that may be formed and hence the corresponding cross-section [53]. Although the lowest possible black hole mass is approximately the fundamental Planck scale M_D, the effective centre-of-mass energy should be several times larger for the semi-classical argument to be valid. To parametrize this effect, we introduce the quantity x_min = M_BH^min/M_D > 1. In addition to the ambiguity in the value of x_min, other uncertainties can arise in the estimation of the cross-section [54], which we do not consider further here. In figure 6, we show the cross-section for black hole production in neutrino-nucleon interactions for different choices of M_D and x_min. As a result of the sum over all partons and the lack of suppression from small perturbative couplings, the black hole cross-section may exceed SM interaction rates by two or more orders of magnitude.
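A minimal numerical sketch of the geometric estimate above; the choices n = 6, M_D = 1 TeV and a 10 TeV centre-of-mass energy are illustrative assumptions only, and the prefactor follows the Schwarzschild-radius formula as written here (conventions vary slightly between papers):

```python
import math

GEV2_TO_CM2 = 3.894e-28  # conversion: 1 GeV^-2 = 3.894e-28 cm^2

def r_sch(m_bh_gev, m_d_gev=1000.0, n=6):
    """4+n dimensional Schwarzschild radius in GeV^-1 (formula above)."""
    prefac = 2**n * math.pi ** ((n - 3) / 2) * math.gamma((n + 3) / 2) / (n + 2)
    return (prefac * m_bh_gev / m_d_gev) ** (1.0 / (n + 1)) / m_d_gev

def sigma_geo_cm2(e_cm_gev, m_d_gev=1000.0, n=6):
    """Geometric black hole production cross-section, pi * r_sch^2, in cm^2."""
    return math.pi * r_sch(e_cm_gev, m_d_gev, n) ** 2 * GEV2_TO_CM2

# Illustrative example: 10 TeV parton-level centre-of-mass energy.
print(f"sigma_geo ~ {sigma_geo_cm2(1e4):.1e} cm^2")
```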
The cross-sections corresponding to neutrino interaction lengths equal to the horizontal and vertical depths of the IceCube site are also indicated in figure 6 by horizontal dotted lines. We see that, for the geometric cross-section with M_D ∼ 1 TeV and neutrino energies E_ν ∼ 10⁹ GeV, where the cosmogenic flux peaks, black hole production increases the probability of conversion of down-going neutrinos without increasing the cross-section so much that vertical neutrinos would be shadowed before reaching the detector. We therefore expect significantly enhanced rates in neutrino telescopes. The energy dependence of the black hole production cross-section is stronger than that of the other processes we have discussed so far, confirming the argument that black hole production is likely to be the dominant effect of low-scale gravity at the highest energies.

Black holes decay via Hawking evaporation almost instantly (with a lifetime of the order of 10⁻²⁷ s). The Hawking radiation follows a thermal distribution with temperature T_H = (1 + n)/(4π r_sch), with an average multiplicity of radiated particles of approximately N ≅ M_BH/(2T_H). Naively, the particles are radiated in numbers proportional to their degrees of freedom, i.e. about 75% into hadronic modes, 10% into charged leptons, etc. We assume that the signals produced in black hole decays in a neutrino telescope follow these ratios [28,57]. The 10% of Hawking radiation that produces charged leptons generates equal numbers of muons, taus and electrons, which can be observed as muon tracks, tau unique events and electromagnetic showers, respectively. The 75% of Hawking radiation that goes into hadronic modes results in hadronic showers. This is in contrast to the ratios of event types predicted for SM interactions. Taking into account the degrees of freedom corresponding to each channel and the factors affecting the probability of detection (i.e. muon range, etc), the ratios of muons to taus to showers can be predicted for a particular black hole production model. For example, for a model with M_D = 1 TeV and x_min = 1, about twice as many showers as muon tracks are expected (considering a dN_ν/dE_ν ∝ E_ν⁻² flux). In contrast, the SM prediction is about 20% more muons than showers [28].
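A quick numerical sketch of the Hawking temperature and average multiplicity quoted above, using the same Schwarzschild-radius convention as in the previous snippet; the values n = 6 and M_BH = 5 TeV are illustrative assumptions:

```python
import math

def r_sch(m_bh_gev, m_d_gev=1000.0, n=6):
    prefac = 2**n * math.pi ** ((n - 3) / 2) * math.gamma((n + 3) / 2) / (n + 2)
    return (prefac * m_bh_gev / m_d_gev) ** (1.0 / (n + 1)) / m_d_gev

m_bh = 5000.0  # illustrative 5 TeV black hole, M_D = 1 TeV, n = 6
t_h = (1 + 6) / (4 * math.pi * r_sch(m_bh))  # T_H = (1+n)/(4 pi r_sch)
n_avg = m_bh / (2 * t_h)                     # <N> ~ M_BH / (2 T_H)
print(f"T_H ~ {t_h:.0f} GeV, <N> ~ {n_avg:.0f} particles")
```

For these inputs, the sketch gives a Hawking temperature of a couple of hundred GeV and an average multiplicity of order ten particles per black hole decay.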
By combining flavour ratio measurements with angular and energy distributions, large-volume neutrino telescopes such as IceCube will be capable of searching for evidence of black hole production in models with a fundamental Planck scale up to 1-2 TeV. The angular distributions of muon tracks above 500 TeV in a kilometre-scale neutrino telescope, such as IceCube, in models of black hole production are shown in figure 7. While the enhanced cross-section significantly increases the down-going event rate (cos θ_z > 0) over the SM prediction (dotted curves), the rate of up-going events is suppressed due to absorption in the Earth.

Figure 7. The angular distribution of muon tracks above 500 TeV in a kilometre-scale neutrino telescope, such as IceCube, in models of black hole production [28]. The dotted line represents the SM prediction, while the solid and dashed lines are for black hole production models with x_min = 1 and 3, respectively. All models shown have n = 1 and M_D = 1 TeV. The Waxman-Bahcall and cosmogenic neutrino fluxes were used in the left and right frames, respectively (see section 2). cos θ_z = 0 corresponds to a horizontal event, while positive and negative values correspond to down-going and up-going muons, respectively. The figure is taken from [28].

Air shower experiments, unlike neutrino telescopes, do not have the ability to observe muon tracks or to identify tau unique events. They are, however, very sensitive to EeV-scale cosmic neutrinos and are thus capable of placing valuable limits on models of black hole production. Currently, the strongest such limit comes from the AGASA air shower experiment. AGASA has reported the observation of one neutrino-like (deeply penetrating, quasi-horizontal) event, with a predicted background of 1.7 events from misidentified hadronic primaries [37]. At the 95% confidence level, this places an upper limit of 3.5 black hole events. This can be directly translated into a limit on the fundamental Planck scale, M_D. For values of x_min over the range 1-3, AGASA can place a lower limit on M_D of 1.0-1.4 TeV [37,58], a limit competitive with the strongest bounds from collider experiments [59]. Auger, with considerably higher acceptance for these events, is expected to improve this sensitivity to 3-4 TeV for n ≥ 4 [37].

p-brane production

p-branes are p-dimensional, spatially extended solutions of gravitational theories. The existence of such objects is a generic prediction of theories with extra dimensions. If the fundamental scale of gravity is of the order of 1 TeV, then it is reasonable to expect that, in addition to black holes (a spherically symmetric 0-brane), higher-dimensional states may also be generated in high-energy collisions [60,61]. The cross-section for p-brane production is argued to be geometric, similar to that for black hole production, except that it may have a lower threshold, near the quantum gravity scale. If the p-brane wraps only around the small (compact) dimensions, the cross-section for p-brane production can be comparable to, or even larger than, the cross-section for black hole production (for a recent review of p-brane production, see e.g. [62]). If the p-brane wraps around large dimensions as well, its production will be suppressed by powers of M_D/M_PL [60,61]. Typical cross-sections for p-brane production in νN collisions are presented in figure 8 [63]; these can exceed the black hole production cross-section by orders of magnitude. Unlike the standard Hawking radiation picture of black hole evaporation, the decay of p-branes is not well understood. p-branes may decay into branes of lower dimension; alternatively, they may decay directly into a combination of brane and bulk particles. Below the energy threshold for p-brane or black hole production, lighter states called 'string balls' may also be produced [64]. We do not study these objects further here, since we consider our presentation of the conservative scenario (the string resonances of section 4.2) and of the more optimistic scheme discussed in this section to be sufficiently representative.

Electroweak instanton-induced processes

SM electroweak instantons represent tunnelling transitions between topologically inequivalent vacua, leading to baryon plus lepton number (B + L) violating processes. Such processes are exponentially suppressed below the so-called 'sphaleron' energy, E_sph ∼ πM_W/α_W ∼ 8 TeV.
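As a back-of-envelope check of the quoted sphaleron scale (a minimal sketch; the value α_W ≈ 1/30 is our assumed input for the weak coupling):

```python
import math

M_W = 80.4          # GeV, W boson mass
ALPHA_W = 1.0 / 30  # weak coupling alpha_W = g^2 / (4 pi), approximate value

e_sph_gev = math.pi * M_W / ALPHA_W  # E_sph ~ pi * M_W / alpha_W
print(f"E_sph ~ {e_sph_gev / 1e3:.1f} TeV")  # ~7.6 TeV, i.e. of order 8 TeV
```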
Above this scale, however, such processes may become unsuppressed and the corresponding cross-sections can be quite large [65], potentially resulting in enhanced neutrino scattering signals in cosmic neutrino experiments. The neutrino-nucleon cross-section corresponding to electroweak instanton-induced interactions is difficult to calculate reliably. One approach to this problem is to use a perturbative method in close analogy to QCD [66]. Alternatively, the calculation has been performed using a generalized semi-classical approach [67]; in this second approach, the interactions remain suppressed up to much higher energies, about 30 times the sphaleron energy. The estimated neutrino-nucleon cross-sections corresponding to these two approaches are shown in figure 9 [68].

From the standpoint of neutrino phenomenology, it is important to note the extremely rapid increase of the neutrino-nucleon cross-section in these models. This is in contrast to the more gradual growth predicted for the cross-sections for black hole production, KK exchanges, etc. Below the energy thresholds for such interactions, the SM predictions are accurate. At energies roughly a factor of 10 higher, the cross-section becomes sufficiently large that the Earth efficiently absorbs the incoming neutrino flux, as indicated by the horizontal dotted lines (horizontal and downward depths) in figure 9. Thus, in a neutrino telescope, a sharp enhancement in a fairly narrow range of energies is predicted for these models, as depicted in figure 10. Although spectacular, the ability of planned experiments to observe such features is limited: even in the more optimistic of the models considered here, an experiment such as IceCube is expected to see only of the order of 1 event per year from instanton-induced processes [68]. Future experiments with very large volumes will be required to probe such models further.

Figure 10. The spectrum of neutrino (shower) events predicted in a neutrino telescope, including the electroweak instanton-induced interactions for the models described in the text. The dotted line represents the perturbative approach of [66], the dashed line the semi-classical approach of [67], and the solid line the SM prediction. The figure is taken from [68].

Another interesting characteristic of instanton-induced processes is the large multiplicity of final-state particles and the violation of B + L. The basic operators involving quark and lepton fields are of the form (qqqℓ)^{n_g} [65], where n_g = 3 is the number of fermion generations. It has been argued that processes involving multiple gauge bosons and Higgs bosons, such as (qqqℓ)^{n_g} W^n H^m, can be significantly enhanced [69]. A typical neutrino-induced event could thus be ν_e u → d̄d̄ + c̄c̄s̄μ⁺ + t̄t̄b̄τ⁺ + nW + mH. With both quarks and leptons of all three generations involved simultaneously in the primary production, this type of event should look quite unique. It is difficult to predict how such events would appear in the IceCube detector, however, given that the particles will be highly collimated and difficult to separate.

Signatures of SUSY

SUSY remains a leading candidate for physics beyond the SM. Although weak-scale SUSY is only weakly coupled to the SM and generally would not lead to substantially enhanced neutrino scattering cross-sections, certain charged particles produced by cosmic neutrinos may be long-lived and may provide observable signatures.
This scenario can be naturally realized when the gravitino is the stable lightest supersymmetric particle (LSP) and a charged slepton (such as the stau) is the next-to-lightest supersymmetric particle (NLSP) [70,71]. Interactions of high-energy neutrinos may produce pairs of sparticles, which rapidly decay to charged slepton NLSPs; these can decay further only into states including a gravitino. With only highly suppressed couplings allowing this decay, the NLSP stau can be sufficiently long-lived to be potentially observable in a large-volume neutrino telescope such as IceCube [70]. Similar features may appear in supersymmetric models with R-parity violation [72]. Sparticle pair production in neutrino-nucleon interactions is dominated by t-channel chargino exchange, resulting in a slepton and a squark; the squark then quickly decays into a slepton NLSP. The cross-section for this process is rather small, however, typically 2-3 orders of magnitude below the SM processes in the energy range well above the kinematic threshold, as shown in figure 11.

The key observation here is that sleptons produced in neutrino interactions travel through the Earth, losing energy via ionization and radiative processes. Owing to their much greater mass, sleptons lose far less energy than muons produced in SM charged current interactions. The 'slepton range' can extend to hundreds or thousands of kilometres, in part making up for the low production cross-section. The two sparticles produced in these interactions travel from their point of origin separated by an angle θ ≈ 2m_l̃/E_ν. Therefore, the signature of this process consists of a pair of Cerenkov tracks separated by a distance Lθ, where L is the distance between the detector and the sparticles' point of origin. For a PeV neutrino, for example, two sleptons separated by θ ≈ 10⁻³-10⁻⁴ could be produced; after travelling ∼1000 km, their tracks would be separated by ∼100-1000 m, which could potentially be resolved in a neutrino telescope. This 'double track' signature would provide a method of distinguishing sparticle tracks from ordinary muon tracks. Typical models with a stau NLSP predict only one single-track or a few double-track events per year in IceCube [70]. Larger volume detectors, e.g. extensions of IceCube, may be needed to explore this possibility further.

As a final remark on the possibility of observing supersymmetric particles in high-energy cosmic-ray interactions, neutrino experiments may be able to identify particles that are part of the cosmic-ray spectrum itself, such as in cosmic-ray models of top-down origin [73]. For instance, if a neutralino is the LSP, it will interact with nucleons in a manner that somewhat resembles a neutrino neutral current interaction. Thus, without very large fluxes of high-energy cosmic neutralinos, it would be very difficult to distinguish any such particle from neutrinos. Neutralinos can, however, have considerably smaller cross-sections with nucleons than neutrinos, allowing them to travel through the Earth at energies at which neutrinos would be efficiently absorbed [73]. Ultrahigh-energy, neutralino-induced showers therefore provide a low-background signal in the direction of the Earth. Future space-based air shower experiments, such as OWL or EUSO, may be sensitive to this signature in some scenarios [73,74].
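As a numerical illustration of the double-track separation estimate given above (a minimal sketch; the 150 GeV stau mass and the 1000 km path length are assumed example values):

```python
# Opening angle theta ~ 2 m_slepton / E_nu; track separation ~ L * theta.
m_stau_gev = 150.0  # assumed NLSP stau mass
e_nu_gev = 1e6      # 1 PeV incident neutrino
path_km = 1000.0    # assumed distance travelled before reaching the detector

theta = 2 * m_stau_gev / e_nu_gev     # ~3e-4 rad
separation_m = path_km * 1e3 * theta  # ~300 m
print(f"theta ~ {theta:.1e} rad, separation ~ {separation_m:.0f} m")
```

This falls in the ∼100-1000 m window quoted above, comfortably larger than the spacing of optical modules in a kilometre-scale detector.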
Conclusions and summary

In this paper, we have reviewed the ability of neutrino telescopes and air shower experiments to study particle physics with high-energy cosmic neutrinos. The main advantage of such experiments over traditional collider experiments is the higher energy at which interactions can be studied. Several of the experiments described in this paper could expect multiple events per year at energies above 1 EeV, corresponding to about 40 TeV in the centre-of-mass frame of a neutrino-nucleon collision (E_CM ≈ √(2 m_N E_ν)). Clearly, collider experiments have advantages over astroparticle techniques as well: most notably, the high luminosities and well-controlled conditions of collider experiments are luxuries astronomers do not often enjoy. Together, these advantages and disadvantages determine the areas of particle physics in which neutrino astronomy can be most useful. In particular, models in which significant deviations from the SM occur at energies beyond the reach of colliders can often be tested in such experiments. In this paper, we have reviewed several such scenarios, summarized as follows:

• KK gravitons in low-scale quantum gravity scenarios: sensitivity near the threshold, presumably E_CM ≈ M_D ∼ 1 TeV, or E_ν ∼ 1 PeV.
• TeV string resonances in low string-scale scenarios: near and above the string scale, presumably M_S ∼ 1 TeV, thus E_ν > 1 PeV.
• p-brane production in low-scale quantum gravity scenarios: near and above the quantum gravity scale, presumably M_D ∼ 1 TeV, thus E_ν > 1 PeV.
• Black hole production in low-scale quantum gravity scenarios: likely the dominant signature at trans-Planckian energies E_CM ≫ M_D ∼ 1 TeV, thus E_ν ≫ 1 PeV.
• Electroweak instanton-induced processes: above the sphaleron energy, E_CM > 10 TeV, thus E_ν > 100 PeV.
• SUSY with a charged slepton as a long-lived NLSP.

Although not discussed in this paper, other probes of exotic particle physics are possible using neutrino astronomy. These include searches for particle dark matter and for neutrinos associated with models of top-down cosmic-ray origin. Although astronomy, and not particle physics, is the primary objective of neutrino telescopes and air shower experiments, upcoming experiments such as IceCube, Auger, ANITA, OWL and EUSO will each study interactions at energies well beyond the reach of colliders, providing probes complementary to the traditional techniques used to study our Universe at the smallest scales.
10,593.8
2004-08-31T00:00:00.000
[ "Physics" ]
Mapping Submerged Aquatic Vegetation along the Central Vietnamese Coast Using Multi-Source Remote Sensing

Submerged aquatic vegetation (SAV) in the Khanh Hoa (Vietnam) coastal area plays an important role in coastal communities and the marine ecosystem. However, SAV distribution varies widely, in terms of depth and substrate types, making it difficult to monitor using in-situ measurement. Remote sensing can help address this issue. High spatial resolution satellites, with more bands and higher radiometric sensitivity, have been launched recently, including the Vietnamese Natural Resources, Environment, and Disaster Monitoring Satellite (VNREDSat-1) (V1) sensor from Vietnam, launched in 2013. The objective of the study described here was to establish SAV distribution maps for South-Central Vietnam, particularly in the Khanh Hoa coastal area, using Sentinel-2 (S2), Landsat-8, and V1 imagery, and then to assess any changes to SAV over the last ten years, using selected historical data. The satellite top-of-atmosphere signals were initially converted to radiance, and then corrected for atmospheric effects. This treated signal was then used to classify Khanh Hoa coastal water substrates, and these classifications were evaluated using 101 in-situ measurements, collected in 2017 and 2018. The results showed that the three satellites could provide high accuracy, with Kappa coefficients above 0.84, with V1 achieving over 0.87. Our results showed that, from 2008 to 2018, SAV acreage in Khanh Hoa was reduced by 74.2%, while gains in new areas compensated for less than half of these losses. This is the first study to show the potential for using V1 and S2 data to assess the distribution status of SAV in Vietnam, and its outcomes will contribute to the conservation of SAV beds, and to the sustainable exploitation of aquatic resources in the Khanh Hoa coastal area.

Introduction

Vietnam is a coastal country located on the western side of the Eastern Sea (Biển Ðông); it has 3260 km of coast, and a highly diverse assemblage of submerged aquatic vegetation (SAV) [1-3], which consists of two main groups: seagrasses and seaweeds. Seagrasses are flowering plants, while seaweeds are macroalgae consisting of aggregating cells [2,4,5]. SAV is usually distributed in shallow coastal waters. The objectives of this study were: (i) to evaluate satellite remote sensing imagery sources for interpreting the distribution of SAV ecosystems; (ii) to define SAV ecosystem distribution in the Khanh Hoa coastal area; and (iii) to assess spatial and temporal changes to SAV ecosystems in the Khanh Hoa coastal area, thereby providing a baseline for improved monitoring of Khanh Hoa SAV and supporting their sustainable protection.

Study Area

The study area is located in South-Central Vietnam (Figure 1). Khanh Hoa province has the longest coastline in Vietnam, approximately 385 km from the edge of Dai Lanh commune to the southern end of Cam Ranh Bay. The Khanh Hoa coast is diverse and complex, with a system of bays, islands, lagoons, and estuaries, and includes the continental shelf. Khanh Hoa has ~200 islands along its coast and five main lagoons and bays: Van Phong Bay, Nha Trang Bay, Cam Ranh Bay, Nha Phu Lagoon, and Thuy Trieu Lagoon [7]. Of these, Van Phong Bay lies in the north, while Cam Ranh Bay lies in the south; the latter has greater potential for use, as it is wider and deeper, with less sedimentation and fewer storms, than the former.
Despite its name, Nha Phu Lagoon is not really a "lagoon" like, say, Thuy Trieu Lagoon, being simply a small shallow bay, while Thuy Trieu Lagoon itself is one of the 12 typical lagoons found along the central Vietnamese coast [7].

The climate in Khanh Hoa province is dominated by the tropical monsoon and the oceanic climate, and is therefore relatively mild. There are two seasons in Khanh Hoa province: rainy and dry [29]. The rainy season is short, from about mid-September to mid-December, and often accounts for over 50% of the annual rainfall. The dry season runs from January to August, with an average of 2600 h of sunshine annually. The average annual temperature of Khanh Hoa is about 26.7 °C, and the relative humidity is about 80.5% [29]. In the dry season, the early months from January to April are cool, with temperatures of 17-25 °C, whereas May to August are hot, with temperatures reaching 34 °C (in Nha Trang) and 37-38 °C (in Cam Ranh). In the rainy season, the temperature varies from 20-27 °C (in Nha Trang) and 20-26 °C (in Cam Ranh) [29].

SAV distribution at the study sites was observed using a motorboat and by diving along transects directed from the shore seaward, until either the water column depth reached 10 m or SAV was no longer seen. Global Positioning System (GPS) positions were recorded for each survey point, and the substrate form and structure were also logged for each area. Substrates were classified as being either SAV, rock-coral, or sandy/mud bottom, as shown in Table 3. A total of 155 sample sites were established, comprising 18 deep-water sites, 47 sandy sites, 54 SAV sites, 16 mud-sand sites, and 20 rock-coral sites. The positions of the 155 sample sites were located by GPS and used as ground-truthing points for later satellite image interpretation.
Table 3 (excerpt). Mud-sandy bottom (e.g. Thuy Trieu Lagoon): areas with muddy or mixed mud and sand substrates, < 10 m deep; SAV species were not found on this bottom type. Rock-coral bottom (e.g. My Giang): areas of coral or rock, < 10 m deep; SAV was found on this bottom type, with < 5% coverage.

2.3.2. SAV Spatial Distribution and Area Change Mapping

Data collection, analysis, and processing were carried out using ENVI 5.5 and MapInfo 12.0 software. The process flow chart for mapping SAV temporal changes can be seen in Figure 3.

Geometric correction: this step was carried out after the imagery had been obtained from the suppliers, to register the satellite image coordinates [14,19]. In this study, the Universal Transverse Mercator (UTM), World Geodetic System 84 (WGS84), Zone 48 projection system was used for the Landsat and S2 imagery, while the GEOGRAPHIC-WGS84 system applied to the V1 imagery. This step was therefore needed to unify the geographic coordinates of the imagery under the UTM-WGS84-Zone 48 projection system. V1 imagery, as received from the satellite, also contained geometric distortions, and so prior to atmospheric correction it was corrected geometrically, to reduce the deviations introduced during photography and to convert it into local geographic coordinates using other reference data sources (UTM projection, WGS84 datum).

Radiometric correction: the purpose of this step was to convert the digital number of each image pixel into spectral radiance, using Equation (1) [37]:

$$ \mathrm{Rad}_\lambda = a_\lambda \times \mathrm{DN} + b_\lambda, \qquad (1) $$

where Rad_λ represents top-of-atmosphere (TOA) spectral radiance (in W/(m²·sr·µm)), DN refers to the digital number of the band to be corrected, a_λ (gain value) stands for a band-specific, multiplicative rescaling factor from the image header, and b_λ (offset/bias value) represents a band-specific, additive rescaling factor from the image header. Gain and offset/bias values were provided in the Landsat, S2 and V1 metadata files.
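A minimal sketch of the Equation (1) conversion; the gain and offset values below are placeholders, since in practice they are read from each sensor's metadata file:

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Equation (1): Rad_lambda = gain * DN + offset.
    Returns TOA spectral radiance in W / (m^2 sr um)."""
    return gain * np.asarray(dn, dtype=float) + offset

# Placeholder band-specific coefficients (real values come from the metadata).
band_dn = np.array([[4521, 4876],
                    [5012, 4630]])
print(dn_to_radiance(band_dn, gain=0.01, offset=-0.1))
```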
Atmospheric correction: the purpose of this step was to remove contributions from the atmosphere (which can include aerosols, dust, gas and air molecules [38]) from the total signal measured by the remote sensor, in order to obtain just that part of the signal referring to the sea. Using atmosphere-corrected imagery potentially improves the extraction of surface parameters and produces more accurate surface reflectance. In this study, Landsat and S2 imagery were corrected using the Fast Line-of-sight Atmospheric Analysis of Hypercubes (FLAASH) method in ENVI 5.5 [39]. For V1 imagery, the atmospheric impact was removed using the QUick Atmospheric Correction (QUAC) model, because the FLAASH method does not support V1 imagery. The FLAASH model corrects the atmosphere for data in the visible through near-infrared and shortwave infrared ranges, up to 3 µm, for super-spectral or multi-spectral imagery, calculating the atmospheric radiation transmission pattern using the most recent MODerate resolution atmospheric TRANsmission (MODTRAN) information [39]. The QUAC model is an atmospheric correction method for multi-spectral imagery applied through the visible to shortwave infrared bands (VNIR-SWIR). Unlike the FLAASH method, it determines atmospheric compensation parameters directly from information contained in the scene, without needing supporting information. QUAC performs a more approximate atmospheric correction than FLAASH, generally producing spectral reflectance within approximately ±15% of the physics-based methods [39].

Water column correction: the purpose of this step was to remove influencing factors stemming from dissolved or solid particles in the water column (including phytoplankton, colored dissolved organic matter (CDOM), and total suspended solids (TSS) [40]) which reduce light transmission from the surface to the euphotic depth. This step was based on calculating the depth invariant index (DII), which exploits the linear (logarithmic) relationship between the surface reflectance spectra of bands i and j over randomly selected sandy-bottom points at different depths. The principle underlying the DII is that when light penetrates the water, its intensity decreases exponentially as the depth increases [41]. This index allowed conversion between surface reflectance and bottom reflection; we had a total of 101 points, including 47 sandy points and 54 SAV points, from which to build the linear relationship between the reflection spectra of image band pairs. The DII was introduced by Lyzenga in 1981; it does not require measuring the reflectance at the survey points, but instead determines it from information directly on the image bands. For a band pair (i, j), the index was calculated using Equation (2) [40]:

$$ \mathrm{DII}_{ij} = \ln(L_i) - \frac{K_i}{K_j}\,\ln(L_j), \qquad (2) $$

where L_i and L_j represent the outputs from the atmospheric corrections for bands i and j respectively, and K_i/K_j denotes the ratio of the water attenuation coefficients of bands i and j. An improved formulation, using a combination of multiple image bands, was introduced by Lyzenga (2003), as shown in Equation (3) [40]. The attenuation ratio was calculated using Equation (4):

$$ \frac{K_i}{K_j} = a + \sqrt{a^2 + 1}, \qquad a = \frac{\sigma_{ii} - \sigma_{jj}}{2\,\sigma_{ij}}, \qquad (4) $$

in which σ_ii and σ_jj represent the variances of bands i and j respectively, and σ_ij stands for the covariance of bands i and j. We calculated the K_i/K_j coefficients for the Landsat, V1, and S2 bands from the spectral reflection variances, and then selected the three band pairs with the best correlation. Table 4 shows the K_i/K_j ratios of the V1, L8, and S2 band pairs. The depth invariant indexes estimated using the different K_i/K_j coefficients for V1 are presented in Figure 4.
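A minimal sketch of the attenuation-ratio and DII computations of Equations (4) and (2); the input arrays are placeholder water-leaving radiances standing in for the sandy-bottom calibration pixels at varying depth:

```python
import numpy as np

def k_ratio(ln_li, ln_lj):
    """Equation (4): K_i/K_j = a + sqrt(a^2 + 1),
    with a = (sigma_ii - sigma_jj) / (2 * sigma_ij),
    estimated over the sandy-bottom calibration pixels."""
    cov = np.cov(ln_li, ln_lj)  # 2x2 variance-covariance matrix
    a = (cov[0, 0] - cov[1, 1]) / (2 * cov[0, 1])
    return a + np.sqrt(a**2 + 1)

def depth_invariant_index(li, lj):
    """Equation (2): DII_ij = ln(L_i) - (K_i/K_j) * ln(L_j)."""
    ln_li, ln_lj = np.log(li), np.log(lj)
    return ln_li - k_ratio(ln_li, ln_lj) * ln_lj

# Placeholder radiances for one band pair over sand pixels at varying depth.
band_i = np.array([0.080, 0.060, 0.050, 0.035, 0.020])
band_j = np.array([0.100, 0.070, 0.055, 0.040, 0.022])
print(depth_invariant_index(band_i, band_j))
```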
The correlation coefficients for the reflectance spectra of bands 1 and 2, bands 1 and 3, and bands 2 and 3 were the highest, so we selected three new DII bands (DII_12, DII_13, DII_23) for further classification. The same procedure was then applied to the S2 and L8 data, and the correlation coefficients for each image channel are listed in Table 5.

Analyzing and processing images: after water column correction, the remote sensing data were processed through further steps, such as clipping, quality enhancement, and masking. Clipping allowed us to remove non-study areas covered by the imagery, focusing on just the main Khanh Hoa coast, while masking allowed us to hide land layers not involved in our work. These processing steps contributed to improving image quality in preparation for classification.

Supervised classification: the Maximum Likelihood method was used for classification, based on survey points for the different bottom types [42]. This method allocates each pixel to the most probable class, using the variance-covariance matrix, statistical indicators, and mean vector of each category, based on Bayes' theorem. This resulted in the creation of five classified layers, using Equation (5) [16,43]:

$$ g_i(x) = \ln p(W_i) - \tfrac{1}{2} \ln |S_i| - \tfrac{1}{2}\, (x - m_i)^{\mathrm T} S_i^{-1} (x - m_i), \qquad (5) $$

in which x is an arbitrary pixel vector, W_i is class i, m_i is the mean vector of class i, and S_i is the variance-covariance matrix of class i, derived from the training samples; each pixel is assigned to the class maximizing g_i(x). This discriminant function characterizes the Maximum Likelihood Classification algorithm, under the assumption that the values in each spectral band are normally distributed. The five classes are characterized in Table 3.

Assessing the accuracy of the classification: the accuracy of the classification was based on a standard confusion matrix. Accuracy was checked using four coefficients: User's accuracy (Ua), Producer's accuracy (Pa), Overall Accuracy (OA), and the Kappa coefficient (Ҡ). Kappa coefficients range from 0 to 1, and a desired value is usually > 0.7 [44,45]. Ua reflects errors of commission, in which pixels belonging to other classes are allocated to a given class; it is the ratio of correctly classified pixels in a row to the total pixels in that row (e.g. for class 1, Ua = (x_11/x_1+) × 100). Pa is the ratio of the correctly classified pixels for a class to the total pixels in that class's column (the total pixels for that class in the reference data). OA is the ratio between the total number of correct pixels and the total number of pixels in the confusion matrix, which is shown in Table 6 [44,45]. Ҡ and OA were calculated using Equations (6) and (7) respectively [29]:

$$ \text{Ҡ} = \frac{N \sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} (x_{i+} \times x_{+i})}{N^2 - \sum_{i=1}^{r} (x_{i+} \times x_{+i})}, \qquad (6) $$

$$ \mathrm{OA} = \frac{\sum_{i=1}^{r} x_{ii}}{N} \times 100, \qquad (7) $$

where N represents the total number of pixels in the confusion matrix, r stands for the number of class objects, x_ii denotes the correctly classified pixels on the diagonal of the confusion matrix, x_i+ stands for the total number of pixels in row i, and x_+i represents the total number of pixels in column i.
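A minimal sketch of the Overall Accuracy and Kappa computations of Equations (7) and (6) from a confusion matrix; the matrix values below are made up for illustration:

```python
import numpy as np

def overall_accuracy(cm):
    """Equation (7): OA = sum of diagonal pixels / total pixels."""
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Equation (6): K = (N * sum(x_ii) - sum(x_i+ * x_+i))
                         / (N^2 - sum(x_i+ * x_+i)),
    with row totals x_i+ and column totals x_+i."""
    n = cm.sum()
    chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum()
    return (n * np.trace(cm) - chance) / (n**2 - chance)

# Illustrative 3-class confusion matrix (rows: classified, cols: reference).
cm = np.array([[50, 3, 2],
               [4, 45, 6],
               [1, 5, 39]])
print(f"OA = {overall_accuracy(cm):.3f}, Kappa = {kappa(cm):.3f}")
```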
Assessment of the temporal change of SAV distribution: after validation of the classification, Landsat-5 and Landsat-8 data were used to map SAV distribution changes for the ten-year period 2008-2018 (Figure 3). MapInfo 12.0 software was used to prepare general SAV distribution mapping for the Khanh Hoa coast, at a scale of 1:50,000, and in more detail for the five study areas, at a scale of 1:25,000.

Assessing the Accuracy of Classification Results

The accuracy of the image classification depended not only on the accuracy of sample area selection, but also on the coverage and distribution of SAV. The classification accuracy results are listed in Tables 7 and 8 (the confusion matrix for each sensor is shown in Appendix A). As shown in Table 7, all satellites were found to be able to provide accurate SAV distribution estimates. The Kappa coefficients and Overall Accuracies exceeded 0.84 and 87%, respectively, with V1 showing the best results (Ҡ = 0.87), followed by L8 (Ҡ = 0.85), and then S2 (Ҡ = 0.84). Considering the Pa and Ua coefficients, sandy bottom and deep water were the most accurately identified bottom classes for all three satellites, achieving both Pa and Ua values > 88%. The next most accurate identification was for rock-coral bottom (> 79%), followed by detection of mud-sandy bottom, with SAV showing the lowest accuracies, although with both Pa and Ua still better than 77%. The reflected spectra of sandy bottom and deep water may be stronger and less easily confused than the others, explaining their higher Pa and Ua values. The reason why SAV achieved the lowest Pa and Ua was that it could be confused with rock-coral or mud-sandy substrates: algae growing on rocks or dead corals, or areas of high turbidity, can lead to similar reflectance spectra between SAV and mud or rock (Appendix A).

Spatial Distribution of SAV in Selected Sections of the Khanh Hoa Coastal Area

The SAV distribution mapping results from L8, S2, and V1 are depicted in Figure 5, with the SAV areas for each sub-section listed in Table 9. SAV was found to be mainly distributed in the center of Khanh Hoa province, typically in Nha Trang Bay, with approximately 49.6 ha, and in Nha Phu Lagoon, with 70.1 ha of SAV (using V1 data). The coastal areas to the south and north of the province also showed well-developed SAV resources, with Van Phong Bay, Cam Ranh Bay and Thuy Trieu Lagoon showing SAV beds extending over 380.18, 144.4 and 155.5 ha, respectively (using V1 data). As shown in Figure 5 and in Table 9, the SAV distribution results from the sensors differed slightly. This was particularly true for small areas, such as Nha Phu Lagoon and Nha Trang Bay.
The largest SAV distribution area, according to the V1 (2017) data, was approximately 809.8 ha, whereas the S2 (2018) and L8 (2018) data indicated maximum SAV distribution areas of 799.4 and 771.2 ha, respectively (Table 9).

Van Phong Bay: SAV here was usually distributed in shallow water, over a depth range of 0-1.5 m, and could be found over various substrates, including sand, mud-sand, or sand mixed with coral. Seaweed developed down to over 5 m on rocky bottoms or on dead coral. SAV was found to be growing strongly in the coastal areas of Van Tho and Van Thanh cities, with sparser distribution around Ninh Thuy and Ninh Hai cities and around several small islands (Bip and Lon islands); total area estimates for this sub-section ranged from 270.2 to 390.2 ha (Figure 6, Table 9). The total acreages reported by the V1 and S2 imagery differed slightly (390.2 ha vs 324.2 ha), while L8 showed a more significant disparity, with a value of 270.2 ha.

Nha Phu Lagoon: this sub-section supported the least SAV compared with the other locations in the province, with the SAV mainly distributed around islands (such as Thi, Lao, and Giua islands), and additional small areas observed in the coastal area near Ninh Van city. The total SAV acreages reported for this sub-section ranged from 62.6 to 70.1 ha (Figure 7, Table 9). Several small vegetation beds (smaller than 0.09 ha, i.e. 1 pixel of L8) were not reported in the L8 data, while they were detected by both the S2 and V1 platforms. This emphasized the importance of spatial resolution for detecting small SAV beds, the L8 spatial resolution being lower than both S2 and V1; this issue was most apparent in the SAV distribution reported around Giua, Lang, and Nua islands. The total SAV acreage and distribution reported for this sub-section by the three satellites were nevertheless very similar, ranging from 62.6 ha (S2) to 70.1 ha (L8).
Nha Trang Bay: similarly to Nha Phu Lagoon, several small vegetation beds in this sub-section (< 0.09 ha) were not reported by L8 (Table 9).

Thuy Trieu Lagoon: Rhodophyta were found to dominate this area, being distributed together with seagrasses and creating mixed seaweed-seagrass vegetation beds. The SAV was concentrated mainly in the upper (northern) reach of the lagoon, constituting dense beds; this area consists of shallow water (0-1.5 m) with mud or mud-sand substrates, around Cam Hoa, Cam Hai Dong, Cam Thanh Bac, and Cam Phuc cities (Figure 9). The total reported acreage of seagrass beds in the lagoon was approximately 155.5 ha (V1) (Table 9). The SAV acreages and distribution reported by the three satellites differed at the northern head of the lagoon, while the distribution records for other sub-section localities, including Cam Hai Dong, Cam Thanh Bac, and Cam Duc cities, showed no noticeable differences. The northern area of the lagoon exhibits high turbidity, which made it difficult to estimate SAV distribution, and was probably the cause of the varying SAV acreage and distribution estimates here (Table 9).

Cam Ranh Bay: in this sub-section, SAV distribution was restricted to the shallow coastal waters and the area around Binh Ba Island (Figure 10), with an estimated total acreage of 144.4 ha (V1 image, Table 9). In the coastal areas, SAV was mainly found near Cam Hai Dong city, while around Binh Ba Island it was concentrated in a few small beds off the northern end of the island. The SAV bed acreage estimates for this sub-region varied between the satellites, ranging from 144.4 ha (L8) to 193.9 ha (S2).
SAV distribution reports were relatively consistent between the satellites (Figure 10).

Assessment of the Temporal Changes to SAV in the Khanh Hoa Area We compared data from Landsat-5 (2008) and L8 (2018) to study variations in SAV distribution and acreage, with the results listed in Table 10 and illustrated in Figure 11. The total SAV area along the Khanh Hoa coast in 2008 was approximately 1307 ha, which had decreased to 771.2 ha by 2018 (Table 10). Van Phong Bay lost the most SAV, with a decline of 338.5 ha, at an average annual loss rate of 33.9 ha. Further losses occurred in the Van Ninh coastal area and at My Giang (Van Phong Bay), and in Thuy Trieu and Cam Ranh bays, with losses of 225.1 ha and 182.6 ha, respectively. In Nha Phu Lagoon and Nha Trang Bay, the area of SAV lost was 5-8 times greater than the area gained. The average annual SAV acreage lost across the study area was approximately 97 ha, and in general the losses tended to occur in the shallows and along shorelines in areas such as the Van Ninh district (Van Phong), Nha Trang city, Cam Ranh city, and My Giang (Van Phong Bay).

Assessing the Accuracy The V1 data had the highest Kappa coefficient and accuracy for SAV area classification (Ҡ = 0.87; OA = 89.40%), while the lowest results were for S2 (Ҡ = 0.84; OA = 87.21%). On the other hand, when the SAV Pa and Ua results for the three sensors were reviewed, the S2 imagery performed better, with the highest values (Pa > 92%; Ua > 79%), while V1 exhibited the lowest accuracy (Pa > 79%; Ua > 77%). In general, the accuracy achieved with all three datasets was relatively high when the water column correction method was applied. This method can be applied to several coastal areas in central Vietnam with similar climatic and hydrological conditions, such as Quang Nam, Quang Ngai, Binh Dinh, Phu Yen, Khanh Hoa, Ninh Thuan, and Binh Thuan. Several studies have established SAV coverage with a high degree of accuracy. These include Yang et al. (2006), who mapped Xincun Bay, China, using QuickBird and China-Brazil Earth Resources Satellite (CBERS) satellite data, achieving 85% overall accuracy [46]; Ha (2010), who classified ALOS AVNIR-2 imagery, achieving an accuracy of 81.8% in Lap An Lagoon [25]; and Manuputty et al. (2017), who developed a status map in Kotok, Indonesia, with an accuracy of 84%, using WorldView-2 imagery [22]. Compared with these works, the present study achieved a higher level of accuracy by using data from three different satellites.
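As a rough sketch of how the accuracy metrics reported above relate to a confusion matrix of reference points versus classified pixels, the following Python function computes overall accuracy (OA), the Kappa coefficient, and per-class producer's (Pa) and user's (Ua) accuracies. The example matrix is hypothetical, not the study's validation data.

```python
import numpy as np

def accuracy_metrics(cm):
    """OA, Cohen's kappa, and per-class producer's/user's accuracy from a
    confusion matrix (rows = reference classes, columns = classified)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                        # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    producers = np.diag(cm) / cm.sum(axis=1)     # Pa: omission-error side
    users = np.diag(cm) / cm.sum(axis=0)         # Ua: commission-error side
    return oa, kappa, producers, users

# Hypothetical 2-class example (SAV vs non-SAV reference points)
cm = [[92, 8],
      [10, 90]]
oa, k, pa, ua = accuracy_metrics(cm)
print(f"OA = {100 * oa:.2f}%, kappa = {k:.2f}, "
      f"Pa(SAV) = {100 * pa[0]:.1f}%, Ua(SAV) = {100 * ua[0]:.1f}%")
```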
It is also quite likely that our use of the water column correction method (Lyzenga, 1981) contributed to the accuracy of our results, and it appears appropriate to use this correction when mapping SAV in our region. Environmental factors such as turbidity, depth, and waves influence the outputs of satellite data processing and classification [22,47,48]. High turbidity is an important factor, as the intensity of light decreases very quickly with depth, making it harder to detect SAV. Khanh Hoa coastal waters are affected by high-turbidity continental water streams and, based on the Jerlov (1964) seawater clarity assessment scale [12], the coastal waters in Khanh Hoa province can be divided into two groups: medium turbidity (including Van Phong, Nha Trang, and Cam Ranh bays) and high turbidity (including Nha Phu and Thuy Trieu lagoons). Nha Phu Lagoon showed higher turbidity, making it harder to estimate SAV presence and distribution there than in the other areas. This caused our Kappa coefficients and accuracy levels to be lower than those reported in several other SAV distribution studies around islands, including Hoang et al. (2015), with a classification accuracy of 98.3% and a Kappa coefficient of 0.96 at Rottnest Island, Western Australia [21], and Nguyen (2015) at Ly Son Island, Quang Ngai province, Vietnam, with a classification accuracy of 94% and a Kappa coefficient of 0.93 [24]. In this study, the Kappa coefficient was higher for L8 (Ҡ = 0.85) than for S2 (Ҡ = 0.84), showing that spatial resolution is not always the answer for improving classification accuracy. It was of interest to note, however, that Konstantinos et al. (2016), comparing S2 and L8 data around Lesvos Island, Greece when estimating SAV status, showed that using S2 significantly improved the SAV classification accuracy compared to L8, with a Kappa coefficient of 0.9 [27]. Those authors did not need to include any water column correction, since the waters near Lesvos Island were very clear, which facilitated SAV detection.

Factors Influencing Interpretation Processes In general, the classification outputs for bottom types established using the Lyzenga (1981) water column correction method showed high levels of accuracy, but the classification process produced some erroneous classifications, which may have originated from several factors. Firstly, the spatial resolutions of the three satellites (L8: 30 m; S2: 10 m; V1: 10 m) were not high enough to allow detection of SAV areas < 0.01 ha. Moreover, there was a one-year discrepancy between the timing of our field surveys and the selected S2 and V1 satellite imagery; this led to a few discrepancies in interpretation processing, even though it did not produce noticeable differences between the closely timed images. Environmental factors such as turbidity and depth, the spatial resolution of the remote sensing images, and the satellite sensor sources were also capable of influencing the classification results [17,24,27]. Figure 12 illustrates how satellite spatial resolution can affect SAV estimation, using two typical shapes (points and regions) for the L8, S2, and V1 imagery of Khanh Hoa province. In this figure, it is apparent that smaller pixel sizes led to better characterization of fine-scale water bodies, and so to more detailed SAV distribution results.
These results showed that spatial resolution plays an important role in the accuracy of coastal substrate characterization. For SAV mapping, misclassification usually relates to the impact of mixed substrates. The lower classification accuracy from L8 data might be due to situations where the spatial extent of the SAV was smaller than the pixel size, leading to unrecognized SAV and to confusion with rock-coral or mud-sandy bottoms hosting small SAV patches. Over-classification in S2 was caused by mixed SAV, rock, and water reflections within a pixel. Meanwhile, the Kappa coefficient and OA for L8 were higher than those for S2 (Table 3), which was thought to be due to differences between the satellites' spectral bands. These issues illustrate that spatial resolution was not the only factor influencing SAV estimation and substrate type classification in the study area. All three satellites had blue bands, which were better at detecting sunlight transmission through the water column than the other bands [12,40,49]. The S2 coastal band could penetrate even better than the blue band, but it was not used in this study because of its 60 m spatial resolution, which did not match the 10 m resolution of the other S2 bands used. Hence, the blue, green, and red bands were applied to the water column correction for two of the image sources (V1 and S2), while the coastal, blue, and green bands were used for the L8 imagery (Table 5). V1 and S2 had the same resolution as well as the same bands, and their initial prediction results showed good similarity; however, V1 achieved the best accuracy assessment results, while S2 achieved the lowest (Table 7). This result probably occurred because V1 has a spatial resolution of 10 m and its band-pair correlation coefficients were quite high (R² > 0.9), making the V1 bottom classification accuracy higher than that of L8 or S2 (Tables 7 and 8).
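For readers unfamiliar with the water column correction referred to above, the following Python sketch outlines the Lyzenga (1981) depth-invariant index (DII) for one band pair, including the band-pair R² used to judge how well a pair behaves. All array names and inputs are hypothetical, and the details of the study's actual implementation may differ.

```python
import numpy as np

def depth_invariant_index(band_i, band_j, sand_i, sand_j, deep_i, deep_j):
    """Depth-invariant bottom index for one band pair (Lyzenga, 1981).

    band_i, band_j : radiance images for the band pair (e.g. blue and green)
    sand_i, sand_j : samples of a uniform bottom (sand) at varying depths
    deep_i, deep_j : mean deep-water radiance per band
    All inputs are hypothetical arrays; the names are illustrative only.
    """
    eps = 1e-6  # guard against taking the log of non-positive values
    xi = np.log(np.maximum(sand_i - deep_i, eps))   # linearizes the exponential
    xj = np.log(np.maximum(sand_j - deep_j, eps))   # depth attenuation
    # Ratio of attenuation coefficients k_i/k_j from sample (co)variances
    a = (np.var(xi, ddof=1) - np.var(xj, ddof=1)) / (2.0 * np.cov(xi, xj)[0, 1])
    kij = a + np.sqrt(a * a + 1.0)
    # Band-pair R^2 over the sand samples indicates how reliable the pair is
    r2 = np.corrcoef(xi, xj)[0, 1] ** 2
    dii = (np.log(np.maximum(band_i - deep_i, eps))
           - kij * np.log(np.maximum(band_j - deep_j, eps)))
    return dii, kij, r2
```

A low band-pair R², as reported here for some S2 pairs, indicates that the log-transformed sand samples scatter widely, so the resulting DII separates bottom types less cleanly.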
L8 had a lower spatial resolution (30 m), potentially providing less detail than V1, and its accuracy was correspondingly lower. Although L8 had a lower spatial resolution than the other two imagery sources, its image collection time coincided with our survey time (the L8 imagery was acquired in May 2018, and the field surveys were conducted in June and October 2018), and the L8 Kappa coefficient was higher than that of S2. Furthermore, the S2 spectral band-pair correlation coefficients were lower than those of the other image sources (0.56 < R² < 0.93), meaning its band-pair DII index was relatively poor, leading to reduced accuracy. Apart from its advantages in substrate classification, V1 had the limitation that, unlike L8 and S2, it has no coastal band, and so could not acquire information on deep water in coastal areas. In addition, it is a small satellite with a comparatively small image size (17.5 × 17.5 km) [34,35]. While Landsat needed only two scenes, and S2 three, to cover the study area, V1 required nine images, each covering a small area, making it difficult to collate all the imagery needed to cover the Khanh Hoa coast. Moreover, due to the larger number of images, processing the V1 imagery took much longer than processing the larger L8 and S2 scenes. In this study, optical satellite imagery was analyzed to map SAV distribution and temporal change. This method is effective for mapping broad areas, but it is limited in deep waters due to light attenuation [50,51]; the scope of the study was therefore restricted to shallow waters (< 10 m deep). Several other methods, such as multibeam echosounders and side-scan sonar, can detect SAV living at depths > 10 m, mapping SAV over a large depth range of 0 to 25 m [50,52]. Besides this drawback, optical data also depend on the weather, clouds, rain, season, and time of day [51-53], which makes it difficult to collect satellite data for detecting seasonal changes in subjects such as SAV. Because of this limitation, the study was unable to collect the three data sources in the same months of a single year. At present, there are very few studies on the seasonal variation or growth of SAV in Vietnam in general, and in Khanh Hoa in particular. May-Lin et al. (2013), studying the seasonal growth of Sargassum species at Port Dickson, Malaysia, showed two periods of high monthly mean growth rates (January-February and June-July), corresponding to phases of increasing and stabilizing Sargassum biomass [53]. The tropical rainforest climate of Malaysia is similar to the climate of Vietnam, leading to similarities in the growth and coverage characteristics of SAV between the two countries. Another study, by Pham (2006), on the variation of seagrass populations along the Khanh Hoa coast, showed that the density, above-ground biomass, leaf growth rate, and leaf production of seagrass beds were generally high in the dry season and low in the rainy season. In the period May-August, seagrass along the Khanh Hoa coast is in the biomass stabilization phase, which includes the late-growth and reproduction stages [54].
Therefore, the growth and coverage characteristics of SAV can be considered similar, with no significant variation during the dry season, so SAV distribution can be reliably detected in this period. In addition to environmental factors and satellite sensor sources, the interpretation of SAV also depends on the classification method [55,56]. In this study, the maximum likelihood classifier (MLC) algorithm was applied for substrate type classification. However, when the boundaries of SAV and other benthic habitats are not well defined, the linear discrimination functions of the MLC may not work [55]. In addition, the MLC method requires large amounts of training samples and assumes equal covariance, which may result in misclassification between SAV and other benthic habitats [55,57]. Recent advances in machine learning (ML) can address these limitations and encourage new approaches for SAV mapping over various time scales. Comparing the traditional MLC with ML methods, including random forests, rotation forests, and canonical correlation forests, for SAV mapping using Sentinel-2, Nam et al. (2020) revealed that the ML methods outperformed the MLC [55]. Similarly, Pham et al. (2019) reviewed the limitations of the MLC method and suggested ML techniques for mapping coastal vegetation [57]. Although ML techniques have several benefits, their application to SAV mapping is still in its infancy [55,57]; the development of a variety of ML algorithms for SAV mapping from multi-source remote sensing imagery should therefore be encouraged in future studies. The interpretation processes are affected by a number of factors, so this study has some limitations. Nevertheless, it can be considered a foundational study, and the first to use multi-source remote sensing data to assess SAV distribution in the coastal areas of Khanh Hoa province and central Vietnam.

Temporal Changes to SAV Distribution Other studies have quantified SAV distribution elsewhere along the Vietnamese coast, and comparisons show that our study area had less SAV than the Quang Ninh coastal area (1450 ha), but more than both the Hai Phong (490 ha) and Quang Binh coastal areas (350 ha) (Cao, 2014). Based on these results, SAV distribution in the Khanh Hoa coastal zone can now be recognized as one of the most extensive along the Vietnamese coast. In Cam Ranh Bay, the SAV distribution area estimate found in this study was lower than that of Chen et al. (2015), who estimated approximately 195.3 ha between Cam Hai Dong and Cam Phuc cities for 2015 (Table 11). They also reported that an SAV decrease over the period 1996 to 2015 was caused by the encroachment of aquaculture-based activities [26]. This encroachment continued between 2015 and 2019, explaining why our SAV estimate for this location was lower still than theirs from 2015. In comparison with the results of two earlier studies (Nguyen et al., 2012, 2014) [9], our results for the same areas were much lower (Table 11).

Temporal Changes in SAV Extent After assessing the overall accuracy and Kappa coefficients, all three images had relatively high-quality outputs, making it possible to establish SAV distribution status in the study area. Although V1 data gave the best results for substrate classification, the satellite was only launched in 2013, so earlier data were not available [34,35]. Selecting Landsat therefore gave us the advantage of not only high levels of accuracy but also a longer data record than was available from either S2 or V1.
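Returning to the classifier comparison discussed above, the sketch below contrasts a Gaussian maximum likelihood classifier (equivalent to quadratic discriminant analysis) with a random forest on synthetic per-pixel features. It illustrates the MLC-versus-ML comparison in principle only, using hypothetical data rather than the study's training samples.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 3))                        # stand-in per-pixel features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.3).astype(int)  # synthetic SAV / non-SAV labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [
    ("MLC (QDA)", QuadraticDiscriminantAnalysis()),
    ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]:
    clf.fit(X_tr, y_tr)
    kappa = cohen_kappa_score(y_te, clf.predict(X_te))
    print(f"{name}: kappa = {kappa:.3f}")
```

On such non-linearly separable data the random forest typically achieves a higher Kappa, which is consistent with the reported findings of Nam et al. (2020).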
According to Pham (2006) and hydrological-coastal environment data (2003) for Khanh Hoa province in particular and central Vietnam in general, the growth and coverage characteristics of SAV do not change significantly during the dry season; from May to August especially, SAV along the Khanh Hoa coast is in its increasing and stabilizing biomass phases [54,56]. This study therefore selected the best quality images from the dry months for the interpretation of SAV temporal changes. In 2008, several government decisions approving the overall planning for the socio-economic development of central Vietnam up to 2020 focused on economic development of coastal infrastructure, comprising the seaport system, the Van Phong international entrepot port, and tourism development, including water sports and coastal landscape tourism [58]. This allowed us to select historical Landsat-5 imagery from 2008 to assess the extent of SAV changes over the last 10 years. Khanh Hoa coastal waters lost approximately 969.3 ha of SAV over the 10-year period 2008-2018, an average annual loss of ~97 ha. Generally, the losses occurred in shallow water near shorelines, which can be directly impacted by human activities [8,15,26,27]. Our findings accord with those of several previous studies, including Nguyen et al. (2013), who studied SAV degradation in Van Phong Bay. The identified causes included damage by locals, who continually dug into the SAV areas with hoes looking for oysters and mussels, and the developing economic sector, with port and ship repair infrastructure such as the Hyundai Vinashin factory having negative effects on the adjacent SAV beds [9,10]. Similarly, Chen et al. reasoned that SAV acreage changes in Cam Ranh Bay were caused by infrastructure and aquaculture developments there [26]. Nguyen et al. (2014) noted that degradation of SAV beds in Thuy Trieu Lagoon was caused by locals digging for shellfish and by the encroachment of tourism infrastructure [9]. As an example, golf course construction at the Vinpearl Land resort brought about the loss of 18 ha of seagrass from the area [9,15]. In general, SAV acreage has been reduced by various human activities, including overexploitation of marine resources, shellfish collection, shrimp pond construction, tourism, and seaport development, leading to a remarkable reduction in regional SAV [10,15,26].

Conclusions In this study, the authors evaluated the performance of the V1 sensor for detecting SAV along the coast of Khanh Hoa province, Vietnam, and compared its results with those achieved using the S2 and L8 satellite sensors. This has been the first study to apply V1 data to establishing SAV distribution status, and the results showed that all three satellite sensors achieved relatively high levels of accuracy (Ҡ > 0.83) and could be used to prepare SAV distribution maps. The L8 Kappa coefficient and overall accuracy reached 0.85 and 88.27%, respectively, better than those achieved by the S2 sensor (Ҡ = 0.84; OA = 87.21%), while the V1 sensor achieved the highest values (Ҡ = 0.87; OA = 89.40%). The SAV bed distribution results from the three satellites showed differences centered on several small areas, such as Nua, Lao, and Rua islands.
Regarding the SAV acreage estimates for the study area, the estimates were broadly similar, ranging from 771.2 to 809.8 ha, with Van Phong Bay hosting the largest acreage, at ~390.2 ha (V1). Over the period 2008-2018, SAV declined significantly across the study area, with approximately 74.2% of its area lost and replacement areas limited to less than half of this. SAV almost disappeared from many shallow areas and locations close to the shore, with the overall SAV area in the study area decreasing at an annual average rate of ~97 ha. There are currently no other studies using V1 data for coastal resource monitoring and management; this is the first study to apply V1 data to SAV detection in coastal waters, with the results exhibiting high classification accuracy and demonstrating its potential in this field. To enhance the applicability of remote sensing technology in Vietnam, it will be important to continue studying and applying V1 data to map SAV distribution around islands and in coastal areas. It would also be beneficial to examine the application of V1 to the determination of SAV species composition, coverage, and dried biomass, thus establishing an even better understanding of the application of satellite-sourced data to SAV studies. This work has identified an important shortcoming of V1 imagery, in that the area captured in each scene is relatively small, making the development of SAV distribution maps for large coastal areas complex and time-consuming. It would therefore be beneficial to combine V1 data with that of other satellites, such as WorldView-2, BKA, THAICHOTE, SSOT, ALSAT-2, and so on, in an effort to overcome this limitation.
11,488.4
2020-06-16T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Spaceflight Changes the Production and Bioactivity of Secondary Metabolites in Beauveria bassiana Studies on microorganism responses to spaceflight date back to 1960. However, nothing conclusive is known concerning the effects of spaceflight on the virulence and environmental tolerance of entomopathogenic fungi; thus, this area of research remains open to further exploration. In this study, the entomopathogenic fungus Beauveria bassiana (strain SB010) was exposed to spaceflight (on the ChangZheng 5 space shuttle during 5 May 2020 to 8 May 2020) as a part of the Key Research and Development Program of Guangdong Province, China, in collaboration with the China Space Program. The study revealed significant differences between the secondary metabolite profiles of the wild isolate (SB010) and the spaceflight-exposed isolates (BHT021, BHT030, BHT098) of B. bassiana. Some of the secondary metabolites/toxins, including enniatin A2, brevianamide F, macrosporin, aphidicolin, and diacetoxyscirpenol, were only produced by the spaceflight-exposed isolates (BHT021, BHT030). The study revealed increased insecticidal activity of crude protein extracts of the B. bassiana spaceflight mutants (BHT021 and BHT030) against Megalurothrips usitatus 5 days post application when compared with crude protein extracts of the wild isolate (SB010). The data obtained support the idea of using space mutation as a tool for the development/screening of fungal strains producing higher quantities of secondary metabolites, ultimately leading to increased toxicity/virulence against the target insect host.

Introduction An increase in space exploration has resulted in an increase in studies on understanding the changes in the physiology of living organisms under spaceflight conditions [1]. Spaceflight conditions include long-term exposure to microgravity, radiation, and isolation [2]. The higher costs, limited number of launches, and complexity of experimental design in space habitats are the main difficulties in performing studies under true spaceflight conditions, and consequently, many studies have also been performed under ground-based simulated microgravity [3]. Different investigations have explored the effects of space conditions on plants, animals, and microorganisms. However, microorganisms can be considered strong candidates for studying responses to variations in environmental conditions because of their rapid life cycle, easy handling, and stability [4,5]. Explorations of the influence of spaceflight on the growth and metabolism of microorganisms have several significant implications. Firstly, microorganisms can impact (positively or negatively) human, animal, and plant life [6]. Secondly, they are known for producing secondary metabolites, which are used as medicines, biopesticides, plant growth-promoting agents, etc. [7,8]. Several studies have explained the relationship between the changed environmental conditions during spaceflight and variations in the morphological as well as physiological characteristics of microorganisms, including germination, virulence, host-pathogen relationships, secondary metabolite production, and gene expression [2,5,9]. Metabolism is the sum of all reactions in a living cell required for maintenance, development, and division.
Microbial metabolism comprises primary metabolites (the intracellular molecules that enable growth and proliferation) and secondary metabolites (predominantly extracellular molecules that facilitate a microbe's interaction and adaptation with its environment) [10,11]. Secondary metabolites produced by microorganisms are predominantly low-molecular-weight extracellular compounds [12]. They are usually produced in the late-exponential and stationary phases and are not directly associated with the growth, development, or division of microorganisms [10]. These specialized products are most notable for their use in healthcare settings as antimicrobial, antiparasitic, and pest-control agents [13-17]. Many of the intermediates in primary metabolism are precursors of secondary metabolites, and cells have evolved complex molecular switches linking primary and secondary metabolic pathways. These include high expression of the secondary metabolism genes at specific times in the cell cycle and control of the flow of primary metabolites (carbon and nitrogen) through different pathways by feedback regulation [12,13,18]. Thus far, studies on secondary metabolism have focused on only a few microorganisms (mainly Streptomycetes, Escherichia coli, and Bacillus) and are mostly limited to one or a few metabolites per study. These studies have suggested altered secondary metabolite production levels, but the specific responses have been unique to each species. Furthermore, previous studies have been either limited to already-known metabolites or have focused on microorganisms that are already well-known metabolite producers. However, space vehicles hold diverse species whose behaviors are unstudied and whose responses under microgravity could be beyond prediction. Additionally, understanding microbes at a global metabolomics level could provide more comprehensive knowledge about the overall responses exhibited under microgravity. Filamentous fungi are a major group of microorganisms critical to the production of different commercial enzymes, biopesticides, and organic compounds [19,20]. Several species of filamentous fungi belonging to the phylum Ascomycota (subkingdom Dikarya) are known for their pathogenicity against insects [21]. Fungi belonging to the genus Beauveria (Hypocreales: Cordycipitaceae) are among the most common insect-pathogenic fungal species. Beauveria species are known to cause widespread epizootics in insect populations because of their saprophytic behavior [22]. Studies regarding the responses of microorganisms to spaceflight date back to 1960, but the responses to microgravity and its analogs have been investigated in only a few microorganisms (bacteria as well as fungi), showing plausible but conflicting results for cellular growth rates and secondary metabolism under spaceflight and simulated microgravity experiments [23-26]. To date, the limited number of concrete studies on the influence/utilization of spaceflight on the production of secondary metabolites by entomopathogenic fungi leaves this area of research open for detailed investigation. In this study, the entomopathogenic fungus Beauveria bassiana (SB010) was sent to the Tiangong space station for exposure to spaceflight conditions on the ChangZheng 5 space shuttle during 5 May 2020 to 8 May 2020, as a part of the Key Research and Development Program of Guangdong Province, China, in collaboration with the China Space Program.
The aims of this study were to (i) extract and characterize the changes in the mycelial protein extract/secondary metabolite profiles of different B. bassiana spaceflight mutants and (ii) undertake toxicity assays of the mycelial extracts/secondary metabolites of B. bassiana spaceflight mutants against Megalurothrips usitatus (Thysanoptera: Thripidae).

Results Secondary metabolite production was quantified from the ethyl acetate extracts of the wild isolate (SB010) and spaceflight mutants (BHT021, BHT030, BHT098) of B. bassiana. The total protein concentration of the ethyl acetate extract differed significantly among the four isolates (F3,8 = 41.32, p < 0.001). The highest protein content was obtained from the spaceflight mutant BHT021, with a mean value of 1.87 ± 0.035 mg/mL. For the wild isolate (SB010), the protein concentration observed was 1.26 ± 0.03 mg/mL. The lowest protein content was produced by the space mutant BHT098, with a mean value of 1.05 ± 0.05 mg/mL (Figure 1). As determined by LC-MS analysis, several differences were observed between the metabolite profiles of the four isolates (Figure 2; Supplementary file S1). A section of peaks between retention times of 8-9 min and 14.5-17 min was evident in the LC-MS profiles of the spaceflight-exposed mutants (BHT021, BHT030, and BHT098) when compared with the wild isolate (SB010). In addition, a broader pattern of peaks between retention times of 10-11 min was observed for the spaceflight mutant BHT021 when compared with the wild isolate (SB010). Analysis of the fragmentation patterns of peaks from the LC-MS profiles of the four isolates revealed the production of 43, 79, 44, and 47 secondary metabolites known for insecticidal activity by SB010, BHT021, BHT030, and BHT098, respectively (Supplementary file S1). Some of the secondary metabolites/toxins, including enniatin A2, brevianamide F, macrosporin, aphidicolin, and diacetoxyscirpenol, were only produced by the spaceflight mutants BHT021 and BHT030 (Supplementary file S1).
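As a simple illustration of how mutant-only metabolites can be flagged from per-isolate LC-MS identifications, the following Python sketch compares hypothetical peak-annotation sets. The lists are illustrative stand-ins for the data in Supplementary file S1, not the actual identifications.

```python
# Hypothetical per-isolate LC-MS identifications, standing in for the
# annotated peak lists of Supplementary file S1.
detected = {
    "SB010":  {"beauvericin", "bassianolide", "oosporein"},
    "BHT021": {"beauvericin", "bassianolide", "oosporein",
               "enniatin A2", "brevianamide F", "macrosporin"},
    "BHT030": {"beauvericin", "oosporein",
               "aphidicolin", "diacetoxyscirpenol"},
    "BHT098": {"beauvericin", "bassianolide"},
}

wild = detected["SB010"]
for isolate, metabolites in sorted(detected.items()):
    if isolate == "SB010":
        continue
    # Set difference flags compounds seen only in the spaceflight mutant
    print(isolate, "only:", sorted(metabolites - wild))
```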
Toxicity of Metabolites against M. usitatus The insecticidal activities of crude protein extracts from the B. bassiana wild isolate (SB010) and space mutants (BHT021, BHT030, and BHT098), applied at different concentrations against M. usitatus adults, differed significantly among the isolates and their concentrations (F15,48 = 23.81; p = 0.0034). The different concentrations of the crude protein extracts of the B. bassiana spaceflight mutants BHT021 and BHT030 had higher efficacy against M. usitatus adults 5 days post application than the wild isolate (SB010) (Table 1). The observed mortality (%) of M. usitatus adults following treatment with different concentrations of the crude protein extract of the spaceflight mutant BHT098 was lower than the mortality values observed for the crude protein extract of the wild isolate (SB010). Differences between means (±SE) followed by different letters are significant (Tukey's test, p < 0.05).

Transmission Electron Microscopic Examination of the M. usitatus Midgut following Treatment with Mycelial Extracts from the B. bassiana Wild Isolate and Space Mutants The ultrastructural changes in the fat body and somatic cells of the treated groups were observed by transmission electron microscopy after 72 h of toxin feeding and compared with the control. The microvilli of the midgut cells in the control were abundant and neatly arranged, with a fenestrated morphology; after 72 h of toxin feeding, the midgut microvilli of adult thrips were swollen, shortened and thinned, vacuolated, and partially dislodged (Figure 5). Compared with the control, the microvilli in the lumen of the midgut cells of M. usitatus treated with mycelial extracts of the space mutants (BHT021 and BHT030) were sparse, disorganized, and vacuolated, while some were even completely lost. The nuclei of the control group were flat, but after treatment with mycelial extracts of the space mutants (BHT021 and BHT030), the fat body cells of M. usitatus adults were severely damaged, with nuclei appearing expanded, folded, and detached, and the lipid droplet membrane structure becoming transparent and vacuolated (Figure 5).

Discussion Studies investigating the influence of spaceflight on secondary metabolite production by microorganisms are of particular interest, as the biosynthesis and concentration of microbial secondary metabolites are sensitive to extracellular environmental cues (nutrient availability, temperature, and osmotic stress) [27,28]. The results showed significant differences in the total yield and secondary metabolite profiles between the B. bassiana spaceflight mutants and the wild isolate. These findings are consistent with Lam et al. [29], who observed increased production of actinomycin D by Streptomyces plicatus WC56452 after spaceflight on the U.S. Space Shuttle mission STS-80. Similarly, Luo et al. [30] observed 13-18% higher production of nikkomycins by Streptomyces ansochromogenus during 15 days of spaceflight.
In the latter study, the nikkomycin-producing strains of Streptomyces ansochromogenus were carried onboard a satellite for 15 days to understand their biological response, and nikkomycin production increased in nearly all strains by 13-18%, with increases specifically in nikkomycins X and Z [2,30]. Similarly, a study on Cupriavidus metallidurans under simulated microgravity showed increased production of the polyester polymer poly-b-hydroxybutyrate after 24 h, but not after 48 h, compared with ground controls [7]. All these results suggest that the effects of microgravity on secondary metabolism may be specific to the strain, growth condition, pathway utilized, or time course analyzed. Based on previously reported data, the results of this study, and the complexity of the regulatory mechanisms of secondary metabolite production, the observed changes in the secondary metabolite profiles of B. bassiana in response to spaceflight exposure can be specific to the strain, the growth medium, and the secondary metabolite production pathway, depicting an inconsistent trend. These changes are induced by the indirect physical influences of spaceflight (such as variations in fluid dynamics or the extracellular transport of metabolites) [2]. Furthermore, different extracellular environmental cues can induce or enhance secondary metabolism in different microorganisms, with the extracellular stresses transferred to downstream responsive genes by a cascade of complex signal transduction steps [29]. Therefore, although all of the B. bassiana spaceflight mutants went through the same sort of extracellular stress, the amount and number of secondary metabolites produced by each mutant differed. In short, the changes in secondary metabolite production in B. bassiana following spaceflight exposure were induced by variations in the microenvironments around the fungal cells [2]. However, space vehicles hold diverse species whose behaviors are unstudied and whose responses under microgravity may be beyond prediction. Additionally, understanding microbes at a global metabolomics level could provide more comprehensive knowledge about the overall responses exhibited under microgravity, since entomopathogenic fungi offer a wealth of potential for the discovery of new and important microbial products for pest control. Broadening the range of fungal species studied and understanding the altered metabolite levels under microgravity could offer unique advantages for the biopesticide industry. There remains much to discover about the nature of diverse secondary metabolism in the stressful environment of spaceflight. The changes in environmental factors under microgravity, such as temperature, oxygen availability, and diffusion limitations, can provide conditions that can be harnessed for engineered microorganisms to generate useful metabolites. Therefore, understanding the specific cause-and-effect mechanisms of fungal responses to microgravity at the molecular level could provide ground-breaking discoveries for space applications and biopesticide development. However, in order to investigate microbial responses to microgravity over long periods of time, technologies such as low-shear-modeled microgravity (LSMMG) have been developed to mimic real microgravity [31], and these technologies can be used more often (to mimic microgravity at ground level) to induce higher production of secondary metabolites by insect-pathogenic fungi.
The examination of the biological activities of fungal crude protein extracts from the B. bassiana wild isolate SB010 and space mutants (BHT021, BHT030, and BHT098), applied at different concentrations against M. usitatus adults, showed significantly different rates of insect mortality. The crude protein extracts of the B. bassiana space mutants BHT021 and BHT030 (used at 100 µg/mL) induced higher mortality of M. usitatus adults than that caused by the B. bassiana wild isolate SB010. On the other hand, the M. usitatus mortality observed in response to treatment with crude protein extracts of the B. bassiana space mutant BHT098 was lower than the mortality caused by the wild isolate SB010. Our results are similar to previously reported studies on changes to microbial pathogens (Bacillus subtilis, Escherichia coli, Pseudomonas aeruginosa, Staphylococcus aureus) induced by spaceflight [29]. Transmission electron microscopy of the midgut of treated M. usitatus adults showed nucleus enlargement, folding of the nuclear membrane, and detachment of microvilli from the midgut cells. The secretion of secondary metabolites is important for pathogenic processes, and variations in the secondary metabolite profiles of the spaceflight mutants compared to the wild isolate may have led to variations in the biological activities of the crude protein extracts against M. usitatus. These data further suggest that increased secondary metabolism can increase the virulence of microorganisms [2].

Conclusions The results obtained from our studies provide substantial evidence that spaceflight exposure can alter secondary metabolite production and the biological activities of B. bassiana, which can serve as baseline information for studies on the effects of microgravity on insect-pathogenic fungi. However, the specific reasons or mechanisms regulating the above-mentioned changes are unclear. Therefore, further studies on the corresponding gene expression/regulation and the characterization of the micro-environments around the fungal cells should be emphasized in the near future for the identification as well as application of the secondary metabolites produced by B. bassiana.

Insect Cultures M. usitatus adults were reared on cowpea pods following the methods of Espinosa et al. [32] and Du et al. [33] for multiple generations in an artificial climate chamber (Model PYX-400Q-A, Shaoguan City Keli Instrument Co., Ltd., Ningbo, China). Freshly emerged M. usitatus adult females were used in subsequent studies. Fungal Preparations Beauveria bassiana isolate SB010 (deposited at the Guangdong Microbiological Research Centre repository under accession number GDMCC No. 60359) was used for the study. The fungal isolate was cultured on potato dextrose agar (PDA) plates for 15 d, followed by preparation of a basal conidial suspension (1 × 10⁸ conidia/mL) using the method of Ali et al. [34]. Exposure of Beauveria bassiana to Spaceflight Conditions One milliliter of B. bassiana conidial suspension (1 × 10⁶ conidia/mL) grown in PDA broth was individually loaded into polypropylene (PE) centrifuge tubes, which were then sealed with Parafilm M (Bemis, Neenah, WI, USA). Four PE centrifuge tubes were exposed to simulated microgravity in a 3D rotating experimental device (temperature: 20 °C; speed: 9 rpm) for 72 h at Shenzhou Space Biology Science and Technology Corporation, Ltd., Beijing, China. Polypropylene centrifuge tubes (4 tubes) containing B. bassiana conidial suspension were exposed to spaceflight conditions by the following method.
The PE tube samples were pooled and placed into experimental boxes. The experimental boxes were placed in the ChangZheng 5 space shuttle, and the samples were flown to space within the shuttle during 5 May 2020 to 8 May 2020. The samples stayed in high Earth orbit (altitude 3000-8000 km) for 67 h and passed through the Van Allen radiation belt (high-energy particle radiation belt) several times during the spaceflight. The sample box was retrieved from the returning space capsule and opened after 10 days, with the PE tubes stored at −20 °C until further use. Aliquots (200 µL) of the B. bassiana conidial suspension exposed to spaceflight conditions were cultured on PDA plates (15 plates each) following the method of Zhao et al. [35]. The fastest-growing colonies from the spaceflight-exposed B. bassiana conidial suspensions were selected and named BHT021, BHT030, and BHT098. The selected isolates were cultured on PDA plates for 15 d, followed by preparation of a basal conidial suspension (1 × 10⁸ conidia/mL) using the method of Ali et al. [34]. Sterilized growth medium (100 mL; containing, per liter: glucose 30 g, yeast extract 3 g, KH2PO4 0.39 g, Na2HPO4·12H2O 1.42 g, NH4NO3 0.70 g, and KCl 1.00 g) in 250 mL Erlenmeyer flasks was inoculated with 5 mL of conidial suspension (1 × 10⁸ conidia/mL) of the wild isolate (SB010) or the spaceflight mutants (BHT021, BHT030, and BHT098) of B. bassiana, followed by incubation at 150 rpm and 27 °C for 5 days. After 5 days of growth, the cultures were centrifuged in an Eppendorf 5804R centrifuge (Eppendorf, Framingham, MA, USA) at 10,000 rpm and 4 °C for 10 min, and the resultant supernatant was extracted with ethyl acetate (1:1 v/v) following Wu et al. [36]. Three individual samples were run for each treatment as biological replicates. The total protein concentration of the extracts was quantified by Bradford's method using bovine serum albumin as the standard [37]. The liquid chromatography-mass spectrometry (LC-MS) analysis of the obtained protein extracts was carried out by the method of Wu et al. [36] using an Agilent 1200 LC-MS/MS system. A detailed description of the LC-MS protocol can be found in the supplementary materials (Supplementary file 2). Fourier-transform infrared spectroscopy analysis was performed using an MIR8035 FTIR spectrometer (Thermo Fisher Scientific, Germany). All measurements were made at a resolution of 4 cm⁻¹ over a frequency range of 400 to 4000 cm⁻¹. The liquid sample was loaded directly, and the spectra were recorded at room temperature. Nuclear magnetic resonance (NMR) spectroscopy was performed using a Bruker Avance III-HD 600 NMR spectrometer (Bruker, Karlsruhe, Germany), following the method of Wu et al. [36]. The concentration-mortality response of crude protein extracts of the wild isolate (SB010) and spaceflight mutants (BHT021, BHT030, and BHT098) of B. bassiana against M. usitatus adult females was studied following the method of Du et al. [26]. Briefly, centrifuge tubes (9 mL) and bean pods (1 cm) were individually immersed in crude protein extracts of different concentrations (100, 75, 50, 25, and 12.5 µg/mL), followed by drying under sterile conditions. Centrifuge tubes and bean pods immersed in ddH2O with 0.05% Tween-80 served as the control. Adult females of M. usitatus (100 individuals) were inoculated onto treated bean pods with a camel hair brush, and the bean pods were placed in a treated centrifuge tube.
Each centrifuge tube was sealed with a cotton plug to prevent the thrips from escaping and was kept at 26 ± 1 °C, 70 ± 5% R.H., and a 16:8 h L:D photoperiod. The insects were observed daily for 5 days to record mortality data, as outlined by Du et al. [33]. The treatments were repeated three times with fresh batches of insects. Changes in the appearance of the infected M. usitatus midgut were monitored at 5 days post treatment under a JEM1011 transmission electron microscope (Nikon Co. Ltd., Tokyo, Japan), following Du et al. [26]. The treated M. usitatus adults were sampled at 5 d post treatment and dissected under a stereo microscope (Stemi 508, Carl Zeiss, Jena, Germany) to obtain midgut samples. The samples were fixed overnight in 2.5% glutaraldehyde + 2% paraformaldehyde solution at 4 °C, followed by rinsing with PBS buffer (0.1 M). The samples were then stained overnight with 1% uranyl acetate at 4 °C, followed by dehydration with gradient concentrations of ethanol. The dehydrated tissues were embedded in silica gel blocks, and sections were cut using an automatic microwave tissue processing instrument (EM AMW, Leica Microsystems, Wetzlar, Germany) and a cryo-ultramicrotome (EM UC7/FC7, Leica Microsystems, Wetzlar, Germany).

Data Analysis Data regarding total protein concentration were subjected to one-way ANOVA, with means compared by Tukey's HSD test (p < 0.05). Mortality (%) data were arcsine transformed before further analysis and then subjected to two-way ANOVA, with significance between means also tested by Tukey's HSD test (p < 0.05). SAS 9.2 was used for all statistical analyses [38]. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The raw data supporting the conclusions will be made available by the corresponding author on request.
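A minimal sketch of the mortality analysis described in the Data Analysis paragraph above (arcsine square-root transform, two-way ANOVA, and Tukey's HSD), implemented here in Python with statsmodels rather than SAS 9.2, and run on hypothetical replicate data rather than the observed bioassay results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
rows = []
for isolate in ["SB010", "BHT021", "BHT030", "BHT098"]:
    for conc in [12.5, 25, 50, 75, 100]:
        for _ in range(3):  # three replicates per treatment (as in the study)
            rows.append((isolate, conc, rng.uniform(10, 90)))  # mortality (%)
df = pd.DataFrame(rows, columns=["isolate", "conc", "mortality"])

# Arcsine square-root transform of proportions before ANOVA
df["mort_t"] = np.arcsin(np.sqrt(df["mortality"] / 100.0))

model = smf.ols("mort_t ~ C(isolate) * C(conc)", data=df).fit()
print(anova_lm(model, typ=2))                          # two-way ANOVA table
print(pairwise_tukeyhsd(df["mort_t"], df["isolate"]))  # Tukey's HSD (p < 0.05)
```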
5,301.8
2022-08-01T00:00:00.000
[ "Biology" ]
A new species of Portanus (Hemiptera, Cicadellidae, Xestocephalinae) from Brazil Portanus Ball, 1932 comprises 45 species that occur in Brazil, including Portanus felixi sp. nov., described and illustrated herein. The genus is close to Paraportanus Carvalho & Cavichioli, 2009 and can be distinguished from it by having a transversal groove on the basal third of the subgenital plates. The new species can be distinguished from the other species of the genus by the characters of the male genitalia, especially by the pygofer with the apical process pointed, sclerotized, and dorso-ventrally directed, and by the aedeagus with an apodeme on the basal third. Adenomar N. de Carvalho, Instituto de Biodiversidade e Florestas, Universidade Federal do Oeste do Pará, Rua Vera Paz, s/n, Salé, Santarém, Pará, Brasil (adenomarc@yahoo.com.br). Portanini (Cicadellidae, Xestocephalinae) comprises two genera and 53 species, which are among the smallest cicadellids, ranging from 4.00 to 7.00 mm (Carvalho & Cavichioli, 2009). Portanus Ball, 1932 is characterized by the following character combination: crown produced; anterior margin ranging from rounded to subangulate in dorsal view, without carinae at the transition with the face; ocelli on the anterior margin, equidistant from the anterior angle of the eyes and from the coronal suture, the latter about half the length of the crown; antennae as long as the body; styles with apex broad and bifid. It can be easily distinguished from Paraportanus Carvalho & Cavichioli, 2009 by the transverse groove on the basal third of the subgenital plates.

MATERIAL AND METHODS For the analysis of the genital structures, the abdomen was removed and placed in heated 10% KOH, following Oman (1949). Softened genitalia were washed for 5 to 10 minutes in hot water and placed on a concave slide with glycerin to maintain the desired position for the illustrations. Dissected structures were kept in microvials with glycerin and pinned together with the specimen. Illustrations were made using a camera lucida attached to a stereoscopic microscope. The terminology follows Young (1968, 1977) and Blocker & Triplehorn (1985), except for the head structures, which follow Hamilton (1981). Information given within square brackets corresponds to personal observations or additional data not present on the specimen labels. Portanus felixi sp. nov. (Figs 1-6) Diagnosis. Body light brown with white spots sparsely distributed; crown and apex of forewing dark brown; pygofer produced posteriorly with a long spiniform process, acute and directed dorsoventrally; aedeagus with dorsal apodeme in basal third; shaft of aedeagus with apex broad, bearing a median tooth.
Coloration. Body light brown with translucent and opaque areas. Crown light brown with black anterior margin; anterior margin with a narrow yellow band involving the eyes, ending in a black spot at the apex of the crown; disc of the crown with a whitish band along the coronal suture and several faint yellow spots widely distributed. Pronotum light brown with two pairs of white spots on the anterior margin, followed by several straw-yellow spots of irregular shape; lateral margins white with a brown band. Forewings light brown, semi-hyaline, with a dark brown spot at the apex and on the transverse veins; claval veins orange with white spots apically. Male genitalia. Pygofer enlarged basally in lateral view, apical half slightly narrowed towards the rounded apex, bearing a long apical spiniform process, pointed, sclerotized, and directed dorsoventrally (Fig. 2). Subgenital plates triangular, enlarged at the basal two-thirds, narrowed at the apical third; basal third with an unpigmented transverse line; each subgenital plate with uniseriate macrosetae on the ventral side, followed by long, thin bristles (Fig. 3). Styles with apex broad and forked, with the internal branch unciform, approximately parallel to and longer than the outer branch (Fig. 4). Connective T-shaped, with stalk long and apically expanded (Fig. 4). Aedeagus with robust preatrium, corresponding to one third of the total length of the aedeagus, articulated to the connective; in lateral view, shaft with apical fifth abruptly curved dorsally; aedeagus with dorsal apodeme (Fig. 5); apex enlarged, with a median tooth (Fig. 6). Gonopore apical. Female genitalia. Sternite VII, in ventral view, not strongly produced posteriorly; anterior margin straight; posterior margin with a small tooth in the middle portion. Distribution. Brazil (Rio de Janeiro and Minas Gerais). Etymology. The species name refers to the collector of the specimens, Márcio Felix (FIOCRUZ, Brazil). Comments. Portanus felixi sp. nov. differs from all species of the genus by the costal cell being half the length of the outer anteapical cell, by the crown strongly produced anteriorly, and by the anterior margin subangulate in dorsal view. The aedeagus resembles that of P. tridens (DeLong, 1980), from which it differs by having only one median tooth at the apex of the aedeagus.
1,159.2
2012-11-28T00:00:00.000
[ "Biology" ]
Continuous window functions for NFFT In this paper, we study the error behavior of the nonequispaced fast Fourier transform (NFFT). This approximate algorithm is mainly based on the convenient choice of a compactly supported window function. Here we consider the continuous Kaiser--Bessel, continuous $\exp$-type, $\sinh$-type, and continuous $\cosh$-type window functions with the same support and same shape parameter. We present novel explicit error estimates for NFFT with such a window function and derive rules for the optimal choice of the parameters involved in NFFT. The error constant of a window function depends mainly on the oversampling factor and the truncation parameter. For the considered continuous window functions, the error constants have an exponential decay with respect to the truncation parameter.

Introduction The nonequispaced fast Fourier transform (NFFT), see [7,6,21,17] and [16, Chapter 7], is an important generalization of the fast Fourier transform (FFT). The window-based approximation leads to the most efficient algorithms under different approaches [8,19]. Recently, a new class of window functions was suggested in [3], and asymptotic error estimates are given in [4]. After [21], the similarities of the window-based algorithms for NFFT became clear. Recently we have analyzed the window functions used for NFFT so far and presented the related error estimates in [18]. Now we continue this investigation and present new error estimates for some other window functions. More precisely, we consider the continuous Kaiser-Bessel window function and two close relatives of the sinh-type window function, namely the continuous exp-type and cosh-type window functions. All these window functions have the same support and the same shape parameter. We show that these window functions are very useful for NFFT, since they produce very small errors. In this paper, we present novel explicit error estimates (2.3) with so-called error constants (2.1). The error constants of the NFFT are defined by values of the Fourier transform of the window function. We show that an upper bound of (2.1) depends only on the oversampling factor $\sigma > 1$ and the truncation parameter $m \ge 2$, and decreases with exponential rate with respect to $m$. In numerous applications of the NFFT, one quite often uses an oversampling factor $\sigma \in [\frac{5}{4},\, 2]$ and a truncation parameter $m \in \{2, 3, \ldots, 6\}$. Therefore we will assume that $\sigma \ge \frac{5}{4}$. The outline of the paper is as follows. In Section 2 we introduce the set $\Phi_{m,N_1}$ of continuous, even window functions with support $[-\frac{m}{N_1}, \frac{m}{N_1}]$, where $m \in \mathbb{N} \setminus \{1\}$ and $N_1 = \sigma N \in 2\mathbb{N}$ (with $N \in 2\mathbb{N}$ and $2m \ll N_1$) are fixed. We emphasize that a continuous window function $\varphi \in \Phi_{m,N_1}$ tends to zero at the endpoints $\pm\frac{m}{N_1}$ of its compact support $[-\frac{m}{N_1}, \frac{m}{N_1}] \subset [-\frac12, \frac12]$. In Section 3 we show that the simple rectangular window function (3.1) is not convenient for NFFT. The main results of this paper are contained in Sections 4-7. For the first time, we present explicit estimates of the error constants (2.1) for fixed truncation parameter $m$ and oversampling factor $\sigma$. In Section 4, we derive explicit error estimates for the continuous Kaiser-Bessel window function (4.1). In comparison, we show that the popular standard Kaiser-Bessel window function (4.13) has a similar error behavior to (4.1). A very useful continuous window function is the sinh-type window function (5.1), which is handled in Section 5.
The main drawback for the numerical analysis of the exp-type and cosh-type window functions is the fact that an explicit Fourier transform of these window functions is unknown. In Sections 6 and 7, we develop a new technique. We split the continuous exp-type/cosh-type window function into a sum ψ + ρ, where the Fourier transform of the compactly supported function ψ is explicitly known and where the compactly supported function ρ has small magnitude. Here we use the fact that both window functions (6.1) and (7.1) are close relatives of the sinh-type window function (5.1), which was introduced by the authors in [18]. The Fourier transform of ρ is explicitly estimated for small as well as large frequencies, where σ and m are fixed. We present many numerical results so that the error constants of the different window functions can easily be compared. After this investigation, we favor the use of a continuous window function with small error constant which can be computed very fast, such as the sinh-type, standard/continuous exp-type, or continuous cosh-type window function. Continuous window functions for NFFT Let σ > 1 be an oversampling factor. Assume that N ∈ 2ℕ and N₁ := σN ∈ 2ℕ are given. For fixed truncation parameter m ∈ ℕ \ {1} with 2m ≪ N₁, we introduce the open interval I := (−m/N₁, m/N₁) and the set Φ_{m,N₁} of all continuous window functions ϕ : ℝ → [0, 1] with the following properties:
• Each window function ϕ is even, has the support Ī, and is continuous on ℝ.
• For each window function ϕ, the Fourier transform ϕ̂ does not vanish on the index set I_N.
Examples of continuous window functions of Φ_{m,N₁} are the (modified) B-spline window functions, algebraic window functions, Bessel window functions, and sinh-type window functions (see [18] and [16, Chapter 7]). More examples are presented in Sections 4-7. In the following, we denote the torus ℝ/ℤ by 𝕋 and the Banach space of continuous, 1-periodic functions by C(𝕋). Let I_N := {−N/2, . . . , N/2 − 1} be the index set for N ∈ 2ℕ. We say that a continuous window function is convenient for the NFFT if it fulfills the condition e_σ(ϕ) ≪ 1 for a conveniently chosen oversampling factor σ > 1. Now we show that the error of the nonequispaced fast Fourier transform (NFFT) with a window function ϕ ∈ Φ_{m,N₁} can be estimated by the error constant (2.1). The NFFT (with nonequispaced spatial data and equispaced frequencies) is an approximate, fast algorithm which computes approximately the values p(x_j), j = 1, . . . , M, of any 1-periodic trigonometric polynomial at finitely many nonequispaced nodes x_j ∈ [−1/2, 1/2), j = 1, . . . , M, where c_k ∈ ℂ, k ∈ I_N, are given coefficients. By the properties of the window function ϕ ∈ Φ_{m,N₁}, the 1-periodized function ϕ̃ is continuous on 𝕋 and of bounded variation over [−1/2, 1/2]. We approximate the trigonometric polynomial p by a 1-periodic function s formed from translates of ϕ̃ with suitably chosen coefficients. The computation of the values s(x_j), j = 1, . . . , M, is very easy, since ϕ is compactly supported. The computational cost of the algorithm is O(N₁ log N₁ + (2m + 1)M); see [16, Algorithm 7.1] and [11] for details. We interpret s − p as the error function of the NFFT, which we measure in the norm of C(𝕋). Then the error function of the NFFT can be estimated as in (2.3). The proof of Theorem 2.1 is based on an equality relating the Fourier coefficients of ϕ̃ to samples of ϕ̂; for details of the proof see [18]. Rectangular window function In this section, we present a simple discontinuous window function which is not convenient for the NFFT.
Later we will use this discontinuous window function in Remarks 4.6 and 6.8, where we estimate the C(T)-error constants for the standard Kaiser-Bessel window function and the original exp-type window function, respectively. The simplest window function is the rectangular window function Now we show that the rectangular window function (3.1) is not convenient for NFFT. The Fourier transform of (3.1) has the form The discontinuous window function ϕ rect doesn't belong to Φ m,N 1 . Lemma 3.1 For each n ∈ I N \ {0}, the Fourier series Proof. For fixed n ∈ I N \ {0}, we consider the Fourier series of the special 1 N 1 -periodic function g n (x) := e −2πi nx for x ∈ (0, 1 For n = 0 we have g 0 (x) = 1. Then the rth Fourier coefficient of g n reads as follows for r ∈ Z. By the Convergence Theorem of Dirichlet-Jordan, the Fourier series of g n is pointwise convergent such that for each x ∈ R and n ∈ I N \ {0} This completes the proof. i.e., the rectangular window function (3.1) is not convenient for NFFT. By (3.2) we have for n ∈ I N \ {0} and r ∈ Z,φ rect (n+rN 1 ) ϕrect(n) = n n+rN 1 . Thus we obtain by Lemma 3.1 that for x ∈ (0, 1 Analogously, we estimate Consequently the rectangular window function is not convenient for NFFT, since the corresponding C(T)-error constant e σ (ϕ rect ) can be estimated by (3.3). In the case n = 0 we have f 0 (x) := r∈Z sinc(2πmr) e 2πirN 1 x = 1 . For arbitrary x ∈ R and m ∈ N, it holds obviously sin(2mx) ≤ 2m | sin x| and so 1 − e −2πin/N 1 = 2 sin πn N 1 . We obtain for n ∈ I N \ {0} that Thus for n ∈ I N \ {0} and x ∈ 0, 1 N 1 , we receive In the case n = 0, the above estimate is also true, since This completes the proof. is positive and decreasing such that Using the scaled frequency w = 2πmv/N 1 , we obtain Proof. Since the sinc-function is even, we consider only the case w ≥ β. For w = β, the above inequality is true, since |sinc 0 − sinc β| ≤ 1 + |sinc β| < 2 . For w > β we obtain it follows that Further we receive by (4.4) that This completes the proof. In our study we use the following Proof. For arbitrary u ∈ (−1, 1) and r ∈ N, we have Using (4.5), the following series can be estimated by Hence it follows by the integral test for convergence of series that Hence we conclude that We illustrate Lemma 4.2 for some special functions f , which we need later. Especially for µ = 2, it follows that For the function f (x) = e −ax , x > 0, with a > 0, we obtain by Lemma 4.2 that for each Proof. By (4.3) and Lemma 4.1 we obtain for all frequencies |v| since by (4.7) it holds for all n ∈ I N , By the special choice of b = 2π 1 − 1 2σ , we obtain the above inequality. From Lemma 4.4 it follows immediately that for all n ∈ I N , (4.10) Then the C(T)-error constant of the continuous Kaiser-Bessel window function (4.1) can be estimated by Proof. By the definition (2.1) of the C(T)-error constant, it holds where it holds (4.2), i.e., Thus from (4.10) it follows the assertion (4.11). Remark 4.6 As in [9] and [4], we consider also the standard Kaiser-Bessel window function with the shape parameter β = mb = 2πm (1 − 1 2σ ), σ > 1, N ∈ 2 N, and N 1 ∈ 2 N. Further we assume that m ∈ N \ {1} fulfills 2m ≪ N 1 . This window function possesses jump discontinuities at x = ± m N 1 with very small jump height I 0 (β) −1 , such that (4.13) is "almost continuous". The Fourier transform of (4.13) is even and reads by [14, p. 
95] as followsφ 2π ) is positive and decreasing such that we can estimate for all n ∈ I N , Then from Lemma 4.4 and (3.4) it follows that max n∈I N r∈Z\{0}φ Consequently, we obtain the following estimate of the C(T)-error constant of (4.13) For σ ≥ 5 4 , we sustain by (4.12) that For a fixed oversampling factor σ ≥ 5 4 , the C(T)-error constant of (5.1) decays exponentially with the truncation parameter m ≥ 2. On the other hand, the computational cost of NFFT increases with respect to m (see [16, pp. 380-381]) such that m should be not too large. For σ = 2 and m = 4, we obtain e σ (ϕ sinh ) ≤ 3.7 · 10 −6 . Continuous exp-type window function For fixed shape parameter β = bm with m ∈ N \ {1}, b = 2π 1 − 1 2σ , and oversampling factor σ ≥ 5 4 , we consider the continuous exp-type window function Obviously, ϕ cexp ∈ Φ m,N 1 is a continuous window function. Note that a discontinuous version of this window function was suggested in [3,4]. A corresponding error estimate for the NFFT was proved in [4], where an asymptotic value of its Fourier transform was determined for β → ∞ by saddle point integration. We present new explicit error estimates for fixed shape parameter β of moderate size. In the following, we present a new approach to an error estimate for the NFFT with the continuous exp-type window function (6.1). Unfortunately, the Fourier transform of (6.1) is unknown analytically. Therefore we represent (6.1) as sum where the Fourier transform of ψ is known and where the correction term ρ has small magnitude |ρ|. We choose m −ρ(0) = e −β 2 8.06 · 10 −5 3 7.24 · 10 −7 4 6.51 · 10 −9 5 5.85 · 10 −11 6 5.25 · 10 −13 Since ρ has small absolute values in the small supportĪ, the Fourier transformρ is small too and it holds |ρ Substituting t = N 1 x/m, we determine the Fourier transform For simplicity, we introduce the scaled frequency w := 2πmv/N 1 such that where I 1 denotes the modified Bessel function and J 1 the Bessel function of first order. Therefore we consider as correction term of (6.4). Now we estimate the integral by complex contour integrals. Then the stronger form of Cauchy's Integral Theorem (see [5]) provides with the contour integrals Note that I 3 (w) is the complex conjugate of I 1 (w) such that |I 3 (w)| = |I 1 (w)|. The line segment C 2 can be parametrized by z = t + i, t ∈ [−1, 1] such that and hence Then we have e −β we obtain the estimate A parametrization of the line segment C 1 is z = −1 + i t, t ∈ [0, 1], such that For w ≥ β > 0, we split I 1 (w) into the sum of two integrals Since 2 ≤ | √ 2i + t| ≤ 4 √ 5, t ∈ [0, 1] , the integral I 1,0 (w) is bounded in magnitude by Above we have used the simple inequality 1 − e −x ≤ x for x ≥ 0. Finally we estimate the integral I 1,1 (w) as follows it follows that Thus we receive for w ≥ β, This completes the proof. is bounded by Thus we receive for all n ∈ I N and r ∈ Z \ {0, ±1}, Hence for all n ∈ I N , we obtain by (4.6) that For all the other v = n ± N 1 = − N 2 + N 1 , n ∈ I N , we have such that by (6.3),ψ(n ± N 1 ) reads as follows Since can be small for n ∈ I N , we estimate the Bessel function J 1 (x), x ≥ 0, by Poisson's integral (see [22, p. 47]) By (6.11), this estimate of |ψ(n ± N 1 )| is valid for all n ∈ I N . This completes the proof. Now for arbitrary n ∈ I N , we have to estimate the series r∈Z\{0} |ρ(n + rN 1 )| . 
By (6.5) and Lemma 6.1, we obtain for any v ∈ R \ {0}, Thus we obtain that The inequalities (4.6), (4.8), and (4.9) imply that Thus we obtain the following Lemma 6.6 Let N ∈ 2N and σ ≥ 5 4 be given, where Hence from Lemmas 6.5 and 6.6 it follows that max n∈I N r∈Z\{0}φ Using Lemma 6.4, we obtain by the following Theorem 6.7 Let N ∈ 2N and σ ≥ 5 4 be given, where N 1 = σN ∈ 2N. Further let m ∈ N with 2m ≪ N 1 , β = bm, and b = 2π 1 − 1 2σ . Then the C(T)-error constant of the continuous exp-type window function (6.1) can be estimated by In other words, the continuous exp-type window function (6.1) is convenient for NFFT. Note that for σ ∈ 5 4 , 2 and m ≥ 2, it holds by (6.19), In order to compute the Fourier transformφ of window function ϕ ∈ Φ m,N 1 , we approximate this window function by numerical integration. In our next numerical examples we apply the following method. Since the window function ϕ ∈ Φ m,N 1 is even and supported We evaluate the last integral using a global adaptive quadrature [20] forφ(k), k = 0, . . . , N . In general, this values can be precomputed, see [10,2]. We split (6.20) in the form with the window functions (6.1) and (3.1). Then the Fourier transform of (6.20) reads as followsφ , v ∈ R , By Lemma 6.4 it follows that Using (6.18) and (3.4), we estimate for all n ∈ I N , Thus we obtain Thus the discontinuous window function (6.20) possesses a similar C(T)-error constant as the continuous exp-type window function (6.1). Obviously, ϕ cosh ∈ Φ m,N 1 is a continuous window function. Note that recently a discontinuous version of this window function was suggested in [3, Remark 13]. But up to now, a corresponding error estimate for the related NFFT was unknown. Now we show that the C(T)-error constant e σ (ϕ cosh ) can be estimated by a similar upper bound as e σ (ϕ cexp ) in Theorem 6.7. Thus the window functions (6.1) and (7.1) possess the same error behavior with respect to the NFFT. In the following, we use the same technique as in Section 6. Since the Fourier transform of (7.1) is unknown analytically, we represent (7.1) as the sum where the Fourier transform of ψ 1 is known and where the correction term ρ 1 has small magnitude |ρ 1 |. We choose x ∈ R \ I and x ∈ R \ I . Conclusion In this paper, we prefer the use of continuous, compactly supported window functions for NFFT (with nonequispaced spatial data and equispaced frequencies). Such window functions simplify the algorithm for NFFT, since the truncation error of NFFT vanishes. Further, such window functions can produce very small errors of NFFT. Examples of such window functions are the continuous Kaiser-Bessel window function (4.1), continuous exp-type window function (6.1), sinh-type window function (5.1), and continuous coshtype window function (7.1) which possess the same support and shape parameter. For these window functions, we present novel explicit error estimates for NFFT and we derive rules for the convenient choice of the truncation parameter m ≥ 2 and the oversampling parameter σ ≥ 5 4 . The main tool of this approach is the decay of the Fourier transform ϕ(v) of ϕ ∈ Φ m,N 1 for |v| → ∞. A rapid decay ofφ is essential for small error constants. Unfortunately, the Fourier transform of certain window function ϕ, such as (6.1) and (7.1), is unknown analytically. Therefore we propose a new technique and split ϕ into a sum of two compactly supported functions ψ and ρ, where the Fourier transformψ is explicitly known and where |ρ| is sufficiently small. 
Further, it is shown that the standard Kaiser-Bessel window function and the original exp-type window function, which have jump discontinuities with very small jump heights at the endpoints of their support, possess a similar error behavior to the corresponding continuous window functions. In summary, the C(𝕋)-error constant of the continuous/standard Kaiser-Bessel window function is of best order O(m e^{−2πm√(1−1/σ)}). For the sinh-type, continuous/original
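To make the construction above concrete, the following minimal Python sketch builds the NFFT approximation s with the sinh-type window (5.1) and compares it with the exact polynomial values. It is an illustration under assumed parameters, not the authors' reference implementation; ϕ̂ is evaluated by plain quadrature, since only its values at the integers k ∈ I_N are needed here.

```python
import numpy as np

# assumed illustrative parameters
N = 32                                         # polynomial degree, N in 2N
sigma = 2.0                                    # oversampling factor, sigma >= 5/4
N1 = int(sigma * N)                            # oversampled length N1 = sigma*N
m = 4                                          # truncation parameter, m >= 2
beta = 2 * np.pi * m * (1 - 1 / (2 * sigma))   # shape parameter beta = b*m

def phi_sinh(x):
    """sinh-type window (5.1), supported on [-m/N1, m/N1]."""
    t = np.asarray(N1 * x / m, dtype=float)
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.sinh(beta * np.sqrt(1 - t[inside] ** 2)) / np.sinh(beta)
    return out

def phi_hat(k):
    """Fourier transform of phi_sinh at integers k, by simple quadrature."""
    x = np.linspace(-m / N1, m / N1, 8001)
    w = phi_sinh(x) * (x[1] - x[0])
    return np.exp(-2j * np.pi * np.outer(k, x)) @ w

rng = np.random.default_rng(0)
k = np.arange(-N // 2, N // 2)                 # index set I_N
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)
xj = rng.uniform(-0.5, 0.5, 200)               # nonequispaced nodes x_j

# exact values p(x_j) of the trigonometric polynomial
p = np.exp(2j * np.pi * np.outer(xj, k)) @ c

# NFFT approximation: precompensate by 1/phi_hat(k), inverse DFT,
# then truncated convolution with the window (2m+1 terms per node)
g_hat = c / phi_hat(k)
ell = np.arange(N1)
g = (np.exp(2j * np.pi * np.outer(ell / N1, k)) @ g_hat) / N1
s = np.zeros_like(p)
for j, x in enumerate(xj):
    ls = np.arange(int(np.floor(N1 * x)) - m, int(np.floor(N1 * x)) + m + 1)
    s[j] = np.sum(g[ls % N1] * phi_sinh(x - ls / N1))

print("max |s - p| / ||c||_1 =", np.max(np.abs(s - p)) / np.sum(np.abs(c)))
```

For σ = 2 and m = 4, the printed ratio should be of roughly the same magnitude as the bound e_σ(ϕ_sinh) ≤ 3.7 · 10⁻⁶ quoted in Section 5.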
4,723.6
2020-10-14T00:00:00.000
[ "Mathematics" ]
QTM: Computational package using MPI protocol for Quantum Trajectories Method The Quantum Trajectories Method (QTM) is one of the frequently used methods for studying open quantum systems. The main idea of this method is the evolution of the wave functions which describe the system (as functions of time). So-called quantum jumps are then applied at randomly selected points in time. The obtained system state is called a trajectory. After averaging many single trajectories, we obtain an approximation of the behavior of a quantum system. This fact also allows us to use parallel computation methods. In this article, we discuss the QTM package, which is supported by the MPI technology. Using MPI allows utilizing parallel computing for calculating the trajectories and averaging them; as the effect of these actions, the time taken by the calculations is shorter. Although written in the C++ programming language, the presented solution is easy to use and does not require any advanced programming techniques. At the same time, it offers higher performance than other packages implementing the QTM. This is especially important in the case of harder computational tasks, and the use of MPI improves the performance for particular problems which can be solved in the field of open quantum systems. Introduction The Quantum Trajectories Method (QTM) is an important method actively applied for investigations in the field of open quantum systems [1], [2], [3], [4]. It has been implemented as packages in a few programming languages and tools. The first implementation was a package written by Sze M. Tan for the Matlab environment [5]. There is also a package for the C++ language [6], [7]. The latest implementations are QuTiP [8], [9] for the Python ecosystem and QuantumOptics.jl [10] prepared for the Julia language. The aforementioned solutions, especially QuTiP and QuantumOptics.jl, allow utilizing parallel computing inside the environments of Python and Julia. However, these packages are not intended for High-Performance Computing (HPC) [11], [12], where the Message Passing Interface, termed MPI [13], plays a significant role. Using MPI allows us to re-implement the QTM for HPC systems, regardless of their scale (the QTM is scale-free because each trajectory may be calculated separately). The scale-free character of the QTM allows utilizing more computing power, and that results in a shorter time of calculations, which is especially important for cases in which 20,000 and more trajectories are generated. Previously, we prepared implementations of the QTM for CPUs and GPUs [14], [15]. The results presented in this work relate to a brand new implementation of the QTM for the MPI protocol (the actual source code of the QTM can be found at [16]). Version 1.0a of the QTM package is also available at [17]. As far as the QTM is concerned, a proper method for solving systems of Ordinary Differential Equations (ODEs) must be selected. This is a basic step when numerical computations are carried out. More precisely, a very important issue is the Initial Value Problem (IVP). Therefore, the selection of an appropriate method for solving the IVP (in general, ODEs) is crucial, especially when the system of equations is difficult to solve, e.g. for so-called systems of stiff ODEs.
The stiff ODEs constitute a significant type of ODEs and the correct solving of these equations is pivotal for numerical computations in many cases-especially for the QTM where calculating many trajectories requires solving many ODEs. A few groups of methods [18], [19] used in the context of stiff ODEs may be recalled, and these are the Backward Differentiation Formula (BDF) methods. Another approach, which is commonly used in numerical computations for solving ODEs, is Livermore Solver for ODE (LSODE) method [20]. In this work, we reuse the LSODE variant called ZVODE (LSODE for complex numbers) for the implementation of the QTM in the MPI environment. The ZVODE package is a stable solution offering high accuracy, but this is not a reentrant solution which may be directly utilized in a parallel environment. However, this problem can be solved by utilizing MPI where the processes are separate programs communicating with one another using message passing. Therefore, one MPI process calls only one instance of the ZVODE method. The ZVODE package was implemented in the Fortran language. The QTM package, implemented in the C++ language, uses the ZVODE package efficiently. However, we made some effort to assure that functions offered by the QTM package are easily accessible as it is in the packages QuTIP and QuantumOptics.jl. Naturally, programs (delivered as examples or written by the user) utilizing the QTM have to be compiled. This should not be a problem because of used makefile mechanism for simplifying this process. The paper is organized in the following way: in Section 2, we shortly present selected mathematical features of the QTM. The BDF numerical methods for solving IVP, used in the presented package are discussed in subsection 2.1. Whereas in 2.2 the algorithm for the QTM is presented. There are also some remarks pointing out where the parallel processing techniques and the MPI technology are utilized. Section 3 contains selected remarks referring to the implementation of the QTM package. In this section, the most important data types implemented in package are presented. We also describe how the ZVODE package is used to solve ODEs since solving ODEs poses a significant problem in the QTM. We analyze the efficiency of our implementation in a comparison with other recently developed packages in section 4. The most important issue presented in this section is the scalability of the QTM. A summary of achieved results is discussed in Section 5. There are also presented further aims which are planned to be realized as the next steps in the evolution of the demonstrated implementation of the QTM. The article is ended with acknowledgments and bibliography. Performing the QTM requires solving ODEs. The presented solution, as mentioned above, uses the ZVODE package which allows utilizing some numerical methods indispensable for the proper functioning of the QTM. The fundamental information concerning the ZVODE package is presented in subsection 2.1, while subsection 2.2 contains the description of the QTM, including a method of calculating a single trajectory [14], [15] with the use of MPI technology. The properties of the QTM allow us to easily distribute the computations-this feature helps to accelerate the calculations and makes the process scalable. Another important element of the solution is a pseudorandom number generator. In subsection 2.3, we describe the generator which we chose to utilize in the implementation of the QTM package. 
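The Fortran ZVODE code discussed here is also wrapped by SciPy, so the choice between the Adams (non-stiff) and BDF (stiff) variants described in the next subsection can be tried out directly. The fragment below is an independent illustration of that same solver on a toy complex-valued problem, not part of the QTM package; the Hamiltonian is an arbitrary example.

```python
import numpy as np
from scipy.integrate import ode

# Toy complex IVP dy/dt = -i*H*y with an arbitrary 2x2 example Hamiltonian
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)

def rhs(t, y):
    return -1j * (H @ y)

solver = ode(rhs)
# method='adams' selects the non-stiff variant, method='bdf' the stiff one
solver.set_integrator('zvode', method='bdf', atol=1e-10, rtol=1e-8)
solver.set_initial_value(np.array([1.0 + 0j, 0.0 + 0j]), 0.0)

while solver.successful() and solver.t < 1.0:
    solver.integrate(solver.t + 0.1)
print(solver.t, solver.y)
```

Like the Fortran original, this wrapper is not reentrant, which matches the paper's design decision of one ZVODE instance per MPI process.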
Methods for solving ODEs The ZVODE package was built on the basis of the VODE package [21]; the VODE package, in turn, was based on the LSODE package. The numerical methods used in the implementation of the LSODE (Livermore Solver for Ordinary Differential Equations) package are computational routines based on the group of Linear Multistep Methods (LMMs). In general, LSODE uses the Adams methods (so-called predictor-corrector methods) for solving non-stiff ODEs. In the case of stiff ODEs, the Backward Differentiation Formula (BDF) methods are used. In this section, the most important mathematical aspects of the aforementioned methods are briefly presented; a detailed report concerning the standard (i.e. sequential) implementation of LSODE may be found in [22]. The choice of the ZVODE package is determined by the fact that the QTM requires complex numbers, and the ZVODE package allows performing calculations on complex numbers of single and double precision. For the Initial Value Problem (IVP) in general, the LMMs approximating the problem's solution may be described as

Σ_{i=0}^{k} α_i y_{n+i} = h Σ_{i=0}^{k} β_i f_{n+i},

where α_i, β_i ∈ ℝ, α_k ≠ 0, and |α_0| + |β_0| > 0. The parameter h represents the step width for the integration. Naturally, the value of h is selected during the solver's work with the use of adaptive methods. The LSODE routine uses two methods based on the LMMs: the Adams-Moulton method and the BDF method. The implicit Adams-Moulton method, i.e. the case with β_i ≠ 0 (a detailed description of the symbols used may be found in [23]), can be written in terms of the backward differences ∇. It should be pointed out that ∇⁰f_n = f_n and ∇^{i+1}f_n = ∇^{i}f_n − ∇^{i}f_{n−1}. The value k expresses the number of steps, i.e. the number of values y_i used in the specified method. The BDF method is defined by β_k ≠ 0 but β_i = 0 for i = 0, 1, 2, . . ., k − 1. The values of α_i are fixed for each order and may be found in many publications concerning methods for solving ODEs. In both cases, for the Adams-Moulton and the BDF method, the problem is to estimate the value of f_{n+1}; it should be stressed that this value is needed to estimate itself. The Newton method [24] is used in this case to calculate the estimation. The Newton method is quite rapidly convergent, though for a system of equations it needs to calculate the Jacobian. In this iteration, y^{(2)}_{n+1} and y^{(1)}_{n+1} stand for successive approximations of y_{n+1}, and the function g(·) represents the function approximating values of y_{n+1}. The BDF method is used for stiff ODE problems and the Adams-Moulton method for non-stiff ODE problems. Using both approaches together makes (ZV/LS)ODE a hybrid method; see Fig 1. The presented QTM implementation allows choosing the method according to whether an easier (non-stiff ODE) or more difficult (stiff ODE) QTM problem is being solved. Moreover, in ZVODE the method's order is also controlled automatically for each method: for the Adams-Moulton method, orders from the first to the twelfth are available, and for the BDF method, orders from the first to the sixth. Quantum Trajectories Method A description of quantum states' dynamics may be presented for two fundamental situations. The first case is the evolution of a closed quantum system, whereas the second case concerns the evolution of an open quantum system. As an example of a closed quantum system, we may refer to a model of a quantum circuit.
If we want to consider a quantum system where its dynamics is affected by the influence of an external environment, we deal with an open quantum system. In this subsection, we do not aim to describe the mathematical models of dynamics in open and closed quantum systems-the details concerning this subject may be found in [4] and [25]. For clarity, it should be recalled that for closed systems, the evolution is a unitary operation, and it can be denoted as Schrödinger equation: where (C1) is the form of a partial differential equation and (C2) is the form which is convenient to use in numerical simulations. In (C2) H is a Hamiltonian describing system's dynamics, and |ψi stands for the initial system's state. The von Neumann equation describes the quantum system's evolution if an influence of an external environment has to be considered: where H sys denotes the dynamics of a closed/core system, H env stands for the environment's dynamics, and H int describes the interaction's dynamics between the external environment and the system. The environment's influence can be removed from (8) by the partial trace operation. In such a case, we obtain an equation which describes the dynamics of the core system. Such a system can be expressed by the Lindblad master equation: where C n stands for a set of collapse operators. These operators represent the influence of an external environment which affects a simulated system. Naturally, applying a collapse operator causes irreversible modification of a quantum state. It should be emphasized that simulating the behavior of a quantum system needs exponentially growing memory capacity according to the system's dimension. The QTM is a method which facilitates reducing the memory requirements during the simulation. Of course, any simulation of a quantum system's behavior needs calculating many single trajectories-the more, the better-because they will be averaged to one final trajectory, and a greater number of trajectories ensures improved accuracy. However, every trajectory may be simulated separately, and this fact provides a natural background to utilize a parallel approach while implementing the QTM. If we would like to compare simulating the behavior of a quantum system with the use of Lindblad master equation and the QTM, we should consider the requirements of both methods on computational resources. The Lindblad master equation methods utilize the density matrix formalism and the QTM is based on a wave function of n-dimensional state's vector (termed as a pure state). A number of this vector's entries grows exponentially, but using sparse matrices facilitates efficient simulation of a quantum system's behavior. It should be stressed that a simulation based on the wave function concerns only one state of a quantum system what seems to be a disadvantage of this solution because for the Lindblad master equation methods, the density matrix describes many different states of the same system. Unfortunately, using density matrices in most cases is not possible because of memory requirementsthe size of the density matrix grows exponentially when the dimension of a simulated system increases. While the QTM enables monitoring the influence of an external environment on a quantum state by modifying the state's vector with the use of a collapse operator. 
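For reference, the standard textbook forms of the equations named above, written in the notation used here with ħ = 1 (a reconstruction, not a quotation of the original display equations), are:

```latex
% Schrodinger equation, forms (C1) and (C2)
i\,\frac{\partial}{\partial t}\,\lvert\psi(t)\rangle = H\,\lvert\psi(t)\rangle,
\qquad
\lvert\psi(t)\rangle = \mathrm{e}^{-\mathrm{i}Ht}\,\lvert\psi\rangle .

% von Neumann equation (8), with H = H_sys + H_env + H_int
\dot{\rho}(t) = -\mathrm{i}\,[\,H,\ \rho(t)\,] .

% Lindblad master equation for the core system
\dot{\rho}(t) = -\mathrm{i}\,[\,H_{\mathrm{sys}},\ \rho(t)\,]
  + \sum_{n}\Bigl( C_n\,\rho(t)\,C_n^{\dagger}
  - \tfrac{1}{2}\bigl\{ C_n^{\dagger} C_n,\ \rho(t)\bigr\} \Bigr) .
```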
For the QTM, the evolution of a quantum system is described by a so-called effective Hamiltonian H_eff, defined with the use of the set of collapse operators C_n and the system's Hamiltonian H_sys:

H_eff = H_sys − (i/2) Σ_n C_n† C_n.

The set of operators C_n determines the probability of a quantum jump occurrence. This phenomenon is caused by a single collapse operator acting on the current quantum state of the system. The probability of a quantum jump occurring in the interval δt is δp = δt Σ_n ⟨ψ(t)| C_n† C_n |ψ(t)⟩. If a quantum jump takes place, the system's state at the moment of time (t + δt), that is, just after the collapse operation, can be expressed as

|ψ(t + δt)⟩ = C_n |ψ(t)⟩ / √(⟨ψ(t)| C_n† C_n |ψ(t)⟩).   (12)

Furthermore, if many collapse operators may be applied in the considered model, the probability of using the i-th operator is

P_i(t) = ⟨ψ(t)| C_i† C_i |ψ(t)⟩ / Σ_n ⟨ψ(t)| C_n† C_n |ψ(t)⟩.   (13)

Of course, simulating the phenomenon of a system's collapse needs a random number generator to ensure its probabilistic character (this issue is discussed in the following subsection). Let us emphasize that all the considered calculations correspond to operations implemented on matrices and vectors within the QTM package. The needed matrices are usually band matrices. Namely, it is convenient, in terms of memory consumption and speed of calculation, to use sparse matrices. In consequence, the compressed sparse row (CSR) format may be utilized; it also gives an additional speed-up, especially when many matrix-vector multiplications have to be realized. By summarizing the above remarks, we can formulate an algorithm (Algorithm 1) which presents how a single quantum trajectory is calculated: (A) a value r, uniformly distributed on (0, 1), is randomly selected; (B) the unnormalized state is propagated under H_eff until the squared norm ⟨ψ(t)|ψ(t)⟩ drops to r; (C) if a quantum jump is realized, then the system's state projection at the moment t, to one of the states given by Eq (12), is calculated. The operator C_n is selected to meet the following relation for the adequate n: Σ_{i=1}^{n} P_i(t) ≥ r, where P_i(t) is given by Eq (13); (D) the state obtained by the projection of the wave function in the previous step is the new initial value corresponding to the moment of time t; next, a new value of r is randomly selected, and the procedure repeats the process of quantum trajectory generation, starting from step (B). More precisely: the simulation is performed again, but starts from the previously given value of t. Pseudorandom number generator The QTM package described in this work utilizes a pseudorandom number generator from the Generalized Feedback Shift Register (GFSR) class. More precisely, we chose the LFSR113 method, which is defined by a recurrence over the field 𝔽₂ consisting of the elements 0 and 1:

x_n = (a_1 x_{n−1} + . . . + a_k x_{n−k}) mod 2,

where the a_{j₁} are the generator's parameters (j₁ = 1, . . ., k) and the x_{j₂} are the generator's seeds (j₂ = n − 1, . . ., n − k). The generator's period is r = 2^k − 1 if and only if the characteristic polynomial of the recurrence,

P(z) = z^k − a_1 z^{k−1} − . . . − a_k,

is primitive. The generated values, for n ≥ 0, may be expressed as

u_n = Σ_{i=1}^{L} x_{ns+i−1} 2^{−i},

where s denotes the step size and L stands for the number of bits in a generated word. If (x_0, x_1, . . ., x_{k−1}) ≠ 0 and s is coprime to r, then we obtain a periodic sequence u_n (with period denoted by r). Of course, the quality of the generator is determined by the sequence x_n. The proper choice of seeds is described in [28], where it is also shown that four seeds (k = 4) are sufficient to generate high-quality pseudorandom numbers.
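A compact Python rendering of the LFSR113 recurrence, based on L'Ecuyer's published constants, is given below. It is a sketch for illustration, independent of the package's C++ implementation; note how the masks in the update steps encode exactly the seed constraints "greater than 1, 7, 15 and 127" mentioned in the next paragraph.

```python
class LFSR113:
    """Sketch of L'Ecuyer's LFSR113 generator (period ~2**113).
    Seeds must exceed 1, 7, 15 and 127, respectively."""

    M = 0xFFFFFFFF  # emulate 32-bit unsigned arithmetic

    def __init__(self, z1=2, z2=8, z3=16, z4=128):
        self.z1, self.z2, self.z3, self.z4 = z1, z2, z3, z4

    def next_u32(self):
        M = self.M
        b = ((((self.z1 << 6) & M) ^ self.z1) >> 13)
        self.z1 = (((self.z1 & 4294967294) << 18) & M) ^ b
        b = ((((self.z2 << 2) & M) ^ self.z2) >> 27)
        self.z2 = (((self.z2 & 4294967288) << 2) & M) ^ b
        b = ((((self.z3 << 13) & M) ^ self.z3) >> 21)
        self.z3 = (((self.z3 & 4294967280) << 7) & M) ^ b
        b = ((((self.z4 << 3) & M) ^ self.z4) >> 12)
        self.z4 = (((self.z4 & 4294967168) << 13) & M) ^ b
        return self.z1 ^ self.z2 ^ self.z3 ^ self.z4

    def uniform(self):
        return self.next_u32() * 2.3283064365386963e-10  # scale by 2**-32

gen = LFSR113()
print([round(gen.uniform(), 6) for _ in range(3)])
```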
The LFSR113 generator's realization is very fast because of the bitwise operations usage-this feature also does not collide with ensuring a sufficient period for generated numbers: 2 113 . However, it should be emphasized that the selection of generator's seeds is crucial-there are four initial values, and they have to be integers greater than 1, 7, 15 and 127, respectively. General implementation remarks In the QTM, we calculate many independent trajectories. Because there is no relations between trajectories, it is easy to implement them utilizing parallel computing. Naturally, parallel computation shortens the time of calculations in comparison to serial computation. The calculated trajectories must be averaged to obtain the final trajectory. Fig 2 depicts the idea of computing trajectories in many computational nodes. The exchange of necessary information between nodes is realized with use of MPI protocol. This protocol offers a scalable solution, what means that our QTM package works efficiently within a cluster of workstations, connected with the use of the Ethernet network, and also with one multi-core personal computer. In section 4, we show the acceleration of computations carried out with the use of the QTM package-the acceleration is noticeable regardless of whether we work with a cluster of workstations or with a one multiprocessor computer. The process of calculating the final trajectory is presented in Fig 2. The diagram shows the flow of messages during the calculations. The messages are sent by the master node and other nodes respond to these messages. The main part of the calculations is preceded by some preliminary activities, e.g. initialization of pseudorandom numbers generators. Then, the body of calculations begins, and computational nodes calculate and average trajectories. The number of trajectories is influenced by the number of computational nodes and by the number of trajectories planned to be generated to simulate an analyzed problem. We assume an uniform load for every computational node. Each computational node calculates a number of trajectories and averages them. Finally, the averaged trajectories are sent to the master node where are averaged once again in order to obtain the final trajectory. Calculating one trajectory is directly connected with the ZVODE package. Despite the fact that this package was implemented in Fortran, the functions from the ZVODE library may be called in a code written in C++, just like other functions implemented in C++. To achieve that, we have to prepare an intermediary function based on a template. The exemplary function called zvode_method_for_mc is presented in Fig 3. The scheme of tasks realized during the calculation of a single trajectory, according to Algorithm 1, is shown in Fig 4. The algorithm is implemented in the C++ language, but it uses the zvode_method_for_mc method to solve a system of ODEs. The ZVODE package does not offer the reentrant property which allows many threads to utilize the same function (this property is realized by avoidance of using shared and global variables in function's implementation). Therefore, one MPI process may call only one instance of any method from ZVODE. However, this feature does not pose a problem because we may run many MPI processes at the same time. 
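To make the control flow of Algorithm 1 concrete in a language-neutral way, here is a deliberately simplified Python toy: a fixed-step Euler integrator stands in for ZVODE, and the two-level system and decay rate are hypothetical values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)              # toy H_sys
C = [np.sqrt(0.2) * np.array([[0, 1], [0, 0]], dtype=complex)]     # collapse ops
Heff = H - 0.5j * sum(c.conj().T @ c for c in C)                   # effective H

def trajectory(psi0, tmax, dt):
    psi, t, r = psi0.copy(), 0.0, rng.uniform()                    # step (A)
    while t < tmax:
        # step (B): propagate the unnormalized state under H_eff
        # (a real implementation uses an adaptive solver such as ZVODE)
        psi = psi - 1j * dt * (Heff @ psi)
        t += dt
        if np.vdot(psi, psi).real <= r:                            # step (C)
            probs = np.array([np.vdot(c @ psi, c @ psi).real for c in C])
            n = rng.choice(len(C), p=probs / probs.sum())
            psi = C[n] @ psi
            psi /= np.linalg.norm(psi)                             # step (D)
            r = rng.uniform()
    return psi / np.linalg.norm(psi)

print(trajectory(np.array([1.0 + 0j, 0.0]), tmax=5.0, dt=0.001))
```

Averaging the expectation values recorded along many such independent trajectories, one per MPI process in the package, yields the final trajectory described above.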
It is a very important assumption for our implementation that we use templates and static memory allocation (this applies to the code written by the user in order to, for example, describe some operators)-these techniques enable implementation of a code with an easier declaration of objects (the code is more similar to a code of a script language than to a C++ code with dynamic memory allocation). This makes using the package easier, but equally efficient. We even expect better efficiency because the static size of data structures is known during the compilation process, therefore, the compiler is able to optimize the numerical computation. The QTM package offers three basic data types. The first one is simpleComplex < T >. It is dedicated to the operations on complex numbers, where T may be a float or a double. If we want to use simpleComplex < T > type together with the functions from the ZVODE library, then T is always double. The second type uVector < T > (intsize) is dedicated to vectors-the first parameter describes the type of vector elements, and a constructor parameter represents the number of entries. Similarly for matrices, a type uMatrix < T > (intsize) was introduced, where the first parameter describes the type of matrix elements, and the constructor parameter stands for the dimension (only square matrices are utilized in the QTM). Package also utilize the static and dynamic memory allocation. However, dynamic aspects of memory management were hidden in the implementation layer. Users of the QTM package are not obliged to create objects in dynamic approach. An exemplary declaration of four matrices describing collapse operators is: A value of WV_LEAD_DIM expresses, in this case, the dimension of operators and matrices. In the QTM package, a uCSRMatrix < T > (size, rowptr, colind) type was also implemented. This type represents a column-row oriented sparse matrix. The sparse matrices are often utilized to describe processes taking place in open quantum systems. Using sparse matrices allows increasing the efficiency of computation and decreasing the amount of memory needed to hold, for example, collapse operators or expectation values. Furthermore, sparse matrices in CSR format offer a shorter time of multiplying these matrices by vectors. The QTM package utilizes the MPI standard. However, after defining the initial structures describing a simulated problem, the user is not obliged to deal with details of the MPI protocol. The whole computational process is realized by a function mpi_main: In the above form of mpi_main function, the Hamiltonian is time-independent. If we would like the Hamiltonian to be time-dependent, in the call of mpi_main, we point an additional function which is used during the calculation of trajectories. The whole process of communication with the use of MPI is automatically realized within the mpi_main function. The parameters of the final trajectory may be directed to standard output or to a text file, what is determined by a value of the parameter opt. This parameter also serves to indicate the numerical method for solving ODEs. The ZVODE package offers two methods: ADAMS and BDF. A method is selected as below: The BDF method is dedicated to solving stiff ODEs but the ADAMS method may also be used in the QTM (sometimes it needs to calculate more trajectories or to increase the accuracy of the Adams' ODEs solver). 
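For readers who want a quick baseline to compare against, the same class of simulation can be set up in a few lines with QuTiP's mcsolve, which is used as the reference point in the next section. The operator values below are illustrative assumptions, not the exact ones from the experiments.

```python
import numpy as np
import qutip as qt

# Hypothetical 2-level example in QuTiP's Monte Carlo solver
H = 2 * np.pi * 0.1 * qt.sigmax()
psi0 = qt.basis(2, 0)
times = np.linspace(0.0, 10.0, 100)
c_ops = [np.sqrt(0.05) * qt.sigmaz()]                 # assumed collapse operator
e_ops = [qt.sigmaz()]                                 # assumed expectation operator
result = qt.mcsolve(H, psi0, times, c_ops=c_ops, e_ops=e_ops, ntraj=1000)
print(result.expect[0][:5])
```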
Selected problems-Implementation and performance During the realization of the QTM implementation, the essential task was to improve its performance in comparison to other existing QTM implementations. The usage of C++ programming language and especially the MPI technology facilitated us to obtain a stable solution with decent performance, thanks to the parallel processing of trajectories. The performance of the presented solution was compared with two recently developed packages the QuTIP and the QuantumOptics.jl, which also fully supports the QTM (and the QuTIP also utilizes the ZVODE method). We have prepared two examples to examine the efficiency of the unitary Hamiltonian and the trilinear Hamiltonian. We also show computations referring to the Jaynes-Cummings model. The fourth example presents the results of the experiment also conducted in [29], and shows the accuracy of the QTM package. Unitary Hamiltonian The first example concerns a simulation of a system described by the following unitary Hamiltonian: where σ x represents Pauli operator X (also termed as the NOT operator). The initial state of the analyzed system is The collapse operator and the expectation value operator, used in the simulation, are given below: where σ z stands for the Pauli operator Z. We ran the experiment on a PC equipped with the Intel i7-4950k 4.0 Ghz processor under the Ubuntu 16.04 LTS operating system. The utilized version of the QuTIP package is 4.2 and the QuantumOptics.jl ran on Julia 0.6.4. Although the dimensions of the above structures are quite small, the simulation of 1000 trajectories with the QuTIP package, when only one core is running, needs about � 0.90 seconds. Utilizing e.g. two cores for the calculation does not reduce the time of process because the QuTIP needs time for coordinating two threads. Of course, with the greater number of trajectories, it is easy to observe that the time of simulation is shorter when more cores are active. If the QTM package is used, calculating 1000 trajectories with the use of e.g. of eight MPI computational nodes cores takes 2-3 seconds. This time is mainly consumed by starting the MPI processes. It should be mentioned that the QuantumOptics.jl package needs more time for calculation because of the JIT compiler usage. However, for a such small problems the time of calculation is almost the same, irrespective of the used package. allows specifying data structures directly. Constants WD_LD and WD_LD_SQR respectively stand for the vector state dimension, and its square and they are equal to two and four; co represents the collapse operator, eo stands for the expectation operator value and H is the Hamiltonian: opt.tolerance = 1e-7; opt.file_name = strdup("output-data.txt"); opt.fnc = &myfex_fnc_f1; r = mpi_main<N, Ntrj, WV_LD, WV_LD_SQR, 1>(argc, argv, 1, 0, 10, 1, 1, opt); A significant part of the source code is a function calculating the right side of the ODEs. In the case of the example for a unitary Hamiltonian, we may utilize a direct approach that is multiplying the matrix form of the Hamiltonian H by the vector Y. The product is assigned to the variable YDOT. We can describe these calculation in detail as follows: Trilinear Hamiltonian In this example, we first assume that the Hamiltonian H is also time-independent. 
The simulated process is a time evolution of an optical parametric amplifier given by the following trilinear Hamiltonian [30]: The symbols a, b and c stand for the boson annihilation operators corresponding to the pump, signal and idler fields respectively. The variable K represents the value of the coupling constant. For the purpose of the tests, we assumed that K = 1, and i represents an imaginary unit. The initial state of analyzed system is: it is a coherent state for the pump mode (a), and the vacuum states for the signal (b) and the idler (c) modes. We also utilize three expectation operators defined as: where a, b, c still represent the boson annihilation operators with the same or different dimensionality. I a , I b and I c are the identity operators for fields: pump_mode (a), vacuum (b) and idler (c). The collapse operators are denoted as: where γ 0 = γ 1 = 0.1, γ 2 = 0.4. The obtained expectation values of the photons' number, during the experiment with 1000 trajectories, are presented in Fig 6. Fig (7) depicts the simulations' duration for the QuTIP, the QuantumOptics.jl and the QTM packages with the different number of trajectories. We also diversify the number of dimensions for the initial state in our experiments. The calculation for the MPI protocol was conducted with the use of nine computers equipped with Intel Core i7-4790K 4.0 GHZ processing units, and working under the Ubuntu 16.04 LTS operating system. Each processor has four cores, therefore, we were able to run the 32 MPI processes in eight computational nodes. The last processing unit serves as a master node. We can observe a direct profit thanks to dividing tasks between many computational nodes. We should expect a significant speed-up of calculations if the number of computational nodes increases, therefore, we consider this to be a very good result. Especially for high-dimensional systems utilizing parallel computing and many computational nodes, the execution of the tasks directly translates into better performance i.e. shorter computation time with the QTM package. It should be emphasized that in the case of calculating 10.000-20.000 trajectories 32 MPI processes still offer sufficient computing power to increase the size of the problem or the number of trajectories. For the QuTIP package, this computation was run with the use of one computational node and the whole computing power was consumed. The same situation can be observed in the case of the QuantumOptics.jl package. It also utilizes the JIT system of compilation and offers a very efficient usage of a processing unit. However, only calculating many trajectories using the MPI technology causes significant reduction of the calculation time. The simulation of a trilinear Hamiltonian requires using sparse matrices in order to maintain both low memory requirements and high performance. The QTM package offers basic data types to simplify the basic transformations and realize the calculation (also the definitions of operators may be given directly, as it was done for the Hamiltonian). A few selected steps connected with the preparation of the Hamiltonian representations are presented below: . . . destroy_operator(d1); eye_of_matrix(d2); eye_of_matrix(d3); a0 = tensor(d1, d2, d3); . . . H = unity � K � (a0 � dagger(a1) � dagger(a2)-dagger(a0) � a1 � a2); The changes also apply to the function calculating the right side of the ODEs. Luckily, only the function realizing multiplication has to be changed to the one supporting the CSR matrices. 
Therefore, the function calculating the right side of the ODEs is: Referring to the function calculating the right side of the ODEs, it should be emphasized that the access to the current time value (parameter T) is possible, what facilitates using a timedependent Hamiltonian. The QTM package is implemented in the C++ language and the presented examples have to be compiled, therefore, the computation is carried out with a high efficiency. Jaynes-Cummings model The third example refers to the Jaynes-Cummings Model (JCM). This problem may also be simulated in the QTM package. In this case, a single trajectory corresponds to a computation run with the use of the master equation. Let us assume that the system's dimension is N = 40. The dimensions of the operators are given in subscripts. where I 2 stands for the identity operator sized 2 × 2, σ − represents the annihilation operator for Pauli spins. The values Δ and g represent coupling strength between an atom and a cavity. The initial state is expressed as: where we make tensor product between a coherent state w a N and a single qubit in the state |1i. The results obtained during the simulation of the JCM are presented in Fig 8. It should be emphasized that we utilize only one trajectory, and needed operators were represented by sparse matrices. The birth and death of a photon in a cavity The fourth and the last example refers to the convergency of the simulation of photon's birth and death in a cavity, and it is based on paper [29]. Let N be a number of task's dimensions, and N = 5. Then, the Hamiltonian and the collapse operators are expressed as: where d − denotes the destroy operator, H-the Hamiltonian and c 0 , c 1 represent the collapse operators, t-the temperature of the environment. Fig 9 shows that the number of generated trajectories improves the accuracy for a solved problem. Naturally, it also confirms the correctness of the realized QTM implementation. The source code for this example is very similar to the previously presented pieces of the code. We utilize dense matrices because the system's dimension is low, and it does not influence the performance. Conclusions A package which implements the Quantum Trajectories Method approach was presented in this article. The package is dedicated to examining the properties of the quantum open systems. The implementation is based on the MPI standard. The package was prepared in the C++ programming language, but the implementation does not require from the final user any advanced programming techniques. Utilizing the MPI standard allows using the package within systems realizing high-performance computing, but also small systems like personal computers because the communication processes introduced by the MPI package do not virtually increase demand of computing powers. The current version of our package is the first version that has been made public. Naturally, further works and development are planned. We would like to implement a version based on the CUDA/OpenCL technology, which will be able to utilize computing powers of graphics processing units. Recently, a new approach to the QTM was presented [2]. In next versions of the package, this novelty will be considered: the version of used QTM will be matched in accordance with a given problem in order to obtain a shorter time of calculations.
7,805
2018-10-06T00:00:00.000
[ "Computer Science", "Physics" ]
Impact of Terrorism on Economic Growth in a South Asian Country The association between terrorism and economic growth is crucial from a macroeconomic perspective. This analysis is carried out to extract the influence of terrorism on economic growth in Pakistan, incorporating trade, foreign aid and capital, for the period 1981-2016. This study applied ARDL bounds testing to investigate the problem and found a co-integration nexus among the macroeconomic indicators used. The evidence shows that terrorism has an inverse relationship with economic growth that is statistically significant. Terrorism creates insecurity and devastates recreational places. There is a need to resolve the issue of terrorism to enhance the ability of other economic indicators to contribute to the growth of Pakistan. INTRODUCTION Terrorism is the use of threats or violence to pursue religious, political or social goals. It has become a major and tremendously ruinous phenomenon all over the world. In fact, terrorism affects developing countries much more badly than developed nations, because terrorism results in resources not being directed towards safer economic sectors. In developing countries, resources are much more concentrated, so a few sectors are affected badly by terrorism. The Global Terrorism Index report revealed that the economic impact of terrorism has reached three pinnacles since 2000, linked to three mega waves of terrorism. The first significant increase in the economic impact of terrorism took place in 2001, when the September 11 attacks happened in Washington and New York. The second peak occurred in 2007, at the height of the war in Iraq. The third peak started in 2012 and continues, with the economic impact of terrorism reaching a high of US$ 105.6 billion in 2014. The increase over the past four years is mainly due to the rise of terrorism in Syria, Afghanistan and Iraq. In 2015, the economic impact of terrorism on the world reached 89.6 billion US dollars, diminishing by 15% from its 2014 level. In Pakistan, terrorist incidents are conducted in a designed and organized way. Terrorists participate in the black market, aim to produce panic among the population, and wish to attain their caustic and painful objectives. They produce violence, plant bombs in public places, commit murder and abduct people for ransom. The country where most attacks on educational institutes took place from 1990 to 2013 is Pakistan: a total of 753 attacks occurred whose primary purpose was to destroy educational buildings, especially female institutes, rather than to cause human loss (Government of Pakistan). On 16th December 2014, an atrocity happened at the Army Public School Peshawar: children were compelled to watch extreme violence, and in the end 150 people were killed, including 132 children, with 120 injured. This attack is regarded as the most brutal and violent attack on record since 1970. The terrorist war against the tribal areas of Pakistan has adversely affected the growth of Pakistan. It has repercussions in the social, political and economic context and constitutes a danger for the tourism industry. Terrorism affects some key industries, including tourism, air transport and exports, which can affect GDP and growth (Nitsch and Schumacher, 2004) (Table 1). Pakistan is confronting a serious problem of terrorism which affects international tourism, FDI, human capital, capital formation and investment. In the aftermath of 9/11, Pakistan decided to join the war on terrorism.
The nation has endured enormous economic and human loss because of the battle against terrorism. Terrorism also deters foreign direct investment. To eliminate terrorism, more funds have been allocated to fighting it instead of being spent on general development projects. Using the ARDL bounds test, this study provides an empirical inspection to demonstrate the existence of a linkage between economic growth and terrorism in a selected country, namely Pakistan. The analysis uses a dataset covering 1981 to 2016. It is undertaken to analyze the most pressing issue in the country and to find out its consequences. The purpose of conducting the following research is to investigate the impact of terrorist attacks on the growth of Pakistan. It is an effort to provide new dimensions to the prevailing studies which explore the repercussions of terrorism on various social factors. The rest of the paper is organized into literature review, methodology, data, results and discussion, and conclusion. LITERATURE REVIEW The linkage between terrorism and economic growth has been studied widely in the literature. Some studies favor the conventional view that terrorism affects economic growth negatively, while some other studies found no link between them in the case of developed countries. There are many other factors through which terrorism affects economic growth, e.g. trade, FDI, foreign aid, and capital accumulation. This section presents the review of the literature in three subsections. Gaibulloev and Sandler (2009) described the effect of terrorism and conflicts on income per capita growth in Asia. They covered the time period from 1970 to 2004 and employed OLS and one- and two-way fixed effects models. The results found that transnational terrorism has significant growth-limiting effects for developing countries of Asia in the short run. They also found that transnational terrorism reduces growth by crowding in government spending and causing a loss of investment due to the increase in terrorist incidents. Bloomberg et al. (2004) explored the links between the incidence of terrorism and economic circumstances for panel data of 130 countries for 1968-2000 and employed a Markov model. They detected that terrorism has an inverse effect on economic growth. They concluded that terrorism redirects investment expenditure towards government spending. Terrorism has different consequences for different nations; for example, terrorist acts are more frequent but their impact less significant in advanced countries, such as the OECD members, than in less developed countries. Their empirical findings showed that terrorism and economic activity are interdependent. Motahari and Dehghani (2015) ascertained the effects of terrorism and globalization on growth for MENA countries, using panel co-integration and the GMM approach. They established that terrorism shocks have an inverse impact on the attraction of FDI and on trade liberalization. Persitz (2005) examined the effect of Palestinian terror incidents on the Israeli economy relative to OECD countries. He utilized quarterly data from 1980 to 2003 and used a counterfactual methodology. He found that, in the counterfactual with no terror since 1994, GDP would have been 8.6% higher in 2003. Palestinian terror reduced the contribution of the merchandise balance and investment, and increased the share of government expenditures. Gries et al. (2011) ascertained the connection between economic growth and terrorism for 7 western countries. They covered the period from 1950 to 2004.
They employed the Hsiao Granger causality test and bivariate and trivariate causality tests. Their outcomes indicated that economic performance seems to be significant in influencing terrorist threats for some countries, while attacked economies are good at adjusting to the violence of terrorism. Hyder et al. (2015) investigated the effect of terrorism on economic development in Pakistan for 1981-2012. They found an adverse effect of terrorism on growth in Pakistan. Moreover, foreign assistance has a direct link with economic growth. Meierrieks and Gries (2013) inspected the causal linkage between terrorism and economic growth for 160 countries. They covered the data set from the period of 1970 to 2007 and employed the Granger causality test with panel data. They discovered that there exists a causal coalition between terrorism and economic growth. Economic Growth and Terrorism Ocal and Yildirim (2010) examined the consequences of terrorism on economic growth in the case of Turkey for the time period 1987-2001. They employed regression analysis and modeled spatial variation with geographically weighted regression (GWR). They used real per capita GDP, provincial per capita government expenditure, the average level of education and a terrorism indicator. Their results showed that the GWR model remarkably improves the model fit over the traditional global model. Din et al. (2003) scrutinized the association between trade openness and economic growth in the case of Pakistan's economy. They employed time series data covering the period from 1960 to 2001. They used the Engle and Granger co-integration and error correction approach to determine the linkage between trade openness and economic growth. Results indicated that in the short run there exists no causal relationship between trade openness and economic growth, while two-way directional causality was found between openness and economic growth in the long run. Adhikary (2011) inspected the relationship between capital, FDI, economic growth and trade openness in the case of Bangladesh. They utilized time series analysis covering the period from 1986 to 2008. They applied the Johansen-Juselius procedure to detect co-integration within the variables. Results showed a long-term linkage between GDP and the other endogenous variables. FDI has a direct and significant effect on GDP, while trade openness has a diminishing effect on the GDP growth rate. The empirical findings concluded that Bangladesh ought to develop FDI-based policies and ensure a high degree of capital formation in order to improve its overall economic growth rate. Economic Growth and Trade Shahbaz (2012) investigated the effects of trade openness on economic growth in the context of Pakistan. He covered the period from 1971 to 2011. He employed the ARDL bounds test technique to investigate the long-term association. He found co-integration between the series, and also found that trade openness, labor and capital have a direct effect on economic growth and enhance growth in the long run. Khalid (2016) analyzed the effect of trade openness on economic growth in the case of Turkey, using a data set from 1960 to 2014. The ARDL bounds test was applied to investigate the short- and long-period connection between economic growth and trade openness. Results revealed that trade openness increased growth in the short run, while in the long run no relationship exists. Moreover, the results are statistically direct and significant for the long-run association.
The results proposed that economic growth is driven by capital and the trade index, which contribute to sustaining short- and long-term economic growth.

Economic Growth and Human Capital
Benhabib and Spiegel (1994) examined the role of human capital in economic development using cross-country and regional United States data for the period 1965 to 1985. They employed growth accounting regressions derived from a Cobb-Douglas production function and took differences to detect the long-run connection. They found a direct relationship between growth and human capital. Galor and Tsiddon (1997) analyzed the interaction among the distribution of human capital, technological progress, and economic growth. Their model demonstrated how the interaction between an externality of the local home environment and a global technology externality shapes the growth of human capital, the distribution of income, the wage difference between skilled and unskilled labor, and economic growth. They suggested that an economy that prematurely implements a policy to strengthen equality can be held back at an early phase of growth. Cadil et al. (2014) scrutinized the effect of human capital on regional economic growth and unemployment, taking regional NUTS data for the period 2007-2011. They found that human capital cannot generally be said to have a direct effect on growth in some regions of the EU. Moreover, their findings showed that an effect of human capital endowment on growth and employment in the EU NUTS 2 regions was present for the given time period. Pelinescu (2015) used a data set from 1990 to 2000 for various EU countries to detect the effect of education on economic growth, and also covered the period from 2000 to 2012 to determine the relationship between human capital and economic growth. The results found a statistically significant direct relationship between GDP and the innovative capability of human capital, highlighted through patents and the qualifications of employees, while an inverse linkage exists between education spending and GDP per capita. Siddique et al. (2017) examined the effect of terrorism on domestic investment and FDI with evidence from Pakistan. They employed the ARDL co-integration approach to ascertain the long-run relationship and detected a long-term association between terrorism and investment inflows. The empirical results revealed that terrorism has a negative effect on domestic investment and FDI. Moreover, trade and human capital help to raise investment.

Theoretical Framework
Terrorism often leads to the collapse of education infrastructure, worsening school results and lowering enrollment rates, all of which have an inverse impact on economic growth; in this way terrorism destroys a country's human capital and reduces the productivity of labor. It also limits trade and commercial activity, which restrains economic growth. "Falling investor confidence may trigger a generalized drop in asset prices and a flight to quality that increases the borrowing costs for riskier borrowers (IMF, 2001)." Through these channels, terrorism has an adverse impact on economic growth. Capital accumulation, by enhancing the productivity of labor, plays an indispensable role in economic growth. Hence, capital accumulation, by enlarging the scale of production and specialization, increases production and productivity in the economy and thereby promotes economic growth.
Economic Growth = f (Capital) (1)

The effects of terrorism on growth are also discussed in the literature (Bloomberg et al., 2004; Gaibulloev and Sandler, 2009; Gries et al., 2011).

Economic Growth = f (Capital, Terrorism) (2)

Trade serves a crucial part in economic growth. It helps to achieve efficiency in the allocation of resources through exports, which enhances economic growth (see, for instance, Siddique et al., 2018; Siddique and Majeed, 2015).

Economic Growth = f (Capital, Terrorism, Trade) (3)

Pakistan is one of the major receivers of foreign aid, which is donated by many organizations and different countries. The US has donated over $66 billion in non-military aid to Pakistan since 1947, of which about $13 billion flowed after the 9/11 incident (Birdsall and Fukuyama, 2011). Over that time, Pakistan's economy has faced a boom-bust cycle with a decreasing trend since the 1960s (Planning Commission of Pakistan, 2010). Foreign aid is also added to the growth model by Ali et al. (2018).

Economic Growth = f (Capital, Terrorism, Trade, Foreign Aid) (4)

In this model, economic growth (EG) is the dependent variable, while capital (K), trade (T), terrorism (TER) and foreign aid (FA) are the independent variables. All variables are used in natural logarithms (ln), and each α shows an elasticity of economic growth, so the estimable form is

ln EG_t = α_0 + α_1 ln K_t + α_2 ln TER_t + α_3 ln T_t + α_4 ln FA_t + ε_t

Methodology
Various tests are applied to check the reliability of the data for Pakistan over 1981-2016. These consist of the serial correlation LM test, the heteroskedasticity test, the Ramsey RESET test and the Jarque-Bera test. The heteroskedasticity test addresses the problem of non-constant variance of the error term, the Ramsey RESET test indicates whether the functional form is well specified, and the Jarque-Bera test is employed to check the normality of the residuals. To establish the linkage between terrorism and economic growth, the co-integration approach is utilized for the short- and long-run dynamics. Before applying the co-integration approach, a unit root test is employed to ascertain the order of integration. If some variables are stationary in levels and the rest at first difference, then the suitable method is the ARDL approach to co-integration.

Unit root test
Before proceeding to co-integration, the initial step is to determine the integration order of the variables used in the study. Firstly, the ADF test, originated by Dickey and Fuller (1979, 1981), is used to deal with the problem of autocorrelation. Eq. (9) shows the ADF test with intercept and time trend, and Eq. (10) the ADF test without intercept and trend:

ΔY_t = α + βt + γY_{t−1} + Σ_{i=1}^{p} δ_i ΔY_{t−i} + ε_t (9)

ΔY_t = γY_{t−1} + Σ_{i=1}^{p} δ_i ΔY_{t−i} + ε_t (10)

In the Augmented Dickey-Fuller test, the null hypothesis of a unit root is rejected when the test statistic exceeds the critical value, and the alternative is taken, meaning the variable is integrated of order I(0). The null hypothesis is accepted if the statistic does not exceed the critical value, which means the series is not stationary in levels; the first difference of the variable is then taken to make the series stationary.

ARDL co-integration test
The ARDL model was further extended by Pesaran et al. (2001), and it comes with its own co-integration test. The leverage of the ARDL approach is that it does not require all variables to be I(1) or all to be I(0); the variables may be stationary at a mix of orders. For small samples, the ARDL bound test is more advanced and yields stable results, according to Pesaran and Shin (1999). To analyze time series data with different orders of integration, this study employs ARDL bound testing for co-integration as a substitute for the Engle and Granger (1987) co-integration model.
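To make the unit-root pre-test concrete before moving to the ARDL specification, the sketch below applies the ADF test of Eqs. (9) and (10) to each log series and follows the decision rule just described. It is a minimal illustration: the data file pakistan_1981_2016.csv and the column names gdp, trade, aid, capital and terror are hypothetical placeholders, not the authors' actual files.

```python
# Minimal ADF pre-test sketch using statsmodels' adfuller.
# File name and column names below are hypothetical, not the authors' data.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("pakistan_1981_2016.csv", index_col="year")
series = {name: np.log(df[name]) for name in ["gdp", "trade", "aid", "capital", "terror"]}

def adf_order(y, regression="ct"):
    """Return 0 if y is stationary in levels, 1 if its first difference is."""
    p_level = adfuller(y.dropna(), regression=regression)[1]
    if p_level < 0.05:
        return 0  # I(0): reject the unit-root null at the 5 % level
    p_diff = adfuller(y.diff().dropna(), regression=regression)[1]
    return 1 if p_diff < 0.05 else None  # I(1), or undetermined

for name, y in series.items():
    # regression="ct" = intercept and trend (Eq. 9); "n" = neither (Eq. 10)
    print(name, "integration order:", adf_order(y))
```

A mix of orders 0 and 1 across the series, as reported in the paper, is exactly the situation in which the ARDL bounds approach is appropriate.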
This study utilized the ARDL model to inquire into the long-term and short-term linkages among the indicators. The ARDL model for co-integration in the short run may be written as follows:

Δln EG_t = α_0 + Σ α_{1i} Δln EG_{t−i} + Σ α_{2i} Δln K_{t−i} + Σ α_{3i} Δln TER_{t−i} + Σ α_{4i} Δln T_{t−i} + Σ α_{5i} Δln FA_{t−i} + γ_1 ln EG_{t−1} + γ_2 ln K_{t−1} + γ_3 ln TER_{t−1} + γ_4 ln T_{t−1} + γ_5 ln FA_{t−1} + ε_t

Here, Δ is the first-difference operator, ln T_t refers to the natural log of trade, ln FA_t to the natural log of foreign aid, ln TER_t to the natural log of terrorism, and ln K_t to the natural log of capital formation. The coefficients α_1, α_2, …, α_5 show the short-term variation of the model, while the parameters γ_1, …, γ_5 represent the long-term connection. The null hypothesis is

H_0: γ_1 = γ_2 = γ_3 = γ_4 = γ_5 = 0 (no co-integration).

Rejection of the null hypothesis (H_0) supports the presence of co-integration. If co-integration exists in the model, then the long-term connection is estimated from the level equation:

ln EG_t = β_0 + β_1 ln K_t + β_2 ln TER_t + β_3 ln T_t + β_4 ln FA_t + u_t

DATA
This section describes the data. All variables are taken from WDI (2018), except terrorism, which is from the Global Terrorism Database, over the period 1981-2016. In this study, economic growth is the dependent variable, while terrorism, gross fixed capital formation, foreign aid and trade are the independent variables.

Terrorism
Terrorism is measured as the yearly number of terrorist attacks in Pakistan. According to the previous literature, an inverse impact of terrorism on economic growth is expected (Bloomberg et al., 2004; Gaibulloev and Sandler, 2009); past studies have dealt with both domestic and transnational terrorism. The Global Terrorism Database (GTD) is utilized to extract the data on terrorist incidents. The study uses the natural logarithm of terrorism, measured by the number of attacks.

Economic Growth
Following the literature, GDP is used to inquire into the linkage between terrorism and economic growth; many studies have found a negative impact of terrorism on economic growth (Bloomberg et al., 2004). This study uses the natural logarithm of GDP at constant 2010 US dollars as the proxy for economic growth.

Foreign Aid
Official development assistance (ODA), in current US$, includes disbursements of concessional loans and donations made by the official agencies of members of the Development Assistance Committee (DAC), by multilateral institutions, and by non-DAC countries to encourage the economic development and well-being of the countries and territories on the DAC list of ODA recipients.

Trade
Trade plays a substantial and direct part in economic growth and economic prosperity. According to the literature, trade has a direct effect on economic growth, and a strong causal link exists between trade and economic growth (Shahbaz, 2012). According to the Heckscher-Ohlin theory, trade allows domestic capital resources to be used more efficiently through imported capital goods and international inputs, since these inputs would be much more expensive to produce locally (Yanikkaya, 2003). Trade is the aggregate of total imports and exports of goods and services, calculated as a proportion of GDP. International trade is beneficial because it provides quality and variety of products easily accessible worldwide at lower prices, increases investment related to globalization, creates opportunities, provides economies of scale and increases GDP per capita.

Capital
Gross fixed capital formation includes land improvements such as ditches, drains and fences; purchases of plant, tools and machinery; and the construction of roads, railways, and commercial and industrial buildings. Capital is used as an independent variable, taken in natural logarithm. The variables, definitions and sources are summarized in Table 2.

Table 2. Variables, definitions and sources
- Economic growth (EG): GDP, constant 2010 US$ (WDI)
- Trade (T): sum of exports and imports of goods and services, % of GDP (WDI)
- Foreign aid (FA): net official development assistance and official aid received, current US$ (WDI)
- Capital (K): gross fixed capital formation, % of GDP (WDI)
- Terrorism (TER): number of terrorist attacks (Global Terrorism Database)
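Before turning to the results, the bounds-testing procedure of the methodology section can be illustrated with a small sketch that estimates the unrestricted error-correction regression above by OLS and jointly tests the lagged-level (γ) coefficients. This is a hedged, assumption-laden illustration: it fixes one lag everywhere for brevity (the paper's lag orders would be chosen by information criteria), uses the hypothetical column names lnEG, lnK, lnTER, lnT and lnFA, and leaves the comparison against the Pesaran et al. (2001) critical-value bounds to the reader.

```python
# Minimal ARDL bounds F-test sketch: OLS on the error-correction form,
# then a joint F-test that all lagged-level coefficients are zero.
import pandas as pd
import statsmodels.formula.api as smf

LEVELS = ["lnEG", "lnK", "lnTER", "lnT", "lnFA"]  # hypothetical column names

def bounds_f_test(df: pd.DataFrame):
    d = pd.DataFrame({"dEG": df["lnEG"].diff()})
    for v in LEVELS:
        d["L" + v] = df[v].shift(1)         # lagged levels: long-run (gamma) terms
        d["d" + v] = df[v].diff().shift(1)  # lagged differences: short-run (alpha) terms
    d = d.dropna()
    rhs = " + ".join(["L" + v for v in LEVELS] + ["d" + v for v in LEVELS])
    res = smf.ols("dEG ~ " + rhs, data=d).fit()
    # Joint null of the bounds test: gamma_1 = ... = gamma_5 = 0.
    constraints = ", ".join("L" + v + " = 0" for v in LEVELS)
    return res.f_test(constraints)

# The resulting F-statistic is compared with the Pesaran et al. (2001)
# bounds: a value above the upper I(1) bound (the paper reports F = 4.4123)
# rejects the null of no co-integration.
```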
RESULTS AND DISCUSSION
This section contains the empirical estimates of the model, which detect the linkage between terrorism and economic growth for Pakistan over 1981-2016.

Results of ADF Test
The ADF test is applied not only with an intercept but also with trend and intercept, and without trend and intercept, for all variables. The results of the Augmented Dickey-Fuller test are depicted in Table 3. With intercept only, the unit root test results show that three out of five variables are stationary at first difference, while two variables, trade and economic growth, are stationary in levels. When the test is applied with trend and intercept, two variables out of five are integrated of order one, while the other three are stationary in levels. In the same way, when the ADF test is applied without intercept and trend, one variable is stationary in levels while the other four are stationary at first difference. These results pointed towards the ARDL model for finding the short- and long-term linkages between the dependent and independent variables; the methodology was selected on the basis of the ADF test results, as some variables are stationary in levels and some are integrated at first difference.

Results of ARDL Bounds F-test
According to the ARDL bound test results, the study rejects the null hypothesis of no co-integration and accepts the alternative hypothesis. As shown in Table 4, the F-statistic of 4.4123 is higher than the critical values, which indicates the existence of long-run co-integration among economic growth, trade, terrorism, capital and foreign aid.

Results of ARDL Co-integration
Table 5 presents the outcomes of the ARDL co-integration estimates for the dependent variable in the short and long run. The results reveal that in the short run terrorism and capital have an inverse association with economic growth, while foreign aid and trade have a direct association with economic growth. In the long run, the results indicate that the coefficient of trade, taken as a percentage of GDP, is directly linked with economic growth and is statistically significant: a one percent increase in trade leads to a 0.1822 % increase in economic growth in the long run. Shahbaz (2012) found a direct linkage between trade and economic growth, while Khalid (2016) found a direct relationship in the short run. The coefficient of foreign aid also shows a direct and statistically significant association with economic growth. The coefficient of terrorism indicates an inverse and statistically significant effect on economic growth: a one percent increase in terrorism reduces economic growth by 0.3053 % in the long run. The terrorism results accord with Bloomberg et al. (2004), Gaibulloev and Sandler (2009), Farooq and Khan (2014) and Hyder et al. (2015). The coefficient of capital has a positive and statistically significant association with economic growth: a one percent increase in capital raises economic growth by 5.43 %.

CONCLUSION
The study was conducted with the purpose of examining the nexus between terrorism and economic growth in the case of Pakistan. It is an attempt to analyze the most pressing recent issue in the country, trace the consequences of terrorism, and give new dimensions to the prevailing studies that explore the repercussions of terrorism on various social factors. Data have been collected from the World Development Indicators and the Global Terrorism Database for the period 1981 to 2016.
The findings indicate that terrorism has affected economic growth negatively. Terrorism is fundamentally a political-economic issue: directionless government policies and the international political scene over the past four decades are the causes of this problem. Terrorist attacks also destroy historic sites, and in Pakistan they are concentrated in areas of tourist attraction such as Swat and the Northern Areas. The government should take concrete measures to reduce terrorism and the cost of terrorism in the country. This analysis shows that terrorism has an inverse impact on economic growth; the results therefore direct policy makers to protect resources and allocate them better so as to shrink the economic losses from violence. The government should also take steps regarding the migration of people who are living illegally in the country, and measures need to be taken to block illegal money entering the country from overseas.
5,488
2020-07-10T00:00:00.000
[ "Economics", "Political Science" ]
Optimization Methods Applied to Motion Planning of Unmanned Aerial Vehicles: A Review

A flying robot is a system that can take off and touch down to execute particular tasks. Nowadays, these flying robots are capable of flying without human control and make decisions according to the situation with the help of onboard sensors and controllers. Among flying robots, Unmanned Aerial Vehicles (UAVs) are highly attractive and applicable for military and civilian purposes. These applications require motion planning of UAVs, along with collision avoidance protocols, to obtain better robustness and a faster convergence rate toward the target. Further, optimization algorithms improve the performance of the system and minimize the convergence error. In this survey, diverse scholarly articles were gathered to highlight motion planning for UAVs that uses bio-inspired algorithms. This study will assist researchers in understanding the latest work done in the motion planning of UAVs through various optimization techniques. Moreover, this review presents the contributions and limitations of every article to show the effectiveness of the proposed work.

Introduction
Flourishing high-tech innovations are making aerial robots an integral part of our daily lives. There are extensive research efforts and analyses on flying robots that possess the mobility given by flight [1,2]. Among these, Unmanned Aerial Vehicles (UAVs) are the most widely used flying robots due to their distinguishing advantages over others, i.e., they are budget-friendly, small-sized, light in weight and portable. Moreover, the state-of-the-art characteristics of UAVs include position control, sensor deployment, auto-leveling, structure monitoring, etc. [3-5]. UAVs also have a diverse array of applications, whether in the military or civilian sectors [6]. There are two primary models of UAVs: one is the fixed-wing UAV, and the other is the multi-rotor UAV. The performance demands on UAVs are higher in complex tasks and uncertain environments. Usually, a single UAV is small in size, which limits its capacity for sensing, communication and computation [7]. Thus, cooperative UAVs working together offer more benefits and potential than a single UAV [8]. A few of these are cost and operation time reduction, fewer mission failures, and higher flexibility, survivability, configurability and multi-tasking capability [9].

Background: UAV technology is one of the most rapidly evolving technologies, advancing from the 18th century until now. At first, in 1849, the Austrians employed unmanned balloons filled with bombs, descendants of the French Montgolfier brothers' invention [10]. The development of UAVs with cameras occurred in 1860, which helped with surveillance [11]. In 1917, Charles F. Kettering invented an aerial torpedo, and such unmanned machines came to be nicknamed "bugs". The Royal Navy tested a radio-controlled pilotless aircraft during the 1930s [12]. The 1940s were marked by further operational use.

Challenges in Unmanned Aerial Vehicles
There are extensive investigations regarding UAVs, but they still face various challenges. The prime challenges that researchers face include the selection of UAVs with appropriate path planning suitable for the mission [18], followed by forming efficient motion control and achieving optimal path planning. Moreover, proper techniques for navigation and communication must be employed so that obstacle avoidance and collision avoidance are possible. Along with these, certification, regulation and human-machine interface issues are of much importance.
Below are some of the challenges that require serious consideration:

Navigation and Guidance
UAVs have to track their mobility by measuring distances, making maps and sensing their physical surroundings. To determine the positions of aerial robots, it is essential to develop a navigation system that is automatic and does not require human intervention [19]. These robots fly at high altitudes and under different environments and hazards; therefore, the safety and reliability of the system are major challenges.

Obstacle Detection and Avoidance
The navigation of UAVs is much influenced by obstacles and collisions. Providing UAVs with an ideal environment is not a viable option, so obstacles that come into the path must be avoided. Moreover, multiple aerial robots are more beneficial and efficient than a single flying robot, but working in groups can result in collisions. UAVs must be furnished with algorithms or techniques that can handle these issues [20].

Shape and Size
Nowadays, UAVs are widely used for different purposes. They are required to fly at different levels with different ranges; some have to stay aloft for a longer period to accomplish their missions, some use runways for take-off and landing, and some have to pass through narrow areas. To solve all these issues, it is necessary to consider the appropriate shapes and sizes of UAVs according to the missions [21]. Figure 1 shows some of these challenges faced by UAVs [22].

Formation Control Issues
There are numerous studies on motion control, but the field still has gaps and requires consideration and further work. For example, there is a need to properly tackle distributed levels and their effects. Similarly, machine learning and reinforcement learning require a long time for online learning and huge data sets for offline training; therefore, the integration of artificial intelligence (AI) techniques into control protocols is essential. One more challenge in motion control protocols is robustness, which is highly influenced by environmental disturbances [23].

Path Planning Issues
Path planning is the task of obtaining a path for UAVs from the starting point to the goal point in such a way that they carry out their tasks efficiently. UAVs require optimal paths that satisfy their performance constraints and ensure collision avoidance. Such optimal and dynamic paths consume less time and energy. Path planning is a global optimization problem that requires various technologies and algorithms to be integrated [24].
Among all the challenges, the most crucial are path planning and motion control for UAVs. These require consideration so that the UAV can perform well during tasks under any environmental conditions. Several research centers, academies and industries are analyzing the aforementioned challenges and trying to overcome these issues by developing improved strategies. Section 3 reviews the development of the various protocols and techniques used for the above challenges.

Recent Developments in UAVs
UAV technology is expanding due to technological innovations. UAVs are becoming more affordable and easier to use, which enhances their application in diverse areas [6]. This paper reviews the strength and development of navigation, communication, shape and size, collision avoidance, motion control methods and path planning techniques, and deliberates on how they provide solutions to challenging problems while making a considerable impact.

Developments in Navigation and Guidance of UAVs
Navigation technology is quite significant for UAV flight control. The various developed navigation technologies possess different features, such as satellite, geometric, integrated, Doppler and inertial navigation, and different purposes require different navigation technologies. The main navigation systems for UAVs are the tactical or medium-range navigation system and the high-altitude long-endurance navigation system [25,26]. Developments in navigation can be summarized as follows:

A. High-performance Navigation with Data Fusion: Navigation uses a Kalman filter; China introduced a data fusion mechanism using this filtering technology. This data fusion is improved by using AI technology. It helps to determine the flight status and guarantees the normal flight of UAVs.

B. New Inertial Navigation Systems: Many researchers have contributed to developing optical fiber inertial navigation and laser inertial navigation, and further improvement was required by the industry. The widely used silicon micro-resonant accelerometer helps in UAV navigation; it reduces weight and volume, consumes less energy and refines flight flexibility.

C. Intelligent Navigation Ability: An emergency navigation system utilizes various adaptive technologies along with mission characteristics and modes.
Moreover, information technology is applied to boost UAV technology and upgrade the navigation system.

Developments in Shape and Size of UAVs
Earlier, UAVs were applicable for military purposes only, but now they are used for various tasks. This is due to the rapid progress in developing UAVs with a wide range of shapes and sizes [27]. Different UAVs are utilized for different purposes. According to physical type, there are fixed-wing and multi-rotor UAVs.

Fixed-Wing UAVs: These UAVs possess a single long wing on either side of the body and require a runway or a broad, flat area. They consume less battery and can therefore stay in the air for many hours. They are widely used for long-distance purposes, especially military surveillance.

Multi-Rotor UAVs: These UAVs are built with multiple propellers and rotors and do not require a runway, as they fly and land vertically. With more rotors, the position of the UAV can be controlled in a better way. Quad-rotors are mostly used for small and regular-sized UAVs.

Similarly, UAVs are classified by size into micro or mini-UAVs, tactical UAVs, strategic UAVs and special-task UAVs.

Micro and Mini-UAVs: Many missions require small UAVs, such as surveillance inside buildings, Nuclear, Biological and Chemical (NBC) sampling, the agricultural sector and broadcast industries. Micro and mini-UAVs were developed for these purposes. The take-off weight of a micro-UAV is 0.1 kg, and that of a mini-UAV is less than 30 kg. Both fly below 300 m with less than 2 h of endurance, and the communication range is up to 10 km.

Tactical UAVs: Missions such as search and rescue operations, mine detection, communication relays and NBC sampling use tactical UAVs. They can have a take-off weight of up to 1500 kg, can fly up to 8000 m with an endurance of up to 48 h, and have a communication range of around 10-500 km.

Strategic UAVs: For airport security, communication relays, intercept vehicles and RSTA, strategic UAVs are highly suitable. They can have a maximum take-off weight of around 12,500 kg, can fly up to 20,000 m with 48 h of endurance, and have a communication range of more than 2000 km.

Developments in Collision Avoidance of UAVs
A collision usually occurs between a UAV and a neighboring UAV or an obstacle whenever the distance between them becomes too small. A collision avoidance system (CAS) makes sure that no collision takes place with any stationary or moving obstacle [28]. A CAS first requires a perception phase, which is then followed by an action phase.

Perception Phase: The CAS detects an obstacle in this phase while utilizing various active or passive sensors according to their functional principle. Active sensors possess their own sources of wave emission or light transmission along with a receiver or detector. The most-used active sensors include radars, sonars and LiDARs. All of these use minimal processing power, give a quick response, are less affected by weather, scan large areas in minimal time, and can return various parameters of the obstacles effectively. Passive sensors, in contrast, are only capable of reading energy emitted from another source, such as the sun. Widely used passive sensors are visual or optical cameras and infrared (IR) or thermal cameras. The image formed by a visual camera requires visible light, whereas a thermal camera requires IR light.

Action Phase: This phase utilizes four prime strategies for collision avoidance: the geometric, force-field, optimized, and sense-and-avoid methods.
The geometric approach utilizes information about the location and velocity of the UAV along with those of its obstacles or neighbors. This is performed by trajectory simulation, in which nodes are reformed for collision avoidance. The force-field approach manipulates attractive and repulsive forces to avoid collisions. In the optimized method, already-known parameters of the obstacles are utilized for route optimization. In the sense-and-avoid technique, runtime decisions are made for obstacle avoidance. The developments in CAS help in simple tasks by warning the vehicle operator, and in complex tasks by partially or completely controlling the system for collision avoidance.

Developments in Formation Control Protocols of UAVs
Formation control aims to generate control signals that pilot UAVs to form a specific shape. Along with the architecture of motion control, the strategies developed for achieving it are of much importance [29].

Formation Control Design: Motion control of UAVs requires a flow of information within the team; therefore, it uses communication architectures. A single UAV may lack global information for a whole operation, and because of its restricted capabilities to compute and communicate, a centralized architecture is rarely used. A decentralized architecture is preferred for multi-UAV systems and is designed using the consensus algorithm technique, which is based on local interactions with neighbors while maintaining a certain distance.

Formation Control Strategies: Various developed control approaches, each with certain benefits and limitations, are discussed here to aid researchers. They are:

i. Leader-Follower Strategy: As is obvious from its title, this approach assigns one UAV as the leader and the remaining UAVs as followers in a group. The mission information remains with the leader only, while the followers chase their leader at pre-designed spacings. The major benefit of this strategy is that it can be implemented simply and easily. Due to leader dependency, this strategy suffers from single-point failures; this limitation can be compensated by assigning multiple leaders and virtual leaders.

ii. Behavior-based Strategy: This approach produces control signals that consider several mission essentials by adding various vector functions. Its greatest merit is that it is highly adaptable to unknown environments. Its demerit is the difficulty of modeling it mathematically, which makes system stability hard to analyze.

iii. Virtual Structure Strategy: This approach considers a rigid structure for the desired shape of the group of UAVs. To achieve the desired shape, each UAV flies towards its corresponding virtual node. The abilities to maintain the formation and tolerate faults are its greatest advantages. The approach fails when the detection of a UAV in the formation is faulty, and compensating for the faulty UAV requires reconfiguration of the formation shape. This approach also calls for strong computing ability, which is a disadvantage.
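As a toy illustration of two of the ideas above, the following sketch combines the leader-follower strategy with force-field (artificial potential field) collision avoidance: each follower is attracted to a fixed slot behind the leader and repelled from nearby obstacles. All gains, offsets and distances here are illustrative choices, not values taken from the surveyed papers.

```python
# Minimal 2D leader-follower sketch with potential-field obstacle repulsion.
# Gains (k_att, k_rep), safe distance and geometry are illustrative only.
import numpy as np

def follower_velocity(p_follower, p_leader, offset, obstacles,
                      k_att=1.0, k_rep=2.0, d_safe=3.0):
    """Velocity command: attraction to a slot at `offset` from the leader,
    plus repulsion from any obstacle closer than d_safe."""
    v = k_att * ((p_leader + offset) - p_follower)   # attractive term
    for obs in obstacles:
        diff = p_follower - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d_safe:
            # repulsive term grows sharply as the obstacle gets closer
            v += k_rep * (1.0 / d - 1.0 / d_safe) * diff / d**3
    return v

# One integration step for two followers holding a wedge behind the leader.
leader = np.array([0.0, 0.0])
followers = [np.array([-4.0, -3.0]), np.array([-4.0, 3.0])]
offsets = [np.array([-2.0, -2.0]), np.array([-2.0, 2.0])]
obstacles = [np.array([-3.0, 0.0])]
dt = 0.1
followers = [p + dt * follower_velocity(p, leader, o, obstacles)
             for p, o in zip(followers, offsets)]
```

Because only the leader carries the mission plan and each follower reacts to locally sensed quantities, the sketch also shows why the strategy is simple to implement yet vulnerable to a single-point leader failure.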
Developments in Path Planning Techniques of UAVs
Path planning aims to design a flight path towards a target with fewer chances of the UAV being destroyed while facing limitations. Extensive research has proposed different methods that overcome the path planning complexity of UAVs. To design algorithms for path planning, certain parameters, such as obstacles, the environment and constraints, require careful selection [30]. The approaches employed for path planning are classified based on their features and methodology.

Motion Planning
In robotics, motion planning refers to the act of decomposing a specified mobility goal into distinct motions. It is used to satisfy movement constraints while also potentially optimizing some components of the motion. Motion planning is the challenge of planning for a vehicle that operates in areas with a high number of objects, performing actions to move through the environment as well as to modify the configuration of the objects [31]. Even though the motion planning problem arises in continuous C-space, the calculation is discrete. As a result, we need a means to "discretize" the problem if we want an algorithmic solution. Consequently, there are mainly two types of planning: combinatorial planning and sampling-based planning.

Combinatorial Motion Planning
Combinatorial motion planning is a type of motion planning that involves more than one approach to achieve the task, as shown in Figure 2. Although combinatorial motion planning discovers pathways through the continuous configuration space, by using these strategies researchers obtain better results. The effective combination of algorithms is commonly based on bio-inspired algorithms paired with different approaches.

Sampling-Based Motion Planning
Random selection is used in sampling-based motion planning to build a graph or tree (path) in C-space on which queries (start/goal configurations) can be solved, as shown in Figure 3. To increase planner performance, a variety of general-purpose strategies are examined. Over the past years, sampling-based path planning algorithms, such as Probabilistic Road Maps (PRM) and Rapidly-exploring Random Trees (RRT), have been demonstrated to perform effectively in practice and to provide theoretical assurances such as probabilistic completeness.
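A minimal RRT sketch makes the sampling-based idea concrete: random configurations are drawn, the tree is extended a fixed step toward each sample, and colliding extensions are discarded. The 2D square world, circular obstacles and parameter values below are illustrative assumptions, not a planner from the surveyed literature.

```python
# Minimal RRT in a 2D square world with circular obstacles.
# World bounds, step size and obstacle layout are illustrative.
import random
import math

def rrt(start, goal, obstacles, step=1.0, iters=5000, bound=50.0):
    tree = {start: None}                      # node -> parent
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, bound), random.uniform(0, bound))
        near = min(tree, key=lambda n: math.dist(n, sample))  # nearest node
        theta = math.atan2(sample[1] - near[1], sample[0] - near[0])
        new = (near[0] + step * math.cos(theta), near[1] + step * math.sin(theta))
        if any(math.dist(new, c) <= r for c, r in obstacles):
            continue                          # sampled motion collides; discard
        tree[new] = near
        if math.dist(new, goal) < step:       # goal reached: walk back to start
            path, node = [goal], new
            while node is not None:
                path.append(node)
                node = tree[node]
            return path[::-1]
    return None  # probabilistic completeness: success likely, not guaranteed

path = rrt((1.0, 1.0), (45.0, 45.0), obstacles=[((25.0, 25.0), 8.0)])
```

The 10 % goal bias is a common heuristic that speeds convergence; raw RRT paths are typically jagged, which is one reason the optimization methods reviewed next are applied on top of such planners.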
Optimization Approach in Motion Planning
The world has a desire for optimization concerning every natural phenomenon and its aspects. Therefore, many researchers have developed optimization methods for multi-dimensional problems in various areas. These algorithms provide optimum solutions to the motion planning problems of UAVs, such as reducing production costs, convergence time and energy consumption, and enhancing strength, efficiency and reliability. The optimization algorithms are classified into biological algorithms, physical algorithms and geographical algorithms, as presented in Figure 4 [34,35]. Biological algorithms have further classifications, namely swarm-based and evolution-based algorithms.

Biological Algorithms
Bionic researchers, working from natural patterns, developed nature-based algorithms and termed them biological algorithms. These stem from the correspondence between biological evolution and activities. The prime benefit of biological algorithms is their strength in tackling static as well as dynamic threats and ensuring offline operation. Without classifying these algorithms into further groups, we can label them as memetic algorithms; alternatively, we can classify them into two categories, evolution-based algorithms and swarm-based algorithms [37].

A. Evolution-Based Algorithms
An evolution-based algorithm provides an optimal path for UAVs with consideration of three aspects: travel distance, cost incurred, and the path reliability cost of tracking that path. These evolutionary algorithms choose practical and achievable solutions randomly as the first generation and consider the parameters later to determine which randomly selected feasible solutions are appropriate. For determining curved paths with the essential aspects in 3D terrain, an offline path planner with an evolutionary algorithm is required [38]. By taking aspects into account such as the beeline to the destination, min-max distances related to targets, and tracks free of topographical obstacles, one can represent the flying path as a B-spline curve. Some examples of these algorithms include the Genetic Algorithm (GA), Evolutionary Programming (EP), Evolutionary Strategy (ES), the Differential Evolution algorithm (DE) and the Harmony Search algorithm (HS). GA finds near-optimal results in the search space using three steps: selection, crossover and mutation. Besides its benefits, it sometimes suffers from slow or premature convergence and loses optimal results; moreover, it is not applied to real-time data. In 1990, Fogel introduced a technique called EP, which reaches optimal results after many iterations.
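Before continuing with the remaining evolutionary algorithms, a small sketch of the three GA steps named above (selection, crossover, mutation) applied to waypoint-based path planning may help. The fitness function, obstacle, population size and rates are illustrative assumptions, not settings reported in the surveyed works.

```python
# Minimal GA sketch for waypoint path planning: selection, crossover, mutation.
# Fitness penalizes path length plus proximity to one illustrative obstacle.
import random
import math

START, GOAL, OBST, N_WP = (0.0, 0.0), (50.0, 50.0), (25.0, 25.0), 8

def fitness(wps):
    pts = [START] + wps + [GOAL]
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    penalty = sum(100.0 for p in pts if math.dist(p, OBST) < 8.0)
    return length + penalty  # lower is better

def crossover(a, b):
    cut = random.randrange(1, N_WP)       # single-point crossover
    return a[:cut] + b[cut:]

def mutate(wps, rate=0.2):
    return [(x + random.gauss(0, 2), y + random.gauss(0, 2))
            if random.random() < rate else (x, y) for x, y in wps]

pop = [[(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(N_WP)]
       for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness)
    elite = pop[:20]                      # selection: keep the fittest third
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(40)]
best = min(pop, key=fitness)
```

The premature-convergence weakness mentioned above shows up here directly: if the elite pool loses diversity early, crossover keeps recombining near-identical paths and only mutation can escape.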
Similarly, another evolutionary algorithm is ES, which uses specified principles in optimization problems. DE employs real coding instead of binary coding; it refines the final path while reducing the computational cost. The evolutionary algorithm that mimics a musician's improvisation process is the HS algorithm. It shows promising results in optimization problems and has been further improved in various versions.

B. Swarm-Based Algorithms
Nature-based and population-based algorithms evolved into swarm-based algorithms [39]. The swarm represents the combined behavior of all the agents. Agents in a swarm have limited individual capabilities, but working together they achieve the given tasks while remaining at distances from one another; as a result, fast, low-cost and optimal solutions are obtained. AIS is an intelligent swarm-based algorithm modeled on the natural principles of the human immune system. It has the immune system's characteristics of memory and learning, which it utilizes for solving problems, and it gives adequate trajectories in path planning with little computation. The development of PSO is based on the mobility theory of an insect crowd. In this fact-finding approach, every single particle in the crowd recognizes the points given by the last swarm and produces a velocity vector towards the target point. The key benefit of this algorithm is that it is capable of obtaining optimal path planning in 3D, whereas its disadvantages are premature convergence and high time complexity. Passino introduced an algorithm based on the foraging behavior of the Escherichia coli bacteria that live in human intestines; he labeled this intelligent algorithm BFO. It provides rapid convergence and a global search. The CS algorithm replaces the average solutions and applies the solution that is potentially better. The ABC algorithm provides solutions to various constrained optimization problems. The ACO algorithm is based on the pheromone-depositing behavior of ants during food search and has proved to be a meta-heuristic technique for deriving the shortest path while dealing with continuous and multi-objective path planning issues. The CRO algorithm works efficiently, with many advantages, on difficult optimization problems. The TLBO algorithm requires minimal computational memory and can be employed easily. FA works efficiently for multimodal optimization problems and finds the best locations for UAVs with less energy consumption. SFLA depends on clusters of frogs that are looking for food: it gathers the best frog, which can give a local optimum, and evolves the frogs with inaccurate positions, iterating until an optimal path with better convergence is accomplished. PIO works via information sharing and striving among all pigeons to quickly achieve the global optimal solution.
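The PSO velocity-vector mechanics described above can be summarized in a short sketch: each particle keeps a velocity that is steered toward its own personal best and the swarm's global best. The inertia and acceleration coefficients are common textbook defaults, and the flat waypoint encoding of the path is an illustrative assumption.

```python
# Minimal PSO sketch: velocities pulled toward personal and global bests.
# Coefficients (w, c1, c2) are common defaults, not values from the survey.
import numpy as np

def pso(cost, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=50.0):
    x = np.random.uniform(0, bound, (n, dim))       # particle positions
    v = np.zeros((n, dim))                          # particle velocities
    pbest, pbest_cost = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_cost.argmin()].copy()           # global best position
    for _ in range(iters):
        r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, bound)
        costs = np.apply_along_axis(cost, 1, x)
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        g = pbest[pbest_cost.argmin()].copy()
    return g

# Example: a 3-waypoint 2D path encoded as a flat 6-vector.
def path_cost(flat):
    pts = [np.zeros(2)] + list(flat.reshape(-1, 2)) + [np.full(2, 50.0)]
    return sum(np.linalg.norm(b - a) for a, b in zip(pts, pts[1:]))

best = pso(path_cost, dim=6)
```

The pull toward a single global best is what makes plain PSO fast but prone to the premature convergence noted above; many of the hybrid variants reviewed later (e.g., chaotic or spherical-vector PSO) modify exactly this term.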
C. Physical Algorithms
Heuristic algorithms that imitate the physical laws and processes of nature are known as physical algorithms. These algorithms copy the physical conduct and characteristics of matter [40] and are applicable to non-linear, high-dimensional, multimodal and complex optimization problems. There is very little research available on physical algorithms. They are categorized as Simulated Annealing (SA), the Gravitational Search algorithm (GSA), the Chaotic Optimization algorithm (COA), the Intelligent Water Drops algorithm (IWD) and the Magnetic Optimization algorithm (MOA). SA is named after the annealing technique in metallurgy. It is employed for more complex computational optimization problems and gives an approximate global optimum within a fixed time. GSA is a newly introduced algorithm that mimics the laws of motion and gravitation; it is applied to optimization problems with various functions. COA is an easily implemented and powerful mechanism that can escape convergence to a local optimum within a short time. The IWD algorithm is based on how natural rivers find the best paths among many probable paths to their ultimate destination. MOA, a newly emerging algorithm, is derived from the basic principles of magnetism; the dual function of this algorithm can balance the disadvantages against the advantages in optimization problems.

D. Geographical Algorithms
The meta-heuristic algorithms that give random outcomes in a geographical search space are labeled geographical algorithms [41]. Some of the geographical algorithms are the Tabu Search algorithm (TS) and the Imperialistic Competition algorithm (ICA). The TS algorithm determines an optimal solution among various feasible solutions; its memory can recall recent solutions and guide the search away from retracing them. It is employed for optimization problems in various areas. Another geographical algorithm for finding the global best solution in optimization is ICA, which imitates socio-political imperialist competition: it involves imperialistic competition among empires along with the assimilation and revolution of colonies, and so on. Due to its robust searching ability, it provides many benefits in optimization problems.

Among all the aforementioned algorithms, most are swarm-based. These population-based algorithms are robust at obtaining better global solutions via their cooperative and self-adaptive abilities, and they are employed for solving challenging UAV issues. This review paper gives details on and a comparison of the aforesaid algorithms used for the motion control and path planning of UAVs.

Related Review
To succeed, most motion planning approaches necessitate the use of appropriate optimization algorithms. These strategies can be applied to a single UAV as well as to a group or swarm of UAVs. When multi-UAV missions are viable for civilian objectives, a nature-inspired algorithm is required for control and optimization. Table 1 presents a detailed overview of the manuscripts related to the motion planning problems of UAVs; the review also guides scholars on the optimization techniques applied to single or multiple UAVs.

Table 1. Overview of motion planning studies (reference; title; algorithm; single/multiple UAVs; contributions; limitations)
[73] MMACO combined with DE; Multiple. Limitations: needs NP problem exploration; in multi-colonies, one colony follows the same path as basic ACO.
[74] "Multi-UAV coordination control by chaotic grey wolf optimization-based distributed MPC with event-triggered strategy"; Chaotic GWO; Multiple. Contributions: gives efficiency in computations; enhances the global search mobility and convergence speed. Limitations: stability conditions are not analyzed; has limited communication.
[75] "Collective Motion and Self-Organization of a Swarm of UAVs: A Cluster-Based Architecture"; PSO; Multiple. Contributions: gives fast connectivity and convergence; assures stability with fewer turns. Limitations: not implemented on hardware; focused on a specific scenario.
[76] "A Cluster-Based Hierarchical-Approach for the Path Planning of Swarm"; MMACO; Multiple. Contributions: gives superior performance; gives an optimal path with better convergence. Limitations: variation in the optimization costs in colonies 2 and 3 is neglected.
[77] "Cooperative Path Planning of Multiple UAVs by using Max-Min Ant Colony Optimization along with Cauchy Mutant Operator" MMACO CM Multiple Finds the optimal routes with the shortest distance. Avoids collision. Enhances the system complexity. [78] "A multi-strategy pigeon-inspired optimization approach to active disturbance rejection control parameters tuning for vertical take-off and landing fixed-wing UAV" MPIO Single Proves to be superior among all algorithms to solve multi-dimensional searching issues. It converges faster and exploits in a better way. Altitude fluctuation is still present. Immature result after 2nd iteration. [79] "Landing route planning method for micro drones based on hybrid optimization algorithm" DO Multiple Shows stronger convergence both locally and globally. Yields better outcomes than both single algorithms. Speeds up convergence after orthogonal learning. [80] "Energy Efficient Neuro-Fuzzy Cluster-based Topology Construction with Metaheuristic Route Planning Algorithm for Unmanned Aerial Vehicles" QALO Single Gives more energy-efficient results, more rounds, higher throughput, and lower average delay results. Selects optimal routes. Does not manage resources optimally. [81] "Coordinated path following control of fixed-wing unmanned aerial vehicles in wind" CPFC Single Attains leaderless synchronization. Satisfies UAVs' constraints and upper bound path following errors. Requires better simulation of the external environment and the wireless communications. [82] "A diversified group teaching optimization algorithm with segment-based fitness strategy for unmanned aerial vehicle route planning" GTO Single Gives faster convergence. Handles all the complex constrained problems. Parameters need automatic adjustments. [83] "Coverage path planning for multiple unmanned aerial vehicles in maritime search and rescue operations" RSH Multiple Gives optimal results in a shorter time. Robust to strong wind. Does not provide exact solutions for larger instances. [84] "Hybrid FWPS cooperation algorithm based unmanned aerial vehicle constrained path planning" FWPSALC Single Produces high and superior quality solutions. Handles constraints in a better way. Gives poor performance for fewer number of particles or a large number of fireworks. [85] "Safety-enhanced UAV path planning with spherical vector-based particle swarm optimization" PSO Single Reduces the cost function. Gives the shortest and smoothest paths with fast convergence. Faces premature convergence. In 2019, Yang et al. [42] proposed a spatial refined voting mechanism and PSO algorithm that gave a 4D-space path planning that was collision-free and obstacle-free for multi-UAVs. Duan et al. [43] used a dynamic discrete pigeon-inspired optimization technique for search attack missions by using distributed path generation and central tasks mission. Jain et al. [44] suggested MVO and Munkres algorithms for the path planning and coordination of multiples, it compared the results with the results of BBO and GSO and concluded that the proposed algorithm is highly efficient in reducing execution time and finding optimized path costs. Pérez-Carabaza et al. [45] worked on optimizing trajectories for UAVs that used less time in searching for targets, avoided collisions, and maintained communication. Then, there is a comparison of this MMAS-based algorithm with GA and CEO, and it yielded better results than they yield. Shao et al. [46] used comprehensively modified PSO for the path planning of UAVs. 
This method gave a faster and improved convergence rate and solution optimality when compared with SPSO and MGA. Mah et al. [47] suggested a joint optimization method that gave the best secrecy performance to combat eavesdropping on the flight path and transmit power, and gave results superior to the max-SNR method. Bo Li et al. [48] designed an improved ACO algorithm based on the Metropolis criterion, predicted three trajectory-correction schemes for collision avoidance protocols, and used the inscribed circle method for smoothness. Geovanni et al. [49] proposed an optimized path planning method using a meta-heuristic in a continuous 3D environment; the study also minimizes the path length in the presence of static obstacles by manipulating control inputs. Ning et al. [50] solved the task-planning issue of multi-target and multi-aircraft scenarios by proposing a two-layer mission-planning model depending on the annealing and TS algorithms. Lihua et al. [51] gave an online priority configuration algorithm for UAV swarm flight in an environment with compound obstacles, and the simulation results showed superiority in the cost of energy and time. In 2020, Xu et al. [52] solved the LQG problem of quad-rotor UAVs by presenting a Gaussian information fusion control (GIFC) method that allowed accurate trajectory tracking and reduced the design complexity. Hu et al. [53] proposed 3D multi-UAV cooperative velocity-aware motion planning using VeACA2D and VeACA3D; compared with LyCL and PALyCL, this algorithm gave higher possibilities of reaching the destination while following shorter paths and reducing time costs. Gao and Li [54] considered a distributed cooperation approach formed on situation-awareness consensus and its detailed information-processing method for UAV swarms. Shang et al. [55] linked a co-optimal coverage path planning method with a PSO algorithm for the aerial scanning of compound models. Qu et al. [56] evaluated a novel hybrid grey wolf optimizer algorithm with MSOS and gave better and improved results for UAV path planning in a complex environment. Krishnan et al. [57] optimized continuous-time trajectories by combining a decentralized algorithm with third-order dynamics that helped robots re-plan trajectories. Zhang et al. [58] introduced an ant-based self-heuristic method for the path planning of multi-UAVs. In this study, the authors used U-shaped dense complex 3D space to reduce the confusion of obstacle detection, and the method reduces the deadlock state with a two-stage strategy. Zhou et al. [59] utilized multi-string chromosome genetic and cuckoo search algorithms to improve the MDLS algorithm. This improved algorithm proved to have better global optimization capability and diversified scheme options, and completed tasks in a shorter time compared to the simplified MDLS. Qiu and Duan [60] developed an improved MPIO formulated on hierarchical learning behavior that gave improved distributed flocking among obstacles. Comparison with MPIO and NSGA-II showed that the improved MPIO proved more suitable for handling many-objective optimization and obstacle avoidance for UAV flocking. Konatowski and Pawłowski [61] presented path planning for UAVs with the help of ACO. It uses waypoints along the path with unknown parameters; the proposed work reduces the computational time and obtains the optimal route. Huang and Sun [62] detailed an approach to feasible trajectory planning formation that depends on a bi-directional fast search tree for UAVs. Radmanesh et al.
[63] applied a PDE-based large-scale decentralized approach and compared it with centralized and sequential approaches to obtain collision-free and optimal path planning of multiple UAVs. Xu et al. [64] linked the grey wolf optimizer algorithm with the PSO algorithm to achieve cooperative path planning of multi-UAVs under the threats of ground radar, missiles and terrain. Yu et al. [65] introduced an improved constrained differential evolution algorithm that reduced the fitness functions and satisfied three constraints, namely the height, angle and slope of UAVs. This improved algorithm was then compared with FIDE, DE variants, RankDE, CMODE and (µ + γ) − CDE, and it proved that the proposed CDE generated more optimal paths smoothly. Qu et al. [66] used a reinforcement-learning-based grey wolf optimizer algorithm, compared the outcomes with the results of the GWO, MGWO, EEGWO and IGWO algorithms, and concluded that the proposed RLGWO gives better, feasible and effective path planning for UAVs. Shen et al. [67] solved the air pollution detection problem for ships in ports and evaluated synergistic path planning of multiple UAVs. They suggested an improved PSO algorithm with a Tabu Search (TS) table, proved the efficient detection of air pollution, and ensured less emission by ships. Zhen et al. [68] gave an improved method, a hybrid artificial potential field with ant colony optimization (HAPF-ACO), that executes tasks and avoids collisions and obstacles efficiently for the cooperative mission planning of fixed-wing UAVs. The results were compared with the ACOAPF and PSO algorithms, which proved the suggested algorithm to be highly efficient in task execution. Li et al. [69] detailed an ORPFOA algorithm that allows online task changes for the optimal path planning of multi-UAVs, solving faster and giving higher optimization. The outcomes of this suggested algorithm were compared with GWO, PSO, PIO, PSOGSA, PPPIO and FOA; the proposed algorithm gave faster convergence and optimization than the others. Shao et al. [70] obtained multi-UAV path planning by using a distributed cooperative PSO approach. This study presents a complex dynamic environment with a higher success rate of 0.9 compared to CCGA. Ilango and R. [71] studied bio-inspired algorithms and analyzed their performance in the autonomous landing of UAVs. Wu et al. [72] applied a new method to UAVs that is based on consensus theory for their formation control as well as obstacle avoidance. In 2021, recent research by Ali et al. [73] developed a multi-colonies optimization combining MMACO and DE techniques for the cooperative path planning of many UAVs in a dynamic environment. Wang et al. [74] proposed an MPC framework along with Chaotic Grey Wolf Optimization (CGWO) and an event-triggered approach to give UAV coordination control and trajectory tracking. Ali et al. [75] used collective movement along with the reflexivity of a UAV swarm via a cluster-based technique combining the PSO algorithm with the MAS; it showed better convergence and durability. Shafiq et al. [76] suggested a cluster-based hierarchical approach for control and path planning; it quickly finds the optimal path with minimal costs. Ali et al. [77] applied a hybrid algorithm of max-min ant colony optimization with CM operators to multiple UAVs for collective path planning; it gives the optimal global solution in minimum time.
He and Duan [78] considered flying as well as touchdown issues and suggested an improved PIO for tuning the parameters of ADRC. Liang et al. [79] developed optimal route planning for the landing of micro-UAVs using hybrid optimization algorithms with orthogonal learning. Pustokhina et al. [80] designed energy-efficient clustering and optimal route planning by developing an Energy-Efficient Neuro-Fuzzy Cluster-based Topology Construction with the MRP technique for UAVs. Chen et al. [81] suggested a coordination strategy for fixed-wing UAVs with wind disturbances and developed a hardware-in-the-loop (HIL) simulation. Jiang et al. [82] worked on path planning for UAVs under various obstacles and proposed a diversified group teaching optimization algorithm with a segment-based fitness approach that has better global exploration ability. Cho et al. [83] gave a two-phase coverage path planning strategy for multi-UAVs that supports search and rescue in maritime environments. Zhang et al. [84] presented a hybrid FWPSALC mechanism for UAV path planning that proved to be robust in searching and constraint handling and had better convergence speed. Phung and Ha [85-88] developed a novel technique with spherical vector-based particle swarm optimization (SPSO) that ensures safe, feasible, and optimal paths and gives results better than classic PSO, QPSO, θ-PSO, and various other algorithms. Discussion The most crucial challenge in the field of UAVs is efficient motion planning, and countering its issues requires state-of-the-art optimization methods. This research evaluates various challenges faced by UAVs and the current designs of motion planning techniques. The recent developments discussed achieve high adaptability, cost and time reductions in task execution, energy efficiency, and obstacle and collision avoidance. While reviewing various motion planning approaches, it became evident that most researchers preferred to use an optimization approach with nature-inspired algorithms. Across the numerous categories of path planning strategies, hybrid algorithms appear to give better performance. These improved and optimized algorithms overcome the limitations of numerical and analytical techniques. From this analysis, it can be concluded that the best optimization approaches are swarm-based, due to their exceptional ability to solve complex issues with a simple approach, as the sketch below illustrates.
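To make the swarm-based preference concrete, the following minimal sketch shows the core of a particle swarm optimizer applied to 2D waypoint placement around static circular obstacles. It is an illustration only: the obstacle set, search bounds, penalty weight, and PSO coefficients are assumptions chosen for demonstration, not values drawn from any of the studies surveyed above.

```python
import numpy as np

# Hypothetical scenario: two circular obstacles (cx, cy, radius) in a 10x10 area.
OBSTACLES = [(4.0, 4.0, 1.5), (7.0, 2.0, 1.0)]
START, GOAL = np.array([0.0, 0.0]), np.array([10.0, 10.0])
N_WAYPOINTS, N_PARTICLES, N_ITERS = 5, 30, 200

def path_cost(flat_waypoints):
    """Total path length plus a soft penalty for entering any obstacle."""
    pts = np.vstack([START, flat_waypoints.reshape(-1, 2), GOAL])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    penalty = 0.0
    for cx, cy, r in OBSTACLES:
        d = np.linalg.norm(pts - np.array([cx, cy]), axis=1)
        penalty += np.sum(np.maximum(0.0, r - d)) * 100.0  # assumed weight
    return length + penalty

rng = np.random.default_rng(0)
dim = 2 * N_WAYPOINTS
pos = rng.uniform(0.0, 10.0, (N_PARTICLES, dim))  # candidate waypoint sets
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([path_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(N_ITERS):
    r1, r2 = rng.random((2, N_PARTICLES, dim))
    # Standard PSO update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([path_cost(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("best path cost:", path_cost(gbest))
```

A hybrid method of the kind favored in the surveyed work would typically replace or augment the plain velocity update with, for example, grey wolf or ACO-informed terms.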
Conclusions UAVs are flying machines that possess safe and task-oriented mobility in the presence of uncertainties with the help of modified techniques and the latest technological developments. The autonomous capability of these machines is also advancing and upgrading to provide efficient flying and stable formation in dynamic environments. However, motion planning remains one of the most challenging UAV problems among scholars. In this article, a detailed comparative study of the motion planning issues and achievements of UAVs has been presented, along with the limitations of each article. The study also presents recent challenges in all possible categories of UAVs to highlight the importance of UAVs in our society, along with their developments and the state-of-the-art work performed in the last 3 years. Future Work Comparative analysis of motion planning and optimization algorithms, and the determination of the best among them, remains very limited. To deploy multiple UAV systems in a finer way, various challenges and possibilities need more exploration, as well as a reduction in exploitation. Leads for future work are to model different swarm-based intelligent optimization approaches with high accuracy and efficiency, and to develop further feasible algorithms for 3D path planning strategies. Funding: This research was supported by the European Regional Development project Green Smart Services in Developing Circular Economy SMEs (A77472). Data Availability Statement: All the data are in the article. Conflicts of Interest: The authors declare no conflict of interest.
9,483.6
2022-05-13T00:00:00.000
[ "Engineering", "Computer Science" ]
A superposed epoch analysis of the regions 1 and 2 Birkeland currents observed by AMPERE during substorms We perform a superposed epoch analysis of the evolution of the Birkeland currents (field-aligned currents) observed by the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) during substorms. The study is composed of 2900 substorms provided by the SuperMAG experiment. We find that the current ovals expand and contract over the course of a substorm cycle and that currents increase in magnitude approaching substorm onset and are further enhanced in the expansion phase. Subsequently, we categorize the substorms by their onset latitude, a proxy for the amount of open magnetic flux in the magnetosphere, and find that Birkeland currents are significantly higher throughout the epoch for low-latitude substorms. Our results agree with previous studies which indicate that substorms are more intense and close more open magnetic flux when the amount of open flux is larger at onset. We place these findings in the context of previous work linking dayside and nightside reconnection rates to Birkeland current strengths and locations. Introduction The Dungey cycle is the circulation of plasma and magnetic field in the Earth's magnetosphere, driven by its coupling with the solar wind [Dungey, 1961]. Magnetic reconnection between the interplanetary magnetic field (IMF), frozen into the solar wind, and terrestrial field lines at the magnetopause creates open magnetic flux interconnecting the interplanetary medium to the polar regions. The motion of the solar wind past the planet leads to these open flux tubes moving antisunward from the dayside to the nightside. Subsequently, reconnection in the tail closes this open magnetic flux, and it returns to the dayside to complete the cycle. It is the opening and closing of flux that drives magnetospheric convection and a sympathetic circulation of plasma in the ionosphere. Dungey [1961] originally pictured a steady state situation, but it has become obvious that the Dungey cycle is much more dynamic than was first thought, leading to the proposal of the expanding/contracting polar cap (ECPC) paradigm [e.g., Cowley and Lockwood, 1992; Lockwood and Cowley, 1992]. As dayside reconnection occurs, the amount of open magnetic flux inside Earth's magnetosphere increases. Nightside reconnection reduces the amount of open magnetic flux in the same way. The amount of open magnetic flux in the magnetosphere governs the location of the boundary between the open and closed flux in the ionosphere, enclosing the area known as the polar cap: when there is more open magnetic flux, the boundary is farther from the pole, and therefore, the size of the polar cap is increased [Milan et al., 2007, 2012]. The substorm [Akasofu and Chapman, 1961; Akasofu, 1964] is an integral component of the ECPC. The substorm cycle comprises three phases: the growth phase, the expansion phase, and the recovery phase [McPherron, 1970; Rostoker et al., 1980]. The substorm growth phase, when the auroras move to lower latitudes, is associated with dayside reconnection [Siscoe and Huang, 1985]. The expansion phase, when the nightside auroras brighten and move poleward, is associated with the onset of nightside reconnection [Cowley and Lockwood, 1992; Milan et al., 2007]. The recovery phase is marked by a dimming of the auroras and a general contraction to higher latitudes as the system returns to quiescent conditions. 
It has been noted in previous papers [Akasofu, 1975, 2013; Kamide et al., 1999; Milan et al., 2009a] that the intensity of a substorm is associated with the extent of the polar cap at substorm onset; that is to say, the substorm is stronger when there is more open magnetic flux contained within the polar cap. This occurs because there is more energy stored within the magnetotail, and when a substorm occurs, there is therefore more energy available to dissipate. Current systems are a ubiquitous component of the magnetosphere, as they transmit stresses around the system. We are particularly interested in the field-aligned currents first proposed at the start of the twentieth century [Birkeland, 1908, 1913]. The Birkeland current system is responsible for electrodynamically linking the magnetopause, the inner magnetosphere, and the ionosphere. The large-scale morphology of the currents was first deduced using TRIAD satellite observations [Iijima and Potemra, 1976a, 1976b, 1978]. The current system forms two concentric rings above the auroral ionosphere: the poleward (region 1) ring and the equatorward (region 2) ring. Iijima and Potemra [1978] observed that the two regions appear to be driven by different parts of the system: the region 1 (R1) currents connect the ionosphere to currents in the magnetopause (also known as the Chapman-Ferraro currents) and the magnetotail, and the region 2 (R2) currents connect to the partial ring current in the inner magnetosphere [e.g., Cowley, 2000]. The region 1 Birkeland currents are believed to flow, in part, within the boundary between the open and closed flux, also called the OCB [Clausen et al., 2013a]. Region 1 currents flow upward in the dusk sector and downward in the dawn sector, and region 2 currents are of opposite polarity. The regions 1 and 2 currents close through the ionosphere via horizontal Pedersen currents. The system is sketched schematically in Figure 1. The substorm current wedge (SCW) is a current system linked to the occurrence of a substorm [Clauer and McPherron, 1974; Forsyth et al., 2014; Sergeev et al., 2014]. It is linked to the magnetic bay observed in AL at the time of substorm onset, which has been noted previously [e.g., Iijima and Nagata, 1972; Gjerloev et al., 2004]. Currents are diverted from the magnetotail into the ionosphere, and these currents flow along the field lines, which leads to enhancements in the field-aligned currents during a substorm [Clausen et al., 2013a, 2013b; Murphy et al., 2013]. Clausen et al. [2012] demonstrated that the R1/R2 system moves to higher and lower latitudes in a manner consistent with the ECPC and substorm cycle. Coxon et al. [2014] subsequently investigated the magnitude of the current systems, showing that they were consistent with driving by dayside and nightside reconnection. Clausen et al. [2013a, 2013b] used a list of 772 substorms detected by the Thermal Emission Imaging System (THEMIS) mission between January and April 2010 to investigate the open flux content during substorms using the location of R1 currents as a proxy. However, they focused on investigating current density over the epoch rather than investigating the current magnitudes. In the present study, we use 2981 substorms detected by SuperMAG over a 3 year period to investigate the dynamics of the current systems during substorms, including both R1 and R2 current magnitudes, and focus on the influence of the open flux content of the magnetosphere at the time of substorm onset. 
AMPERE and Derived Products The Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) was conceived to investigate the Birkeland currents using magnetometer data from the Iridium® telecommunications satellite network [Anderson et al., 2000]. The Iridium® network of satellites comprises 66 active spacecraft that orbit the Earth in six polar orbital planes at an altitude of 780 km. Eleven spacecraft are found in each plane, and each is in a circular, polar orbit that takes 104 min to complete. These six orbital planes provide measurements along 12 meridians of magnetic local time (two values of magnetic local time (MLT) per orbital plane). Anderson et al. [2000] used the cross-track component of the magnetic perturbation measured by the spacecraft to deduce the current density and concluded that the Iridium® constellation data were useful for characterization of large-scale field-aligned currents (FACs) in both hemispheres on time scales of several hours or less. (Strictly speaking, the current density measured by AMPERE is the radial current density; however, in the polar regions, it is very close to the field-aligned current density.) The AMPERE data set used in the present study contains maps of Birkeland currents in the Northern and Southern Hemispheres, made at 10 min cadence for the period January 2010 to December 2012. In this study we are interested in the large-scale morphology of the R1/R2 system and wish to suppress small-scale, rapidly varying features. To characterize the location and strength of the Birkeland current ovals, we use a fitting method developed by Clausen et al. [2012] and Coxon et al. [2014]. We fit a sinusoid multiplied by a Gaussian to the current density along each MLT to identify the two signatures associated with R1 and R2. Taking each value of MLT for which a successful fit was achieved, the R1 and R2 signatures are integrated over both latitude and longitude. We then take the dawn sector MLTs, sum the results of the integration, and multiply by 12/n, where n is the number of successful fits achieved in the dawn sector. The process is repeated for dusk, such that we obtain the total R1 and R2 current flow in the dawn and dusk sectors. The absolute values of the two R1 currents and the absolute values of the two R2 currents are then summed to find J1 and J2, respectively, which are the total currents flowing, measured in amperes [Coxon et al., 2014]. The latitudes of the peak current density associated with R1 and with R2 can be found in the fit we achieve [Clausen et al., 2012]. We fit an oval to these latitudes to find the location of the Birkeland current ovals, described by l1 or l2 plus a cosine term (for the R1 or R2 current oval, respectively). Where there are no successful fits at the time of substorm onset, we eliminate the substorm from consideration, leaving 2900 substorms. It should be noted that the l1 and l2 parameters are properly described as the number of degrees of latitude from the geomagnetic pole; while this is equivalent to colatitude in the Northern Hemisphere, in the Southern Hemisphere it is the colatitude subtracted from 180°. However, for ease of description, we will use the term "colatitude" to refer to these parameters in both hemispheres. 
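As an illustration of the fitting procedure just described, the sketch below fits a sinusoid-times-Gaussian model to a synthetic current density profile along one MLT meridian and integrates the upward and downward lobes. The functional form, parameter values, and synthetic data here are assumptions chosen for demonstration; the exact parameterization should be taken from Clausen et al. [2012].

```python
import numpy as np
from scipy.optimize import curve_fit

def r1r2_profile(colat, amp, center, width, sigma):
    """One plausible sinusoid-times-Gaussian model of the paired R1/R2
    current sheets: a single oscillation centred on the oval, enveloped
    by a Gaussian, producing adjacent upward/downward signatures."""
    return (amp * np.sin(2.0 * np.pi * (colat - center) / width)
            * np.exp(-((colat - center) ** 2) / (2.0 * sigma ** 2)))

# Fit the model to one MLT cut of current density (synthetic example data).
colat = np.linspace(5.0, 35.0, 61)  # degrees from the pole
density = r1r2_profile(colat, 0.5, 18.0, 8.0, 4.0)
density += 0.02 * np.random.default_rng(1).normal(size=colat.size)
popt, _ = curve_fit(r1r2_profile, colat, density, p0=[0.4, 17.0, 7.0, 5.0])

# Integrate the positive and negative lobes separately to estimate the
# R1 and R2 contributions for this MLT; summing the dawn-sector results
# and scaling by 12/n (n = successful fits) then gives a total current.
fitted = r1r2_profile(colat, *popt)
upward = np.trapz(np.clip(fitted, 0.0, None), colat)
downward = np.trapz(np.clip(fitted, None, 0.0), colat)
print(f"upward {upward:.3f}, downward {downward:.3f} (arbitrary units)")
```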
OMNI and Derived Products The OMNI data set provides time series of solar wind parameters propagated to their impact on the bow shock [e.g., King, 1991; Papitashvili et al., 2000, and references therein]. We use data from OMNI to estimate the dayside reconnection rate ΦD, using the expression by Milan et al. [2012]:

ΦD = Leff VX BYZ sin^(9/2)(θ/2)    (1)

In the above equation, Leff(VX) is an effective length scale, given by Leff = 3.8 RE (VX / 4 × 10^5 m s^-1)^(1/3), and BYZ is the transverse component of the IMF, given by BYZ = (BY^2 + BZ^2)^(1/2). VX is the solar wind speed, θ = arctan(BY/BZ) is the clock angle between the IMF vector projected into the GSM Y-Z plane and the Z axis, and RE is the radius of Earth. The dayside reconnection rate is the rate at which flux is transferred by the reconnection electric field across the effective length Leff and is therefore given by equation (1) in volts. 
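Equation (1) can be evaluated directly from OMNI inputs. The sketch below is a minimal implementation of the reconstruction above in SI units; the example solar wind values are illustrative, and the constants should be checked against Milan et al. [2012].

```python
import numpy as np

R_E = 6.371e6  # Earth radius in metres

def dayside_reconnection_rate(v_x, b_y, b_z):
    """Coupling function of equation (1), returning Phi_D in volts.

    v_x  : solar wind speed magnitude (m/s)
    b_y, b_z : IMF components in GSM coordinates (tesla)
    """
    b_yz = np.hypot(b_y, b_z)                         # transverse IMF
    theta = np.arctan2(b_y, b_z)                      # IMF clock angle
    l_eff = 3.8 * R_E * (v_x / 4.0e5) ** (1.0 / 3.0)  # effective length
    return l_eff * v_x * b_yz * np.sin(abs(theta) / 2.0) ** 4.5

# Example inputs: 450 km/s wind, B_Y = 2 nT, B_Z = -5 nT (southward IMF).
phi_d = dayside_reconnection_rate(450e3, 2e-9, -5e-9)
print(f"Phi_D = {phi_d / 1e3:.1f} kV")  # a few tens of kV, as in Figure 3e
```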
SuperMAG The SuperMAG data set collates and unifies magnetometer data from across the globe [Gjerloev, 2009, 2012]. SMU and SML are SuperMAG-calculated equivalents of the electrojet indices AU and AL. An automated procedure identifies substorm onsets in these indices [Newell and Gjerloev, 2011a, 2011b]. We have used the SuperMAG substorm onset list for the period 2010 to 2012, coincident with the AMPERE data used in the present study, which contains approximately 3000 substorms. We filter out those substorms for which the onset MLT reported by SuperMAG is on the dayside, i.e., where 6 ≤ MLT ≤ 18. We use the colatitude of the R1 current oval recorded by AMPERE, l1, at the time of each substorm as a proxy for the open magnetic flux content of the magnetosphere prior to onset [see also Milan et al., 2009b]. The distribution of the onset colatitudes of the SuperMAG substorm list is presented in Figure 2, which shows substorms distributed between ∼10° and ∼25° and a peak measured at an onset colatitude of 18°. The number of substorms in each bin varies somewhat between the Northern and Southern Hemispheres, with ∼550 substorms seen at the peak in the Northern Hemisphere and ∼500 substorms in the Southern Hemisphere. However, the distribution in both hemispheres is similar. As such, five bins are defined into which substorms can be sorted by onset colatitude. We denote these bins as I-V and define them as follows: I: 0° < l1 ≤ … through V: 21° < l1 ≤ 30° (the intermediate bin boundaries are shown by the dotted lines in Figure 2). We return to this categorization in section 4.2. Observations of the Birkeland Currents Made by AMPERE Figure 3 shows the interval between midnight on 28 June 2010 and midnight on 3 July 2010. Figures 3a and 3b show cuts through the dawn-dusk meridian of the current density observed by AMPERE. Figure 3a shows the Northern Hemisphere, whereas Figure 3b shows the Southern Hemisphere. Figures 3a and 3b show the densities of the currents as well as their extent in latitude, but in order to examine the current magnitudes, we turn to Figure 3c, which depicts the current magnitudes J1 and J2 for the Northern Hemisphere measured using our technique for analyzing the AMPERE data set [Coxon et al., 2014]. Figure 3d shows the ratio J1/J2: when the ratio is above 1, the R1 current flowing is stronger than the R2 current. Figure 3e shows the dayside reconnection rate ΦD determined using equation (1). Figure 3f shows the AL/AU indices, and the dotted lines indicate substorm onsets given by SuperMAG for this time period [Newell and Gjerloev, 2011a, 2011b]. The plot depicted in Figure 3 is of an active period. In Figure 3e, ΦD reaches values of 40 kV on several occasions and exceeds 100 kV near midnight on 30 June. Two periods of very high dayside reconnection rate are associated with two coincident enhancements in the current magnitudes, consistent with the relationship between reconnection and current magnitude found by Coxon et al. [2014]. The opening of magnetic flux that occurs during dayside reconnection also leads to equatorward expansions of the currents observed in Figures 3a and 3b, consistent with the ECPC paradigm [Cowley and Lockwood, 1992] as discussed by Clausen et al. [2012]. Also seen are magnetic bays in the SML index in Figure 3f, which are associated with substorm onset. Twenty-four substorms are identified in the period indicated, and the current magnitudes are enhanced as a result of substorm onset in most of the substorms depicted in Figure 3. The ratio J1/J2 is also enhanced after substorm onset in a number of cases.

Figure 4. (c and d) l1 and l2 in degrees, (e and f) J1 and J2 (MA), (g and h) J1/J2, (i) ΦD (kV), and (j) SML and SMU (nT) plotted against the epoch time t in minutes on the x axis for the Northern Hemisphere (left) and Southern Hemisphere (right). R1 and R2 are denoted by red and blue, respectively. It should be noted that the scales in this figure are different from those used in subsequent figures.

Looking at Figures 3a and 3b, the current ovals appear to expand at substorm onset and contract in response to the onset of nightside reconnection in a substorm [Clausen et al., 2012]. In order to examine these phenomena more quantitatively, we perform a superposed epoch analysis to see the general trends over 3 years of AMPERE data. Superposed Epoch Analysis We use the substorm onset times identified by SuperMAG to form a superposed epoch analysis of the parameters of interest, including the R1 and R2 current magnitudes J1 and J2, oval colatitudes l1 and l2, dayside reconnection rate ΦD, and geomagnetic indices SMU and SML. The analysis covered the period from 2 h before substorm onset to 2 h after. In the first instance we performed the analysis on all 2900 substorms. Subsequently, we performed the analysis on subsets of substorms binned by onset colatitude. There are cases in which substorm onsets occur within 2 h of one another (so that the 4 h window would contain multiple substorms; a case which can clearly be seen around midnight on 30 June 2010 in Figure 3); we did not filter out any onsets based on this criterion, however, such that the analysis presented includes some cases of substorm onsets occurring within 2 h of one another. In Figures 4-7, solid lines are used to indicate the median of the data plotted, while shaded areas are drawn which describe the upper and lower quartiles of the data in each plot. In order to differentiate between R1 and R2 currents, we use red and blue respectively for both the solid lines and shaded areas. Purple shading is used to indicate the areas in which the quartiles overlap. 
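The superposed epoch procedure itself reduces to extracting fixed windows around each onset and taking medians and quartiles across events. A minimal sketch, assuming a regularly sampled series at the 10 min AMPERE cadence and applying no onset-proximity filtering (matching the choice described above):

```python
import numpy as np

def superposed_epoch(times, values, onsets, window=120, cadence=10):
    """Median and quartiles of `values` around each onset time.

    times, values : regularly sampled series (times in minutes, sorted)
    onsets        : epoch-zero times (minutes)
    window        : half-width of the epoch window in minutes
    cadence       : sample spacing in minutes
    """
    half = window // cadence
    segments = []
    for t0 in onsets:
        i = np.searchsorted(times, t0)
        if i - half >= 0 and i + half < len(values):
            segments.append(values[i - half:i + half + 1])
    stack = np.vstack(segments)
    lq, med, uq = np.percentile(stack, [25, 50, 75], axis=0)
    epoch_t = np.arange(-half, half + 1) * cadence  # minutes from onset
    return epoch_t, lq, med, uq
```

The median and interquartile shading in Figures 4-7 correspond directly to the three percentile curves returned here.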
Quantifying the Reaction of the Coupled Magnetosphere-Ionosphere System to Substorms In Figure 4 the median response of the Birkeland currents to substorm onset is shown. From top to bottom: the variation of the colatitudes relative to the onset colatitude, the colatitudes l1 and l2, the variation in the R1 and R2 current magnitudes, the ratio R1/R2, the expected dayside reconnection rate ΦD, and the SML/SMU indices [Newell and Gjerloev, 2011a]. For ease of reading, we describe SML in terms of its magnitude, neglecting the sign of the perturbations it measures, which means that an increase in SML would indicate a transition to more negative magnetic perturbations. In Figures 4a-4d, we see the change in position of the current ovals as measured by l1 and l2. We see the current ovals expand to lower latitudes (higher colatitudes) as substorm onset approaches and then begin to contract again at t ∼10 min. The R1 current oval starts at approximately 17°, whereas R2 starts at approximately 21°; both ovals expand and then contract back to their preonset state. The change in the latitudinal extent of the current ovals varies over a range of 2° latitude during the 4 h period, as demonstrated by Figure 4a, in which Δl1 is seen to vary between −1.5 and 0.5 (with the latter value occurring 10 min subsequent to onset). Both current ovals are seen to expand at the same rate until onset, at which point R2 starts to expand faster, indicating a broadening in latitude of the current system. It also indicates that R2 is more sluggish than R1 in its return to pre-substorm levels. The current magnitudes as measured by AMPERE start, in the Northern Hemisphere, at J ∼2.75 MA in Figure 4e and a ratio R1/R2 of ∼1.075 in Figure 4g. The current magnitudes slowly increase between t = −120 min and t = 0 min, before substorm onset leads to a more rapid increase and a peak in current magnitude of J ∼3.75 MA at t ∼20 min. The current magnitudes observed then decrease through the period, returning almost to their initial levels. The ratio follows an almost identical pattern, with a slow increase observed until substorm onset, a rapid increase to a peak value of approximately 1.2 at 20 < t < 40 min, and then a return to preonset levels. The Southern Hemisphere follows a similar behavior, but the current magnitude (Figure 4f) and ratio (Figure 4h) are lower throughout the epoch. In Figure 4i, the dayside reconnection rate ΦD is ∼15 kV at t = −120 min. It increases as time progresses, with a peak of ΦD ∼30 kV slightly before substorm onset at t = 0 min. The level of dayside reconnection then falls, returning to its initial value at the end of the interval. In Figure 4j, SMU ∼100 nT at t = −120 min, whereas SML ∼ −150 nT at that point. SMU increases slightly from the start of the period, with a peak occurring at t ∼20 min. The same is seen in SML until t = 0 min, at which point a pronounced magnetic bay can be seen in SML at the time of substorm onset [Rostoker et al., 1980]. This signature marks the formation of the substorm current wedge and is a recognized signature of substorm onset. Both SML and SMU then begin to decrease in magnitude through the epoch, returning to almost preonset levels at the end of the period. The Variation of Reactions to Substorms Given Different Levels of Activity 4.2.1. 
Reconnection Rate and Magnetic Indices Turning now to the colatitude of onset categories established in section 2.3, Figure 5 shows the dayside reconnection rate ΦD alongside SML/SMU averaged for the five categories outlined. These categories are defined by the colatitude of the R1 current oval at substorm onset. The plots corresponding to the smallest onset colatitudes (and thus the lowest level of activity) are depicted at the top of the figure and the plots for the largest onset colatitudes at the bottom. In Bin I, it can be seen that at t = −120 min, ΦD < 10 kV, and it continues at that rate until t = −20 min. At this point, the reconnection rate begins to increase to a peak of approximately 15 kV located subsequent to the substorm onset. In Bin II, the reconnection rate increases from the initial value (∼10 kV) to the peak (∼20 kV), which is located at substorm onset. In Bins III-V, the initial and peak values increase until, in Bin V, ΦD varies between approximately 70 and 100 kV. The location of the peak in dayside reconnection rate gets earlier with bin: in Bin I, the peak is seen just after substorm onset, whereas the peak in Bin V is as much as 25 min prior. The peak value is approximately double the initial value in Bin I, whereas in Bin V the peak value is ∼140% that of the initial value.

Figure 6. (left column) The magnitude in the Northern Hemisphere of the region 1 (J1, red) and region 2 (J2, blue) currents and (right column) the ratio J1/J2 with respect to substorm onset at t = 0 min, binned by substorm onset colatitude (increasing from top to bottom). N is the number of substorms in the relevant bin.

Turning to SML and SMU in Bin I, it is observed that both appear flat between the start of the interval (∼90 nT) and the onset of the substorm. At substorm onset, SMU increases by perhaps 10 nT but returns to its initial magnitude relatively quickly. The familiar magnetic bay in SML is present at onset, with the SML magnitude increasing to ∼200 nT and remaining higher than the initial magnitude for the rest of the interval. The sudden increase and then gradual decrease to a level higher than the initial magnitude is a common feature in every bin. In Bin II, both SML and SMU remain steady at their initial values (100 nT and 90 nT, respectively) until the substorm onset, at which point SMU increases slightly before quickly returning to its original level. SML again shows the signature magnetic bay, increasing to a higher magnitude of 300 nT. In Bins III-V, the magnitude of both indices increases slowly from t = −120 min to t = 0 min. At onset, the rate of increase of SMU climbs before the peak approximately 20 min later; it then returns to pre-substorm levels. The initial and peak magnitudes of SMU and SML increase with bin: in Bin V, the initial value of SML is approximately 500 nT, increasing to 750 nT just after onset. SMU starts at 250 nT and increases to 300 nT before dropping back down again. In all bins we see a decrease in SML magnitude just prior to onset, although it is most pronounced in Bins IV and V. 
Current Magnitudes Figure 6 shows J1, J2, and J1/J2. In Bin I, the observed current magnitudes remain uniform at ∼2 MA until onset, which causes an increase in magnitude to an approximate value of 2.5 MA in R1 and 2.25 MA in R2. The currents appear to stay at this magnitude for the rest of the interval shown. Initially, R1 is higher than R2 at a ratio of 1.05; at onset, the ratio between the two increases to a value of 1.2, and the ratio then decreases to 1.1 over the 2 h subsequent to onset. In Bin II, the initial values of the currents are just above 2 MA. The current systems remain steady until onset, when they increase relatively rapidly: R1 to J ∼3 MA and R2 to J ∼2.5 MA at t = 20 min. They then decrease gradually toward their initial value as time progresses. The ratio starts at a value of 1.05 and increases slightly until onset, at which point it climbs to a value of 1.15 and decreases slowly through the rest of the period. In Bin III, the initial value of the currents is around 2.5 MA, with an increase at onset to 3.5 MA in R1 and 3 MA in R2. The slow decrease that subsequently occurs leads to values at t = 120 min that are between 0.25 and 0.5 MA higher than initially. The ratio starts above 1.05 and increases to 1.2 before decreasing, again to a value slightly higher than the initial value. Bin IV sees an initial R1 and R2 magnitude of just above 3.5 MA and 3 MA, respectively, which increase to 4.75 MA and 4 MA at t = 20 min before decreasing to values comparable to those at the start. The ratio in this case starts between 1.05 and 1.1 before climbing to 1.2 and then returning. In Bin V, the R1 and R2 magnitudes are 6 and 5.5 MA, respectively, and they increase to a peak magnitude of 7 MA and 6.5 MA, again at t = 20 min, before decreasing to values of 6 MA and 5.5 MA. The ratio starts at 1.1, decreases toward onset, and climbs very slightly at onset before returning to its original value. The peak of the current magnitudes and ratios observed is consistently seen at t = 20 min, which does not appear to change with the variation in onset latitude. The increase in current magnitude coincident with the substorm onset appears to be consistently larger for the region 1 current than for the region 2 current, matching the observed increase in the ratio between the two at onset.

Figure 7. l1 (red) and l2 (blue) in the Northern Hemisphere with respect to substorm onset at t = 0 min, binned by substorm onset colatitude (increasing from top to bottom). The left-hand plots show the value of l1 and l2, whereas the right-hand plots show l1,t − l1,0 (and the same for l2). The units are equivalent to the average colatitude of the current oval. N is the number of substorms in the relevant bin.

Latitude of Current Ovals Figure 7 shows the value of l1 and l2 averaged per bin. In this case, Bin I shows that the current ovals get smaller by 0.5-1° from the beginning of the interval depicted until the onset of the substorm at t = 0 min, with a sharp increase seen at the onset of the substorm. The size of the current oval increases by 2° before decreasing throughout the period (at t = 120 min, they remain larger than their initial values). The two current ovals change in size similarly through the interval, but with l1 decreasing in size more quickly than l2 before the onset of the substorm and then increasing in size more quickly, reaching a higher Δl. 
In Bin II, the two current ovals increase slowly in size by ∼1° from the start of the period until substorm onset, both current ovals showing the same increase in size. After onset, both current systems increase more rapidly, until both current ovals reach a peak of ∼1° larger at t ∼20 min. The R2 current oval increases in size to a higher extent than R1, and Δl2 remains larger than Δl1, with the current ovals remaining ∼0.25° larger at the end of the epoch than at the start. Bin III exhibits a similar pattern prior to substorm onset, with both current systems increasing in tandem by 1.5° and the decrease in size after the peak being almost identical to Bin II. In this case, however, there is no increase in the rate of oval growth at the point of onset, with the peak in oval size occurring at t ∼10 min. In Bin IV, the currents increase by 2° and the peak again moves earlier, to t = 0 min, with the ovals at the end of the epoch being approximately 0.25° larger than at the beginning, as in Bins II and III. In Bin V, the peak is also at t = 0 min, but the ovals are 0.5° larger than their pre-substorm size at the end of the epoch, and the total increase is 2°. As the onset colatitude increases with bin, the disparity between Δl2 and Δl1 increases. So too does the difference between l1 and l2, with the two current ovals being separated by 8° in Bin I but by 10° in Bin V. Discussion In the ECPC paradigm, substorm growth and expansion phases manifest themselves as the expansion and contraction of the polar cap [Cowley and Lockwood, 1992], and so we discuss the spatial variation in the Birkeland current systems in that context. Ionospheric convection is driven through first dayside and then nightside reconnection and the subsequent motion of flux tubes in the magnetosphere [Milan, 2013]. These ionospheric motions are resisted by frictional coupling with the neutral atmosphere, requiring horizontal ionospheric currents and field-aligned currents. Hence, both phases are expected to be associated with FAC enhancements, as demonstrated by Coxon et al. [2014], and we explore the relationship in more detail. We also utilize the categories in section 2.3 to discuss how the amount of open flux in the magnetosphere at onset affects the reaction of the Birkeland current system to substorms [Milan et al., 2009a]. The Reaction of the Birkeland Currents to Substorms 5.1.1. Spatial Variations As described in section 1, the polar cap expands as the amount of open flux in the magnetosphere increases. The R1 currents flow along the OCB (section 1), and so the motion of the R1 current oval can be used as a proxy for the polar cap boundary, indicating that we should see similar expansions and contractions to those seen in auroral data [Milan et al., 2003; Clausen et al., 2013b]. In Figure 4 the extent of the current ovals expands through the growth phase as open flux is added by dayside reconnection. After substorm onset, the ovals maximize and then begin to contract again, which is consistent with open flux being closed in the magnetotail during the substorm expansion phase. Therefore, the current ovals can be used as a proxy for the amount of open flux in the magnetosphere, and the time derivative of l1 could be used to examine dayside and nightside reconnection rates. 
The amount of open flux maximizes at the same time as the extent of the current ovals, at t ∼20 min, just after the onset of the substorm expansion phase. This indicates that although the dayside reconnection rate begins to wane immediately prior to onset, ΦD is still higher than the nightside reconnection rate ΦN until the point that the open flux content begins to decrease again. This indicates that ΦN becomes larger than ΦD just prior to the maximum of the magnetic bay observed in SML. We therefore infer that it marks the peak of ΦN. Magnitude Variations It can clearly be seen that field-aligned currents are strongly driven by substorms. The currents increase in magnitude as ΦD increases, with rises in both clearly observed in Figure 4. This indicates that dayside reconnection drives currents through the Birkeland current system during the substorm growth phase. It was previously shown by Coxon et al. [2014] that the magnitude of the Birkeland currents gets larger with increases in the value of ΦD and also the AL index, consistent with the result here. The total increase in the current magnitude over the substorm cycle is 1 MA. At substorm onset, the growth phase of the substorm is over, and the dayside reconnection rate on average begins to drop [Freeman and Morley, 2009], but the current magnitudes continue to increase. Since the onset of the substorm implies that magnetic reconnection has initiated in the magnetotail, we can infer that this is due to driving from the nightside reconnection increasing as the dayside reconnection rate decreases, and represents part of the expansion phase of the substorm. Our inference is corroborated by an examination of the current densities over a 2 h epoch performed by Clausen et al. [2013a]. The fact that the current magnitudes reach their peak coincident with the extent of the current ovals indicates that it is the point at which the sum of reconnection rates ΦD + ΦN, related to the total cross polar cap potential [Milan, 2013], is at its peak. The substorm current wedge also drives current through the Birkeland current system, causing the currents to increase more rapidly; the increase is a signature of the substorm expansion phase that can be seen in SML, which measures the magnetic perturbation associated with the substorm current wedge (SCW). As the SCW begins to decrease in magnitude so too does SML, coinciding with an expected decrease of the Birkeland currents. This is consistent with observations by Murphy et al. [2013] which show that both regions 1 and 2 are enhanced during the substorm cycle but is inconsistent with observations that substorms are seen only in the R1 currents [Clausen et al., 2013b]. 
It is clear that field-aligned currents are strongly driven by magnetic reconnection events in the solar wind-magnetosphere coupled system. The magnitude of the two current systems increases by up to 1 MA over the course of a substorm cycle, but the two current systems do not react identically to the onset of a substorm. The disparity in reaction can be seen by examination of the ratio J1/J2, which increases to as much as 1.2 after substorm onset. The increase implies proportionally more current flowing through R1, even though both current systems are enhanced. This may explain why previous observations of AMPERE data have differed on the role of R2 during the substorm cycle [Murphy et al., 2013; Clausen et al., 2013b]: our observations suggest that although R1 experiences a more notable enhancement, both current systems react to substorms. R1 experiencing larger enhancements than R2 is consistent with previous examinations of substorms [Sergeev et al., 2014]. The high ratio suggests that more R1 current closes across the noon/midnight meridian through the ionosphere during the substorm expansion phase, probably indicating significant current closure through the substorm auroral bulge. Usually, Hall currents flow sunward across the polar cap and antisunward around the flanks of the polar cap (also called the DP-2 current system), with Pedersen currents flowing in the auroral zone and also duskward across the polar cap. In this case, R1 currents can either close through R2 currents (via the auroral zone) or through R1 currents on the opposite side of the polar cap. During the substorm expansion phase, the substorm electrojet (also called DP-1) flows westward across midnight (from the dawn sector to the dusk sector), meaning that more R1 current can flow duskward and close through R1, explaining how the onset of the expansion phase can increase the relative strength of R1 to R2. Finally, the Birkeland currents decrease both in magnitude and in spatial extent after the expansion phase, as the recovery phase leads back into a quiescent magnetospheric state. Reactions Varying With Geomagnetic Conditions Bins I-V show the change in the reaction of the Birkeland currents to substorms as the current ovals at onset are more equatorward (Figures 5-7). Bin I merits a separate discussion, presented in section 5.2.1. Within Bins II-V, it has been observed that the dayside reconnection rate is higher as the onset colatitude of the current ovals increases, which is also true for SML and SMU (Figure 5). In order for the current oval to reach high colatitudes, the dayside reconnection rate must be high to add enough open flux to expand the oval before the start of nightside reconnection at substorm onset. As such, this is consistent with existing pictures of the ECPC paradigm. 
As explained in section 1, substorm onsets that occur with higher amounts of open magnetic flux are more intense due to the higher amount of energy contained within the magnetotail [Milan et al., 2009a]. This explains the larger magnetic bay in SML subsequent to substorm onset, as a more intense substorm is triggered. It also explains why the negative change in l1 and l2 is larger as the onset colatitude represented by the bins increases, since open magnetic flux is closed in the magnetotail at a higher rate and thus the polar cap will contract more quickly (Figure 7). It should be remembered, however, that l1 and l2 do not vary linearly with the open flux content of the magnetosphere. The Birkeland current magnitudes become more enhanced at all points of the epoch, per bin, as the onset colatitude increases. Since the current magnitudes are associated with higher reconnection rates, the higher ΦD values observed with onset colatitude are evidently responsible for driving the increase. As described, the ratio J1/J2 at the start of the epoch is larger as the onset colatitudes increase, indicating that R1 currents are relatively larger than R2 currents with higher geomagnetic activity, consistent with previous observations [Coxon et al., 2014]. The enhancement in the ratio that occurs after substorm onset becomes less obvious from Bin II to Bin V, however. We conclude that this indicates that the enhancement to the Birkeland currents is more evenly spread between R1 and R2 as conditions become more extreme, or that the SCW intensity does not depend on onset latitude (Figure 6). Signatures Seen at Small Substorm Onset Colatitudes Contrary to the other bins, Bin I shows a decrease in SML, SMU, l1, l2, J1, and J2 prior to onset. At onset, there is a sudden increase in these values. Unlike the other bins, which show ΦD increasing over the 2 h preceding onset, the dayside reconnection rate begins to increase approximately 20 min prior to onset and remains high until after onset, which would result in the coupled solar wind-magnetosphere system experiencing the addition of open flux during the substorm. This would therefore lead to SMU increasing with enhanced ionospheric convection and an increase in the size of the current ovals, both of which are seen just after the dayside reconnection rate has increased. Conclusions The work described in this paper gives an overview of the reaction of the Birkeland current system (in both magnitude and spatial extent) to substorms within the context of the expanding/contracting polar cap paradigm. We have demonstrated their reaction during various phases of the substorm and show that they become more intense in the growth phase and reach a maximum during the expansion phase soon after onset, decreasing to pre-substorm levels in the recovery phase. These results can be interpreted in the framework of currents being driven by ionospheric flows which are ultimately driven by magnetic reconnection. The magnitude of the two current systems increases by up to 1.25 MA over the course of a substorm cycle, and the ratio J1/J2 increases to as much as 1.2 after substorm onset, suggesting that the SCW enhances both Birkeland current systems but preferentially flows through the poleward region 1 currents. 
We categorize the data by colatitude and assume that larger current ovals imply a larger polar cap and therefore more open flux. The change in the size of the current ovals can be used to pinpoint the stage at which nightside reconnection begins to dominate over dayside reconnection, and we show that nightside reconnection occurs at a higher rate after substorm onset when the current ovals (and therefore the amount of open magnetic flux) are higher.

Figure 1. The near-Earth electric current systems drawn as if Earth were eclipsing the Sun. Shown are the region 1, region 2, Pedersen, magnetopause (Chapman-Ferraro), and ring currents, as well as the location of open and closed terrestrial magnetic field lines. In the magnification of the southern auroral zone, the arrow showing Pedersen current flow across the polar cap is smaller than the arrows for the auroral zone to indicate the relative strength of the Pedersen currents (not to scale). It can be seen from this image how the region 1 current sheet corresponds to the open/closed field line boundary, or OCB.

Figure 2. Histograms showing the value of l1 for the region 1 current oval at t = 0 min (substorm onset) for the (top) Northern and (bottom) Southern Hemispheres. Larger values of l1 at onset imply more active geomagnetic conditions prior to the onset of the substorm. The dotted lines show the boundaries of bins defined in section 2.3.

Figure 3. Substorm-related parameters for a period beginning at midnight on 28 June 2010 and ending at midnight on 3 July 2010. From top to bottom, keograms showing the dawn-dusk meridian for (a) the Northern Hemisphere and (b) the Southern Hemisphere; (c) the R1 current magnitude J1 and R2 current magnitude J2 for the Northern Hemisphere; (d) J1/J2 in the Northern Hemisphere; (e) the dayside reconnection rate ΦD; and (f) the AL and AU indices. Vertical dashed lines represent the locations of substorms as given by SuperMAG. The reader should be aware that upward and downward currents are denoted by red and blue in Figures 3a and 3b (as given by the key at the bottom of the figure), but red and blue are used to denote R1 and R2, respectively, in Figure 3c.

Figure 5. (left column) ΦD as calculated using OMNI data and (right column) SML/SMU with respect to substorm onset at t = 0 min, binned by substorm onset colatitude (increasing from top to bottom). N is the number of substorms in the relevant bin.

Within the context of the ECPC paradigm, a decrease in the extent of the current ovals indicates that the amount of open flux contained within the polar cap decreases between the start of the epoch and the point of substorm onset. Such a decrease can only be explained by magnetic reconnection on the nightside causing the conversion of open to closed flux, which could imply reconnection at a distant neutral line in the magnetotail during extremely quiescent periods.
9,843.4
2014-12-01T00:00:00.000
[ "Physics" ]
Conditional Value-at-risk for Random Immediate Reward Variables in Markov Decision Processes We consider risk minimization problems for Markov decision processes. From the standpoint of making the risk of the random reward variable at each time as small as possible, a risk measure is introduced using conditional value-at-risk for random immediate reward variables in Markov decision processes, under whose risk measure criteria the risk-optimal policies are characterized by the optimality equations for the discounted or average case. As an application, inventory models are considered. Introduction As a measure of risk for income or loss random variables, the variance has been commonly considered since Markowitz's work [1]. The variance has the shortcoming that it does not adequately account for the phenomenon of "fat tails" in distribution functions. In recent years, many risk measures have been generated and analyzed via economically motivated optimization problems, for example, value-at-risk, conditional value-at-risk [2,3], coherent measures of risk [4-6], and convex measures of risk [7,8] and their applications [9,10]. On the other hand, a lot of research considering risk has been carried out by many authors [11-15] in the framework of Markov decision processes (MDPs, for short). In [11,16], risk control for the random total reward in MDPs is discussed. In sequential decision making under uncertain circumstances, it may be better to minimize the total risk over the infinite horizon while controlling the risk at each time. For example, in multiperiod inventory and production problems, we often want to order optimally by an ordering policy such that, while it minimizes the total risk through all the periods, it also makes the risk at each time as small as possible. In this paper, with the above motivation in mind, we introduce a new risk measure for each policy using conditional value-at-risk for random immediate reward variables, under whose risk measure criteria the optimization will be done, respectively, in the discounted and average cases. As an application, the inventory model is considered. In the remainder of this section, we establish the notation that will be used throughout the paper and define the problem with the new risk measure. A Borel set is a Borel subset of a complete separable metric space. For a Borel set X, 𝔅(X) denotes the σ-algebra of Borel subsets of X. For Borel sets X and Y, P(X) and P(X|Y) denote the sets of all probability measures on X and all conditional probability measures on X given Y, respectively. The product of X and Y is denoted by XY. Let ℝ be the set of real numbers. Let I be a random income (or reward) variable on some probability space, with distribution function F_I(x) = P(I ≤ x). We define the inverse function F_I^{-1}(β) = inf{x ∈ ℝ : F_I(x) ≥ β}. Then, the conditional value-at-risk for a level α ∈ (0, 1) is

CV@R_α(I) = −(1/α) ∫₀^α F_I^{-1}(β) dβ.

We note that CV@R_α(I) is specified depending only on the law of the random variable I. For any Borel set X, the set of all bounded and Borel measurable functions on X will be denoted by B(X).
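As a concrete illustration of this definition, the sketch below estimates CV@R_α from Monte Carlo samples of an income variable by negating the mean of the lower α-tail of the empirical distribution. The sample size, distribution, and level are illustrative assumptions, and the sign convention follows the coherent-risk reading reconstructed above.

```python
import numpy as np

def cvar(income_samples, alpha=0.05):
    """Empirical CV@R_alpha of an income variable: the negated mean of
    the lower alpha-tail, so larger values indicate greater risk."""
    x = np.sort(np.asarray(income_samples))
    k = max(1, int(np.floor(alpha * x.size)))  # number of tail samples
    return -x[:k].mean()

rng = np.random.default_rng(0)
income = rng.normal(loc=100.0, scale=20.0, size=100_000)
# For N(100, 20^2) at alpha = 0.05 the tail mean is about 58.7, so the
# estimate is about -58.7: a profitable income carries negative risk.
print(cvar(income, alpha=0.05))
```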
A Markov decision process is a controlled dynamic system defined by a six-tuple: the sets S and A are state and action spaces, respectively; A(x) is a non-empty Borel subset of A which denotes the set of feasible actions when the system is in state x; together with the law of motion, an immediate reward function, and an initial state distribution. The sample space is the product space Ω = (S × A)^∞, such that the projections on the t-th factors describe the state and the action at the t-th time of the process. Π denotes the set of all policies. A policy that selects actions by the same decision function at every time is called stationary. Such a policy will be denoted by f. We want to minimize the total reward risk while making the risk at each time as small as possible. So, using CV@R for the random reward variable, the risk measure will be defined in the discounted or average case as follows. With some abuse of notation, we denote the resulting risk measures for a) the discounted case and b) the average case. The family of risk measures for random reward streams has the same properties as coherent risk measures (cf. [4]), which is shown in the following proposition. The other assertions in Proposition 1.1 are easily proved. This completes the proof. □ By the representation formula of CV@R (cf. [2,3]), the second equality of (7) holds, which completes the proof. □ The value functions of the discounted and average cases are defined respectively, and a policy is called discounted or average risk-optimal, respectively, if it attains the corresponding value. Risk-Optimization In this section, using CV@R for a random reward variable (1), we define a new immediate reward function by which the theory of MDPs is easily applicable. Moreover, sufficient conditions are given for the existence of discounted or average risk-optimal policies. Another Representation of Risk Measures In this subsection, another representation for the discounted (DS) and average (AV) risk measures is given. For any policy, the corresponding immediate reward function r is defined for each x ∈ S and a ∈ A. Then we have the following, which shows that the original problem is equivalent to the new problem with r. Theorem 2.1. It holds that, for any policy, the two formulations coincide. The Discounted Case Here, we derive the optimality equation for the discounted case, which characterizes a discount risk-optimal policy. To this end, we need the following Assumption A. Assumption A. The following 1)-4) hold. … Therefore, from Assumption A 2) and the convergence assumptions, there exists N for which the required bound holds, which implies (12). Thus, by the general convergence theorem (cf. [17]) and (11) and (12), we have the result, and we are now in a position to state the main theorem in the discounted case. Theorem 2.3. Suppose that Assumption A holds. Then: 1) the value function of the discounted case is given by the unique solution to the optimality equation of the discounted case (14); 2) there exists a measurable function that attains the minimum in (14), and the corresponding stationary policy f is discount risk-optimal. The Average Case In order to obtain the optimality equation for the average case, we assume that the assumption below holds, which guarantees the ergodicity of the process. Assumption B. There exists a number …, where ‖·‖ denotes the variation norm for signed measures. One sufficient condition for Assumption B to hold, easily checked in applications, is as follows (cf. [19,20]). Assumption B′. There exists a measure on the state space such that … Theorem 2.4. Suppose that Assumptions A and B hold. Then there exists a solution to the optimality equation of the average case (17). Moreover, there is an average risk-optimal stationary policy f that minimizes the right-hand side of (17). Proof. We have already obtained the required continuity; applying the theory of average MDPs (cf. Corollary 3.6 in [19]), Theorem 2.4 follows, as required. □
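Before turning to the application, it may help to see how the equivalent-MDP formulation can be solved numerically. The sketch below runs value iteration on a discounted optimality equation of the form appearing in Theorem 2.3, using a small synthetic finite MDP in place of the paper's Borel-space setting; the reward and transition data are random placeholders, not quantities from the paper.

```python
import numpy as np

# Synthetic data (assumptions): risk-adjusted immediate reward r[x, a]
# and transition kernel Q[x, a, y]; beta is the discount factor.
n_states, n_actions, beta = 4, 2, 0.9
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, (n_states, n_actions))
Q = rng.uniform(size=(n_states, n_actions, n_states))
Q /= Q.sum(axis=2, keepdims=True)  # normalize rows into probabilities

v = np.zeros(n_states)
for _ in range(1000):
    # Optimality equation of the discounted case (cf. equation (14)):
    # v(x) = min_a [ r(x, a) + beta * sum_y Q(y | x, a) v(y) ]
    q = r + beta * Q @ v
    v_new = q.min(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

policy = q.argmin(axis=1)  # a stationary risk-optimal policy f*
print("value function:", v, "policy:", policy)
```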
An Application to the Inventory Model We consider the single-item model with a finite capacity C < ∞, in which the demands {ξt} are i.i.d. with a distribution function that has a continuous density with respect to the Lebesgue measure. Xt denotes the stock level at the beginning of period t, and the action at is the quantity ordered (and immediately supplied) at the beginning of period t. Putting Zt = min(Xt + at, ξt) as the amount sold during period t, the system equation is given by Xt+1 = Xt + at − Zt. The transition probability is induced by the system equation, and the immediate reward is given as r(x, a), composed of the unit sale price, the unit production cost, and the unit holding cost. Several lemmas are needed for the risk analysis. Let ξ > 0 be a random variable with the given demand distribution. In order to obtain the equivalent MDPs, we specify the immediate reward, where the third equality follows from the monotonicity and homogeneity properties of CV@R. The function L defined above is proved to be a convex function. Lemma 3.2. The following 1)-2) hold. 1) … The second and the third inequalities follow from the monotonicity and the convexity, respectively. This means that CV@R(L(u)) is convex in u. □ To apply Theorems 2.3 and 2.4 to inventory problems, the following is needed. We can now state the main theorem. Theorem 3.3. Suppose that Assumption C holds. Then, for each of the discounted and average cases, there exists a constant-level stationary policy f* which is optimal; that is, the ordered amount brings the stock up to some critical level x*, where the critical level x* for each case is given by the corresponding optimality equation (14) or (17). Proof. First we verify that 1)-4) of Assumption A are satisfied. A 1)-A 4) are clearly true by definition. Since r(x, a) is convex, using the result of Iglehart [21] (cf. [22]), it follows that the right-hand sides of the corresponding optimality equations (14) and (16) are convex. So, it is easily shown that there exists a risk-optimal policy f* of constant-level type (23) for each case. The proof is complete. □ Acknowledgements This study was partly supported by "Development of methodologies for risk trade-off analysis toward optimizing management of chemicals" funded by the New Energy and Industrial Technology Development Organization (NEDO). Throughout this paper, we suppose that the set … This shows that D is closed; similarly, we can prove the corresponding property for the other sets. The state space and action space are …; assertion (16) in Assumption B′ holds. Thus, Theorems 2.3 and 2.4 are applicable.
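To illustrate the constant-level form of the optimal policy in Theorem 3.3, the following sketch simulates an order-up-to policy in a single-item inventory model. The critical level, prices, and demand distribution are illustrative assumptions, not values obtained from the optimality equations (14) or (17).

```python
import numpy as np

# Illustrative parameters (assumptions, not derived from (14) or (17)).
CAPACITY, CRITICAL_LEVEL = 20.0, 12.0   # C and x*
PRICE, PROD_COST, HOLD_COST = 5.0, 3.0, 0.5

def order_up_to(stock):
    """Constant-level policy: order up to x* whenever stock is below it."""
    return max(0.0, min(CRITICAL_LEVEL, CAPACITY) - stock)

rng = np.random.default_rng(0)
stock, rewards = 0.0, []
for _ in range(10_000):
    a = order_up_to(stock)
    demand = rng.exponential(scale=8.0)   # i.i.d. demand with a density
    sold = min(stock + a, demand)         # amount sold during the period
    reward = PRICE * sold - PROD_COST * a - HOLD_COST * (stock + a - sold)
    rewards.append(reward)
    stock = stock + a - sold              # system equation

print("average per-period reward:", np.mean(rewards))
```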
2,224.6
2011-09-19T00:00:00.000
[ "Computer Science", "Mathematics" ]
Al7129 Metal matrix enhanced with Titanium carbide (TiC) and Boron carbide (B4C) optimized machining parameters utilizing Taguchi method for Surface roughness The influence of spindle speed, feed rate, and wt% of alumina particles on lowering surface roughness during turning of the Al7129/TiC/B4C hybrid composite is investigated in this study. The composite is turned using a TiN coated solid carbide tool. Taguchi's experimental design concept is utilized to optimize three levels of design parameters for a better surface finish. The results of the experiments and the microstructure of the machined surface demonstrate that the samples with the lowest feed rate perform better in terms of surface roughness. Surface roughness is also influenced by the wt% of alumina, followed by spindle speed, when turning the produced samples. Introduction Aluminum matrix composite (AMC) is a relatively new material that has a wide range of uses due to its quantifiable benefits. In many engineering domains, AMCs are employed in a variety of structural, non-structural, and functional applications. In recent years, particulate-reinforced AMCs have been replacing the conventional materials used in aircraft and automotive components. The most common applications are aircraft engine cowlings, landing gear doors, automotive pistons, bearings, etc. In those major applications, the manufactured components are expected to have a good surface finish and accuracy. Particulate-reinforced aluminium (Al) based composites are found to be very difficult to machine due to the presence of hard ceramic oxide reinforcement. Turning is one of the most prevalent machining operations associated with the manufacturing process. In practice, producing machined parts with a decent surface quality has proven to be extremely challenging. As a result, minimising surface roughness in turned parts is difficult and must be managed. The hard ceramic particles in the matrix, such as Al2O3/SiC, make it difficult to machine, which affects the surface finish by increasing the composite's surface roughness. In such circumstances, adding graphite to the matrix minimises tool wear during machining and improves the composite's surface finish [1]. Most of the studies on metal matrix composites (MMCs) are focused on the tool wear characteristics during machining of aluminium alloy composites. The surface finish of the component can be varied through process parameters such as spindle speed, feed, etc. The present work is focused on minimizing surface roughness in turning of the Al7129/TiC/B4C hybrid composite. From the various literature available, it has been observed that feed rate, cutting speed, and wt% of the reinforcements are key factors influencing surface roughness. Palanikumar and Karthikeyan [2] made an attempt at assessing the factors influencing surface roughness in machining of an Al/SiC particulate composite. They used K10 tungsten carbide tool inserts for machining. The machining parameters considered were vol% fraction of SiC, cutting speed, depth of cut, and feed rate. They employed the ANOVA technique to optimize the machining parameters. Saravanakumar and Sasikumar [3] made a study on the prediction of surface roughness in turning using design of experiments. They concluded that the selection of reinforcements plays an important role in improving the material properties and machinability of the composite. 
Considering two levels of factors, they developed a mathematical model for the proposed cutting parameters. It contributes to the product's durability. The Taguchi method can be applied in a very short period of time and with minimal effort. As a result, Taguchi's method is being used in a variety of industries to improve process quality in the manufacturing sector. Surface roughness and cutting force are both critical parameters in the machining process [4-8]. Cutting force data are required for machining power calculations. Cutting forces have an impact on dimensional accuracy, workpiece deformation, and chip formation. In industries, components with a specific surface roughness are always required based on the needs of the customer. This can be accomplished through the optimization process. Earlier studies also reported that the quality of drilled holes can be improved by proper selection of cutting parameters [9-14]. In the present study, an attempt has been made to optimize the machining parameters for better surface roughness in turning of the Al7129/TiC/B4C hybrid composite.

Experimental methods

2.1. Materials and methods

For the present work, Aluminium 7129 (Al7129) has been used as the matrix phase, and Titanium carbide (TiC) and Boron carbide (B4C) powders as the reinforcement phase. Al7129 has been chosen as the matrix material due to its lower strength and better machinability performance. In order to increase the strength of the matrix, Al2O3 particles are added as reinforcement, whereas the addition of TiC and B4C particles improves the machinability. The chemical composition of the base metal and its physical properties are shown in Tables 1 and 2, respectively, while the chemical compositions and physical properties of the reinforcements are shown in Tables 3 and 4, respectively. Al2O3 provides good wettability between the matrix and reinforcement particles and acts as a solid lubricant, which helps in easy machining of the composite. The composites have been fabricated using the stir casting technique at an optimal speed to ensure even distribution of the reinforcements through the matrix. The composite is fabricated at three different compositions, taken as 3-9 wt% in steps of 3 wt% of the matrix material. The machined composites were tested for surface roughness using a Handy Surf E-DT5706 surface roughness measuring device, which consists of a connected probe. Figure 1 shows the setup used and the surface roughness measuring device.

Taguchi method

According to much of the published research, Taguchi's experimental design is used to reduce quality loss by applying one of the three criteria provided in Taguchi's design analysis. They are termed "the-nominal-the-best," "the-larger-the-better," and "the-smaller-the-better." The concept of the Taguchi method is to find the best combination of design parameters by conducting a minimum number of experiments. As a result, it provides main effects and interaction effects, using the S/N ratio to quantify the data variation. For the present analysis, the machining parameters have to be optimized for minimum surface roughness. Therefore, the "the-smaller-the-better" criterion is chosen, using the equation

S/N = -10 log10[(1/n) Σ y_i^2],

where n is the number of replications of the experiment and y_i is the smaller-the-better quality score of experimental datum i. The statistical software Minitab 18 is used for the analysis, and the results obtained are the main effects and the response table for the variables.
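As a brief illustration (not part of the original study), the smaller-the-better S/N ratio defined above can be computed directly; the replicate roughness values below are hypothetical.

    import numpy as np

    def sn_smaller_is_better(y):
        """Taguchi smaller-the-better signal-to-noise ratio in dB:
        S/N = -10*log10(mean(y_i^2)) over the n replications."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    # Hypothetical surface-roughness replicates (micrometres) for one trial
    print(sn_smaller_is_better([1.8, 1.9, 1.7]))   # higher S/N = better (smoother) surface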
The impacts of the design parameters and levels on the response variables are depicted in the main effects plots for the S/N ratio. The optimal factor levels are those that maximize the corresponding S/N ratio. A superior surface finish is obtained at the highest spindle speed (250 rpm), the lowest feed rate (25 mm/min), and 6 wt% of alumina. According to the findings, the surface roughness of the composite increases with increasing feed rate and decreases with increasing spindle speed. When the feed rate is increased, the load on the tool increases, which increases the cutting force, resulting in a poor surface finish. A similar effect was observed at reduced spindle speed in all situations. The response table for the S/N ratio clearly shows that feed is the most influential factor, since it is ranked first, followed by alumina content and spindle speed, in minimizing the surface roughness.

Analysis of variance (ANOVA)

Analysis of variance can be used to investigate and model the relationship between a response variable and independent factors. With the help of the P-value, it was also possible to check which factors are statistically significant at a 95% confidence level; a P-value of less than or equal to 0.05 indicates significance.

Confirmation tests

A confirmation test has to be carried out at the optimal level of the parameters. The parametric combination obtained for minimum surface roughness was 250 rpm (spindle speed), 25 mm/min (feed rate), and 6% (wt of alumina).

Conclusions

Using Taguchi's experimental design, the process parameters that have an impact on surface roughness during machining of the Al7129/TiC/B4C hybrid composite are optimized in this study. The experimental outcomes drawn are:
i. As the spindle speed is raised and the feed rate is lowered, the surface roughness of the turned work piece reduces.
ii. It is visible from the main effects plot for the S/N ratio that 6 wt% alumina gives a superior surface finish compared with the other alumina compositions.
iii. According to the S/N ratio response table, feed rate is the most important element in lowering surface roughness during machining of the Al7129/TiC/B4C hybrid composite, followed by wt% alumina and spindle speed.
iv. The optimal parametric conditions obtained for minimizing surface roughness are the highest spindle speed of 250 rpm, the lowest feed rate of 25 mm/min, and 6 wt% of Al2O3.
v. The microstructure of the machined samples clearly reveals that the 6% Al2O3 composition shows less deformation during machining when compared to the other samples.
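To make the response-table step concrete, here is a small sketch (illustrative only; the design matrix and roughness values are hypothetical, not the study's data) that ranks factors by the range (delta) of their mean S/N ratios, as a Minitab response table does.

    import numpy as np

    # Hypothetical L9-style results: (spindle speed rpm, feed mm/min, wt% alumina, Ra)
    trials = [(150, 25, 3, 2.1), (150, 50, 6, 2.8), (150, 75, 9, 3.4),
              (200, 25, 6, 1.9), (200, 50, 9, 2.9), (200, 75, 3, 3.2),
              (250, 25, 9, 1.7), (250, 50, 3, 2.6), (250, 75, 6, 3.1)]

    sn = [-10 * np.log10(ra ** 2) for *_, ra in trials]  # one observation per trial

    for idx, name in enumerate(["speed", "feed", "alumina"]):
        levels = sorted({t[idx] for t in trials})
        means = {lv: np.mean([s for t, s in zip(trials, sn) if t[idx] == lv]) for lv in levels}
        delta = max(means.values()) - min(means.values())   # larger delta = more influential factor
        print(name, {lv: round(m, 2) for lv, m in means.items()}, "delta:", round(delta, 2))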
2,028.6
2021-05-30T00:00:00.000
[ "Materials Science", "Engineering" ]
Blog-Based Writing Instruction in Fostering EFL Writing Performance: Students' Belief and Attitudes

The existence of blogs offers an innovation that encourages a new writing style integrated with technology. However, it seems that there are still not many EFL writing classes in Indonesia using online blogs, and little research has been conducted on students' perception of blogs in writing. This study aims to analyze Indonesian EFL learners' perceptions toward the use of blogs as the medium of learning practices in an English writing class. This study employed a questionnaire and semi-structured interviews. For 12 weeks, thirty-four students participated in this study by posting weekly entries from their English writing assignments on blogs. The data from the questionnaire reveal that the majority of students strongly agree or agree with the usefulness, effectiveness, attitudes, and practicality of using a blog in English writing class. This means that the blog as a learning medium is very influential in the development of students' learning process. The interview results also showed that online blogging provided many advantages, such as being easy to operate, writing that can be accessed by all readers in the world, receiving and giving comments on friends' blogs, no longer needing stationery, and encouraging students' creativity to write systematically. This is an open access article under the CC BY-SA license.

INTRODUCTION

Discussions on blogging and its effects on learning have expanded in second-language writing due to increased access to technologies in the classroom. Blogging quickly gained popularity among language learners, supported by web designers and developers. In Malaysia, students in science classes use blogs as online portfolios to share experiences and post writing assignments and discussions conducted in the course (Fathi et al., 2019; H. J. Zou & Hyland, 2020). In Iran, students studying English use blog buddies for engineering classes. Students write down their learning outcomes on blogs and receive feedback from other students, resulting in a significant improvement in students' writing abilities (Arochman & Yosintha, 2020). Furthermore, Vietnamese second-year English students post their writings on the blog and give and receive suggestions for revision through blog comments from classmates (Pham & Usaha, 2016). Likewise, at Midwestern universities in the US, regular interaction of students on blogs fosters collaborative skills through submitting reflections as journal/blog posts and commenting on classmates' reflections (Thomas, 2017). In Indonesia, since blogging is a useful tool for students to develop their English writing skills, an increasing number of EFL teachers have integrated blogging into their classroom activities (Fithriani et al., 2019; Rahayu, 2021).
The rapid advancement of technological growth with Web 3.0 has created numerous innovative and engaging learning opportunities. Students now have access to a variety of Web 2.0 publishing tools, including blogging, social bookmarking, virtual world activities, audio and video podcasting, social networking, and wiki writing. Approximately 81% of school students use the Internet for various purposes, among them blogging (L. Lee, 2017; Simamora, 2020). The number of studies on weblogs as useful teaching tools in higher education keeps growing. Many university professors are interested in giving students the chance to produce content in online forms as a way of presenting their understanding. The use of blogs in teaching and learning in higher education, notably in Malaysia, Indonesia and India, shows that educational blogging is not a new phenomenon. Blogs have been used in foreign language instruction in recent years because they give aspiring writers an authentic language environment in which to reach a wider audience, as well as a way for the teacher and other students to provide critique and encourage negotiation for meaning. Educational blogging can, of course, be used to increase interest in using Web 2.0 technologies to support learning in higher education (Alsubaie & Madini, 2018; Shafira Lubis & Hamuddin, 2019).

In the language teaching classroom, blogging has long been considered a promising approach. This approach is popular in the field of teaching English as a second/foreign language (ESL/EFL) because it can increase the communicative potential of students and teachers and help expand learning opportunities. Language teachers can use it to teach English skills, such as writing, reading and speaking (Bahanshal et al., 2022; Murniati et al., 2018; Yeh et al., 2019). Furthermore, language learners can use it for various purposes such as discussions, reflections, sharing pictures, links and learning resources, vocabulary enrichment, grammar and paragraph writing (Garcia et al., 2019; Huang, 2016). Blogging in language classrooms has been described as "a more appropriate choice in language expression, giving new perspectives to the learner's thinking, and inspiring the awareness that one's voice resounds in distant areas of the world and is heard by others" (Reinhardt, 2018; Villalba, 2022). In addition, based on previous study results, blogs can be dynamic platforms that promote widespread practice, authorship, and the development of learning strategies (Prykhodko et al., 2019). By using blogs, teachers can upload information that is instantly available on the web and potentially attracts comments from other users, including classmates and unknown bloggers.

Several ELT-related studies have demonstrated the efficiency of blogs in enhancing knowledge, motivation, and critical thinking in higher institutions. The blog is used as a publication medium that does not require much money, for brand building, and for sharpening writing skills (Galvin & Greenhow, 2020; Zheng et al., 2017). It enables students to give feedback on each other's work and communicate ideas, which promotes a more informal, student-centered, and self-paced environment for reading and writing in the target language (Alsubaie & Madini, 2018; Zhang, 2021; H. J.
Zou & Hyland, 2020). As a result of the tendency toward using blogs in educational settings, research attention on the usage of blogs in L2 writing classes has increased. This platform enables students to engage in the writing process, which involves drafting, reading, revising, and giving feedback on other postings. Since textbooks cannot provide all of the content necessary to teach the writing process, the weblog offers ways to develop writing abilities using less expensive tools and procedures. For example, students can publish their writing freely, share their challenges, and get feedback on their blogs. It encourages a collaborative setting where students can exchange ideas and information (L. Lee, 2020; Sulistyo et al., 2019).

Furthermore, numerous studies found that blogging has the potential to improve self-regulated and autonomous learning as well as learner motivation, engagement, and self-efficacy in L2 writing (Qiu & Lee, 2020; Wiederhold, 2018). Learners are more motivated to write because they are aware that their teacher, peers, and perhaps the world are reading their blog. It encourages students to develop and gain an understanding of the writing process. Peer feedback through blogging improves learners' grammatical accuracy and vocabulary, and it also helps them further their interlanguage development (Alsubaie & Madini, 2018; Tsao, 2021). The blog is a medium for individual written expression, a means of communication, a platform for debate, and a way to foster relationships in a real-world online community. Students have autonomy in organizing the entire content of their blogs (Han, 2023; Yakut & Aydın, 2015). Student comments, and the feedback students give and receive from colleagues on their respective blogs, are very important in the language learning process. Through blog writing, students master writing ability, are inspired to engage in writing activities, and effectively participate in social and real-world contexts.

Although blogs are popular in education, only a few English teachers use them to teach writing. Only a small amount of research, primarily qualitative in nature, has been done to examine EFL students' perceptions and attitudes toward blogs in the Indonesian context (Bahanshal et al., 2022; Fithriani et al., 2019; Hamuddin & Dahler, 2018). Several studies have used blogs as a platform for teaching writing to increase writing abilities through experimental research, but they did not look at students' experiences and perceptions after utilizing blogs (Arochman & Yosintha, 2020; Rahayu, 2021; Sa'diyah & Cahyono, 2019). Several studies tested the effect of blog integration on cooperative learning and suggested that peer feedback facilitated learning to write (Jeong, 2016; Kuo et al., 2017). However, there is still a lack of focus given to writing on blogs combined with peer feedback.
Peer feedback on blogs, in contrast to traditional face-to-face peer feedback, is an asynchronous computer-mediated communication (CMC) form. Because online CMC modes do not involve face-to-face interaction, students may feel less pressure to offer critical feedback there. Reviewers are permitted extended response times in CMC modes, and they can offer comments and correct writing errors at a convenient time and at their own leisure. Furthermore, a previous study noted that students are required to read critically when they are involved in the peer review process (Huang, 2016). Peer feedback has long been established as a collaborative activity that is effective for L2 writing improvement. Peer feedback is advantageous for both those who give it and those who receive it. Peer feedback through blogs helps learners strengthen their interlanguage skills while also enhancing their grammatical accuracy and vocabulary (Alsubaie & Madini, 2018; Tsao, 2021). In the process of learning a language, student comments and the feedback they provide and get from peers on their personal blogs are important. Until now, a lot of teachers have utilized blogs as a teaching tool for writing classes, but few have added feedback from peers.

To address the aforementioned gaps, this research develops the suggestions from previous studies by providing peer feedback on the students' blog writing (Kuo et al., 2017; Nugroho et al., 2017). As a lecturer in English writing at IAIN Palopo, the researcher has been using blogs in writing classes for two semesters. Previous research only used blogs as a medium for writing that was then checked by the teacher. Thus, in this study the researcher, who is also the lecturer, added new activities in the form of peer feedback. Students submit their written work on their blog and send the link to the WhatsApp group for the teacher to check. All students in the class can read their friends' work and write down their feedback. The teacher also provides feedback on student work. Peer feedback provides a valuable and unique perspective on overall student performance. Peer assessment can also motivate students to produce high quality work.

Related to this background, the present study aims to discover students' beliefs and attitudes toward the use of blogs as the medium for writing activity, as well as the value of peer feedback. It is important to note that, as Indonesia is a developing country, not all students and lecturers, particularly those at IAIN Palopo where this research was conducted, are familiar with the integration of ICT, especially the use of blogs, in their formal education. One of a teacher's main tasks is to work to improve students' digital literacy so that they can use ICT in the 21st century. The results of this study are expected to provide teachers with an overview of teaching models of writing through blogs and of students' beliefs about using peer feedback on their blogs. Furthermore, the perceptions of these students will provide input on how blogs and peer feedback should be used for teaching writing. The study is also expected to serve as a model for teaching writing to be implemented in higher education, especially in Indonesian universities.
METHOD

This study is an exploratory study adopting a quantitative and qualitative approach to investigate students' perceptions of the integration of blogs in the EFL writing class (Lassoued et al., 2020). The research was conducted at the end of the second semester of the 2022 academic year. The English writing course was held in 12 two-hour meetings. Writing activities were carried out using class blogs as e-forums, connecting student blogs to class blogs and teacher blogs so that students could interact and discuss their problems with one another. The students were asked to write their drafts on the blog and publish them for review by their classmates and teachers. They then exchanged comments as feedback between members of the same class and their teacher via the class blog. The participants were fifth-semester students in the English Education department at IAIN Palopo, totaling 34 students, who were selected purposively. Participants consisted of 26 women and 8 men. This class was chosen purposively as the object of research because the students had studied academic writing for one semester. They had used blogs to draft their writing on ten different topics over one semester, so the participants already had sufficient experience of blog-based writing. The participants agreed to be investigated further about their experiences with online blogging. This study utilized a questionnaire to collect the quantitative data and semi-structured interviews for the qualitative data. All participants received questionnaires asking for their opinions on the use of online blogging. The instrument is shown in Table 1. Based on Table 1, the questionnaire comprised 15 statements in total, with 4 items for usefulness, 3 items for effectiveness, 4 items for interest, and 4 items for practicality (the practicality items probe the level of ease and practicality in operating the blog). The questionnaire uses a Likert scale as the measurement scale. Respondents answered each item by choosing one of four options: strongly disagree, disagree, agree, and strongly agree. An online questionnaire was distributed to find out the majority of students' perceptions of the utilization of blogs as learning media in the English writing class. Additionally, semi-structured interviews with fifteen students as representatives were undertaken as a follow-up. The purpose of the interview was to obtain additional information related to student blogging and to support the results of the questionnaire filled out by the students. The researchers adopted stratified sampling to ensure the presence of a certain representative subgroup of the population under study (Mackey & Gass, 2005). The percentage of data from the questionnaire was calculated for each item and then sorted based on the tendency of the score for each item. The qualitative data obtained from the semi-structured interviews were recorded, analysed, and coded using thematic content analysis based on Braun and Clarke's procedures of thematic analysis. The questions were specifically designed to collect information related to the previous statements on the questionnaire and then to encourage more opinions from each participant, because the goal of the interview was to explore each participant's perspectives and experiences of the use of blogs.
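As an illustrative aside (not taken from the study), the per-item percentage tabulation described above can be done with a few lines of Python; the item and the response counts below are hypothetical.

    from collections import Counter

    OPTIONS = ["strongly disagree", "disagree", "agree", "strongly agree"]

    def percentages(responses):
        """Percentage of each Likert option for one questionnaire item."""
        counts = Counter(responses)
        n = len(responses)
        return {opt: round(100 * counts[opt] / n) for opt in OPTIONS}

    # Hypothetical responses from 34 participants to one usefulness item
    item = ["agree"] * 11 + ["strongly agree"] * 23
    print(percentages(item))  # -> {'strongly disagree': 0, 'disagree': 0, 'agree': 32, 'strongly agree': 68}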
Result

The researcher surveyed students' blogging experience through a questionnaire after implementing blogging for one semester. The questionnaire data are shown in Table 2. The data in Table 2 display participants' perceptions of the usefulness of blogs in English writing. Using the blog as a medium for improving writing skills has the highest score, followed by its use as a medium for self-expression, then for information exchange, and last as social networking. A large proportion (68%) strongly agree and 32% agree that blogs can be a learning medium for improving writing skills. 64% strongly agree and 34% agree that they used it as a writing medium for self-expression. Almost two thirds (67%) of the participants used it for information exchange. Lastly, just under half (48%) agreed that the blog can be used as a social networking medium. In general, the results reveal that the participants prioritized improving their writing skills.

A large majority (91%) of the students either strongly agree or agree that their writing skills have improved since writing on the blog. Besides that, 80% strongly agree or agree that blogs can improve their enthusiasm for the writing process, while a minority (20%) chose to disagree. Lastly, 81% of students strongly agreed or agreed that they were motivated to write on the blog; only a small minority (19%) disagreed. The effectiveness of blogs in writing is shown in Table 3.

Table 4 displays the participants' interest toward the blog. In general, the main activities on the blog are writing and uploading posts, and then reading and commenting on posts by classmates. Overall, most participants show a positive response. For example, 85% of the participants strongly agree or agree that they like posting their writing on the blog. Similarly, 90% like reading their classmates' blogs. Even though 90% liked reading their classmates' posts, only 74% went on to comment on them. Most impressively, 99% of participants liked it when their lecturer commented on their posts.

Table 5 gives information on the practicality of blogs in English writing. It is worth noticing that around half (56%) of students disagree or strongly disagree that blogs can be used anytime and anywhere, while less than half (44%) agree or strongly agree that writing on a blog can be done anywhere. On the other hand, a large majority (90%) strongly agree or agree that a blog is easy to operate. Responding to the statement "It is easy for me to edit and type my essay," 72% of the participants agreed. Finally, 68% have adequate internet access, while the remaining 32% still do not. Internet access is the main key to being able to interact well on the blog.
Regarding the interviews, the researcher also obtained information about students' perceptions of the blog. In this case, the researcher interviewed only 5 students as representatives. Based on the student responses, it can be seen that students like to write blogs because they write using only their phones, and all posts on the blog are open to readers worldwide. There is no longer a need to carry writing tools when learning to write. In contrast, some students gave negative responses: they found typing text to post on a blog a major hurdle, and found writing in a book much more practical than typing. Some students commented that the interesting thing was reading and commenting on other friends' posts; others responded that writing on a blog encouraged them to think critically, so that their writing would be systematic. The responses above also revealed some obstacles to blogging: participants complained about the time spent on typing and editing, and editing and posting on the blog require internet access. It can be seen that blogs should be applied in teaching writing because blogs can help teachers give assignments to students, and students can also do their assignments anytime and anywhere. Students can also learn more about blogs.

It is clear from the three opinions that peer feedback helps students improve their writing. Students try to minimize errors and are more careful in writing so that their writing receives minimal correction. In addition, it encourages them to think critically in order to check their friends' writing and improve their own. The comments above show that peer feedback on blogs makes it easier for students to edit and improve their writing. Peer feedback on blogs is more practical to respond to and correct because there is no need to write by hand anymore.

Discussion

This study investigated students' perceptions toward the integration of blogs in the English writing class. The results reveal that the majority of students had positive experiences using the blog. The findings support earlier claims by previous researchers that blogging-assisted language learning improves EFL students' writing abilities, and they back up the early empirical data showing that blogging fosters better learner attitudes (Alsamadani, 2017; Galvin & Greenhow, 2020; Lin, 2019; Samah A Alenezi, 2022). The frequent blogging supports the belief of other studies which revealed that blogging satisfies students' self-expression and fulfills the educational need for linguistic feedback. In addition, it provides bloggers with private spaces to express themselves freely, especially in writing (Al-Jarf, 2022; L. Lee, 2017).

Based on the questionnaire, the students' responses are overwhelmingly positive toward the use of blogs in English writing. This shows that the blogging approach is suitable for EFL writing classes, in line with previous studies insisting that frequent blogging can improve students' enthusiasm for writing (Lin, 2019; Sulistyo et al., 2019). The blog is also easy to operate and requires only a phone to write. Through blogging, students gain control of the online community and the content they produce. The most desirable advantage is flexibility: the blog can be accessed from any location at any time, and can be updated and edited in terms of posting new articles, images, links, and videos (Alsubaie & Madini, 2018; D. Lee, 2022).
In the interview session, 5 students gave their perceptions about using blogs in English writing, which were very diverse. They are happy with blog media because, with mobile phones in their hands, they can publish their writings for the world to read. They no longer need to prepare stationery to write. What they like the most is that they can do assignments wherever they are. Therefore, the implementation of blog media in teaching writing can be applied by teachers, especially in teaching English writing. With plenty of interaction between the blog's writers and users, a blog's specific characteristics as an easily accessible online platform also increase readership. These findings support the claim made by a previous study that blogging makes students responsible for the online environment and the work they publish, which results in a change from a traditional passive source of information (Fathi et al., 2019). All of the students remembered their blogging experiences as being pleasant, according to the interview that was carried out (Reinhardt, 2018; H. Zou & Hyland, 2019).

In the blogging process, students play the dual role of posting their own entries, and reading and then responding to each other's posts. Blogging is not just a process of practicing writing skills; it also carries a full social responsibility in which ideas are shared and exchanged with a global audience (Han, 2023; Rahayu, 2021). To support this function, network-based writing is needed to encourage students to pay attention to writer-reader interactions and to think critically about how readers interpret their writing. As a result, students will be careful in writing and giving responses because of this social responsibility. Posts that are archived in reverse chronological order empower students to use metacognitive skills to monitor and assess their own writing performance (Istiqomah & Siswono, 2020; Naghdipour, 2022).
In response to the aforementioned research findings, blog writing can be organized in a variety of ways. Teachers can divide lesson materials, student tasks, and teacher feedback by creating many "class blogs" under one account. Alternatively, each student can be required to create a personal blog, with separate blogs that are accessible solely to instructors and students as well as blogs that are public, and each student can be required to leave comments or ideas on their classmates' blogs in order to promote active learning. To monitor the students' progress, teachers are expected to keep an eye on the growth of student blogs by giving comments and assessing the outcomes of their assignments. Finally, to encourage students, the instructor may offer rewards for the best blogs at the end of the semester. The activities above are expected to make blogging a highly effective method for assisting students in practicing and developing their language skills. This study has implications for educators in taking into consideration the use of blog writing combined with peer feedback. The advantages and disadvantages of using blogs as writing media provide an overview for educators to design the best way to cover these deficiencies and maintain the effectiveness of using blogs. In this technological era, it is hoped that educators can integrate ICT into teaching, one avenue being the use of blogs. For students, the use of blogs will provide a new, more meaningful experience in learning web-based writing. Additionally, policymakers could use these findings to organize teacher training programs and the rollout of the new ESL curriculum for enhancing teaching strategies in writing classes. It is hoped that future studies will focus on blogging for other language skills. This study has several limitations, including the small number of participants and the assessment of only aspects of student perception. It is recommended that future researchers use a larger sample and examine other aspects such as student engagement and the effectiveness of using blogs combined with peer feedback.

CONCLUSION

Blogging gives students a creative and communicative platform where they may communicate with each other and present themselves in a meaningful and positive way. The students have positive perceptions of using blogs in English writing. The questionnaire measured the usefulness, effectiveness, interest, and practicality of using the blog combined with peer feedback in the English writing class, and the data show strong agreement on all four aspects. This means that blogs as learning media are very influential in the development of students' learning process. The interview results also showed that, in the students' perceptions, using blogs in the learning process provided many advantages, such as being easy to operate, writing that can be accessed by all readers in the world, the ability to comment on friends' blogs, no longer needing stationery, and improved enthusiasm in the writing class. In addition, peer feedback in online blogging allowed the students to become active agents in the evaluation process, which they believed had enabled them to improve their own writing and critical thinking.

REFERENCES

Al-Jarf, R. (2022). Blogging About Current Global Events in the EFL Writing Classroom: Effects on Skill

Table 1. Instrument Grid for Student Perceptions of Blog-Based Writing Instruction
Table 2. The Usefulness of Blogs in English Writing
Table 3. The Effectiveness of Blogs in English Writing
Table 4. Interests Toward The Use of Blogs in English Writing
Table 5. The Practicality of Blog in English Writing
5,751.2
2024-01-01T00:00:00.000
[ "Education", "Linguistics" ]
SwarmDAG: A Partition Tolerant Distributed Ledger Protocol for Swarm Robotics

Blockchain technology has the potential to disrupt applications beyond cryptocurrencies. This work applies the concepts of blockchain technology to swarm robotics applications. Swarm robots typically operate in a distributed fashion, wherein the collaboration and coordination between the robots are essential to accomplishing the application goals. However, robot swarms may experience network partitions either due to navigational and communication challenges or in order to perform certain tasks efficiently. We propose a novel protocol, SwarmDAG, that enables the maintenance of a distributed ledger based on the concept of extended virtual synchrony while managing and tolerating network partitions.

1A. Review Comments

Reviewer A: The paper proposes a method that allows blockchain technology to be maintained on a swarm of robots that may not be sufficiently connected at all times. The key contribution is the proposal of a protocol that the authors call 'SwarmDAG' that facilitates the operation of a distributed ledger. The authors adopt the Extended Virtual Synchrony method, which allows network partitions to continue updating their progress. They address the problem of conflicting information when separate partitions merge through a novel ledger structure based on a directed acyclic graph. The authors describe the proposed protocol and list the assumptions. There are a few figures that illustrate the concept; more examples with robots in motion would help strengthen the idea. The paper is well written and structured. Although it is clearly at a premature stage (no empirical results nor theoretical proofs), it is relevant to this community and provides an interesting basis for discussion at an intimate conference such as BROS.

Reviewer B: This paper presents SwarmDAG, a distributed ledger that integrates mechanisms to cope with partition tolerance and ensure eventual consistency. The paper is central to the topic of the symposium. The paper is well-written and easy to follow. I enjoyed reading this paper as I share with the authors an ongoing interest in partition-tolerant distributed data structures. I think that this work is a good discussion point for the symposium. The main limitation of this paper is the extremely preliminary state at which it actually is. The authors do not seem to have implemented their approach, and the paper seems to describe a design proposal more than an actual system.
1B. Authors' Response

Reviewer A: The paper proposes a method that allows blockchain technology to be maintained on a swarm of robots that may not be sufficiently connected at all times. The key contribution is the proposal of a protocol that the authors call 'SwarmDAG' that facilitates the operation of a distributed ledger. The authors adopt the Extended Virtual Synchrony method. This allows network partitions to continue updating their progress. They address the problem of conflicting information when separate partitions merge through a novel ledger structure based on a directed acyclic graph. The authors describe the proposed protocol and list the assumptions. There are a few figures that illustrate the concept; more examples with robots in motion would help strengthen the idea. The paper is well written and structured. Although it is clearly at a premature stage (no empirical results nor theoretical proofs), it is relevant to this community and provides an interesting basis for discussion at an intimate conference such as BROS.

Response: The authors thank Reviewer A for the feedback and agree that this work is relevant to the community and will spark interesting discussions at BROS. This paper presents the conceptual design of SwarmDAG, a distributed ledger protocol which aims to address the problem of network partitions in robotic swarms. The work presented is ongoing, and the implementation of SwarmDAG is currently in progress; it will be open sourced when complete. For portability and ease of reproducing the implementation, we are implementing our system components using Docker containers and focusing on compatibility with Raspberry Pis (ARM), which are commonly used in robots in the research community. The underlying blockchain toolkit in our current implementation is Tendermint, and to test for design correctness in the face of partitions, we are utilizing Blockade (https://github.com/worstcase/blockade) to emulate network partitions. To implement the Membership Management Service (MMS) as detailed in the paper, we are using the libp2p network stack (https://libp2p.io/) to facilitate peer-to-peer communication via gossip messages. In future work, we will present empirical results of the SwarmDAG implementation. In addition to the implementation, we will present a formal proof of SwarmDAG's partition tolerance in future work. Because the implementation is currently in progress, we do not have the current capability of illustrating SwarmDAG using moving robots. We intend to run SwarmDAG on our in-house Intelligent Robotic Internet of Things TeStbed (IRIS, https://github.com/ANRGUSC/iris-testbed/), built by the authors in previous work, to illustrate the effectiveness of SwarmDAG in a real swarm of robots.

Reviewer B: This paper presents SwarmDAG, a distributed ledger that integrates mechanisms to cope with partition tolerance and ensure eventual consistency. The paper is central to the topic of the symposium. The paper is well-written and easy to follow. I enjoyed reading this paper as I share with the authors an ongoing interest in partition-tolerant distributed data structures. I think that this work is a good discussion point for the symposium. The main limitation of this paper is the extremely preliminary state at which it actually is. The authors do not seem to have implemented their approach, and the paper seems to describe a design proposal more than an actual system.

Response: The authors appreciate the positive feedback from Reviewer B and agree that the work is at an extremely preliminary state. Due to the page limit of this venue, we were unable to fit the additional progress of the ongoing work presented in this paper. We currently have an implementation of SwarmDAG in progress, which we will open source upon completion. We also intend to present the progress of this implementation at BROS. To facilitate ease of reproducibility and portability, we are implementing SwarmDAG using Docker containers and focusing on compatibility with Raspberry Pis, a computing platform widely used by robots in the research community.
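As a conceptual illustration only (this is not the SwarmDAG protocol itself, whose implementation the authors describe as work in progress), a partition-tolerant ledger based on a directed acyclic graph can be sketched as follows: each partition appends entries to its own tip, and a merge node with multiple parents reconciles the tips when partitions rejoin. All names below are hypothetical.

    import hashlib, json

    class DagLedger:
        """Toy append-only DAG ledger; a node may have several parents (a merge)."""
        def __init__(self):
            self.nodes = {}            # id -> {"parents": [...], "data": ...}
            self.tips = set()          # nodes without children

        def append(self, data, parents):
            node_id = hashlib.sha256(json.dumps([sorted(parents), data]).encode()).hexdigest()[:12]
            self.nodes[node_id] = {"parents": list(parents), "data": data}
            self.tips -= set(parents)
            self.tips.add(node_id)
            return node_id

    ledger = DagLedger()
    genesis = ledger.append("genesis", [])
    a = ledger.append("partition-A update", [genesis])   # two partitions extend genesis independently
    b = ledger.append("partition-B update", [genesis])
    merge = ledger.append("merge after partitions rejoin", [a, b])
    print(ledger.tips)  # only the merge node remains a tip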
1,325.8
2019-04-09T00:00:00.000
[ "Computer Science" ]
Service life of reinforced concrete structures made with blended cements and exposed in urban environment

Carbonation-induced corrosion is one of the main causes of degradation of reinforced concrete (RC) structures exposed outdoor in urban environment. To prevent steel corrosion, a durability design that considers both the initiation and the propagation time is of fundamental importance. To this aim, the resistance to carbonation of concrete and the steel corrosion rate in the exposure environment need to be known. This paper reports the carbonation coefficient and the corrosion rate of 7-day cured RC specimens made with different binders and exposed outdoor in Milan in unsheltered conditions. Corrosion rates in laboratory conditions with different temperatures and relative humidities are also reported. Experimental data were used to evaluate the service life in unsheltered conditions. RC specimens made with Portland cement exhibited the lowest carbonation coefficient and corrosion rate, while specimens with 30% limestone and with 70% ground granulated blast furnace slag exhibited the highest.

1 | INTRODUCTION

One of the major problems faced by reinforced concrete (RC) structures is the corrosion of steel reinforcement induced by carbonation.1 To prevent steel corrosion, a durability design is needed and, for carbonation-induced corrosion, both the initiation and the propagation period have to be considered, since both contribute to the duration of the service life. In such exposure conditions, the alternating wet and dry cycles foster the penetration of carbonation during the dry periods and, after depassivation, an active corrosion propagation during the wet periods, reducing the duration of the service life. For the design it is, hence, of fundamental importance to know both the resistance to carbonation of concrete and the corrosion rate of steel, which are key parameters in defining the duration of the initiation and the propagation time, respectively. These parameters depend on many factors, among which are the exposure conditions and the type of concrete. Several studies have been carried out to evaluate the resistance to carbonation and the corrosion rate in concrete made with different types of binder. However, these studies are often performed in conditions which are not fully representative of natural exposure. Carbonation is often evaluated through accelerated tests that are characterized by a duration of the order of months and by a CO2 concentration that is significantly higher than that in the atmosphere.
[2-5] Those studies have shown that concretes with pozzolanic and hydraulic additions, such as ground granulated blast furnace slag (GGBS) and fly ash, as well as limestone cement, have a lower carbonation resistance in comparison with Portland cement concrete, leading, therefore, to a shorter initiation period for carbonation. However, the results of accelerated tests cannot be directly used to design the service life of an RC structure, since they do not take into account the real exposure conditions, for example, the actual relative humidity and temperature. Nonetheless, the behavior of blended cements was confirmed by natural exposure tests.[6-11]

As far as corrosion rate is concerned, since the 1980s several studies have been performed on carbon steel bars embedded in carbonated concrete and mainly exposed to laboratory conditions with different temperatures and relative humidities. Portland cement was often used, while blended cements were seldom considered, especially natural pozzolan, limestone, and silica fume. Very few works allow a direct comparison between concretes made with Portland cement and blended cements.[12-17] Usually the studies showed a higher corrosion rate in blended cements compared with Portland cement. No publication known to the authors considers both carbonation and corrosion rate measured on the same concretes. This would make it possible to properly assess the effect of the environmental exposure conditions and the type of binder on the service life of an RC structure and, hence, to choose the most appropriate binder to guarantee the target durability requirement.

This paper reports results of concrete carbonation and steel corrosion rate obtained on specimens made with different binders, that is, Portland, limestone, fly ash, pozzolan, and GGBS, with different water/binder ratios, cured for 7 days and exposed outdoor in Milan in unsheltered conditions. The corrosion rate was also evaluated in laboratory conditions characterized by different temperatures and relative humidities. Experimental data were used to evaluate the service life in unsheltered conditions by means of a probabilistic approach.

2 | MATERIALS AND METHODS

Concretes with three different water/binder ratios, equal to 0.42, 0.46, and 0.61, and four different binder dosages, ranging from 250 to 400 kg/m3, were made with a Portland cement CEM I 52.5R (OPC), according to the EN 197-1 standard, and five blended cements. In the blended cements, part of the Portland cement was replaced with 15% and 30% of ground limestone (15LI and 30LI), 30% of fly ash of class F (FA), 30% of natural pozzolan (PZ), and 70% of GGBS. Crushed limestone aggregate with a maximum size of 16 mm was used, and an acrylic superplasticizer was added to the mixtures to achieve a class of consistence S4 according to the EN 206 standard. Table 1 summarizes the main characteristics of the concrete mixes. The compressive strength and the accelerated carbonation coefficient evaluated on cubes cured for 7 days, whose results were discussed elsewhere,18,19 are also reported in Table 1. Each concrete is identified by a label that reports the type of cement, the water/binder ratio, and the binder content (i.e., OPC-0.61-300 indicates a concrete made with Portland cement, a water/binder ratio of 0.61, and a binder content of 300 kg/m3). After mixing, concretes were cast in cubic and reinforced prismatic molds, covered with a plastic foil and stored in the laboratory at 20 °C.
After 24 h, specimens were demolded and cured, for 6 further days, at 20 °C and 95% relative humidity (R.H.). Unreinforced (i.e., cubic) and reinforced specimens were made at the same time, with the same concrete batch. One hundred millimeter cubic specimens were used to evaluate carbonation (one specimen for each type of concrete). After curing, they were exposed outdoor in natural conditions, unsheltered from rain and sun, on the roof of the Department of Chemistry, Materials and Chemical Engineering of Politecnico di Milano. Milan's average annual temperature is around 13 °C, with minimum and maximum average monthly values around −1 °C and 30 °C. The annual rainfall is about 1000 mm, while the average annual relative humidity is around 75%, with minimum and maximum average monthly values around 70% and 85%. The number of days in a year with daily cumulative rainfall higher than 0.2 mm is around 130, and often they are consecutive days. It can be estimated that, in a year, there are about 25-35 periods of wetting, assuming that such periods have at least two consecutive days with rainfall higher than 0.2 mm. Four faces of the cubes were masked with epoxy and carbonation was allowed to penetrate only from the cast surface and a mold surface, which were oriented vertically during exposure. After different exposure times, up to 1.5 years of exposure, a 20 mm passing-core was drilled and the average value of the carbonation depth, determined as the average between the minimum and the maximum values, was measured from the mold surface with the phenolphthalein test (cores taken at different times were at least 200 mm apart). After about 15 years of exposure, some specimens were split and the average carbonation depth was measured on the fracture surface. Due to mislabeling caused by the outdoor exposure, after 15 years of exposure the carbonation depth could be determined only on the 15LI, FA, and GGBS concretes. The average carbonation depths, d, measured after different exposure times, t, were interpolated, with the least squares method, to obtain the natural carbonation coefficient, k_unsheltered:

d = k_unsheltered · t^0.5    (1)

Prismatic specimens, 60 mm × 250 mm × 150 mm, were reinforced with three ribbed carbon steel bars with a diameter of 10 mm and cover depths of 10, 25, and 40 mm. Stainless steel wires, to be used as auxiliary electrodes in the electrochemical measurements, were placed near the rebars (Figure 1). For the reinforced specimens, only concretes with a water/binder ratio of 0.61 were made and, for each concrete, two replicate specimens were cast. After curing and 2 weeks in laboratory conditions, specimens were exposed to accelerated carbonation, with 100% CO2, until they were fully carbonated (carbonation was periodically verified with phenolphthalein tests on concrete cores taken from the specimens). The lateral surfaces of the specimens and the external parts of the rebars were then covered with an epoxy coating, and one specimen for each concrete was exposed, for about 2 years, outdoor, with the same exposure conditions as the cubic specimens, while the other specimen was exposed to controlled cycles of temperature and relative humidity (the top and the bottom surfaces of the specimens were left uncovered).
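As an illustrative aside (the depth readings below are hypothetical, not the measured data), the least-squares fit of relationship (1) reduces to a one-parameter linear regression of d against the square root of t:

    import numpy as np

    def fit_carbonation_coefficient(t_years, d_mm):
        """Least-squares fit of d = k*sqrt(t); returns k in mm/year^0.5."""
        x = np.sqrt(np.asarray(t_years, dtype=float))
        d = np.asarray(d_mm, dtype=float)
        return float(x @ d / (x @ x))   # closed-form slope for a no-intercept fit

    # Hypothetical depth readings at 0.5, 1.0, 1.5 and 15 years of exposure
    print(fit_carbonation_coefficient([0.5, 1.0, 1.5, 15.0], [0.9, 1.3, 1.6, 5.1]))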
In particular, controlled cycles of temperature and relative humidity were carried out (Table 1 reports the mix proportions of the concrete mixes and the results of compressive strength, f_c,cube, and accelerated carbonation coefficient, k_acc, evaluated on the 7-day cured concretes18). During the exposure, electrochemical measurements of the half-cell potential of steel (E_corr) versus a saturated calomel electrode (SCE), placed on the top specimen surface in the central part of each bar, and linear polarization resistance measurements (R_p) were performed to monitor the corrosion behavior of steel. The corrosion current density, i_corr, was determined from the R_p measurements as i_corr = B/R_p, where B was assumed equal to 26 mV. Measurements of conductivity were also carried out between the two wires placed at the different depths, to investigate the concrete electrical resistivity. Cycles were changed, usually after 2-4 weeks, when the electrochemical measurements were steady, indicating that stable conditions had been reached.

3 | RESULTS AND DISCUSSION

The role of the concrete characteristics on the carbonation coefficient in the unsheltered natural environment, mainly the type of binder and the water/binder ratio, will be presented and discussed first, also in relation to other properties of the hardened concrete, that is, the compressive strength and the accelerated carbonation coefficient. Then the corrosion behavior of steel bars embedded in carbonated concrete made with the same binders and exposed both in natural conditions and in controlled cycles of T and R.H. will be described. Finally, the experimental results of carbonation and corrosion rate will be used to evaluate the impact of the concrete characteristics on the service life of an RC element exposed in unsheltered conditions, by means of a probabilistic approach.

3.1 | Carbonation of concrete

Figure 2 shows, as an example, the trend with time of the carbonation depth of concretes with 15% of limestone, moist cured for 7 days and exposed in an outdoor unsheltered environment. With increasing exposure time, the carbonation depth increased, and the highest values were observed for the concretes with the highest water/binder ratio; a significant effect of the binder content was not observed. After almost 15 years of exposure, carbonation depths of the order of 7-12 mm were measured on the concretes made with a water/binder ratio of 0.61, between 4 and 6 mm on the concretes made with a water/binder ratio of 0.46, and around 2.5 mm on the concretes with a water/binder ratio of 0.42. Carbonation depths were fitted through relationship (1) to determine the carbonation coefficient k_unsheltered, and the fitting lines are also reported in Figure 2. Figure 3 shows the carbonation coefficient as a function of the water/binder ratio and the type of binder (the effect of the binder content was neglected). For each type of binder, the carbonation coefficient and the water/binder ratio were fitted through an exponential relationship, and a good correlation can be observed. For instance, for the GGBS concretes, the carbonation coefficient increased from 1.5 to 3.6 mm/year^0.5 when the water/binder ratio increased from 0.42 to 0.61. In Figure 3, the role of the type of binder can also be observed. The OPC concretes showed the lowest carbonation coefficient, with values ranging from 0.35 to 0.78 mm/year^0.5 when the water/binder ratio varied between 0.46 and 0.61. These values were slightly lower than those evaluated on the PZ concretes.
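As a small illustration (values hypothetical), the conversion from polarization resistance to corrosion current density described above, i_corr = B/R_p with B = 26 mV, is a one-line computation once the units are fixed:

    def corrosion_current_density(rp_ohm_m2, b_volt=0.026):
        """i_corr = B / R_p. R_p in ohm*m^2 gives i_corr in A/m^2; x1000 for mA/m^2."""
        return 1000.0 * b_volt / rp_ohm_m2

    # Hypothetical polarization resistance of 13 ohm*m^2 -> 2.0 mA/m^2
    print(corrosion_current_density(13.0))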
It is worth noting that for both types of binder, measurements were carried out only after relatively short exposure times, and carbonation depths of only a few millimeters were measured, which might have led to a poor estimation of the carbonation coefficients.

FIGURE 1: Geometry of reinforced concrete specimens (dimensions in millimeters). FIGURE 2: Carbonation depth as a function of time of concretes made with 15% limestone cement and exposed to outdoor unsheltered conditions.

A higher carbonation coefficient in comparison with the OPC concretes can be observed for the concretes made with 15LI, FA, GGBS, and 30LI. For instance, the carbonation coefficient of concretes made with a water/binder ratio of 0.61 was 1.8, 2.2, 3.6, and 3.6 mm/year^0.5 for 15LI, 30LI, FA, and GGBS, respectively. The results obtained on the different types of concrete are consistent with those obtained in studies where the exposure sites had similar climatic conditions (e.g., Lyon), despite the different curing regime.7,11 Limiting the comparison to concrete made with limestone, the values of carbonation rate are consistent with the average data obtained within the "Validation testing program on chloride penetration and carbonation standardized test methods" of CEN/TR 17172 on concretes made with CEM II/A-LL 42.5R and water/cement ratios of 0.49 and 0.58, exposed up to 2 years outdoor in different European locations.20 Conversely, higher carbonation rates, at least for OPC concrete, were obtained in a tropical climate characterized by higher temperatures,21 suggesting that temperature, but also relative humidity and time of wetness, have a significant impact on carbonation.

To better investigate the impact of the binder on the carbonation coefficient, Figure 4 shows the ratio between the carbonation coefficient measured on the concretes made with blended cements, k_blended, and the carbonation coefficient measured on the OPC concretes, k_OPC, at equal water/binder ratio and binder dosage. The k_blended/k_OPC ratio was always higher than 1 (the PZ-0.42-350 concrete was an exception). For some water/binder ratios and types of binder, a doubled, and in some cases even higher, value of the carbonation coefficient was observed. The highest ratios were detected for 30LI and the lowest for PZ; however, as previously observed, for these concretes only data after 1.5 years of exposure were available. These results are in agreement with data reported in the literature, showing a lower carbonation coefficient for concretes with Portland cement in comparison with concretes with pozzolanic and hydraulic binders, that is, natural pozzolan, fly ash, and GGBS, and with limestone binders.

Correlations between the carbonation coefficient in the natural environment and other hardened concrete properties that are usually easily measured, such as the compressive strength and the accelerated carbonation coefficient, were explored. Figure 5 shows the relationship between the compressive strength and the carbonation coefficient, both evaluated on concrete cured for 7 days. In general, concretes with a higher compressive strength also showed a lower carbonation coefficient, hence a higher resistance to carbonation. A high variability was detected; however, the data seem to be correlated through an exponential relationship, regardless of the type of cement, suggesting that, in the absence of data, carbonation could be estimated from the compressive strength (Figure 5a). Naturally, this correlation depends on the specific climatic conditions where the specimens were exposed.
Figure 5b shows the relationship between the accelerated carbonation coefficient obtained through accelerated tests (Table 1), k_acc, and the carbonation coefficient measured on specimens exposed outdoors unsheltered from rain, k_unsheltered, for the different types of concrete. This correlation, determined for the specific climate conditions considered in the study, is useful since the behavior of concrete in a certain exposure condition is usually evaluated from accelerated tests.

Figure 3. Carbonation coefficient in unsheltered environment as a function of the water/binder ratio for different types of concrete cured for 7 days.

Figure 4. Ratio between the carbonation coefficient of blended cement concretes and Portland cement concretes, both in unsheltered environment.

Significant differences, of one order of magnitude, between the accelerated carbonation coefficient and the unsheltered carbonation coefficient can be observed. For instance, the carbonation coefficient measured on 30LI-0.61-300 concrete increased from about 3.5 to around 35 mm/year^0.5 when the exposure varied between natural and accelerated conditions. Despite the variability of the data, a correlation can be determined between k_acc and k_unsheltered, slightly dependent on the type of cement. This suggests that the carbonation coefficient can be estimated from the results of accelerated tests; however, it should be taken into account that the correlation is affected by the actual exposure conditions and the cement type and cannot be generalized. Figure 6 shows, as an example, the variation in time of the corrosion potential and corrosion current density of the three rebars embedded at different depths in concrete made with 15LI and exposed to controlled cycles of temperature and relative humidity. The resistivity at the same bar depths is also reported. At the beginning of the exposure, that is, at the end of the exposure in the accelerated carbonation chamber (T = 20 °C, R.H. = 65%), a high electrical resistivity was measured at the depth of the three rebars, and quite low corrosion current densities and quite high corrosion potentials were determined on the three bars. The exposure to high relative humidity, that is, 95%, led to a decrease of the electrical resistivity, which approached values of the order of 1000-2000 Ω·m, an increase of the corrosion current density, which, however, remained lower than 2 mA/m², and a decrease of the corrosion potential. When exposed to the other controlled environments, all the parameters experienced some variations. Significant modifications occurred in the submerged condition, where the highest corrosion current density, of the order of 10 mA/m², the lowest electrical resistivity, of the order of 100 Ω·m, and the lowest corrosion potential were determined. No significant differences in E_corr, i_corr, and ρ were observed, in any of the environmental conditions, among the bars embedded at the different depths, suggesting that similar humidity and corrosion conditions were present.

Corrosion rate of steel bars in carbonated concrete

To investigate the effect of the exposure conditions and concrete composition on the corrosion potential, corrosion current density, and concrete electrical resistivity, the average values and the range of variability were evaluated for each concrete and condition.
Average values of the three parameters were evaluated considering the values obtained on all three rebars, and the variability was evaluated considering the maximum and minimum values reached in each exposure condition. Figure 7 shows the average values and the range of variation of E_corr, i_corr, and ρ as a function of relative humidity and temperature; values obtained outdoors are also included. A significant effect of the relative humidity on the three parameters can be observed. The corrosion potential decreased when the R.H. increased to 95% and in submerged conditions, where values between −750 and −650 mV versus SCE were measured. At 80% R.H., the corrosion potential was higher than −250 mV versus SCE (Figure 7a). Corrosion current densities lower than 1-2 mA/m² were determined for R.H. lower than 90%; conversely, i_corr increased with R.H., reaching values between 3 and 10 mA/m² when the specimens were submerged (Figure 7b). The concrete electrical resistivity, as expected, showed the lowest values, between 130 and 480 Ω·m, when the concrete was submerged, and the highest values, between about 4000 and 6800 Ω·m, when the concretes were exposed to 80% R.H. (Figure 7c).

Figure 5. Relationship between the natural carbonation coefficient and the compressive strength (a) and the accelerated carbonation coefficient (b) evaluated on 7-day cured specimens.

To better illustrate the effect of relative humidity, Figure 8 shows the corrosion current density and the resistivity as a function of R.H. (for submerged conditions, an arbitrary value was assumed). It can be observed that i_corr increased when the environmental relative humidity increased, while ρ decreased as the relative humidity increased. This clearly indicates that both parameters are affected by the water content in the concrete. Temperature had an effect when R.H. was 90%. For instance, the corrosion current density increased from values between 0.4 and 1 mA/m² to values between 1.4 and 3.9 mA/m² when T increased from 20 to 40 °C. Conversely, when R.H. was 80%, a low corrosion current density was detected both at 20 and at 40 °C. Also for the electrical resistivity, a significant variation was observed only when R.H. was 90%; for instance, the electrical resistivity decreased from values between 1700 and 3600 Ω·m to values between 215 and 900 Ω·m when the temperature increased from 20 to 40 °C. In outdoor unsheltered conditions, E_corr, i_corr, and ρ were intermediate to those detected in the controlled laboratory conditions, indicating that the typical exposure conditions that can occur, at least in the temperate climate of Milan, were simulated through the selected cycles of temperature and relative humidity. In outdoor conditions, only the combined effect of relative humidity, rain events, and temperature can be detected. The highest values of corrosion current density, which were measured in outdoor conditions during the rainiest periods, were similar to the highest values measured in submerged conditions. The flat exposure of the specimens in outdoor conditions fostered their wetting during rain events, leading the concrete to conditions similar to saturation. Conversely, the lowest values, which were measured during drying periods, were comparable with those measured at 80% R.H. A variability of the corrosion rate was also observed in other works where specimens were exposed outdoors, as shown in the review paper of Stefanoni et al.22
or in the experimental work carried out by Andrade et al.,23 where beams with mixed-in chlorides were exposed in a climate with a dry atmosphere in summer and mild autumn, spring, and winter. From Figure 7, the influence of the type of binder on the corrosion behavior of the carbon steel bars and on the concrete electrical resistivity can be assessed. Portland cement concrete showed, both in laboratory and in outdoor conditions, the lowest corrosion current densities: for instance, in the most aggressive conditions, that is, the submerged ones, average values of the order of 3 mA/m² were measured on OPC concrete, whilst values up to twice as high were obtained on concretes with blended cements (Figure 7b). The highest values of concrete electrical resistivity were also measured on OPC concrete. The FA concrete showed, in the different controlled exposure environments, a higher resistivity and a lower corrosion current density in comparison to the 15LI, 30LI, PZ, and GGBS concretes. The corrosion rates obtained on bars embedded in OPC concrete were comparable to those obtained in the study carried out by Américo and Nepomuceno.24 Lower values of corrosion rate were obtained on bars in FA, GGBS, and PZ concretes in comparison to those reported in the works of Alonso and Andrade,12 Alonso et al.,13,14 and Andrade and Bujak.15

Estimation of the service life

Results presented in the previous sections highlighted that the concrete characteristics, and in particular the type of binder, have an impact on the concrete carbonation and on the corrosion rate. This means that the service life of an RC element can be significantly affected by the type of binder used to cast the concrete and that certain types of binder might not be suitable to guarantee, for instance, a long service life. To help designers select the most suitable type of concrete for an element exposed in unsheltered conditions, at least in climate conditions similar to those of Milan, the service life that can be reached with the different binders employed in this study was evaluated by means of a probabilistic approach, starting from the experimental data. For structures exposed in an urban environment, the initiation time can be defined as the time required for carbonation to reach the depth of the outermost steel bars, while the propagation period can be defined as the time required to consume the steel bars by an amount that leads to the cracking of the concrete cover. In the probabilistic approach, for the initiation period, the probability of failure, p_f,i, was evaluated as the probability that the initiation limit state function, g_i, reaches negative values:

p_f,i = P(g_i < 0), with g_i = x − k·√t_i,   (2)

where x is the concrete cover thickness and t_i is the initiation time. For the propagation period, p_f,p was evaluated as:

p_f,p = P(g_p < 0), with g_p = P_lim − v_corr·t_p,   (3)

where P_lim is the limit penetration of corrosion, v_corr is the corrosion rate, and t_p is the propagation time. These equations were solved for an RC element exposed outdoors in unsheltered conditions, determining the values of the involved parameters from the experimental results. The carbonation coefficient was described by a normal distribution. The mean value was determined, for each type of binder considered in this work, from the data reported in Figure 3, considering a water/cement ratio of 0.5, according to the prescription for these exposure conditions, that is, XC4, of the European Standard EN 206. An SD equal to 0.2 times the mean value was assumed.
A mean concrete cover thickness of 30 mm was taken into account, in agreement with the prescriptions of the European Standards for the exposure class XC4, with an SD of 10 mm. As far as the propagation period is concerned, the corrosion rate was described through a beta distribution function, with mean values determined from the corrosion current densities shown in Figure 7 (v_corr = 1.16·i_corr) in outdoor unsheltered exposure conditions, and an SD of 0.25 times the mean value (upper and lower limits equal to 0.1 and 15 μm/year for all the types of binder, in agreement with the minimum and maximum values detected outdoors, were considered). Although these data were obtained on concretes with a water/binder ratio of 0.61, it was observed that the water/binder ratio did not strongly affect the corrosion rate.25 P_lim was described by means of a normal distribution with a mean value and an SD equal to 100 and 30 μm, respectively, considering concrete cracking as the limit state. Table 2 summarizes the values selected for the different parameters.

Table 2. Values of the selected parameters and types of probability density function, PDF (BetaD = beta distribution; ND = normal distribution), used as inputs in the limit state equations (m = mean value, σ = standard deviation).

Figure 9 shows the probability of failure as a function of the initiation and the propagation time, evaluated by solving the limit state Equations (2) and (3) through a Monte Carlo method. The probability of failure, that is, the probability of reaching the limit state, increases with time. Assuming a target probability, P_0, equal to 10% (red horizontal line in Figure 9), both the initiation and the propagation time can be evaluated as a function of the type of binder. The initiation time varied from about 35 years for 30LI, to 44 years for GGBS, to 60 years for FA, and to even higher values for OPC, 15LI, and PZ. The propagation time varied from about 18-20 years for GGBS, 30LI, and PZ, to 24-26 years for 15LI and FA, to 60 years for OPC. From the results shown in Figure 9, the service life was evaluated, for each binder, by summing the cumulative distribution functions of the initiation and of the propagation periods (Figure 10). Assuming as the target probability of failure, P_0, a value of 10%, that is, considering that the cracking of the concrete cover occurs on 10% of the concrete surface, the service life was always higher than 60 years. However, significant differences among the binders were observed. The lowest service lives, around 60-70 years, were determined for the 30LI and GGBS concretes, these being characterized by the highest concrete carbonation coefficient and the highest corrosion rate of steel. The use of OPC concrete led to a service life even significantly higher than 100 years, due to a low concrete carbonation coefficient and a quite low corrosion rate of steel. With FA and 15LI concretes, relatively long service lives, of the order of 100 years, could be guaranteed. According to these results, the prescriptions provided in the European standards, in terms of water/binder ratio and concrete cover thickness, made it possible to guarantee, with a probability of failure of 10%, a service life of at least 50 years in an unsheltered environment with all the binders employed in this study. It is worth noting that these results, and in particular the results on the initiation time, were obtained considering experimental data from well-compacted concretes, wet cured for 7 days.
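As a rough illustration of how Equations (2) and (3) can be solved by Monte Carlo sampling, the sketch below draws the Table 2 distributions and counts limit-state exceedances. It assumes the initiation limit state g_i = x − k·√t_i; the numerical means are placeholders for one generic binder, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Illustrative inputs for a single binder; per-binder means would come
# from Figure 3 (carbonation) and Figure 7 (corrosion) via Table 2.
k_mean = 1.8                  # carbonation coefficient, mm/year^0.5
x_mean, x_sd = 30.0, 10.0     # concrete cover, mm
v_mean = 2.0                  # corrosion rate, um/year (v_corr = 1.16*i_corr)
P_mean, P_sd = 100.0, 30.0    # limit penetration, um

k = rng.normal(k_mean, 0.2 * k_mean, N)   # ND, SD = 0.2 * mean
x = rng.normal(x_mean, x_sd, N)           # ND

# BetaD on [0.1, 15] um/year with SD = 0.25 * mean: shape parameters are
# derived from the mean and variance rescaled to the unit interval.
a, b = 0.1, 15.0
m = (v_mean - a) / (b - a)
var = ((0.25 * v_mean) / (b - a)) ** 2
c = m * (1 - m) / var - 1
v_corr = a + (b - a) * rng.beta(m * c, (1 - m) * c, N)

P_lim = rng.normal(P_mean, P_sd, N)

def p_failure(t):
    """Probability that corrosion penetration exceeds P_lim at time t (years)."""
    t_i = (np.clip(x, 1e-6, None) / np.clip(k, 1e-6, None)) ** 2  # initiation time
    pen = v_corr * np.clip(t - t_i, 0.0, None)                    # um of steel consumed
    return float(np.mean(pen > P_lim))

for t in (50, 75, 100):
    print(f"t = {t:3d} years: p_f = {p_failure(t):.3f}")
```

Scanning t until p_f crosses the target probability P_0 = 10% gives the service life for the binder under consideration.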
A significant decrease in the service life might occur if a higher water/binder ratio, a significantly shorter curing period, or a lower concrete cover thickness were considered in the simulation. Indeed, experience on real structures has demonstrated that the corrosion conditions of RC structures are strongly affected by the initial quality of the concrete, the thickness of the concrete cover, and the environmental conditions, and severe damage was observed on structures made with poor-quality concretes and low concrete cover thicknesses.26-30 Nonetheless, within the limits of validity of the results obtained and the conditions considered, the analysis carried out in this paper provides an example of a performance-based approach for a quantitative estimation of the service life.

Figure 9. Probability of failure as a function of the initiation (a) and propagation time (b) for RC elements exposed in an unsheltered environment, with indication of the target probability of failure, P_0 (red line) (concretes with water/binder ratio = 0.5 and concrete cover = 30 mm). RC, reinforced concrete.

Figure 10. Probability of failure as a function of the service life for RC elements exposed in an unsheltered environment, with indication of the target probability of failure, P_0 (red line) (concretes with water/binder ratio = 0.5 and concrete cover = 30 mm). RC, reinforced concrete.

CONCLUSIONS

Natural carbonation and corrosion rates were studied on 7-day cured specimens made with Portland cement and blended cements, that is, 15 and 30% limestone, 30% fly ash, 30% natural pozzolan, and 70% GGBS, exposed in unsheltered conditions. On the basis of the experimental results and their analysis, the following conclusions can be drawn:

• For each water/binder ratio, concretes made with Portland cement exhibited the lowest carbonation coefficient. The highest carbonation coefficients were observed on 30% limestone concretes, while values slightly lower than those of 30LI were detected on 15% limestone, 30% fly ash, and 70% GGBS. Concretes with 30% natural pozzolan showed carbonation coefficients comparable to Portland cement concrete; however, only data after 1.5 years of exposure were available. Results were consistent with data obtained by other authors on concretes with similar compositions exposed to comparable climate conditions. Conversely, different climate conditions, for instance in terms of temperature, led to significant differences in terms of carbonation rate;

• The unsheltered carbonation coefficient was well correlated with the compressive strength and the accelerated carbonation coefficient, suggesting that, in the absence of any experimental data on natural carbonation, an estimation of this parameter could be carried out from the results of mechanical tests or accelerated tests, taking into account the actual exposure conditions and the type of cement;

• The corrosion potential, corrosion current density, and concrete electrical resistivity were strongly affected by the exposure conditions and, in particular, by the relative humidity. At 20 °C, an appreciable corrosion current density was detected for relative humidity higher than 95%. Temperature played a role only in environments with a high relative humidity (i.e., higher than around 90%). Concrete made with Portland cement had the highest resistivity and the lowest corrosion current density in all the exposure environments.
Values obtained in this study were comparable with results obtained in other works for Portland cement, while they were lower for concretes made with blended cements;

• All the binders employed in this study made it possible to guarantee, with a probability of failure of 10%, a service life higher than 50 years in an unsheltered environment with exposure conditions comparable to those of Milan, provided that the water/binder ratio was 0.5 and the concrete cover thickness was 30 mm. However, a longer service life can be guaranteed with OPC concretes.
7,653.8
2021-11-15T00:00:00.000
[ "Engineering", "Environmental Science", "Materials Science" ]
Prediction of the Minimum Film Boiling Temperature of Quenching Vertical Rods in Water Using Random Forest Machine Learning Algorithm A great amount of research is focused, nowadays, on experimental, theoretical, and numerical analysis of transient pool boiling. Predicting the minimum film boiling temperature (Tmin) for rods with different substrate materials that are quenched in distilled water pools at various system pressures is known to be a complex and highly non-linear problem. This work aims to develop a new correlation to predict the Tmin in the above process: the random forest machine learning technique is applied to predict the Tmin. The approach trains a machine learning algorithm using a set of experimental data collected from the literature. Several parameters such as the liquid subcooling temperature (Tsub), the fluid-to-substrate thermophysical property ratio (βf/βw), and the saturated system pressure (Psat) are collected and used as inputs, whereas Tmin is measured and used as the output. Computational results show that the algorithm achieves superior results compared to other correlations reported in the literature.

INTRODUCTION

Intensive efforts to understand phase-change processes have increased over the last decade in many industrial sectors. Fusion, solidification, boiling, condensation, and sublimation are several forms of phase-change processes. These processes are widely encountered in energy applications due to their association with latent heat rather than sensible heat. Therefore, they are used in fields such as desalination, metallurgy, electronics cooling, thermal generation of electricity, and food processing (Collier, 1972). Recently, a great amount of research has been focused on experimental, theoretical, and numerical analysis of transient pool boiling, which is an example of a phase-change process. It is highly favored in various traditional and modern technologies due to its relative simplicity, high heat transfer rate, and low cost. Pool boiling heat transfer occurs when a sufficiently heated surface is submerged in a stagnant pool of a liquid coolant. Initially, the heated surface experiences a film boiling regime, where a vapor layer is formed around the heated surface and prevents it from being in direct contact with the liquid coolant (Bonsignore, 1981). Due to the low thermal conductivity of the vapor compared to the liquid, the surface experiences a dramatic decrease in cooling performance (Hsu, 1972; Jiang and Luxat, 2008). As the temperature of the heated surface decreases, the thickness of the vapor blanket reduces until it collapses at a temperature called the Leidenfrost temperature or minimum film boiling temperature (T_min) (Leidenfrost, 1756). At this point, the liquid is able to dramatically cool the heated surface, and the boiling regime shifts from film to transition boiling, followed by nucleate boiling and, finally, natural convection. Since T_min is the boundary between film and transition boiling, any improvement in its value significantly enhances the heat transfer rate. Thus, investigating the T_min point is essential in areas such as metal heat treating, the nuclear engineering industry (in a hypothetical large-break loss-of-coolant accident, LOCA), evaporators and compressors in air conditioning systems, refrigeration systems, chemical processes, and oil systems (Pettersson et al., 2009; Ramesh and Prabhu, 2015).
T_min has been widely studied in terms of various parameters such as substrate material (Peterson and Bajorek, 2002), surface conditions and oxidation (Sinha, 2003; Lee et al., 2014), system pressure (Henry, 1974; Sakurai et al., 1984), flow condition (Groeneveld and Stewart, 1982; Carbajo, 1985), initial surface temperature (Kang et al., 2018), rod diameter (Sakurai et al., 1987; Jun-young et al., 2018), liquid subcooling (Adler, 1979; Freud et al., 2009), vapor-liquid contact angle (Ebrahim et al., 2018), surface roughness and microstructure (Peterson and Bajorek, 2002; Carey, 2020), and alternative quenching fluids (Shoji et al., 1990; Lee and Kim, 2017). In the literature, it was recognized that the complexity and high non-linearity of film boiling make it difficult to identify the cause-effect relationships, and the prediction of T_min is carried out mostly using correlations developed empirically from many experimental works under certain specific conditions. Recently, Yagov et al. (2021) showed evidence of the existence of two distinct modes of film boiling during quenching. Steady film boiling of a saturated liquid is one of the most studied boiling regimes, due to the macroscopically impermeable liquid-vapor interface (Aziz et al., 1986; Zvirin et al., 1990). On the other hand, unsteady film boiling (quenching) of saturated/subcooled water is quantitatively and qualitatively different from the steady one. This study focused on unsteady film boiling heat transfer for various degrees of liquid subcooling, system pressures, and substrate materials. Limited correlations are available in the literature for the estimation of T_min. Zuber (1958, 1959) utilized Taylor instability analysis to build a theoretical model to predict the minimum heat flux (q_min), demonstrating the continuity of the vapor-liquid interface on the basis of gravity-driven density differences. The absence of data about the surface properties decreased the accuracy of this correlation, since various experimental works noticed that the surface material and the surface roughness significantly affect T_min (Baumeister et al., 1970; Reed et al., 2013). Berenson (1961) developed a correlation to predict T_min using the Taylor-Helmholtz hydrodynamic instability, building on Zuber's (1958) correlation for q_min. Since T_min is significantly affected by the wall thermal properties, liquid subcooling, and surface condition, Henry (1974) modified Berenson's (1961) correlation to include these parameters. Baumeister and Simon (1973) explored the impact of different parameters on T_min, for instance, the surface conditions, the thermal properties of the heated surfaces, and the liquid subcooling. They developed a model to estimate T_min utilizing the combination of an analytical conduction model for isothermal surfaces and experimental data available in the literature for nonisothermal surfaces (Baumeister et al., 1970). Sakurai et al. (1987) experimentally studied the film boiling heat transfer mechanism on horizontal heated rods quenched in a pool of saturated or subcooled water at different system pressures. The proposed empirical equations were exclusively in terms of the system pressure, which is considered a restriction since T_min is a function of several parameters.
Later, Peterson and Bajorek (2002) developed another correlation for T_min, which was an extension of Berenson's (1961) and Henry's (1974) correlations, taking into account the heat transfer surface properties, the liquid subcooling temperature (T_sub), and the surface roughness. The mean absolute error (MAE) and the root mean square error (RMSE) were estimated to be 51.38 and 65.47%, respectively. A recent model by Yagov et al. (2018) was developed for copper, nickel, and stainless steel spheres quenched in water at various degrees of liquid subcooling and under atmospheric pressure. The model not only covered a wide range of materials but also the effect of the coolant, with an error of ±30%. Ebrahim et al. (2018) developed an empirical correlation that involves the effect of liquid subcooling, surface roughness, and surface substrate material. The correlation is valid between 2 and 15 degrees of liquid subcooling, surface roughness between 0.3 and 0.9 µm, and wall thermal properties from 4.15 × 10^7 to 8.56 × 10^7 J-s/m²-K. When accounting for surface roughness, the results showed that the empirical correlation had an MAE of 1.5% and an RMSE of 9.3%. In the absence of the surface roughness value, the correlation predicted T_min with a relatively higher MAE of 10.7% and RMSE of 13.3%. Although Ebrahim et al. (2018) showed a high dependency of the T_min predictions on surface roughness, surface roughness data are scarce in the literature. It is worth mentioning that most of the above studies concerning the prediction of T_min have focused on special conditions, which limits their application. In this regard, a more comprehensive forecasting model, with applicability to a wide range of temperatures, pressures, and materials, needs to be developed. The present study is focused entirely on predicting the T_min corresponding to transient film pool boiling. Therefore, the goal of this study is to utilize a robust and reliable machine learning technique called random forest (RF) to predict T_min for various substrate rods quenched in either high- or low-pressure distilled water pools. Utilizing the RF model to predict T_min could be effective in capturing the pattern of large sets of data collected from different experimental investigations.

MODELING

Available Models

Berenson (1961) developed a correlation for T_min governed by the Taylor-Helmholtz hydrodynamic instability mode. The minimum film boiling temperature T^B_min is calculated using the following equation:

T^B_min = T_sat + 0.127·(ρ_g·h_fg/k_g)·[g(ρ_f − ρ_g)/(ρ_f + ρ_g)]^(2/3)·[σ/(g(ρ_f − ρ_g))]^(1/2)·[μ_g/(g(ρ_f − ρ_g))]^(1/3),

where T_sat is the saturation temperature, the subscript g refers to the gas/vapor and f to the liquid; ρ is the density, k is the thermal conductivity, h_fg is the latent heat of vaporization, µ is the dynamic viscosity, σ is the surface tension, and g is the gravitational acceleration. This correlation agrees with the available experimental measurements within ±10 percent. Later, Henry (1974) extended the above correlation into a minimum film boiling temperature T^H_min that includes the effects of the wall thermal properties, the degree of liquid subcooling, and the surface condition, through the subcooled temperature T_sub, the specific heat of the wall c_p,w, and the thermophysical properties of the fluid and wall, β_f and β_w, respectively. It is worth mentioning that for both equations, the vapor properties are evaluated at the film temperature, the liquid properties are evaluated at the liquid bulk temperature, and the wall properties are evaluated at the wall surface temperature.
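A small sketch of evaluating the Berenson expression given above follows; the property values are rounded textbook numbers for saturated water at atmospheric pressure (vapor properties at an approximate film temperature), used only for illustration.

```python
import math

def berenson_tmin(T_sat, rho_f, rho_g, h_fg, k_g, mu_g, sigma, g=9.81):
    """Berenson (1961) minimum film boiling temperature; SI units, kelvin."""
    dr = rho_f - rho_g
    dT = 0.127 * (rho_g * h_fg / k_g) \
        * (g * dr / (rho_f + rho_g)) ** (2 / 3) \
        * (sigma / (g * dr)) ** 0.5 \
        * (mu_g / (g * dr)) ** (1 / 3)
    return T_sat + dT

# Saturated water at 1 atm (approximate, rounded property values).
T = berenson_tmin(T_sat=373.15, rho_f=958.0, rho_g=0.60, h_fg=2.257e6,
                  k_g=0.025, mu_g=1.2e-5, sigma=0.059)
print(f"T_min (Berenson) ~ {T - 273.15:.0f} degC")  # on the order of 185 degC
```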
Data Collection

The model was developed using a total of 379 experimental data points for T_min reported in the literature. All the experimental data were collected from research papers with similar experimental setups, as shown in Figure 1, with the exception of the Sakurai et al. (1984) data points, which were taken from thin horizontal rods. The data were used in a previous work to develop a correlation for T_min using an artificial neural network (Bahman and Ebrahim, 2020). The quenching facility consists mainly of a furnace, a test sample, a pool, and a data acquisition (DAQ) system. The furnace is used to heat the test sample to the desired initial temperature before plunging it into the pool. All the test samples have a cylindrical shape, but they vary in length and diameter. Thermocouples are embedded inside the test samples and are connected to the DAQ system and a computer to monitor and measure the test sample temperature during the experiments. A pool with an immersion heater is used to heat the coolant to the desired degree of liquid subcooling. An immersion thermocouple is placed in the bath to monitor and measure its temperature before, during, and after the quenching process. Transient pool boiling heat transfer experiments for various vertical quenched rods in stagnant water baths were conducted to investigate the effect of various parameters on T_min. The quenching conditions vary in the degree of liquid subcooling, initial rod temperature, saturation pressure, and thermophysical properties, as listed in Table 1. The experiments followed similar procedures. First, the rods were heated to a certain initial wall temperature (T_i) in a furnace or a ceramic heater. Then, they were plunged into pools with various degrees of liquid subcooling. The temperature of the water in the pool is controlled by an immersion heater and measured by an immersion thermocouple. The degree of liquid subcooling of the pool represents the difference between the saturation and water temperatures (T_sub = T_sat − T_w). The input data of the model are taken from different references, as shown in the Supplementary Appendix. A summary of the datasets is presented in Table 1. The data consist of the degree of subcooling (T_sub), the system pressure (P_sys), and the substrate material thermophysical property ratio (β_f/β_w), where the thermophysical property is the product of the density, thermal conductivity, and specific heat (β = ρ·k·c_p). T_min is the output.

Random Forest Algorithm

Machine learning is a group of computer programs aimed toward learning complex problem behavior from data (Ho, 1995; Bishop, 2006). Learning from data has many applications, which are categorized as classification, clustering, prediction, and association problems. Most of these algorithms work by presenting a "sample" of the problem's behavioral data to the algorithm in order to create a "human brain"-like computer learning system that is able to understand the problem and correctly generalize its response toward never-seen behavioral data for the same problem later. One example of such algorithms is the neural network technique, which is used extensively in thermal system applications (Zabirov et al., 2020).
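For concreteness, the property grouping β = ρ·k·c_p and the model input β_f/β_w can be computed as below; the property values are textbook-order-of-magnitude assumptions, not the paper's tabulated data.

```python
def beta(rho, k, cp):
    # Thermophysical property grouping used as a model input:
    # beta = density * thermal conductivity * specific heat.
    return rho * k * cp

# Assumed illustrative properties (not the paper's Table 1 values):
beta_f = beta(rho=958.0, k=0.68, cp=4216.0)   # saturated water near 100 degC
beta_w = beta(rho=7900.0, k=16.0, cp=500.0)   # a stainless-steel rod
print(f"beta_f/beta_w = {beta_f / beta_w:.4f}")
```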
The most widely used category of such computer programs is the classification algorithm, which classifies different data samples into different classes. For example, given correct patient diagnosis data, a classification algorithm can tell, after the learning process, whether a patient needs to be hospitalized (class 1) or not (class 2). Another example from engineering: given preliminary assessment data for an engineering project, one can tell, using a classification algorithm trained on assessment data from many previously conducted projects, whether that project should be categorized as high risk (class 1), risky (class 2), low risk (class 3), or no risk at all (class 4). Many important applications using the classification learning process help different industries. Decision trees (DTs) are among the oldest and best-known classification algorithms, as shown in Figure 2. A DT is a computational tree that uses, at every branching level, the data attribute that most reduces the entropy (i.e., the degree of randomness) of the data classification from before branching to after branching. This branching should also increase the information gain within the resulting branches. Each branch holds a group of data that can be classified into one possible class or into one of several classes. A "leaf node" in a DT is the node used to make the final discrimination between different classes of the problem. Random forest is a "hyper" classification algorithm that combines the decisions of an ensemble of DTs into a single decision using some sort of voting model (Ho, 1995). The main idea behind RF is that it samples the training set into N subsets, each of size M, created randomly with replacement from the total set (relative to the number of DTs created), and then uses these subsets to train different DTs separately. This operation, called bagging, leads to better model performance because it decreases the variance of the model without increasing the bias (Breiman, 1994). Once trained, the DTs are used to obtain the predicted class of the remaining data samples, and their different results are combined in a voting (or averaging) operation to obtain the final classification. Finally, for the assessment of the performance of the model, three criteria, namely, the relative error (RE), R-square (R²), and mean square error (MSE), are used. Values of R² closer to 1 mean a higher confidence level of the model, whereas lower values of RE and MSE are more favorable in terms of model accuracy. These statistical criteria are determined as follows:

RE = |y_i − ŷ_i|/y_i × 100%,
R² = 1 − Σ_i (y_i − ŷ_i)² / Σ_i (y_i − ȳ)²,
MSE = (1/n)·Σ_i (y_i − ŷ_i)²,

where y_i and ŷ_i are the experimental and predicted values, respectively, and ȳ is the mean of the experimental values.

METHODS AND RESULTS

Computation experiments were conducted using the RF machine learning algorithm from the WEKA machine learning platform (Frank et al., 2016) on an Intel Core i7 PC with 8 GB of RAM. As mentioned above, the data were collected from different sources used by different researchers to measure T_min. As a start, 379 data samples were used. The data have four main parameters: T_sub, the fluid-to-substrate thermophysical property ratio (β_f/β_w), the saturated system pressure (P_sys), and the material name, together with T_min; all were used as inputs except T_min, which was considered the output. The model was compared with two correlations reported in the literature under specific experimental conditions. The computational experimentation went through multiple phases before the final results were obtained: (1) Data cleansing phase: The data were analyzed for their suitability for the machine learning process.
Some classes (i.e., material types) were immediately removed from the dataset due to the lack of enough samples. Two material types were found to have very few samples (one had a single sample and another only two). Generally, in a machine learning process, each class needs a sample size proportionate to the complexity of the pattern to be learned within the data. Furthermore, each class needs enough samples to populate both the training set and the testing set. Thus, two samples of any class are considered not enough to learn anything useful. After cleansing those low-represented classes, 373 samples remained. (2) Initial analysis phase: The first set of experiments was conducted to see how good the results would be in general, without tuning. The experiment used the full data without dividing it into training and testing sets. As shown in Figure 3A, the model performs well except in some areas where the error margin was relatively high (e.g., samples 133-157). The RMSE was 33.19, which is also considered relatively high. This set of experiments revealed the need for further investigation: looking at the areas of high absolute error and conducting some correlation studies within data samples of the same material types, we found that the data include some "outliers" (a few data samples that clearly differ from the dominant pattern of the rest and were most probably generated via erroneous measurements). Eliminating those data samples resulted in a net total of 362 clean samples. (3) Result analysis phase: In this phase, we ran several experiments to analyze the effect of different parameters on the model. In these experiments, the RF model was optimized to give the best results on the training data. Figure 3B shows the improvement of the results when the outliers were removed; the RMSE decreased significantly to 11.3. The parameters and results obtained in optimizing the model are shown in Table 2. (4) Final result phase: The total remaining samples were divided into two parts: 308 training samples and 54 testing samples (an 85% split). We used the same parameter setup as in the previous phase, as shown in Table 2. We ran the algorithm 30 times, and the average number of trees generated was around 200. In Figure 4, we present the model predictions together with the 54 actual experimental data points. We can see that the RF model performance is excellent in predicting T_min: R² is 0.9758 (root relative squared error of 6.6%), which means an appropriate prediction of the actual experimental data, as shown in Figure 5. In addition to the values of R², the RE of the model for the data was determined. The RE of the model is 4.86%, while in the majority of cases, the RE values were within ±2%. The rest of the results for the RF prediction model on the testing data samples are presented in Table 3. For comparison against existing models, the final results of RF were compared with two well-known correlation models from the literature, Berenson (1961) and Henry (1974). The results of the comparison are shown in Figure 6A, where the very good behavior of the RF model in predicting the actual temperature is clear compared to the other models. In Figure 6B, the comparison between the models is presented by a box-and-whisker plot. Furthermore, both models failed to capture the lower "whisker" limit compared to RF, while the Henry model captured the upper "whisker" limit relatively better than the RF and Berenson models.
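The paper's pipeline was built in WEKA; the following scikit-learn sketch reproduces the same steps (85/15 split, ~200 trees, R²/MSE/RE evaluation) on randomly generated placeholder data standing in for the 362 cleaned samples.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# X holds the three numeric inputs (T_sub, beta_f/beta_w, P_sys) for the
# 362 cleaned samples; y holds the measured T_min. Random placeholders
# stand in for the experimental dataset here.
rng = np.random.default_rng(42)
X = rng.random((362, 3))
y = 200.0 + 300.0 * rng.random(362)

# 85/15 split, as in the final-result phase (308 training / 54 testing).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

re = 100.0 * np.abs(pred - y_te) / y_te  # relative error per sample, %
print(f"R^2     = {r2_score(y_te, pred):.4f}")
print(f"MSE     = {mean_squared_error(y_te, pred):.2f}")
print(f"mean RE = {re.mean():.2f} %")
print("feature importances:", model.feature_importances_)
```

With real data, the feature importances give the input ranking reported in the conclusions (T_sub first, then β_f/β_w, then P_sys).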
The lower accuracy of Berenson's (1961) and Henry's (1974) models compared to the RF in this study was attributed to the way the correlations were developed. Berenson's correlation was developed by modeling the bubble spacing and growth rate, which were determined using the Taylor instability. The film boiling heat transfer was analyzed by immersing a horizontal surface in n-pentane and carbon tetrachloride at atmospheric pressure. The vapor properties were evaluated at the film temperature and system pressure. Therefore, Berenson's correlation accounts for the system pressure by changing the vapor properties at different pressures. This correlation does not account for the surface thermophysical properties, surface roughness, and surface wettability, which limits its applications. Henry modified Berenson's correlation in order to account for the effect of thermophysical properties on T_min, but it does not adequately include the effect of system pressure. A study by Kang et al. (2018) showed another limitation of Henry's (1974) correlation. They performed experiments on stainless steel (SS) and copper (Cu) rods. The experimental results showed the same value of T_min for both, while Henry's correlation predicted different values due to the difference in the substrate materials. The disagreement between the experimental and predicted data could be due to other effects such as the surface conditions and the vapor film collapse mode. Peterson and Bajorek (2002) concluded that T_min has a strong, positive relationship with pressure at pressures below 1.0 MPa. Therefore, Berenson's correlation predicts T_min accurately at lower system pressures compared to Henry's correlation, which does not adequately account for a pressure effect on T_min.

CONCLUSION

A new RF machine learning algorithm was used to formulate a correlation for the T_min of rods with different substrate materials quenched in distilled water pools at various system pressures. The resulting model was compared to well-known correlation models from the literature. Experiments show that the RF model predicted T_min far better than the compared correlations. One of the drawbacks of the available models is their limited applicability range of input parameters, while the current model was tested over a wide range of all inputs. The key results of the current models can be summarized as follows:

• The RF models were able to confidently forecast the T_min of the quenching rods, with maximum deviations of 13.6%.
• The R² values of the RF-based model were equal to 0.9752.
• The average absolute RE of the RF model is 4.86%, while those of Berenson and Henry are 33.7% and 43.4%, respectively.
• Among the considered inputs, T_sub had the greatest impact on the T_min value, followed by the substrate material thermophysical properties (β_f/β_w) and, finally, the system pressure (P_sys).

Future works can produce a more generalizable model by utilizing available experimental data for horizontal and vertical flat plates and spheres. In addition, the effect of the surface tension and viscosity of the coolant can be involved in the model.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.
5,328.4
2021-04-28T00:00:00.000
[ "Computer Science" ]
Spatial-Temporal Differentiation and Dynamic Evolution of Digital Finance Inclusive Development in the Yangtze River Delta Economic Cluster of China Clarifying the sources of the digital finance inclusive development gap helps profoundly understand the regional characteristics of inclusive digital finance and benefits the scientific and reasonable formulation and implementation of specific policies. Based on the 2011-2020 "Peking University Digital Finance Inclusive Index," this study explores the regional disparity in digital finance inclusive development and its sources in the Yangtze River Delta economic cluster using the methods of center of gravity shift, standard deviation ellipse, nested Theil index difference decomposition, and Kernel density. We found that inclusive digital finance shows a rising trend year by year, with evident heterogeneity and spatial agglomeration characteristics; interprovincial differences are the primary sources of the overall differences in inclusive digital finance in the Yangtze River Delta. The spatial effect of digital inclusive finance between the Yangtze River Delta regions has continuity. Because of the significant positive spatial correlation of digital inclusive finance between regions, digital inclusive finance in this region is vulnerable to potential shocks from neighboring regions. Moreover, with or without the spatial lag term, the level of inclusive digital finance in all regions of the Yangtze River Delta shows a leap forward. This paper examines the regional gap of inclusive digital finance and its structural decomposition in the Yangtze River Delta city cluster from three perspectives: time trend, spatial structure, and dynamic evolution. This provides an empirical basis for differentiated digital finance inclusive development policies and decision-making guidance for accelerating the formation of a regionally synergistic development path for inclusive digital finance.

Introduction and Literature Review

Inclusive digital finance is vital in promoting high-quality economic development and achieving shared prosperity in the Yangtze River Delta economic cluster. Inclusive digital finance has profoundly changed the course of inclusive financial development, providing new ideas and a strong impetus for the sustainable and rapid development of inclusive finance. It has great potential for effectively cracking the problem of financial exclusion and solving the financing dilemma of the "three rural" sectors and small and microenterprises. Unlike traditional inclusive financial services, which mainly rely on financial institutions and physical outlets to carry out business, inclusive digital finance relies on core digital technologies such as mobile terminals and big data analysis. Inclusive digital finance can help break through industry boundaries, break the "two-eight law" profit model, open up the long-tail financial services market, and enable more financially excluded groups to enjoy financial services, significantly improving the accessibility and coverage of financial services and promoting a more inclusive modern financial system. Therefore, digital financial inclusion has received significant attention from the beginning of its development.
As the Yangtze River Delta Economic Belt is one of the three major support belts for China's economic development and the core area for the construction of the common prosperity demonstration zone, research on how to promote the synergistic development of inclusive digital finance in the Yangtze River Delta region has gradually become a hot topic. Due to the varying levels of traditional financial development, Internet development, economic development, and economic structure of each region in the Yangtze River Delta Economic Belt, there are noticeable regional differences in the development of inclusive digital finance in the Yangtze River Delta, which restrict its sustainable, rapid, and coordinated development and bring challenges to the construction of the common prosperity demonstration zone in the Yangtze River Delta. It is therefore meaningful to investigate the regional differences and temporal changes of inclusive digital finance in the Yangtze River Delta Economic Zone. This not only helps local governments reevaluate the status of inclusive digital finance development in the region, but also gives them a scientific basis and a decision-making guide for the coordinated and rapid development of inclusive digital finance. Regarding studies of regional differences and dynamic evolution, the existing literature has many analyses based on inclusive finance but lacks analyses based on inclusive digital finance. At present, the literature mainly adopts the following methods to analyze the regional differences in the development of inclusive finance. The first is a simple data comparison of each region's inclusive financial development index: based on the results of measuring the development level of inclusive finance in each region, a comparative analysis of the development status of local inclusive finance is conducted to explore the dynamic changes in the spatial distribution of inclusive finance over time. There are obvious spatial differences and multipolar differentiation in the development level of inclusive finance. At the same time, differing levels of socioeconomic policy support and other factors lead to the uneven development of regional inclusive finance, and there are differences in where households are lifted out of poverty [1-10]. The second is the Moran index, which measures the regional differences in the development of inclusive finance, the distribution pattern of inclusive finance, and factors such as government intervention and market-based and informal finance that are closely related to the development of inclusive finance [2-4, 11-14]. The third is the distribution dynamics method, which analyzes the distribution and dynamic evolution of regional differences in inclusive finance, mainly through the Theil index, the Dagum Gini coefficient, Kernel density estimation, and related techniques. The Dagum Gini coefficient is used to analyze the size and source of spatial disparities and their trends over time and to decompose regional disparities into intraregional and interregional components [15-19]. The contribution of disparity sources to the overall disparity can thereby be quantified, while Kernel density estimation is used to analyze the distribution location, shape, and extension, which makes up for the shortcomings of the Dagum Gini coefficient.
However, the estimation can provide only limited information on the internal dynamics of the distribution of regional financial development levels. It cannot reflect the relative positional changes of each region in the distribution of regional financial development levels or the probability of such changes [20-26]. The limitations of existing studies are mainly the following. First, the evaluation systems differ in the design of various indicators; some indicators are too subjective and cannot reflect the reality of the development of inclusive digital finance. Second, in terms of research methods, most existing studies use the coefficient of variation and the Theil index to analyze the regional differences in inclusive digital finance and rarely involve the decomposition of regional differences. Third, most studies tend to analyze digital finance statically, while dynamic research is lacking. This paper conducts the following explorations on the basis of the existing research results. Firstly, it presents the actual development status and geographical differentiation of digital inclusive finance at the municipal level in the Yangtze River Delta Economic Zone, a national strategic support area, to make up for the shortcomings of previous, mostly qualitative, studies. Secondly, it makes comprehensive use of analytical tools such as spatial gravity shift, standard deviation ellipse analysis, and the Theil index to decompose the regional differences of digital inclusive finance in the Yangtze River Delta Economic Zone. Thirdly, the introduction of spatial Kernel density estimation reveals the dynamic distribution and spatiotemporal evolution of digital inclusive finance development in the Yangtze River Delta Economic Zone, further clarifying the aspects that should be focused on in the future to enhance the level of digital inclusive finance in the region, which is of great practical significance for implementing the innovation drive, achieving high-quality economic development, and realizing common prosperity.

Spatial Elliptic Formula

The overall characteristics of the interannual variation and the dynamic spatial process of the digital financial inclusion level in the Yangtze River Delta region are explained based on five basic parameters of the standard deviation ellipse (SDE) model [27-29]. The standard deviation ellipse is calculated as follows:

X̄_w = Σ_i w_i·x_i / Σ_i w_i,   Ȳ_w = Σ_i w_i·y_i / Σ_i w_i,

σ_x = √( Σ_i (w_i·x̃_i·cosθ − w_i·ỹ_i·sinθ)² / Σ_i w_i² ),   σ_y = √( Σ_i (w_i·x̃_i·sinθ + w_i·ỹ_i·cosθ)² / Σ_i w_i² ),

where X̄_w and Ȳ_w are the coordinates of the weighted mean center of the Yangtze River Delta, σ_x denotes the x-axis standard deviation, σ_y the y-axis standard deviation, w_i the weight, x_i and y_i the spatial location of study unit i, x̃_i and ỹ_i the coordinate deviations of the spatial locus of the study object from the mean center, and θ the deflection angle of the ellipse.

Theil's Index and Its Nested Decomposition Method

The Theil index is an essential indicator of regional disparities. A larger Theil coefficient indicates larger regional economic disparities, and vice versa [30,31].
The formula for calculating the Theil coefficient is as follows:

T = Σ_{i=1..N} y_i · ln(y_i/p_i),

where y_i is the proportion of province i's digital financial inclusion in the overall digital financial inclusion level of the Yangtze River Delta, p_i is the proportion of region i's population in the total population, and N is the number of regions, with the prefecture-level city taken as the primary regional unit. A two-stage nested decomposition of the Theil coefficient can be made to decompose the overall difference of digital financial inclusion in the Yangtze River Delta (T_d) into intraprovincial differences (T_WP), interprovincial differences (T_BP), and interbelt variation (T_BR):

T_d = T_WP + T_BP + T_BR,

where the interprovincial variation is

T_BP = Σ_i (Y_i/Y) · ln[(Y_i/Y)/(P_i/P)],

the intraprovincial variation in province i is

T_i = Σ_j (Y_ij/Y_i) · ln[(Y_ij/Y_i)/(P_ij/P_i)],   with T_WP = Σ_i (Y_i/Y)·T_i,

and the interbelt variation T_BR is obtained analogously by aggregating provinces into belts. T_d can be further decomposed to the county level, where y_ijk represents the level of digital financial inclusion in regional administrative unit k in city j of province i, and p_ijk represents the population in regional administrative unit k of province i and city j. Y_ij and P_ij are the level of digital financial inclusion and the population in province i and municipality j, respectively; Y_i and P_i are province i's digital financial inclusion and population, respectively; and Y and P are the overall digital financial inclusion and population in the Yangtze River Delta region, respectively.

Estimation of Kernel Density under Spatial Conditions

The traditional Kernel density estimate can show the distribution pattern of variables but cannot capture the specific change after a period of time. In contrast, under spatial conditions, the Kernel density estimation method is used to estimate the probability density function of state transitions in a stochastic process. It can accurately determine the changing pattern through a three-dimensional map of the dynamic distribution of variables and a density contour map, so it can be used to explore the changing trend of each region over time [31,32]. In the stochastic Kernel density estimation, the Gaussian Kernel function is also used in this paper, with the expression

f(x) = (1/(n·h)) Σ_{i=1..n} K((X_i − x)/h),   K(u) = (1/√(2π)) · exp(−u²/2),

where f(x) denotes the marginal Kernel density function of x, n is the number of observations, and h is the bandwidth. f(x, y) denotes the joint Kernel density function of x and y, estimated analogously, and the conditional density is g(y|x) = f(x, y)/f(x).

Study Area and Data Sources

The Yangtze River Delta urban agglomeration underwent five expansions from 1992 to 2019; full coverage of the three provinces and one city was achieved in January, and the number of members increased from 15 to 41. Therefore, this paper selects the counties of the three provinces and one city in the Yangtze River Delta region after the fifth expansion as the research sample. These include 16 counties in Shanghai, 96 counties in Jiangsu, 90 counties in Zhejiang, and 105 counties in Anhui. The "Peking University Digital Finance Inclusive Index," compiled by Peking University and Ant Financial using massive data on inclusive digital finance, calculates the digital financial inclusion index from 33 indicators covering breadth of coverage and depth of usage. It is a widely recognized index that can reflect the level and status of digital finance inclusive development, so this paper uses it for that purpose.
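A minimal sketch of the first stage of the nested decomposition (province level only; the city/county stage repeats the same pattern) is shown below, assuming a hypothetical county-level table with columns province, dfi (digital financial inclusion index), and pop (population).

```python
import numpy as np
import pandas as pd

def theil(y_share, p_share):
    # Theil index T = sum_i y_i * ln(y_i / p_i), skipping zero shares.
    y = np.asarray(y_share, dtype=float)
    p = np.asarray(p_share, dtype=float)
    mask = y > 0
    return float(np.sum(y[mask] * np.log(y[mask] / p[mask])))

# Hypothetical county-level data (placeholder values, not the index data).
df = pd.DataFrame({
    "province": ["SH", "JS", "JS", "ZJ", "ZJ", "AH"],
    "dfi":      [300.0, 250.0, 240.0, 270.0, 260.0, 180.0],
    "pop":      [2.4, 1.0, 1.2, 0.9, 1.1, 1.5],
})

# Overall Theil index over counties.
T_total = theil(df["dfi"] / df["dfi"].sum(), df["pop"] / df["pop"].sum())

# Between-province component, computed on provincial aggregates.
g = df.groupby("province")[["dfi", "pop"]].sum()
Y = g["dfi"] / g["dfi"].sum()
P = g["pop"] / g["pop"].sum()
T_between = theil(Y, P)

# Within-province component: share-weighted sum of each province's own Theil.
T_within = sum(
    Y[prov] * theil(sub["dfi"] / sub["dfi"].sum(), sub["pop"] / sub["pop"].sum())
    for prov, sub in df.groupby("province")
)

print(f"T_d = {T_total:.4f} = T_BP ({T_between:.4f}) + T_WP ({T_within:.4f})")
```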
Other data sources include the annual statistical yearbooks of the three provinces and one city, as well as the China Statistical Yearbook.

The Spatial Pattern of Digital Finance Inclusive in the Yangtze River Delta

Spatial Distribution Characteristics of Digital Finance Inclusive in the Yangtze River Delta Region

As shown in Figure 1, the overall level of inclusive digital finance in the Yangtze River Delta region continued to rise during the period 2011-2020. Although it tends to level off and shows an overall convergence trend after 2014, it still grows at a rate of about 10% per year, and the Yangtze River Delta is still in a period of opportunity for inclusive digital finance. The overall level of inclusive digital finance in the Yangtze River Delta region was 69.31 in 2011, reached 170.19 in 2014, and reached 293.55 in 2020. The level of inclusive digital finance in the Yangtze River Delta economic cluster improves, but the growth rate gradually decreases: the average annual growth rate in 2011-2014 was 49.51%, and the average annual growth rate in 2014-2020 decreased to 12.08%. This indicates that the development of inclusive digital finance in the Yangtze River Delta economic cluster has entered a stable stage, moving from quantitative change to qualitative change, and the growth rate of inclusive digital finance has changed significantly. From the analysis of the structure of members within the Yangtze River Delta, the volume of inclusive digital finance shows prominent spatial and temporal divergence characteristics: Shanghai > Zhejiang Province > Jiangsu Province > Anhui Province. Shanghai and Zhejiang Province have been in a rapid development stage in the Yangtze River Delta region, and their level of inclusive digital finance is relatively high; Jiangsu Province is developing faster, and the gap between its level of inclusive digital finance and that of Shanghai and Zhejiang Province is gradually narrowing. Anhui Province has a lower level of inclusive digital finance but is driven by the rapid digital economy of Hefei and the radiation of Nanjing, and its growth rate has gradually caught up with those of Shanghai, Zhejiang Province, and Jiangsu Province. From the analysis of growth rates, the growth rate of inclusive digital finance in Shanghai and Zhejiang Province is consistent with the trend of the growth rate in the Yangtze River Delta region as a whole. The growth rate of the inclusive digital finance level in Jiangsu Province and Anhui Province is always higher than the overall growth rate of the Yangtze River Delta region. The growth rate of inclusive digital finance in Anhui Province was much higher than the average growth rate of the other regions in the Yangtze River Delta economic group until 2017; after 2017, it has been consistent with the growth rate of the other regions. In summary, inclusive digital finance in the Yangtze River Delta regions has changed from high growth to high quality, and the growth rate has declined significantly but still maintains about 10% in most years. This indicates that the Yangtze River Delta region is still in an opportunity period for the in-depth development of inclusive digital finance.
According to Figure 2, digital inclusive finance in the Yangtze River Delta at the county and municipal scales shows prominent spatially divergent characteristics. In Figure 2(a), the level of digital inclusive finance in Shanghai, Nanjing, Suzhou, Hangzhou, Jiaxing, Ningbo, Wenzhou, and Jinhua is relatively high in 2011, while the level of development in Anhui Province and northern Jiangsu Province is relatively low, and the overall level of digital inclusive finance development in the Yangtze River Delta is low. After 2015, with the rapid economic development of the Yangtze River Delta region and the continuous industrial transfer from developed countries and regions, differences in the positioning and development strategies of different regions caused significant changes in the spatial pattern of digital inclusive finance development between regions, resulting in increasingly obvious spatial differences in the level of digital inclusive finance within the Yangtze River Delta. Nanjing, Changzhou, Suzhou, Shanghai, Hangzhou, Ningbo, Jinhua, and Wenzhou are continuously linked in an inverted "S" distribution and spread to the surrounding areas. The scale of digital inclusive finance in these central cities continues to rise, merging into a dense digital inclusive financial center with centralized concentration and peripheral diffusion that connects with Hefei to the north and extends to Taizhou to the south, forming the core network structure of digital inclusive finance in the Yangtze River Delta. Lishui City and Quzhou City also gradually connect and integrate to form the southern gathering area of digital inclusive finance in the Yangtze River Delta. Hefei City in Anhui Province has become one of the cores of digital inclusive finance in the Yangtze River Delta region, and this center is spreading to the periphery. From 2015 to 2020, the southern part of Jiangsu Province continues to develop and spread to the periphery, forming a linear radiation structure extending east and west from the provincial capital cities and the municipality directly under the central government. The development rate of digital inclusive finance in southern Jiangsu is relatively high, gradually forming a digital inclusive finance agglomeration there, while the development of Wenzhou and the surrounding areas in southern Zhejiang is relatively slow. Overall, the Yangtze River Delta economic cluster shows a spatial pattern of "high in the east and low in the west" in its level of digital inclusive finance development.

Center of Gravity Shifts and Standard Deviation Ellipse Analysis of Digital Inclusive Finance in the Yangtze River Delta Region. In order to accurately portray the morphological evolution of the spatial pattern of digital inclusive finance development in the Yangtze River Delta region, the evolution trend is quantitatively identified with ArcGIS software, taking three years as a period from 2011 to 2020, and seven characteristic time points are selected.
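The center-of-gravity and standard deviation ellipse quantities used in the analysis below can be sketched as follows (a minimal illustration, not the ArcGIS workflow actually used; it assumes projected city coordinates with the inclusion index as weights, and the axis convention may differ from ArcGIS's):

```python
# Weighted mean center ("center of gravity") and weighted standard
# deviational ellipse (SDE): orientation plus the two semi-axes.
import numpy as np

def weighted_center(x, y, w):
    """Weighted mean center of points (x, y) with attribute weights w."""
    w = np.asarray(w, dtype=float)
    return np.average(x, weights=w), np.average(y, weights=w)

def standard_deviational_ellipse(x, y, w):
    """Rotation angle theta and semi-axes (sx, sy) of the weighted SDE."""
    xc, yc = weighted_center(x, y, w)
    dx, dy = np.asarray(x, float) - xc, np.asarray(y, float) - yc
    w = np.asarray(w, dtype=float)
    a = np.sum(w * (dx**2 - dy**2))
    b = np.sum(w * dx * dy)
    theta = 0.5 * np.arctan2(2.0 * b, a)          # principal-axis orientation
    c, s = np.cos(theta), np.sin(theta)
    sx = np.sqrt(np.sum(w * (dx * c + dy * s) ** 2) / w.sum())   # long axis
    sy = np.sqrt(np.sum(w * (-dx * s + dy * c) ** 2) / w.sum())  # short axis
    return theta, sx, sy
```

The shape index reported in Table 1 is then simply the ratio of the shorter to the longer of the two semi-axes.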
The trajectory of the center-of-gravity shift of digital inclusive finance in the Yangtze River Delta (Figure 3) and the related standard deviation ellipse parameters (Table 1) were calculated. (Note to Table 1: the average ellipse shape index is the ratio of the short semi-axis to the long semi-axis; the ellipse area ratio for the current period is the ratio of the current period's ellipse area to that of the base period, 2011, so the ratio for the base period is 1.) The center-of-gravity migration trajectory is shown in Figure 3: the center of gravity of digital inclusive finance has gradually moved toward the northwest. The creation of intelligent network-connected automobile industry clusters, home appliances, intelligent home, high-end equipment manufacturing, energy conservation, environmental protection, photovoltaic, and new energy industries has enabled Hefei to develop digital inclusive finance faster than the eastern region. Overall, the spatial center of gravity of digital inclusive finance in the Yangtze River Delta region moves to the northwest, with the growth rate of digital inclusive finance in the northern region slightly higher than that in the southern region, and the growth rate in the western region slightly higher than that in the eastern region.

The long and short semi-axes of the standard deviation ellipse represent the degree of spatial dispersion and the range of spatial distribution of digital inclusive finance, respectively. The more significant the difference between the two (the more elongated the ellipse), the more significant the directionality of digital inclusive finance; and the shorter the short semi-axis, the stronger the centripetal force of digital inclusive finance in the Yangtze River Delta region. As shown in Table 1, the long semi-axis of the digital inclusive finance ellipse increases by 14.46 km and the short semi-axis increases by 4.72 km, indicating that digital inclusive finance in the Yangtze River Delta region shows a slight trend of spatial diffusion. The average shape index gradually increases from 0.597 to 0.604; the ellipse gradually approaches a circle, indicating that digital inclusive finance in the Yangtze River Delta region tends to disperse. The azimuth θ of digital inclusive finance in the Yangtze River Delta region changes from 42.871° in 2011 to 140.73° in 2020, and the distribution direction shows a "southeast-northwest" trend, indicating that the pattern of digital inclusive finance in the region is changing, with digital inclusive finance in the northwest (such as Hefei City) and the southeast (Taizhou City and Wenzhou City) showing a balanced development trend.

Spatial Difference Decomposition. The interprovincial differences and intraprovincial administrative differences of digital inclusive finance in the Yangtze River Delta region are calculated by the nested decomposition method of the Theil index. As shown in Table 2 and Figure 4, the overall difference of interprovincial digital inclusive finance in the Yangtze River Delta region shows a gradual decline from 2011 to 2020, with the Theil index decreasing from 1.35 in 2011 to 0.12 in 2020.
The difference of digital inclusive finance within provincial-level administrative regions in the Yangtze River Delta also gradually decreases, and the contribution of the intraprovincial difference to the overall difference of digital inclusive finance in the Yangtze River Delta as a whole decreases year by year, from 112.61% to 9.21%.

The Evolution of the Dynamic Distribution of Digital Inclusive Finance

The previous section focused on the differences in digital inclusive finance at each scale of the Yangtze River Delta as a whole. This section uses the Kernel density method to portray the dynamic evolutionary trend of digital inclusive finance in the Yangtze River Delta. First, the unconditional Kernel density estimation method is used to study the dynamic evolution of digital inclusive finance at the county and municipal scales over three-year intervals; then the static Kernel density estimation under spatial conditions is applied.

Unconditional Kernel Density Estimation for Digital Inclusive Finance in the Yangtze River Delta. The two subplots on the left side of Figure 5 show the unconditional Kernel density and density contours of digital inclusion in the Yangtze River Delta region from 2011 to 2020. The x-axis is the digital inclusion of a region in year t, and the y-axis is its digital inclusion in year t + 3. The z-axis, perpendicular to the x-y plane, represents the Kernel density of digital inclusion, i.e., the probability density of the points (x, y). In the contour plot of unconditional Kernel density, the probability decreases along the contour lines from inside to outside. The density of the contours represents the degree of convergence of digital inclusive finance in the Yangtze River Delta region: the denser the contours, the stronger the convergence trend. If the density is concentrated near the positive 45° line, the level of digital inclusive finance of a region in year t + 3 is the same as in year t, with no substantial change. If it is near a value on the y-axis and parallel to the x-axis, digital inclusive finance in all regions of the Yangtze River Delta converges to a certain level in year t + 3. From Figure 5, it can be seen that digital inclusive finance in the Yangtze River Delta divides into two interval regimes. When a region's digital inclusive finance in year t is higher than 0.8 and less than 1.2, the Kernel density contour is mainly distributed along the 45° diagonal, indicating that digital inclusive finance in the regional unit gradually increases from year t to year t + 3. When a region's digital inclusive finance in year t is higher than 1.2, the Kernel density contour is mainly distributed below the 45° diagonal, parallel to the x-axis, and digital inclusive finance in year t + 3 mainly shifts to about 1.2-1.3. Digital inclusive finance in the Yangtze River Delta regional units thus shows a noticeable convergence trend. This conclusion is consistent with the comparative spatial analysis of digital inclusive finance in the Yangtze River Delta above.

Static Kernel Density Estimation under Spatial Conditions. In order to determine whether there is a spatial effect of digital inclusive finance in the Yangtze River Delta, static Kernel density estimation under spatial conditions is applied to explore the convergence pattern of digital inclusive finance in the region.
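A minimal sketch of the kernel machinery behind Figures 5-7 follows (illustrative only; the bandwidths, grids, and the construction of the spatial-lag conditioning variable are assumptions, not the authors' settings):

```python
# Gaussian product-kernel estimate of the joint density f(x, y) of the
# inclusion level in year t (x) and year t+3 (y), and the conditional
# density g(y|x) = f(x, y) / f(x) used for the (spatially) conditional plots.
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def joint_density(x_obs, y_obs, x_grid, y_grid, hx, hy):
    """Joint KDE f(x, y) on a grid; hx and hy are bandwidths."""
    gx = gaussian_kernel((x_grid[:, None, None] - x_obs) / hx)
    gy = gaussian_kernel((y_grid[None, :, None] - y_obs) / hy)
    return (gx * gy).mean(axis=2) / (hx * hy)     # average over observations

def conditional_density(f_xy, x_grid, y_grid):
    """g(y|x) = f(x, y) / f(x), with f(x) the marginal over y."""
    fx = np.trapz(f_xy, y_grid, axis=1)           # marginal density f(x)
    return f_xy / fx[:, None]
```

For the spatially conditional variants, the x observations are replaced by the spatial lag of the inclusion level (e.g., the neighbor-weighted average), so mass along the 45° diagonal signals positive spatial correlation.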
As Figure 6 shows the static Kernel density and density contours under spatial conditions for digital inclusive finance from 2011 to 2020, the x-axis is the digital inclusive finance of the neighboring regions in year t, and the y-axis is the digital inclusive finance of the region itself in year t. If regions with high digital inclusive finance are adjacent to regions with high levels, and regions with low digital inclusive finance are adjacent to regions with low levels, the probability density in the figure will be concentrated near the 45° diagonal, indicating an apparent spatial effect of digital inclusive finance in the Yangtze River Delta. According to the static Kernel density and density contour maps under spatial conditions, the Kernel density contours are distributed along the positive 45° diagonal with a downward trend, and overall they can be divided into three agglomeration clusters, with digital inclusive finance divided by interval into low, medium, and high levels. When the digital inclusive finance of neighboring counties is at a medium level in year t, there is an obvious positive spatial correlation of digital inclusive finance between regions: the density contour lies near the positive 45° diagonal, showing a "medium-medium" spatial agglomeration trend. When the digital inclusive finance of neighboring regions is at a low or high level in year t, the Kernel density contour is distributed below the positive 45° diagonal; the region's digital inclusive finance grows more slowly as that of neighboring regions increases, and the spatial effect is relatively weaker, much smaller than in the medium-level stage. Combined with the spatial distribution of digital inclusive finance in the Yangtze River Delta region, digital inclusive finance in the central Yangtze River Delta is generally in the medium-level stage; the positive spatial correlation between regions is significant, making digital inclusive finance in a region subject to the potential influence of its neighbors.

Dynamic Kernel Density Estimation under Spatial Conditions. The impact of neighboring regions on digital inclusive finance in a region is further examined by adding the time factor to the spatial static Kernel density estimation. The x-axis in the figure is the digital inclusive finance of the neighboring regions in year t, i.e., the spatial lag term of the region's digital inclusive finance, and the y-axis is the level of digital inclusive finance in the region in year t + 3. As shown in Figure 7, under the low-level spatial lag term, the dynamic Kernel density is distributed near the positive 45° diagonal and shifted above it, which indicates that when a region is adjacent to regions with a low digital inclusion level, its digital inclusion tends to shift upward. Under the medium-level spatial lag term, the dynamic Kernel density is distributed near the positive 45° diagonal. The graph under the high-level spatial lag term is located below the positive 45° diagonal and parallel to the x-axis, which indicates that when the level of digital inclusive finance in a particular region has risen to a certain degree but has not reached a high level,
even if it is adjacent to a high-level region, its digital inclusive finance not only fails to rise accordingly but also tends to shift downward. Figure 7 shows that the spatial effect of digital inclusive finance among the Yangtze River Delta regions has continuity: the digital inclusive finance level of a region in year t has a spatial relationship not only with that of its neighboring regions in year t but also with that of its neighbors at a lag of three periods. Specifically, across space and time, the digital inclusive finance of each region in the Yangtze River Delta is influenced by the digital inclusive finance level of its neighboring regions, and a region at a given level of digital inclusive finance in year t + 3 tends to converge to a specific intermediate level.

Conclusions and Recommendations

This paper examines the spatial distribution, spatial differences, and dynamic evolution of digital inclusive finance in the Yangtze River Delta region, employing the standard deviation ellipse, the nested two-stage Theil index, and Kernel density estimation, based on the current situation of digital inclusive finance development in the Yangtze River Delta city cluster at the city scale. The results show, first, that the overall level of digital inclusive finance continued to rise from 2011 to 2020, and although it tends to level off and shows an overall convergence trend after 2014, it still grows at a rate of about 10% per year; the Yangtze River Delta is thus still in a period of opportunity for digital inclusive finance. The spatial differences in the development of digital inclusive finance in the Yangtze River Delta economic cluster mainly stem from interprovincial disparities, although both interprovincial and intraprovincial spatial disparities are important causes of the overall spatial disparity in the short term. Second, in terms of spatial distribution, digital inclusive finance in the Yangtze River Delta has a distribution pattern of high in the east and low in the west, high in the south and low in the north, with the center of gravity of the distribution gradually shifting to the northwest; the growth rate of digital inclusive finance in the northern region is slightly higher than in the southern region, and the growth rate in the western region is slightly higher than in the eastern region. The reason may be that areas in the northwestern Yangtze River Delta, such as Hefei City, have developed relatively quickly in recent years by undertaking industrial transfer, with a growth rate of digital inclusive finance significantly higher than that of the eastern region; these areas also host many national research and development institutions and institutions of higher education, providing good technical support for the development of the digital economy. The creation of industrial clusters for new-generation information technology, automotive and smart networked vehicles, home appliances and smart homes, high-end equipment manufacturing, energy conservation and environmental protection, photovoltaics, and new energy has made the development of digital inclusive finance in the northwestern Yangtze River Delta faster than in the eastern region. Third, in terms of spatial differences, the overall interprovincial difference in digital inclusive finance in the Yangtze River Delta region shows a gradual decline, from 1.35 in 2011 to 0.12 in 2020.
The difference in digital inclusive finance within provincial administrative regions in the Yangtze River Delta also gradually decreases, and the contribution rate of the provinces to the overall difference in digital inclusive finance in the Yangtze River Delta decreases year by year, from 112.61 percent to 9.21 percent. In terms of spatial and temporal evolution trends, the spatial distribution of internal differences and impacts of digital inclusive finance in the Yangtze River Delta counties is relatively stable, showing a Central > South > North pattern. Fourth, in terms of the dynamic evolution of the distribution, the unconditional Kernel density estimation results indicate that digital inclusive finance in the counties of the Yangtze River Delta will continue to increase year by year in the coming period. The results of static and dynamic Kernel density estimation under spatial conditions show that the spatial effect of digital inclusive finance among the Yangtze River Delta regions has continuity and that the positive spatial correlation of digital inclusive finance among regions is significant, exposing digital inclusive finance in each region to potential shocks from its neighboring regions. Considering the spatial factor, in the short term the level of digital inclusive finance in each region still grows rapidly, and with or without the spatial lag term, the level of digital inclusive finance in each region of the Yangtze River Delta shows a leapfrog increase. From a comprehensive perspective, digital inclusive finance in the Yangtze River Delta region is still growing, with apparent spatial differences and spatial correlation effects at the geospatial scale. Therefore, it is necessary to formulate specific policies that reflect the spatial characteristics of digital inclusive finance in the Yangtze River Delta and to avoid a simple "one-size-fits-all" approach.

Based on this, this paper puts forward the following suggestions. On the one hand, the balanced development of financial services should be promoted through the synergistic development of digital finance in urban clusters. The gap in inclusive finance development among the Yangtze River Delta urban agglomerations has been narrowing and shows convergence characteristics, which makes it possible for economically lagging urban agglomerations to achieve catch-up development in inclusive finance. By vigorously developing digital inclusive finance, lagging regions can enable low- and middle-income people to access financial services more efficiently and alleviate the region's imbalance in financial services. At the same time, the government should speed up the interconnection of financial infrastructure between city clusters and among cities within clusters, and encourage the synergistic development of digital finance in city clusters by using the radiation and driving effects of interconnected financial infrastructure to correct the imbalance of financial services between regions. On the other hand, the central and eastern Yangtze River Delta regions should fully exploit their comparative advantages, taking advantage of cloud computing, big data, and other technologies and of capital.
They should deepen financial supply-side reform, continuously optimize the financial market, organization, and protection systems, pay more attention to improving "quality," optimizing the financial structure, and improving the service environment, and actively explore convergence with international development to promote the high-quality development of digital inclusive finance, thereby better radiating to and driving the northern and western regions. The northern and western regions, in turn, should speed up making up for their shortcomings in the development of digital inclusive finance, solidify the foundation for development, and improve the overall development level of the region while narrowing internal development differences. In addition, the government should accelerate the pace of financial infrastructure construction, enhance financial agglomeration and digital technology application capabilities, promote the deep integration of traditional financial services with digital technology, and continuously innovate products and services, extending the reach and scope of financial services, opening up the "last mile" of financial services, and meeting the diversified financial needs of the public.

Data Availability

The data used to support the findings of this study are included in the Peking University Digital Financial Inclusion Index (PKU-DFIIC) (https://idf.pku.edu.cn/zsbz/index.htm).

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Frozen mode in an asymmetric serpentine optical waveguide

The existence of a frozen mode in a periodic serpentine waveguide with broken longitudinal symmetry is demonstrated numerically. The frozen mode is associated with a stationary inflection point (SIP) of the Bloch dispersion relation, where three Bloch eigenmodes collapse on each other, as the SIP is an exceptional point of order three. The frozen mode regime is characterized by vanishing group velocity and enhanced field amplitude, which can be very attractive in various applications including dispersion engineering, lasers, and delay lines. Useful and simple design equations that lead to realization of the frozen mode by adjusting a few parameters are derived. The trend in group delay and quality factor with waveguide length that is peculiar to the frozen mode is shown. The symmetry conditions for the existence of exceptional points of degeneracy associated with the frozen mode are also discussed.

I. INTRODUCTION

The confinement and slowing down of light in photonic structures has gained interest in the past two decades due to its growing feasibility and possible applications. Of particular interest is the excitation of the frozen mode regime [1], where the wave transmitted inside a waveguide or in a supporting medium exhibits both a vanishing group velocity and an enhanced amplitude [2]. The frozen mode regime is associated with a stationary inflection point (SIP) of the Bloch dispersion relation ω(k), where ∂ω/∂k = 0 and ∂²ω/∂k² = 0 at k = k_s, with k_s the SIP wavenumber. In this paper, we focus on SIPs because of their diverse potential applications: loss-induced transparency, unidirectional invisibility, lasing mode selection, lasing revivals and suppression, directional lasing, hypersensitive sensors, etc. [3]. The SIP scenario is also interesting and attractive because the frozen mode regime can be observed over a wide frequency range, from RF [4,5] to optical frequencies [6-8]. Moreover, third-order exceptional points of degeneracy (EPDs) have been found in a diverse range of structures: loss-gain balanced coupled mode structures, such as PT-symmetric systems with glide symmetry [9,10]; SIPs are found in periodic lossless and gainless coupled mode structures [6,11], in periodic lossless and gainless gratings [12], and in photonic crystals [2]. Furthermore, SIPs have been found in nonreciprocal structures, as shown in [13-15], where the system becomes unidirectional near the SIP frequency.

FIG. 1: The field amplitudes are defined to the right of the boundaries of the unit cell, with the sign of E_i^+, E_i^- corresponding to the sign of the projection of the direction of propagation of that wave onto the z-axis in the vicinity of the i-th port.

One fundamental feature of the SIP-related frozen mode is that it corresponds to a particular third-order EPD, where three Bloch eigenmodes, one propagating and two evanescent, coalesce at the SIP frequency. For this to happen, all three Bloch eigenmodes collapsing on each other at the EPD must belong to the same one-dimensional irreducible representation of the symmetry group G_k of the Bloch wavevector k [16]. This requirement is quite different from the condition for the common symmetry-related degeneracy, where the degenerate eigenmodes must belong to the same multidimensional irreducible representation of G_k.
Since at any given frequency we have just a limited number of Bloch eigenmodes, the easiest way to automatically satisfy the above condition for SIP existence is to have the symmetry of the waveguide as low as possible. We will apply this guiding principle when choosing the waveguide geometry. In a reciprocal periodic waveguide, there will be a pair of reciprocal SIPs with equal and opposite Bloch wavenumbers k. Therefore, the existence of the EPD in a reciprocal waveguide requires at least six Bloch eigenmodes with the same symmetry: three coalescing Bloch eigenmodes in either direction. Here, we consider a specific example of an asymmetric serpentine optical waveguide (ASOW), obtained by applying a symmetry-breaking distortion to the symmetric optical waveguide (SOW) in [17]. This paper is organized as follows: In Section II, we describe the ASOW. In Section III, we develop a transfer matrix formalism that facilitates obtaining the ASOW eigenmodes. In Section IV we study the conditions for SIP existence. In Section V we analyze the scattering problem for a finite ASOW supporting a pair of reciprocal SIPs. In Section VI we summarize the results.

FIG. 2: The n-th unit cell of the ASOW, with its boundaries represented by the two oblique dashed lines from the apex of the upper loops at an angle β = α − α′ (segment A of each top loop in the unit cell is exactly one quarter of a circle). The unit cell waveguide is formed by three different segments: A, B, and B'. Segments A are quarters of a loop, marked in blue. Segment B is the waveguide that connects the upper loop with the bottom loop on the left, marked in green; its length depends on the angle α. Segment B' is the waveguide that connects the bottom loop with the upper one on the right, marked in orange; its length depends on α′. The dashed region on the top encloses the lumped, lossless coupling point z_0, which represents the point where adjacent loops are closest. Coupling also exists between the bottom loop and the two adjacent unit cells on the left and right (not depicted here).

II. GEOMETRY OF THE ASYMMETRIC SERPENTINE OPTICAL WAVEGUIDE

A SOW related to the one shown in this paper was analyzed in [17]. It was shown that that structure supported a slow-light mode at regular band edges (RBEs), where the group velocity vanishes. Here, instead, we focus on a modification of that SOW structure, where the applied deformation and the lack of symmetry in each unit cell enable the occurrence of an SIP. As traditionally assumed [18,19] and as in [17], we define the coupling as point-like and lossless, i.e., satisfying

$$\kappa^2 + \tau^2 = 1, \qquad (1)$$

where κ and τ represent the field coupling and transmission coefficients, respectively. Both coefficients are constrained to κ, τ ∈ [0, 1]. The ASOW shown in Fig. 1 is a lossless periodic structure in which adjacent loops are coupled to one another, allowing the formation of resonating optical paths. The waveguide in each unit cell is divided into three segments: A, B, and B'. Segment A is a quarter of a circle with radius R. In every unit cell there are two A segments in the top part, marked blue in Figure 2, and an additional two A segments that form half of a circle at the bottom (also marked blue). Segment B is the left-side waveguide connecting the upper and bottom loops; it depends directly on the radius R and on α and is marked in green. Segment B' on the right side of the unit cell is similar to segment B but depends on α′, as shown in orange in Figure 2.
The local slope at the transition between the top and bottom loops in Fig. 2 is continuous because the intersection is between two arcs with the same radius R interconnecting at the same angle, either α or α′; therefore there is no slope discontinuity. The phase accumulated in a segment of length ℓ is φ = n_w k_0 ℓ, where k_0 = ω/c is the wavenumber in vacuum, ω is the angular frequency, c is the speed of light in vacuum, R is the radius of the loops, and n_w is the effective refractive index of the waveguide's mode. Here α is the angle between the line that crosses the centers of the top-left and bottom loops and the horizontal axis, and α′ is the angle between the line that connects the centers of the bottom and top-right loops and the horizontal axis. In this ideal design concept, we ignore the gaps between the waveguides in adjacent loops (gaps of the order of 50-100 nm) on the basis that they are significantly smaller than the radius of the loops (of the order of 10 µm). The precise gap size is decided based on the design of a realistic coupler; however, in this paper each coupler is for simplicity considered "point-like," satisfying Eq. (1). As such, the length of the unit cell is given by the diameter of the loops of the ASOW, d = 2R. The key modification of the ASOW in this paper with respect to the SOW in [17] is the difference between φ_b and φ_b′, which breaks the left-right (i.e., longitudinal) symmetry of the unit cell in terms of effective propagation length (akin to the misaligned anisotropic layers studied in [1]) and enables the formation of an SIP. The broken symmetry can be understood as a shear deformation, since it is realized by imposing α ≠ α′. The angle difference β = α − α′ assumed in this paper to obtain an SIP is very small, so the difference in the lengths of segments B and B' is barely noticeable in Figure 2. In Figure 1, β is the angle between the dashed oblique line that defines the boundary at the right side of the unit cell and the vertical orange line crossing the center of the bottom loop and its lowest point.

III. TRANSFER MATRIX FORMALISM

We model the electromagnetic guided fields in terms of forward waves E_i^+ and backward waves E_i^-, with i = 1, 2, 3, where the superscripts denote the sign of the projection of the direction of propagation of the wave on the z-axis. The time convention e^{jωt} is implicitly assumed. The unit cell has six ports, with E_1^+, E_2^+ and E_3^+ propagating towards the right and E_1^-, E_2^- and E_3^- propagating towards the left (at or in the vicinity of the ports). The fields are defined to the right of the boundaries. We assume the coupling between adjacent loops to be lumped and lossless. The scattering matrix relating the incoming and outgoing fields at the coupling point z_0 is defined in Appendix A and shown in Figure 2. We define a state vector ψ(n) collecting all six electric field wave amplitudes as a column vector (with T denoting the transpose operator), where n denotes the unit cell number, as seen in Figures 1 and 2; the six field terms are evaluated at the right side of the same cell. Note that this definition has terms arranged differently from those used in [6]; it is the same as that used in [17], albeit with a different notation. The state vector on the right side of the n-th unit cell is ψ(n), and its "evolution" along the periodic ASOW is described by

$$\psi(n+1) = T_u\, \psi(n), \qquad (4)$$

where T_u is the 6x6 transfer matrix of the unit cell of the ASOW.
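As a numerical sketch of how Eq. (4) is used to find the Bloch eigenmodes (illustrative only; it assumes the 6x6 matrix T_u has already been assembled, e.g., from the segment matrices of Appendix A):

```python
# Solve T_u psi = zeta * psi and map each eigenvalue zeta = exp(-1j*k*d)
# back to a complex Bloch wavenumber k, folded into the first Brillouin zone.
import numpy as np

def bloch_wavenumbers(T_u, d):
    """Complex Bloch wavenumbers k and eigenvectors of the transfer matrix."""
    zeta, psi = np.linalg.eig(T_u)
    k = 1j * np.log(zeta) / d                         # from zeta = exp(-1j*k*d)
    k_re = (k.real + np.pi / d) % (2 * np.pi / d) - np.pi / d
    return k_re + 1j * k.imag, psi
```

Because det(T_u) = 1 for the reciprocal ASOW, the six returned wavenumbers come in ±k pairs, with nonzero imaginary parts identifying the evanescent Bloch modes.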
As the ASOW is reciprocal, the determinant of the transfer matrix satisfies det(T_u) = 1, which causes the eigenvalues of this matrix to come in three reciprocal pairs. This gives the dispersion diagram the symmetry that if k(ω) is a solution of (8), then −k(ω) is as well. Hence, the dispersion diagram is symmetric with respect to the center of the Brillouin zone (BZ), defined here with Re(k) ranging from −π/d to π/d. The transfer matrix of the unit cell is given in Eq. (5); its calculation is shown in Appendix A. Note that if φ_b = φ_b′, this transfer matrix reduces to that of the SOW in [17], where the lossless coupling relation shown in Eq. (1) was defined in units of power instead of the units of field amplitude used in this paper. From the Bloch theorem [20], which states that the field at each unit cell is determined by the field at the adjacent one and a unit-cell phase shift, we obtain

$$\psi(n+1) = e^{-jkd}\, \psi(n), \qquad (6)$$

where k is the Bloch wavenumber of a guided eigenmode and d is the length of the unit cell. By using (4) and (6), we write the eigenvalue problem

$$T_u\, \psi(n) = \zeta\, \psi(n), \qquad (7)$$

where ζ = e^{−jkd}. Solving it gives the eigenvalues and the eigenvectors of the system. When three of these eigenmodes coalesce into a degenerate one with Im(k) = 0, the SIP is formed, which is an EPD of order three. The eigenvalue solutions are found from the characteristic equation

$$\det(T_u - \zeta I) = 0. \qquad (8)$$

After some algebraic manipulation, we arrive at the characteristic polynomial of Eq. (9). The difference between φ_b and φ_b′ appears only inside a cosine function, which is an even function; as such, interchanging the values of φ_b and φ_b′ does not change the spectral properties of the ASOW. Notice that due to the reciprocity of the ASOW, the solutions come in reciprocal pairs: if ζ is an eigenvalue, 1/ζ is an eigenvalue as well. In the following we represent the wavenumbers in the first BZ, defined here with its center at Re(k) = 0. Because of periodicity, a solution −k_i has Floquet harmonics of the form −k_i + 2πm/d, where m is any integer. The transfer matrix of the unit cell is similar to a diagonal matrix,

$$T_u = V \Lambda V^{-1}, \qquad (10)$$

where V = [ψ_1|ψ_2|ψ_3|ψ_4|ψ_5|ψ_6] is the similarity transformation matrix with the eigenvectors ψ_i as columns and Λ is the diagonal matrix of the eigenvalues ζ_i.

IV. EXCEPTIONAL POINTS OF DEGENERACY

EPDs are defined as the points where eigenmode orthogonality collapses, meaning that the algebraic multiplicity of an eigenvalue (the number of identical roots of the characteristic polynomial) is higher than its geometric multiplicity (the number of independent eigenvectors associated with that eigenvalue). This dissonance causes the matrix to be non-diagonalizable; it is then similar to a matrix containing at least one nontrivial Jordan block. The number of coalesced eigenvectors gives the order of the EPD, with the SIP being an EPD of third order. Given the reciprocity of the ASOW, which can be seen as a three-way waveguide (analogous to those in [4,6,7,21]), this waveguide supports at any given frequency three pairs of reciprocal Bloch eigenmodes, which allow only degeneracies of order 2, 3, 4, and 6 to form. For the ASOW to exhibit an SIP at a generic point, that is, away from the center or the boundaries of the BZ, all three Bloch eigenmodes with the same sign of Re(k) should coalesce. In the case of the undistorted SOW, the symmetry of a generic point of the BZ has a single nontrivial operation: the glide mirror plane normal to the x-direction. Any Bloch eigenmode, propagating or evanescent, of the undistorted structure is either even or odd with respect to this symmetry operation.
Normally, two of the three eigenmodes have the same parity, while the third one has the opposite parity. States with opposite parity do not usually coalesce and are thus less likely to participate in SIP formation. On the other hand, the two eigenmodes with the same parity can coalesce and form a regular band edge (RBE) [17]. To facilitate the coalescence of all three eigenmodes with the same sign of Re(k), we break the glide-plane symmetry by applying the shear distortion described in Figure 1 to the undistorted SOW. In this paper we focus on finding SIPs, which appear as inflection points at (k_s, ω_s) in the dispersion diagram, locally approximated by the cubic behavior

$$\omega - \omega_s \approx h\,(k - k_s)^3,$$

with h a constant. The existence of an SIP indicates that the structure (the ASOW in our case) possesses a frozen mode regime, exhibiting huge diverging amplitudes and low group velocity [1]. At frequencies in the vicinity of the SIP, the guided field is a superposition of one propagating and two evanescent Bloch modes, which develop a strong singularity close to the SIP frequency while remaining nearly equal and opposite in sign at the boundary of the structure, satisfying the boundary conditions. The advantage of the SIP compared to EPDs of even order, such as the RBE and the degenerate band edge (DBE) (orders 2 and 4, respectively), is that it can exhibit good coupling efficiency [12], with a significant fraction of the incident light coupling into the waveguide. The high coupling efficiency allows SIP-exhibiting structures to interact effectively with external devices. This is in contrast to structures exhibiting RBEs and DBEs, where the impedance mismatch is substantially larger [22]. An SIP is defined as a third-order EPD, which means that (10) no longer holds and that T_u is degenerate, with two reciprocal eigenvalues of algebraic multiplicity 3 and geometric multiplicity 1 (i.e., there are only two eigenvalues, ζ_s = e^{−jk_s d} and ζ_s^{−1} = e^{jk_s d}, each repeated three times, and two eigenvectors associated with those eigenvalues).

A. Analytic Dispersion Relation for an SIP

We derive analytically the system of equations that constrains the values of the ASOW parameters κ, R, α and α′ such that the ASOW exhibits an SIP. At the SIP angular frequency ω_s the characteristic equation of the system, found in (8), can be cast in a simple form because it has two degenerate Floquet-Bloch eigenwaves. Hence, the characteristic equation evaluated at ω_s must have the form

$$(\zeta - \zeta_s)^3\,(\zeta - \zeta_s^{-1})^3 = 0.$$

By equating the coefficients of this polynomial with those of the dispersion relation in Equation (9) evaluated at ω_s, we derive five necessary conditions, Eqs. (14), which must be satisfied for the ASOW to exhibit an SIP at ω_s. The last equation is automatically verified when the coupling and transmission coefficients satisfy Eq. (1), i.e., when each coupling is lossless. The fourth equation does not depend on φ_a, φ_b, and φ_b′, and the choice of the coupling and transmission coefficients determines the value of ζ_s, hence the wavenumber k_s of the SIP. The first and the third are not independent: after equating their right-hand sides, we get an equation in τ and κ only, which is verified assuming the coupling and transmission coefficients satisfy Eq. (1). Therefore, either the first or the third equation can be used to determine the phase difference φ_b − φ_b′ once the coupling and transmission coefficients have been determined.
The second equation is useful to determine the phase term 4φ_a + φ_b + φ_b′, which is the total phase accumulated in a unit cell when coupling effects are not considered. This shows that various combinations of the lengths of segments A, B, and B' lead to an SIP. In order to quantify the coalescence of the eigenvectors, we use the concept of a "coalescence parameter," introduced in [24] for the DBE and in [4] for the SIP. Here, we use a coalescence parameter σ defined, similarly to that in [4], in terms of the angles θ_mn between the coalescing eigenvectors (Equation (15)). The coalescence parameter is calculated by organizing the eigenvectors ψ_i into two sets of three vectors, associated with ζ_i and 1/ζ_i, respectively, with i = 1, 2, 3; we then calculate the Euclidean distance from the origin of the angles between all combinations within each set. Here θ_mn is the angle between the two 6-dimensional complex vectors ψ_m and ψ_n, defined through the inner product ⟨ψ_i|ψ_j⟩ = ψ_i^† ψ_j as

$$\cos\theta_{mn} = \frac{|\langle \psi_m | \psi_n \rangle|}{\|\psi_m\|\,\|\psi_n\|},$$

where the dagger † represents the complex-conjugate transpose operation and ‖ψ_m‖ denotes the norm of ψ_m [4]. In this paper, we calculate the coalescence parameter using the norm based on the Euclidean distance between the parameters θ_mn, for all m, n, and zero [23], instead of the arithmetic average used in [4,24]. The reason for this change is that the optimization algorithm converges faster using the Euclidean distance than the arithmetic average, as long as the algorithm does not generate many points far from the optimization goal (known as outliers) [25]. The coalescence parameter is bounded between 0 and 1, with σ = 0 (the origin) indicating perfect coalescence of each set of three eigenvectors; this point constitutes an SIP.

B. ASOW with SIP

In this section we show that the proposed ASOW exhibits an SIP through proper tuning of the structure parameters. For practical purposes, the SIP wavelength is set at 1550 nm; the waveguide is a silicon-on-insulator structure, and the Si waveguide is assumed to have a height of 230 nm and a width of 430 nm. At this wavelength, the lowest TE-like mode has an effective refractive index of n_w = 2.362, as can be seen in [17]. In [26] it is also seen that the variation of the refractive index is negligibly small in the frequency range of interest. For the assumption of a lossless structure at optical frequencies to be reasonable, we restrict ourselves to ASOWs comprising loops with radius R ≥ 10 µm [27] to minimize radiation losses. Figure 3 shows the dispersion diagram of the resulting ASOW of Figure 1. It exhibits an SIP, which can be seen in the coalescing of the three branches in both the real and imaginary parts. In addition to the SIP, we also find an RBE not far from the SIP. The distance between the RBE and the SIP most likely decreases with increasing loop radius: a larger radius causes the structure to support multiple resonances and reduces its free spectral range, although more work is needed to investigate how to design RBEs far from the SIP. The fact that RBEs and SIPs are both found in a small frequency range could be problematic when realizing lasers. Therefore, learning how to optimize the size of the loops in order to balance bending (radiation) losses against the formation of RBEs near the SIP frequency is important for exploiting the potential of an SIP.

V. ANALYSIS OF A FINITE-LENGTH STRUCTURE

We analyze an ASOW with finite length L = dN, where d is the period of the unit cell and N is the number of unit cells.
This finite-length structure is shown in Fig. 4. It consists of N − 1 regular unit cells and a last unit cell without the second coupling, which connects ports 2 and 3 as defined in Figure 2. This last unit cell is described by the transfer matrix T_aux and has length d, as we neglect the gap between adjacent loops. All the unit cells are defined within parallel oblique dashed lines as described in Section II. The evolution of the field amplitudes from one unit cell to the next is given by Equation (4). To find the field amplitudes at each unit cell of the structure shown in Fig. 4, we need the field amplitudes (i.e., the state vector) at either end of the structure. We consider the state vector at the left boundary of the first unit cell, ψ_0 = ψ(n = 0). The state vector at the end of the ASOW made of N cascaded unit cells is given by Eq. (16); dividing the ASOW into unit cells as shown in Figure 4, we find

$$\psi(N) = T_{aux}\, T_u^{N-1}\, \psi_0, \qquad (17)$$

where T_aux is the transfer matrix of a unit cell without the second coupling point; it is given in Appendix A. The diagonal matrix Λ is defined as in Equation (10). At an SIP, the transfer matrix is non-diagonalizable and similar to a matrix containing two Jordan blocks [6]. We assume the ASOW is excited by an incoming wave E_1^+(0) = E_inc from the left, and the right end is terminated on a dielectric waveguide with the same shape and characteristic impedance as the waveguide used to form the ASOW. Considering the definitions in Fig. 4, applying the resulting boundary conditions (BC) to the state-vector "evolution" described in Eq. (16) gives the field amplitudes at either side of the boundary. By applying Equation (4), we then obtain the field amplitudes at each unit cell from those at n = 0. The results for |E_1^+(n)|, |E_1^-(n)| and |E_1(n)| over n ∈ [0, N] are shown in Figure 5, where N = 32. The frozen mode regime, characterized by light traveling with null group velocity together with a dramatic enhancement of the field amplitudes [2], is on full display. The amplitudes of the fields in the middle of the finite-length structure are substantially larger than those at its edges, where the BC of Equation (18) are satisfied. This frozen mode regime is visible in the magnitude of both the forward and backward waves, as seen in Fig. 5, where |E_1^+(n)|, |E_1^-(n)| and their sum |E_1(n)| = |E_1^-(n) + E_1^+(n)| peak around the center of the finite-length waveguide.

B. Transfer function

Besides the spatial evolution of the field amplitudes along the finite-length structure, we are interested in the proportion of light that makes it through the waveguide and the proportion reflected from it. We define the transfer function T_f as the ratio between the forward field amplitude at the output of the ASOW and the incident one, and the reflection function R_f as the ratio between the backward field amplitude at the input of the ASOW and the incident one. The transfer function is equivalent to the s-parameter S21, and R_f is equivalent to S11. Figures 6 and 7 respectively show the magnitudes of the transfer and reflection functions (in dB) of an ASOW comprising N cascaded unit cells, for several values of N. The parameters of the structure are chosen to satisfy the conditions of Equation (14) so as to exhibit the SIP shown in Figure 3. The transmission curves reach their maxima in the vicinity of the SIP frequency, where the reflection curves reach their minimum level.
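Before examining these curves further, here is a minimal numerical sketch of the field-profile procedure just described (illustrative only; the positions of E_1^± inside the state vector are assumptions, since the exact ordering of ψ is set by Equation (3)):

```python
# Once psi0 is known from the boundary conditions, repeated application of
# Eq. (4) yields the cell-by-cell profile of Fig. 5; the ratios
# E1+(N)/E_inc and E1-(0)/E_inc then give the transfer and reflection functions.
import numpy as np

def field_profile(T_u, psi0, N, idx_fwd=0, idx_bwd=1):
    """|E1+(n)|, |E1-(n)|, |E1+(n)+E1-(n)| for n = 0..N (indices assumed)."""
    prof = np.zeros((N + 1, 3))
    psi = np.asarray(psi0, dtype=complex)
    for n in range(N + 1):
        e_f, e_b = psi[idx_fwd], psi[idx_bwd]
        prof[n] = np.abs(e_f), np.abs(e_b), np.abs(e_f + e_b)
        psi = T_u @ psi                      # advance one unit cell, Eq. (4)
    return prof
```

In a full implementation the last propagation step would use T_aux instead of T_u, consistent with Eq. (17).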
The resonance closest to the SIP frequency is hereafter denoted the SIP resonance. The distance between peaks in each curve shrinks as N increases. Notice that the peaks in Fig. 6(a) for even N are more tightly bundled around the SIP frequency ω_s than for odd N. The quality factor (Q) of a cavity is a measure of the energy stored in the cavity versus the energy lost per cycle. Very large Q factors in the vicinity of SIPs originate from the combination of the frozen mode regime and the common slow-wave resonance [1]. Nevertheless, at EPDs other than the SIP (e.g., the DBE), systems can be highly mismatched to the termination impedance of most loads. This phenomenon stems from the Floquet-Bloch impedance [22] in a multi-transmission-line structure and causes an EPD-exhibiting structure to act as an isolated cavity; this is especially true for DBEs [28]. The ASOW does not behave as a very high-Q resonator at the SIP frequency, as the frozen mode regime is not a cavity resonance [1,11]. The Q factor, however, does depend on the particular design of the SIP and its Bloch impedance. In the following we calculate the quality factor as

$$Q \approx \frac{\omega_{res}\, \tau_g}{2},$$

which provides a very good approximation for high quality factors [6]. Here ω_res is the SIP resonance (the angular frequency of the peak closest to the SIP frequency) and τ_g is the group delay at that frequency. As the frequency range of interest is small, the resonant frequency ω_res is approximately the same for all the group delay peaks in Fig. 8. The group delay is calculated as the negative of the derivative of the phase of the transfer function with respect to the angular frequency, i.e.,

$$\tau_g = -\frac{\partial \angle T_f}{\partial \omega}.$$

Figure 8 shows the group delay versus angular frequency for structures with different numbers of unit cells, N. It is normalized by the baseline delay τ_0 of a structure with the same length as the ASOW but without the couplings (i.e., without the frozen mode). For the SIP-exhibiting ASOW from Section IV, with R = 10 µm, α = 66.02°, α′ = 56.18°, we have τ_0 = 0.83 ps. As expected, for frequencies below the SIP frequency, τ_g approaches τ_0, although it does not quite reach that low value because τ_0 does not account for the resonant paths enabled by the coupling points. For frequencies above the RBE frequency (the frequency at which the ASOW exhibits an RBE), τ_g → 0, as there is no propagation through the waveguide and the field decays exponentially inside it because of the bandgap in the dispersion diagram of Fig. 3. In Figure 9 we plot the quality factor Q versus the number of unit cells N of the ASOW in Fig. 4, for even (a) and odd (b) N. In both cases, Q grows with the number of unit cells following a cubic law. Despite the growing trend of Q with N, the frequency at which Q is maximum does not necessarily approach the SIP frequency monotonically, as shown by the group delay peaks in Figure 8. Moreover, for a relatively small number of unit cells, N ∈ [10, 20], ASOWs with an even number of unit cells have a higher Q than ASOWs with odd N of comparable length, suggesting a stronger cavity-like behavior for even N, as seen in Fig. 9. For larger N, this difference disappears. Figure 8 shows the normalized group delay peaks in the vicinity of the SIP frequency and near the RBE frequency, indicating that Q is higher near EPD frequencies.
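A minimal numerical sketch of the group-delay and Q computation just described (illustrative; it assumes the transfer function T_f has been evaluated on a dense frequency grid):

```python
# Group delay from the unwrapped phase of T_f(omega), and the high-Q
# approximation Q ~ omega_res * tau_g / 2 evaluated at the delay peak.
import numpy as np

def group_delay(omega, t_f):
    """tau_g(omega) = -d(angle(T_f))/d(omega) by finite differences."""
    phase = np.unwrap(np.angle(t_f))
    return -np.gradient(phase, omega)

def quality_factor(omega, t_f):
    """Q at the resonance nearest the SIP, taken as the group-delay peak."""
    tau_g = group_delay(omega, t_f)
    i = np.argmax(tau_g)
    return 0.5 * omega[i] * tau_g[i]
```

Sweeping N and repeating this evaluation reproduces the cubic growth trend of Q discussed next.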
As mentioned before, the Q around the SIP frequency grows as Q_SIP = b_SIP N³. Note that for the resonances near the RBE we also have the asymptotic trend Q_RBE = b_RBE N³, as discussed in [1,28]. For the ASOW considered here, the Q in the vicinity of the SIP frequency is comparable to the Q in the vicinity of the RBE, even though the SIP displays a frozen mode regime and has a higher degeneracy order than the RBE. As the SIP exhibits high transmittance, it allows a balance between the dramatic enhancement of the field amplitudes associated with an exceptional point and the low coupling to external waveguides due to mismatch. A high level of mismatch is instead typically found in DBEs, which have a higher quality-factor scaling law [6]. In [12], it is shown that SIP-exhibiting structures have a high coupling coefficient, with a significant part of the incident light being transmitted into the frozen mode regime. This feature reduces the Q factor of the structure and the cavity-like properties that band edges usually exhibit. As such, SIP-exhibiting structures can be devised to realize unidirectional lasers [13] that are otherwise not feasible with waveguides exhibiting an even-order EPD, such as an RBE or a DBE [29], which are used to form high-Q cavities with low transmittance.

VI. CONCLUSION

We have demonstrated that a lossless asymmetric serpentine optical waveguide (ASOW) can support a pair of reciprocal SIPs associated with the frozen mode regime. The SIP has been obtained using the extra degree of freedom provided by a shear distortion that breaks the glide symmetry of the original symmetric SOW. Our formulation explicitly reveals that the SIP is an exceptional point of third order in a lossless/gainless waveguide. To show this, we resort to the concept of a "coalescence parameter," whose vanishing value reveals the coalescence of three eigenvectors, explicitly demonstrating that the SIP is indeed a third-order exceptional point of degeneracy. The study of finite-length waveguides shows the field enhancement and the large group delay at Fabry-Perot resonances near the SIP frequency. We have also studied the evolution of the transfer and reflection functions in the vicinity of the SIP as the length of the waveguide cavity varies, revealing the cubic-length scaling of the quality factor. High transmission is observed, shown by a transfer function nearing 0 dB close to the SIP frequency, together with a reasonably high quality factor, allowing the SIP-exhibiting structure to be matched to external devices. Periodic waveguides supporting the SIP-related frozen mode regime can be used for cavity-less light amplification and lasing, optical sensors, microwave and optical modulators, and switches.

VII. APPENDIX A

FIG. 10: Unit cell of the ASOW divided into subcells, which are the waveguide segments within the parallel dashed oblique lines. The subcells are modeled by the transfer matrices T_1p, T_1c, T_2p and T_2c. The transfer matrices T_ip, with i = 1, 2, describe segments of the waveguide where the waves travel in three uncoupled waveguides, whereas T_ic, with i = 1, 2, describe the coupling points, assumed to have zero thickness.

In this appendix we show how to obtain the transfer matrix T_u of the unit cell of the ASOW and the auxiliary matrix T_aux, which is akin to T_u but without modeling the second coupling point. The state vector is given in Equation (3).
As the unit cell of the structure has different resonant paths, the transfer matrix of the unit cell cannot be calculated in one step. Instead, we break the unit cell into the several segments shown in Figure 10. We call T_1c and T_2c the transfer matrices that model the relations between the field amplitudes on either side of the infinitesimal segments (in z) that contain the coupling points, while the transfer matrices T_1p and T_2p account for the phase accumulation in the remaining segments of the unit cell. The matrices T_1p and T_2p are trivial diagonal phase matrices, but the matrices T_1c and T_2c demand more careful consideration. In the following we show how to obtain the transfer matrix T_1c, with T_2c derived analogously. The transfer matrix T_1c represents the infinitesimally thin (in z) segment containing the top coupling point. As seen in Figure 10, there is no phase accumulation at the bottom ports (identified by the field amplitudes E_3^±), which explains the 2x2 identity matrix at the bottom right of T_1c. To model the change in the field amplitudes before and after the coupling point, we use a 4x4 scattering matrix, which gives the outputs in terms of the inputs at the coupling point z_c modeled in T_1c. This 4x4 scattering matrix is transformed into the 4x4 transfer matrix embedded at the top left of the 6x6 T_1c. The transformations are [6]

$$T_{11} = S_{21} - S_{22} S_{12}^{-1} S_{11}, \quad T_{21} = -S_{12}^{-1} S_{11}, \quad T_{12} = S_{22} S_{12}^{-1}, \quad T_{22} = S_{12}^{-1}, \qquad (29)$$

where each block S_ij and T_ij, with i, j = 1, 2, is a 2x2 matrix of the 4x4 scattering and transfer matrices, respectively. The resulting transfer matrix relates the field amplitudes at the left of the coupling point (z_c^-) to those at the right of the coupling point (z_c^+). Embedding this 4x4 transfer matrix in the top left of the 6x6 transfer matrix T_1c, we obtain a full model of the infinitesimal segment containing the top coupling point. For the transfer matrix T_2c, the coupling occurs between the field amplitudes E_2^± and E_3^±, so the 4x4 transfer matrix modeling the coupling point is embedded in the bottom-right part of the 6x6 T_2c. As there is no change in E_1^± (due to the aforementioned infinitesimal thickness of the modeled segment), a 2x2 identity matrix goes at the top left; the rest of the matrix is filled with zeros. The last step in obtaining the unit-cell transfer matrix is to right-multiply the transfer matrices of the segments in cascade; the full expression of T_u is shown in Eq. (5). The 6x6 transfer matrix T_aux, which describes a modified unit cell without the bottom coupling, to be used as the last cell containing the output port, is similar to the transfer matrix T_u but without right-multiplying the matrix T_2c.
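A minimal sketch of the block conversion in Eq. (29) and of the identity-block embedding used for T_1c (illustrative helper names):

```python
# Convert a 4x4 scattering matrix (2x2 blocks S11, S12, S21, S22) into the
# 4x4 transfer matrix of Eq. (29), then embed it in a 6x6 identity so the
# uncoupled port pair is left untouched, as done for T_1c.
import numpy as np

def s_to_t(S):
    """4x4 scattering matrix -> 4x4 transfer matrix, per Eq. (29)."""
    S11, S12 = S[:2, :2], S[:2, 2:]
    S21, S22 = S[2:, :2], S[2:, 2:]
    S12_inv = np.linalg.inv(S12)
    T11 = S21 - S22 @ S12_inv @ S11
    T12 = S22 @ S12_inv
    T21 = -S12_inv @ S11
    T22 = S12_inv
    return np.block([[T11, T12], [T21, T22]])

def embed_top_left(T4):
    """Embed the 4x4 transfer block in a 6x6 identity (as for T_1c);
    for T_2c the block would instead be embedded at the bottom right."""
    T6 = np.eye(6, dtype=complex)
    T6[:4, :4] = T4
    return T6
```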
MAPPER – A NOVEL CAPABILITY TO SUPPORT NUCLEAR MODEL VALIDATION AND MAPPING OF BIASES AND UNCERTAINTIES

This paper overviews the initial results of a new project at the Oak Ridge National Laboratory, supported via an internal seed funding program, to develop a novel computational capability for model validation: MAPPER. MAPPER will eliminate the need for empirical criteria such as the similarity indices often employed to identify applicable experiments for given application conditions. To achieve this, MAPPER uses an information-theoretic approach based on the Kullback-Leibler (KL) divergence principle to combine responses of available or planned experiments with application responses of interest. This is accomplished with a training set of samples generated using randomized experiment execution and application of high-fidelity analysis models. These samples are condensed using reduced order modeling techniques in the form of a joint probability distribution function (PDF) connecting each application response of interest with a new effective experimental response. MAPPER's initial objective will be to support confirmation of criticality safety analysis of storage facilities, which require known k_eff biases for safe operation. This paper reports some of the initial results obtained with MAPPER as applied to a set of critical experiments for which existing similarity-based methods have been shown to provide inaccurate estimates of the biases.

INTRODUCTION

Validation of a computational method or model is a key requirement for all engineering applications, requiring the analyst to provide proof that models can accurately predict real behavior based on the available body of experiments and computer analysis results. In most practical situations, the experimental conditions are only partially similar to the application due to many factors, including the infeasibility of reproducing application conditions, construction cost, etc. Therefore, the ultimate goal is to devise scientifically defendable methods that can predict the expected bias between model predictions and real behavior for conditions that are not exactly reproduced by the experiments. For validation of nuclear systems, experimental data are routinely collected from critical experiments and reactor startup tests. These data can be used for uncertainty assessment and bias estimation, taking into account differences between experimental and application conditions such as variations in fuel type, structural materials, geometry, etc., in order to support model validation for a wide range of application conditions. The reactor startup procedures for detector calibration and rod worth calculations are also used to support bias calculations as the reactor core is driven to criticality. The first step in any model validation is to determine the set of experiments most relevant to the application conditions. Existing nuclear systems validation techniques [1,2,3] leave some ambiguity with regard to rigorously proving that the selected experimental data can guarantee confidence in the estimated biases and their uncertainties for applications that could be sufficiently different from the experiments. For example, critical experiments are typically conducted at zero-power conditions using small mock-ups of reactor geometry/composition, which is not sufficient to analyze the wide range of steady-state and transient power conditions.
Although reactor startup tests provide data for the actual reactor core configuration rather than a reactor-core-like mockup, they continue to be challenged, as they are conducted at very low power. The credibility question persists: how can the mapped biases and uncertainties be credible at conditions sufficiently different from the experimental conditions? Existing methods have relied on the concept of similarity to construct experiments with conditions that experts consider to be representative of application conditions. The biases and uncertainties observed at the experimental conditions are then assumed to be applicable at the application conditions as a (usually linear) function of the experiment's similarity. The challenge lies in determining how to judge similarity between two different systems. Existing methods, which have been primarily used for criticality safety applications, employ the first-order derivatives of key responses of interest (e.g., the multiplication factor and spectral indices) with respect to nuclear data, which are assumed to constitute the major source of uncertainty. These nuclear data are typically referred to as nuclear cross sections, and they characterize the probabilities of interaction between radiation and matter. Adjoint sensitivity analysis is typically used to calculate the derivatives of keff with respect to all nuclear cross sections for both the experimental and application conditions, collected into two n-dimensional vectors, called sensitivity profiles, with n being the number of cross sections used in the models. Using linear algebra, a single number, the similarity index, with a value between 0 and 1, is calculated. The similarity index represents a weighted inner product of the two sensitivity profiles, with the weights calculated from the cross sections' covariance matrix. If the number is above a preset threshold, such as 0.85, then the experiments are deemed to be similar to the application [4]. Experiment selection based on a single similarity index can be misleading due to dominant sensitivities or uncertainties that are not relevant at application conditions. For example, two experiments could be deemed similar even though their associated biases are significantly different. This is one of the limitations of similarity-based approaches for validation that have become well known over the years. Other limitations include the inability to account for errors/uncertainties from sources not common to both the experiment and the application conditions, uncertainties in modeling parameters other than nuclear data, possible nonlinearities resulting from feedback terms at the application conditions, etc. After selecting applicable experiments, cross sections are calibrated to minimize discrepancies between the measured and predicted experimental responses. The calibrated data are used in the application model, and the difference between the application model responses with the calibrated data and with the original data is accepted as the application model bias. The premise of calibration techniques for cross sections is that, with the experimental discrepancies reduced, the adjustments will likely be adequate at the application conditions, with the caveat that the experiments must be similar to the application. Also, it is unclear whether such corrections adjust for the right effects, as most calibration techniques cannot guarantee that they have adjusted for the correct sources of the discrepancies.
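As a concrete illustration of the similarity index just described, the sketch below computes a covariance-weighted inner product of two sensitivity profiles. This is a schematic rendering of the idea as stated in the text (the function name and normalization convention are ours), not the exact definition used by any specific code.

```python
import numpy as np

def similarity_index(s_app, s_exp, cov):
    """Covariance-weighted inner product of two n-dimensional
    sensitivity profiles (d keff / d sigma), normalized so that a
    perfectly similar pair yields 1.0."""
    num = s_app @ cov @ s_exp
    den = np.sqrt((s_app @ cov @ s_app) * (s_exp @ cov @ s_exp))
    return num / den

# An experiment is deemed applicable when the index exceeds a preset
# threshold, e.g. 0.85 (see text).
```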
The limitations of calibration techniques have also been recognized over the years. These limitations include ill-posedness and error compensation phenomena when dealing with many model parameters, as well as the inability to include application conditions in the adjustment procedure [5]. The implication is that the adjusted parameters are blind to the application conditions. As in the similarity methods, the success of calibration relies heavily on the cleverness of the experimentalists to establish experiments that are very similar to the application conditions, and it also relies on the analysts to exclude experiments that could degrade the calibration results. For example, it has been observed that the bias can change dramatically upon adding or removing experiments with low relevance, and that the bias can be overconfidently estimated with small uncertainties, causing the results to be sensitive to the experiments used. An explanation for this complex behavior has not been recorded in the nuclear engineering literature, which has diminished the value of calibration-based techniques. The need for the MAPPER capability is based on the fact that the existing validation philosophy has not kept pace with the information theory advances introduced in the mid-20th century. These advances started with Shannon's information content principle, which was further developed in the 1950s into the Kullback-Leibler (KL) divergence principle [6]. These advances have shown that the most rigorous approach to judging similarity is via a full PDF, which, in this context, can encode all possible variations of the experimental and application responses. Interestingly, the similarity index, which is a single number, can be derived from this PDF after making several simplifying assumptions, such as linearity, Gaussian uncertainties, and PDF marginalization over the common sources of uncertainties only. This PDF provides a natural approach to mapping the biases between the experimental and application domains by marginalizing biases over the PDF describing the measured response. This approach is also not restricted to Gaussian distributions, allowing all uncommon sources of uncertainties to be considered. If the experiments provide little information about the application, then this joint PDF will not be informative, as it will have very wide scatter, indicating that application conditions cannot be confidently predicted using the available experiments. The degree of correlation between the experiment and the application can be quantitatively measured using the concept of mutual information, which provides an acceptable approach to optimize and select the experiments that are best correlated with the application of interest. DETAILS OF IMPLEMENTATION MAPPER is a computational sequence designed to automate the construction of the noted joint PDF. For this initial proof-of-principle implementation, MAPPER is designed to leverage the SAMPLER code within the Oak Ridge National Laboratory (ORNL) SCALE suite of codes [8], which facilitates automated execution of uncertainty analysis. Given two sets of models (one representing the experiments and one the application), MAPPER executes the SAMPLER code for all models and records all generated samples for the experimental and application responses. For this initial implementation, MAPPER uses the samples calculated directly by SAMPLER, which are generated in a manner that preserves the statistical consistency of the cross sections based on the prior covariance matrix.
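To make the mutual-information criterion concrete, here is a crude plug-in estimator from paired response samples using kernel density estimates; this is our own illustrative sketch, not MAPPER's implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def mutual_information(x, y, n_grid=200):
    """Crude plug-in estimate of I(X;Y) from paired samples, using
    kernel density estimates of the joint and marginal PDFs."""
    pxy = gaussian_kde(np.vstack([x, y]))
    px, py = gaussian_kde(x), gaussian_kde(y)
    xs = np.linspace(x.min(), x.max(), n_grid)
    ys = np.linspace(y.min(), y.max(), n_grid)
    X, Y = np.meshgrid(xs, ys)
    pts = np.vstack([X.ravel(), Y.ravel()])
    joint = pxy(pts)
    marg = px(pts[0]) * py(pts[1])
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    mask = joint > 0
    return np.sum(joint[mask] * np.log(joint[mask] / marg[mask])) * dx * dy
```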
The recorded samples for the responses of interest represent a discrete representation of the sought joint PDF. Since the experiments contain numerous responses, a single effective response is generated by maximizing the mutual information between that response and the application response of interest. This maximization can be achieved using any number of parametric or nonparametric regression techniques, such as alternating conditional expectations, projection pursuit, or reduced order modeling techniques. In this initial work, the alternating conditional expectations (ACE) algorithm is used to determine optimum relationships between the experiments and the application. Given the high similarity between the experiments, the training samples for ACE must be preconditioned to ensure their independence. Details of input preparation for ACE will be discussed in a follow-up full-length journal article. Next, given available measurements and their uncertainties from either existing or hypothesized (i.e., future) experiments, the constructed joint PDF is used to calculate the full PDF for each application response. These PDFs can be reduced to biases and uncertainties representing the mean values and standard deviations of the PDFs, per the user's selection. The full implementation of MAPPER comprises four steps [7] (Figure 1): a. A comprehensive uncertainty quantification exercise using SAMPLER is executed for both the experimental and application conditions by sampling all uncertain parameters within their prior uncertainties, including parameters that are both common and uncommon to the experimental and application conditions. Step a generates N (on the order of ~300-500) samples for all responses of interest, which do not necessarily have to be of the same type. In the initial implementation, only cross section uncertainties are propagated; in future work, both modeling and cross section uncertainties will be propagated. b. An automated procedure using nonparametric regression combines all samples from the experimental domain to construct a new effective experimental variable, k-eff-exp, which is selected to have the highest possible mutual information with the application response k-eff-app. This effective response serves as a general nonparametric function of all available experimental data. c. A joint PDF is constructed between k-eff-exp and k-eff-app using kernel density estimation techniques. d. This joint PDF is used to propagate the biases and uncertainties of the measured responses to the given application response. Steps b-d are repeated for each application response; a sketch of these steps is given below. The computational cost of this algorithm is primarily associated with the cost of executing the simulations in step a. However, this must be done only once for the available or new experiments; if a new experiment or a new application must be added, then the uncertainty quantification is executed for the new models only. A series of critical experiments was used to train the MAPPER algorithm and to predict biases for two experiments that are not included in the training set. The two selected experiments (applications) are highly correlated with each other (high similarity), but their biases differ by more than 600 pcm, which represents a known challenge for similarity-based methods. Critical Experiments A set of critical experiments using highly enriched (> 60% U-235) metal fuel and operating in a fast spectrum (HEU-MET-FAST) was selected from the VALID [9] database for testing.
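A minimal sketch of steps b-d follows, assuming the N samples from step a are already available as arrays. The variable names are illustrative, and a simple least-squares combination stands in for the ACE regression of step b.

```python
import numpy as np
from scipy.stats import gaussian_kde

# k_exp: (N, m) samples of m experimental responses from step a
# k_app: (N,) samples of the application response from step a

def effective_response(k_exp, k_app):
    """Stand-in for the nonparametric regression of step b (the paper
    uses ACE): a least-squares linear combination of the experimental
    responses that best predicts the application response."""
    w, *_ = np.linalg.lstsq(k_exp, k_app, rcond=None)
    return k_exp @ w                      # the effective k-eff-exp

def propagate_bias(k_eff_exp, k_app, measured, meas_sigma, n_draws=10_000):
    """Steps c-d: build the joint PDF with a Gaussian KDE, then
    marginalize over the measured response (importance weighting) to
    obtain the mean and spread of the application response."""
    kde = gaussian_kde(np.vstack([k_eff_exp, k_app]))
    draws = kde.resample(n_draws)         # shape (2, n_draws)
    w = np.exp(-0.5 * ((draws[0] - measured) / meas_sigma) ** 2)
    mean = np.average(draws[1], weights=w)
    std = np.sqrt(np.average((draws[1] - mean) ** 2, weights=w))
    return mean, std                      # bias = mean - nominal keff
```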
The VALID database includes input models for more than 600 critical experiments from the ICSBEP Handbook [10]. These models are maintained by ORNL. The list of experiments from the selected set and the experimentally measured values [11] are provided in Table I. The provided multigroup KENO (KENO V and KENO VI) input models were run for each experiment using SCALE 6.2.3 with the ENDF/B-VII.1 56g cross section library. The differences between the measured and calculated keff values, per Eq. (1), are also included in Table I. Note that the 56g library is optimized for thermal systems, and the biases are considerably reduced when the 252g library is used; since the focus of this paper is bias estimation for a given model and nuclear data, the 56g library is adequate. The models in Table I were run for 300 perturbations of the ENDF/B-VII.1 56g library, sampled using the 56g covariance library provided by SCALE. The calculated keff values were used in training the MAPPER algorithm. Evaluations for the critical experiments, except the applications (HEU-MET-FAST-025-1 and HEU-MET-FAST-025-25), and their associated uncertainties were also provided to MAPPER. The results are shown in Table II. MAPPER accurately predicts the measured keff values and the corresponding modeling biases. On the other hand, the bias calculated with a similarity-based method (SCALE 6.2.3/TSURFER) overpredicts the computational model bias for HEU-MET-FAST-025-1 by 81 pcm and underpredicts the computational model bias for HEU-MET-FAST-025-5 by 305 pcm. The reason for this over- and underprediction is that, while the included experiments have strong similarity to the application, their biases are inconsistent, which forces similarity-based methods to average the biases to calculate the application bias. This paper has discussed the initial results for a prototypic implementation of the MAPPER sequence, a new tool within SCALE that was developed to support model validation by addressing the challenges of existing similarity-based and calibration-based techniques. MAPPER relies on the use of information theory rather than sensitivity analysis to establish the relevance of experiments to the given applications. All experiments listed in Table I were used in this exercise. As supported by information-theoretic principles, some of the existing challenges facing similarity- and calibration-based methods, such as the sensitivity of the solutions to the prior uncertainties, can be eliminated, thus precluding the need for prescreening experiments based on an empirical approach, as in the use of similarity indices. This also eliminates the need for evaluating sensitivity coefficients, which can be computationally infeasible. Future work will focus on expanding the implementation of MAPPER to analyze the impact of adding and removing experiments on the calculated biases and their uncertainties, with the goal of fully automating experiment selection, as opposed to the trial-and-error approach currently used in similarity-based methods.
3,421.6
2021-01-01T00:00:00.000
[ "Engineering", "Physics", "Computer Science" ]
Study of QCL Laser Sources for the Realization of Advanced Sensors We study the nonlinear dynamics of a quantum cascade laser (QCL) with a strong reinjection provided by the feedback from two external targets in a double cavity configuration. The nonlinear coupling of the interferometric signals from the two targets allows us to propose a displacement sensor with nanometric resolution. The system exploits the ultra-stability of QCLs in the self-mixing configuration to access the intrinsic nonlinearity of the laser, described by the Lang–Kobayashi model, and it relies on a stroboscopic-like effect in the voltage signal registered at the QCL terminals that relates the “slow” target motion to the “fast” target one. Introduction In the last decade, quantum cascade lasers (QCLs), which represent compact, high-power, highly coherent, widely tunable laser sources in the mid-infrared to terahertz range of the electromagnetic spectrum, have been extensively used in a number of sensing applications, such as imaging, medical diagnosis and spectroscopy [1]. With respect to conventional bipolar semiconductor lasers, continuous wave (CW) emission in QCLs is much more stable against strong optical feedback provided by an external target in the so-called self-mixing (SM) configuration. As we recently demonstrated, this follows from the absence of relaxation oscillations, due to the ultra-fast carrier recombination, and from the small value of the linewidth enhancement factor (or α factor) [2,3]. QCLs can thus be exploited to realize robust, detectorless, real-time sensors when an external target provides back-reflected radiation, which induces changes in the emitter properties (field intensity, compliance voltage at the laser contacts). The modified signal hence carries information about the corresponding variations of the target's complex reflectivity and its displacement with respect to the laser exit facet [4]. Fields of application range from coherent imaging [5] to motion tracking [6] and material processing [7]. The search for displacement sensors based on optical interferometry with nanometric resolution has recently been triggered by the enormous development of nanoscale technology, even in systems working at longer wavelengths (from mid-infrared to terahertz). So far, most of the systems showing such performance employ “off-line” signal post-processing to overcome the half-wavelength intrinsic resolution of standard “on-line” fringe counting techniques [8][9][10]. A prototype system able to measure “on-line” nanometer-size amplitude displacements was proposed in [11], based on a modification of the standard self-mixing scheme, namely a differential optical feedback interferometry in a two-laser configuration. Here, we provide a proof-of-principle demonstration of a nanoscale position-sensing system based on the nonlinear dynamical response of a QCL subject to strong optical feedback. We refer in particular to the collinear double-arm configuration sketched in Figure 1, where optical feedback is provided by a slow object target (OT) with constant and unknown speed and a fast reference target (RT) with constant, controlled speed [12]. In the strong feedback regime, we show that the fast switching fringes in the SM signal typically associated with the RT motion are characterized by additional sub-features carrying information about the OT motion.
As illustrated in detail in Sections 2 and 3, this allows for a denser sampling of the OT fringes, which, in turn, leads to a displacement measurement with a resolution much smaller than λ/2. With respect to our first results reported in [13], we present here a radically improved version of the OT motion retrieval algorithm that allows us to reduce the estimated sensor resolution down to a few nanometers. We also provide an evaluation of the role of the feedback strength and discuss the role of the QCL fluctuations as a source of errors in the sensing procedure. We believe that a super-resolved displacement sensor, like the one proposed here, may also be of interest in other applicative fields, such as QCL-based 3D imaging systems for medical and material processing, where the measurement of the separation between layers orthogonal to the optical axis is essential [14]. A theoretical analysis of the QCL subject to optical feedback from two independent targets is presented in Section 2. In Section 3, we describe in detail a numerical fitting procedure able to extract in real time the information about the object target displacement with a resolution of a few nanometers, which corresponds to ∼λ/1000. The numerical limits in the attainable resolution and the range of application of the above algorithm are discussed in Sections 3.2 and 3.3. In Section 3.4, we estimate the influence of the finite laser linewidth on the proposed sensor accuracy. Finally, Section 4 is devoted to the conclusions and a discussion of a possible extension of our approach. Theoretical Section: Self-Mixing in QCL with a Double External Cavity The sensor scheme is sketched in Figure 1, where a partially transparent reference target moving with a known constant velocity v_r is inserted in the external cavity formed by the QCL and the object target, which translates with an unknown constant velocity v_o < v_r. Under the single-longitudinal-mode, slowly-varying-envelope and single-reflection (in the external cavity) approximations, the behavior of a QCL under optical feedback from two independent targets can be described by an extended version of the Lang-Kobayashi (LK) equations [4] that accounts for an external double cavity [15] (Equations (1) and (2)), where the adimensional field E, the carrier density N and the pump intensity I_p are scaled as in [2], and the time t is expressed in units of the photon lifetime τ_p. Moreover, α is the linewidth enhancement factor; τ_c is the laser cavity round-trip time; ω_0 is the solitary laser frequency (equal to the laser cavity resonance, taken as the reference frequency); γ is the photon-to-carrier lifetime ratio. The feedback strength parameters k_i, with i = o, r, depend on the effective fraction of the field back-reflected from the OT and RT that re-enters the laser cavity, as well as on the reflectivities of the external targets and of the laser exit facet. The delays τ_i change in time due to the target motions, so that τ_i(t) = 2(L_{0,i} + v_i t)/c. Looking for CW solutions of Equations (1) and (2), we obtain expressions (Equations (3) and (4)) for the CW QCL frequency ω_F and the associated difference ∆N between the carrier density in the presence of feedback and its value in the free-running laser case [12], where A_i = 2L_{0,i}ω_F/c and ω_i = 2v_iω_F/c. Hence, the temporal evolution of the quantity ∆N = ∆N(t), which is proportional to the voltage offset at the QCL terminals and thus represents the experimentally accessible SM signal [13], contains information about the speed (and consequently, the position) of both targets.
From this, it follows that an explicit solution of Equations (3) and (4) would, in principle, allow one to determine the value of the OT velocity and, thus, the OT displacement. Owing to the highly implicit character of the transcendental Equation (3), this is analytically impossible, and numerical methods have to be used to recover the information about the OT from the ∆N time trace. For this purpose, we simulate the evolution of the SM function ∆N by numerically solving Equations (3) and (4) for the values of v_r and v_o indicated by dots in the parameter space depicted in Figure 2. The other parameters used in our simulations are typical for a mid-infrared QCL and are reported in Table 1. Nonlinear Frequency Mixing An example of the carrier density difference ∆N(t) obtained from our simulations is represented in Figure 3. Its temporal trace exhibits two distinct modulations, which show the peculiar interference fringes of lasers operating in the SM configuration: fast fringes (from Mark A to Mark B in Figure 3a) on the scale of a few tenths of a millisecond, modulated by slower fringes (see also Figure 2a in [12]). While one could expect that they correspond to the fast and slow motions of the RT and OT, respectively, inspection of the Fourier transform of ∆N(t), shown in Figure 4, reveals the nonlinear nature of the SM signal. In fact, the first peak, at frequency ω_o = 4πv_o/λ_0 = 100 Hz, is associated with the slow periodicity due to the movement of the OT; another peak, at ω_r = 4πv_r/λ_0 = 10 kHz, is associated with the motion of the RT; but the dominant peak in the spectrum occurs at ω_r − ω_o. This shows that the feedback fields provided by the targets cause a nonlinear response in the laser, due to the intrinsic nonlinearity of the LK Equations (3) and (4). The presence of superharmonics of the mixed frequencies is further proof of this phenomenon and will be briefly discussed in Section 3.3. In the time domain, this amounts to saying that there should exist temporal features of the time trace, associated with a fast time scale (2π/(ω_r − ω_o) ≈ 2π/ω_r), which carry information about the slow time scale. Since the former is associated with the RT motion and the latter with the OT motion, the introduction of the RT should allow one to gather information about the OT motion by sampling the system at a fast rate. This overcomes the λ/2 limit on the appreciable displacement when only one target is considered. More interestingly, if the values of the feedback parameters are high enough [13], novel sub-features can be seen in the time trace (see Figure 3); such sub-features are the fingerprint of the two-target scheme, and their temporal characteristics are linked to the frequency combinations arising from the nonlinear response described above. The study of these sub-features will be the main focus of the next section, so it is worth sketching an interpretation of their origin and evolution. For moderate and strong feedback levels, multiple solutions of Equation (3) may exist, resulting in different possible continuous wave solutions (or modes); following the well-established literature [4], we assume that the actual lasing mode is the “maximum gain mode” (MGM), i.e., the one that minimizes ∆N [4] (simulations of the dynamical LK equations and comparison with experiments [13] show that this assumption is correct in the operating conditions that we considered).
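The mode search just described can be sketched numerically as follows. Since the paper's exact Eqs. (3)-(4) are not reproduced in this excerpt, the sketch uses a double-delay excess-phase equation patterned on the standard single-cavity Lang-Kobayashi steady state, with purely illustrative parameter values.

```python
import numpy as np

# Illustrative, scaled parameters (not the paper's Table 1 values)
alpha, k_o, k_r = 1.0, 0.10, 0.20    # alpha factor, feedback strengths
tau_o, tau_r, w0 = 2.0, 1.3, 0.0     # external delays, solitary frequency

def excess_phase(w):
    """Schematic two-target excess-phase condition whose roots are the
    CW solutions of the system (cf. the transcendental Eq. (3))."""
    s = np.sqrt(1 + alpha**2)
    return (w - w0
            + k_o * s * np.sin(w * tau_o + np.arctan(alpha))
            + k_r * s * np.sin(w * tau_r + np.arctan(alpha)))

def delta_N(w):
    """Carrier-density offset associated with a CW solution
    (schematic counterpart of Eq. (4))."""
    return -(k_o * np.cos(w * tau_o) + k_r * np.cos(w * tau_r))

# Bracket sign changes on a fine grid, refine by linear interpolation,
# then select the maximum gain mode: the root minimizing delta_N.
grid = np.linspace(w0 - 1.0, w0 + 1.0, 200001)
vals = excess_phase(grid)
idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
roots = [grid[i] - vals[i] * (grid[i+1] - grid[i]) / (vals[i+1] - vals[i])
         for i in idx]
mgm = min(roots, key=delta_N)        # lasing frequency of the MGM
```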
During the target evolution, it is possible that a new mode with higher gain becomes available, thus causing the lasing frequency to switch to this new mode. In the case of feedback from a single target, this leads to the typical sawtooth behavior of the ∆N temporal trace: its discontinuities represent the switching of the lasing frequency from one mode to another. In the double-target setup we are considering, because of the frequency mixing described by Equation (3), the appearance of a new MGM is influenced by both target feedbacks. In particular, during the movement of the RT on a timescale comparable to the fast fringe period, there can be a time lapse (typically shorter than the fast fringe period itself) in which the new MGM exists. During this time, the laser frequency switches between the two different modes, inducing a sudden transition in the frequency and carrier density and leaving a peculiar and easily recognizable fingerprint in the time trace, which we call the “sub-feature”. The duration of these sub-features and, conversely, the time interval between two subsequent sub-features also depend on the OT motion, as indicated by the fact that they vary from one fast fringe to another, within the periodicity of the slow fringe. This is precisely the temporal feature on the basis of which the slow motion of the OT can be tracked in the fast fringes, provided that one is able to extract the dependence on the OT velocity encoded in the duration of the sub-features. Referring to Figure 3, the slow fringe is the part of the temporal trace between Points A and B in Figure 3a, while the fast fringe is the part between Points C and E in Figure 3b (its duration is indicated by ∆T), and the sub-feature is the part between Points D (the left cusp) and E (the right cusp); finally, with ∆t we denote the time interval between the edges of two consecutive sub-features, namely between Points C and D in Figure 3b. Super-Resolved Displacement Sensor In this section, we first describe the procedure for analyzing ∆N(t) and for identifying the relevant temporal marks of the sub-features described above. Then, we present the numerical scheme used to relate such data to the OT displacement. As discussed in the previous section, information about the movement of the RT and OT is linked together in the features of the temporal trace via Equation (4). We performed an extensive fitting procedure to relate the position of the OT to the time interval ∆t between the cusps of two subsequent sub-features. The foundations of this method were presented in [13]. Here, we radically improve it and correlate it with an analysis of the achievable precision in relation to the calibration stage. We make use of the increased number and accuracy of the simulations and of the broader parameter space (see Figure 2). Numerical Approach All of the simulations show that the sub-features emerge at the beginning of the slow fringe. Their duration t_E − t_D = ∆T − ∆t, when compared to the fast fringe duration ∆T, is very small at the beginning, and it grows as the OT moves until, near the end of the slow fringe, it becomes of the same magnitude as ∆T. This actually results in the vanishing of the sub-features for a short period of time (indicated by ∆t_B in Figure 3a) that terminates at the slow fringe jump, after which they reappear in the next slow fringe (of course, the dynamical behavior of ∆t across the slow fringe is complementary).
We now illustrate how it is possible to relate the change in the time lapse ∆t to the position of the OT. In our simulations, the OT velocity v_o is fixed, so we can define a “theoretical” position of the OT as given by the simple formula S_th(t) = v_o t (Equation (5)). The time trace is thus sampled to extract the values of ∆t at different instants of time; at each time, we assume that the position of the OT is given by S_th, and we perform a best-fit analysis to find a function that relates the position S_th to ∆t. Since, as mentioned above, there can be a short period across the slow fringe change in which the sub-features disappear, the fitting procedure can in principle be performed only along a single slow fringe. In the next subsection, we will tackle the problem of overcoming this limit, because an actual sensor must be capable of operating over an (ideally) arbitrarily long period; at this stage, we limit our analysis to a single slow fringe. We have developed an algorithm capable of identifying and classifying (potentially in real time) all of the critical points of the temporal trace labeled in Figure 3, so that we can register the time lapses between them. We set the origin of time at the first left cusp identified by the analysis algorithm, and we take the reference times t_n at each subsequent left cusp, while, as already mentioned, we register the time lapse ∆t_n that occurs between the right cusp of the (n−1)-th sub-feature and the left cusp of the n-th one. Upon varying the fit over a broad basis of polynomial and transcendental functions, we found that a quadratic dependence works surprisingly well in approximating this relation, and the candidate test function can be cast as S_phen(∆t_n) = C_2 ∆t_n^2 + C_1 ∆t_n + C_0 (Equation (6)). The least squares method can be applied to evaluate the coefficients C_i for all of the simulations. Of course, the coefficients C_i change for simulations at different v_r, because a larger v_r implies a reduced ∆T and, thus, a reduced ∆t. A proper interferometric sensor cannot suffer from such a “reference arm” dependence, so the next step is to make the v_r dependence explicit in Equation (6). For this purpose, a new best-fit procedure was carried out, resorting to the sets of simulations with varying v_r. The coefficients C_2 and C_1 were found to have a quadratic and a linear dependence on v_r, respectively, while C_0 is independent of v_r. The general formula we sought, which is supposed to hold over a wide range of the (v_r, v_o) plane, can then be cast as Equation (7). In this formula, we insert the solitary laser wavelength λ_0 = 2πc/ω_0 to make the dimensions of the coefficients clear. To determine the parameters γ_i from the entire set of simulations, we have evaluated the parameters C_i for all of the complete slow fringes in each simulation, and we have extracted the values of the γ_i, along with their errors, by using their expressions as functions of the C_i. Each value obtained in this way has been treated as an independent detection of the “true” γ_i value, so that its best estimate is the weighted mean evaluated from all of the occurrences. We found the values reported in Equation (8), and these parameters are constant within the assumptions described so far. The Sensing Procedure: Methods, Sensitivity and Limits Having obtained the relation defined in Equation (7), it can be used to determine the displacement of the OT for any velocity pair (v_r, v_o) compatible with the limits of validity, which are discussed here.
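For concreteness, here is a minimal sketch of the calibration fit of Eq. (6), using synthetic stand-ins for the cusp times and sub-feature intervals that the detection algorithm would extract; all values, and the normalization of the RMS figure, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
v_o = 5e-5                           # known OT velocity (m/s), calibration
t_n = np.linspace(0.0, 0.02, 40)     # left-cusp times (s), illustrative
S_th = v_o * t_n                     # "theoretical" OT position, Eq. (5)
# synthetic sub-feature intervals, shrinking across the slow fringe
dt_n = 1e-4 * (1 - 40 * t_n) + 1e-8 * rng.standard_normal(t_n.size)

# quadratic model S = C2*dt^2 + C1*dt + C0 (Eq. (6)), via least squares
C2, C1, C0 = np.polyfit(dt_n, S_th, deg=2)
S_phen = np.polyval([C2, C1, C0], dt_n)

# normalized RMS deviation as a quality-of-fit figure (cf. Eq. (9))
rms = np.sqrt(np.mean((S_phen - S_th) ** 2)) / np.ptp(S_th)
```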
First of all, the model was built on the stationary solutions of the LK equations, so we must ensure that the evolution of the target is adiabatic, i.e., that the laser system has enough time to reach a stationary state as the targets move. The slowest timescale on which the system evolves is of the order of tens of nanoseconds [2,3], while the evolution timescale of the carrier density difference associated with the motion of the RT is of the order of the fast fringe period, ∆T ≈ λ_0/(2v_r). Therefore, adiabaticity requires that v_r << v_r,max, with v_r,max ≈ 100 m/s. In our simulations, the maximum value of the RT velocity was thus set to 5 m/s. The reliability of the phenomenological relation Equation (6) can be assessed through the normalized root mean square deviation of each fitting procedure (Equation (9)), where N is the number of sub-features identified by the algorithm. We observed that its value was very small (of the order of a few 10^−4) as long as the ratio v_o/v_r was well above 10^−4. Near this threshold, it jumped abruptly to values of the order of 10^−1, thus indicating that the assumption of a quadratic dependence is no longer valid. In order to be conservative, in the evaluation of the parameters γ_i in Equation (8) we have used only simulations with v_o/v_r ≥ 0.5 × 10^−3. The smallest displacement that can be observed depends on the time between two subsequent observations of ∆t, which is of the order of the fast fringe period, ∆T ≈ λ_0/(2v_r); this leads, in principle, to the resolution given by Equation (10). Since, as discussed above, the sensing procedure allows v_o/v_r ≈ 10^−3, the scheme we are proposing should be capable of reaching a sensitivity of λ_0/1000, which is of the order of a few nanometers. An example of the results that can be obtained with the sensing procedure described so far is given in Figure 5, for the same case as Figure 3. There, the solid line represents the theoretical position S_th (see Equation (5)), while the dots mark the position S_phen as predicted by the phenomenological Equation (7), with the associated errors σ_S given by Equation (11); panel (a) shows the entire period of a slow fringe, and panel (b) is a close-up in which the accuracy of S_phen in retrieving S_th can be appreciated. It is evident by inspection of the close-up, Figure 5b, that the potential measurements of the OT are well within a 10-nm deviation from the theoretically expected values (the solid line). Of course, this result depends crucially on the accuracy in determining the coefficients γ_i of Equation (8). In fact, we can assume that the error in the determination of the OT displacement due to uncertainties in the γ_i parameters is given by applying the error propagation law to the phenomenological Equation (7) (Equation (11)), and this error must be smaller than the expected sensitivity given by Equation (10). Increasing the number of simulated slow fringes on which the determination of the γ_i parameters is performed results in better accuracy. To check this, we have evaluated the mean error σ_S as a function of the number of simulations. The result is shown in Figure 6. As we can see, the curve falls quite rapidly below 10 nm and then tends to saturate around 5 nm. This confirms that the γ_i parameters can be evaluated with sufficient precision for our sensing scheme to reach nanometric sensitivity. The values reported in Equation (8) have been obtained using a set of 100 simulated slow fringes.
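As a quick sanity check of the quoted sensitivity, assume for illustration a mid-infrared emission wavelength of λ_0 ≈ 6 µm (the actual value used in the simulations is in Table 1, not reproduced in this excerpt). The smallest observable displacement is then roughly the OT advance during one fast fringe:

$$\Delta S_{\min} \;\simeq\; v_o\,\Delta T \;\approx\; \frac{v_o}{v_r}\cdot\frac{\lambda_0}{2} \;\approx\; 10^{-3}\times 3\,\mu\mathrm{m} \;=\; 3\,\mathrm{nm},$$

consistent with the "few nanometers" resolution quoted in the text.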
In a realistic sensor, the determination of the γ_i parameters and their uncertainties is based on a calibration procedure in which targets with known velocities are employed. The calibration should be performed for a set of pairs (v_r, v_o) conveniently distributed in the parameter space according to the guidelines discussed in the first part of this subsection. This calibration needs to be done only once, since the parameters determined in this way should remain valid over a range of at least three orders of magnitude in both target velocities. The number of calibration runs determines the accuracy of the values obtained for the γ_i parameters and, consequently, the sensitivity of the device. A curve representing the error in the determination of the position of the target as a function of the number of calibration runs should have the behavior reported in Figure 6, but other factors related to the experimental conditions may hinder the capability of achieving the predicted nanometric sensitivity. Just as an example, mechanical vibrations of the set-up, irregularities in the motion provided by the step motor driving the RT and/or OT, and current fluctuations in the power supply of the laser are all obvious sources of errors in the measurement. While such sources can in principle be tamed, we will consider the effect of a more intrinsic limitation in Section 3.4. Another issue to be addressed is how to circumvent the limitation of detection to a single slow fringe due to the disappearance of the sub-features discussed above. Our simulations show that the fast fringes not exhibiting a sub-feature occur invariably at the end of a slow fringe, and that their number decreases with increasing feedback strengths k_o and k_r. In any case, in the time lapse ∆t_B our algorithm is “blind” to the target motion. A sufficiently high feedback should limit this interval to a small fraction of the slow fringe period, and during this period the OT displacement can be extrapolated via the simple formula ∆S_blind = v_l ∆t_B, where v_l is the last detected OT velocity just before the sub-feature disappearance. This simple method has been experimentally tested in [13] and proven to be sufficiently accurate within experimental uncertainties. In the next subsection, we will discuss in detail this and other aspects of the sensing procedure related to the feedback level. Finally, let us stress that, in principle, the sensitivity could be pushed even further, since the limiting ratio v_o/v_r ≈ 0.5 × 10^−3 is set by the failure of the quadratic fit proposed in Equation (7). For lower values, new sets of simulations and a more refined fitting procedure might yield a reliable relation S_phen(∆t_n). In any case, the necessity of detecting a sufficient number of slow fringes and of densely sampling each fast fringe might increase the requirements in terms of bandwidth and buffer memory of the electronics reading out the voltage at the QCL contacts. The Role of the Feedback Parameters in the Sensing Scheme As was shown in Section 2.1, the key element of the sensor proposed in this work is the high feedback level, which ensures, in the spectral domain, the nonlinear frequency coupling and, in the time domain, the appearance of the sub-features, from which our algorithm extracts the information on the OT motion with nanometric accuracy.
We stress that this feature is a direct consequence of the nonlinear dynamics of the laser, which provides the coupling, and of the QCL in particular, since this emitter can sustain large feedback without entering chaotic regimes [2]. We now illustrate in some detail the role of the feedback strength in the system dynamics. As the feedback parameters increase, new temporal features appear in the fast fringes, in the form of additional pairs of cusps with even shorter duration (see Figure 8a). While such “sub-sub-features” may be the subject of further investigations to enhance the potential sensitivity of our scheme even further, another interesting insight into this phenomenon can be gathered by inspecting the Fourier transform of ∆N(t) for different values of the feedback parameters. As in Figure 4, the main peak occurs at the frequency difference ω_r − ω_o. Remarkably, there are several higher harmonics of this dominant note, whose peaks decrease in intensity with a power law, indicating once again the strongly nonlinear character of the interaction brought about by the laser dynamics. Interestingly, the time trace is still regularly periodic in this case, although the peak pattern is complicated by the appearance of the new cusps. Upon a further increase of the feedback strength, we observe (see Figure 8d) that the background increases considerably, while the peaks at the dominant frequencies become more intense and no longer decrease linearly; accordingly, the time trace (see Figure 8c) still exhibits regular, though complex, features on the short time scale, while at longer timescales, comparable with the slow fringe period, it appears irregular, and we cannot expect to recover a relation formally similar to Equation (7). Summing up, as concerns the proposed sensing scheme, within the general dynamical scenario of the retro-injected QCL, the most suitable feedback level to set as the operational point for the sensor is the one close to the threshold of appearance of the sub-sub-features. In this condition, in fact, the time lapse ∆t_B in which the sub-features disappear is the shortest possible, thus reducing the error implied by the extrapolation procedure described above. Influence of the QCL Linewidth on the Sensitivity As mentioned in Section 3.2, several other sources of error may worsen the accuracy of Equation (7) beyond the errors on its coefficients in Equation (8), which are solely determined by the sample set on which the calibration is performed. We consider now the effect of the finite linewidth of the QCL emission. The latter is associated with phase variations induced by spontaneous emission, carrier-induced refractive index changes and injection current fluctuations [16,17]. Moreover, it is known that the presence of optical feedback leads to linewidth broadening or narrowing depending on the external cavity phase shift [18]. A rigorous theoretical approach describing all of these phenomena would consist of adding Langevin noise sources to the LK Equations (1) and (2), as described in [17]. Here, in order to provide a simple estimate of the role of the QCL linewidth in limiting the proposed sensor accuracy, we suppose that the stochastic fluctuations of the free-running laser frequency, denoted now as ω_QCL, which follows a normal distribution centered at ω_0 with width ∆ω_QCL, affect the frequency ω_F of the reinjected QCL through Equation (3) and, in turn, the values of ∆N as given by Equation (4); a minimal numerical sketch of this ensemble procedure is given below.
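The following sketch illustrates the ensemble construction of the jitter-induced error, assuming the cusp times have already been extracted from each jittered trace; all numerical values, including the stand-in dependence of the cusp times on the jittered frequency, are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
w0, dw_qcl = 0.0, 1e-3                       # scaled units, illustrative
w_qcl = rng.normal(w0, dw_qcl, size=n)       # jittered solitary frequencies

# stand-in: each jittered frequency shifts the detected cusp times
t_C = 1.00e-4 + 5e-9 * (w_qcl - w0) / dw_qcl   # left cusps (s)
t_D = 2.00e-4 - 5e-9 * (w_qcl - w0) / dw_qcl   # right cusps (s)

dt = t_D[None, :] - t_C[:, None]             # 50 x 50 array of Delta t_ij
sigma_dt = dt.max() - dt.min()               # dispersion sigma_dt (see text)

# propagate to a position error via the slope of the quadratic Eq. (6)
C2, C1 = 1.0, 1e-2                           # illustrative coefficients
sigma_S = abs(2 * C2 * dt.mean() + C1) * sigma_dt
```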
We can thus expect the fringe jumps and the sub-feature duration to fluctuate accordingly; this induces additional uncertainties in the determination of the target displacement as evaluated via Equation (7). Under an ergodic hypothesis, we consider the ensemble average of a large number of simulations with fixed ω_QCL as representative of the time average in the temporal evolution of the fluctuating variables, as provided by the integration of Equations (1) and (2) with the inclusion of Langevin noise sources [17]. In particular, for the study case v_r = 5 × 10^−3 m/s and v_o = 5 × 10^−5 m/s, and for values of ∆ω_QCL ranging from 100 kHz to 10 MHz, which are in agreement with estimates reported in the recent literature [19][20][21], a set of 50 determinations of ω_QCL was randomly generated and used to obtain 50 ∆N(t) traces via Equation (3) (where, of course, ω_0 was replaced by ω_QCL). The immediate visual effect of the ω_QCL fluctuations introduced in this way is the jittering of the cusps delimiting the sub-features (see Figure 9). This amounts to saying that one can detect a set of t_C,i, i = 1, ..., 50, corresponding to left cusps initiating a sub-feature, and another set t_D,j, j = 1, ..., 50, corresponding to right cusps ending a sub-feature (see Figure 3b), which define a 50 × 50 array of detectable sub-feature time lapses ∆t_i,j = t_D,j − t_C,i. Denoting by ∆t_max and ∆t_min the maximum and minimum values of ∆t_i,j, respectively (see Figure 9), the dispersion σ_∆t = ∆t_max − ∆t_min is considered as the error on the determination of ∆t induced by the finite linewidth. The uncertainty on the OT position has then been derived via the error propagation law (Equation (12)). In Figure 10, we show the behavior of the mean error (obtained by averaging σ_S,lw over the set of sub-features considered) as a function of the QCL linewidth. As we can see, the errors grow with linewidth according to a power law, and for ∆ω_QCL ≤ 5 MHz they remain below 10 nm, thus comparable to those intrinsic to our deterministic method. Conclusions and Perspectives In conclusion, we have studied the dynamics of a QCL with retro-injection provided by two translating collinear targets, showing that the nonlinear behavior of the emitter provides a nontrivial coupling of the two feedback fields, which allows one to extract information on the slower target on a time basis linked to the interferometric fringes associated with the fast target translation. We have thus proposed a scheme for a real-time, nanometric displacement sensor with a resolution on the order of λ/1000, together with a calibration procedure for its use over a wide range of OT velocities, and we have provided an estimate of its accuracy as a function of several factors, among which is the intrinsic linewidth of the emitter. While this scheme could in principle be exported to any range of wavelengths, it must be noted that high levels of feedback strength are necessary to ensure a significant nonlinear response and, thus, the occurrence of the sub-features on whose durations we based the sensor scheme. Conventional diode lasers appear to be unsuited for such a scheme, since the undamped amplification of the relaxation oscillations causes the emitter to enter a chaotic regime at high feedback levels [4]. On the other hand, the QCL is quite appealing, not only for its wavelength range, at which several materials of interest are transparent, but also because of its narrow linewidth, which is beneficial for the sensitivity of the present sensor scheme.
We believe that our work shows how, quite generally, the introduction of an additional interferometric element (here, the moving RT) provides a second, controlled and fast “clock tick”, which allows fast sampling of an independent process (the OT translation), while the nonlinear dynamics of the laser provides the coupling between the two. It will, of course, be interesting to extend the present scheme to analyze arbitrary OT motions and to model more convenient RT dynamics, such as vibrations of an RT etalon as may be provided by a piezo controller. Other than faster RT displacements and, thus, an improved basic resolution (see Equation (10)), such an extension may provide a simpler device with a reduced footprint. In this respect, entirely new knowledge must be acquired concerning the relation among the RT and OT spectral features appearing in the QCL output. Work is in progress in this direction. Finally, on a more fundamental note, it has been shown that a QCL with feedback can exhibit a multimode regime of regular oscillations, corresponding to coherent locking of the modes of the external cavity provided by a reflector [3]. The inclusion of two reflectors, and thus the existence of two sets of independently tunable external modes, might provide an “engineered” modal competition (e.g., when the two sets have free spectral range ratios in the rational or in the irrational domain) and reveal novel features in the coherent QCL dynamics.
7,754.6
2015-08-01T00:00:00.000
[ "Engineering", "Physics" ]